Has Google’s AI watermarking system been reverse-engineered?
TL;DR
A software developer claims to have reverse-engineered Google DeepMind's SynthID system, showing how AI watermarks can be stripped from generated images or manually inserted into other works.
Key Points
- Google says the claim isn't true.
- "A little weed also seemed to help. No proprietary access," Aloshdenny said on Medium.
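SynthID's internals are not public, so nothing below reproduces it. As a purely illustrative sketch, the toy spread-spectrum scheme here (a classic textbook technique, with made-up parameters) shows the general class of attack a "stripping" claim implies: if a watermark lives in pixel statistics, ordinary post-processing such as blurring can weaken the detector's signal.

```python
import numpy as np

# Toy spread-spectrum watermark -- NOT SynthID, whose design is not public.
# It only illustrates why a pixel-space mark can be weakened by ordinary
# post-processing such as blurring.
rng = np.random.default_rng(0)

key = rng.standard_normal((64, 64))        # secret watermark pattern
image = rng.uniform(0.0, 255.0, (64, 64))  # stand-in "generated" image
watermarked = image + 12.0 * key           # exaggerated amplitude for the demo

def detect(img, key):
    """Normalized correlation between an image and the watermark key."""
    z = img - img.mean()
    return float((z * key).sum() / (np.linalg.norm(z) * np.linalg.norm(key)))

def box_blur(img, k=3):
    """Crude k-by-k mean filter, standing in for a 'stripping' perturbation."""
    acc = np.zeros_like(img)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (k * k)

score_marked = detect(watermarked, key)
score_blurred = detect(box_blur(watermarked), key)
print(score_marked, score_blurred)  # the blurred copy correlates less with the key
```

Production watermarks are engineered to survive exactly this kind of perturbation, which is why a credible break of SynthID, if real, would matter.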
Nauti's Take
The upside: public security research, real or not, accelerates better solutions. The risk: Google's rebuttal stays vague, which isn't reassuring.
If SynthID is genuinely vulnerable, this setback should be the wake-up call that pushes AI watermarking to become more robust.