As generative artificial intelligence (AI) “learns” and grows more sophisticated, creating ever more realistic, human-like content, it gets harder for us humans to tell the difference between what’s authentic and what’s the product of some computer somewhere. This distinction matters more in some areas than others. For example, Google Cloud CEO Thomas Kurian explains that “if you’re in hospitals scanning tumors, you really want to make sure that was not a synthetically generated image.”
One possible solution to this problem is embedding invisible watermarks into AI-generated photos and videos, a method several tech firms have tested and launched this year. In August, for instance, Google DeepMind released its own AI watermarking tool, called SynthID, which users could optionally apply to images made with Google’s AI image generator, Imagen. The watermark isn’t visible to the naked eye but is detectable by the tool. The open-source AI image generator Stable Diffusion has also employed a similar tool.
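Google hasn’t published exactly how SynthID embeds its signal, but the general idea of an invisible watermark can be sketched with a classic least-significant-bit (LSB) scheme: hide a bit pattern in the lowest bit of each pixel, where a one-unit change in brightness is imperceptible. This is a hypothetical toy example, not any vendor’s actual method:

```python
# Toy least-significant-bit (LSB) watermark. Illustrative only --
# real AI watermarks like SynthID use far more robust, undisclosed techniques.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite each pixel's lowest bit with the repeating watermark bits."""
    return [(p & ~1) | WATERMARK[i % len(WATERMARK)]
            for i, p in enumerate(pixels)]

def detect(pixels):
    """Return True if every pixel's lowest bit matches the watermark."""
    return all((p & 1) == WATERMARK[i % len(WATERMARK)]
               for i, p in enumerate(pixels))

image = [200, 13, 87, 255, 42, 91, 180, 6]  # stand-in for an 8-pixel image
marked = embed(image)

print(detect(image))   # False -- this unmarked image doesn't carry the pattern
print(detect(marked))  # True  -- each pixel changed by at most 1, invisibly
```

Every pixel shifts by at most one brightness level, so the marked image looks identical to the original while carrying a machine-readable signal.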
But can these watermarks be “broken” or erased by people who want to pass off AI-generated images as authentic? Unfortunately, it looks like they can. Researchers at the University of Maryland (UMD) recently set out to break all of the existing watermarks for AI images, and they succeeded.
“We don’t have any reliable watermarking at this point,” said researcher and computer science professor Soheil Feizi, who worked on the study. “We broke all of them.” He added: “There’s no hope.” So, things aren’t looking super optimistic for this new tech. These invisible watermarks can be removed from AI images, or even added to authentic ones, by regular old human beings. Other studies have reached similar conclusions, including one conducted by the University of California, Santa Barbara and Carnegie Mellon University.
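The UMD attacks relied on far more sophisticated image processing, but the underlying fragility can be illustrated with a toy least-significant-bit watermark (a hypothetical scheme, not any real product’s): adding imperceptible noise of plus or minus one brightness level is enough to scrub the hidden signal.

```python
import random

# Hypothetical toy watermark scheme -- not SynthID or any vendor's method.
SIG = [1, 0, 1, 1, 0, 0, 1, 0]  # hidden 8-bit signature

def embed(pixels):
    # Store the signature in each pixel's lowest bit.
    return [(p & ~1) | SIG[i % len(SIG)] for i, p in enumerate(pixels)]

def detect(pixels):
    return all((p & 1) == SIG[i % len(SIG)] for i, p in enumerate(pixels))

def wash(pixels):
    # "Attack": add invisible +/-1 noise, clamped to the valid 0-255 range.
    # A +/-1 change flips a pixel's lowest bit, erasing the hidden signature.
    return [min(255, max(0, p + random.choice((-1, 1)))) for p in pixels]

marked = embed([200, 13, 87, 255, 42, 91, 180, 6])
print(detect(marked))        # True  -- watermark intact
print(detect(wash(marked)))  # False -- the noise destroyed the hidden bits
```

The washed image is visually indistinguishable from the watermarked one, yet the detector no longer finds anything, which is the core of the problem the researchers are pointing at.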
On the other hand, part of the UMD research team managed to develop its own watermark that can’t really be broken without damaging the image itself. We’ll have to see whether someone else can wash it out, or whether this kind of approach really can reliably detect AI content.