Google is testing a digital watermark to identify images generated by artificial intelligence (AI) to combat misinformation online.
SynthID, developed by DeepMind, Google’s artificial intelligence division, is designed to identify images generated by artificial intelligence. The technology subtly alters individual pixels in an image, making the watermark invisible to the human eye but still detectable by computers.
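DeepMind has not published SynthID’s actual algorithm, but the general idea of hiding a signal in imperceptible pixel changes can be sketched with a classic toy technique: writing a secret bit pattern into the least significant bit of each pixel value. Everything below (the pattern, the function names) is illustrative, not SynthID’s method.

```python
# Illustrative only: SynthID's real algorithm is not public. This toy
# sketch hides a watermark in the least significant bit (LSB) of each
# 8-bit pixel value -- a change of at most 1/255, invisible to the eye.

def embed_watermark(pixels, pattern):
    """Set each pixel's least significant bit to the next watermark bit."""
    out = []
    for i, p in enumerate(pixels):
        bit = pattern[i % len(pattern)]
        out.append((p & ~1) | bit)  # clear the LSB, then write the bit
    return out

def detect_watermark(pixels, pattern):
    """Return the fraction of pixel LSBs that match the expected pattern."""
    matches = sum((p & 1) == pattern[i % len(pattern)]
                  for i, p in enumerate(pixels))
    return matches / len(pixels)  # 1.0 means the pattern is fully present

# 8-bit grayscale pixel values (0-255) and a hypothetical secret pattern
image = [120, 64, 200, 37, 90, 255, 12, 180]
key = [1, 0, 1, 1]

marked = embed_watermark(image, key)
# Every pixel changes by at most 1 out of 255 -- imperceptible to a human
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
print(detect_watermark(marked, key))  # 1.0
```

Unlike this naive sketch, a production system such as SynthID must also survive cropping, resizing, and recompression, which simple LSB schemes do not.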
However, DeepMind cautions that SynthID is not foolproof against heavily manipulated images. As AI-generated images become more sophisticated, distinguishing genuine images from fabricated ones becomes increasingly difficult.
AI-powered image generators have grown in popularity, with tools like Midjourney having over 14.5 million users.
These tools allow users to quickly create images from simple text instructions, raising copyright and property rights concerns.
The watermarking system will initially cover Imagen, Google’s image generator, creating and validating watermarks for images made with that tool. Unlike traditional watermarks, these new watermarks remain effective even if images are edited or cropped.
Pushmeet Kohli, head of research at DeepMind, told the BBC that the system changes images so subtly that “to you and me, to a human, it does not change”.
Unlike hashing (representing a piece of computer data with a number so it can be secured or quickly found), he says, the company’s software can still detect the presence of a watermark even after an image is cropped or edited.
“You can change the color, the contrast, even change the size… and DeepMind can still see that it was created by artificial intelligence,” he said.
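The contrast with hashing is easy to demonstrate: a cryptographic hash identifies only an exact copy of a file, so changing a single pixel value breaks the match. A minimal sketch using Python’s standard `hashlib` (the byte values here are made up for illustration):

```python
# A cryptographic hash fingerprints an exact sequence of bytes.
# Change even one byte and the fingerprint no longer matches --
# which is why hashing alone cannot track edited or cropped images.
import hashlib

original = bytes([120, 64, 200, 37, 90, 255, 12, 180])  # stand-in image bytes
edited = bytes([121]) + original[1:]  # one pixel brightened by a single step

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(edited).hexdigest()
print(h1 == h2)  # False: the tiniest edit produces a completely new hash
```

A robust watermark like the one DeepMind describes survives such edits precisely because it is spread through the image content itself rather than computed over the exact bytes.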
In July, Google was among seven artificial intelligence companies that voluntarily committed in the US to the safe development and use of AI. That commitment includes introducing watermarks so people can identify computer-generated images.