
Google’s AI detection tool is now available for anyone to try

Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”


SynthID debuted in 2023 as a means to watermark AI-generated images, audio, and video. It was initially integrated into Imagen, and the company subsequently announced its incorporation into the Gemini chatbot this past May at I/O 2024.

The system works by encoding tokens — those are the foundational chunks of data (be it a single character, word, or part of a phrase) that a generative AI uses to understand the prompt and predict the next word in its reply — with imperceptible watermarks during the text generation process. It does so, according to a DeepMind blog from May, by “introducing additional information in the token distribution at the point of generation by modulating the likelihood of tokens being generated.”

By comparing the model’s word choices along with its “adjusted probability scores” against the expected pattern of scores for watermarked and unwatermarked text, SynthID can detect whether an AI wrote that sentence.
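
To make that concrete, here is a heavily simplified, hypothetical sketch in Python. It is not Google’s actual algorithm; the secret key, bias strength, and ten-word vocabulary are invented purely for illustration. The idea it demonstrates is the same, though: a key plus the preceding token picks a “preferred” subset of the vocabulary, generation nudges the token distribution toward that subset, and a detector holding the same key scores how often the text landed in it.

```python
# Hypothetical, heavily simplified illustration of generation-time text
# watermarking -- NOT Google's actual SynthID algorithm.
# A secret key plus the preceding token seeds a pseudo-random "preferred" set
# of tokens; generation nudges probabilities toward that set, and detection
# measures how often the preferred set was chosen.
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slow"]
SECRET_KEY = "demo-key"          # stands in for the watermarking key
BIAS = 4.0                       # how strongly preferred tokens are favored


def preferred_tokens(prev_token: str) -> set[str]:
    """Deterministically pick half the vocabulary from the key + context."""
    seed = hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))


def generate(n_tokens: int, watermark: bool) -> list[str]:
    """Sample tokens, optionally boosting the preferred set's probability."""
    rng = random.Random(0)
    out = ["the"]
    for _ in range(n_tokens):
        pref = preferred_tokens(out[-1])
        weights = []
        for tok in VOCAB:
            w = 1.0
            if watermark and tok in pref:
                w *= BIAS            # modulate the token distribution
            weights.append(w)
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out


def watermark_score(tokens: list[str]) -> float:
    """Fraction of tokens drawn from the preferred set (~0.5 if unwatermarked)."""
    hits = sum(tok in preferred_tokens(prev)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)


if __name__ == "__main__":
    print("watermarked score:  ", watermark_score(generate(200, watermark=True)))
    print("unwatermarked score:", watermark_score(generate(200, watermark=False)))
```

Run over a couple hundred tokens, the watermarked text scores well above the roughly 50% baseline of unwatermarked text — the kind of statistical signal a detector can pick up without a reader noticing anything in the prose itself.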

Here’s how SynthID watermarks AI-generated content across modalities. ↓ pic.twitter.com/CVxgP3bnt2

— Google DeepMind (@GoogleDeepMind) October 23, 2024

This process does not impact the response’s accuracy, quality, or speed, according to a study published in Nature on Wednesday, nor can it be easily bypassed. Unlike standard metadata, which can be easily stripped and erased, SynthID’s watermark reportedly remains even if the content has been cropped, edited, or otherwise modified.

“Achieving reliable and imperceptible watermarking of AI-generated text is fundamentally challenging, especially in scenarios where [large language model] outputs are near deterministic, such as factual questions or code generation tasks,” Soheil Feizi, an associate professor at the University of Maryland, told MIT Technology Review, noting that its open-source nature “allows the community to test these detectors and evaluate their robustness in different settings, helping to better understand the limitations of these techniques.”

The system is not foolproof, however. While it is resistant to tampering, SynthID’s watermarks can be removed if the text is run through a language translation app or is heavily rewritten. It is also less effective on short passages and in determining whether a reply to a factual prompt was generated by AI. For example, there is only one right answer to “What is the capital of France?” and both humans and AI will tell you that it’s Paris.

If you’d like to try SynthID yourself, it can be downloaded from Hugging Face as part of Google’s updated Responsible GenAI Toolkit.

Andrew Tarantola
Former Digital Trends Contributor
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
Samsung might put AI smart glasses on the shelves this year

Samsung’s Project Moohan XR headset has grabbed the spotlight in the past few months, and rightfully so. It serves as the flagship launch vehicle for the reinvigorated Android XR platform, with plenty of hype coming from Google itself.
But Samsung seems to have even more ambitious plans and is reportedly experimenting with form factors that go beyond the headset. According to Korea-based ET News, the company is working on a pair of smart glasses and aims to launch them by the end of this year.
Currently in development under the codename “HAEAN” (machine-translated name), the smart glasses are reportedly in the final stages of having their internal hardware and feature set locked down. The wearable will reportedly come equipped with camera sensors as well.

What to expect from Samsung’s smart glasses?
The latest leak doesn’t dig into specifics about the internal hardware, but another report from Samsung’s home market sheds some light on the possibilities. As per Maeil Business Newspaper, the Samsung smart glasses will feature a 12-megapixel camera built around a Sony IMX681 CMOS image sensor.
The device is said to use a dual-chip architecture, similar to Apple’s Vision Pro headset. The main processor on Samsung’s smart glasses is touted to be Qualcomm’s Snapdragon AR1 platform, while the secondary processing hub is a chip supplied by NXP.
The onboard camera will open the door to vision-based capabilities, such as scanning QR codes, gesture recognition, and facial identification. The smart glasses will reportedly tip the scales at 150 grams, while the battery capacity is claimed to be 155 mAh.

This AI app boosts my productivity in a way that Apple Intelligence can’t

Apple Intelligence offers a bunch of interesting features, but if you’ve tried most of them for more than a few minutes, you realize they’re not quite up to scratch compared to the best artificial intelligence (AI) tools.

Image Playground is fine, for instance, but not particularly useful. Writing Tools work well enough but aren’t as ground-breaking as Apple might make out. And the less that’s said about Siri, the better.

I saw Google’s Gemini AI erase copyright evidence. I am deeply worried

Update: Google has responded to Digital Trends’ queries. The story has been updated with the company’s statement below.
The rise of generative AI has been a fairly messy process, especially from a fair-use and copyright perspective. AI giants are inking deals with publishers to avoid legal hassles, while at the same time, they are embroiled in copyright tussles in courts in multiple countries.
As the ravenous appetite for training AI on user data grows, we might be in for another ethical conundrum. Multiple users on X and Reddit have shared demonstrations of how Google’s latest Gemini 2.0 series AI model can remove watermarks from copyright-protected images.
Going by the before/after samples of images, it seems Gemini is doing a fairly good job of removing the watermarks. Notably, it’s not only erasing those banner-style watermarks, but also fairly complex overlays with design and stylized text elements.
https://x.com/ajiteshleo/status/1901484196972945872?s=61
The model in question is Gemini 2.0 Flash, which was released earlier this year and recently received a reasoning upgrade as well. It is worth noting that you cannot remove the watermark if you are using the mobile or desktop version of the Gemini 2.0 Flash model. Trying to do so returns a message like this:
“Removing watermarks from images without the explicit permission of the copyright holder is illegal in most countries. It is important to respect the copyright laws and intellectual property rights. If you want to use an image with a watermark, you should contact the copyright holder and ask for permission.”
You can, however, try to remove the watermark from images in Google AI Studio. Digital Trends successfully removed watermarks from a variety of images using the Gemini 2.0 Flash (Image Generation) Experimental model.
 
It is a violation of local copyright laws, and any usage of AI-modified material without due consent could land you in legal trouble. Moreover, it is a deeply unethical act, which is also why artists and authors are fighting companies in court over the use of their work to train AI models without due compensation or explicit consent.

How are the results?
A notable aspect is that the images produced by the AI are fairly high quality. Not only does it remove the watermark artifacts, it also fills the gap with intelligent pixel-level reconstruction. In its current iteration, it works somewhat like the Magic Eraser feature available in the Google Photos app for smartphones.
Furthermore, if the input image is low quality, Gemini not only wipes off the watermark details but also upscales the overall picture.
https://x.com/kaiju_ya/status/1901099096930496720?s=61
The output image, however, has its own Gemini watermark, although this itself can be removed with a simple crop. There are a few minor differences in the final image produced by Gemini after its watermark removal process, such as slightly different color temperatures and fuzzy surface details in photorealistic shots.
