Watermarking in books: can AI bypass filters?

The unsettling challenges of watermarks in the age of AI-generated content

by Suswati Basu

With the proliferation of AI-generated content, the need to distinguish genuine books from AI-created fabrications has never been more pressing. Recent strides in generative AI models have prompted researchers to explore innovative techniques for identifying AI-generated images and texts, with watermarking emerging as a promising approach. However, a groundbreaking study led by computer science professor Soheil Feizi from the University of Maryland has cast doubt on the effectiveness of watermarking as a reliable solution.

Why are we talking about watermarking in the world of books, and how is this related to AI, you ask? A watermark is widely regarded as the most effective protection against unauthorised disclosure and piracy that does not restrict readers and reviewers. It is also intended to serve as a stamp of authenticity, enabling users to trace the origins of content and distinguish genuine from AI-generated material. The technology has been heralded as a potential safeguard against manipulated media, particularly given rising concerns around deepfake videos and AI-authored books.

The fragile promise of watermarking in books due to AI

In response to these concerns, major tech players such as OpenAI, Alphabet, Meta, Amazon, and Google’s DeepMind have pledged to develop watermarking technology to combat the spread of misinformation. Google wrote on its blog that “None of us can get AI right on our own.” The idea behind this strategy is to flag AI-generated content as it is being created, much like physical watermarking authenticates paper currency during production.

Read: Authors’ pirated books used to train Generative AI

However, Professor Feizi’s research challenges the viability of this approach. His study, which is yet to undergo peer review, exposes the vulnerabilities of watermarking on books, particularly in the context of AI-generated content. According to WIRED, Feizi argues that current watermarking techniques are far from reliable, and he goes so far as to assert that “we don’t have any reliable watermarking at this point.” His research demonstrates the ease with which malicious actors can evade watermarking attempts, effectively “washing out” the watermark.

AI robots bypass watermark filters in books. Credit: Suswati Basu.

Feizi, who is a prominent figure in AI detection, delves into two types of watermarking—low perturbation and high perturbation. Low perturbation watermarks, invisible to the naked eye, are deemed especially vulnerable. Feizi’s findings cast a shadow over the notion of high perturbation watermarking offering a solution, as even these seemingly more robust watermarks can be manipulated.

What is perturbation in AI?

Perturbation is a privacy-protection method that involves adding random noise to data values or query results. This safeguards data from privacy breaches that depend on knowing exact values, and it can also be used to conceal quasi-identifiers. The same idea underpins the two watermark types: a low-perturbation watermark introduces changes too subtle to notice, while a high-perturbation watermark, like the giant overlays you see on books or professional images, visibly alters the material to deter it from being copied and used elsewhere.
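To make the idea concrete, here is a minimal sketch of value perturbation (the function name, noise scale and example data are hypothetical, not drawn from any system discussed in this article): each value is nudged by zero-mean random noise, so individual records can no longer be read off exactly while aggregate statistics stay close to the originals.

```python
import random

def perturb(values, scale=1.0, seed=42):
    """Add zero-mean Gaussian noise to each value.

    Exact values can no longer be recovered from the output, but
    aggregates such as the mean remain close to the originals.
    """
    rng = random.Random(seed)
    return [v + rng.gauss(0, scale) for v in values]

# Example: mask individual salaries while preserving the rough average.
salaries = [52000.0, 61000.0, 58000.0]
print(perturb(salaries, scale=500.0))
```

The `scale` parameter expresses the usual trade-off: more noise means stronger privacy but less useful data.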

Despite these findings, some experts in the AI detection space believe watermarking has its place, provided its limitations are understood. Professor Hany Farid from UC Berkeley School of Information argues that “many watermarking strategies have been proposed that are robust – though not impervious – to attempts to remove them.” Combining watermarking with other technologies may enhance its effectiveness in thwarting AI-generated content.

Read: Unauthorised AI training: 183,000 books incite legal clashes

While some experts caution against placing excessive hope in watermarking, others view it as a valuable tool in the fight against AI-generated misinformation. PhD students Xuandong Zhao, Kexun Zhang and others from the University of California, Santa Barbara, note that although "all invisible image watermarks are vulnerable to our attack," "one instance of semantic watermark showed promising results."

University of Maryland PhD student Yuxin Wen and computer science professor Tom Goldstein see watermarking as a harm-reduction measure: effective against lower-level AI-generated content, though unlikely to thwart high-level attacks.

“For many reasons, the ability to detect and audit the usage of machine-generated text becomes a key principle of harm reduction for large language models.”

Yuxin Wen et al., A Watermark for Large Language Models (2023)
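The core idea behind that paper, greatly simplified here with a toy vocabulary and hypothetical helper names, is to pseudo-randomly split the vocabulary into a "green list" and a "red list" at each position, seeded by the preceding token. A watermarked model prefers green tokens when generating, so a detector can flag text whose green-token count is improbably high:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy stand-in for a real tokenizer vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Seed an RNG with a hash of the previous token, then mark a fixed
    # fraction of the vocabulary as "green" for the next position.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def count_green(tokens: list) -> int:
    # A detector re-derives each position's green list from the previous
    # token and counts how many tokens landed in it. Ordinary text hits
    # green roughly half the time; watermarked text hits it far more often.
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        if cur in green_list(prev):
            hits += 1
    return hits
```

Detection needs only the hashing scheme, not the model itself, which is why it works as an audit tool; Feizi's objection is that paraphrasing the text reshuffles the tokens and washes the signal out.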

Navigating the future of content authentication

DeepMind, in its recent announcement of the SynthID watermarking tool, also emphasises that watermarking “isn’t foolproof” and “isn’t perfect,” signalling a tempered approach to its potential.

Hence, while watermarking remains an attractive proposition for combating AI-generated content, Feizi's research highlights the formidable challenges it faces. The study underscores the need for a multi-pronged approach to tackling AI-generated misinformation, with watermarking being just one element in a complex puzzle. The future may require a combination of innovative technologies and strategies to effectively discern authentic content from AI-generated fabrications. As the battle against misinformation continues to evolve, it remains to be seen whether watermarking can play a significant role in this ongoing struggle.

