
Is an Image Real or AI? Google’s New Way to Tell

October 22nd, 2024

Google, the renowned tech behemoth, is working on an AI image detection tool that will sort authentic images from manipulated or AI-generated imagery. The company will soon add an AI filter to its search engine, giving users a better sense of how authentic their search results are and helping them avoid misinformation, disinformation, and fakes. As the “battle” against AI-based disinformation is a cross-platform campaign, Google has recently joined the C2PA steering committee and will cooperate with fellow members there.

Google participates in the C2PA, the Coalition for Content Provenance and Authenticity, one of the leading initiatives addressing authenticity issues revolving around and stemming from AI-generated imagery. The C2PA brings together various stakeholders: tech giants like Microsoft, Adobe, OpenAI, and Google; camera manufacturers like Leica, Sony, Nikon, and Canon, which are already implementing authentication technologies; as well as news organizations and others. Since Google runs the world’s most prominent search engine and serves as a source of much of the world’s information, its participation holds significant potential.

How is this going to work?

Google will tackle authenticity issues on several fronts. First, its SynthID watermarking and detection tool will add a C2PA-compliant digital watermark to generated content, be it images, video, audio, or text. The “watermark” is woven digitally into the files themselves in a rather manipulation-resistant way.

Google will also enable its search engine to find and filter other digital watermarks, though this depends on other parties implementing them. The company is also looking into wider integration across its other platforms, such as YouTube.
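Google has not published implementation details of the search integration, but the C2PA side is already something anyone can poke at today. Below is a minimal Python sketch of how a user or publisher might check a file for Content Credentials. It assumes the open-source c2patool command-line utility from the Content Authenticity Initiative is installed and that calling it with a file path prints the embedded manifest store as JSON; the file name used here is hypothetical.

import json
import subprocess

def read_content_credentials(path: str):
    """Return the C2PA manifest store embedded in the file, or None if absent."""
    # Assumption: invoking c2patool with just a file path prints the manifest
    # report as JSON when one is present (its default behavior at the time of writing).
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        # No manifest found, or the file could not be parsed.
        return None
    return json.loads(result.stdout)

manifests = read_content_credentials("generated_image.jpg")  # hypothetical file
if manifests is None:
    print("No Content Credentials found - provenance unknown.")
else:
    print(json.dumps(manifests, indent=2))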

What can you expect from the Google AI image detection tool?

The C2PA offers a rather optimistic prospect, in which AI-generated media, as well as manipulated media, is labeled at the source and carries its entire editing history along the way. This edit history should be available to every end user, offering a respectable level of transparency.

C2PA provenance diagram. Image credit: C2PA
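To make the diagram’s idea concrete, here is a purely illustrative Python sketch, not the actual C2PA schema, in which all field names and values are hypothetical. It shows how a chain of signed claims could record an asset’s editing history from creation to the latest edit.

# Each tool that touches the asset appends a signed claim that references the
# previous one, so the history can be walked back to the original source.
provenance_chain = [
    {
        "claim_generator": "ExampleImageGenerator 1.0",  # created the asset
        "actions": [{"action": "created", "source_type": "trainedAlgorithmicMedia"}],
        "signature": "signature-by-generator",
    },
    {
        "claim_generator": "ExamplePhotoEditor 7.2",  # edited it later
        "actions": [{"action": "edited", "what": "crop, color adjustments"}],
        "ingredient": "hash-of-previous-claim",  # link back to the prior step
        "signature": "signature-by-editor",
    },
]

# Walking the chain gives the transparency the C2PA model promises: who made
# the asset, with what tool, and what has changed since.
for step in provenance_chain:
    print(step["claim_generator"], "->", [a["action"] for a in step["actions"]])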

This, in turn, should fortify authenticity and trust, both heavily eroded since generative AI burst into our lives (and way before that, to be honest). This optimistic vision gets a significant boost with its adoption by Google, perhaps the most influential player in the field of web information.

Far from over

While this is good news, vigilance is still required. Standardization works with those who comply. While this could make the digital world a bit safer, its success depends on broad cooperation, particularly among mainstream media. Even if this optimistic scenario unfolds, significant challenges remain. Non-mainstream actors—such as unlawful groups, chaos agents, and real adversaries—are unlikely to follow these rules. A step in the right direction, but we have a long way ahead.

Do you trust the efforts and intentions of industry giants regarding AI constraints and regulations? Can this omnipotent genie return to its lamp, or is it a lost cause? Let us know your thoughts in the comments.

