Adobe announces CR pin for AI images
Spot the real from the deepfake with a new “icon of transparency” that identifies AI-generated images and videos. Richard Trenholm explains how the Content Credentials icon (the CR pin) will work.
Generative AI has thrown open creativity to anyone and everyone, to the point where it's impossible to know whether images are "real". Now Adobe, Microsoft and a host of other companies are addressing the problem with a simple symbol designed to tackle deceptive photos and deepfake videos.
The new symbol displays the letters "CR" on images and videos made with AI. Described as an "icon of transparency" or a "digital nutrition label", it's essentially a watermark identifying AI-created content.
The CR icon (which slightly awkwardly stands for “Content Credentials”) can be added to the image itself, so viewers can hover over it to see information about its creation. That includes which AI tools were used, and whether the image has been edited.
Even if the creator chooses not to display the symbol, the information is still permanently encoded into the image or video's metadata, so websites and apps can read it and display the icon anyway.
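For the technically curious, that metadata is machine-readable. In JPEG files, for instance, the C2PA specification stores the Content Credentials manifest in JUMBF boxes carried in APP11 segments. The Python sketch below is a rough presence check, not the official verification tooling (C2PA publishes an open-source SDK and a c2patool utility for that); it simply scans a JPEG's metadata segments for embedded C2PA data.

```python
# A minimal sketch, not the official tooling: it heuristically reports
# whether a JPEG appears to embed a C2PA manifest. Per the C2PA spec,
# JPEGs carry the manifest in JUMBF boxes inside APP11 (0xFFEB) segments,
# so this is a rough presence check, not cryptographic verification.
import struct
import sys

SOI = b"\xff\xd8"   # JPEG start-of-image marker
APP11 = 0xFFEB      # application segment used for JUMBF/C2PA payloads
SOS = 0xFFDA        # start-of-scan: compressed image data follows

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically report whether a JPEG embeds a C2PA manifest."""
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(SOI):
        return False                        # not a JPEG at all
    pos = 2
    # Walk the metadata segments that precede the image data. (Real
    # parsers must also handle padding bytes and other edge cases.)
    while pos + 4 <= len(data):
        marker, length = struct.unpack(">HH", data[pos:pos + 4])
        if marker == SOS:
            break                           # image data begins; stop scanning
        payload = data[pos + 4:pos + 2 + length]
        if marker == APP11 and b"c2pa" in payload:
            return True                     # JUMBF segment carrying C2PA data
        pos += 2 + length                   # jump to the next segment
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

In practice a website or app would likely hand this job to the C2PA SDK, which parses the full manifest (which tools were used, what edits were made) and checks the cryptographic signatures that make the record tamper-evident.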
Industry backing for CR pin
The open-source icon is the brainchild of the Coalition for Content Provenance and Authenticity (C2PA), whose members include tech firms Adobe, Arm and Intel; camera makers Nikon, Leica and Sony; the BBC; and marketing giant Publicis.
The symbol isn't quite an industry standard yet: Google backs a rival watermarking system called SynthID. But the Content Credentials pin will be used in software including Photoshop, Premiere and Adobe's generative AI system Firefly.
Microsoft, meanwhile, will soon phase out its own AI watermark and use the new symbol to tag AI content made with Bing Image Creator.
What the CR pin tells you
The CR pin also identifies the creator of an image, although it doesn't address the accusation from (human) artists that AI image generators such as DALL-E, Midjourney and Firefly plagiarise their artwork, a concern which could lead to copyright headaches for companies using AI-created imagery.
But it does address the issue of transparency at a time when AI-generated images and deepfake videos are becoming indistinguishable from the real thing. Tagging AI content is intended to flag machine-made fakery, so viewers can judge whether to trust the images and videos they see online.
“The importance of trust in that content, specifically where it came from, how it was made, and edited, is critically important,” said Jem Ripley, CEO of marketing company and C2PA member Publicis Digital Experience.
“Of equal importance is ensuring our clients’ brand safety against the risks of synthetic content, and fairly and appropriately crediting creators for their work.”