Katy Perry was belting out new tunes in her recording studio when she received a text from her mother: "didn't know you went to the Met." To Katy's surprise, the text was accompanied by a picture of her wearing a gorgeous floral gown, with photographers falling over each other to get a shot.
The same happened to the singer Rihanna: pictures of her in a splendid gown began circulating online even though she never attended the Gala. If you are still confused, let us clear the air. Those pictures were generated with generative artificial intelligence (AI) and posted on various social media platforms by anonymous accounts. Neither celebrity ever made it to the Met Gala.
Known formally as the Costume Institute Gala, the event started in 1948 as an annual charity fundraiser and has raised around $250 million since. It draws celebrities, fashion designers, and influential figures from around the world to walk the red carpet (green this year, in keeping with the theme). Although attending this prestigious event is considered a privilege, the point is that neither celebrity was there, and yet the fake images of Katy Perry alone were viewed more than 13 million times.
Authenticity in AI-Generated Art
Now, this may have stirred up a storm in the fashion world for all the right reasons, but for us AI enthusiasts, the astonishing reach and popularity of the fake pictures deserve a closer look. In recent times, the rise of AI has disrupted many traditional practices, and traditional photography is one of them. Photography is witnessing an alarming AI revolution that is making it difficult to distinguish what is real from what is fake.
Artificial intelligence is not just hurting this industry; it can also help professional photographers remove flaws with tools such as Adobe Firefly, and it can create royalty-free images in an instant with tools such as DALL·E. However, image generation without prior authorization, and image generation with intentional malice, is what worries many in this industry.
Until now, it has been very hard to verify the authenticity of images created by artificial intelligence tools. There are tools that claim to identify AI-generated images, but their accuracy is disappointing. Separating the real from the fake demands precise results, and the current tools cannot deliver them.
Efforts to Safeguard AI and Distinguish Real from Fake
In February 2024, experts from 30 countries came together to collaborate on a comprehensive report aimed at addressing global artificial intelligence safety concerns. The experts recommended immediate and sustained attention to policy frameworks, technical standards, educational initiatives, and collaborative efforts to mitigate AI risks. The report was considered a milestone toward realizing the full potential of AI while mitigating the associated risks and safeguarding the overall well-being of society. It is believed that unless the creators of such images, meaning the applications that generate them, step forward, the problem may never be resolved.
OpenAI Steps Up AI Content Transparency
Most recently, OpenAI has responded to popular demand and taken solid steps to address the issues that generative AI output is causing in the imaging and content fields. OpenAI has joined the Coalition for Content Provenance and Authenticity (C2PA) and will integrate metadata into its generative AI models to increase transparency about generated content.
Metadata provides context through details such as the source, type, owner, and relationships to other datasets. It is not only a cornerstone for generative AI but can also serve as a tag that identifies the origin of an image or a piece of text.
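To make the idea concrete, here is a minimal sketch of a provenance manifest in Python. It is an illustration of the concept only, not the actual C2PA manifest schema: the field names and the `build_manifest` helper are hypothetical, and a content hash stands in for the cryptographic signing that a real C2PA implementation performs.

```python
import hashlib
import json

def build_manifest(image_bytes: bytes, source: str, tool: str) -> dict:
    """Build a simplified provenance manifest (illustrative only,
    not the real C2PA schema)."""
    return {
        "source": source,  # who produced the content
        "tool": tool,      # the generator that created it
        # Hash of the content so the manifest is bound to these exact bytes.
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

# Stand-in bytes for a generated image.
fake_image = b"\x89PNG...example image bytes"
manifest = build_manifest(fake_image, "example-studio", "hypothetical-image-model")
print(json.dumps(manifest, indent=2))
```

Because the manifest records a hash of the exact bytes it describes, anyone who receives both the image and the manifest can check whether they still belong together.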
OpenAI has already started adding C2PA metadata to all images generated by DALL·E, including those produced through ChatGPT and the OpenAI API. However, sources at OpenAI have also commented that while the company makes every possible effort on technical measures, content authenticity ultimately requires a collective effort from content creators, handlers, and hosting platforms. OpenAI is also working on tamper-resistant watermarking to detect images and audio generated with the help of AI.
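The tamper-evidence idea behind such metadata can be sketched in a few lines: recompute the content hash and compare it with the one recorded in the manifest. This is a toy illustration of the principle, assuming the simplified hash-based manifest described above; real C2PA verification additionally checks cryptographic signatures.

```python
import hashlib

def matches_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Return True if the bytes still match the hash recorded
    in the (simplified, hypothetical) manifest."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["content_sha256"]

original = b"original image bytes"
manifest = {"content_sha256": hashlib.sha256(original).hexdigest()}

print(matches_manifest(original, manifest))            # unmodified content
print(matches_manifest(b"edited image bytes", manifest))  # any edit breaks the match
```

Note that this scheme only proves an image was altered after the manifest was written; it cannot say anything about an image whose metadata was stripped entirely, which is why OpenAI stresses that platforms must preserve the metadata end to end.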
Conclusion
With increasing awareness and the easy availability of tools, fake or manipulated images are becoming increasingly common, especially when they have the potential to attract huge interest. However, constant efforts to improve the situation are making a mark in this battle.
It may not yet be possible to tell whether an image or a piece of content is AI-generated or fake, but with the help of associated metadata and tamper-resistant watermarking, we are getting closer to a point where we can reliably differentiate between the two. The impact on the fabric of our society is immense either way: it could be hugely beneficial for many and detrimental for some. But together we can help tame this problem and make this amazing technology work alongside humankind to deliver beneficial results.