In the wake of President Biden’s Executive Order emphasizing the potential societal harms associated with artificial intelligence (AI), Adobe finds itself at the center of a controversy. A photorealistic image of a Gaza explosion, generated by AI, was used by several small blogs and websites without being properly labeled as AI-generated. The Australian news outlet Crikey was the first to report this issue, igniting significant pushback on social media platforms.
Amid the uproar, an Adobe spokesperson responded that Adobe Stock requires all generative AI content to be clearly labeled as such when submitted. According to the company, the images were labeled as generative AI when they were submitted and made available for licensing, and Adobe maintains that it is crucial for customers to know which Adobe Stock images were created using AI tools.
Furthermore, Adobe highlights its commitment to fighting misinformation through initiatives like the Content Authenticity Initiative. This initiative aims to enhance the adoption of Content Credentials, which provide crucial context about the creation and editing of digital content, including the use of AI tools. By working with publishers, camera manufacturers, and other stakeholders, Adobe strives to improve transparency in the digital media landscape.
Adobe Stock, a platform that provides designers and businesses with access to a wide range of curated and royalty-free creative assets, is primarily known for its vast collection of photos, illustrations, videos, and templates. However, it is less prominent in the realm of editorial or photojournalism imagery compared to competitors like Getty Images.
While Adobe Stock previously offered editorial assets, it no longer includes them in its stock offering. Instead, Adobe focuses on the storytelling possibilities of its stock imagery, both traditional and AI-generated. The company defines “illustrative editorial” as conceptual imagery designed to illustrate articles on current events and newsworthy topics. This type of content often features images of real brands and products, providing visual context to convey a story.
However, it is essential to note that Adobe distinguishes illustrative editorial from traditional editorial content that documents ongoing or past events. The platform currently does not accept traditional editorial content. In terms of illustrative editorial, Adobe imposes certain restrictions on the types of images it accepts, such as avoiding recognizable people, copyrighted material, and digitally manipulated trademarked logos.
The recent AI-generated image controversy is not the first time Adobe Stock has faced scrutiny over generative AI. An earlier report revealed that Adobe Stock contributors had raised concerns that Adobe trained its Firefly model on their stock images without explicit notification or consent. Those contributors argued that Firefly's popularity, along with the influx of AI-generated images onto the platform, was cannibalizing demand for their own stock work.
In response to these concerns, Adobe stated that it respects the rights of third parties and enforces terms that contributors must adhere to, including guidelines specific to the use of generative AI tools.
The controversy surrounding AI-generated images on Adobe Stock highlights the need for transparency in the use of AI tools and the labeling of generative AI content. While Adobe affirms its commitment to combating misinformation and improving transparency, the incident underscores a harder problem: even when a platform labels AI-generated content at the source, those labels can disappear once the images are republished elsewhere. As AI continues to shape the digital media landscape, it is crucial for industry leaders to proactively address such concerns and ensure responsible practices to maintain trust among users and creators.