In a recent blog post, Meta Platforms announced that starting in 2024, advertisers on Facebook and Instagram will be required to disclose the use of artificial intelligence (AI) or other digital methods to alter or create political, social, or election-related advertisements. The move by Meta, the world’s second-largest digital ad platform, aims to address concerns about content manipulation and the spread of misinformation. By enforcing transparency, Meta intends to ensure that users are aware of any digital alterations or fabricated elements in these advertisements.
Under the new policy, Meta will require advertisers to disclose whether their ads depict real individuals doing or saying things they never did. Advertisers must also disclose whether they have digitally created a realistic-looking person who does not exist. These requirements are intended to prevent the spread of deepfakes and other misleading content that could sway public opinion.
In addition, Meta will require advertisers to indicate whether their ads depict events that did not occur, alter footage from real events, or use images, videos, or audio recordings unrelated to the event shown. With these rules, Meta aims to preserve the authenticity and integrity of content on its platforms.
These policy updates extend Meta’s earlier efforts to limit the misuse of AI in advertising. Last month, Meta announced that it was barring political advertisers from using its generative AI ad tools, recognizing that such tools can produce realistic yet fabricated content and raise concerns about their impact on democratic processes and elections. Alphabet’s Google, the largest digital advertising company, recently introduced similar image-customizing generative AI ad tools and took its own precautions, including blocking a list of “political keywords” from being used as prompts. Such measures illustrate how digital platforms are acknowledging the risks posed by AI-generated content.
The use of AI to create and manipulate content has drawn scrutiny from US lawmakers, who view deepfakes that falsely depict political candidates in advertisements as a threat to the integrity of federal elections. The emergence of affordable, accessible generative AI tools has made it easier for malicious actors to produce convincing deepfakes, amplifying these concerns.
By imposing stricter disclosure requirements on AI-altered political advertisements, Meta aims to address these concerns directly. The transparency measures will give users a clearer understanding of the content they encounter, helping them make informed decisions and mitigating the influence of false or misleading information.
Meta’s decision to enforce transparency in AI-altered political advertisements marks a significant step toward combating content manipulation on Facebook and Instagram. By requiring advertisers to disclose the use of AI or other digital methods, Meta aims to safeguard the authenticity and integrity of political, social, and election-related ads. The move aligns with a broader industry trend, as platforms like Google take their own steps to prevent the misuse of generative AI tools. Ultimately, these initiatives seek to preserve user trust and uphold democratic processes that depend on fair and transparent information.