Truth in Online Political Ads

Google, which distributes a large portion of ads on the web, tightened its restrictions on potentially misleading political ads in advance of national elections in the United States, India, and South Africa.

What’s new: Starting in November 2023, Google’s ad network will require political ads in select countries to disclose clearly when they contain fictionalized depictions of real people or events, the company announced. The policy doesn’t explicitly mention generative AI, which can automate the production of misleading ads.

How it works: In certain countries, Google accepts election-related ads only from advertisers that pass a lengthy verification process. Under the new rules, verified advertisers whose ads include “inauthentic” images, video, or audio of real-world people or events must declare, in a place where users are likely to notice it, that the depiction does not accurately represent reality.

  • Disclosure will be required for (i) ads that make a person appear to have said or done something they did not say or do and (ii) ads that depict real events but include scenes that did not take place.
  • Disclosure is not required for synthetic content that does not affect an ad’s claims, including minor image edits, color and defect corrections, and edited backgrounds that do not depict real events.
  • The updated requirement will apply in Argentina, Australia, Brazil, the European Union, India, Israel, New Zealand, South Africa, Taiwan, the United Kingdom, and the United States. Google already requires verified election advertisers in these regions to disclose funding sources.

Behind the news: Some AI-generated political messages already in circulation might run afoul of Google’s new restrictions.

  • A group affiliated with Ron DeSantis, who is challenging Donald Trump to become the Republican Party’s nominee for U.S. president, released an audio ad that included an AI-generated likeness of Trump’s voice attacking a third politician’s character. The words came from a post on one of Trump’s social media accounts, but Trump never spoke them aloud.
  • In India, ahead of a 2020 state-level election in Delhi, Manoj Tiwari of the Bharatiya Janata Party circulated videos of himself speaking in multiple languages. AI rendered the clips, originally recorded in Hindi, in Haryanvi and English, and a generative adversarial network matched the candidate’s lip movements to the generated speech. Under Google’s new requirements, the translated clips would call for disclosure because they made it appear that the candidate had done something he didn’t do.
  • In January 2023, China’s internet watchdog issued new rules that similarly require generated media to bear a clear label if it might mislead an audience into believing false information.

Yes, but: The rules’ narrow focus on inauthentic depictions of real people or events may leave room for misleading generated imagery. For instance, a U.S. Republican Party video contains generated images of a fictional dystopian future that follows Joe Biden’s hypothetical re-election in 2024. Because the images don’t depict real events, they may not require clear labeling under Google’s new policy.

Why it matters: Digital disinformation has influenced elections for years, and the rise of generative AI gives manipulators a new toolbox. Google, which delivers an enormous quantity of advertising via Search, YouTube, and the web at large, is a powerful vector for untruths and propaganda. With its new rules, the company will assume the role of regulating itself in an environment where few governments have enacted restrictions.

We’re thinking: Kudos to Google for setting standards for political ads, generated or otherwise. The rules leave some room for interpretation; for instance, does a particular image depict a real event inauthentically or simply depict a fictional one? Still, if Google enforces the policy consistently, it’s likely to reduce disinformation. We hope the company will provide a public accounting of enforcement actions and outcomes.