RegImpact
FCC proposed rule · Published 8/5/2024 · Effective 9/4/2024

Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements

In this document, the Federal Communications Commission (Commission or FCC) initiates a proceeding to provide greater transparency regarding the use of artificial intelligence-generated content in political advertising. Specifically, the Commission proposes to require radio and television broadcast stations; cable operators, Direct Broadcast Satellite (DBS) providers, and Satellite Digital Audio Radio Service (SDARS) licensees engaged in origination programming; and permit holders transmitting programming pursuant to section 325(c) of the Communications Act of 1934 (Act), to provide an on-air announcement for all political ads (including both candidate ads and issue ads) that contain artificial intelligence (AI)-generated content disclosing the use of such content in the ad. The Commission also proposes to require these licensees and regulatees to include a notice in their online political files for all political ads that include AI-generated content disclosing that the ad contains such content.

What this rule actually says

The FCC wants TV, radio, cable, and satellite platforms to tell viewers when a political ad contains AI-generated content. If a campaign runs a 30-second spot with a deepfake voice or AI-generated imagery attacking a candidate, the station must say on-air "this ad contains artificial intelligence-generated content" and log it in their public political file online.

Who it applies to

  • If you run political ads on broadcast/cable/satellite TV or radio: this applies to you.
  • If you build AI tools but don't buy ad time yourself: this does not apply to you (the platforms running the ads are responsible, not the tool maker).
  • If you're outside the US: no impact; this is FCC jurisdiction only and stops at the border.
  • AI use cases that trigger it: deepfakes, AI voice synthesis, synthetic imagery, generated text-to-speech in political spots. Regular candidate photos or standard editing do not count as "AI-generated content" under this proposal.
  • Other AI products unaffected: medical scribes, hiring assistants, support chatbots, and any non-political use are completely outside this rule's scope.

What founders need to do

  1. Check if you're actually a broadcaster or ad platform (1 hour). If you're building an indie AI tool and not running TV/radio spots yourself, stop here: this doesn't require action from you.
  2. If you do buy political ad time: audit your ads for AI-generated elements (1–2 days). Identify any AI voice, deepfakes, or synthetic imagery you plan to use.
  3. Work with your broadcaster/platform's compliance team (ongoing). When you buy ad time, disclose AI use upfront in writing. The platform (not you) is legally responsible for the on-air notice, but it will need your disclosure to get the announcement right.
  4. Document what's AI-generated (1 day). Keep records of which ad assets use AI so you can quickly answer platform compliance requests.
  5. Monitor FCC updates (10 min/month). This is still a *proposed* rule as of August 2024, so the final rules may change. Subscribe to FCC updates or have your ad buyer watch for finalization.
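For the documentation step, a lightweight asset log is enough. The proposal doesn't prescribe any record format, so the fields, names, and example tool below are purely illustrative, one possible way to structure the log:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record format -- the FCC proposal does not prescribe one.
@dataclass
class AdAsset:
    asset_id: str       # your internal identifier for the ad asset
    ad_title: str       # name of the political spot it appears in
    ai_generated: bool  # does this asset contain AI-generated content?
    ai_type: str        # e.g. "voice synthesis", "synthetic imagery", or "none"
    tool_used: str      # what produced it, for platform compliance requests

def build_manifest(assets: list[AdAsset]) -> str:
    """Serialize the asset log as JSON to share with a platform's compliance team."""
    return json.dumps([asdict(a) for a in assets], indent=2)

manifest = build_manifest([
    AdAsset("spot-001-vo", "County Ballot Measure Spot", True,
            "voice synthesis", "example-tts"),
    AdAsset("spot-001-broll", "County Ballot Measure Spot", False,
            "none", "camera footage"),
])
print(manifest)
```

Plain JSON keeps the log easy to attach to an insertion order or email when a broadcaster asks which assets triggered the disclosure.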

Bottom line

Unless you're actively buying political ads on broadcast/cable/satellite TV or radio with AI-generated content in them, monitor but don't act. This rule doesn't apply to indie AI product builders yet, and it's still in the proposal phase anyway.