RegImpact
FCC Proposed Rule · Published 9/10/2024

Implications of Artificial Intelligence Technologies on Protecting Consumers From Unwanted Robocalls and Robotexts

In this document, the Federal Communications Commission (Commission or FCC) proposes steps to protect consumers from the abuse of Artificial Intelligence (AI) in robocalls, alongside actions that clear the path for positive uses of AI, including its use to improve access to the telephone network for people with disabilities. Specifically, the document proposes to: define AI-generated calls; adopt new rules that would require callers to disclose to consumers when they receive an AI-generated call; adopt protections to ensure that callers adequately apprise consumers of their use of AI-generated calls when consumers affirmatively consent to receive such calls; and adopt protections to ensure that positive uses of AI that have already helped people with disabilities use the telephone network can thrive without threat of Telephone Consumer Protection Act (TCPA) liability. The document also seeks additional comment and information on developing technologies that can alert consumers to unwanted or illegal calls and texts, including AI-generated calls.

What this rule actually says

The FCC wants to stop bad actors from using AI to flood people with unwanted robocalls and robotexts—think AI-powered spam calls pretending to be humans. The proposed rule requires anyone making AI-generated calls to tell recipients upfront that they're hearing an AI voice. There's an exception carved out for legitimate uses like accessibility tools that help disabled people use phones.

Who it applies to

  • If you're building a calling/texting product that uses any AI-generated voice or text to reach end users, this likely applies to you
  • If you're in the US, this matters (FCC jurisdiction); international founders should watch this but it won't bind you unless you're calling US numbers
  • If your AI scribe, hiring assistant, or support chatbot makes outbound calls or sends texts to end users, you're in scope
  • If you're only sending messages to users who explicitly asked you to contact them (like appointment reminders), you may still need disclosure, though the rule seeks comment on "affirmative consent" scenarios
  • If your tool helps people with disabilities make or receive calls, you might get special protection, but only if you can document that purpose
  • If you only process audio/text internally and don't contact end users, this doesn't apply to you

What founders need to do

  1. Audit your calling/texting (1-2 days): Map out whether your product makes any outbound calls or texts using AI-generated voices or content. If no, you're clear for now.
  2. Plan for disclosure mechanics (3-5 days): If you do make calls/texts, figure out how you'll tell users an AI is contacting them. This could be a voice disclaimer at the start of calls or a text prefix.
  3. Document user consent (ongoing): Keep records that users actually opted in to receive your AI-generated communications. Email confirmations, checkbox logs, etc.
  4. Monitor the final rule (5 minutes/month): This is still "proposed"—not final law yet. FCC will take comments through late 2024, then issue a final rule in 2025. Subscribe to FCC updates or check back in Q1 2025.
  5. If you're disability-focused, gather evidence (1 week): If your product genuinely helps disabled users access phone services, document it. You may qualify for an exemption.
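The disclosure and consent steps above can be sketched in code. This is a minimal illustration for a text channel only; every name here (`OutboundMessenger`, `ConsentRecord`, the disclosure prefix wording) is hypothetical and not taken from the FCC document, and the actual required disclosure language will depend on the final rule.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical disclosure prefix -- the FCC has not yet specified wording.
AI_DISCLOSURE = "[Automated message generated by AI] "

@dataclass
class ConsentRecord:
    """One auditable opt-in record per user (step 3: document consent)."""
    user_id: str
    channel: str        # e.g. "sms" or "voice"
    consented_at: str   # ISO-8601 UTC timestamp of the opt-in
    source: str         # e.g. "signup_checkbox", "email_confirmation"

class OutboundMessenger:
    """Prefixes AI-generated texts with a disclosure (step 2) and
    refuses to send to users without a recorded opt-in (step 3)."""

    def __init__(self) -> None:
        self.consents: dict[str, ConsentRecord] = {}

    def record_consent(self, user_id: str, channel: str, source: str) -> None:
        # Store when and how the user opted in, for later audits.
        self.consents[user_id] = ConsentRecord(
            user_id, channel,
            datetime.now(timezone.utc).isoformat(), source)

    def prepare_text(self, user_id: str, body: str) -> str:
        # Block sends to anyone without an opt-in on file.
        if user_id not in self.consents:
            raise PermissionError(f"No opt-in on file for {user_id}")
        return AI_DISCLOSURE + body
```

In practice the consent store would be a database table rather than an in-memory dict, and the send path would also log the exact disclosure text delivered, so you can show later both that the user opted in and that they were told an AI was contacting them.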

Bottom line

Monitor this—it's not law yet—but start planning for disclosure requirements if you're making AI-generated outbound calls or texts, and don't assume "they consented once" means you're off the hook.