The FCC Says AI Voice Requires Specific Consent — Most Consent Forms Don’t Have It

If your company uses AI voice technology to reach consumers, you already know that the FCC weighed in on this in 2024. What most companies have not fully reckoned with is the gap between the consent language they are currently using and what the FCC’s ruling actually demands. The technology has changed. The consent forms have not. That gap is where class action plaintiffs and their attorneys are setting up camp.

This article breaks down exactly what the FCC has said, where the current regulatory framework is heading, and what the absence of AI-specific consent language means for your business right now.

Your Consent Language Was Written Before AI Voice Existed

Most companies’ TCPA consent disclosures were drafted when “artificial or prerecorded voice” meant a recorded message deposited on a voicemail or played at the start of a robocall. The statutory phrase dates to 1991, and real-time AI voice generation did not exist when most of these consent forms were designed. The disclosures reference “artificial or prerecorded voice” because that is what the statute requires, but the practical reality those words were written to describe is fundamentally different from a conversational AI agent that generates speech dynamically, responds to questions, and sustains a dialogue that sounds indistinguishable from a human caller.

That disconnect matters for a specific reason. Consent under the TCPA is supposed to be knowing and voluntary. A consumer who consented in 2019 or 2021 to receiving “calls using prerecorded or artificial voice” likely understood that to mean a recorded message, not a ten-minute AI-powered conversation designed to mimic human interaction. When the technology has leapt this far forward and the consent language has stayed frozen in place, the legal footing becomes unstable. The FCC has noticed, and the litigation community has noticed even more quickly.

What the FCC Actually Said About AI Voice and the TCPA

In February 2024, the FCC issued a unanimous Declaratory Ruling that removed any ambiguity on this point: AI-generated voices constitute “artificial or prerecorded voice” under the TCPA. It does not matter how human-like the AI sounds. It does not matter whether the system generates speech in real time or plays back prerecorded elements. If AI technology is producing the voice that contacts the consumer, TCPA consent requirements apply.

This ruling catches many voice AI builders off guard for a subtle but important reason. Companies that have structured their calling programs around the assumption that no auto-dialer means no TCPA problem are operating under a framework the FCC has now corrected. The “artificial voice” trigger in the statute is entirely separate from the auto-dialer (ATDS) analysis. A company can avoid ATDS liability entirely and still face full TCPA exposure simply because it is using AI-generated voice to contact consumers. The consent requirement follows the technology, not the dialing mechanism.

Beyond the consent question, the February 2024 ruling made clear that all existing TCPA rules apply without modification to AI voice calls. Calling hours restrictions, Do Not Call list obligations, identification requirements, and opt-out mechanisms — none of these have a carve-out for AI. The technology is new. The rules are not.

The Proposed AI-Generated Call Disclosure — Where the FCC Is Heading

In September 2024, the FCC issued a Notice of Proposed Rulemaking that goes further than the February declaratory ruling. Where the earlier ruling clarified that existing rules apply to AI voice, the NPRM proposes new requirements specifically designed for AI-generated calls.

The proposed definition of an “AI-Generated Call” is broad enough to cover the full range of current voice AI technologies. It encompasses any call that uses computational methods, machine learning, predictive algorithms, or large language models to produce voice or text communication to a called party on an outbound telephone call. That definition would bring AI-generated text messages explicitly under the same consent framework, which represents a meaningful expansion of where the FCC sees its authority.

The practical requirements proposed in the NPRM center on what the FCC is calling an AI-Generated Call Disclosure. Under the proposed rules, consent for AI voice calls and for AI-generated text messages must each specifically reference AI, and that disclosure must be separate and distinct from the general TCPA consent language already required. The last point is the one most companies are not anticipating. A generic consent that covers “prerecorded and artificial voice calls” would not satisfy the proposed requirement. The disclosure addressing AI-generated calls would need to stand on its own.
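In practice, that separation likely means two affirmative acts in the consent flow rather than one: the consumer checks one box agreeing to the general prerecorded/artificial voice and autodialer language, then checks a second, independently presented box agreeing specifically to calls and texts generated by AI. The NPRM does not prescribe an exact mechanism, so this two-checkbox structure is one reasonable reading of “separate and distinct,” not a mandated format.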

One important limit in the proposed rules: the FCC explicitly acknowledged that TCPA requirements do not extend to inbound call technologies. The proposed framework applies to outbound calls only. For companies using AI to handle inbound customer service calls, the NPRM would not impose new consent obligations on that side of the interaction.

The timing on a final rule is uncertain. The comment period closed in October 2024, and the Trump administration has signaled that new rulemaking is not a priority. There is a meaningful possibility that the proposed AI-specific disclosure rules do not reach final form quickly. But that uncertainty cuts both ways. Companies that wait for a final rule to update their consent language will be scrambling to retrofit disclosures across an existing customer base when the rule eventually arrives. Companies that implement AI-specific disclosures now will be ahead of the requirement and insulated from the argument that their consent language was inadequate even before new rules took effect.

The Lowery v. OpenAI Lawsuit Changes the Calculus

Just before the new year, a federal lawsuit filed in Virginia put a sharper point on what platform liability for AI voice calls could look like in practice. The plaintiff, William Lowery, alleged he received more than 30 unwanted texts and multiple AI-powered robocalls about estate planning services. He sued not only the company making the calls, but also Twilio and OpenAI.

Twilio did not make the calls. OpenAI did not send the texts. A company called Fresh Start Group did. But the complaint’s theory is that Twilio and OpenAI enabled, facilitated, and profited from the violations while having the technical capability to prevent them. The inclusion of OpenAI as a defendant is new terrain in TCPA litigation. Previous platform liability cases had gone after communications infrastructure providers. Targeting an AI provider for the autonomous calling capabilities its technology makes possible is a different kind of argument, and one with potentially significant implications for the entire AI voice ecosystem.

The complaint’s specific allegations against OpenAI focus on the autonomous nature of the AI calling system at issue. OpenAI’s own tutorials and blog posts describe building voice agents that can make cold calls with zero human involvement. The AI initiates the call, generates the audio, and interacts with the consumer through the entire conversation without a human making any individual decision to dial. The complaint asks a question that existing TCPA doctrine has not fully answered: when AI makes the decision to place a call, who initiated it under the statute?

For companies operating voice AI platforms, this case is a reason to think carefully about your platform design, your terms of service, and what your compliance infrastructure actually requires of customers before they can use your technology for outbound consumer contact. The “we are just the platform” defense is becoming harder to sustain in proportion to how specifically and deeply the platform is integrated into the calling operation.
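What that looks like in practice will vary, but one concrete control is a consent gate at the point of call initiation. The sketch below is illustrative Python, not any platform’s actual API; the record fields and the rule it enforces, requiring an AI-specific consent flag rather than just the generic artificial-voice flag before an outbound AI call is placed, are assumptions modeled on the NPRM’s separate-disclosure proposal.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical consent record. Field names are illustrative; they are
# not drawn from any actual platform's data model.
@dataclass
class ConsentRecord:
    obtained_at: datetime
    covers_artificial_voice: bool    # generic pre-2024 TCPA clause
    covers_ai_generated_voice: bool  # AI-specific voice disclosure
    covers_ai_generated_texts: bool  # AI-specific text disclosure
    revoked: bool = False

def may_place_outbound_ai_call(consent: Optional[ConsentRecord]) -> bool:
    """Gate outbound AI voice calls on AI-specific consent.

    Treats generic "prerecorded or artificial voice" consent alone as
    insufficient, mirroring the separate-and-distinct disclosure the
    FCC's NPRM proposes.
    """
    if consent is None or consent.revoked:
        return False
    # Require the AI-specific flag, not just the generic clause.
    return consent.covers_ai_generated_voice
```

A gate like this also gives a platform a documented, auditable answer to the question the Lowery complaint raises: what the platform actually required before its technology could dial a consumer.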

The Gap in Most Consent Forms Right Now

The standard consent language most companies are currently using reads something like this: “You consent to receive calls using prerecorded or artificial voice and/or automatic telephone dialing systems.” That language was adequate for the regulatory environment that existed before 2024. It is not adequate now, and it will become increasingly difficult to defend as the FCC’s proposed rules work their way toward finalization.

The gaps between pre-2024 disclosures and what adequate consent now requires are specific:

- There is no reference to AI-generated voice technology.
- There is no disclosure that calls may involve AI that generates speech in real time rather than playing a prerecorded message.
- There is no distinction between the experience of receiving a robocall with a recorded script and the experience of a sustained conversation with an AI system.
- There is no reference to AI-generated text messages, which the FCC’s proposed rules would bring explicitly under the same consent framework.

These gaps create a specific legal vulnerability. A consumer who agreed to receive “prerecorded voice” calls may have a credible argument that they did not consent to a ten-minute AI-driven conversation that sounded like a human being. The more the technology diverges from what a reasonable consumer would have understood “artificial voice” to mean in the year they signed the consent, the more that argument gains traction with courts and juries. Plaintiffs’ attorneys understand this, and they are actively looking for companies whose consent language was written before AI voice became a practical reality.

There is also a specificity problem that runs in both directions. Vague consent language is easier to challenge as inadequate notice. Specific consent language that clearly identifies what technology will contact the consumer is harder to attack and easier to defend. Companies that have taken the time to update their consent forms with AI-specific language are not just better positioned legally; they are making a documented choice to give consumers accurate information about what they are agreeing to, which courts treat favorably.

What AI Voice Consent Language Should Include

The FCC’s proposed rules are not yet final, but smart operators are not waiting. The elements of an adequate AI voice consent disclosure are reasonably clear from the February 2024 ruling, the September 2024 NPRM, and the direction of platform liability litigation. At minimum, consent language for AI voice outreach should explicitly reference AI-generated voice technology in addition to the general ATDS consent language rather than bundling everything into a single generic clause.
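As one illustration of what that could look like, and only an illustration rather than language tailored to any particular program, an AI-specific disclosure presented separately from the general consent might read: “You agree that the calls and text messages you receive may be generated by artificial intelligence, including AI that produces a human-sounding voice and responds to you in real time. This consent to AI-generated calls and texts is separate from, and in addition to, your consent to prerecorded or artificial voice calls and calls placed using an automatic telephone dialing system.” The operative features are the explicit reference to AI, the real-time conversational capability, the coverage of text messages, and the separation from the generic clause.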

Don’t Wait for the Final Rule

The February 2024 declaratory ruling is already in effect. There is no waiting period, no grace period, and no ambiguity about whether the requirement applies to AI voice calls being made today. Whatever uncertainty exists around the rules proposed in the NPRM, the consent obligation for AI voice is not part of that uncertainty. It is settled.

State laws are moving independently and on their own timelines. Colorado’s AI Act takes effect in 2026 with its own set of requirements for companies using AI in consequential decision-making contexts, including insurance. Illinois’ BIPA has implications for companies capturing and processing voice data. California’s privacy framework creates disclosure obligations that overlap with the federal TCPA requirements in ways that require careful analysis.

Updating consent language now is not a significant undertaking. The cost of defending a class action brought by a plaintiff whose AI-generated calls were covered by consent language that did not specifically reference AI voice is a different calculation entirely. Retrofitting consent language across an existing customer base after a lawsuit has been filed, or after a final rule has been published, is operationally painful and legally exposed in ways that proactive updating is not.

The regulatory environment around AI and telecommunications is going to keep changing. The FCC’s proposed rules may arrive in modified form, may be delayed, or may be superseded by congressional action. What will not change is the basic requirement that consumers receive clear, accurate, and specific disclosure about the technology being used to contact them. Building consent language around that principle, rather than around the minimum the last regulatory update would tolerate, is the approach that holds up over time.

John H. Henson

John Henson founded Henson Legal, PLLC in May 2025 after a career guiding household-name brands through TCPA, state privacy laws, and FTC regulations—including serving as interim General Counsel at LendingTree. He focuses on helping lead sellers and lead buyers manage TCPA vicarious liability risk and on advising AI voice product builders on FCC artificial voice compliance. His clients include insurance, financial services, and technology companies on the leading edge of customer acquisition.

https://www.henson-legal.com/about