“She Didn’t Sound Human”: A Mortgage Lender’s AI Cold Calls Just Became a TCPA Class Action
The pauses were awkward. The inflection was off. And the voice claimed to be “just giving a quick call back” to someone who had never called in the first place.
That is how Chase Lamb described the AI-generated voice that cold-called his cell phone in January 2026 on behalf of Mortgage One Funding, a Michigan-based mortgage lender. The voice pitched cash-out refinancing and home improvement loans before transferring Lamb to a live sales agent. A follow-up email confirmed the company’s identity.
On February 24, 2026, Lamb filed a class action complaint in the U.S. District Court for the Eastern District of Michigan, alleging that Mortgage One violated the Telephone Consumer Protection Act (“TCPA”) by using an artificial voice to cold-call consumers without their prior express written consent. The complaint also alleges the company called numbers registered on the National Do Not Call Registry.
The proposed class definition is broad: it includes anyone in the United States who received a telemarketing call from Mortgage One — or from any of the company’s vendors, lead generators, or agents — featuring an artificial or prerecorded voice, where proper consent was not obtained. With potential damages exceeding $5 million and penalties of $500 to $1,500 per call, this case is a textbook illustration of why AI voice agent compliance cannot be an afterthought.
Mortgage One has not yet filed a response. A jury trial has been requested.
How Mortgage One’s AI Voice Playbook Created TCPA Liability
The complaint paints a clear picture of a common AI voice outbound strategy: the artificial voice initiates contact, warms the lead with a pitch, and then hands off to a human closer. It is a model that prioritizes efficiency and volume. And from a compliance standpoint, it is a model that is remarkably easy to get wrong.
According to the complaint, the AI voice identified itself as part of “the Mortgage One Funding rate team” and offered to help lower the recipient’s monthly payment or explore cash-out options. The voice claimed to be returning a previous call. Lamb says he never contacted Mortgage One.
Lamb described the voice as unmistakably artificial. There were awkward pauses between responses, vocal inflection that did not track with natural conversation, and responses that did not align with what he was saying. His complaint states plainly: his “lived experiences led him to easily conclude that the voice was unmistakably an artificial, non-human voice.”
The deception angle matters. The artificial voice attempted to mimic a live person and claimed to be returning a call that never happened. This is exactly the kind of conduct that turns a straightforward TCPA case into one where plaintiffs argue for willful violations, which triples the per-call damages from $500 to $1,500.
Why This AI TCPA Lawsuit Matters for AI Voice Compliance
The Lamb complaint is not the first AI voice TCPA case, but it is one of the most instructive for companies deploying AI voice agents in their marketing operations. Several features of the case deserve every business operator’s attention.
The Class Definition Reaches Beyond the Company
The proposed class sweeps in anyone who received an artificial voice call not just from Mortgage One directly, but also from any of the company’s “vendors, lead generators, or agents.” This is significant. Many companies outsource their AI calling to third-party vendors or lead generation partners. This class definition makes clear that the plaintiff’s attorneys intend to hold Mortgage One responsible for every AI call made on its behalf, regardless of which entity physically placed the call.
This is consistent with the FCC’s longstanding position that the entity “on whose behalf” calls are made can be held liable for TCPA violations. It also echoes the platform liability theories we analyzed in the Lowrey v. OpenAI case, where Twilio and OpenAI were named as defendants for enabling illegal robocalls. The Lamb complaint takes a different but equally dangerous approach: rather than suing the platform, it reaches through the principal to capture all downstream callers.
The FCC’s February 2024 Ruling Is Doing Exactly What It Was Designed to Do
The FCC’s unanimous February 2024 Declaratory Ruling confirmed that AI-generated voices constitute “artificial or prerecorded voice” under the TCPA. At the time, some companies treated this as a theoretical risk. The Lamb case demonstrates it is now an operational reality. Plaintiff’s attorneys are citing the FCC’s ruling to support claims that AI voice calls require the same level of consent as traditional robocalls: prior express consent for informational calls, and prior express written consent for telemarketing.
Mortgage One allegedly had neither. But even companies that believe they have consent need to examine whether that consent specifically authorizes the use of artificial or prerecorded voices. General consent to “receive calls” may not be sufficient. The FCC’s pending Notice of Proposed Rulemaking would go further, requiring AI-specific consent language and mandatory in-call AI disclosure. Smart operators are implementing these requirements now, before they become mandatory.
DNC Violations Stack on Top of Consent Claims
Lamb’s phone number was on the National Do Not Call Registry. This creates a separate and independent TCPA claim on top of the artificial voice allegations. DNC violations are often the easiest claims for plaintiffs to prove because the analysis is binary: either the number was on the registry when the call was placed, or it was not.
We have seen this pattern before. American Income Life settled a DNC class action for $14 million involving nearly 50,000 phone numbers. The lesson from that case applies here with equal force: if you are making outbound calls of any kind, whether through human agents or AI voice technology, you must scrub your calling lists against the National DNC Registry at least every 31 days. This is not optional. It is federal law.
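For teams operationalizing this, the 31-day requirement reduces to a simple pre-call check. The sketch below is illustrative Python, not legal advice; the function names and data shapes are assumptions, and a production system would pull registry data from the FTC’s official download service.

```python
from datetime import date, timedelta

def normalize(number: str) -> str:
    """Strip a phone number down to its digits so different formats compare equally."""
    return "".join(ch for ch in number if ch.isdigit())

def scrub_call_list(call_list, dnc_registry, last_registry_download: date, today: date):
    """Remove DNC-registered numbers; refuse to run against stale registry data.

    Hypothetical helper: the 31-day window mirrors the federal requirement to
    scrub against a registry version obtained within the last 31 days.
    """
    if today - last_registry_download > timedelta(days=31):
        raise ValueError("DNC registry data is over 31 days old; re-download before calling")
    registered = {normalize(n) for n in dnc_registry}
    return [n for n in call_list if normalize(n) not in registered]

# One registered number is dropped from the outbound list.
clean = scrub_call_list(
    ["(555) 010-1234", "555-010-9999"],
    dnc_registry=["555-010-1234"],
    last_registry_download=date(2026, 2, 1),
    today=date(2026, 2, 20),
)
print(clean)  # ['555-010-9999']
```

The point of the staleness check is that scrubbing once is not compliance; the scrub has to recur on a schedule, and a system that silently calls against old data is exactly the failure mode that produces class-wide liability.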
A Growing Pattern of AI Voice TCPA Lawsuits
The Lamb case is not an isolated incident. It fits into an accelerating pattern of TCPA litigation targeting AI-generated voice calls.
In April 2025, a plaintiff sued healthcare marketer Altrua Healthshare in the Northern District of Illinois after receiving five AI-generated voice messages. The Finley v. Altrua Ministries complaint alleged no consent was obtained and that the calls were misdirected to someone who never had a relationship with the company. That case reached a tentative settlement in February 2026, signaling that defendants are choosing to resolve these claims rather than litigate them.
Meanwhile, the Lowrey v. OpenAI complaint, filed in December 2025, pushed AI voice liability even further by naming the technology providers, Twilio and OpenAI, as defendants alongside the telemarketer. That case argues that platforms enabling AI calls share liability when they have knowledge of violations and the technical capability to prevent them.
Taken together, these cases establish a clear trajectory. Plaintiff’s attorneys are testing different theories, different defendants, and different industries. The common thread is AI voice technology deployed without adequate consent architecture. If your company is anywhere in the AI voice calling chain, from the platform provider to the lead generator to the end caller, you are in the litigation crosshairs.
Three Questions Every Company Using AI Voice Must Answer
The Lamb complaint distills the compliance challenge into three fundamental questions. If you cannot answer all three with confidence and documentation, your AI outreach program carries the same exposure that Mortgage One is now facing in federal court.
1. Do You Have Prior Express Written Consent?
Not just consent. Not verbal consent. Not implied consent from a web form submission. Prior express written consent that meets the TCPA’s specific requirements.
For telemarketing calls using an artificial or prerecorded voice, the TCPA requires a signed, written agreement that clearly discloses the consumer is agreeing to receive telemarketing calls using automated technology. The agreement must identify the entity that will be calling. The consumer’s signature must be obtained through a process that demonstrates affirmative consent, meaning no pre-checked boxes, no consent buried in general terms and conditions, and no ambiguous language.
This is the threshold question. If you do not have it, everything else is irrelevant. You are making illegal calls.
If you are purchasing leads from third parties, the question becomes even more pointed: can your lead vendor produce the exact consent language that was shown to each consumer, along with a timestamp and verification of the consumer’s affirmative action? If the answer is “I think so” or “our vendor says they have it,” that is not good enough. The QuoteWizard $19 million settlement happened because the company could not trace consent back through its vendor chain. Do not make the same mistake.
2. Does Your Consent Language Specifically Authorize Artificial or Prerecorded Voice Calls?
This is the question that catches even companies that have consent programs in place. Many businesses obtained consent using language that was written before AI voice agents existed. That language may authorize “calls and text messages” or “telemarketing communications.” It may even mention “automated technology” or “automatic telephone dialing systems.”
But does it specifically mention artificial or prerecorded voices? Under the current regulatory framework, AI-generated voices are classified as artificial voices under the TCPA. The FCC’s pending NPRM would require explicit AI-specific disclosure in consent language. Even before that rule is finalized, the safest approach is to update your consent forms now to include clear authorization for calls using AI-generated or artificial voice technology.
Sample language to consider adding: “I expressly consent to receive calls from [Company], including calls made using artificial, AI-generated, or prerecorded voice technology, at the telephone number provided.” This is not a template for your specific situation, as the required language will depend on your use case and jurisdiction. But it illustrates the specificity that separates defensible consent from consent that collapses under litigation pressure.
3. Are Your Vendors and Lead Generation Partners Following the Same Standards?
The Lamb class definition does not distinguish between calls Mortgage One made directly and calls made by its vendors, lead generators, or agents. This is intentional. Under TCPA vicarious liability principles, the company on whose behalf calls are made can be held liable for its vendors’ violations.
If you are using third-party vendors for AI outreach, your compliance obligation does not end at your own consent forms. You need to know what consent language your vendors are using, whether that language specifically authorizes AI-generated voice calls, whether your vendors are scrubbing against the National DNC Registry, how your vendors handle revocation requests during AI calls, and whether you can produce documentation of all of the above within 48 hours of receiving a demand letter.
Contractual protections matter too. Your vendor agreements should include TCPA compliance certifications, indemnification clauses broad enough to cover defense costs, audit rights that allow you to verify consent practices, and requirements to produce consent documentation on demand. But contracts alone do not insulate you from liability. The Klassen v. SolidQuote decision demonstrated that even companies four entities removed from the actual caller can face vicarious liability when their agents ratify non-compliant calls. You cannot contract your way out of a compliance failure. You have to verify.
What the Lamb Case Tells Us About Where AI Voice Litigation Is Heading
There are several signals in this complaint that suggest how AI voice TCPA litigation will continue to develop.
First, plaintiff’s attorneys are getting better at identifying AI voices. The complaint’s description of “awkward pauses,” “odd vocal inflection,” and unnatural responses reads like a consumer education guide for spotting AI callers. As AI technology improves, the detection methods will evolve too, but the key takeaway is that consumers are paying attention and documenting what they hear.
Second, the deception element is becoming a recurring theme. In both the Lamb and Finley cases, plaintiffs alleged the AI voice attempted to pass as human. This matters because courts may treat deceptive AI calls more harshly than calls that disclose their artificial nature up front. Companies that fail to disclose AI use are not just risking TCPA liability. They are inviting courts to find willful violations, tripling the per-call damages.
Third, the class definitions are getting broader. By including vendors, lead generators, and agents within the proposed class, Lamb’s attorneys are signaling that they intend to cast the widest net possible. This puts pressure on companies to either prove that every call in the campaign was compliant, or face liability for every call that was not.
The Bottom Line
AI voice technology is not going away. The efficiency gains are too significant, and the market demand is too strong. But the Lamb v. Mortgage One Funding case is a clear signal that deploying AI voice agents without a compliance foundation is not a viable business strategy. It is a liability-creation exercise with seven-figure consequences.
The companies that will succeed with AI voice are the ones building compliance into their architecture from day one: verifying consent before enabling calls, disclosing AI use on every call, scrubbing against DNC lists, handling revocation requests in real time, and maintaining documentation that can withstand litigation discovery.
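At the moment of dialing, that architecture reduces to a single go/no-go gate evaluated before every outbound AI call. A minimal sketch with hypothetical flag names mirroring the checklist above:

```python
def may_place_ai_call(record: dict) -> bool:
    """Every check must pass before an AI voice call is placed.
    Flag names are illustrative, not a regulatory taxonomy."""
    return all((
        record.get("written_consent_on_file", False),
        record.get("consent_covers_ai_voice", False),
        record.get("dnc_scrubbed_within_31_days", False),
        record.get("will_disclose_ai_on_call", False),
        not record.get("revocation_received", True),  # missing data defaults to blocked
    ))
```

Note the design choice: every flag defaults to the blocking value, so a record with incomplete data cannot be called. Fail-closed defaults are what separate a compliance architecture from a compliance hope.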
Hope is not a compliance strategy. If you cannot answer those three questions with confidence and documentation, the time to fix your consent architecture is now, before you become the next case study.