Colorado Just Rewrote Its AI Law. Here's What Actually Changed.
The AI Policy Working Group's unanimous framework replaces the high-risk regime you've been prepping for with something fundamentally different. Your compliance playbook needs an update.
If you have been prepping for Colorado's AI Act based on the original SB 24-205, slow down. The state's AI Policy Working Group just released a proposed replacement framework that scraps the regulatory structure you have been reading about and builds something meaningfully different. Governor Polis announced unanimous support for the new framework on March 17, 2026, after five months of closed-door negotiations among business groups, consumer advocates, technology companies, hospitals, school districts, and civil-rights organizations.
The original law, which was supposed to take effect in February 2026, was already delayed once to June 30, 2026, after two prior attempts to rewrite it collapsed in the legislature. This is the third attempt in two years to find agreement on how to regulate AI decision-making in Colorado. And this time, the framework has to be turned into a bill and passed before the current session ends.
For lead generation companies, AI voice platform builders, and insurance agencies using AI-powered tools, this rewrite changes what you need to do, when you need to do it, and who bears the risk when something goes wrong. Here is what you need to know.
The "High-Risk" Framework Is Dead
The original law was built around "high-risk artificial intelligence systems." That entire concept is gone. The new framework replaces it with "Covered ADMT," which stands for automated decision-making technology. This is not just a relabeling. It is a different threshold test for when the law applies.
Under the old law, your system was covered if it was a "substantial factor" in a consequential decision. The new standard focuses on whether the technology "materially influences" a consequential decision. The working group's framework defines covered ADMT as any technology that processes personal information and engages in computation to make predictions, recommendations, rankings, or other information used to guide or assist decision-making about an individual.
Importantly, the framework explicitly excludes consumer tools like spell-check, calculators, spreadsheets, robocall filtering, and general-purpose large language models like ChatGPT. That last carve-out matters. If your platform uses a general-purpose LLM as an underlying component, the question becomes whether your specific implementation of that technology qualifies as covered ADMT based on how it is used in a consequential decision.
For lead generators running AI-powered qualification, routing, or recommendation engines in covered verticals, you are still squarely in scope. But the line between "covered" and "not covered" just got clearer.
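To make that threshold concrete, here is a minimal sketch of the definitional test as described above. Every type and field name is invented for illustration; whether a real system qualifies is a legal and factual question, not a boolean check.

```typescript
// Hypothetical sketch of the framework's definitional test for covered ADMT.
// Field names are illustrative, not statutory language.
interface SystemProfile {
  processesPersonalInfo: boolean;        // processes personal information
  outputsGuidance: boolean;              // produces predictions, recommendations, rankings,
                                         // or other info used to guide decisions about an individual
  materiallyInfluencesDecision: boolean; // the new threshold, replacing "substantial factor"
  inCoveredDomain: boolean;              // education, employment, housing, lending, insurance,
                                         // healthcare, or essential government services
  isExcludedTool: boolean;               // spell-check, calculator, spreadsheet, robocall
                                         // filtering, general-purpose LLM
}

function likelyCoveredAdmt(s: SystemProfile): boolean {
  if (s.isExcludedTool) return false;    // explicit carve-outs
  return (
    s.processesPersonalInfo &&
    s.outputsGuidance &&
    s.materiallyInfluencesDecision &&
    s.inCoveredDomain
  );
}
```

Note where the carve-out sits: an excluded tool is out of scope regardless of the other factors, but a specific implementation built on top of one (such as an LLM wrapped in a lead-qualification product) would be evaluated on its own facts.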
The Risk Management Program Requirement Is Gone
This is the biggest practical change. The original law required deployers to implement a formal risk management program, conduct detailed impact assessments, and rewarded compliance with a rebuttable presumption of reasonable care. That entire incentive structure has been removed.
The new framework replaces it with disclosure and record-keeping obligations. No impact assessments. No iterative risk management programs. No rebuttable presumption. The onerous compliance and reporting requirements that drew criticism from the technology industry, and that were blamed for chilling AI development in Colorado, are gone.
If you have already been building out a risk management framework in anticipation of the original law, that work is not wasted. It is just no longer legally required under this proposal. Keep it as a best-practices layer, because demonstrating proactive risk management still strengthens your position if enforcement actions or litigation arise.
What You Actually Have to Do Now
The deployer obligations under the new framework boil down to three requirements.
Point-of-Interaction Notice
When you use a covered ADMT in a consequential decision, you have to tell the consumer. The framework requires a "clear and conspicuous notice" to individuals when automated decision-making technology is being used for a consequential decision about them. This can be satisfied by a public posting at or near the point of consumer interaction, such as a disclosure on your website or within your lead flow.
If you have read our analysis of the SnapCommerce case on consent language placement, the same principles apply here. "Clear and conspicuous" means hard to miss. Burying a disclosure in small gray text at the bottom of a form will not satisfy this requirement.
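As an illustration only, here is one way a web lead flow might surface the notice at the point of interaction, placing it next to the form's submit action rather than in fine print. The wording is a placeholder, not vetted disclosure language.

```typescript
// Illustrative only: render an ADMT notice adjacent to the form's submit
// action, not below the fold. Placeholder wording, not approved language.
function renderAdmtNotice(form: HTMLFormElement): void {
  const notice = document.createElement("p");
  notice.textContent =
    "We use automated decision-making technology to match and route your " +
    "request. This may influence which providers contact you.";
  // Conspicuous: readable size and contrast, placed where the consumer acts.
  notice.style.fontSize = "1rem";
  notice.style.color = "#000";
  const submit = form.querySelector('button[type="submit"]');
  submit?.insertAdjacentElement("beforebegin", notice);
}
```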
Post-Adverse Outcome Notice (30 Days)
If a consequential decision results in an adverse outcome, the deployer must, within 30 days, provide a description of the decision, the role the ADMT played in it, the types and sources of personal data used, instructions for data correction under the Colorado Privacy Act, and information about requesting meaningful human review. The framework directs the Attorney General's office to adopt detailed rules on these post-adverse disclosures by December 31, 2026.
The appeals process has been narrowed compared to the original law. Where SB 24-205 seemed to allow unlimited direct appeals, which companies warned would divert most personnel to handling appeal cases, the new framework limits human review and reconsideration to what is "commercially reasonable." That phrase will almost certainly be the subject of future regulatory guidance and litigation, but it represents a significant concession to business interests.
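Here is a hypothetical sketch of the payload such a notice might carry, with field names keyed to the five elements the framework lists. The actual required content and format will come from the AG's rulemaking.

```typescript
// Hypothetical shape for a post-adverse-outcome notice; field names are
// illustrative, keyed to the five elements the framework lists.
interface AdverseOutcomeNotice {
  decisionDescription: string;      // what was decided
  admtRole: string;                 // the role the ADMT played in the decision
  dataTypesAndSources: string[];    // types and sources of personal data used
  correctionInstructions: string;   // how to correct data under the Colorado Privacy Act
  humanReviewInstructions: string;  // how to request meaningful human review
  decisionDate: Date;
}

// The notice must go out within 30 days of the adverse outcome.
function noticeDeadline(decisionDate: Date): Date {
  const deadline = new Date(decisionDate);
  deadline.setDate(deadline.getDate() + 30);
  return deadline;
}
```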
Record Retention (3 Years Minimum)
Keep records sufficient to demonstrate compliance, including system version identifiers, change logs, and documentation of material mitigation changes. This is straightforward if you have good documentation practices already. If you do not, start building them now.
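A minimal sketch of what such a record might look like, with invented field names covering the items the framework calls out. The statute's actual record-keeping rules may differ once finalized.

```typescript
// Illustrative compliance record; retain for at least three years.
interface AdmtComplianceRecord {
  systemId: string;
  versionId: string;                                   // system version identifier
  changeLog: { date: Date; summary: string }[];        // changes over time
  mitigations: { date: Date; description: string }[];  // material mitigation changes
  createdAt: Date;
}

const THREE_YEARS_MS = 3 * 365 * 24 * 60 * 60 * 1000;

// Records younger than three years must be kept.
function eligibleForDeletion(record: AdmtComplianceRecord, now: Date): boolean {
  return now.getTime() - record.createdAt.getTime() >= THREE_YEARS_MS;
}
```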
The Liability Framework Got Teeth
Here is where it gets interesting, and where your vendor contracts need attention.
The liability question is what killed both prior rewrite attempts. During the 2025 special session, disputes centered on whether developers and deployers should face joint and several liability, meaning either party could be held responsible for the full amount of damages. That provision grounded both competing bills.
The working group's solution: allocate responsibility between developers and deployers based on relative fault, under a several liability model. Developers are liable only when the ADMT was used as intended or as contracted. If a deployer goes off-script and uses the system in ways it was not intended, marketed, or contracted for, the developer walks.
The framework also preserves the enforceability of contract terms agreed to between deployers and developers. That means your existing vendor agreements are not automatically overridden by the statute, but they need to account for the new liability allocation.
Critically, indemnification clauses that attempt to shield a party against its own discriminatory acts using ADMT are void as against public policy.
Read that again.
If your platform agreement or marketplace terms include standard mutual indemnification for discrimination claims, and one party's own use of ADMT caused the violation, that indemnification clause will not save them.
For lead generation platforms operating as intermediaries, connecting buyers and sellers through AI-powered matching, routing, or qualification, this directly affects how you structure your contractual risk allocation.
Enforcement: AG Only, No Private Right of Action
The framework assigns exclusive enforcement authority to the Colorado Attorney General's office. This is a major win for business interests and a significant departure from the original law's structure.
There is no new private right of action. Individual consumers cannot sue deployers or developers directly under this statute. The AG can seek civil penalties for violations and injunctive relief to prevent future violations. Deployers and developers get a 90-day cure period after receiving notice of an alleged violation, during which they can fix the problem without incurring civil penalties.
That said, the AG-only enforcement model does not eliminate all litigation risk. If discriminatory AI decisions independently violate existing anti-discrimination statutes, those claims remain available to private plaintiffs through other legal theories. This framework simply does not create an additional cause of action.
The Covered Domains
The framework applies when ADMT is used for consequential decisions across seven domains: education, employment, housing, financial and lending services, insurance (including underwriting, pricing, coverage, and claims adjudication), healthcare services, and essential government services and public benefits.
Notable change: legal services dropped off the list. The original SB 24-205 covered legal services as a consequential decision domain. That is no longer the case. If your lead generation model is exclusively legal referrals, your Colorado ADMT exposure just changed. However, if you also operate in insurance, lending, or other covered verticals, you are still in scope for those activities.
The small business exemption is gone. The original law exempted deployers with fewer than 50 employees who did not train on their own data. That carve-out does not exist in the new proposal. Size no longer matters; the only question is whether you are using covered ADMT in consequential decisions.
Insurance Gets Special Treatment
Insurance decisions are explicitly named as a consequential decision domain, covering underwriting, pricing, coverage, and claims adjudication. For insurance agencies using AI lead scoring, automated underwriting, or AI-powered customer service tools, you are likely covered.
This intersects with the NAIC's model AI guidelines, which 24 states have now adopted. Those guidelines already require insurers to have written AI programs, maintain governance structures, and manage vendor relationships carefully. The Colorado framework adds state-specific disclosure and notice obligations on top of whatever you are already doing for NAIC compliance.
The key point for insurance agents: you are responsible for AI systems your vendors use on your behalf. If your lead generation partner uses AI to qualify leads or score consumers, that could create compliance obligations for you under this framework. This is the same vendor liability principle we see in TCPA compliance, and it is becoming a recurring theme across regulatory regimes.
What This Means for AI Voice Platforms
If your AI voice agent is making calls that influence a consequential decision in a covered domain, this framework applies to you in addition to your existing TCPA obligations. An AI voice agent that qualifies insurance leads, routes consumers to lenders based on creditworthiness signals, or pre-screens employment candidates is performing covered ADMT functions.
The disclosure requirement creates a practical compliance question for voice interactions: how do you provide "clear and conspicuous notice" that ADMT is being used during a phone call? The answer likely involves a disclosure at the beginning of the call, similar to what many platforms are already implementing for AI voice disclosure under TCPA and state law requirements.
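As one hedged example, the opening line of the call could carry the disclosure before any qualification or routing questions begin. The script below is illustrative placeholder language, not reviewed disclosure text.

```typescript
// Illustrative call-opening disclosure; placeholder wording, not vetted language.
// Deliver this before any qualification or routing questions are asked.
function callOpeningDisclosure(companyName: string): string {
  return (
    `Hi, this is an automated assistant calling on behalf of ${companyName}. ` +
    `This call uses automated decision-making technology, and information from ` +
    `this call may be used to match or qualify you for offers.`
  );
}
```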
The bigger issue for platform operators is the liability allocation. Under the framework, if your platform provides the AI technology and your customer deploys it for insurance underwriting decisions that produce discriminatory outcomes, fault will be allocated based on relative responsibility. If you built the model and knew it would be used for insurance decisions, you cannot claim ignorance. If your customer used it in a way you did not intend or contract for, the framework may protect you.
What to Do Right Now
Audit Your AI Tools Against the New Standard
Walk through each AI-powered system in your lead flow and map whether its output makes predictions, recommendations, rankings, or classifications that guide or assist decision-making about an individual in a covered domain. If it does, it is likely covered ADMT under this framework.
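As a starting point, the audit can be run as an inventory pass using the same definitional test sketched earlier. The systems and fields below are invented for illustration; the real determination is a legal analysis, not a script.

```typescript
// Hypothetical inventory audit; system entries are invented for illustration.
interface InventoryEntry {
  name: string;
  processesPersonalInfo: boolean;
  guidesDecisionAboutIndividual: boolean; // prediction, recommendation, ranking, classification
  domain: string;                         // e.g. "insurance", "lending", "marketing"
}

const COVERED_DOMAINS = new Set([
  "education", "employment", "housing", "lending",
  "insurance", "healthcare", "government",
]);

const inventory: InventoryEntry[] = [
  { name: "lead-scoring-model", processesPersonalInfo: true, guidesDecisionAboutIndividual: true, domain: "insurance" },
  { name: "ad-copy-generator", processesPersonalInfo: false, guidesDecisionAboutIndividual: false, domain: "marketing" },
];

for (const sys of inventory) {
  const flagged =
    sys.processesPersonalInfo &&
    sys.guidesDecisionAboutIndividual &&
    COVERED_DOMAINS.has(sys.domain);
  console.log(`${sys.name}: ${flagged ? "likely covered ADMT - review" : "likely out of scope"}`);
}
```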
Review Your Vendor and Platform Agreements
The relative fault liability model means your existing contractual risk-shifting may not hold up the way you expect. If you are a platform operator with marketplace sellers, or a lead buyer contracting with lead generators using AI, your agreements need to account for the new liability allocation. Pay particular attention to indemnification clauses. Any provision that attempts to indemnify against a party's own discriminatory use of ADMT will not be enforceable.
Update Your Disclosure Infrastructure
The point-of-interaction notice and post-adverse outcome notice are simpler than the old impact assessment regime, but they still need to be built into your lead flow, your application process, or your customer-facing workflows. Start designing those disclosures now, before the AG's office finalizes its rulemaking.
Do Not Sleep on the Timeline
The current version of SB 24-205 is still scheduled to take effect June 30, 2026. The working group's framework needs to be turned into a bill, passed through the legislature, and signed by the governor before that date. If the bill stalls or fails, the original law goes live. Monitor the legislative process closely.
Additionally, the AG is directed to finalize rulemaking on post-adverse disclosures by December 31, 2026. That means the detailed rules of the road could still shift between now and go-live. This is a moving target.
The Bigger Picture: Federal Preemption and the Colorado Experiment
Colorado is not operating in a vacuum. President Trump signed an executive order in late 2025 specifically targeting state AI laws, calling out Colorado by name. The order creates a task force at the Department of Justice to challenge state AI regulations and asks the FTC and FCC to issue guidance that could override state requirements.
But executive orders cannot preempt state laws. Only Congress or the courts can do that. The administration tried twice to get Congress to pass AI preemption legislation, and both attempts failed. So for now, state laws like Colorado's remain enforceable.
The working group's framework appears designed, in part, to respond to this federal pressure. By replacing the onerous compliance requirements that drew criticism from the Trump administration and the technology industry with a lighter-touch disclosure model, Colorado may be trying to make its AI law harder to attack as an example of regulatory overreach. Whether that strategy succeeds politically and legally is an open question.
What is not an open question: if you are using AI for consequential decisions in Colorado, you need a compliance plan. The specific requirements may shift as the bill works through the legislature, but the direction is clear. Disclosure, accountability, and record-keeping are coming.