Chatbots and AI: What Marketers Need to Know About State-Level AI Regulations
While there are few, if any, federal requirements around chatbots, several states regulate the use of AI in chatbots. These requirements range from chatbot-specific rules to broader obligations around the use of AI in communicating with consumers.
Three states specifically distinguish between AI-powered chatbots and human-powered chatbots: Utah, Colorado, and California.
Utah's AI chatbot law requires a website utilizing a chatbot to disclose to the consumer that the chatbot is using generative AI and is not an actual human IF the consumer makes a "clear and unambiguous request" to determine whether the chatbot is human. This general rule applies to all websites unless the business is regulated by the Utah Department of Commerce and required to hold a license.
These regulated businesses must proactively disclose when the consumer is interacting with generative AI in a "high-risk artificial intelligence interaction." These "high-risk" interactions are those in which consumer data is collected (including health, financial, or biometric data) and personalized recommendations are made around financial, legal, medical, or mental health advice. The regulated businesses include certain healthcare professions, certain trades, and other licensed professions and services.
The Colorado AI law, however, is more general than the Utah law and takes a "risk-based" approach. The Colorado law, arguably the most comprehensive AI law in the country, generally regulates "high-risk artificial intelligence systems": systems that make, or are a substantial factor in making, a "consequential decision." Similar to Utah's approach, a consequential decision is one that has a "material legal or similarly significant effect" on the provision of services or opportunities to a consumer.
Chatbots, however, are specifically carved out of "high-risk artificial intelligence systems" as long as the chatbot does not make, or become a substantial factor in making, a consequential decision. The exemption specifically removes from "high-risk artificial intelligence systems" any technology "that communicates with consumers in natural language for the purpose of providing consumers with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful." See Colorado AI Act, Sec. 6-1-1701(9)(b).
However, Colorado does require disclosure for chatbots and other communication technologies powered by AI unless "it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system." This is regardless of whether the chatbot or communication platform is determined to be a "high-risk artificial intelligence system".
It is important to note that the Colorado law does not go into effect until February 2026, and there have been attempts to amend the law due to its breadth, but those efforts have so far failed.
California has had a chatbot law on the books since 2019. However, changes and proposals are still being made in California regarding the use of AI for chatbots. Most of the proposals have centered on "companion chatbots" rather than marketing-related chatbots.
The law currently on the books makes it
unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.
The disclosure should be "clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot."
The three takeaways for marketers considering the use of an AI chatbot:
You should probably always disclose that the chatbot is not a person. While not necessarily required in every circumstance, the disclosure can be implemented easily without losing consumers (see the sketch after this list).
If the chatbot is making "consequential decisions," then you MUST disclose that it is a chatbot. Again, the disclosure is so easy to implement that it's not worth the fight.
While none of the three states clearly provides a direct private right of action under these specific laws, there may be private rights of action under other consumer protection laws in these states (such as Utah's consumer protection laws).
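For marketers who control their own chat widget, the disclosure itself can be a few lines of code. Below is a minimal sketch in TypeScript, assuming a hypothetical widget; the names (ChatMessage, withDisclosure, isIdentityQuestion) are illustrative, not part of any real chatbot platform, and the actual disclosure wording should come from counsel, not from this example.

```typescript
// Minimal sketch of a chatbot disclosure layer. All names here
// (ChatMessage, withDisclosure, isIdentityQuestion) are hypothetical,
// not from any real chatbot SDK.

interface ChatMessage {
  role: "assistant" | "user";
  text: string;
}

// A clear, conspicuous disclosure shown before any bot reply,
// covering the proactive-disclosure scenarios (Colorado, California).
const DISCLOSURE: ChatMessage = {
  role: "assistant",
  text: "Hi! I'm an automated AI assistant, not a human. How can I help?",
};

// Prepend the disclosure so it is the first thing the consumer sees.
function withDisclosure(history: ChatMessage[]): ChatMessage[] {
  return [DISCLOSURE, ...history];
}

// Crude check for a Utah-style "clear and unambiguous request" to know
// whether the user is talking to a human. A production system would use
// something more robust than keyword matching.
function isIdentityQuestion(userText: string): boolean {
  return /are you (a |an )?(human|person|bot|real|ai)/.test(
    userText.toLowerCase()
  );
}

function replyTo(userText: string): string {
  if (isIdentityQuestion(userText)) {
    return "I'm an AI chatbot, not a human.";
  }
  return "..."; // hand off to the underlying model here
}

// Usage:
const conversation = withDisclosure([]);
console.log(conversation[0].text);              // disclosure shown first
console.log(replyTo("Are you a real person?")); // identity question answered
```

Showing the disclosure as the first message in every session is one way to satisfy a "clear, conspicuous" standard like California's proactively, while the identity-question handler addresses Utah's on-request rule.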
It’s clear these sorts of laws will continue to flourish as more and more states address the use of AI.