As summer winds down, the pace of AI regulation is not slowing. Colorado is now the first US state to have a comprehensive AI law (going into effect February 1, 2026), and the EU published its sweeping AI law in July (with rolling applicability between February 2025 and August 2026).

Both the Colorado and EU laws apply to entities that make AI systems, as well as those that use them. The EU law will also apply to entities that import and distribute such systems into EU Member States. Both laws define AI systems to include all types of AI, not only generative AI, and specifically regulate "high-risk" AI systems, i.e., those that make consequential decisions about things like education, employment, and housing (with some exceptions). Both contain exemptions, such as calculators that use AI, or AI used for certain research purposes.

As a reminder for those watching AI law developments, in the US there is also NYC's employment-related AI law, which went into effect July 5, 2023 and regulates the use of AI in making employment decisions. Utah has an AI law as well: it requires disclosures to individuals when they interact with AI, and went into effect May 1, 2024. Finally, Tennessee's AI law focuses on individuals' rights against deepfakes, and went into effect July 1, 2024.

What should companies keep in mind as we anticipate Colorado's and the EU's more comprehensive AI laws, especially if similar laws are passed in the future? Here are some top-of-mind concerns, both for those who develop the systems and for those who deploy them:

  • Avoid algorithmic discrimination. Both Colorado and the EU will specifically require that companies take steps to mitigate bias and algorithmic discrimination. For Colorado, discrimination means treating people differently based on characteristics like age, race, gender, or religion. Under the EU law, the concern is framed as failing to avoid foreseeable risks that impact the health, safety, or fundamental rights of an individual. The EU law requires that both developers and deployers implement quality and risk management systems that address and mitigate foreseeable risks like algorithmic discrimination. Colorado's law likewise imposes a duty on developers and deployers to avoid algorithmic discrimination.
  • Be transparent. Like the law in Utah, both AI laws will require disclosures to individuals when they are interacting with a consumer-facing AI system, unless such an interaction would be "obvious" to a reasonable person. This notice is required regardless of whether the system is high-risk. Companies will also need to give Colorado residents notice of their right to opt out and how to do so. In the EU, companies will need to disclose AI systems that produce deepfakes, process biometric data, or perform emotion recognition.
  • Developers should give sufficient information to those who deploy high-risk AI systems. In the EU, those who develop high-risk AI systems also need to provide clear details to those who will use them (i.e., “deployers”) about how they work, along with detailed use instructions. The instructions should include, among other things, information about known or foreseeable risks and the high-risk AI systems’ technical capabilities. Colorado has similar requirements for developers of high-risk AI systems.
  • Have risk management systems in place. Those who deploy AI systems in Colorado will need to adopt a "reasonable" risk management program, such as NIST's recently updated AI Risk Management Framework, ISO/IEC 42001, or a comparable framework. Businesses in the EU that use high-risk AI systems will also need to establish and maintain risk management systems, which will need to adopt the most appropriate measures in light of the state of the art in AI.
  • Conduct impact assessments. Those who deploy high-risk AI systems will need to conduct an annual impact assessment under both laws. Companies should keep in mind, though, that these kinds of assessments may already be needed under both GDPR and those US state laws (California, Colorado, Connecticut, Delaware, Florida, Indiana, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, and Virginia) that require impact assessment when a business engages in, among other things, high-risk profiling.
  • Keep in mind related laws' requirements. Before these laws go into effect, companies should remember their obligations under GDPR (for the EU) and US state laws (as noted above), as well as under unfair and deceptive trade practices laws. Both EU regulators and the FTC have cautioned that they will enforce many of the same concepts underlying these upcoming laws through those existing regulatory regimes.

A violation of Colorado's law constitutes an unfair trade practice, and the Colorado Attorney General's office has exclusive enforcement power. Under the EU law, the AI Office and member state authorities will share enforcement power.

Putting it into Practice: While there is still time before these two laws go into effect, it would be prudent to begin preparing now. Not only will 2025 and 2026 be here before we know it, but taking some of these requirements into account today can also address current regulatory concerns and scrutiny brought under GDPR, US state laws, and concepts of unfair and deceptive trade practices.