Colorado Governor Signs AI Regulation: A New Era for Artificial Intelligence Compliance

On May 17, 2024, Colorado Governor Jared Polis signed into law the Colorado Artificial Intelligence Act (SB 205) (“CAIA”). This landmark legislation, passed by the legislature on May 8, 2024, takes effect on February 1, 2026. The CAIA aims to address both intentional and unintentional algorithmic discrimination through comprehensive notice, disclosure, risk mitigation, and opt-out requirements for developers and deployers of “high-risk” artificial intelligence (AI) systems. The law also mandates disclosures for AI systems more broadly.

Governor Polis expressed concerns in his signing statement, highlighting the creation of a “complex compliance regime” for AI developers and operators in Colorado. He noted that the CAIA could prompt other states to enact similar laws, producing a fragmented regulatory landscape that might stifle innovation and hinder competition. To mitigate these issues, Governor Polis called for federal regulation of AI technologies to standardize compliance burdens and ensure a level playing field across states. He also urged the legislature to revisit the CAIA’s scope regarding discriminatory conduct, because the law uniquely prohibits all discriminatory outcomes from AI systems, regardless of intent.

The CAIA builds on existing rules for profiling and automated decision-making technology, finalized by the Colorado Attorney General under the Colorado Privacy Act. The Attorney General will have enforcement and rulemaking authority to implement the CAIA’s extensive requirements, indicating that additional mandates may emerge during this process. Developers and deployers might be able to align some of their current Colorado Privacy Act compliance processes with the new CAIA regulations.

Key Provisions of the Colorado Artificial Intelligence Act

The Colorado Artificial Intelligence Act (CAIA) introduces significant new restrictions and compliance obligations for developers and deployers of high-risk AI systems, particularly those designed to interact with consumers and influence critical decisions in areas like employment, insurance, housing, credit, education, and healthcare.

Core Requirements Under the CAIA:

Developer Obligations:

  • Developers of high-risk AI systems must exercise reasonable care to shield consumers from any known or reasonably foreseeable risks of algorithmic discrimination linked to the intended and contracted uses of these systems.
  • A rebuttable presumption of reasonable care arises if developers comply with specified CAIA provisions, including disclosing detailed information about the technology on their websites and providing deployers with the documentation they need to complete an impact assessment.

Deployer Obligations:

  • Deployers of high-risk AI systems are also required to exercise reasonable care to prevent algorithmic discrimination risks.
  • A rebuttable presumption of reasonable care is granted if deployers comply with designated CAIA provisions. Additionally, deployers must implement a comprehensive risk management policy and program that includes an impact assessment.

General AI System Requirements:

  • Any AI system available to consumers, not just high-risk systems, must inform consumers that they are interacting with an AI system, unless this is already obvious to a reasonable person.

These key aspects of the CAIA establish a framework aimed at mitigating the risks of algorithmic discrimination, with further details and requirements elaborated below.

Key Definitions

The Colorado Artificial Intelligence Act (CAIA) has broad implications for developers and deployers of high-risk AI systems that interact with Colorado residents and influence major decisions. Here are the key definitions:

Deployer: Any individual or entity doing business in Colorado that deploys a high-risk AI system.

Developer: Any individual or entity doing business in Colorado that develops, or intentionally and substantially modifies, an AI system, including systems that are not high-risk.

Artificial Intelligence System: A machine-based system designed to process inputs and produce outputs, such as content, decisions, predictions, or recommendations, capable of affecting both physical and virtual environments.

High-Risk Artificial Intelligence System: An AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision.

Consequential Decision: A decision with a material legal or similarly significant effect on the provision or denial, or the cost or terms, of services such as education, employment, financial or lending services, essential government services, health care, housing, insurance, or legal services.

Algorithmic Discrimination: Any condition in which the use of an AI system results in unlawful differential treatment or impact that disadvantages individuals or groups based on characteristics such as age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classifications protected under Colorado or federal law, subject to specific exceptions.

These definitions clarify the CAIA’s scope and the responsibilities of developers and deployers, ensuring they understand the compliance requirements associated with high-risk AI systems.

Requirements for High-Risk AI Developers

Starting from the effective date of the CAIA, developers of high-risk AI systems must provide extensive documentation and disclosures to deployers and other developers, and must produce them to the Colorado Attorney General within 90 days of a request. This requirement ensures transparency and accountability in the deployment and use of high-risk AI systems.

Developers must first offer a general description that outlines the expected uses of the high-risk AI system, including any known harmful or inappropriate applications. This initial overview sets the context for more detailed documentation that follows.

A crucial part of the required documentation involves providing high-level summaries of the types of data used to train the AI system. Developers must disclose any known or reasonably anticipated limitations of the AI system, especially those that might lead to algorithmic discrimination. Additionally, they need to explain the specific purpose of the high-risk AI system, its intended benefits, and its applications.

To ensure compliance with the CAIA, developers must include essential details that help deployers meet their legal obligations. This includes information on how the AI system was tested for performance and measures taken to mitigate the risks of algorithmic discrimination before the system was made available. Furthermore, developers must outline their data governance practices, detailing how they manage training datasets, evaluate data sources for biases, and implement mitigation strategies.

The documentation should also describe the intended outputs of the high-risk AI system and the measures taken to mitigate any known or foreseeable risks of algorithmic discrimination from its deployment. Developers must provide clear guidance on how the AI system should and should not be used, along with instructions for monitoring its use when making significant decisions. Any additional documentation necessary to help deployers understand the system’s outputs and monitor its performance for algorithmic discrimination risks must also be included.

Whenever feasible, this comprehensive documentation should be provided using common industry formats, such as model cards, dataset cards, or other impact assessments. These formats help deployers or third parties contracted by them to complete the impact assessments required by the CAIA.
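
For developers building internal tooling around these obligations, the documentation package can be represented as a structured record, in the spirit of a model card. The Python sketch below is illustrative only; the field names are hypothetical shorthand for the statute’s documentation categories, not statutory terms.

    from dataclasses import dataclass

    # Illustrative sketch of a CAIA developer documentation package,
    # loosely modeled on a model card. All field names are hypothetical
    # shorthand for the statute's documentation categories.
    @dataclass
    class DeveloperDocumentation:
        general_description: str        # expected uses; known harmful or inappropriate uses
        training_data_summary: str      # high-level summary of training data types
        known_limitations: list[str]    # limitations that may lead to algorithmic discrimination
        purpose_and_benefits: str       # purpose, intended benefits, and applications
        performance_evaluation: str     # how the system was tested for performance
        pre_release_mitigations: list[str]  # discrimination-risk measures taken before release
        data_governance: str            # dataset management, bias evaluation, mitigation
        intended_outputs: str           # what the system produces
        usage_guidance: str             # how the system should and should not be used
        monitoring_instructions: str    # monitoring use in consequential decisions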

Developers who also act as deployers for their high-risk AI systems are exempt from producing this documentation unless the AI system is provided to an unaffiliated entity acting as a deployer. This exemption simplifies compliance for developers using their systems internally while ensuring transparency and accountability when systems are deployed externally.

These comprehensive requirements are designed to promote transparency and accountability in the use of high-risk AI systems, ensuring that developers provide the necessary information to manage and mitigate the risks associated with these technologies.

Website Notice Requirements for High-Risk AI Developers

Under the Colorado Artificial Intelligence Act (CAIA), developers of high-risk AI systems are required to provide clear and accessible information on their websites or in a public use case inventory. This notice must include a summary of two key points.

Firstly, developers need to outline the types of high-risk AI systems they have developed or significantly modified and currently make available to deployers or other developers. This summary should provide an overview of the AI systems’ functionalities and intended applications.

Secondly, the notice must describe how the developer manages known or reasonably foreseeable risks of algorithmic discrimination. This includes detailing the strategies and measures implemented to mitigate these risks during the development and substantial modification of high-risk AI systems.

By making this information readily available, developers ensure transparency and foster trust among consumers and regulatory bodies, highlighting their commitment to ethical AI practices and compliance with the CAIA.

Obligation to Report Algorithmic Discrimination

Developers of high-risk AI systems have an affirmative duty under the Colorado Artificial Intelligence Act (CAIA) to report any known or reasonably foreseeable risks of algorithmic discrimination to the Colorado Attorney General and all known deployers or other developers of the AI system. This disclosure must be made without unreasonable delay and no later than 90 days after one of the following events:

  • First, if, through ongoing testing and analysis, a developer discovers that its high-risk AI system has been deployed and has caused or is reasonably likely to cause algorithmic discrimination, the developer must report this finding. This proactive monitoring ensures that potential issues are identified and addressed promptly.
  • Second, if a developer receives a credible report from a deployer indicating that the high-risk AI system has been deployed and has caused algorithmic discrimination, the developer must disclose this information. This accountability measure ensures that any instances of discrimination are communicated swiftly to both regulatory authorities and stakeholders, facilitating timely remediation and compliance with the CAIA.

By adhering to these reporting requirements, developers demonstrate their commitment to mitigating the risks of algorithmic discrimination and ensuring ethical AI practices.
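
Purely as an illustration of the timing rule, a compliance calendar might compute the outside 90-day deadline as in the sketch below; note that the statute’s “without unreasonable delay” standard may require acting much sooner.

    from datetime import date, timedelta

    # Illustrative only: the outside statutory deadline for notifying the
    # Colorado Attorney General and known deployers or other developers.
    # "Without unreasonable delay" may require acting much sooner.
    REPORTING_WINDOW = timedelta(days=90)

    def disclosure_deadline(trigger_date: date) -> date:
        """Latest permissible disclosure date after a developer discovers, or
        receives a credible deployer report of, algorithmic discrimination."""
        return trigger_date + REPORTING_WINDOW

    print(disclosure_deadline(date(2026, 3, 1)))  # 2026-05-30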

Requirements for Deployers of High-Risk AI Systems

Under the Colorado Artificial Intelligence Act (CAIA), deployers of high-risk AI systems must establish and maintain a comprehensive risk management policy and program to oversee the deployment of these systems. This program must outline and incorporate the principles, processes, and personnel responsible for identifying, documenting, and mitigating known or reasonably foreseeable risks of algorithmic discrimination.

The risk management policy and program must be an ongoing, iterative process. It should be carefully planned, implemented, and subject to regular and systematic reviews and updates throughout the lifecycle of the high-risk AI system. This continuous evaluation ensures that the policy remains effective and responsive to new risks as they emerge.

To comply with the CAIA, the risk management policy and program must be reasonable and align with recognized standards. These include the latest version of the NIST AI Risk Management Framework (AI RMF), the ISO/IEC 42001 standard, or another nationally or internationally recognized AI risk management framework that is equivalent to or more stringent than the NIST AI RMF or ISO/IEC 42001. Any risk management framework for AI systems designated by the Attorney General must also be considered.

The implementation of the risk management policy must also take into account several factors (see the illustrative sketch following this list):

  1. Size and Complexity of the Deployer: The scale and sophistication of the deploying entity must be considered.
  2. Nature and Scope of AI Systems: The types of high-risk AI systems deployed, including their intended uses, should be factored into the risk management strategy.
  3. Data Sensitivity and Volume: The sensitivity and amount of data processed by the high-risk AI systems should be a key consideration in the risk management approach.
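
As a purely illustrative sketch, a deployer might record how its program maps to a recognized framework and to these scaling factors; the field names below are hypothetical, not statutory terms.

    from dataclasses import dataclass

    # Illustrative sketch of a deployer's risk management program record,
    # tagged to a recognized framework and scaled by the statutory factors.
    # Field names are hypothetical, not statutory terms.
    @dataclass
    class RiskManagementProgram:
        framework: str                # e.g., "NIST AI RMF" or "ISO/IEC 42001"
        deployer_headcount: int       # size and complexity of the deployer
        system_inventory: list[str]   # high-risk systems and their intended uses
        data_sensitivity: str         # sensitivity and volume of data processed
        review_cadence_days: int      # regular, systematic review cycle

    program = RiskManagementProgram(
        framework="NIST AI RMF",
        deployer_headcount=250,
        system_inventory=["resume screening (employment decisions)"],
        data_sensitivity="high (employment records)",
        review_cadence_days=90,
    )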

Comprehensive Impact Assessments and Recordkeeping for High-Risk AI Systems

Under the CAIA, deployers of high-risk AI systems are required to conduct detailed impact assessments annually and within ninety days following any intentional and substantial modification to the system. These assessments are crucial in identifying and mitigating potential risks of algorithmic discrimination.

Each impact assessment must include several key components. First, deployers must provide a statement detailing the purpose, intended use cases, deployment context, and benefits of the high-risk AI system. This initial overview sets the stage for a more thorough examination of the system’s impact.

A critical part of the assessment involves analyzing whether the deployment of the AI system poses any known or reasonably foreseeable risks of algorithmic discrimination. This analysis should describe the nature of these risks and the steps taken to mitigate them. Additionally, deployers must describe the categories of data processed by the AI system as inputs and the outputs it generates. If the system has been customized using specific data, an overview of the data categories used for customization must also be included.

Performance metrics used to evaluate the AI system and any known limitations must be documented. Transparency measures are another essential element, requiring deployers to detail how consumers are informed about the use of the AI system when it is in operation. Post-deployment monitoring and user safeguards must also be described, outlining the oversight, use, and learning processes established to address issues arising from the system’s deployment.

In cases where the AI system has been intentionally and substantially modified, deployers must disclose whether the system’s use has remained consistent with or varied from the developer’s intended uses.

Deployers are required to maintain the most recent impact assessment, all related records, and all previous assessments for at least three years following the final deployment of the high-risk AI system. They must also review the deployment annually to ensure the system does not cause algorithmic discrimination.

An impact assessment prepared for compliance with other applicable laws or regulations can fulfill the CAIA’s requirements if it is “reasonably similar in scope and effect.” This provision allows deployers to streamline their compliance efforts by completing a single impact assessment that meets both the CAIA and Colorado Privacy Act requirements.
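
Because the assessment has many required elements, some teams encode it as a structured checklist so nothing is omitted. The sketch below is a hypothetical illustration; its field names paraphrase the elements described above, and the three-year retention period is approximated in days.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Illustrative checklist of the CAIA impact-assessment elements.
    # Field names paraphrase the statute; they are not statutory terms.
    @dataclass
    class ImpactAssessment:
        completed_on: date
        purpose_and_benefits: str           # purpose, use cases, deployment context
        discrimination_risk_analysis: str   # known or foreseeable risks and mitigations
        input_data_categories: list[str]    # data processed as inputs
        output_descriptions: list[str]      # outputs the system generates
        customization_data: list[str]       # data categories used to customize, if any
        performance_metrics: str            # evaluation metrics and known limitations
        transparency_measures: str          # how consumers are informed of use
        post_deployment_monitoring: str     # oversight and user safeguards

    def retention_deadline(final_deployment: date) -> date:
        """Assessments and related records must be kept for at least three
        years after final deployment (approximated here as 3 x 365 days)."""
        return final_deployment + timedelta(days=3 * 365)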

Consumer Notification for High-Risk AI Decisions

Under the Colorado Artificial Intelligence Act (CAIA), deployers of high-risk AI systems are required to inform consumers when these systems are used to make or significantly influence consequential decisions. This notification must occur before the decision is made, ensuring transparency and consumer awareness.

The notification must include several key elements. Firstly, deployers must provide a statement that discloses the purpose of the high-risk AI system and the nature of the consequential decision it influences. This statement should also include the deployer’s contact information, allowing consumers to reach out with any questions or concerns.

Additionally, the statement must describe the high-risk AI system in plain language, despite the inherent complexity of such systems. This ensures that consumers can understand the role and function of the AI in the decision-making process. Instructions on how to access the full statement must also be provided, ensuring that consumers can easily obtain all relevant information.

Furthermore, if applicable, deployers must inform consumers about their rights under the Colorado Privacy Act. This includes the right to opt out of the processing of personal data for profiling purposes, especially in decisions that have legal or similarly significant effects on the consumer.

Requirements for Adverse Consequential Decisions

When a high-risk AI system makes or significantly influences a consequential decision that negatively affects a consumer, the deployer must provide the consumer with specific information and opportunities to address the decision. This ensures transparency and allows consumers to respond to potentially adverse outcomes.

First, the deployer must disclose the principal reasons for the adverse decision. This includes detailing how and to what extent the high-risk AI system contributed to the decision. The statement must also specify the types of data processed by the AI system in making the decision and identify the sources of this data. This information helps the consumer understand the basis of the decision and the role of the AI system.

Consumers must also be given the opportunity to correct any incorrect personal data that was processed by the AI system. This step is crucial for ensuring that decisions are based on accurate and fair data.

Additionally, the deployer must provide the consumer with the opportunity to appeal the adverse decision. This appeal process should, if technically feasible, include the possibility of human review. However, if offering an appeal is not in the best interest of the consumer, such as in situations where a delay might pose a risk to the consumer’s life or safety, this requirement may be waived.
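
For illustration, an adverse-decision workflow might bundle the required disclosures and consumer opportunities into a single record. The structure below is a hypothetical sketch, not a statutory form.

    from dataclasses import dataclass

    # Illustrative sketch of the disclosures and opportunities owed to a
    # consumer after an adverse consequential decision. Hypothetical names.
    @dataclass
    class AdverseDecisionNotice:
        principal_reasons: list[str]   # why the decision was adverse
        ai_contribution: str           # how and to what degree the AI contributed
        data_categories: list[str]     # types of data processed in the decision
        data_sources: list[str]        # where that data came from
        correction_channel: str        # how to correct inaccurate personal data
        appeal_available: bool         # may be excused where an appeal is not in
                                       # the consumer's best interest (e.g., delay
                                       # would risk life or safety)
        human_review_offered: bool     # required where technically feasible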

Website Notice for High-Risk AI Systems

Deployers of high-risk AI systems are required to provide a clear and accessible notice on their website, detailing specific information about their AI deployments. This notice must include the following elements:

Firstly, the statement should summarize the types of high-risk AI systems currently in use. This helps the public understand which AI technologies are being deployed and their general purposes.

Secondly, the notice must explain how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination associated with each high-risk AI system. This includes outlining the measures and strategies implemented to mitigate such risks, ensuring transparency in how these potential issues are addressed.

Lastly, the notice should provide a detailed description of the nature, source, and extent of the information collected and utilized by the deployer. This includes specifying the types of data gathered, where the data comes from, and how extensively it is used in the AI systems’ operations.

By making this information readily available, deployers promote transparency and accountability, helping consumers and stakeholders understand the implications and safeguards associated with high-risk AI systems.

Affirmative Duty to Report Algorithmic Discrimination to the Attorney General

Under the CAIA, deployers of high-risk AI systems have a proactive obligation to report instances of algorithmic discrimination. If a deployer discovers that their high-risk AI system has caused algorithmic discrimination, they must notify the Colorado Attorney General without unreasonable delay, and no later than ninety days after the discovery.

In addition to this immediate reporting requirement, deployers must also be prepared to disclose their risk management policies, completed impact assessments, and maintained records to the Attorney General upon request. This disclosure must be provided within ninety days of the request.

These requirements ensure that the Colorado Attorney General is promptly informed about any issues related to algorithmic discrimination, enabling timely oversight and intervention. They also ensure that deployers maintain robust documentation and are transparent about their efforts to manage and mitigate risks associated with high-risk AI systems.

Transparency Requirement for Consumer-Interacting AI Systems

The Colorado Artificial Intelligence Act (CAIA) establishes a fundamental transparency obligation for developers and deployers of AI systems that interact with consumers. Specifically, any deployer or developer who deploys or makes available an AI system intended for consumer interaction must ensure that the system clearly informs each consumer that they are interacting with an AI system.

This transparency requirement is designed to promote openness and trust between consumers and AI system operators. However, the CAIA includes an exception for situations where it would be obvious to a reasonable person that they are interacting with an AI system, thus avoiding unnecessary disclosures in clearly automated interactions.

By adhering to this transparency mandate, developers and deployers help ensure that consumers are fully aware when they are engaging with AI technology, fostering informed interactions and enhancing consumer trust.
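
In practice, a consumer-facing system can surface this disclosure before the interaction begins. The minimal sketch below treats the “obvious to a reasonable person” exception as a simple flag, although in reality that is a legal judgment.

    # Illustrative sketch: surface an AI-interaction disclosure unless the
    # automated nature of the interaction would be obvious to a reasonable
    # person (a legal judgment, represented here as a simple flag).
    def start_session(obviously_automated: bool) -> list[str]:
        messages = []
        if not obviously_automated:
            messages.append("You are interacting with an artificial intelligence system.")
        messages.append("How can I help you today?")
        return messages

    print(start_session(obviously_automated=False))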

Exclusions, Exemptions, and Exceptions under the CAIA

The Colorado Artificial Intelligence Act (CAIA) delineates several exclusions, exemptions, and exceptions to its regulations on high-risk AI systems. These provisions clarify the scope and applicability of the Act, ensuring that certain AI systems and technologies are not unduly burdened by the regulations.

Exclusions

High-risk AI systems, as defined by the CAIA, do not include AI systems intended to perform narrow procedural tasks, or to detect decision-making patterns or deviations from prior decision-making patterns, provided the system is not intended to replace or influence a previously completed human assessment without sufficient human review. Additionally, the following technologies are not considered high-risk AI systems unless they play a substantial role in making consequential decisions:

  • Anti-fraud technology (excluding facial recognition technology)
  • Anti-malware and anti-virus software
  • AI-enabled video games
  • Calculators
  • Cybersecurity tools
  • Databases and data storage solutions
  • Firewalls
  • Internet domain registration and website loading services
  • Networking technologies
  • Spam and robocall filters
  • Spell-checkers
  • Spreadsheets
  • Web caching and hosting services
  • Any technology that communicates with consumers in natural language for providing information, making referrals, or answering questions, provided it is subject to an acceptable use policy that prohibits discriminatory or harmful content generation

Exemptions

Algorithmic discrimination, as defined by the CAIA, does not include:

  • The offer, license, or use of a high-risk AI system by a developer or deployer solely for self-testing to identify, mitigate, or prevent discrimination or to ensure compliance with Colorado and federal laws.
  • Efforts to expand an applicant, customer, or participant pool to increase diversity or address historical discrimination.
  • Actions by or on behalf of private clubs or establishments not open to the public under Title II of the Civil Rights Act of 1964.

Documentation and Disclosure Protections

The CAIA does not require developers or deployers to disclose trade secrets, information protected by state or federal law, or information that would create a security risk to the developer.

Risk Management and Impact Assessment Exemptions

Certain deployers are exempt from the requirements to establish a risk management policy and program, complete impact assessments, and publish a website statement if all of the following conditions are met (see the illustrative sketch following this list):

  • The deployer employs fewer than 50 full-time equivalent employees and does not use its own data to train the high-risk AI system.
  • The high-risk AI system is used for the intended uses disclosed by the developer and continues learning from data sources other than the deployer’s own data.
  • The deployer makes available to consumers any impact assessment provided by the developer of the high-risk AI system that includes information substantially similar to what is required under the CAIA.
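
Read literally, these conditions are cumulative. The hypothetical sketch below illustrates that conjunctive structure; each flag stands in for a legal determination under the statute.

    # Illustrative sketch: every condition must hold for the small-deployer
    # exemption from the risk management program, impact assessment, and
    # website statement requirements. Each flag stands in for a legal
    # determination under the statute.
    def small_deployer_exempt(
        full_time_equivalents: int,
        trains_on_own_data: bool,
        used_for_disclosed_purposes: bool,
        learns_from_non_deployer_data: bool,
        developer_assessment_available: bool,
    ) -> bool:
        return (
            full_time_equivalents < 50
            and not trains_on_own_data
            and used_for_disclosed_purposes
            and learns_from_non_deployer_data
            and developer_assessment_available
        )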

Other exceptions to the CAIA’s requirements include specific circumstances involving HIPAA-covered entities and banks.

These exclusions, exemptions, and exceptions ensure that the CAIA’s regulations are appropriately targeted and do not impose unnecessary burdens on certain AI technologies and smaller entities.

Enforcement and Affirmative Defenses under the CAIA

The CAIA grants the Attorney General exclusive authority to enforce its provisions, explicitly denying any private right of action. Violations of the CAIA are classified as per se unfair trade practices under Colorado consumer protection law.

In the event of enforcement action initiated by the Colorado Attorney General, developers and deployers have an affirmative defense available to them. This defense is valid if they meet the following criteria:

Discovery and Remediation:

  • The developer or deployer identifies and remedies a violation through feedback mechanisms encouraged for deployers or users, adversarial testing or red teaming, or an internal review process. This proactive approach demonstrates a commitment to continuous improvement and compliance.

Compliance with Risk Management Standards:

  • The developer or deployer adheres to the latest version of the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF) or the ISO/IEC 42001 standard. Alternatively, they may comply with another nationally or internationally recognized AI risk management framework that is substantially equivalent to or more stringent than the NIST AI RMF or ISO/IEC 42001.

These provisions ensure that developers and deployers who actively engage in risk management and adhere to recognized standards can defend themselves effectively against enforcement actions. This approach encourages best practices in AI development and deployment, fostering a culture of compliance and proactive risk mitigation.

Attorney General's Rulemaking Authority under the CAIA

The CAIA empowers the Colorado Attorney General with rulemaking authority to effectively implement and enforce the Act’s provisions. This authority encompasses the following areas:

  1. Documentation and Requirements for Developers:
    • Establishing rules regarding the necessary documentation and specific requirements developers must fulfill.
  2. Notices and Disclosures:
    • Defining the content and requirements for the notices and disclosures that developers and deployers must provide to consumers and regulatory bodies.
  3. Risk Management Policy and Program:
    • Outlining the content and requirements for the risk management policies and programs that deployers of high-risk AI systems must implement.
  4. Impact Assessments:
    • Specifying the content and requirements for the impact assessments that deployers of high-risk AI systems must conduct.
  5. Rebuttable Presumptions:
    • Setting forth the requirements for establishing rebuttable presumptions of compliance with the CAIA.
  6. Affirmative Defenses:
    • Defining the criteria and requirements for the affirmative defenses available to developers and deployers in the event of enforcement actions.

By exercising this rulemaking authority, the Colorado Attorney General can ensure that the CAIA’s requirements are clear, comprehensive, and enforceable. This regulatory oversight promotes consistency and transparency in the development and deployment of high-risk AI systems, safeguarding against algorithmic discrimination and ensuring ethical AI practices.
