
Policy Briefing — May 17, 2024

Colorado’s SB24-205, now law, requires developers and deployers of high-risk AI systems to build risk management programs, impact assessments, consumer notices, and incident reporting workflows ahead of the statute’s 1 February 2026 effective date.


Executive briefing: Colorado’s Artificial Intelligence Act (SB24-205) became law on 17 May 2024, establishing the first comprehensive U.S. state regime governing developers and deployers of high-risk AI systems. Beginning 1 February 2026, organizations that build or use AI making consequential decisions in employment, housing, education, health care, insurance, legal services, credit, or essential services must implement risk management programs, conduct impact assessments, deliver consumer notices, and report incidents of algorithmic discrimination to the Colorado Attorney General within 90 days. Compliance, privacy, and engineering leaders must start designing controls now so automated decision-making remains transparent, fair, and auditable.

Scope and definitions

The law applies to “developers” (entities doing business in Colorado that develop or intentionally and substantially modify a high-risk AI system) and “deployers” (entities using a high-risk AI system). A high-risk AI system is one that, when deployed, makes or is a substantial factor in making a consequential decision that has a material legal or similarly significant effect on the provision or denial of services, benefits, or opportunities in specified domains. Consequential decisions include determinations about employment, education admissions or financial aid, housing availability, health care services, insurance underwriting or pricing, legal services, essential goods and services, and credit opportunities. General-purpose AI and low-risk tools remain outside the law’s core obligations, although developers of general-purpose AI must respond to written requests from deployers of high-risk systems seeking information.

Risk management and governance program

Deployers must maintain a risk management program aligned with recognized frameworks such as the NIST AI Risk Management Framework (AI RMF) or ISO/IEC 42001. The program should define governance structures, roles, and escalation protocols; establish processes for identifying, measuring, and mitigating algorithmic discrimination risks; and document controls across the AI lifecycle—from data ingestion and model training to deployment and monitoring. Organizations should integrate the AI risk program into enterprise risk management, with board or executive oversight and periodic reporting. Policies should cover data quality, documentation standards, model validation, human review, incident response, and third-party management.
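
For teams that want the program to be auditable rather than living in slide decks, the sketch below is one way to record controls against the four NIST AI RMF functions and flag any function with no documented control. It is illustrative only; the control names, owners, and review cadences are hypothetical.

# Illustrative sketch (not from the statute): recording risk-program controls
# against the NIST AI RMF functions so gaps and owners stay auditable.
from dataclasses import dataclass

@dataclass
class Control:
    rmf_function: str        # "Govern", "Map", "Measure", or "Manage"
    description: str
    owner: str               # accountable role, e.g. "AI Governance Committee"
    review_cadence_days: int

controls = [
    Control("Govern", "Escalation protocol for suspected algorithmic discrimination",
            "Chief Compliance Officer", 90),
    Control("Map", "Document training data lineage for each high-risk system",
            "ML Platform Lead", 180),
    Control("Measure", "Disparate-impact testing across protected classes",
            "Model Validation Team", 90),
    Control("Manage", "Human review checkpoint before consequential decisions",
            "Business Process Owner", 30),
]

# Flag NIST AI RMF functions with no documented control.
covered = {c.rmf_function for c in controls}
missing = {"Govern", "Map", "Measure", "Manage"} - covered
if missing:
    print(f"Gap: no controls recorded for {sorted(missing)}")
else:
    print("All four NIST AI RMF functions have at least one documented control.")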

Impact assessments

Before deploying or substantially modifying a high-risk AI system, deployers must conduct an impact assessment and update it annually. The assessment must describe the system’s intended use, training data, performance metrics, limitations, safeguards, and risk mitigation measures. It should evaluate potential algorithmic discrimination across protected classes, examine data governance controls, and document human oversight mechanisms. Results must be retained for at least three years and provided to the Attorney General upon request. Developers must also produce impact assessments and furnish them to deployers, capturing testing methodologies, known risks, and instructions for safe use.
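
A minimal sketch of how an assessment record might be kept follows, assuming a simple Python data model; the field names summarize the requirements above, and the annual-update and three-year retention rules are expressed as date checks. Nothing here is statutory language, and the example values are invented.

# Hypothetical impact-assessment record with annual-update and retention checks.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    system_name: str
    intended_use: str
    training_data_summary: str
    performance_metrics: dict
    known_limitations: list
    safeguards: list
    discrimination_analysis: str
    completed_on: date

    def annual_update_due(self, today: date) -> bool:
        # Assessments must be refreshed at least annually.
        return today - self.completed_on >= timedelta(days=365)

    def retain_until(self) -> date:
        # Records must be kept for at least three years.
        return self.completed_on + timedelta(days=3 * 365)

ia = ImpactAssessment(
    system_name="resume-screening-v2",
    intended_use="Rank applicants for interview scheduling",
    training_data_summary="2019-2023 applicant records, de-identified",
    performance_metrics={"auc": 0.81, "selection_rate_gap": 0.04},
    known_limitations=["Sparse data for part-time roles"],
    safeguards=["Human recruiter reviews every rejection"],
    discrimination_analysis="Four-fifths-rule screen passed for all groups tested",
    completed_on=date(2025, 6, 1),
)
print(ia.annual_update_due(date(2026, 7, 1)), ia.retain_until())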

Documentation and transparency obligations

Developers must prepare a risk disclosure statement summarizing known or reasonably foreseeable risks of algorithmic discrimination and instructions for how deployers should implement safeguards. They must provide documentation covering system capabilities, limitations, data requirements, evaluation metrics, and human oversight expectations. Developers must publish a publicly accessible statement describing how they manage risks associated with their high-risk AI systems, and they must maintain logs of previously discovered harms or incidents. Deployers must provide consumers with a notice before using a high-risk AI system to make a consequential decision, disclose the right to opt for human review when feasible, and supply meaningful information about the system’s operation when requested.
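
One way to keep the developer-side package complete before it ships to deployers is a simple checklist, sketched below; the item names are this briefing’s summary of the obligations above, not terms defined by the Act.

# Illustrative completeness check for the developer documentation package.
REQUIRED_DEVELOPER_DOCS = [
    "risk_disclosure_statement",       # known or foreseeable discrimination risks
    "capabilities_and_limitations",
    "data_requirements",
    "evaluation_metrics",
    "human_oversight_expectations",
    "public_risk_management_statement",
    "incident_log",
]

def missing_docs(package: dict) -> list:
    """Return the required documentation items absent or empty in a package."""
    return [k for k in REQUIRED_DEVELOPER_DOCS if not package.get(k)]

package = {
    "risk_disclosure_statement": "v1.3, updated 2025-09-01",
    "capabilities_and_limitations": "See model card section 2",
    "evaluation_metrics": "Accuracy, calibration, subgroup error rates",
}
print("Missing before release to deployers:", missing_docs(package))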

Consumer rights and appeals

When a high-risk AI system is used to make a consequential decision, consumers must receive notice of the decision, the opportunity to correct inaccurate data, and a mechanism to appeal to a human reviewer. Notices must include contact information, a summary of the AI system’s role, and instructions for lodging complaints with the deployer and the Attorney General. Deployers must respond to appeals within a reasonable period and document the outcome. If the system relies on third-party data sources, deployers should ensure data suppliers can support correction requests promptly. Consumer relations teams need training on the appeal process, escalation pathways, and documentation requirements.
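
The Act requires a human-review appeal path and documented outcomes but does not fix a response window, so any deadline is an internal policy choice. The sketch below assumes a hypothetical 45-day service level to illustrate how appeals could be tracked end to end.

# Sketch under assumptions: the 45-day deadline is an internal SLA, not statutory.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

INTERNAL_APPEAL_SLA_DAYS = 45  # hypothetical internal service level

@dataclass
class Appeal:
    consumer_id: str
    decision_ref: str            # the consequential decision being appealed
    received_on: date
    correction_requested: bool   # consumer asked to fix inaccurate input data
    human_reviewer: Optional[str] = None
    outcome: Optional[str] = None
    closed_on: Optional[date] = None

    def respond_by(self) -> date:
        return self.received_on + timedelta(days=INTERNAL_APPEAL_SLA_DAYS)

    def is_overdue(self, today: date) -> bool:
        return self.closed_on is None and today > self.respond_by()

appeal = Appeal("c-1042", "loan-denial-2026-03-14", date(2026, 3, 20), True)
print("Respond by:", appeal.respond_by(), "Overdue?", appeal.is_overdue(date(2026, 5, 10)))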

Incident reporting

Both developers and deployers must notify the Attorney General within 90 days of discovering an incident of algorithmic discrimination, defined as unlawful differential treatment or an unlawful disparate impact resulting from the AI system. Deployers must also notify affected consumers without unreasonable delay, providing details about the incident, corrective actions, and how consumers can mitigate harm. Incident response plans should integrate AI-specific triggers alongside cybersecurity and privacy incident protocols, ensuring forensic teams capture logs, model versions, and data inputs to support investigations.
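
A minimal sketch of incident tracking, assuming one Python record per incident: it computes the 90-day Attorney General deadline from the discovery date and lists the forensic artifacts (logs, model versions, data snapshots) investigators will need. System names and paths are illustrative.

# Minimal sketch: the 90-day AG notification window and forensic artifacts.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DiscriminationIncident:
    system_name: str
    discovered_on: date
    description: str
    model_version: str
    affected_consumers_notified: bool = False
    artifacts: list = field(default_factory=list)  # log paths, data snapshots

    def ag_notice_deadline(self) -> date:
        # Developers and deployers must notify the Attorney General within
        # 90 days of discovering algorithmic discrimination.
        return self.discovered_on + timedelta(days=90)

incident = DiscriminationIncident(
    system_name="tenant-screening-v4",
    discovered_on=date(2026, 4, 2),
    description="Elevated denial rate flagged for one protected class",
    model_version="4.1.7",
    artifacts=["s3://audit/tenant-screening/2026-04-02/inputs.parquet"],
)
print("Notify Colorado AG no later than:", incident.ag_notice_deadline())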

Inventory and recordkeeping

Deployers with more than 50 full-time equivalent employees must create and maintain an inventory of high-risk AI systems in use, including descriptions of purpose, decision context, data sources, and safeguards. They must also keep records of impact assessments, monitoring results, and consumer notices for at least three years. Developers must maintain documentation of system updates, third-party access, and risk evaluations. Governance teams should implement centralized repositories—such as model registries or AI catalogs—that store metadata, approval status, monitoring metrics, and responsible owners. Integrating these repositories with change-management workflows ensures that modifications receive appropriate review.
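
The catalog entry below is an illustrative schema for that inventory, not a format prescribed by the Act; it captures the purpose, decision context, data sources, safeguards, and owner described above and exports the inventory as JSON for audits or Attorney General requests.

# Illustrative inventory entry for a deployer's AI catalog or model registry.
from dataclasses import dataclass, field
import json

@dataclass
class InventoryEntry:
    system_name: str
    purpose: str
    decision_context: str          # e.g. "employment", "housing", "credit"
    data_sources: list
    safeguards: list
    responsible_owner: str
    approval_status: str           # "approved", "pilot", "retired"
    monitoring_metrics: dict = field(default_factory=dict)

registry = [
    InventoryEntry(
        system_name="underwriting-score-v3",
        purpose="Price individual insurance policies",
        decision_context="insurance",
        data_sources=["claims_history", "credit_bureau_feed"],
        safeguards=["quarterly bias audit", "human underwriter override"],
        responsible_owner="Actuarial Systems Lead",
        approval_status="approved",
        monitoring_metrics={"drift_score": 0.02},
    ),
]

# Export the inventory as JSON for audit or regulator requests.
print(json.dumps([vars(e) for e in registry], indent=2))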

Safe harbor and enforcement

The Colorado Attorney General enforces the Act under the Colorado Consumer Protection Act. Civil penalties can reach $20,000 per violation, and injunctive relief may require cessation of AI use until compliance is demonstrated. However, the Act provides an affirmative defense for developers and deployers that can show they maintained a compliant risk management program, completed impact assessments, provided required disclosures, and cured violations within 90 days of discovery. Aligning controls with the NIST AI RMF or similar standards strengthens the defense. Organizations should document corrective actions, remediation timelines, and governance approvals to support safe harbor claims.

Third-party and supply chain coordination

Many deployers rely on external vendors for AI solutions. Contracts should require developers to supply documentation, risk disclosures, impact assessments, and ongoing updates. Service agreements must cover data governance expectations, access to logs, incident notification timelines, audit rights, and indemnities for regulatory fines arising from non-compliance. Procurement processes should incorporate AI-specific due diligence questionnaires assessing training data provenance, bias testing methodologies, model monitoring, and human oversight features. Vendor risk management teams need to evaluate whether suppliers are prepared to meet Colorado’s requirements by 2026.

Alignment with broader regulatory landscape

Colorado’s framework anticipates federal and international developments. Organizations operating across jurisdictions should align compliance with the EU AI Act, Canada’s proposed Artificial Intelligence and Data Act (AIDA), and U.S. sectoral rules such as the Equal Credit Opportunity Act, Fair Housing Act, Americans with Disabilities Act, and Health Insurance Portability and Accountability Act. Documentation from Colorado compliance efforts can support reporting obligations under New York City’s Local Law 144, California’s proposed Automated Decision Tools Accountability Act, and the White House Executive Order 14110 on AI safety. Harmonizing controls reduces duplication and ensures AI governance remains consistent.

Implementation roadmap

To meet the February 2026 effective date, organizations should pursue a phased approach. During 2024, conduct AI system inventories, classify use cases, and perform gap assessments against NIST AI RMF functions (Govern, Map, Measure, Manage). Establish AI governance committees, define policies, and assign accountability. In 2025, build and pilot risk management workflows, draft impact assessment templates, integrate monitoring dashboards, and negotiate updated vendor contracts. By late 2025, execute tabletop exercises for algorithmic discrimination incidents, finalize consumer notice templates, implement appeal mechanisms, and validate readiness with internal audit reviews. Continuous monitoring and annual reassessments should follow once the law takes effect.
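
As a closing illustration, the milestone tracker below encodes the phased roadmap with example target dates; the phase dates are assumptions chosen for this sketch, and only the 1 February 2026 effective date comes from the statute.

# Sketch only: milestone tracker for the phased roadmap; dates are examples.
from datetime import date

EFFECTIVE_DATE = date(2026, 2, 1)

milestones = {
    "AI system inventory and use-case classification": date(2024, 12, 31),
    "NIST AI RMF gap assessment (Govern, Map, Measure, Manage)": date(2024, 12, 31),
    "Risk management workflows and impact assessment templates piloted": date(2025, 6, 30),
    "Vendor contracts renegotiated with disclosure and audit clauses": date(2025, 9, 30),
    "Incident tabletop exercise and consumer notice templates finalized": date(2025, 11, 30),
    "Internal audit readiness review complete": date(2026, 1, 15),
}

def status_report(today: date) -> None:
    for name, due in sorted(milestones.items(), key=lambda kv: kv[1]):
        state = "OVERDUE" if today > due else "on track"
        print(f"{due.isoformat()}  {state:8s}  {name}")
    days_left = (EFFECTIVE_DATE - today).days
    print(f"{days_left} days until the Act takes effect on {EFFECTIVE_DATE}.")

status_report(date(2025, 7, 1))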

Zeph Tech operationalizes Colorado AI Act compliance with AI system inventories, impact assessment automation, and discrimination incident playbooks that keep developers and deployers on the right side of Colorado’s landmark statute.

