Policy Briefing — September 4, 2025
Colorado’s AI Act takes effect in February 2026, giving engineering and compliance teams roughly five months from this September briefing to lock down risk management programs for high-risk automated decision tools.
Executive briefing: Colorado Senate Bill 24-205 becomes operative February 1, 2026, creating the first comprehensive U.S. state regime governing high-risk artificial intelligence systems. Developers must exercise reasonable care to protect consumers from algorithmic discrimination, deliver documentation to deployers, and notify deployers of intentional and substantial modifications. Deployers, including employers, lenders, insurers, and housing providers, must maintain risk management programs aligned with the NIST AI Risk Management Framework, conduct impact assessments before using high-risk systems, provide consumer notices, and report algorithmic discrimination incidents to the Colorado Attorney General within 90 days of discovery. With enforcement approaching, September 2025 is the moment to finalize model inventories, evaluation protocols, and response playbooks.
Key compliance checkpoints
- System classification. Inventory automated decision tools and determine whether they fall within the Act’s high-risk definition covering consequential decisions on employment, lending, insurance, health care, education, or housing.
- Risk management program. Align documentation with the Act’s risk management program requirements (C.R.S. 6-1-1703), including design specifications, data governance controls, testing results, and monitoring plans.
- Incident reporting. Establish trigger criteria, investigative procedures, and executive sign-off workflows to ensure algorithmic discrimination events are reported to the Attorney General within 90 days of discovery (a classification and deadline-tracking sketch follows this list).
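A minimal sketch of that classification and deadline logic, assuming a hypothetical internal inventory schema: the domain list mirrors the consequential-decision categories named above, `is_substantial_factor` is an illustrative field rather than statutory language, and the 90-day reporting clock is counted from the discovery date.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Consequential-decision domains listed in this briefing's high-risk checkpoint.
CONSEQUENTIAL_DOMAINS = {
    "employment", "lending", "insurance", "health care", "education", "housing",
}

@dataclass
class AutomatedDecisionTool:
    """Hypothetical inventory record; field names are illustrative only."""
    name: str
    decision_domain: str
    is_substantial_factor: bool  # does the system drive or substantially shape the decision?

def is_high_risk(tool: AutomatedDecisionTool) -> bool:
    """Flag tools that substantially factor into a consequential decision."""
    return tool.is_substantial_factor and tool.decision_domain in CONSEQUENTIAL_DOMAINS

def ag_report_deadline(discovery_date: date) -> date:
    """Attorney General notice is due within 90 days of discovering algorithmic discrimination."""
    return discovery_date + timedelta(days=90)

if __name__ == "__main__":
    screener = AutomatedDecisionTool("resume-screener", "employment", True)
    print(is_high_risk(screener))                # True -> queue an impact assessment
    print(ag_report_deadline(date(2026, 3, 2)))  # 2026-05-31
```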
Operational priorities
- Contract updates. Amend developer-deployer agreements to cover documentation delivery, notification timelines, and shared responsibilities for consumer-facing notices.
- Assessment scheduling. Queue impact assessments for each high-risk deployment and retain evidence that each assessment addresses foreseeable limitations, data representativeness, and post-decision human review (see the record-keeping sketch after this list).
- Consumer notices. Prepare disclosures that explain the AI-driven nature of the decision, key factors used, and available contestation channels before the February effective date.
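The record-keeping step can start as a structured assessment queue. A minimal sketch, assuming hypothetical field names (nothing below is statutory text); a record is treated as complete only once limitation, representativeness, and human-review evidence is captured.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """Hypothetical assessment record; field names are illustrative, not statutory."""
    system_name: str
    due_date: date
    foreseeable_limitations: list[str] = field(default_factory=list)
    data_representativeness_notes: str = ""
    human_review_step: str = ""  # who reviews adverse decisions, and how

    def is_evidence_complete(self) -> bool:
        """Complete only when every evidence field is populated."""
        return bool(self.foreseeable_limitations
                    and self.data_representativeness_notes
                    and self.human_review_step)

# Queue one assessment per high-risk deployment, ordered by due date.
queue = sorted(
    [ImpactAssessment("resume-screener", date(2026, 1, 15)),
     ImpactAssessment("underwriting-model", date(2026, 1, 5))],
    key=lambda a: a.due_date,
)
print([a.system_name for a in queue])  # ['underwriting-model', 'resume-screener']
```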
Enablement moves
- Map organizational safeguards to the NIST AI RMF to demonstrate the statutorily required “reasonable care.”
- Develop bias testing dashboards that monitor statistically significant disparities and route alerts to compliance and legal teams for remediation.
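One way to implement the disparity check behind such a dashboard is a two-proportion z-test on selection rates. SB 24-205 does not prescribe a statistical method, so the choice of test, the 0.05 significance threshold, and every name below are illustrative assumptions.

```python
from math import sqrt, erfc

def selection_rate_disparity(selected_a: int, total_a: int,
                             selected_b: int, total_b: int,
                             alpha: float = 0.05) -> dict:
    """Two-proportion z-test comparing selection rates between two groups.

    Returns both rates, the two-sided p-value, and whether the gap is
    statistically significant at the chosen alpha.
    """
    p_a, p_b = selected_a / total_a, selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal approximation
    return {"rate_a": p_a, "rate_b": p_b, "p_value": p_value,
            "significant": p_value < alpha}

if __name__ == "__main__":
    result = selection_rate_disparity(selected_a=48, total_a=400,
                                      selected_b=90, total_b=410)
    if result["significant"]:
        print(f"Disparity alert: {result}")  # route to compliance and legal for review
```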
Sources
- Colorado SB 24-205 (Artificial Intelligence Act) signed text
- Colorado Attorney General announcement outlining key compliance obligations
Zeph Tech catalogs high-risk AI systems, orchestrates impact assessments, and automates Colorado AG incident reporting workflows.