
Policy Briefing — September 4, 2025

Colorado’s AI Act enters into force in February 2026, leaving engineering and compliance teams roughly five months from this September briefing to lock down risk management programs for high-risk automated decision tools.


Executive briefing: Colorado SB24-205 obligates both developers and deployers of high-risk AI systems to operate a written risk management program (RMP) that detects, mitigates, and monitors algorithmic discrimination across the lifecycle. Enforcement begins in February 2026, leaving five months to stand up policies aligned with NIST AI RMF or ISO/IEC 42001, embed testing in MLOps, and link consumer notices and appeals to documented safeguards. Zeph Tech is translating these requirements into control maps, playbooks, and evidence binders that connect with the AI pillar hub, the Colorado compliance guide, and related briefs on high-risk readiness and impact assessments.

What §6-1-1704 demands

Colorado’s law requires the RMP to be documented, risk-based, and iterative. Key elements include:

  • Risk identification: Document intended purpose, affected individuals, and potential algorithmic discrimination vectors (e.g., protected classes across employment, housing, lending, education, healthcare, insurance).
  • Controls and mitigations: Specify technical and procedural safeguards—feature constraints, monitoring thresholds, human-in-the-loop approvals, and override authority—proportional to risk.
  • Evaluation and testing: Run pre-deployment and periodic testing for bias, robustness, and drift; record metrics, datasets, and acceptance criteria.
  • Governance and accountability: Assign owners, escalation paths, and documentation duties; keep a public statement describing high-risk uses and safeguards.
  • Continuous monitoring: Track incidents and material changes; retrain or rollback when triggers fire; retain artifacts for regulatory requests.
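The elements above can be captured in a machine-readable risk register so completeness is checkable rather than asserted. A minimal sketch in Python; the field names and completeness rule are illustrative assumptions, not statutory terms:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One high-risk system's entry in the RMP risk register (illustrative schema)."""
    system: str                    # name of the high-risk AI system
    intended_purpose: str          # documented intended use
    affected_groups: list          # protected classes potentially impacted
    mitigations: list = field(default_factory=list)  # safeguards in place
    test_cadence_days: int = 90    # periodic testing interval
    owner: str = "unassigned"      # accountable role

    def is_complete(self) -> bool:
        # An entry is audit-ready only if purpose, groups, mitigations, and owner are set.
        return bool(self.intended_purpose and self.affected_groups
                    and self.mitigations and self.owner != "unassigned")

entry = RiskEntry(
    system="resume-screener-v3",
    intended_purpose="Rank applicants for recruiter review",
    affected_groups=["race", "gender", "age"],
    mitigations=["feature masking", "recruiter override"],
    owner="Head of Talent Systems",
)
print(entry.is_complete())  # True
```

A register of such entries gives the governance function a single query surface for "which systems lack documented mitigations."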

Program blueprint and ownership

Zeph Tech structures the RMP around the NIST AI RMF functions—Map, Measure, Manage, and Govern—paired with clear ownership.

Control ownership and evidence
Function | Owner | Evidence
Map | Product + Legal | Use-case register, high-risk determinations, rights impact notes.
Measure | Data Science | Bias and robustness test plans, datasets, metrics dashboards, reproducibility notes.
Manage | Risk + Engineering | Mitigation actions, change-control records, rollback criteria, retraining logs.
Govern | Compliance + Board | Policy approvals, training records, public transparency statement, AG-ready packet.

Diagram: How the RMP fits the lifecycle

        Map (scope + purpose) → Measure (test & metrics) → Manage (mitigate & deploy)
                  ↑                                      ↓
                  └──────────── Govern (policies, roles, oversight, transparency) ───────────┘
            
Lifecycle integration ensures every model change is evaluated, approved, and monitored under the same governance spine.

Embedding into MLOps

To avoid shelfware, the RMP is wired into CI/CD and incident processes:

  • Pre-commit checks: Lint model cards for required fields (purpose, datasets, protected-class coverage, limitations); block merges missing risk statements.
  • Automated tests: Run fairness and robustness suites during CI; store outputs with model version and dataset hash for reproducibility.
  • Release gates: Require legal sign-off after consumer notice copy and appeal SLAs are validated; tag deployments with effective policy version.
  • Monitoring hooks: Stream drift and performance metrics to dashboards; alert when protected-class gaps exceed thresholds; trigger retraining or rollback.
  • Change control: Log all parameter updates and data refreshes; refresh impact assessments after material changes; notify customers if changes alter risk profile.
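The pre-commit and CI steps above can be sketched as a merge gate that lints a model card for required fields and fingerprints the dataset so test outputs are reproducible. The field names and the 16-character hash prefix are assumptions for illustration:

```python
import hashlib

# Fields every model card must carry before a merge is allowed (illustrative set).
REQUIRED_FIELDS = {"purpose", "datasets", "protected_class_coverage",
                   "limitations", "risk_statement"}

def lint_model_card(card: dict) -> list:
    """Return the sorted list of required fields missing from a model card."""
    return sorted(REQUIRED_FIELDS - card.keys())

def dataset_fingerprint(raw_bytes: bytes) -> str:
    """Content hash stored alongside the model version for reproducibility."""
    return hashlib.sha256(raw_bytes).hexdigest()[:16]

card = {
    "purpose": "credit underwriting",
    "datasets": ["apps_2025q2"],
    "protected_class_coverage": ["race", "sex", "age"],
    "limitations": "not validated for thin-file applicants",
}

missing = lint_model_card(card)
if missing:
    # In CI this would fail the merge; here we just report the gap.
    print("BLOCK MERGE, missing:", missing)  # ['risk_statement']
```

The same fingerprint function can tag fairness-suite outputs so any metric can be traced back to the exact data snapshot it was computed on.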

Risk scenarios and mitigations

Colorado focuses on consequential decisions. Zeph Tech uses scenario playbooks to test and mitigate.

  • Employment screening: Bias testing on gender, race, and age with stratified sampling; mask correlating features; require recruiter override before rejection.
  • Credit underwriting: Examine disparate impact on protected classes; cap feature influence for proxy attributes; document adverse action rationales in consumer notices.
  • Housing eligibility: Validate geographic and demographic balance; prohibit using past eviction data as sole factor; route borderline cases to human review.
  • Insurance pricing: Stress-test for disability and age proxies; apply fairness-constrained optimization; explain key factors in notice templates.
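Scenario testing of the kind listed above often starts with a simple adverse-impact ratio. A sketch using the "four-fifths" convention common in employment analysis; the 0.8 cutoff is a convention, not a threshold set by SB24-205:

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict, reference_group: str) -> dict:
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical screening outcomes: (selected, total) per group.
outcomes = {"group_a": (40, 100), "group_b": (28, 100)}
ratios = adverse_impact_ratios(outcomes, "group_a")

# Flag groups whose ratio falls below the four-fifths convention.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # ['group_b']
```

A flagged group would route the model into the mitigation steps for that scenario (feature masking, human override) before any deployment decision.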

Metrics and thresholds

Each high-risk system gets quantitative targets tied to escalation.

Example thresholds
Metric | Threshold | Action
Demographic parity difference | |Δ| < 5% | Review drift if exceeded; run mitigation; log rationale.
False negative gap | < 3 percentage points across groups | Human review for affected segment; adjust thresholds.
Model stability (AUC change) | < 2% across monthly retrains | Investigate data shifts; roll back if unexplained.
Appeal reversals | < 10% per quarter | Analyze feature importance; refine notices; update training.
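The demographic parity check in the table can be wired directly to its escalation action. A minimal sketch; the 5% threshold mirrors the table, while the action labels are illustrative:

```python
def demographic_parity_difference(rate_a: float, rate_b: float) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(rate_a - rate_b)

def escalation(delta: float, threshold: float = 0.05) -> str:
    # Mirrors the table: review drift, run mitigation, and log rationale when exceeded.
    return "review_and_mitigate" if delta >= threshold else "within_tolerance"

delta = demographic_parity_difference(0.62, 0.55)  # a 7-point gap
print(escalation(delta))  # review_and_mitigate
```

In a monitoring pipeline the returned action string would drive the alerting and ticketing hooks described under "Embedding into MLOps."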

Transparency and documentation

Colorado requires a public statement summarizing high-risk uses and safeguards, plus the ability to furnish detailed documentation to the Attorney General. Zeph Tech builds a layered package:

  1. Public layer: Web page summarizing high-risk systems, purposes, core safeguards, and appeal routes.
  2. Customer layer: Consumer notice copy, appeal SLAs, and channel-specific explanations of key factors.
  3. Regulatory layer: Full model cards, assessment reports, testing evidence, incident history, and developer attestations.

Integration with developers

Because the statute allocates duties to developers (§6-1-1705) and deployers (§6-1-1706), the RMP includes vendor coordination:

  • Require developers to provide intended use, data sources, limitations, evaluation metrics, and known risks.
  • Negotiate contractual commitments for timely notice of material changes and incident cooperation.
  • Share Zeph Tech’s mitigation playbooks so developer updates align with deployer obligations.

Incident response and AG notification

An RMP without incident muscle will fail audits. Zeph Tech codifies a 72-hour internal investigation and 90-day AG notification plan with roles and templates. Evidence includes root-cause analyses, consumer communication scripts, mitigation steps, and verification of restored performance.
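The two windows above can be computed mechanically from the discovery timestamp so no deadline depends on manual tracking. A sketch; the 72-hour window is Zeph Tech's internal target as described, and the 90-day window tracks the AG notification clock:

```python
from datetime import datetime, timedelta

def incident_deadlines(discovered_at: datetime) -> dict:
    """Key remediation deadlines derived from the incident discovery timestamp."""
    return {
        "internal_investigation_due": discovered_at + timedelta(hours=72),
        "ag_notification_due": discovered_at + timedelta(days=90),
    }

deadlines = incident_deadlines(datetime(2026, 2, 10, 9, 0))
print(deadlines["ag_notification_due"].date())  # 2026-05-11
```

Emitting both deadlines at intake lets the incident template pre-populate owner assignments and evidence checklists against fixed dates.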

Training and attestation

SB24-205 expects organizational buy-in. Zeph Tech runs quarterly training for engineers, product managers, customer-support leads, and legal reviewers, followed by attestations that policies were read and applied. Completion metrics feed the board dashboard and readiness attestation.

Zeph Tech aligns Colorado risk management programs with actionable tests, notices, and incident response so compliance is auditable and repeatable.

Roadmap to February 2026

Zeph Tech guides clients through a structured cadence: September focuses on finalizing the policy and governance charter; October centers on embedding controls into CI/CD and running dry-run assessments; November publishes transparency statements and trains customer-facing teams; December completes incident tabletops, validates evidence retention, and captures board acknowledgement ahead of go-live.

Audit trail and retention

Evidence must survive regulatory review. Zeph Tech retains signed policies, model cards, test datasets, metric outputs, consumer notice versions, appeal logs, and incident reports for at least three years, tagged to model version and deployment date. Each artifact is cross-referenced in an index so legal teams can prove which safeguards were active for any decision.
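The cross-referenced index described above can be a lookup keyed by model version and deployment date, so legal can resolve which safeguards governed any given decision. The schema and file paths here are illustrative assumptions:

```python
from datetime import date

# Illustrative artifact index: (model_version, deployed_on) -> evidence bundle.
ARTIFACT_INDEX = {
    ("underwriter-v7", date(2025, 11, 3)): {
        "policy_version": "RMP-1.2",
        "test_report": "reports/underwriter-v7-fairness.pdf",
        "notice_version": "notice-2025-10",
    },
}

def safeguards_for(model_version: str, decision_date: date):
    """Return the evidence bundle for the latest deployment on or before decision_date."""
    candidates = [(deployed, bundle) for (mv, deployed), bundle in ARTIFACT_INDEX.items()
                  if mv == model_version and deployed <= decision_date]
    return max(candidates)[1] if candidates else None

bundle = safeguards_for("underwriter-v7", date(2026, 1, 15))
print(bundle["policy_version"])  # RMP-1.2
```

A decision dated before any deployment returns nothing, which itself is a useful audit signal that the index has a gap.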

Crosswalk to other frameworks

  • EU AI Act Article 9 risk management: Colorado’s §6-1-1704 dovetails with Article 9 requirements—teams can reuse hazard identification, testing, and monitoring steps with localized notice language.
  • FTC unfairness guidance: Documentation of foreseeable harms and mitigations supports U.S. federal expectations for deceptive or unfair AI practices.
  • ISO/IEC 42001 clauses: Policies, roles, awareness training, monitoring, and continual improvement map directly to management-system controls for certification-ready organizations.

KPIs for board oversight

Board and executive sponsors monitor a small set of KPIs each quarter: percentage of high-risk systems with current assessments; number of policy deviations with corrective actions; appeal volumes and reversal rates by product line; time-to-close for incident investigations; and training completion by role. These KPIs are paired with confidence notes explaining data lineage and any blind spots.
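The first KPI in that list reduces to a simple coverage ratio over the system inventory. A sketch, assuming each system record carries an `assessment_current` flag (a hypothetical field name):

```python
def assessment_coverage(systems: list) -> float:
    """Percentage of high-risk systems with a current assessment, to one decimal."""
    current = sum(1 for s in systems if s["assessment_current"])
    return round(100 * current / len(systems), 1)

inventory = [
    {"name": "resume-screener-v3", "assessment_current": True},
    {"name": "underwriter-v7", "assessment_current": True},
    {"name": "pricing-engine-v2", "assessment_current": False},
]
print(assessment_coverage(inventory))  # 66.7
```

Pairing the number with the list of non-current systems gives the board the "confidence note" context the text calls for.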

Lessons from pilots

Pilot implementations show that early pairing of legal and data science cuts assessment rework by half, and publishing draft notices alongside model cards surfaces clarity gaps before launch. Zeph Tech catalogs these lessons and reuses them as guardrails for subsequent models to keep the RMP living and adaptive.

