Canada Updates Directive on Automated Decision-Making
Canada’s Treasury Board Secretariat issued Directive on Automated Decision-Making 2.0 on 30 April 2024, tightening AI system impact assessments, transparency, and monitoring obligations for federal institutions.
Verified for technical accuracy — Kodi C.
The Treasury Board of Canada Secretariat (TBS) published Version 2.0 of the Directive on Automated Decision-Making on 30 April 2024. The update modernizes Canada's federal AI governance framework by expanding scope to generative and assistive AI, mandating refreshed Algorithmic Impact Assessments (AIAs), and embedding continuous monitoring controls before agencies can deploy automated decision systems. This revision represents the most significant update since the original directive was introduced in 2019, reflecting lessons learned from early implementation and the rapid evolution of AI technologies, including large language models and generative AI systems.
Evolution of Canadian AI Governance
Canada emerged as an early leader in government AI governance when TBS issued the original Directive on Automated Decision-Making in 2019. That directive established the Algorithmic Impact Assessment tool, requiring federal institutions to evaluate risks before deploying automated systems that influence administrative decisions affecting Canadians.
The AIA questionnaire evaluates systems across multiple dimensions, including data quality, procedural fairness, transparency, legal authority, and human oversight requirements. Version 2.0 builds on this foundation while addressing gaps identified through years of implementation experience and the emergence of generative AI capabilities that did not exist when the original directive was drafted.
Expanded System Coverage
Section 4.2 applies the directive to any system that influences administrative decisions, including tools that support but do not fully automate outcomes. This expansion addresses concerns that the original directive's focus on fully automated decisions created a loophole for hybrid systems where AI generates recommendations that humans routinely accept.
The updated scope includes generative AI tools that draft communications to citizens, assistive AI that triages workloads or suggests responses, and analytical systems that score applications or identify patterns for human review. Federal institutions must evaluate whether deployed AI tools meet the expanded definition and complete AIAs for systems not previously assessed.
Updated Algorithmic Impact Assessment
Institutions must complete the new AIA questionnaire, publish the summary online, and refresh it annually or when systems change materially. The revised questionnaire includes new questions addressing large language model risks, training data provenance, potential for generating harmful outputs, and safeguards against prompt injection attacks.
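The annual-or-on-material-change refresh rule could be expressed as a simple due-date check. This is a minimal sketch under stated assumptions: the 365-day cycle interpretation of "annually", and all function and parameter names, are illustrative, not part of the directive.

```python
from datetime import date, timedelta

def aia_refresh_due(last_published: date, material_change: bool,
                    today: date) -> bool:
    """An AIA refresh is due when the system has changed materially,
    or when a year (assumed: 365 days) has passed since publication."""
    return material_change or today - last_published >= timedelta(days=365)
```

An institution might run such a check against its register of published AIA summaries to build a refresh backlog each quarter.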
Impact levels remain tiered from Level I (low impact) to Level IV (highest impact), with corresponding requirements for transparency, human oversight, and accountability measures. Systems that generate content for public communication or make recommendations on benefits eligibility typically score at higher impact levels, triggering stronger controls. Published AIA summaries must include plain-language explanations accessible to affected citizens.
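As a rough illustration of how tiered impact levels map to cumulative controls, the sketch below converts a questionnaire score into a Level I–IV tier. The percentage bands and control names are hypothetical assumptions for illustration; the official AIA tool defines its own scoring and requirements.

```python
def impact_level(raw_score: float, max_score: float) -> int:
    """Map a raw AIA questionnaire score to an impact tier (1-4)
    using hypothetical percentage bands, not TBS's official scoring."""
    pct = raw_score / max_score
    if pct <= 0.25:
        return 1  # Level I  - low impact
    if pct <= 0.50:
        return 2  # Level II - moderate impact
    if pct <= 0.75:
        return 3  # Level III - high impact
    return 4      # Level IV - highest impact

def required_controls(level: int) -> list[str]:
    """Controls accumulate as the tier rises (illustrative names only)."""
    controls = ["publish AIA summary", "plain-language notice"]
    if level >= 2:
        controls.append("post-hoc human review")
    if level >= 3:
        controls.append("human review before decision takes effect")
    if level >= 4:
        controls.append("senior approval and external peer review")
    return controls
```

The cumulative structure mirrors the article's point that higher-impact systems inherit every lower-tier obligation plus stronger ones.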
Human Oversight and Explainability
Section 6.2 requires meaningful human intervention proportional to risk level, alongside accessible explanations for affected individuals. Low-impact systems may operate with post-hoc human review, while high-impact systems require human review before decisions take effect.
Meaningful intervention requires decision-makers to have sufficient information, authority, and time to disagree with automated recommendations. Explanations provided to affected individuals must describe the role of the automated system, factors considered, how the decision was reached, and recourse options. For generative AI systems, institutions must implement guardrails preventing autonomous operation without appropriate human review of outputs.
Incident Management and Monitoring
Section 6.4 establishes obligations to log, triage, and report adverse incidents, including algorithmic bias, security breaches, or service disruptions. Institutions must implement monitoring systems capable of detecting performance degradation, drift in decision patterns, and anomalous outcomes that may indicate bias or system failure.
Incident response procedures must define escalation pathways, communication protocols, and remediation timelines. Significant incidents affecting citizen rights or demonstrating systematic bias must be reported to TBS, enabling cross-government learning and policy refinement. Continuous monitoring requirements apply throughout system lifecycles, not merely during initial deployment.
Third-Party Accountability
Contracts must embed directive requirements, ensuring vendors deliver documentation and risk controls that support federal compliance. Procurement processes must evaluate vendor AI governance practices, training data sources, and ability to provide explanations and audit trails.
Software-as-a-service AI products must include contractual provisions for data handling, incident notification, and cooperation with AIA updates. Vendors must disclose material changes to underlying models or training data that could affect AIA assessments. The directive recognizes that federal institutions increasingly rely on commercial AI products rather than developing systems in-house, requiring robust vendor management frameworks.
Implementation Planning
Federal institutions should inventory existing automated decision systems against the expanded scope definition, identifying systems requiring first-time AIA completion. Previously assessed systems require review against updated questionnaire criteria, with refreshed assessments submitted within transition timelines.
Training programs should educate program managers and technical staff on updated requirements, particularly regarding generative AI deployment guardrails. Incident management procedures require updating to capture new logging, monitoring, and reporting obligations. Procurement templates should be revised to include directive-compliant vendor requirements for future AI acquisitions.
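The inventory step above could be sketched as a triage over a system register against the expanded Section 4.2 scope. The category names and record fields here are hypothetical, not TBS data standards.

```python
# Hypothetical system categories an institution might use when testing
# whether a tool falls under the directive's expanded scope.
SCOPED_CATEGORIES = {"fully_automated", "recommendation", "generative",
                     "assistive", "scoring"}

def needs_aia(system: dict) -> bool:
    """A system needs a first-time AIA if it is in scope (influences an
    administrative decision) and has no completed assessment on record."""
    in_scope = (system.get("category") in SCOPED_CATEGORIES
                and system.get("influences_admin_decision", False))
    return in_scope and not system.get("aia_completed", False)

inventory = [
    {"name": "benefits triage bot", "category": "scoring",
     "influences_admin_decision": True, "aia_completed": False},
    {"name": "internal HR chatbot", "category": "generative",
     "influences_admin_decision": False},
]
backlog = [s["name"] for s in inventory if needs_aia(s)]
```

Running such a triage yields the backlog of systems requiring first-time AIA completion within the transition timelines.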
Cited sources
- Directive on Automated Decision-Making (Version 2.0) — Treasury Board of Canada Secretariat
- Responsible Use of Artificial Intelligence in the Government of Canada — Treasury Board of Canada Secretariat
- ISO 31000:2018 — Risk Management Guidelines — International Organization for Standardization