Department of Labor AI principles
The U.S. Department of Labor’s July 10, 2024 worker-centered AI principles outline eight commitments around safety, fairness, transparency, privacy, human oversight, and collective bargaining that employers should embed into AI governance, procurement, and workforce programs.
Fact-checked and reviewed — Kodi C.
On July 10, 2024, the U.S. Department of Labor (DOL) released “Artificial Intelligence and Worker Well-Being: Principles for Worker-Centered AI.” The eight principles call on employers, vendors, and policymakers to design, deploy, and oversee AI systems that respect worker dignity, protect health and safety, support collective bargaining, and ensure meaningful human oversight. Although non-binding, the principles signal enforcement expectations across OSHA, Wage and Hour, the Office of Federal Contract Compliance Programs (OFCCP), and the National Labor Relations Board (NLRB). Teams using AI for hiring, performance management, scheduling, safety monitoring, or productivity improvement should align governance frameworks, procurement contracts, and workforce engagement with the DOL guidance.
The principles emphasize: (1) centering worker empowerment and participation; (2) ethically developing and using AI to prevent harm and discrimination; (3) establishing governance and human oversight; (4) ensuring transparency and explainability; (5) protecting labor and employment rights, including wage and hour, collective bargaining, and anti-retaliation safeguards; (6) supporting workers impacted by AI through training, redeployment, and redress; (7) ensuring responsible data use and strong privacy protections; and (8) prioritizing health, safety, and accessibility.
DOL urges employers to involve workers and unions early, conduct impact assessments, document design decisions, and maintain access to human review. The principles align with commitments from the White House Executive Order on Safe, Secure, and Trustworthy AI (October 2023) and complement NIST’s AI Risk Management Framework (AI RMF).
Summary of the eight worker-centered principles
- Center worker empowerment. Engage workers and their representatives in AI design, testing, deployment, and evaluation; respect organizing rights and collective bargaining agreements.
- Ethically develop and use AI. Ensure AI systems avoid causing physical, psychological, or economic harm, with risk assessments and mitigation plans covering bias, safety, and misuse.
- Establish governance and human oversight. Create accountable governance structures with clear roles, escalation paths, and human decision-makers empowered to override AI outcomes.
- Ensure transparency. Provide understandable documentation about AI purpose, data sources, decision logic, performance, and limitations to workers and regulators.
- Protect labor and employment rights. Safeguard wages, hours, discrimination protections, collective bargaining rights, and whistleblower protections when using AI to monitor or evaluate workers.
- Support workers impacted by AI. Offer training, career pathways, redeployment assistance, and compensation adjustments; avoid using AI to sidestep employer obligations.
- Ensure responsible data use. Limit data collection, protect privacy, govern retention and access, and avoid repurposing worker data without consent.
- Prioritize health, safety, and accessibility. Use AI to improve, not compromise, occupational safety; ensure accessibility for workers with disabilities and guard against surveillance that undermines safety culture.
Control mapping
- NIST AI RMF: Align Govern and Manage functions with DOL governance, oversight, and mitigation requirements; map transparency expectations to Map and Measure activities.
- ISO/IEC 42001: Integrate principles into AI management system clauses covering policy, human factors, risk assessments, and stakeholder communication.
- OSHA and health/safety standards: Translate principle eight into hazard assessments, job safety analyses, and safety committees when AI is involved in production or monitoring.
- EEOC and OFCCP guidance: Use fairness, data, and rights protections to reinforce anti-discrimination testing and recordkeeping obligations for AI in employment decisions.
- Collective bargaining obligations: Map worker empowerment, transparency, and support principles to bargaining duties, midterm negotiation triggers, and labor-management committees.
Governance actions
- Create a worker-centered AI charter. Document commitments aligned with the DOL principles; assign executive sponsors from HR, legal, safety, and operations.
- Establish cross-functional councils. Include worker representatives, unions, DEI leaders, privacy officers, safety professionals, and AI engineers to review proposals and monitor operations.
- Implement human-in-the-loop oversight. Define roles empowered to pause or override AI decisions; ensure availability of human appeal channels for workers.
- Update risk assessments. Conduct algorithmic impact assessments evaluating discrimination, safety, wage and hour, surveillance, and mental health risks.
- Enhance documentation and reporting. Maintain inventories of AI systems, use cases, training data, performance metrics, human oversight logs, and worker notifications.
Workforce engagement
- Hold listening sessions, focus groups, and union consultations before deploying AI solutions; document feedback and mitigation actions.
- Provide transparent communications detailing AI purpose, data used, expected outcomes, and human escalation options.
- Offer training on new workflows, safety considerations, and rights; include accessible materials for multilingual and disabled workers.
- Create anonymous reporting channels for AI-related concerns, linking to ethics hotlines, safety committees, and HR case management.
Procurement and vendor management
- Update RFPs and contracts to require adherence to DOL principles, NIST AI RMF, bias testing, accessibility, and transparency disclosures.
- Require vendors to provide documentation on data sources, testing, human oversight features, and worker support mechanisms.
- Include audit rights, incident notification clauses, and remediation obligations tied to labor rights violations.
- Assess third-party monitoring tools to prevent covert surveillance or wage theft, ensuring compliance with state privacy laws (for example, California CPRA, Illinois BIPA).
Compliance checkpoints
- Verify that AI-driven scheduling and timekeeping comply with Fair Labor Standards Act (FLSA) overtime, rest break, and recordkeeping requirements.
- Ensure applicant tracking, screening, and assessment tools meet EEOC anti-discrimination guidelines and local laws (for example, NYC Local Law 144 audits).
- For federal contractors, align AI usage with OFCCP affirmative action plans, Section 503 disability accommodation, and Vietnam Era Veterans’ Readjustment Assistance Act obligations.
- Integrate AI safety monitoring with OSHA reporting, hazard communication, and whistleblower protections.
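The FLSA checkpoint above can be illustrated with a deliberately simplified sketch. It applies only the federal baseline (time-and-a-half for hours over 40 in a workweek); real compliance must also handle state overtime rules, rest breaks, exemptions, and recordkeeping, so treat this as a starting point, not a compliance engine.

```python
# Minimal FLSA-style weekly overtime check for AI-generated schedules.
# Simplification: federal baseline only (over 40 hours/week at 1.5x the
# regular rate); ignores state rules, breaks, and exemption status.

def weekly_pay(hours_worked: float, regular_rate: float) -> float:
    """Gross pay with time-and-a-half for hours beyond 40 in a workweek."""
    overtime = max(0.0, hours_worked - 40.0)
    straight = hours_worked - overtime
    return straight * regular_rate + overtime * regular_rate * 1.5

def flag_overtime(schedule: dict[str, float]) -> list[str]:
    """Return workers whose scheduled hours trigger overtime pay."""
    return [worker for worker, hours in schedule.items() if hours > 40.0]

schedule = {"alice": 44.0, "bob": 38.5}
print(flag_overtime(schedule))   # → ['alice']
print(weekly_pay(44.0, 20.0))    # → 920.0
```

Running a check like this against AI-generated schedules before they publish lets payroll catch cost and compliance surprises at the source.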
Measurement and reporting
- Track AI system inventory, risk ratings, worker notification status, and oversight owner assignments.
- Measure bias and disparate impact metrics, safety incidents, and wage/hour variances associated with AI deployments.
- Monitor worker feedback volumes, resolution times, and satisfaction with human appeal processes.
- Report training completion for workers and managers on AI policies, rights, and safety.
- Document remediation timelines for principle violations, including corrective actions and governance updates.
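One widely used screen for the disparate impact metric mentioned above is the EEOC "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate is a common, though not conclusive, red flag. The sketch below assumes simple selected/applicant counts per group; it is a monitoring heuristic, not a substitute for a statistical adverse-impact analysis.

```python
# Four-fifths rule screen for adverse impact in an AI-assisted
# selection process. A heuristic flag, not a legal determination.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """Return groups selected at under 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate / best < 0.8]

outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(outcomes))  # → ['group_b']
```

Tracking this ratio per deployment, alongside safety incidents and wage/hour variances, gives the measurement program a concrete, repeatable metric to report.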
90-day action plan
- Days 1–30: Map existing AI use cases, identify worker touchpoints, brief leadership and labor partners on DOL principles, and pause high-risk deployments pending review.
- Days 31–60: Conduct impact assessments, update governance policies, execute vendor contract amendments, and launch worker communication and training campaigns.
- Days 61–90: Implement oversight workflows, begin continuous monitoring, publish worker-centered AI dashboards, and prepare compliance packets for regulators and boards.
Source material
- Industry Standards and Best Practices — International Organization for Standardization
- NIST AI Risk Management Framework