Political Declaration on Responsible Military Use of AI Gains Support — February 16, 2023
The United States launched the Political Declaration on Responsible Military Use of AI and Autonomy on 16 February 2023; more than fifty nations have since endorsed it, committing to rigorous testing, human oversight, and transparency across defence AI programmes.
Executive briefing: At the Responsible AI in the Military Domain (REAIM) summit in The Hague on 16 February 2023, the United States launched the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. More than 50 countries have endorsed the voluntary norms, which call for lawful, human-centric, and accountable development and deployment of military AI capabilities. The declaration emphasises adherence to international humanitarian law, rigorous testing and assurance, chain-of-command accountability, and transparent governance. Defence organisations should benchmark doctrine, acquisition, and operational practices against the declaration’s principles while preparing for peer reviews and confidence-building measures.
Declaration principles and expectations
The declaration outlines 12 guiding principles. Key commitments include ensuring AI-enabled military systems are developed in accordance with legal obligations; maintaining human responsibility for deployment decisions; conducting context-appropriate testing, evaluation, verification, and validation (TEVV); instituting oversight mechanisms for autonomous target selection; and establishing safeguards to reduce unintended escalation. Signatories pledge to investigate and remediate incidents, share best practices, and support transparency measures such as voluntary reporting or dialogue with other states. The declaration reinforces pre-existing norms—such as the DoD’s five AI Ethical Principles and NATO’s principles on responsible AI—while expanding participation beyond NATO allies to include partners from Latin America, Africa, and Asia.
Governance and organisational implications
Defence ministries should formalise governance structures that assign accountability for AI programmes at senior levels. Recommended actions include appointing chief AI ethics officers, integrating responsible AI requirements into acquisition policies, and updating doctrines to clarify commander responsibilities. The declaration encourages training for operators and commanders on AI system limitations, human-machine teaming, and escalation management. Organisations should embed responsible AI checkpoints within capability development lifecycles—from concept development through operational deployment—ensuring alignment with national strategies and legal reviews.
Testing, evaluation, verification, and validation (TEVV)
Rigorous TEVV is central to the declaration. Defence programmes must tailor evaluation protocols to system risk profiles, incorporating live-fire exercises, simulation, red-teaming, and adversarial testing. Maintain traceable datasets, reproducible model configurations, and configuration control to support repeatable testing. Capture evidence demonstrating robustness across operational environments, resilience to cyber interference, and compliance with rules of engagement. Establish independent review boards that approve TEVV plans and certify readiness, similar to NATO’s Autonomy Implementation Plan or the U.S. DoD’s Responsible AI Strategy and Implementation Pathway.
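To make the traceability requirement concrete, the sketch below shows one way a programme office might record a reproducible TEVV run, binding results to a dataset hash and a pinned model build. The schema and all field names are illustrative assumptions, not drawn from any official NATO or DoD artefact.

```python
# Minimal sketch of a traceable TEVV evidence record, assuming a programme
# binds each test run to a dataset hash and a pinned model build. Field
# names are illustrative, not drawn from any official NATO or DoD schema.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class TevvEvidenceRecord:
    system_id: str           # programme or platform identifier
    model_version: str       # pinned model build under test
    dataset_sha256: str      # hash ties results to the exact input data
    test_type: str           # e.g. "simulation", "red-team", "live-fire"
    environment: str         # operational context evaluated
    passed: bool
    notes: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def hash_dataset(snapshot: bytes) -> str:
    """Hash dataset contents so every run is traceable to its inputs."""
    return hashlib.sha256(snapshot).hexdigest()

# Example: log one simulation run against a frozen dataset snapshot.
record = TevvEvidenceRecord(
    system_id="UAS-ISR-42",
    model_version="detector-v3.1.0",
    dataset_sha256=hash_dataset(b"frozen dataset snapshot bytes"),
    test_type="simulation",
    environment="littoral-night",
    passed=True,
    notes="Met robustness threshold under simulated GPS degradation.",
)
print(json.dumps(asdict(record), indent=2))
```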
Operational safeguards and human oversight
The declaration affirms that human commanders remain responsible for the use of force. Defence organisations should define clear rules for human-machine interaction, including approval authorities, abort mechanisms, and fallback procedures. Ensure human operators possess situational awareness and the ability to intervene or disengage systems. Implement continuous monitoring for anomalous behaviour, establishing immediate incident reporting pathways to operational commanders and legal advisors. For autonomous weapons or decision-support tools, embed constraint mechanisms (e.g., geofencing, time limits, positive identification requirements) aligned with international humanitarian law principles of distinction, proportionality, and precaution.
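As an illustration of such constraint mechanisms, the sketch below gates an engagement recommendation on a geofence, a time window, and operator-confirmed positive identification, defaulting to "do not engage" when any check fails. The rectangular fence model, field names, and policy are simplifying assumptions; operational constraints would be derived from legal review and rules of engagement.

```python
# Illustrative pre-engagement constraint checks: geofence, time window, and
# operator-confirmed positive identification. The rectangular fence, field
# names, and policy are simplifying assumptions, not doctrine.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GeoFence:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

@dataclass
class EngagementContext:
    lat: float
    lon: float
    utc_now: datetime
    window_start: datetime               # authorised engagement window
    window_end: datetime
    pid_confirmed_by_operator: bool      # human positive identification

def constraints_satisfied(ctx: EngagementContext, fence: GeoFence) -> bool:
    """Every constraint must hold; any failure defaults to 'do not engage'
    and defers the decision back to the human chain of command."""
    inside_fence = fence.contains(ctx.lat, ctx.lon)
    inside_window = ctx.window_start <= ctx.utc_now <= ctx.window_end
    return inside_fence and inside_window and ctx.pid_confirmed_by_operator

# Example: spatial and temporal checks pass, but no human has confirmed
# positive identification, so the result is False.
now = datetime.now(timezone.utc)
fence = GeoFence(min_lat=54.0, max_lat=54.5, min_lon=18.0, max_lon=18.8)
ctx = EngagementContext(lat=54.2, lon=18.4, utc_now=now,
                        window_start=now, window_end=now,
                        pid_confirmed_by_operator=False)
print(constraints_satisfied(ctx, fence))  # False
```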
Transparency, confidence-building, and international cooperation
Signatories commit to transparency measures that bolster trust. Recommended steps include publishing national policy statements, sharing doctrine updates, and participating in multilateral dialogues. The United States plans annual briefings on implementation progress, inviting other countries to share lessons learned. States can engage through NATO’s Defence Innovation Accelerator for the North Atlantic (DIANA), the Global Partnership on AI, and discussions in the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems. Confidence-building may involve reciprocal site visits, cooperative exercises focused on safe AI employment, and exchanges of TEVV methodologies.
Implementation roadmap for defence organisations
- Policy alignment: Map existing national policies, directives, and ethical codes to the declaration’s principles. Identify gaps in doctrine, acquisition regulations, and training curricula.
- Governance design: Establish responsible AI councils, legal review processes, and escalation paths. Integrate declaration commitments into strategic documents and capability development roadmaps.
- Lifecycle controls: Embed TEVV requirements, configuration management, and operational readiness criteria into programme baselines. Require vendors to supply safety cases and assurance artefacts (see the gate-check sketch after this list).
- Training and culture: Develop training modules for commanders, operators, engineers, and lawyers covering AI limitations, ethical decision-making, and incident response.
- Transparency and reporting: Define metrics, reporting cadences, and external engagement plans to satisfy annual declaration updates and stakeholder expectations.
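Following on from the lifecycle-controls item above, this sketch models responsible AI checkpoints as phase gates that block baseline advancement until required artefacts exist. Gate names and criteria are hypothetical, not taken from the declaration or any national pathway.

```python
# Hedged sketch of responsible AI lifecycle gates, assuming a programme
# defines required artefacts per phase before advancing the baseline.
# Gate names and criteria are hypothetical, not an official pathway.
LIFECYCLE_GATES = {
    "concept": ["legal_review_initiated", "risk_register_created"],
    "development": ["tevv_plan_approved", "configuration_control_in_place"],
    "pre_deployment": ["tevv_evidence_complete", "safety_case_accepted",
                       "operator_training_complete"],
    "operations": ["monitoring_enabled", "incident_playbook_published"],
}

def gate_status(phase: str, completed: set[str]) -> tuple[bool, list[str]]:
    """Return whether a phase gate passes and which artefacts are missing."""
    missing = [item for item in LIFECYCLE_GATES[phase] if item not in completed]
    return (not missing, missing)

# Example: a programme entering pre-deployment with one artefact outstanding.
ok, gaps = gate_status("pre_deployment",
                       {"tevv_evidence_complete", "safety_case_accepted"})
print(ok, gaps)  # False ['operator_training_complete']
```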
Risk management and incident response
Defence organisations should maintain risk registers capturing mission, safety, legal, and reputational risks associated with AI deployments. Implement incident response playbooks for malfunctions, adversarial exploitation, or collateral damage allegations. Ensure post-incident reviews capture root causes, data logs, and corrective actions, feeding into doctrine updates. Coordinate with intelligence, cyber, and legal teams to assess adversary interference or information operations. Share de-identified lessons with partners to promote collective learning, consistent with declaration commitments.
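A minimal sketch of the incident record and escalation routing described above, assuming an illustrative severity scale and category set; real playbooks would align these with national doctrine and legal requirements.

```python
# Illustrative incident record and escalation routing; the severity scale,
# categories, and recipients are assumptions, not a standardised schema.
from dataclasses import dataclass, field

@dataclass
class AiIncident:
    incident_id: str
    system_id: str
    category: str          # e.g. "malfunction", "adversarial", "collateral-allegation"
    severity: int          # 1 (low) to 5 (critical), illustrative scale
    description: str
    data_logs_preserved: bool
    root_cause: str = "under investigation"
    corrective_actions: list[str] = field(default_factory=list)

def escalation_path(incident: AiIncident) -> list[str]:
    """Route every incident to the operational commander, adding legal,
    cyber, and intelligence teams by category and severity (illustrative)."""
    recipients = ["operational_commander"]
    if incident.category == "collateral-allegation":
        recipients.append("legal_advisor")
    if incident.category == "adversarial" or incident.severity >= 4:
        recipients += ["cyber_team", "intelligence_liaison"]
    return recipients

incident = AiIncident("INC-007", "UAS-ISR-42", "adversarial", 4,
                      "Spoofed imagery degraded classifier output.", True)
print(escalation_path(incident))
# ['operational_commander', 'cyber_team', 'intelligence_liaison']
```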
Industry and vendor considerations
Defence contractors must align research and development practices with declaration principles. Require suppliers to demonstrate compliance with export controls, cybersecurity standards (e.g., NIST SP 800-171, ISO/IEC 27001), and responsible AI guidelines. Incorporate contractual clauses mandating TEVV evidence, data provenance, algorithmic explainability, and kill-switch functionality. Encourage participation in safety-focused consortia and third-party assurance evaluations. Monitor supply chains for components sourced from jurisdictions with divergent norms, and apply risk mitigation (audits, escrow arrangements, code review) where necessary.
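One way to operationalise those contractual clauses is a supplier artefact checklist evaluated at each delivery milestone, as sketched below; the artefact names map to the clauses above but are otherwise assumptions.

```python
# Sketch of a supplier artefact checklist mirroring the contractual clauses
# above; artefact names are illustrative assumptions.
REQUIRED_ARTEFACTS = {
    "tevv_evidence": "TEVV reports for the delivered configuration",
    "data_provenance": "Dataset lineage and licensing documentation",
    "explainability": "Model cards or equivalent explainability artefacts",
    "kill_switch": "Demonstrated abort/disengagement functionality",
    "cyber_attestation": "NIST SP 800-171 or ISO/IEC 27001 attestation",
}

def missing_artefacts(delivered: set[str]) -> list[str]:
    """List contractual artefacts the supplier has not yet delivered."""
    return [name for name in REQUIRED_ARTEFACTS if name not in delivered]

print(missing_artefacts({"tevv_evidence", "kill_switch"}))
# ['data_provenance', 'explainability', 'cyber_attestation']
```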
Measurement and reporting
Track quantitative indicators: number of AI-enabled systems with completed legal reviews, percentage of programmes executing TEVV according to plan, training completion rates, incident frequency, and remediation timelines. Qualitative metrics include operator confidence assessments, coalition interoperability feedback, and stakeholder trust surveys. Publish unclassified summaries demonstrating adherence to declaration principles, reinforcing transparency and enabling democratic oversight.
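To show how such indicators might be derived, the sketch below computes two of them from hypothetical programme records; the record layout is assumed for illustration.

```python
# Sketch deriving two of the indicators above from hypothetical programme
# records; the record layout is assumed for illustration.
programmes = [
    {"id": "P1", "legal_review_done": True,  "tevv_on_plan": True},
    {"id": "P2", "legal_review_done": True,  "tevv_on_plan": False},
    {"id": "P3", "legal_review_done": False, "tevv_on_plan": True},
]

def pct(predicate, rows) -> float:
    """Percentage of rows satisfying the predicate."""
    return 100.0 * sum(1 for r in rows if predicate(r)) / len(rows)

print(f"Legal reviews complete: {pct(lambda r: r['legal_review_done'], programmes):.0f}%")
print(f"TEVV executed to plan:  {pct(lambda r: r['tevv_on_plan'], programmes):.0f}%")
```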
Future outlook and policy evolution
The declaration is a living instrument. Additional states may join, and working groups may refine implementation guidance, potentially informing formal treaties or confidence-building mechanisms. Monitor UN CCW negotiations, NATO’s responsible AI initiatives, and bilateral agreements (e.g., U.S.-EU Trade and Technology Council discussions on AI governance). Anticipate convergence with emerging domestic regulations, such as the EU AI Act’s military carve-outs or national export control updates. Defence organisations should maintain agile governance to incorporate future standards, scenario-based stress testing, and red-team exercises that evolve with technological advances.
Sources
- U.S. Department of State — Political Declaration on Responsible Military Use of AI and Autonomy
- Full text of the Political Declaration
- U.S. DoD Responsible AI Strategy and Implementation Pathway
- U.S. Department of Defense — Statement on responsible AI declaration at REAIM
Zeph Tech advises defence and national security organisations on integrating responsible AI principles into acquisition, testing, and operational frameworks aligned with the political declaration.