
Policy · Credibility 90/100 · 2 min read

Policy Briefing — Australia Issues Interim Response on Safe and Responsible AI

The Australian Government published its interim response to the Safe and Responsible AI in Australia consultation on 17 January 2024, outlining mandatory guardrails for high-risk AI, a voluntary safety standard, and stronger regulator coordination.

Executive briefing: On 17 January 2024 the Australian Government released its interim response to the Safe and Responsible AI in Australia consultation. The paper commits to legislating mandatory guardrails for high-risk AI systems, establishing a voluntary AI Safety Standard and watermarking guidance, and empowering regulators such as the Australian Human Rights Commission and the eSafety Commissioner to address AI harms.

Key commitments

  • High-risk guardrails. The government will develop binding requirements for AI used in critical infrastructure, essential services, employment, education, and biometric surveillance, including mandatory risk assessments and record-keeping.
  • AI Safety Standard. Standards Australia will lead development of a voluntary standard focused on safety testing, transparency, and responsible release of general-purpose AI.
  • Watermarking guidance. The interim response tasks the Digital Transformation Agency with publishing best practices for watermarking and labelling synthetic content; a minimal labelling sketch follows this list.
  • Regulator coordination. A new AI Taskforce will coordinate agencies including ACMA, ACCC, AHRC, OAIC, and eSafety on enforcement, guidance, and sandboxes.
  • Support for SMEs. The government plans tailored guidance and funding programs to help small businesses adopt AI safely.
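
What a synthetic-content label could look like in practice: the sketch below assumes a simple JSON sidecar approach, and the schema, field names, and write_provenance_label helper are illustrative assumptions, not the Digital Transformation Agency's forthcoming guidance (which may instead prescribe C2PA-style content credentials).

```python
# Minimal sketch of a synthetic-content label using a JSON sidecar file.
# All field names are hypothetical; align them with the DTA guidance once published.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_label(asset_path: str, generator: str) -> Path:
    """Write an <asset>.provenance.json sidecar recording how the asset was made."""
    asset = Path(asset_path)
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()
    label = {
        "asset": asset.name,
        "sha256": digest,  # ties the label to this exact file
        "synthetic": True,  # discloses AI generation up front
        "generator": generator,  # model or tool that produced the asset
        "created": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = asset.with_suffix(asset.suffix + ".provenance.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar
```

Calling write_provenance_label("banner.png", generator="internal-image-model") at export time would give every synthetic asset a disclosure record that downstream systems can verify against the file hash.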

Next steps

  • Targeted consultation. Throughout 2024 the Department of Industry, Science and Resources will consult on the design of high-risk guardrails.
  • Legislation development. Draft legislation is scheduled for release in 2025 following consultation outcomes.
  • Standardisation work. Standards Australia will commence drafting the AI Safety Standard in partnership with industry and academia.

Program actions

  • Risk scoping. Identify AI use cases in Australia that fall within high-risk sectors and start documenting risk assessments, testing evidence, and human oversight plans; see the register sketch after this list.
  • Transparency tooling. Prepare to implement content provenance and labelling workflows aligned with forthcoming watermarking guidance.
  • Regulator engagement. Track consultations from OAIC, eSafety, and ACCC to align compliance strategies with sector-specific expectations.
  • Standards mapping. Monitor development of the AI Safety Standard and align control frameworks with ISO/IEC 42001 and NIST AI RMF for reuse.
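
One way to start the risk-scoping and standards-mapping work is a lightweight use-case register kept in code. The sketch below is a minimal illustration under stated assumptions: the AIUseCase structure, the example entry, and the control references are hypothetical, not an authoritative crosswalk to ISO/IEC 42001 or the NIST AI RMF.

```python
# Illustrative register entry for a high-risk AI use case.
# Fields mirror the evidence the briefing asks teams to collect:
# risk assessments, testing evidence, and human oversight plans.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    sector: str                       # e.g. employment, education, biometrics
    risk_assessment: str              # path or link to the documented assessment
    human_oversight: str              # who can intervene, and how
    testing_evidence: list[str] = field(default_factory=list)
    control_mapping: dict[str, str] = field(default_factory=dict)  # hypothetical mapping

register = [
    AIUseCase(
        name="Resume screening model",
        sector="employment",
        risk_assessment="assessments/resume-screening.md",
        human_oversight="Recruiter reviews every automated rejection",
        testing_evidence=["tests/bias-eval-report.pdf"],
        control_mapping={
            "ISO/IEC 42001": "AI risk assessment and treatment controls",
            "NIST AI RMF": "MAP and MEASURE functions",
        },
    ),
]
```

Keeping the register in version control gives each guardrail artefact a review history, which is the kind of record-keeping the proposed high-risk requirements would likely expect.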

Sources

  • Australian Government Department of Industry, Science and Resources, interim response to the Safe and Responsible AI in Australia consultation (17 January 2024).

Zeph Tech is partnering with Australian organisations to operationalise emerging AI guardrails, from risk testing to provenance controls.
