AI Governance Library

AI policy guide and template (v1.0, October 2025)

A concise, editable AI policy template from Australia’s National AI Centre. It turns high-level ethics into actionable statements, roles, and workflows—covering purpose, scope, seven policy principles, governance, incident response, and review cadence.  

⚡ Quick Summary

This 12-page guide provides a ready-to-adopt AI policy skeleton that organisations can tailor to their context. It opens with a clear introduction on why every organisation using AI needs a policy, then walks through purpose, scope, and pragmatic, principle-based policy statements (ethics, accountability, risk assessment, quality/security, fairness, transparency, and human oversight). It adds an operational governance layer—defining accountable roles, screening for new use cases, incident handling, and regular reviews—so policy isn’t just words but a repeatable process. The appendix points to aligned Australian resources (e.g., AI register template, screening tool, AI Ethics Principles, Voluntary AI Safety Standard) to accelerate implementation. In short: a credible baseline for responsible AI that small and mid-size teams can deploy quickly and audit confidently.  

🧩 What’s Covered

Introduction & How to Use. The guide frames an AI policy as an “essential guiding resource” and explains how to adapt the template—align to values, define roles, match internal language, and gather feedback. It’s explicitly positioned as a starting point that fits with broader governance and Australia’s Guidance for AI Adoption. (pp. 2–3)  

Purpose. A model “Purpose” section lists strategic aims (protect rights, enhance services, ensure transparency, manage risk, empower staff) and reminds teams to version and date the policy—useful for audits and reviews. (p. 3)  

Scope & Definitions. The scope applies to all staff/volunteers/contractors and all AI under organisational control (in-house, vendor, embedded/cloud). A pragmatic AI definition—“technology that uses data to make inferences and generate outputs… with a degree of autonomy”—is paired with explicit inclusions (ML, genAI, predictive systems, chatbots) and exclusions (spreadsheets, rule-based automations, traditional BI). Ambiguities are escalated to the policy owner. (p. 4)  

Seven Policy Statements.

  1. Ethical & human-centred use aligned to Australia’s AI Ethics Principles; no deception/manipulation.
  2. Clear accountability: each system has an accountable person; responsibilities across the supply chain are documented.
  3. Risk & impact assessment before use; proportional controls; engage stakeholders, especially vulnerable groups.
  4. Quality, reliability & security: rigorous pre-deployment testing, continuous monitoring, privacy/security safeguards.
  5. Fairness & inclusion: avoid unfair discrimination; be extra careful where outcomes affect marginalised groups.
  6. Transparency & contestability: inform impacted parties as appropriate; maintain an AI register; keep records to enable contestation of significant AI-assisted decisions.
  7. Human oversight & control: humans can pause/override; higher oversight for higher impact; maintain manual alternatives for critical services. (pp. 5–6)  
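Policy statement 7 (human oversight and control) can be pictured as a thin wrapper around any AI system. This is a hypothetical sketch, not from the guide itself: the class name, methods, and the lambda "model" are all illustrative, but the pause/override behaviour mirrors the requirement that humans can stop or overrule the system.

```python
# Hypothetical sketch of policy statement 7 (human oversight & control):
# a wrapper that lets a human pause the system or override its outputs.
# Names and structure are illustrative, not from the guide.
class OverseenSystem:
    def __init__(self, model):
        self.model = model
        self.paused = False          # humans can pause/stop the system
        self.override_output = None  # humans can override its outputs

    def pause(self):
        self.paused = True

    def run(self, inputs):
        if self.paused:
            # mirrors the guide's documented manual fallback requirement
            raise RuntimeError("system paused: use the manual fallback process")
        if self.override_output is not None:
            return self.override_output
        return self.model(inputs)

system = OverseenSystem(lambda x: x.upper())
print(system.run("approve"))                 # APPROVE
system.override_output = "refer to human reviewer"
print(system.run("approve"))                 # refer to human reviewer
```

The point of the sketch is that oversight is structural: the override path exists before deployment, rather than being improvised during an incident.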

Governance & Compliance. A practical RACI-like table (pp. 7–9) details roles:

  • AI policy owner (senior leader; champions policy, owns capabilities/risks, ensures training).
  • Policy approvers (CEO/Board or committee; align policy to strategy/risk appetite; approve major updates).
  • Compliance monitor (e.g., Internal Audit/Risk/Ops; audits artefacts like pre-screen triage, tracks incidents, reports compliance).
  • AI governance committee/authority (expert consultation; escalation point; reviews high-risk systems).
  • AI system owner (accountable across the lifecycle and documentation).
  • All employees/volunteers (follow policy, complete training, report incidents).

The New AI use case procedure mandates a screening step that classifies proposals (e.g., normal/elevated/prohibited) and calibrates oversight proportionally, so governance effort matches impact. (p. 9)
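The proportional screening step could be sketched as a simple triage function. This is a hypothetical illustration only: the guide's companion screening tool defines its own criteria, and the risk signals below (vulnerable groups, significant decisions, prohibited purposes) are assumptions drawn from the policy statements.

```python
from dataclasses import dataclass

# Hypothetical triage sketch: classifies a proposed AI use case into the
# guide's example tiers (normal / elevated / prohibited). The official
# screening tool uses its own criteria; these signals are illustrative.

PROHIBITED_PURPOSES = {"deception", "manipulation"}  # per policy statement 1

@dataclass
class UseCase:
    name: str
    purpose: str                      # e.g. "customer support"
    affects_vulnerable_groups: bool   # policy statements 3 and 5
    makes_significant_decisions: bool # policy statement 6 (contestability)

def screen(use_case: UseCase) -> str:
    """Return the oversight tier for a proposed AI use case."""
    if use_case.purpose in PROHIBITED_PURPOSES:
        return "prohibited"
    if use_case.affects_vulnerable_groups or use_case.makes_significant_decisions:
        return "elevated"   # higher oversight, e.g. governance committee review
    return "normal"         # standard controls apply

# Example: a credit-scoring assistant that affects significant decisions
print(screen(UseCase("loan assistant", "credit scoring", False, True)))  # elevated
```

The design choice the guide is making is the same one the sketch encodes: triage first, so that only elevated cases consume committee time.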

Incident Management. Incidents are reported to the system owner and/or compliance monitor and handled via existing organisational procedures. The guide stresses the ability to take systems offline and maintain documented manual fallbacks for continuity. (p. 10)  

Policy Review. Annual review plus ad-hoc triggers: significant incidents, new impactful technologies, or changes to laws/standards. Reviews are led by the policy owner with governance committee input; substantive changes require formal approval. (p. 11)  

Appendix. Links to Australian Government tools/resources: AI register template, AI screening tool, Australia’s AI Ethics Principles, Voluntary AI Safety Standard, and glossary—useful scaffolding for immediate adoption. (p. 12)  
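The AI register referenced in the appendix could be modelled as one record per system. A minimal sketch follows; the field names are assumptions for illustration, not taken from the official register template.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of an AI register entry. Field names are illustrative;
# the official AI register template defines the actual fields.
@dataclass
class RegisterEntry:
    system_name: str
    accountable_owner: str   # policy statement 2: one accountable person
    risk_tier: str           # e.g. "normal" or "elevated" from screening
    last_reviewed: date      # supports the annual review cadence
    manual_fallback: str     # documented alternative for critical services

register: list[RegisterEntry] = [
    RegisterEntry("support chatbot", "Head of Service", "normal",
                  date(2025, 10, 1), "phone support queue"),
]

# Contestability (policy statement 6) starts with being able to look up
# who is accountable for a given system:
owners = {e.system_name: e.accountable_owner for e in register}
print(owners["support chatbot"])  # Head of Service
```

Even a register this simple gives auditors the three things the guide asks for: a named owner, a risk tier, and a documented fallback.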

💡 Why it matters

The document translates broad AI ethics into enforceable organisational practice. By tying principles to roles, screening, and incident/review workflows, it shortens the path from "we should be responsible" to "here is who decides, when, and how." The pre-screening triage and proportional oversight help small teams avoid over-engineering while still meeting stakeholder and auditor expectations. The AI register, contestability records, and manual fallback requirements strengthen accountability and resilience for real-world operations.

❓ What’s Missing

  • Cross-jurisdiction mapping: No built-in alignment matrix to other global frameworks (e.g., sectoral rules), which many multinationals will need.
  • Procurement depth: Limited specifics on third-party assurance (e.g., model cards, vulnerability disclosure, SLAs for safety fixes).
  • Measurement: No KPIs/quality thresholds for monitoring drift, bias, or security beyond “rigorous testing/continuous monitoring.”
  • Training curricula: Roles require training, but content/levels are not prescribed.
  • Templates & records: Mentions an AI register and screening tool, but doesn’t include filled examples or decision logs inside the PDF.  

👥 Best For

  • SMEs and public agencies needing a credible baseline AI policy quickly.
  • Risk/Compliance, Legal, and CTO/CIO offices formalising AI governance with clear ownership.
  • Business units piloting genAI/ML who need a proportionate screening and approval pathway before scaling.  

📄 Source Details

Australian Government – Department of Industry, Science and Resources; National Artificial Intelligence Centre. AI policy guide and template, v1.0 (October 2025). 12 pages, with tables outlining roles (pp. 7–9) and sectioned guidance for purpose, scope, policy statements, governance, incidents, review, and appendix resources.  

📝 Thanks to

National Artificial Intelligence Centre (Australia) and the Department of Industry, Science and Resources for publishing a practical, adaptable baseline policy and pointing to companion tools (AI register and screening).  

About the author
Jakub Szarmach

