Set Up AI Governance

You need to establish AI governance for your team, division, or agency. This journey helps you build a practical framework, not just paperwork.

This journey has five steps: Assess, Design, Document, Implement, Embed.

Step 1: Assess Your Starting Point

Understand what exists and what you're governing.

Current State Assessment

AI Inventory:

  • What AI/ML systems exist in your area?
  • What GenAI tools are people using?
  • What's in development?
  • What's being considered?
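The inventory can live in a spreadsheet, but a structured record keeps it queryable as it grows. A minimal sketch of one entry, assuming Python; the record fields (owner, ai_type, stage, vendor) are illustrative, not a prescribed schema:

```python
# Sketch of an AI inventory record; field names are illustrative only.
from dataclasses import dataclass
from enum import Enum

class AIType(Enum):
    ML_MODEL = "ml_model"
    GENAI = "genai"
    CONSUMER_TOOL = "consumer_tool"

class Stage(Enum):
    CONSIDERED = "considered"
    IN_DEVELOPMENT = "in_development"
    DEPLOYED = "deployed"
    RETIRED = "retired"

@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable business owner
    ai_type: AIType
    stage: Stage
    vendor: str | None = None  # for procured or consumer tools

# Hypothetical entries gathered during the assessment:
inventory = [
    AISystemRecord("Grant triage model", "Programs Division",
                   AIType.ML_MODEL, Stage.DEPLOYED),
    AISystemRecord("Drafting assistant", "Corporate Services",
                   AIType.CONSUMER_TOOL, Stage.CONSIDERED,
                   vendor="ExampleVendor"),
]
```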

Existing Governance:

  • What policies already apply (agency, whole-of-government)?
  • What governance bodies exist?
  • What processes are in place?
  • What's working? What isn't?

Scope Definition

| Question | Your Answer |
|---|---|
| What AI types are in scope? | ML models, GenAI, both, consumer tools? |
| What organisational scope? | Team, division, agency? |
| What lifecycle stages? | Development, deployment, operations, all? |
| What risk levels? | All AI or high-risk only? |
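Recorded as a lightweight config, one team's scope decisions might look like the sketch below (the keys and values are hypothetical answers, not a mandated format):

```python
# Hypothetical scope definition for one division; adjust values to your answers.
governance_scope = {
    "ai_types": ["ml_models", "genai"],   # consumer tools covered by a separate policy
    "organisational_scope": "division",
    "lifecycle_stages": ["development", "deployment", "operations"],
    "risk_levels": "all",                 # govern all AI, not just high-risk
}
```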

Step 2: Design the Framework

Build governance that's proportionate and practical.

Core Components

Every AI governance framework needs:

| Component | Purpose | Key Question |
|---|---|---|
| Principles | Guide decision-making | What do we stand for? |
| Roles | Define accountability | Who's responsible for what? |
| Processes | Enable consistent decisions | How do we make decisions? |
| Controls | Manage risk | What safeguards exist? |
| Oversight | Ensure compliance | How do we check it's working? |

Risk-Based Approach

Not all AI needs the same governance. Tier your approach:

| Risk Level | Governance Intensity | Examples |
|---|---|---|
| High | Full assessment, board approval, ongoing monitoring | Decisions affecting rights, safety, significant resources |
| Medium | Standard assessment, senior approval, periodic review | Operational decisions, internal tools |
| Low | Light-touch review, team approval, exception-based monitoring | Productivity tools, low-stakes recommendations |
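The tiering rule can be made explicit so triage decisions are consistent. A minimal sketch, assuming Python; the criteria flags are hypothetical stand-ins for whatever your risk framework actually defines:

```python
# Sketch of a risk-tier triage rule; the criteria names are illustrative.
def triage_risk_tier(affects_rights_or_safety: bool,
                     significant_resources: bool,
                     operational_decision: bool,
                     internal_tool: bool) -> str:
    """Return the governance tier for a proposed AI use."""
    if affects_rights_or_safety or significant_resources:
        return "high"    # full assessment, board approval, ongoing monitoring
    if operational_decision or internal_tool:
        return "medium"  # standard assessment, senior approval, periodic review
    return "low"         # light-touch review, team approval, exception-based monitoring

# Example: a low-stakes recommendation feature lands in the light-touch tier.
assert triage_risk_tier(False, False, False, False) == "low"
```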

Governance Bodies

Options based on scale:

Team scale:

  • AI Lead accountable
  • Peer review process
  • Escalation to management
  • No separate committee needed

Division scale:

  • AI Working Group (monthly)
  • Division Head accountable
  • Representatives from key functions
  • Escalation to agency body

Agency scale:

  • AI Governance Committee (quarterly)
  • SES Chair
  • Cross-functional membership
  • Secretariat support
  • Links to other governance (IT, data, security)

Step 3: Document the Framework

Write it down—but keep it usable.

Essential Documents

| Document | Content | Audience |
|---|---|---|
| AI Policy | Principles, scope, requirements | All staff |
| Governance Charter | Roles, bodies, authorities | Governance participants |
| Risk Framework | Risk categories, assessment approach, tolerances | Project teams |
| Process Guide | How to get AI approved, monitored, retired | Practitioners |
| Quick Reference | Decision tree, key contacts, common scenarios | Everyone |

Avoid Governance Theatre

Read: Governance Theatre

Signs your framework is performative, not practical:

  • Documents exist but aren't used
  • Approvals are rubber stamps
  • No one reads the policies
  • Process is check-box compliance
  • Governance slows everything but catches nothing

Step 4: Implement the Framework

Move from documents to practice.

Implementation Phases

Phase 1: Foundation (Weeks 1-4)

  • Publish core documents
  • Brief governance body members
  • Communicate to stakeholders
  • Set up tracking mechanisms

Phase 2: Pilot (Weeks 5-8)

  • Apply to new projects
  • Refine processes based on experience
  • Build templates and tools
  • Train first wave of users

Phase 3: Full Operation (Weeks 9-12)

  • Apply to all in-scope AI
  • Conduct first governance reviews
  • Gather feedback
  • Iterate based on learning

Enabling Adoption

Make governance easy to follow:

  • Clear entry point (who to contact first)
  • Simple templates and checklists
  • Quick turnaround on reviews
  • Helpful feedback, not just rejection
  • Examples of good practice
  • Training and support available

Step 5: Embed and Evolve

Governance isn't a project—it's an ongoing practice.

Embedding Practices

| Practice | Frequency | Purpose |
|---|---|---|
| AI inventory updates | Quarterly | Know what exists |
| Risk register reviews | Quarterly | Current risk picture |
| Governance body meetings | Monthly/Quarterly | Active oversight |
| Framework review | Annually | Keep framework current |
| Lessons learned capture | Ongoing | Continuous improvement |
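A simple due-date tracker helps these cadences survive staff turnover. A sketch under the assumption that each practice has a single last-completed date; frequencies mirror the table above:

```python
# Sketch: when is each embedding practice next due? Dates are illustrative.
from datetime import date, timedelta

FREQUENCY_DAYS = {"monthly": 30, "quarterly": 91, "annually": 365}  # approximate

practices = {
    "AI inventory update": "quarterly",
    "Risk register review": "quarterly",
    "Governance body meeting": "monthly",
    "Framework review": "annually",
}

last_completed = date(2025, 1, 15)  # hypothetical last-completed date
for name, freq in practices.items():
    next_due = last_completed + timedelta(days=FREQUENCY_DAYS[freq])
    print(f"{name}: next due {next_due}")
```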

Metrics That Matter

Track whether governance is working:

| Metric | What It Shows |
|---|---|
| Time to approval | Governance efficiency |
| Approval vs rejection rate | Risk appetite in practice |
| Policy compliance | Framework adoption |
| Incidents prevented/caught | Value of governance |
| User satisfaction | Practical usability |
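The first two metrics fall straight out of an approvals log. A sketch assuming hypothetical record fields (submitted, decided, outcome):

```python
# Sketch: computing approval metrics from a hypothetical approvals log.
from datetime import date

approvals_log = [
    {"submitted": date(2025, 3, 1), "decided": date(2025, 3, 8), "outcome": "approved"},
    {"submitted": date(2025, 3, 5), "decided": date(2025, 3, 20), "outcome": "rejected"},
    {"submitted": date(2025, 4, 2), "decided": date(2025, 4, 9), "outcome": "approved"},
]

# Time to approval: governance efficiency.
days = [(r["decided"] - r["submitted"]).days for r in approvals_log]
avg_days_to_decision = sum(days) / len(days)

# Approval vs rejection rate: risk appetite in practice.
approval_rate = sum(r["outcome"] == "approved" for r in approvals_log) / len(approvals_log)

print(f"Average days to decision: {avg_days_to_decision:.1f}")
print(f"Approval rate: {approval_rate:.0%}")
```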

Evolution Triggers

Update your framework when:

  • New government policy issued
  • New AI capabilities emerge (e.g., GenAI)
  • Incident reveals gaps
  • Audit findings require changes
  • User feedback highlights problems
  • Scale of AI use changes significantly

Governance for Different AI Types

ML Models

Focus areas:

  • Training data provenance
  • Model validation
  • Bias and fairness
  • Performance monitoring
  • Retraining governance

GenAI

Focus areas:

  • Acceptable use policy
  • Prompt management
  • Output validation
  • Vendor management
  • Cost governance

Consumer AI Tools

Focus areas:

  • Approved tool list
  • Data handling rules
  • Use case boundaries
  • Training requirements
  • Monitoring approach
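An approved tool list plus data handling rules reduces to a simple allowlist check. A sketch with hypothetical tool names, rule fields, and data classifications:

```python
# Sketch: checking consumer AI tool use against an approved list.
# Tool names, rule fields, and classifications are hypothetical.
APPROVED_TOOLS = {
    "ExampleChat": {"allowed_classification": "OFFICIAL", "training_required": True},
}

def may_use(tool: str, data_classification: str, user_trained: bool) -> bool:
    rule = APPROVED_TOOLS.get(tool)
    if rule is None:
        return False  # not on the approved tool list
    if data_classification != rule["allowed_classification"]:
        return False  # outside the data handling rules
    return user_trained or not rule["training_required"]

# Example: an unapproved tool is rejected regardless of data or training.
assert may_use("RandomChatbot", "OFFICIAL", True) is False
```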