Ethical AI Decision Guide


Purpose

This guide provides a practical framework for making ethical decisions when designing, developing, and deploying AI systems in government. It translates Australia's AI Ethics Principles into actionable decision-making tools.


Before Making Any AI Decision, Ask:
  • Does this respect human rights and dignity?
  • Could this harm individuals or communities?
  • Is this fair and non-discriminatory?
  • Can we explain how decisions are made?
  • Are we being transparent about AI use?
  • Who is accountable if something goes wrong?
  • Have we considered diverse perspectives?

1. Australia's AI Ethics Principles

1.1 The Eight Principles

| Principle | Core Question | Key Considerations |
|-----------|---------------|--------------------|
| 1. Human, societal and environmental wellbeing | Does this benefit people and society? | Long-term impacts, sustainability, public good |
| 2. Human-centred values | Does this respect rights and dignity? | Autonomy, privacy, cultural values |
| 3. Fairness | Does this treat everyone equitably? | Bias, discrimination, equal access |
| 4. Privacy protection and security | Is personal information protected? | Data minimization, consent, security |
| 5. Reliability and safety | Does this work as intended safely? | Testing, fail-safes, risk management |
| 6. Transparency and explainability | Can we explain how it works? | Documentation, interpretability, disclosure |
| 7. Contestability | Can decisions be challenged? | Appeal processes, human review |
| 8. Accountability | Who is responsible? | Governance, oversight, liability |

1.2 Principle Interactions

Some decisions involve trade-offs between principles:

| Trade-off | Example | Resolution Approach |
|-----------|---------|---------------------|
| Fairness vs Privacy | Collecting demographic data for bias testing | Anonymize data; use statistical proxies |
| Explainability vs Accuracy | Complex models perform better but are opaque | Use interpretable models for high-stakes decisions |
| Safety vs Innovation | New AI could help but carries risks | Staged rollout with monitoring |
| Transparency vs Security | Disclosing AI details could enable gaming | Share high-level information; protect specifics |

2. Ethical Decision Framework

2.1 The ETHICS Decision Process

```mermaid
flowchart LR
    E["<strong>E</strong><br/>Examine<br/>the situation"] --> T["<strong>T</strong><br/>Think through<br/>stakeholders"]
    T --> H["<strong>H</strong><br/>Highlight<br/>risks & harms"]
    H --> I["<strong>I</strong><br/>Identify<br/>options"]
    I --> C["<strong>C</strong><br/>Choose &<br/>justify"]
    C --> S["<strong>S</strong><br/>Safeguard &<br/>monitor"]

    style E fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style T fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
    style H fill:#ffcc80,stroke:#ef6c00,stroke-width:2px
    style I fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style C fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style S fill:#e0f2f1,stroke:#00796b,stroke-width:2px
```

2.2 Step 1: Examine the Situation

Questions to answer:

| Question | Your Response |
|----------|---------------|
| What is the AI being asked to do? | |
| What decisions will it make or support? | |
| What data will it use? | |
| Who will be affected by the AI? | |
| What is the context of use? | |
| What are the stakes involved? | |

Risk Tier Assessment:

| Tier | Characteristics | Example Uses | Ethics Scrutiny |
|------|-----------------|--------------|-----------------|
| Tier 1: Minimal | No individual impact, internal only | Document summarization, internal analytics | Standard review |
| Tier 2: Limited | Some individual impact, human oversight | Search ranking, recommendation | Ethics checklist |
| Tier 3: Significant | Affects access to services or rights | Benefit eligibility triage, risk scoring | Full ethics review |
| Tier 4: High | Major impact on fundamental rights | Enforcement targeting, automated decisions | Ethics board approval |

2.3 Step 2: Think Through Stakeholders

Stakeholder Analysis:

| Stakeholder Group | How Affected | Concerns | Voice in Process |
|-------------------|--------------|----------|------------------|
| Primary users (staff) | | | |
| Affected individuals | | | |
| Vulnerable groups | | | |
| Oversight bodies | | | |
| General public | | | |

Key Questions:

  • Have we consulted affected communities?
  • Are vulnerable groups disproportionately affected?
  • Have we included diverse perspectives in design?
  • Are there cultural considerations we need to address?

2.4 Step 3: Highlight Risks and Harms

Harm Categories:

| Harm Type | Description | Examples | Likelihood | Severity |
|-----------|-------------|----------|------------|----------|
| Physical | Bodily harm or safety risks | Medical AI errors, infrastructure failures | | |
| Psychological | Mental distress or trauma | Insensitive interactions, anxiety from AI decisions | | |
| Financial | Economic loss or disadvantage | Incorrect benefit denials, unfair pricing | | |
| Reputational | Damage to standing or dignity | Privacy violations, profiling | | |
| Discriminatory | Unfair treatment of groups | Biased hiring, discriminatory access | | |
| Privacy | Exposure of personal information | Data breaches, inference attacks | | |
| Democratic | Impact on civic participation | Manipulation, suppression | | |
| Environmental | Ecological impact | Energy consumption, e-waste | | |

Harm Assessment Questions:

| Question | Response |
|----------|----------|
| What could go wrong? | |
| Who would be harmed and how? | |
| How likely is harm to occur? | |
| How severe would the harm be? | |
| Can harms be reversed or remedied? | |
| Are some groups more at risk than others? | |
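
To make the likelihood and severity columns comparable across harm types, some teams combine them into a simple risk matrix. The sketch below is a minimal illustration: the five-point scales, band thresholds, and escalation notes are assumptions for discussion, not values mandated by this guide.

```python
# Illustrative harm risk matrix. The scales and band thresholds are
# assumptions, not values prescribed by this guide.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def harm_risk(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity ratings into a qualitative band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 12:
        return "high"    # escalate: full ethics review before proceeding
    if score >= 6:
        return "medium"  # mitigations and monitoring required
    return "low"         # document and proceed

# Example: a possible, major harm (e.g. an incorrect benefit denial).
print(harm_risk("possible", "major"))  # -> "high"
```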

2.5 Step 4: Identify Options

Option Generation:

Consider alternatives including:

  1. Proceed as planned - Accept current design
  2. Modify the approach - Add safeguards or constraints
  3. Alternative technology - Non-AI or different AI approach
  4. Enhanced oversight - Add human review
  5. Phased approach - Start limited, expand gradually
  6. Delay or defer - Wait for better solutions
  7. Do not proceed - Reject the use case

Options Evaluation Matrix:

| Option | Ethical Concerns Addressed | Practical Feasibility | Residual Risks |
|--------|----------------------------|-----------------------|----------------|
| Option 1 | | | |
| Option 2 | | | |
| Option 3 | | | |

2.6 Step 5: Choose and Justify

Decision Documentation:

| Element | Documentation |
|---------|---------------|
| Decision made | |
| Rationale | |
| Principles prioritized | |
| Trade-offs accepted | |
| Safeguards required | |
| Conditions attached | |
| Dissenting views | |
| Approval authority | |

Justification Test:

  • Would I be comfortable if this decision were made public?
  • Would I be comfortable if I were the person affected?
  • Can I explain this decision to a non-expert?
  • Does this decision align with our stated values?

2.7 Step 6: Safeguard and Monitor

Safeguards Checklist:

| Safeguard | Implementation | Owner | Status |
|-----------|----------------|-------|--------|
| Human oversight | | | |
| Appeal/review process | | | |
| Bias monitoring | | | |
| Performance monitoring | | | |
| Regular ethics review | | | |
| Sunset clause | | | |

Ongoing Monitoring:

  • Define ethics metrics to track
  • Schedule regular ethics reviews
  • Establish feedback mechanisms
  • Document issues and responses
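
As one sketch of what "define ethics metrics to track" could look like in practice, the snippet below checks a handful of hypothetical metrics against alert thresholds. The metric names, values, and limits are illustrative assumptions, not prescribed targets.

```python
# Hypothetical ethics metrics with alert thresholds; names and limits
# are illustrative assumptions only.
ETHICS_METRICS = {
    # metric: (current_value, limit, direction)
    "appeal_rate": (0.06, 0.05, "max"),            # share of decisions appealed
    "human_override_rate": (0.12, 0.25, "max"),    # humans overriding AI output
    "disparate_impact_ratio": (0.83, 0.80, "min"), # lowest/highest group rate
}

def breached(value: float, limit: float, direction: str) -> bool:
    """A 'max' metric breaches above its limit; a 'min' metric below it."""
    return value > limit if direction == "max" else value < limit

for name, (value, limit, direction) in ETHICS_METRICS.items():
    if breached(value, limit, direction):
        print(f"ALERT: {name}={value} breaches {direction} limit of {limit}")
```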


3. Common Ethical Scenarios

3.1 Scenario: Using AI to Prioritize Service Requests

Situation: An agency wants to use AI to triage and prioritize citizen service requests.

Ethical Analysis:

| Principle | Consideration | Recommendation |
|-----------|---------------|----------------|
| Fairness | Risk of systematically deprioritizing certain groups | Test for disparate impact; ensure diverse training data |
| Transparency | Citizens should know AI is involved | Disclose AI use in service charter |
| Contestability | People should be able to challenge prioritization | Provide easy escalation path |
| Accountability | Clear ownership of prioritization outcomes | Name responsible officer |

Decision Framework:

  • LOW risk if used to assist humans who make final decisions
  • MEDIUM risk if it significantly affects service timing
  • HIGH risk if it effectively determines service access
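
The fairness recommendation above calls for disparate-impact testing. A minimal sketch, assuming hypothetical per-group triage rates, is the widely used "four-fifths" rule of thumb: flag for investigation if the lowest group's selection rate falls below 80% of the highest group's.

```python
# Four-fifths rule sketch for the disparate-impact test recommended above.
# Group labels and rates are hypothetical.

def disparate_impact_ratio(selection_rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(selection_rates.values()) / max(selection_rates.values())

# Share of requests triaged as "priority", per demographic group (made up).
rates = {"group_a": 0.42, "group_b": 0.31, "group_c": 0.40}

ratio = disparate_impact_ratio(rates)
if ratio < 0.8:  # a screening trigger for investigation, not proof of bias
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```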

3.2 Scenario: Predictive Risk Scoring

Situation: Using AI to identify high-risk cases for intervention (fraud, child safety, health).

Ethical Analysis:

| Principle | Consideration | Recommendation |
|-----------|---------------|----------------|
| Fairness | High risk of encoding historical biases | Extensive bias testing; avoid proxies for protected attributes |
| Privacy | Requires significant personal data | Data minimization; purpose limitation |
| Human-centred values | Risk of treating people as statistics | Ensure human review of all flagged cases |
| Contestability | High-stakes decisions need challenge rights | Robust review process |

Red Lines:

  • No fully automated decisions for significant outcomes
  • No use of protected attributes as direct inputs
  • Must be able to explain why a case was flagged
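
The "no proxies" red line is hard to enforce by inspection alone, because neutral-looking features (such as postcode) can stand in for protected attributes. One hedged approach is a correlation screen over candidate inputs; the 0.5 threshold is an arbitrary starting point, and the sketch assumes all columns are numerically encoded.

```python
# Illustrative proxy screen: flag candidate input features that correlate
# strongly with a protected attribute. Assumes numerically encoded columns;
# the threshold is an assumption, not a standard.

import pandas as pd

def proxy_screen(df: pd.DataFrame, protected: str, threshold: float = 0.5) -> list[str]:
    """Return feature columns whose absolute correlation with the
    protected attribute exceeds the threshold."""
    flagged = []
    for col in df.columns:
        if col == protected:
            continue
        if abs(df[col].corr(df[protected])) > threshold:
            flagged.append(col)
    return flagged

# Usage (hypothetical column name): proxy_screen(training_df, protected="age_group")
```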

3.3 Scenario: Chatbot for Public Services

Situation: Deploying an AI chatbot to handle citizen inquiries.

Ethical Analysis:

| Principle | Consideration | Recommendation |
|-----------|---------------|----------------|
| Transparency | Users should know they're talking to AI | Clear disclosure; don't impersonate humans |
| Reliability | Must provide accurate information | Regular quality checks; clear limits |
| Accessibility | Must work for diverse users | Multiple channels; escalation to humans |
| Privacy | May collect sensitive information | Minimize data collection; clear notices |

Guidelines:

  • Always identify as AI
  • Provide easy escalation to a human
  • Don't handle high-stakes decisions
  • Don't collect more information than needed
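
A minimal sketch of how the first three guidelines might be wired into a chatbot's response path follows; the topic set, confidence threshold, message wording, and function shape are assumptions rather than a prescribed design.

```python
# Sketch of chatbot guardrails: always disclose AI, and hand off to a
# human for high-stakes topics or low-confidence answers. All names and
# thresholds here are illustrative assumptions.

HIGH_STAKES_TOPICS = {"benefit_eligibility", "legal", "medical"}
CONFIDENCE_THRESHOLD = 0.7
DISCLOSURE = "You are chatting with an automated assistant."

def respond(topic: str, draft_answer: str, confidence: float) -> str:
    """Return a disclosed answer, or escalate instead of guessing."""
    if topic in HIGH_STAKES_TOPICS or confidence < CONFIDENCE_THRESHOLD:
        return f"{DISCLOSURE} I'll connect you with a staff member."
    return f"{DISCLOSURE} {draft_answer}"

print(respond("opening_hours", "We're open 9am-5pm weekdays.", 0.95))
```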

3.4 Scenario: Using External AI Services (e.g., Large Language Models)

Situation: Using third-party AI services for government functions.

Ethical Analysis:

| Principle | Consideration | Recommendation |
|-----------|---------------|----------------|
| Privacy | Data may leave government control | No personal information without appropriate agreements |
| Accountability | Shared responsibility with vendor | Clear contractual terms |
| Reliability | May produce inaccurate information | Human verification required |
| Transparency | Third-party "black box" | Understand and document limitations |

Usage Guidelines:

  • Never input classified or sensitive information without appropriate approvals
  • Never use outputs for decisions without verification
  • Document limitations and failure modes
  • Ensure appropriate data agreements with vendors
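
One way to operationalize the "never input sensitive information" guideline is a pre-submission guard that blocks prompts containing obvious personal identifiers before they reach a third-party service. The patterns below are illustrative and far from exhaustive; a real deployment would rely on an approved PII-detection tool.

```python
# Illustrative pre-submission guard for third-party AI services. The
# regex patterns are rough examples only and will miss most real PII.

import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "nine_digit_id": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # e.g. TFN-like
    "au_phone": re.compile(r"\b(?:\+?61|0)\d{9}\b"),
}

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain personal identifiers."""
    return not any(p.search(prompt) for p in PII_PATTERNS.values())

assert safe_to_send("Summarise our public recycling policy")
assert not safe_to_send("Draft a letter to jane.citizen@example.com")
```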


4. Ethics Review Process

4.1 When Ethics Review is Required

| Trigger | Review Type |
|---------|-------------|
| New AI system development | Full ethics assessment |
| Significant change to existing AI | Change impact review |
| New data source for AI | Data ethics review |
| AI affecting vulnerable groups | Enhanced review |
| AI with enforcement/compliance role | Mandatory ethics board review |
| Any Tier 3-4 AI use | Ethics committee approval |

4.2 Ethics Assessment Template

Section 1: AI Description

| Field | Response |
|-------|----------|
| AI system name | |
| Purpose | |
| Type of AI | |
| Decision type | |
| Affected groups | |

Section 2: Principle Assessment

| Principle | How Addressed | Evidence | Gaps | Mitigations |
|-----------|---------------|----------|------|-------------|
| Human wellbeing | | | | |
| Human-centred values | | | | |
| Fairness | | | | |
| Privacy and security | | | | |
| Reliability and safety | | | | |
| Transparency and explainability | | | | |
| Contestability | | | | |
| Accountability | | | | |

Section 3: Risk Assessment

| Risk | Likelihood | Severity | Mitigation | Residual Risk |
|------|------------|----------|------------|---------------|
| | | | | |

Section 4: Recommendation

| Recommendation | Conditions | Review Date |
|----------------|------------|-------------|
| Approve / Approve with conditions / Reject | | |

4.3 Ethics Governance Structure

```mermaid
flowchart TB
    EXEC["<strong>EXECUTIVE SPONSOR</strong><br/>Accountable for AI ethics"] --> EB
    EXEC --> EL
    EXEC --> PT

    subgraph EB["<strong>ETHICS BOARD</strong>"]
        EB1[Tier 3-4 approvals]
    end

    subgraph EL["<strong>ETHICS LEAD</strong>"]
        EL1[Guidance & advice]
    end

    subgraph PT["<strong>PROJECT TEAMS</strong>"]
        PT1[Ethics-by-design]
    end

    style EXEC fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style EB fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style EL fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style PT fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
```

5. Ethical Red Lines

5.1 Prohibited Uses

The following AI uses are generally prohibited in Australian government:

| Prohibited Use | Rationale | Exception Process |
|----------------|-----------|-------------------|
| Mass surveillance of citizens | Privacy, human rights | None |
| Social scoring of citizens | Autonomy, dignity | None |
| Manipulation of democratic processes | Democratic values | None |
| Lethal autonomous weapons | Human control of force | Defence policy |
| Discrimination based on protected attributes | Anti-discrimination law | None |

5.2 High-Risk Uses Requiring Special Approval

| Use Case | Required Approval | Additional Safeguards |
|----------|-------------------|-----------------------|
| AI in criminal justice | Minister + Ethics Board | Independent oversight |
| AI affecting children | Ethics Board + Child safety | Guardian notification |
| AI in health decisions | Ethics Board + Clinical | Clinician oversight |
| AI denying benefits/services | Ethics Board + Legal | Full appeal rights |
| AI in national security | Security Committee | Classified oversight |

5.3 Ethical Boundaries Checklist

Before proceeding, confirm:

- [ ] This is not a prohibited use
- [ ] Appropriate approval obtained for high-risk use
- [ ] Human oversight proportionate to risk
- [ ] Appeal/contestability mechanism in place
- [ ] Affected individuals informed of AI use
- [ ] Data use is lawful and proportionate
- [ ] No discrimination against protected groups
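
Teams that automate their release pipelines sometimes encode such checklists as a hard gate, so a deployment cannot proceed with any box unticked. A minimal sketch follows, with field names mirroring the checklist above; the structure is an assumption, not a mandated format.

```python
# Sketch of the boundaries checklist as a deployment gate; field names
# mirror the checklist items above and are otherwise assumptions.

from dataclasses import dataclass, fields

@dataclass
class BoundaryChecks:
    not_prohibited_use: bool
    high_risk_approval_obtained: bool
    proportionate_human_oversight: bool
    contestability_mechanism: bool
    individuals_informed: bool
    lawful_proportionate_data_use: bool
    no_protected_group_discrimination: bool

def may_proceed(checks: BoundaryChecks) -> bool:
    """Every boundary check must pass; any single failure blocks release."""
    return all(getattr(checks, f.name) for f in fields(checks))
```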


6. Handling Ethical Dilemmas

6.1 When Principles Conflict

Resolution Hierarchy:

  1. Fundamental rights take priority (non-discrimination, privacy)
  2. Safety and wellbeing next (prevent harm)
  3. Procedural fairness follows (transparency, contestability)
  4. Operational considerations last (efficiency, cost)

6.2 Escalation Process

If you encounter an ethical dilemma you cannot resolve:

  1. Document the dilemma clearly
  2. Consult Ethics Lead or designated ethics advisor
  3. If unresolved, escalate to Ethics Board
  4. If still unresolved, escalate to Executive Sponsor
  5. Document final decision and rationale

6.3 Whistleblowing and Raising Concerns

If you believe an AI system raises serious ethical concerns:

  1. Raise with your manager in first instance
  2. If unresolved, contact Ethics Lead
  3. If still unresolved, use formal complaint channels
  4. Public Interest Disclosure provisions apply

Protection: Staff who raise ethical concerns in good faith are protected under PID legislation.


7. Embedding Ethics in AI Lifecycle

7.1 Ethics at Each Stage

| Stage | Ethics Activities | Deliverable |
|-------|-------------------|-------------|
| Ideation | Initial ethics screening | Ethics tier classification |
| Discovery | Stakeholder ethics concerns | Ethics requirements |
| Design | Ethics-by-design review | Ethics design document |
| Development | Bias testing, fairness checks | Fairness report |
| Testing | Ethics testing scenarios | Ethics test results |
| Deployment | Ethics go/no-go decision | Ethics clearance |
| Operations | Ongoing ethics monitoring | Ethics dashboard |
| Retirement | Ethics review of outcomes | Lessons learned |

7.2 Ethics in Agile/Sprint Processes

| Sprint Activity | Ethics Integration |
|-----------------|--------------------|
| Backlog grooming | Flag items with ethics implications |
| Sprint planning | Include ethics tasks in sprint |
| Daily standups | Raise ethics concerns early |
| Sprint review | Demo ethics features (explainability, etc.) |
| Retrospective | Discuss ethics lessons learned |

8. Tools and Checklists

8.1 Quick Ethics Checklist

Before any AI decision or milestone, confirm:

Fairness:

- [ ] Tested for bias across demographic groups
- [ ] No use of protected attributes as direct inputs
- [ ] Diverse training data

Transparency:

- [ ] AI use disclosed to affected parties
- [ ] Decision logic documented
- [ ] Limitations documented

Accountability:

- [ ] Clear ownership assigned
- [ ] Escalation path defined
- [ ] Monitoring in place

Contestability:

- [ ] Appeal process available
- [ ] Human review option exists
- [ ] Review timeframes appropriate

Privacy:

- [ ] Data collection minimized
- [ ] Consent obtained where required
- [ ] Security controls implemented

8.2 Ethics Conversation Starters

Use these questions in team discussions:

  1. "What's the worst that could happen with this AI?"
  2. "Who might be harmed by this, and how?"
  3. "If this decision was in the news, how would we feel?"
  4. "Would we be comfortable if this happened to us?"
  5. "Have we heard from the people affected?"
  6. "What are we assuming that might not be true?"
  7. "Can we explain this to a non-technical person?"
  8. "What would we do if this goes wrong?"

8.3 Ethics Decision Tree

```mermaid
flowchart TB
    START([Is AI being used?]) --> Q1{Does it affect<br/>individuals?}

    Q1 -->|No| STD[Standard process]
    Q1 -->|Yes| Q2{Are effects<br/>significant?}

    Q2 -->|No| CHK[Ethics checklist]
    Q2 -->|Yes| Q3{Does it affect<br/>rights/access?}

    Q3 -->|No| ENH[Enhanced review]
    Q3 -->|Yes| FULL[Full ethics<br/>board review]

    style START fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style STD fill:#c8e6c9,stroke:#388e3c,stroke-width:2px
    style CHK fill:#fff9c4,stroke:#f9a825,stroke-width:2px
    style ENH fill:#ffcc80,stroke:#ef6c00,stroke-width:2px
    style FULL fill:#ef9a9a,stroke:#c62828,stroke-width:2px
```
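
For teams that want to embed this tree in intake tooling, the same logic can be expressed as a small function. This is a direct transcription of the diagram; the boolean parameter names are assumptions about how the three questions would be captured.

```python
# Direct transcription of the ethics decision tree above; the parameter
# names are assumptions about how the intake questions would be recorded.

def required_review(affects_individuals: bool,
                    significant_effects: bool,
                    affects_rights_or_access: bool) -> str:
    """Walk the decision tree and return the required review type."""
    if not affects_individuals:
        return "standard process"
    if not significant_effects:
        return "ethics checklist"
    if not affects_rights_or_access:
        return "enhanced review"
    return "full ethics board review"

# Example: significant effects on access to a service.
print(required_review(True, True, True))  # -> "full ethics board review"
```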

9. Resources and Support

9.1 Internal Resources

| Resource | Purpose | Contact |
|----------|---------|---------|
| Ethics Lead | Guidance and advice | [Contact] |
| Ethics Board | Formal approvals | [Contact] |
| Privacy Officer | Privacy guidance | [Contact] |
| Legal Team | Legal compliance | [Contact] |

9.2 External Resources

| Resource | Description |
|----------|-------------|
| Australia's AI Ethics Framework | National ethical framework |
| OAIC AI Guidance | Privacy and AI |
| OECD AI Principles | International framework |

9.3 Training and Development

| Training | Audience | Frequency |
|----------|----------|-----------|
| AI Ethics Fundamentals | All staff | Annual |
| Ethics in AI Development | Technical staff | At onboarding |
| Ethics Decision Making | Project leads | Annual |
| Ethics Board Orientation | Board members | At appointment |

10. Glossary

| Term | Definition |
|------|------------|
| Algorithmic bias | Systematic errors in AI that create unfair outcomes |
| Contestability | Ability to challenge AI-influenced decisions |
| Explainability | Ability to describe how AI reaches conclusions |
| Fairness | Absence of discrimination or unfair treatment |
| Human-in-the-loop | Human review and approval in AI decisions |
| Human-on-the-loop | Human oversight of AI operations |
| Human-out-of-the-loop | Fully autonomous AI decision-making |
| Proxy discrimination | Using neutral factors that correlate with protected attributes |
| Transparency | Openness about AI use and functioning |