
AI Project Delivery Playbook


Purpose

This playbook provides end-to-end guidance for delivering AI projects in the Australian Public Service. It complements existing program and project frameworks with AI-specific considerations, templates, and best practices.


How to Use This Playbook

This playbook is designed to:

  • Augment your agency's existing project delivery methodology (Agile, Waterfall, hybrid)
  • Highlight AI-specific activities, risks, and decision points
  • Provide templates and checklists for AI project phases
  • Reference relevant policies, standards, and frameworks

This playbook is NOT:

  • A replacement for your agency's project management framework
  • A technical AI development guide (use vendor/technical documentation)
  • A policy document (refer to government policies and your agency policies)

How to navigate:

  1. Read the Overview to understand the AI project lifecycle
  2. Jump to the phase relevant to your current project stage
  3. Use checklists and templates within each phase
  4. Adapt guidance to your specific context and agency requirements


Overview

AI Project Lifecycle

AI projects follow a similar lifecycle to other IT projects, with key differences:

flowchart TB
    D[<strong>DISCOVERY</strong><br/>Identify opportunity and assess feasibility]
    P[<strong>PLANNING</strong><br/>Define scope, approach, and governance]
    DE[<strong>DESIGN</strong><br/>Design solution and prepare data]
    DV[<strong>DEVELOP</strong><br/>Build, train, and validate AI model]
    DP[<strong>DEPLOY</strong><br/>Release to production]
    M[<strong>MONITOR</strong><br/>Ongoing monitoring and improvement]

    D --> P --> DE --> DV --> DP --> M
    M -.->|Retrain/Improve| DV

Key differences for AI projects:

  • Data-centric: Data quality and availability are critical success factors
  • Iterative: Model development requires experimentation and iteration
  • Uncertain outcomes: Model performance isn't guaranteed upfront
  • Ongoing learning: Models may need continuous retraining
  • Ethical considerations: Bias and fairness require ongoing monitoring
  • Explainability needs: Automated decisions often need to be explained


Phase 1: Discovery

Objective: Identify AI opportunity, assess feasibility, and secure sponsorship

Key Activities

1.1 Identify the Problem

Actions: - Define the business problem or opportunity - Document current state and desired future state - Identify stakeholders and beneficiaries - Quantify potential benefits

Outputs: - Problem statement - Stakeholder map - Initial benefit estimates

Templates: - AI Use Case Identification Template

1.2 Assess AI Suitability

Questions to answer: - Is AI appropriate for this problem? - Is the problem pattern-based or rules-based? - Is relevant data available? - Are there simpler non-AI solutions? - What are the risks and ethical considerations?

Actions: - Complete AI suitability assessment - Explore alternative (non-AI) solutions - Consult with AI/data science experts - Document decision rationale

Outputs: - AI suitability assessment - Alternatives analysis - Recommendation (proceed/explore further/don't proceed)

Red flags (reconsider AI): - No data available or data quality is very poor - Problem is simple and rule-based - Errors are not tolerable (e.g., safety-critical without human oversight) - Lack of explainability is a blocker - Costs vastly outweigh benefits

1.3 Data Feasibility Assessment

Actions: - Identify potential data sources - Assess data availability, quality, and accessibility - Identify data gaps - Estimate data preparation effort - Assess privacy and security implications

Key questions: - What data exists internally? - Can we access external data sources? - Is the data labeled (for supervised learning)? - Is the data representative and unbiased? - What is the data classification (OFFICIAL, SECRET, etc.)?

Outputs: - Data inventory - Data quality assessment - Data access plan - Privacy and security considerations

1.4 Initial Risk Assessment

Assess risks across: - Technical: Model performance, integration complexity - Data: Quality, availability, privacy, bias - Ethical: Fairness, transparency, accountability - Operational: Change management, skills, support - Regulatory: Compliance with policies and laws - Reputational: Public trust, media attention

Actions: - Identify top risks - Assess likelihood and impact - Identify deal-breakers - Develop high-level mitigation strategies

Outputs: - Risk register (initial) - Go/no-go recommendation

1.5 Develop Business Case

Include: - Problem statement and strategic alignment - Proposed AI solution approach - Benefits (quantified where possible) - Costs (initial and ongoing) - Risks and mitigation strategies - Alternatives considered - Implementation approach and timeline - Resource requirements

Financial analysis: - Total cost of ownership (3-5 years) - Return on investment (ROI) - Payback period - Cost-benefit ratio
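
A back-of-the-envelope sketch of these calculations is shown below; all figures are hypothetical placeholders, not benchmarks, and should be replaced with your agency's own cost and benefit estimates.

```python
# Illustrative only: hypothetical figures, not benchmarks.
initial_cost = 400_000           # build cost (year 0)
annual_running_cost = 120_000    # hosting, support, retraining per year
annual_benefit = 350_000         # quantified annual benefit
years = 4                        # evaluation horizon (3-5 years is typical)

total_cost = initial_cost + annual_running_cost * years         # total cost of ownership
total_benefit = annual_benefit * years
roi = (total_benefit - total_cost) / total_cost                  # return on investment
payback_years = initial_cost / (annual_benefit - annual_running_cost)
benefit_cost_ratio = total_benefit / total_cost

print(f"TCO ${total_cost:,}  ROI {roi:.0%}  payback {payback_years:.1f} yrs  BCR {benefit_cost_ratio:.2f}")
```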

Outputs: - Business case document - Executive summary - Funding request

Templates: - Business Case Template (standard agency template) - AI Use Case Identification Template (for detailed analysis)

1.6 Secure Sponsorship and Approval

Actions: - Present business case to decision-makers - Secure executive sponsor - Obtain initial funding for planning phase - Establish governance structure

Outputs: - Executive approval - Assigned executive sponsor - Initial budget allocation - Governance charter

Discovery Phase Checklist

  • Problem clearly defined and documented
  • AI suitability assessed (AI is appropriate)
  • Data feasibility confirmed
  • Key risks identified and assessed
  • Business case developed and approved
  • Executive sponsor assigned
  • Initial funding secured
  • Governance structure established

Discovery Phase Duration

Typical timeline: 2-6 weeks

Varies based on: - Complexity of problem - Data availability and accessibility - Stakeholder engagement requirements - Approval processes


Phase 2: Planning

Objective: Define detailed project scope, approach, governance, and safeguards

Key Activities

2.1 Detailed Scope Definition

Actions: - Define specific AI capabilities and features - Identify in-scope and out-of-scope elements - Define success criteria and KPIs - Establish acceptance criteria

Outputs: - Detailed scope statement - Success criteria and KPIs - Acceptance criteria

Success metrics examples: - Model performance: Accuracy, precision, recall, F1 score - Business outcomes: Cost savings, time savings, error reduction - User satisfaction: User feedback scores, adoption rates - Operational: Response time, throughput, availability

2.2 Define Project Approach

Select delivery methodology: - Agile: Iterative, incremental development (recommended for most AI projects) - Waterfall: Sequential phases (suitable if requirements well-defined) - Hybrid: Combination of approaches

For AI, recommend: - Agile for model development (experimentation and iteration) - Defined milestones for governance checkpoints - Proof of concept before full development

Outputs: - Delivery methodology - Phase/sprint structure - Key milestones and decision gates

2.3 Project Governance

Establish governance structure:

Project Board/Steering Committee: - Executive sponsor - Business owner - Technical lead - Privacy officer - Security officer - Subject matter experts

Frequency: Monthly or at key milestones

Responsibilities: - Strategic direction and oversight - Risk and issue escalation - Budget and resource decisions - Go/no-go decisions at gates

AI Ethics Review Panel (if high-risk AI): - Ethics officer - Privacy officer - Legal counsel - Subject matter experts - Community representatives (where appropriate)

Frequency: At design, pre-deployment, and periodically post-deployment

Responsibilities: - Review ethical implications - Assess fairness and bias - Evaluate transparency and explainability - Approve deployment

Outputs: - Governance charter - Terms of reference for committees - Decision rights matrix - Escalation paths

2.4 Privacy Impact Assessment (PIA)

When required: If AI handles personal information (almost always for APS)

Actions: - Engage privacy officer early - Complete PIA using agency template - Identify privacy risks and mitigation measures - Obtain privacy officer approval

Key considerations for AI: - Automated decision-making - Purpose limitation (data used for training vs. operation) - Data retention for model retraining - Cross-border data flows (if using offshore AI services) - Re-identification risk from de-identified data

Outputs: - Completed and approved PIA - Privacy risk register - Privacy controls implementation plan

Resources: - Privacy Impact Assessment FAQ - Agency PIA template - OAIC guidance

2.5 Security Assessment

Actions: - Engage security team early - Complete security risk assessment - Classify data (OFFICIAL, OFFICIAL: Sensitive, etc.) - Define security controls - Assess vendor security (if applicable)

Key AI security considerations: - Training data security - Model theft or reverse engineering - Adversarial attacks (poisoning, evasion) - Infrastructure security (cloud vs. on-premises) - Access controls for model and data

Outputs: - Security risk assessment - Security controls specification - Accreditation plan

2.6 Responsible AI and Ethics Assessment

Assess against Australian Government AI Ethics Framework:

  1. Human, societal and environmental wellbeing
  2. Human-centred values
  3. Fairness
  4. Privacy protection and security
  5. Reliability and safety
  6. Transparency and explainability
  7. Contestability
  8. Accountability

Actions: - Complete ethics self-assessment - Identify ethical risks (bias, discrimination, lack of transparency) - Define mitigation measures - Determine if ethics review panel needed - Document accountability framework

High-risk AI (requires enhanced ethics review): - Significant impact on individuals' rights or welfare - Potential for bias or discrimination - Automated decisions without human oversight - Use in sensitive domains (justice, welfare, health) - Large-scale deployment

Outputs: - AI ethics assessment - Responsible AI plan - Bias testing and mitigation plan - Explainability approach

2.7 Procurement Planning (if using vendors)

Actions: - Define build vs. buy vs. partner decision - Identify potential vendors or solutions - Develop procurement approach - Include AI-specific contract terms

AI procurement considerations: - Data ownership and usage rights - Model ownership and IP - Privacy and security requirements - Performance guarantees (accuracy, response time) - Explainability and transparency - Bias testing and mitigation - Exit strategy and data portability

Outputs: - Procurement strategy - Vendor evaluation criteria - Statement of Requirements (SOR) or RFP - Contract terms (AI-specific clauses)

Tools: - Model Evaluation Calculator - Vendor Evaluation Scorecard

2.8 Resource Planning

Identify resource needs:

Roles typically required: - Project manager - Business analyst - Data scientist / ML engineer - Data engineer - Software developers - UX/UI designer - Privacy officer (consulting) - Security officer (consulting) - Subject matter experts - Change manager

Actions: - Define roles and responsibilities (RACI matrix) - Identify capability gaps - Plan for recruitment, contractors, or training - Estimate effort and timeline

Outputs: - Resource plan - RACI matrix - Recruitment or contractor plan - Training needs assessment

2.9 Develop Detailed Project Plan

Include: - Work breakdown structure - Timeline and milestones - Resource allocation - Budget - Risk management plan - Communication plan - Quality assurance plan - Change management plan

AI-specific planning considerations: - Time for data preparation (often 50-70% of effort) - Model experimentation and iteration - Bias testing and mitigation - Explainability development - User acceptance testing with AI-specific scenarios

Outputs: - Detailed project plan - Timeline (Gantt chart or similar) - Budget breakdown - Risk management plan

Planning Phase Checklist

  • Detailed scope and success criteria defined
  • Delivery approach and methodology selected
  • Governance structure established
  • Privacy Impact Assessment completed and approved
  • Security assessment completed
  • Responsible AI and ethics assessment completed
  • Procurement approach defined (if applicable)
  • Resources identified and secured
  • Detailed project plan developed
  • Budget approved
  • Stakeholder communication plan in place

Planning Phase Duration

Typical timeline: 4-8 weeks

Longer if: - Complex procurement required - Extensive privacy or security assessments - High-risk AI requiring ethics review


Phase 3: Design

Objective: Design the AI solution, prepare data, and validate approach

Key Activities

3.1 Solution Architecture Design

Actions: - Define technical architecture - Select technology stack - Design data flows - Plan integration points - Define infrastructure requirements

Architecture decisions: - Cloud vs. on-premises - Build custom model vs. use pre-trained models vs. commercial AI service - Real-time vs. batch processing - API design and interfaces - Scalability and performance requirements

Outputs: - Solution architecture document - Technology stack selection - Infrastructure requirements - Integration design

Tools: - Model Evaluation Calculator

3.2 Data Preparation

Critical success factor: Data quality determines AI success

Activities:

3.2.1 Data Collection: - Gather data from identified sources - Obtain necessary access and approvals - Document data lineage and provenance

3.2.2 Data Cleaning: - Handle missing values - Remove duplicates - Correct errors and inconsistencies - Standardize formats
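
As an illustration only, a minimal pandas cleaning sketch might look like the following; the file name and column names are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical dataset and columns, for illustration only.
df = pd.read_csv("cases.csv")
df = df.drop_duplicates()                                                 # remove duplicate records
df["lodged_date"] = pd.to_datetime(df["lodged_date"], errors="coerce")    # standardise date format
df["processing_days"] = df["processing_days"].fillna(df["processing_days"].median())  # impute missing values
df = df[df["processing_days"] >= 0]                                       # drop impossible values
```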

3.2.3 Data Labeling (for supervised learning): - Define labeling guidelines - Label training data (manual or semi-automated) - Quality assurance of labels - Inter-rater reliability testing

3.2.4 Data Transformation: - Feature engineering - Normalization and scaling - Encoding categorical variables - Dimensionality reduction if needed

3.2.5 Data Splitting: - Training set (typically 60-70%) - Validation set (typically 15-20%) - Test set (typically 15-20%) - Ensure representative splits
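
A minimal scikit-learn sketch of a stratified 70/15/15 split, assuming the prepared DataFrame df from the cleaning sketch above and a hypothetical target column named label:

```python
from sklearn.model_selection import train_test_split

# 70% training, then split the remaining 30% evenly into validation and test.
train_df, temp_df = train_test_split(df, test_size=0.30, stratify=df["label"], random_state=42)
val_df, test_df = train_test_split(temp_df, test_size=0.50, stratify=temp_df["label"], random_state=42)
print(len(train_df), len(val_df), len(test_df))
```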

Outputs: - Clean, labeled, prepared datasets - Data preparation scripts and pipelines - Data quality report - Data dictionary

Resources: - Synthetic Data Fact Sheet (for test data) - PII Masking Utility

3.3 Model Selection and Design

Actions: - Select modeling approach (classification, regression, NLP, etc.) - Choose candidate algorithms - Define model architecture - Establish baseline performance

Considerations: - Problem type and data characteristics - Explainability requirements (simpler models often more explainable) - Performance requirements - Training and inference compute requirements - Available expertise

Outputs: - Model selection rationale - Model architecture design - Baseline performance benchmarks
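
One way to establish the baseline performance benchmarks listed above is to compare a naive baseline against a simple candidate model, as in this sketch; the feature list, label column, and binary classification task are assumptions carried over from the earlier data-preparation sketches.

```python
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

feature_cols = ["processing_days"]                            # hypothetical feature list
X_train, y_train = train_df[feature_cols], train_df["label"]
X_val, y_val = val_df[feature_cols], val_df["label"]

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)   # naive baseline
candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)          # simple candidate model

print("Baseline F1: ", f1_score(y_val, baseline.predict(X_val)))
print("Candidate F1:", f1_score(y_val, candidate.predict(X_val)))
```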

3.4 Define Evaluation Metrics

Actions: - Select technical performance metrics - Define business success metrics - Establish target thresholds - Plan evaluation approach

Common metrics: - Classification: Accuracy, precision, recall, F1, AUC-ROC - Regression: RMSE, MAE, R² - NLP: BLEU, ROUGE, perplexity - Fairness: Demographic parity, equalized odds - Business: Cost savings, time savings, user satisfaction

Define acceptable performance: - Minimum acceptable threshold - Target performance - Stretch goal

Outputs: - Evaluation framework - Performance thresholds - Testing plan
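
A minimal sketch of computing classification metrics and checking them against agreed thresholds; the threshold values are illustrative, and candidate, X_val and y_val are assumed from the earlier model-selection sketch.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

y_pred = candidate.predict(X_val)
y_prob = candidate.predict_proba(X_val)[:, 1]

metrics = {
    "accuracy": accuracy_score(y_val, y_pred),
    "precision": precision_score(y_val, y_pred),
    "recall": recall_score(y_val, y_pred),
    "f1": f1_score(y_val, y_pred),
    "auc_roc": roc_auc_score(y_val, y_prob),
}
thresholds = {"f1": 0.80, "recall": 0.85}          # illustrative minimum acceptable thresholds
failures = {k: v for k, v in metrics.items() if k in thresholds and v < thresholds[k]}
print(metrics, "FAIL" if failures else "PASS", failures)
```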

3.5 Fairness and Bias Testing Plan

Actions: - Identify protected attributes (age, gender, ethnicity, etc.) - Define fairness metrics - Plan bias testing approach - Establish bias mitigation strategies

Fairness metrics: - Demographic parity - Equalized odds - Equal opportunity - Predictive parity

Testing approach: - Test on disaggregated data (by demographic groups) - Compare model performance across groups - Test for disparate impact - Conduct adversarial testing

Outputs: - Bias testing plan - Fairness metrics and thresholds - Mitigation strategies (pre-processing, in-processing, post-processing)
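
A minimal sketch of the disaggregated testing approach above, comparing demographic parity and equal opportunity across groups; the protected-attribute column group in val_df is a hypothetical placeholder, and y_val/y_pred are assumed from the earlier sketches.

```python
import numpy as np
import pandas as pd

results = pd.DataFrame({
    "group": val_df["group"].to_numpy(),    # hypothetical protected attribute
    "y_true": np.asarray(y_val),
    "y_pred": np.asarray(y_pred),
})

by_group = results.groupby("group").apply(lambda g: pd.Series({
    "selection_rate": g["y_pred"].mean(),                  # demographic parity comparison
    "recall": g.loc[g["y_true"] == 1, "y_pred"].mean(),    # equal opportunity comparison
    "n": len(g),
}))
print(by_group)
print("Disparate impact ratio:", by_group["selection_rate"].min() / by_group["selection_rate"].max())
```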

3.6 Explainability Approach

Actions: - Determine explainability requirements - Select explainability techniques - Design explanations for users - Plan user testing of explanations

Techniques: - Model-agnostic: LIME, SHAP, permutation importance - Model-specific: Decision tree rules, linear model coefficients - Example-based: Nearest neighbors, counterfactuals - Attention mechanisms: For deep learning

User-facing explanations: - Why did the model make this decision? - What factors were most important? - What would change the outcome?

Outputs: - Explainability technical approach - User-facing explanation design - Explainability testing plan
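
As one concrete example of a model-agnostic technique from the list above, permutation importance can be computed with scikit-learn; candidate, X_val and y_val are assumed from earlier sketches, and X_val is assumed to be a DataFrame.

```python
from sklearn.inspection import permutation_importance

# Rank features by how much shuffling them degrades validation performance.
result = permutation_importance(candidate, X_val, y_val, scoring="f1", n_repeats=10, random_state=42)
ranked = sorted(zip(X_val.columns, result.importances_mean), key=lambda x: -x[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```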

3.7 User Experience (UX) Design

Actions: - Design user interfaces - Define user workflows - Design AI-human interaction patterns - Create prototypes and mockups

AI-specific UX considerations: - Indicate when users are interacting with AI - Manage user expectations (AI is not perfect) - Provide confidence levels or uncertainty - Enable human override or escalation - Present explanations clearly - Design for errors (what happens when AI is wrong?)

Outputs: - UX designs and mockups - User journey maps - Interaction patterns - Prototype for user testing

3.8 Design Reviews and Validation

Actions: - Conduct design reviews with stakeholders - Validate technical design with architects - Review privacy and security controls - Obtain approvals before development

Outputs: - Design review feedback - Updated designs - Approval to proceed to development

Design Phase Checklist

  • Solution architecture documented and approved
  • Data collected, cleaned, and prepared
  • Data quality validated
  • Model approach selected and designed
  • Evaluation metrics and thresholds defined
  • Bias testing plan developed
  • Explainability approach defined
  • UX designed and validated with users
  • Design reviews completed
  • Approval to proceed to development

Design Phase Duration

Typical timeline: 6-12 weeks

Highly variable based on: - Data availability and quality (poor data = longer) - Complexity of solution - Labeling requirements - Stakeholder engagement


Phase 4: Develop

Objective: Build, train, and validate the AI model and solution

Key Activities

4.1 Model Development Environment Setup

Actions: - Set up development infrastructure - Configure version control (Git) - Set up experiment tracking (MLflow, Weights & Biases) - Configure development tools and libraries - Establish CI/CD pipeline

Outputs: - Development environment - Version control repository - Experiment tracking system
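
A minimal sketch of logging one training run with MLflow (one of the experiment-tracking options mentioned above); the experiment name, parameters, and metric values are illustrative.

```python
import mlflow

mlflow.set_experiment("triage-model")              # hypothetical experiment name

with mlflow.start_run():
    mlflow.set_tag("phase", "develop")
    mlflow.log_param("model", "logistic_regression")
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("val_f1", 0.82)
    mlflow.log_metric("val_recall", 0.87)
```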

4.2 Model Training

Iterative process:

  1. Initial training: Train baseline model
  2. Evaluation: Assess performance on validation set
  3. Hyperparameter tuning: Optimize model parameters
  4. Feature engineering: Refine input features
  5. Model refinement: Try different architectures or algorithms
  6. Repeat: Iterate until performance targets met

Best practices: - Track all experiments (hyperparameters, data, metrics) - Use validation set for tuning, reserve test set for final evaluation - Monitor for overfitting - Document modeling decisions and rationale
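
A minimal hyperparameter-tuning sketch consistent with these practices: cross-validation on the training data only, leaving the test set untouched for final evaluation. The parameter grid and scoring choice are illustrative, and X_train/y_train are assumed from earlier sketches.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {"C": [0.01, 0.1, 1.0, 10.0], "class_weight": [None, "balanced"]}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)                       # test set is never used here
print("Best params:", search.best_params_, "CV F1:", round(search.best_score_, 3))
```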

Outputs: - Trained model(s) - Training logs and metrics - Experiment documentation

4.3 Model Validation

Actions: - Evaluate on held-out test set - Assess against defined metrics and thresholds - Conduct error analysis - Test edge cases and failure modes

Questions to answer: - Does the model meet performance thresholds? - Where does the model make errors? - Are errors acceptable or concerning? - Does performance generalize (not overfit)?

Outputs: - Model validation report - Performance metrics - Error analysis - Recommendation (accept, refine, or reject model)

4.4 Bias and Fairness Testing

Actions: - Evaluate model on disaggregated data - Calculate fairness metrics - Test for disparate impact - Identify and document biases

If bias is detected, apply mitigation techniques:

  • Pre-processing: Reweight or resample training data
  • In-processing: Add fairness constraints during training
  • Post-processing: Adjust model outputs

Re-evaluate fairness metrics after mitigation.
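
A minimal sketch of the pre-processing option (reweighting), assuming a training DataFrame train_df with hypothetical group and label columns alongside the X_train/y_train features and labels used in earlier sketches:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

# Weight each record so every (group, label) combination contributes equally.
joint = train_df["group"].astype(str) + "_" + train_df["label"].astype(str)
weights = compute_sample_weight("balanced", y=joint)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=weights)
# Re-run the fairness metrics on the validation set after retraining.
```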

Outputs: - Bias testing report - Fairness metrics across groups - Bias mitigation actions taken - Residual bias documentation

4.5 Explainability Implementation

Actions: - Implement explainability techniques - Generate explanations for sample predictions - Validate explanations with subject matter experts - Test user-facing explanations with users

Outputs: - Explainability implementation - Sample explanations - User testing results - Final explanation designs

4.6 Application Development

Build supporting application: - User interfaces - APIs and integrations - Data pipelines - Monitoring and logging - Error handling

Testing: - Unit testing - Integration testing - User acceptance testing - Performance and load testing - Security testing

Outputs: - Functional application - Test results and sign-off - Technical documentation

4.7 Monitoring and Alerting

Implement monitoring for: - Model performance metrics - Prediction distribution (detect drift) - Input data quality - System performance (latency, throughput) - Error rates - Fairness metrics over time

Set up alerts for: - Performance degradation - Data drift - System errors - Security incidents
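
A minimal sketch of a threshold-based performance alert; the metric value and threshold are illustrative and would come from your monitoring pipeline and evaluation framework.

```python
F1_ALERT_THRESHOLD = 0.75      # illustrative threshold from the evaluation framework

def check_performance_alert(recent_f1: float) -> None:
    """Flag when recent model performance drops below the agreed threshold."""
    if recent_f1 < F1_ALERT_THRESHOLD:
        # In practice, notify the support team or raise an incident ticket here.
        print(f"ALERT: recent F1 {recent_f1:.2f} is below threshold {F1_ALERT_THRESHOLD:.2f}")

check_performance_alert(0.71)
```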

Outputs: - Monitoring dashboards - Alert configuration - Monitoring runbook

4.8 Documentation

Create comprehensive documentation:

  • Technical documentation:
      • Model architecture and algorithms
      • Training data and preparation
      • Model performance and limitations
      • API documentation
      • Deployment instructions

  • User documentation:
      • User guides
      • Training materials
      • FAQs

  • Operational documentation:
      • Runbooks
      • Troubleshooting guides
      • Monitoring and alerting procedures
      • Incident response plan

Outputs: - Complete documentation suite

Develop Phase Checklist

  • Development environment set up
  • Model trained and validated
  • Performance thresholds met
  • Bias and fairness testing completed
  • Explainability implemented and tested
  • Application built and tested
  • Monitoring and alerting implemented
  • Documentation completed
  • Security testing passed
  • User acceptance testing passed
  • Approval to deploy to production

Develop Phase Duration

Typical timeline: 12-24 weeks

Highly variable based on: - Model complexity - Performance targets - Bias and explainability requirements - Integration complexity - Number of iteration cycles needed


Phase 5: Deploy

Objective: Release the AI solution to production safely and responsibly

Key Activities

5.1 Pre-Deployment Readiness

Final checks:

  • All development phase deliverables complete
  • Governance approvals obtained
  • Security accreditation (if required)
  • Privacy controls implemented
  • User documentation ready
  • Training delivered
  • Support processes established
  • Rollback plan prepared

5.2 Deployment Strategy

Options: - Big bang: Deploy to all users at once (higher risk) - Phased rollout: Deploy to subsets of users incrementally (recommended) - Pilot: Deploy to small pilot group first - A/B testing: Run AI alongside current system, compare results

Recommendation for AI: Phased rollout or pilot

Benefits: - Reduce risk - Gather real-world performance data - Identify issues before full deployment - Build user confidence gradually

5.3 Pilot Deployment

Actions: - Select pilot users/sites - Deploy AI system to pilot - Provide enhanced support during pilot - Gather feedback and performance data - Monitor closely for issues

Pilot success criteria: - Performance meets thresholds - No critical issues - User satisfaction acceptable - Privacy and security controls effective

Outputs: - Pilot results report - Lessons learned - Refinements needed - Go/no-go decision for full deployment

5.4 Full Deployment

Actions: - Deploy to production (incrementally if phased) - Monitor system closely post-deployment - Provide user support - Communicate deployment to stakeholders

Deployment activities: - Infrastructure provisioning - Model deployment - Application deployment - Configuration - Smoke testing - User notification

Outputs: - Production system live - Deployment report - Post-deployment review

5.5 Training and Change Management

User training: - How to use the AI system - How to interpret AI outputs - When to trust vs. question AI - How to escalate or override - How to provide feedback

Change management: - Communicate benefits and changes - Address concerns and resistance - Provide ongoing support - Celebrate successes

Outputs: - Training delivered - Training materials - Change management activities completed

5.6 Handover to Operations

Actions: - Transition from project team to operational team - Train operational support staff - Hand over documentation - Establish support processes - Define roles and responsibilities

Operational responsibilities: - Ongoing monitoring - User support - Incident response - Model retraining - Performance reporting

Outputs: - Handover complete - Operational team trained - Support processes established

Deploy Phase Checklist

  • Pre-deployment readiness confirmed
  • Deployment strategy defined
  • Pilot deployment completed successfully
  • Full deployment completed
  • Training delivered to users
  • Change management activities completed
  • Handover to operations completed
  • Support processes in place
  • Project closure activities completed

Deploy Phase Duration

Typical timeline: 4-12 weeks

Includes pilot period and phased rollout


Phase 6: Monitor & Improve

Objective: Continuously monitor AI performance, maintain quality, and improve over time

Key Activities

6.1 Performance Monitoring

Monitor continuously: - Model performance metrics (accuracy, precision, recall, etc.) - Business outcomes (cost savings, efficiency gains, etc.) - User satisfaction - System performance (latency, availability, etc.) - Fairness metrics over time

Frequency: - Real-time dashboards for system health - Daily/weekly automated reports - Monthly performance reviews - Quarterly governance reviews

Outputs: - Performance dashboards - Regular performance reports - Escalation of issues

6.2 Data and Model Drift Detection

Monitor for drift: - Data drift: Input data distribution changes over time - Concept drift: Relationship between inputs and outputs changes - Model drift: Model performance degrades over time

Detection methods: - Statistical tests on input distributions - Performance monitoring on recent data - Comparison to baseline metrics
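
A minimal sketch of one such statistical test: a two-sample Kolmogorov-Smirnov test comparing a single numeric feature's recent distribution against its training baseline. Random data stands in for real feature values, and the significance threshold is illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_values = rng.normal(loc=30, scale=5, size=5000)   # feature distribution at training time
recent_values = rng.normal(loc=33, scale=5, size=1000)     # same feature in recent production data

result = ks_2samp(baseline_values, recent_values)
if result.pvalue < 0.01:
    print(f"Possible data drift (KS statistic={result.statistic:.3f}, p={result.pvalue:.4g})")
```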

Actions when drift detected: - Investigate root cause - Assess impact - Retrain model if needed - Update data pipelines

Outputs: - Drift detection alerts - Drift analysis reports - Retraining schedule

6.3 Ongoing Bias and Fairness Monitoring

Continuously monitor: - Fairness metrics across demographic groups - Complaint patterns - Adverse outcomes by group

Actions: - Regular bias audits (quarterly or semi-annually) - Investigate fairness concerns - Retrain with bias mitigation if needed - Engage ethics review panel if issues found

Outputs: - Bias monitoring reports - Mitigation actions - Ethics review outcomes

6.4 User Feedback and Improvement

Gather feedback: - User surveys - Support tickets and complaints - Usage analytics - Stakeholder interviews

Analyze feedback for: - Pain points and frustrations - Feature requests - Accuracy concerns - Explanation quality

Actions: - Prioritize improvements - Enhance user experience - Refine explanations - Add new features

Outputs: - User feedback analysis - Improvement backlog - Enhancement releases

6.5 Model Retraining

When to retrain: - Performance drops below thresholds - Data or concept drift detected - New data available - Bias or fairness concerns - Scheduled periodic retraining

Retraining process:

  1. Gather new training data
  2. Prepare and label data
  3. Retrain model
  4. Validate performance (including fairness)
  5. Test in staging environment
  6. Deploy updated model
  7. Monitor post-deployment

Outputs: - Updated model - Retraining documentation - Performance comparison (old vs. new model)

6.6 Incident Response

Incidents to plan for: - Model making significant errors - Bias or discrimination detected - Security breach - Privacy incident - System outage

Response process:

  1. Detect and alert
  2. Assess severity
  3. Contain and mitigate
  4. Investigate root cause
  5. Remediate
  6. Post-incident review
  7. Implement preventive measures

Outputs: - Incident response plan - Incident reports - Post-incident reviews - Corrective actions

6.7 Governance and Reporting

Regular governance activities: - Monthly operational reviews - Quarterly steering committee updates - Annual comprehensive AI system review - Ongoing ethics monitoring (for high-risk AI)

Reporting: - Performance against KPIs - Benefits realization - Issues and risks - Continuous improvement activities

Outputs: - Regular reports to governance - Annual AI system review - Benefits realization report

Monitor Phase Checklist

Continuous activities:

  • Performance monitoring active
  • Drift detection running
  • Fairness monitoring in place
  • User feedback being gathered
  • Support processes operational
  • Incident response plan ready

Periodic activities:

  • Quarterly performance reviews conducted
  • Annual comprehensive AI review completed
  • Model retrained as needed
  • Bias audits conducted
  • Improvements prioritized and implemented


Cross-Cutting Concerns

Stakeholder Engagement

Throughout project lifecycle: - Identify and map stakeholders - Engage early and often - Manage expectations (AI isn't perfect) - Communicate progress and setbacks - Celebrate successes

Key stakeholder groups: - Executive sponsors - Business owners and users - Privacy and security officers - IT and operations - Ethics and legal - External stakeholders (citizens, partners)

Risk Management

Continuous risk management: - Maintain risk register - Regular risk reviews - Update mitigations as needed - Escalate high risks to governance

Common AI project risks: - Data quality or availability issues - Model performance below targets - Bias and fairness concerns - Privacy or security incidents - Vendor dependencies - Skills and capability gaps - Change resistance - Regulatory changes

Quality Assurance

Throughout project: - Define quality standards - Conduct reviews and testing - Independent QA where appropriate - Documentation review

AI-specific QA: - Model validation - Bias testing - Explainability validation - Data quality checks - Ethics review


Templates and Tools

Discovery & Planning

Design & Development

Governance

  • Prioritization Framework
  • Risk Register Template (agency standard)
  • Stakeholder Engagement Plan (agency standard)

Appendices

Appendix A: Key Roles and Responsibilities

| Role | Responsibilities |
| --- | --- |
| Executive Sponsor | Strategic direction, resource allocation, governance oversight |
| Project Manager | Day-to-day project management, coordination, reporting |
| Business Owner | Requirements, acceptance criteria, benefits realization |
| Data Scientist / ML Engineer | Model development, training, validation |
| Data Engineer | Data pipelines, preparation, infrastructure |
| Software Developer | Application development, integration |
| UX Designer | User experience design, prototyping |
| Privacy Officer | PIA, privacy compliance, privacy risk management |
| Security Officer | Security assessment, controls, accreditation |
| Subject Matter Expert | Domain expertise, requirements, validation |

Appendix B: Decision Gates

Key decision points:

  1. After Discovery: Proceed to Planning?
  2. After Planning: Proceed to Design/Development?
  3. After Design: Approve design, proceed to Development?
  4. After Development: Deploy to Pilot?
  5. After Pilot: Deploy to Production?
  6. Ongoing: Continue operation or decommission?

Gate criteria: Defined in project plan and governance charter

Appendix C: References

Government Policies and Frameworks:

  • Australian Government AI Ethics Framework
  • APS Digital Service Standard
  • Protective Security Policy Framework (PSPF)
  • Privacy Act 1988

Additional Resources:

  • OAIC Privacy Guidelines
  • Australian Cyber Security Centre (ACSC) guidance
  • Agency-specific project management frameworks