
AI Risk Register


Purpose: Identify, assess, and manage risks specific to AI projects in government contexts. Covers the technical, data, ethical, project management, operational, and reputational risks that AI/ML systems raise.
At a Glance
  • When to use: Throughout the project lifecycle
  • Review frequency: Weekly during active development, monthly in production
  • Key outputs: Risk ratings, treatment plans, escalation triggers
  • Pre-populated: Common AI project risks included below

Project Information

Field | Details
Project Name |
Risk Owner |
Last Updated |
Review Frequency | Weekly / Fortnightly / Monthly
Next Review Date |

Risk Assessment Criteria

Likelihood Scale

Rating | Score | Description | Probability
Rare | 1 | Unlikely to occur | < 5%
Unlikely | 2 | Could occur but not expected | 5-25%
Possible | 3 | Might occur | 25-50%
Likely | 4 | Will probably occur | 50-75%
Almost Certain | 5 | Expected to occur | > 75%

Impact Scale

Rating | Score | Schedule Delay | Budget Overrun | Quality/Scope | Reputation
Insignificant | 1 | < 1 week | < 5% | Minor deviation | Internal only
Minor | 2 | 1-2 weeks | 5-10% | Some reduction | Local media
Moderate | 3 | 2-4 weeks | 10-20% | Significant reduction | National attention
Major | 4 | 1-3 months | 20-40% | Major reduction | Parliamentary scrutiny
Severe | 5 | > 3 months | > 40% | Project failure | Ministerial intervention

Risk Matrix

Each cell is the product of the two ratings: Score = Likelihood × Impact.

Impact ↓ / Likelihood → | 1 Rare | 2 Unlikely | 3 Possible | 4 Likely | 5 Almost Certain
5 Severe | 5 | 10 | 15 | 20 | 25
4 Major | 4 | 8 | 12 | 16 | 20
3 Moderate | 3 | 6 | 9 | 12 | 15
2 Minor | 2 | 4 | 6 | 8 | 10
1 Insignificant | 1 | 2 | 3 | 4 | 5

Risk Levels:

Score | Level | Action
1-4 | 🟢 LOW | Monitor
5-9 | 🟡 MEDIUM | Active management
10-15 | 🟠 HIGH | Escalate to sponsor
16-25 | 🔴 CRITICAL | Immediate escalation
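
If the register is kept in a tracking tool rather than on paper, the scoring and banding above are worth automating so ratings stay consistent across reviewers. A minimal Python sketch, with the bands copied from the tables above (the function and its name are illustrative, not part of the template):

```python
# Score = Likelihood x Impact (both on the 1-5 scale), banded into the
# four levels from the Risk Levels table. Illustrative only.

LEVELS = [
    (range(1, 5), "LOW", "Monitor"),
    (range(5, 10), "MEDIUM", "Active management"),
    (range(10, 16), "HIGH", "Escalate to sponsor"),
    (range(16, 26), "CRITICAL", "Immediate escalation"),
]

def rate_risk(likelihood: int, impact: int) -> tuple[int, str, str]:
    """Return (score, level, required action) for 1-5 ratings."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on the 1-5 scale")
    score = likelihood * impact
    for band, level, action in LEVELS:
        if score in band:
            return score, level, action

print(rate_risk(4, 3))  # (12, 'HIGH', 'Escalate to sponsor')
```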

Risk Register

Active Risks

ID | Category | Risk Description | Cause | Consequence | L (Likelihood) | I (Impact) | Score | Level | Treatment | Controls | Owner | Status | Due
R001 |
R002 |
R003 |

Common AI Project Risks

Technical Risks

ID | Risk | Description | Typical Causes | Mitigation Strategies
T01 | Model Performance | Model fails to meet accuracy requirements | Insufficient/poor data, wrong algorithm | POC validation, baseline comparisons
T02 | Data Quality | Training data is incomplete, biased, or inaccurate | Poor data governance, legacy systems | Data quality assessment, cleansing (sketch after the Data Risks table)
T03 | Integration Failure | AI system cannot integrate with existing systems | API incompatibility, legacy tech | Architecture review, POC
T04 | Scalability Issues | System cannot handle production volumes | Underestimated load, poor design | Load testing, scalable architecture
T05 | Model Drift | Model performance degrades over time | Changing data patterns | Monitoring, retraining pipeline (sketch below)
T06 | Technical Debt | Shortcuts create future maintenance burden | Time pressure, poor practices | Code reviews, documentation
T07 | Vendor Lock-in | Dependency on a single vendor platform | Platform-specific features | Multi-cloud strategy, abstractions
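
T05 is the risk teams most often under-monitor. A sketch of the simplest useful control, assuming ground-truth labels arrive some time after each prediction; the window size and tolerance are placeholder values, not recommendations:

```python
# Rolling-accuracy drift check for T05. When accuracy over the last
# `window` labelled outcomes falls more than `tolerance` below the
# accuracy recorded at deployment, flag the model for retraining.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float,
                 window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough evidence yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```

A True result would move T05 from "monitor" to active treatment and trigger the retraining pipeline named in the mitigation column.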

Data Risks

ID | Risk | Description | Typical Causes | Mitigation Strategies
D01 | Data Availability | Required data not accessible | Data silos, ownership issues | Early data discovery, agreements
D02 | Data Privacy Breach | PII exposure or misuse | Inadequate controls, human error | Privacy impact assessment (PIA), encryption, access controls
D03 | Data Bias | Training data reflects historical biases | Unrepresentative sampling | Bias testing, diverse data sources
D04 | Data Sovereignty | Data stored in non-compliant locations | Cloud misconfiguration | Data residency requirements
D05 | Data Loss | Training data or models lost | Backup failure, corruption | Backup strategy, versioning
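
Several mitigations hinge on a data quality assessment (T02 names it outright, and D01 and D05 often surface through the same checks). A first-pass sketch using pandas; the completeness threshold mirrors the > 95% KRI later in this template, and the example columns are hypothetical:

```python
# Minimal data quality assessment: completeness (non-null cells) and
# uniqueness (non-duplicate rows). Real checks should follow the
# agency's data standards; this only shows the shape of the assessment.

import pandas as pd

def quality_score(df: pd.DataFrame) -> dict:
    total_cells = df.size or 1
    completeness = 1 - df.isna().sum().sum() / total_cells
    uniqueness = 1 - df.duplicated().sum() / max(len(df), 1)
    return {
        "completeness": round(completeness, 3),
        "uniqueness": round(uniqueness, 3),
        "passes_kri": completeness >= 0.95,   # the > 95% KRI threshold
    }

df = pd.DataFrame({"id": [1, 2, 2], "age": [34, None, None]})
print(quality_score(df))
# {'completeness': 0.667, 'uniqueness': 0.667, 'passes_kri': False}
```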

Ethical & Compliance Risks

ID | Risk | Description | Typical Causes | Mitigation Strategies
E01 | Algorithmic Bias | Model produces unfair outcomes | Biased data, algorithm design | Bias testing, fairness metrics (sketch below)
E02 | Lack of Explainability | Cannot explain AI decisions | Black-box models | Explainability tools, model cards
E03 | Privacy Violation | Non-compliance with the Privacy Act | Inadequate PIA, scope creep | Privacy by design, regular PIAs
E04 | Regulatory Non-compliance | Fails to meet regulatory requirements | Unclear requirements | Legal review, compliance checklist
E05 | Ethical Concerns | AI use raises ethical questions | Insufficient ethical review | Ethics committee review
E06 | Human Rights Impact | Negative impact on rights | Automated decision-making | Human-in-the-loop, impact assessment
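
For E01, "fairness metrics" can sound abstract, but most reduce to comparing outcome rates across groups. A sketch of one common metric, demographic parity difference; the data and group labels are invented for illustration, and no single metric is sufficient on its own:

```python
# Demographic parity difference: the largest gap in positive-outcome
# rates between any two groups. 0.0 means identical rates.

def demographic_parity_difference(outcomes: list[int],
                                  groups: list[str]) -> float:
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Toy example: group B is approved far less often than group A.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```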

Project Management Risks

ID | Risk | Description | Typical Causes | Mitigation Strategies
P01 | Scope Creep | Uncontrolled expansion of scope | Poor requirements, stakeholder pressure | Change control, scope baseline
P02 | Resource Constraints | Insufficient skilled resources | Market shortage, budget | Training, contractors, prioritization
P03 | Stakeholder Resistance | Key stakeholders oppose the project | Poor communication, fear of change | Engagement plan, change management
P04 | Budget Overrun | Costs exceed approved budget | Underestimation, scope creep | Contingency, regular tracking
P05 | Schedule Delay | Project runs behind schedule | Dependencies, complexity | Buffer, critical path management
P06 | Sponsor Disengagement | Executive sponsor loses interest | Competing priorities | Regular briefings, quick wins

Operational Risks

ID | Risk | Description | Typical Causes | Mitigation Strategies
O01 | System Downtime | AI system unavailable | Infrastructure failure | HA design, SLAs, monitoring
O02 | Security Breach | Unauthorized access to AI system | Vulnerabilities, attacks | Security assessment, controls
O03 | Model Exploitation | Adversaries manipulate the AI system | Adversarial attacks | Input validation, monitoring (sketch below)
O04 | Skills Gap | Operations team cannot support the system | New technology, training gaps | Training, documentation, support
O05 | Shadow AI | Unofficial AI systems emerge | Unmet needs, slow IT | Governance, approved alternatives
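
The "input validation" control for O03 is concrete enough to sketch: reject requests whose features fall outside the ranges the model saw during training, before they reach the model at all. Feature names and bounds here are hypothetical:

```python
# Guardrail for O03: refuse to score inputs outside the observed
# training ranges, a cheap first defence against adversarial or
# out-of-distribution requests.

TRAINING_RANGES = {        # per-feature (min, max) recorded at training time
    "age": (18, 110),
    "income": (0, 5_000_000),
}

def validate(request: dict) -> list[str]:
    """Return validation errors; an empty list means safe to score."""
    errors = []
    for feature, (lo, hi) in TRAINING_RANGES.items():
        value = request.get(feature)
        if value is None:
            errors.append(f"{feature}: missing")
        elif not lo <= value <= hi:
            errors.append(f"{feature}: {value} outside [{lo}, {hi}]")
    return errors

print(validate({"age": 300, "income": 52_000}))
# ['age: 300 outside [18, 110]']
```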

Reputational Risks

ID | Risk | Description | Typical Causes | Mitigation Strategies
R01 | Public Backlash | Negative public reaction to AI use | Poor communication, incidents | Comms strategy, transparency
R02 | Media Scrutiny | Negative media coverage | Failures, ethical concerns | Proactive comms, incident response
R03 | Parliamentary Questions | Ministers asked about the AI project | Public concern, incidents | Briefing materials, transparency
R04 | Trust Erosion | Citizens lose trust in the agency | Multiple issues, poor response | Trust-building measures

Risk Treatment Options

Treatment | Description | When to Use
Avoid | Eliminate the risk by not proceeding | Risk outweighs benefits
Mitigate | Implement controls to reduce likelihood/impact | Acceptable residual risk is achievable
Transfer | Shift risk to another party (insurance, vendor) | External party better placed to manage
Accept | Acknowledge and monitor the risk | Cost of treatment exceeds the risk

Risk Response Plan Template

Risk ID | Response Actions | Resources Required | Timeline | Success Criteria

Risk Monitoring

Key Risk Indicators (KRIs)

KRI | Description | Threshold | Current | Trend
Model accuracy | Production model accuracy | > 90% | |
Data quality score | Input data quality metric | > 95% | |
Incident count | Security/privacy incidents | 0 per month | |
Stakeholder satisfaction | Survey score | > 4.0/5.0 | |
Budget variance | Actual vs planned spend | < 10% | |
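
Because each KRI has a direction as well as a threshold (accuracy must stay above 90%, budget variance below 10%), an automated check needs to carry the comparison operator alongside the value. A sketch, with current readings invented for illustration:

```python
# KRI breach check: comparator(current, threshold) must hold, or the
# indicator is flagged for the next risk review. Thresholds mirror the
# table above; the readings are made up.

import operator

KRIS = {
    "model_accuracy":      (operator.gt, 0.90),
    "data_quality_score":  (operator.gt, 0.95),
    "incidents_per_month": (operator.eq, 0),
    "stakeholder_score":   (operator.gt, 4.0),
    "budget_variance":     (operator.lt, 0.10),
}

def breached(readings: dict) -> list[str]:
    """Names of KRIs whose current reading violates its threshold."""
    return [name for name, (ok, threshold) in KRIS.items()
            if not ok(readings[name], threshold)]

readings = {"model_accuracy": 0.88, "data_quality_score": 0.97,
            "incidents_per_month": 0, "stakeholder_score": 4.2,
            "budget_variance": 0.06}
print(breached(readings))  # ['model_accuracy'] -> raise at next review
```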

Risk Review Log

Date | Reviewer | Risks Reviewed | Changes Made | Next Review

Escalation Thresholds

Risk Level | Escalate To | Timeframe | Actions Required
Low | Project Manager | Next review | Monitor
Medium | Project Manager | Within 1 week | Treatment plan required
High | Project Sponsor | Within 48 hours | Immediate action plan
Critical | Executive Committee | Within 24 hours | Emergency response

Closed Risks

ID | Risk Description | Closure Date | Closure Reason | Final Status (Mitigated / Avoided / Accepted)

Sign-Off

Role | Name | Date
Risk Owner | |
Project Manager | |
Project Sponsor | |