AI Change Management Playbook
Purpose
This playbook provides a structured approach to managing the people side of AI change in government. It addresses the unique challenges of AI adoption, including workforce concerns, skills development, cultural change, and building trust in AI systems.
The ADAPT Framework: Assess readiness → Design approach → Align stakeholders → Prepare capability → Transition & sustain
Quick Reference
Key Principles
- People first - Technology serves people, not the reverse
- Transparency - Be open about AI and its impacts
- Inclusion - Involve affected staff in design and implementation
- Support - Provide training, time, and resources
- Fairness - Ensure equitable treatment of all staff
- Continuous - Change management is ongoing, not one-time
1. Understanding AI Change
1.1 Why AI Change is Different
| Traditional IT Change | AI-Specific Challenges |
| Predictable system behavior | Probabilistic outputs; may evolve over time |
| Clear task automation | Augments judgment; changes decision-making |
| Documented rules | "Black box" perception; explainability challenges |
| Defined scope | Scope may expand; new capabilities emerge |
| One-time training | Ongoing learning and adaptation required |
| Technical adoption | Trust and acceptance barriers |
| Workflow changes | Identity and role transformation |
1.2 Common Staff Fears and Concerns
| Fear | Description | Change Management Response |
| Job loss | "AI will replace me" | Clear communication on workforce strategy; focus on augmentation |
| Skill obsolescence | "My skills won't matter" | Skills development; new career paths |
| Reduced autonomy | "AI will tell me what to do" | Emphasize human oversight; staff as controllers |
| Increased surveillance | "AI will monitor my performance" | Clear privacy boundaries; purpose limitation |
| Unfair treatment | "AI will judge me unfairly" | Transparency about AI use; appeal mechanisms |
| Loss of expertise | "My knowledge won't be valued" | Expert input in AI design; knowledge capture |
| Accountability concerns | "Who's responsible if AI is wrong?" | Clear accountability frameworks |
1.3 AI Change Readiness Indicators
| Indicator | Ready | Not Ready |
| Leadership support | Visible sponsorship; resources committed | Passive or absent; no resources |
| Staff sentiment | Curious, cautiously optimistic | Fearful, resistant, hostile |
| Digital maturity | Comfortable with technology | Technology-averse culture |
| Trust in organization | Believes org acts in staff interests | Distrustful of management motives |
| Previous change experience | Positive track record | History of failed changes |
| Union relations | Constructive engagement | Adversarial relationship |
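The indicators above can feed the readiness scorecard produced in Phase 1 (section 2.2). A minimal sketch in Python of how an agency might score them; the 1-5 rating scale and the averaging rule are illustrative assumptions, not a mandated method:

```python
# Minimal readiness scorecard sketch (illustrative assumptions only).
# Each indicator from the table above is rated 1 (Not Ready) to 5 (Ready).
READINESS_INDICATORS = [
    "Leadership support",
    "Staff sentiment",
    "Digital maturity",
    "Trust in organization",
    "Previous change experience",
    "Union relations",
]

def readiness_score(ratings: dict[str, int]) -> float:
    """Average the 1-5 ratings across all indicators; flag any missing ones."""
    missing = [i for i in READINESS_INDICATORS if i not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[i] for i in READINESS_INDICATORS) / len(READINESS_INDICATORS)

# Example: a mostly ready agency with adversarial union relations.
example = {
    "Leadership support": 4,
    "Staff sentiment": 3,
    "Digital maturity": 4,
    "Trust in organization": 4,
    "Previous change experience": 3,
    "Union relations": 2,
}
print(f"Readiness score: {readiness_score(example):.1f} / 5")  # -> 3.3 / 5
```

A low score on any single indicator (here, union relations) usually matters more than the average, so report both.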
2. AI Change Framework
2.1 The ADAPT Framework
A - Assess readiness and impact
D - Design the change approach
A - Align stakeholders
P - Prepare and build capability
T - Transition and sustain
2.2 Phase 1: Assess Readiness and Impact
Objectives:
- Understand current state
- Identify change impacts
- Assess readiness
- Identify risks
Activities:
| Activity | Purpose | Output |
| Stakeholder mapping | Identify all affected groups | Stakeholder register |
| Impact assessment | Understand AI effects on roles | Change impact matrix |
| Readiness assessment | Gauge preparedness for change | Readiness scorecard |
| Culture assessment | Understand organizational culture | Culture profile |
| Risk assessment | Identify change risks | Risk register |
Change Impact Assessment:
| Role/Group | How AI Affects Them | Impact Level | Readiness | Priority |
| | Task changes, new skills needed, etc. | High/Med/Low | High/Med/Low | |
| | | | | |
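For larger rollouts it can help to hold the impact matrix in a structured form so groups can be filtered and prioritized. A minimal sketch; the field names mirror the template columns above, while the prioritization rule (high impact plus low readiness ranks first) is an assumption to illustrate the idea:

```python
# Sketch of a change impact matrix record, mirroring the template columns above.
# The prioritization rule (high impact + low readiness ranks first) is an assumption.
from dataclasses import dataclass

LEVELS = {"Low": 1, "Med": 2, "High": 3}

@dataclass
class ImpactEntry:
    role_group: str
    how_ai_affects: str
    impact: str      # High / Med / Low
    readiness: str   # High / Med / Low

    def priority(self) -> int:
        # Higher impact and lower readiness -> higher priority score.
        return LEVELS[self.impact] * (4 - LEVELS[self.readiness])

entries = [
    ImpactEntry("Processing officers", "Routine casework triaged by AI", "High", "Low"),
    ImpactEntry("Team leaders", "New oversight and exception duties", "Med", "Med"),
]
for e in sorted(entries, key=lambda x: x.priority(), reverse=True):
    print(e.role_group, "- priority", e.priority())
```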
2.3 Phase 2: Design the Change Approach
Objectives:
- Define change strategy
- Plan communications
- Design training approach
- Establish governance
Change Strategy Canvas:
| Element | Approach |
| Vision | What does success look like? |
| Case for change | Why AI, why now? |
| Sponsorship | Who is leading? |
| Stakeholder engagement | How will we involve people? |
| Communication | How will we inform and engage? |
| Training | How will we build capability? |
| Support | How will we help people through transition? |
| Resistance management | How will we address concerns? |
| Reinforcement | How will we sustain the change? |
Design Principles for AI Change:
- Co-design with users - Staff help shape AI implementation
- Start small - Pilot with willing early adopters
- Iterate based on feedback - Continuous improvement
- Celebrate successes - Share positive outcomes
- Be honest about challenges - Acknowledge difficulties
2.4 Phase 3: Align Stakeholders
Objectives:
- Secure leadership commitment
- Engage unions/staff representatives
- Build coalition of supporters
- Address key influencers
Stakeholder Engagement Plan:
| Stakeholder | Current Position | Desired Position | Strategy | Owner |
| Executive sponsor | Supportive | Active advocate | Regular briefings | |
| Middle management | Skeptical | Supportive | Show quick wins; address concerns | |
| Frontline staff | Anxious | Cautiously accepting | Involvement; training | |
| Union | Wary | Constructive partner | Early engagement; guarantees | |
Leadership Alignment Checklist:
2.5 Phase 4: Prepare and Build Capability
Objectives:
- Develop skills and knowledge
- Create support materials
- Establish support structures
- Ready the organization
Capability Building Framework:
| Level | Focus | Activities |
| Awareness | Understanding AI | AI fundamentals training; demos |
| Knowledge | How AI affects my role | Role-specific training; job aids |
| Skills | Working with AI | Hands-on practice; coaching |
| Mastery | Optimizing AI use | Advanced training; peer learning |
Training Curriculum:
| Module | Audience | Duration | Delivery |
| AI Fundamentals | All staff | 2 hours | E-learning |
| AI Ethics & Governance | All staff | 1 hour | E-learning |
| Working with AI [System] | Direct users | 4 hours | Workshop |
| AI System Administration | Administrators | 8 hours | Workshop |
| AI Champion Training | Champions | 1 day | Workshop |
Support Structures:
| Structure | Purpose | Implementation |
| Help desk | Technical support | Extended hours during go-live |
| Peer champions | On-the-ground support | Trained champions per team |
| FAQ and knowledge base | Self-service answers | Intranet site |
| Feedback channel | Continuous improvement | Online form; regular reviews |
| Manager toolkit | Equip managers | Materials and talking points |
2.6 Phase 5: Transition and Sustain
Objectives:
- Execute the transition
- Support through change curve
- Embed new ways of working
- Sustain and reinforce
Go-Live Support Plan:
| Period | Focus | Activities |
| Go-live day | Hypercare | All hands support; rapid response |
| Week 1 | Stabilization | Daily check-ins; issue resolution |
| Month 1 | Adoption | Usage monitoring; targeted support |
| Quarter 1 | Optimization | Refinements based on feedback |
| Ongoing | Sustainment | Regular reviews; continuous improvement |
Sustaining Change:
| Activity | Frequency | Owner |
| Adoption metrics review | Weekly then monthly | Change lead |
| User feedback sessions | Monthly | Product owner |
| Success story sharing | Ongoing | Communications |
| Refresher training | Quarterly | Training team |
| Process optimization | Quarterly | Operations |
3. Communication Strategy
3.1 Communication Principles for AI
- Be transparent - Explain what AI does and doesn't do
- Acknowledge concerns - Don't dismiss fears
- Be specific - Avoid vague reassurances
- Be consistent - Same message from all leaders
- Be ongoing - Not just at launch
- Two-way - Listen and respond
3.2 Key Messages Framework
| Audience | Key Messages | Supporting Points |
| All staff | AI is being introduced to [purpose]. This is about [augmenting/helping] staff, not replacing them. | Specific examples; guarantees |
| Direct users | AI will help you by [specific benefits]. You remain in control of [decisions]. | Training available; support structures |
| Managers | AI will change how your team works by [specifics]. Your role in [oversight/coaching] is essential. | Management tools; talking points |
| Executives | AI delivers [strategic benefits]. We're managing [risks/concerns] through [approach]. | Metrics; governance |
| External | We're using AI to [improve services]. We're committed to [ethical use/transparency]. | Ethics framework; oversight |
3.3 Communication Timeline
| Phase | Timing | Focus | Channels |
| Pre-announcement | -8 weeks | Leadership alignment | Executive briefings |
| Announcement | -6 weeks | Introduce the change | All-staff comms; town hall |
| Engagement | -6 to -2 weeks | Address concerns; gather input | Workshops; feedback sessions |
| Preparation | -2 weeks | Practical readiness | Training; job aids |
| Launch | Week 0 | Go-live support | Multi-channel |
| Early adoption | +1-4 weeks | Share successes; address issues | Success stories; support comms |
| Sustainment | Ongoing | Reinforce and improve | Regular updates |
3.4 Communication Channels
| Channel | Use For | Frequency |
| All-staff email | Major announcements | Key milestones |
| Intranet | Detailed information; FAQs | Ongoing updates |
| Team meetings | Discussion; Q&A | Weekly during change |
| Town halls | Leadership visibility; Q&A | Monthly during change |
| Champions network | Peer communication | Ongoing |
| Video messages | Leadership connection | Key moments |
| Posters/digital signs | Awareness and reminders | During launch |
3.5 Addressing Frequently Asked Questions
| Question | Response Approach |
| "Will AI take my job?" | Be honest about workforce plans; emphasize augmentation; highlight new opportunities |
| "How will AI change my role?" | Specific changes by role; what stays the same; new skills needed |
| "Can I trust AI decisions?" | Explain human oversight; quality assurance; error correction |
| "What if AI makes mistakes?" | Escalation process; human review; continuous improvement |
| "Who decided to use AI?" | Explain decision process; consultation undertaken |
| "What about my data privacy?" | Clear privacy commitments; what's monitored; what's not |
4. Workforce Strategy
4.1 Workforce Impact Analysis
Impact Categories:
| Category | Description | Examples |
| Augmented roles | AI assists but doesn't replace | Decision support; admin assistance |
| Transformed roles | Role changes significantly | New focus on exceptions; oversight |
| New roles | Roles created by AI | AI trainers; model supervisors |
| Reduced roles | Less need for these roles | Routine processing; data entry |
| Unchanged roles | Little AI impact | Field work; direct service |
Role Transition Planning:
| Current Role | Impact | Future Role | Transition Path |
| | Augmented/Transformed/Reduced | | Training; redeployment |
| | | | |
4.2 Job Security Commitments
Consider providing clear commitments such as:
| Commitment | Details |
| No forced redundancies | Staff will not be made redundant due to AI |
| Redeployment priority | Affected staff have priority for new roles |
| Retraining support | Funding and time for upskilling |
| Natural attrition | Headcount changes through natural turnover |
| Consultation | Meaningful consultation on workforce changes |
4.3 Skills Development Strategy
Skills Framework:
| Skill Category | Skills | Development Approach |
| AI literacy | Understanding AI; appropriate trust; recognizing limitations | Foundational training for all |
| AI collaboration | Working with AI tools; effective prompting; oversight | Role-specific training |
| Critical thinking | Validating AI outputs; exception handling; judgment | Enhanced decision-making training |
| Data skills | Data quality; interpretation; feedback | Data literacy programs |
| Technical skills | AI operation; troubleshooting; basic configuration | Technical training |
| Ethics and governance | Identifying bias; raising concerns; compliance | Ethics awareness training |
4.4 Career Pathways
| From | To | Pathway |
| Processing officer | Exception handler | Training in complex cases; AI oversight |
| Team leader | AI operations supervisor | AI management training; performance monitoring |
| Subject matter expert | AI trainer/quality specialist | Knowledge transfer; model feedback skills |
| Data entry | Data quality analyst | Data quality training; analysis skills |
5. Resistance Management
5.1 Understanding Resistance
Resistance Sources:
| Source | Manifestation | Root Cause |
| Fear | Avoiding engagement | Concern about job loss or inadequacy |
| Distrust | Questioning motives | Past negative experiences; lack of trust |
| Frustration | Complaints about quality | AI not meeting expectations; poor implementation |
| Overload | Claims of no time | Too much change; inadequate support |
| Values | Ethical objections | Genuine concerns about AI appropriateness |
| Power | Protection of territory | Perceived loss of influence or control |
5.2 Resistance Response Strategies
| Strategy | When to Use | Approach |
| Educate | Lack of understanding | Provide information; demonstrate benefits |
| Involve | Desire for control | Include in design; seek input |
| Support | Skill concerns | Training; coaching; time to adapt |
| Accommodate | Valid concerns | Modify approach based on feedback |
| Negotiate | Seeking concessions | Discuss trade-offs; find middle ground |
| Co-opt | Influential resistors | Engage them in leadership role |
| Escalate | Persistent disruption | Involve management; formal processes |
5.3 Resistance Indicators
Watch for these warning signs:
| Indicator | Possible Meaning | Response |
| Low training attendance | Avoidance; not prioritized | Mandatory training; manager involvement |
| Increased grievances | Concerns not addressed | Listen; investigate; respond |
| Workarounds | System not working for users | Improve AI; more training |
| Negative corridor talk | Anxiety; skepticism | Increase communication; address rumors |
| High sick leave | Stress; disengagement | Support; investigate causes |
| Slow adoption | Resistance; insufficient training | Coaching; quick wins |
5.4 Building Trust in AI
| Trust Element | How to Build |
| Reliability | Demonstrate consistent performance; address errors quickly |
| Transparency | Explain how AI works; show decision rationale |
| Competence | Validate accuracy; share performance data |
| Honesty | Acknowledge limitations; be upfront about issues |
| Benevolence | Show AI is designed to help; prioritize user needs |
| Control | Ensure humans can override; adjust settings |
6. Union and Staff Representative Engagement
6.1 Engagement Principles
- Early involvement - Engage before decisions are final
- Good faith - Genuine consultation, not just information
- Transparency - Share information openly
- Respect - Value union role and expertise
- Action on concerns - Respond meaningfully to issues raised
6.2 Consultation Requirements
| Stage | Consultation Activities | Timeframe |
| Pre-decision | Inform of potential AI use; seek early feedback | 8+ weeks before |
| Planning | Share impact assessment; discuss mitigations | 6+ weeks before |
| Design | Input on implementation approach | 4+ weeks before |
| Pre-launch | Review readiness; confirm support measures | 2+ weeks before |
| Post-launch | Ongoing feedback; issue resolution | Ongoing |
6.3 Common Union Concerns
| Concern | Response Approach |
| Job security | Clear commitments; redeployment plans |
| Performance monitoring | Clear boundaries; privacy protections |
| Work intensification | Workload monitoring; adjustment mechanisms |
| Skills and training | Funded training; time to learn |
| Consultation | Early, genuine, ongoing engagement |
| Health and safety | Ergonomic assessment; mental health support |
6.4 Dispute Resolution
| Level | Process | Timeframe |
| Local | Manager-delegate discussion | 2 days |
| Escalation | HR-union representative | 5 days |
| Formal | Dispute resolution procedures | Per agreement |
| External | Fair Work Commission (if applicable) | As required |
7. Change Metrics and Evaluation
7.1 Change Success Metrics
Adoption Metrics:
| Metric | Definition | Target | Data Source |
| System usage | % of target users using AI | 90% | System logs |
| Feature utilization | Key features being used | 80% | System logs |
| Workaround rate | Staff bypassing AI | <5% | Observation; audit |
| Support tickets | Issues reported | Decreasing trend | Help desk |
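Adoption metrics such as system usage and workaround rate are simple proportions over the target user population. A minimal sketch, assuming access to the list of target users and to user IDs from system logs; the data sources and example figures are illustrative:

```python
# Sketch: adoption metrics from the table above (illustrative data sources).
def system_usage(target_users: set[str], active_users: set[str]) -> float:
    """% of target users who used the AI system in the reporting period."""
    return 100 * len(target_users & active_users) / len(target_users)

def workaround_rate(cases_total: int, cases_bypassing_ai: int) -> float:
    """% of cases handled outside the AI system (target <5%)."""
    return 100 * cases_bypassing_ai / cases_total

targets = {"u01", "u02", "u03", "u04", "u05"}
active = {"u01", "u02", "u04", "u05", "u99"}
print(f"System usage: {system_usage(targets, active):.0f}%")   # 80%
print(f"Workaround rate: {workaround_rate(200, 7):.1f}%")      # 3.5%
```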
Sentiment Metrics:
| Metric | Definition | Target | Data Source |
| Staff satisfaction | Satisfaction with AI | >3.5/5 | Survey |
| Confidence | Confidence in using AI | >4/5 | Survey |
| Trust | Trust in AI decisions | >3.5/5 | Survey |
| Support adequacy | Feel supported through change | >4/5 | Survey |
Business Metrics:
| Metric | Definition | Target | Data Source |
| Productivity | Output per FTE | +X% | Business metrics |
| Quality | Error/rework rate | -X% | Quality metrics |
| Processing time | Time to complete tasks | -X% | System data |
| Customer satisfaction | CSAT for AI-enabled services | Maintained or improved | Surveys |
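Business metrics are normally reported as percentage change against a pre-implementation baseline. A minimal sketch; the baseline and current values are illustrative only:

```python
# Sketch: percentage change vs. a pre-implementation baseline (illustrative values).
def pct_change(baseline: float, current: float) -> float:
    return 100 * (current - baseline) / baseline

print(f"Productivity: {pct_change(120, 138):+.0f}%")       # +15%
print(f"Processing time: {pct_change(10.0, 7.5):+.0f}%")    # -25%
```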
7.2 Pulse Surveys
Sample Questions:
| Category | Question | Scale |
| Awareness | I understand why we are implementing AI | 1-5 |
| Preparedness | I feel prepared to work with the new AI system | 1-5 |
| Support | I'm receiving adequate support through this change | 1-5 |
| Confidence | I'm confident I can work effectively with AI | 1-5 |
| Value | I can see how AI will benefit my work | 1-5 |
| Concerns | I have concerns about AI that haven't been addressed | 1-5 |
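When aggregating pulse survey results, note that the "Concerns" item is worded negatively, so it should be reverse-scored before combining it with the other items. A minimal sketch, assuming the 1-5 scale and categories above; the reverse-scoring and averaging approach is an assumption about how the agency chooses to report a single figure per category:

```python
# Sketch: pulse survey aggregation on a 1-5 scale. The "Concerns" item is
# negatively worded, so it is reverse-scored (6 - rating) before averaging -
# an assumption about the reporting approach, not a mandated method.
NEGATIVE_ITEMS = {"Concerns"}

def category_means(responses: list[dict[str, int]]) -> dict[str, float]:
    categories = responses[0].keys()
    means = {}
    for cat in categories:
        scores = [6 - r[cat] if cat in NEGATIVE_ITEMS else r[cat] for r in responses]
        means[cat] = sum(scores) / len(scores)
    return means

sample = [
    {"Awareness": 4, "Preparedness": 3, "Support": 4, "Confidence": 3, "Value": 4, "Concerns": 2},
    {"Awareness": 5, "Preparedness": 4, "Support": 3, "Confidence": 4, "Value": 5, "Concerns": 3},
]
for cat, mean in category_means(sample).items():
    print(f"{cat}: {mean:.1f} / 5")
```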
Survey Schedule:
| Timing | Focus |
| Pre-launch (-4 weeks) | Baseline readiness |
| Launch (+2 weeks) | Early experience |
| Post-launch (+6 weeks) | Adoption progress |
| Sustainment (+3 months) | Embedding |
| Ongoing (quarterly) | Long-term sentiment |
7.3 Evaluation and Adjustment
Change Health Check:
| Indicator | Green | Amber | Red | Status |
| Leadership engagement | Active sponsorship | Passive support | Absent/negative | |
| Staff sentiment | Positive trend | Flat | Negative trend | |
| Adoption rate | On track | Slightly behind | Significantly behind | |
| Issue resolution | Timely resolution | Some delays | Major backlog | |
| Training completion | 90%+ completed | 70-90% completed | <70% completed | |
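For quantitative indicators, the RAG status can be derived directly from thresholds. A minimal sketch for the training completion row, using the 90% and 70% cut-offs from the table above (the other rows rely on qualitative judgment):

```python
# Sketch: RAG status for training completion, using the thresholds in the table above.
def training_completion_status(pct_completed: float) -> str:
    if pct_completed >= 90:
        return "Green"
    if pct_completed >= 70:
        return "Amber"
    return "Red"

for pct in (95, 82, 61):
    print(f"{pct}% completed -> {training_completion_status(pct)}")
```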
8. Templates and Checklists
8.1 Stakeholder Impact Assessment Template
| Stakeholder Group | Size | Current Role | Future Role | Impact Level | Key Concerns | Engagement Approach |
| | | | | H/M/L | | |
| | | | | | | |
8.2 Communication Plan Template
| Audience | Message | Channel | Timing | Owner | Status |
| | | | | | |
| | | | | | |
8.3 Training Plan Template
| Module | Audience | Duration | Delivery | Developer | Trainer | Date |
| | | | | | | |
| | | | | | | |
8.4 Resistance Log Template
| Date | Group/Individual | Resistance Behavior | Root Cause | Response | Status |
| | | | | | |
| | | | | | |
8.5 Change Readiness Checklist
Leadership:
- [ ] Executive sponsor identified and active
- [ ] Leadership team aligned
- [ ] Resources allocated
- [ ] Governance established
Communication:
- [ ] Key messages developed
- [ ] Communication plan approved
- [ ] FAQ prepared
- [ ] Channels identified
Capability:
- [ ] Training curriculum designed
- [ ] Training materials developed
- [ ] Trainers prepared
- [ ] Training scheduled
Support:
- [ ] Help desk prepared
- [ ] Champions identified and trained
- [ ] Escalation paths clear
- [ ] Feedback channels established
Stakeholders:
- [ ] Union consultation completed
- [ ] Staff informed
- [ ] Managers equipped
- [ ] Resistors addressed
9. Case Study: Successful AI Change
9.1 Background
A government agency implemented an AI system to assist with processing citizen applications.
9.2 Change Approach
| Element | Approach | Result |
| Early engagement | 6-month co-design with frontline staff | Staff felt ownership |
| Transparent communication | Monthly all-staff updates; open Q&A | Trust maintained |
| Job security commitment | No redundancies; focus on redeployment | Fear reduced |
| Phased rollout | Pilot with volunteer teams first | Issues caught early |
| Comprehensive training | 2-day hands-on training; ongoing support | High confidence |
| Feedback loops | Weekly improvement meetings | Continuous refinement |
9.3 Results
| Metric | Outcome |
| Adoption rate | 95% using AI within 3 months |
| Staff satisfaction | 4.1/5 (up from 3.2/5 pre-project) |
| Processing time | 40% reduction |
| Error rate | 25% reduction |
| Staff turnover | No increase (slight decrease) |
9.4 Lessons Learned
- Early co-design builds ownership
- Visible leadership sponsorship essential
- Job security commitments reduce anxiety
- Training investment pays off
- Ongoing feedback channels prevent issues festering
10. Appendices
Appendix A: Change Management Resources
| Resource | Description | Link |
| PROSCI ADKAR Model | Individual change model | prosci.com |
| Kotter's 8 Steps | Organizational change model | kotterinc.com |
| APS Change Management Guide | Public sector guidance | apsc.gov.au |
Appendix B: Sample Manager Talking Points
For team meetings:
"I want to talk about [AI system] that we'll be implementing. Here's what you need to know:
- Why: [Purpose and benefits]
- What it means for you: [Role impact]
- What's NOT changing: [Continuity elements]
- Timeline: [Key dates]
- Training: [What support you'll get]
- Questions: [Encourage questions]
I know change can be unsettling. I want you to know [job security commitment]. We're here to support you through this."
Appendix C: Glossary
| Term | Definition |
| Adoption | Active use of the new system/way of working |
| Change curve | Emotional stages people go through in change |
| Change saturation | When too much change overwhelms capacity |
| Champion | Staff member who supports and promotes change |
| Hypercare | Intensive support immediately after go-live |
| Resistance | Opposition or reluctance to change |
| Sponsorship | Leadership support and advocacy for change |