Uncomfortable Futures¶
Why These Scenarios Exist¶
Strategic planning exercises imagine optimistic futures. Risk assessments identify bad things that might happen. Neither does what this document does: paint vivid, specific, uncomfortable pictures of how things could go wrong.
These scenarios are:

- Plausible: Based on patterns that have occurred elsewhere
- Specific: Detailed enough to be viscerally uncomfortable
- Instructive: Each reveals a failure mode worth preventing
- Discussable: Easier to discuss hypotheticals than real failures
Use them for pre-mortems, stress testing, stakeholder conversations, or simply to ensure you've considered the dark paths.
Scenario 1: The Slow-Motion Robodebt¶
Year One: Success¶
The AI system is deployed to identify "high-risk" cases in a government benefit program. Early results are impressive: processing time is cut by 40%, and the system generates more "referrals for review" than manual staff ever did. The Minister announces the initiative at Senate Estimates. The project team celebrates. Performance bonuses are distributed.
Year Two: The Signal¶
Complaints increase, but they're attributed to "users who would have complained anyway." Legal Aid offices notice more clients with similar issues. An advocate group begins documenting cases. A junior staffer raises concerns but is told "the numbers don't support that."
Year Three: The Pattern¶
A journalist obtains internal documents showing staff flagged issues a year earlier. The story runs: "AI System Wrongly Flagged Thousands, Internal Warnings Ignored." The opposition demands answers. The project sponsor has already moved to another agency.
Year Four: The Reckoning¶
Senate inquiry reveals:

- 47,000 citizens incorrectly flagged
- 12,000 had benefits reduced or cancelled
- Average time to resolution: 8 months
- 3 documented suicides among affected citizens
- Internal emails show concerns were raised and dismissed
- Cost of remediation: $180 million
- Original project budget: $12 million
Year Five: The Legacy¶
Royal Commission recommends:

- Mandatory human review for all AI decisions affecting benefits
- New oversight body for government AI
- Personal liability for senior executives
- Compensation scheme for affected citizens
Three executives face code of conduct investigations. Two face criminal referral. The agency becomes a case study in AI ethics courses globally.
The Question for You¶
Is your AI system making decisions about vulnerable people? What's your Year Two signal? Who's raising the concerns you're dismissing?
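One way to make the "Year Two signal" concrete is to decide, before launch, which numbers would count as a warning and who must act when they move. Below is a minimal sketch in Python, assuming you log flagged cases, overturned decisions, and complaints each month; the field names and thresholds are illustrative, not prescriptive.

```python
from dataclasses import dataclass

@dataclass
class MonthlyStats:
    """Illustrative monthly counts pulled from your case-management system."""
    flagged: int        # cases the AI referred for review
    overturned: int     # flagged cases later found to be incorrect
    complaints: int     # complaints that cite the automated decision

def year_two_signal(history: list[MonthlyStats],
                    overturn_threshold: float = 0.05,
                    complaint_growth: float = 1.5) -> list[str]:
    """Return warnings that must reach a named human decision-maker."""
    warnings = []
    latest = history[-1]
    if latest.flagged and latest.overturned / latest.flagged > overturn_threshold:
        warnings.append(
            f"Overturn rate {latest.overturned / latest.flagged:.1%} exceeds "
            f"{overturn_threshold:.0%}: review the model, not the complainants."
        )
    # Compare the latest month's complaints against the historical average.
    baseline = sum(m.complaints for m in history[:-1]) / max(len(history) - 1, 1)
    if baseline and latest.complaints > complaint_growth * baseline:
        warnings.append(
            f"Complaints ({latest.complaints}) are {latest.complaints / baseline:.1f}x "
            "the historical average: investigate before attributing them to "
            "'users who would have complained anyway'."
        )
    return warnings
```

The specific thresholds matter less than the commitment: agree on them in advance, in writing, so that "the numbers don't support that" cannot be the reflexive answer in Year Two.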
Scenario 2: The Vendor Hostage¶
The Partnership¶
A prestigious global AI vendor is engaged to deliver a transformative AI capability. The contract is celebrated as a model of innovation. The vendor provides proprietary technology, proprietary training, proprietary everything.
Year One: The Honeymoon¶
Everything works. The vendor is responsive. Custom features are delivered. The agency becomes a reference site. Internal capability to manage the system is minimal—"the vendor handles it."
Year Two: The Acquisition¶
The vendor is acquired by a larger company. Your contract is "honored," but:

- Your account manager is replaced by someone managing 40 accounts
- Response times increase from hours to weeks
- The roadmap no longer includes features you need
- Pricing for the next contract cycle is "under review"
Year Three: The Squeeze¶
Renewal negotiations reveal:

- Price increase: 340%
- Alternative vendors cannot use your data (proprietary format)
- Your staff cannot maintain the system (proprietary training)
- Estimated cost to switch: $45 million over 3 years
- Estimated cost to stay: $38 million over 3 years
- Estimated cost to turn off with no replacement: service collapse
You're a hostage.
Year Four: The Exodus¶
Your best AI staff leave—"there's no real work here, just vendor management." The vendor's quality continues to decline. A security audit reveals vulnerabilities the vendor won't prioritize. The system is now critical infrastructure you don't control.
Year Five: The Reckoning¶
A new government announces a review of all major vendor contracts. Your contract is cited as an example of "previous government waste." The current Minister was not involved but must answer for it. There is no good option.
The Question for You¶
What happens if your vendor is acquired tomorrow? Who else can operate your AI system? What would it cost to switch?
Scenario 3: The Discrimination Time Bomb¶
The Model¶
An AI system is deployed to assist with resource allocation—prioritizing which cases get attention, which applications are fast-tracked, which citizens receive enhanced service.
The model is "objective." It uses only operational factors: case complexity, processing time, resource availability. No demographic data. The team is proud of this.
Year One: Efficiency¶
Resources are allocated more efficiently. Wait times decrease for "straightforward" cases. The metrics look good. No one examines who is in the "straightforward" category.
Year Two: The Analysis¶
A researcher requests data under FOI for an academic study. Analysis reveals:

- Citizens from certain postcodes are 3x more likely to be deprioritized
- Citizens with non-English names wait 40% longer on average
- Aboriginal and Torres Strait Islander citizens are 2.5x more likely to be flagged for "complex review"
- None of these factors are in the model; they're proxies
Year Three: The Story¶
"Government AI Discriminates Against Indigenous Australians, Migrants, Poor" - the headline runs in every major outlet. The response that "the model doesn't use those factors" makes it worse—the discrimination is invisible even to the operators.
Year Four: The Reckoning¶
The Race Discrimination Commissioner's investigation finds:

- Indirect discrimination likely occurred
- The agency failed in its positive duty to prevent discrimination
- Remediation required for affected citizens
- External audit of all AI systems mandated
The AI system is turned off. Manual processing resumes. Years of efficiency gains are reversed.
Year Five: The Legacy¶
The agency becomes the national example of algorithmic discrimination. Every future AI proposal government-wide must address "the [Agency X] risk." A generation of AI projects is delayed or cancelled by association.
The Question for You¶
Have you tested your "objective" model for disparate impact? What proxies might be hiding in your features? Who are you systematically disadvantaging without knowing it?
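The first of these questions has a concrete, inexpensive answer. A disparate impact check does not require the model to use protected attributes; it requires joining your decision log to group information held for evaluation purposes and comparing adverse-outcome rates. Here is a minimal sketch using pandas, with hypothetical column names (a boolean `deprioritized` outcome and a group column such as postcode band or language spoken at home).

```python
import pandas as pd

def disparate_impact(decisions: pd.DataFrame,
                     group_col: str,
                     outcome_col: str = "deprioritized",
                     reference_group=None) -> pd.DataFrame:
    """Compare adverse-outcome rates across groups.

    `decisions` is assumed to hold one row per case, a boolean `outcome_col`,
    and a `group_col` joined from data held for evaluation only; the model
    itself never sees it.
    """
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ref = rates[reference_group] if reference_group is not None else rates.min()
    report = pd.DataFrame({
        "adverse_rate": rates,
        "ratio_vs_reference": rates / ref,   # the "3x more likely" figure
        "cases": decisions.groupby(group_col).size(),
    })
    # The four-fifths rule is a common screening heuristic, not a legal test.
    report["flag"] = report["ratio_vs_reference"] > 1.25
    return report.sort_values("ratio_vs_reference", ascending=False)
```

If the ratios sit well above 1.0 for a model that "doesn't use those factors," you are looking at the proxy problem from Year Two of this scenario, and it is far cheaper to find it this way than through an FOI request.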
Scenario 4: The Security Breach¶
The Architecture¶
The AI system requires access to sensitive citizen data. For efficiency, data from multiple sources is consolidated. For performance, security controls are streamlined. The security team raised concerns but were told "the benefits outweigh the risks."
The Breach¶
Eighteen months after deployment, anomalous activity is detected. Investigation reveals:

- Unauthorized access occurring for 7 months
- 2.3 million citizen records potentially compromised
- Including: tax information, health conditions, family relationships, benefit history
- Attack vector: vulnerability in the AI model update process
- The breach exploited a security concern documented but not addressed
The Response¶
Mandatory notification to affected citizens. Credit monitoring offered. Parliamentary questions. Media coverage. Internal investigation.
But the worst is yet to come.
The Cascade¶
The compromised data appears on dark web forums. Citizens report:

- Identity theft
- Targeted scams using personal information
- Blackmail attempts using sensitive health information
- Family violence situations worsened by address disclosure
Three months later, a citizen is murdered by an estranged partner who obtained their address from the leaked data.
The Reckoning¶
Criminal investigation. Civil litigation. Class action. Coronial inquest. Royal Commission.
Total direct costs: $450 million
Reputational damage: Incalculable
Individual harm: Ongoing and permanent
The Legacy¶
The AI system is not rebuilt. The consolidation architecture is abandoned. Every future system requires security review that adds 6-12 months to timelines. The agency is fundamentally changed.
The Question for You¶
What's the worst thing that could happen if your data were breached? Whose lives could be endangered? What security concerns have you documented but not addressed?
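The breach in this scenario entered through the model update process, so one documented-but-unaddressed concern worth closing early is artifact integrity: refuse to load any model file whose hash does not match a manifest maintained outside the deployment pipeline. The sketch below is a minimal illustration, with hypothetical file paths and manifest format; it is one control, not a substitute for a security review.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the artifact in chunks so large model files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_update(artifact: Path, manifest: Path) -> None:
    """Refuse to deploy an artifact that is missing from, or differs from, the manifest.

    The manifest is assumed to be a JSON mapping of {filename: expected_sha256}
    maintained and approved outside the pipeline being verified.
    """
    expected = json.loads(manifest.read_text())
    recorded = expected.get(artifact.name)
    if recorded is None:
        raise RuntimeError(f"{artifact.name} is not in the approved manifest")
    actual = sha256_of(artifact)
    if actual != recorded:
        raise RuntimeError(f"hash mismatch for {artifact.name}: "
                           f"expected {recorded}, got {actual}")

# Example call (hypothetical paths):
# verify_model_update(Path("models/risk_model_v7.bin"), Path("release_manifest.json"))
```

It will not stop every attack, but it turns a silent compromise of the update path into a loud failure.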
Scenario 5: The Minister's Announcement¶
The Pressure¶
The Minister wants to announce something big in the budget. AI is popular. Your team is told to have something ready. The timeline is political, not technical.
The Commitment¶
The Minister announces a "new AI capability that will transform [service area]." The announcement is specific: "operational by [date]." The date is 8 months away. Your honest estimate was 18 months.
The Scramble¶
To meet the date:

- Testing is abbreviated
- The pilot is shortened
- Change management is "Phase 2"
- Documentation is "good enough"
- Security review is "conditional"
- Staff training is "just-in-time"
The Launch¶
The system launches on time. For a press conference. Behind the scenes:

- Staff don't know how to use it
- Edge cases crash the system
- Citizens receive incorrect information
- The workarounds become permanent
Year One: The Pretense¶
The system "works" in the sense that it exists. Metrics are carefully chosen to show success. Problems are attributed to "user error" and "implementation challenges." Anyone who raises concerns is told "the Minister has announced this."
Year Two: The Reality¶
Internal audit reveals:

- 30% of AI outputs require manual correction
- Staff time has increased, not decreased
- Citizen complaints have doubled
- The original business case benefits were never achieved
- $23 million has been spent on something that makes things worse
Year Three: The Quiet Death¶
A new Minister is briefed that the AI initiative "has not achieved expected benefits." It is quietly defunded. No announcement. No accountability. No lessons learned. The careers of people who raised concerns remain damaged.
The Question for You¶
Is your timeline political or technical? What are you cutting to meet an announcement date? Who will be blamed when reality catches up with rhetoric?
Scenario 6: The Ethical Cascade¶
The Decision¶
Your AI system works well for its intended purpose. Leadership sees an opportunity: "Could we use it for [related but different purpose]?" The answer is technically yes. The question of whether you should is not deeply examined.
The Extension¶
The system is extended to a new use case. This use case involves higher-stakes decisions affecting people's liberty, livelihood, or family relationships. The model was not designed for this. But it "works."
The Normalization¶
Success in the second use case leads to a third. And a fourth. Each extension is small. Each extension is approved. The system is now making decisions its designers never imagined.
The Incident¶
An edge case emerges in the fourth use case—a situation the model handles badly because it was never designed for this context. A citizen is seriously harmed. Investigation reveals the chain of extensions, each individually approved, collectively reckless.
The Media Narrative¶
"AI System Used for Purpose Never Intended, Citizen Pays the Price"
The story isn't about one decision—it's about a culture that kept saying yes without asking why. Each approval is examined. Each approver is named. The incremental nature of the problem makes everyone culpable.
The Reckoning¶
The entire AI program is shut down—not just the problematic use case, but the original, beneficial use case too. The ethical failure is seen as systemic, not isolated.
The Question for You¶
What use cases are you being pressured to extend to? Where's the line you won't cross? Who's responsible for saying no?
Scenario 7: The Generational Harm¶
The Model¶
An AI system is deployed to make predictive assessments: identifying children at risk, predicting recidivism, assessing future benefit needs. The model uses historical data to predict future outcomes.
The Problem¶
Historical data reflects historical decisions. Those decisions were made by a system with known biases. The AI learns to replicate those biases and calls them predictions.
Children from certain families are flagged for intervention. Young people from certain backgrounds are predicted to reoffend. Citizens with certain histories are assumed to need less.
The Feedback Loop¶
The predictions become self-fulfilling:

- Children flagged for intervention are treated differently
- Young people predicted to reoffend are supervised more heavily, caught more often
- Citizens assumed to need less receive less
The next generation of data confirms the predictions. The model becomes more confident. The bias deepens.
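The mechanism is easy to demonstrate with a few lines of arithmetic. In the toy simulation below (all numbers invented), two groups have identical true incident rates, the model inherits a slightly higher risk score for one group, supervision is allocated in proportion to that score, and each generation retrains on what supervision detects. The disparity reproduces itself in every generation of data, so the data can never supply the evidence needed to correct it; add any compounding effect, such as the accumulated records described above, and the gap widens rather than merely persists.

```python
def simulate_feedback(generations: int = 6) -> None:
    """Toy model of a self-fulfilling prediction.

    Groups A and B have the SAME true incident rate. B inherits a slightly
    higher predicted risk from historical data. A fixed supervision budget is
    allocated in proportion to predicted risk, detections are proportional to
    supervision, and the model retrains on detections. All numbers are made
    up; only the direction of the effect matters.
    """
    true_rate = {"A": 0.10, "B": 0.10}        # reality: no difference
    predicted_risk = {"A": 0.10, "B": 0.15}   # inherited bias from history
    budget = 1.0                              # total supervision capacity

    for gen in range(1, generations + 1):
        total_risk = sum(predicted_risk.values())
        supervision = {g: budget * r / total_risk for g, r in predicted_risk.items()}
        # You only detect what you supervise, so detections mirror supervision,
        # not reality.
        detected = {g: true_rate[g] * supervision[g] for g in true_rate}
        # "Retraining": next generation's risk scores come from detected rates.
        total_detected = sum(detected.values())
        predicted_risk = {g: d / total_detected for g, d in detected.items()}
        print(f"gen {gen}: predicted risk B/A = "
              f"{predicted_risk['B'] / predicted_risk['A']:.2f} "
              f"(true ratio is 1.00)")

simulate_feedback()
```

Running it prints a predicted-risk ratio of 1.50 generation after generation, while the true ratio stays at 1.00: the biased score looks vindicated by every new batch of data it helped create.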
Year Ten: The Discovery¶
A new government commissions a review of welfare and justice outcomes. Analysis reveals:

- Two generations of a community have been systematically disadvantaged
- The AI system's predictions were self-fulfilling prophecies
- Entire family lines have been marked by a model trained on biased history
- Reversing the damage would take another generation
The Question for You¶
What historical biases are encoded in your training data? Whose grandchildren are affected by decisions you're making today? What feedback loops are you creating?
Using These Scenarios¶
For Pre-Mortems¶
Before your project launches, convene your team and ask: "It's [year] and our project is a case study in [failure mode]. What happened?"
Work backward from each scenario to identify what would have to fail.
For Stakeholder Conversations¶
Use scenarios to make abstract risks concrete: "Let me describe a specific situation that could occur..."
People respond to stories more than statistics.
For Risk Assessment¶
Each scenario illuminates a failure mode:

- Scenario 1: Accumulated harm at scale
- Scenario 2: Vendor dependency
- Scenario 3: Invisible discrimination
- Scenario 4: Security failure
- Scenario 5: Political pressure
- Scenario 6: Scope creep
- Scenario 7: Generational impact
Map your project to the scenarios that apply.
For Decision-Making¶
When facing a difficult choice, ask: "Which scenario does this decision make more or less likely?"
If a decision increases the probability of an uncomfortable future, proceed with extreme caution.
The Scenario You're Most Afraid Of¶
The scenario you most resisted reading is probably the one that applies to you.
Return to it. Read it again. Ask what would have to be true for it to be your future. Then ask what you're going to do about it.
"The future is not written. But the patterns that write it are visible to anyone willing to look."
Which future are you building?