# The Consequence Simulator
## Uncomfortable Reading

- **First:** What you intended (Immediate)
- **Second:** What emerges from the first (Weeks to months)
- **Third:** What emerges from the second (Months to a year)
- **Fourth:** Organizational/cultural change (Years)
## Why This Exists
Most AI project planning focuses on what you're trying to achieve. The business case. The benefits. The happy path.
This simulator forces you to think about what happens after. Not the success scenario in your slide deck—the actual, messy, human, political, organizational reality that unfolds when your AI system meets the real world.
Because consequences cascade. And they cascade in directions you didn't anticipate, at speeds you didn't expect, hitting people you forgot existed.
## The Uncomfortable Truth About Consequences
### First-Order Consequences
These are the ones in your business case. "The AI will process claims faster." You planned for these. You probably got them right.
### Second-Order Consequences
These are the ones that emerge from the first. "Claims staff feel threatened and start working to rule." You might have thought about these. You probably underestimated them.
### Third-Order Consequences
These are the ones that emerge from the second. "Union files grievance, media picks it up, Minister gets questions in Parliament." You didn't see these coming. But they're coming anyway.
### Fourth-Order Consequences
These reshape your organization in ways no one anticipated. "Agency becomes risk-averse about all technology projects for the next five years. Innovation dies. Good people leave." This is the real legacy of your project—not the AI, but the organizational scar tissue.
## How to Use This Simulator
### The Documents
| Document | Purpose |
|---|---|
| `first-order.md` | The intended consequences and their shadows |
| `ripple-effects.md` | How consequences cascade through systems |
| `stakeholder-cascades.md` | Impact on every person affected |
| `uncomfortable-futures.md` | Scenarios no one wants to discuss |
### The Tools
Note: Interactive tools for consequence simulation are under development. Use the documents above to manually work through consequence chains.
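In the meantime, a cascade can be recorded by hand. Below is a minimal sketch, in Python, of one way to do that using the order/timeframe framework shown in the next section. The `Consequence` class and all names in it are illustrative assumptions, not part of any shipped tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Consequence:
    order: int        # 1 = intended, 2 = emergent, 3 = surprising, 4 = organizational/cultural
    timeframe: str    # "immediate", "weeks to months", "months to a year", "years"
    description: str
    triggered_by: Optional["Consequence"] = None  # the upstream consequence this emerged from

# The claims-processing cascade from this page, recorded as a chain.
first = Consequence(1, "immediate", "AI processes claims faster")
second = Consequence(2, "weeks to months", "Claims staff feel threatened and work to rule", triggered_by=first)
third = Consequence(3, "months to a year", "Union grievance; Minister questioned in Parliament", triggered_by=second)
fourth = Consequence(4, "years", "Agency turns risk-averse about all technology projects", triggered_by=third)

# Walk the chain back from the long-term damage to the original decision.
node: Optional[Consequence] = fourth
while node is not None:
    print(f"[order {node.order}, {node.timeframe}] {node.description}")
    node = node.triggered_by
```

The useful discipline here is not the code but the `triggered_by` field: every consequence you write down must name the upstream consequence it emerged from, which is exactly the manual exercise the documents above ask of you.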
## The Core Framework

```mermaid
flowchart TB
    DEC([<strong>YOUR DECISION</strong>]) --> O1
    O1["<strong>FIRST ORDER</strong><br/><em>(Visible)</em><br/>What you intended<br/>Timeframe: Immediate"]
    O2["<strong>SECOND ORDER</strong><br/><em>(Predictable)</em><br/>What emerges from first<br/>Timeframe: Weeks to months"]
    O3["<strong>THIRD ORDER</strong><br/><em>(Surprising)</em><br/>What emerges from second<br/>Timeframe: Months to a year"]
    O4["<strong>FOURTH ORDER</strong><br/><em>(Invisible)</em><br/>Organizational/cultural change<br/>Timeframe: Years"]
    LEG["<strong>THE LEGACY</strong><br/><em>(Permanent)</em><br/>What people remember<br/>Timeframe: Careers"]
    O1 --> O2 --> O3 --> O4 --> LEG
    style DEC fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style O1 fill:#c8e6c9,stroke:#388e3c,stroke-width:2px
    style O2 fill:#fff9c4,stroke:#f9a825,stroke-width:2px
    style O3 fill:#ffcc80,stroke:#ef6c00,stroke-width:2px
    style O4 fill:#ef9a9a,stroke:#c62828,stroke-width:2px
    style LEG fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
```

## The Questions This Simulator Asks
Questions you should answer before any major AI decision (a sketch for working through them as a written checklist follows this list):

- **Who gets hurt?** Not inconvenienced. Hurt. Whose job, identity, or livelihood is threatened?
- **Who wasn't consulted?** Who will feel blindsided? Who has power you forgot about?
- **What happens when it breaks?** Not if. When. At the worst possible moment. What then?
- **What does the newspaper headline say?** Both the good one and the bad one. Which is more likely?
- **What will people remember in five years?** Not the metrics. The story. What story are you creating?
- **Who gets blamed?** When (not if) something goes wrong, who is left holding the bag? Is it you?
- **What becomes impossible afterward?** What doors close? What trust is spent? What political capital is burned?
- **What would your successor inherit?** The technical debt? The organizational trauma? The broken relationships?
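A minimal sketch, in the same illustrative vein (the `DecisionReview` structure and its names are hypothetical, not an official artifact), of treating these eight questions as a blocking checklist where a blank answer means the question was dodged rather than resolved:

```python
from dataclasses import dataclass, field

# The eight pre-decision questions from the list above.
QUESTIONS = [
    "Who gets hurt?",
    "Who wasn't consulted?",
    "What happens when it breaks?",
    "What does the newspaper headline say?",
    "What will people remember in five years?",
    "Who gets blamed?",
    "What becomes impossible afterward?",
    "What would your successor inherit?",
]

@dataclass
class DecisionReview:
    """Written answers recorded before committing to a major AI decision."""
    decision: str
    answers: dict = field(default_factory=dict)  # question -> written answer

    def unanswered(self) -> list:
        # Anything blank or missing still needs a real answer before you proceed.
        return [q for q in QUESTIONS if not self.answers.get(q, "").strip()]

review = DecisionReview(decision="Automate claims triage")
review.answers["Who gets hurt?"] = "Claims assessors whose roles shrink or disappear."
print(review.unanswered())  # the seven questions you still owe an answer to
```

The design choice worth copying is that the default state is "unanswered": the checklist surfaces what you have avoided writing down, rather than letting silence pass for consent.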
## Navigation by Scenario Type
### If you're launching an AI that affects jobs:

- Start with `stakeholder-cascades.md`, specifically the workforce impact section
- Then `ripple-effects.md`, the union/industrial relations chains
- Then `uncomfortable-futures.md`, the "it went wrong" scenarios
### If you're deploying AI that makes decisions about people:

- Start with `first-order.md`, the fairness and bias sections
- Review the ripple effects and stakeholder cascades
- Consider the uncomfortable futures
### If you're implementing through a vendor:

- Start with `stakeholder-cascades.md`, the vendor dependency chains
- Then `ripple-effects.md`, the "vendor leaves" scenarios
- Review the uncomfortable future scenarios
### If you're under political pressure to deliver:

- Start with `uncomfortable-futures.md`, the rushed deployment scenarios
- Review all consequence cascades carefully
- Consider the long-term implications
## A Warning
This content is designed to be uncomfortable. It will surface possibilities you'd rather not consider. It will make you paranoid about decisions you've already made.
That's the point.
The goal is not to paralyze you with fear. It's to ensure that when consequences arrive—and they will—you're not surprised. You've thought about it. You've prepared. You've made conscious choices about which risks to accept.
Consequences are not punishments for bad decisions. They're the natural result of all decisions. The question is whether you saw them coming.
"The best time to think about consequences was before you started. The second best time is now."