Comfortable Lies

Things We Tell Ourselves About AI Projects That Aren't True

This content is intentionally uncomfortable. It exists because the alternative - comfortable silence - causes more harm.

The most dangerous lies are the ones everyone agrees on.


How Comfortable Lies Work

Comfortable lies persist because:

  • They make everyone feel better
  • Challenging them is socially costly
  • They contain enough truth to be defensible
  • The people who know they're lies have no incentive to say so

In AI projects, comfortable lies delay failure, inflate expectations, and prevent honest course correction.

This document names them.


The Lies, One by One

Lie #1: We have executive sponsorship

What we mean: An executive said yes in a meeting once.

The uncomfortable truth:

  • Saying yes to a project is not the same as sponsoring it
  • Real sponsorship means protecting the project when it's in trouble
  • Real sponsorship means making time when you're busy
  • Real sponsorship means spending political capital
  • Most "executive sponsors" have 15 projects they've "sponsored"

How to test it:

  • Ask your sponsor to cancel a meeting to deal with a project issue
  • Ask your sponsor to call a peer to unblock something
  • Ask your sponsor to publicly defend a difficult decision
  • If they won't or can't, you don't have sponsorship - you have a signature

The real question: If this project gets hard, who will fight for it?

Lie #2: The technology is proven

What we mean: It worked somewhere else, probably in a demo.

The uncomfortable truth:

  • "Proven" in a tech company lab is not proven in government
  • "Proven" with clean data is not proven with your data
  • "Proven" at small scale is not proven at your scale
  • The vendor's "successful deployments" may have a very loose definition of success
  • Technology that works is not the same as technology that works here

How to test it:

  • Talk to actual users of the "proven" technology (not the vendor's references)
  • Ask about failures, not just successes
  • Ask how long it took to get to "proven"
  • Ask what they would do differently

The real question: What specifically makes us confident this will work in our environment?

Lie #3: We've managed the stakeholders

What we mean: We sent them some emails and had a few meetings.

The uncomfortable truth:

  • Informing is not engaging
  • Engaging is not aligning
  • Silence is not agreement
  • The stakeholders who'll cause you problems aren't the ones in your meetings
  • "Consultation" that doesn't change anything isn't consultation

How to test it:

  • Can you name someone who objected, and what you did about it?
  • Ask a skeptical stakeholder what they really think
  • Ask who wasn't consulted and why
  • Check if the engagement report includes anything negative

The real question: Who could kill this project, and are they on board?

Lie #4: The business case is solid

What we mean: The spreadsheet shows a positive ROI using the assumptions we chose.

The uncomfortable truth:

  • Business cases are usually exercises in justification, not analysis
  • The benefits are aspirational; the costs are underestimated
  • The assumptions are rarely stress-tested
  • The timeline is optimistic
  • The person who wrote it has an incentive for it to look good

How to test it:

  • Double the costs. Is it still viable?
  • Halve the benefits. Is it still viable?
  • Add 18 months to the timeline. Is it still viable?
  • What happens if you're wrong about the biggest assumption?

The real question: What would have to be true for this to be a bad investment?
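
This stress test is mechanical enough to script. Below is a minimal sketch, using entirely hypothetical numbers, of what doubling costs or halving benefits does to a typical business case:

```python
# Toy business-case stress test. All figures are hypothetical
# placeholders, not a real project's numbers.

BASE_COSTS = 4_000_000      # total estimated cost ($)
BASE_BENEFITS = 7_000_000   # total estimated benefit ($)

scenarios = {
    "as written":      (BASE_COSTS,     BASE_BENEFITS),
    "costs doubled":   (BASE_COSTS * 2, BASE_BENEFITS),
    "benefits halved": (BASE_COSTS,     BASE_BENEFITS / 2),
    "both":            (BASE_COSTS * 2, BASE_BENEFITS / 2),
}

for name, (cost, benefit) in scenarios.items():
    roi = (benefit - cost) / cost
    verdict = "still viable" if roi > 0 else "under water"
    print(f"{name:>16}: ROI {roi:+.0%} ({verdict})")
```

If the case only survives the first scenario, the spreadsheet is justification, not analysis.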

Lie #5: We'll iterate based on feedback

What we mean: We'll launch what we've already decided, then tweak the UI.

The uncomfortable truth:

  • Agile is often waterfall with sprints
  • "Iterate" usually means fix bugs, not change direction
  • By the time feedback arrives, it's too late to change fundamentals
  • Saying "we'll iterate" is a way to avoid making decisions now
  • Iteration requires willingness to throw things away - most projects won't

How to test it:

  • What would user feedback have to say for you to kill a major feature?
  • When did you last make a significant change based on feedback?
  • What decisions have you explicitly deferred to iteration?
  • How much budget is reserved for post-launch iteration?

The real question: What are we actually willing to change?

Lie #6: The data is ready

What we mean: The data exists. Somewhere.

The uncomfortable truth:

  • Exists ≠ Accessible ≠ Clean ≠ Usable ≠ Appropriate
  • Data preparation is usually 80% of the work in an ML project
  • "Ready" is estimated by people who haven't tried to use it yet
  • The data that exists may not be the data you need
  • Data owners may have a different view of "ready" than you do

How to test it:

  • Has anyone actually tried to build something with this data?
  • Can you do a one-week proof using the real data?
  • What percentage of records are complete, accurate, and current?
  • Who will fix data quality issues, and with what budget?

The real question: Have we tried, or are we assuming?
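
If you want to replace the assumption with a measurement, even a crude check beats none. Here is a minimal sketch using pandas; the file path, column names, and the two-year "current" threshold are all hypothetical:

```python
import pandas as pd

# Hypothetical extract of the "ready" data; swap in a real path.
df = pd.read_csv("claims_extract.csv", parse_dates=["last_updated"])

required = ["claim_id", "amount", "status", "last_updated"]

# Completeness: share of rows with no missing values in the fields you need.
complete = df[required].notna().all(axis=1).mean()

# Currency: share of rows touched recently (a crude stand-in for "current").
current = (df["last_updated"] > pd.Timestamp.now() - pd.DateOffset(years=2)).mean()

print(f"complete rows: {complete:.1%}")
print(f"updated in last 2 years: {current:.1%}")
```

An afternoon spent running something like this answers more than a quarter of workshops about data readiness.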

Lie #7: We've mitigated the risks

What we mean: We've listed the risks in a spreadsheet and written the word "mitigate" next to them.

The uncomfortable truth:

  • Identifying risks is not mitigating them
  • "Will continue to monitor" is not mitigation
  • Many mitigations are aspirational, not actual
  • The real risks often aren't on the list
  • Risk registers are rarely updated when new risks emerge

How to test it:

  • For your top 5 risks: what specifically has been done?
  • If the risk materialized tomorrow, what would you actually do?
  • When did you last add a new risk to the register?
  • Who is personally accountable for each mitigation?

The real question: If the worst happened, are we actually prepared?
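
Even the spreadsheet version of this test can be partly automated. A minimal sketch, with hypothetical register entries, that flags the two most common non-mitigations named above:

```python
# Hypothetical risk register entries; the point is the smell test.
risks = [
    {"risk": "Data quality worse than assumed",
     "mitigation": "Will continue to monitor", "owner": None},
    {"risk": "Key engineer leaves",
     "mitigation": "Cross-train a second engineer by March", "owner": "J. Chen"},
]

for r in risks:
    flags = []
    if r["owner"] is None:
        flags.append("no accountable owner")
    if "monitor" in r["mitigation"].lower():
        flags.append("'monitoring' is not mitigation")
    if flags:
        print(f"{r['risk']}: " + "; ".join(flags))
```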

Lie #8: The timeline is achievable

What we mean: The timeline is what we were told it had to be.

The uncomfortable truth:

  • Timelines are usually political, not technical
  • The timeline was set before anyone understood the work
  • Buffer time has been removed to look efficient
  • Dependencies on other teams aren't in the timeline
  • Everyone knows it's optimistic; no one wants to say so

How to test it:

  • What similar projects have achieved this timeline?
  • What's the buffer for unknowns?
  • Have the people doing the work validated the timeline?
  • What happens to the date if one major assumption is wrong?

The real question: What would have to go perfectly for us to hit this date?
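
One way to make that question concrete is a toy Monte Carlo run over the plan. The task estimates below are hypothetical; the point is the gap between the quoted date and the simulated P50/P90:

```python
import random

# Hypothetical per-task estimates in weeks: (best case, likely, worst case).
tasks = {
    "data access & cleaning": (6, 12, 30),
    "model build":            (4, 8, 16),
    "integration":            (4, 10, 24),
    "change management":      (6, 12, 26),
}

def simulate_total() -> float:
    # Triangular distributions capture "usually X, occasionally much worse".
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

runs = sorted(simulate_total() for _ in range(10_000))
quoted = sum(mode for _, mode, _ in tasks.values())  # the plan adds up "likely" only

print(f"quoted plan: {quoted} weeks")
print(f"P50: {runs[5000]:.0f} weeks, P90: {runs[9000]:.0f} weeks")
```

If the quoted date sits below the simulated P50, hitting it requires everything to break your way.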

Lie #9: Change management is in the plan

What we mean: There's a slide called "Change Management" in the deck.

The uncomfortable truth:

  • Change management is often an afterthought
  • Budget for change management is often first to be cut
  • "Training" is not change management
  • The people most affected usually learn last
  • Resistance is treated as irrational rather than information

How to test it:

  • What's the change management budget as % of total?
  • Who is the dedicated change manager?
  • Have you talked to the people whose jobs will change?
  • What will you do if people resist?

The real question: Do the people whose behavior needs to change know and agree?

Lie #10: We've learned from previous projects

What we mean: We put "lessons learned" in a document last time.

The uncomfortable truth:

  • Lessons learned documents are rarely read
  • The people who learned the lessons have moved on
  • Institutional memory is shockingly short
  • The same mistakes are repeated 2-3 years later
  • "Lessons learned" sessions are often blame-avoidance exercises

How to test it:

  • Can you name three specific things you're doing differently based on past failures?
  • Did anyone on this team work on the project you're learning from?
  • Have you talked to people who lived through the past failures?
  • What mistakes from previous projects could you repeat?

The real question: What would someone who failed at this before tell us?

Lie #11: This is different

What we mean: We want it to be different.

The uncomfortable truth:

  • Most AI projects face the same problems
  • Data quality. Stakeholder resistance. Scope creep. Overpromising.
  • "This is different" is usually wishful thinking
  • The differences people cite are usually superficial
  • The problems are structural, not circumstantial

How to test it:

  • What specifically is different, and does it address the usual failure modes?
  • Have you talked to people who did something similar?
  • What would have to be true for this to fail in the usual ways?

The real question: Why would the patterns that affect every other project not affect us?

Lie #12: The vendor will handle it

What we mean: It's in the contract. Probably.

The uncomfortable truth:

  • Vendors optimize for their success, not yours
  • Contract terms are enforced in court, not in reality
  • "Vendor will provide support" ≠ problems get solved
  • Vendors know more about how to win disputes than you do
  • When it fails, it's still your problem

How to test it:

  • What specifically is the vendor responsible for?
  • What recourse do you have if they don't deliver?
  • What's their incentive to exceed expectations vs. meet minimum?
  • Have you talked to their other customers?

The real question: If the vendor does exactly what's required and nothing more, do we succeed?


How to Stop Lying

Step 1: Create Safety for Truth

  • Don't punish people who raise uncomfortable truths
  • Thank people for skepticism
  • Reward early warnings, even false alarms
  • Make "I don't know" an acceptable answer

Step 2: Ask Disconfirming Questions

Instead of "why will this work?" ask "why might this fail?" Instead of "what's the plan?" ask "what's not in the plan?" Instead of "are we on track?" ask "what would 'off track' look like?"

Step 3: Bring in Outsiders

  • People without incentive to believe the lies
  • Fresh eyes who will ask obvious questions
  • Devil's advocates with permission to be annoying

Step 4: Document Predictions

Write down what you believe will happen. Check later. Calibrate your truth-telling over time.
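
This doesn't need tooling; a dated, append-only log you can't quietly edit is enough. A minimal sketch, with hypothetical entries, of what "write it down and check later" looks like:

```python
import json, datetime

# Append-only prediction log; the entries below are hypothetical examples.
def record(path: str, claim: str, confidence: float, check_by: str) -> None:
    entry = {
        "date": datetime.date.today().isoformat(),
        "claim": claim,
        "confidence": confidence,   # how sure you are, 0.0-1.0
        "check_by": check_by,       # when to score it true/false
        "outcome": None,            # filled in later, honestly
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record("predictions.jsonl", "Pilot live by Q3 with 5 caseworkers", 0.7, "2026-10-01")
record("predictions.jsonl", "Data prep done in 6 weeks", 0.5, "2026-03-15")
```

When the check-by date arrives, score the claim and compare your hit rate to your stated confidence. That gap is your calibration.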

Step 5: Pre-Mortems

Before starting, ask: "It's 12 months from now and this project has failed. What happened?"

The lies will surface.


A Taxonomy of Self-Deception

Type                        | Example                           | Root Cause
----------------------------|-----------------------------------|-------------------------------
Optimism bias               | "The timeline is achievable"      | We want it to be true
Social pressure             | "We have stakeholder buy-in"      | Disagreeing is awkward
Sunk cost                   | "We're too far in to stop"        | Admitting failure hurts
Groupthink                  | "Everyone agrees this will work"  | Dissent is discouraged
Cargo cult                  | "We followed best practice"       | Process confused with outcome
Plausible deniability       | "We managed the risks"            | CYA > success
Abstraction                 | "The data is ready"               | Haven't actually checked
Diffusion of responsibility | "The vendor will handle it"       | Someone else's problem

The Ultimate Uncomfortable Truth

Most AI projects fail.

Not "don't achieve optimal outcomes" - fail. Don't deliver value. Get cancelled. Get deprioritized. Quietly abandoned.

Every project team believes they're the exception.

Almost none of them are.

The comfortable lies are the mechanism by which doomed projects continue until it's too late.

The first step to being an actual exception is to stop lying.


"The truth will set you free. But first, it will make you miserable."

Better miserable early than miserable after two years and ten million dollars.