
About the Humans

Forbidden Questions

Questions About the People Affected
"We optimized for efficiency. We forgot to optimize for humanity."

The Fundamental Questions

1. Who is actually affected by this?

The forbidden version: "Not the abstract user persona—the real people whose lives change because of our AI."

What you'll hear instead: - "Citizens will benefit from improved service" - "Stakeholders have been engaged" - "User needs have been considered"

What to probe: - Who are the specific human beings affected? - What's at stake for them? (Money? Freedom? Safety? Dignity?) - Can you picture a specific person and what happens to them? - Have you met them? Talked to them? Listened to them? - Do they know this is happening?

Why it matters: Abstract "users" don't suffer. Real people do. If you can't picture the human impact, you're not thinking about it honestly.


2. Did anyone ask them?

The forbidden version: "Not 'stakeholder consultation'—did the actual affected people have genuine input?"

What you'll hear instead: - "Extensive consultation was conducted" - "Focus groups informed design" - "We engaged representative groups"

What to probe: - Did the actual affected people (not their representatives) participate? - Was their input early enough to change design, or just validate decisions? - Were they informed enough to understand what they were asked about? - Were they asked about what matters to them, or what we wanted to know? - What did they say, and what did we do about it?

Why it matters: Consultation theater is not consent. Most "engagement" happens after fundamental decisions are made. People affected by AI often don't know they're affected, let alone have input.


3. Who can't speak for themselves?

The forbidden version: "Who is affected but unable to participate in any consultation?"

What you'll hear instead: - "We engaged diverse stakeholders" - "Representative groups were included" - "All voices were heard"

What to probe: - Children affected by family-related AI decisions? - People with cognitive impairments? - Deceased people whose data trains models? - Future people who inherit today's decisions? - Incarcerated, hospitalized, or otherwise unable to participate? - People who don't know they're affected?

Why it matters: The most vulnerable often can't advocate for themselves. Representative groups have their own interests. Some affected people cannot be consulted. How are they protected?


4. What's at stake for them?

The forbidden version: "What is the worst thing that happens to a real person when this goes wrong?"

What you'll hear instead: - "Error rates are acceptable" - "Appeals processes exist" - "Impacts are manageable"

What to probe: - Can someone lose their income? Their housing? Their children? - Can someone be wrongly accused? Investigated? Imprisoned? - Can someone miss essential services? Healthcare? Support? - Can someone be traumatized, even if the error is eventually corrected? - What's the realistic worst case for a specific person?

Why it matters: "Acceptable error rate" is acceptable to whom? Not to the person experiencing the error. Understanding stakes means understanding impact on actual lives.


The Dignity Questions

5. How does this make people feel?

The forbidden version: "Beyond functionality—what's the emotional experience of being processed by this system?"

What you'll hear instead: - "User experience has been designed" - "The interface is intuitive" - "We conducted usability testing"

What to probe: - Do people feel respected or processed? - Do people feel understood or categorized? - Do people feel they have agency or are they subjects? - Do people feel heard or ignored? - Is the experience dignifying or dehumanizing?

Why it matters: Citizens interacting with government AI have feelings. They're not just transactions. Efficiency that dehumanizes is not success.


6. Is this Kafkaesque?

The forbidden version: "Will people feel trapped in an incomprehensible system they can't navigate or challenge?"

What you'll hear instead: - "Processes are clearly documented" - "Information is available" - "Support channels exist"

What to probe: - Can an ordinary person understand why a decision was made about them? - Can they challenge it meaningfully, not just procedurally? - Do they have access to the information they'd need to appeal? - Is the process designed for the system's convenience or the citizen's? - Would you be comfortable being processed by this?

Why it matters: Kafka's work endures because bureaucratic dehumanization is real. AI can make it worse—decisions made by systems no one understands. This is not progress.


7. What happens to their story?

The forbidden version: "Is there any room for a human being to explain their actual situation?"

What you'll hear instead: - "Cases are assessed on their merits" - "All relevant factors are considered" - "Decisions are fair and consistent"

What to probe: - Can someone explain circumstances the data doesn't capture? - Is there room for exceptional circumstances? - Can context change a decision? - Does the system see people as cases or as stories? - What nuance is lost when humans become data points?

Why it matters: Consistency isn't justice when circumstances vary. "Fair" treatment that ignores context can be profoundly unfair. People are stories, not just data.


The Power Questions

8. What power are we giving the system over people?

The forbidden version: "What can this AI do to someone that the person has no practical ability to prevent?"

What you'll hear instead: - "The system assists decision-making" - "Humans remain in the loop" - "Decisions can be appealed"

What to probe: - What happens automatically without human review? - What's the practical ability of a citizen to challenge a decision? - How long does challenge/appeal take, and what happens meanwhile? - What resources does challenge require (time, money, expertise)? - Is "human in the loop" real or nominal?

Why it matters: Power imbalances in AI systems are often invisible. The citizen faces an algorithm backed by state authority. "You can appeal" isn't meaningful if appeals take months or require lawyers.


9. Who has no choice?

The forbidden version: "Who cannot opt out of being processed by this AI?"

What you'll hear instead: - "Participation is voluntary" - "Alternatives are available" - "Citizens can choose"

What to probe: - Can someone get benefits/services without interacting with the AI? - Is the "choice" really between using AI or forgoing essential services? - Is refusal penalized formally or informally? - Can someone be affected by AI decisions without ever knowing? - What real alternatives exist?

Why it matters: Choice is meaningful only when alternatives exist. If the "choice" is between AI processing and no government services, there's no choice. Compulsory systems require more scrutiny, not less.


10. What happens if they resist?

The forbidden version: "If someone objects to being processed by AI, what actually happens to them?"

What you'll hear instead: - "Alternative processes exist" - "Citizen concerns are addressed" - "Complaints are taken seriously"

What to probe: - What happens if someone refuses AI processing? - Are objectors treated differently (delays, extra scrutiny)? - What's the realistic experience of someone who says "no"? - Is resistance possible or futile? - Have you talked to anyone who's resisted?

Why it matters: Systems reveal themselves in how they treat resistance. If objection is punished, choice is fiction. If resistance is impossible, consent is meaningless.


The Vulnerability Questions

11. Who is most likely to be harmed?

The forbidden version: "Who do we know will be disproportionately affected, and what are we doing about it?"

What you'll hear instead: - "All citizens are treated equally" - "The system is unbiased" - "We don't discriminate"

What to probe: - Who has less ability to understand the system? - Who has less ability to challenge errors? - Who has less margin for system failures (poverty, crisis, vulnerability)? - Who is least represented in training data? - Who is most likely to be in edge cases?

Why it matters: Equal treatment produces unequal outcomes when people start from unequal positions. Vulnerable populations are more likely to be harmed by AI and less able to recover.


12. What about people in crisis?

The forbidden version: "What happens when someone interacting with this AI is in genuine distress?"

What you'll hear instead: - "Escalation paths exist" - "Sensitivity is built into design" - "Staff are trained"

What to probe: - Can the system recognize distress? - What happens when someone is crying, panicking, suicidal? - Is there a human available when needed? - How fast can escalation happen? - Has anyone tested this with people in actual crisis?

Why it matters: People accessing government services are often in crisis. Unemployment, housing loss, health crises, family breakdown. AI that can't recognize or respond to human distress can cause harm at the worst possible moment.


13. What about people who don't fit categories?

The forbidden version: "What happens to people who don't fit our data schema?"

What you'll hear instead: - "The system handles diverse cases" - "Edge cases are considered" - "Manual processes exist for exceptions"

What to probe: - What about gender-diverse people in binary systems? - What about complex family structures? - What about people with multiple or no citizenship? - What about people with names the system can't handle? - What about people whose situations are genuinely unusual?

Why it matters: Data schemas enforce categorization. People who don't fit get forced into the wrong category, excluded, or trapped in error loops. Being uncategorizable shouldn't mean being unservable.
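
To make the schema point concrete, here is a minimal, hypothetical sketch. It is not drawn from any real system; every field name, limit, and rule below is an illustrative assumption. It shows how ordinary-looking validation rules quietly decide who fits and who becomes an "exception":

```python
import re

# Hypothetical intake-form validation. Every rule here is an illustrative
# assumption, not a description of any real government system.
ALLOWED_GENDERS = {"M", "F"}                # binary-only field
MAX_NAME_LENGTH = 30                        # arbitrary storage limit
NAME_PATTERN = r"[A-Za-z][A-Za-z '\-]*"     # ASCII letters, spaces, apostrophes, hyphens

def validate_applicant(record: dict) -> list[str]:
    """Return the reasons this applicant would be rejected or flagged."""
    problems = []
    if record.get("gender") not in ALLOWED_GENDERS:
        problems.append("gender not in {M, F}")
    name = record.get("family_name", "")
    if len(name) > MAX_NAME_LENGTH:
        problems.append("family name exceeds the database field length")
    if not re.fullmatch(NAME_PATTERN, name):
        problems.append("family name contains characters the system rejects")
    return problems

# A real person with a non-binary gender marker and a name with diacritics:
print(validate_applicant({"family_name": "Nguyễn-O'Connor", "gender": "X"}))
# ['gender not in {M, F}', 'family name contains characters the system rejects']
```

The specific rules don't matter. The point is that every hard-coded category, length limit, and character set is a decision about who the system can serve, and the people it can't serve are exactly the ones the questions above are asking about.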


The Future Impact Questions

14. What data trail are we creating?

The forbidden version: "What will AI decisions made today mean for this person in ten years?"

What you'll hear instead: - "Data is retained per policy" - "Privacy is protected" - "Data use is appropriate"

What to probe: - What record persists from AI-assisted decisions? - How might that record affect future treatment? - Can someone escape past categorization? - Are AI predictions becoming permanent labels? - What happens when predictions are wrong but persist?

Why it matters: AI systems create records. Records persist. Categorizations made today can follow people for life. A "high risk" label at 25 might still affect someone at 45.


15. What are we teaching people about government?

The forbidden version: "What does experiencing this AI teach citizens about their relationship with the state?"

What you'll hear instead: - "We're modernizing service delivery" - "Citizens expect digital services" - "This is the future of government"

What to probe: - Do people feel government understands them or processes them? - Are people trusting or fearing government technology? - What's the message when AI replaces human interaction? - What happens to the social contract when algorithms mediate it? - Are we building trust or eroding it?

Why it matters: Government-citizen relationships are built in interactions. AI interactions are different from human ones. What people learn from being AI-processed shapes their view of government for life.


The Humanity Checklist

Before deploying AI that affects people:

  • Can I picture a specific person affected?
  • Have I talked to people like them?
  • Do they know this is happening?
  • Did they have genuine input?
  • Do I know what's at stake for them?
  • Is there room for their story?
  • Can they understand decisions about them?
  • Can they challenge decisions meaningfully?
  • Can they opt out in reality, not just theory?
  • What happens if they resist?
  • Who bears disproportionate risk?
  • What happens if they're in crisis?
  • What if they don't fit our categories?
  • What data trail are we creating?
  • Would I be comfortable being processed by this?

If you can't answer all of these satisfactorily, you're not ready to deploy.


The Final Question

Would you want your mother/child/partner/friend to be processed by this system?

Not the well-off, articulate, educated version of them.

The version of them that's scared, confused, in crisis, or making a mistake.

The version that doesn't have time, resources, or support to navigate a complex system.

Would you want that person processed by what you're building?

If the answer isn't an unhesitating "yes," something needs to change.


"The measure of a society is how it treats its most vulnerable members. The measure of an AI system is the same."