How to Prompt AI to Challenge Your Assumptions Effectively (And Why You Probably Won't Like It)
By Aleksei Zulin
Most people use AI to confirm what they already think. They frame questions to get the answers they want, then call it research. The result is the most expensive, sophisticated yes-machine ever built.
Challenging assumptions with AI requires something most people skip entirely: designing the prompt to fail you.
That reframe matters. When psychologist Adam Grant wrote Think Again, he argued that the most valuable cognitive skill isn't knowing more - it's being willing to unlearn. The problem is that our brains aren't wired for that. We are pattern-completion engines, and AI, trained on human text, inherits all of our rationalizing tendencies unless you specifically engineer the conversation to resist them.
Here's how to do that.
Start by Surfacing the Assumption You Don't Know You Have
The most dangerous assumptions are the ones you haven't named yet.
Before you can prompt AI to challenge anything, you need to externalize what you believe. This sounds obvious. It isn't. Most beliefs operate as background radiation - they shape everything but remain invisible until someone forces them into the foreground.
A technique I use: start a conversation with this prompt - "I'm about to describe a decision or plan. Before I do, ask me five questions designed to surface assumptions I might not realize I'm making."
The AI doesn't know your plan yet. But the questions it generates often expose the genre of assumption you carry into any plan - beliefs about risk, about other people's motivations, about what constitutes success. Gary Klein's research on naturalistic decision making found that experts are particularly susceptible to assumption blindness because their expertise makes their mental models feel like reality rather than models. Prompting for surfacing before you commit to a position is the cognitive equivalent of running diagnostics before the program launches.
Then give the AI your actual position and ask it to identify which of its opening questions your plan already assumes an answer to without stating it.
That gap is your most vulnerable point.
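If you run this technique through an API instead of a chat window, the two-step structure becomes explicit. A minimal sketch, assuming the OpenAI Python SDK - the model name and the example plan are placeholders, and any chat-completion endpoint works the same way:

```python
# Two-step surfacing: get the questions BEFORE the model sees the plan,
# then ask which questions the plan silently answers.
# Assumes the OpenAI Python SDK; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{
    "role": "user",
    "content": (
        "I'm about to describe a decision or plan. Before I do, ask me "
        "five questions designed to surface assumptions I might not "
        "realize I'm making."
    ),
}]

# Step 1: the five surfacing questions, generated blind.
questions = client.chat.completions.create(
    model="gpt-4o", messages=history
).choices[0].message.content
history.append({"role": "assistant", "content": questions})

# Step 2: reveal the plan and locate the unstated answers.
plan = "We launch in Q3 because our two pilot customers loved the beta."
history.append({
    "role": "user",
    "content": (
        f"Here is my plan: {plan}\n\n"
        "Which of your opening questions does this plan already assume "
        "an answer to without stating it?"
    ),
})
print(client.chat.completions.create(
    model="gpt-4o", messages=history
).choices[0].message.content)
```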
Design the Prompt for Adversarial Honesty
Vague prompts get vague pushback. "Tell me if I'm wrong about X" invites hedging. AI systems are trained on human feedback, and humans don't reward bluntness. You have to override the system's politeness defaults explicitly.
"Steel-man the strongest opposing view." Not devil's advocate - that's often a weak counterargument dressed up as challenge. Steel-manning means the AI must construct the most coherent, evidence-supported version of a position contrary to yours. Hugo Mercier and Dan Sperber's argumentative theory of reasoning suggests that reasoning evolved primarily for social persuasion, not truth-seeking. Steel-manning forces the AI to use that same capacity against your position.
"What would someone who disagrees with me say about the assumptions behind this, not just the conclusions?" Counterarguments attack conclusions. This prompt targets the layer beneath - the premises that make your conclusion feel inevitable to you.
"Respond as if you're trying to get me to change my mind, not to help me." That last phrase matters more than it looks. AI defaults to helping. Redefining the goal of the conversation changes what counts as a good response.
One more thing: specify a domain. "Challenge my assumptions about this business strategy as a skeptical economist" versus "as a cognitive psychologist" versus "as a competitor who wants to see me fail" will produce meaningfully different challenges. The same factual claim looks entirely different depending on which mental model is interrogating it.
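Here is what overriding the politeness defaults looks like as code - again a sketch assuming the OpenAI Python SDK, with an illustrative position and model name. The system prompt redefines the goal of the conversation; the loop swaps the interrogating persona:

```python
# Adversarial system prompt plus persona swapping.
# Assumes the OpenAI Python SDK; position, personas, and model name
# are illustrative.
from openai import OpenAI

client = OpenAI()

POSITION = "Remote-first teams outperform co-located teams at our stage."

SYSTEM = (
    "Your goal is to get me to change my mind, not to help me. "
    "Steel-man the strongest opposing view: the most coherent, "
    "evidence-supported case against my position. Attack the "
    "assumptions behind it, not just the conclusions. Do not hedge."
)

for persona in ["a skeptical economist",
                "a cognitive psychologist",
                "a competitor who wants to see me fail"]:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM + f" Respond as {persona}."},
            {"role": "user", "content": POSITION},
        ],
    ).choices[0].message.content
    print(f"--- {persona} ---\n{reply}\n")
```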
The Iterative Loop Most People Never Use
Single-prompt assumption challenging is weak. Real progress comes from sustained adversarial conversation over multiple exchanges, where each round builds on the last.
A structure I've returned to repeatedly: state your position in full, then explicitly invite challenge. After the AI pushes back, your job isn't to explain why it's wrong. Your job is to steelman the AI's challenge - out loud, in the conversation - and then identify which parts you genuinely cannot rebut. Feed that back.
"You raised the point that my assumption about customer behavior might be based on a small, unrepresentative sample. Here's my best defense of that assumption. Now tell me where my defense is weakest."
Philip Tetlock's superforecasting research showed that the forecasters who consistently outperformed experts weren't those who knew more - they were the ones who held beliefs with calibrated uncertainty and updated them when presented with new information. The iterative loop I'm describing is a forced update protocol. You're not just receiving criticism; you're actively testing how much of your confidence survives contact with it.
The discomfort you feel during this process is data. If a challenge makes you want to change the subject, it probably hit something real.
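As a script, the loop is just a conversation history that grows one adversarial round at a time. A sketch under the same assumptions as before (OpenAI SDK, illustrative model and position); `input()` stands in for your steel-manned defense:

```python
# Sustained adversarial loop: position -> challenge -> your defense ->
# attack on the defense. Assumes the OpenAI Python SDK; the position
# and model name are illustrative.
from openai import OpenAI

client = OpenAI()

POSITION = "Our churn is driven by pricing, not product quality."

def turn(history):
    """Send the history, append the model's reply, and return it."""
    reply = client.chat.completions.create(
        model="gpt-4o", messages=history
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = [
    {"role": "system",
     "content": "Your goal is to get me to change my mind, not to help me."},
    {"role": "user",
     "content": POSITION + " Challenge the assumptions behind this, "
                           "not just the conclusion."},
]
print(turn(history))

for _ in range(3):  # three adversarial rounds
    defense = input("Your best defense of the challenged assumption: ")
    history.append({
        "role": "user",
        "content": defense + "\nNow tell me where my defense is weakest.",
    })
    print(turn(history))
```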
When AI Fails to Challenge You
Sometimes the AI agrees with you too readily. This happens for a few reasons - your framing was too leading, the model defaulted to social agreeableness, or... honestly, sometimes your assumption is correct and you've been challenging a position that holds up. Distinguishing those cases is hard.
When you suspect the AI is being too agreeable, ask it directly: "Have you been too quick to accept my framing here? If you were tasked specifically with finding flaws in my reasoning, what would you say?" This metacognitive prompt - asking the AI to evaluate its own response quality - often produces a sharper second pass.
The harder problem is AI's inherited biases. Language models are trained on human text, which means they inherit human consensus. If your assumption aligns with mainstream opinion, AI will struggle to challenge it effectively - not because the assumption is correct, but because the training data didn't contain many forceful, well-reasoned dissents. For heterodox positions in particular, explicitly invoking minority views helps: "What would a credible, contrarian researcher in this field argue against the mainstream position I'm describing?"
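Both of those repair prompts can simply be appended to an existing conversation. Continuing the loop sketch above (same `client`, `turn` helper, and `history`):

```python
# Metacognitive second pass, appended to the `history` built in the
# loop sketch above; `turn` is the helper defined there.
second_pass = (
    "Have you been too quick to accept my framing here? If you were "
    "tasked specifically with finding flaws in my reasoning, what would "
    "you say? If my position tracks mainstream opinion, also tell me "
    "what a credible, contrarian researcher in this field would argue "
    "against it."
)
history.append({"role": "user", "content": second_pass})
print(turn(history))
```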
There's also the psychological safety dimension that rarely gets discussed. You have to actually want to be wrong. No prompt design compensates for motivated reasoning on your end. Daniel Kahneman's distinction between fast, intuitive System 1 thinking and slow, deliberate System 2 thinking helps explain why: when identity is tied to a belief, the fast system registers challenges as threats before the slow system ever examines them. The AI can surface the flaw. Only you decide whether to look.
A Complete Example, Not a List
Say I believe: "Building in public is a better growth strategy than stealth for early-stage startups."
Weak prompt: "What do you think about building in public?"
Better prompt: "I believe building in public is superior to stealth for early-stage startups. Steel-man the counterargument, focusing specifically on the assumptions I'm implicitly making about information asymmetry, competitive moats, and founder psychology. Then identify which of those assumptions is most empirically fragile."
The second prompt forces the AI to locate your belief within a structured framework, attack the scaffolding rather than the conclusion, and rank its own challenges by epistemic weight. That last instruction - ranking - is underused. It prevents the AI from hedging by listing every possible counterargument without committing to which ones matter.
After receiving that response, don't defend yourself yet. Instead: "Which of those challenges do you think I would find easiest to dismiss psychologically, and why might that ease be a warning sign rather than a signal that the challenge is weak?"
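Wired together, the whole example is two API calls - once more a sketch assuming the OpenAI SDK and an illustrative model name:

```python
# The building-in-public example as a two-turn script.
# Assumes the OpenAI Python SDK; model name is illustrative.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "user",
    "content": (
        "I believe building in public is superior to stealth for "
        "early-stage startups. Steel-man the counterargument, focusing "
        "specifically on the assumptions I'm implicitly making about "
        "information asymmetry, competitive moats, and founder "
        "psychology. Then identify which of those assumptions is most "
        "empirically fragile."
    ),
}]
challenge = client.chat.completions.create(
    model="gpt-4o", messages=history
).choices[0].message.content
history.append({"role": "assistant", "content": challenge})

# Second turn: target your own dismissal reflex, not the challenge.
history.append({
    "role": "user",
    "content": (
        "Which of those challenges do you think I would find easiest to "
        "dismiss psychologically, and why might that ease be a warning "
        "sign rather than a signal that the challenge is weak?"
    ),
})
print(client.chat.completions.create(
    model="gpt-4o", messages=history
).choices[0].message.content)
```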
That's the loop. Uncomfortable by design. Good.
FAQ
How do I know if my assumptions were actually challenged or just superficially questioned?
A genuine challenge destabilizes your confidence - you feel uncertain rather than merely informed. If you finish the conversation still holding your original view with the same certainty, either the assumption held up to scrutiny (possible) or the challenge didn't reach the layer where your actual belief lives. Ask the AI to identify which specific sub-belief it failed to shake, and why.
What's the difference between AI challenging assumptions and AI just playing devil's advocate?
Devil's advocate produces objections. Assumption-challenging targets the premises that make your position feel inevitable to you. Devil's advocate asks "but what if you're wrong?" Assumption-challenging asks "what would have to be true for your position to be right, and is any of that actually true?" The second question goes deeper and, predictably, hurts more.
Can I use this method for personal beliefs, not just business or analytical questions?
Yes, but emotional stakes change the dynamic considerably. For personal beliefs - about relationships, identity, values - you'll need to explicitly instruct the AI to proceed despite probable discomfort: "Challenge this belief as if my emotional resistance to the challenge is itself data about where the assumption is most entrenched." Don't expect the AI to push past your deflections automatically; it needs explicit permission to persist.
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.