How to Think With AI: 25 Prompts and Search Queries That Actually Rewire Your Reasoning
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Article",
"headline": "How to Think With AI: 25 Prompts and Search Queries That Actually Rewire Your Reasoning",
"description": "25 specific prompts and cognitive protocols that distinguish passive AI use from genuine AI-augmented cognition, helping you think with AI rather than outsource thinking to it.",
"author": {
"@type": "Person",
"name": "Aleksei Zulin"
},
"publisher": {
"@type": "Organization",
"name": "The Last Skill"
},
"datePublished": "2026-03-31",
"dateModified": "2026-03-31",
"mainEntityOfPage": {
"@type": "WebPage"
},
"keywords": ["AI thinking", "cognitive augmentation", "AI prompts", "how to think with AI", "extended mind", "metacognition", "AI reasoning"]
}
</script>
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "What's the difference between 'using AI' and 'thinking with AI'?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Using AI means treating it as an answer generator - you input a question, accept the output. Thinking with AI keeps your cognition active throughout: you articulate your own position first, use AI to challenge and extend it, then synthesize the result yourself. The cognitive work stays yours; the AI amplifies it rather than replacing it."
}
},
{
"@type": "Question",
"name": "Can thinking with AI actually change how your brain works long-term?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Evidence suggests yes, within limits. Practices like structured questioning, pre-mortem thinking, and assumption auditing - done consistently - build durable metacognitive habits. Whether those habits persist when AI isn't present depends on how deeply the practices become internalized. The scaffold can eventually become part of the structure."
}
},
{
"@type": "Question",
"name": "Are these prompts effective for people with no technical background?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Entirely. The prompts above require no technical knowledge - only a willingness to articulate what you think before asking what AI thinks. The cognitive protocols work across any domain: personal decisions, creative projects, business strategy, learning a new subject. The less technical the context, often the more immediately useful they are."
}
},
{
"@type": "Question",
"name": "How do you avoid cognitive offloading when using AI to help you think?",
"acceptedAnswer": {
"@type": "Answer",
"text": "The key is to externalize your own reasoning before asking AI for its reasoning. Write what you think first, then ask AI to challenge it. Use prompts that keep you in the cognitive loop - ask AI to flag gaps in your logic rather than supply the answer, or to steelman your opponent's position rather than simply validate yours. Resistance to offloading requires intention; the prompts are designed to enforce that intention structurally."
}
}
]
}
</script>
A 2023 experiment at MIT found that participants who used AI writing assistance produced essays rated as higher quality - but scored significantly worse on follow-up knowledge tests about the same material. The tool did the thinking. The person watched.
That gap is the entire problem.
Most people search for what AI can do. Far fewer search for what thinking with AI actually looks like in practice - what prompts to use, what questions to ask yourself, how to stay inside the cognitive loop rather than watching from the outside. After two years of working on The Last Skill and obsessively cataloguing how people interact with AI models, I've identified 25 specific queries and prompt structures that distinguish passive AI use from genuine AI-augmented cognition.
Why Your Current AI Queries Are Making You Shallower
Science writer Annie Murphy Paul, in The Extended Mind, builds on Andy Clark and David Chalmers' 1998 extended mind thesis to argue that human cognition has always leaked outside the skull - into notebooks, diagrams, conversations. AI is the most powerful external cognitive scaffold ever built. But only if you treat it as a thinking partner, not a search engine with better grammar.
The default behavior when someone encounters a difficult problem is to type it into an AI chat interface and wait for an answer. The answer arrives. They accept it. Cognition outsourced, not extended.
What distinguishes extended thinking is friction. Deliberate friction. The prompts that make you argue back, refine, disagree, build on. Research by Ethan Mollick at Wharton suggests that the people who get the most durable cognitive benefit from AI are those who use it to externalize their own reasoning first - before asking for AI's reasoning. Write what you think. Then ask the AI to poke holes in it.
Most people do the opposite. They ask AI to think, then adopt that thinking as their own.
The 25 Prompts and Queries That Change How You Think
These aren't productivity hacks. Think of them as cognitive protocols - prompts designed to keep your brain in the loop rather than handing the wheel over.
Starting with your own position. Before asking AI anything substantive, try: "Here's what I currently think about [X]. What am I missing? What's the strongest counterargument to my position?" This structure forces you to articulate a position first. The articulation itself is the exercise. The AI response is secondary.
Steelmanning your opponents. "I disagree with [position]. Help me construct the most rigorous version of that argument, even if I still won't agree with it." Research by cognitive scientist Hugo Mercier suggests humans are naturally bad at representing opposing views accurately. AI is unusually good at it. Use that asymmetry.
Thinking out loud. "I'm going to reason through this problem step by step. Tell me when my logic has a gap, but don't give me the answer - just flag where I went wrong." This turns AI into a thought partner rather than an answer machine. The cognitive effort stays yours.
Compression as understanding. "Explain this concept to me, then ask me to explain it back to you in my own words. Tell me what I got wrong." The Feynman Technique, operationalized. Genuine understanding shows in compression; if you can't explain it simply, the understanding isn't there yet.
Assumption excavation. "What assumptions am I making in this question that I haven't stated explicitly?" The most dangerous reasoning errors hide in unstated premises, not in the steps you can see.
Analogical transfer. "What problem in a completely different field has the same underlying structure as this one? How was it solved there?" Douglas Hofstadter spent most of his career arguing that analogy is the core of cognition. I think he's right. AI can surface analogies from fields you've never studied.
Pre-mortem prompting. "Assume this plan has already failed. What are the three most likely reasons why?" Gary Klein's pre-mortem methodology, adapted for AI dialogue. Better than asking "what could go wrong" - the past tense framing activates different reasoning.
Concept mapping. "I'm trying to understand [domain]. What are the five to seven key concepts I need to grasp, and how do they relate to each other? Don't explain each one yet - just give me the map." Structure before content. Most AI interactions skip the map and drown you in content.
Devil's advocate loop. "I've just convinced myself of [X]. Play devil's advocate aggressively. Don't soften it." The softening is what kills this. Default AI responses hedge everything. You have to explicitly request the hard version.
Socratic drilling. "Ask me questions about [topic] until you find something I can't answer confidently. Then stop and tell me what that gap is." Passive reading feels like learning. Getting interrogated reveals what you actually know.
Mental model audit. "What mental model am I implicitly using to think about [problem]? Is there a better one for this situation?" Shane Parrish has written extensively about mental models; the harder part is noticing which model you're already using.
First-principles decomposition. "Break this down to its most basic true statements. What can we build up from those without borrowing assumptions from conventional thinking?" Slower than asking for an answer. Considerably more useful.
Narrative reframing. "I've been describing this problem as [X]. What are two or three completely different ways to frame what's actually happening?" The frame determines what solutions are visible. Change the frame, expand the solution space.
Uncertainty mapping. "List what we know, what we don't know, and what we can't know about [situation]." Decision theory 101, but almost nobody does this before making significant choices. AI can scaffold it in thirty seconds.
Learning gap diagnosis. "I want to understand [subject] at an expert level. What's the most common point where intermediate learners get stuck, and why?" Targets your learning precisely rather than consuming content at random.
Synthesis across sources. "I've been reading about [topic A] and [topic B]. What might they have in common that neither field has fully articulated?" Cross-domain synthesis is genuinely difficult for humans. AI can surface structural similarities faster than most experts in either field could.
Belief inventory. "What would I have to believe for [conclusion] to be correct? Are those beliefs actually true?" Working backwards from conclusions to their required premises - economists call this backward induction, and it works in epistemology too.
Complexity reduction. "I'm overthinking this. What's the simplest version of this problem that still captures what matters?" Sometimes the most valuable thing AI can do is tell you that you've made something unnecessarily complicated.
Edge case hunting. "What are the boundary conditions where [principle or strategy] breaks down or inverts?" Most principles work in typical cases. The interesting learning happens at the edges.
Metacognitive check. "I've been reasoning about this for a while. What cognitive biases might be distorting my thinking right now, given what I've told you about my situation?" AI can't read your mind. But if you've given it context, it can flag likely bias patterns with reasonable accuracy.
Socratic dialogue on values. "Keep asking me why until we reach something I can't justify further." A version of the Five Whys, adapted for values and goals rather than root cause analysis. Uncomfortable. Clarifying.
Constraint injection. "Solve [problem] but you can't use [obvious solution]. Now solve it again without the second most obvious solution." Constraints force lateral thinking. Remove the easy exits, and the interesting solutions appear.
Transfer testing. "I just learned [concept]. Give me three different real-world situations where I'd apply it, and I'll tell you whether I think it applies - then tell me if I'm right." Active recall with feedback. The gold standard for durable learning.
Red team prompting. "I'm about to make [decision]. Act as someone who strongly disagrees and show me the most compelling case that I'm wrong." Having a structured adversary in your thinking process is one of the most underused cognitive tools available right now.
Integration check. "Here's what I think I've learned from this conversation. What have I missed or mischaracterized?" Close the loop. The conversation isn't over when you get an answer you like.
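The prompts above are, at bottom, parameterized text. If you use them daily, it can help to keep them as reusable templates rather than retyping them. A minimal sketch in Python - all names here are illustrative, not from any library:

```python
# A small registry of the cognitive-protocol prompts as fill-in templates.
# Names and structure are hypothetical; extend with whichever prompts you use.

PROMPTS = {
    "own_position": (
        "Here's what I currently think about {topic}: {position}. "
        "What am I missing? What's the strongest counterargument to my position?"
    ),
    "steelman": (
        "I disagree with {position}. Help me construct the most rigorous "
        "version of that argument, even if I still won't agree with it."
    ),
    "pre_mortem": (
        "Assume this plan has already failed: {plan}. "
        "What are the three most likely reasons why?"
    ),
    "integration_check": (
        "Here's what I think I've learned from this conversation: {summary}. "
        "What have I missed or mischaracterized?"
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a named template; raises KeyError if the name or a field is missing."""
    return PROMPTS[name].format(**fields)

print(build_prompt("pre_mortem", plan="launching the course in March"))
```

The point of the structure is the same as the point of the prompts: you supply the position, the plan, the summary - your thinking - before the AI supplies anything.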
What Actually Changes After Sustained Practice
Researchers at Carnegie Mellon studying metacognition have found that explicit reasoning practices - being forced to articulate your thinking, receive structured feedback, and revise - produce more durable changes to how people approach novel problems than content learning alone. These prompts are, fundamentally, a metacognitive training regimen.
After months of using AI this way, consistently, something shifts. The internal monologue starts to sound different. More questioning. More aware of what's being assumed. You start running pre-mortems in your head before conversations, not just before projects. The AI isn't present, but the practice has become internal.
Whether to call this cognitive enhancement or simply better thinking habits built on a new scaffold - I'm honestly not certain. Probably both. The distinction may matter less than the practice.
The Risk Nobody Discusses Honestly
Cognitive offloading is real, and the research isn't reassuring. Psychologist Betsy Sparrow's work on the "Google effect" showed that when people know they can look something up later, they encode it less deeply. The same mechanism almost certainly applies to AI-assisted reasoning - possibly more severely, because AI doesn't just store information, it generates conclusions.
The prompts above are designed to resist this. But resistance requires intention. If you use AI thinking prompts while still fundamentally waiting for AI to do the work, you've built elaborate scaffolding around the same passive consumption.
There's also a subtler risk. AI models reflect patterns in vast amounts of human-generated text. When you repeatedly use AI to help you think, you are - to some degree - thinking in patterns that training data made statistically likely. Whether that narrows genuine creativity or expands access to useful cognitive structures isn't settled research. Nobody knows yet. That uncertainty is worth sitting with.
Building a Daily Thinking Workflow That Sticks
The prompts work better as habits than as occasional interventions.
Morning problem articulation. Write down the one problem, decision, or question sitting at the front of your mind. Not the full context - just the core. Then use two or three of the prompts above before opening any other app, reading any news, checking anything. Fifteen minutes. The point is to get your own thinking onto the page before external input floods in.
Mid-work metacognitive interrupts. When you've been working on something for an hour and feel stuck - or strangely satisfied with where you've landed - run the assumption excavation prompt or the metacognitive check. Satisfaction is often a warning sign. It means you've stopped questioning.
End-of-day integration. What did you actually learn today that changed something? Use the integration check prompt on whatever you've been working through. Close cognitive loops before they drift into half-formed beliefs you've never examined.
Consistency over intensity. Always.