Why You Need to Update Core Beliefs to Think Effectively with AI
By Aleksei Zulin
Have you ever finished a conversation with an AI and felt vaguely cheated - like you asked for something important and got back something technically correct but somehow hollow? Here's what's actually happening: the problem isn't the model. The problem is the set of invisible assumptions you brought into the conversation. Until you update those assumptions at the root level, you'll keep getting sophisticated-sounding output that misses the point.
This matters more than productivity hacks. More than prompting tricks. More than knowing which model to use for which task.
The Beliefs Running the Show
Most people approach AI with a cluster of assumptions so deeply embedded they'd never call them beliefs. They feel more like facts. Things like: intelligence is located inside one skull. Good thinking happens alone, before you speak. Asking for help is a sign you haven't thought hard enough. The output of your reasoning process should match what you already suspected.
These aren't random opinions. They were built from decades of schooling, professional culture, and social feedback. The kid who raised their hand with the answer got points. The kid who said "I'm not sure, let me think out loud with someone" got told to figure it out first.
Philip Tetlock's research on superforecasters shows something worth sitting with here. The people who consistently outpredict experts aren't smarter in the traditional sense. They update more frequently and with less ego investment in their prior positions. They treat beliefs as hypotheses, not identities. What Tetlock found is that the bottleneck to accurate thinking isn't processing power - it's the willingness to revise.
AI breaks the old model of solitary reasoning in ways that make those embedded beliefs actively counterproductive. When you treat a language model like a search engine that talks, you get search-engine-quality thinking with extra steps. When you treat it like a threat to your intellectual credibility, you spend most of your cognitive energy defending positions rather than developing them.
What "Cognitive Partnership" Actually Requires
Andy Clark and David Chalmers published "The Extended Mind" in 1998, arguing that cognition isn't confined to the brain - it extends into the tools, notebooks, and environments we use to think. That paper was theoretical then. It's operational now.
Working with AI as a cognitive partner means something specific. Your beliefs about where thinking happens, who owns an idea, and what counts as your own intelligence all need updating. Otherwise you're trying to run a distributed system on a centralized architecture.
Here's what I mean practically. When I'm developing an argument, I'll often put a half-formed idea into a conversation and let the model push back. Not to get the answer - to find out where my reasoning actually breaks. The model doesn't know what I'm trying to prove. It responds to what I actually wrote. That gap, between what I meant and what I expressed, is where the real work lives.
But that process only works if I've already revised the belief that half-formed ideas are embarrassing. If I walk in with the assumption that I should only share polished thinking, I'll never surface the places where my logic is actually weak. I'll get confirmation of what I already believed, dressed up in better sentences.
The Identity Problem Nobody Talks About
There's a specific friction point that almost never gets named directly. For people whose professional identity is built around expertise - knowing things, being the smart one in the room, providing analysis others can't - AI collaboration threatens something deeper than efficiency. It threatens the story of who they are.
Karl Friston's work on predictive processing offers a framework here (though he'd probably phrase this differently, and I'm applying it loosely). The brain, in Friston's model, is constantly generating predictions and updating them against incoming signals. Surprise - prediction error - is costly. The brain works to minimize it. One way to minimize it is to update your model of the world. Another way is to act on the world to make it match your predictions. Another way - and this is the shadow side - is to selectively filter incoming information so the surprise never registers.
That third option is what happens when someone uses AI only to validate what they already think. They're not updating their internal model. They're managing the signal to protect a prior belief about themselves as the primary thinker in the room.
Updating core beliefs means accepting that your intelligence is genuinely improved by thinking in collaboration with something that doesn't share your blind spots, your status anxieties, or your need to be right.
Epistemic Humility Isn't Weakness
The phrase "epistemic humility" sounds academic. Strip it down: it means being willing to be wrong before you have proof you're wrong.
Carol Dweck's research on growth mindset gets cited constantly in management contexts, usually in ways that defang it. The actual finding is uncomfortable: people with fixed mindsets don't just avoid challenge - they actively reinterpret evidence to protect their self-assessment. Effort becomes a threat rather than a resource. Difficulty signals inadequacy rather than an invitation to develop.
The same mechanism operates when people engage with AI. A fixed belief about intelligence - that yours is a fixed quantity to be protected and demonstrated - turns every AI interaction into a performance rather than an inquiry. You're not thinking. You're presenting.
Epistemic humility, by contrast, creates a different kind of cognitive environment. When you genuinely don't know where you're wrong, when you hold your positions with appropriate tentativeness, you ask different questions. The difference between "confirm that my approach is sound" and "where does this reasoning break" is not stylistic. It changes the quality of thought that comes back.
And here's the part worth sitting with for a moment: the models are trained on human reasoning, including its best examples. When you bring rigor and genuine uncertainty, you often pull out more rigorous and genuinely uncertain responses. The interaction has a texture. The quality of your epistemic stance shapes the quality of what you get back in ways that no prompting formula can fully replicate.
What Beliefs Actually Need Updating
Let me be specific about this, because "update your beliefs" is easy to say and hard to act on.
The belief that intelligence is individual. Every intellectual tradition that prizes collaborative thinking - from Socratic dialogue to the scientific method's reliance on peer review - has understood that solo reasoning is one mode, not the default mode. Updating this means actively seeking out thinking partners, including artificial ones, rather than treating partnership as a fallback for when you're stuck.
The belief that the first answer is the real answer. In most knowledge work, the first answer is a draft. The value comes from iteration. AI makes iteration cheap enough that there's no longer an excuse for treating initial outputs as conclusions - from yourself or from the model. The real work starts after the first response.
The belief that acknowledging uncertainty signals incompetence. This one is corrosive. In most professional contexts, certainty is rewarded with attention and resources. The result is that people perform certainty they don't actually feel, which means they stop noticing what they don't know. AI collaboration amplifies this problem or solves it, depending on the belief you bring. If you're performing certainty at the model, you're having a worse conversation than if you'd thought alone.
The belief that speed is sophistication. Fast answers feel competent. But thinking with a cognitive partner should probably feel slower, not faster - at least at first. The slowness is productive friction. It's where the updating actually happens.
The Practice of Belief Revision
None of this is automatic. Belief revision is uncomfortable in ways that productivity improvements aren't. You're not just changing behavior; you're changing what you take yourself to be.
Alison Gopnik's research on learning suggests that genuine conceptual change - the kind that restructures how you understand something, not just what you know about it - involves a period of genuine confusion. Knowing this doesn't make the confusion easier. But it makes it recognizable as a feature rather than a malfunction.
A practice I've found useful, though I'd hesitate to formalize it too rigidly: before a significant AI-assisted thinking session, I spend a few minutes asking what I'm hoping to confirm. Not what I'm hoping to learn - what I'm hoping to confirm. The difference between those questions reveals where I'm protecting rather than inquiring. Once I can name the thing I'm defending, I can choose to set it down.
This isn't therapy. It's epistemic maintenance.
The deeper point - and I want to leave this somewhat unresolved, because I don't think I have the full picture yet - is that we're in the early days of understanding what distributed human-AI cognition actually is. The beliefs that worked for solo reasoning, learned over a lifetime, are running into a new kind of cognitive environment. Some of them will need to be rebuilt entirely. Others will just need loosening. The only way to find out which is which is to hold them a little less tightly and see what happens.
FAQ
Does updating my beliefs mean I should always defer to what the AI says?
No. Epistemic humility means holding your own views provisionally and being willing to revise - it doesn't mean treating AI output as authoritative. The goal is a better calibrated you, not a less confident one. AI makes mistakes, reflects biases, and often confidently misses the point. Your job is to think with it, not to defer to it.
What if my industry rewards certainty and I can't afford to appear uncertain?
Performing certainty externally while practicing genuine inquiry internally is a skill, not a contradiction. The updating happens in how you think, not necessarily in how you present. Over time, the quality of thinking that comes from real epistemic humility tends to show up in the outputs - which earns a different kind of credibility than performed certainty does.
About the Author
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.