How to Update Your Mental Models in the Age of AI (Before They Quietly Expire)
By Aleksei Zulin
Most people will not be replaced by AI. They'll be replaced by people who stopped clinging to how things used to work - people who noticed, early enough, that their mental models had quietly become fiction.
That's the uncomfortable part. Mental models don't announce their expiration. They sit in the background, running silently, shaping every decision you make, every conclusion you reach, every time you say "that won't work" or "I already know how this goes." In a world where AI is compressing years of domain change into months, those silent operating assumptions are the single biggest bottleneck in human performance.
Not skills. Not knowledge. Mental models.
Philip Johnson-Laird, the cognitive scientist who spent decades mapping how humans reason, showed that people don't process logic abstractly - we build small-scale internal representations of situations and test them mentally. The model is the thinking. Which means if your model of "how software gets written" or "what a knowledge worker does" was built in 2019, you're running 2019 thinking on 2026 problems. The gap compounds every month you don't notice it.
The Speed Problem Nobody Talks About
Here's what's different now: the rate at which external reality diverges from your internal model has accelerated dramatically. Thomas Kuhn wrote about shifts taking generations. Scientists died defending the old model, and the new one won by attrition. AI doesn't have that patience.
In 2022, a reasonable mental model for "what AI can do with code" was: assist, suggest, autocomplete. By 2024, that model was wrong in ways that matter for hiring decisions, workflow design, and entire product strategies. The half-life of a domain-specific mental model has dropped to somewhere under eighteen months in AI-adjacent fields. Maybe less.
Most cognitive load research, including foundational work from John Sweller, focuses on how to reduce the burden of learning. Rarely discussed is the burden of unlearning - the active metabolic cost of dismantling a schema that once reliably predicted outcomes. Adam Grant, in Think Again, gets close to this when he describes the identity threat that comes with changing your mind. You're not just updating a belief. You're partially dismantling a previous version of yourself that was built around being right about something.
That's why most people don't do it. Easier to dismiss new data as edge cases, outliers, or hype.
Diagnosing Which Models Are Actually Broken
Start with friction. When you encounter something AI-generated and feel immediate irritation - "that's not how this works," "it missed the point," "it can't actually understand context" - pause. That irritation is diagnostic. Sometimes it means the AI is wrong. Sometimes it means your model of what "understanding" requires is the thing that needs interrogating. The feeling is the same either way, which is exactly the problem.
Gary Klein's research on naturalistic decision making shows that experts recognize situations by pattern-matching against past experience. The patterns are useful until the environment changes faster than the pattern library updates. At that point, expertise becomes a liability - you see what you expect to see, not what's there.
A practical test I've started using: write down three things you believe are true about your field that you haven't actually re-examined in the last two years. Then identify what falsifying evidence would look like for each one. If you can't imagine what would change your mind - not just "what evidence would I need" but whether you'd accept it if it arrived - the belief may be functioning more like identity than information.
(I tried this with writing. One belief I held was that good prose requires human-level intention behind every word choice. I still believe something like this. But the version I held in 2022 was too absolute, too defensive - it was protecting ego, not craft. Took me embarrassingly long to see that.)
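If you want the test to resist self-flattery, make it mechanical. Here's a minimal sketch in Python - the two-year threshold and the three fields come straight from the test above; the example belief and everything else is purely illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Belief:
    claim: str             # the thing you believe is true about your field
    last_reexamined: date  # when you last genuinely stress-tested it
    falsifier: str         # what evidence would actually change your mind

beliefs = [
    Belief("Good prose requires human intention behind every word choice",
           date(2022, 3, 1),
           ""),  # can't name a falsifier? that's the tell
]

STALE_AFTER_DAYS = 730  # the two-year window from the test above

for b in beliefs:
    age = (date.today() - b.last_reexamined).days
    if age > STALE_AFTER_DAYS or not b.falsifier:
        reason = "no falsifier named" if not b.falsifier else f"unexamined for {age} days"
        print(f"AUDIT: {b.claim!r} ({reason})")
```

The point of the empty `falsifier` field isn't the code. It's that you have to type the blank out loud.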
The second diagnostic is prediction error. Keep a log. Make explicit predictions about outcomes in your domain - which projects will succeed, which approaches will outperform, which AI capabilities will actually matter in practice. When you're wrong, that's data. Not embarrassment. Philip Tetlock's forecasting research showed that accurate forecasters don't avoid being wrong - they track their errors systematically and update accordingly. Vague, untracked beliefs don't update. They just quietly persist.
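If you've never kept such a log, it can be a dozen lines. A sketch with invented entries, scored with the Brier score - the same squared-error calibration metric Tetlock's forecasting tournaments use; lower is better, and 0.25 is what always guessing 50/50 earns you:

```python
# A minimal prediction log. Each entry is (claim, stated probability,
# outcome: 1 = it happened, 0 = it didn't). All entries here are made up.
predictions = [
    ("Project X ships by Q3",              0.80, 0),
    ("AI-assisted review cuts cycle time", 0.60, 1),
    ("Competitor launches a code agent",   0.90, 1),
]

# Brier score: mean squared error between forecast and outcome.
brier = sum((p - outcome) ** 2 for _, p, outcome in predictions) / len(predictions)
print(f"Brier score over {len(predictions)} predictions: {brier:.3f}")
```

The number matters less than the habit: a belief with a probability and a deadline attached can no longer quietly persist.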
The Actual Mechanics of Updating
"Stay curious." "Embrace a growth mindset." These phrases have been repeated so many times they've shed all instructional content. Carol Dweck's original research on growth mindset was specific and rigorous; the pop-science version became a vibe, stripped of mechanism.
So what does updating actually look like, mechanically?
Expose yourself to adjacent expertise. A litigator's mental model of "how AI changes legal research" is different from a legal tech founder's, a law professor's, or a paralegal's. None of them has the full picture. Real, detailed conversations - where people walk you through their actual daily workflows, not their opinions about the technology - are where model updates happen fastest. The diversity of your professional network is a cognitive resource you're probably underusing.
Use AI to surface your own assumptions. Ask Claude to steelman the opposite of your current belief. "I think AI-generated writing lacks genuine voice - what's the strongest argument against that?" Then engage seriously with the output. The goal isn't automatic conversion. The goal is to surface the load-bearing assumptions you've been hiding from yourself by never putting them in direct dialogue with a counterargument.
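The prompt pattern matters more than the tooling, but for concreteness, here's a minimal sketch against the Anthropic Python SDK (`pip install anthropic`, with `ANTHROPIC_API_KEY` set). The model name is a placeholder, and the prompt wording is just one way to ask:

```python
from anthropic import Anthropic

client = Anthropic()

belief = "AI-generated writing lacks genuine voice."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute a current model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            f"I believe: {belief}\n"
            "Steelman the opposite position. Give me the strongest, most "
            "specific argument against my belief, and name the assumptions "
            "my belief depends on."
        ),
    }],
)
print(response.content[0].text)
```

Asking for the assumptions your belief depends on, not just the counterargument, is the part that does the work.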
Run small experiments with high feedback velocity. Karl Weick's sensemaking framework emphasizes that meaning emerges from action, not reflection. You don't update a model by thinking harder about it in the abstract. You update it by doing something, observing what happens, and revising. Give AI a task you've always handled yourself. Evaluate the output honestly, without defensiveness. Repeat this in low-stakes contexts until the results stop surprising you - and then ask why they stopped surprising you.
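One way to make "until the results stop surprising you" measurable: rate your own surprise after each trial and watch the trend. A toy sketch - the tasks and ratings are invented, and the 0-5 scale is my own convention, not anything from Weick:

```python
# Each trial: (task given to the AI, self-rated surprise 0-5, where
# 0 = exactly what I expected and 5 = breaks my model of the tool).
trials = [
    ("draft release notes", 4),
    ("draft release notes", 3),
    ("draft release notes", 1),
]

# A falling trailing average means your internal model is catching up
# to what the tool actually does - which is when to ask why.
window = trials[-3:]
avg = sum(score for _, score in window) / len(window)
print(f"trailing surprise over last {len(window)} trials: {avg:.1f}")
```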
The update doesn't feel like enlightenment. It feels like a slightly embarrassing realization that something you were vigorously defending was mostly just pride.
How You Know the Update Actually Happened
This is genuinely hard - harder than most frameworks acknowledge.
One signal: your domain predictions get more accurate. If you've internalized a new mental model about how AI affects your field, your forecasts about that field should improve measurably. Track them. If you never track them, you'll never know if you've updated or merely convinced yourself you have.
Another signal, subtler: you stop having certain arguments. When a mental model updates successfully, debates that previously felt urgent just lose their charge. You're not suppressing the counterargument - you've moved past the frame where that debate even makes sense. If you're still relitigating "can AI be truly creative" in 2026 with the same emotional investment you had in 2022, the update may be surface-level at best.
There's also a collective dimension almost nobody addresses. Mental models don't evolve in isolation. The people in your professional circles are running related models - mutually reinforcing, sometimes mutually calcifying. Collective updating is slower than individual updating, but it's more durable. A team that updates its mental model of code review together gets to practice and challenge that model daily. It becomes structural.
One thing I genuinely don't have a clean answer to: how to distinguish a real mental model update from motivated rationalization dressed up as open-mindedness. You think you've updated. Maybe you've just found a more sophisticated route to the same conclusions. The only reliable guard I've found is specific external disagreement - people who push back on your reasoning, not your conclusions, and are willing to do it in enough detail that you can't deflect.
That's not comfortable. Models rarely are.
FAQ
How do you identify which mental models are outdated for your specific field?
Start with your strongest certainties - the beliefs you'd defend without much thought are most likely functioning as identity rather than information. Apply a falsification test: what evidence would actually change this belief? If nothing plausibly could, the model needs examination, not confirmation. Friction when encountering AI-generated work is also a reliable diagnostic signal.
How frequently should you update mental models in fast-evolving AI domains?
There's no fixed cadence worth following blindly. Watch for prediction errors - when outcomes consistently surprise you, a model is breaking down. In AI-adjacent fields, meaningful divergence between reality and internal models can emerge within twelve to eighteen months. Treat significant wrong predictions as scheduled review triggers, not isolated failures to explain away.
How do you tell the difference between a genuine mental model update and rationalization dressed as open-mindedness?
This is the hardest part, and there's no fully clean answer. The most reliable signal is improved predictive accuracy over time - if your forecasts about the domain actually get better, something real changed. A second signal is that certain arguments lose their emotional charge entirely; you haven't suppressed them, you've simply moved past the frame they depend on. The best external check is finding people willing to challenge your reasoning rather than your conclusions, in enough specific detail that you can't easily deflect. Motivated rationalization tends to collapse under that kind of pressure in ways that genuine updates don't.
About the Author
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.