How to Use AI for Mental Models and Frameworks (And Actually Get Smarter)
By Aleksei Zulin
Most people who try to use AI for thinking end up with a smarter-sounding version of the same thinking they already had. You ask ChatGPT to explain first principles, it gives you a tidy paragraph, you nod and close the tab. Nothing changes.
Here's the direct answer: You use AI for mental models by treating it as a thinking partner that stress-tests your existing frameworks, builds new ones from first principles, and maps the gaps between what you believe and what the evidence actually supports. The key is structured prompting - not passive consumption. Ask AI to challenge your model, not confirm it. Ask it to show you where the model breaks. That's where the real upgrade happens.
The difference between using AI as a search engine and using it as a cognitive scaffold is the difference between reading about swimming and getting in the water.
Why Most Mental Model Practice Fails (And What AI Changes)
Charlie Munger spent decades building what he called a "latticework of mental models" - a portfolio of frameworks from physics, psychology, economics, and biology that he could apply across problems. His argument, laid out in Poor Charlie's Almanack, was that no single model is sufficient. You need many, and you need them wired together.
The problem Munger never solved publicly: how do you actually build that latticework? Reading is slow. Application is rare. Most people accumulate models as isolated facts rather than connected tools.
AI changes the construction process. Instead of passively absorbing inversion or second-order thinking as concepts, you can actively apply them to your own decisions in real time. You can prompt an AI to take the position that your current plan is wrong and force you to defend it. That's adversarial thinking at scale, available on demand.
A 2022 paper by researchers at MIT's Sloan School, "Cognitive Offloading and Decision Quality," found that people who externalized their reasoning process - writing out assumptions, stress-testing beliefs - made measurably better decisions under uncertainty than those who reasoned internally. AI gives you an externalization partner that pushes back.
The Prompting Stack for Building Mental Models
Raw curiosity isn't enough. You need a structure.
Start with model extraction. Pick a decision you're facing - hiring someone, choosing a strategy, making a personal commitment. Then prompt the AI to identify which mental models are most relevant to this type of decision. Don't ask for a list. Ask for the three or four that would create the most tension with each other. Tension is where learning lives.
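To make that concrete, here is a minimal sketch of what a model-extraction prompt can look like as a reusable template. The decision text is an invented example and the exact wording is illustrative, not canonical; the load-bearing part is the explicit request for models that are in tension with each other.

```python
# Hypothetical model-extraction template. The decision text is an example;
# the request for models "in tension" is what keeps the output from
# collapsing into a generic list.
decision = (
    "I'm deciding whether to hire one senior generalist or two junior "
    "specialists for a four-person product team."
)

extraction_prompt = f"""Decision: {decision}

Identify the three or four mental models most relevant to this decision.
Do not give me a neutral list. Pick models that create real tension with
each other, and for each one state what it would push me to do here and
where it conflicts with the others."""

print(extraction_prompt)
```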
Then run inversion. Ask explicitly: "What would need to be true for the opposite conclusion to be correct?" This is a direct implementation of the inversion heuristic, which Munger borrowed from mathematician Carl Gustav Jacob Jacobi, who famously said "Invert, always invert." Most people know the principle. Almost nobody practices it systematically.
Then ask the AI to map the model: have it describe the structure of the framework as if explaining it to someone who has never encountered it - the conditions under which it applies, the conditions under which it fails, the adjacent models it connects to. This forces the AI to do conceptual cartography, and it forces you to notice what's missing from your own understanding.
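A mapping prompt can follow the same template pattern. This is a sketch, assuming you have already named the model you want mapped; the three-part structure mirrors the paragraph above.

```python
# Hypothetical mapping template for a single named model.
model_name = "second-order thinking"

mapping_prompt = f"""Explain {model_name} to someone who has never
encountered it. Cover, explicitly and separately: the conditions under
which it applies, the conditions under which it breaks down or misleads,
and the adjacent models it connects to (and how). Flag anything you had
to leave vague."""

print(mapping_prompt)
```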
(I should be honest here - the first time I tried this, the AI produced something so generic I thought the approach was useless. The gap was in my prompt specificity. Vague inputs produce vague outputs.)
First Principles Deconstruction With AI as Sparring Partner
Elon Musk popularized first principles thinking, but the method traces back to Aristotle, who defined a first principle as "the first basis from which a thing is known." The practical problem with first principles is that most people stop too early. They decompose a problem one level down and call it done.
AI excels at refusing to let you stop early.
Prompt structure that works: "I believe [X]. What assumptions am I making that I haven't stated explicitly? Which of those assumptions are most likely to be wrong?" This forces a recursive decomposition that most people won't do alone because it's uncomfortable to dismantle your own beliefs.
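As a sketch of how the recursion might run in practice, here is a small loop using the OpenAI Python SDK. The model name, the three-pass depth, the starting belief, and the crude way the last line of each answer is fed back in are all assumptions for illustration; any chat-capable model and client would work the same way.

```python
# Minimal sketch: recursive assumption-surfacing with an LLM.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name and three-pass depth are arbitrary choices.
from openai import OpenAI

client = OpenAI()
belief = "Our churn problem is a pricing problem."

for depth in range(3):
    prompt = (
        f"I believe: {belief}\n"
        "What assumptions am I making that I haven't stated explicitly? "
        "Which of those assumptions are most likely to be wrong? "
        "End with the single shakiest assumption, stated as a belief."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"--- pass {depth + 1} ---\n{answer}\n")
    # Feed the shakiest assumption back in as the next belief to dismantle.
    belief = answer.strip().splitlines()[-1]
```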
In Dr. Philip Tetlock's research on superforecasting, documented in Superforecasting: The Art and Science of Prediction (2015), the single biggest differentiator between expert forecasters and superforecasters was the willingness to actively seek disconfirming evidence. The superforecasters didn't just tolerate being wrong - they hunted for it. AI can be configured to behave exactly this way, acting as a persistent disconfirmation engine rather than a validation machine.
The edge case here matters. If you prompt AI to agree with you - "here's my thinking, does this make sense?" - it will almost always find reasons to agree. The prompting posture has to be explicitly adversarial to produce adversarial outputs.
Visualizing and Mapping Your Mental Model Portfolio
Most mental model work happens in text. That's a mistake.
The brain encodes spatial relationships differently than propositional ones. When you map a mental model - even roughly, even just in a text-based diagram - you force yourself to specify relationships that remain vague in prose. What connects to what. What depends on what. What contradicts what.
You can use AI to generate concept map descriptions, which you then render in tools like Miro, Obsidian Canvas, or even a basic mind-mapping app. Ask the AI to describe the model as a graph - nodes and edges, with the edge labels specifying the type of relationship (causes, enables, conflicts with, depends on). The granularity forces precision.
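Here is a rough sketch of what that graph description can look like as plain data, using second-order thinking as the example model. The specific nodes and edges are illustrative, not authoritative; the point is that every edge has to carry a typed label.

```python
# A concept map as plain data: nodes plus typed edges.
# Edge types follow the labels suggested above: causes, enables,
# conflicts with, depends on. None marks a relationship you
# couldn't actually specify yet.
concept_map = {
    "nodes": [
        "second-order thinking",
        "inversion",
        "incentives",
        "feedback loops",
        "short-term optimization",
    ],
    "edges": [
        ("second-order thinking", "depends on", "feedback loops"),
        ("incentives", "causes", "short-term optimization"),
        ("second-order thinking", "conflicts with", "short-term optimization"),
        ("inversion", "enables", "second-order thinking"),
        ("incentives", None, "feedback loops"),  # relationship unclear
    ],
}
```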
Dr. Stellan Ohlsson's work on representational change - collected in Deep Learning: How the Mind Overrides Experience (2011) - showed that insight often comes not from acquiring new information but from restructuring existing information into new spatial or relational configurations. Visualization triggers restructuring. Text alone often doesn't.
The practical workflow: spend ten minutes building a rough model map with AI assistance, then spend five minutes identifying the edges you had to leave unlabeled because you didn't actually understand the relationship. Those unlabeled edges are your learning agenda.
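Continuing the concept_map sketch above, pulling out the unlabeled edges is a one-line filter, and the result is literally your learning agenda.

```python
# Edges whose relationship you couldn't name yet.
learning_agenda = [
    (a, b) for a, label, b in concept_map["edges"] if label is None
]
print(learning_agenda)  # [('incentives', 'feedback loops')]
```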
Building a Personalized Mental Model Latticework Over Time
One conversation with AI doesn't build a latticework. Neither do fifty random ones.
What builds a latticework is structured accumulation - treating your mental model development as a portfolio that compounds. Every significant decision you face is an opportunity to apply an existing model, test it, and record whether it held.
The protocol I use, adapted from Atul Gawande's work on checklists in The Checklist Manifesto (2009), runs in three phases. Before a decision, I identify which two or three models I'm relying on. After the decision resolves, I return to the AI conversation and evaluate whether the models predicted the outcome correctly. When they fail, I prompt the AI to help me identify whether the failure was a model problem (wrong framework for this type of situation) or an application problem (right framework, applied incorrectly).
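One way to keep that record is a simple log entry per decision, something like the sketch below. The field names and the two failure categories are my own labeling of the protocol just described, not a standard format.

```python
# Hypothetical decision-log entry for the before/after protocol.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionLog:
    decision: str
    models_relied_on: list                 # filled in before the decision
    predicted_outcome: str                 # what the models said would happen
    actual_outcome: Optional[str] = None   # filled in after it resolves
    failure_type: Optional[str] = None     # "model" or "application", if it failed

entry = DecisionLog(
    decision="Ship the redesign before the pricing change",
    models_relied_on=["second-order thinking", "opportunity cost"],
    predicted_outcome="Short-term dip in conversion, recovery within a quarter",
)
```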
This generates longitudinal data on your own thinking. Which models are you over-relying on? Which do you systematically misapply? Which situations consistently fall outside your existing frameworks?
That's a fundamentally different relationship with self-knowledge than journaling or periodic reflection. The AI becomes a calibration instrument.
Honest Constraints
Here's what this approach doesn't solve.
AI cannot verify whether your improved reasoning actually produces better outcomes in your life. It can sharpen your frameworks. It cannot tell you whether sharper frameworks lead to better decisions six months from now - that's an empirical question that requires longitudinal data you'd have to collect yourself, and almost nobody does.
There's also no peer-reviewed research yet on whether AI-assisted mental model training produces durable cognitive changes. The MIT work I cited on cognitive offloading is suggestive, not definitive. Tetlock's superforecasting research predates the current generation of AI tools entirely.
AI also carries embedded biases that can subtly warp your models. If you use an AI trained primarily on Western, English-language sources to help you build frameworks, those frameworks will reflect that epistemological heritage. Non-Western reasoning traditions - Chinese dialectical thinking, African Ubuntu philosophy - won't appear organically unless you explicitly ask for them.
Finally, for people in genuine cognitive crisis - acute anxiety, severe depression, executive dysfunction - no prompting protocol substitutes for clinical support. This approach assumes a baseline of functional cognition.
FAQ
Can AI create mental models for me, or do I have to build them myself?
AI can describe, explain, and map mental models - but the cognitive ownership has to be yours. Research on desirable difficulties, summarized by Robert Bjork at UCLA, consistently shows that effortful processing produces more durable learning than passive reception. Use AI to stress-test and extend models you've actively worked to understand, not as a shortcut past the work.
What's the best first prompt to start using AI for mental models?
Start specific, not abstract. Pick a real decision you're facing, describe it in two sentences, then ask: "Which mental models are most relevant here, and what would inversion look like applied to this situation?" Specificity forces the AI to do actual conceptual work rather than produce generic explanations.
How do I avoid AI just confirming what I already believe?
Explicitly instruct it not to. Something like: "Take the strongest possible position against my current thinking and don't soften it." Most AI systems default to agreeable, collaborative framing. You have to deliberately override that with adversarial prompting - not because AI is sycophantic by nature, but because the default training favors helpfulness over challenge.
Does this work for domain-specific frameworks, not just general mental models?
Yes, and often better. When you apply this approach to frameworks within a specific field - product strategy, systems engineering, clinical reasoning - the AI can draw on technical literature you might not have direct access to. The same adversarial prompting works. The quality of the pushback tends to be higher because the domain knowledge is denser.
The territory here connects outward in several directions worth exploring: how AI-assisted reasoning interacts with long-term memory consolidation (sleep and spaced repetition are still doing work the AI can't replace), the ethics of outsourcing cognitive labor to systems you don't fully understand, and what it means for decision-making when your thinking partner has read more than any human alive but has never experienced consequences. Those questions don't have clean answers yet. That's probably the point.
About the Author
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.