8 min read

What Mental Models Can AI Teach You to Solve Complex Problems?

By Aleksei Zulin

Most people use AI to avoid thinking. The ones getting sharper are using it to think harder - and the gap between those two groups is widening faster than anyone is publicly acknowledging.

Here's the claim I'll spend the rest of this article defending: the most valuable thing AI can teach you has nothing to do with its outputs. It's the structure of its reasoning - the mental models embedded in how large language models process complexity - that you can extract and apply to your own cognition. Not the answers. The architecture.

Charlie Munger spent decades arguing that a "latticework of mental models" drawn from multiple disciplines was the foundation of sound judgment. He pulled frameworks from physics, psychology, economics, biology. AI systems, trained simultaneously on centuries of human knowledge across all those domains, operate on something structurally similar. When you interact with one seriously - not as a search engine but as a sparring partner - you start absorbing the shape of that thinking. The question is whether you're doing it deliberately or accidentally.

The models below are not a complete list. They're the ones I've found transfer most directly from AI-assisted reasoning to unassisted thinking - the ones that stick after the conversation window closes.

Here's how to do it deliberately.

Inversion: How AI Thinks Backwards

Ask any large language model to solve a problem directly, then ask it to identify everything that would cause the solution to fail. The quality of the response shifts dramatically. More specific. More uncomfortable. Far more useful.

Inversion is a model with deep historical roots - mathematician Carl Jacobi famously advised "invert, always invert," and Munger championed it throughout his career as one of the most underused thinking tools in existence. Many hard problems become tractable when approached from the opposite direction. Instead of asking "how do I build a product people will adopt," ask "what would guarantee this product gets abandoned?" The answers reveal constraints and risks that forward thinking systematically misses, precisely because our brains are optimistic about plans we've made ourselves.

AI is surprisingly good at inversion. Probably because it has ingested thousands of post-mortems, failure analyses, and catastrophic decision case studies and can surface plausible failure modes rapidly. When I work through any significant decision now, I run two passes automatically - the forward case, and the inversion. What enables this? What kills it?
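Here's a minimal sketch of that two-pass habit in code. The `llm()` stub and the prompt wording are my assumptions, not any particular vendor's API - wire the stub to whatever model client you actually use:

```python
def llm(prompt: str) -> str:
    """Hypothetical stub for whatever model client you use."""
    raise NotImplementedError("connect this to your model of choice")

def two_pass(plan: str) -> dict:
    # Pass 1: the forward case - what enables this plan?
    forward = llm(
        f"Here is a plan: {plan}\n"
        "What conditions, resources, and decisions would enable it to succeed?"
    )
    # Pass 2: the inversion - what kills it?
    inverted = llm(
        f"Here is a plan: {plan}\n"
        "List everything that would guarantee this plan fails, "
        "ranked from most to least likely."
    )
    return {"enables": forward, "kills": inverted}
```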

The discipline the model teaches isn't just the technique. It's the habit of treating your own plan as a hypothesis rather than a conclusion. That shift alone is worth the price of the conversation.

Second-Order Thinking and the Cascade You're Ignoring

Ask a model what happens if a company doubles its engineering team overnight. You'll get first-order answers: faster feature development, more output. Push further - ask what happens next, then what happens after that - and the real picture emerges: coordination overhead increases nonlinearly, communication channels multiply faster than headcount (a team of 100 has more than eleven times the communication paths of a team of 30), technical debt compounds faster, and culture dilutes in ways that take two to three years to manifest fully.
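The communication-path figure is just the handshake formula, n(n-1)/2, and it's easy to verify:

```python
def channels(n: int) -> int:
    # Pairwise communication paths in a fully connected team of n people
    return n * (n - 1) // 2

print(channels(30))                  # 435
print(channels(100))                 # 4950
print(channels(100) / channels(30))  # ~11.4x the paths for ~3.3x the people
```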

Howard Marks has written extensively about second-order thinking as the distinguishing cognitive feature among high-quality decision makers. Most people consider only the immediate consequence. Superior thinkers ask "and then what?" two or three levels deep. AI can walk you through that cascade explicitly, and more importantly, it can model it across domains where you haven't yet built intuition.

The practical application: after receiving any recommendation or analysis, ask the model directly - what are the second and third-order consequences of acting on this? Then look for asymmetries. Places where downstream effects are disproportionately larger than the immediate effect. That's usually where the real risk or opportunity is buried.
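As a sketch, the cascade can be made mechanical. This reuses the hypothetical `llm()` stub from the inversion example, and the prompt wording is illustrative:

```python
def cascade(action: str, depth: int = 3) -> list[str]:
    """Ask 'and then what?' repeatedly, one consequence level at a time."""
    chain = []
    current = action
    for level in range(1, depth + 1):
        # Each pass feeds the previous consequence back in as the new premise
        current = llm(
            f"Assume this has happened: {current}\n"
            f"What is the single most significant consequence at order {level}? "
            "Give one concrete effect, not a list."
        )
        chain.append(current)
    return chain
```

Scanning the returned chain for links where the effect dwarfs its cause is the asymmetry hunt described above.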

The Map Is Not the Territory (And AI Fails Beautifully)

Here's the angle the productivity crowd consistently avoids. AI hallucinations are a teaching tool.

When a language model confidently presents a fabricated citation, or reconstructs a historical sequence with plausible but wrong details, it's demonstrating something fundamental about cognition itself: models are representations of reality, not reality. Alfred Korzybski's formulation "the map is not the territory" captures it precisely. Every mental model is a simplification, and all simplifications have edges where they break.

AI fails at those edges. Loudly, sometimes embarrassingly. I've watched models explain clinical thresholds with serene confidence while being completely wrong. The failure mode is less interesting than what it points to: whenever the system extrapolates from pattern rather than retrieving verified fact, the map diverges from the territory.

Gary Klein's research on naturalistic decision-making shows that expert intuition outperforms analytical reasoning in familiar domains precisely because experts have accurate maps built from real experience. The problem is that most people cannot distinguish their accurate maps from their inaccurate ones - they feel identical from the inside. Watching AI fail confidently, and tracing exactly where and why it failed, builds that calibration muscle.

(There's something uncomfortably recursive about using an AI's failures to improve your own reasoning about AI. I haven't fully resolved it, and I'm not sure I need to.)

First Principles Decomposition as a Repeatable Practice

Elon Musk's branding of "first principles thinking" made it feel like a personality trait for founders. Strip it back and it's Descartes - methodical doubt, breaking problems into their irreducible components rather than reasoning by analogy from prior solutions.

AI can scaffold this process formally. When you bring a complex problem and ask the model to decompose it into fundamental constraints - not best practices, not conventional approaches, but actual physical, logical, or economic constraints - something shifts. The model stops generating familiar templates and starts reasoning from the base layer upward.

Try this with any operational problem you're stuck on. Ask the model to distinguish constraints that are real (physical, regulatory, mathematical) from constraints that are assumed (conventional, legacy, habitual). The distinction clarifies almost immediately. Most "impossible" problems are actually "difficult given our current assumptions" problems. The assumptions are often decades old, and no one has questioned them because no one found them inconvenient enough to bother.
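A prompt-level sketch of that sorting exercise, again using the hypothetical `llm()` stub - the bucket names and wording are mine:

```python
def sort_constraints(problem: str) -> str:
    # Force the model to separate negotiable assumptions from hard limits
    return llm(
        f"Problem: {problem}\n\n"
        "List every constraint on solving this, each in exactly one bucket:\n"
        "REAL - physical, regulatory, or mathematical limits that cannot move.\n"
        "ASSUMED - conventions, legacy decisions, or habits that could change.\n"
        "For each ASSUMED constraint, name who could relax it and at what cost."
    )
```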

Philip Tetlock's superforecasting research identified that the best predictors share a specific cognitive habit: they decompose problems into components, estimate each component independently, then reassemble. The AI can model this decomposition explicitly. Which means you can reverse-engineer the process and apply it to your own analysis even when no model is present. That transfer - from AI-assisted to unassisted - is the actual goal.
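A toy version of that decompose-estimate-reassemble loop, where every name and number is hypothetical, chosen only to show the shape of the method:

```python
# Fermi-style decomposition: estimate each component independently, multiply.
# The question ("support tickets next quarter") and all numbers are made up.
components = {
    "active_users": 20_000,              # independent estimate 1
    "tickets_per_user_per_month": 0.05,  # independent estimate 2
    "months_in_quarter": 3,              # known quantity
}

estimate = 1.0
for value in components.values():
    estimate *= value

print(f"Estimated tickets next quarter: {estimate:,.0f}")  # 3,000
```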

Systems Thinking: Finding the Leverage in Structure

Donella Meadows spent her career mapping complex systems and arrived at a hierarchy of intervention points that most practitioners still ignore. Her key insight was that most people intervene at the wrong level - adjusting parameters when the real leverage is in feedback loops, information flows, and system goals. Adding more resources to a broken system gives you more broken system, faster.

AI systems are feedback-loop-heavy by construction. They update on output framing. They're sensitive to how prompts are structured. They amplify certain reasoning styles and suppress others based on training dynamics. Working with them closely - especially when results are inconsistent or surprising - teaches you to ask structural questions. What feedback loop is driving this behavior? What information is missing from this system? What would change the goal function?

These are the exact questions Meadows wanted people asking about organizations, ecosystems, and economies. The AI makes the loops visible faster because the cycle time is seconds rather than years. You can test a structural hypothesis about a prompt in real time, watch the output shift, and revise your model of the system accordingly. That tight feedback loop is pedagogically powerful in a way that reading Meadows - however valuable - is not.
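That experiment is easy to run deliberately: hold the question fixed, vary only the prompt structure, and diff the answers. Same hypothetical `llm()` stub as before; the question and structure labels are illustrative.

```python
QUESTION = "Should we rewrite the billing service or refactor it incrementally?"

structures = {
    "flat": QUESTION,
    "inverted": QUESTION + "\nStart by listing how each option fails.",
    "decomposed": QUESTION + "\nBreak the decision into independent sub-questions first.",
}

# Same question, three prompt structures: where the answers diverge tells you
# which structural features of the prompt the output actually hinges on.
for name, prompt in structures.items():
    print(f"--- {name} ---\n{llm(prompt)}\n")
```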

Where most people focus on what the model says, the more productive habit is analyzing why it said that - what structural feature of the system produced that output. Often that question maps directly onto a structural feature of the problem you're actually trying to solve. The system becomes a mirror, and the reflection is more useful than the answer.


Frequently Asked Questions

How do I start using AI to build mental models rather than just get answers?

Treat every session as a reasoning audit. After receiving any response, ask: "What assumptions underlie this?" and "What would have to be true for this to be wrong?" You're not fact-checking the output - you're reverse-engineering the logic structure. Over weeks, those structures become part of how you approach problems without the model present.

What if I don't have a technical background - do these mental models still transfer?

Completely. Inversion works in parenting and negotiation. Second-order thinking applies to career decisions. First principles decomposition is useful anywhere assumptions have calcified over time. AI provides a fast, low-stakes environment to practice these models before the stakes are real - domain is secondary to the habit.

Which mental model should I start with if I'm new to this approach?

Start with inversion. It's the lowest-friction entry point: take any plan or decision you're currently working on and ask an AI to list everything that could cause it to fail. You'll immediately see a different quality of analysis than forward-planning alone produces, and the technique transfers directly to unassisted thinking within days.


About the Author

Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.
