
What Steps Should I Follow to Think More Like an AI?

By Aleksei Zulin

Neuroscientist Karl Friston has spent decades arguing that the brain is not a passive observer but a prediction machine - one that constantly generates hypotheses about the world and updates them based on incoming evidence. His "free energy principle" describes cognition as the relentless minimization of surprise. If that sounds familiar, it should. Large language models work on nearly the same premise: predict the next token, adjust weights, reduce error. We've been building silicon brains that mirror our own, and almost nobody has asked the reverse question - what would happen if you ran the process backwards, using AI's architecture as a blueprint for upgrading your own thinking?

The answer, I've found, is more practical and more unsettling than most productivity advice.

Your Brain Already Runs on Pattern Completion - You Just Don't Control It

Before anything else, recognize what you're working with. Humans don't reason from first principles by default. We pattern-match, then rationalize. Daniel Kahneman's System 1 and System 2 model describes this well: fast, associative pattern completion dominates most decisions, while slow, deliberate reasoning gets reserved for problems we flag as difficult. The trouble is that we flag too few problems as difficult.

AI language models are all System 1, running at scale. Every response emerges from learned statistical patterns across billions of text examples. There's no second-guessing, no ego investment, no motivated reasoning protecting a prior belief. The model doesn't care if its previous output was wrong. It just predicts.

Humans care enormously. Caring is the problem.

The first practical shift isn't a technique - it's an orientation. Start treating your initial read of any situation as a draft prediction, not a conclusion. Neuroscientist Stanislas Dehaene writes in How We Learn that the brain's predictive codes are updated through prediction error, not through confirmation. When your expectation is wrong, learning happens. When you're right, almost nothing changes neurologically. Which means seeking disconfirmation, actively hunting for the moment your mental model breaks, is literally how the brain updates. AI doesn't resist this. You will. The practice is noticing that resistance.
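The asymmetry Dehaene describes - big updates on error, near-zero updates on confirmation - is the same logic as the delta rule used in machine learning. Here is a minimal sketch; the numbers and the learning rate are invented for illustration:

```python
# Sketch of error-driven updating (the delta rule): a belief moves
# only in proportion to how wrong the prediction was.
# Values and learning rate are illustrative, not from any real model.

def update(belief: float, observed: float, learning_rate: float = 0.3) -> float:
    prediction_error = observed - belief
    return belief + learning_rate * prediction_error

belief = 0.5
confirmed = update(belief, 0.5)  # prediction matched: nothing changes
surprised = update(belief, 1.0)  # prediction wrong: the belief shifts
```

When the observation matches the prediction, the error term is zero and the belief stays put - confirmation teaches nothing. Only surprise moves the needle.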

The Token Mindset: Breaking Thought into Its Smallest Units

Here's an exercise I've run with dozens of people in workshops, and it's consistently the one that generates the most friction: take a claim you believe - something you'd say with confidence - and break it down to its load-bearing assumptions. Not bullet points. Individual units of meaning, the way a transformer tokenizes language before processing it.

"Remote work increases productivity" becomes, at the token level: "remote," "work," "increases," "productivity" - and each of those carries a hidden disambiguation problem. Remote for whom? Increases compared to what baseline? Productivity measured how?

Andrej Karpathy, formerly of OpenAI, has written about how the tokenization step shapes everything that follows in a language model. The granularity of your decomposition determines the quality of your reasoning downstream. Coarse input produces coarse output. Most human thinking fails not at the reasoning stage but at the parsing stage - we're working with chunks so large and so loaded with implicit assumptions that the "reasoning" we do afterward is just rearranging prejudices.

The practice: pick one belief per day and tokenize it. Not to destroy the belief. To know what you're actually defending.
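The exercise can be sketched in a few lines of code. The disambiguation questions below are written by hand - the point is the decomposition itself, not automating it:

```python
# Sketch: decompose a claim into word-level "tokens" and surface the
# hidden disambiguation question each one carries. The questions are
# hand-written examples, not generated.

claim = "Remote work increases productivity"
tokens = claim.split()

disambiguations = {
    "Remote": "Remote for whom? Fully remote, or hybrid?",
    "work": "Which kind of work? Individual or collaborative?",
    "increases": "Compared to what baseline, over what period?",
    "productivity": "Measured how? Output, quality, hours logged?",
}

for token in tokens:
    print(f"{token!r:16} -> {disambiguations[token]}")
```

A real transformer tokenizer splits finer than words, but word-level granularity is already enough to expose what a confident sentence is quietly assuming.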

The Context Window as a Cognitive Discipline

Every large language model operates under a context window - a hard limit on how much information it can process simultaneously. Older models had narrow windows (2,000 tokens); current models handle hundreds of thousands. But the constraint still shapes behavior. Information outside the window doesn't exist, as far as the model is concerned.

Humans have the inverse problem. We cram everything into every decision.

Cognitive load research, beginning with John Sweller's cognitive load theory in the late 1980s, shows that working memory capacity is brutally limited - roughly four chunks of information at any given moment. Yet most people approach complex problems by trying to hold every relevant fact simultaneously, which guarantees that something important gets dropped or distorted.

The AI-inspired discipline here is deliberate context curation. Before engaging with a complex problem, define your window. What are the three to five facts or constraints that actually govern this decision? Everything else - and I mean everything else - gets temporarily excluded. Not ignored forever, but parked. The model doesn't agonize over what's outside its window. It works with what's in it.
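The curation step can be made mechanical. In this sketch the facts and their relevance scores are invented and hand-assigned - in practice, assigning the scores is the hard thinking:

```python
# Sketch of deliberate context curation: rank the facts bearing on a
# decision, keep only what fits the "window", and park the rest.
# Facts and relevance scores are invented for illustration.

WINDOW_SIZE = 4  # roughly the working-memory limit cited above

facts = [
    ("Runway lasts nine months", 0.9),
    ("Lead engineer may leave", 0.8),
    ("Competitor just raised funding", 0.4),
    ("Office lease renews in spring", 0.2),
    ("Churn rose two quarters running", 0.85),
    ("A board member prefers option B", 0.3),
]

ranked = sorted(facts, key=lambda f: f[1], reverse=True)
in_window = [name for name, _ in ranked[:WINDOW_SIZE]]
parked = [name for name, _ in ranked[WINDOW_SIZE:]]
```

Everything in `parked` still exists - it just doesn't get a vote in this pass.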

(I find this easier to say than to do. There's a specific anxiety that comes from deliberately not thinking about a relevant factor, and it takes practice to distinguish that anxiety from actual warning signals.)

The Cognitive Biases AI Skips - And How to Skip Them Too

AI models have biases. That's documented extensively and worth taking seriously. But they carry different biases than humans do, and the contrast is instructive.

Humans are vulnerable to anchoring, where the first number we hear distorts every subsequent estimate. We suffer from the sunk cost fallacy, where past investment warps present judgment. We experience in-group favoritism at a level that affects everything from hiring to scientific peer review. We remember vivid recent events as more probable than they are, a distortion psychologists call the availability heuristic.

A language model, presented with new inputs, doesn't remember the anchor from three exchanges ago (unless it's still in the context window). It has no sunk costs. It doesn't know which output is "mine" in a way that requires defending.

This doesn't mean AI reasoning is superior. Gary Marcus has written extensively about the ways LLMs fail at systematic reasoning, abstract generalization, and genuine causal understanding. But the biases they skip point toward something worth practicing.

The practical move is what I call "clean-slate re-evaluation." Take a decision you're currently stuck on - a project you're uncertain whether to kill, a relationship you're not sure whether to prioritize, a strategy you're unsure is working. Strip out what you've already invested. Strip out what you predicted six months ago. Ask only: given the evidence in front of me right now, what does the pattern suggest? No continuity with your previous self required.

Most people find this genuinely difficult. That difficulty is the whole point.

Probabilistic Output: Trading Certainty for Accuracy

AI systems don't think in binaries. They generate probability distributions - a spread of possible next tokens, each with an associated likelihood. The token that gets emitted is sampled from that distribution, often the most probable one, but the model carries the full uncertainty internally. It knows, in a structural sense, that other answers were possible.
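Concretely, the model turns raw scores (logits) into probabilities with a softmax. The token names and logit values here are invented, but the mechanics are the standard ones:

```python
import math

# Sketch: softmax turns raw scores (logits) into a probability
# distribution over next tokens. Tokens and logits are invented.

logits = {"Tuesday": 2.1, "Wednesday": 0.7, "Thursday": -0.3}

total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

best = max(probs, key=probs.get)  # the token that typically gets emitted
# The rest of the distribution is still there - the model "knows"
# the other answers were possible even when it outputs the top one.
```

Notice that even the winning token carries a probability well short of 1.0. The model never rounds its uncertainty up to certainty; that flattening is something humans add.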

Humans flatten this. We say "the meeting is Tuesday" when we mean "I'm about 85% sure the meeting is Tuesday based on the calendar I last checked four days ago." We say "he's untrustworthy" when we mean "I've seen two data points that suggest unreliability, with significant error bars."

Psychologist Philip Tetlock's work on superforecasters - people who consistently outperform experts at predicting geopolitical events - found that the single most reliable predictor of forecasting accuracy was the willingness to express calibrated uncertainty. Not confidence. Not expertise. Calibration. Superforecasters say "62%" where others say "yes" or "probably."

The practice isn't complicated. Start tagging your own assertions with rough confidence percentages, at least internally. Not performatively, not to seem humble, but because it changes what you do with the claim. An 85% confidence claim warrants different action than a 55% claim, and pretending otherwise doesn't make you more decisive - it makes your decisions noisier.
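Calibration is checkable. A minimal personal log - the entries below are invented - groups your assertions by stated confidence and compares each bucket's hit rate to the confidence you claimed:

```python
from collections import defaultdict

# Sketch of a personal calibration log: record each assertion with a
# confidence level, then check whether your "90%" claims actually come
# true about 90% of the time. The log entries are invented examples.

log = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for confidence, came_true in log:
    buckets[confidence].append(came_true)

calibration = {
    conf: sum(outcomes) / len(outcomes)
    for conf, outcomes in buckets.items()
}
# Well-calibrated: the hit rate per bucket tracks the stated confidence.
```

With six data points this is noise; over hundreds of tagged predictions, the gap between each bucket's hit rate and its label is your calibration error.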

Pattern Retrieval Before Reasoning

There's a sequence AI models follow that human thinking tends to invert, and getting this right might be the most underrated cognitive upgrade available.

When a language model generates a response, pattern retrieval happens first - activating learned associations across the training distribution - and explicit "reasoning" (to the extent that term applies) emerges from those patterns. Humans often try the opposite: we attempt to reason from stated principles, then search for patterns that confirm the reasoning. This is backwards and expensive.

Research by cognitive scientist Gary Klein on naturalistic decision-making shows that expert performance rarely involves the kind of systematic reasoning we assume. Experienced firefighters, chess grandmasters, and intensive care nurses make fast, accurate decisions by recognizing patterns first - matching current situations to a library of prior experiences - and reasoning only when pattern-matching fails.

The implication for ordinary thinking: before constructing an argument, ask what pattern the situation matches. Not "what do I think about this?" but "what does this remind me of, and how did that go?" The reasoning step, if needed, comes after - and it comes with much better raw material.
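Retrieve-before-reason can be caricatured as nearest-neighbour lookup over a small library of past situations. The cases, outcomes, and word-overlap similarity below are all stand-ins - a brain's matching is far richer - but the sequence is the point:

```python
# Sketch: pattern retrieval as nearest-neighbour lookup over a tiny
# "library" of past situations. Jaccard similarity over word sets is a
# crude stand-in for real pattern matching; cases are invented.

library = {
    "vendor overpromised and delivery slipped so the team absorbed the cost": "went badly",
    "ran a small pilot first then scaled after measured results": "went well",
    "hired fast under deadline pressure and skipped reference checks": "went badly",
}

def similarity(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

situation = "deadline pressure is pushing us to skip the usual pilot step"
best_match = max(library, key=lambda case: similarity(situation, case))
precedent = library[best_match]  # how the closest prior case turned out
```

Retrieval hands you a precedent and its outcome first; explicit reasoning then works on that material instead of starting from a blank page.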

This is also where reading broadly and across domains pays dividends in a way that reading deeply in one field doesn't. The larger and more diverse your pattern library, the better your first-pass retrieval. AI models trained on broader corpora generalize better. The same logic applies.


FAQ

Can thinking like an AI actually make me smarter, or just more robotic?

The goal isn't to suppress emotion or intuition - it's to run better predictions with the cognition you have. Pattern retrieval, calibrated uncertainty, and context discipline are practices that sharpen judgment, not flatten personality. The people who score highest on forecasting and reasoning measures tend to be more curious and more comfortable with complexity, not less human.

How do I practice token-level decomposition without it becoming paralyzing?

Apply it selectively. Choose high-stakes beliefs or decisions where imprecision is costly - strategic choices, important assessments of people, significant predictions about your field. For low-stakes thinking, fast System 1 pattern-matching is efficient and fine. The skill is knowing which mode a situation actually requires.

What's the most common mistake people make when trying to think more systematically?

Confusing process with outcome. Adopting more structured thinking doesn't guarantee correct conclusions - it reduces noise and improves calibration over time. People often abandon systematic practices after one failure. The better measure is whether your confidence levels correlate with your accuracy rates across hundreds of predictions, not whether a single forecast was right.


About the Author

Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.
