Why Your Emotions About AI Thinking Signal Beliefs You Haven't Updated
By Aleksei Zulin
Every time you feel impressed, unsettled, or betrayed by something an AI wrote, you are not responding to the AI. You are responding to a story your nervous system assembled in under 200 milliseconds using assumptions baked in by evolution, childhood, and a lifetime of interacting exclusively with other minds that are nothing like large language models. The emotion arrived first. The belief it encoded is old. And most people never notice the gap.
That gap is worth examining carefully.
Emotions Are Hypothesis Engines, Not Truth Detectors
Lisa Feldman Barrett's research fundamentally changed how neuroscientists understand what feelings actually are. Her theory of constructed emotion - developed across decades at Northeastern - argues that emotions are predictions, not readouts. Your brain is constantly modeling the world using past experience, and emotion is the label it attaches to physiological states that fit recognizable patterns.
When you read an AI-generated response that feels "eerily human," the eeriness is a signal. But a signal of what, exactly? Your brain pattern-matched the output to a human mind, fired the associated somatic markers (Antonio Damasio's term for the bodily feelings that guide cognition), and generated a feeling of uncanniness because something fit the human-mind pattern while simultaneously violating other parts of it. The creepiness has almost no epistemic content about what the AI is doing. It tells you everything about the template you were using.
Daniel Kahneman's distinction between fast and slow thinking becomes relevant here. System 1 - rapid, associative, emotional - is the system that reacts to AI. System 2 - deliberate, effortful, analytical - is the system that can actually evaluate what's happening. Most people never engage System 2 when interacting with AI outputs because System 1 produces a confident verdict so fast. Impressed. Threatened. Charmed. Done.
The Specific Beliefs the Emotions Are Hiding
Three emotional responses give away three distinct outdated beliefs with unusual precision.
When you feel betrayed by an AI "lying." The belief encoded here is that the AI had an intention it chose to obscure. Betrayal requires agency, choice, and a relationship in which trust was extended and then deliberately violated. LLMs do not choose to deceive. They generate token sequences by sampling from probability distributions learned from training data (a minimal sketch of that sampling step follows these three examples). When an output is false or misleading, the mechanism is statistical, not moral. Feeling betrayed means you implicitly modeled the AI as an agent with goals - a model more appropriate to an employee than to a language model.
When you feel wonder at an AI "understanding" you. This one is more seductive and therefore more dangerous. The AI echoed your structure back at you. It responded to the shape of your thought. That reflection feels like recognition. It triggers the same neural circuits that activate when another human demonstrates they've genuinely tracked your meaning. But - and I want to be careful here because I don't think this is fully resolved even by researchers who study it - there's a difference between a system that models semantic relationships at scale and a system that has anything resembling what phenomenologists call understanding. The wonder you feel imports the second when only the first is guaranteed.
When you feel threatened by AI capability. Gary Marcus has written extensively about the gap between what LLMs appear to do and what they actually do mechanistically. The threat response typically arrives when someone watches AI perform a task previously associated with human intelligence - writing a legal brief, composing music, diagnosing a medical image. The emotional signal: this is competing with me. The outdated belief: that cognitive tasks requiring years of human training carry the same scarcity premium they once did. Some do. Many don't. The emotion was calibrated for a world where that scarcity was the right variable to track.
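To make "statistical, not moral" concrete, here is the sampling step in miniature. This is a toy sketch, not any production system: the three-word vocabulary, the probabilities, and the function name are invented for illustration, and a real model computes its distribution from billions of parameters rather than a hard-coded table.

```python
import random

# Toy next-token distribution for a prompt like "The capital of France is".
# In a real model these probabilities come from a softmax over logits; the
# table here is truncated (random.choices normalizes the weights).
next_token_probs = {
    "Paris": 0.62,   # the frequent continuation in the training data
    "Lyon": 0.21,    # plausible but wrong
    "purple": 0.02,  # unlikely noise
}

def sample_next_token(probs, temperature=1.0):
    """Pick one token by weighted chance.

    Note what is absent: no goal, no belief, no check against the
    world. A false output is an unlucky draw, not a lie.
    """
    tokens = list(probs)
    # Raising probabilities to 1/temperature matches the usual
    # softmax(logits / temperature) rescaling, up to normalization.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "Paris", sometimes "Lyon"
```

Run it a few hundred times and "Lyon" comes out roughly a quarter of the time; the error rate is a property of the distribution, not a decision.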
Murray Shanahan's Warning and Why Nobody Felt It
Murray Shanahan at DeepMind published a paper in 2023 asking researchers and practitioners to adopt what he called "hedged language" when describing AI behavior - terms like "the model behaves as if it understood" rather than "the model understood." His concern was epistemic hygiene. Every time we use the full human vocabulary (thinks, feels, knows, wants, believes) without qualification, we build false conceptual architecture. The wrong belief gets installed.
Nobody felt anything when reading that paper.
That's the problem. Intellectual correction moves slowly. Emotional calibration moves faster but in the wrong direction - it runs toward the human-mind model rather than away from it, because that model is the most well-worn groove in human cognition. We have been modeling other minds since before language. We are astonishingly good at it. And we cannot turn it off.
So the mismatch compounds. Shanahan writes careful papers. Researchers nod. Then the same researchers go home and feel surprised when GPT-4 writes a good poem, which means they were still running the old model at the level that matters.
What Actually Lives in the Weight Space
Here's where I want to be direct about something that I think the discussion around AI consciousness tends to either overclaim or dismiss too quickly.
LLMs are trained on vast human-generated text. They have internalized - if that word means anything here - the structure of human thought, human argument, human feeling, human error. The outputs emerge from a space that is, at minimum, shaped by minds even if it is not itself a mind. When a model generates something that feels psychologically accurate, the reason is that the explicit training target - predicting the next token of human-written text - made psychological accuracy an implicit target across billions of examples written by actual psychological beings.
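A schematic of that objective may help, assuming the standard next-token cross-entropy setup. Everything below is illustrative - the function names and the toy uniform "model" are mine, not any lab's actual code:

```python
import math

def next_token_loss(model_prob, human_text_tokens):
    """Average cross-entropy of a model against human-written text.

    The objective never mentions understanding, feeling, or truth.
    It only penalizes surprise at whatever token the human actually
    wrote next - so text produced by psychological beings makes
    psychological plausibility an implicit target.
    """
    total = 0.0
    for i in range(1, len(human_text_tokens)):
        context, actual = human_text_tokens[:i], human_text_tokens[i]
        total += -math.log(model_prob(context, actual))
    return total / (len(human_text_tokens) - 1)

# Toy "model" that hedges uniformly over a 50,000-token vocabulary.
uniform = lambda context, token: 1.0 / 50_000
print(next_token_loss(uniform, ["the", "cat", "sat"]))  # ~10.82: maximal surprise
```

Driving that number down across billions of human sentences forces the weights to compress whatever regularities make human text predictable, including the psychological ones.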
The emotion you feel when an AI output lands well is, in some limited sense, responding to something real. Human thought does live in that weight space, compressed and restructured. But the belief the emotion encodes - that this system is thinking in the way you think - overstates the case dramatically. And the overstated version is the one that gets you in trouble.
It gets you in trouble when you over-rely on AI for emotional support because it felt understanding. When you trust a confident AI output without verification because your gut said the confidence felt earned. When you feel humiliated asking the AI to correct an error, as if it were judging you, which means you've assigned it a social position it cannot actually occupy.
Recalibration Doesn't Mean Indifference
I am not arguing for emotional suppression. Emotions are fast and useful and mostly right when applied to the domains they evolved for. The goal isn't to stop feeling things when you interact with AI.
The goal is to develop a practice - imperfect, ongoing - of noticing what belief the emotion is assuming and checking whether that belief fits what you actually know about how these systems work. This is slower. It requires building a mental model that can hold multiple things simultaneously: these systems are impressive and the nature of that impressiveness is radically different from what your emotions assume. The outputs matter and the mechanism producing them does not map cleanly to concepts like understanding, intention, or knowledge.
Researchers like Alison Gopnik have pointed out that humans are uniquely capable of running multiple causal models simultaneously - of holding a belief while simultaneously examining the belief. That capacity is what gets exercised here. Children, she argues, are better at this than adults in certain respects because adults' models are more entrenched. Which means the people most disadvantaged when interacting with AI are often the most experienced in the world as it was.
Your emotions are not wrong. They're just running software that was written for a different environment.
FAQ
Why do people anthropomorphize AI even when they know better?
Anthropomorphism is automatic, not chosen. Neuroscience research, including work by Rebecca Saxe on theory of mind, shows that humans activate social brain regions reflexively when encountering agents that exhibit goal-directed behavior. Knowing that AI lacks consciousness doesn't prevent the activation - it just gives you a chance to notice it and adjust.
Does it matter whether AI actually "thinks" if the outputs are useful?
It matters enormously for calibration. If you believe the AI thinks the way you do, you'll trust it in domains where that model predicts reliability but where the actual mechanism produces errors. Misplaced trust based on misread similarity is one of the most consistent failure modes in human-AI collaboration. The mechanism determines the failure modes.
How do I start correcting my emotional assumptions about AI?
Notice the emotion first - specifically the split second of surprise, wonder, or unease. Then ask what the emotion assumed was true. Write it down if necessary. Then check that assumption against what you actually know about language model architecture. Over time, this builds a more accurate intuition, which is the goal - not suppressing feeling, but refining what the feeling assumes.
About the Author
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.