What Questions Should I Ask AI to Expand My Perspective? A Framework for Genuine Cognitive Expansion

By Aleksei Zulin

A few months ago, a friend showed me his AI conversation history. He was proud of it. Hundreds of exchanges, all of them polished, efficient, productive. He asked ChatGPT to summarize articles, draft emails, explain concepts. All reasonable uses. But when I scrolled through, something bothered me. Every single question he asked assumed he was already right. He was using AI to execute his thinking, not to challenge it.

That's the trap. Most people approach AI like a faster search engine. They input a conclusion disguised as a question, and the model - trained to be helpful - hands it back wrapped in validation. The perspective doesn't expand. It hardens.

The researchers who study belief revision have a term for this: myside bias. Psychologist Jonathan Baron at the University of Pennsylvania documented how people generate arguments, evaluate evidence, and ask questions in ways that systematically favor their existing views. The problem isn't intelligence - if anything, smarter people are better at marshaling arguments for the side they already hold. What changes the equation is the quality of the questions you're willing to ask - not just of AI, but of yourself, through AI.


Stop Asking AI What You Think. Start Asking What You're Avoiding.

The most perspective-expanding questions aren't the ones that feel comfortable to type. They're the ones where you hesitate before hitting enter.

Here's a concrete reframe. Instead of asking "What are the benefits of remote work?" - a question you might already know the answer to - ask "What would someone who deeply opposes remote work say that I haven't fully considered?" The shift sounds small. The cognitive effect isn't.

Philip Tetlock's research on superforecasting, published in his 2015 book with Dan Gardner, found that the most accurate predictors weren't domain experts. They were people who actively sought out views that contradicted their own and updated accordingly. AI gives you a 24/7 sparring partner for exactly this - if you ask it to play that role.

Some prompts worth trying: "What's the strongest argument against the position I just described?" Or: "Assume I'm wrong about this. What evidence would support that assumption?" Or, more uncomfortable, "What cultural blind spot might be shaping how I see this problem?"
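
If you talk to a model through an API rather than a chat window, this discipline can be scripted so each challenge gets a clean answer. A minimal sketch, assuming the OpenAI Python SDK; the model name and the example position are placeholders, and any chat-capable model would do:

```python
# Run each challenge prompt against a stated position in a separate call,
# so one answer can't soften the next.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHALLENGE_PROMPTS = [
    "What's the strongest argument against the position I just described?",
    "Assume I'm wrong about this. What evidence would support that assumption?",
    "What cultural blind spot might be shaping how I see this problem?",
]

def challenge(position: str, model: str = "gpt-4o") -> list[str]:
    replies = []
    for prompt in CHALLENGE_PROMPTS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"My position: {position}\n\n{prompt}"}],
        )
        replies.append(response.choices[0].message.content)
    return replies

for reply in challenge("Remote work is strictly better for knowledge workers."):
    print(reply, "\n---")
```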

That last one deserves a pause. Cultural blind spots aren't about being uninformed. They're structural. The way your family talked about money, failure, success - it shapes what you treat as obvious and what you don't notice asking about at all. AI trained on diverse text can sometimes surface what your particular vantage point made invisible. Not always. But often enough to be worth the discomfort.


The Questions That Crack Open Time

Historical counterfactuals sound like a parlor game. They're not.

When you ask AI "What would have happened if the printing press had been invented two centuries later?" you're not doing trivia. You're stress-testing your assumptions about how change happens, who controls it, and what conditions make ideas spread or die. The counterfactual forces you to identify which variables you think actually matter - and that's where perspective lives.

Gary Klein, the cognitive psychologist who developed the premortem technique, spent decades studying how decision-makers develop richer mental models. His core finding: experts don't just know more facts. They hold more conditional knowledge - they understand how outcomes change when key variables shift. Asking AI counterfactual and scenario questions builds exactly that kind of conditional thinking.

Go further into the future. "What problems will people in 2075 wish my generation had taken seriously that we're currently ignoring?" is a question most people never sit with. AI won't give you a definitive answer. (The honest ones will hedge appropriately.) But the exercise of generating those answers, reading them, arguing back - that's the mechanism. The answer matters less than what you notice yourself resisting.

One more direction here that gets underused: ask AI to map second and third-order consequences. "If this policy gets implemented, what happens next, and then what happens after that?" Most of our blind spots aren't about first-order effects. We see those. The blindness lives downstream.
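
Chaining that question is mechanical enough to script. A rough sketch of the same "and then what?" loop, again assuming the OpenAI Python SDK, with the policy text and model name as placeholders:

```python
# Ask for first-order effects, then keep asking "and then what?",
# carrying prior answers in context so each order builds on the last.
from openai import OpenAI

client = OpenAI()

def consequence_chain(policy: str, orders: int = 3,
                      model: str = "gpt-4o") -> list[str]:
    messages = [{"role": "user",
                 "content": f"If this gets implemented, what happens first?"
                            f"\n\nPolicy: {policy}"}]
    effects = []
    for _ in range(orders):
        response = client.chat.completions.create(model=model, messages=messages)
        answer = response.choices[0].message.content
        effects.append(answer)
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user",
                         "content": "And then what happens after that? "
                                    "Effects of the effects, not restatements."})
    return effects
```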


Emotional and Ethical Terrain Most People Skip

The dimension almost entirely absent from how-to guides on AI prompting is the emotional and ethical one. Which is strange, because that's often where perspective is most calcified.

Ask AI about an ethical dilemma you've actually faced - not a textbook trolley problem. Describe the real situation: the competing loyalties, the ambiguity, the thing you did, and the part you're still not sure about. Then ask: "From a virtue ethics perspective, what would this choice say about what I value? From a consequentialist view, what did I optimize for without realizing it?"
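
One way to make the lens-switching systematic is to build one prompt per framework, so each lens gets a clean answer. A sketch; the third lens, deontology, is my addition for contrast, and all the wording is just a starting point:

```python
# Run one real dilemma through several ethical lenses, one prompt each.
FRAMEWORKS = {
    "virtue ethics": "what would this choice say about what I value?",
    "consequentialist": "what did I optimize for without realizing it?",
    "deontological": "which duties or principles did I treat as negotiable?",  # added lens
}

def lens_prompts(dilemma: str) -> list[str]:
    return [
        f"Here is a real situation I faced:\n{dilemma}\n\n"
        f"From a {name} perspective, {question}"
        for name, question in FRAMEWORKS.items()
    ]
```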

This isn't therapy. (Well - it can be useful in ways adjacent to therapy, but that's a separate conversation.) The point is that ethical frameworks function as perspective-generating machines. When you apply a framework you don't normally use, you get information your default framing hides.

Adam Grant's research on the psychology of rethinking, detailed in Think Again, shows that intellectual humility - the genuine kind, not the performed kind - correlates with better decision-making across domains. But intellectual humility is hard to feel in the abstract. It becomes accessible when you encounter a genuinely well-reasoned view that contradicts yours. AI, asked the right way, can provide that encounter on demand.

The question I've found most disruptive, personally: "What would someone who has lived a fundamentally different life than mine - different country, class, cultural background - find obvious about this situation that I'm treating as complicated?" Sometimes the AI's response is generic. Sometimes it's unexpectedly clarifying. The variance is part of the point.


How Do You Know If Your Perspective Actually Shifted?

This is the question nobody asks, and I'll admit I don't have a clean answer.

The honest version: most people feel like they've expanded their perspective after an intellectually stimulating AI conversation, and some of them have, and some of them have just experienced the pleasant sensation of novelty without any real update. These feel nearly identical from the inside.

A few rough diagnostics. Did you change how you'd behave in a specific situation? Did you update a concrete belief, not just feel more open-minded in the abstract? Did you feel genuine resistance during the conversation - that friction where something challenged what you already thought you knew?

Psychologist Carol Dweck's work on growth mindset is instructive here, though not in the Instagram-quote way it's usually cited. The key finding isn't that "believing you can grow" makes you grow. The research shows that specific behaviors - seeking out difficulty, welcoming discomfort, responding to setbacks by revising strategy rather than protecting self-image - produce measurable change. Asking AI hard questions is one such behavior. The trick is insisting on hard answers, pushing back when the response feels too comfortable, and treating the conversation as a workout rather than a massage.

Keep a log. Not of every AI conversation - that way lies obsessive note-taking and no actual thinking. But when a question genuinely surprises you, write down what you believed before and what shifted. Do this for a month. Then look back. The pattern of what shifted - and what refused to - will tell you something about the shape of your particular blind spots.
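
The log itself can be as simple as an append-only file. A minimal sketch; the filename and fields are just one way to structure it:

```python
# One entry per genuine surprise, appended as JSON lines so a month of
# entries stays easy to scan and search.
import json
from datetime import date
from pathlib import Path

LOG = Path("belief_shifts.jsonl")

def record_shift(question: str, believed_before: str, shifted_to: str) -> None:
    entry = {
        "date": date.today().isoformat(),
        "question": question,
        "believed_before": believed_before,
        "shifted_to": shifted_to,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```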


FAQ

How do I make sure AI isn't just telling me what I want to hear?

Explicitly ask it not to. Say "I want you to steelman the opposing view, even if it's uncomfortable." Then push back on the first answer you get - ask where it's being vague or overly diplomatic. Most AI models soften responses by default. You can override that with direct instruction, and the quality of challenge increases noticeably when you do.
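
In API terms, that means a blunt system instruction plus a second turn that names the diplomacy. A sketch, again assuming the OpenAI Python SDK, with the wording of both messages as a starting point rather than a recipe:

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "Steelman the view opposing mine, even if it's uncomfortable. "
                "Do not soften, balance, or reassure."},
    {"role": "user", "content": "My view: open-plan offices improve collaboration."},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The second turn matters as much as the first: push back on the softness.
messages.append({"role": "user",
                 "content": "Where was that answer vague or overly diplomatic? Sharpen it."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```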

Can AI actually help with cultural blind spots, or does it just reflect mainstream perspectives?

Both are true. AI models reflect the biases in their training data, which skews toward English-language, Western, educated sources. That's a real limitation. But asking explicitly - "What perspectives on this topic are underrepresented in mainstream Western discourse?" - can surface genuine alternatives. Treat it as a starting point for further research, not a final answer.

What's the difference between a question that expands perspective and one that just makes me feel smart?

Feeling smart usually means your existing frameworks handled the new information easily. Real perspective expansion creates friction - a sense that something doesn't quite fit what you already believe, that you need to restructure something rather than just add to it. If every AI conversation leaves you nodding, the questions aren't doing their job.


The questions you're willing to ask tell you more about your current perspective than any answer could. AI won't expand your thinking automatically. Nothing does. But with the right prompts - the uncomfortable ones, the counterfactual ones, the ones that invite challenge rather than confirmation - it becomes something rarer than a tool. A genuine cognitive partner.

Whether you use it that way is still, entirely, your choice.

About the Author

Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.
