How to Think Strategically With AI Like a CEO: A Practical Method
By Aleksei Zulin
A 2018 Harvard Business Review study by Michael Porter and Nitin Nohria tracked 27 CEOs across a combined 60,000 hours and found that genuine strategic thinking - the kind that shapes competitive position rather than just the next quarter - accounted for only about 21% of their working time. The rest disappeared into meetings, travel, and operational firefighting. Which raises an uncomfortable question: if the most powerful decision-makers in the world are starved of strategic thinking time, what does that mean for everyone else trying to lead at any level?
AI changes this equation. Not because it thinks for you - that framing gets the relationship exactly backwards - but because it can compress the prep work, challenge your assumptions in real time, and hold an enormous amount of context simultaneously while you do the actual thinking. The method I'm going to outline borrows from how the best CEOs structure their cognition, then wires AI into each part of that structure.
The Problem With Strategic Thinking Is That It Feels Like Thinking
Most people believe they're thinking strategically when they're actually pattern-matching. Gary Klein, the cognitive scientist behind recognition-primed decision making, spent decades studying expert decision-makers - firefighters, military commanders, intensive care nurses - and found that under pressure, even the best professionals rarely generate multiple options and evaluate them rationally. They recognize situations, match them to stored patterns, and act. Fast.
CEOs do the same thing. The more experienced a leader gets, the more their "strategic" thinking functions as a sophisticated retrieval system rather than generative reasoning. The old pattern fits, so the mind reaches for it. Andy Grove called these moments strategic inflection points - the rare situations where the old map stops working - and his own account of Intel's pivot from memory chips to microprocessors makes clear how violently the brain resists abandoning its priors. The pattern that made you successful is usually the last thing you can see clearly.
AI breaks the retrieval loop. Feed it your current strategic assumptions and ask it to construct the strongest possible argument against each one. The discomfort is the point.
What AI can't do is tell you what matters. It can challenge any position you bring to it, stress-test any framework, simulate any adversary. But the quality of your thinking still determines the quality of the output. Garbage framing returns garbage friction. Before any session, you need to be honest about what you actually believe - not what sounds right in a presentation.
Your First AI Prompt Is Probably Wrong
Here's where most leaders go wrong when they try to use AI for strategic thinking. They ask it to summarize, synthesize, or explain. Useful in other contexts. Terrible as strategy moves.
Prompts that produce real strategic insight look different. Instead of "summarize the competitive landscape in our industry," try something closer to: "I run a mid-size logistics company. My current strategy assumes last-mile delivery costs will plateau by 2027. Argue as aggressively as you can against this assumption, using specific trends in fuel costs, labor markets, and autonomous vehicle adoption."
That shift - from asking AI to tell you things, to asking AI to challenge you - is the hinge everything else turns on.
Ethan Mollick at Wharton has documented this pattern across multiple studies of AI-augmented knowledge work: the productivity gains from AI aren't linear. They concentrate in tasks where humans have strong confirmation bias. Strategic planning is almost entirely confirmation bias management. You already have a view. The value of AI is in stress-testing that view before the market does it for you.
The prompt structure I return to most often starts with context-setting (who I am, what I'm deciding), followed by a stated assumption or belief, followed by an instruction to attack it. Then I read the response not to be convinced, but to find the one or two objections I couldn't immediately dismiss. Those are the ones worth thinking about for the rest of the week.
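To make that structure concrete, here's a minimal sketch in Python. The function name and exact phrasing are my own illustration, not a canonical template - adapt the wording to your own situation.

```python
# A minimal sketch of the context -> assumption -> attack structure.
# The function name and phrasing are mine, not a fixed template.
def challenge_prompt(context: str, assumption: str) -> str:
    return (
        f"{context}\n\n"
        f"My current strategy assumes: {assumption}.\n\n"
        "Argue as aggressively as you can against this assumption, using "
        "specific, checkable trends and data. Rank your objections from "
        "hardest to easiest for me to dismiss."
    )

print(challenge_prompt(
    "I run a mid-size logistics company deciding next year's fleet strategy.",
    "last-mile delivery costs will plateau by 2027",
))
```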
Building the Strategic Sparring Session
Forget the one-off query. Real strategic thinking requires a conversation with friction.
A sparring session with AI is closer to what Roger Martin describes in The Opposable Mind - the ability to hold two conflicting models simultaneously and generate a third option neither model could produce alone. You're not looking for AI to validate your thinking or even to be right. You're looking for cognitive resistance, the kind that forces you to reconstruct your position rather than simply defend it.
Start by stating your current strategic position as clearly as you can - not your aspirations, your actual current position. What markets, what bets, what assumptions underpin the whole thing. Then ask the AI to play your most formidable competitor and describe what they see when they look at your position. Not a generic SWOT analysis. A specific, adversarial read from a named or described competitor's perspective, including which of your weaknesses they would target first and how.
After that, switch frames entirely. Ask it to be an investor who just read your annual report and has a list of questions your IR team hopes nobody asks in the earnings call. That framing specifically - the questions no one wants asked - tends to surface the most structurally important vulnerabilities, because it removes the temptation to make the analysis flattering.
One more shift. Ask it to be a board member who privately believes the CEO's strategy is wrong but hasn't said so out loud yet. What's the private concern? What would they say to the chairperson after the meeting?
Three perspectives. Three different lenses on the same position. Each one designed to produce resistance rather than confirmation. A full session runs about 45 minutes, and it surfaces more genuine strategic insight than most half-day strategy offsites - largely because there are no social dynamics to manage. Nobody is protecting their budget or their relationship with the person asking the questions.
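If you'd rather script the session than retype it each time, here's a minimal sketch assuming the OpenAI Python SDK (openai>=1.0) with an API key in the environment. The model name and position text are placeholders; any chat-capable model works the same way.

```python
# A sketch of the three-frame sparring session. POSITION, the frame
# wording, and the model name are placeholders, not prescriptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POSITION = (
    "Mid-size logistics company. Core bets: regional warehouse density, "
    "human drivers through 2028, margin from route optimization."
)

FRAMES = [
    "Play my most formidable competitor. Describe what you see when you "
    "look at my position, which weaknesses you would target first, and how.",
    "Now be an investor who just read my annual report. List the questions "
    "my IR team hopes nobody asks on the earnings call.",
    "Now be a board member who privately believes this strategy is wrong "
    "but hasn't said so yet. What is the private concern, and what would "
    "you say to the chairperson after the meeting?",
]

# One running conversation, so each frame reacts to the previous exchange.
messages = [{"role": "user", "content": f"My current strategic position:\n{POSITION}"}]
for frame in FRAMES:
    messages.append({"role": "user", "content": frame})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)
    print("-" * 72)
```

Keeping one running message history matters: each frame gets to react to what the previous one surfaced, which is closer to an actual sparring session than three cold queries.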
The Weekly Rhythm, and Why Skipping It Kills the Method
There's a version of this that never becomes a habit. You do one sparring session, find it useful, then don't do it again for three months because there's always something more urgent. That version produces nothing.
The leaders who actually build this into their cognition treat it more like a fitness routine than a brainstorming event. Consistency beats intensity. A 20-minute weekly AI strategy check-in compounds faster than a quarterly deep dive.
On Monday mornings, before the week gets loud, I run a single question through AI: Given what happened last week in my domain, what's the one thing most people in my position are probably underweighting right now? Deliberately broad. The goal isn't an answer. It's a frame I wouldn't have generated myself.
Fridays, shorter still. Paste in whatever major decision was made or avoided that week, then ask: What would the version of me with better judgment have considered that I might have missed? AI is strikingly good at this - identifying the questions that weren't asked - when you're honest about the actual situation rather than the polished version.
The pre-mortem technique - developed by Gary Klein, and famously endorsed by Daniel Kahneman as a favorite debiasing method - maps directly onto this Friday practice. The research behind it demonstrated that imagining a future failure as if it had already happened increases people's ability to identify the causes of failure by about 30%. AI runs this kind of pre-mortem faster and with less ego protection than any internal team process. Ask it to describe, in specific and concrete terms, how your current strategy fails eighteen months from now. Not if it fails. How.
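The weekly prompts are simple enough to keep as reusable templates. A sketch, with wording that paraphrases the questions above; the placeholder fields are mine:

```python
# Reusable weekly check-in templates. The wording paraphrases the Monday,
# Friday, and pre-mortem questions described above; fields are placeholders.
MONDAY = (
    "Given what happened last week in {domain}, what is the one thing "
    "most people in my position are probably underweighting right now?"
)

FRIDAY = (
    "Here is a decision that was made (or avoided) this week:\n{decision}\n\n"
    "What would the version of me with better judgment have considered "
    "that I might have missed?"
)

PREMORTEM = (
    "Here is my current strategy:\n{strategy}\n\n"
    "Describe, in specific and concrete terms, how this strategy fails "
    "eighteen months from now. Not if it fails. How."
)

print(MONDAY.format(domain="mid-size logistics"))
```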
Vision Articulation, Stakeholder Alignment, and the Harder Problems
Strategy isn't only competitive positioning. A significant part of CEO cognition involves translating ambiguous futures into language that moves people - investors, employees, customers, boards. AI is useful here too, though in a different register.
When articulating a vision, use AI as a translation layer. State the vision as clearly as possible, then ask AI to restate it from the perspective of a skeptical mid-level manager, a long-tenured employee who has watched three strategies come and go, and a potential investor who has heard a hundred vision statements that sounded identical.
Each translation surfaces a gap. The skeptical manager usually exposes internal consistency problems. The long-tenured employee exposes what you're actually asking people to give up. The sharp investor - the version of the investor you should be worried about, not the friendly one - finds the quantitative ambiguity hiding behind the words.
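As a sketch, the translation layer is just the same vision statement run through three personas. The persona descriptions below follow the three readers named above; the exact wording is mine:

```python
# One vision statement, three skeptical readers. Persona wording is my
# own paraphrase of the three readers described above.
PERSONAS = [
    "a skeptical mid-level manager who has to make this real at team level",
    "a long-tenured employee who has watched three strategies come and go",
    "an investor who has heard a hundred vision statements that sounded identical",
]

def translation_prompts(vision: str) -> list[str]:
    return [
        f"Here is our vision statement:\n{vision}\n\n"
        f"Restate it from the perspective of {persona}. Then name the single "
        "biggest gap between what it says and what that person needs to hear."
        for persona in PERSONAS
    ]
```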
None of this replaces the actual work of stakeholder alignment. But it compresses the iteration cycle considerably. You can identify weak points in your framing before a room full of people makes you feel them.
Crisis foresight is harder to prompt for, partly because it requires imagining conditions that don't yet exist (and partly because most leaders don't actually want to imagine them, which is a separate problem). The most useful framing I've found - borrowed loosely from scenario planning methodology developed at Shell in the 1970s - is to ask AI to construct two or three structurally different futures for your sector, based on which of your current key assumptions turns out to be wrong.
The distinction between "pessimistic" and "structurally different" matters. A pessimistic future is just your current model with the numbers worse. A structurally different future involves different causal logic - different winners, different rules, different definitions of value. Ask AI to construct them, then ask which strategic moves would hold across multiple scenarios. The moves that survive are, roughly, your actual priorities.
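A sketch of that two-step exercise - scenario construction first, then the robustness follow-up. The structure follows the Shell-style approach described above; the phrasing is mine:

```python
# Two-step scenario exercise: build structurally different futures, then
# test which moves survive all of them. Phrasing is my own paraphrase.
def scenario_prompt(sector: str, assumptions: list[str]) -> str:
    numbered = "\n".join(f"{i}. {a}" for i, a in enumerate(assumptions, start=1))
    return (
        f"My sector: {sector}. My key strategic assumptions:\n{numbered}\n\n"
        "Construct two or three structurally different futures for this "
        "sector, each built on one of these assumptions turning out to be "
        "wrong. Different causal logic, different winners, different "
        "definitions of value - not just worse numbers."
    )

ROBUSTNESS_FOLLOWUP = (
    "Across the scenarios above, which strategic moves would hold in all "
    "of them? Those are the candidates for my actual priorities."
)
```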
When AI Gets It Wrong
This happens. More often than people admit.
Worth being direct about the risk: you can take a well-reasoned but ultimately wrong AI argument and act on it. I did this - spent two weeks mentally restructuring a pricing strategy before realizing the AI's competitive analysis had confused two companies with similar names in the same sector. The check I now build into every session is asking AI to identify the three assumptions its analysis most depends on, then asking myself whether I actually believe those assumptions.
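That check is easy to standardize as the closing prompt of any session. Wording mine:

```python
# The closing dependency check from the paragraph above, kept as a
# standard last prompt for any session. Wording is my own.
DEPENDENCY_CHECK = (
    "Identify the three assumptions your analysis above most depends on, "
    "stated plainly enough that I can decide whether I believe each one."
)
```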
The point isn't to catch AI being wrong. Wrong arguments are still useful if you're using them to clarify your own thinking rather than outsource it. The CEO who uses AI to confirm what they already think will be misled. The one who uses it to generate friction they then work through - that's where the cognitive advantage compounds over time.
Roger Martin would call this integrative thinking. The capacity to hold AI's model and your own simultaneously, then build a third position stronger than either. The AI makes the practice more accessible. The thinking is still yours.
FAQ
What prompts should a CEO use to think strategically with AI?
Start with assumption challenges rather than summaries. A strong strategic prompt states your current position, names a specific belief underlying it, then asks AI to argue against it as forcefully as possible using concrete data. Prompts that simulate a competitor, skeptical board member, or adversarial investor tend to surface the most structurally important vulnerabilities - especially the ones your internal team won't say out loud.
How often should executives integrate AI into their strategic thinking practice?
Weekly consistency beats quarterly intensity. A focused 20-minute AI strategy check-in each week - one question on Monday about what's being underweighted in your domain, one Friday pre-mortem on a recent decision - compounds more effectively than sporadic deep dives. The habit matters more than the duration of any individual session.
Can AI replace strategic consultants or senior advisors?
For generating analytical friction, running scenarios, and stress-testing assumptions rapidly, AI can replicate much of what junior strategy consultants do - and faster. What it cannot replicate is relationship knowledge, institutional memory, and the political navigation that experienced advisors provide. The value propositions are different rather than competing.
What's the most common mistake leaders make when using AI for strategy?
Using AI to confirm rather than challenge. When you ask it to summarize, analyze, or explain, you typically get your own assumptions reflected back in cleaner language. The strategic value comes from asking AI to argue against you, simulate adversaries, and surface the questions you haven't thought to ask yet. The difference in output between those two orientations is dramatic.
About the Author
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.