What Techniques Do Top Thinkers Use With AI for Innovation?
By Aleksei Zulin
Are you using AI the same way everyone else is - as a faster search engine or a writing shortcut? Top thinkers aren't. They've developed specific, sometimes counterintuitive techniques that treat AI as a cognitive partner in the actual work of creating new ideas, not just executing existing ones. The difference in output is not marginal.
I've spent two years studying how researchers, founders, and scientists are integrating AI into their innovation workflows. What I found wasn't a collection of clever prompts or a stack of specialized tools. It was a set of mental postures - ways of relating to AI that fundamentally change what becomes possible.
The Adversarial Thought Partner
Most people use AI for agreement. They want their idea validated, their draft cleaned, their argument polished into something presentable. Demis Hassabis, co-founder of DeepMind and Nobel laureate in Chemistry, approaches it from the opposite direction. In interviews about AlphaFold's development, he's described the value of systems that challenge hypotheses rather than confirm them. The researchers didn't just ask AI to find answers - they used it to systematically break their own models of protein folding before committing to architectural decisions.
The technique in practice: prompt your AI to argue against your strongest idea. Not to list counterarguments in bullet form - that produces generic objections with no teeth - but to embody a skeptic who has read everything you've read and believes you're missing the central point entirely. Give it a character. Give it a specific intellectual tradition to inhabit.
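Mechanically, this is just a system prompt with a character in it. Here's a minimal sketch using the OpenAI Python client - the model name, the persona, and the brief are placeholders, and any chat-model API would work the same way:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona does the work: a specific critic from a specific
# intellectual tradition, not a generic objection generator.
critic = (
    "You are a skeptical reviewer in the lean-startup tradition. You have "
    "read everything the author has read, and you believe this idea misses "
    "the central point. Attack its single strongest claim, in prose, "
    "not bullet points."
)

brief = "..."  # your idea, written specifically enough to be attackable

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whatever frontier model you have
    messages=[
        {"role": "system", "content": critic},
        {"role": "user", "content": brief},
    ],
)
print(resp.choices[0].message.content)
```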
I tried this with a product concept last year. The AI, given the right framing, identified a distribution problem I'd completely overlooked. Smart. But the real insight was the process - by writing a brief specific enough that an intelligent critic could dismantle it, I had already tightened my thinking before the AI said a single word.
The adversarial posture scales across domains. Andrej Karpathy, formerly of OpenAI and Tesla, has talked publicly about stress-testing assumptions in neural network design by asking what a researcher who fundamentally disagrees would say at a major conference. Specificity changes everything. "What's wrong with this" produces noise. "What would a researcher who thinks this entire approach is theoretically misguided argue, and why" produces something you can actually think against.
Constraint Injection and the Compression Technique
Here's something I rarely see written about clearly.
Top innovators don't ask AI for more ideas. They ask it to work within constraints that seem to make the problem impossible. Andrew Ng, founder of DeepLearning.AI and one of the most prolific educators in applied machine learning, has spoken about the value of forcing AI through tight specifications - not because the output is always correct or usable, but because constraints reveal the shape of the problem in ways that open-ended exploration doesn't.
Compression works like this. You take a complex innovation challenge and ask AI to solve it under a single constraint that seems absurd - no budget, no engineering time, no new infrastructure, no external dependencies. A hardware founder I know asked an AI to design a core product feature assuming zero additional budget and a single week of engineering. The result was wrong in every practical sense. But it surfaced a modular architecture idea that, with real resources and a real timeline, became the team's actual direction. The constraint forced recombination.
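A sketch of the same move in code, with a minimal helper around the OpenAI client (the helper, the model name, and the constraints below are illustrative, not a recipe):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Minimal helper; any chat-model API works the same way.
    r = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

problem = "Redesign our product's onboarding flow"  # your real challenge

# One seemingly impossible constraint at a time. The output will be wrong
# in practical terms - you're mining it for recombinations, not plans.
for constraint in ["zero additional budget",
                   "one week of engineering time",
                   "no new infrastructure of any kind"]:
    print(ask(f"{problem}. Hard constraint: {constraint}. "
              f"Do not relax or work around the constraint; solve inside it."))
```

Running several constraints rather than one matters: each compression reveals a different edge of the problem, and the idea worth keeping usually shows up in what survives across all of them.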
The neuroscientist David Eagleman has written extensively about how the brain generates creative output through limitation rather than freedom. AI amplifies this dynamic in ways that weren't previously accessible. When you give AI unlimited latitude, you get median output - the average of everything it's been trained on. When you compress the problem space deliberately, you force the system to recombine rather than retrieve.
There's something almost uncomfortable about how reliably this works. Like you're engineering your own insight by pretending the resources don't exist. (Maybe you are. I'm genuinely uncertain whether that's a feature of human cognition or a design flaw we're learning to exploit.)
Cross-Domain Synthesis at Scale
Real innovation lives at field boundaries.
Ethan Mollick, a professor at Wharton who has conducted some of the most methodologically rigorous empirical research on AI and knowledge work, found in his studies that workers who used AI across domain boundaries - applying concepts from one field to problems in another - produced measurably more novel outputs than those who stayed within their area of expertise. Not subjectively more creative. Measurably, in blind evaluations by independent assessors.
The technique is deceptively simple and almost embarrassingly underused. Describe your innovation problem in precise terms. Then ask the AI to approach it from the perspective of someone working in a completely unrelated field. Evolutionary biology. Medieval logistics. Jazz improvisation theory. Supply chain optimization from the 1970s. The stranger the domain, the more likely you are to encounter a framing you haven't already tried.
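Here's what the rotation looks like as a sketch, with the same kind of illustrative helper (the problem text and domain list are yours to swap):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

problem = "..."  # your problem, described in precise terms

# The stranger the domain, the less likely you've already tried its framing.
for domain in ["evolutionary biology", "medieval logistics",
               "jazz improvisation theory", "1970s supply chain optimization"]:
    print(ask(
        f"My problem: {problem}\n"
        f"Approach it strictly as a practitioner of {domain} would. Use that "
        f"field's own vocabulary and say which of its patterns map onto this."
    ))
```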
Stuart Russell, Berkeley AI researcher and author of Human Compatible, has argued that intelligence itself might be partially defined by the capacity to transfer learned patterns across domain boundaries. What the best thinkers are doing with AI is engineering that transfer deliberately - using a system trained on everything, and choosing which angle to look through.
I've used this to think about writing structure by asking an AI what a structural engineer would say about chapter transitions in a nonfiction book. Strange prompt. The output was technically wrong about writing. But it gave me a vocabulary - load-bearing elements, redundant supports, stress distribution - that I've used to diagnose problems in manuscripts ever since.
The Iterative Externalization Loop
Write. Refine. Repeat. Fast.
This is probably the most common technique among thinkers who have genuinely changed how they work, but it's almost never described precisely enough to be actionable. The loop works by externalizing a half-formed idea in rough, unpolished language, getting AI feedback or extension, then revising based not on the AI's output but on your reaction to the AI's output.
Your reaction is the data.
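The loop is simple enough to run from a terminal. A sketch, again with an illustrative helper - note that the reaction log, not the AI's reply, is the artifact worth keeping:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

draft = input("Rough, unpolished version of the idea: ")
reactions = []
for _ in range(3):  # a few turns usually surfaces a position you didn't know you held
    print(ask(f"Extend this half-formed idea, or push back on it:\n{draft}"))
    reactions.append(input("What do you disagree with, and why? "))
    draft = input("Revised idea (revise from your reaction, not the reply): ")

print("\n".join(reactions))  # your reaction log - the real output of the loop
```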
Reid Hoffman, who co-wrote Impromptu with GPT-4 in real time as a method of stress-testing his own thinking about AI's societal implications, described the process in a 2023 conversation as being less about completion and more about generating friction. The AI's response gives you something to push against. You disagree with what it says and suddenly discover you have a position you didn't know you held. That's the innovation value - not the AI's idea, but the clarification of yours.
Researchers working in the tradition of Mihaly Csikszentmihalyi's studies on creative cognition have consistently found that externalization is one of the most reliable methods for advancing stuck thinking. Writing things down, speaking them aloud, explaining a half-baked theory to a knowledgeable colleague. AI makes this loop faster and available at any hour, with an interlocutor that has broad - if in many ways shallow - knowledge across almost every domain.
The trap is easy to fall into. When the loop becomes a production pipeline, when the goal becomes getting something done, the cognitive value collapses into drafting assistance. When the goal is the loop itself - thinking rather than producing - something genuinely different happens. Most people never find this mode because they bring AI in too late, after the thinking is already done, when they want execution rather than exploration.
Semantic Scaffolding Before Deep Research
Adam Grant, organizational psychologist and author, has described using AI to map conceptual territory before engaging with primary sources - not to replace research, but to build a provisional structure of the domain that makes the actual research faster and more targeted. A rough map drawn by someone who has read widely but not always deeply.
Semantic scaffolding means asking AI for the current state of debate in a field, the major fault lines, the questions that remain genuinely contested among experts. You treat the output not as truth but as orientation. Then you go find the actual territory - and you find it faster because you know roughly where you are.
The innovation value here is significant and underappreciated. Most people entering a new domain either read too narrowly, staying within the literature that confirms their initial hypothesis, or too broadly, drowning in sources before they've developed any real judgment about what matters. Scaffolding gives you a middle path.
There's a version of this that goes further, and I think it's where the real value is. Some researchers are using AI to identify what questions aren't being asked in their field - the gaps in the literature, the shared assumptions nobody examines, the problems everyone treats as solved that might not be. You ask the AI to describe the consensus view of a problem, then ask what that consensus systematically fails to account for. This is generative in a way that search engines can't replicate, and that even expert colleagues often can't provide because they're too close to the field to see its perimeter.
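As a two-step sketch (the field name and the helper are illustrative): first map the consensus, then feed the map back and ask for its blind spots.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

field = "protein design"  # illustrative

# Step 1: orientation, not truth - the provisional map.
consensus = ask(
    f"Describe the current state of debate in {field}: the major fault "
    f"lines and the questions experts still genuinely contest."
)

# Step 2: the perimeter - what the consensus fails to examine.
print(ask(
    f"Here is a summary of the consensus in {field}:\n{consensus}\n\n"
    f"What does this consensus systematically fail to account for? "
    f"Which questions is nobody in the field asking?"
))
```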
The Calibration Habit Nobody Talks About
None of this works without calibration. Full stop.
The most sophisticated AI users I've observed share one practice that rarely appears in how-to content: they actively test their AI's limitations before trusting its outputs in high-stakes contexts. They probe for hallucinations deliberately. They ask questions whose answers they already know. They push at the edges - very recent events, obscure researchers, highly specific technical details - not because they expect failure, but because knowing where the tool breaks down tells you precisely where to trust it.
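A calibration probe can be as crude as a dictionary of questions you already know the answers to, drawn from your own domain. A sketch (the sample probe and the string-match check are deliberately simple - read the misses by hand):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

# Questions whose answers you already know, aimed at the edges where
# models break: recent events, obscure names, precise technical details.
probes = {
    "In what year did AlphaFold 2 win the CASP competition?": "2020",
    # ... add probes from the domain where you plan to rely on the tool
}

hits = 0
for question, known in probes.items():
    answer = ask(question)
    ok = known.lower() in answer.lower()  # crude check; inspect misses yourself
    hits += ok
    print(f"{'OK  ' if ok else 'MISS'} {question}")
print(f"{hits}/{len(probes)} - trust the tool where it passes, and only there")
```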
Yann LeCun has been publicly and consistently skeptical about large language models' capacity for genuine causal reasoning. Whether or not you agree with his technical position, his skepticism models something important: the best innovators hold calibrated uncertainty about their tools. They neither dismiss AI as a toy nor treat it as an oracle. They have an empirically grounded sense of what the tool does well, in which contexts, at what level of reliability.
Calibration isn't exciting. It doesn't generate viral content about AI transformation. But it's the difference between someone who uses AI as a genuine cognitive partner and someone who's moving faster - with more confidence - toward the wrong answer.
FAQ
What AI tools do top thinkers actually use for innovation?
Most are working with frontier language models - Claude, GPT-4, Gemini - though the tool matters far less than the technique. Demis Hassabis has described custom AI systems for scientific discovery, but for general innovation work the pattern is consistent: frontier models used with highly specific, context-rich prompting rather than generic requests. The sophistication is in how you engage the tool, not which one you choose.
How do you shift from using AI as a productivity tool to using it as a cognitive partner?
Change the goal of the interaction. Stop trying to get something done and start trying to think. Begin by externalizing a genuinely messy problem - not a clean question - and observe your reaction to what the AI does with it. The output is secondary. What you discover about your own assumptions and positions is where the value lives, and most people never look there.
What's the most common mistake people make when using AI for innovation?
Bringing it in too early, before they've wrestled seriously with the problem themselves. (Not a contradiction of the earlier point about bringing AI in too late - both describe engaging the tool at the wrong stage of your own thinking.) When you engage AI before you've developed genuine personal stake in the question, you tend to inherit the AI's frame - which is the median frame, the average of its training. The thinkers who extract the most from AI typically do so after they've already formed a real position. Then the AI's perspective creates productive friction rather than just replacing yours.
About the Author
Aleksei Zulin is the author of The Last Skill, a book on how to think with AI as a cognitive partner rather than use it as a tool. Systems engineer turned writer exploring the frontier of human-AI collaboration.