Fair warning: this is a rather long piece. It uses precise language to argue that precise language is exactly the wrong instrument for understanding what it's trying to describe.
Kings had advisors. Military strategists, economists, religious authorities, spymasters. Each brought deep expertise the king lacked. Each had a perspective honed over a lifetime in their domain.
The king's job was not to know more than any of them about their field. It was to sit in the room after they'd all spoken, disagreed, and left. And decide.
Today we have something the kings never had: every expert in the room at once, available instantly, contradicting each other in real time. LLMs let you consult simulated strategists, economists, designers, psychologists, historians, all simultaneously, iteratively, at no cost. This is extraordinary leverage. But it surfaces a question the kings would have recognized: what, exactly, does the person on the throne contribute that the advisors don't?
The obvious answers have been well covered. Accountability: someone must own the consequences. Context: the king knows things about his kingdom that can't be briefed. Organizational politics: power dynamics that exist nowhere in the briefing documents.
These are real, but they're also temporary moats. AI will increasingly have access to organizational context. Accountability can be structurally assigned. These are implementation gaps, not category gaps.
There is one category gap I keep coming back to, and it's the one I find hardest to articulate. Which is, itself, the point.
Everything that can be articulated gets commoditized. The thing that resists articulation is the thing that resists commoditization.
---
The Third Thing
When we talk about what LLMs do, we usually describe two modes of operation.
The first is analysis: structured reasoning, deduction, evaluation. LLMs do this extraordinarily well. Give them data and a framework and they produce outputs that are rigorous, well-organized, and often better than what a median expert would produce. This is the rational domain. It's articulable. You can trace the logic.
The second is creative variation: novel combinations, unexpected juxtapositions, lateral leaps. LLMs do this too, and people frequently mistake it for genuine creativity. What the model is doing is combinatorial: recombining elements from its training distribution in ways that feel novel. Stochastic sampling in language generation is to intuition what a roulette wheel is to a seasoned poker player's read on whether someone is bluffing. Both involve uncertainty. One is noise. The other is compressed signal.
What's missing is a third thing. Not analysis. Not randomness. Not some combination of the two.
Call it intuition. Call it instinct. Call it gut. In the Indian philosophical tradition, particularly in Abhinavagupta's work and Bhartrhari's philosophy of language, they called it pratibha: a faculty of direct creative insight, prior to and irreducible to reasoned analysis. Different traditions locate it differently. But across all these frameworks, the claim is consistent: there exists a form of knowing that is neither articulable reasoning nor stochastic variation. A third epistemic mode.
This is the thing that sits beneath taste and judgment. Taste is the application of this faculty to quality. Judgment is the application of it to decisions. But the faculty itself, the capacity to know something without being able to justify it, to feel the shape of the right answer before you can argue for it, is what I want to examine.
If it's real, it changes the calculus on what AI can and cannot replace.
---
What Intuition Is Not
The easiest way to define intuition is by what it isn't, because the most common move in AI discourse is to collapse it into something else.
Intuition is not fast analysis. Kahneman's System 1/System 2 framework encourages this conflation. System 1 is fast, automatic, effortless. System 2 is slow, deliberate, effortful. The implication is that intuition is just analysis running on faster hardware, the same process, compiled rather than interpreted. If that's all it is, then LLMs already have it.
But when an experienced PM looks at a product strategy and says "this won't work" before they can explain why, they're not doing fast analysis. The analysis, when they reconstruct it later, may justify the intuition. But the intuition arrived first, whole and certain, and the analysis was built after the fact to house it. The reconstruction is not the thing. The thing is the flash of recognition that preceded it.
Intuition is not randomness. When someone has a "creative hunch," it's tempting to model this as structured thinking plus noise. If that were true, you could replicate it by turning up the temperature on an LLM. But intuition has directionality. It has conviction. It says this, not any of these. It doesn't explore a space. It points.
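To make the "turning up the temperature" point concrete, here is a minimal sketch (toy logits and standard softmax-with-temperature sampling, not any particular model's API) of what temperature actually does: it flattens the output distribution, raising its entropy. More spread, not more conviction.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Softmax sampling with temperature. Higher temperature flattens
    the distribution: more variance, never more direction."""
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the categorical distribution.
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs

def entropy(probs):
    """Shannon entropy in nats: a measure of how uncommitted the
    distribution is."""
    return -sum(p * math.log(p) for p in probs if p > 0)

logits = [2.0, 1.0, 0.1]  # toy next-token scores
_, cold = sample_with_temperature(logits, temperature=0.5)
_, hot = sample_with_temperature(logits, temperature=5.0)

# Raising temperature raises entropy: the distribution spreads out.
# Nothing in this operation can make it point at one answer harder.
assert entropy(hot) > entropy(cold)
```

The sketch is the argument in miniature: temperature only redistributes probability mass across options the model already had. It explores a space more loosely; it cannot supply the directionality that says this, not any of these.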
Intuition is not the temporary absence of articulation. This is the subtlest conflation. The temptation is to think: "I can't explain my intuition yet, but with enough reflection, I could." Making it just analysis that hasn't finished arriving. But anyone who has operated on genuine intuition knows this is wrong. Some intuitions, when interrogated, yield reasoning. Those weren't really intuitions. They were conclusions that arrived ahead of their supporting arguments. Genuine intuition resists articulation not as a temporary state but as a constitutive feature. The moment you fully articulate it, it becomes analysis. The inarticulate remainder is the intuition.
This last point matters most for the AI question.
---
The Press Office
There's a useful way to think about what the conscious mind actually does. It thinks it's the Oval Office, the place where decisions are made. In reality, it's closer to the Press Office: hastily constructing plausible-sounding explanations for decisions that were made somewhere else, for reasons it doesn't fully understand.
LLMs are the ultimate Press Office. They can construct a plausible explanation for anything. They can construct seventeen contradictory plausible explanations and let you pick the most convincing one. What they cannot do is have the decision made somewhere else. There is no "somewhere else." It's Press Office all the way down.
This isn't a temporary limitation that scaling or multimodal training will solve. It is what the technology is. LLMs process tokens. They produce tokens. Everything they "know" exists as patterns over language. The medium is articulation.
And intuition, by definition, is the thing that lives outside articulation.
When an LLM produces output that looks intuitive ("something about this approach feels off") it is performing the articulation of inarticulation. It has learned that humans sometimes express knowing in this register, and it reproduces the register convincingly. But the register is not the faculty.
This is why using AI to generate first drafts of strategic work is not just unhelpful but actively harmful. It changes your cognitive task from thinking original thoughts to editing someone else's output. You anchor to the AI's framing instead of generating your own. The first draft is where intuition lives. It is the moment when you stare at the blank page and something in you, something you can't fully explain, begins to point. When you outsource that moment to an AI, you are not saving time. You are training yourself to skip the Oval Office and go straight to the Press Office. You are eroding the very faculty this essay is trying to protect.
An LLM is like having every published recipe from every cuisine, plus every food critic's review, plus every chef's memoir. That is a stupendously useful resource. But it is not the same as having a palate. The palate, that embodied, experienced, inarticulate faculty of judgment, is what separates a competent cook from a great one.
---
What the King Actually Does
Return to the king.
The advisors have spoken. The general recommends invasion. The treasurer warns of cost. The priest counsels patience. The spymaster presents conflicting intelligence. Each argument is coherent within its domain. Each contradicts at least one other.
The king must now decide.
This decision cannot be made by further analysis, because the advisors' frames are incommensurable. There is no common unit in which to weigh military advantage against economic cost against spiritual counsel against intelligence uncertainty. These are different value systems, and no meta-analysis resolves them without smuggling in a prior judgment about which value system matters most.
That prior judgment is the king's. And it is not analytical. It is intuitive: a felt sense of what matters most right now, in this situation, for this kingdom, at this moment in its story.
The king doesn't calculate the decision. He recognizes it.
And the recognition draws on everything. A thousand compressed experiences from decisions past, where specific details have been discarded but structural patterns are preserved as felt knowing. If you're a hominid on the savanna and something about the pattern of grass movements doesn't look right, you don't need to articulate "the perturbation pattern is inconsistent with wind-driven movement." You need to feel wrong and move. The articulation would kill you. Over evolutionary time, over a career's worth of accumulated decisions, the intermediate steps get burned away. What remains is the conviction without the defense.
The recognition draws on the body. The king doesn't just hear what his advisors say. He reads the room, literally. He sees the hesitation in the general's eyes. He notices which advisor is sweating. He senses the mood of the court. His gut, its 500 million neurons that never report to linguistic consciousness, is processing information that no briefing document captures. Not information compressed away from language. Information that was always in a different medium entirely.
And perhaps the recognition draws on something deeper still. Experienced sailors sensing weather changes before any instrument confirms them. Animals detecting earthquakes before seismographs register anything. These aren't supernatural. They're evidence of perceptual channels that exist but haven't been formalized. If intuition involves the detection of signals that haven't been articulated, and may be inherently inarticulable, then by definition an LLM, which operates exclusively in the domain of the articulated, cannot access them.
This is what the person on the throne contributes: the integration of incommensurables. And that integration is intuition's highest expression.
No template resolves it. Every product strategy I have seen that was created by filling out a template was uncompelling. Not some. Every one. Templates give you the format of a decision without the faculty of decision. They substitute structure for thinking. The king's contribution is precisely the part that cannot be templatized.
Two things make the king's position structurally different from any advisor, including an AI one.
Non-ergodic risk. The king doesn't get to average across many decisions from a comfortable distance. He lives through each one sequentially, and a single catastrophic decision ends the sequence entirely. An LLM will give you advice with exactly the same confidence whether the decision affects the price of a stamp or the future of a nation. It has no ruin to avoid.
The competitive value of inarticulation. If you can articulate the full reasoning chain behind a decision, so can your competitors. So can an LLM. So can anyone with a spreadsheet and a Tuesday afternoon. The fact that intuition resists articulation isn't just an inconvenience. It's precisely what makes it competitively valuable. You won't find innovation in conventional logic, because if there were a logical path to the answer, someone would already have found it.
Today, the "king" is any human decision-maker sitting above a panel of AI experts. LLMs have given you the advisors. They are brilliant, tireless, and encyclopedic. They can offer a synthesis ("weighing all perspectives, I recommend X") but that synthesis is itself just another advisory opinion, generated by the same analytical-combinatorial process. It is not integration. It is a simulation of integration.
The actual integration requires a being who will live with the consequences, who carries the inarticulate weight of accumulated experience, who feels the situation in their body, and who must decide not just what to do but who to be in light of the decision.
---
The Paradox of Writing About This
Everything you just read is the Press Office. I've taken the inarticulate thing and given it articulable addresses. I've built careful arguments about why it resists argumentation. This essay, this analysis of what lives outside analysis, is doing exactly the thing it describes as impossible.
That paradox is worth sitting with rather than resolving.
Because the question "is intuition reducible to information processing?" is the wrong room. It's an analytical question about whether analysis can capture non-analysis. And the answer doesn't change anything about how you should live or make decisions. Whether intuition is compressed experience on a biological substrate or direct contact with something that transcends explanation, the practice is the same.
You still have to show up. You still have to come in without preconceptions. You still have to trust what you feel.
The king doesn't sit there wondering what kind of knowing he's using. He just decides. And the quality of the decision comes from the quality of his attention to everything in the room, including what can't be named.
---
Clearing the Channel
If intuition is the human faculty that matters most in an age of AI advisors, the natural question is: how do you develop it?
The word "develop" might be misleading. You don't build intuition the way you build an analytical skill. It's more like clearing a channel. The signal is always there. The question is how much noise you've put between yourself and it.
A newborn knows when something feels wrong. They haven't learned anything yet, but they know. Then we spend decades accumulating beliefs about how things are supposed to work, what's realistic, what other people think. All of that is noise on the antenna. The process isn't building something new. It's returning to something that was always there.
Five practices. All simple. The hard part is doing them consistently, because everything in modern life pushes against them.
1. Notice your body before your opinion.
Your body is always talking to you. When something is right: a surge of energy, leaning forward, excitement. When something is wrong: tightening, pulling back, heaviness. Most people have trained themselves to override these signals because they don't always agree with what makes logical sense.
Before you form an opinion about anything, notice what your body already knows. Someone presents a product strategy. Before you think about whether the metrics are right, what did your body do? Did you lean forward or pull back? That's the data. Everything after that is the Press Office.
2. Come in blank.
Most of the time when we encounter something, we arrive already full. Full of expectations, references, comparisons. All of that is interference. Like trying to hear a whisper while playing music at full volume.
The practice is to approach things as a receiver, not a judge. Suspend your beliefs about what's possible, what's good, what's appropriate. Just notice what's there.
3. Spend time in unmediated reality.
Nature is unfiltered signal. The way a tree grows, the way water moves, the way light changes: none of that has been curated, summarized, or optimized for engagement. When you spend time paying attention to it, your nervous system recalibrates. The noise quiets. You start to perceive things you couldn't perceive when you were staring at a screen.
This isn't metaphorical. Go outside. Pay attention to how a bird moves. Don't think about it. Just be in it. When you tune into natural balance often enough, imbalance becomes obvious: in a product, a decision, a room.
4. Protect silence.
The enemy of intuition is noise. Not just sound. Information, opinions, other people's thoughts, social media, the constant stream of input that modern life treats as normal. All of it fills the vessel. When the vessel is full of other people's noise, there's no room for the signal.
Create spaces where there is no input. No phone. No conversation. Just you and whatever arises. Because when you stop feeding the mind new input, the mind eventually quiets, and underneath it is something that has been trying to get your attention the whole time.
5. Trust what you receive.
This is where most people break down. They get the signal. The body says something. The intuition points somewhere. And then the mind comes in: that doesn't make sense, that's not logical, where's the evidence?
And they override it.
Every time you override an intuition because the analysis says otherwise, you train yourself to distrust the signal. And over time, the signal gets quieter. Not because it stopped, but because you stopped listening.
The practice is acting on intuition in low-stakes situations to build the trust that lets you act on it in high-stakes ones.
Underneath all five practices is a single principle: the art changes the artist. Every time you practice noticing the body's signal, you become a person with a stronger signal. Every time you override it in favor of the spreadsheet, you become a person who needs the spreadsheet. The practice is not separate from the capacity. They are the same thing.
---
What This Means
The intuition gap is not an implementation problem. It is a medium problem. Some AI limitations are implementation gaps (context windows, memory, tool access) and will be solved. Scaling doesn't solve this one the way scaling solves context.
If you are the "king," the human decision-maker sitting above your AI advisory panel, your intuition is your primary contribution. Not your analysis (the advisors analyze better than you). Not your creativity (they generate more options faster). Your ability to sit with all of it and feel the shape of the right answer.
This is what I've come to call product sense, and it is the variable that separates successful products from unsuccessful ones. Not process. Everyone does MVPs, talks to customers, iterates fast, uses modern tools. The variable is skill. And the core of that skill is precisely this faculty of recognition: the ability to identify quality, sometimes in non-consensus ways, before the results are in. Think of Jensen Huang betting Nvidia's future on CUDA in 2012. The people who could see that decision was right, before the consensus arrived, had taste. The people who needed the results to confirm it had analysis.
And highly successful people cannot accurately explain their own success, because their innate gifts are invisible to them. Ask a great PM how they make product decisions and they'll tell you about frameworks and customer conversations and data analysis. They'll describe the turpentine and the canvas. What they can't describe is the inarticulate faculty that tells them this and not that, before the reasoning arrives. Their explanation of their own success is, itself, the Press Office.
Protect your intuition. Here is what that means concretely:
Keep reading raw data. Not AI summaries of data. When you let AI summarize 160 customer interviews into a tidy set of themes, you are training yourself on Wikipedia articles when your job requires training on source data. The patterns your body catches while reading the twentieth transcript are not patterns the summary will contain. They are felt, not articulable. They are yours.
Keep making decisions under genuine uncertainty. The reflex to run an experiment before committing is often not rigor. It is avoidance. Some decisions require judgment, not data. The more you practice making those calls, the more you develop the faculty that makes them well.
Keep putting yourself in rooms where your body has to process information your mind can't articulate. The boardroom. The customer visit. The heated cross-functional debate. The signals your body picks up in those rooms, who hesitated, what shifted, where the energy dropped, are an information channel that no briefing document and no AI can replicate.
And take the experience of "I know this but I can't explain it" seriously. The AI discourse has trained us to treat inarticulate knowing as a failure, a sign that you haven't done the analysis yet. Invert this. The inarticulate knowing might be the most valuable thing you have. Not because it's always right. Intuition is fallible. But because it operates on information that your analytical mind, and any AI system, literally cannot access.
If you could fully explain it, it would already be automated.
The advisors are better than they've ever been. The throne still needs a king. And the king's most important work isn't done in the room where the advisors present. It's done in the silence afterward, when the analysis is over, the options are laid out, and there is nothing left to do but feel the shape of what's right.