This is the companion to "You're Not Leveraging AI. You're Leveraging a Fast Typewriter," which explores how PMs use AI at different levels of leverage. That piece is about how you use the tool. This one is about what the tool changes about you.

---

For years, we've used the PM Skills Map: four senses that define what it means to be capable at this job.

Product Sense: empathy, creativity, strategic thinking, second-order effects. Execution Sense: shipping, scoping, unblocking, contextual expertise. Analytical Sense: data interpretation, metric selection, pattern recognition. Communication Sense: influence, stakeholder management, narrative.

It's a good map. The question is whether AI has redrawn the territory underneath it.

The short answer: the map holds, but the emphasis within each sense shifts dramatically. Because within every sense, there's a mechanical layer and a judgment layer. AI is getting very good at the mechanical layer. Which means the judgment layer is getting very exposed.

Think of it like this: when the tide goes out, you see what was always underwater. AI is the tide. Judgment is what's been underwater. And some PMs are about to discover they've been standing on mechanical skills this whole time.

---

Product Sense: From Generation to Discrimination

What gets less important: First drafts. Option enumeration. Format translation. The blank canvas problem, that moment of staring at an empty doc wondering where to start. AI eliminates it. Good.

What gets more important: Non-consensus creative insight. Framing problems in ways nobody else has. And above all, taste, the ability to identify quality before consensus confirms it.

Here's the new product sense challenge, and it's subtle: detecting simulated product sense.

AI produces strategies and product visions that look like they came from great product thinking. They're confident. Well-structured. Plausible. They use the right vocabulary. They reference the right frameworks. They have that unmistakable sheen of competence.

The skill is telling "this looks good" from "this is good."

Here's what that looks like in practice. A PM and AI are looking at the same user research for a workflow tool. AI suggests three feature directions. All three are reasonable. All three are well-argued. And all three assume users want to complete the task faster.

But the PM who spent two days watching session recordings noticed something the research summary didn't capture: users don't want to complete the task faster. They want to eliminate the task from their workflow entirely. The problem isn't speed. The problem is that the task exists at all. That reframe changes the entire product direction, from optimization features to automation features, from "do this better" to "stop doing this."

AI simulated product sense by generating plausible feature directions within the stated frame. The PM with actual product sense saw the frame itself was wrong. That's the gap. And it's a gap that widens as AI gets better at producing confident, well-structured output within whatever frame it's given.

---

Execution Sense: From Coordination to Velocity

What gets less important: Coordination overhead. Dependency tracking, status updates, progress management across large teams. Much of this gets offloaded to AI or eliminated entirely as teams shrink. The PM whose primary value was keeping thirty people synchronized is losing their moat.

What gets more important: Personal velocity and contextual expertise.

Contextual expertise deserves its own definition, because it's the execution skill AI is furthest from replicating. It's knowing how things really work versus how they're documented. The VP who checks out after 3pm on Fridays. The design team that won't engage until they see a prototype, not a spec. The fact that this year's reorg means your executive sponsor has quietly shifted priorities and nobody's updated the strategy doc to reflect it.

This is organizational dark matter, the invisible stuff that determines how things actually move. It's learned through raw exposure to the politics, personalities, and informal power structures of a specific company at a specific moment in time. No model has access to it. No prompt can surface it.

The shift: old execution sense was coordination-heavy. New execution sense is velocity-heavy. Less managing dependencies across humans. More: how fast can you build, ship, and iterate when the coordination layer thins out?

The PM who was exceptional at orchestrating thirty people but can't personally produce at speed is now in a difficult position. The game changed underneath them.

---

Analytical Sense: From Mechanics to Judgment

What gets less important: Running queries. Building dashboards. Pattern detection in large datasets. AI handles the mechanical half of analytical work and is getting better fast. The PM who was valued primarily for being "the one who knows SQL" is watching that advantage evaporate.

What gets more important: Choosing what to measure. Knowing when data is misleading you. Sensing when a metric is a proxy for the thing you care about rather than the thing itself.

This distinction sounds abstract until you see it in practice. A PM looks at declining activation rates and concludes the onboarding flow needs redesign. The data supports this. The dashboard is clear. But a PM with strong analytical judgment asks a different question: are activation rates declining because the flow is worse, or because a recent marketing campaign changed the composition of who's signing up? Same data. Opposite conclusions. The mechanical skill, pulling the numbers, is identical. The judgment skill, interrogating what the numbers mean, is what separates them.
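The activation-rate question above is a mix-shift problem, and it can be made concrete with arithmetic. Here's a minimal sketch with hypothetical numbers: two signup segments whose per-segment activation rates never change, yet the aggregate rate drops simply because the marketing campaign changed who signs up.

```python
# Hypothetical numbers: two signup segments with stable per-segment
# activation rates; only the mix of who signs up changes.
def overall_activation(segments):
    """Weighted activation rate across (signups, rate) segments."""
    total = sum(n for n, _ in segments)
    return sum(n * r for n, r in segments) / total

# Before the campaign: mostly high-intent signups.
before = [(800, 0.60), (200, 0.20)]  # (signups, activation rate)
# After the campaign: same per-segment rates, far more low-intent signups.
after = [(800, 0.60), (800, 0.20)]

print(round(overall_activation(before), 2))  # 0.52
print(round(overall_activation(after), 2))   # 0.40
```

The dashboard shows 52% falling to 40%, and nothing about the onboarding flow changed. Segmenting the rate before redesigning the flow is the judgment layer; pulling the aggregate number is the mechanical one.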

There's a hidden risk here that connects directly to how PMs develop this judgment. The PM who never works with raw data develops a different relationship to data than the one who does. A PM who only looks at dashboards develops analytical sense the way a chef who only reads recipes develops taste, technically informed but experientially hollow. The chef who burns a hundred meals learns something the recipe-reader never will. The PM who has manually scraped through event logs and built janky spreadsheets to answer a question they couldn't quite articulate has built pattern-matching that no dashboard summary produces.

Don't let the mechanical skills atrophy faster than the judgment skills develop. The mechanical work wasn't just producing outputs. It was training your internal model.

---

Communication Sense: From Construction to Influence

What gets less important: Message construction. Drafting emails. Refining arguments. Prepping stakeholder decks. All the mechanical work of turning thoughts into polished words.

What gets more important: Influence. Reading rooms. Modeling how a specific person thinks given their history, incentives, and current political position. Timing. Courage. Earning trust through consistency, not through a single well-crafted presentation.

AI can help you find the right words. It cannot tell you who needs to hear them, or when.

At senior levels, influence is the job. Not a nice-to-have. Not a soft skill you develop on the side. It is fundamentally what you are paid for. AI makes this more obvious, not less.

When message construction takes five minutes instead of fifty, the bottleneck is exposed: it was never the writing. It was the judgment about what to say to whom and when. The PM who spent four hours crafting the perfect executive summary was partially doing communication work and partially avoiding the harder question of whether this was the right moment to present it at all.

---

The Two Meta-Skills AI Exposes

The four senses are the instruments. But AI also exposes two meta-skills that sit above and below them, not new senses, but the operating system they run on.

Above: Orchestration

The governing layer. When to deploy which skill. When to push and when to hold. When to escalate and when to absorb. When to write the strategy doc and when to wait three weeks because the organization isn't ready to receive it.

This is the conductor role, not the player.

AI makes orchestration more critical because of velocity. When you can produce artifacts in minutes, the bottleneck shifts from creation to timing. A PM generates a thorough market analysis in thirty minutes on Sunday evening and drops it in Slack before the Monday planning meeting. The analysis is sound. But the head of engineering just spent the weekend debugging a production outage and hasn't slept. The analysis lands as tone-deaf prioritization pressure at the worst possible moment. The same artifact, presented Wednesday after the dust settles, gets enthusiastic engagement. The content was identical. The orchestration was the difference between a wasted effort and a strategic win.

Before AI, the slowness of artifact creation provided natural pacing. You couldn't present the strategy too early because it took two weeks to write. That constraint is gone. Which means the judgment about when has to be conscious rather than accidental.

Below: Calibration

The learning layer. Tracking predictions against outcomes. Updating mental models. Confronting where your judgment was wrong, not just what went wrong, but why your model of the situation was wrong.

This is the mechanism by which all four senses improve over time. Without it, you can work for fifteen years and have one year of experience fifteen times.

AI makes calibration more urgent for a specific reason: the struggle of doing the work used to force learning. When you wrote a strategy from scratch and it failed, you felt exactly where your thinking broke down. The failure was visceral. You could trace the logic chain back to the faulty assumption because you built the chain yourself.

When AI drafts the strategy and you edit it, the feedback signal gets muted. You're evaluating output, not generating it. When it fails, you're not sure whether your editing was wrong, the original prompt was wrong, or the AI's reasoning was wrong. The learning loop weakens at precisely the moment you need it most.

You need to deliberately rebuild it. Track predictions. Compare to outcomes. Diagnose not just what went wrong, but where your model diverged from reality. This is uncomfortable, effortful, and produces no visible output. Which is exactly why most people won't do it, and exactly why the ones who do will pull ahead.
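One way to make "track predictions, compare to outcomes" concrete is a scoring rule. This is a sketch, not a prescribed method: it assumes you log each call with a stated confidence and later mark whether it happened, then score yourself with a Brier score (mean squared gap between confidence and outcome). The claims and numbers below are invented for illustration.

```python
# A minimal prediction log: record a confidence (0-1) for each call
# you make, then mark whether it actually came true.
predictions = [
    # (claim, stated confidence, actually happened?)
    ("Feature X lifts retention", 0.80, False),
    ("Enterprise deal closes in Q3", 0.60, True),
    ("Redesign reduces support tickets", 0.90, True),
    ("Competitor ships parity in 6 months", 0.30, False),
]

def brier_score(log):
    """Mean squared gap between confidence and outcome; lower is better."""
    return sum((conf - float(hit)) ** 2 for _, conf, hit in log) / len(log)

print(round(brier_score(predictions), 3))  # 0.225
```

The number itself matters less than the habit: reviewing the log forces the uncomfortable diagnosis of which confident calls failed, which is exactly the feedback signal AI-assisted work mutes.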

---

"Shouldn't AI Literacy Be on the Skills Map?"

No.

AI fluency is table stakes. Like clear writing or spreadsheet proficiency. You screen for it in hiring. You train for it in onboarding. It does not belong on a capabilities map.

And here's why the distinction matters: put "AI skills" on a PM capabilities map and watch what happens. PMs will optimize for demonstrating AI usage rather than developing the judgment that makes AI usage valuable. You'll get beautiful AI-generated PRDs from PMs who can't tell you what users actually need. The tool proficiency becomes a substitute for the judgment it was supposed to serve.

We've seen this before. When "data-driven" became a PM competency, we got a generation of PMs who could run any query but couldn't tell you whether the metric they were measuring mattered. One team I worked with spent an entire quarter optimizing for a "time to first action" metric that looked rigorous on every dashboard, while missing that their highest-value users were the ones who took the longest to act, because they were evaluating the product seriously before committing. The mechanical skill was flawless. The judgment about what the metric actually meant was absent. They optimized themselves into shrinking their best customer segment.

AI literacy as a listed competency would reproduce exactly the same failure mode, faster.

---

What Actually Shifts

Strip away the detail and the pattern is clean:

Less important across every sense: First drafts. Coordination overhead. Query mechanics. Message construction. Anything mechanical within the four senses. The floor rises. Baseline competence gets cheaper.

More important across every sense: Taste. Contextual expertise. Data judgment. Influence. Orchestration timing. Honest self-calibration. The judgment core. The ceiling matters more when the floor is free.

The PM Skills Map doesn't need new senses. It needs new emphasis within each one, a clear-eyed recognition that the mechanical layers that used to differentiate are becoming commodity, and the judgment layers that used to be optional are becoming the entire game.

The tide is going out. What are you standing on?