The Problem Nobody's Naming
I'm increasingly worried about new PMs entering the field in the AI era.
Not because AI will replace them. Because AI might prevent them from ever building the judgment needed to do the job well.
Product sense (empathy, creativity, strategic thinking, the ability to see second-order effects before they arrive) is learnable but not teachable. There is no 5-step framework. No certification. No course that reliably produces it. It develops through sustained exposure to messy, uncomfortable reality. Reading raw customer feedback until patterns emerge that no one briefed you on. Watching your "brilliant" feature fail and not being able to explain why for months. Sitting with ambiguous, contradictory data and resisting the urge to let someone else tell you what it means.
That discomfort isn't a cost of building judgment. It is building judgment.
And AI makes all of it optional.
Why Would You Struggle When You Don't Have To?
Why read 160 verbatim customer comments when AI can summarize them in 8 seconds?
Why sit with ambiguous data when AI can generate a confident, well-structured recommendation?
Why struggle through a vague problem space when AI can produce a crisp PRD in 30 seconds, complete with user stories, acceptance criteria, and a prioritization framework?
Every one of these shortcuts is individually rational. In the moment, the AI-assisted path is faster, cleaner, and produces output that looks more polished than what most PMs could produce on their own.
That's the trap.
The Struggle Is the Product
When I used to read raw customer feedback (all of it, including the repetitive parts, the incoherent complaints, the off-topic tangents) my brain was doing something I didn't appreciate at the time. It was adjusting weights. Building intuition. Forming an internal model of how customers think, what they actually care about beneath what they say they care about, and which patterns signal real problems versus noise.
This is, not coincidentally, exactly what an LLM does during training. It processes enormous volumes of raw, messy, unstructured data and builds a model that can later produce original outputs.
If you trained ChatGPT on summaries of Wikipedia instead of Wikipedia itself, it would be a measurably worse model. The summaries strip out the texture, contradiction, and pattern density that training requires. The redundancy isn't waste. It's the signal.
Your brain works the same way. AI summaries skip the training step. You get the conclusion without the understanding. The output without the model.
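The "summaries strip pattern density" point can be made concrete with a toy measurement: count the distinct adjacent word pairs (bigrams) in raw feedback versus a summary of it. This is purely an illustrative sketch with invented text, not a claim about how any real model is trained, but it shows the asymmetry: the summary preserves the conclusion while discarding most of the patterns a learner could have extracted.

```python
def distinct_bigrams(text):
    """Count distinct adjacent word pairs -- a crude proxy for pattern density."""
    words = text.lower().split()
    return len(set(zip(words, words[1:])))

# Invented example: three verbatim customer comments vs. a one-line summary.
raw = (
    "the export button is hidden i could not find export anywhere "
    "export worked but the file was empty and i had to retry twice "
    "i expected export to email me the file not download it"
)
summary = "several users had trouble with the export feature"

print("raw feedback:", distinct_bigrams(raw), "distinct patterns")
print("summary:     ", distinct_bigrams(summary), "distinct patterns")
```

The summary is accurate, and it still loses most of the material. That lost material (where the button was, what failure looked like, what users expected instead) is exactly what your internal model would have been built from.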
The Override Problem
This creates a specific, dangerous gap: new PMs who grow up on AI assistance will have difficulty overriding AI when it's wrong.
And AI is wrong often. It's wrong in ways that sound brilliant.
AI gives you the consensus answer: articulate, well-structured, defensible. But product breakthroughs rarely come from the consensus answer. They come from someone who has built enough independent judgment to look at a confident, well-reasoned recommendation and say: No. I've seen this pattern before. The data says X, but the actual problem is Y.
If you've never independently formed a product opinion, watched it fail, revised your mental model, and formed a better opinion through that cycle of error and correction, you have no basis for that override. You have no internal model to compare against the AI's output. You just have the AI's model.
The Fast-Iteration Counterargument
The strongest objection to this thesis is worth naming directly: what if AI-assisted PMs simply iterate faster, and the sheer volume of iterations compensates for shallow judgment on each one? If a new PM can ship 10 versions in the time it took an experienced PM to ship 2, maybe judgment develops through rapid repetition rather than deep reflection.
I don't think this wins. Iteration without an internal model is a random walk. You ship, you see the result, but you have no framework for interpreting why it succeeded or failed. So you ship again, with a slightly different guess. Each cycle teaches you less than it should because you're not accumulating understanding, you're accumulating attempts. The PM who ships twice but deeply understands both outcomes builds more judgment than the PM who ships ten times and lets AI explain each result.
Volume is not velocity when you're not learning from each rep.
An Honest Caveat
I don't want to pretend this is unique to the AI era.
Before AI, most PMs avoided the uncomfortable work of building judgment too. They copied competitors instead of developing original product instincts. They followed the HiPPO (Highest Paid Person's Opinion) instead of defending their own analysis. They filled out PRD templates as if completing the template was the same as doing the thinking. They ran A/B tests not to learn, but to avoid making an actual decision, outsourcing their judgment to statistical significance.
AI didn't create the avoidance problem. It just removed the last remaining friction. The PM who would have deferred to their VP now defers to Claude instead. The mechanism changed. The failure mode is the same.
What New PMs Have That We Didn't
It would be wrong to frame this as purely disadvantageous. New PMs have real structural advantages.
Their AI fluency is genuine, not cosmetic. When I use AI, I'm translating my existing workflows into a new medium. When they use AI, they're thinking natively in a medium that fits how they already work. That's the difference between someone who learned a second language at 30 and someone who grew up bilingual.
And they won't waste years on overhead that AI handles trivially: status updates, ticket grooming, meeting summaries, stakeholder decks. The tasks that consumed 40-60% of a PM's time and built zero judgment. New PMs can skip straight to the work that matters. That's a structural advantage my generation never had.
How to Build Judgment When AI Makes It Optional
So how should a new PM deliberately build the judgment that their environment no longer forces them to develop?
1. Know Which Tasks to Protect From AI
Not all PM work is equal. Use AI aggressively for overhead: status updates, meeting notes, first-draft documentation, formatting. These tasks need to get done but build no judgment.
But on leverage tasks (deep customer understanding, strategic positioning, product definition, deciding what not to build) do the work yourself. Deliberately. Even when AI could do it faster. Especially when AI could do it faster.
2. Read the Raw Data. Every Time.
AI-summarized customer feedback builds no intuition. It's like reading a book summary and claiming you understood the author's argument. You got the plot points. You missed the point.
Read the actual transcripts. The verbatim comments. The contradictory feature requests that can't all be right. Let the patterns form in your brain, not the AI's. The PM who reads the summary knows what customers said. The PM who reads the transcripts knows what customers meant.
3. Form Your Opinion Before Consulting AI
Before you ask AI for a recommendation, write down what you think the answer is. What would you build? What would you prioritize? What would you kill?
Then ask AI. Compare. When you disagree, investigate why. Is your instinct wrong? Is the AI missing context? Is it giving you the consensus answer when the situation calls for a contrarian one?
The gap between your answer and AI's answer is where your judgment develops. If you never form your own answer first, there is no gap. There is no development. There's just adoption.
4. Own Your Experiments
Run at least some experiments where you own the logic end to end. Design the hypothesis yourself. Choose the metrics yourself. Predict the outcome (write it down) before you see the results. Sit with it when your prediction was wrong.
If you only ever run AI-suggested experiments measured by AI-selected metrics and interpreted through AI-generated analysis, you're training the AI's feedback loop. Not yours.
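One lightweight way to keep yourself honest about "predict the outcome before you see the results" is a personal prediction log that scores you after the fact. The sketch below is a hypothetical illustration (the experiment names and probabilities are invented, and the scoring rule is my choice, not anything prescribed above); it uses the Brier score, where lower means better-calibrated predictions.

```python
def brier(prediction, outcome):
    """Brier score for one prediction: (p - outcome)^2. Lower is better."""
    return (prediction - outcome) ** 2

# Invented log: (experiment, predicted probability of success, actual outcome 0/1).
log = [
    ("onboarding checklist v2", 0.8, 1),
    ("pricing page redesign",   0.7, 0),
    ("export-to-email",         0.3, 0),
]

for name, p, outcome in log:
    print(f"{name}: predicted {p:.0%}, got {outcome}, score {brier(p, outcome):.2f}")

average = sum(brier(p, o) for _, p, o in log) / len(log)
print(f"average Brier score: {average:.2f}")
```

The point isn't the arithmetic. It's that a written, scored prediction makes your misses undeniable, which is the raw material of the error-and-correction cycle described above.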
5. Seek the Discomfort Deliberately
Sit in a user research session where the user visibly struggles with your product and you can't intervene. Read a support ticket thread that makes you cringe. Ship something small, something with your name on it, and watch it succeed or fail without letting AI generate an explanation for why.
The discomfort is the signal that learning is happening. If everything feels smooth and efficient and painless, you're probably building output, not judgment.
A Note for PM Leaders
If you manage new PMs, this is your problem too. Ask yourself honestly: are you evaluating your team in a way that rewards the PM who reads AI summaries and produces polished artifacts quickly? Or the PM who spent three hours reading raw transcripts and came back with an ugly, hand-drawn insight that contradicts the dashboard?
The incentives you set determine which type of PM your team produces. If you reward speed and polish, you will get PMs who optimize for speed and polish. And in three years you will wonder why none of them can override a confident AI recommendation when the situation demands it.
You are either creating the conditions for judgment to develop, or you are selecting against it. There is no neutral position.
The Asymmetric Bet
Are the odds stacked against new PMs? In one sense, yes. The environment actively discourages the uncomfortable, slow, apparently inefficient work that builds judgment. Every tool, every workflow, every piece of career advice pushes toward speed, polish, and delegation.
But here's the inversion: the PMs who deliberately choose the hard path will have a bigger competitive advantage than any previous generation.
Because their competition will all be reading the same AI summaries. Generating the same AI-assisted strategies. Producing the same AI-polished PRDs. Converging on the same adequate, defensible, undifferentiated answers.
The PM who spent the "wasteful" hours reading raw transcripts, forming independent opinions, and building a mental model through the slow accumulation of direct experience will be the one in the room who sees what the AI missed. The one who can override a confident, articulate, well-structured recommendation and be right about it.
In a world where everyone has access to the same AI, that's the only edge.
The Question That Defines a Career
The question for new PMs isn't "how do I use AI well?"
Every PM will use AI well. It's table stakes. It's the equivalent of asking "how do I use email well?" in 2005.
The real question is: What will I insist on doing myself, even when AI offers to do it for me?
Your answer to that question determines whether you become a PM with judgment or a PM with a tool.