Prompting isn’t the skill gap. Judgment is.
While everyone’s learning to prompt better, the market is moving past prompting entirely.
I’ve been watching the PM job market for months. The pattern is hard to miss.
McKinsey’s latest research shows demand for AI fluency has grown sevenfold in two years. It’s now the fastest-growing skill in US job postings. Seven million workers are in occupations where AI skills are explicitly required, up from one million in 2023.
So everyone’s scrambling to learn AI tools. Get good at prompting. Master Claude and ChatGPT. My LinkedIn feed is full of frameworks for writing better queries.
Makes sense, right?
Then I looked at what companies actually ask in PM interviews.
The Toronto Product Management Association partnered with recruiters to survey what hiring leaders at companies like GrubHub, Amazon, Facebook, and Axon are asking candidates right now. Eleven questions from CPOs and VPs of Product.
Not one question about AI tools. Not one about prompting.
Instead: “Tell me about a time you were wrong.” “What’s your biggest screw-up and what did you learn?” “Tell me about a time you had to say no to someone more senior than you.”
The job postings say AI fluency. The interviews test judgment.
I think I know what’s happening.
Prompting is already commoditizing. Cursor autocompletes your prompts. Claude Projects remember context, so you don’t have to re-explain. Agent systems are starting to make decisions without you in the loop.
The same McKinsey report that shows AI fluency is exploding also says: “More than 70 percent of skills sought by employers are used in both automatable and non-automatable work.” The skills that endure are “judgment, relationship-building, critical thinking, and empathy.”
In other words: AI fluency gets you in the door. Judgment is what they’re actually hiring for.
I’ve started thinking about this differently. If AI can generate options instantly, what can’t it do?
It can’t identify the edge case that breaks everything. AI gives you the happy path. The weird user who’s going to do something nobody anticipated? That requires human paranoia built from watching things fail.
It can’t sequence strategically. AI can give you a roadmap. It can’t tell you which feature unlocks the next three versus which one is a dead end. That’s pattern recognition from shipping things and seeing what happens.
It can’t synthesize across contexts. AI works in the context you give it. Connecting the support ticket, the sales call, and the analytics spike into one coherent problem? You have to build that map yourself.
It can’t read the room. When the VP says this, the customer says that, and the data says something else, AI can summarize the disagreement. It can’t tell you who to believe.
The uncomfortable part: none of these are skills you learn from a course. They’re built from reps. From making decisions under uncertainty, being wrong, and noticing why.
Prompting, you can learn in a weekend. Judgment takes years.
That’s exactly why it’s valuable.
I’m not anti-AI. I use these tools constantly. But I’ve stopped optimizing for “get better at AI” and started optimizing for “get better at the parts AI can’t do.”
When I think about what to emphasize in interviews, I don’t practice my prompting stories. I practice my judgment stories. The messy decisions. The calls I made without enough information. The times I was wrong and what I learned.
That’s the bet I’m making. The skills that are hard to develop are the skills that will be hard to replace.
Everyone else can fight over who prompts better.
Until next week,
Mike Watson @ Product Party
P.S. Want to connect? Send me a message on LinkedIn, Bluesky, Threads, or Instagram.