Companies are measuring AI adoption. They should be measuring capacity.
The wrong number is producing the wrong intervention.
A company I heard about recently ran a competition to drive AI adoption across its workforce. Document your most impactful use of AI tools. Share it with the team. Winner gets a gift card.
Participation was low. Adoption numbers barely moved.
Leadership was frustrated. They’d invested in the tools, communicated the vision, and even added an incentive. What else could they do?
The question nobody asked: how many of the people they were asking to participate were already working at 100% capacity before the competition launched?
Most enterprise AI rollouts are designed as addition problems.
Take the existing workload. Add a new tool. Add a learning curve. Add documentation requirements for the competition. Add the cognitive overhead of remembering to use the new tool in the first place. Then wait for productivity gains.
That math doesn’t work.
AI gives time back. But first it takes some. There’s a ramp before there’s relief, and that ramp costs exactly what overextended people don’t have: available hours, mental bandwidth, the space to try something, fail at it, and try again.
The employees most buried in reactive work are the ones who need the efficiency gains most. They’re also the ones least positioned to invest in getting there.
Measuring adoption doesn’t surface this problem. It hides it.
When companies track logins, completion rates on training modules, or competition entries, they're measuring activity. Compliance, really. Whether people showed up to the thing.
What they’re not measuring: whether the people who showed up had any capacity to actually change how they work. Whether the workflow even had room for something new. The intended outcome - actual time savings on real work - is a different question entirely, and most rollouts never ask it.
I’ve watched companies celebrate AI rollout numbers that were entirely disconnected from business outcomes. The adoption metric was intended to justify the contract, not to track change.
The gift card competition produced a handful of motivated early adopters who had the time and inclination to engage. Which is great. But it also confirmed for everyone else that AI was optional. Something extra you did if you had bandwidth. A nice-to-have.
That belief, once formed, is hard to undo.
Nobody’s asking the harder question.
If the real constraint is capacity, adding an AI tool to an already full plate doesn’t solve it. Neither does a competition. The companies seeing real efficiency gains from AI tend to do one thing differently: they start by removing something before they add anything.
They identify specific tasks that AI can own, pull those out of someone’s workload, and let the time savings show up before asking for the adoption behavior. The tool doesn’t ask for trust up front. It earns it by visibly changing what a week looks like.
That’s a harder conversation to have internally than announcing a rollout. It requires someone to make a call about what work stops, not just what tool gets added. Most AI adoption programs are funded by IT or operations. The authority to change workloads lies somewhere else entirely.
That’s probably the real problem.
Two things worth measuring instead of the adoption rate:
Time recovered on specific task categories. Pick three high-volume, low-judgment tasks. Measure hours spent on them before and 60 days after AI implementation (a rough sketch of that comparison follows below). If the number didn't move, either the tool isn't working or the workflow was never restructured to let it work.
Capacity headroom. Ask the people you deployed AI to whether they feel less overwhelmed three months in. Not whether they’re using the tool. Whether anything changed about the amount of runway they have each week. That’s the outcome the business actually wants.
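To make the first metric concrete, here's a minimal sketch of the before/after comparison. Everything in it is hypothetical: the task categories, the hours, and the assumption that you have clean weekly numbers at all. In practice the baseline would come from time tracking, ticket data, or honest self-reports.

```python
# A rough sketch of the time-recovered calculation, not a real tracking system.
# All task names and hours below are made up for illustration.

# Weekly hours per task category, measured before the rollout
baseline_hours = {
    "status reports": 4.0,
    "meeting summaries": 3.0,
    "first-draft emails": 2.5,
}

# The same categories, measured 60 days after AI implementation
current_hours = {
    "status reports": 2.0,
    "meeting summaries": 1.0,
    "first-draft emails": 2.5,
}

for task, before in baseline_hours.items():
    after = current_hours[task]
    print(f"{task}: {before:.1f}h -> {after:.1f}h "
          f"({before - after:+.1f}h/week recovered)")

total = sum(baseline_hours.values()) - sum(current_hours.values())
print(f"total capacity recovered: {total:.1f} hours per person per week")
```

If that total barely moves, that's the signal, regardless of what the login dashboard says.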
Neither of these is easy to track. That’s probably why the adoption rate won. It’s a simple number with a clear benchmark, and it feels like progress.
Simple numbers are fine until they’re measuring the wrong thing.
The company with the gift card competition had something real underneath it. There were genuine efficiency opportunities. The margin in their business meant even small gains in output per person could matter significantly.
But the path they took assumed the blocker was awareness or motivation. Get people to try it. Show them it works. Let enthusiasm spread.
The actual blocker was time. And you can’t gamify your way past that.
Final Thoughts
A more useful starting point:
Before deploying any AI tool, ask which task it replaces, not which task it augments. Addition doesn’t free up capacity. Substitution does.
Pilot with your least overloaded people first. They’ll actually use it. That generates the real proof of concept, not a competition leaderboard.
Measure outcomes at 60 days, not adoption at 30. One number tells you if people showed up. The other tells you if anything changed.
That company with the gift card competition wasn’t wrong to care about AI adoption. The efficiency opportunity was real. The margin pressure was real. They just picked a metric that told them what they wanted to hear instead of what was actually happening.
You can’t incentivize your way to capacity. You have to create it first.
Mike Watson @ Product Party
P.S. Want to connect? Send me a message on LinkedIn, Bluesky, Threads, or Instagram.

