The conversation about AI and work has gotten stuck in a single frame: jobs disappearing. And yes, that's real — the McKinsey Global Institute estimated in 2023 that generative AI and related technologies could automate work activities that absorb 60 to 70 percent of employees' time today. That number deserves attention. But I think we're spending so much energy counting disappearing jobs that we're missing the more consequential thing happening underneath all of it: a structural redistribution of power between workers, employers, and machines.
Job loss is a symptom. The power shift is the disease. Or, more precisely: the job-loss conversation is about what workers lose; the power-shift conversation is about who gains, and what they gain with it.
Those are very different questions, and most of the discourse is stuck on the first one.
What Automation Has Always Done — And Where This Time Diverges
Every wave of automation in history displaced some category of human labor. The mechanical loom didn't just eliminate hand-weavers; it concentrated production capacity in the hands of factory owners who could afford the machines. The spreadsheet didn't just replace bookkeepers; it made a single CFO capable of managing financial modeling that once required a team, and the team never came back.
The pattern is old: technology reduces the labor required per unit of output, which means the people who own or control the technology capture more of the surplus from that output. Workers who held negotiating leverage because their skill was scarce find themselves bargaining from a weaker position once the machine can approximate what they do.
What's different this time is the range of skills being approximated, and the speed at which that range is expanding. Previous automation waves targeted physical labor, then routine cognitive labor — filing, calculating, processing. Generative AI is targeting what we might loosely call judgment-adjacent work: drafting, analyzing, synthesizing, advising, even certain kinds of creative production. These were the professional-class skills that supposedly insulated white-collar workers from the automation that hollowed out manufacturing communities. That insulation is thinning fast.
According to a 2024 report from the International Monetary Fund, approximately 40% of global employment is exposed to AI — and in advanced economies, that number climbs to 60%. More importantly, the IMF noted that unlike prior automation waves, AI's exposure is highest among high-wage, high-skill jobs. The gradient is inverted. The people who thought they were safe are now on the front lines.
The Three Levers of Power This Is Actually Moving
When I think about what's really shifting, I keep coming back to three specific levers that get almost no airtime in the mainstream conversation about AI and employment.
Who Controls the Interpretation of Work
For most of the knowledge economy era, the professional worker held a particular form of leverage: they were the primary interpreter of what a problem meant and how to approach it. A lawyer didn't just draft documents — they translated a client's situation into legal strategy. A financial analyst didn't just run numbers — they synthesized information into a recommendation. A consultant didn't just research — they framed the question and defined what "solved" would look like.
That interpretive role was hard to outsource and hard to replicate. It was where the worker's judgment was irreplaceable, and it was the source of professional autonomy.
AI doesn't just assist with the execution of that work. It increasingly offers a competing interpretation, and in many organizational contexts, managers are learning to prefer the AI's interpretation — it's faster, it's documented, it's auditable, it doesn't push back. I've watched this pattern emerge in knowledge-work firms: the senior professional who used to define the problem now finds themselves validating or refining an AI-generated framing. The interpretive authority migrated to the system, and what remains for the human is review.
Review is not the same as interpretation. The power embedded in that distinction is enormous.
Who Absorbs the Risk of Being Wrong
One of the underappreciated dynamics in employer-worker relationships is who holds the risk when a judgment call turns out to be wrong. Historically, a lawyer who gave bad advice, a doctor who misdiagnosed, a financial advisor who made the wrong call — they carried professional liability, reputational risk, and personal accountability. That risk was part of why their judgment commanded a premium.
When AI intermediates the work, that risk structure gets murky in ways that generally favor employers and platform owners. If an AI-assisted diagnosis leads to a bad outcome, who is liable? The physician who reviewed the output? The hospital that deployed the tool? The company that built the model? The answers remain genuinely unsettled, but the early institutional responses are pointing in a consistent direction: the worker who "used the tool" carries the downstream professional risk, while the organization captures the efficiency gain. That's a particularly unfavorable trade for workers, and it's happening mostly without negotiation.
Who Can Be Replaced on Short Notice
Labor economists talk a lot about "replacement cost" — how expensive it is to replace a worker who leaves or needs to be let go. High replacement cost is one of the more durable sources of worker leverage. When your skills are specific, your institutional knowledge is deep, and retraining someone to fill your role takes twelve to eighteen months, you hold real bargaining power simply by being hard to replace.
AI is compressing replacement cost in ways that are hard to overstate. A 2024 Stanford HAI study found that AI assistance allowed newly hired workers to reach the productivity of experienced workers 40% faster in certain knowledge-work contexts. That's a direct attack on the leverage that experience and institutional knowledge provide. If an AI system can close the gap between a new hire and a ten-year veteran in a fraction of the time, the veteran's negotiating position weakens — not because they aren't doing valuable work, but because the cost of doing without them just dropped.
What the Headlines Miss: The Asymmetry of Augmentation
The optimistic counter-narrative to job displacement is augmentation — the idea that AI makes workers more productive, expands what they can accomplish, and creates new value that ultimately supports new jobs. I think this is partially true. But augmentation is not a neutral process. Augmentation has a beneficiary structure, and it tends to mirror existing power distributions rather than flatten them.
Consider a comparison of how augmentation actually plays out across different types of workers:
| Worker Type | AI Augmentation Benefit | Who Captures the Surplus |
|---|---|---|
| Senior Executive | Faster information synthesis, broader span of control | Executive (higher output, same comp structure) |
| Knowledge Worker (mid-level) | Higher task throughput, reduced hours per deliverable | Employer (fewer headcount needed) |
| Creative Professional | Faster iteration, broader output volume | Platform / Employer (commoditization of output) |
| Gig Worker | More available work, faster matching | Platform (keeps rate low due to supply increase) |
| Entry-Level Worker | Faster onboarding, basic skill gap closed | Employer (lower value placed on experience) |
The pattern is fairly consistent: the efficiency gains from augmentation accumulate upward. Workers see increased output demands without proportional compensation increases, because the market rate for their work adjusts downward as the supply of AI-assisted output rises.
This isn't a conspiracy — it's a market dynamic. But it's one that workers, unions, and policymakers are only beginning to grapple with, and the framing of "AI helps workers do more" obscures the question of who benefits from the more.
The Concentration Problem
There's a related shift that operates at the macro level, and it may be the most significant of all. The infrastructure required to develop and deploy frontier AI — compute, data, engineering talent, capital — is concentrated in a very small number of institutions. In 2024, five companies accounted for the vast majority of frontier AI model development. That concentration means the tools that are reshaping labor markets are owned by entities with specific interests in how those labor markets evolve.
This is not a new dynamic in the history of capitalism, but the scale and speed are unusual. When the power loom was invented, loom ownership was physically distributed across hundreds, and eventually thousands, of mills. The equivalent of the loom today — a large language model — exists in a handful of data centers, operated by companies whose business models are, in many cases, directly predicated on reducing the cost of human labor.
I'm not suggesting that's nefarious. But I think it's worth being honest about the structural incentive at work: the entities with the most power to shape how AI augments or replaces labor are the entities that benefit most directly from labor replacement. That's a conflict of interest that should be part of the conversation, and mostly it isn't.
What Happens to the Middle
One of the more underappreciated economic risks in the AI transition is what happens to middle-skill, middle-wage professional work. Historical automation tended to produce labor market "polarization" — it hurt middle-skill routine work while leaving high-skill cognitive work and low-skill physical work relatively untouched. This produced a hollowing-out of the middle.
Generative AI breaks that pattern. A 2023 MIT study, also circulated as a National Bureau of Economic Research working paper, found that AI writing tools compressed the performance gap between strong and weak writers, eroding the premium attached to high-skill writing. In other words, the tools pulled the middle closer to the top — which sounds good for the middle, until you realize it simultaneously lowers the ceiling for what the top can command.
When AI narrows skill gaps across an entire occupational category, the whole wage floor tends to settle lower, not higher. The people who were commanding a premium for being genuinely excellent at something find that premium eroding because "good enough" just got a lot more accessible.
That's a compressing force on the professional middle class that doesn't show up in job-loss statistics, because the jobs may remain. The pay, the autonomy, and the leverage shift quietly, without anyone getting fired.
The Governance Gap
What makes all of this particularly difficult is that the institutional structures designed to protect workers were built for a different era. Collective bargaining, professional licensing, labor law, employment protections — all of these assume contexts where displacement happened gradually, where the skills that mattered were legible, and where the boundary between "tool" and "replacement" was clear.
None of those conditions hold anymore. The pace of AI capability development outstrips the pace of regulatory response by years. The skills that are being eroded are often tacit and hard to define legally. And whether an AI is a tool that assists a worker or a system that replaces the need for one is increasingly a matter of employer framing rather than objective analysis.
In the United States, labor law has no framework for an employer's use of AI to reclassify the nature of work, reduce the complexity premium attached to a role, or substitute AI output for human judgment in a way that depresses wages without eliminating positions. Most of the governance infrastructure simply doesn't reach these dynamics.
The European Union's AI Act, which entered into force in 2024, represents the most significant regulatory attempt to address AI's labor implications — but its primary focus is safety and rights protection, not labor market structure. The power redistribution questions I'm describing here aren't primarily safety questions. They're economic justice questions, and the regulatory frameworks for those are still mostly unbuilt.
What Would Actually Help
I want to be careful not to end up in the familiar essay-closer trap of delivering a tidy policy prescription after a complicated diagnosis. The honest answer is that the governance challenge here is genuinely hard, and I'm skeptical of anyone who has a clean three-step solution.
But some things seem important to at least name.
First, the unit of analysis for labor policy needs to shift from jobs to bargaining power. A worker who keeps a job but loses interpretive authority, risk insulation, and replacement leverage isn't protected by job counts. The job survived, but something important was still lost.
Second, the transparency of AI deployment in workplaces needs to improve significantly. Workers in most contexts have no meaningful visibility into how AI is being used to monitor, evaluate, restructure, or replace their work. That's an information asymmetry that makes collective response nearly impossible.
Third — and this is the one that feels most underexplored — the question of who owns the training data that makes these systems valuable deserves serious attention. Much of the value embedded in large language models comes from human-generated content, professional knowledge, and accumulated institutional expertise. The people and institutions who produced that knowledge received no equity stake in the systems it trained. That's a distributional choice, and it's one that could theoretically be made differently.
The Frame We Actually Need
Job loss is real, and it deserves attention. But the frame of "will AI take my job" is, in my view, almost perfectly designed to focus attention on the wrong question. It focuses on the employment binary — employed or not — when the more consequential shifts are happening along dimensions like autonomy, leverage, risk, compensation, and control.
The workers of the next decade who are most at risk aren't necessarily the ones whose jobs will disappear. They're the ones whose jobs will persist while their bargaining position quietly erodes, whose interpretive authority will migrate to systems owned by someone else, and whose accumulated expertise will be used to train the very tools that compress the premium they used to command for having it.
That's the power shift. It's quieter than job loss, harder to measure, and much harder to resist without a frame that can actually see it.
Last updated: 2026-04-29
Jared Clark
Founder, Prepare for AI
Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.