
Symbol vs. Substance in AI Adoption: Performing Transformation


Jared Clark

March 23, 2026


There is a pattern playing out across industries right now that deserves a name. Organizations announce AI initiatives with fanfare. Press releases go out. Leadership declares the company is "all in on AI." A task force is formed. A vendor is selected. A pilot is launched.

And then — often — very little actually changes.

The workflows remain the same. The decision-making structures stay intact. The incentives that shaped behavior before AI arrived continue to shape behavior after it. Employees learn to route around the new tools or perform compliance with them. The organization hasn't transformed. It has performed transformation.

I call this the Symbol-Substance Gap in AI adoption — the distance between what an organization signals about AI and what it actually does with it. This gap is, I'd argue, the defining organizational challenge of the current AI moment. And most institutions are losing the battle against it without even knowing the war is being fought.


Why Organizations Default to Symbolic AI Adoption

To understand the Symbol-Substance Gap, you have to understand something about how large organizations respond to external pressure. When a new technology emerges that carries social weight — and AI carries enormous social weight right now — organizations face a legitimacy problem before they face an operational one.

Stakeholders expect a response. Boards ask about AI strategy. Customers wonder if you're using it. Competitors announce pilots. Employees read breathless coverage. The pressure to appear to be adopting AI arrives months or years before the organizational capacity to do so meaningfully.

Sociologists call this institutional isomorphism — the tendency of organizations to adopt the forms and practices of their environment to appear legitimate, regardless of whether those forms improve performance. It was identified by DiMaggio and Powell in 1983 to describe how hospitals, universities, and corporations start to look alike not because they're all solving the same problems effectively, but because they're all signaling fitness to the same audiences.

AI adoption in 2024 and 2025 is a textbook case. According to McKinsey's 2024 State of AI report, 78% of organizations reported using AI in at least one business function — yet fewer than 30% reported that AI had materially changed how core business decisions are made. The adoption signal is nearly universal. The substance is not.

This is not hypocrisy, exactly. It is a rational organizational response to an irrational situation: the pressure to show transformation is immediate, while the capacity to achieve it is slow and expensive. Symbol is the path of least resistance.


The Anatomy of Symbolic AI Adoption

Symbolic AI adoption is not monolithic. It takes several distinct forms, and organizations often exhibit more than one at once.

1. The Pilot Perpetuation Pattern

The pilot is perhaps the most common vehicle for symbolic AI adoption. A pilot is structurally perfect for signaling without committing: it is bounded, reversible, deniable, and fundable from discretionary budgets. Organizations announce pilots with the language of transformation ("we're piloting a cutting-edge AI solution to reimagine our customer experience"). The pilot runs. Results are mixed or ambiguous. The pilot is quietly extended, then quietly shelved — or, more commonly, quietly maintained in perpetual pilot status, never scaled, never killed.

A 2024 Gartner survey found that approximately 49% of AI pilots never move to full production deployment. The pilot has become a form of organizational theater: it demonstrates seriousness without requiring commitment.

2. The Vocabulary Substitution

Some organizations adopt AI primarily as a linguistic event. Existing processes are redescribed in AI terms. The analytics team becomes the "AI & insights team." The CRM's recommendation engine, which has existed for years, is rebranded as "AI-powered personalization." The chatbot that routes support tickets is elevated into an "AI-driven customer intelligence platform."

None of this is necessarily dishonest. But it is symbolic in the technical sense: the signifier has changed while the signified has not. The organization has changed its language about what it does without changing what it does.

3. The Governance Theater

Some organizations respond to AI pressure primarily through governance structures. An AI ethics committee is formed. An AI policy document is drafted and published. Principles are articulated. Responsible AI frameworks are adopted.

These are not inherently empty gestures — governance matters enormously, and I've written elsewhere about why AI governance frameworks succeed or fail. But governance without operational integration is a symbol. A policy document that no one references when building a system, backed by no process for enforcing it, is a prop, not a guardrail. The organization has performed responsibility without practicing it.

4. The Centralized Showcase

A related pattern: organizations build one impressive, visible AI application — often in a high-visibility function like customer experience, executive dashboards, or marketing — while the actual operations of the business continue unchanged. The showcase application becomes the evidence of transformation that is presented to boards, investors, and the press. Meanwhile, core processes in finance, supply chain, HR, and product development run exactly as they did before.

The showcase is real. The transformation it implies is not.


What Substantive AI Adoption Actually Looks Like

The contrast with symbolic adoption is instructive. Substantive AI adoption shares several features that symbolic adoption typically lacks.

It Changes Who Decides What

The most reliable marker of substantive AI adoption is that it reorganizes authority. When AI is integrated into decision-making in a way that actually matters, it necessarily shifts who has input, who has final say, and how accountability is assigned. Loan underwriting that genuinely incorporates AI doesn't just add a score to a process — it changes what underwriters do, what they're accountable for, and how their judgment is weighted against algorithmic recommendation. Real transformation is always, at some level, a political event inside an organization.

Symbolic adoption leaves authority structures intact. Substantive adoption disturbs them — and the organization has to manage that disturbance deliberately.

It Requires New Skill Distribution, Not Just New Tools

Substantive AI adoption creates a measurable demand for new capabilities distributed through the workforce — not concentrated in a specialized AI team that everyone else treats as a black box. According to the World Economic Forum's Future of Jobs Report 2025, 39% of workers' core skills are expected to change by 2030, with AI literacy among the fastest-growing skills across nearly every sector. Organizations that are substantively adopting AI show this in their hiring patterns, their training investments, and, most tellingly, in how non-technical roles have evolved.

If the only people in your organization who understand AI are on the AI team, you haven't adopted AI — you've outsourced AI understanding to a specialized function while the rest of the organization remains unchanged.

It Produces Measurable Operational Change

Substantive adoption shows up in operational metrics that matter to the core business. Not "we deployed X models" or "Y% of employees have completed AI training" — but changes in cycle times, decision quality, error rates, cost structures, or customer outcomes. The metrics of symbolic adoption are adoption metrics. The metrics of substantive adoption are business metrics.
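
To make the distinction concrete, here is a minimal sketch with entirely illustrative numbers: an adoption metric that is fully compatible with symbolic adoption, next to a business metric that is not.

```python
import statistics

# Hypothetical cycle times (hours per underwriting decision) before and
# after an AI-assisted rollout. All figures are illustrative.
before = [52, 47, 61, 55, 49, 58, 50, 63, 45, 57]
after = [31, 29, 40, 33, 35, 28, 38, 30, 36, 34]

# An adoption metric: what share of underwriters use the tool.
# It says nothing about whether the business changed.
adoption_rate = 0.87

# A business metric: did the core process actually get faster?
delta = statistics.median(before) - statistics.median(after)
print(f"Adoption rate: {adoption_rate:.0%} (compatible with symbol)")
print(f"Median cycle time reduced by {delta:.0f} hours (evidence of substance)")
```

A 20-hour reduction in a core process is the kind of number symbolic adoption cannot produce; an 87% adoption rate is the kind of number it produces easily.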

It Creates New Failure Modes

This one is counterintuitive: organizations that have truly integrated AI start to fail in new ways. A lending institution that has genuinely built AI into underwriting will eventually have to grapple with model drift, distributional shift, or fairness violations at scale — problems that wouldn't exist if AI were merely decorative. Substantive adoption produces substantive problems. If an organization's AI is causing no new operational headaches, that is itself a signal that it may not be doing much.
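
As a hedged sketch of what "actively managed" can mean here, assuming a lending context: risk teams commonly track input drift with the population stability index (PSI). The function below is a standard PSI calculation; the data, feature, and alert threshold are illustrative assumptions.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a feature's launch-time and current distributions.

    Rules of thumb (not hard standards): below ~0.1 is stable,
    0.1-0.25 is moderate drift, above 0.25 warrants investigation.
    """
    # Bin edges come from the baseline so both samples share a scale;
    # the outer edges are widened to catch out-of-range values.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)

    # Guard against log(0) in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical credit scores: distribution at model launch vs. today.
rng = np.random.default_rng(0)
launch_scores = rng.normal(650, 50, 10_000)
current_scores = rng.normal(630, 60, 10_000)

psi = population_stability_index(launch_scores, current_scores)
if psi > 0.25:
    print(f"PSI={psi:.3f}: investigate drift before the next release")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```

An organization running checks like this on a schedule, with someone accountable for the alert, is managing a failure mode that symbolic adoption never has to confront.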


A Framework for Diagnosing the Gap

How do you assess where your organization falls on the Symbol-Substance spectrum? I find it useful to ask a set of diagnostic questions across six dimensions:

Dimension       | Symbolic Indicators               | Substantive Indicators
Decision-Making | AI informs reports no one reads   | AI changes who approves what
Workforce       | AI team isolated from operations  | AI literacy distributed across roles
Accountability  | AI ethics policy published        | Policy enforcement visible in process
Measurement     | Adoption metrics tracked          | Business outcome metrics shift
Failure         | No new AI-specific problems       | New failure modes actively managed
Investment      | One-time tool purchase            | Ongoing investment in data, people, process
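
For teams that want to make this audit mechanical, here is a minimal sketch that turns the table into a self-assessment script. The six dimensions come straight from the table; the 0-4 scale and the example scores are my own illustrative assumptions.

```python
from dataclasses import dataclass

# The six dimensions from the table above. Each is scored 0 (purely
# symbolic) to 4 (substantive); the scale is an illustrative convention.
DIMENSIONS = [
    "decision_making",  # does AI change who approves what?
    "workforce",        # is AI literacy distributed across roles?
    "accountability",   # is policy enforcement visible in process?
    "measurement",      # do business outcome metrics shift?
    "failure",          # are new failure modes actively managed?
    "investment",       # is investment in data, people, process ongoing?
]

@dataclass
class GapAssessment:
    scores: dict  # dimension name -> 0..4

    def substance_index(self) -> float:
        """Average score across dimensions, normalized to 0-1."""
        return sum(self.scores[d] for d in DIMENSIONS) / (4 * len(DIMENSIONS))

    def weakest(self) -> str:
        """The dimension most in need of attention."""
        return min(DIMENSIONS, key=lambda d: self.scores[d])

# Hypothetical organization: strong on governance paperwork, weak on
# decision-making, measurement, and failure management.
org = GapAssessment(scores={
    "decision_making": 1, "workforce": 2, "accountability": 3,
    "measurement": 1, "failure": 0, "investment": 2,
})
print(f"Substance index: {org.substance_index():.2f}")  # 0.38
print(f"Start with: {org.weakest()}")                   # failure
```

The arithmetic is trivial; the value is the forcing function. Scoring each dimension separately makes it harder to let one showcase success stand in for the whole.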

The honest version of this exercise is uncomfortable for most organizations. It reveals that significant portions of what they call AI adoption are, in DiMaggio and Powell's sense, legitimacy performances rather than operational transformations.

That discomfort is the point. You cannot close a gap you haven't acknowledged.


Why the Gap Is Getting Wider, Not Narrower

One might expect that as AI technology matures, the Symbol-Substance Gap would naturally close — that organizations would eventually be forced by competitive pressure into genuine integration. I'm not convinced this is happening at the scale or speed the optimists predict.

Several forces are actively widening the gap:

Vendor incentives favor symbol. AI vendors are rewarded for deployment metrics, not transformation outcomes. Their commercial success depends on making adoption appear easy and fast. Easy and fast AI adoption is, almost by definition, shallow AI adoption.

Leadership tenure is too short for substantive change. The average tenure of a Chief Digital Officer or Chief AI Officer in a large organization is under three years. Substantive AI transformation takes longer than that. The incentive is to deploy something visible quickly and leave before the hard integration work comes due.

Measurement systems reward activity over outcome. Most organizations measure AI by input metrics — models deployed, employees trained, pilots launched — because outcome metrics are harder to define and slower to materialize. Input metrics are perfectly compatible with symbolic adoption. They cannot distinguish it from the real thing.

According to a 2024 MIT Sloan Management Review study, only 18% of organizations reported having clear metrics for measuring AI's business impact — meaning the vast majority lack the measurement infrastructure to even detect whether their adoption is substantive.


The Organizational Cost of Performing Transformation

Symbolic AI adoption is not free. Its costs are just deferred and distributed in ways that make them hard to attribute.

The most direct cost is opportunity displacement. Every resource devoted to a symbolic AI initiative — the pilot that never scales, the governance theater, the showcase application — is a resource not devoted to the harder, slower work of genuine integration. Organizations that spend the current window of competitive AI development performing transformation are building technical debt, not technical capability.

The second cost is credibility erosion. Employees are not naive. They see through the vocabulary substitutions and the perpetual pilots. When leadership declares transformation and workers experience continuity, trust erodes. Future genuine change efforts face skepticism earned by previous symbolic ones. This is one reason the change management literature consistently identifies prior failed transformations as one of the strongest predictors of resistance to new change efforts.

The third cost is strategic misreading. Organizations that have convinced themselves, through their own symbolic adoption, that they are "doing AI" may systematically underestimate how far behind they actually are. The gap between perceived capability and actual capability is a strategic blind spot of the first order.


From Symbol to Substance: What the Transition Requires

Closing the Symbol-Substance Gap is not primarily a technical problem. The technology exists. The challenge is organizational.

It requires leadership that distinguishes between signaling AI capability and building it. These are different activities with different timelines, different cost structures, and different organizational demands. Leaders who conflate them will optimize for signal and wonder why substance hasn't followed.

It requires integrating AI into the processes where it is most disruptive, not most comfortable. The impulse in symbolic adoption is to apply AI where it is safe and peripheral — to tasks that are already working, workflows that are already efficient, functions where failure is low-stakes. Substantive adoption means deploying AI where it changes something important, which means accepting the risk that comes with importance.

It requires sustained investment in the unglamorous infrastructure. Data quality, data governance, model monitoring, workforce reskilling, process redesign — none of these are exciting announcements. None of them generate press releases. All of them are prerequisites for substantive AI adoption. Organizations that skip them are building on sand.

It requires honest measurement. You need to be able to say, with specificity: what decisions are being made differently because of AI, what outcomes have changed as a result, and what problems have emerged that we are managing. If you cannot answer these questions, you are not yet measuring substance.

I've explored related questions about how organizations build durable AI capability versus temporary AI momentum in other pieces on this site. The thread that connects them: transformation is a verb, not a noun. It is something you do continuously, not something you announce.


The Harder Truth

The Symbol-Substance Gap in AI adoption reflects something deeper than organizational dysfunction. It reflects the genuine difficulty of institutional change at scale, combined with the specific pressures of a technological moment in which the social stakes of appearing to adopt AI are higher than the immediate operational stakes of actually doing so.

Most organizations are not being cynical. They are being human. They are responding to real pressures with the tools available to them — and the tools for appearing to transform are always more available, faster to deploy, and less costly in the short run than the tools for actually transforming.

But the reckoning comes. It always does. The organizations that used the current window to build genuine capability will be structurally advantaged over those that used it to build a convincing portfolio of symbols. The gap between those two groups is growing, quietly, right now.

The question is not whether your organization has an AI strategy. The question is whether your AI strategy is changing anything that matters.




Jared Clark

Founder, Prepare for AI

Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.