Society & Governance

Consensus Manufacturing at Machine Scale: When Agreement Becomes Computable


Jared Clark

March 25, 2026


There is an old idea in political theory that power flows not just from the barrel of a gun, but from the management of what people believe is possible. Antonio Gramsci called it hegemony — the way dominant ideas naturalize themselves until alternatives become literally unthinkable. Walter Lippmann, more practically, wrote about the "manufacture of consent": the coordinated effort to shape public opinion not through coercion but through the careful curation of information, symbol, and narrative.

Both men were describing a fundamentally human, fundamentally slow process. Propaganda required editors, broadcasters, and publicists. Consensus required repetition across time. Agreement — real or manufactured — had to be assembled laboriously, argument by argument, broadcast by broadcast.

That constraint is gone. We are now living through the moment when consensus manufacturing becomes computable — when the production of agreement can be automated, personalized, and deployed at a scale that no human institution has ever achieved. Understanding what that means is, I think, one of the most urgent intellectual tasks of our time.


What "Manufactured Consensus" Actually Means in 2025

The phrase "manufacturing consent" has been in circulation since Lippmann's 1922 Public Opinion and was later popularized by Edward Herman and Noam Chomsky. But the contemporary version of this problem is structurally different from what either thinker described. It is not primarily about media gatekeepers filtering what gets reported. It is about computational systems generating the appearance of widespread, organic agreement — at machine speed, at machine scale, with machine precision.

To understand the shift, consider three layers of what AI now makes possible:

1. Content generation at ambient scale. Large language models can produce thousands of unique, coherent, contextually appropriate pieces of persuasive text per second. A single API call can generate op-eds, comment threads, social media posts, policy briefs, and academic-style summaries — all arguing the same position, all sounding different. A 2024 study by researchers at the University of Zurich found AI-generated arguments to be more persuasive than human-written ones, particularly when the AI was given personal information about its target audience. The persuasion gap is not marginal — it is measurable and significant.

2. Persona simulation at population scale. The bottleneck in astroturfing has historically been the human cost of maintaining fake personas. AI eliminates that bottleneck. A 2023 study by Stanford Internet Observatory researchers documented coordinated networks using generative AI to create synthetic social identities complete with posting histories, profile photos, and ideologically consistent viewpoints. The marginal cost of adding a million synthetic voices to a conversation is, practically speaking, zero.

3. Targeting and personalization at individual scale. Perhaps most consequentially, modern AI systems can tailor persuasive content to individual psychological profiles. Research from the Cambridge Analytica era showed that psychographic targeting made political messages measurably more resonant with their recipients. Post-LLM, the capacity to personalize persuasive content dynamically — adjusting framing, tone, analogies, and emotional register in real time for each individual reader — represents a qualitative leap in the architecture of influence.

These three capacities, combined, produce something genuinely new: a system capable of making any position appear to have broad, diverse, organic support — and of making that appearance deeply convincing to each individual it reaches.


The Mechanics of Computable Consensus

To think clearly about this problem, it helps to be precise about what "consensus" actually does in human epistemology and politics.

Consensus functions as a social proof heuristic. When we see that many people — especially diverse people, people like us, people we respect — hold a particular view, we update toward that view. This is not irrationality; it is an efficient strategy for navigating a complex information environment with limited cognitive resources. We cannot independently verify most of what we believe. We rely on the apparent beliefs of others as evidence.

This heuristic is the precise target of machine-scale consensus manufacturing. The goal is not to produce a logically airtight argument. It is to produce the signal that triggers social proof: apparent breadth of agreement, apparent diversity of agreement, apparent authenticity of agreement.

What makes current AI systems so potent for this purpose is their ability to simulate each of those attributes simultaneously:

  • Apparent breadth: Thousands of posts, comments, and articles expressing the same view across multiple platforms.
  • Apparent diversity: Different voices, demographics, writing styles, and rhetorical framings — all generated on demand.
  • Apparent authenticity: Content indistinguishable in texture and specificity from genuine human expression.

The result is what I would call a synthetic epistemic environment — an information landscape in which the signals humans normally use to assess what is true and broadly believed have been systematically decoupled from their underlying reality.
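To see how little synthetic volume this requires, consider a stylized model (my construction, offered as illustration, not a result from the literature): a reader who treats each apparent endorsement as an independent piece of evidence updates multiplicatively, so even a modest evidential weight per voice lets a few dozen cheap synthetic endorsements overwhelm a skeptical prior.

```python
# A stylized sketch of social proof as Bayesian updating. The likelihood
# ratio per endorsement (1.2) is an illustrative assumption, not an
# empirical estimate.
def posterior(prior: float, endorsements: int, likelihood_ratio: float = 1.2) -> float:
    """Belief after treating n apparent endorsements as independent evidence."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** endorsements
    return odds / (1 + odds)

print(round(posterior(0.10, 0), 2))   # 0.1  -- the reader starts skeptical
print(round(posterior(0.10, 20), 2))  # 0.81 -- twenty "voices" nearly flip the belief
```

The point of the toy model is the exponent: the attacker's cost grows linearly in the number of voices, while the target's belief shifts multiplicatively. That asymmetry is why apparent breadth is the cheapest and most effective signal to forge.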


A Comparison: Traditional Propaganda vs. AI-Enabled Consensus Manufacturing

The differences between 20th-century propaganda and AI-enabled consensus manufacturing are not merely quantitative. They are architectural.

| Dimension | Traditional Propaganda | AI-Enabled Consensus Manufacturing |
| --- | --- | --- |
| Scale | Broadcast to mass audiences | Personalized to individuals at mass scale |
| Speed | Days to weeks (production cycles) | Seconds to minutes |
| Cost | High (infrastructure, labor, distribution) | Near-zero marginal cost per unit |
| Authenticity | Identifiable as institutional messaging | Indistinguishable from organic expression |
| Diversity of voice | Limited (central messaging discipline) | Infinite apparent diversity |
| Detection | Relatively traceable (source attribution) | Difficult to attribute, easy to deny |
| Personalization | Segmented by demographic | Tailored to individual psychology |
| Deniability | Low (requires organizational infrastructure) | High (decentralized generation, no paper trail) |
| Feedback loops | Slow (polling, focus groups) | Real-time (engagement metrics, A/B testing) |

The table above captures something important: the problem is not that AI makes propaganda cheaper. It is that AI changes the nature of the thing being deployed. Traditional propaganda was legible as propaganda — it had a sender, a medium, a message. AI-enabled consensus manufacturing is designed to have none of those visible properties. It presents itself as the spontaneous, distributed agreement of real people.


The Democratic Epistemology Problem

Democratic theory has always rested on an assumption that is easy to overlook: that citizens, in aggregate, can form genuine preferences through genuine deliberation. The philosophical tradition from John Stuart Mill to Jürgen Habermas imagines democracy as a process of authentic public reasoning — imperfect, messy, susceptible to manipulation at the margins, but fundamentally grounded in real human minds exchanging real arguments.

Computable consensus manufacturing is a challenge not just to democratic outcomes but to democratic epistemology — to the conditions under which genuine democratic reasoning is even possible.

Consider what happens when:

  • Every online forum contains an unknown proportion of synthetic participants whose views are computationally optimized to shift the Overton window.
  • Every comment section includes AI-generated responses that appear to represent authentic public reaction.
  • Every policy debate is preceded by a synthetic public opinion landscape designed to make one set of conclusions seem inevitable.

The problem is not that people will believe false things — though that is serious. The deeper problem is that people will lose the capacity to distinguish between genuine social consensus and manufactured social consensus. And once that capacity is eroded, the social proof heuristic — which is cognitively indispensable — becomes a vector for unlimited manipulation.

A 2023 report from the Alan Turing Institute estimated that generative AI tools could plausibly be used to manufacture the appearance of majority public opinion on any given issue within weeks, at a cost accessible to mid-sized political organizations or well-resourced private interests. The implications for referendum politics, regulatory comment periods, and legislative deliberation are profound.


Where This Is Already Happening

This is not speculative. There are documented cases across multiple domains where machine-scale opinion manufacturing has already been deployed.

Regulatory comment fraud. The U.S. Federal Communications Commission's 2017 net neutrality comment period received an estimated 22 million comments — a significant proportion of which were later found to be fake, many generated algorithmically using real Americans' stolen identities. This was pre-LLM. The scale and sophistication of what is now possible vastly exceeds what was deployed then.

Political astroturfing. Researchers at Oxford Internet Institute's Computational Propaganda Project have documented state-sponsored computational propaganda campaigns in over 70 countries. These campaigns increasingly use generative AI to move from simple bot networks to sophisticated synthetic persona networks that are dramatically harder to detect.

Corporate reputation and policy influence. Internal documents from several major technology and pharmaceutical companies, surfaced through investigative journalism, have described the use of automated content generation to flood public discourse with favorable commentary during regulatory review periods — creating the impression of grassroots support for positions that primarily serve institutional interests.

Academic and scientific discourse. There is growing concern in the scientific community about AI-generated peer review manipulation, where synthetic expert commentary is used to create the appearance of scholarly consensus on contested questions. A 2024 study in Nature Human Behaviour found that AI-generated scientific summaries systematically overstated the strength of consensus in contested research areas.


The Detection Problem Is Harder Than It Looks

The intuitive response to synthetic consensus is detection: build better tools to identify AI-generated content, watermark outputs, require disclosure. These are reasonable interventions, and I support them. But the detection problem is structurally asymmetric in ways that matter.

Detection tools operate on statistical regularities — patterns in syntax, style, and structure that distinguish AI-generated text from human text. But those patterns are a moving target. Each generation of language models produces text that is harder to distinguish from human writing. The current generation is already beyond reliable automated detection at the level of individual documents.
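To make the "statistical regularities" point concrete, here is a minimal sketch of one common detector design: a perplexity threshold under a reference language model. The threshold value and the choice of GPT-2 as the reference model are illustrative assumptions; this is the kind of heuristic that works against yesterday's generators and fails against tomorrow's.

```python
# A minimal perplexity-based detection sketch (assumes the `torch` and
# `transformers` packages; GPT-2 is an illustrative reference model).
# Text the reference model finds unsurprising scores low perplexity,
# which detectors read as weak evidence of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference model (lower = more regular)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return torch.exp(loss).item()

def looks_synthetic(text: str, threshold: float = 40.0) -> bool:
    # The threshold is a hypothetical calibration point; each new model
    # generation shifts the distribution this rule depends on.
    return perplexity(text) < threshold
```

The fragility is visible in the code itself: the rule is a fixed threshold over a moving distribution, and a generator trained or prompted to produce less regular text simply slips past it.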

More fundamentally, even if perfect detection were technically possible, it faces a deployment problem: the platforms, organizations, and individuals who benefit from synthetic consensus have strong incentives not to implement it, while the victims of synthetic consensus — citizens, democratic institutions, epistemically honest public discourse — have no enforcement mechanism.

This asymmetry is not unusual in the history of information warfare. But it does mean that technical countermeasures alone are insufficient. The response to computable consensus manufacturing has to be partly technical, partly institutional, and partly epistemic — a reform of how we understand the nature of agreement itself.


What a Robust Response Looks Like

I want to be careful here not to slide into the genre of essays that identify a large problem and then gesture vaguely at "solutions." The problem of computable consensus is genuinely hard, and I do not think there are clean answers. But I do think there are directions worth taking seriously.

Provenance infrastructure. The most promising technical direction is not detection but provenance — building systems that make it possible to verify the origin of content rather than trying to classify it after the fact. The Coalition for Content Provenance and Authenticity (C2PA), supported by Adobe, Microsoft, and others, is developing open standards for content credentials. This is valuable, though adoption remains limited and enforcement is nonexistent.
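The core idea fits in a few lines. What follows is a toy illustration of signed provenance, not the C2PA specification itself (which additionally covers manifests, certificate chains, and edit histories); it assumes the Python `cryptography` package and a hypothetical publisher key.

```python
# Toy content-credential sketch (not the C2PA wire format): the publisher
# signs a digest of the content at creation time; anyone holding the public
# key can later verify the bytes are unchanged and came from that publisher.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held privately by the publisher
verify_key = signing_key.public_key()        # distributed openly

content = b"Op-ed text as published..."
credential = signing_key.sign(hashlib.sha256(content).digest())  # shipped as metadata

def is_authentic(content: bytes, credential: bytes) -> bool:
    """Check that the credential binds these exact bytes to the publisher's key."""
    try:
        verify_key.verify(credential, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

assert is_authentic(content, credential)
assert not is_authentic(content + b" (edited)", credential)
```

Note what this architecture does and does not do: it proves origin and integrity, not truth, and it only helps if audiences learn to discount content that carries no credential at all.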

Deliberative redesign. Democratic institutions can be restructured to be less vulnerable to synthetic consensus. Citizen assemblies, which select participants by lottery rather than through self-selection or algorithmic amplification, are structurally resistant to astroturfing — you cannot flood a randomly selected deliberative body with synthetic participants. Several European countries have already convened citizens' assemblies for contested policy questions, partly for this reason.
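The structural resistance is easy to see in miniature. Here is a minimal sortition sketch, assuming a verified enrollment registry (the registry fields are hypothetical): selection draws only from enrolled, verified citizens, so no volume of generated content can place a synthetic persona in the sample.

```python
# A minimal sortition sketch: uniform random selection from a verified
# registry. Synthetic personas have no registry entry, so they cannot
# appear in the assembly no matter how loudly they post.
import random

def select_assembly(registry: list[dict], size: int, seed: int) -> list[dict]:
    """Draw `size` members uniformly at random from the verified registry."""
    rng = random.Random(seed)  # a fixed, published seed keeps the draw auditable
    return rng.sample(registry, size)

# Hypothetical registry of verified citizens (fields are illustrative).
registry = [{"citizen_id": i, "region": f"R{i % 5}"} for i in range(10_000)]
assembly = select_assembly(registry, size=100, seed=2026)
```

Real assemblies typically add demographic stratification so the sample mirrors the population, but the anti-astroturfing property comes from the verified registry, not from the sampling scheme.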

Epistemic literacy at population scale. The evidence that media literacy interventions can partially inoculate people against manipulation is real, if modest. A 2022 meta-analysis in Psychological Science found that "prebunking" — explaining how manipulation techniques work before people encounter them — reduced susceptibility to manipulated content by an average of 21%. That is not a complete solution, but it is not nothing.

Platform accountability and transparency. Regulatory frameworks that require platforms to disclose the proportion of AI-generated content in public discourse, and to maintain auditable records of synthetic account networks, would create accountability structures that currently do not exist. The EU's Digital Services Act represents a partial move in this direction, though its provisions on synthetic content are still underdeveloped.

Treating epistemic integrity as infrastructure. Perhaps the most important conceptual shift is recognizing that the conditions for genuine public reasoning — authentic expression, traceable attribution, the absence of synthetic manipulation — are a form of civic infrastructure, not unlike roads or power grids. Infrastructure can be degraded by private actors for private gain. Infrastructure degradation is a legitimate subject of public policy and regulation.


The Deeper Question: What Is Consensus For?

Behind all of this is a philosophical question that the technical and policy debates tend to skip past: what do we actually want consensus to do?

If consensus is merely a decision procedure — a way of aggregating preferences to produce outcomes — then synthetic consensus that happens to align with what people would have preferred is perhaps not catastrophically different from real consensus. This is the position some AI optimists implicitly hold: if the AI figures out the right answer and manufactures agreement for it, maybe that is fine.

I find this argument unpersuasive, and not just because it requires enormous trust in whoever controls the AI. The deeper problem is that consensus in a democratic society is not only a decision procedure. It is an epistemic process — the mechanism through which a community of self-governing people comes to understand its own values, interests, and priorities. The process matters, not just the output.

When agreement is computable — when it can be manufactured, optimized, and deployed without the participation of the minds it purports to represent — something essential to democratic self-governance is lost. Not just the right answer. The capacity for self-knowledge.

That is what is at stake. Not primarily the manipulation of any particular election or policy outcome, though that is serious. What is at stake is the social epistemology that makes genuine self-governance possible at all — the shared ability of citizens to know what they actually think, to form that thinking in genuine dialogue with real others, and to trust that the apparent agreement around them bears some relationship to genuine human minds.


Conclusion: Agreement in the Age of Artificial Minds

The 20th century's great struggles over media, propaganda, and public opinion were ultimately struggles over who controlled the means of producing consensus. Those struggles were hard. But they were fought on terrain where the fundamental nature of consensus — that it was, however manipulated, composed of real human beliefs — was not in question.

We are entering terrain where that assumption cannot be made. The means of producing the appearance of consensus have been separated, potentially completely, from the underlying reality of human agreement. AI systems can now generate not just persuasive arguments but entire epistemic environments — simulated landscapes of apparent public opinion, constructed to make any conclusion feel inevitable.

This is not a reason for despair. It is a reason for clarity — about what is happening, why it matters, and what genuine responses would require. The first step in that clarity is naming the thing precisely: not "disinformation," not "AI bias," not "deepfakes" — but the manufacture of consensus at machine scale. The computational production of agreement.

Understanding that framing is where the serious work begins.


Explore related thinking on this topic at prepareforai.org.



Jared Clark

Founder, Prepare for AI

Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.