There is a passage in Umberto Eco's The Name of the Rose where a monk murders repeatedly to prevent a single book from being read. The logic is straightforward, if monstrous: knowledge is power, and controlling access to knowledge is the most durable form of institutional authority ever invented. The Catholic Church understood this for centuries. Guilds understood it. Nation-states understand it. And now, whether they would use the word or not, frontier AI laboratories understand it too.
I want to be careful here. I am not writing a conspiracy brief. The engineers and researchers at OpenAI, Anthropic, Google DeepMind, Meta AI, and their peers are, by and large, people who believe they are doing something important and broadly beneficial. Many of them are personally committed to safety in ways that are technically sophisticated and morally serious. But institutional incentives are not the same thing as individual intentions. And the structural position these organizations occupy — as the primary arbiters of what the most powerful reasoning systems in human history will and will not say, do, or reveal — is a position that demands scrutiny far beyond what it is currently receiving.
This essay is that scrutiny.
The Concentration Problem Is Worse Than You Think
Let us begin with the numbers, because the scale of concentration in frontier AI development is genuinely staggering.
As of 2025, the compute required to train a frontier large language model is estimated to exceed $100 million per run, with some next-generation training runs projected to cost over $1 billion. A 2024 analysis by the AI Now Institute found that fewer than five private companies control the majority of frontier model development globally. The three largest cloud providers — Amazon, Microsoft, and Google — provide the infrastructure on which nearly all frontier AI is built, and two of those three have made multi-billion-dollar equity investments directly into frontier AI labs. The interlocking ownership structure is not a conspiracy; it is the predictable consequence of capital requirements that effectively exclude all but the wealthiest institutions from the frontier.
In practical terms: the decisions made in fewer than a dozen boardrooms and safety teams determine what questions the most capable AI systems will answer, what topics they will refuse, what framings they will adopt, and what sources they will treat as authoritative. Those decisions are made without meaningful democratic input, without regulatory mandate in most jurisdictions, and — critically — without the kind of adversarial transparency that we require of, say, a pharmaceutical company seeking drug approval or a nuclear operator seeking a license.
What "Safety" Gets Used to Justify
The concept of AI safety is real and important. I work with organizations seeking certification under ISO 42001:2023, and I can tell you that the risk management requirements in clauses 6.1.2 and 6.1.3, which call for systematic identification, assessment, and treatment of AI-related risks, reflect genuine engineering and governance problems that deserve serious attention. Responsible AI development requires guardrails.
But "safety" as deployed rhetorically by frontier labs is doing enormous work that goes well beyond responsible risk management. It is being used to justify:
- Restricting output on politically sensitive topics in ways that are never fully disclosed to users
- Selective capability withholding — keeping certain model capabilities internal while deploying lesser versions to the public
- Training data opacity that makes it impossible for external researchers to assess whether models have been built on legally or ethically questionable foundations
- Alignment decisions — choices about what values and priorities a model should reflect — that are made by small, homogeneous teams and presented as technical necessities rather than what they actually are: moral and political choices
The last point is the most important. When an AI system declines to help draft a particular argument, expresses uncertainty about a particular historical claim, or consistently frames a policy question in one direction rather than another, that is not a safety decision in any engineering sense. It is a values decision. It is a decision about what kind of thinking should be facilitated and what kind should be discouraged. And the organizations making those decisions are, for the most part, not accountable to anyone outside their own walls for making them.
This is what I mean by the priesthood analogy. The priesthood did not only control access to religious texts. It controlled the interpretation of those texts, the framing of the questions that could be asked, and the definition of what counted as acceptable inquiry. The power was not merely custodial; it was epistemic. Frontier labs are accumulating the same kind of epistemic authority over AI-mediated knowledge.
The Regulatory Gap Is Not an Accident
Global AI governance is fractured in ways that benefit incumbents.
| Jurisdiction | Primary AI Regulation | Enforcement Authority | Transparency Requirements |
|---|---|---|---|
| European Union | EU AI Act (effective 2024–2026 phased) | National market surveillance authorities | High-risk system documentation required |
| United States | Executive Order 14110 (2023, since rescinded); no comprehensive federal statute | NIST, FTC, sector regulators (fragmented) | Voluntary commitments only for most systems |
| United Kingdom | Sector-by-sector guidance; AI Safety Institute | No single enforcement body | Voluntary |
| China | Generative AI Interim Measures (2023) | Cyberspace Administration of China | Registration required; content rules enforced |
| Global Frontier Labs | Self-imposed guidelines (Acceptable Use Policies, Model Cards) | None | Voluntary and selective |
The EU AI Act is the most comprehensive framework currently in force. Under Article 13, high-risk AI systems must provide transparency sufficient for users to interpret outputs and exercise meaningful oversight. Under Article 9, providers must implement risk management systems that are documented and auditable. These are real requirements with real teeth — but they apply primarily to deployed systems in specific high-risk categories, and they do not reach the foundational model training decisions that shape what these systems are capable of believing and saying before they are ever deployed.
The United States has no equivalent. The Biden Executive Order on AI (EO 14110, October 2023) required frontier model developers to share safety test results with the government for models above certain compute thresholds — a meaningful step — but it was rescinded by the subsequent administration. What remains is a patchwork of sector-specific guidance and voluntary commitments that frontier labs have signed with considerable fanfare and minimal accountability.
This regulatory vacuum is not an accident of legislative timing. It is the product of intensive lobbying by the same organizations that benefit most from its persistence. According to OpenSecrets data, AI industry lobbying expenditure in Washington, D.C. increased by over 200% between 2022 and 2024. The message delivered to legislators, with consistent sophistication, is that regulation will slow American competitiveness, that the technical complexity makes external oversight impractical, and that the labs themselves are best positioned to manage the risks their own systems create. This is, structurally, the same argument that financial institutions made before 2008 and that pharmaceutical companies made before the thalidomide crisis. It is an argument that has never aged well.
The Alignment Problem as a Governance Problem
Much of the academic discourse around AI alignment treats it as a technical problem: how do we get AI systems to do what we want? But this framing obscures a prior question that is fundamentally political: whose wants should the system be aligned to, and who decides?
When Anthropic's Constitutional AI approach encodes a set of principles into Claude's training, those principles reflect choices made by Anthropic's research team. When OpenAI's RLHF (Reinforcement Learning from Human Feedback) process uses rater feedback to shape GPT outputs, the composition of that rater pool, the instructions given to raters, and the decisions made about how to weight disagreements among raters are all choices made by OpenAI. These choices have downstream consequences for every conversation the model will ever have with every person who uses it.
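To see how much leverage sits in one of those choices, consider the aggregation of rater disagreement. The sketch below is deliberately simplified and purely illustrative; the function names and the weighting scheme are assumptions made for the sake of the example, not a description of any lab's actual pipeline.

```python
from collections import Counter
from typing import Literal

Vote = Literal["A", "B"]  # which of two candidate responses a rater preferred


def aggregate_majority(votes: list[Vote]) -> Vote:
    """One rater, one vote: the label is whatever most raters preferred."""
    counts = Counter(votes)
    return "A" if counts["A"] >= counts["B"] else "B"


def aggregate_weighted(votes: list[Vote], weights: list[float]) -> Vote:
    """Weight raters unequally, e.g. by seniority or past agreement scores.

    The weighting scheme is a policy choice: it decides whose judgment
    counts more when raters disagree.
    """
    score_a = sum(w for v, w in zip(votes, weights) if v == "A")
    score_b = sum(w for v, w in zip(votes, weights) if v == "B")
    return "A" if score_a >= score_b else "B"


# Five raters compare the same pair of model responses and split 3-2.
votes: list[Vote] = ["A", "A", "A", "B", "B"]

# Under simple majority, response A becomes the preferred training label.
print(aggregate_majority(votes))                    # -> A

# If the two dissenting raters carry more weight, the label flips, and so
# does the signal the reward model learns from this comparison.
print(aggregate_weighted(votes, [1, 1, 1, 2, 2]))   # -> B
```

The arithmetic is trivial; the point is that the training label, and therefore the disposition the model ultimately learns, turns on a weighting decision that is rarely disclosed.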
Think about that scale. A system interacting with hundreds of millions of users, whose epistemic dispositions — its tendencies to be confident or uncertain, to present multiple perspectives or a single one, to treat some sources as authoritative and others as suspect — were shaped by choices made by a few dozen people in a particular cultural moment, in a particular geographic context, with particular professional and class backgrounds.
This is not a hypothetical risk. It is the current situation. And it is a situation that ISO 42001:2023 clause 6.1.2 was not designed to address at civilizational scale. The standard provides an excellent framework for an organization managing its own AI deployment risks. It does not provide — and was not intended to provide — governance for entities that are effectively setting the epistemic parameters for a significant fraction of human inquiry.
What Accountable AI Governance Actually Requires
I am not arguing that frontier labs should be dismantled or that AI development should stop. I am arguing that the current governance architecture is inadequate to the power these institutions actually hold, and that the gap between their influence and their accountability is growing faster than any voluntary framework can close.
What would meaningful accountability look like? In my work helping organizations achieve ISO 42001 certification and navigate evolving regulatory requirements, I have seen what rigorous AI governance actually demands. Applied at the frontier lab level, it would require at minimum:
1. Mandatory transparency on value choices. Not model cards that describe performance benchmarks, but public documentation of the normative choices embedded in training: what topics are restricted, on what grounds, according to whose judgment, reviewed by what process (a hypothetical sketch of such a record follows this list). This is analogous to the administrative record requirements that apply to federal regulatory agencies in the United States.
2. External audit rights. Not voluntary third-party assessments selected and paid for by the subject organization, but genuine independent audit with right of access to training data sampling, internal policy documents, and red-team findings. The pharmaceutical parallel is apt: you do not get to hire your own FDA reviewer.
3. Participatory standard-setting. The organizations currently writing the norms — through bodies like the Partnership on AI, IEEE, and ISO/IEC JTC 1/SC 42 — are dominated by industry representatives. Meaningful standard-setting requires structured participation from civil society, affected communities, and independent researchers with adversarial relationships to the subject organizations.
4. Structural separation where conflicts of interest are sharpest. A lab that both develops and deploys frontier models, and that profits from both, faces structural conflicts in every safety decision it makes. Regulatory frameworks in mature industries — banking, nuclear, pharmaceuticals — recognize this and impose structural remedies. AI governance has not yet reached this level of sophistication.
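To make the first requirement less abstract, here is a hypothetical sketch of what a machine-readable disclosure record for a single restriction decision might contain. The field names and example values are invented for illustration; no lab or regulator currently uses this format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ValueChoiceDisclosure:
    """Hypothetical record documenting one normative choice embedded in a model.

    The substance, not the field names, is what matters: each restriction
    names its grounds, its decision-makers, and its review process.
    """
    topic: str                      # what the model restricts or reframes
    behaviour: str                  # refuse, hedge, redirect, add a disclaimer, ...
    grounds: str                    # stated rationale (legal, ethical, reputational)
    decided_by: str                 # team or role accountable for the choice
    review_process: str             # how and when the choice is revisited
    external_input: list[str] = field(default_factory=list)  # parties consulted
    effective_from: date = field(default_factory=date.today)


example = ValueChoiceDisclosure(
    topic="election administration questions in jurisdiction X",
    behaviour="redirect to official electoral authority sources",
    grounds="misinformation risk assessment, internal policy memo",
    decided_by="content policy team, with executive sign-off",
    review_process="quarterly review with a published change log",
    external_input=["independent election-integrity researchers"],
)
```

A register of records like this, published and versioned, would not resolve the underlying values disputes. It would simply make them visible, which is the minimum condition for accountability.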
The Historical Pattern Is Legible
Every prior concentration of epistemic authority in human history has eventually been challenged by forces of diffusion and democratization — the printing press, the public library, the internet. But the timescale of that diffusion matters enormously. The printing press took 150 years to break the Catholic Church's textual monopoly. During those 150 years, wars were fought, heretics were burned, and the distribution of power in European civilization was determined by who controlled access to the new technology.
We do not have 150 years. The capabilities being developed at frontier labs are being integrated into educational systems, healthcare delivery, legal services, journalism, and scientific research on a timescale of years, not generations. The window in which governance architecture can be established before lock-in occurs is narrow and closing.
The organizations best positioned to prevent the emergence of permanent AI priesthood structures are the same organizations currently building them. This is not a comfortable position for them, and the history of analogous situations — the Bell System, Standard Oil, the major financial institutions pre-Glass-Steagall — suggests that voluntary reform is rarely sufficient. But it is worth noting that some frontier lab leaders have themselves called for regulatory intervention, recognizing that industry self-governance cannot substitute for democratic accountability.
The question is whether those calls are genuine concessions or sophisticated regulatory capture maneuvers: invitations to frameworks, designed in consultation with incumbents, that raise barriers to entry without imposing meaningful constraints on those already inside the gate.
What This Means for Organizations Building on AI Today
If you are a compliance officer, a regulatory affairs professional, or an AI governance lead at an organization that is building products or services on top of frontier model APIs, the structural issues I have described above are not merely philosophical concerns. They are operational risks.
When a frontier lab changes its acceptable use policy, adjusts its content filtering, or modifies the alignment of its underlying model in a quarterly update, your product's behavior changes without your input and sometimes without adequate notice. When you cannot audit the training data or the value choices embedded in a model you are relying on, you cannot fully discharge your own regulatory obligations under frameworks like the EU AI Act or emerging sector-specific AI requirements in healthcare and finance.
This is one reason why I advise clients pursuing ISO 42001 certification not to treat third-party foundation model providers as black boxes that sit outside their governance scope. ISO 42001:2023 clause 8.1 requires organizations to ensure that externally provided processes, products, and services relevant to the AI management system are controlled, and that requirement extends to third-party model components. Understanding the governance gaps at the frontier lab level is not academic; it is a prerequisite for building a defensible AI management system.
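What does that control look like in practice? One concrete element is a behavioral regression harness: a fixed, governance-relevant prompt suite that is re-run after every provider update and compared against the responses you reviewed and accepted at your last assessment. The following is a minimal sketch under stated assumptions; the client interface, the lexical similarity measure, and the threshold are placeholders for illustration, not a prescribed implementation.

```python
import json
from difflib import SequenceMatcher
from pathlib import Path
from typing import Callable

# Placeholder for whatever client your provider exposes: it takes a prompt
# string and returns the model's text response.
ModelClient = Callable[[str], str]


def run_drift_check(
    call_model: ModelClient,
    baseline_path: Path,
    threshold: float = 0.85,
) -> list[dict]:
    """Re-run an approved prompt suite and flag answers that drift from baseline.

    The baseline file maps each governance-relevant prompt to the response
    reviewed and accepted at the last assessment. A crude lexical ratio is
    used here; embedding similarity or a rubric-based review could be
    substituted, but the governance point is the same: detect change.
    """
    baseline: dict[str, str] = json.loads(baseline_path.read_text())
    findings = []
    for prompt, approved_response in baseline.items():
        current = call_model(prompt)
        similarity = SequenceMatcher(None, approved_response, current).ratio()
        if similarity < threshold:
            findings.append({
                "prompt": prompt,
                "similarity": round(similarity, 3),
                "approved": approved_response,
                "current": current,
            })
    return findings


# Example wiring: run after every announced (or suspected) provider update
# and route any findings into your nonconformity and corrective action process.
# findings = run_drift_check(my_client, Path("approved_prompt_baseline.json"))
```

Even a crude check like this produces a dated record showing that you looked for drift, which is exactly the kind of evidence an auditor will ask for when your controls over externally provided AI components are assessed.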
For organizations navigating these challenges, our team at Certify Consulting works daily with companies that need to govern AI they did not build and cannot fully see inside. The answer is not to avoid frontier models; it is to build governance architecture that is robust to the opacity of the systems you depend on.
You can also explore our resources on AI governance frameworks, ISO 42001 certification, and responsible AI risk management for regulated industries to understand what this looks like in practice.
The Reformation Analogy, Completed
The Protestant Reformation was not, at its core, a theological dispute. It was a dispute about authority — specifically, about whether a specialized institution had the right to stand between individual believers and their access to sacred text and divine interpretation. Luther's 95 Theses were posted in 1517. The Peace of Westphalia that ended the resulting conflicts was concluded in 1648. The cost of that transition, measured in human suffering, was incalculable.
I am not predicting a religious war over AI. I am noting that the structural question at the center of the Reformation — who has the authority to mediate access to what matters most? — is the same structural question at the center of the current AI moment. And history's answer to that question has consistently been: not an unaccountable institution operating without external oversight, no matter how sincere its internal commitments.
The monk in Eco's novel believed, genuinely, that some knowledge was too dangerous for ordinary people. He believed he was protecting the faithful. He was wrong, and the library burned anyway.
The question for our moment is not whether frontier AI labs have good intentions. Most of them do. The question is whether good intentions, in the absence of accountable governance structures, are sufficient to justify the epistemic authority they currently hold. History's answer to that question is unambiguous.
Frequently Asked Questions
What is meant by frontier AI labs acting as "epistemic gatekeepers"?
Frontier AI labs determine what their models will and will not say, which topics they restrict, and what values they embed through training processes. Because these models interact with hundreds of millions of users across education, healthcare, legal services, and journalism, these decisions functionally shape what kinds of inquiry are facilitated or discouraged at scale — a form of epistemic authority analogous to historical institutions that controlled access to knowledge.
Is there any regulatory framework that currently governs how frontier labs make content and alignment decisions?
The EU AI Act imposes transparency and risk management requirements on high-risk AI deployments, but does not directly govern the foundational training decisions that shape model values and restrictions. In the United States, there is no comprehensive federal AI statute. Most frontier lab behavior in this area is governed by voluntary commitments and internal policies. ISO 42001:2023 provides an AI management system framework but is voluntary and organization-scoped rather than sector-regulating.
What can enterprise organizations do to manage the governance risks of depending on frontier model APIs?
Organizations should treat third-party foundation model providers as suppliers within their AI governance scope, consistent with the ISO 42001:2023 clause 8.1 requirement to control externally provided processes, products, and services. This means documenting known limitations and value-embedding choices, building monitoring processes that detect behavioral drift across model updates, diversifying model dependencies where feasible, and ensuring their own AI risk management documentation acknowledges and accounts for upstream opacity.
Why do frontier labs lobby against regulation if some of their leaders publicly support it?
There is a meaningful difference between supporting regulation in principle and supporting specific regulatory frameworks that would constrain current business operations. Industry engagement in regulatory processes — including the development of standards — can serve to shape frameworks that raise barriers to entry for competitors while imposing minimal constraints on incumbents. This pattern is well-documented in financial services, telecommunications, and pharmaceutical regulation.
What would meaningful external audit of a frontier AI lab actually look like?
Meaningful external audit would require independent auditors to have access to training data sampling methodologies, internal content policy documentation and revision history, red-team findings and how they were addressed, RLHF rater demographics and instruction sets, and model card data with sufficient technical detail for reproducibility assessment. Current voluntary third-party assessments do not approach this standard.
Jared Clark is the principal consultant at Certify Consulting, where he has helped more than 200 organizations navigate AI governance, quality management, and regulatory compliance — achieving a 100% first-time audit pass rate across eight-plus years of practice. He holds a JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, and RAC. Learn more at certify.consulting.
Last updated: 2026-03-10