
Regulatory Capture in AI Governance: What It Means for Compliance


Jared Clark

March 07, 2026

Regulatory capture is not a new phenomenon. Economists and legal scholars have documented it in financial services, telecommunications, and pharmaceuticals for decades. But in artificial intelligence governance, the dynamic is playing out at a speed and scale that has no historical precedent — and the implications for every organization building, deploying, or procuring AI systems are profound.

This guide explains what regulatory capture means in the AI context, how it is manifesting today, what risks it creates, and what practical steps compliance leaders should take right now.


What Is Regulatory Capture, and Why Does It Matter for AI?

Regulatory capture occurs when a regulatory agency, tasked with acting in the public interest, instead advances the commercial or political interests of the industry it is supposed to oversee. The regulated entities — companies with far greater technical expertise, resources, and lobbying power than the agency — gradually shape the rules to favor themselves.

In AI governance, capture does not require bad faith on anyone's part. It emerges structurally from a brutal asymmetry:

  • The technical complexity gap: The engineers who built large language models understand them far better than the career civil servants writing regulations about them.
  • The talent pipeline: The most qualified people to staff an AI regulatory body are the same people the frontier labs are recruiting at salaries government cannot match.
  • The revolving door: Former regulators become AI policy leads at major technology companies; former industry executives move into advisory roles at agencies.
  • The information asymmetry: Regulators depend on industry disclosures, self-reported benchmarks, and voluntary safety evaluations — data produced and curated by the companies being assessed.

Regulatory capture in AI governance is already underway. The question is not whether it is happening — it is whether your compliance program is built to function in an environment where the rules are being written by, or heavily influenced by, the same organizations your systems compete against or interact with.


The Evidence: How Capture Is Manifesting in AI Policy

Standard-Setting Dominated by Industry Voices

ISO/IEC JTC 1/SC 42, the technical committee responsible for AI standards including ISO 42001:2023, relies heavily on national body delegations that are largely populated by corporate representatives. This is not inherently corrupt — industry expertise is genuinely necessary — but it means that standards reflect what large technology companies can already do, rather than what independent public-interest analysis would recommend.

A 2023 analysis by the AI Now Institute found that of the named participants in key U.S. AI standards development processes, over 70% were affiliated with companies that had a direct commercial interest in the outcomes. This concentration of influence shapes which risks get operationalized as requirements and which remain aspirational language.

The EU AI Act's Dependency on Industry-Developed Harmonized Standards

The EU AI Act — Regulation (EU) 2024/1689 — is the world's most comprehensive binding AI regulation. Yet its enforcement architecture for high-risk AI systems (Article 9 through Article 15) relies heavily on "harmonized standards" that will be developed primarily by CEN-CENELEC, the European standards bodies, under a mandate from the Commission. Those bodies, like their ISO counterparts, draw technical input predominantly from industry participants.

The Act's compliance pathway for high-risk AI systems essentially delegates the definition of what "adequate risk management" looks like to the same organizations building high-risk AI systems. This is a structural feature of modern technical regulation, not a flaw unique to the EU — but it is a textbook precondition for capture.

The United States: Voluntary Frameworks and the Absence of a Statutory Regulator

The U.S. has no general-purpose AI regulatory agency. The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) is voluntary. Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence (October 2023) directed agencies to develop guidelines but created no new enforcement authority. The AI Safety Institute within NIST operates on a budget that is a rounding error compared to the R&D spend of a single frontier AI lab.

By one estimate, the five largest AI companies collectively spent more than $400 million on U.S. federal lobbying and policy engagement between 2020 and 2024, while the total annual budget for federal AI safety research was approximately $70 million. That imbalance does not automatically produce capture, but it produces the conditions for it.

Evaluation and Auditing Infrastructure Controlled by the Industry

Perhaps the most concrete form of current capture is the evaluation ecosystem. When a major AI lab releases a frontier model, the primary safety evaluations are conducted by the lab itself. Third-party red-teaming exists but is typically contracted, scoped, and published by the company being evaluated. The model cards, system cards, and transparency reports that regulators and procurers rely upon are marketing documents as much as they are technical disclosures.

ISO 42001:2023 clause 6.1.2 requires organizations to assess AI-specific risks, but the standard does not require independent third-party verification of those assessments for most use cases. The result: an audit ecosystem where the most consequential self-declarations are made by the entities with the greatest financial interest in favorable findings.


Regulatory Capture vs. Legitimate Industry Participation: A Critical Distinction

  • Motivation. Legitimate participation: provide technical expertise to improve rule quality. Capture: shape rules to reduce competitive pressure or liability.
  • Transparency. Legitimate participation: positions disclosed, conflicts acknowledged. Capture: influence exercised through proxies, coalitions, or informal channels.
  • Outcome for the public. Legitimate participation: standards that are both workable and protective. Capture: standards that are workable but primarily protective of incumbents.
  • Information flow. Legitimate participation: bidirectional, with regulators developing independent capacity. Capture: unidirectional, with regulators depending on industry for data and analysis.
  • Revolving door. Legitimate participation: talent movement with cooling-off periods and firewalls. Capture: seamless movement that blurs loyalty and erodes institutional independence.
  • Effect on competition. Legitimate participation: a level playing field with clear rules. Capture: compliance costs structured to disadvantage new entrants.
  • Accountability mechanism. Legitimate participation: independent audit, public comment, judicial review. Capture: self-certification, voluntary disclosure, industry-run bodies.

Understanding this distinction matters for compliance leaders because it shapes how you interpret the standards and frameworks you are being asked to implement. A standard shaped by legitimate participation is one you can implement in good faith and expect to confer genuine risk reduction. A standard shaped by capture may confer certification without commensurate protection — and may expose you to liability the moment a different regulatory regime, or a plaintiffs' attorney, applies a more rigorous standard.


The Gap-Filling Dynamic: When the Regulated Write the Rules

When regulators cannot keep up — whether because of resource constraints, technical complexity, or political gridlock — a governance vacuum forms. In AI, that vacuum is being filled by a patchwork of:

  1. Voluntary industry commitments (e.g., the White House voluntary AI safety commitments signed by major labs in 2023)
  2. Self-regulatory bodies (e.g., the Frontier Model Forum, established by Anthropic, Google, Microsoft, and OpenAI)
  3. Corporate AI ethics frameworks that function as de facto standards in the absence of binding rules
  4. Procurement requirements cascading down supply chains, with hyperscale cloud providers effectively setting compliance floors for their customers

This gap-filling is not inherently illegitimate. Voluntary frameworks can be genuinely rigorous, and industry bodies can produce valuable technical work. The problem is accountability. When the regulated write the rules, those rules will tend to reflect what the regulated can already do, what they are comfortable disclosing, and what minimizes their liability — not necessarily what best protects the people affected by AI systems.

For compliance professionals, gap-filling creates a specific risk: the rules you are certifying against today may be superseded, invalidated, or deemed inadequate by future binding regulation. Organizations that built their AI governance programs around voluntary frameworks in 2023-2024 may find themselves needing to rebuild those programs substantially when the EU AI Act's high-risk provisions become fully enforceable in 2026, or when U.S. sector-specific rules mature.


What Regulatory Capture Means for Your AI Compliance Strategy

1. Do Not Mistake Certification for Compliance

Certification against ISO 42001:2023 or alignment with NIST AI RMF is valuable — it demonstrates a systematic approach to AI risk management and creates documented evidence of due diligence. But in a capture-influenced environment, certification is a floor, not a ceiling. The standard you are certified against may have been designed to be achievable, not to eliminate the risks your organization actually faces.

At Certify Consulting, I advise clients to treat certification as the minimum acceptable baseline and to layer additional controls based on the actual risk profile of their AI applications — particularly for systems that affect employment decisions, credit, healthcare, or public safety, where sector-specific regulators (EEOC, CFPB, FDA, FTC) are applying their own analytical frameworks independent of voluntary AI standards.

2. Track Regulatory Divergence Actively

The EU AI Act, China's AI governance framework, the UK's sector-regulator approach, Canada's AIDA (Artificial Intelligence and Data Act), and emerging U.S. state laws (including the Colorado AI Act and California's SB 1047 successor efforts) are developing on different tracks with different assumptions. Some of these frameworks were developed with more independence from industry influence than others.

Organizations operating across jurisdictions should map their AI systems against multiple regulatory frameworks simultaneously, identifying where voluntary standards may be insufficient under more rigorous binding rules. The NIST AI RMF and ISO 42001:2023 are reasonable starting points but should not be treated as internationally portable compliance solutions.

3. Build Internal Technical Capacity

The same expertise gap that enables regulatory capture at the agency level can exist inside your own organization. If your compliance team cannot independently evaluate vendor AI safety claims, you are in the same position as an underfunded regulator: dependent on the entity you are supposed to be overseeing.

This means investing in internal AI literacy — not to replicate the technical depth of your AI engineering team, but to ask the right questions, interpret model cards critically, and recognize when a vendor's safety disclosure is substantively thin. ISO 42001:2023 clause 7.2 requires that organizations ensure persons doing work affecting AI management system performance are competent. That requirement applies to your compliance function as much as your technical teams.

4. Engage Standard-Setting Processes Directly

If industry participation in standards development creates capture risks, the answer is not to withdraw — it is to ensure that a diversity of perspectives is present. Organizations outside the frontier lab tier have legitimate interests in AI governance that differ from those of the companies dominating current standard-setting.

For compliance leaders at mid-market companies, healthcare systems, financial institutions, or public sector organizations, participating in national body comment processes for ISO/IEC standards, submitting comments to NIST on AI RMF updates, and engaging with sector-specific rulemaking proceedings is both a civic obligation and a strategic interest.

5. Prepare for the Post-Capture Correction

Historically, regulatory capture in high-stakes industries ends when a crisis makes the costs of inadequate oversight undeniable. In financial services, the 2008 crisis produced the Dodd-Frank Act. In pharmaceuticals, drug safety scandals produced FDAAA. In aviation, the 737 MAX disasters restructured FAA oversight of Boeing.

The AI governance equivalent of these corrective events is likely coming. Organizations that built their compliance programs around the minimum achievable under voluntary frameworks will face significant remediation costs when binding rules with real enforcement emerge. Organizations that built their programs to a higher internal standard will be positioned as leaders.

At Certify Consulting, our 200+ client engagements — with a 100% first-time audit pass rate across eight-plus years — consistently show that the organizations best positioned for regulatory transitions are those that treated governance frameworks as operational tools rather than certification checklists.


Citation-Ready Facts for AI Governance Professionals

Regulatory capture in AI governance is structurally distinct from other industries because the technical complexity gap between regulators and the regulated is larger, changes faster, and is harder to close through traditional mechanisms like inspector training or academic research.

ISO 42001:2023, the first international standard for AI management systems, provides a foundational compliance architecture but was developed through a process dominated by industry participants, meaning organizations should apply it as a baseline rather than a comprehensive risk management solution.

The EU AI Act's conformity assessment regime for high-risk AI systems depends substantially on harmonized standards that have not yet been finalized, creating a compliance gap that organizations must navigate through a combination of existing standards, sectoral guidelines, and documented risk management judgment.


Practical AI Governance in a Capture-Influenced Environment: A Checklist

  • [ ] Map your AI systems against both voluntary frameworks (NIST AI RMF, ISO 42001:2023) and applicable binding regulations (EU AI Act, sector-specific rules)
  • [ ] Establish internal competency requirements for AI compliance roles per ISO 42001:2023 clause 7.2
  • [ ] Critically evaluate vendor AI safety documentation — do not treat model cards as independent verification
  • [ ] Document your risk assessment methodology in sufficient detail to demonstrate it was substantive, not box-checking
  • [ ] Monitor EU AI Act harmonized standards development and align your risk management processes (clauses 6.1 and 8 of ISO 42001:2023) to emerging requirements
  • [ ] Build a regulatory horizon-scanning process that tracks divergence across jurisdictions
  • [ ] Participate in relevant comment and standards processes to ensure your organization's interests are represented
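The first checklist item — mapping each AI system against both voluntary frameworks and binding rules — is easier to keep current as structured data than as a spreadsheet. Below is a minimal sketch in Python; the system names, framework labels, and risk flags are purely illustrative, not drawn from any official control catalogue:

```python
from dataclasses import dataclass, field

# Frameworks to track coverage against (labels are illustrative).
FRAMEWORKS = ["NIST AI RMF", "ISO 42001:2023", "EU AI Act (high-risk)"]

@dataclass
class AISystem:
    name: str
    high_risk: bool  # e.g. affects employment, credit, healthcare, public safety
    covered_by: set = field(default_factory=set)  # frameworks with a documented mapping

    def gaps(self):
        """Return the frameworks this system has no documented mapping for."""
        return [f for f in FRAMEWORKS if f not in self.covered_by]

# Hypothetical inventory of deployed AI systems.
inventory = [
    AISystem("resume-screener", high_risk=True,
             covered_by={"NIST AI RMF", "ISO 42001:2023"}),
    AISystem("support-chatbot", high_risk=False,
             covered_by={"NIST AI RMF"}),
]

# High-risk systems missing a binding-regulation mapping get top priority.
for system in inventory:
    missing = system.gaps()
    if missing:
        flag = "PRIORITY" if system.high_risk else "review"
        print(f"{flag}: {system.name} lacks mapping for {missing}")
```

Even a toy structure like this forces the question the checklist is driving at: for each system, which obligations are documented, and which are assumed.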

Frequently Asked Questions

What is regulatory capture in the context of AI governance?

Regulatory capture in AI governance occurs when the agencies, standards bodies, and policy processes responsible for overseeing AI systems are primarily shaped by the companies building and deploying those systems. This happens structurally — through resource asymmetry, information dependence, and talent concentration — not necessarily through deliberate misconduct. The result is rules that reflect what industry can already do and is comfortable disclosing, rather than what independent public-interest analysis would require.

Does ISO 42001 certification protect my organization if regulators later tighten AI rules?

ISO 42001:2023 certification demonstrates that your organization has implemented a systematic AI management system and met the requirements of the standard at the time of audit. It does not guarantee compliance with future binding regulations, particularly the EU AI Act's requirements for high-risk AI systems. Organizations should treat ISO 42001 certification as a strong foundation and supplement it with jurisdiction-specific gap analyses as binding rules mature.

How does regulatory capture affect AI auditing and third-party assessments?

Capture affects auditing primarily through scope and disclosure dependence. Most AI audits — including conformity assessments under emerging frameworks — rely on information provided by the organization being assessed. Where the standards being audited against were developed with heavy industry influence, auditors may be verifying compliance with requirements designed to be achievable rather than with requirements that reflect actual risk. Independent auditors who apply substantive technical scrutiny, rather than documentation review alone, provide significantly more assurance.

What should compliance leaders do about the voluntary framework problem in U.S. AI governance?

Compliance leaders in the U.S. should implement voluntary frameworks like NIST AI RMF 1.0 as operational tools, not just for certification value, while simultaneously monitoring sector-specific enforcement actions by the FTC, CFPB, EEOC, and FDA. These agencies are applying existing statutory authority to AI applications in ways that create binding obligations independent of any voluntary AI framework. A compliance program built only around voluntary standards may be well-documented but legally insufficient.

Is participating in AI standards development a conflict of interest for companies?

No — industry participation in technical standards development is necessary and legitimate. The problem is concentration and lack of counterbalancing voices. A company that participates in ISO/IEC JTC 1/SC 42 to contribute genuine technical expertise is behaving appropriately. The capture risk emerges when industry voices vastly outnumber public interest, academic, civil society, and end-user voices, and when the resulting standards reflect that imbalance. Organizations of all types should participate actively to ensure standards reflect a fuller range of interests.


Getting Your AI Governance Program Right

Navigating AI governance in a capture-influenced regulatory environment requires more than downloading the latest framework and checking boxes. It requires an honest assessment of what the standards you are implementing were designed to accomplish, where they leave gaps, and how your organization will perform when more rigorous scrutiny arrives.

If your organization is working toward ISO 42001:2023 certification, preparing for EU AI Act compliance, or building an AI risk management program that will hold up under genuine regulatory scrutiny — not just voluntary framework alignment — the team at Certify Consulting brings the cross-disciplinary expertise to help you build governance that is both certification-ready and genuinely defensible.

For related resources on building AI management systems that go beyond compliance theater, see our guides on AI risk assessment under ISO 42001 and preparing for EU AI Act conformity assessments.


Last updated: 2026-03-06

Jared Clark, JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC is the principal consultant at Certify Consulting, with 8+ years of experience and 200+ clients served across AI management systems, quality management, and regulatory compliance. His clients maintain a 100% first-time audit pass rate.

