There are weeks in Washington that feel like administrative background noise, and then there are weeks that quietly redraw the map. The last week of March 2026 was the latter.
Two developments — a federal court finding an FTC proceeding unconstitutional and the Trump Administration formally announcing its AI governance framework — landed within days of each other, and together they signal something important: the foundational architecture of how AI will be regulated in the United States is actively being contested and constructed at the same time. That's not a comfortable place for any organization navigating AI deployment, but it is a clarifying one.
Let me walk through both developments carefully, offer some context the headlines tend to miss, and then explain why I think the next 18 months matter more for AI policy than almost any period that's come before.
The FTC Ruling: What Actually Happened and Why It Matters for AI
A federal court ruled that an FTC administrative proceeding was unconstitutional — a decision that, on its face, reads like inside-baseball administrative law. But the implications extend well beyond whatever specific case triggered it.
The FTC has been one of the most active federal agencies staking out ground in AI oversight. In recent years it has launched investigations into generative AI companies, scrutinized AI-powered hiring tools, and issued guidance warning businesses about deceptive AI practices. The agency positioned itself, under its existing Section 5 authority over "unfair or deceptive acts or practices," as a de facto AI watchdog in the absence of comprehensive federal AI legislation.
That positioning now has a constitutional question mark hanging over it.
The court's finding centers on the structure of the FTC's in-house adjudicatory process — its administrative law judge (ALJ) system — and whether those proceedings comport with constitutional due process and separation-of-powers doctrine. This is part of a broader legal trend: the Supreme Court's 2024 Jarkesy decision, which held that the SEC's in-house enforcement proceedings violated the Seventh Amendment right to a jury trial for certain penalties, opened the door to similar challenges across federal agencies. The FTC ruling appears to walk through that door.
Here is the consequential sentence for organizations thinking about AI: If the FTC's administrative enforcement machinery is constitutionally compromised, the agency's practical ability to discipline AI-related misconduct through its preferred channel — in-house proceedings — is significantly weakened, at least until Congress acts or the courts further clarify the doctrine.
That doesn't mean the FTC goes away. Article III courts remain available. The agency can still pursue injunctive relief. But the procedural advantage the FTC historically held — the ability to move cases through its own administrative process, setting precedent on its own terms, on its own timeline — is now contested terrain.
For organizations deploying AI, this creates a strange kind of ambiguity: less short-term enforcement pressure from the FTC's administrative process, but also less regulatory clarity. Guidance and precedent that would have emerged from FTC proceedings may now be delayed or rerouted. That's not a green light; it's a gap.
Trump's AI Framework: Ambition, Architecture, and the Gaps in Between
The Trump Administration's AI framework, announced in the same week, is the more proactive of the two stories — and the more complex to interpret.
The framework represents the administration's answer to a question that has been hanging over U.S. AI policy since the Biden Administration's 2023 Executive Order on AI: how should the federal government organize itself around artificial intelligence, and what role should Washington play in shaping how AI is developed and deployed?
The Trump Administration's answer, consistent with its broader regulatory philosophy, leans toward promotion over precaution. The framework emphasizes maintaining American AI dominance globally, reducing regulatory friction on AI development, and positioning the United States as the world's leading AI innovation environment. It explicitly distances itself from the more precautionary posture of the Biden-era order, which was revoked early in the administration.
Several elements of the framework deserve careful attention:
1. The "AI Leadership" Frame Is Doing a Lot of Work
The framework is organized around the concept of American AI leadership — meaning the primary policy objective is ensuring U.S. companies, not Chinese ones, lead in AI development and deployment globally. This framing has real policy consequences. It means that where there is tension between innovation speed and safety review, the framework's thumb is on the scale toward speed.
This is a legitimate policy choice, and the geopolitical logic is not unreasonable. But it also means that organizations hoping Washington would deliver a clear, detailed AI risk governance architecture — something like the EU AI Act — should not hold their breath. The Trump framework is directional, not prescriptive.
2. Federal Preemption Is an Emerging Subtext
One of the less-discussed but potentially most consequential aspects of the administration's AI posture is the question of federal preemption of state AI laws. Over the past two years, more than 40 U.S. states have introduced or passed AI-related legislation. California's AI bills alone have attracted global attention. The patchwork is real, and it is accelerating.
The Trump framework, in emphasizing a unified national AI environment, creates a political opening for federal preemption arguments. If Congress moves on AI legislation — and there are bipartisan conversations underway — the question of whether federal law should override state AI rules will be central. For organizations operating across multiple states, this matters enormously: the regulatory environment you're mapping today may look very different in 24 months depending on how preemption plays out.
3. What the Framework Doesn't Say
Perhaps as telling as what the Trump AI framework includes is what it omits. There is no detailed mandate for algorithmic accountability mechanisms. There is no federal AI incident reporting requirement along the lines of what the EU AI Act mandates for high-risk systems. There is no explicit civil rights framework governing AI-driven decision-making in employment, credit, or housing.
These omissions don't mean those issues disappear — state AGs, private litigants, and sector-specific regulators (banking, healthcare, labor) retain independent authority. But they do mean the federal government is not going to hand organizations a comprehensive compliance roadmap. Organizations will have to build their own.
Reading Both Signals Together: A Regulatory Inflection Point
Taken together, the FTC ruling and the Trump AI framework tell a coherent story about where U.S. AI governance is heading, even if neither story is complete on its own.
The United States is making a deliberate bet on AI as a strategic asset, while simultaneously (if not always intentionally) reducing the enforcement architecture that would hold AI deployments accountable to baseline standards. That is a high-variance bet. If U.S. AI systems perform well and cause limited harm, it looks like visionary deregulation. If high-profile AI failures accumulate — in hiring, in lending, in healthcare, in critical infrastructure — the backlash will be severe and fast.
The organizations best positioned in this environment are not the ones waiting for regulatory clarity before acting — they are the ones building internal governance capacity now, so they are ahead of whatever wave comes next.
Consider the numbers: A 2024 McKinsey survey found that only 21% of organizations had implemented formal AI risk governance processes, even as 65% had adopted AI in at least one business function. That governance gap is what makes the current regulatory uncertainty so consequential — most organizations don't have the internal frameworks to distinguish which of their AI deployments would be scrutinized under a stricter regime, and which wouldn't.
Meanwhile, global context matters. The EU AI Act's risk-based framework is now in phased implementation, with prohibited AI practices banned as of February 2025 and obligations for high-risk AI systems phasing in through 2026 and 2027. Any U.S. organization with European operations or customers is not operating in a deregulatory environment; it is operating in a dual-track environment where European requirements apply regardless of what Washington does or doesn't do. The EU AI Act establishes fines of up to €35 million or 7% of global annual turnover, whichever is higher, for violations of its prohibited-practices provisions — a number that makes internal AI governance a financial priority, not just a policy aspiration.
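To make that penalty figure concrete: the prohibited-practices tier is a "whichever is higher" formula, so for large companies the turnover percentage, not the fixed cap, is what governs. A minimal sketch of the arithmetic, assuming only the two figures cited above (the example turnover is hypothetical):

```python
# Illustrative arithmetic for the EU AI Act's prohibited-practices penalty tier:
# the maximum fine is the HIGHER of a fixed cap (EUR 35 million) and 7% of
# global annual turnover. The example turnover figure below is hypothetical.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_prohibited_practices_fine(global_annual_turnover_eur: float) -> float:
    """Return the ceiling of the fine under the prohibited-practices tier."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# For a hypothetical company with EUR 2 billion in global annual turnover,
# 7% of turnover (EUR 140 million) exceeds the fixed cap, so it governs.
print(max_prohibited_practices_fine(2_000_000_000))
```

The point of the exercise: for any company above roughly €500 million in turnover, the percentage term dominates, which is why the exposure scales with the organization rather than topping out at a fixed number.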
Sector-Specific Implications Worth Watching
The week's regulatory signals don't land the same way across every industry. Here's a quick read on where the implications are sharpest:
| Sector | FTC Ruling Impact | Trump AI Framework Impact |
|---|---|---|
| Technology / AI Developers | Reduced near-term FTC administrative enforcement risk; Article III exposure remains | Favorable environment for development speed; less prescriptive federal oversight |
| Financial Services | Minimal direct FTC change; OCC, CFPB, Fed retain authority | Watch for preemption debates affecting state-level AI lending rules |
| Healthcare | FDA AI/ML framework still applies; FTC change marginal | Promotion framing may accelerate AI adoption in health systems |
| Retail / E-Commerce | FTC deceptive AI practices guidance still applies; enforcement path murkier | State consumer protection AI laws remain active (CA, CO, TX) |
| HR / Hiring Technology | EEOC and state-level scrutiny of AI hiring tools remains active | No federal algorithmic accountability mandate in current framework |
| Critical Infrastructure | Sector regulators (FERC, TSA, DHS) retain independent authority | National security AI framing may accelerate government AI use |
What I'm Watching in the Weeks Ahead
Several threads from this week's news are worth tracking closely:
1. Congressional AI Legislation. Both the FTC ruling (which reveals a gap in the agency's enforcement architecture) and the Trump framework (which is directional but not legislative) create pressure for Congress to act. There are bipartisan conversations around a potential federal AI liability framework and a federal privacy law that includes AI provisions. Whether either moves before the midterm cycle accelerates is the key question.
2. State AG Activity. In the absence of strong federal enforcement, state attorneys general have become the most active AI enforcement actors in the U.S. California, Texas, New York, and Illinois all have active AI-related investigations or enforcement postures. This trend will intensify.
3. Private Litigation. The Jarkesy line of cases, of which the FTC ruling is a part, doesn't reduce legal exposure for AI misconduct — it reroutes it toward Article III courts, where jury trials are available and damages can be significant. Watch for class actions involving AI-driven decisions in employment, insurance, and financial services.
4. The NIST AI RMF as a De Facto Standard. With federal prescriptive regulation limited, the NIST AI Risk Management Framework (AI RMF 1.0, released in 2023) is increasingly functioning as the reference standard for organizations trying to demonstrate responsible AI governance. Expect more procurement contracts, insurance policies, and litigation defenses to reference NIST AI RMF alignment.
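For organizations starting from zero, even a minimal internal inventory organized around the NIST AI RMF's four core functions (Govern, Map, Measure, Manage) is a concrete first step toward the kind of alignment described above. Here is a hedged sketch of what such a register might look like in code — the field choices and example system are hypothetical illustrations, not anything prescribed by NIST:

```python
# A minimal, hypothetical AI-system register loosely organized around the
# NIST AI RMF's four core functions: Govern, Map, Measure, Manage.
# Field names and the example entry are illustrative, not prescribed by NIST.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    business_function: str          # Map: where the system sits in context
    risk_owner: str                 # Govern: accountable individual
    metrics_tracked: list[str] = field(default_factory=list)   # Measure
    mitigations: list[str] = field(default_factory=list)       # Manage

    def governance_gaps(self) -> list[str]:
        """Flag the obvious omissions a stricter regime would scrutinize."""
        gaps = []
        if not self.risk_owner:
            gaps.append("no accountable risk owner (Govern)")
        if not self.metrics_tracked:
            gaps.append("no performance/bias metrics (Measure)")
        if not self.mitigations:
            gaps.append("no documented mitigations (Manage)")
        return gaps

# Hypothetical example: a resume-screening tool with an owner assigned
# but no measurement or mitigation documented yet.
screener = AISystemRecord(
    name="resume-screener-v2",
    business_function="hiring",
    risk_owner="HR Ops lead",
)
print(screener.governance_gaps())
```

Even this toy version captures the diagnostic the McKinsey governance-gap figures point to: most organizations can list where they've deployed AI (Map) long before they can say who owns the risk or what is being measured.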
The Deeper Question: Who Is Governing AI If Washington Steps Back?
This is the question I keep returning to. If the FTC's administrative enforcement capacity is curtailed, if the federal AI framework is promotional rather than prescriptive, if Congress hasn't acted — who is actually governing AI in the United States right now?
The honest answer is: a diffuse constellation of actors. State legislators and AGs. Sector regulators with pre-existing statutory authority. Courts, increasingly, as litigants bring AI cases under existing tort, contract, and civil rights law. International bodies like the EU, whose rules apply to U.S. companies with European exposure. And, critically, the organizations deploying AI themselves — through their own governance choices, risk assessments, and internal accountability structures.
This is not necessarily a failure of governance. Distributed governance has advantages: it's adaptive, it's harder to capture by any single interest, and it allows variation based on context. But it places a real burden on organizations. In a fragmented regulatory environment, internal AI governance isn't just a compliance function — it is the primary mechanism through which accountability is exercised.
That's a bigger responsibility than most organizations have fully absorbed.
Closing Thoughts
The week's news — an FTC ruling and a presidential AI framework — might look like two separate regulatory events. But they are really two facets of the same underlying dynamic: the United States is actively deciding, through contested legal battles and executive policy choices, what kind of AI governance architecture it wants. That process is not finished. The decisions being made now — in federal courts, in the White House, in state legislatures — will shape the environment organizations operate in for the next decade.
The organizations that treat this moment as merely a period of reduced enforcement pressure will be caught flat-footed when the next wave arrives. The ones that use this window to build genuine internal AI governance capacity will be in a fundamentally different position — not because they "complied," but because they actually understood what they were deploying and why.
That distinction, in the long run, is what separates the organizations that survive AI regulatory maturation from those that scramble through it.
For more analysis on how AI is transforming institutional structures and organizational strategy, explore Prepare for AI.
Source reference: The Regulatory Review, Week in Review #405 (March 27, 2026)
Last updated: 2026-03-30
Jared Clark
Founder, Prepare for AI
Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.