Three things happened this week in Washington that, taken separately, look like routine regulatory noise. Taken together, they describe something more significant: a deliberate reshaping of who controls AI, who enforces the rules around it, and what physical infrastructure it runs on.
The Trump administration released formal legislative recommendations for a national AI governance framework. The Fifth Circuit ruled that the FTC's internal court system is unconstitutional. And the FCC banned all new foreign-made consumer routers on national security grounds. Add in the SEC's long-awaited cryptocurrency categorization and a string of other regulatory moves, and this was one of the more consequential weeks for technology governance I can remember.
None of these stories got the attention they deserved individually. The media cycle treated them as separate beats — legal reporters covered the FTC ruling, tech reporters covered the AI framework, telecom reporters covered the router ban. But if you read them as a single document about where American AI policy is headed, a clear philosophy emerges. And that philosophy has direct consequences for every company building with or around AI right now.
Here is what actually happened, what it means, and what you should be doing about it.
The AI Framework: What Washington Actually Said
The administration's formal AI legislative recommendations, released this week, are the follow-through on a December executive order that discouraged states from passing AI regulations that conflicted with federal policy. That order was the warning shot. This week's framework is the actual map.
Five things sit at the center of it. The framework wants to protect children from AI misuse. It wants to safeguard data infrastructure. It asserts that intellectual property rights — specifically licensing — should govern how AI systems use creative work. It explicitly wants to prevent what it calls "censorship" in AI. And it calls for incorporating AI into education.
The piece that business leaders should be paying the most attention to is the preemption provision. The framework would preempt state laws that impose "undue burdens" on AI development. In plain language: if a state passes an AI regulation that federal policymakers consider too restrictive, that state law goes away. California's AI legislation — which has been the most ambitious in the country — becomes significantly harder to sustain under this logic. So do Colorado's, Illinois's, and the patchwork of other state-level efforts that were filling the vacuum left by federal inaction.
For companies that have been navigating a complex, contradictory web of state AI rules, this sounds like simplification. And in one sense it is. A single federal framework is easier to comply with than fifty different state regimes. But simplification is not the same as protection. The question worth sitting with is: simpler for whom? A federal framework that preempts state law is only as good as the federal standard it replaces those laws with — and right now, the proposed framework is notably light on privacy rules, liability standards, and enforcement mechanisms.
The "anti-censorship" pillar is worth examining closely, because it carries more than it initially appears to carry. When the federal government says it wants to prevent censorship in AI, it is not primarily talking about government censorship. It is signaling skepticism toward AI companies that moderate outputs — that restrict what their models will say on certain topics. In my view, this is the administration placing a thumb on the scale of a debate happening inside the AI industry right now, between companies that build content-restricted systems and those that argue for fewer guardrails. The framework doesn't resolve that debate, but it names a side.
What's missing from the framework is arguably as important as what's in it. There is no clear liability standard for AI-generated harm. There is no federal privacy floor. There is no defined enforcement mechanism. The framework describes goals without specifying consequences for failing to meet them. That is not necessarily the final word — legislative details will fill some of this in — but it means that for the foreseeable future, American AI governance is long on principles and short on teeth.
The FTC Ruling: When the Referee Loses Its Whistle
The same week the administration released its AI framework, the U.S. Court of Appeals for the Fifth Circuit handed down a ruling that will reshape how technology companies think about regulatory risk for years. The court vacated an FTC cease-and-desist order against Intuit — the company that makes TurboTax — and in doing so, ruled that the FTC's internal adjudication system is unconstitutional.
Here is the core of what the court said. The FTC has, for decades, operated its own in-house court system. When the agency believed a company was engaged in deceptive or anticompetitive practices, it could bring charges before its own administrative judges — judges who work for the FTC, whose decisions are reviewed by the FTC, without a jury trial and without independent judicial oversight. The Fifth Circuit ruled that this arrangement violates due process and infringes on the president's authority to remove executive officers. In simpler terms: the FTC was acting as investigator, prosecutor, judge, and appeals court simultaneously, and the Fifth Circuit decided that is not constitutional.
The immediate practical consequence is that the FTC can no longer rely on its internal court system as a primary enforcement tool. If the agency wants to take action against a company, it now has to go through the federal court system — a longer, more expensive, more uncertain path. That matters enormously because the FTC's ability to move quickly against bad actors has historically depended on this in-house process.
Connect this to AI, and the implications become clearer. The FTC has been the most aggressive federal regulator when it comes to deceptive AI practices — fake reviews, AI-generated impersonation, algorithmic price manipulation, deceptive chatbots. If you are an AI company engaging in any of those practices, the FTC's enforcement pathway just got significantly more complicated. The agency can still bring cases. It just lost its fastest and most flexible tool for doing so.
Does this ruling actually clear the way for AI companies to act with less regulatory fear? I think the honest answer is: somewhat, in the short term, and not necessarily in the long term. Federal courts can still enforce FTC actions — they just take longer, and companies have more opportunity to contest them. What's more likely to happen is that enforcement becomes slower and more selective. The FTC will pick its AI cases more carefully. The ones it does bring will be larger, more public, and more clearly egregious. The gray areas — the practices that might be deceptive but not obviously so — are less likely to attract action in the near term.
The irony here is significant. The same week the administration released an AI framework that promises to protect consumers from AI misuse, its enforcement architecture for delivering on that promise got substantially weakened by a court ruling. Whether that is contradiction or coordination is a question worth holding.
The Router Ban and the Infrastructure Play
The FCC's ban on new foreign-made consumer routers looks like a telecom story. It is actually an AI story.
AI does not live in the cloud in any abstract sense. It runs on physical infrastructure: servers, networks, routers, the hardware that moves data between devices and data centers. When the FCC bans foreign-made routers — citing the Trump administration's 2025 National Security Strategy determination that they pose "unacceptable risks" — it is making a bet that AI dominance in the coming decade requires domestic control over the physical layer, not just the software layer.
Pair this with the AI framework's preemption provisions and a pattern comes into focus. The framework protects AI development from state-level regulatory interference. The router ban protects AI infrastructure from foreign hardware. These are two sides of the same strategy: remove friction from domestic AI development while adding friction to foreign involvement in the systems AI runs on. Whether that strategy produces the outcomes it aims for is a separate question — hardware bans have complicated histories — but the logic is coherent.
For most businesses, the router ban's direct impact is limited to new device purchases. It does not force replacement of existing equipment. But it is a useful signal about the direction of American technology policy: sovereignty over physical infrastructure is being treated as a first-order national security concern, not an afterthought. Companies building AI-dependent systems should be watching this space, because the hardware question will not stop at routers.
What Business Leaders Should Actually Do This Week
Reading regulatory news without converting it into concrete decisions is just expensive anxiety. Here is what I think is actually worth doing, depending on where you sit.
If you are building an AI product and you have been managing state-by-state compliance — watching what California, Colorado, and Illinois are doing and trying to build systems that satisfy all of them — the federal preemption framework changes your planning horizon. You do not need to abandon state-specific compliance work immediately, and you should not; the framework is still in the legislative proposal stage. But you should be asking your legal team what a federal-preemption scenario looks like for your compliance roadmap, and you should be watching the legislative process closely. The companies that will be caught flat-footed are the ones that assume the state patchwork is permanent.
If you are a fintech or crypto company, the SEC's cryptocurrency categorization — which defined digital assets as digital commodities, collectibles, tools, stablecoins, or securities, with the SEC retaining oversight only over the securities category — is genuinely actionable. The ambiguity that has made crypto compliance so difficult was, in large part, about not knowing which regulator had jurisdiction over which asset. That picture is now substantially clearer. Talk to your legal team about where your assets sit in that taxonomy and what the CFTC's oversight of non-security digital assets means for your compliance posture.
If your business model depends, even partly, on the FTC acting as a check on bad actors in your market — if you compete against companies that engage in practices you believe are deceptive and you have historically assumed the FTC would eventually catch up with them — that assumption just got shakier. The Intuit ruling does not eliminate FTC enforcement, but it slows it and makes it more selective. If a competitor's deceptive AI practices have been your concern, your legal team should be thinking about alternative remedies: state attorneys general, private litigation, FTC complaints that are egregious enough to be worth the agency's more constrained resources.
More broadly, every executive in a technology-adjacent company should be asking two questions right now. First: what does a world with a strong federal AI framework and weakened state AI laws look like for our regulatory exposure, and are we positioned for that world? Second: if FTC enforcement becomes slower and more selective, what does that mean for consumer trust in our space — and what are we doing to differentiate on integrity rather than relying on regulatory pressure to level the playing field?
Those are not easy questions. They are the right ones.
The Bigger Picture Worth Sitting With
Here is the tension at the center of this week's regulatory moves that I keep coming back to.
Washington is simultaneously deregulating AI at the national level — removing state-level friction, weakening the FTC's enforcement capacity, promising fewer restrictions on AI development — and centralizing control over AI infrastructure. The router ban, the national security framing, the push to bring hardware into the domestic supply chain: these are not deregulatory moves. They are assertive state intervention in the physical layer of the technology stack.
You could read that as contradiction. I think it is more accurate to read it as a deliberate division of labor. The federal government is removing one kind of control — the kind that limits what AI companies can build and how — while asserting another kind: control over who builds the infrastructure, where, and with whose hardware. Fewer rules about the product. More control over the production environment.
That is a strategy with winners and losers. American AI companies operating in the domestic market are the most obvious beneficiaries of the deregulatory side. Foreign technology companies — particularly Chinese hardware manufacturers — are the most obvious targets of the infrastructure side. American consumers are in a more ambiguous position: they get AI with fewer regulatory guardrails around privacy, liability, and deceptive practices, but they also get AI built on domestic infrastructure, which presumably carries different security characteristics.
What concerns me, and what I think deserves more attention than it is getting, is the enforcement gap. A national AI framework that preempts state rules but offers no clear liability standard or enforcement mechanism is not a framework that protects consumers. It is a framework that clears the field for AI companies. Whether that field produces trustworthy AI systems depends almost entirely on the character of the companies operating in it — and the week's other news, the FTC ruling in particular, just made it harder for the government to discipline the ones that don't.
The question I would leave you with is this: in a world where AI is governed nationally, where state rules are preempted, and where the primary federal enforcement agency for consumer protection has lost its most flexible tool — who do you trust to protect your customers from AI that isn't working as advertised? And what are you doing to be the answer to that question, rather than the problem it is asking about?
Published: March 30, 2026
Source: The Regulatory Review, Week in Review, March 27, 2026. theregreview.org. Stories referenced: Fifth Circuit ruling on FTC adjudication (Intuit); Trump Administration National AI Framework legislative recommendations; FCC foreign router ban; SEC digital asset categorization; USDA farm grant cancellations; Immigration judge civil service ruling; Senate mifepristone investigation.
Jared Clark
Founder, Prepare for AI
Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.