EU Deadlines, US State Laws, and a Federal Push to Centralize Policy
2025 brought a new legal reality: AI regulation is no longer hypothetical. It's schedules, compliance templates, enforcement dates and, most interestingly, conflicts between local rules and national strategy.
Europe’s AI Act moved from “historic” to “operational”
The EU AI Act entered into force on August 1, 2024, according to the European Commission. But 2025 is where it started to bite in a way businesses could feel.
First came the "prohibited practices" phase. The Commission published guidelines in early February 2025 to clarify which AI practices are considered unacceptable under the Act. This matters because bans are the sharp edge of regulation: if you're building products that touch biometrics, manipulation risks, or sensitive profiling, you don't get a grace period; you get a stop sign.
Then came the next major compliance cliff: general-purpose AI (GPAI) obligations. The Commission's own guidance page notes that GPAI obligations entered into application on August 2, 2025, and it sets out instructions and expectations for providers, including documentation submissions and processes tied to systemic-risk models.
This is how regulation becomes real: not speeches, but forms, reporting channels, and a defined obligation to explain what you built, how it behaves, and what risks it may create.
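To make that concrete, here is a minimal sketch of what a machine-readable documentation record might look like inside a provider's codebase. Every field name below is hypothetical; the AI Act's actual templates and submission formats come from the Commission, not from this example.

```python
# Illustrative only: these field names are invented for this sketch,
# not taken from the AI Act's official documentation templates.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDocumentation:
    """A record of 'what you built, how it behaves, and what risks it may create'."""
    model_name: str
    provider: str
    training_data_summary: str           # provenance and curation notes
    intended_uses: list[str]
    known_limitations: list[str]
    systemic_risk_model: bool            # systemic-risk models carry extra obligations
    risk_mitigations: list[str] = field(default_factory=list)

    def to_submission_json(self) -> str:
        """Serialize for a (hypothetical) regulator-facing reporting channel."""
        return json.dumps(asdict(self), indent=2)

doc = ModelDocumentation(
    model_name="example-gpai-7b",
    provider="Example AI GmbH",
    training_data_summary="Web text filtered for PII; curation log retained.",
    intended_uses=["text generation", "summarization"],
    known_limitations=["hallucination under long contexts"],
    systemic_risk_model=False,
)
print(doc.to_submission_json())
```

The point isn't the schema; it's that the answers now have to exist somewhere structured, versioned, and ready to hand over.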
The U.S. is also drawing lines, but it's doing it through states first
While the EU is building a unified framework, the U.S. is experiencing a more chaotic but fast-moving pattern: states regulating specific harms.
Reuters reported that New York and California became the first states to regulate "AI companion" systems: apps designed to simulate ongoing emotional relationships, used especially by teens and people seeking social support. New York's law requires suicide-ideation detection, crisis referrals, and repeated disclosure that the companion is not human, with fines for violations. California's law (effective January 1, 2026) adds youth-protection requirements, reporting obligations, and even allows private lawsuits in some cases.
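For engineers, duties like these translate into code paths. Below is a deliberately naive sketch of a companion-app safety gate, assuming a keyword trigger and a fixed disclosure cadence; the terms, thresholds, and message copy are placeholders, and a production system would use a trained classifier and clinically reviewed referral language.

```python
# Hypothetical companion-app safety layer. Keywords, cadence, and
# message copy are placeholders, not legal or clinical guidance.
CRISIS_TERMS = {"suicide", "kill myself", "end it all"}  # real systems use classifiers
DISCLOSURE_EVERY_N_TURNS = 10                            # "repeated disclosure" cadence, illustrative

def safety_gate(user_message: str, turn_count: int, reply: str) -> str:
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        # Crisis referral takes priority over the normal companion reply.
        return ("It sounds like you may be going through something serious. "
                "I'm an AI, not a person. Please consider calling or texting 988, "
                "the Suicide & Crisis Lifeline in the US.")
    if turn_count % DISCLOSURE_EVERY_N_TURNS == 0:
        # Periodic reminder that the companion is not human.
        reply += "\n\n(Reminder: you're chatting with an AI, not a human.)"
    return reply

print(safety_gate("just checking in", turn_count=10, reply="Good to hear from you!"))
```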
This is an important signal for the whole tech ecosystem: “AI regulation” won’t only be about enterprise compliance. It will also be about consumer mental health, platform duty-of-care, and the edge cases where AI feels less like a tool and more like a relationship.
Federal policy is pushing back against a patchwork
Patchwork regulation creates business uncertainty, so federal policy often tries to centralize. The White House published an action titled “Ensuring a National Policy Framework for Artificial Intelligence,” arguing for a unified national approach and criticizing state-level obstruction.
Whether that strategy succeeds in courts and politics is its own saga. But the trendline is clear: as state laws multiply, federal pressure to preempt or harmonize will increase, especially when AI is framed as a competitiveness issue.
AI is also becoming a regulated infrastructure load
One of the most under-discussed areas of AI governance is energy and grid reliability. Reuters reported that FERC directed PJM to clarify rules for co-located data centers and other large loads, a move driven in part by the massive electricity demand associated with AI growth.
This is governance in an unexpected form: not “what the model can say,” but “where the model can be powered.” As AI data centers expand, expect more regulation that looks like utility policy, not tech policy.
The emerging reality: compliance is now product design
Put the EU’s August 2025 GPAI obligations together with U.S. state laws on AI companions and you get the 2025 headline beneath the headlines:
AI regulation is shifting from after-the-fact policing to design-time constraints. Disclosure, documentation, crisis protocols, risk assessment, and auditability are becoming features the way encryption and accessibility became features.
In 2026, the winners won’t be the companies that “avoid regulation.” They’ll be the companies that treat compliance as an engineering discipline and build products that can prove what they do.
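As a closing illustration, here is one way "prove what they do" can become an engineering artifact: a hash-chained, append-only audit log. The record schema is invented for this sketch, but the tamper-evidence technique itself is standard.

```python
# A minimal sketch of "provable behavior": an append-only, hash-chained
# audit log. The event names and record schema are hypothetical.
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, event: str, detail: dict) -> None:
        """Append an entry whose hash covers its content and its predecessor."""
        entry = {"ts": time.time(), "event": event,
                 "detail": detail, "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != e["hash"]:
                return False
        return True

log = AuditLog()
log.record("crisis_referral_shown", {"turn": 42})
log.record("risk_assessment_filed", {"model": "example-gpai-7b"})
assert log.verify()
```

A regulator, auditor, or plaintiff's expert can replay a log like this and confirm it hasn't been edited after the fact. That's what compliance as an engineering discipline looks like in practice.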