The EU AI Act Is Now Enforced: Here's What Actually Changed

The EU AI Act is now officially enforced. Learn what actually changed, which AI systems face new restrictions, and how businesses must comply with European regulations.

---


The Brussels Effect in Action

The EU AI Act's enforcement marks more than regional compliance—it represents the crystallization of the "Brussels Effect" in artificial intelligence governance. Much as GDPR became the de facto global standard for data privacy, the AI Act's risk-based classification system is already reshaping how multinational corporations design and deploy AI systems worldwide. Companies are discovering that maintaining separate product pipelines—one EU-compliant, one for less regulated markets—imposes prohibitive engineering costs. The result: many are voluntarily extending EU-grade safeguards to all users, effectively exporting Brussels' regulatory framework across borders without a single extraterritorial enforcement action.

This dynamic carries significant implications for competitive positioning. European AI startups, long criticized for operating under heavier regulatory burdens than their American or Chinese counterparts, may find their early compliance investments converting into market advantages. Firms that have already navigated conformity assessments, established quality management systems, and built audit trails can enter regulated sectors—healthcare, finance, critical infrastructure—with credentials that overseas competitors scramble to match. The Act's phased implementation timeline, stretching through 2027, offers a narrow window for this compliance head start to materialize before global parity reasserts itself.

Industry observers note a subtler shift in how AI systems are conceived from inception. The principle of "compliance by design," while formally required only for high-risk applications, is influencing product roadmaps across the risk spectrum. Engineering teams report increased engagement from legal and ethics functions earlier in development cycles—a structural change that some executives welcome for risk mitigation, while others lament as innovation friction. Whether this represents durable cultural transformation or temporary adjustment turbulence remains contested, though early empirical studies suggest the former: AI documentation practices and model cards, once rare outside research contexts, are becoming standard deliverables even for internal prototyping.

---

Frequently Asked Questions

Q: Does the EU AI Act apply to companies based outside the European Union?

Yes. The Act applies to any provider placing AI systems on the EU market or putting them into service within the EU, regardless of where the company is headquartered. This extraterritorial reach mirrors GDPR and means American, Chinese, and other non-EU firms must comply if their AI products affect EU users.

Q: What are the penalties for non-compliance?

Violations can trigger fines up to €35 million or 7% of global annual turnover—whichever is higher—though most infringements carry lower tiers of €15 million or 3%, and €7.5 million or 1% for supplying incorrect information to authorities. The European Commission has indicated early enforcement will prioritize guidance over maximum penalties, but the statutory ceilings create substantial liability exposure.
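The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch of how the three tiers resolve for a given turnover (illustrative only, not legal advice; tier figures are from the Act's Article 99, the function name is my own):

```python
def fine_cap(tier: str, global_turnover_eur: float) -> float:
    """Return the statutory maximum fine for a given violation tier.

    Tiers mirror the EU AI Act's Article 99 ceilings: prohibited-practice
    violations, other obligations, and supplying incorrect information
    to authorities.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "other": (15_000_000, 0.03),
        "misinformation": (7_500_000, 0.01),
    }
    fixed_ceiling, turnover_pct = tiers[tier]
    # "Whichever is higher": large firms are bound by the turnover percentage,
    # small firms by the fixed amount.
    return max(fixed_ceiling, turnover_pct * global_turnover_eur)

# A firm with €2 billion global annual turnover:
print(fine_cap("prohibited", 2_000_000_000))      # 140000000.0  (7% exceeds €35M)
print(fine_cap("misinformation", 2_000_000_000))  # 20000000.0   (1% exceeds €7.5M)
```

Note that for SMEs the Act applies different rules, so this sketch captures only the headline ceilings.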

Q: How does the Act classify "high-risk" AI systems?

High-risk categories include AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice. The classification depends on both the sector and the specific use case—a remote biometric identification system typically qualifies regardless of deployment context, whereas a chatbot may not unless deployed in high-stakes domains.

Q: Are open-source AI models exempt from the Act?

General-purpose AI models released under free and open-source licenses receive partial exemptions, particularly regarding obligations for downstream documentation and systemic risk evaluation. However, these carve-outs narrow considerably once a model exceeds computational thresholds (10^25 FLOPs) or is monetized through API access, placing substantial open-weight releases like Llama and Mistral within regulatory scope.
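The way the carve-out narrows can be pictured as a pair of conditions on top of the license check. A rough sketch of that logic (my own simplification of the article's summary, not the Act's actual legal test; names and parameters are hypothetical):

```python
# Training-compute threshold above which a GPAI model is presumed
# to carry systemic risk under the Act.
SYSTEMIC_RISK_FLOPS = 1e25

def open_source_exemption_applies(training_flops: float,
                                  open_license: bool,
                                  monetized: bool) -> bool:
    """Rough check of whether the GPAI open-source carve-out could apply.

    Simplified for illustration: per the summary above, the carve-out
    is lost once the model crosses the systemic-risk compute threshold
    or is commercially exploited (e.g. paid API access).
    """
    if not open_license:
        return False  # carve-out only exists for free and open-source releases
    if training_flops >= SYSTEMIC_RISK_FLOPS:
        return False  # systemic-risk obligations apply regardless of license
    if monetized:
        return False  # monetization removes the exemption
    return True

print(open_source_exemption_applies(3e24, open_license=True, monetized=False))  # True
print(open_source_exemption_applies(2e25, open_license=True, monetized=False))  # False
```

The point of the sketch is the asymmetry: an open license is necessary but nowhere near sufficient, which is why large open-weight releases can still fall in scope.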

Q: When do different provisions of the Act take effect?

The enforcement timeline is staggered: prohibitions on unacceptable-risk AI (social scoring, manipulative systems) took effect February 2025; obligations for general-purpose AI models began August 2025; high-risk system requirements phase in between 2026 and 2027 depending on sector; and full enforcement infrastructure, including notified body accreditation, completes by 2027.