The European Union enacted the world's first comprehensive AI law, the EU AI Act (Regulation 2024/1689), which entered into force in August 2024. Implementation is phased: prohibitions on unacceptable-risk AI systems applied from February 2025; obligations for General-Purpose AI (GPAI) models took effect in August 2025; and high-risk system requirements are delayed pending technical standards and possible amendments, with full implementation expected by August 2027. The AI Liability Directive was withdrawn in February 2025 and will not be revived; AI liability is instead addressed through the revised Product Liability Directive. The Digital Omnibus package (November 2025) proposes to simplify and delay certain AI Act obligations but remains under negotiation. The EU AI Office, established in 2024, is the primary enforcement body for GPAI models.
The EU approach is grounded in fundamental rights protection and the precautionary principle: powerful technologies must be governed proactively to prevent harm, rather than reactively after harm occurs. The AI Act reflects the EU's 'Brussels Effect', the tendency for EU regulations to become de facto global standards as multinational companies comply in order to access the EU market. The framework prioritizes human dignity, democratic values, and the rule of law alongside innovation.
The EU is arguably the most influential actor in global AI governance, and the AI Act's extraterritorial reach affects any company serving EU customers. The EU led negotiations on the Council of Europe's AI Convention, shapes the OECD AI Principles, and engages with the United States on AI through the EU-US Trade and Technology Council (TTC). The EU AI Continent Action Plan (2025) aims to attract global AI investment and talent.