The European Union’s Artificial Intelligence Act, adopted in 2024, establishes a comprehensive regulatory framework for AI systems and general-purpose AI models placed on the EU market. It takes a risk-based approach, categorising systems as posing unacceptable, high, limited (transparency), or minimal risk, with the strictest obligations falling on high-risk applications and on general-purpose AI models that pose systemic risk.

Enforcement of the Act is shared between EU Member States and central EU institutions, creating a hybrid governance model designed to balance national oversight with coordinated European supervision. At the national level, Member States appoint notifying authorities and market surveillance bodies. Notifying authorities oversee the conformity-assessment bodies that evaluate high-risk AI systems before they enter the market, while market surveillance authorities conduct post-market compliance checks, including investigations and penalties where necessary.

At the EU level, the European Commission, through its AI Office, holds direct responsibility for enforcing the rules governing general-purpose AI models. It is supported by advisory bodies: the European Artificial Intelligence Board, a scientific panel of independent experts, and an advisory forum of stakeholders. This layered structure aims to ensure consistent enforcement across Europe while preserving technical expertise and regulatory flexibility in a rapidly evolving AI landscape.







