RESEARCH NEXUS

A Concise Overview of the AI Act

The AI Act establishes a comprehensive framework for governing the development and deployment of AI systems in the EU. It classifies AI systems according to their level of risk—from “unacceptable” to “minimal”—and assigns varying obligations to both providers (developers) and users (deployers).

Key Takeaways

Risk-Based Classification
Unacceptable Risk: Prohibited AI practices (e.g., manipulative or deceptive techniques, social scoring, certain biometric uses).
High Risk: AI systems that significantly affect people’s fundamental rights or safety (e.g., access to jobs, education, public services). These face stringent obligations.
Limited Risk: AI subject to transparency requirements (e.g., chatbots and deepfakes) so that people know they are interacting with AI or viewing AI-generated content.
Minimal Risk: Most AI systems currently on the market (e.g., spam filters, AI-enabled video games). Regulation here is generally minimal, though evolving with generative AI.
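For teams triaging an AI inventory against these tiers, the classification maps naturally onto a simple data structure. The sketch below is purely illustrative: the tier names come from the Act, but the obligation summaries are paraphrases and the example systems are hypothetical; real classification turns on the Act's annexes and legal analysis, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent obligations (risk management, documentation, oversight)"
    LIMITED = "transparency obligations (disclose AI interaction or AI-generated content)"
    MINIMAL = "no specific obligations under the Act"

# Illustrative examples drawn from the summary above; actual classification
# requires legal assessment, not keyword matching.
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```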

Obligations by Stakeholder
Providers (Developers) of high-risk AI systems must establish risk management processes, ensure training-data quality, maintain technical documentation, and design for human oversight, accuracy, robustness, and cybersecurity.
Users (Deployers) of high-risk AI have fewer obligations, but must operate systems in line with the provider's instructions for use and monitor them when deploying AI in a professional capacity.

General Purpose AI (GPAI)
GPAI models (capable of performing a broad range of tasks) must provide technical documentation, put in place a policy to comply with EU copyright law, and publish summaries of their training data.
Free and Open License Models: Exempt from some documentation obligations, but must still comply with copyright and publish training data summaries; the exemptions fall away if the model poses “systemic risk.”
Systemic Risk: GPAI models trained using more than 10^25 FLOPs are presumed to pose systemic risk; their providers must notify the Commission and may face additional obligations such as adversarial testing and incident reporting (a rough arithmetic check against this threshold is sketched below).
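Because the threshold is a plain compute figure, a back-of-the-envelope self-check is straightforward. The sketch below uses the common 6 × parameters × tokens heuristic for estimating training FLOPs; that heuristic and the model sizes shown are illustrative assumptions, not part of the Act.

```python
# A minimal arithmetic sketch of the 10^25 FLOP threshold above. The
# 6 * N * D rule of thumb (training FLOPs ~ 6 x parameters x tokens) and
# the example model sizes are assumptions for illustration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical models: (name, parameter count, training tokens)
for name, n, d in [
    ("mid-size 7B model", 7e9, 2e12),      # ~8.4e22 FLOPs -> below threshold
    ("frontier-scale model", 2e12, 1e13),  # ~1.2e26 FLOPs -> above threshold
]:
    print(f"{name}: ~{estimated_training_flops(n, d):.1e} FLOPs, "
          f"systemic risk presumed: {presumed_systemic_risk(n, d)}")
```

Note that under the Act the presumption attaches to cumulative training compute, so estimates like this are only a first-pass screen; providers near the threshold need an actual accounting of compute used.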

Timeline and Governance
The Act will apply in phases, starting six months after entry into force for prohibited AI practices and extending up to 36 months for certain high-risk systems. A centralized AI Office within the Commission will oversee compliance, especially for GPAI providers, and will handle complaints from downstream providers. Codes of practice will be developed to guide compliance efforts, with a presumption of conformity once harmonized standards are adopted.

In Practice

Providers (Developers): Prepare to update product design, documentation, and risk management to align with the AI Act’s detailed requirements.
Users (Deployers): Assess which of the AI systems you use qualify as high risk, adhere to the provider’s instructions for use, and implement human oversight where required.
GPAI Providers: Evaluate your training compute, comply with documentation and transparency obligations, and be prepared for additional measures if your model is deemed to pose systemic risk.