Abstract

On March 13, 2024, the European Parliament adopted the EU AI Act. We summarize the Act’s key features and its potential consequences.

On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act, a far-reaching, 400-plus-page piece of legislation that seeks to establish a uniform legal framework for the development and use of AI.

The Act reaches beyond the EU’s borders, regulating AI providers and users regardless of location, provided that the AI «output» is used in the EU. Although Switzerland has taken a wait-and-see approach to regulating AI, its position in Europe means that most AI developed or deployed in Switzerland will be subject to EU regulation, and Switzerland may be compelled, whether de facto or de jure, to adopt equivalent regulations.

The Act largely takes a risk-based approach to regulating AI. Closely tracking the draft leaked in January (discussed here), it bans certain practices that threaten fundamental rights, such as social scoring and the exploitation of vulnerabilities due to age or disability, and subjects «high-risk AI», such as biometric identification, credit scoring, and employee evaluations, to data governance, human oversight, impact assessments, and other requirements.

The Act departs from the risk-based approach in two key respects:

  • First, it imposes basic transparency requirements on nearly all individual users of AI. These measures seek to ensure that AI-generated content, including deepfakes, is labeled and that users know when they are interacting with AI.
  • Second, the Act singles out «general-purpose AI models» and «general-purpose AI systems» for specific regulation. These measures, which were added following the explosion of generative AI in 2023, break with the Act’s otherwise technology-agnostic, risk-based approach and may prove complicated to interpret and enforce as the technology evolves.

The penalties for violations are potentially substantial. Member states will establish their own enforcement rules, but the Act provides for fines for prohibited AI practices of up to EUR 35 million or, if the offender is a company, up to 7 percent of total worldwide annual turnover, whichever is higher.

Restrictions on «unacceptable» uses come into force later this year. The «general-purpose AI system» regulations follow in early 2025. The transparency requirements and the regulation of «high-risk» systems become effective in 2026.

The EU’s efforts to regulate the risks of AI are laudable. The EU is trying to set early boundaries for an emerging technology that may have a profound social and economic impact. The Act sets important guardrails, but it contains only limited provisions to support the AI tech sector in Europe. Time will tell whether the regulations stifle innovation and weaken Europe’s competitive position vis-à-vis the United States and China, leaving Europe further behind. To avoid these pitfalls, the new law will need constant re-evaluation to determine whether new developments require new approaches and safeguards.

If you have any questions about this bulletin, please reach out to your Homburger contact or to: