EU AI Act Analysis

Created: Mar 4, 2024 9:20 AM
Tags: Research, Legal, Regulations
Fines and Objectives

Fines are expected as follows:

  - Up to 7% of global annual turnover or €35m, whichever is higher, for prohibited AI violations.
  - Up to 3% of global annual turnover or €15m for most other violations.
  - Up to 1.5% of global annual turnover or €7.5m for supplying incorrect information.

Fines are capped for SMEs and startups.
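The tiers above can be sketched as a small calculation. This is a simplified illustration, assuming the "whichever is higher" rule applies to each tier and ignoring the reduced caps for SMEs and startups; the function name and tier labels are illustrative, not from the Act.

```python
def fine_cap(turnover_eur: float, tier: str) -> float:
    """Illustrative maximum fine for a given AI Act penalty tier.

    Each tier caps fines at the higher of a fixed amount and a
    percentage of global annual turnover (SME/startup caps not modeled).
    """
    tiers = {
        "prohibited": (0.07, 35_000_000),      # prohibited AI violations
        "other": (0.03, 15_000_000),           # most other violations
        "incorrect_info": (0.015, 7_500_000),  # supplying incorrect information
    }
    pct, fixed = tiers[tier]
    return max(pct * turnover_eur, fixed)

# A firm with €1bn global annual turnover committing a prohibited-practice
# violation: max(7% of €1bn, €35m) = €70m
print(fine_cap(1_000_000_000, "prohibited"))
```

For smaller firms the fixed amount dominates: at €100m turnover, 3% is only €3m, so the €15m cap applies for the "other" tier.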

Document objectives

  1. Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values.
  2. Ensure legal certainty to facilitate investment and innovation in AI.
  3. Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.
  4. Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Four-point summary

The AI Act classifies AI based on the level of risk it poses.

  1. Unacceptable risk refers to AI systems that are prohibited because they pose significant dangers, such as social scoring systems and manipulative AI.
  2. High-risk AI systems are those that are subjected to thorough conformity assessments. These assessments ensure that these systems meet strict safety, privacy, and transparency standards.
  3. Limited-risk AI systems are those that are subject to lighter transparency obligations. Developers and deployers of such systems must ensure that end-users are aware they are interacting with AI, for instance, with chatbots and deepfakes.
  4. Minimal-risk AI systems are generally unregulated. However, they may be subject to a code of conduct, especially as technology evolves and new risks arise.

The majority of obligations under the AI Act fall on providers (developers) of high-risk AI systems.

Providers, regardless of where they are based, must comply with the regulation if they intend to place high-risk AI systems on the EU market or if the system's output is used in the EU.

Users (deployers) of high-risk AI systems also have certain obligations, albeit fewer than providers. These obligations apply to users located in the EU, and to third-country users where the AI system's output is used in the EU.

General Purpose AI (GPAI):

  1. All GPAI model providers must provide technical documentation, instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.
  2. Free and open license GPAI model providers are only required to comply with copyright and publish the training data summary unless they present a systemic risk.
  3. All providers of GPAI models that present a systemic risk—open or closed—must also conduct model evaluations, adversarial testing, track and report serious incidents, and ensure cybersecurity protections.

Useful links

🔎 Parsed AI EU Act text and interpretation of articles - The AI Act Explorer

🛠️ Tool for initial check - EU AI Act Compliance Checker