Overview of the EU AI Act in 7 Points


With the European Parliament's approval of the AI Act on March 13, 2024, the European Union became the first jurisdiction to regulate the disruptive artificial intelligence revolution that promises to change our lives forever. Being first certainly carries weight and importance, setting an international benchmark for other countries (and many are already following the European approach).

The new regulation will apply to all public and private entities that produce tools with artificial intelligence (AI) technology for the European market, regardless of whether the companies are European or non-European: so even the Big Five will have to adapt if they want to continue operating in Europe. The new regulation is expected to have at least as much impact as the GDPR and the Machinery Directive.

1. The definition of "AI system" under the AI Act

The AI Act is primarily based on the definition of artificial intelligence that is stated in the legislative text.

The AI Act defines an "AI system" as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Article 3(1)).

2. Who will be affected by the implementation of the AI Act?

The Regulation will apply first of all to providers and deployers of AI systems, and also to other members of the AI value chain, such as importers, distributors, manufacturers, and authorized representatives.

Article 3 reserves a specific definition for each of these groups. For example, anyone who develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge, is considered a "provider". Likewise, any entity using an AI system under its authority is considered a "deployer", unless the AI system is used in the course of a personal, non-professional activity.

3. Exemptions

Apart from the areas outside the scope of European law, the regulation will not apply to AI systems used exclusively for military, defense, or national security purposes, to those developed solely for scientific research and development, or to those released under free and open-source licenses (unless they fall into the prohibited or high-risk categories or are subject to transparency obligations).

Research, testing, and development activities concerning AI systems are also excluded from the scope of the new rules, prior to their being placed on the market or put into service. There is also a household exemption for natural persons using AI systems as part of a purely personal, non-professional activity, again adopting a principle and mechanism similar to the GDPR's.

4. Risk classification

The AI Act classifies AI systems according to the risks associated not with the technology, but with its use. In this respect, AI systems are classified as:

  • minimal risk (e.g., systems used in video games, streaming platform recommendations, and e-commerce); no requirements are set here;
  • limited risk; transparency requirements apply here, meaning that users must be made aware that they are interacting with an AI system;
  • high risk; a conformity assessment will be needed before the system can be placed on the market;
  • unacceptable risk; such systems are prohibited.

Specifically prohibited are AI systems that use subliminal or manipulative techniques; exploit vulnerabilities due to age, disability, or social or economic situation; categorize individuals biometrically to infer attributes such as race, political opinion, or sexual orientation; score individuals based on social behavior or personality traits; or use real-time remote biometric identification in publicly accessible spaces (subject to narrowly defined law-enforcement exceptions).

Also prohibited are AI systems that use risk assessment tools to predict criminal behavior based solely on profiling or personality traits, or that create or expand facial recognition databases through untargeted scraping of facial images.

5. Compliance

Like the GDPR, the AI Act will become applicable in stages, allowing companies and public administrations to familiarize themselves with the regulation. It is nevertheless advisable to comply early, as the market tends to reward early compliance. That was not the case with the GDPR, which entered into force in 2016 but was largely ignored until it became applicable in May 2018.

However, the timeframe is short. Systems prohibited by the AI Act must be phased out within six months of its entry into force. General governance rules will apply to all companies and public administrations within 12 months. The regulation will be fully applicable within two years of entry into force, including the rules for high-risk systems.

6. Penalties

Member states will determine penalties within thresholds set by the Regulation, and in each tier the applicable maximum is the higher of a fixed amount and a share of the previous year's total annual worldwide turnover. For violations related to prohibited practices or non-compliance with data requirements, penalties can reach up to €35 million or 7% of turnover. For non-compliance with any other requirements or obligations of the regulation, including violation of the rules on general-purpose AI models, penalties can reach up to €15 million or 3% of turnover. Providing incorrect, incomplete, or misleading information to notified bodies and national competent authorities in response to a request can result in a penalty of up to €7.5 million or 1% of turnover.
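The "whichever is higher" mechanics above can be sketched in a few lines. This is a minimal illustration, not legal advice: the function name and the example turnover figure are invented for the sketch, and the tier figures are the maximum ceilings stated above (actual fines are set case by case by national authorities, and different rules may apply to SMEs).

```python
def max_fine_eur(fixed_ceiling_eur: float, turnover_pct: float,
                 worldwide_turnover_eur: float) -> float:
    """Maximum possible fine for a violation tier: the higher of the fixed
    ceiling and the given percentage of the previous year's total annual
    worldwide turnover."""
    return max(fixed_ceiling_eur, turnover_pct * worldwide_turnover_eur)

# Prohibited-practice tier (EUR 35M / 7%) for a hypothetical EUR 1bn turnover:
# 7% of 1bn = 70M, which exceeds the 35M fixed ceiling.
print(max_fine_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# For a smaller company (EUR 100M turnover), the fixed ceiling dominates:
print(max_fine_eur(35_000_000, 0.07, 100_000_000))    # 35000000.0
```

The same function covers the other tiers by substituting the €15M/3% or €7.5M/1% figures.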

7. What should organizations do?

Companies should take an inventory of their software that uses AI systems in order to identify any high-risk systems. An AI risk assessment is then needed to determine the actions required to comply with the regulation.

For low-risk systems, providers can voluntarily adopt the requirements for trustworthy AI and adhere to codes of conduct. This approach aims to balance innovation and risk mitigation while avoiding excessive regulatory burdens on low-risk technologies.

In contrast, organizations running high-risk AI systems will face more stringent requirements. Before placing such systems on the market or putting them into service, they must undergo a conformity assessment demonstrating that the systems meet the mandatory requirements for trustworthy AI set out in the regulation.

For high-risk AI systems to be considered safe, reliable, and transparent, requirements covering data quality, documentation, traceability, human oversight, accuracy, cybersecurity, and system robustness must be met.
