EU Council approves Artificial Intelligence Act


Dragos Cernescu

22 May 2024 / 5 Min Read

 

The European Council has approved the Artificial Intelligence (AI) Act, aiming to standardise AI regulations through a risk-based approach.

The legislation is designed to address the varying levels of risk posed by AI systems, with stricter rules for higher-risk applications. The act is the first global regulation of its kind and is expected to set a precedent for AI governance worldwide.

The AI Act intends to promote the development and adoption of safe, reliable AI systems within the EU’s single market, applicable to both the private and public sectors. It aims to balance the advancement of AI technology with the protection of the fundamental rights of EU citizens. The regulation covers areas within EU law, exempting systems used solely for military, defence, and research purposes.

 

What are the key provisions?

Some of the most important provisions of the AI Act include:

  • Risk-based classification: AI systems are categorised by risk level. Low-risk systems face minimal transparency requirements, while high-risk systems must meet stringent conditions to enter the EU market. AI systems deemed to present unacceptable risks, such as cognitive behavioural manipulation and social scoring, are banned. 
  • Prohibitions: The law bans AI use for predictive policing based on profiling and for biometric categorisation by race, religion, or sexual orientation. 
  • General-purpose AI models: Models with no systemic risks face limited requirements, primarily around transparency, while those with systemic risks must adhere to stricter regulations. 

Enforcement of the AI Act will be managed by an AI Office within the European Commission, which will oversee implementation of the rules, supported by a scientific panel of independent experts. An AI Board composed of member state representatives will assist in the consistent application of the Act, while an advisory forum will provide technical expertise.

Fines for non-compliance are set as a percentage of the offending company’s global annual turnover or a fixed amount, whichever is higher, with proportionate fines for small and medium-sized enterprises (SMEs) and start-ups. Before deploying high-risk AI systems, public service entities must assess their impact on fundamental rights. The regulation also mandates increased transparency in the development and use of high-risk AI systems, which must be registered in the EU database, and requires disclosure when emotion recognition technology is used.

The AI Act encourages innovation through regulatory sandboxes, allowing controlled testing of AI systems in real-world conditions. This framework is designed to facilitate evidence-based regulatory learning and support the development of new AI technologies.

