EU Council approves Artificial Intelligence Act

Wednesday 22 May 2024 12:08 CET | News

The European Council has approved the Artificial Intelligence (AI) Act, aiming to standardise AI regulations through a risk-based approach.


The legislation is designed to address the varying levels of risk posed by AI systems, with stricter rules for higher-risk applications. This act is the first global regulation of its kind and is expected to set a precedent for AI governance worldwide.

The AI Act intends to promote the development and adoption of safe, reliable AI systems within the EU’s single market, applicable to both private and public sectors. It aims to balance the advancement of AI technology with the protection of fundamental rights of EU citizens. The regulation covers areas within EU law, exempting systems used solely for military, defence, and research purposes.


What are the key provisions?

Some of the most important provisions of the AI Act include:

  • Risk-based classification: AI systems are categorised by risk levels. Low-risk systems face minimal transparency requirements, while high-risk systems must meet stringent conditions to enter the EU market. AI systems deemed to present unacceptable risks, such as cognitive behavioural manipulation and social scoring, are banned. 
  • Prohibitions: the law bans AI use for predictive policing based on profiling and for biometric categorisation by race, religion, or sexual orientation. 
  • General-purpose AI models: models with no systemic risks will have limited requirements, primarily around transparency, while those with systemic risks must adhere to stricter regulations. 

Enforcement of the AI Act will be managed by an AI Office within the European Commission, which will oversee implementation of the rules, supported by a scientific panel of independent experts. An AI Board composed of member state representatives will assist in the consistent application of the Act, while an advisory forum will provide technical expertise.

Fines for non-compliance are based on a percentage of the offending company’s global annual turnover or a set amount, whichever is higher, with proportionate fines for small and medium-sized enterprises (SMEs) and start-ups. Before deploying high-risk AI systems, public service entities must assess their impact on fundamental rights. The regulation mandates increased transparency in the development and use of high-risk AI systems, requiring such systems to be registered in the EU database and requiring disclosure when emotion recognition technology is used.

The AI Act encourages innovation through regulatory sandboxes, allowing controlled testing of AI systems in real-world conditions. This framework is designed to facilitate evidence-based regulatory learning and support the development of new AI technologies.



Keywords: artificial intelligence, regulation, compliance, high risk industry
Categories: Fraud & Financial Crime
Companies: European Council
Countries: Europe