Voice of the Industry

The EU AI Act: a comprehensive overview and its far-reaching implications

Friday 28 June 2024 09:30 CET | Editor: Mirela Ciobanu

To help organisations understand what the EU AI Act means and prepare for the incoming regulation, Olivier Proust, Partner at Fieldfisher, provides a comprehensive overview and discusses its far-reaching implications.

The European Union Artificial Intelligence Act (EU AI Act) represents a monumental stride in regulating the use of artificial intelligence (AI). This legislative framework aims to ensure the ethical and safe deployment of AI within the EU, positioning itself as a global benchmark for AI governance. The Act is designed to address the rapid advancements in AI, balancing innovation with the need for robust protections against potential risks.

 

What the EU AI Act entails and its game-changing nature

The EU AI Act establishes a legal framework to regulate the development, import, distribution, and use of AI systems and general-purpose AI (GPAI) models in the European Union. It introduces a risk-based approach to AI regulation, mandating different levels of regulatory scrutiny based on the potential risks posed by AI applications. This tiered approach is central to the Act, ensuring that more stringent requirements are applied to high-risk AI systems while promoting innovation in low-risk areas.

  • Global leadership in AI regulation: The Act sets a global standard, influencing AI governance beyond Europe. As one of the first comprehensive legal frameworks for AI, it is likely to inspire similar regulations worldwide, creating a ripple effect in global AI policy.

  • Ethical and safe AI: The Act underscores the importance of ethical considerations in AI deployment, including transparency, accountability, and human oversight. This emphasis helps build public trust in AI technologies.

  • Innovation-friendly environment: While the Act imposes stringent requirements on high-risk AI systems, it also provides a supportive environment for innovation in low-risk areas. This balanced approach encourages the development of new AI applications while ensuring safety and ethical compliance.


Who will be affected by the AI Act?

The EU AI Act impacts a wide range of stakeholders both within and beyond European borders. The primary groups affected include:

  1. AI providers, importers, distributors, and product manufacturers: Companies that develop, import, or supply AI systems and GPAI models within the EU, as well as those who integrate AI as a safety component in products, must comply with the Act’s requirements. This includes ensuring that their AI systems meet the stipulated standards for safety, transparency, and accountability.

  2. Deployers of AI systems: Organisations using AI systems, particularly those deploying high-risk AI systems or GPAI models, are subject to specific obligations. Deployers must ensure that their AI systems are used in compliance with the Act and that appropriate safeguards are in place.

  3. Non-EU entities: The extraterritorial nature of the Act means that entities established outside the EU that offer AI systems or services within the EU must also comply with its provisions. Providers of AI systems or GPAI models are caught by the AI Act if they place their products on the EU market, irrespective of whether they are established in the EU or in a third country. Likewise, providers and deployers of AI systems established outside the EU must comply with the AI Act whenever the output of their AI systems is used in the EU. The sketch after this list illustrates this scoping test.
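
To make the scoping exercise concrete, the criteria above can be expressed as rough decision logic. The sketch below is a simplification for illustration only (the role names and boolean tests compress the Act's detailed definitions) and is not a substitute for legal analysis.

```python
# Illustrative, simplified territorial-scope test under the AI Act.
# Real scoping requires case-by-case legal analysis; this is only a sketch.

from dataclasses import dataclass

@dataclass
class Operator:
    role: str                   # "provider", "deployer", "importer", "distributor"
    established_in_eu: bool     # is the organisation established in the EU?
    places_on_eu_market: bool   # does it place AI systems/GPAI models on the EU market?
    output_used_in_eu: bool     # is the system's output used in the EU?

def in_scope(op: Operator) -> bool:
    """Rough approximation of when the AI Act applies to an operator."""
    if op.established_in_eu:
        return True
    # Extraterritorial reach: non-EU providers placing products on the EU market.
    if op.role == "provider" and op.places_on_eu_market:
        return True
    # Non-EU providers and deployers whose systems' output is used in the EU.
    if op.role in ("provider", "deployer") and op.output_used_in_eu:
        return True
    return False

# Example: a non-EU provider selling an AI system to EU customers is in scope.
print(in_scope(Operator("provider", False, True, False)))  # True
```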


Classification of AI systems based on risk levels

The EU AI Act classifies AI systems into three main categories based on their risk levels: prohibited AI practices, high-risk AI systems, and other AI systems (which include limited- and minimal-risk systems). A simplified sketch of this taxonomy follows the list below.

  1. Prohibited AI practices: These are areas where the European legislator deems the use of AI systems or general-purpose AI models unacceptable due to their potential to cause significant harm. Prohibited practices include using AI to manipulate human behaviour to the detriment of individuals (e.g., through subliminal techniques), to exploit the vulnerabilities of specific groups (e.g., children or persons with disabilities), to infer the emotions of individuals in the workplace, or to enable social scoring by public and private entities.

  2. High-risk AI systems: The AI Act defines two categories of high-risk AI, which determine the specific regulatory requirements and compliance obligations for different types of AI systems.

    • Under Annex I, AI systems that are integrated as safety components in products, or are products themselves, and are caught by specific EU product safety laws, will be deemed high-risk. These include products and systems in areas such as medical devices, vehicles, machinery, toys, and aviation. For these AI systems, the compliance process is integrated into the existing regulatory framework that applies to those respective product categories. Manufacturers must ensure that their AI components comply with both the specific AI requirements outlined in the EU AI Act and the relevant product safety legislation. This involves following the established conformity assessment procedures for the product, which now include AI-specific considerations.

    • Annex III lists AI systems that are considered high-risk due to their significant impact on the fundamental rights, health, and safety of individuals. These standalone AI systems span various sectors and applications, including real-time biometric identification in public spaces; critical infrastructure (such as transport, gas and water supply, and digital infrastructure); education and vocational training; employment and worker management (including recruitment, employee management, and termination decisions); and access to essential services (such as creditworthiness assessment or public benefits). For these standalone high-risk AI systems, compliance with the EU AI Act involves more direct and specific obligations, such as risk management, data governance, transparency, human oversight, conformity assessment, and robust documentation.

  3. Other AI systems: This category includes AI systems with limited or minimal risk. The EU AI Act mandates transparency requirements for certain AI systems and general-purpose AI models to ensure users are adequately informed about their operation and potential impacts. For example, users should be informed when they are interacting directly with an AI system (e.g., a chatbot) unless this is obvious from the context. Users must also be made aware when they are accessing content (text, video, sound) that has been artificially generated or manipulated by AI, or whenever emotion recognition systems are being used.
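
The tiered structure lends itself to a simple internal inventory model. The sketch below is purely illustrative: the class and field names are hypothetical, and the assignment of a system to a tier remains a legal judgement that the code merely records.

```python
# Hypothetical inventory model mirroring the AI Act's risk tiers.
# Assigning a tier is a legal judgement; this sketch only records the outcome.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # banned practices (e.g. social scoring)
    HIGH_RISK = "high_risk"        # Annex I safety components or Annex III use cases
    LIMITED_RISK = "limited_risk"  # transparency duties (e.g. chatbots, AI content)
    MINIMAL_RISK = "minimal_risk"  # no specific obligations under the Act

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    transparency_notice_required: bool = False  # e.g. "you are interacting with an AI"

inventory = [
    AISystemRecord("support-chatbot", "customer service", RiskTier.LIMITED_RISK,
                   transparency_notice_required=True),
    AISystemRecord("credit-scoring-model", "creditworthiness assessment",
                   RiskTier.HIGH_RISK),
]

# List the systems that attract the Act's heaviest obligations.
high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH_RISK]
print(high_risk)  # ['credit-scoring-model']
```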


Conformity assessment process and compliance requirements

The conformity assessment process is a critical component of the EU AI Act, ensuring that AI systems meet the required standards before they can be deployed. The process involves several key steps:

  1. Risk management system: Organisations must establish a comprehensive risk management system that identifies, assesses, and mitigates risks associated with their AI systems. This system should be continuously updated to reflect new information and developments.

  2. Data governance: High-risk AI systems must comply with strict data governance standards. This includes ensuring the quality and integrity of data used for training, validation, and testing AI models. Organisations must implement measures to prevent biases and ensure fairness.

  3. Technical documentation: Detailed technical documentation must be maintained for high-risk AI systems. This documentation should include information on the system’s design, development, deployment, and performance, as well as any risk management measures implemented.

  4. Transparency and information provision: Users must be informed about the capabilities and limitations of high-risk AI systems. This includes clear information on how the system operates and any potential risks involved.

  5. Human oversight: High-risk AI systems must incorporate human oversight mechanisms to ensure that decisions made by the AI can be monitored and intervened in by humans when necessary.

  6. Conformity assessment procedures: High-risk AI systems must undergo conformity assessment procedures before being placed on the market. This can involve self-assessment by the provider, third-party assessment, or a combination of both, depending on the specific requirements of the system. A simplified checklist sketch follows this list.
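
As a purely illustrative aid, the six steps above could be tracked internally as a per-system compliance checklist. The field names below are hypothetical and are not terms defined by the Act; the sketch simply shows one way to keep a gap list per high-risk system.

```python
# Hypothetical per-system checklist covering the six conformity steps above.
# Field names are illustrative only and are not terms defined by the Act.

from dataclasses import dataclass, fields

@dataclass
class ConformityChecklist:
    risk_management_system: bool = False    # step 1
    data_governance: bool = False           # step 2
    technical_documentation: bool = False   # step 3
    transparency_information: bool = False  # step 4
    human_oversight: bool = False           # step 5
    conformity_assessment: bool = False     # step 6

    def outstanding(self) -> list[str]:
        """Return the names of steps not yet completed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = ConformityChecklist(risk_management_system=True, data_governance=True)
print(checklist.outstanding())
# ['technical_documentation', 'transparency_information',
#  'human_oversight', 'conformity_assessment']
```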

Organisations in the banking and financial services sector face several specific requirements under the EU AI Act due to the high-risk nature of many AI applications used in these industries. To avoid duplication, the obligation to implement certain aspects of a quality management system under the Act can be fulfilled by the provider complying with the rules on internal governance arrangements or processes under EU financial services law. Financial institutions acting as deployers are likewise deemed to have fulfilled their monitoring and record-keeping obligations if they have complied with the rules on internal governance arrangements, processes, and mechanisms that apply to the financial sector under EU law.

Banks acting as deployers of AI are also required to carry out a Fundamental Rights Impact Assessment whenever they evaluate the creditworthiness of their clients or establish their credit score.

 

Practical steps for organisations to prepare for compliance

Organisations must take proactive steps to ensure compliance with the EU AI Act. Here are some practical steps to help prepare for the new regulatory landscape:

  1. Conduct a comprehensive audit: Perform a thorough audit of existing AI systems to identify those that fall within the scope of the Act. This includes assessing the risk level of each system and determining the applicable regulatory requirements; a simple gap-analysis sketch follows this list.

  2. Establish an AI governance team: Create a dedicated compliance team responsible for overseeing adherence to the EU AI Act. This team should include members with expertise in AI, legal, compliance, IT, risk management, and data governance.

  3. Implement risk management frameworks: Develop and implement robust risk management frameworks tailored to the specific requirements of the AI systems in use. This includes continuous monitoring and updating of risk management practices.

  4. Enhance data governance practices: Ensure that data used for AI systems is of high quality, unbiased, and compliant with the Act’s data governance standards. Implement measures to regularly review and improve data practices.

  5. Develop detailed documentation: Create and maintain comprehensive technical documentation for all high-risk AI systems. This documentation should be easily accessible and regularly updated to reflect any changes or new information.

  6. Ensure transparency and communication: Develop clear communication strategies to inform users about the capabilities, limitations, and risks associated with AI systems (including employees within the organisation). This includes providing easily understandable information and ensuring users are aware when they are interacting with AI.

  7. Incorporate human oversight: Design AI systems with built-in human oversight mechanisms. Ensure that humans can intervene and override AI decisions when necessary, maintaining accountability and control.

  8. Prepare for conformity assessments: Familiarise your organisation with the conformity assessment procedures relevant to your AI systems. Engage with third-party assessors if required and ensure that all necessary documentation and evidence are prepared for the assessment process.
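
For the audit in step 1, organisations often start from a machine-readable register of their AI systems. The sketch below is hypothetical throughout (field names, tier labels, and obligation families are simplifications of this article's summary, not terms from the Act) and shows a minimal gap-analysis starting point.

```python
# Hypothetical gap-analysis helper for an AI Act readiness audit (step 1 above).
# Maps each system's assessed risk tier to the broad obligation families
# summarised in this article. Illustrative only, not a compliance tool.

OBLIGATIONS_BY_TIER = {
    "prohibited": ["discontinue use"],
    "high_risk": ["risk management", "data governance", "technical documentation",
                  "transparency", "human oversight", "conformity assessment"],
    "limited_risk": ["user-facing transparency notices"],
    "minimal_risk": [],
}

def audit(register: list[dict]) -> dict[str, list[str]]:
    """Return the applicable obligation families for each registered system."""
    return {entry["name"]: OBLIGATIONS_BY_TIER[entry["tier"]] for entry in register}

register = [
    {"name": "hr-screening-model", "tier": "high_risk"},
    {"name": "marketing-copy-generator", "tier": "minimal_risk"},
]
for system, obligations in audit(register).items():
    print(system, "->", obligations or ["no specific AI Act obligations"])
```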


Enforcement

Under the EU AI Act, the competent national regulator for organisations in the financial services sector is typically the national financial supervisory authority in each EU member state. These authorities are responsible for overseeing financial institutions' compliance with the Act's requirements, ensuring that AI systems used in the sector adhere to the prescribed standards for safety, transparency, and accountability.

 

Conclusion

The EU AI Act represents a transformative approach to AI regulation, establishing a comprehensive framework to ensure the ethical and safe deployment of AI technologies. By categorising AI systems based on their risk levels and imposing stringent requirements on high-risk applications, the Act aims to protect individuals while fostering innovation. Organisations affected by the Act must take proactive steps to ensure compliance, including conducting audits, establishing compliance teams, and implementing robust risk management and data governance practices. As the EU AI Act sets a global precedent, it is essential for organisations worldwide to understand its implications and prepare for the new regulatory landscape.


About Olivier Proust

Olivier is a partner in Fieldfisher's Tech & Data department, specialising in GDPR/AI compliance, cybersecurity, and data protection. He advises on data transfers, crisis management, and privacy/AI strategies. He holds an IAPP certification, a degree in Artificial Intelligence from Oxford Saïd Business School and is registered with the Paris and Brussels bars. He is fluent in French and English.

 


About Fieldfisher Belgium


Fieldfisher Belgium is a leading law firm located in Brussels, providing expert legal services across a broad spectrum of business law. The Brussels office delivers strategic, client-focused legal solutions, leveraging the firm's global network.




Keywords: artificial intelligence, compliance, data, data privacy, risk scoring, data governance, risk management
Categories: Banking & Fintech
Countries: Europe