
Dr. Dennis-Kenji Kipker explains why European AI regulation falls short

Wednesday 15 January 2025 08:14 CET | Editor: Mirela Ciobanu | Interview

Anyone who believes that the EU's AI Act will become an international export hit is naive, says Dennis-Kenji Kipker.


It is the responsibility of the industry to decide how and with whom it enters strategic collaborations in the future. During a lively presentation at Cyberevolution, an event organised by KuppingerCole in December 2024, Dr. Dennis-Kenji Kipker presented the duality of AI as both saviour and threat. On the one hand, we have:

‘Generative AI has the potential to change the world in ways that we can’t even imagine. It has the power to create new ideas, products, and services that will make our lives easier, more productive, and more creative. It also has the potential to solve some of the world’s biggest problems. The future of generative AI is bright, and I’m excited to see what it will bring’, according to Bill Gates.

And on the other, cybercriminals are creating a darker side of AI. Today, anyone with malicious intent can develop and deploy malware in a very short time and cause devastating damage to companies of any size.

An interesting point stressed by Dr. Dennis-Kenji Kipker is that the duality of technology (the fact that it can be used for both good and bad ends) is a concept as old as innovation itself. Take cryptocurrencies, for instance: they decentralise the financial system and facilitate cross-border transactions. On the other hand, the same developments are also exploited for money laundering, the financing of illegal activities, and tax avoidance.

Nevertheless, to foster responsible artificial intelligence development and deployment in the EU, on 1 August 2024, the European Artificial Intelligence Act (AI Act) entered into force.


Could you please explain in a nutshell what the EU AI Act is about? If I were a financial institution, what should be my top concern related to this piece of legislation?

The EU AI Act is the first regulation in the world to take a holistic approach to AI safety and security. It categorises AI into different risk levels, ensuring that AI systems are properly managed according to their potential impact. Beyond just regulating the technology itself, the Act also addresses how companies manage AI internally, with a focus on responsible investments and the development of AI systems within a secure and safe framework.

For a financial institution, your top concern regarding the AI Act should be ensuring compliance with these regulatory requirements, especially considering the AI systems you deploy and the risks they pose. Financial institutions often work with high-risk AI applications, particularly in areas like customer data analysis, fraud detection, and algorithmic trading. A natural person must always make the final decision in critical scenarios, such as determining whether to terminate a bank account or approve a credit line.
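To make the human-in-the-loop requirement concrete, here is a minimal sketch (in Python) of how such a decision gate might look in practice. The decision types, identifiers, and routing logic are hypothetical illustrations for this article, not requirements taken verbatim from the AI Act:

    from dataclasses import dataclass

    # Decisions treated as critical, i.e. never executed without a human.
    # The exact set shown here is a hypothetical illustration.
    CRITICAL_DECISIONS = {"terminate_account", "approve_credit_line"}

    @dataclass
    class Recommendation:
        decision_type: str
        customer_id: str
        model_score: float  # e.g. a fraud probability produced by an AI system

    def route(rec: Recommendation) -> str:
        """Route an AI recommendation; critical actions always go to a human."""
        if rec.decision_type in CRITICAL_DECISIONS:
            # Queue for review so that a natural person makes the final call.
            return f"queued for human review: {rec.decision_type} ({rec.customer_id})"
        return f"auto-executed: {rec.decision_type} ({rec.customer_id})"

    print(route(Recommendation("terminate_account", "C-1042", 0.97)))
    print(route(Recommendation("send_payment_reminder", "C-1042", 0.12)))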

The AI Act’s regulations will require institutions to demonstrate transparency, accountability, and robust risk management practices to prevent potential misuse and safeguard customer interests.


Can we also see this move as a geopolitical strategy adopted by Europe to stay ahead of other continents?

Yes, absolutely. The EU’s approach to AI regulation can definitely be seen as part of a broader geopolitical strategy to stay ahead of other regions. The European Union has a long history of cybersecurity regulation, stretching back nearly 10 years. The AI Act is simply the latest extension of this tradition. It’s not just about regulating AI; it’s also a demonstration of how the EU is positioning itself as a global leader in ensuring the safe and ethical use of emerging technologies. Many countries are watching how the EU handles AI regulation—whether we succeed or fail in implementing these measures will set an example for the rest of the world.

For comparison, in the US, there has been some movement toward AI regulation, with President Joe Biden initially pushing for some regulatory measures. However, with the shift in policy under President Donald Trump, there’s now a greater focus on deregulation. This contrast highlights that very few countries are likely to adopt a comprehensive, holistic approach to AI regulation like the one we’re seeing in the European Union.


Referring strictly to cybercrime: AI is a very complex technology, used for both good and bad purposes. Since the threats and risks posed by AI are so complex, how can businesses cope with them? What do AI-based attacks look like?

While the EU is making strides in regulating AI, cybercriminals are also exploiting its capabilities. For example, WormGPT—a malicious alternative to ChatGPT—has been described by its creator as the ‘biggest enemy’ of ChatGPT. Unlike ethical AI models like OpenAI’s ChatGPT and Google Bard, which actively combat misuse, WormGPT enables illegal activities like fabricating phishing emails or generating malicious code. This tool, operating without ethical safeguards, empowers even novice cybercriminals to launch sophisticated attacks quickly and at scale.

Such threats are particularly alarming for financial institutions, prime targets for cybercriminals. These institutions hold both financial assets and sensitive personal data, making them doubly attractive. This underscores the urgent need for robust AI governance and cybersecurity measures.


Given your extensive experience working with institutions on AI deployment, what best practices have you observed? Are these institutions adequately prepared for implementing AI?

Many institutions are not adequately prepared for the challenges associated with AI. In Germany, for instance, statistics show that while a significant number of people rely on AI, fewer than 50% of institutions have implemented an AI policy. This lack of governance is highly risky. Employees need to understand the importance of double-checking AI-generated results and be aware that they cannot simply copy and paste sensitive information into AI tools or large language models. Without clear policies, there’s no control over where the data goes, how it is collected, or how it may be reused, creating serious privacy and security concerns.

I would strongly advise all companies—especially those in critical infrastructure sectors like banking and finance—to establish comprehensive AI policies before deploying artificial intelligence in their operations.

On the other hand, AI can also be a powerful tool for defending against cybercrime. For instance, specialised models can automate the detection of anomalies in IT networks. If a ransomware group breaches your system, AI can help identify suspicious data patterns. Additionally, AI can enable automated responses, such as shutting down parts of a network upon detecting anomalies, to contain potential threats. This dual role of AI—both as a risk and as a defensive tool—highlights the importance of having robust policies and cybersecurity frameworks in place before fully integrating AI into critical systems.
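As a rough illustration of the defensive use of AI described above, the following sketch trains an unsupervised anomaly detector on synthetic network-flow features using scikit-learn's IsolationForest. The feature set, traffic values, and containment hook are assumptions made for the example, not a reference to any specific product:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic "normal" traffic: bytes sent, packet count, distinct ports.
    normal = rng.normal(loc=[500, 40, 3], scale=[100, 10, 1], size=(1000, 3))

    # A few exfiltration-like outliers: huge transfers across many ports.
    attacks = rng.normal(loc=[50000, 900, 40], scale=[5000, 100, 5], size=(5, 3))

    # Fit the detector on traffic assumed to be benign; predict() then
    # returns -1 for observations that look anomalous.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal)

    for flow in np.vstack([normal[:3], attacks]):
        if model.predict(flow.reshape(1, -1))[0] == -1:
            # In production this hook might isolate a host or alert the SOC;
            # here it only prints, as a stand-in for an automated response.
            print(f"ANOMALY: flow {flow.round(1)} flagged for containment")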


Regarding AI technology, companies need to be able to explain how they arrive at their results. How do you view the importance of explainability, auditability, and addressing bias in AI?

That's a big problem. Financial institutions must create and train their own AI models rather than relying on external or generalised AI systems. By doing so, they can better understand and control decision-making processes, avoiding the ‘black box’ problem where decisions are made by AI in ways that are opaque and hard to explain. This is particularly important when dealing with open or large language models, which often draw on vast and varied datasets—including low-quality data from the web. These models can easily introduce bias into their decisions, which is a critical concern.

To mitigate this, I would strongly advise companies to use high-quality, proprietary datasets and train their AI systems on these data. This ensures better accuracy, relevance, and fairness. In the financial sector, data is everything. The sector already holds vast amounts of high-quality data, which provides a significant advantage. However, this also presents risks, particularly around data privacy. For example, contractual data collected for specific, pre-defined purposes cannot simply be repurposed for training AI systems without breaching data privacy regulations.

While data privacy issues are closely tied to AI practices, they aren’t directly addressed by the current AI regulations like the EU AI Act. This makes it even more critical for financial institutions to proactively address data privacy compliance alongside their AI development efforts. By balancing these concerns, institutions can develop AI systems that are not only effective but also ethical and aligned with regulatory expectations.
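To illustrate the auditability point concretely, the sketch below trains a deliberately simple, inspectable credit model on purely synthetic data using scikit-learn; the feature names and the data-generating rule are hypothetical. Unlike a black-box system, a linear model's coefficients state exactly how each input pushes the decision:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    features = ["income", "debt_ratio", "missed_payments"]

    # Purely synthetic applicants: approvals driven up by income and down
    # by missed payments; debt_ratio is noise in this toy setup.
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)

    # Each coefficient is directly auditable: it says how a (standardised)
    # feature shifts the log-odds of approval.
    for name, coef in zip(features, model.named_steps["logisticregression"].coef_[0]):
        print(f"{name:>16}: {coef:+.2f}")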


What do you love most about working with AI? What excites you the most about this field?

The most exciting aspect of working with AI is that we are truly approaching new frontiers. We’re entering uncharted territory where we can’t always predict the outcomes of our research. This uncertainty is something we haven’t experienced in a long time, and while it holds great promise, it also presents risks.

The emergence of large language models, especially since the end of 2022 with the release of ChatGPT, the first widely accessible model of its kind, has led to many positive changes. For instance, tasks that people traditionally disliked are now easier to perform, allowing us to focus on more meaningful and impactful aspects of our work.

However, with these advancements, we’re also encountering new dangers that we can’t fully foresee yet. We’re at a point where we need to carefully evaluate the use of AI. While we’ve explored initial use cases, we’re now facing the first signs of potential risks.

This tension between the immense possibilities and the unknown risks is what keeps me passionate about AI. It’s not just about regulation – although that’s certainly a part of it. The AI Act, for example, is a form of compliance and risk management, similar to other technologies we've encountered. Ultimately, how we navigate this balance will shape the future of AI.


What do you think of Cyberevolution so far?

What I think stands out the most about Cyberevolution is the diversity of people and perspectives present. Cybersecurity is such a broad topic, and here we see that it’s not just about cybersecurity, data privacy, or AI in isolation – all of these areas must work together. Understanding the intersections between them is key to truly grasping the complexities of the field.

I think this is similar to the role of a C-level executive, especially the Chief Information Officer (CIO) or Chief Security Officer (CSO). They need a strong technical background to manage risk and security effectively, but just as importantly, they must have communication skills to convey these complex issues to the broader company. Additionally, business acumen is crucial in translating security into a business enabler that drives growth.

So, in many ways, Cyberevolution highlights the need for a well-rounded approach, where technical, communicative, and business skills all come together.


About Dr. Dennis-Kenji Kipker

Prof. Dr. Dennis-Kenji Kipker is one of the leading minds in cybersecurity. He works as Scientific Director of the cyberintelligence.institute in Frankfurt am Main, as a Member of the Board of Directors of the strategy consulting company CERTAVO AG, and as Visiting Professor at the private Riga Graduate School of Law in Latvia, which was founded by the Soros Foundation.

There he conducts research at the interface of law and technology, covering cybersecurity, corporate strategy, and digital resilience in the context of global crises, with a particular focus on Chinese and US IT law.

In 2024, Professor Kipker was appointed to gematik's new Digital Advisory Board, the highest body in Germany that helps decide on the digitalisation of national health insurance providers. He also volunteers for the World Justice Project in the USA.




Keywords: artificial intelligence, generative AI, online security, fraud prevention, EU AI Act, data
Categories: Banking & Fintech
Countries: Europe