Mirela Ciobanu
15 Jul 2025 / 5 Min Read
The Paypers sat down at Money20/20 with Daniele Tonella, CTO and Management Board Member at ING, to discuss fraud trends, cybersecurity challenges, and how banks can future-proof their strategy.
I joined ING about nine months ago as Chief Technology Officer and board member. In my role, I oversee around 20,000 engineers across areas like software development, infrastructure, cybersecurity, architecture, and senior engineering. If it involves code or hardware, it falls under my remit.
My passion for technology started early. I began coding when I was 10 years old. This was before the internet, so learning was a very different experience. There were no tutorials to follow, no forums to ask questions, just you, the computer, and a lot of trial and error. It taught me persistence and problem-solving in a way that still shapes how I think today.
Interestingly, I initially chose to study mechanical engineering—almost out of teenage overconfidence. I thought I already knew computers well enough, so I decided to try something else. But after a couple of years, I realised my true passion was still in technology. I completed my degree, but I knew I needed to get back to where I belonged.
That path eventually led me into consulting, and from there into technology strategy. Over time, I gravitated toward roles that brought me closer to engineering, cybersecurity, and fraud prevention, areas where technology makes a real, tangible impact on people’s lives. That’s still what drives me today.
As CTO, I directly own and manage cybersecurity at ING, and I also provide the technological foundations for fraud management, Know Your Customer (KYC), and Anti-Money Laundering (AML) functions. These are separate teams with distinct responsibilities, but they all intersect around risk and protection.
When it comes to cybersecurity, the biggest challenge today is the growing sophistication of attackers, particularly with the rise of AI. We’re seeing two major trends here. First, AI enables things like deepfakes and advanced document forgery with frightening realism. That’s raising the bar on how we validate identity and trust.
Second, AI allows attackers to more effectively scan and understand our environments: fingerprinting systems, mapping out vulnerabilities, and launching more precise and coordinated attacks.
At the core, managing cybersecurity is about managing complexity. The Hollywood idea of a lone hacker slipping through a secret backdoor is mostly a myth. Real-world breaches often result from long chains of small, seemingly minor oversights. So, one of our main challenges is staying disciplined on the basics: patching, segmentation, and access controls, because in complex systems, small cracks can lead to big consequences.
There’s also an economic side to this. Cybersecurity is a market, whether we like it or not. Attackers have business models. They invest money, they buy services like DDoS capacity or exploit kits on the dark web. That means our job isn’t just to block attacks; it’s to make them too expensive to be worth it. If we can drive up the cost of a successful attack, we can reduce the incentive.
One of the main challenges in fraud prevention is that the models we use to detect suspicious activity can’t be static; they need to continuously learn. Fraudsters are always evolving their tactics, so our detection systems need to evolve just as fast. That means the models must keep training on new data, adapt to emerging patterns, and improve in near real-time.
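To make 'continuously learning' a little more concrete, here is a minimal sketch of an incrementally trained fraud classifier: instead of being retrained from scratch, the model is updated batch by batch as newly labelled transactions arrive. This is an illustration only; it assumes scikit-learn, and the features, labels, and data are synthetic stand-ins rather than anything resembling ING's actual models.

```python
# Minimal sketch of an incrementally updated fraud classifier (illustrative only).
# Assumes scikit-learn and numpy; features and labels are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")  # logistic regression trained with SGD

def next_labelled_batch(size=512):
    """Stand-in for a stream of recently investigated transactions."""
    X = rng.normal(size=(size, 3))              # e.g. amount, velocity, device-risk score
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)   # 1 = confirmed fraud (synthetic rule)
    return X, y

# The first call must declare the full set of classes; later calls just update weights.
X, y = next_labelled_batch()
model.partial_fit(X, y, classes=np.array([0, 1]))

for _ in range(10):                  # in production this loop effectively never stops
    X, y = next_labelled_batch()
    model.partial_fit(X, y)          # adapt to the newest patterns without full retraining

print("fraud probability:", model.predict_proba(X[:1])[0, 1])
```

Real systems typically layer drift monitoring, champion/challenger evaluation, and human review of flagged cases on top of a loop like this, but the core idea is the same: the model keeps moving with the data.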
The second big challenge is automation. KYC and AML processes can be incredibly labour-intensive: gathering and validating information, conducting checks, making assessments. Much of this can and should be automated. This is where generative AI can play a transformative role, not just in reducing the manual workload and cutting costs, but also in improving the quality and consistency of assessments.
We're exploring several dimensions here. In the fraud and cybersecurity space, we work with machine learning, generative AI (GenAI), large language models (LLMs), and increasingly, task-based intelligent agents.
For example, in customer service, GenAI helps us understand the intent behind a client’s message or request, going beyond basic chatbot interactions to something much more responsive and context-aware.
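As a rough illustration of that intent step (not ING's implementation), the sketch below asks a language model to map a free-text message onto a small set of known intents before any workflow is triggered. The call_llm function is a stub standing in for whichever approved model service would actually be used, and the intent labels and threshold are invented for the example.

```python
# Illustrative intent-detection step in front of a customer-service workflow.
# call_llm is a placeholder for a real LLM client; it is stubbed here so the
# sketch runs on its own.
import json

INTENTS = ["card_lost_or_stolen", "dispute_transaction", "update_address", "other"]

def build_prompt(message: str) -> str:
    return (
        "Classify the customer's message into exactly one of these intents: "
        + ", ".join(INTENTS)
        + '. Reply as JSON with keys "intent" and "confidence" (0 to 1).\n'
        + "Message: " + message
    )

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an approved model endpoint.
    return json.dumps({"intent": "card_lost_or_stolen", "confidence": 0.93})

def detect_intent(message: str) -> dict:
    result = json.loads(call_llm(build_prompt(message)))
    if result.get("intent") not in INTENTS or result.get("confidence", 0) < 0.7:
        result["intent"] = "other"   # low confidence: route to a human agent instead
    return result

print(detect_intent("Hi, I think I left my debit card on the train yesterday."))
```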
In engineering, we’re not using GenAI just to write code, but to offer guardrails for developers. Think of it as cognitive support, helping engineers make smarter decisions, avoid errors, and see potential impacts before they commit changes.
We're also testing agent-based solutions, what you might call the modern evolution of robotic process automation (RPA), where AI agents can handle parts of complex processes independently. This is especially promising in areas like compliance, onboarding, and back-office operations.
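One way to picture how this differs from classic RPA: rather than following a fixed script, the agent is given a goal and a small set of approved tools, decides which tool to call at each step, and leaves an auditable trail. The sketch below is hypothetical; the planner is stubbed where an LLM would normally choose the next action, and the tool names and data are invented.

```python
# Toy sketch of a task-based agent handling part of an onboarding check.
# The planning step is stubbed; in practice an LLM would choose the next tool.
def fetch_registry_record(name):
    return {"name": name, "registered": True}   # stand-in for a company-registry lookup

def screen_sanctions(name):
    return {"name": name, "hit": False}          # stand-in for a screening service

TOOLS = {"fetch_registry_record": fetch_registry_record,
         "screen_sanctions": screen_sanctions}

def plan_next_step(goal, history):
    # Stubbed planner: a real agent would ask an LLM which tool (if any) to call next.
    order = ["fetch_registry_record", "screen_sanctions"]
    done = [step["tool"] for step in history]
    return next((tool for tool in order if tool not in done), None)

def run_agent(goal, subject):
    history = []
    while (tool := plan_next_step(goal, history)) is not None:
        history.append({"tool": tool, "result": TOOLS[tool](subject)})
    return history   # a human reviewer can audit every step the agent took

print(run_agent("complete onboarding checks", "Acme Trading B.V."))
```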
Regulations like DORA, the EU’s Digital Operational Resilience Act, are shaping the conversation, but ideally, they should be affirming and formalising what we already do. DORA, for instance, is not reinventing the wheel. Banks have been working on reliability, resilience, and incident management for a long time. What’s new is the level of visibility and executive focus these topics are now getting because they’re backed by regulation.
What DORA does well is that it forces different parts of the organisation (business, commercial, operations, and technology) to come together. Reliability is no longer just a question of whether a system is up or down; it’s about end-to-end business processes. That shift in thinking is incredibly valuable.
Another area where DORA adds real weight is third-party risk management. As a bank, we rely on many external service providers: cloud platforms, infrastructure partners, and software vendors. These are part of our extended supply chain, and if one of them fails, that risk is directly transferred to us.
A well-known example is the CrowdStrike incident, where a faulty update to a widely used security component caused widespread outages. That was a textbook case of third-party risk: one misstep by a supplier disrupted half the industry. DORA helps bring those risks out of the shadows and makes sure we address them with the same seriousness as internal risks.
Therefore, in many ways, the real value of regulations like DORA is not that they tell us to do something entirely new but that they bring consistency, accountability, and cross-functional alignment to things we’ve known are important all along.
On the human side, we look at this from two angles: our employees and our clients. When it comes to our own people, we actively monitor the health and well-being of the organisation - not just from a cultural standpoint, but because there are real operational risks involved.
Cybersecurity teams often work under intense pressure. If you lose talent and can’t replace them quickly, it puts strain on the remaining team members, which can create a toxic environment and increase the risk of mistakes or burnout. That’s not just a people issue - it becomes a security issue.
Another critical aspect is awareness and education. Most of us have probably received a very convincing phishing email or message that looks like it came from a colleague or boss. In fact, just a month after I joined ING, one of our senior managers received a voicemail that seemed to come from our CEO. It was supposedly about a confidential project and warned him to expect a call from a lawyer. It sounded real - but it was fake.
Was that an infrastructure vulnerability? No. It was a test of human behaviour and organisational culture. That’s why we say cybersecurity is not just about technology - it’s about people, training, and culture. In a hierarchical or overly obedient culture, someone might act on a request like that without questioning it. But in a resilient culture, people feel empowered to stop and say, ‘Something’s not right here’. That ability to raise a hand is critical.
The last piece is about skills and integration. Traditionally, security has been treated as something that happens at the end of the process - after development, during deployment, or in operations. But we can’t afford that anymore. Security needs to be part of the design phase - what we call ‘shift left’. We’re working hard to build that mindset into our engineering culture so that secure thinking happens from the very beginning.
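In practice, 'shift left' often starts with very simple checks wired directly into the developer workflow, so issues surface before code is even committed. The example below is a deliberately small, hypothetical pre-commit style scan that blocks changes containing obvious hardcoded credentials; real pipelines layer this with dependency scanning, static analysis, and threat modelling during design.

```python
# Hypothetical pre-commit style check: refuse changes that contain obvious
# hardcoded credentials. Patterns are illustrative, not an exhaustive policy.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan(path):
    """Return a list of findings for one file."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

if __name__ == "__main__":
    problems = [msg for path in sys.argv[1:] for msg in scan(path)]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)   # non-zero exit blocks the commit
```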
So yes, it’s a mental health issue, but it’s also deeply tied to culture, communication, and how we embed security across the organisation.
First, banks need to hold on to their long-term vision of what banking is. Technology, regulation, and even crises come and go; we’ve been through massive disruptions before, like the 2008 financial crisis, and the key is to stay grounded in your purpose and adapt with clarity.
Of course, the geopolitical landscape today is different, and it’s forcing us to ask new questions. For example, can we realistically localise all our infrastructure tomorrow? No, we can't. So, the question becomes: how do we manage dependency risks while staying practical? That’s where realism and risk assessment come in.
Another example is cloud infrastructure. Some providers are designed to create ‘stickiness’ - a model that makes it very hard to switch or diversify once you’re in. As a result, we constantly have to weigh the speed of innovation against the strategic risk of lock-in. Sometimes, taking a bit longer to build something gives you more flexibility and long-term control. That trade-off between speed and resilience is a balance we have to manage continuously.
And finally, around AI and job fears, we need to be thoughtful. Technology shouldn’t be about replacement; it should be about augmentation. Helping people work better, not making them redundant. That’s a cultural choice, and banks need to lead responsibly here.
About the author
Daniele began his career in 1998 at Mercer Management Consulting and eventually moved to McKinsey & Company. In 2002, he joined Swiss Life AG. In 2012, Daniele became the Global CIO of Evalueserve, and in 2013, he took over the CEO role at AXA Technology Services. In 2017, Daniele was appointed CEO of UniCredit Services. In 2020, he became ad interim Group Chief Digital and Information Officer. Daniele joined ING as chief technology officer in August 2024.
About ING
ING is a global financial institution with a strong European base, offering banking services through its operating company, ING Bank. The purpose of ING Bank is: empowering people to stay a step ahead in life and in business. ING Bank’s more than 60,000 employees offer retail and wholesale banking services to customers in over 100 countries.