Paula Albu
03 Oct 2025 / 5 Min Read
AI is both a powerful tool for defenders and a potent weapon for attackers. Joseph Carson, Chief Security Evangelist & Advisory CISO at Segura, explores its dual role in financial services.
Artificial Intelligence (AI) is transforming financial services at a pace that few other technologies have matched. From customer service automation to real-time fraud detection, AI has become indispensable to banks, insurers, and fintechs. Yet the same technology that optimises efficiency and security is increasingly weaponised by cybercriminals to exploit trust, manipulate identities, and scale financial crime.
This dual role of AI as both a powerful tool for defenders and a potent weapon for attackers defines one of the most critical battlegrounds in today’s financial sector. To navigate it effectively, leaders must understand not only how AI is evolving but also how to manage its risks, ethics, and security implications, especially around identity.
AI’s role in financial services has evolved dramatically over the past decade.
From my experience in the field, AI has lowered the barrier to entry for attackers; today, an attacker needs little more than a laptop and an internet connection. Even language-based defences no longer hold. Relying on linguistic cues, such as assuming emails written in Estonian are safe from phishing, is no longer adequate: AI can generate convincingly localised content in seconds, rendering such traditional protections nearly obsolete.
This evolution highlights the paradox of AI: the same sophistication that allows defenders to protect customers can also be exploited to deceive them.
In financial services, AI-driven attacks are growing in both frequency and sophistication.
Fortunately, AI is also the most effective defence against these threats, and banks and fintechs are deploying it to protect customers and infrastructure alike.
While AI enables stronger defences, I have observed firsthand that attackers can adapt faster than ever. This makes continuous monitoring, behavioural analytics, and identity-centric security essential, especially as AI becomes more autonomous in both attack and defence scenarios.
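To make the idea of behavioural analytics concrete, here is a minimal sketch in Python using scikit-learn’s IsolationForest to flag anomalous login sessions. The session features, baseline data, and contamination rate are illustrative assumptions, not a reference design for any institution’s controls.

```python
# Minimal behavioural-analytics sketch: flag anomalous login sessions.
# Illustrative only -- features, data, and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features:
# [login_hour, failed_attempts, geo_distance_km_from_last_login, new_device_flag]
baseline_sessions = np.array([
    [9, 0, 2.0, 0],
    [10, 1, 0.5, 0],
    [14, 0, 5.0, 0],
    [11, 0, 1.0, 0],
    [16, 0, 3.5, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_sessions)

# A session at 3 a.m., many failures, far from usual geography, new device.
suspect = np.array([[3, 6, 4200.0, 1]])
if model.predict(suspect)[0] == -1:
    print("Anomalous session: step up authentication and alert the SOC")
```

In practice, such models run continuously over live telemetry, with anomaly scores feeding identity-centric controls such as step-up authentication rather than hard blocks.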
While sanctioned AI initiatives drive innovation, ‘shadow AI’ poses a hidden risk. This refers to employees using unauthorised AI tools, such as generative AI chatbots or unvetted machine learning models, for work-related tasks.
In financial services, shadow AI is especially dangerous.
Shadow AI reflects the tension between innovation and control. Leaders must balance empowering teams with maintaining governance and oversight. From my perspective, shadow AI is becoming a significant blind spot: attackers often exploit unsecured AI endpoints as entry points into larger systems, making identity security an even higher priority.
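One practical starting point is mining egress or proxy logs for traffic to unsanctioned AI services. The sketch below is a simplified illustration; the log format, domain lists, and approved set are all hypothetical.

```python
# Sketch: surface potential 'shadow AI' usage from egress proxy logs.
# The log format, domain list, and approved set are hypothetical examples.
APPROVED_AI_DOMAINS = {"ai.internal.example.com"}          # sanctioned tools
KNOWN_GENAI_DOMAINS = {"chat.example-genai.com", "api.example-llm.io"}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for unsanctioned AI service traffic."""
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <destination_domain>"
        _, user, domain = line.split()
        if domain in KNOWN_GENAI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

logs = ["2025-10-03T09:14:02 j.doe chat.example-genai.com"]
for user, domain in flag_shadow_ai(logs):
    print(f"Review: {user} reached unapproved AI service {domain}")
```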
AI’s growing role raises ethical challenges that cannot be ignored. In financial services, key concerns include bias, fairness, and explainability in automated decision-making.
Financial leaders must adopt a proactive approach to balance AI’s promise and peril. Key best practices include:
1. Identity security as a foundation
Protecting AI begins with securing access, which is why identity security is critical.
From my experience, neglecting identity security around AI is one of the fastest ways organisations can turn their most valuable defences into liabilities.
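As a simple illustration of identity-first access to AI systems, the sketch below issues short-lived, narrowly scoped tokens using the PyJWT library, so model APIs are never reachable with static, long-lived credentials. The key handling, scope names, and five-minute TTL are assumptions for demonstration.

```python
# Sketch: short-lived, scoped access tokens for an AI service.
# Library: PyJWT (pip install pyjwt); key, scopes, and TTL are assumptions.
import time
import jwt

SIGNING_KEY = "replace-with-a-managed-secret"   # fetch from a vault in practice

def issue_ai_token(user_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Issue a 5-minute token scoped to a single AI capability."""
    now = int(time.time())
    claims = {"sub": user_id, "scope": scope, "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_ai_token(token: str, required_scope: str) -> bool:
    """Reject expired tokens or tokens lacking the required scope."""
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        return False
    return claims.get("scope") == required_scope

token = issue_ai_token("analyst-42", scope="fraud-model:query")
print(verify_ai_token(token, "fraud-model:query"))  # True within the window
```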
2. Establish AI governance
Define clear policies for how AI can and cannot be used. This includes approved tools, risk assessment frameworks, and shadow AI reporting channels.
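A governance policy is most useful when it is machine-enforceable. The following sketch encodes an approved-tools list and forbidden data classes as a simple policy check; the policy fields and tool names are invented for illustration.

```python
# Sketch of a machine-readable AI usage policy check; the policy fields
# and tool names are invented for illustration.
AI_POLICY = {
    "approved_tools": {"internal-copilot", "fraud-scoring-v2"},
    "data_classes_forbidden_in_prompts": {"PII", "card_numbers"},
}

def request_allowed(tool: str, data_classes: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI tool use."""
    if tool not in AI_POLICY["approved_tools"]:
        return False, f"{tool} is unapproved; report via the shadow AI channel"
    blocked = data_classes & AI_POLICY["data_classes_forbidden_in_prompts"]
    if blocked:
        return False, f"prompt would expose forbidden data: {sorted(blocked)}"
    return True, "permitted"

print(request_allowed("internal-copilot", {"PII"}))
print(request_allowed("fraud-scoring-v2", {"aggregated_stats"}))
```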
3. Invest in AI-powered defences
Just as attackers use AI to innovate, defenders must do the same. Continuous investment in fraud detection, behavioural biometrics, and SOC augmentation is essential.
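As one small example of the behavioural-biometrics idea, the toy sketch below compares a session’s keystroke timing rhythm against an enrolled profile. The features and threshold are illustrative; in practice both would be learned from real data.

```python
# Toy behavioural-biometrics sketch: compare keystroke timing rhythm
# against a user's enrolled profile. Threshold and features are illustrative.
import math

def timing_distance(profile: list[float], sample: list[float]) -> float:
    """Euclidean distance between inter-key timing vectors (seconds)."""
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(profile, sample)))

ENROLLED = [0.12, 0.18, 0.09, 0.22]   # learned from past sessions
THRESHOLD = 0.15                       # tuned on real data in practice

session = [0.35, 0.40, 0.31, 0.50]     # noticeably different rhythm
if timing_distance(ENROLLED, session) > THRESHOLD:
    print("Typing rhythm deviates from profile: trigger step-up verification")
```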
4. Train employees to spot AI threats
Humans remain the weakest link. Regular training should cover AI-enabled phishing, deepfake awareness, and safe AI usage practices.
5. Build ethical AI frameworks
Create cross-functional committees, including compliance, IT, and business units, to evaluate bias, fairness, and explainability in AI systems.
6. Test resilience with red teaming
Simulate AI-driven attacks through red teaming to identify vulnerabilities in fraud detection, customer onboarding, and market surveillance.
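A red-team exercise of this kind can be as simple as replaying synthetic, AI-style phishing variants against a control and counting what slips through. In the sketch below, both the lure templates and the detector are stand-ins invented for illustration.

```python
# Red-team sketch: replay synthetic 'AI-crafted' phishing subject lines
# against a (hypothetical) detector to measure what slips through.
import random

TEMPLATES = [
    "Urgent: verify your {bank} account within 24 hours",
    "{bank} security notice: unusual sign-in from {city}",
]

def generate_lures(bank: str, n: int = 50) -> list[str]:
    """Produce varied lures, mimicking how attackers use AI to scale variants."""
    cities = ["Tallinn", "London", "Frankfurt"]
    return [random.choice(TEMPLATES).format(bank=bank, city=random.choice(cities))
            for _ in range(n)]

def naive_detector(subject: str) -> bool:
    """Stand-in for the real control under test; flags obvious urgency cues."""
    return "urgent" in subject.lower()

lures = generate_lures("ExampleBank")
missed = [l for l in lures if not naive_detector(l)]
print(f"{len(missed)}/{len(lures)} synthetic lures evaded the control")
```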
7. Collaborate across the industry
Fraudsters don’t work in silos, and neither should defenders. Financial institutions should participate in industry threat intelligence sharing initiatives to stay ahead of AI-driven scams.
In addition to general identity security, managing privileged accounts is essential when protecting AI platforms and critical financial systems. Privileged Access Management (PAM) ensures that accounts with elevated permissions, such as administrators, AI model trainers, or SOC operators, are tightly controlled, monitored, and auditable.
From my experience, PAM is often the first line of defence against AI-targeted attacks. Attackers increasingly attempt to compromise administrator-level accounts to manipulate AI outputs, access sensitive datasets, or pivot into other critical systems. Implementing PAM effectively ensures that even if attackers breach lower-level accounts, elevated privileges (and, by extension, your most sensitive AI resources) remain protected.
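To illustrate the principle, the sketch below grants time-boxed, just-in-time elevated access and records every grant and check in an audit trail. A real deployment would rely on a dedicated PAM platform rather than ad hoc code like this.

```python
# Sketch of just-in-time (JIT) privileged access with an audit trail,
# illustrating the PAM principles above; not a production design.
import time
import uuid

AUDIT_LOG: list[dict] = []
ACTIVE_GRANTS: dict[str, float] = {}   # grant_id -> expiry (epoch seconds)

def grant_privilege(user: str, role: str, ttl_seconds: int = 900) -> str:
    """Grant a time-boxed elevated role and record who, what, and when."""
    grant_id = str(uuid.uuid4())
    ACTIVE_GRANTS[grant_id] = time.time() + ttl_seconds
    AUDIT_LOG.append({"event": "grant", "user": user, "role": role,
                      "grant_id": grant_id, "ts": time.time()})
    return grant_id

def privilege_valid(grant_id: str) -> bool:
    """Elevated access expires automatically; every check is auditable."""
    valid = ACTIVE_GRANTS.get(grant_id, 0) > time.time()
    AUDIT_LOG.append({"event": "check", "grant_id": grant_id,
                      "valid": valid, "ts": time.time()})
    return valid

gid = grant_privilege("ml-trainer-7", role="ai-model-admin", ttl_seconds=900)
print(privilege_valid(gid))   # True until the 15-minute window lapses
```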
The financial sector is entering an era where AI will define competitive advantage but also determine resilience against fraud and cybercrime. The dual role of AI will continue to evolve.
Ultimately, success will depend on balance: embracing AI’s value while securing its risks. Identity, trust, and resilience must be defended alongside innovation.
AI is no longer just another tool in financial services; it is the defining technology of the era. Its dual role as a weapon and a shield creates both unprecedented opportunities and risks.
The path forward requires vigilance, ethics, and above all, identity security. Protecting access to AI ensures that its benefits are not hijacked by adversaries. By investing in governance, defences, and industry collaboration, financial institutions can thrive in this new landscape, turning AI into a competitive advantage while keeping fraudsters at bay.
From my personal experience, the stakes are higher than ever: even small mistakes in AI governance or identity security can be exploited within minutes, making preparation and foresight essential. AI is both the battleground and the arsenal. How organisations manage it will define their success for years to come.
About the author
Joseph Carson is an award-winning cybersecurity professional and ethical hacker with more than 30 years’ experience in enterprise security, specialising in blockchain, endpoint security, network security, application security and virtualisation, access controls, and privileged access management. A Certified Information Systems Security Professional (CISSP) and Offensive Security Certified Professional (OSCP), he is an active member of the cybersecurity community who speaks frequently at conferences worldwide, is often quoted, and contributes to global cybersecurity publications. Joseph is currently Chief Security Evangelist and Advisory CISO at Segura.