Mirela Ciobanu
21 Jan 2026 / 8 Min Read
Pallavi Kapale, Senior Financial Crime Officer at Bank of China, presents what the fraud landscape looks like in 2026 and shares practical advice on how banks can prepare.
As the financial sector heads into 2026, fraud risk is not just rising; it is mutating. What was once a perimeter problem has become a strategic, enterprise-wide threat that touches customer experience, compliance, reputation, and profitability. After successive years of rising fraud volumes and increasingly sophisticated schemes, banks must ask: are we ready – or just defending yesterday’s battlefield?
In 2025, the UK alone saw over 2 million confirmed fraud cases and more than GBP 600 million lost to fraud. Banks prevented substantial further losses thanks to advanced payment defences, yet the problem continues to grow. Authorised push payment (APP) fraud losses rose, card-not-present fraud climbed sharply, and social engineering remained a dominant technique.
Looking ahead to 2026, the landscape will amplify these dynamics, but with new inflection points that demand attention. Below is an overview of where fraud is heading, the core threats and vulnerabilities, and what banks must do to gain an advantage.
Artificial Intelligence (AI) is the central theme for 2026. It is already being used by fraudsters to automate phishing and deepfake scams, generate forged documents, and scale identity fraud attempts. According to Experian, in 2025 over a third of businesses reported being targeted by AI-related fraud, significantly up from the previous year.
On the defensive side, banks are investing in AI and hybrid machine-learning systems to detect anomalies in real time. In 2026, we will see more agentic AI deployed in fraud prevention – systems that not only detect risk but act on it autonomously. These tools can undertake KYC checks, alert operations teams to emerging fraud schemes, and enforce rules across channels. The differentiator will be how banks implement AI: whether it is transparent, auditable, and complemented by human oversight, or whether it becomes a blind spot they rely on without understanding.
Social engineering continues to be a dominant pathway for APP fraud, credential harvesting, and remote purchase scams, where victims are tricked into revealing one-time codes or approving transactions.
Expect these schemes to become more personalised and AI-augmented. Voice cloning, synthetic media, and deepfake identities can impersonate trusted contacts, making social engineering less obvious and harder to train against. Fraud-as-a-service (FaaS) and AI-powered criminal toolkits are rising rapidly: low-cost, ready-to-use tools – ranging from phishing-as-a-service to AI-driven social engineering scripts – now enable less sophisticated actors to launch campaigns that once required organised, highly skilled groups. End-to-end ‘fraud stacks’ are taking shape, bundling attack templates and automation.
Synthetic identity fraud is another growing threat: criminals create fake personas by blending stolen, real, and fabricated details. These identities can pass standard checks because they include partial authentic data, which makes them difficult to detect. According to Juniper Research, fraud losses are expected to surge sharply by 2030, driven by identity fraud and other sophisticated schemes. Static KYC controls are poorly equipped to detect these profiles.
Money mule networks continue to recruit via social platforms, targeting unwitting young adults to move stolen funds – another concerning trend from 2025 that will carry on into 2026.
According to Open Banking data, fraud rates in Open Banking remain significantly lower than the industry average, with just 0.013% of transactions affected in H1 2025 versus 0.045% market-wide, alongside year-on-year declines in fraud volumes. However, this is not an unqualified success story: APP fraud now accounts for 74% of Open Banking cases, typically involving higher-value transactions. As AI-enabled social engineering, SIM-swap scams, and mule networks evolve, Open Banking’s advantage will disappear without real-time risk indicators, adaptive controls, and genuine industry-wide data sharing.
Despite growing awareness, many financial institutions are not sufficiently prepared for the fraud landscape of 2026.
A survey by Themis suggests that two-thirds of banks feel only ‘somewhat prepared’ for emerging fraud risks, with specific concerns about AI-driven attacks, Open Banking exposures, and automation increasing fraud risk. Being ‘somewhat prepared’ in an environment where fraud tactics evolve constantly is effectively the same as operating reactively. The FCA has signalled increased supervisory interest in the use of AI in financial services, with emphasis on testing, data controls, human oversight, and model-risk frameworks.
Many banks still treat fraud separately from anti-money laundering (AML) and other financial crime functions. This creates blind spots, duplicated systems, and inefficiencies that fraudsters can exploit. The industry is advocating for a unified financial crime framework, where fraud, AML, sanctions screening, and risk intelligence share the same analytics backbone and case management ecosystem.
Siloed controls lead to duplicated customer friction (multiple false positives) and operational blind spots (missed or partially detected fraud networks linked by device, IP, or behaviour across channels).
Technology alone won’t solve fraud. Weak governance and insufficient specialist skills are persistent pitfalls. Boards often lack clear visibility over fraud trends, and operations teams are stretched thin, unable to keep pace with alert volumes and investigative complexity.
Without purposeful investment in people, process, and governance, advanced tools become under-utilised or misconfigured.
To control these threats effectively, banks must move beyond tactical defences and adopt a strategic posture rooted in technology, governance, and collaboration.
Fraud prevention needs to be treated as a strategic priority. That requires board-level commitment to sustained investment in data, machine learning, skilled teams, and operating models, alongside deliberate efforts to dismantle silos between product, payments, customer operations, compliance, and security, and to redirect funding towards foundational enablers such as data engineering and regtech.
The next phase of fraud prevention will require a shift away from the simplistic ambition to ‘automate everything’ towards deliberate human + AI operating models. AI can speed up triage, pattern detection and prioritisation, but it does not remove the need for skilled judgment and oversight. Fraud teams will need to be reskilled in data literacy, model governance, and the effective use of open-source intelligence, while banks/FIs must address growing explainability and accountability expectations to ensure AI-driven decisions can be understood, challenged, and defended.
Traditional biometrics (face, fingerprint) are quickly becoming insufficient as deepfakes and spoofing tools proliferate. Behavioural biometrics – analysis of typing cadence, device interaction patterns, and navigation behaviour – offers a stronger signal that a genuine user is present.
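To make this concrete, here is a deliberately simplified sketch of how a typing-cadence signal could be turned into an anomaly score. The feature set, numbers, and thresholds are illustrative assumptions only, not a description of any bank's or vendor's system.

```python
# Illustrative sketch only: a toy comparison of typing cadence against a stored profile.
from statistics import mean, stdev

def cadence_features(key_timestamps_ms: list[float]) -> dict:
    """Derive simple inter-keystroke timing features from one typing sample."""
    gaps = [b - a for a, b in zip(key_timestamps_ms, key_timestamps_ms[1:])]
    return {"mean_gap": mean(gaps), "gap_stdev": stdev(gaps)}

def cadence_anomaly_score(sample: dict, profile: dict) -> float:
    """How far a session's typing rhythm deviates from the user's enrolled profile."""
    return abs(sample["mean_gap"] - profile["mean_gap"]) / max(profile["gap_stdev"], 1.0)

# Hypothetical enrolled profile, and a session typed noticeably faster than usual.
profile = {"mean_gap": 210.0, "gap_stdev": 40.0}
session = cadence_features([0, 95, 180, 270, 390, 460])
print(round(cadence_anomaly_score(session, profile), 2))  # higher = less like the genuine user
```

In practice, such signals would be combined with many other behavioural features and fed into a broader risk model rather than used in isolation.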
Banks can no longer wait for manual review cycles. Real-time, intelligent risk scoring that weighs device risk, transaction thresholds, spending patterns, and network indicators is essential.
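As a rough illustration of how those signals could be combined, the sketch below scores a single transaction against a handful of hypothetical rules. The weights, field names, and the 0.7 review threshold are assumptions made for the example, not a recommended model.

```python
def transaction_risk_score(txn: dict) -> float:
    """Toy fusion of device, spending-pattern, and network signals into one score."""
    score = 0.0
    if txn["device_risk"] == "new_device":                 # device indicator
        score += 0.3
    if txn["amount"] > 3 * txn["customer_avg_spend"]:      # spending-pattern deviation
        score += 0.3
    if txn["payee_flagged_by_network"]:                    # shared network intelligence
        score += 0.4
    return min(score, 1.0)

# Hypothetical high-risk payment: new device, unusual amount, flagged payee.
txn = {"device_risk": "new_device", "amount": 4800.0,
       "customer_avg_spend": 600.0, "payee_flagged_by_network": True}

if transaction_risk_score(txn) >= 0.7:                     # assumed review threshold
    print("Hold payment for step-up verification")         # e.g. in-app confirmation
```

A real deployment would learn weights from labelled outcomes and recalibrate them continuously, but even a transparent rule sketch like this shows how several weak signals can be fused into one actionable score.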
Fraud, AML, sanctions screening, and onboarding functions must be aligned on a single enterprise data platform with shared analytics and unified case management. Isolation of data and intelligence is a systemic vulnerability.
Live sharing of fraud indicators – such as suspicious URLs, device IDs, and scam trends – across banks, telecoms, and tech platforms will improve fraud detection for every participant. The UK has seen early iterations of such initiatives, and expanding them should be a priority.
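Purely as an illustration, a shared indicator could travel between participants as a small structured record like the sketch below; the schema and field names are assumptions, since real consortium feeds define their own standards and privacy safeguards.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FraudIndicator:
    indicator_type: str   # e.g. "suspicious_url", "device_id", "mule_account"
    value: str
    reported_by: str      # contributing institution (pseudonymised in practice)
    first_seen: str       # ISO 8601 timestamp

indicator = FraudIndicator(
    indicator_type="suspicious_url",
    value="hxxp://secure-refund-check.example",   # defanged placeholder URL
    reported_by="bank_a",
    first_seen=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(indicator)))  # a payload other participants could ingest
```

The point is less the format than the latency: indicators shared within minutes rather than weeks are what turn isolated detections into network-wide protection.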
Traditional advice that once focused on passwords and OTPs must expand to include deepfake awareness, voice-clone scams, and social engineering narratives. This education must be ongoing, adaptive, and data-informed.
Poor internal collaboration, insider threats, and process breakdowns will remain vulnerabilities if left unaddressed. Strong governance, clear ownership and accountability, and robust internal controls are essential.
The legacy of rule-based fraud detection is far from behind us, yet fraud in 2026 will be faster, more automated, and more personalised than ever. If fraud prevention continues to be treated as a technology patch, banks will fall behind those that integrate fraud strategy into the organisational DNA of governance, data, AI, and industry collaboration.
Going forward, fraud prevention will be about creating resilient systems that adapt to new patterns in real time, empowering frontline teams with insights, and building trust with customers and regulators.

Pallavi is a seasoned professional with a wealth of experience in Financial Crime across the 1LOD and 2LOD. Her expertise has been honed through her tenure in several high-street banks. Currently, she serves as a Senior Financial Crime Officer (2LOD) in the Financial Crime Intelligence Unit at the Bank of China.
Pallavi's professional background is marked by her specialisation in key areas of Financial Crime. She is an expert in Anti-Money Laundering (AML), fraud prevention and investigations, and conducting bank-wide training and risk assessments.
Pallavi holds an ICA Diploma in AML and is a member of the ICA. She also has an ICA Advanced Certificate in AML and is an ICA Certified Financial Crime Investigator. She regularly writes articles on financial crime topics on LinkedIn.
Bank of China's group includes BOC Hong Kong, BOC International, BOCG Insurance, and other financial institutions, providing a comprehensive range of financial services to individual and corporate customers as well as financial institutions worldwide.
The Paypers is a global hub for market insights, real-time news, expert interviews, and in-depth analyses and resources across payments, fintech, and the digital economy. We deliver reports, webinars, and commentary on key topics, including regulation, real-time payments, cross-border payments and ecommerce, digital identity, payment innovation and infrastructure, Open Banking, Embedded Finance, crypto, fraud and financial crime prevention, and more – all developed in collaboration with industry experts and leaders.