Britain’s parliamentary Treasury Committee has called for a more proactive approach to the oversight of artificial intelligence in financial services.
Lawmakers warned that current regulatory practices may be insufficient as AI systems become more embedded across the sector. In a recent report, the committee said the Financial Conduct Authority and the Bank of England should introduce stress-testing frameworks designed specifically for AI-driven systems. According to the committee, such tests could help banks, insurers and other financial firms understand how automated decision-making tools might behave during periods of market disruption or technological failure.
The report also recommended that the Financial Conduct Authority clarify how existing consumer protection rules apply when AI is used in areas such as credit assessments, insurance pricing and customer interactions. The committee said the guidance should also set out the level of understanding senior managers are expected to have of the AI systems operating under their responsibility, and suggested it be published by the end of 2026.
AI adoption raises consumer and stability concerns
Evidence presented to the committee highlighted the rapid uptake of AI across British financial services, with around three-quarters of firms now using the technology in core operations, including claims handling and lending decisions. Regulators acknowledged that more advanced forms of AI, particularly systems capable of acting autonomously, introduce additional risks for retail customers.
The committee report pointed to concerns around opaque decision-making, potential discrimination against vulnerable consumers, increased fraud risks, and the spread of unregulated financial guidance through AI-powered tools. Witnesses also raised wider financial stability issues, noting that widespread reliance on a small number of US-based cloud and AI providers could create concentration risks. In trading, automated systems were said to have the potential to reinforce herd behaviour during periods of market stress.
Representatives from the Treasury Committee indicated that, based on the evidence reviewed, the financial system may not yet be adequately prepared for a significant AI-related incident, which could have wide-reaching consequences for consumers.
In response, officials from the Financial Conduct Authority said the regulator would examine the findings, reiterating its existing view that rigid AI-specific rules may struggle to keep pace with technological change. Bank of England officials stated that work is already underway to assess AI-related risks and that the central bank would respond formally to the recommendations.