The UK government has initiated the development of a deepfake detection evaluation framework in collaboration with technology companies, including Microsoft.
The government announced the framework in February 2026 following a four-day Deepfake Detection Challenge hosted by Microsoft, which involved more than 350 participants, including representatives from INTERPOL, Five Eyes intelligence agencies, and technology companies.
The framework will evaluate detection technologies against real-world threats, including abuse materials, fraud, and impersonation. Testing will assess how tools identify AI-generated images, videos, and audio content designed to deceive recipients.
Detection standards target multiple threat categories
The evaluation framework will test detection capabilities across scenarios reflecting national security and public safety risks, such as victim identification, election security, organised crime, impersonation, and fraudulent documentation. Participants in the Microsoft-hosted challenge identified authentic, fabricated, and partially manipulated audiovisual content under simulated operational conditions.
An estimated eight million deepfakes were shared in 2025, up from 500,000 in 2023, a sixteenfold increase in two years, according to government figures. Criminals use deepfake technology to impersonate celebrities, family members, and political figures in fraud schemes. Content creation tools requiring minimal technical expertise have become widely accessible.
Detection technology and performance benchmarks
Deepfake detection technologies analyse visual and audio content for indicators of synthetic generation, including facial movement inconsistencies, lighting anomalies, audio artefacts, and metadata patterns. Detection accuracy varies based on generation techniques, source material quality, and post-processing methods.
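As a rough illustration of how indicator-based analysis of this kind can be combined, the toy scorer below weights hypothetical signals of the types listed above (metadata patterns, facial movement inconsistency, audio artefacts). The field names, generator tags, weights, and thresholds are invented for this sketch and do not come from any real detection tool or from the framework itself.

```python
# Toy indicator-based scorer. All signal names, tags, and thresholds
# are hypothetical examples, not taken from any real detection product.
from dataclasses import dataclass

# Assumed example generator names that might appear in a metadata field.
KNOWN_GENERATOR_TAGS = {"stable-diffusion", "midjourney", "dall-e"}

@dataclass
class MediaSignals:
    metadata_software: str = ""           # e.g. an EXIF-style "Software" field
    landmark_jitter: float = 0.0          # frame-to-frame facial landmark variance
    audio_spectral_flatness: float = 0.0  # synthetic speech is often unnaturally flat

def synthetic_score(s: MediaSignals) -> float:
    """Return a 0..1 heuristic score; higher means more likely synthetic."""
    score = 0.0
    if s.metadata_software.lower() in KNOWN_GENERATOR_TAGS:
        score += 0.5    # an explicit generator tag is strong evidence
    if s.landmark_jitter > 0.8:            # hypothetical threshold
        score += 0.25
    if s.audio_spectral_flatness > 0.9:    # hypothetical threshold
        score += 0.25
    return min(score, 1.0)
```

Real systems replace these hand-set weights with trained classifiers, which is one reason accuracy varies with generation technique and post-processing, as noted above.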
The framework will establish performance benchmarks for detection tools, enabling law enforcement and regulatory bodies to assess technology capabilities against evolving generation methods. The standards will also set industry expectations for how detection should be implemented.
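Benchmarking a detector typically means scoring its verdicts against a labelled test set. The framework's actual benchmark design has not been published, so the sketch below only illustrates the general approach using standard classification metrics (precision, recall, false positive rate).

```python
# Minimal benchmark sketch: score a detector's boolean verdicts against
# ground-truth labels. The metric definitions are standard; the data
# passed in would come from a labelled evaluation set.
def benchmark(predictions: list[bool], labels: list[bool]) -> dict[str, float]:
    """predictions/labels: True = flagged as / actually is synthetic."""
    tp = sum(p and y for p, y in zip(predictions, labels))
    fp = sum(p and not y for p, y in zip(predictions, labels))
    fn = sum((not p) and y for p, y in zip(predictions, labels))
    tn = sum((not p) and (not y) for p, y in zip(predictions, labels))
    return {
        # Of everything flagged, how much was actually synthetic?
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Of all synthetic items, how many were caught?
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        # How often is authentic content wrongly flagged?
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

The false positive rate matters as much as recall in operational settings: a tool that wrongly flags authentic evidence or journalism imposes real costs, which is why benchmarks usually report both.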
International coordination on synthetic media threats
The UK government coordinates with international partners through Five Eyes intelligence sharing arrangements, which include the US, Canada, Australia, and New Zealand. INTERPOL participated in the detection challenge, reflecting law enforcement interest in cross-border synthetic media threats.
The City of London Police, serving as the UK's national lead force for fraud, reports increasing criminal exploitation of AI technologies to impersonate trusted individuals and scale fraudulent operations. The police force will use the framework to inform investigative capabilities and public protection measures.
Technology companies, including Microsoft, Google, Meta, and Amazon, develop synthetic media detection tools for content moderation and platform safety applications. Academic institutions contribute research on detection methodologies, generation techniques, and adversarial testing approaches.