MITRE flags deepfake KYC threat using face-swap tools
MITRE has published a case study through its ATLAS programme describing a critical vulnerability in remote Know Your Customer (KYC) identity checks, based on an attack scenario developed by the internal Red Team at biometric firm iProov.
The published scenario outlines how attackers can use widely available face-swap tools and virtual camera software to inject deepfake imagery into mobile onboarding journeys. The method allowed the team to bypass so-called liveness checks in a test environment and complete identity verification under a false identity.
The case study adds KYC-focused deepfake attacks to the MITRE ATLAS knowledge base, which documents tactics used against artificial intelligence systems. Contributions to the framework also come from companies such as Microsoft, NVIDIA, IBM, Intel, Cisco, Palo Alto Networks, Kaspersky, CrowdStrike and Trend Micro.
MITRE ATLAS describes adversarial behaviour against AI models and related systems, and is used by security teams as a reference for red teaming and threat modelling.
Doug Robbins, Vice President at MITRE Labs, said industry input was central to that effort.
"The strength of MITRE ATLAS lies in the breadth and quality of the community that supports it. Contributions from across industry, academia, and government, ranging from red-team findings to operational threat insights, are essential to advancing the accuracy and completeness of the MITRE ATLAS knowledge base. When organizations openly share data and expertise, we collectively enhance the security and resilience of AI-enabled systems and the nation," Robbins said.
The newly documented attack targets remote KYC checks used by banks, financial services providers and cryptocurrency platforms during account opening and authentication. These processes commonly rely on a mobile app, a smartphone camera and automated liveness tests intended to detect spoofs.
Face-swap injection
The exercise, led by iProov Red Team Head Dr Panos Papadopoulos, drew on open-source tools and publicly accessible images. According to the description, the team first gathered identity data and high-definition facial images of targets from online sources. They then used Faceswap, a desktop application that applies generative AI, to create live face-swapped videos.
The Red Team next configured Open Broadcaster Software to stream these videos. They added Virtual Camera: Live Assist, an Android app that replaces the phone's default camera feed with an incoming video stream. The app runs on standard, non-rooted Android devices, which reduces the likelihood of detection by basic device integrity checks.
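The "basic device integrity checks" mentioned above often amount to little more than screening installed apps against a denylist of known virtual-camera software. A minimal sketch of that idea, using hypothetical package names rather than the identifiers of any real app:

```python
# Minimal sketch of a denylist-based device integrity check.
# The package names below are hypothetical examples invented for
# illustration; they are not the real identifiers of any app.
KNOWN_VIRTUAL_CAMERA_PACKAGES = {
    "com.example.virtualcamera",
    "com.example.liveassist.cam",
}

def flag_virtual_cameras(installed_packages):
    """Return the denylisted packages found on the device, sorted."""
    return sorted(KNOWN_VIRTUAL_CAMERA_PACKAGES & set(installed_packages))

# Example: one flagged app among the installed packages.
hits = flag_virtual_cameras([
    "com.android.chrome",
    "com.example.virtualcamera",
])
print(hits)
```

As the case study notes, an injection app that runs on standard, non-rooted devices can slip past checks of this kind, which is one reason simple package screening is a weak defence on its own.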
During a simulated onboarding session with a financial services application, the team routed the deepfake video feed through the virtual camera into the KYC flow. The system accepted the feed, and the liveness check did not flag the session as suspicious.
This process enabled successful authentication under a fictitious identity. The case study notes that a similar technique could let an attacker access a victim's accounts or register new fraudulent accounts on banking or cryptocurrency platforms, which could lead to financial losses.
Liveness exposed
The research focuses in particular on so-called active liveness systems. These systems ask a user to complete prompted movements or gestures and analyse the resulting images and motion for signs of spoofing.
According to the case study, modern deepfake tools can now reproduce realistic facial motion and image artefacts that such checks expect. The substitution of the physical camera feed with a virtual camera stream also removes some device-level protections that rely on trust in the hardware.
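To see why prompted gestures alone no longer guarantee a live subject, consider the general shape of an active liveness flow: the server issues a random gesture prompt tied to a nonce and a short time window, and rejects late or mismatched responses. The sketch below models that general idea only; it is not any vendor's actual protocol.

```python
import secrets
import time

# Illustrative challenge-response flow for active liveness checks.
# A real system would verify the gesture from video analysis; here the
# observed gesture is passed in directly to keep the sketch minimal.
GESTURES = ["turn_left", "turn_right", "nod", "smile"]

def issue_challenge(ttl_seconds=10):
    """Create a random gesture prompt bound to a nonce and deadline."""
    return {
        "nonce": secrets.token_hex(8),
        "gesture": secrets.choice(GESTURES),
        "expires_at": time.time() + ttl_seconds,
    }

def verify_response(challenge, nonce, observed_gesture, now=None):
    """Accept only a timely response with the right nonce and gesture."""
    now = time.time() if now is None else now
    if now > challenge["expires_at"]:
        return False  # response arrived after the window closed
    if nonce != challenge["nonce"]:
        return False  # replayed or mismatched session
    return observed_gesture == challenge["gesture"]

challenge = issue_challenge()
ok = verify_response(challenge, challenge["nonce"], challenge["gesture"])
print(ok)  # True: correct gesture, correct nonce, within the window
```

The case study's point is that modern face-swap tools can produce the requested gesture on demand in real time, so randomised prompts defeat replayed recordings but not live deepfake streams; that gap is what pushes the focus towards injection detection.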
Andrew Newell, Chief Scientific Officer at iProov, said attacks on identity systems had grown alongside the rapid spread of generative AI tools.
"We've seen an explosion in attack vectors relating to identity verification over the last 12 months, largely driven by advances in generative AI and the wide availability of low cost tools," Newell said. "The publication of this latest MITRE ATLAS case study is part of the vital process of identifying and documenting such methodologies. The pace of evolution is only ever likely to increase, making it essential that all organisations examine their own defences against these new tactics without delay."
Standards response
The case study highlights a recent European technical specification, CEN/TS 18099, which defines testing protocols for biometric liveness detection and resistance to injection attacks. The authors describe it as a significant step in security testing for remote identity verification products.
iProov said external validation of the KYC attack scenario reinforced the need for organisations to check whether biometric suppliers undergo such testing, rather than relying on unverified liveness approaches.
MITRE ATLAS positions the new contribution as a resource for security analysts and AI developers who review threats against AI-enabled identity systems. The organisation encourages further collaboration across government, industry and academia on tool development, frameworks and research in AI security, threat mitigation, robustness and privacy.