This week on AI Unmasked, we investigated one of the most quietly alarming AI security incidents to be formally documented in recent memory.
It did not involve a dramatic hacker. It did not involve a system crashing or sending an alert. It involved a bank's identity verification AI doing its job, correctly and consistently, while fraudsters walked through its front door over a thousand times without being noticed once.
If you have not watched the video yet, [catch it here]. Then come back, because below we break down what actually went wrong, what the governance failures were, and what tools now exist to prevent it.
Unmask of the Week: The Face Swap KYC Breach
What Happened?
Banks and financial institutions now verify customer identity through AI-powered video checks, part of a regulatory process known as Know Your Customer, or KYC. When you open an account online, you hold your phone's camera up to your face. The AI compares your live face to your ID photo and confirms you are a real, living person, not a printed photograph.
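To make the comparison step concrete, here is a minimal sketch using the open-source face_recognition library. The file names and the 0.6 threshold are illustrative; production KYC systems use proprietary models and far more elaborate pipelines, but the core idea, comparing embedding vectors of two face images, is the same.

```python
# Minimal sketch of the face-comparison step in a KYC check.
# Assumptions: the open-source `face_recognition` library is installed,
# and "id_photo.jpg" / "selfie_frame.jpg" are placeholder file names.
import face_recognition

# Load the ID document photo and one frame captured from the live video.
id_image = face_recognition.load_image_file("id_photo.jpg")
live_image = face_recognition.load_image_file("selfie_frame.jpg")

# Encode each detected face as a 128-dimensional embedding vector.
id_encodings = face_recognition.face_encodings(id_image)
live_encodings = face_recognition.face_encodings(live_image)

if not id_encodings or not live_encodings:
    raise ValueError("No face found in one of the images")

# Euclidean distance between embeddings; lower means more similar.
distance = face_recognition.face_distance([id_encodings[0]], live_encodings[0])[0]

# 0.6 is the library's conventional default tolerance, shown for illustration.
print("match" if distance < 0.6 else "no match", f"(distance={distance:.3f})")
```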
In Indonesia, cybersecurity investigators at Group-IB documented a case where attackers used a freely available desktop application called Faceswap to overlay a stolen face onto their own in real time, then streamed the manipulated footage into the verification app through a virtual camera, software that impersonates a physical webcam. The AI received what looked like a perfectly valid verification. The face moved. The geometry matched. The liveness test passed.
Over 1,100 fraudulent verifications were submitted against a single institution in roughly three months. Each one was recorded in the system logs as successful. Estimated financial exposure: over US$135 million.
In December 2025, the MITRE Corporation, one of the world's most authoritative cybersecurity bodies, published a formal case study on this exact class of attack, based on red-team research conducted by biometric verification firm iProov.
This is documented. This is real. And it is still happening.
Why It Matters
This was not a brute-force attack. No firewall was broken. No password was stolen. The attackers simply fed the AI a false input from a direction the system was never designed to defend against.
A few key takeaways:
The system worked perfectly — and that was the problem. Every fraudulent check was logged as passed. There was no error, no flag, no crash. The very reliability of AI made the fraud invisible.
Free tools lowered the barrier to near zero. The software used to conduct the attack costs nothing and is available to anyone. The attack was not sophisticated. It was systematic.
The camera was trusted without being verified. The AI was built to check faces, not to question whether the camera feed itself was authentic. No one built that question in.
Scale made it catastrophic. Because the process was automated and cheap, attackers did not try once. They tried over a thousand times in a single campaign.
Governance Radar
What regulations could have prevented this?
| Framework | What It Requires | Gap Exposed by This Incident |
|---|---|---|
| GDPR (EU) | Lawful basis for processing biometric data | Biometric verification systems require explicit safeguards, but injection attack vectors were not addressed |
| EU AI Act (2026) | High-risk AI classification for biometric ID systems | KYC systems now fall under the high-risk category, but compliance timelines leave gaps |
| NIST AI RMF | Adversarial testing and red-teaming across the AI lifecycle | Most commercial KYC vendors were not testing against virtual camera injection attacks |
| FATF Guidelines | Robust digital identity verification for AML compliance | Guidance assumes liveness checks are reliable without addressing feed-level manipulation |
| CEN 18099 (EU standard) | Liveness detection testing specifically against injection attacks | Still in adoption phase; most deployed systems predate it |
Bottom line: The EU AI Act's classification of biometric verification as high-risk AI is the strongest current lever available. It mandates conformity assessments, transparency, and human oversight. But most institutions deployed their systems before these requirements came into force, and retrofitting them takes time.
Stat That Shocked Us
Fraud attempts using deepfakes against European financial institutions increased by 2,137% over three years, according to a 2025 Signicat research report.
That is not a rounding error. That is a structural shift in how financial fraud is being conducted, driven entirely by the falling cost and rising accessibility of AI face-swap tools.
What Is a Liveness Check — And Why Did It Fail?
A liveness check is a test built into AI video verification systems. Its job is to confirm that a real, living human face is in front of the camera, not a printed photo or a pre-recorded video.
It does this by looking for natural movement: blinking, slight head rotation, the way light catches a three-dimensional face differently from a flat image.
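For a rough sense of what "looking for natural movement" means in code, here is a toy version of one classic signal: the eye aspect ratio (EAR) used for blink detection. The landmark coordinates below are fabricated for the example; a real system would extract them from each video frame with a face-landmark model.

```python
# Toy illustration of one classic active-liveness signal: the eye aspect
# ratio (EAR). When the eye closes during a blink, the EAR drops sharply.
# The six landmark points per eye would come from a face-landmark model;
# the coordinates below are fabricated for demonstration only.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), landmarks ordered around the eye:
    [outer corner, upper-left, upper-right, inner corner, lower-right, lower-left]."""
    # Vertical distances between upper and lower eyelid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

# Fabricated landmarks: an open eye, then a nearly closed one.
open_eye = np.array([[0, 5], [3, 8], [7, 8], [10, 5], [7, 2], [3, 2]], float)
closed_eye = np.array([[0, 5], [3, 5.6], [7, 5.6], [10, 5], [7, 4.4], [3, 4.4]], float)

BLINK_THRESHOLD = 0.2  # typical value in the EAR literature; tune per model
for label, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(f"{label}: EAR={ear:.2f} -> {'blink' if ear < BLINK_THRESHOLD else 'eye open'}")
```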
For years, this was effective. Fooling a liveness check convincingly required specialist equipment and expertise.
Then face-swap AI became free and accessible. Attackers no longer needed to fool the camera with a physical object. They replaced the camera feed itself, using software to stream a manipulated video directly into the verification app. The liveness check still ran. It still looked for movement and geometry. It found everything it was looking for.
The problem was not the check. The problem was that no one had asked whether the camera could be trusted in the first place.
This class of attack is now formally documented by MITRE as a virtual camera injection attack. It is the equivalent of not just forging a document, but replacing the desk where the document gets checked.
Latest Tools and Approaches Being Used to Address This
The industry is responding. Here is what is being deployed or developed right now:
Device-level integrity checks — Verifying that the camera in use is the phone's actual physical hardware, not a virtual substitute. Some platforms now check for signs of virtual camera software running on the device before accepting a video feed (a minimal sketch of this idea follows the list below).
Passive liveness detection — Rather than asking users to blink or turn their head, newer systems analyze dozens of subtle signals simultaneously, including micro-textures in skin, light refraction patterns, and pixel-level inconsistencies that face-swap tools currently cannot replicate perfectly.
Deepfake detection layers — Dedicated AI models trained specifically to identify artifacts left by face-swap software are being added as a secondary layer on top of traditional liveness checks.
CEN 18099 compliance testing — A European standard specifically requiring that liveness detection systems be tested against injection attacks before deployment. Adoption is growing following the MITRE case study publication.
iProov's Genuine Presence Assurance — One of the better-documented approaches, combining liveness detection with a real-time challenge that is difficult to replicate through a pre-rendered or manipulated feed.
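To illustrate the device-integrity idea from the first item above, here is a minimal, Linux-only sketch that reads the kernel's reported name for each capture device and flags ones matching known virtual camera software. The name list is illustrative and trivially evaded by renaming a device; real platforms rely on attested, OS-level signals rather than string matching.

```python
# Minimal, Linux-only sketch of a device-integrity heuristic: flag capture
# devices whose kernel-reported names match known virtual camera software.
# The denylist is illustrative; a determined attacker can rename a device,
# so production systems rely on OS attestation rather than string matching.
from pathlib import Path

SUSPECT_NAMES = ("v4l2loopback", "obs virtual camera", "virtual")  # illustrative

def scan_video_devices() -> list[tuple[str, str, bool]]:
    """Return (device, reported name, looks_virtual) for each V4L2 device."""
    results = []
    for name_file in sorted(Path("/sys/class/video4linux").glob("video*/name")):
        reported = name_file.read_text().strip()
        looks_virtual = any(s in reported.lower() for s in SUSPECT_NAMES)
        results.append((name_file.parent.name, reported, looks_virtual))
    return results

if __name__ == "__main__":
    for device, reported, virtual in scan_video_devices():
        flag = "SUSPECT (possible virtual camera)" if virtual else "ok"
        print(f"/dev/{device}: {reported!r} -> {flag}")
```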
None of these tools are foolproof. But layering them significantly raises the cost and complexity of executing this class of attack at scale.
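As a sketch of what that layering might look like in practice, here is a toy decision function combining the signals described above. The scores, thresholds, and field names are placeholders standing in for vendor-specific models, not a real scoring scheme.

```python
# Sketch of defense-in-depth for a verification decision: each layer
# contributes an independent signal, and any strong failure blocks the
# check. All values and thresholds are placeholders for the
# vendor-specific models described above.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    device_integrity_ok: bool   # e.g. no virtual camera driver detected
    liveness_score: float       # active/passive liveness, 0..1 (higher = more live)
    deepfake_score: float       # dedicated detector, 0..1 (higher = more synthetic)
    face_match_distance: float  # embedding distance to the ID photo (lower = better)

def decide(sig: VerificationSignals) -> str:
    # A hard failure in any single layer is enough to reject: the point of
    # layering is that an attacker must defeat every check simultaneously.
    if not sig.device_integrity_ok:
        return "reject: untrusted camera source"
    if sig.liveness_score < 0.7:
        return "reject: liveness not established"
    if sig.deepfake_score > 0.3:
        return "reject: synthetic-face artifacts detected"
    if sig.face_match_distance > 0.6:
        return "reject: face does not match ID"
    return "accept"

# Example: a feed that matches the ID but shows face-swap artifacts.
print(decide(VerificationSignals(True, 0.92, 0.55, 0.41)))  # -> reject
```

The design point is independence: an attacker who fools the liveness model must still defeat the device check and the deepfake detector at the same time, which is exactly what the Indonesian campaign never had to do.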
From the AI Unmasked Channel
Watch this week's video → "How Deepfakes Are Bypassing Bank Identity Verification"
Coming up next week: We are investigating how AI hiring tools quietly screen out job candidates before a human recruiter ever sees their application. Subscribe so you do not miss it.
"After learning that AI face verification can be fooled with free software, how much do you trust online identity checks to protect your financial accounts?"
Reply to this email — A lot, Somewhat, or Not much at all. We read every response and will share the results next week.