AI-Powered Fraud: The Dark Side of Digital Innovation

Paul Warren-Tape

There continues to be a lot of buzz around threat actors using generative AI to fuel identity attacks and fraud. As the saying goes, “Where there’s smoke there’s fire.” So yes, in our constantly changing digital world, a new threat has emerged: AI-powered identity fraud. 

It’s no secret that as technology advances, so do the tactics of those seeking to exploit it. Let’s explore this growing challenge—and the innovative solutions in development to combat it.

The new frontier of fraud

Gone are the days of simple identity theft. Today’s fraudsters are leveraging generative AI to create sophisticated attacks that can bypass traditional security measures. From organised crime syndicates to tech-savvy individuals, the accessibility of AI tools has broadened the playing field for potential bad actors. 

When you remotely identify an individual, you ask them to provide their government-issued photo identity document and record a short selfie video. These are the two key pieces of evidence an identity verification (IDV) company gets to work with, and they are therefore the targets fraudsters are attacking with generative AI-created deepfakes.

A vast pool of bad actors is now weaponising this type of artificial intelligence to accelerate the pace at which they can commit crimes. And that's just the low end of the spectrum; imagine what's happening at the high end, where organised crime brings enormous financial resources to bear.

The power of neural networks

At the heart of this new wave of fraud are neural networks—AI models inspired by the human brain.

While not as dynamic as our own grey matter, these artificial networks are proving to be powerful tools for generating convincing fake documents and liveness videos.

A deep(fake) dilemma

To review: deepfakes are AI-generated images and videos that are becoming increasingly difficult to distinguish from reality. Even more concerning, studies show that humans actually find deepfake faces more trustworthy than real ones. Let that sink in: we’re hardwired to trust these AI-generated fakes more than the real deal.

Talk about a glitch in the matrix.

In February 2024, an underground online service called OnlyFake, which sells identity documents it claims are generated by AI and “neural networks”, set off a flurry of headlines. One article reported that OnlyFake claims to generate hundreds of documents at once from Excel sheets, up to 20,000 a day. Media reports followed with claims and concerns that OnlyFake documents had been used to bypass KYC and AML checks at prominent crypto and finance platforms.

Image animated using Luma.

Digital document mills on the rise

The proliferation of online services offering fake documents is alarming—and these “template farms” are no longer confined to the dark corners of the internet. 

With a simple search, anyone can access services claiming to generate thousands of fake IDs daily, posing significant challenges for identity verification processes. And they’re not just pumping out driver’s licences; the range of documents they can fake would make your head spin.

Fighting AI with AI

In response to these threats, IDV companies are leveraging AI itself as a defence mechanism (the old good vs evil story). Advanced deep learning models are being deployed to detect the subtle inconsistencies in AI-generated content that human eyes might miss. It’s becoming a technological arms race, with AI on both sides of the battlefield.

The concept that “AI is needed to detect AI” stems from the notion that as generative AI becomes more sophisticated, it surpasses the ability of humans to detect forgeries based on sight alone. AI detection models can process vast amounts of data at high speeds, learning and adapting to new forgery methods. 

These AI detectors are trained to spot the subtlest clues that would be imperceptible to the human eye, making them indispensable tools in the fight against generative AI fraud.
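To make the idea concrete, here is a minimal sketch of what such a detector might look like, assuming a binary image classifier (genuine vs AI-generated) built on a pretrained vision backbone. The model choice, threshold, and file name are illustrative assumptions, not a description of any particular vendor's system; in practice the classifier head would be fine-tuned on large labelled sets of genuine and forged documents and selfies.

```python
# Illustrative sketch only: a binary "genuine vs AI-generated" image classifier.
# Model choice, threshold, and file path are assumptions; the head below is
# untrained and would need fine-tuning on labelled real/forged examples.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a pretrained backbone and replace the head with a single logit.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)  # 1 logit: "is AI-generated"
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def forgery_score(image_path: str) -> float:
    """Return the model's estimated probability that an image is AI-generated."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

# Flag anything above a tuned threshold for manual review.
if forgery_score("submitted_id_photo.jpg") > 0.5:
    print("Possible AI-generated forgery: escalate for review")
```

Real deployments combine many such signals (pixel-level artefacts, document template checks, metadata analysis) rather than relying on a single score.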

The future is coming fast

As we navigate this new era of digital identity, vigilance and innovation are key. The line between real and fake is becoming increasingly blurred, but with continued advances in AI-powered detection technology, we can hope to stay one step ahead of those who would abuse these powerful tools.

The first step for any organisation in stopping fraudsters from committing a crime is preventing them from using fake or fraudulent identity documents in the identity proofing process.

This measure is the first in a layered approach recommended by industry best practice (the NIST Digital Identity Guidelines), which follows a three-step process (sketched in code after the list):

  1. Validate the identity document is genuine using appropriate technologies (like AI) that confirm the integrity of physical security features and that the evidence is not fraudulent or inappropriately modified;
  2. Ensure that the document contains information that is correct by validating against an authoritative source; and
  3. Confirm that the identity relates to a real-life person through liveness assurance and biometric face-matching.
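As a rough illustration of how those three layers could be composed, here is a sketch of an identity-proofing pipeline. The three check functions are hypothetical placeholders standing in for whatever document-fraud detection, authoritative-source lookup, and biometric liveness and face-matching services an organisation actually uses.

```python
# Sketch of a layered identity-proofing pipeline following the three steps above.
# The check functions are hypothetical placeholders, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Evidence:
    document_image: bytes   # photo of the government-issued ID
    selfie_video: bytes     # short selfie/liveness capture

def check_document_integrity(doc: bytes) -> bool:
    """Step 1: confirm security features are intact and the image is not forged or modified."""
    raise NotImplementedError("call your document-fraud / deepfake detection model")

def check_authoritative_source(doc: bytes) -> bool:
    """Step 2: confirm the document's details against an authoritative source."""
    raise NotImplementedError("call the issuing authority or registry API")

def check_liveness_and_face_match(doc: bytes, video: bytes) -> bool:
    """Step 3: confirm a live person is present and matches the document photo."""
    raise NotImplementedError("call your liveness and face-matching service")

def proof_identity(evidence: Evidence) -> bool:
    # Each layer must pass; failing early avoids unnecessary downstream calls.
    return (
        check_document_integrity(evidence.document_image)
        and check_authoritative_source(evidence.document_image)
        and check_liveness_and_face_match(evidence.document_image, evidence.selfie_video)
    )
```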

These AI-powered technologies are part of a larger, dynamic security ecosystem that evolves in response to emerging threats and advances in adversarial tactics. It is vital that we prioritise the continuous improvement of these protective measures, including rapid response to new threats, whilst also ensuring they remain secure, accurate, and trustworthy at all times.

About the post:
Images and videos are generative AI-created. 
Prompt: Five Neo clones in long black coats and sunglasses kung fu fighting each other. Kicking and punching. Green-tinted cyberpunk atmosphere, digital rain in the background. Hyper-detailed, cinematic lighting, ultra-realistic 3D render. Tools: Midjourney, Luma.

About the author:
Paul Warren-Tape is IDVerse’s GM for the APAC region. He has 20+ years of global experience in governance, operational risk, privacy, and compliance, spending the last 10 years in pivotal roles within the Australian financial services industry. Warren-Tape is passionate about helping organisations solve complex problems and drive innovation through encouraging new ideas and approaches, whilst meeting their legislative requirements.
