Deepfake Duo: Presentation & Injection Attacks

Adam Desmond

In the field of digital identity verification (IDV), the rise of deepfake technology has ushered in a host of new challenges and concerns. If you’re someone who’s responsible for protecting your company’s precious data, it’s crucial to grasp the distinctions between two prominent types of identity threats: presentation attacks and injection attacks.

Here’s a brief description of each so we’re on the same page:

A presentation attack is an attempt to deceive a biometric system by presenting fake or altered biometric traits—for instance, using a photo, mask, or video—in order to mimic a legitimate user.

During an injection attack, on the other hand, the attacker injects a deepfake image or video directly into the vendor's biometric system, fooling it into believing that the footage came from the device's camera.

Let’s now go a little further…

Presentation attacks: The art of mimicry

A presentation attack in liveness detection involves the deliberate attempt to deceive biometric systems by mimicking legitimate users’ traits. This can be achieved through various means such as presenting photos, masks, videos, or even sophisticated 3D models to the system. The goal of such an attack is to bypass security measures by creating an illusion of the genuine presence of the user. 

Generative AI (GenAI) can create highly realistic deepfakes, which can be used to perform presentation attacks by showing manipulated videos of legitimate users on another device. These deepfakes can imitate facial expressions, voice patterns, and other biometric traits, making it challenging for standard liveness detection systems to differentiate between real and synthetic inputs. 

Compared to an injection attack, spotting a deepfake presentation attack is a bit easier as the attacker needs to hold another device to display the deepfake video. The physical act of holding and positioning the device often introduces inconsistencies, such as unnatural angles or reflections, that can be recognized by advanced liveness detection systems. 

Additionally, the interaction between the device and the environment, like lighting and movement, can further reveal the deception, making it less convincing and easier to identify as a fake.
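To make this concrete, one widely used family of replay checks looks for the moiré and pixel-grid artifacts that appear when one screen is filmed by another camera. The following is a minimal, illustrative sketch in Python (assuming OpenCV and NumPy are available); the frequency cutoff and threshold are assumptions chosen for demonstration, not values from any production liveness system.

# Minimal sketch: flag frames whose frequency spectrum shows the periodic
# high-frequency energy typical of a re-displayed screen (moire artifacts).
# The radius cutoff and threshold below are illustrative assumptions.
import cv2
import numpy as np

def screen_replay_score(frame_bgr: np.ndarray) -> float:
    """Rough score; higher suggests the frame may be a filmed screen."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Mask out the low-frequency centre: natural scenes concentrate energy
    # there, while screen pixel grids add strong higher-frequency peaks.
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - cy, x - cx)
    high_band = spectrum[radius > min(h, w) * 0.25]
    return float(high_band.mean() / (spectrum.mean() + 1e-9))

# Illustrative usage with an assumed, untuned threshold:
# frame = cv2.imread("capture.png")
# if screen_replay_score(frame) > 0.5:
#     print("possible presentation attack: screen replay artifacts")

In practice, liveness systems combine many such signals — reflections, depth cues, lighting, and movement, as noted above — rather than relying on any single score.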

Injection attacks: Breaching the digital link

In contrast, injection attacks involve the introduction of manipulated data directly into the identity verification system, compromising the integrity of the identity verification process. Instead of altering the external presentation of an individual, this method infiltrates the core data used for authentication.

An injection attack, where false data is introduced into a system to deceive it, is not a new style of attack. But with the advancements in generative AI and deepfakes, the risk to biometric systems has significantly increased. 

Deepfakes can create highly convincing synthetic biometric data that, when injected into a system, elevates the threat level, making it imperative to develop more robust detection and prevention strategies. These deepfakes can either be an image of a fabricated identity document or a video that resembles the photo of a real-life person on an identity document. 

Image animated using Luma.
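A common injection route is a virtual camera driver that presents pre-rendered deepfake footage to the operating system as if it were live hardware. As a purely illustrative defense-in-depth check, the sketch below (assuming a Linux host with V4L2 devices) flags capture devices whose names match well-known virtual-camera software; the name list is an assumption and attackers can rename drivers, so this is a signal, not proof.

# Minimal sketch (assumes Linux/V4L2): flag capture devices whose names
# match common virtual-camera software used to inject pre-recorded video.
# The SUSPECT_NAMES list is illustrative, not exhaustive.
from pathlib import Path

SUSPECT_NAMES = ("obs", "v4l2loopback", "virtual", "droidcam", "manycam")

def suspicious_cameras() -> list[str]:
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        device_name = name_file.read_text().strip()
        if any(s in device_name.lower() for s in SUSPECT_NAMES):
            flagged.append(f"{name_file.parent.name}: {device_name}")
    return flagged

if __name__ == "__main__":
    for entry in suspicious_cameras():
        print("possible injection vector:", entry)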

Synthetic IDs: An emerging threat

Whilst people these days are more aware of deepfake videos, there is a rising risk of attacks that target the document verification process itself. How accessible are these deepfake document forgeries? Very, and not just on the dark web.

There are now hundreds, if not thousands, of sites offering simple, cheap online services that take templates of genuine documents and overlay new personal data using generative AI models. These template farms or document mills aren't obscure dark web sites accessible only to the technically hyper-literate with a Tor address; they're indexed and just a search term away.

The challenge here lies in distinguishing between authentic and manipulated documents, especially as GenAI becomes more sophisticated in replicating physical artifacts. We've reached the point where the quality of these forgeries has surpassed humans' ability to detect them by sight alone, underscoring the need for more sophisticated detection mechanisms to counter this growing threat of AI-generated deception.
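One classic example of such a mechanism is Error Level Analysis (ELA), which recompresses a JPEG and measures where the image responds differently, since freshly overlaid text or photos often recompress unlike the rest of a previously compressed scan. The sketch below (assuming Pillow is installed) is a minimal illustration; the recompression quality and threshold are assumptions, and real document-verification systems layer many stronger, learned signals on top of simple cues like this.

# Minimal sketch: Error Level Analysis (ELA), one classic signal for
# spotting regions (e.g. overlaid text) edited after a document was scanned.
# Recompression quality and threshold are illustrative assumptions.
import io
from PIL import Image, ImageChops

def ela_max_difference(path: str, quality: int = 90) -> int:
    original = Image.open(path).convert("RGB")
    # Recompress and diff: recently edited regions tend to respond to
    # JPEG recompression differently from untouched, already-compressed areas.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    extrema = diff.getextrema()  # per-channel (min, max) tuples
    return max(channel_max for _, channel_max in extrema)

# Illustrative usage: a high max difference can indicate localized edits,
# though genuine scans also vary; treat this as a signal, not a verdict.
# if ela_max_difference("id_front.jpg") > 40:
#     print("document image shows possible post-capture editing")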

The interplay between attack types

Understanding the differences between presentation and injection attacks is crucial, but it’s equally important to recognize their potential interplay, since sophisticated attackers may employ a combination of these techniques to create a comprehensive, convincing false identity.

As identity verification experts, our role extends beyond staying informed about current threats; we must actively participate in the development of robust countermeasures. Collaboration between industry experts, regulators, researchers, and technology developers is key to fortifying IDV systems against both evolving deepfake threats and the ever-present ethical challenges of security, privacy, and bias.

Maintaining a watchful eye

The world of identity verification is facing unprecedented challenges with the proliferation of generative AI technology. Preparedness is about more than discerning the nuances between presentation and injection attacks; it’s about understanding the changing nature of the fraud threat landscape itself.

When we commit ourselves to staying apprised of how bad actors are evolving their fraud techniques, we are better equipped to design and implement effective defenses, ensuring the integrity of digital identities in an increasingly deceptive digital world.

About the post:
Images and videos are generative AI-created. Prompt: A patient in an examination room undergoing an eye exam with an exaggerated, futuristic phoropter. The phoropter is enormous and complex, with countless lenses, dials, and intricate mechanical parts. Steampunk-inspired design elements. Dramatic lighting highlighting the machine’s details. Patient’s eyes visible through the lenses. Tools: Midjourney, Luma.

About the author:
Adam Desmond is the Commercial Head at IDVerse. A seasoned identity professional with extensive commercial experience in risk management and threat mitigation, he has spent over a decade working at technology companies specializing in document verification and biometric solutions, including GBG and Mitek Systems.
