In the spirit of the reveal of the Pantone Color of the Year for 2025 (“Mocha Mousse”), we announce the official Identity Verification (IDV) Trend of the Year for 2025: document presentation attack detection (DPAD). If this sounds familiar, it should—DPAD builds upon the well-established concept of presentation attack detection (PAD), which has been the backbone of biometric liveness detection for years.
For those versed in identity verification, PAD needs little introduction. It’s the critical assessment that determines whether a person presenting themselves for verification is genuinely present and alive—not a mask, photo, or deepfake.
What is document presentation attack detection?
DPAD takes this battle-tested PAD concept and extends it to the document realm, where it performs a similar but distinct type of assessment during document-based identity verification. As part of the document authenticity (DocAuth) validation process, DPAD applies the principles of attack detection to the documents themselves.
DocAuth encompasses an assessment of an identity document presented by an end user to determine whether the document is both real (i.e., genuine) and "live" and present (i.e., in person with the legitimate presenting end user at the time of presentation).
One part of this process involves an analysis by the IDV tech platform to check whether the document has been tampered with, altered, or photoshopped. Document presentation attack detection is a deeper-level scrub that focuses specifically on determining whether the document is authentic (e.g., not a computer-generated deepfake) and whether there has been an attempt to "inject" the document (whether a forged physical document or a deepfake) into the presentation flow.
Why do we need DPAD?
2024 was the coming-out party for deepfake documents, in the news at least. This trend was reported on a broader level in our commercial broadsheets, including a recent report in the Wall Street Journal on the record number of AI-generated scams over the year-end holiday period.
We also heard this in direct conversations with representatives from federal agencies—including the Transportation Security Administration (TSA) and Department of Homeland Security (DHS)—and at the state level (such as the New York State Department of Financial Services, or NYDFS), as well as from our contemporary IDV service providers.
All reports highlight with alarm the huge volumes of fraudulent ID documents that enter and pervade the identity ecosystem due to weaknesses in the ability to detect forged physical and synthetically generated ID documents—all facilitated through injection attacks.
How is DPAD different from other forms of presentation attack detection?
As mentioned above, presentation attack detection is most commonly associated with the liveness phase of an identity verification process. To reiterate, a liveness check determines if an end user is a real person and not a mask, deepfake, screenshot, paper printout, or other presentation attack.
Presentation attack detection as an approach to determine liveness is ubiquitous among IDV vendors. The ISO 30107-3 principles and methods are widely regarded as the gold standard for assessing the performance of biometric PAD mechanisms, with BixeLab and iBeta as two of the better-known laboratories that apply these tests.
More recently, the FIDO Alliance launched its biometric standards, based on the ISO standards, that include PAD testing for liveness. However, as it relates to DocAuth, no ISO or other standards have yet been issued to test for document presentation attack detection; even the FIDO Alliance's DocAuth Certification Program does not cover it.
Is DPAD really new?
Document presentation attack detection may be considered by some as akin to document "liveness," a term that conflates human "liveness" with a document assessment scenario. That said, it is a feature that IDV vendors have asserted over the past few years as part of their tech stack to validate that an ID document is with the end user at the time of presentation.
Given the increased exposure arising from the surge of synthetic generative AI (GenAI) identity documents in the market, DPAD is a more sensible characterization of an IDV system's ability to deflect deepfake and injection attack vectors. To better understand and contextualize the core strength of a DPAD feature, the following levels apply:
- Baseline: The system checks for external features of an identity document that would suggest that the document is real. This may include security features like holograms and watermarks. As a reference, consider the Tier 1-5 descriptors in the FIDO DocAuth Standards.
- Superior: The system performs a forensic engine analysis from an internal file level. This level of detection checks for the upload of a pre-captured or manipulated image and looks at the structure and origination of the ID document’s image to observe if it can be trusted.
- Next Generation: Built on GenAI and computer vision to provide greater precision and performance. Incorporates a proprietary video codec created to further prevent deepfake document and injection fraud attacks. This video format goes beyond video encoding to create a new secure foundation for document presentation attack detection.
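As a purely illustrative sketch (the check names, signals, and fail-fast ordering here are hypothetical assumptions, not any vendor's actual pipeline or API), the layered levels above could be modeled as a cascade of checks that runs from the cheapest external inspection to the deepest forensic one:

```python
# Hypothetical sketch of a layered DPAD assessment. The signal names and
# the pass/fail messages are illustrative assumptions, not a real product.
from dataclasses import dataclass

@dataclass
class DocumentCapture:
    has_security_features: bool  # Baseline: holograms/watermarks detected
    file_forensics_clean: bool   # Superior: no sign of pre-captured/edited image
    capture_stream_clean: bool   # Next Generation: no injection/deepfake artifacts

def assess_document(doc: DocumentCapture) -> str:
    """Run checks from cheapest to deepest; fail fast at the first miss."""
    if not doc.has_security_features:
        return "reject: missing expected security features (baseline)"
    if not doc.file_forensics_clean:
        return "reject: file-level forensics suggest a manipulated image (superior)"
    if not doc.capture_stream_clean:
        return "reject: capture stream shows injection artifacts (next generation)"
    return "pass: document cleared all DPAD levels"

# Example: a capture that passes the baseline check but fails file forensics
print(assess_document(DocumentCapture(True, False, True)))
```

The fail-fast ordering is only one possible design choice; a real system might run all levels and fuse their scores instead.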

What role does responsible AI play in DPAD?
As DPAD systems increasingly rely on artificial intelligence to detect sophisticated forgeries and deepfakes, a crucial question emerges: How do we ensure these AI-powered detection systems themselves are implemented ethically and responsibly? This brings us to the concept of “responsible AI”—a framework for developing and deploying AI systems that prioritize privacy, fairness, and transparency.
In the context of identity verification, responsible AI isn’t just a nice-to-have compliance checkbox. As these systems make decisions that can profoundly impact individuals’ access to essential services, the stakes are simply too high for anything less than a comprehensive commitment to ethical AI practices.
If DPAD is the shield that protects against document fraud, responsible AI is the set of principles that ensures that shield doesn’t become a barrier to legitimate users.
Is “responsible AI” just another jargony term?
While “responsible AI” might sound like another tech industry catchphrase destined for next year’s IDV buzzword bingo card, its implementation in systems represents a critical evolution in how we approach identity verification technology. It’s about building trust not just in the accuracy of our fraud detection, but in the fairness and privacy of our methods.
Think of responsible AI as the digital equivalent of a notary public who not only verifies identities but does so with staunch integrity and without keeping copies of your personal documents in a desk drawer. The approach hinges on two game-changing capabilities:
- Synthetic dataset creation has emerged as the holy grail of privacy-preserving AI development. Rather than hoarding vast collections of actual passport photos and driver’s licenses, advanced IDV systems now generate artificial-but-representative datasets that contain zero actual personal information. It’s like having a perfect practice environment where the AI can learn to spot both legitimate documents and sophisticated fakes without ever touching real customer data.
- Algorithmic bias detection has graduated from a nice-to-have to a must-have feature. In an era where access to financial services and healthcare often begins with digital identity verification, ensuring equitable treatment isn’t just good ethics—it’s good business. Leading IDV providers now employ rigorous mathematical models that undergo continuous testing to ensure they perform consistently across all demographic groups.
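One common form the continuous bias testing above can take is comparing false-rejection rates of genuine users across demographic groups. As a minimal sketch (the group labels and outcome data below are invented for illustration, not real test results):

```python
# Illustrative bias check: per-group false-rejection rates and their spread.
# Group names and outcomes are fabricated example data.
from collections import defaultdict

def false_rejection_rates(results):
    """results: iterable of (group, is_genuine_user, was_rejected) tuples."""
    rejected = defaultdict(int)
    genuine = defaultdict(int)
    for group, is_genuine, was_rejected in results:
        if is_genuine:  # only genuine users can be *falsely* rejected
            genuine[group] += 1
            if was_rejected:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

def max_disparity(rates):
    """Gap between best- and worst-served groups; 0.0 means parity."""
    return max(rates.values()) - min(rates.values())

outcomes = [
    ("group_a", True, False), ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, True), ("group_b", True, True),
]
rates = false_rejection_rates(outcomes)
print(rates, "disparity:", round(max_disparity(rates), 3))
```

A production fairness audit would use far larger samples and additional metrics (e.g., false-acceptance rates per group), but the principle is the same: measure each group separately and alarm on the gap.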
Just as DPAD represents the “color” of 2025, responsible AI might well be considered its complementary shade—essential for creating a complete picture of modern identity verification. The two concepts work in tandem: DPAD provides the security muscle, while responsible AI ensures that muscle is applied ethically and equitably.
In the end, responsible AI in identity verification isn’t just about doing the right thing—it’s about doing things right. And in an industry where trust is currency, that’s a distinction worth its weight in digital gold.
Brighter colors for 2025
The complaints around deepfakes have reached their expiry date. In addition to signing up for a gym membership, chief risk, compliance, and technology officers wishing to fulfill New Year’s resolutions should be assessing their current DPAD posture and leveling up to meet the threat realities.
Unlike their gym membership, however, this decision will see better results in (corporate and customer) strength, health, and longevity—albeit with fewer sweaty towels.
About the post:
Images and videos are generative AI-created. Prompt: A rainbow-colored trail of dust forms a path through interstellar space. In the middle of frame, an Asian female astronaut in a futuristic spacesuit is performing a spacewalk. Her awestruck face is visible. A space station, to which she is tethered, is visible in the distance. She is in awe at the swirling colors around her. Tools: Midjourney, Luma.
About the author:
Terry Brenner is the Head of Legal, Risk, and Compliance for IDVerse Americas. He oversees the company’s foray into this market, heeding to the sensitivities around data protection, inclusivity, biometrics, and privacy. With over two decades of legal experience, Brenner has served in a variety of roles across a diverse range of sectors.