Panic spread among crypto exchanges last month on news that OnlyFake was producing AI-generated fraudulent ID documents capable of passing the Know Your Customer (KYC) checks of multiple onboarding platforms.
In our Shedding Light on OnlyFake blog, we covered some high-level points, including synthetic IDs and how AI can take on AI for successful document authentication (DA). In this feature, we will peel back additional layers on the technical challenges of document fraud and the solutions that can beat back the dragon.
Nothing new under the IDV sun
Parents of US college kids: did you know that part of the unofficial freshman starter kit when Jonny/Jenny settles into his/her dorm is buying a fake ID and having it hand-delivered by a sly and savvy upperclassman? A really high-quality fake can sell for $100 or more.
OnlyFake was one of many fake ID services that remain open for business on Discord, Telegram, and other public messaging boards—there is no longer a need to venture into the “dark web” to find a provider. For context, IDVerse tracks over 100 different sites/providers in this space. We saw our first deepfake in 2015 and first synthetic ID in 2017, so our AI has met their AI in a few dark alleys over the years.
Without disclosing the details of the AI-assisted approach of a bad actor to creating a forged ID document, the IDVerse secret sauce is about employing several strategies to detect and mitigate the risks associated with generative AI fraud, often using AI to detect AI-generated content. It’s fighting fire with fire.
Importantly, fake IDs are not just a tool for teenage drinking or breaching a crypto exchange. Across every vertical, a weak ID verification platform is a gateway to illicit gain, whether that's buying a beer at a concert or phishing a money transfer from your great uncle.
How automated is automated?
One feature that differentiates document authentication ID verification products is automated versus manual review. Traditional technology offers a template-based system that kicks edge cases to a manual/human review process. In our experience with this path, over 25% of users who attempt onboarding may be moved to "please wait while we connect you to a representative" status, with some annoying '80s soundtrack filling the void.
Aside from the time delay for the customer waiting to enter the service provider's domain and the additional HR cost of review teams, there is also a drop-off in accuracy, since:
- The human reviewer does not have the ID document in hand to verify its minute details.
- Unlike AI, human reviewers can tire from the start of the shift to the end.
- The range of expertise among reviewers is not consistent, and the average reviewer's level of experience is not exactly what one would describe as FBI agent-worthy.
IDVerse doesn’t use templating. See fully automated contextual ID verification in action: Book a Demo
Do you know your vendor’s DFAR?
Following from the previous point, a fully automated system is only as beneficial as its ability to monitor accuracy. This requires considering the vendor's approach to compliance, including external and internal independent testing.
There are a few DA certifications a vendor can attain, including the FIDO Alliance Document Authenticity Certification Program. This type of testing should be conducted by credible independent testing laboratories, such as those accredited under the National Voluntary Laboratory Accreditation Program (NVLAP); in other words, even the testers should be properly tested.
Testing metrics will include the solution's document false accept rate (DFAR), which is the proportion of fraudulent documents that are incorrectly accepted, i.e. the fakes that slip through the gates. And of little surprise: a DFAR as close to 0% as possible is optimal.
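To make the metric concrete, here is a minimal sketch of how DFAR is computed from a labeled test run (the function name is ours for illustration, not from any certification spec):

```python
def document_false_accept_rate(accepted_fraudulent: int, total_fraudulent: int) -> float:
    """DFAR: the share of known-fraudulent documents the system wrongly accepted.

    Lower is better; 0.0 means no fakes slipped through the gates.
    """
    if total_fraudulent <= 0:
        raise ValueError("need at least one fraudulent sample to measure DFAR")
    return accepted_fraudulent / total_fraudulent

# Example: 3 of 1,000 known-fake documents were accepted.
print(f"DFAR: {document_false_accept_rate(3, 1000):.1%}")  # prints "DFAR: 0.3%"
```

Note that DFAR is always reported alongside a companion metric such as the false rejection rate on genuine documents, since a system that rejects everything trivially achieves a perfect DFAR.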
Following DA certification, independent laboratory testing should take place regularly (at least annually) to make sure the system stays current against evolving challenges. The solution provider should also perform internal testing of DFAR accuracy on a more frequent basis (ideally monthly or quarterly) to ensure the DA engines are optimized for the real-world environment.
Organizational accountability
Responsibility does not rest just with the ID verification vendor and the accuracy of its solution. Organizations should assess the risks of failing to follow best practices and the cost of fraud to the business and its community of users/customers.
Companies may share the blame when the way they accept ID documents for DA processing becomes the point of weakness that lets an OnlyFake succeed, for instance by tolerating end-user uploads of photocopied ID documents or low-resolution .jpeg images.
One reason a company may follow this lighter path is the perception that higher-resolution capture of an ID document introduces friction into the onboarding process. While properly developed, UX-efficient technology can address that concern, this leniency elevates the risk of missing the fraudulent features of a document.
This is a major point for the company's risk and product departments to find balance on: is avoiding a perceived delay in the customer flow worth the compromised customer accounts that come from accepting file image uploads?
The future is now
There is plenty to be concerned about with OnlyFake and its friends. And when one fake ID platform closes, another will quickly take its place.
But fear not. (Or rather, fear less.) Armed with a combination of generative AI solutions that are objectively assessed externally and internally, plus a best practice company risk approach for onboarding/reboarding processes, organizations can derisk the DA process to elevate the integrity of their identity access management and verification processes.
About the post:
Images are generative AI-created. Prompt: A fresh-faced college kid grinning and presenting a driver’s license with an older person’s picture on it. Tool: Midjourney.
About the author:
Terry Brenner is the Head of Legal, Risk, and Compliance for IDVerse Americas. He oversees the company's foray into this market, heeding the sensitivities around data protection, inclusivity, biometrics, and privacy. With over two decades of legal experience, Brenner has served in a variety of roles across a diverse range of sectors.