In part 1 and part 2 of this blog series, we looked into the inadequacy of passwords as the last line of defense for administrator portals and considered different types of biometric solutions in response.
We will now narrow the lens a little more to consider the differentiating factors among facial biometrics solutions, without losing sight of the original goal: to shut down the weak access points that admin passwords leave open and prevent access to your family’s personal data or, if you are a supply chain vendor, that of your clients and their customers.
Fully automated vs. human-in-the-loop
Standing back from security requirements for a moment, any solution that incorporates remote access to a service or product should serve the customer. Failing to do so means the customer moves to a competitor or leaves the ecosystem. This is why user onboarding and customer experience are so critical.
Through the benefits of generative AI, remote identity verification (IDV) technology can be fully automated from start to finish. “Fully” in this instance does not mean referring a batch of challenged transactions to manual review; it means a yes/no answer in under 60 seconds.
While one could keep a human in the loop or on top as oversight, having no human intervention and no team of humans reviewing IDs translates into efficient and accurate onboarding of customers, removing customer experience friction. It also means better hygiene for the platform, letting genuine users in and keeping bad actors out, with speed.
100% proprietary tech vs. Frankenstein solutions
Most IDV providers do not fully own their own tech; they “Frankenstein” a solution together, using various third-party vendors for even a single engine. An engine refers to one of the four processes of the remote IDV flow, namely:
- Optical character recognition (OCR) of ID document;
- Document authentication;
- Liveness detection; and
- Face matching.
Other providers use multiple vendors for different engines: for example, liveness and face matching from one vendor and OCR and document authentication from another.
A vendor that owns all of its code controls the roadmap, optimizing processing and performance across product features. For example, when performing document authentication on a presented ID document, the end user’s environment (e.g., lighting conditions) can be cross-applied to the liveness test.
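To make this concrete, here is a minimal sketch of how the four engines might be orchestrated in a single stack, with a shared capture context (such as lighting) reused by both document authentication and liveness. All names and interfaces here are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class CaptureContext:
    """Environmental metadata captured once and reused across engines."""
    lighting_lux: float   # estimated ambient light at capture time
    device_model: str     # capture device reported by the client


class OcrEngine(Protocol):
    def extract(self, id_image: bytes) -> dict[str, str]: ...

class DocAuthEngine(Protocol):
    def is_genuine(self, id_image: bytes, ctx: CaptureContext) -> bool: ...

class LivenessEngine(Protocol):
    def is_live(self, frames: list[bytes], ctx: CaptureContext) -> bool: ...

class FaceMatchEngine(Protocol):
    def similarity(self, id_image: bytes, frame: bytes) -> float: ...


@dataclass
class IDVResult:
    approved: bool
    fields: dict[str, str] = field(default_factory=dict)
    reasons: list[str] = field(default_factory=list)


def verify_identity(id_image: bytes, selfie_frames: list[bytes], ctx: CaptureContext,
                    ocr: OcrEngine, doc_auth: DocAuthEngine,
                    liveness: LivenessEngine, face_match: FaceMatchEngine,
                    match_threshold: float = 0.80) -> IDVResult:
    """Run the four IDV engines end to end and return a single yes/no decision."""
    reasons: list[str] = []

    # 1. OCR: extract the text fields from the ID document.
    fields = ocr.extract(id_image)

    # 2. Document authentication, calibrated with the shared capture context
    #    (e.g. poor lighting) instead of re-estimating conditions per engine.
    if not doc_auth.is_genuine(id_image, ctx):
        reasons.append("document failed authentication")

    # 3. Liveness: confirm the selfie comes from a live person, reusing the
    #    same environmental calibration.
    if not liveness.is_live(selfie_frames, ctx):
        reasons.append("liveness check failed")

    # 4. Face matching: compare the selfie with the ID portrait.
    if face_match.similarity(id_image, selfie_frames[0]) < match_threshold:
        reasons.append("face does not match document portrait")

    return IDVResult(approved=not reasons, fields=fields, reasons=reasons)
```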
Another benefit is that customer feedback and challenges are resolved more quickly by the vendor. This can mean improvements in fraud detection flowing through the product weekly (where the tech is proprietary, owned, and controlled) rather than quarterly or longer (where the tech is provided by third parties).
Biometric & liveness accuracy
One challenge for an organization is to assess the accuracy and effectiveness of a liveness solution. There are a number of ways to accomplish this.
A starting point is iBeta testing against the ISO/IEC 30107-3 presentation attack detection (PAD) standard for liveness (biometrics/facial recognition). iBeta is a NIST/NVLAP-accredited laboratory. This lab accreditation reinforces the integrity (and independence) of the testing agency and underpins the strength of the results it affirms.
iBeta’s results are published publicly for companies to assess. Consider the following:
- Level 2 certifications employ significantly more presentation attacks over longer testing times than Level 1 certifications.
- The version of the vendor technology that achieved the certification: v1.0, for instance, suggests the provider had success fresh out of the gate.
- The date of the report—older reports imply more experience in the field.
Finally, looking at the list of certified vendors, one may not recognize some of the names, which reinforces the point above about IDV vendors that aggregate third-party technology.
One-off iBeta ISO/IEC 30107-3 testing is not enough. Engine performance should be assessed regularly, at least annually, and again by independent NVLAP-accredited laboratories. More frequent internal reporting (for example, monthly or quarterly) across the vendor’s company-wide transactions is a useful indicator of how seriously the vendor takes the monitoring and improvement of its engine performance.
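As a rough illustration of what such recurring reporting could track, the sketch below computes the two headline ISO/IEC 30107-3 error rates from a batch of labelled liveness transactions: APCER (attack presentations wrongly accepted) and BPCER (bona fide presentations wrongly rejected). The data and structure are hypothetical, not any vendor’s actual reporting pipeline.

```python
from dataclasses import dataclass


@dataclass
class LivenessOutcome:
    is_attack: bool   # ground truth: True if this was a presentation attack
    accepted: bool    # engine decision: True if the presentation passed liveness


def pad_error_rates(outcomes: list[LivenessOutcome]) -> tuple[float, float]:
    """Return (APCER, BPCER) for a batch of labelled liveness transactions.

    APCER: attack presentations wrongly classified as bona fide.
    BPCER: bona fide presentations wrongly classified as attacks.
    """
    attacks = [o for o in outcomes if o.is_attack]
    bona_fide = [o for o in outcomes if not o.is_attack]

    apcer = sum(o.accepted for o in attacks) / len(attacks) if attacks else 0.0
    bpcer = sum(not o.accepted for o in bona_fide) / len(bona_fide) if bona_fide else 0.0
    return apcer, bpcer


if __name__ == "__main__":
    # Toy monthly batch: 3 attacks (1 slipped through), 5 bona fide (1 wrongly rejected).
    batch = [LivenessOutcome(True, False), LivenessOutcome(True, False), LivenessOutcome(True, True),
             LivenessOutcome(False, True), LivenessOutcome(False, True), LivenessOutcome(False, True),
             LivenessOutcome(False, True), LivenessOutcome(False, False)]
    apcer, bpcer = pad_error_rates(batch)
    print(f"APCER={apcer:.1%}  BPCER={bpcer:.1%}")  # APCER=33.3%  BPCER=20.0%
```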
Bias in facial matching
Federal governments in the US and other regions have been sharpening their language around inclusive technology. Demographic bias has historically been a major issue in the ID verification industry, with some companies delivering roughly 50-70% accuracy due to failure to recognize end users of certain ethnicities (darker skin tones showing more discrepancies), genders (women being verified less accurately), and ages (under 21 and over 65 failing more often).
As of this writing, some US federal agencies are taking the lead in performing intensive testing of commercial IDV solutions. This includes the Remote Identity Validation Technology Demonstration (RIVTD) led by the US Department of Homeland Security (DHS) Science and Technology Directorate, together with NIST and other federal agencies. Results from this study will be released in Q1 of 2024 and will include data on how solutions differ with respect to bias.
There are also some global certification authorities developing “bias-testing” programs to standardize how a vendor’s technology is assessed for zero bias. These are expected to launch for IDV vendor testing by Q4 2024.
Ahead of the release of the DHS results and the global certification programs mentioned above, forward-thinking vendors can engage independent (NVLAP-accredited) laboratories to test their liveness product for bias. A large number of test subjects across a wide range of ethnicities, genders, and age groups is recommended to simulate a real-world verification scenario.
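For illustration only, a bias report of this kind might break false rejection rates out by demographic cohort so that gaps between groups become visible. The cohort labels, data, and field names below are hypothetical, not a description of any laboratory’s methodology.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class VerificationRecord:
    group: str        # demographic cohort label, e.g. "female_21_35"
    genuine: bool     # ground truth: this was a genuine user
    accepted: bool    # system decision


def false_rejection_by_group(records: list[VerificationRecord]) -> dict[str, float]:
    """False rejection rate (genuine users wrongly rejected) per demographic cohort."""
    totals: dict[str, int] = defaultdict(int)
    rejections: dict[str, int] = defaultdict(int)

    for r in records:
        if r.genuine:                      # only genuine attempts count toward FRR
            totals[r.group] += 1
            if not r.accepted:
                rejections[r.group] += 1

    return {g: rejections[g] / totals[g] for g in totals}


if __name__ == "__main__":
    # Hypothetical test data: a fair system should show comparable rates across cohorts.
    sample = (
        [VerificationRecord("cohort_a", True, True)] * 97
        + [VerificationRecord("cohort_a", True, False)] * 3
        + [VerificationRecord("cohort_b", True, True)] * 90
        + [VerificationRecord("cohort_b", True, False)] * 10
    )
    for group, frr in false_rejection_by_group(sample).items():
        print(f"{group}: FRR={frr:.1%}")   # cohort_a: 3.0%, cohort_b: 10.0%
```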
Back to the kitchen
Getting back to where this blog series began, it was in the kitchen where I first read of my children’s personal data having been compromised. That seems to be the best place for companies to hit refresh and start again: What are the ingredients in your IDV solution? And how do they combine to shut down the imposter and nourish your customer engagement?
About the post:
Images are generative AI-created. Prompt: A Kabuki theater scene with actors wearing many different masks, the stage is in a beautiful forest clearing. Tool: Midjourney.
About the author:
Terry Brenner is the Head of Legal, Risk, and Compliance for IDVerse Americas. He oversees the company’s foray into this market, heeding the sensitivities around data protection, inclusivity, biometrics, and privacy. With over two decades of legal experience, Brenner has served in a variety of roles across a diverse range of sectors.