Not All Identity Verification Solutions are Equally Biased

Peter Violaris

With digital services gaining ever more ground on in-person customer interactions, remote identity verification (IDV) stands as a linchpin for secure transactions—and you, as a buyer of the technology, need to understand the nuanced biases that IDV might bring to your business. This blog explores the multifaceted nature of bias in identity verification, focusing particularly on poverty bias and algorithmic bias. 

Understanding these issues is not only a matter of compliance; it also ensures that your service, whatever it may be, is accessible to the whole of society, thereby reducing drop-off and increasing revenue.

Poverty bias in remote IDV

The first layer we peel back is the profound impact of poverty bias on remote identity verification. In an era where the latest smartphones and high-speed internet connections are often considered necessities, it’s essential to recognize that not everyone has equal access. 

Many liveness tests in modern remote IDV solutions require top-of-the-range smartphones and high-speed internet access to work effectively. This creates two risks: it makes it easier for fraudsters to spoof (by deliberately using cheap handsets and poor connections to degrade the liveness check), and it introduces a form of bias if the check is much harder for legitimate users on such devices to pass.

When buying an identity verification solution, make sure you ask the vendors how their technology adjusts for users without the latest smartphones or with poor internet connections, and make sure you test thoroughly how the solution works across the range of devices. 
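One practical way to run that test is to bucket your trial verification outcomes by device tier and connection quality and compare pass rates across segments. A minimal sketch of that aggregation follows; the device tiers, network labels, and sample runs are illustrative assumptions, not part of any vendor's API:

```python
from collections import defaultdict

def pass_rates_by_segment(results):
    """Aggregate verification outcomes into per-segment pass rates.

    `results` is a list of (device_tier, network, passed) tuples
    collected from your own test runs -- hypothetical data, not a
    vendor API.
    """
    totals = defaultdict(lambda: [0, 0])  # segment -> [passed, attempts]
    for device_tier, network, passed in results:
        segment = (device_tier, network)
        totals[segment][1] += 1
        if passed:
            totals[segment][0] += 1
    return {seg: passed / attempts for seg, (passed, attempts) in totals.items()}

# Illustrative test runs across low-end and high-end devices.
runs = [
    ("low_end", "3g", False), ("low_end", "3g", True),
    ("low_end", "wifi", True), ("low_end", "wifi", True),
    ("high_end", "wifi", True), ("high_end", "wifi", True),
]
rates = pass_rates_by_segment(runs)
# A large gap between segments is a red flag to raise with the vendor.
print(rates[("low_end", "3g")])    # 0.5
print(rates[("high_end", "wifi")])  # 1.0
```

With enough runs per segment, a markedly lower pass rate on low-end devices or slow connections is exactly the kind of evidence to put in front of a vendor.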

Unpacking skin tone, age, and gender

Today, algorithms play a pivotal role in IDV, and they are certainly not immune to bias. Algorithmic bias, particularly concerning skin tone, age, and gender, poses ethical challenges that demand our attention. Not all algorithms are created equally. 

Face-matching algorithms have earned a bad reputation for bias, particularly for not working as well on dark-skinned persons. The technology has made vast improvements over the past decade in this aspect when it comes to selfie-to-selfie matching in good lighting conditions. But you need to ask your vendor how their selfie to identity document image matching—which is much harder than selfie-to-selfie—fares in poor lighting conditions in terms of bias. 

It is not just face-matching algorithms that can contain bias. Liveness algorithms can suffer from the same issues, being more likely to pass light-skinned users than dark-skinned ones.

Look for a vendor which takes bias very seriously, both in its face matching and liveness technologies, and is striving to meet Zero Bias. Ask your vendors about their bias certifications or reports. 
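When reviewing a vendor's bias report or running your own trials, the core figure to compare is the false rejection rate per demographic group, along with the ratio between the worst- and best-served groups. A minimal sketch, where the group labels and counts are invented for illustration and not drawn from any real vendor report:

```python
def false_rejection_rates(outcomes):
    """Per-group false rejection rate: genuine users wrongly failed.

    `outcomes` maps group -> (falsely_rejected, genuine_attempts).
    Figures are illustrative, not from any real vendor report.
    """
    return {g: rejected / attempts for g, (rejected, attempts) in outcomes.items()}

def disparity_ratio(rates):
    """Ratio of the worst group's rate to the best group's rate.

    A ratio near 1.0 suggests similar treatment across groups;
    a large ratio is a bias finding to challenge the vendor on.
    """
    worst, best = max(rates.values()), min(rates.values())
    return worst / best if best > 0 else float("inf")

outcomes = {
    "group_a": (2, 1000),  # 0.2% false rejection rate
    "group_b": (8, 1000),  # 0.8% false rejection rate
}
rates = false_rejection_rates(outcomes)
print(disparity_ratio(rates))  # roughly 4.0: group_b is failed four times as often
```

Asking a vendor for these per-group numbers, rather than a single headline accuracy figure, makes it much harder for bias to hide in an average.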

Some vendors own their own algorithms, but many do not. Those who own their own algorithms can react to bias issues that surface very quickly. Those who buy in third-party technology often have to wait for months for an improvement to a flagged issue. 

Picking up the pieces

If one of your consumers is incorrectly failed by bias in your vendor’s solution, then one of two things will happen: (i) the consumer drops off there and then, meaning lost revenue; or (ii) you now have to fall back to your manual processes for the consumer to be passed. 

Manual checks are expensive for you to operate and result in a poor and slow user experience for your consumers. With modern deepfakes and sophisticated fraudsters, manual checks let in more fraud too. 

So if you are not taking the time to reduce bias through careful selection of your vendors, you are both letting in more fraud and paying more.

Stay informed

As buyers of remote identity systems, you ultimately bear the responsibility of mitigating bias on your platforms. 

Our journey through the state of bias in remote identity verification unveils challenges that demand our attention as conscientious buyers. By acknowledging and addressing bias, buyers can arm themselves with the right challenges to put to vendors and know what to look for when testing vendors' systems.

This commitment not only aligns with regulatory requirements; it also reflects the good practice needed to earn your consumers' trust.

About the post:
Images are generative AI-created. Prompt: A multi-level medal platform, first, second, and third place, futuristic athletes in futuristic uniforms, first place winner is holding up an elaborate trophy. Tool: Midjourney.

About the author:
Peter Violaris is Global DPO and Head of Legal EMEA for IDVerse. Peter is a commercial technology lawyer with a particular focus on biometrics, privacy, and AI learning. Peter has been in the identity space for 6 years and before that worked for London law firms.
