How your own image can be used against you by fraudsters
It sounds like the old joke – ‘Does your face hurt? Because it’s killing me!’ – but it’s not a laughing matter.
The truth is, anyone’s own face can now be used against them – causing damage ranging from reputational to financial. We’ve all seen convincing videos of recognizable people saying strange things – and later learned that these are deepfakes, computer-generated and made without the knowledge or consent of the people depicted.
Financial loss is another very real possible outcome of someone faking your face. You use your face to unlock your iPhone, for example, and perhaps also the banking app you keep on that phone. And Face ID on your iPhone is a relatively low barrier to access compared to the biometric solutions you might encounter when carrying out a riskier transaction.
So the question becomes: what’s stopping someone from stealing your face to use it against you, causing damage?
How to tell if your biometric technology is secure
Every day I hear about another company adopting biometric technology, including facial recognition, for some part of their processes. What I hear less about is how secure these different solutions are.
There are international standards that govern things like facial recognition technology.
The International Organization for Standardization (ISO) sets standards for how everything from your food to your data should be handled. Among them is ISO 30107-3, a standard that outlines the requirements and testing procedures for Presentation Attack Detection (PAD) systems.
What are PAD systems?
Simply put, PAD systems are anti-spoofing tech. They're used to detect everything from deepfakes to masks, stopping fraudsters from doing everything from fraudulently opening bank accounts to catfishing on dating apps – provided the platform uses them.
You may have also heard PAD systems described as “liveness” checks, which refers to checking whether a user is a real, live person rather than one of the aforementioned forms of deception.
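To make the “liveness” idea concrete, here is a minimal toy sketch of one classic liveness signal: blink detection. A static photo shows the same eye state in every camera frame, while a live person blinks at least once during the check. The function name, input format, and logic are illustrative assumptions for this post, not any vendor's actual algorithm.

```python
def passes_liveness(eye_open_per_frame: list[bool]) -> bool:
    """Toy blink-based liveness check.

    Takes per-frame eye-state detections (True = eyes open) and
    passes only if the state changes at least once, i.e. the
    subject blinked. A printed photo never changes state.
    """
    return len(set(eye_open_per_frame)) > 1

# A photo held up to the camera: eyes "open" in every frame.
print(passes_liveness([True, True, True, True]))        # False

# A live user who blinks partway through the check.
print(passes_liveness([True, True, False, True, True]))  # True
```

Real PAD systems combine many such signals (texture, motion, reflections, depth) rather than relying on any single cue like this.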
Who governs the security testing?
Testing for conformity to ISO 30107-3 is conducted by iBeta Quality Assurance, probably the most accomplished biometrics test lab in the world – with accreditations from government and private entities.
The ISO 30107-3 test from iBeta is split into two levels, 1 and 2, representing increasing levels of security that PAD systems offer.
- Level 1 PADs are designed to detect basic attempts to deceive a biometric system. They typically use basic image analysis techniques to detect the presence of a physical object, such as a face, in front of the sensor. Level 1 systems are easier to bypass or spoof with simple attacks, like a photo or a paper mask of a person’s face.
- Level 2 PADs are engineered to detect far more advanced attempts at biometric trickery, such as a 3D-printed replica of a person’s face or Hollywood-level deepfakes. These systems use advanced image analysis techniques, including depth sensing and 3D modeling. To obtain conformity, an error rate of up to 15% is allowed (previously 20%).
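The depth-sensing idea behind Level 2 systems can be sketched in a few lines. A flat photo or screen held up to a depth camera shows almost no variation in distance across the face, while a real face has a nose, brow, and cheeks at noticeably different depths. The function, input format, and threshold below are illustrative assumptions, not any certified vendor's algorithm.

```python
from statistics import pvariance

def looks_live(depth_map: list[float], min_variance: float = 4.0) -> bool:
    """Toy depth-based check: does the face have enough 3D relief?

    depth_map holds sampled distance readings (in cm) across the face.
    A flat spoof yields near-zero variance; a real face varies.
    """
    return pvariance(depth_map) >= min_variance

# A printed photo: every sample sits at roughly the same distance.
flat_photo = [30.0, 30.1, 29.9, 30.0, 30.05]

# A real face: nose tip closer, cheeks and brow farther away.
real_face = [28.0, 31.5, 25.0, 33.0, 27.5]

print(looks_live(flat_photo))  # False
print(looks_live(real_face))   # True
```

In practice a depth cue like this is only one input among many; it is the vendor's AI models, discussed below, that turn such raw signals into a pass/fail decision.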
Everyone is limited by the tech their PAD systems can work on – iPhones, webcams, or other cameras – meaning they all have the same amount of biometric data available to process. We’re the only company to pass Level 2 testing with a perfect score on the first attempt.
So why do some perform better than others? The short answer is that each vendor has their own artificial intelligence (AI) and machine learning (ML) algorithms they use when interpreting the biometric data collected by cameras and sensors, and not all AI is equal.
Do you know how your current vendor conforms to ISO 30107-3? Get in touch if you have any questions!