
Is the Deepfake Threat in Digital Identity Verification Real or Fake?

Paul Warren-Tape

There has been a lot of hype recently about the threat of deepfakes in digital identity verification (IDV). Deepfakes get a lot of airtime, which, ironically, is what most of them are designed to do. But is the threat of deepfakes more than just clickbait?

In this article, we will explain what deepfakes are and analyse how much of a threat they pose to IDV.

What are deepfakes?

Deepfakes are videos that have been manipulated to present an individual saying or doing something they didn’t say or do. Deepfake software uses AI, neural networks, and machine learning to create very convincing videos. This software is easy to use and open source, so developers and sophisticated fraudsters can refine it for their own purposes.

Deepfake services are also offered on the dark web. These can be relatively cheap, and the sellers also offer advice on how to compromise security measures around identity verification.

How are deepfakes used?

You probably associate deepfakes with celebrities, whose likenesses are manipulated as clickbait and to influence their fans. Many celebrity deepfakes appear in pornography, as a famous name draws more viewers and drives more revenue.

Some countries have tried to push back against this. In South Korea, an online petition reading “Please strongly punish the illegal deepfake images that cause female celebrities to suffer” gathered more than 375,000 signatures. The campaign urges the Korean government to act against deepfake pornography that transplants celebrities into explicit images and videos.

A few countries, such as the U.S., Australia, and France, have laws against the spread of misinformation and content like deepfakes. But the effectiveness of these approaches is not yet clear: firstly, because the laws are new; and secondly, because digital data is inherently global, making international cohesion on these regulations very hard to achieve.

Incidentally, South Korea also saw another notable misuse of AI. The company Scatter Lab faced public criticism after its AI-powered chatbot, originally built to engage Facebook Messenger users, began to malfunction and send explicit messages after it was manipulated into repeating insults and homophobic remarks.

What is the threat to digital identity verification from deepfakes?

A recent industry report claimed that deepfakes were a huge threat to online security, and that over half of us worry about deepfakes being used for identity theft. IDVerse has been testing for the threat of deepfakes for over two years, and we have a very different view.

You may be worried about your identity being stolen using deepfakes, but we will detail a few reasons why the threat of deepfakes in IDV isn’t what it is hyped up to be…

Reason #1 – Lack of accessible media

Celebrities tend to be used in deepfakes because training the algorithms that generate a fake video of a person requires thousands of high-quality video, photo, and audio samples of that person. It is almost impossible to obtain these samples for someone without significant media exposure. So even if a fraudster somehow obtained some digital media of you, the resulting deepfake would probably be very poor.

Reason #2 – Deepfakes can’t outsmart computer vision

Training computer systems to see and analyse information in the same way humans do is called “computer vision”. Improving computer vision is something we have worked hard on over the last year, and we are proud to be world leaders in this field. We have created a deep neural network that gives our software engine human-like reasoning: essentially, it sees things and analyses them the way a human would.

When our system sees a video feed, it interprets the video frame by frame, running calculations over the frames and the objects within them. The neural network extracts features from the video feed and calculates risk levels and probabilities to arrive at a pass or fail outcome.
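To make that structure concrete, here is a minimal sketch of how per-frame risk scores might be aggregated into a pass/fail outcome. The stand-in model, the preprocessing, and the 0.5 threshold are illustrative assumptions for this post, not a description of IDVerse’s production engine.

```python
# Illustrative sketch only -- not IDVerse's production engine.
# A stand-in per-frame classifier emits a "manipulation risk" score,
# and the per-frame scores are aggregated into one pass/fail outcome.
import torch
import torch.nn as nn
from torchvision import models, transforms

class FrameRiskScorer(nn.Module):
    """Scores a single video frame for manipulation risk in [0, 1]."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # feature extractor
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.backbone(frame))  # risk probability

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def assess_video(frames, scorer: FrameRiskScorer, threshold: float = 0.5) -> str:
    """Aggregate per-frame risk scores into a pass/fail decision.

    `frames` is an iterable of HxWx3 uint8 numpy arrays; `threshold`
    is an assumed cut-off, chosen purely for illustration.
    """
    scorer.eval()
    with torch.no_grad():
        risks = [scorer(preprocess(f).unsqueeze(0)).item() for f in frames]
    mean_risk = sum(risks) / len(risks)
    return "fail" if mean_risk > threshold else "pass"
```

A real engine would weigh temporal signals as well, but even this simple mean-of-frames structure shows how frame-level features roll up into a single decision.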

Even if a deepfake is very sophisticated and created from lots of available media, there will still be subtle telltale signs that it isn’t real, and our computer vision will pick them up in the same way a human would. For example, a key element in the video is the background and how the face links to it. IDVerse’s AI can detect that the background is an embedded static image, something common in deepfakes. Lighting is another telltale aspect: our system will spot when a shadow in the room affects the rest of the scene but not the face.
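As a rough illustration of the static-background signal, a check along these lines could mask out the face and measure how much the remaining pixels change between frames. The Haar-cascade face detector and the motion threshold below are assumptions for the sketch, not IDVerse’s actual method.

```python
# Illustrative sketch of a static-background check -- an assumed
# approach for illustration, not IDVerse's actual method.
import cv2
import numpy as np

def background_is_static(frames, motion_threshold: float = 1.0) -> bool:
    """Flag a suspiciously static background.

    Masks out the face region using OpenCV's stock Haar cascade, then
    measures inter-frame pixel change in what remains. A live scene has
    small camera/scene motion everywhere; a pasted-in static image does not.
    """
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    diffs = []
    prev_gray = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = np.ones_like(gray, dtype=bool)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray):
            mask[y:y + h, x:x + w] = False  # ignore the face itself
        if prev_gray is not None and mask.any():
            diff = cv2.absdiff(gray, prev_gray)
            diffs.append(diff[mask].mean())
        prev_gray = gray
    # Near-zero background motion across the clip suggests an embedded image.
    return bool(diffs) and float(np.mean(diffs)) < motion_threshold
```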

The key to good computer vision is training the system so that it stays light and fast. If the software picks up every detail on the screen, it will take too long to process and interpret the data. The skill is to recognise the telltale signs and train the neural networks to look out for them. For instance, the system has seen certain twitches on people’s faces so many times that it will recognise when a twitch on a deepfake is abnormal.
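The “light and fast” principle can be illustrated with a simple preprocessing step: sample and downscale frames before scoring them, so the system inspects far less data than the raw stream. The sampling rate and target size below are arbitrary assumptions for the sketch.

```python
# Illustrative preprocessing to keep analysis light and fast; the
# sampling rate and resolution are arbitrary assumptions, not tuned values.
import cv2

def sampled_frames(video_path: str, every_nth: int = 5, size=(224, 224)):
    """Yield every Nth frame, downscaled, so the scorer touches far
    less data than the raw stream while keeping temporal coverage."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            yield cv2.resize(frame, size)
        index += 1
    capture.release()
```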

Training systems to do this is really hard, as a lot of pre-processing and technical work has to happen in the background to make it fast. We rewrote our whole software stack from the ground up to improve this capability. We also develop all our technology in-house, so our computer vision spans every aspect of the process, including document recognition.

Reason #3 – Deepfakes can’t hack the system

Most deepfakes are presented by a fraudster holding up a screen showing the fake video. For the reasons above, our system would immediately spot the abnormalities in this scenario. The other scenario worth considering is whether the system could be hacked and the live video stream replaced by a deepfake; in other words, a hacker accessing the routing to the video camera and injecting the deepfake video. But if your system is vulnerable to a “man-in-the-middle” attack like this, you have bigger problems than deepfakes.

At IDVerse, we develop every aspect of the process ourselves:

  • Identity document recognition
  • Document fraud assessment
  • Liveness detection
  • Video fraud assessment
  • Face matching

Each step is interwoven and contained within the same neural network. The neural network tokenises the video stream and handshakes the front end with the back end to validate it. A hacker can’t isolate the video-stream component and attack it without corrupting every aspect of the process and failing the identity verification.
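As a loose analogy for this kind of tokenised handshake (the keyed-MAC scheme below is our illustrative assumption, not IDVerse’s actual protocol), imagine each video chunk being tagged with a MAC bound to a per-session secret and its sequence number, so a stream injected mid-flight fails validation.

```python
# Rough analogy for tokenising a video stream -- an assumed scheme
# for illustration, not IDVerse's actual protocol.
import hmac
import hashlib
import os

def start_session() -> bytes:
    """Front end and back end agree on a per-session secret at handshake."""
    return os.urandom(32)

def sign_chunk(session_key: bytes, sequence: int, chunk: bytes) -> bytes:
    """Tag each chunk with a MAC bound to its position in the stream."""
    message = sequence.to_bytes(8, "big") + chunk
    return hmac.new(session_key, message, hashlib.sha256).digest()

def verify_chunk(session_key: bytes, sequence: int,
                 chunk: bytes, tag: bytes) -> bool:
    """A chunk injected mid-stream (or reordered) fails verification,
    so the whole identity check fails rather than one isolated step."""
    expected = hmac.new(session_key, sequence.to_bytes(8, "big") + chunk,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Because each tag covers both the session secret and the chunk’s position, an attacker who swaps in a deepfake stream cannot produce valid tags, and the verification fails as a whole.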

The deepfake threat to digital identity verification is fake.

Hopefully, we have put your mind at ease about the threat of deepfakes in IDV, as we have laid out three reasons why we aren’t concerned by them:

  • It is very difficult for a fraudster to create a deepfake of you or any normal person, as there just isn’t the media available to train the algorithms.
  • The computer vision we have developed looks at the video stream the way a human would, and analyses the context to spot telltale signs of deepfakes.
  • IDVerse’s IDV flow is containerised, so a single component such as the video stream can’t be hacked in isolation.

While we are not concerned about the threat of deepfakes, we still take them seriously. We will continue to test for them by creating our own deepfake videos and refining our computer vision to spot them. But clients of IDVerse can rest easy knowing we have detected 100% of the deepfakes presented to us.

If you want to explore any of the topics in this blog, or are seeing deepfakes enter your identity verification process, then please reach out to us at hello@idverse.com or request a demo.

We’re more than happy to chat about the state of the industry and if it’s a good fit, show you how our technology could overcome the challenges you are facing.
