In an era of generative AI, deepfakes, and sophisticated fraud attacks, the challenge for any identity verification (IDV) provider is assuring the quality of its fraud detection when human reviewers themselves cannot spot the fakes.
We cannot assure the quality of our fraud detection capability without a source of truth (i.e. without being told in advance whether a transaction is genuine or fraudulent), and there is no sure way of establishing that truth with a human reviewer.
Our approach to address this conundrum is four-fold:
- Working closely with clients;
- Active research by AI engineers;
- Working to a model governance framework; and
- Independent testing of the fraud detection engines.
Working with clients
The majority of our clients are in the financial services industry, and they rely on our services to keep identity fraud out while meeting their compliance obligations. Clients will often have a human review process, either just on transactions we flag as “high risk” or on all transactions.
A key advantage of developing and owning our own engines is that whenever a client flags a transaction that has not given the expected result, we are able to review the issue, retrain the model, and release an updated version. We can do this incredibly quickly by using our own generative AI engines to reproduce the particular issue or new fraud attack and then releasing the retrained model to all clients.
Our CTO tells us we can usually react within 24-48 hours, and that we make updates on a daily basis. Furthermore, because the engines are served by API, all clients get the benefit of the improved model immediately.
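To make that workflow concrete, here is a minimal sketch of the feedback loop in Python. Every name in it (reproduce_attack, retrain, deploy) is an illustrative placeholder, not our production API:

```python
"""Minimal sketch of the client-feedback retraining loop described
above. All names and values are illustrative placeholders, not
IDVerse's actual internal API."""

from dataclasses import dataclass


@dataclass
class FraudModel:
    version: int = 1
    training_size: int = 0


def reproduce_attack(flagged_sample: dict, n_variants: int) -> list[dict]:
    # Placeholder: a generative engine would synthesise many labelled
    # variants of the flagged attack for the model to learn from.
    return [{**flagged_sample, "variant": i, "label": "fraud"}
            for i in range(n_variants)]


def retrain(model: FraudModel, new_samples: list[dict]) -> FraudModel:
    # Placeholder: stands in for a real training run.
    return FraudModel(version=model.version + 1,
                      training_size=model.training_size + len(new_samples))


def deploy(model: FraudModel, endpoint: str) -> None:
    # Because the engine is served over an API, releasing a new
    # version makes it available to every client at once.
    print(f"Deployed model v{model.version} to {endpoint}")


def handle_client_flag(flagged_sample: dict, model: FraudModel) -> FraudModel:
    variants = reproduce_attack(flagged_sample, n_variants=500)
    new_model = retrain(model, variants)
    deploy(new_model, endpoint="/v1/fraud-check")
    return new_model


handle_client_flag({"transaction_id": "txn-123"}, FraudModel())
```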
Active research
Our AI engineers monitor advances in deepfake websites on both the regular web and the dark web. If they come across a new fraud technique that our solution cannot detect, we retrain the model in the same way described above.
We are currently tracking over 120 websites that generate deepfake documents, using AI to fight AI. This active investment in our fraud detection capabilities helps us stay ahead of the curve; we are certainly not sitting around waiting to be told our solution has holes.
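A simplified view of that sweep, with placeholder names and toy data standing in for the real harvesting pipeline, might look like this:

```python
"""Toy sketch of the monitoring sweep described above. Harvested
samples from tracked sites are fraudulent by construction, so any
sample the current engine fails to flag marks a gap to close via
the retraining workflow shown earlier. Names are illustrative."""


def detect(sample: dict) -> bool:
    # Placeholder for the fraud engine; True means flagged as fake.
    return sample.get("known_technique", False)


def sweep(harvested_samples: list[dict]) -> list[dict]:
    # Return every harvested fake the engine currently misses.
    return [s for s in harvested_samples if not detect(s)]


misses = sweep([
    {"source": "tracked-site-a.example", "known_technique": True},
    {"source": "tracked-site-b.example", "known_technique": False},  # new technique
])
print(f"{len(misses)} undetected sample(s) queued for retraining")
```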

Model governance
Our model governance framework ensures our IDV solutions are safe, secure, and reliable, and that the right decisions are made consistently. We have employed the NIST AI Risk Management Framework because it provides a structured approach to identifying, assessing, and mitigating the risks associated with AI systems. This framework helps establish that our AI models are robust against fraud attacks while accurately verifying real identities.
In adhering to NIST guidelines, we address potential biases, enhance transparency, and maintain data privacy, ultimately leading to more trustworthy and effective identity verification solutions that balance security with user experience. This approach is particularly important in an era where sophisticated fraud attempts are on the rise, and regulatory scrutiny of AI applications is increasing.
The model governance framework includes “controls about gaming”: the strategies implemented to prevent or mitigate the manipulation or exploitation of AI systems by malicious actors. These include, but are not limited to, adversarial training, monitoring and alerting, regular updates and retraining, explainable AI, anomaly detection, and human oversight; one of these, anomaly detection, is sketched below.
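As a toy illustration of that control, the check below flags an unusual spike in retry attempts from one client, which can indicate an attacker probing the engine. The scenario, numbers, and threshold are invented for the example, not our production monitoring:

```python
"""Toy anomaly-detection control: flag a day whose retry count sits
far above the historical baseline, then escalate to human oversight.
All figures are invented for illustration."""

from statistics import mean, stdev


def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    # Flag today's count if it is more than z_threshold standard
    # deviations above the historical mean.
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (today - mu) / sigma > z_threshold


baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5] * 3  # a stable month of retry counts
print(is_anomalous(baseline, today=42))  # True -> alert and human review
```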
Please speak to your Account Executive to learn more about our model governance processes. There is also a blog about it here.
Independent testing
Each of the core technologies is independently tested by an accredited biometric testing lab at least every two years, or after a material update. The results are summarised and made available to clients upon request.
Given our UK and Australian government digital identity trust accreditations, there are minimum requirements (set performance metrics and benchmarks) that must be achieved. These requirements are also included within our independent testing. It should also be noted that we use NIST-accredited testers to ensure a robust and thorough process is followed.
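To illustrate what checking results against such benchmarks can look like, here is a short sketch. The metric names and threshold values are placeholders, not the actual UK or Australian trust framework figures:

```python
"""Illustrative benchmark check. Threshold values are invented
placeholders, not the real accreditation requirements."""

# Hypothetical accreditation thresholds (upper bounds).
BENCHMARKS = {
    "false_accept_rate": 0.0001,
    "false_reject_rate": 0.03,
}


def meets_benchmarks(lab_results: dict[str, float]) -> bool:
    # Every measured rate must be at or below its benchmark.
    return all(lab_results[m] <= limit for m, limit in BENCHMARKS.items())


print(meets_benchmarks({"false_accept_rate": 0.00008,
                        "false_reject_rate": 0.021}))  # True -> requirements met
```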
Up to the task
The challenges posed by generative AI, deepfakes, and sophisticated fraud attacks in identity verification are significant, but IDVerse remains committed to staying ahead of these emerging threats. Our strategy of collaborating closely with clients, conducting active research, adhering to a strict model governance framework, and undergoing independent testing allows us to continuously refine and enhance our fraud detection capabilities.
This approach not only ensures the reliability and effectiveness of our IDV solutions but also strengthens trust with our clients and end users. As identity verification technology continues to advance, our adaptive and proactive methods position us to address future challenges effectively, protecting the integrity of digital identities in an increasingly complex technological environment.
About the post:
Images and videos were created with generative AI. Prompt: Friendly futuristic AI robot tutor assisting a young student with homework, holographic textbooks and educational displays floating around them, cozy futuristic study nook, soft glowing lights, robot gesturing towards 3D mathematical models, student’s face lit up with understanding, warm color palette, advanced yet inviting technology, floating digital screens, comfortable ergonomic furniture, plants integrated into the space, hyper-detailed, optimistic near-future aesthetic. Tools: Midjourney, Luma.
About the author:
Peter Violaris is Global DPO and Head of Legal EMEA for IDVerse. Peter is a commercial technology lawyer with a particular focus on biometrics, privacy, and AI learning. Peter has been in the identity space for 7 years and before that worked for London law firms.