At the recent Datos Insights Financial Crime and Cybersecurity Forum in Charlotte, I had the pleasure of participating in a conversation moderated by David Mattei of Datos Insights about how artificial intelligence (AI) is increasingly used by banks and financial institutions (FIs) for fraud detection, customer onboarding, marketing, and more. Other participants included Peter Tapling, Managing Director of PTap Advisory, LLC, and Michael Touse, model risk management leader at Navy Federal Credit Union.
One of the major topics we discussed was how easily bias can creep into AI systems, leading to discrimination and other harms. This post summarizes our discussion of strategies for detecting and mitigating AI bias in the financial sector.
The widespread problem of identity bias
Bias in AI is a more extensive problem than commonly perceived, arguably bigger than the threat of fraud, because it affects everyone, including good customers. Since many AI systems are trained on real-world data, any societal biases inherent in that data are reflected in the models.
Even if a company has trained its algorithm on representative data, customers may still question whether the data was ethically sourced. Issues surrounding data privacy, consent, and consumer rights are concerns that governments around the world have been rushing to address, with varying levels of success to date. The idea of mandating an algorithmic “nutrition label” is also gaining traction as a way to counter these concerns.
One proposed solution is using generative AI to create synthetic training data that is free of bias. However, the technique is still new, and implementing it properly will require years of testing. Even carefully developed models need ongoing monitoring, because bias can emerge over time as the underlying data drifts.
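To make the monitoring point concrete, here is a minimal sketch of one such check: tracking the approval-rate gap between customer groups across scoring batches and raising an alert when it widens. Everything here (the function names, the 0.05 threshold, the toy batch) is illustrative, not a production standard or any specific institution’s process.

```python
# A minimal sketch of ongoing bias monitoring, assuming a binary
# approve/decline model and a protected-group label available for
# auditing. Names and the threshold below are illustrative only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def check_batch(decisions, threshold=0.05):
    """Flag a batch if approval rates across groups diverge by more
    than `threshold` (the demographic parity difference)."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > threshold

# Example: a nightly batch of model decisions.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
gap, alert = check_batch(batch)
print(f"parity gap={gap:.2f}, alert={alert}")
```

Run on each scoring batch, a check like this turns “monitor for bias” from a policy statement into a recurring, auditable signal.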
Testing for bias
Another key theme was the importance of thoroughly testing AI systems for bias, deliberately running skewed data through the models and examining how they respond.
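As one concrete example of such a test, the sketch below applies the well-known “four-fifths” rule: a model is flagged if any group’s selection rate falls below 80% of the most-favored group’s rate. The rates shown are hypothetical placeholders, not results from any real system.

```python
# A minimal sketch of one common bias test: the four-fifths rule.
def disparate_impact(selection_rates):
    """selection_rates: {group: rate}. Returns (ratio, passes_4_5ths)."""
    best = max(selection_rates.values())
    worst = min(selection_rates.values())
    ratio = worst / best
    return ratio, ratio >= 0.8

# Hypothetical selection rates produced by a model under test.
rates = {"group_a": 0.62, "group_b": 0.45}
ratio, ok = disparate_impact(rates)
print(f"disparate impact ratio={ratio:.2f}, passes={ok}")
```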
Companies can hire independent firms (BixeLab and iBeta are considered the gold standard) to audit their algorithms and certify them as zero-bias. Regular internal bias testing by financial institutions themselves is also encouraged, along with spot checks of model results.
Furthermore, companies should establish a code of ethics for systems development that includes a commitment to removing bias. These rules should be followed by all members of the development team, including data scientists, programmers, and other stakeholders. Teams should also make the decision-making process transparent and provide explanations for the system’s predictions.
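One simple way to provide per-prediction explanations is feature ablation: reset one input at a time to a baseline value and measure how the model’s score moves. The sketch below illustrates the idea with a toy scikit-learn model; it is a generic technique and hypothetical data, not any particular vendor’s method.

```python
# A minimal sketch of per-prediction explanation via feature ablation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: 3 features, binary approve/decline label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain(model, x, baseline):
    """Score change when each feature is reset to its baseline value."""
    base_score = model.predict_proba([x])[0, 1]
    contributions = {}
    for i in range(len(x)):
        x_ablated = x.copy()
        x_ablated[i] = baseline[i]
        contributions[i] = base_score - model.predict_proba([x_ablated])[0, 1]
    return base_score, contributions

score, contribs = explain(model, X[0], X.mean(axis=0))
print(f"score={score:.2f}", {f"f{i}": round(c, 3) for i, c in contribs.items()})
```

Even a simple report like this gives reviewers something concrete to spot-check when a customer disputes a decision.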
The role of leadership
Senior leadership at financial institutions should continue to meet at forums like Datos Insights to discuss the risks of AI, and direct executive involvement is critical to securing resources for bias evaluation and correction. The emerging trend of creating top-level AI ethics oversight committees is certainly a step in the right direction.
While eliminating bias completely may not be feasible, continuous improvement through detecting and reducing bias is essential. Financial institutions must be vigilant to ensure AI systems treat all customer segments fairly. With collaborative effort across the industry, AI can be deployed ethically to expand financial access.
About the post:
Images were created with generative AI. Prompt: A currency bill featuring the face of an indigenous Brazilian tribesman with his face painted in vibrant colors. Tool: Midjourney.
About the author:
Shane Oren is the CRO for IDVerse. He has over 12 years of experience in sales across a range of businesses, from startups to large enterprises, where he has achieved record-breaking results. In his current role, Shane leads the North American office and manages revenue across the market, overseeing the sales and customer support teams.