
A Few Thoughts on the Ethical Use of Automated AI

PETER VIOLARIS

It is often said that our laws and regulations cannot keep up with the pace of technological change. This naturally leaves gaps in the law, and where there is a gap, right-minded companies and people need a clear set of ethics to guide them.

Whatever set of ethics we adopt will inevitably inform the laws that are written when legislatures eventually catch up, so ethics matter more and more as AI changes our world.

What are ethics?

There is no universal standard of ethics. They differ by nationality, by ethnicity, by age group, sometimes even by footy team. Arguably, the Australian cricket team has very different ethics to the English cricket team. Arguably.

In Australia, where IDVerse was founded, the Privacy Act is decades old and there is no law covering the use of automated, AI-powered technology. The EU is powering ahead with its AI Act, while the UK has opted for guidance-only on AI. In the US, there is an executive order from the White House. So there is no global consensus on how to regulate AI.

Bridging the gap

It’s fair to say that the world is still reacting to the problems generative AI brings rather than being ahead of the curve. The recent panicked responses to AI-generated indecent images of celebrities are a good example of this.

With laws playing catch-up, we need ethics to fill the gap. But the ethics conversation is not yet mature, and it is not informing our decision-making enough. Broadly, we can split the conversation into the ethical use of AI and the ethical training of AI.

The ethical use of AI

AI allows fully automated decision-making. It is very good at this because it works tirelessly, applies consistent standards, and improves when given good feedback. Its weaknesses are a lack of context and a lack of compassion for edge cases.

Those opposed to the use of AI tell us that automated decision making is bad. And it certainly can be. 

The blind and misguided reliance on the automated Horizon accounting software in the UK Post Office scandal led to hundreds of innocent people being wrongly convicted of fraud, and many jailed. It is now emerging that senior staff at the Post Office and the software provider knew the technology was flawed but pushed on with the prosecutions for years. Ethics certainly took a back seat.

In Australia, the Robodebt fiasco saw many people hounded for debts they did not owe. There was little or no oversight of the automated system's output, and it was seemingly very hard to appeal a decision. Poor use of face-matching technology by police, resulting in wrongful arrests (mainly of young Black men), is also well documented.

Proper use is power

These examples do not mean that automated decision-making using AI-powered technology is always bad. When used ethically and deployed properly, it can be incredibly powerful.

In the identity space, automated identity verification (IDV), such as that sold by IDVerse, can be used by banks to spot fraud that manual reviewers cannot detect, especially deepfake identity documents.

Our clients will typically run our solution and then have a fraud expert review the report each time a user is flagged as potentially fraudulent. The expert may ask the user to prove their identity another way, or may agree with the fraud recommendation. This is an ethical and responsible use of an AI tool.
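
To make this concrete, here is a minimal sketch of what such a human-in-the-loop flow can look like in code. It is illustrative only: every function and field name below is invented for this post and is not IDVerse's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the human-in-the-loop flow described above.
# None of these names come from a real IDVerse interface; they stand
# in for the pattern: the AI flags, a human decides.

@dataclass
class IDVResult:
    flagged_as_fraud: bool
    report: str

def run_idv_check(applicant: str) -> IDVResult:
    # Stand-in for the automated AI verification step.
    return IDVResult(flagged_as_fraud=True, report="possible deepfake document")

def expert_review(applicant: str, report: str) -> str:
    # Stand-in for a fraud expert reading the report; returns one of
    # "agree_with_flag", "request_alternative_proof", or "override".
    return "request_alternative_proof"

def process_applicant(applicant: str) -> str:
    result = run_idv_check(applicant)
    if not result.flagged_as_fraud:
        return "approved"  # clean result: proceed automatically
    # Flagged cases are never auto-rejected; a human makes the call.
    decision = expert_review(applicant, result.report)
    if decision == "agree_with_flag":
        return "rejected"
    if decision == "request_alternative_proof":
        return "ask for alternative proof of identity"
    return "approved"  # the expert overrides the model

print(process_applicant("applicant-001"))
```

The point is the shape of the flow: the AI narrows the field and produces a report, but a human makes the final adverse decision.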

When people aren’t appropriate

There are also examples where using automated AI decision-making is in fact more ethical than relying on manual reviewers: for example, where the task cannot be done by humans, or indeed should not be done by humans.

An example of a task that cannot be done by humans is preventing children from sharing explicit images from their devices across messaging platforms. No team of human moderators could screen every image on every device in real time, but AI-driven technology exists that can detect whether an image is explicit and whether the person in it is underage.

An example of a task that perhaps should not be done by humans is reviewing social media posts for illegal content: live terrorism feeds, child sexual abuse material, and videos of killings or maimings.

The job of reviewing this content is often undertaken by teams in lower-income countries, and the impact on their mental health can only be imagined. It must be more ethical to screen for this material using AI, perhaps with an appeal mechanism for posters who believe their material has been mislabelled.

The ethical training of AI

Artificial intelligence is not the same as human intelligence (yet; give it a few years). AI essentially takes all the data it has been shown and applies it to the problem in front of it. So if you trained an AI chatbot on a large set of conversations full of swearwords and bad language, you would get a very rude AI assistant. AI models need a lot of training data, and that data needs to be high quality.

Obtaining that data legally and ethically is the first challenge for anyone looking to create AI. 

In the case of large language models like ChatGPT, that data is almost literally the entirety of the written internet. Scraping and reusing all that copyrighted material is the subject of numerous lawsuits around the world, so training the models in this way may well have been illegal.

To many, it is certainly unethical to take for free all the articles painstakingly researched by journalists, all the books slaved over by authors, and all the masterpieces created by artists. 

Time to face the music

To train a face-matching AI engine, you need to input millions of faces and tell the model which ones match each other. To put it simply: the more varied the data (by age, skin tone, and gender), the better and less biased your face-match engine will be.
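
For the technically minded, the core training idea can be sketched in a few lines. The toy example below, written against PyTorch, trains on labelled pairs with a standard contrastive loss; it is a generic illustration of pair-based training, not IDVerse's actual method, and the tiny network and random data are placeholders.

```python
import torch
import torch.nn as nn

# Toy sketch of pair-based face-match training: the model maps each
# face image to an embedding, and a contrastive loss pulls matching
# pairs together and pushes non-matching pairs apart. Real engines
# train on millions of labelled face pairs, not random tensors.

embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)
margin = 1.0

for step in range(100):
    # Random stand-ins for batches of face images (batch, 64, 64)
    # and pair labels: 1 = same person, 0 = different people.
    a = torch.randn(32, 64, 64)
    b = torch.randn(32, 64, 64)
    same = torch.randint(0, 2, (32,)).float()

    dist = torch.norm(embed(a) - embed(b), dim=1)
    # Contrastive loss: matching pairs minimise distance; non-matching
    # pairs are pushed apart until they clear the margin.
    loss = (same * dist.pow(2)
            + (1 - same) * torch.clamp(margin - dist, min=0).pow(2)).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The variety point shows up here directly: if the labelled pairs skew towards one demographic, the loss is only ever minimised for that group, and the engine's errors concentrate on everyone else.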

To get hold of its data, Clearview simply scraped the internet: any image that you or I put online, on any platform, was fair game. This was both illegal (the company has been fined tens of millions by multiple regulators) and clearly unethical. Many if not all of our competitors in the identity space take their training data from the consumers who are asked to use their products to verify their identity.

This practice is legally questionable, since no consumer gave the identity company meaningful consent to use their biometric data in this way, and it is ethically questionable too. Those identity companies probably mention it somewhere, buried in their privacy policies.

IDVerse is different 

At IDVerse, we do not use the personal data or biometric data of any of the consumers who use our solutions. We have been using generative AI for years to create a dataset of tens of millions of entirely synthetic faces, videos, and identity documents. 

Not only is this much more ethical than using real people's data without consent, it means we can react very quickly to problems we see in our technology. If the engine struggles with, say, very heavy beards, we can create tens of thousands of synthetic faces with exactly that characteristic.
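
As a rough sketch of that feedback loop (the generator call and trait labels below are invented for illustration, not a real IDVerse interface):

```python
import random

# Hypothetical sketch of targeted synthetic augmentation. When
# evaluation shows the engine is weak on a trait, generate synthetic
# faces with that trait and retrain on the enlarged set.
# generate_synthetic_face() stands in for a generative model.

def generate_synthetic_face(trait: str) -> dict:
    return {"image_id": random.randrange(10**9), "trait": trait}

def augment_weakest_trait(training_set: list, error_rates: dict,
                          n: int = 10_000) -> list:
    worst = max(error_rates, key=error_rates.get)  # trait with most errors
    extra = [generate_synthetic_face(worst) for _ in range(n)]
    return training_set + extra                    # then retrain the engine

dataset = augment_weakest_trait([], {"heavy beard": 0.12, "glasses": 0.03}, n=3)
print(dataset)
```

The point is the tight loop: a weakness is spotted, targeted synthetic data is generated, and the model is retrained, all without touching a single real person's biometrics.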

Doing the right thing

Ethics matter, and they should be brought to the front of the conversation. 

It should be acceptable, indeed expected, to ask an AI provider how they think their tech should be used and where their training data comes from.

About the post:
Images are AI-generated. Prompt: Two figures walk side by side on a sunny lakeshore path, engaged in lively conversation and laughter—but with an amusing twist. The figure on the left is the devil, with bright red skin, horns, and a pointed tail, wearing modern casual attire of a t-shirt, jeans, and sneakers. The figure on the right is an angel in a loose white robe, feathery wings folded behind him, and a halo above his head. Both are wearing trendy sunglasses and have carefree, joyful expressions as they chat like old friends. The beautiful blue lake sparkles under a clear sky in the background, creating a whimsical, lighthearted contrast to the unlikely pair. Photorealistic, humorous. Tool: Midjourney.

About the author:
Peter Violaris is Global DPO and Head of Legal EMEA for IDVerse. Peter is a commercial technology lawyer with a particular focus on biometrics, privacy, and AI. Peter has been in the identity space for six years and before that worked for London law firms.
