Can AI ever be bias-free?

In our book ‘The Artificial Intelligence Playbook’ we discuss in some detail the issue of bias in AI. For those who don’t know, AI bias is the phenomenon of an AI system producing prejudiced results because of flawed assumptions in the process. It’s easy to label biases as mistakes, but frequently they are not; they are answers that we do not agree with.

Mistakes can be easy to spot: for example, an AI system decides that a purchase order is an invoice, or vice versa, or that a Spanish word is, in fact, French. These are mistakes; something went wrong, and they are usually identified and fixed relatively quickly. Biases are something completely different; biases are opinions.

Take, for example, an AI system deployed for HR recruiting or loan processing that discriminates against applicants on the basis of sexual orientation, race, age, or gender. The system is biased, and many would work to correct it – assuming they notice the bias and are able to adjust the system.
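How might a team actually notice such a bias? One common starting point is a simple disparate impact check: compare outcome rates across groups and flag large gaps for human review. The sketch below, in Python, is purely illustrative; the group labels, the decision data, and the 0.8 threshold (the so-called four-fifths rule often used as a rough heuristic) are assumptions for the example, not a description of any particular system.

# Minimal sketch of a disparate impact check on model decisions.
# The data, group labels, and threshold here are hypothetical, for illustration only.

from collections import defaultdict

# Each record: (protected group label, model decision: True = approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally approvals and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

# Approval rate per group.
rates = {g: approved / total for g, (approved, total) in counts.items()}
print("Approval rates:", rates)

# Disparate impact ratio: lowest approval rate divided by highest.
# The four-fifths rule flags ratios below 0.8 for closer human review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}", "-> review" if ratio < 0.8 else "-> ok")

A check like this does not prove or disprove bias on its own, but it turns a vague worry into a number someone has to look at and explain.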

But other people would be perfectly happy with the bias in the AI. They don’t want to hire (for example) women over a certain age, nor do they want to hire people whose sexual orientation differs from their own.

There is no universal agreement on where or when lines get drawn. What is considered normal to one person, country, or sector may differ radically in another.

Add to this the fact that some biases are far less evident than race, age, or gender – we all have biases, we all have preferences. Some are conscious, some are unconscious. For example, as a Brit who lives in the US, I regularly notice a bias among people living in the North East against those from the Deep South. It is a bias few would admit to or even accept; if it were ever pointed out to them, they would try to contextualize and justify it. I notice this bias because I am, in many respects, an outsider. As an advisor who regularly visits company offices, I have seen over the years that some firms hire more blond- or red-haired people than average. Is that a bias? Maybe, maybe not. Possibly there are simply more blond people in their particular area to hire, who knows? What we do know is that psychologists and behavioral economists have identified more than 100 distinct human and societal biases to date.

That is not to say we should avoid using AI for automating loan applications, HR, or digital marketing. The value of doing so can be more than efficiency gains and cost reductions. AI can provide us with insights that would have been impossible without the technology, and with new and improved ways of working and of looking at situations. But the idea of bias-free, neutral, or vanilla AI is a fallacy. It is one thing to use AI to determine fields and values in an invoice or to translate text or speech; those kinds of decisions are relatively straightforward, and the output is either right or wrong. But we need to be careful the moment AI starts to get involved in decisions and outcomes that directly impact humans. Whether it be loan application processing, law enforcement, or sales and marketing, things get interesting fast, and we should tread with caution and keep our minds and eyes open.

Work with us today to ensure you are a disruptor, not one of the disrupted!

Get trusted advice and technology insights for your business from the experts at Deep Analysis. [email protected]
