Should you be concerned about bias and discrimination in AI?
Dr Andy Fenna in
AI Misc | 26-04-2021 08:31:52
With AI use and discussion growing rapidly, ethics has increasingly been raised as a genuine concern in areas such as Human Resources (HR), or in any scenario where AI takes over the decision process when selecting or judging human beings. One major talking point is the bias and/or discrimination that might be present in AI systems: for example, when selecting a candidate for a new role, does the AI automatically disregard candidates with a certain education or background? Here I will discuss where this bias comes from and why it isn't the AI itself that we should be concerned with.
What is so-called bias in AI?
To understand why people mention bias in AI, it's important to know how most AI systems work. AI begins by learning (a process known as training). During training, the AI takes information and learns how the given data inputs lead to one or more outputs. For example, this might be an AI system learning to recognise images of cats and dogs, or working out how certain investments lead to better growth, or trying to outwit a computer game by gradually understanding which moves result in greater success. In all these cases the AI is taking actual inputs and working out how an output is achieved, and in doing so certain inputs become favourable. This is what bias is: inputs that lead to a desired outcome carry greater weight than those that don't.
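The weighting described above can be sketched with a toy model. This is a minimal illustration in pure Python, not any real AI system: the data and features are invented, and a very simple perceptron-style update stands in for training. It shows how a feature that consistently co-occurs with the desired outcome ends up with a larger learned weight.

```python
# Hypothetical training data: (inputs, label). Two features, one binary
# outcome. Feature 0 always co-occurs with the desired outcome here.
data = [
    ([1, 0], 1), ([1, 1], 1), ([1, 0], 1),   # feature 0 present -> success
    ([0, 1], 0), ([0, 0], 0), ([0, 1], 0),   # feature 0 absent  -> failure
]

weights = [0.0, 0.0]
lr = 0.1  # learning rate

for _ in range(100):  # repeated passes over the data
    for inputs, label in data:
        pred = 1 if sum(w * x for w, x in zip(weights, inputs)) > 0.5 else 0
        error = label - pred
        # Nudge each weight towards whatever reduces the error.
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]

# Feature 0 predicted success, so it now carries more weight than
# feature 1. That unequal weighting is the "bias" being described.
print(weights)
```

The model has no opinion of its own: the larger weight on feature 0 exists only because the training data said that feature led to the favourable outcome.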
Why is bias being spoken of as a problem in AI?
Consider an HR scenario. We want AI to read CVs and select suitable candidates to interview for a new role. We know that during training, the AI learns which inputs lead to greater success, or favour a more desired outcome. To train our hypothetical HR AI system, we took all past and present employee performance data and matched it with the CVs those employees submitted when they applied for roles in our company. The AI then learns which CVs turned out to belong to the better employees, and which didn't. This bit is crucial: the AI is simply doing its job of working out how to reach a favourable outcome, and it is basing this on data provided to it by humans.
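To make the HR scenario concrete, here is a minimal sketch with invented data. The attribute names and records are hypothetical, and a simple frequency count stands in for a real model. The point is that the model only ever sees past human judgements, so if those judgements favoured one background, the learned scores favour it too.

```python
from collections import defaultdict

# Hypothetical history: (candidate attributes, rated a "good hire" by humans?)
history = [
    ({"degree": "univ_a"}, True),
    ({"degree": "univ_a"}, True),
    ({"degree": "univ_a"}, False),
    ({"degree": "univ_b"}, True),   # equally capable candidates, but...
    ({"degree": "univ_b"}, False),  # ...humans rated this group lower overall
    ({"degree": "univ_b"}, False),
]

# "Training": learn the historical success rate for each attribute value.
counts = defaultdict(lambda: [0, 0])   # value -> [good hires, total]
for attrs, good in history:
    stats = counts[attrs["degree"]]
    stats[0] += int(good)
    stats[1] += 1

score = {k: good / total for k, (good, total) in counts.items()}
print(score)  # univ_a scores higher purely because of past human judgements
```

Nothing in the code discriminates; the skew lives entirely in the `history` records it was handed.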
Bias is not caused by AI, it's brought in by humans
Any bias is therefore introduced by humans; the AI simply learns from what is put in front of it. If there is human bias in your organisation, the AI will mirror it, but if there is no bias or discrimination before the AI stage, there won't be any in the trained AI either.