From the courtroom to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation.  One automatable task that’s received a lot of attention of late is decision-making.  So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail.  But there’s a lack of understanding about how these systems work and, as a result, a lot of unwarranted concern.  In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, that they’re impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear just what we’re talking about, as the term “automated decision” is incredibly misleading.  It suggests that a computer is making a decision, when in reality this is rarely the case.  What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer.  In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan.  A human will then use this prediction to make a decision about whether or not to grant a defendant bail or give an individual a credit card.  When described like this, it seems somewhat absurd to say that these systems are making decisions.  I therefore suggest that we call them what they actually are: prediction engines.

How can we assess a prediction engine?  By assessing the quality of the predictions that it generates.  Luckily, this does not require any understanding of the algorithm that generates the predictions; it merely requires knowledge of what those predictions were and whether the predicted outcomes occurred or not.  For instance, you don’t need to understand how Google’s search algorithm works to assess whether or not the results it returns are relevant to your search query.  Indeed, if Google started returning random search results – articles about the works of Shakespeare when you searched for “pizza restaurants near me”, say – you’d probably notice pretty quickly and stop using it in favour of a more useful search engine. 
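To make this concrete, here’s a minimal sketch (in Python) of what such an assessment might look like.  It treats the prediction engine as a black box: all it needs is a list of predictions and the outcomes that actually occurred.  The data, and the 80% figure it produces, are invented purely for illustration.

```python
# A minimal sketch: assessing a prediction engine purely by its outputs.
# No knowledge of the underlying algorithm is needed -- only what it
# predicted and what actually happened. All data here is invented.

def accuracy(predictions, outcomes):
    """Share of predictions that matched the real outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# True = the engine predicted the event (e.g. a loan default);
# the corresponding outcome is True if the event actually occurred.
predicted = [True, False, False, True, False]
observed  = [True, False, True,  True, False]

print(f"Accuracy: {accuracy(predicted, observed):.0%}")  # -> Accuracy: 80%
```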

And so we come to the first fear about prediction engines: that they’re prone to bias.  Fundamentally, biased predictions are inaccurate predictions.  Specifically, they’re inaccurate in a consistent way, because they’re generated by a system that consistently over- or under-weights the importance of certain factors.  This was the case with the infamous COMPAS system, which was (and still is) used by US courts to predict the risk that defendants would reoffend.  COMPAS, it transpired, was more frequently overestimating the likelihood that black defendants would reoffend and underestimating the likelihood for white defendants.  In other words, it was biased because it inadvertently placed too much weight on race.

COMPAS failed to do the one thing it was supposed to do: accurately predict whether or not a defendant would reoffend.  It was therefore a bad product and ought not to have been used by the courts to guide sentencing decisions.  But there’s no reason to think that, just because COMPAS was biased, all prediction engines must be biased.  In fact, compared to humans, prediction engines are inherently less prone to bias.  After all, computers don’t suffer from the cognitive biases that we humans must constantly fight against.  Indeed, many computer biases only exist because their algorithms are designed by (biased) humans or trained on data produced by (biased) humans.

From a practical standpoint, it’s also far easier to spot bias in a computer system than it is in a human.  For a start, computers can typically generate many more predictions than a human can in a given space of time, so it’s far easier to gather enough sample predictions from which to identify any bias.  Computers are also more consistent than humans, so whilst a human’s bias might only be noticeable in some of their decisions, a computer’s bias will be visible across all its predictions where the biased factor is in play.

Given all this, it seems to me that worries about bias ought not to deter us from using prediction engines to aid our decision-making.  We just need to apply some common sense when adopting them and, as with any new technology, test them before we use them in real-life situations.  This does not require access to a system’s source code or the data on which it was trained; it just requires someone to compare its predicted outcomes with the real outcomes.  And if a system fails to predict outcomes with sufficient accuracy, or displays biases in the predictions that it makes, we shouldn’t use it.
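As a rough illustration of what such a test could look like in practice, here is a short Python sketch that compares predicted outcomes with real outcomes, broken down by group, and reports false positive and false negative rates for each.  The records, group labels and function names are all invented; a real audit would use far larger samples and proper statistical tests.

```python
# A rough sketch of the audit described above: compare predicted outcomes
# with real outcomes, broken down by group, without any access to the
# system's source code or training data. All records here are invented.

from collections import defaultdict

def error_rates_by_group(records):
    """Return false positive and false negative rates for each group."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            c["fn"] += int(not predicted)   # predicted no event, but it occurred
        else:
            c["neg"] += 1
            c["fp"] += int(predicted)       # predicted the event, but it didn't occur
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for group, c in counts.items()
    }

# (group, predicted_to_reoffend, actually_reoffended) -- invented audit data
audit = [
    ("group A", True,  False), ("group A", True,  True), ("group A", False, False),
    ("group B", False, True),  ("group B", False, False), ("group B", True,  True),
]

for group, rates in error_rates_by_group(audit).items():
    print(group, rates)
# A large gap in error rates between groups -- like COMPAS's higher false
# positive rate for black defendants -- is the signature of a biased engine.
```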

Jasmine currently provides programme support to the RSA’s Forum for Ethical AI, a new programme exploring the ethics of AI with citizens. She is a former software developer, tech founder and philosopher.
