Computer Says No: Part 2 – Explainability

From the courtroom to the workplace, important decisions are increasingly being made by so-called “automated decision systems”. These systems are often highly complex and their inner workings impossible to understand – but do we need to understand them? In the second of a three-part series, Jasmine Leonard considers whether we must “open the black box” of complex algorithms before we use them to make important decisions.

In my last post I explained that, despite their name, automated decision systems don’t actually make decisions.  Instead, they make predictions, which are then used by humans to make decisions.  I also argued that these systems should be thoroughly tested before they’re used to guide decision-making, so as to ensure that their predictions are sufficiently accurate and unbiased.  However, some people believe that it’s not enough for an automated decision system just to be free of bias, or even to outperform humans in predictive accuracy.  In addition, they say, we must be able to understand why a system makes the predictions it does: the predictions must be explainable.

This is a particular problem for systems that make use of deep learning.  Based loosely on the structure of the brain, deep learning algorithms have proven to be very effective at recognising patterns in data, rivalling, and in some cases surpassing, humans’ ability to do so.  As a result, deep learning is the dominant approach in AI today.  But like the brain, deep learning algorithms are incredibly complex – so much so that it’s impossible to directly inspect their inner workings and understand exactly what those patterns are.  A requirement that predictions be explainable would therefore force us to stop using deep learning systems to generate them (at least for the time being).

There are of course systems that use approaches other than deep learning – approaches that allow their predictions to be explained – but these are currently far less accurate than their deep learning counterparts.  As a result, significant effort is being spent trying to come up with ways to interpret deep learning models, as well as to develop new approaches that are both explainable and accurate.
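To make the contrast concrete, here is a minimal illustrative sketch in Python (my own, using scikit-learn on a synthetic dataset; none of it comes from the original post).  It trains a small decision tree, whose learned rules can be printed and read as reasons, and a neural network, whose weights admit no comparable reading, on the same data.

# Illustrative sketch only: the synthetic data and model choices are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An interpretable model: its learned decision rules can be printed and inspected.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(8)]))
print("tree accuracy:", tree.score(X_test, y_test))

# A small neural network: often more flexible, but its weights give no
# human-readable account of why it makes any particular prediction.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("network accuracy:", net.score(X_test, y_test))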

But why does explainability matter so much?  Why do people think that automated decision systems ought to be explainable?

The reason, I believe, stems from the fact that many decisions ought to be explainable.  When the consequences of a decision affect people’s lives in significant ways, it’s important that the decision be open to challenge by those who think that it might be wrong.  But one can only effectively challenge a decision if the reasons behind it are made known.  For instance, if a prisoner wants to appeal a decision to deny them parole, they’ll need to show that there are better reasons for granting them parole than there are for denying it – but they can only do this if they know why they were denied parole in the first place.

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.  Indeed, the best case that one could make for disregarding a prediction made by a particular system would be to show that a more accurate system generates a different prediction.
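Purely as an illustration (the function and variable names below are invented, not drawn from the post), this criterion could be sketched roughly as follows: compare a system’s predictions against the eventual outcomes, against a trained human’s predictions on the same cases, and against any readily available alternative systems.

# Rough sketch of one reading of the "sufficiently accurate" criterion.
# All names and the criterion's exact form are assumptions for illustration.
def accuracy(predictions, outcomes):
    """Fraction of predictions that matched the eventual outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

def sufficiently_accurate(system_preds, human_preds, rival_systems_preds, outcomes):
    """At least as accurate as a trained human, and no readily available
    alternative system is more accurate on the same held-out cases."""
    system_acc = accuracy(system_preds, outcomes)
    if system_acc < accuracy(human_preds, outcomes):
        return False
    return all(system_acc >= accuracy(preds, outcomes)
               for preds in rival_systems_preds)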

However, just because a prediction is justifiable doesn’t mean that it’s necessarily correct.  It is, after all, impossible to predict the future with absolute certainty; one can only know for sure whether or not a predicted outcome occurred in retrospect.  So whilst it would almost certainly be justifiable to use a prediction generated by a system that correctly predicts outcomes 99% of the time, the prediction may nevertheless fall into the 1% of predictions that are in fact wrong.  And if one of those incorrect predictions were used to make a decision that affected you, you’d no doubt feel rather hard done by.  But if this were to happen, how would an explanation for the prediction help your situation?  It wouldn’t enable you to prove ahead of time that the prediction was incorrect, nor would it reduce the impact of the decision on your life.  For example, suppose that you were denied a mortgage because the lender’s risk assessment system wrongly predicted that you were at high risk of defaulting on the loan due to your lack of credit history.  Knowing that the prediction was based on your lack of credit history wouldn’t enable you to prove that you wouldn’t default on the loan.  Indeed, given that you can’t go back in time and build up a credit history, there’s very little you could really do with the explanation other than use it to inform your future decisions to take out credit.

It seems to me, therefore, that automated decision systems need not be explainable, even if their predictions are used by humans to guide important decisions.  However, the decisions themselves must be explainable, and therefore the predictions used and the accuracy of the systems that generate them must be made known.  This ensures that people can effectively challenge these decisions on the basis of the predictions that underpin them: either by arguing that a prediction is irrelevant to the decision at hand, or by showing that a prediction is unjustifiable because it’s made by an insufficiently accurate system.  In both cases, if such a prediction is used to guide decision-making, there ought to be some sort of accountability.  In the next and final part of this series, I’ll discuss where this accountability might lie.

Jasmine currently provides programme support to the RSA’s Forum for Ethical AI, a new programme exploring the ethics of AI with citizens.  She is a former software developer, tech founder and philosopher.
