AI is increasingly being used to help make important decisions in a range of domains, from the workplace to the criminal justice system. Yet, few people are aware of AI’s role in decision-making, and opposition to it is strong. Why are people concerned, and what, if anything, would make the public more comfortable with this use of AI?

Our new report, launched today, argues that if the use of AI is to be ethical, the public needs to be engaged earlier and more deeply. One reason is the real risk that if people feel decisions about how technology is used are increasingly beyond their control, they may resist innovation, even when that means losing out on its benefits.

This could be the case with automated decision systems, which are on the rise. Automated decision systems are computer systems that either inform or make a decision about a course of action concerning an individual or business. To be clear, these systems do not always use AI, but they increasingly draw on the technology because machine learning algorithms can substantially improve the accuracy of predictions. They have been used in the private sector for years (for example, to inform decisions about granting loans and managing the recruitment and retention of staff), and many public bodies in the UK are now exploring and experimenting with their use to make decisions about planning and managing new infrastructure; reducing tax fraud; rating the performance of schools and hospitals; deploying policing resources; and minimising the risk of reoffending. The systems have been characterised as ‘low-hanging fruit’ for government, and we anticipate more efforts to embed them in future.
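To make this concrete, here is a minimal, purely illustrative sketch of the kind of machine-learning-based scoring such a system might use to inform a loan decision. Nothing in it is drawn from any system mentioned in the report; all data, feature names and thresholds are invented:

```python
# Illustrative only: a toy "automated decision system" that scores loan
# applications with a machine learning model. All data, feature names
# and thresholds here are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicant records: [income (£k), existing debt (£k), years employed]
X_train = rng.normal(loc=[30.0, 10.0, 5.0], scale=[10.0, 5.0, 3.0], size=(500, 3))
# Invented labelling rule for the toy data: loan repaid if income
# comfortably exceeds debt (plus some noise).
y_train = (X_train[:, 0] - X_train[:, 1] + rng.normal(0, 5, 500) > 15).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Crucially, the system here only *informs* a decision: it surfaces a
# probability that a human decision-maker weighs alongside other context.
applicant = np.array([[28.0, 12.0, 2.0]])
repay_probability = model.predict_proba(applicant)[0, 1]
print(f"Predicted probability of repayment: {repay_probability:.2f}")
print("Recommend approval" if repay_probability > 0.7 else "Refer for human review")
```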

However, from our online survey of the UK population, carried out in partnership with YouGov, we know that most people aren’t aware that automated decision systems are being used in these various ways, let alone involved in rolling out or scrutinising them. Only 32 percent of people are aware that AI is being used for decision-making in general. This drops to 14 percent and 9 percent respectively for awareness of automated decision systems in the workplace and in the criminal justice system.

On the whole, people aren’t supportive of the idea of using AI for decision-making, and they feel especially strongly about the use of automated decision systems in the workplace and in the criminal justice system (60 percent of people oppose or strongly oppose their use in these areas).

To learn more about the reasons for this lack of support, we asked people what most concerned them about these systems, inviting them to pick their top two concerns from a list of options. Although we made it clear within the question that automated decision systems currently only inform human decisions, there was still a high degree of concern about AI’s lack of emotional intelligence. Sixty-one percent expressed concern about the use of automated decision systems because they believe that AI lacks the empathy or compassion required to make important decisions that affect individuals or communities. Nearly a third (31 percent) worry that AI reduces people’s responsibility and accountability for the decisions they implement.

This gets to the crux of people’s fears about AI: there is a perception that we may be ceding too much power to it, regardless of the reality. The public’s concerns echo those of the academic Virginia Eubanks, who argues that the fundamental problem with these systems is that they enable the ethical distance needed “to make inhuman choices about who gets food and who starves, who has housing and who remains homeless, whose family stays together and whose is broken up by the state.”

Yet these systems also have the potential to increase the fairness of outcomes if they can improve accuracy and minimise biases. They may also increase efficiency and generate savings both for the organisation that deploys them and for the people subject to their decisions.
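As a hedged illustration of what ‘minimising biases’ can mean in practice, the sketch below runs one simple audit sometimes applied to such systems: comparing approval rates across demographic groups, often called a demographic parity check. The decisions and group labels are invented for the example:

```python
# Illustrative bias check: compare approval rates across demographic
# groups (a "demographic parity" audit). The decisions and group labels
# below are invented for the example.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]   # 1 = approved, 0 = refused
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Approval rate within each group.
rates = {}
for group in sorted(set(groups)):
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

print("Approval rate by group:", rates)
# A large gap between groups is one warning sign that the system may be
# producing unfair outcomes and warrants closer human scrutiny.
print("Gap between groups:", abs(rates["A"] - rates["B"]))
```

Demographic parity is only one of several competing fairness measures, which is precisely the kind of value-laden trade-off that public deliberation can help adjudicate.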

These are the sorts of trade-offs that a public dialogue, and in particular a long-form deliberative process like a citizens’ jury, can address. The RSA is holding a citizens’ jury because we believe that when it comes to controversial uses of AI, the public’s views and, crucially, their values can help steer governance in the best interests of society. We want the jurors to help determine under what conditions, if any, the use of automated decision systems is appropriate.

These sorts of decisions about technology shouldn’t be left up to the state or corporates alone, but should be made with the public. Citizen voice should be embedded in ethical AI.

This project is being run in partnership with DeepMind’s Ethics and Society programme, which is exploring the real-world impacts of AI.

Our citizens’ jury will explore key issues that raise a number of ethical questions including, but not limited to, ownership of data and intellectual property, privacy, agency, accountability and fairness. You can learn more about our process in the report.