
We need to talk about artificial intelligence



Asheem Singh reflects on the RSA’s deliberative democratic experiment at the intersection of technology and society: the citizen-driven ‘Forum for Ethical AI’ – the final report of which launches today.

It’s one of the biggest questions of our time: what role do we want artificial intelligence to play in our lives? Increasingly this is not just a question about whether Facebook’s friend recommendation function is too creepy, or whether Pokémon Go made you trespass in your neighbour’s back yard. Rather it is a question with multiple ethical and moral dimensions.

On 14 May 2019, San Francisco became the first major US city to ban the use of facial recognition technology by law enforcement. Two months later, Oakland implemented similar restrictions. In China, by contrast, it is tough to get a mobile phone these days without having your face scanned and recognised. Artificial intelligence, automated decision systems and the like are no longer topics for tech enthusiasts alone: they are essential subheadings in our shared democratic conversation; notes on the kind of society we want to be.

That is why, in 2018, the RSA embarked on a relatively novel experiment. We engaged citizens and experts in precisely these conversations. We brought together ordinary, non-technically-minded people to answer some of the most complex ethical questions of our time – questions concerning some of the most intractable and arcane systems technologists have yet produced, in critical contexts like healthcare and justice. We did this through the RSA’s Forum for Ethical AI – and you can now read the final report from this work. We also created a video that tells the story of our experiment:

A big conversation

We wanted to understand the dilemmas that emerge when automated decision systems (ADS) are used in the public sphere – particularly in the institutions that exist by virtue of our democratic consent (and our taxes). We hoped to gain insights and better understand the barriers to building trust. We also wanted to see whether such approaches might enable us, at some point in the future, to move to a new model of governance and oversight of these technologies.

To do this, we convened a citizens’ jury. Our citizens’ jury was created in partnership with AI company DeepMind and was facilitated externally. It took place over three and a half days of deliberation, spread across several months to allow time for reflection, and involved between 25 and 29 citizens drawn from a diverse, representative group. After deliberating on a series of questions alongside key experts, the jurors decided on a set of recommendations about the design and deployment of ADS in the public sphere – and reflected on a whole lot more besides. Facilitated conversation, reflection, open discussion and expert prompting were key to this process.

We also worked with YouGov to commission polling to get a wider sense of the nation’s views on AI and ADS. The stats make for some stark reading:

  • just 32 per cent of people were aware that AI is being used for decision-making in general
  • awareness drops to 14 per cent and 9 per cent respectively for the use of ADS in the workplace and in the criminal justice system
  • 60 per cent oppose the use of ADS in recruitment and criminal justice.

[Survey figures are from YouGov Plc. The total sample size was 2,074 adults. Fieldwork was undertaken between 16 and 17 April 2018. The survey was carried out online, and the figures have been weighted and are representative of all UK adults (aged 18+)]

A toolkit emerges

Through the citizens’ jury we were able to broker a remarkably informed debate about the pros and cons of using ADS. The jury came up with a set of conditions for the use of ADS and I was particularly struck by how useful these were when thinking about the ethical design and deployment of ADS in institutional contexts.

While we did not start this process with the intention of creating a product, it was striking how jurors’ opinions came together to form the basis of a toolkit setting out what an ethical design process looks like in this context.

Averting the tech-lash?

What emerged loud and clear from the various deliberations was that the stakes could not be higher: not only for us as citizens, but for tech companies and public institutions. One of the key themes that emerged over and over through the conversations was the risk of a ‘backlash’ or ‘tech-lash’ against the use of these technologies – one that could forfeit any benefits they might bring. It was in the discussions around the public sector that this sentiment came through most clearly.

In criminal justice, particularly in stop-and-search decisions, jurors called for a system of accountability for decisions made using ADS. As demonstrated in the video, they felt we need a ‘human in the loop’: ADS decisions should only take place with some level of human oversight. There was a great deal of concern about the potential biases – especially racial biases – that might emerge from the use of facial recognition tech by police authorities, for example.
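
For technically minded readers, here is a minimal sketch of what a ‘human in the loop’ gate might look like in software. It is purely illustrative: the function names, the Recommendation fields and the simulated reviewer prompt are assumptions made for this example, not part of the jury’s recommendations or a description of any real policing system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject_id: str
    action: str        # e.g. "flag-for-review"
    confidence: float  # the model's confidence in its own recommendation
    rationale: str     # explanation surfaced to the human reviewer

def automated_decision(case: dict) -> Recommendation:
    """Stand-in for the ADS itself; a real system would run a trained model here."""
    return Recommendation(
        subject_id=case["id"],
        action="flag-for-review",
        confidence=0.72,
        rationale="pattern match against prior incident reports",
    )

def human_review(rec: Recommendation) -> bool:
    """A person sees the recommendation, and its rationale, before anything happens."""
    print(f"Case {rec.subject_id}: ADS recommends '{rec.action}' "
          f"(confidence {rec.confidence:.0%}) because: {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def decide(case: dict) -> str:
    rec = automated_decision(case)
    # The human in the loop: no ADS recommendation is acted on without
    # explicit sign-off, so every decision remains attributable to a person.
    if human_review(rec):
        return f"approved: {rec.action}"
    return "rejected by human reviewer"

if __name__ == "__main__":
    print(decide({"id": "case-001"}))
```

The design point the jurors stressed is captured in decide(): the system can recommend, but only a person can authorise, and that sign-off creates a clear line of accountability.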

For decisions in healthcare, a key issue was empathy. The citizens’ jury felt that ADS would lack the emotional depth to deal with patients. In an interesting contrast with attitudes towards the police, however, there was more sympathy for clinicians and other decision makers who adopt ADS in the hope that it might help them make better decisions.

For the use of ADS in recruitment, jurors worried that it might make the hiring process less transparent. The potential for bias was a huge concern. Some felt that an external auditing body was needed to set and enforce guidelines for employers – one of several interesting recommendations to emerge in the course of our deliberations.

Deliberating about deliberation

The insights we gained from the citizens’ jury were rich, and it is exciting to think that we are one part of a nascent, fast-developing field at the intersection of citizen engagement and technology (hat-tip to the Ada Lovelace Institute and the Royal Society, for example, who are doing some great work in this area, as is Simon Burrall, who sat on the advisory board of the Forum).

Based on our findings, we think this methodology and others like it have exciting potential to unpack complex issues in a variety of institutional contexts. The question for us and others working in this field is: how do we pool our insights to craft a rich collective conversation around these rapidly evolving, radical technologies – and create an ever-wider body of knowledge? How do we work together to shift the conversation around the system, and to shift the system itself? We hope the RSA’s reflections in this report, and the toolkit it provides, offer useful insight for others to take forward while we continue to work in this area.

The future

New technologies are being adopted at a rapid pace, and regulators and the public are struggling to keep up. An increasing amount of decision making – in public services, the job market and healthcare – is taking place via opaque processes. Machine learning complicates the field; ADS changes the game. As our research suggests, this is a source of anxiety for the general public and we don’t even know the half of it.

We need an open conversation about AI, ADS and other forms of decision-making, driven by the principles of transparency and accountability. When such conversations are brokered and convened, citizens understand the ethical implications of using AI in public services; their voices need to be part of the conversation.

We hope that the Forum for Ethical AI can play a key role in facilitating these conversations, whether through citizens’ juries or other deliberative means, by testing methods or by engaging with particular subject areas. We have already taken this work forward through a project with NHSX on the procurement of AI in the health and care system – watch this space for more details.

The prize for getting this right is a future where we embrace the positives that technology brings, while also being able to have rich and meaningful conversations about its impact on our world. And what’s at risk is not only Pokémon levels or Facebook likes: it’s our agency, our identity, our society’s collective sense of self.

Download the report

Join the discussion


  • A few months ago, I started a postgraduate research project to examine the ethics of AI in recruitment processes. I was inspired to do so by Amazon’s failure to successfully introduce AI into its recruitment processes. I should like to learn more about the RSA project and the activities of Fellows, like Stephen Hill, in the field. I am based in Swansea.

  • Hi,

    I have two comments:

    1. Much of this kind of output has already been produced elsewhere.

    2. No one replied to the input I provided in response to a request for it.

    Since I advise on the ethics of AI, and also run an AI company actively seeking to solve the problem, I believe I can contribute, and I was somewhat disappointed not to be able to do so. Could you explain your governance arrangements? I would like to understand whether I have missed something about how to be involved.

    Steve
