
You may have heard that artificial intelligence has the potential to accelerate human progress, but in what ways could the use of AI hold us back? In a new project, the RSA and DeepMind are creating space for citizens to consider trade-offs in the use of AI and to guide companies, organisations and institutions in how it is harnessed.

Recently, Yale University’s Breadboard Project experimented with integrating bots into teams of human players in an online game to test whether the inclusion of AI could boost human performance. The experiment proved a success, demonstrating that humans partnering with AI could perform at a higher level. Increasingly, AI is being introduced into complex, real-life settings, from hospitals to offices, with the intent of complementing the work of humans. The aim is to empower people rather than to replace them, but at what point does the role of AI become disempowering, and who decides where to draw the line?

In US courtrooms, for example, machine learning is being deployed to estimate the likelihood of a defendant committing a crime while on bail. The risk assessments derived from these algorithms are used to inform decisions about what bond amount should be set, or even whether a defendant should be released at that stage. The rationale for turning to AI was to improve the accuracy of the system and to counter human biases in sentencing. However, an investigation by ProPublica found that an algorithm used by one county court in Florida was particularly likely to falsely flag black defendants as future criminals.

In recent elections, AI has been used to help political parties target potential voters. During the EU referendum, the AI start-up Cambridge Analytica created psychometric profiles of users by harvesting their data from Facebook; these profiles were then used to design tailored adverts that mirrored each user’s political leanings.

These ethical dilemmas are already being felt today, yet the public has little involvement in their resolution. Is there scope to consult the public as part of the process of monitoring and managing this technology as it evolves?

The RSA and DeepMind share a commitment to encouraging and facilitating meaningful public engagement on some of the most pressing issues facing society today. In our new project, we will co-produce a series of events exploring the ethics of AI. Drawing on the work of the RSA’s Citizens’ Economic Council, the RSA will run citizens’ juries on the use of AI in criminal justice and democratic debate. These events will use immersive scenarios to help participants understand the ethical issues raised by AI, and will serve as a platform for a deliberative dialogue about how best to respond.

There is much to gain, as well as much at stake, as AI becomes more powerful. With citizens and leading experts, we will reflect on how we might realise AI’s full potential without causing undue harm to society.

 

To keep up with this project as it develops, follow @theRSAorg and @DeepMindAI.

 


 

