You may have heard that artificial intelligence has the potential to accelerate human progress, but in what ways could the use of AI hold us back? In a new project, the RSA and DeepMind are creating space for citizens to consider trade-offs in the use of AI and to guide companies, organisations and institutions in how it is harnessed.
Recently, Yale University’s Breadboard Project experimented with integrating bots into teams of human players in an online game, to learn whether the inclusion of AI could boost human performance. The experiment proved a success, demonstrating that by partnering with AI, humans could perform at a higher level. Increasingly, AI is being introduced into complex real-life settings, from hospitals to offices, with the intent of complementing the work of humans. The aim is to empower humans rather than replace them, but at what point does the role of AI become disempowering, and who decides where to draw the line?
In US courtrooms, for example, machine learning is being deployed to estimate the likelihood of a defendant committing a crime while on bail. The risk assessments derived by the algorithms are used to inform decisions about what amount of bond should be set, or even whether a defendant should be released at that stage. The rationale for turning to AI was to improve accuracy in the system and counter human biases in sentencing. However, an investigation by ProPublica found that an algorithm used by one county court in Florida was particularly likely to falsely flag black defendants as future criminals.
In recent elections, AI has been used to support political parties in targeting potential voters. During the EU referendum, the AI start-up Cambridge Analytica created psychometric profiles of users by harvesting their data from Facebook, which were then used to design tailored adverts that mirrored their political leanings.
These ethical dilemmas are already being felt today, yet the public has little involvement in their resolution. Is there scope to consult the public as part of the process of monitoring and managing this technology as it evolves?
The RSA and DeepMind share a commitment to encouraging and facilitating meaningful public engagement on some of the most pressing issues facing society today. In our new project, we will co-produce a series of events to explore the field of AI and ethics. Drawing on the work of the RSA’s Citizens’ Economic Council, the RSA will run a series of citizens' juries on the use of AI in criminal justice and democratic debate. These events will use immersive scenarios to help participants understand the ethical issues raised by AI and serve as a platform for entering into a deliberative dialogue about how best to respond.
There is much to gain, as well as much at stake, as AI becomes more powerful. With citizens and leading experts, we will reflect on how we might realise AI’s full potential without causing undue harm to society.
To keep up with this project as it develops, follow @theRSAorg and @DeepMindAI.
Join the discussion
Through my independent advisory business, Intelligent Ethics Limited, I am working with corporates and senior business leaders to help them understand the consequences of AI on business ethics, integrity and Corporate Governance.
In my previous role as a Partner at PwC, I led a series of Citizens' Juries on the subject of Trust aimed at the wider public, as well as a specific programme of work targeted at students. I am a regular external speaker on AI and ethics (for example, at the AI Europe 2017 Conference on 20th November).
I would be delighted to be involved in this work and bring my experience and expertise to the RSA/DeepMind project. Please do contact me.
Great to hear your enthusiasm for this project, Tracey, and that you're willing to be involved. We are still working out how to involve Fellows and others interested in this project but will be in touch once we have agreed on the initiatives we'll take forward.
I am an AI enthusiast and very interested in contributing to this project. Please let me know how I can get involved.
I see that DeepMind is an Alphabet (Google) company. A condition of the purchase of DeepMind by Google was the establishment of an independent AI ethics board to ensure that technology is not abused. How is this project connected with this ethics board and how independent is it from DeepMind and Alphabet? I am interested in the commercial interest and influence in this exercise.
Hi David, that's a good question. The first thing to note is that this project is being run independently by the team at the RSA. We will be setting up our own advisory board as part of this project. This advisory board will remain independent from DeepMind, as well as its ethics board, Google and Alphabet, so that should address any concerns about commercial influence. Please feel free to get in touch with me if you have any more questions.
Fantastic idea! Essential for the good of society. Besides keeping up with the project, how can one get involved? Here's a thought provoking piece on using AI to empower humans: https://waitbutwhy.com/2017/04/neuralink.html
Hi David, I'm also interested in getting involved in this field and found both the RSA article and the piece you posted very interesting. Perhaps we can get a Fellows Forum going?