The RSA is cautiously optimistic about the potential of technology to enhance human experience for all. It’s not the robots that worry us; it’s the humans in charge of them. Our social, economic and democratic systems need a comprehensive upgrade if we are to maximise the opportunity of new technologies.
Recent pictures of Humberside Police joining revellers on the dodgems at Hull Fair hopefully offer reassurance that the dystopian world of the 1987 movie ‘Robocop’ is still some way off. Yet many fictional tales of the future end badly for humanity, subjugated or even exterminated by our own artificially intelligent creations.
Against this cultural backdrop, and amid media fascination with impending mass unemployment at the hands of robots (despite another set of positive employment statistics), the recent government review on AI strikes a pragmatic tone. As Ben Dellot argues, it offers sensible solutions which, if anything, are too muted on the need for the UK to vigorously exploit AI’s potential. My recent trip to Japan was also a reminder that other cultures find it easier to accept and welcome AI and robots as a force for good.
But while automation could be a powerful force for good, we need to actively steer it in this direction. Much rests on the social, economic and political systems into which new technologies are introduced.
Turning back to dystopian futures, the real storyline of Robocop is not about cyborg or robot technology gone wrong, but about a corrupt and unaccountable corporation wielding monopoly power to enrich its own executives and shareholders.
That’s a story that has been with us for much longer than semiconductors and algorithms. Indeed, the defence of liberty and commerce against the unhealthy concentration and abuse of power is part of the RSA’s Enlightenment heritage.
So what are the issues we need to address?
Experts 82, Citizens 4
The dimensions of AI that were outside the scope of the government review were arguably the most important ones:
- what is the purpose of implementing new technologies, and who gets to decide?
- by what process do we hold the robots’ masters to account?
- how should we equitably distribute the wealth created by automation?
In the report, the word “expert” appears 82 times, “customer” 27, “citizen” 4 and “employee” 2. There is little consideration of how we democratise wealth creation.
Similarly, the CBI’s recent report on AI, Blockchain and the Internet of Things provides a useful overview of the opportunities and barriers to adoption. The CBI argues that “getting the most from technology requires a deep partnership between government, business and people”, but goes on to propose an expert commission, including companies, trade unions and government, with no broader engagement with civil society or citizens.
This is not a criticism of either report, nor an argument against investing in expertise, which remains an urgent priority.
However, what I am suggesting is that the rise of AI and automation will expose and magnify existing flaws in our economic and democratic systems, and that addressing them will require exactly the kind of deep engagement with citizens that the CBI hints at.
Here are three such issues to consider.
#1 – Ethics and purpose
Although sometimes progress might seem frustratingly slow, the direction is clear. Businesses cannot exist in a moral vacuum.
The UK National Advisory Board’s report on impact investment makes a powerful case for a world where investors, governments and corporations all consciously and explicitly seek to create positive social impact in all of their economic activities.
This is particularly important in new technology industries because, as Brhmie Balaram suggests, the possibilities opened up by AI are presenting new ethical dilemmas, or at least supercharging old ones.
This makes a clear public benefit purpose, against which executives are willing to be held to account by stakeholders, of vital importance as an institutional device for ensuring that new technologies are deployed in ways which benefit society as a whole. Public deliberation can also be very effective in illuminating the most complex scientific and ethical issues, as the government’s Sciencewise programme has demonstrated.
In this context, DeepMind’s new programme of work on the ethics of artificial intelligence demonstrates welcome corporate leadership, and the RSA is pleased to be one of the partners.
#2 - Economic democracy
Ethical mission-driven corporations are a huge step forward, but they still represent a one-sided relationship – the benevolent exercise of power. What if power were more equally distributed in the economy? Might this offer the most enduring safeguard against exploitation, corruption or destabilising concentrations of wealth?
As a soundbite, this is easy to sign up to. But one person’s economic democracy might be another’s idea of tyranny.
So, the tricky question is how.
Recent sterile party conference debates about free markets versus central planning did not take us very far. More imaginative solutions are called for, in which citizens have more direct influence over the economy, not just as workers, consumers or investors, but as citizens. The RSA Citizens’ Economic Council has demonstrated how this can work for economic policy, by creating a collaborative and deliberative conversation early on between citizens, experts and a range of stakeholders.
The next step is to consider what participatory democracy looks like for economic institutions. One route is well established – mutual and co-operative forms of ownership and governance. The co-operative movement is as old as the shareholder corporation, and if it can evolve from the early Victorian world of the Rochdale pioneers to the world of the web, we can expect this to be part of the solution.
But we can also appeal to a much older organising principle of the economy. One which can reconcile scale and collectivism with individual liberty and enterprise.
#3 – Data is the new commons, so we need to reimagine how we govern it
November marks the 800th anniversary of the Charter of the Forest. This younger sibling of Magna Carta set out the rights of common people to draw on common natural resources to support themselves. When augmented by the insights of economists such as Elinor Ostrom, who won the Nobel Prize for her “analysis of economic governance, especially the commons”, we can begin to imagine new institutions, rules and norms that are fit for governing an economic system whose most valuable raw material is no longer fossil fuel, but data.
Artificial intelligence algorithms are useless without vast amounts of data to drive them. But unlike fossil fuel, data is a potentially unlimited resource that is generated by the people themselves.
So who owns it? Who should control it?
One direction of travel, illustrated by Jaron Lanier’s ‘Who Owns the Future?’, is to put individuals in control of their data, and even pay them tiny amounts for each piece.
While superficially attractive, this hyper-individualisation fails to recognise that data becomes most valuable when related to other data: when we can recognise patterns. The whole is definitely worth more than the sum of the parts – so who should own the whole?
This is likely to require innovations in the rules, institutions and cultural norms needed to equitably govern a new commonwealth of data, such as those being explored by the P2P Foundation. Eight hundred years on from the Charter of the Forest, is it time to write a new Charter of the Digital Commons?