Regulation of AI: Not if but when and how

Ben Loewenstein
Public affairs consultant working in the City with an interest in politics and tech

The artificial intelligence industry should not get too comfortable. It will not be long before policymakers make moves to put the brakes on technological progress in the name of fairness and human safety.

It has been a big month in the world of artificial intelligence (AI) policy. It saw the release of the Juergen Maier-led Made Smarter review and a request from the European Council for the Commission to set out an AI “approach” by early next year; but the centrepiece was the long-awaited release of the government’s independent review of the UK AI industry. While the report put forward many recommendations that will help grow the technology in the UK, ensuing commentary has been swift to pick up on the one that was absent: regulation.

When it comes to AI, regulation could mean anything from establishing boundaries on what can be developed to pre-determining which jobs must remain in human hands, so it is unsurprising that regulation was not recommended at this early stage. Policymakers need to sense public willingness before they act, and AI is yet to be fully commercialised. But we should not fall into the trap of thinking it will never happen.

The debate about how to regulate is percolating, and the large tech firms developing AI technology need to brace themselves for a battle. For the most part, however, their strategy has been to keep their cards close to their chests and hope that the chatter dies down. Perhaps this is understandable when your business is booming and your competitors are being served €2.4 billion fines by the European Commission. But all indications are that these discussions are only just beginning, and moves to regulate AI are not as far away as some might think. Why?

Firstly, AI is already embedded in today’s world, albeit in infant form. Fully autonomous vehicles are not yet for sale, but self-parking cars have been on the market for years. We already rely on biometric technology like facial recognition to grant us entry into a country, and robots are giving us banking advice.

Secondly, there is broad consensus that controls are needed. For example, a report issued last December by the office of former US President Barack Obama concluded that “aggressive policy action” would be required in the event of large job losses due to automation, to ensure the technology delivers prosperity. If the American government is no longer a credible source of accurate information for you, take the word of heavyweights like Bill Gates and Elon Musk, both of whom have called for AI to be regulated.

Finally, the building blocks of AI regulation are already taking shape in rules like the European Union’s General Data Protection Regulation, which takes effect next year. The recommendations of the UK government’s independent review are also likely to become policy. This means we could see a regime in which firms within the same sector share data with each other under prescribed governance structures, in an effort to curb the monopolies big tech companies currently enjoy on consumer information.

The latter characterises the threat facing the AI industry: the prospect of lawmakers making bold decisions that alter the trajectory of innovation. This is not an exaggeration. It is worth reading recent work by the RSA, as well as Jeremy Corbyn’s speech at the Labour Party Conference, which argued for “publicly managing” these technologies.

The world’s big, influential tech companies wear heavy crowns. They are no longer scrappy insurgents but utilities. As the custodians of AI technology, there is something of an onus on them to stick their heads above the parapet and guide the public debate now that it is ramping up. Idle calls from a select few, regardless of their profile, will do little to sway opinion, and silence could ultimately mean that those who know the industry best play little part in determining how AI policy controls are shaped.

Yes, political discussions about the impact of AI have so far been mired in theory, frustratingly so, but talk will give way to action before long. If AI developers are willing to show their hand, then the government must be willing to listen and act if it wants to get the policy approach right. No pressure, but failure to do so could mean a missed opportunity to harness the potential of the next industrial revolution.

Ben Loewenstein is a political consultant at an advisory firm in the City of London and a Fellow at the RSA.  
