
Police are already using AI. The public deserve a say.


New research from the RSA suggests the police aren’t doing enough to engage the public on how they are using new technologies. 

Over the past few years, police use of new technologies has been creeping into the public consciousness. Artificial intelligence (AI) and automated decision systems (ADS) are increasingly part of the arsenal available to police forces in the UK.

Take-up is patchy. Many police forces may not have the need, the budget or the desire to invest in these technologies, and among those that do, the systems and their uses vary significantly. The technologies themselves are myriad, ranging from simple but effective spreadsheets to state-of-the-art machine learning programmes; the two principal uses in the UK are predictive policing and facial recognition. The latter has received particular attention over the past year, owing to court cases and allegations of bias.

There is no reason why we should, by default, reject the use of AI by the police, but a growing body of work has questioned the efficacy of these systems, their potential for racial bias, and the impact they could have on policing more widely. We were interested in two issues in particular: how much input the public have over the deployment of these technologies, and how well police forces are communicating with the communities they serve.

A force for good?

We have found that, in many respects, the police have deployed these systems without sufficient input from the public. Through freedom of information (FOI) requests, we asked every police force in the UK whether they were using such systems, what training and guidelines they offer to staff, and whether they had consulted the public. Just one (South Wales Police) confirmed to us that they had carried out public engagement around their use of AI and ADS.

There is a reluctance among many police forces to disclose information about algorithmic decision-making. The Metropolitan Police, who began a programme of live facial recognition in February, stated that they would carry out public consultation alongside it. In a follow-up FOI request, returned to us at the beginning of March, they said they had no written record of any such engagement taking place. In February, a data breach revealed that they had been using the services of the controversial US company Clearview AI, despite having previously denied the company’s involvement.

We also found that while almost every force has some form of guidelines for how staff should use these technologies, these vary significantly between forces. In some cases they were produced by the forces themselves; in others they were drawn from academia. As has been reported elsewhere, there is a lack of strategy from central government here, reflected in the differing approaches of individual police forces. Of course, different areas have different needs, but there must be common standards of guidance and accountability.

Given that these technologies are unfamiliar to much of the population, educating the public is crucial, and public consultation should not be an optional extra. These findings point to a lack of transparency, and of public input, over how these new technologies are being used, which in turn undermines the principle of policing by consent.

Policing the pandemic

These findings come at a critical moment for the UK, during the most far-reaching extension of police powers since the Second World War. There is an obvious public benefit to expanding police capabilities during the pandemic, but we must be wary of new attitudes becoming entrenched. Given the pace at which the lockdown has been enforced, necessary scrutiny may fall by the wayside: police forces have already come under fire for overreach. Cultures can be sticky, and powers difficult to withdraw.

There may well be a case for police use of AI and ADS. Automating parts of police work, if done properly, could free up time to be spent on the streets and in communities. But this case needs to be properly evidenced. At present the evidence is mixed, suggesting some public support but also reticence: last year, a report by the Ada Lovelace Institute found that 70 percent of the public supported police use of AI for criminal investigations, although 55 percent wanted it limited to specific circumstances. Much of the best academic work on this issue has taken place abroad; more research is needed in a UK context.

A necessary conversation

There is, as yet, no clear path for exactly where and how algorithms should be used in policing. This is a novel space for police forces, the public, regulators and politicians. We need an awareness of the risks, and a means of ensuring that the public good remains the guiding principle behind police work. These technologies should not be deployed solely as a means of cutting costs or hitting internal targets.

Last year we launched Democratising decisions about technology, a toolkit for deliberating with the public on the use of AI and ADS, with a particular focus on public services. Deliberative processes have taken off over the past few years; this year’s climate assemblies are a good example. They are useful not only for improving the quality of decision-making, but also as a means of educating those affected. We are interested in exploring how this model could be applied further in the context of policing, and in our public services more widely.

As we enter new territory in our relationship with the police, transparency and communication are more important than ever. New technologies can be a force for good, but only if they come with adequate safeguards.

Read the full report


The RSA’s Tech and Society Programme is researching the impact of artificial intelligence, data rights and disinformation.
