
Forget jobs. Will robots destroy our public services?


Atif Shafique, Associate Director, Public Services and Communities (Sabbatical)

A combination of shrinking services, austerity and punitive welfare risks a future in which technology and automation create a public sector that fuels inequality instead of tackling it.

Imagine the following scenario. After years of damaging under-investment, public services are on the brink. Social infrastructure is creaking. An ideologically-driven leader sweeps into power, promising to streamline social services, eradicate benefits fraud and ensure that only the truly needy and deserving get state support. He tightens the conditions that must be met to access services and strengthens the sanctions regime to punish those who misuse the system. But he wants to do it on the cheap - he doesn't want expensive caseworkers making those decisions, not least because by virtue of their work they have close relationships with potential cheats. So he sets up an automated decision-making system and replaces caseworkers with online forms and distant regional call centres, weakening the human relationships that typically characterise public services. An automated system decides who is and isn't eligible for welfare support, based on algorithms shaped by a punitive and moralistic view of the poor. The result is a 54 percent increase in benefit denials after only three years. One of those denied support is a woman with cancer, who loses her medical support, welfare payments and travel assistance to get to hospital and doctor appointments. Less than a year later, she dies. A day after her death, her appeal against the decision is accepted and her benefits are restored.

This dystopian scenario isn’t fictional. It is in fact a description of a fatal experiment that took place in Indiana in the United States. In her groundbreaking but frightening book Automating Inequality, the academic Virginia Eubanks examines how advances in technology in the US - particularly automated decision-making, data mining and predictive analytics - are being used to control the poor and sever their access to public support. The book shows that this is nothing new: it is part of a historical trend stretching back decades. Crucially, Eubanks doesn’t entertain the science-fiction notion that the real threat comes from machines outsmarting humans, becoming unaccountable and wreaking havoc as people look on helplessly. Nor is it just a case of human biases unintentionally finding their way into technology, as critics of algorithmic decision-making have cautioned. Instead, some of the worst features of the tech are intentional and baked in from the beginning.

Disturbingly, these features are shaped by a set of institutions and social, financial, economic and moral assumptions that we’re all too familiar with in the UK. Chief among them is a pessimistic and moralising view of the poor, which associates the challenges they face in life with their own character deficiencies. Alongside this is an effort to distinguish between the ‘deserving’ and the ‘undeserving’ poor, and to cut the latter off from public support; to stop them ‘gaming’ the system; to sanction them and to force them to change the way they behave. But this doesn’t just stem from cultural attitudes. These views are shaped by the dynamics of economics and class power. Eubanks shows that much of the impetus for policing and sanctioning the poor came from professional middle-class backlashes against social justice and welfare rights advances won by the poor and vulnerable. This in turn was linked to a larger systemic issue. The middle classes wanted ‘undeserving’ poor families off welfare rolls because, in a system that lacks universal services and solidarity, the struggle for state resources is a zero-sum game. And it is the affluent who have the superior economic, cultural and political capital. In this context, needs assessments, means-testing, conditional public services, algorithms and data-driven tech combine to devastating effect.

Automatic inequality: How misuse of tech threatens our public services

Automating Inequality describes the impact that this combination has had, warning that public services are being transformed into a ‘digital poorhouse’. Building on Eubanks’ work, I would argue there are three key ways in which this shift towards a digital poorhouse threatens the public services that we cherish.

The first is by accelerating welfare reduction and stigmatisation, creating a hostile environment for those in need of public support. Eubanks shows that millions of families have lost access to vital public support, or have been deterred from accessing it, whether through complex rules and processes or through deliberate stigmatisation. The irrational obsession with benefits fraud (underpayment of benefits is a much bigger issue than the minuscule sums lost to fraud) - an obsession that is rampant in both the UK and US - can be fuelled through misuse of tech. For example, in 2014 Maine's Republican governor Paul LePage used tracking data from electronic benefit transfers to create a fabricated account of poor families defrauding taxpayers by buying liquor, lottery tickets and cigarettes with their benefits. He even released the data to the public via Google Docs, and the middle classes ate it up.

This brings us to the second way that the digital poorhouse threatens public services: it damages the empathy and social solidarity that underpin them. Automated decision-making and digital tracking hide the suffering of the poor from the middle classes, creating the ‘ethical distance’ the latter need for unjust decisions to be made without the resulting guilt. This undermines the shared responsibility we all have to tackle poverty and support each other. Outsourcing decision-making to computers means compromising on the human values of empathy, solidarity and compassion.

The third is by weakening the human relationships that ought to lie at the heart of our welfare state. Outsourcing the assessment of need to machines, for example through predictive analytics, risks de-humanising and discriminating against whole sections of the population. Eubanks highlights the example of a child protection system in Pennsylvania that used a predictive model deeply marred by racial and class biases in how it determined risk factors. This had the chilling effect of criminalising particular communities.

Is the UK heading towards a digital poorhouse?

The UK is by no means immune to the digital poorhouse. In fact, it provides the perfect environment for it to flourish. Austerity provides the financial impetus. A highly punitive welfare system that is suspicious of the poor and is becoming less and less universal offers a testbed. Widespread economic insecurity, inequality and social division have weakened solidarity.

The digital poorhouse is already creeping in. Much has been made of the scandal of work capability assessments, and of how people have been declared ‘fit to work’ despite being in no condition to do so. Less frequently mentioned is the computer system used for the assessments. The Logic Integrated Medical Assessment (LiMA) software calculates a numerical score based on data entered by a DWP decision maker and its own ‘logical’ rules. This numerical score - and not the professional judgement of an appropriately qualified medical expert - ultimately determines who is and isn’t fit for work. Jobseekers have also been surveilled and monitored through the Universal Jobmatch website, and the monitoring data has been used to impose sanctions. Public data sharing and predictive modelling are also becoming increasingly common.
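To make this concrete, here is a minimal, purely illustrative sketch of how a points-based assessment of this kind can work. The descriptors, point values and threshold are invented for illustration and do not reflect LiMA's actual rules or the DWP's criteria; the point is simply that the outcome falls out of design choices made long before anyone is assessed.

```python
# Purely illustrative sketch of a points-based assessment. The descriptors,
# point values and threshold below are invented; they do not reflect LiMA's
# real rules or the DWP's actual criteria.

DESCRIPTOR_POINTS = {
    "cannot_walk_50_metres": 15,
    "cannot_stand_for_30_minutes": 6,
    "needs_prompting_for_daily_tasks": 8,
    "no_relevant_limitation": 0,
}

FIT_FOR_WORK_THRESHOLD = 15  # score below this => declared 'fit for work'


def assess(selected_descriptors):
    """Sum the points for the descriptors ticked on the form and apply a
    fixed threshold. The decision is mechanical: no clinical judgement or
    context about the person's life enters the calculation."""
    score = sum(DESCRIPTOR_POINTS.get(d, 0) for d in selected_descriptors)
    decision = "fit for work" if score < FIT_FOR_WORK_THRESHOLD else "not fit for work"
    return score, decision


if __name__ == "__main__":
    # Someone with real but 'sub-threshold' limitations is waved through
    score, decision = assess(["cannot_stand_for_30_minutes",
                              "needs_prompting_for_daily_tasks"])
    print(f"score={score}, decision: {decision}")  # score=14, decision: fit for work
```

Under a scheme like this, whether someone is found ‘fit for work’ hinges entirely on which boxes map to which points and where the threshold is drawn - choices baked in when the system is built, not judgements made about the person in front of the assessor.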

Despite the frenzied debates about artificial intelligence (AI), much of the tech mentioned in this blog isn’t even particularly cutting edge. But the warning is clear: without a radical reconception of our public services, the temptation to use tech for unjust purposes will only intensify with advances in big data and AI. No amount of ethical codes for AI will prevent this, because ultimately the problem doesn’t lie with the tech itself; nor is it a technical matter of ensuring appropriate safeguards. The best protection against the digital poorhouse is to build a social security system fit for the 21st century; one that provides universal services, promotes solidarity, empathy and human relationships, and seeks to empower instead of police and punish. It is within this context that the progressive and positive potential of tech - in education, health, planning and freeing up public servants to do more for their communities by automating mundane tasks - can best be realised.

CASE STUDY: NHS rationing in 2035*

*A fictional account of a plausible future where we use tech to weaken rather than strengthen public services. 

The NHS is just about surviving. A combination of demographic pressures, relentless fiscal austerity and the emergence of a two-tier system of public and private health means it’s not the same as it once was, although it still retains the support of the public as a ‘universal’ service.

Waiting times for treatment are getting longer and more difficult to manage. Effectively rationing services has become one of the central priorities of the NHS. It’s not easy to do. Thousands of medical professionals currently maintain the system, making informed (though imperfect) decisions about prioritisation based on need. But as austerity sharpens, this role is under threat.

At the same time, extreme rationing within a universal system has provoked a middle-class backlash. The affluent are outraged that they have to wait so long for support despite being responsible, tax-paying citizens. They demand that the rules of rationing be changed so that those with bad behaviour and those who don’t contribute are moved to the back of the queue. This anger is only heightened by TV shows about poor people who ‘choose’ to live unhealthy lives but continue to get priority in the NHS. The Taxpayers Alliance has seized on this divisive climate to launch a campaign to end ‘gaming’ of the NHS.

The government finds inspiration from a renowned think tank - the Centre for Social Responsibility (CSR) - in formulating a response to these challenges. A recent CSR report recommends replacing the current system of assessment with “algorithmic rationing,” so that waiting times are determined by an AI-enabled tool that absorbs vast amounts of medical and social data and uses machine learning to continuously adapt.

The purported benefits for the government are clear. For one, it is much cheaper and more efficient than having thousands of professionals do the work. It is also possible to build sophisticated algorithms that are game-proof: punishing the undeserving and reducing waiting times for the middle classes. Because rationing is determined by complex webs of algorithmic data and not by human decision makers, service users will also have little idea of why they have to wait as long as they do, or even whether they’re being punished at all. This invisibility and abstraction reduces the guilt that the affluent may feel as they see their waiting times drop dramatically. Politically, it’s a no-brainer.

The lives of Miriam and Annie show what this system can do. Both have multiple sclerosis, and both need a combination of medical treatment, physical therapy and equipment, including a wheelchair. Despite having similar levels of need, Annie received all the support she needed within three months, while Miriam is still waiting two years on. The algorithmic rationing system used proxy social data to determine that Annie was more deserving of faster treatment. Miriam was deemed irresponsible because she has five children, has worked for too long in a low-paid job (therefore taking more in state support than she gives back in taxes), has been sanctioned by the DWP multiple times in the past, and is in a family with a history of anti-social behaviour.
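To show how such a system might produce this outcome, here is a hypothetical sketch of a priority score that blends clinical need with proxy ‘social’ data. Every feature, weight and record below is invented; this is not a description of any real NHS system, only an illustration of how two people with identical clinical need can end up in very different places in the queue once social proxies enter the score.

```python
# Hypothetical sketch of 'algorithmic rationing'. Every feature, weight and
# record below is invented for illustration; this is not any real NHS system.

PROXY_WEIGHTS = {
    "dwp_sanctions": -2.0,       # each recorded sanction lowers priority
    "dependent_children": -0.5,  # stands in for 'irresponsibility' in this fiction
    "net_taxpayer": 3.0,         # rewards those seen as 'contributors'
    "family_asb_flag": -4.0,     # flag shared in from another public dataset
}


def priority_score(clinical_need, social_record):
    """Blend clinical need (0-10, from medical data) with proxy social data
    drawn from linked datasets. The person never sees which features moved
    their score, or why."""
    score = float(clinical_need)
    for feature, weight in PROXY_WEIGHTS.items():
        score += weight * social_record.get(feature, 0)
    return score


annie = {"net_taxpayer": 1}
miriam = {"dwp_sanctions": 3, "dependent_children": 5, "family_asb_flag": 1}

# Identical clinical need, very different places in the queue
print("Annie:", priority_score(8, annie))    # 8 + 3 = 11.0
print("Miriam:", priority_score(8, miriam))  # 8 - 6 - 2.5 - 4 = -4.5
```

Nothing in the score is ever explained to the person it ranks, which is precisely the invisibility that makes this kind of rationing so politically convenient.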

Miriam has no idea she is being discriminated against in such a manner. The AI tool cannot process or understand contextual information about her life in a way that a caseworker might.

Annie passes away at the age of 98. Miriam dies much earlier at 70. As far as anyone’s concerned, that gap is mere coincidence.

Comments

  • I agree with the plea for a re-vamped social security system fit for the 21C, but I take issue with some of your arguments. Human decision making is full of bias and often inaccurate and inconsistent, whether made by professionals or not. The best place to observe this is in the medical sphere, where second opinions exist for a reason! Machine-based decision making is almost always more consistent and accurate, and any bias in the data can be rendered transparent with time. This is something that is very difficult to do in human decision-making, where bias is 'baked in' to each of us and we are often free to interpret data in accordance with that bias. This can be very hard to identify and even harder to change. If fairness is a criterion then machine-based decisions are invariably more just, in that they apply the rules without discrimination. Yes, AI is powerful and complex and, like any tool, it can be used to help enforce an ideology or political agenda, fair or unfair. I would argue that our real challenge is to ensure all public AI systems are transparent and open to democratic oversight and regulation (similar to open source). If we achieve this then we have little to fear from the technology, and maybe the foundations of a more just 21C social security system.
