The algorithmic workplace need not be a dystopia


Benedict Dellot, Former Head of the RSA Future Work Centre and Associate Director

Algorithms are spreading throughout the workplace – from recruitment software that screens candidates for interviews to programmes that orchestrate staff scheduling. Despite growing alarm that these tools will damage privacy and give rise to greater surveillance, the reality could be a more humane workplace that is less biased, fairer and altogether safer.

Enter the robo-bosses

Imagine searching for a job in the not-too-distant future.  

Algorithms analyse your online footprint to understand your work preferences and send job adverts your way. Once you apply for a role, another algorithm reviews your application and CV, and decides whether or not to screen you for an interview. Yet another is used to power video monitoring during the interview itself, helping recruiters to interpret whether your smiles are genuine and your enthusiasm honest.

Let’s assume you’re lucky enough to land the job. Your new employer launches a wellbeing initiative underpinned by wearable devices that track your emotions and behaviours 24-7. Your communications are likewise subject to algorithmic oversight. Artificial intelligence is deployed to spot inappropriate content in your emails and flag potentially criminal behaviour, such as whether you’ve accessed prohibited files.

The list of algorithmic run-ins goes on.

So far, so unnerving. Yet this Blade Runner-esque vision of the future workplace isn’t all that distant from today’s reality. A long list of companies and products has emerged in recent years to extend the digitisation of management – from warehouses and offices through to hospitals and retail stores:

  • Percolata draws on a combination of sensors and algorithms to build smart schedules for retail workers. Rotas are drawn up based on an assessment of individual worker performance, who works well together, and other information such as predicted footfall.
  • Mya is a recruitment tool that uses natural language processing algorithms to engage with job candidates throughout recruitment rounds. It poses contextual questions (“do you have line management experience?”), answers queries and provides personalised updates.
  • Humanyze is a credit card-sized device worn by workers to monitor their mood and understand team dynamics, such as who is usually dominant in conversations and who appears most engaged. It draws on infrared sensors, microphones and an accelerometer.
  • HireVue is AI-powered video technology that analyses candidates during job interviews. Intonation, verbal response and facial expressions are among the data points it captures – all geared towards unearthing feelings that would otherwise remain hidden.

Taylorism redux

How popular are these tools? HireVue claims its software has been used to analyse candidates in 5 million interviews. Humanyze has partnered with at least one major bank as well as the NHS. One estimate suggests a fifth of employers in Europe had access to wearable tech in 2015, while in the US as many as 72 percent of CVs are not seen by human eyes. Amazon, Unilever, Deloitte, Tesco – nearly every major corporate has dipped its toe in the water of algorithmic management.

Yet not everyone is happy with this trend. Some fear these tools will lead to a kind of Digital Taylorism, taking scientific management principles to another level of intrusion. The academic Phoebe Moore warns of the threat to people’s work-life balance, and argues that workplace tech could lead to an ‘always on’ and ‘hyper employed’ culture. Take a recent ethnographic study of long-distance truck drivers, which found that electronic monitoring left drivers feeling pressured to skip mandated breaks.

Others worry about the consequences for privacy. Algorithms, such as those that power video surveillance in interviews or which identify inappropriate content in emails, risk creating a culture of ‘guilty until proven innocent’. In 2015, a Californian worker took her employer to court after she was allegedly dismissed for uninstalling a cell-phone app that tracked her whereabouts 24 hours a day. The plaintiff claimed her manager used the device to monitor her driving speed outside of work hours.

Then there is the threat to people’s autonomy and sense of control. Think of delivery drivers, many of whom have their routes and daily schedules fully mapped out by algorithms. More humdrum technology is used in some warehouses to remove even the most basic decisions from workers, such as which size box to use or how long a piece of tape to cut for wrapping. An RSA/Populus survey found 1 in 5 UK workers believe new technology has reduced the amount of freedom they have at work.

But perhaps the biggest grievance with algorithmic management is whether it even works. Many applications have yet to be properly tested, and those that have are often found to be patchy and prone to wild fluctuations. In her book Weapons of Math Destruction, mathematician and tech polemicist Cathy O’Neil reports how a performance algorithm used in the New York City education system scored the same teacher 6/100 in one year and 96/100 the next, without a change in their teaching style.

Algorithms to the rescue?

Even for the UK, which has a relatively sanguine attitude towards surveillance, these practices have proven unsettling. But is the use of algorithms in workplace management necessarily pernicious?

For one, employers have a legitimate desire to ensure workers fulfil their responsibilities. Technology can be a tool to guarantee everyone ‘pulls their weight’, which also means preventing people from free-riding on the efforts of more diligent staff. Of course, face-to-face management is preferable for holding workers to account, but a mixture of outsourcing and flexible working means more employees are out of the office and difficult to reach through conventional means.

Or think about fairness. Artificial intelligence in particular has the power to remove, not just entrench, bias in decision-making. Percolata’s algorithms set work schedules based on perceived performance and team fit rather than the whims and friendships of managers. Infor Talent Science, meanwhile, claims its recruitment algorithm led to an average 26 percent rise in African American and Hispanic hires across the industries where it was used.

Algorithmic management can also help protect vulnerable groups. The West Japan Railway company deployed AI surveillance systems to spot intoxicated passengers at risk of injuring themselves. Likewise, Microsoft has developed an AI-enabled ‘smart camera’ to detect unmanned tools, spillages and potential accidents in warehouses and factories. In the banking industry, surveillance algorithms have been drafted in to crack down on fraudulent behaviour and prevent a repeat of mis-selling scandals.

Analogue workplaces are no utopia

It’s an obvious point but one worth repeating: ‘technology’ is not a uniform mass of tools but rather a multitude of devices, each of which has different consequences for workers. Much depends on how they are deployed, including what data is collected, how that information is analysed, and how the results are finally interpreted and acted upon. Human judgement plays as big a role as the technology itself. As the LSE’s Judy Wajcman points out, “technologies are crystallisations of society; they are frozen social relations.”

It is also unwise to lament the rise of algorithmic management without acknowledging the flaws of the modern workplace as it stands. According to our RSA/Populus survey (results forthcoming), as many as a quarter of workers today are stressed or unhappy, a quarter are worried about being treated unfairly, and nearly 1 in 10 are worried about being dismissed without good reason. Today’s offices and factories are not an analogue utopia. Most are messy, biased and unfair in some way, while a few are downright miserable.

The challenge, then, is to carve out a space for the right kind of technology to be deployed on our terms. A good start would be for employers to give workers greater say on which technology to purchase, as well as to share more information about its likely impact. A lesson in how not to do this comes from the Daily Telegraph, which installed OccupEye monitors on the desks of its workers with virtually no notice, prompting a backlash and ultimately a U-turn.

Employers could also commit to auditing technology more frequently, just as Xerox did in the US with its recruitment algorithms, which it found were inappropriately using the address of candidates as a key filter for shortlisting. A promising US initiative called the Algorithmic Justice League offers a space for people to report concerns over biased code, as well as a ‘bias check’ service for organisations that develop and deploy algorithms – from banks to charities to tech companies.
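To make the auditing idea concrete, here is a minimal sketch of what a first-pass bias check on shortlisting decisions might look like. The field names, the use of postcode district as the grouping variable, and the ‘four-fifths rule’ threshold are all illustrative assumptions on my part, not a description of how Xerox or the Algorithmic Justice League actually run their audits.

```python
# A minimal sketch of an adverse-impact check on shortlisting decisions.
# Field names ("group", "shortlisted") and the four-fifths threshold are
# illustrative assumptions, not any vendor's actual audit procedure.
from collections import defaultdict

def selection_rates(records):
    """Return the shortlisting rate for each candidate group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        selected[rec["group"]] += 1 if rec["shortlisted"] else 0
    return {g: selected[g] / totals[g] for g in totals}

def flag_adverse_impact(records, threshold=0.8):
    """Flag groups whose selection rate falls below 80 percent of the
    highest group's rate (the 'four-fifths rule' often used as a first
    screen for disparate impact)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Hypothetical example: postcode district standing in for the 'address'
# signal that Xerox found its algorithm was leaning on.
candidates = [
    {"group": "district_A", "shortlisted": True},
    {"group": "district_A", "shortlisted": True},
    {"group": "district_B", "shortlisted": False},
    {"group": "district_B", "shortlisted": True},
    {"group": "district_B", "shortlisted": False},
]
print(flag_adverse_impact(candidates))  # e.g. {'district_B': 0.33...}
```

Even a crude check of this kind can surface proxies such as address long before they quietly shape real hiring decisions, which is the point of routine auditing.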

National and international governments also have a role to play. The EU’s new General Data Protection Regulation (GDPR) is a major breakthrough, and will give everyone a right to understand how data is used to make judgements that affect them – including in the workplace. But further, more specific legislation may be required. France has a ‘right to disconnect’ ruling, which compels companies to set hours when staff are not required to answer work communications, while in Denmark, every new ‘controlling’ initiative must be announced to workers two weeks prior to introduction.

Above all, the power dynamic between workers and employers must become more equal and transparent. In today’s workplaces – across both the public and private sectors – the relationship between employees, managers and owners is too often transactional and hierarchical, with each side viewing the other with suspicion and distrust. So long as this continues, technology will only ever be seen as a whipping tool used by employers to subjugate workers, rather than what it could be: a means to achieve common goals and bring about a better world of work.

As ever, it’s not the technology itself that we should fear, but the societies and cultures into which it is placed.
