A Hippocratic Oath for AI developers? It may only be a matter of time

Blog

  • Benedict Dellot
    Former Head of the RSA Future Work Centre and Associate Director
  • Creative economy
  • Economics and Finance
  • Employment
  • Enterprise

Developments in artificial intelligence and robotics are picking up pace. But are policymakers and regulators ready for the ethical fallout? What new institutions and practices – from AI watchdogs to oaths for developers – should be put in place to maximise the social benefit of these technologies, while limiting their potential for harm?


A mind of their own

After years of dashed hopes and stunted progress, new AI and robotic systems are beginning to challenge human superiority in a variety of tasks. Machines are now capable of identifying cancers in medical images, spotting fraudulent behaviour in financial transactions, dealing with customer queries through retail and banking chatbots, and organising smart transport solutions to manage traffic flows in cities.

Much of the commentary surrounding these technologies has focused on what they mean for the future of work. Rise of the Robots (Ford), Only Humans Need Apply (Kirby and Davenport) and The Wealth of Humans (Avent) are just some of the recent books that have sought to map out what robots can and cannot do in the workplace, and what niche humans might be able to carve out for themselves in future.

Less attention, though, has been paid to the broader ethical dilemmas posed by these machines. Seldom do we hear how they might impact matters of privacy, discrimination, fairness and self-determination. This is partly because the effects of technology on these domains are subtle, complex and concealed, and are usually more difficult to quantify in media-friendly statistics. But that doesn’t mean they are any less worthy of consideration.

Think of these dilemmas:

Discrimination – Machine learning systems trained on legacy datasets can reinforce biases in decision-making. For example, employers using AI-powered recruitment software could lock out skilled candidates whose attributes fail to mirror those in the ‘training set’, just as prison parole boards using similar software could deny inmates leave because their profiles jar with algorithms trained partly on historical data (the sketch after this list makes the point concrete). It is not hard to imagine AI systems one day being used to exclude certain groups from purchasing insurance (e.g. if algorithms predict they have a high risk of contracting a chronic illness), or to charge particular consumers higher prices where retail vendors have a more accurate understanding of their willingness to pay. Caveat emptor takes on new meaning in an AI-saturated economy.

Privacy – Given that data is the fuel powering artificial intelligence systems, many of the companies developing them will need to harvest and store ever greater amounts of our personal information – including internet search histories, physical movements, spending habits and medical data. This is fine insofar as we consent to the extra tracking, but what happens when sharing becomes so normalised that divulging our most sensitive details starts to feel like an obligation? Nor is the fact that tech companies can store our data securely any guarantee that our privacy will be protected. Think of the incident involving Target, the US retailer, which posted coupons for baby items to one of its shoppers after analysing her purchasing habits, only for them to end up in her family’s hands first.

Agency – Artificial intelligence will help to address stubborn challenges relating to healthcare, education and energy efficiency. But as a commercial tool, it will also be used to steer the behaviour of consumers, with potential consequences for human agency and free will. When do AI-powered marketing tools, for example, turn from a helpful aid for shoppers into a dubious mechanism that encourages unsustainable spending? Addictive app design – or ‘captology’ as some describe it – is already a point of contention in Silicon Valley. One of the bestselling technology books of recent years is ‘Hooked: How to Build Habit-Forming Products’ – and we should wonder what AI could do in the hands of some of its readers.

Authenticity – The most sophisticated AI systems will not only be able to replicate human abilities; one day they will also be able to mimic human nature and pass themselves off as real people (if not in physical form then through written text and the spoken word). This opens up questions about the sanctity of human relationships, and whether there should be any limits to how connected we become to computers. Since companies are under no obligation to disclose whether an AI interface is human or machine, many people could falsely believe they are interacting with another person – when, for example, a chatbot app is providing retail advice. Not everyone will think this is an issue, but consider the reaction in some quarters to the cute Paro robot, which has proved effective in calming dementia patients but which has also been criticised for removing the human from an important caring role.
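
To make the discrimination dilemma concrete, here is a minimal sketch – with entirely hypothetical data and a deliberately naive ‘model’ – of how a screening tool trained on legacy hiring decisions simply replays the preferences baked into its training set:

```python
from collections import Counter

# Each record pairs a candidate attribute with the decision a human
# panel made in the past. The skew towards 'university_x' graduates
# reflects historical preference, not ability.
legacy_data = [
    ("university_x", "hire"), ("university_x", "hire"),
    ("university_x", "hire"), ("university_x", "reject"),
    ("other", "reject"), ("other", "reject"),
    ("other", "reject"), ("other", "hire"),
]

def train(data):
    """'Learn' the majority decision made for each attribute value."""
    outcomes = {}
    for attribute, decision in data:
        outcomes.setdefault(attribute, Counter())[decision] += 1
    return {attr: counts.most_common(1)[0][0] for attr, counts in outcomes.items()}

model = train(legacy_data)

# Two otherwise identical candidates receive different outcomes purely
# because of the bias encoded in the historical decisions.
print(model["university_x"])  # hire
print(model["other"])         # reject
```

Real recruitment systems are of course far more sophisticated than this, but the underlying failure mode is the same: a model that faithfully learns from skewed decisions will make skewed decisions of its own.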


Keeping AI in check

Each of these domains is called a dilemma for a reason: there is often no easy answer about what is right and wrong, and while machines may harm some users, they will also deliver big gains to others.

Retailers have long used marketing to encourage people to buy more of their goods, so why is it inappropriate to use more powerful adverts underpinned by AI? Recruiters have never been able to take fully objective decisions about the suitability of job candidates, so why is it wrong to use candidate sifting tools underpinned by machine learning? The time that nurses in care homes can devote to older patients is increasingly squeezed, so why not use robotics to at least plug some of the gap?

AI developers, policymakers and regulators cannot answer these questions alone. But they can start taking steps to limit the most obvious potential for harm based on what we know to be broad societal values, such as not discriminating against people due to their gender, age, race or religion.

The largest tech companies – Apple, Amazon, Google, IBM, Microsoft and Facebook – have already committed to creating new standards to guide the development of artificial intelligence. Likewise, a recent European Parliament report recommended the development of an advisory code for robotic engineers, as well as ‘electronic personhood’ for the most sophisticated robots to ensure their behaviour is captured by legal systems.

Other ideas include regulatory ‘sandboxes’ that would give AI developers more freedom to experiment under the close supervision of the authorities, and ‘software deposits’ for private code that would allow consumer rights organisations and government inspectors to audit algorithms behind closed doors. DARPA recently kicked off a new programme called Explainable AI (XAI), which aims to create machine learning systems that can explain the steps they take to arrive at a decision, as well as unpack the strengths and weaknesses of their conclusions.
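
The aspiration behind programmes like XAI can be illustrated with another minimal sketch – the loan-style features, weights and threshold below are hypothetical, not drawn from any real system – of a decision tool that reports not just its verdict but how much each input contributed to it:

```python
# Hypothetical weights for a toy credit decision; in a real XAI system
# the model would be learned and the explanation machinery far richer.
WEIGHTS = {"income": 0.4, "repayment_history": 0.5, "account_age": 0.1}
THRESHOLD = 0.6

def decide_with_explanation(applicant):
    """Return a decision plus the contribution each feature made to it."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # Rank features by how strongly they drove the outcome, so a human
    # reviewer can see why the decision went the way it did.
    explanation = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return decision, explanation

decision, explanation = decide_with_explanation(
    {"income": 0.8, "repayment_history": 0.7, "account_age": 0.3}
)
print(decision)     # approve
print(explanation)  # repayment_history contributed most, then income
```

An explanation this simple falls out of a linear scoring rule for free; the hard research problem XAI tackles is extracting comparably honest accounts from complex models such as deep neural networks.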

There have even been calls to institute a Hippocratic Oath for AI developers. This would have the advantage of going straight to the source of potential issues – the people who write the code – rather than relying on the resources, skills and time of external enforcers. An oath might also help to concentrate the minds of the programming community as a whole on getting to grips with the above dilemmas. Inspiration can be taken from the way the IEEE, a technical professional association in the US, has begun drafting a framework for the ‘ethically aligned design’ of AI.

Oaths. Explainable AI. Software deposits. It’s still early days in the quest to develop safeguards for artificial intelligence. But it’s important that we begin experimenting with different protections and institutions, and that we arrive at a package of measures sooner rather than later. Leave it too long and we may find the technology running away from us, with knock-on effects for everyone – users, developers and tech companies alike.
