Behavioural science – or ‘nudging’ – is increasingly being used by policy makers in the public sector. Dr Helena Rubinstein FRSA explores the ethical issues this raises and suggests some guidelines for businesses deploying similar approaches.
The discipline now called ‘behavioural science’ has emerged from many others, including psychology, anthropology, sociology and economics, and came to the fore following the publication in 2008 of Nudge, written by Richard Thaler and Cass Sunstein. They describe ‘nudging’ as an approach to changing behaviour based on altering ‘the choice architecture’: changing the way choices are presented so that behaviour shifts in predictable ways, without forbidding any options or significantly changing people’s economic incentives.
Thaler and Sunstein’s ideas built on the work of the psychologists Daniel Kahneman and Amos Tversky, who were curious about why people often make irrational economic choices. According to classical economics, people act rationally, as if they had complete foresight and information. Kahneman and Tversky demonstrated that humans do not analyse every decision and that many of the choices we make are suboptimal. We use mental shortcuts, or ‘heuristics’, to make decisions easier. These shortcuts produce systematic cognitive biases that serve us well for some decisions but, for others, may lead to outcomes that are not as good as we might have hoped.
In 2010, the Behavioural Insights Team (BIT) was set up in the UK: initially residing within a government department, it was created with the explicit brief of making policy simpler, more effective and easier to implement. In 2014, a similar role was adopted by the Social and Behavioral Sciences Team (SBST) in the US. By changing the way choices are presented to people, policy makers can take advantage of inherent biases, making people more likely to do what is right for them and best for the government. This raises a deeper question (for another time) about whether governments are always trying to achieve positive ends. That aside, there is a consensus that nudging can be a more acceptable, and relatively inexpensive, way of getting people to do things you want them to do, for their own good.
The 2015–2016 BIT Annual Report describes an intervention to reduce the prescription of antibiotics. In a randomised controlled trial, 800 GPs were sent a letter from the Chief Medical Officer stating that “the great majority (80%) of practices in your area prescribe fewer antibiotics per head than yours”. The letter also offered three simple, actionable alternatives to immediate prescription (such as delayed prescription, in which the patient picks up the prescription at a later date if it is still needed). These GPs were compared with 800 others who did not receive the letter. Over six months, those who received the letter reduced their antibiotic-prescribing rates by over 3% more than those who did not, resulting in 73,406 fewer antibiotic prescriptions.
This example involves a relatively simple intervention to encourage a change in a specific behaviour. In more complex cases, we can draw on evidence-based theories and models of behaviour. We all carry models in our heads that shape the way we behave, and we may hold different models about the same things. For example, we have a model of what we should do when going to see the doctor. Some people view their doctor as an expert and expect to be given direction about treatment without being overloaded with information or needing to ask questions. Others see their GP as an advisor, expecting to be informed of options and to be active in making decisions. Attempts to change behaviour often involve a dialogue between people with different mental models, and almost always include different perspectives.
Having some theoretical underpinning before deciding on an intervention increases the likelihood that we will focus on the aspects that actually influence behaviour. Theories are useful because they explain why, when and how a behaviour does or does not occur, and they identify the important sources of influence that can alter it. They help us to generate testable hypotheses and to make predictions about what might happen in the future.
Whilst behavioural science can be used to change people’s behaviour for their benefit, its use does raise ethical questions. There has been a lively debate about the ethics of using ‘nudges’ in the public sector that is also relevant to the private sector. Of course, to some extent – whether knowingly or not – businesses have long deployed nudging, whether in advertising or in the layout of supermarkets. The issue arises anew because nudging is now increasingly being used explicitly for commercial ends.
Critics of nudging say that it is deceitful and manipulative: it relies on exploiting human biases by manipulating the environment and taking advantage of patterns of behaviour without consent. When we manipulate a person’s choice, even for their own good, we treat them like a child and reduce their autonomy by interfering with their freedom to decide. Furthermore, nudges are not transparent.
A potentially bigger problem is the source of the nudge: the people doing the nudging may get it wrong. Nudgers have biases and make mistakes. They may want to push ahead with a programme because they feel it is for people’s good, yet ignore resistance or be too quick to portray people’s preferences as irrational when those preferences rest on reasonable grounds.
Proponents of nudging argue that, by definition, a nudge alters the choice architecture without coercing people, so choice is maintained. Moreover, many nudges are intended to improve people’s welfare, and in such cases it would be ethically wrong not to nudge. Nudges can promote autonomy by making decisions easier, freeing people to focus on more important concerns. Additionally, nudges do not have to be covert or manipulative: they can be transparent and remain effective when the reasons behind them are explained. Although nudgers may be biased, proponents argue that provided they act with the best intentions, deploy the available evidence and have people’s consent to being nudged, the method is fair, even though there is a risk of error.
Practitioners of nudging in the private sector can learn from this debate. Persuasion using behavioural science does not have to be deceitful or manipulative. Nudges can help people make better decisions by making the choice more intuitive, and science and evidence can be used to design products and services that satisfy consumer needs.
Consumers have a different relationship with private companies than they have with politicians and policy makers: they expect that companies will want to make a profit and there is an implicit contract between the consumer and the brand owner. Frequent exposure to marketing and advertising creates ‘savvy consumers’ who know how to decode the messages. People expect a level of ‘game playing’ from companies that they would not tolerate from a ‘nanny-state’ government.
But if businesses can use behavioural science for good, what can be done to encourage them to use it ethically? I propose five tentative guidelines to avoid the misuse of behavioural science in business.
First, behavioural interventions built on untruths are unacceptable.
Second, nudges that make it difficult for people to choose otherwise are unethical: people must have the freedom to choose differently.
Third, behavioural interventions should be scrutinised for unintended, as well as intended, consequences.
Fourth, consent should not be hidden: interventions should be transparent wherever possible.
Fifth, practitioners should be comfortable to defend their approach, methods and motives in public.