Autonomy and personalisation

  • Ana Chubinidze FRSA
  • Behaviour change
  • Digital
  • Social justice
  • Technology

Personalised online content – whether recommended products or services, search engine results, news or advertisements – is becoming increasingly prevalent. While this suggests benefits for both individuals and businesses, Ana Chubinidze FRSA asks whether these come at a cost, particularly for consumers.

If we agree that ‘autonomous’ means self-governing – the ability to be one’s own person and to live according to one’s own reasoning and motives, free from external manipulative or distorting forces – and that this is something we strive for, we need to question whether modern business-to-consumer interactions support this aim.

Marketing tricks have long been present in commercial practice; it has always mattered how products are displayed, physically or virtually, and how they are advertised and represented. However, big data and AI are disrupting the ways in which this is done and will continue to do so. Businesses have not only developed the capacity to determine consumers’ specific wills; they have also acquired the capabilities and resources to shape them. Depending on the degree of personalisation a company implements, it becomes possible to match the right consumer to the ‘right’ outcome.

Any information – interests, hobbies, behaviours, habits, goals, intentions, demographic features, education, skills and employment – can become valuable input in the hands of a firm trying to tailor information to the consumer. This is a complex process, and some machine learning algorithms function as black boxes, such that not even their creators know on what basis a decision was made. Recently, some organisations, IBM among them, have begun trying to increase the transparency of such algorithms.
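
To make the ‘black box’ point concrete, here is a minimal, hypothetical Python sketch (not any particular firm’s system; all features and labels are invented). An ensemble model reaches a confident decision about a consumer, yet exposes no human-readable rule explaining that individual decision; toolkits such as IBM’s AI Explainability 360 aim to bolt post-hoc explanations onto exactly this kind of model.

```python
# A minimal, hypothetical sketch of the "black box" problem: the model below
# reaches confident decisions about consumers, but its hundreds of trees
# expose no single human-readable rule explaining any one of them.
# All features and labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented consumer features: e.g. age band, pages viewed, past spend, dwell time
X = rng.random((500, 4))
# Invented label: did the consumer respond to the personalised offer?
y = (X[:, 1] + 0.5 * X[:, 3] > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

new_consumer = rng.random((1, 4))
print(model.predict(new_consumer))   # a decision: target this consumer or not
# Natively, the model offers only a rough global ranking of features,
# not the basis of this individual decision:
print(model.feature_importances_)
```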

Autonomy is often equated with the ability to make choices and pursue one’s interests (without limiting others’ ability to do the same), and computer science recognises this connection between autonomy and choice. However, whether we adopt a legal, philosophical or practical approach, we must assume that any reduction or manipulation of choice is negatively related to personal autonomy.

While limits on time and cognition make some filtering of content reasonable for consumers, the major concern lies in how, and by whom, content is judged ‘relevant’, ‘optimal’ or ‘best’. Online businesses personalise content primarily to increase their bottom line rather than to facilitate consumer decisions, even when their aim is not exploitative. Our autonomy can be assumed to have diminished when our actions are guided by the technology. Even where systems do not explicitly diminish our autonomy, they create environments in which it becomes harder to change course of action, even when one has reason to do so.
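
As a hedged illustration of that incentive, consider a toy Python sketch of a personalised feed (all item names, click probabilities and revenue figures are invented) that ranks content by expected revenue for the platform rather than by expected value to the consumer.

```python
# Hypothetical sketch of a personalised feed ranked for the platform's
# objective. Every figure below is invented for illustration.
items = [
    {"title": "Long-read the user asked for", "p_click": 0.10, "revenue_per_click": 0.05},
    {"title": "Sponsored gadget ad",          "p_click": 0.20, "revenue_per_click": 0.50},
    {"title": "Friend's update",              "p_click": 0.30, "revenue_per_click": 0.00},
]

def business_score(item):
    # Expected revenue if the item is shown: the platform's bottom line,
    # not a measure of the consumer's benefit.
    return item["p_click"] * item["revenue_per_click"]

for item in sorted(items, key=business_score, reverse=True):
    print(item["title"])
# The sponsored ad ranks first, although the consumer is most likely
# to want the friend's update.
```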

In our fascination with big data, we should not forget these problems. The quality of the data is often questionable, and predictive analytics is based on correlations, not causal relationships. This means personalisation rests merely on the statistical likelihood of two events occurring together, not on actual consumer preferences. There is no guarantee that the algorithms are good enough to correctly determine consumer preferences and display the content relevant to them. And because algorithms are not yet able to establish causal relationships, they cannot know the long-term interests of the consumer and may generate flawed assumptions.
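
A toy co-occurrence recommender makes the correlation point tangible. In this hypothetical Python sketch (the baskets are invented), items are recommended purely because they are frequently bought together; nothing in the system models why, or whether the consumer actually prefers them.

```python
# A toy co-occurrence recommender: it 'personalises' purely from how often
# items appear together - correlation, not preference or causation.
# The baskets below are invented for illustration.
from collections import Counter
from itertools import combinations

baskets = [
    {"nappies", "beer"},
    {"nappies", "beer", "wipes"},
    {"nappies", "wipes"},
    {"beer", "crisps"},
]

# Count how often each pair of items is bought together
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item):
    # Rank partner items by raw co-occurrence frequency alone
    partners = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            partners[b] += n
        elif b == item:
            partners[a] += n
    return [p for p, _ in partners.most_common()]

print(recommend("nappies"))  # frequently co-bought, not necessarily preferred
```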

But even if machines can become that profound, is this what we really want? Do we want machines that ‘know’ us, and not the other way round? This might help our autonomy when we seek to explore or reveal various aspects of our preferences and choices. But what happens if the algorithms’ ‘perceptions’ do not match our own? This matters especially as machines are designed to learn and adapt to their environments, which removes the human from the loop and can make processes irreversible. Would we then be able, in the words of the philosopher Harry Frankfurt, to want what we want to want?

While users might be willing to delegate control to systems for efficiency, in specific contexts or for specific products, at some point they may wish to take that control back. If users cannot take back control, it is no longer delegation but attribution or transference. Therefore – as self-determination of choices is a higher-order cognitive function, one that allows humans to modify their behaviour over time and is positively related to motivation, responsibility and a sense of self – businesses should be wary of putting it at stake.

In this way, even the best of intentions might lead to unforeseen harmful consequences. The Facebook ‘like’ button, created in 2007 and launched in 2009, was meant to make the social media experience easier and better. A decade on, even its creators, Justin Rosenstein and Leah Pearlman, are using various apps to try to escape it. As James Williams of the Oxford Internet Institute argues, Facebook changed the paradigms of social interaction: “Because when most people in society use your product, you aren’t just designing users, you’re designing society.” Do businesses want to build societies deprived of autonomy, of self-regulation capacities and of the benefits of self-exploration?

It is already obvious that increasing consumers’ technical literacy alone is not enough. Nor is restricting the use of technologies or increasing transparency: no one reads privacy policies or AI strategies, and no one has the time to work out how machines make their decisions. That said, Facebook’s ‘Why am I seeing this?’ feature might be counted as good practice.

When we speak of transparency in AI, it is more important to focus within the business itself than on the relationship between business and consumer. This would reassure employees that their jobs will not be taken over by machines, and help them understand what value AI can create and how they can engage with it. Enterprises that recognise the high importance of autonomy need to regulate themselves, work out ethical codes that set responsibility standards globally, and strengthen their risk assessment and foresight capabilities.

It would be more reasonable to educate the coders of algorithms about ethical and societal issues than to educate customers about algorithms. But even this is not enough. Policymakers – both national and global – need to play a decisive role. Just as there are philosophical, psychological, moral and legal approaches to autonomy, we need to establish paradigms of autonomy in relation to digital disruption, including how technologies can benefit or harm autonomy itself, and provide strong regulation that goes beyond privacy policy.

Dignity (which is closely connected with autonomy), privacy, and freedom of opinion and expression, as enshrined in the Universal Declaration of Human Rights, must be reaffirmed in the age of technology. A whole chain of actions and actors is needed to take hold of these processes while it is still possible.


Comments

  • An excellent, well written article, well ahead of its time. Has a leaning toward the negative, rather than people's expectations of being catered to and somewhat mollycoddled, but nevertheless easily worthy of merit.