Every week my podcast must-listens are Thinking Allowed (sociology and family loyalty) and David Runciman’s Talking Politics. Last week David had a fascinating conversation with James Williams, former RSA speaker, winner of the Nine Dots Prize and author of the soon-to-be-published book Stand Out of Our Light: Freedom and Persuasion in the Attention Economy.
You should listen to the whole conversation but, for me, what stood out was his insistence that we misunderstand the big technology platforms – principally Google and Facebook – if we think their business model is about selling our data. Instead, Williams convincingly argues, they are in essence advertising agencies. It is not your data they are selling but your attention.
In a content-saturated world of over-stimulated, attention-deficit-suffering citizens, the platforms’ business model brilliantly combines three elements. First, the history of our on-line activity; by now they know more about us than we know about ourselves. Second, powerful algorithms constantly testing and refining the messages they transmit; by now they know more about what works on us than we do. Third, the continuous, innovative application of behavioural science, a science built on the evidence that we are much more motivated by our inherent predispositions and unconscious drives than our deluded consciousness likes to let on. In other words, these platforms are all about trying to get us to do stuff – generally, buy stuff – by communicating with us in ways we find hard to resist.
So, what’s new? The modern world is built on persuasion for commercial ends. Dan Pink argued in a book a few years ago that roughly 40 percent of working time across the entire economy consists of people selling to other people. Although all this no doubt contributes to economic growth, which – environmental sustainability notwithstanding – is a very good thing, it is hard to believe that it isn’t also socially corrosive. After all, at its core, this activity generally involves Person A trying to convince Person B to do something in Person A’s interests through the mechanism of convincing Person B it is also in their interests, largely regardless of whether it actually is.
One reason, I suspect, that we tolerate the ubiquity of advertising is that we think we determine whether it works. In this we can be comforted by knowing a lot of selling clearly fails. John Wanamaker, who opened the first department store in the US, is credited with one of the most famous sayings in marketing: “Half the money I spend on advertising is wasted; the trouble is I don't know which half”.
However, we don’t simply apply the rule of caveat emptor (buyer beware) to advertising. There is a regulator, the Advertising Standards Authority, with a set of detailed codes based on the overarching principle that ‘advertisements should not mislead or cause serious or widespread offence or harm, especially to children or the vulnerable’. In line with the precautionary principle, more detailed and prescriptive rules are applied when the risks to people are greater (e.g. advertising of alcohol, gambling) and/or when there is a major information imbalance (e.g. pharmaceuticals, financial products). Significantly, financial regulators are willing to intervene when the evidence shows the public systematically misunderstands a product even if the facts used in the marketing of that product are correct.
Indeed, history shows the public doesn’t like the idea of advertising with the capacity to circumvent our conscious influence. In the 1950s and 60s several claims were made for the power of subliminal advertising, where a single frame showing a product is inserted in a film so it is subconsciously, but not consciously, registered by the viewer. Even though those claims were exaggerated, they led the UK and many other countries to ban such advertising. Public concern about the ability of advertisers to get directly into our heads helped make Vance Packard’s book The Hidden Persuaders (also containing quite a lot of exaggeration) a publishing sensation in the late 1950s.
I’m guessing you can see where I am going with this. Earlier I focused on the big tech platforms but it is important to remember that in the modern economy (despite GDPR) almost every big organisation, and certainly every big company, will both own a lot of data and use AI techniques (or ‘advanced machine learning’ if you prefer) that allow communications to be continually tested and refined.
If you accept the premise that the interests of people selling us stuff are not always the same as our own interests this is obviously worrying. If in the past, and in various areas like financial services, we have drawn the line at marketing which seeks to influence us in a way we find hard to understand and resist, what does this mean when technology has the capacity to put all on-line marketing on steroids?
You might be questioning whether it is fair to suggest that a great deal of selling involves deliberate manipulation. Here’s some food for thought:
- Most attempts to tempt you with foods (apart from unprocessed fruit and veg) that are ‘healthy’, ‘natural’, ‘hand-made’, ‘traditional’ etc. are somewhere on the spectrum between very contrived and entirely bogus.
- Most ‘special offers’ – especially on broadband, phone contracts, mortgages, credit – are either offering you a bad deal or one that is so marginally better than the one you’ve already got that it is hardly worth clicking your mouse let alone dealing with the hassle often involved in switching.
- All those grisly gambling ads that pollute sports on TV should be seen as ways of distracting you from the basic fact that gamblers tend to get hooked and invariably end up losing.
- Lots of products are designed to use the power of inertia to keep you paying for something without noticing or without being aware of how much it is now costing you (online services tend to make it deliberately hard to cancel subscriptions).
And – because you’re worth it – I’m not even going to get into the wonderful made-up science of beauty products.
Of course, there is another crucial ingredient. Unlike analogue advertising, new marketing can be fully personalised, and that includes the price. This means not only that it is more powerful but also that it is much harder to regulate. We can all see a dodgy poster or TV ad and, if we feel strongly, report it to the regulator, but unless I am looking over your shoulder or have access to your internet history, I have no idea how advertisers might be combining messages directed at you in a way which works uniquely well on you.
The bad news is that if we do nothing about any of this the choices for most of us will be to opt out of the on-line world (which now means losing basic entitlements), to spend a great deal of time trying to ward off on-line marketing (don’t forget those platforms are very clever at persuading us not to do that), or simply to allow ourselves to be ever more effectively manipulated.
But progress is a winding road; sometimes being taken in the wrong direction reminds us to think more deeply about our destination. If a company is trying to sell me something in a way that I can understand, that is hit and miss and that treats me like everyone else, perhaps I don’t care too much about its motives. But if the company is communicating with me in ways I don’t understand, which are very powerful and which are directed straight at my unique and susceptible mind, I am likely to care a great deal more.
Could it be that AI based marketing provides a powerful new basis for demanding much higher levels of corporate transparency and responsibility? In this new world could it come to feel unacceptable for a company’s business model to be based on trying to persuade people to make decisions that are not in their interest or that they would probably not make if they were better informed? Given the very nature of consumer capitalism and that, as I write, there are thousands of entrepreneurs around the world trying to raise money for business ideas based precisely on AI driven manipulation, this could be a revolutionary thought.
As the RSA’s work on the topic shows, AI raises many ethical questions and has big social implications. Could the biggest of all be that we will no longer tolerate some of the unprincipled principles of consumer capitalism?