Will artificial intelligence ever understand human taste?


With regret I’ve come to realise I’m neither interesting nor entertaining enough to be picked as a mainstream media presenter.

Given that I enjoy broadcasting, my response has been to try to develop new formats. After all, if I help design the show it seems only fair that I present it. Sometimes these programmes have succeeded but as with most new formats the risks of falling flat are high. It was my fascination with failure that kept me watching a recent BBC documentary, but the show was significant for a more important reason. 

The programme in question was ‘The Secret Science of Pop’  (first shown in February but repeated last week) and it was presented by the very engaging Professor Armand Leroi from Imperial College. The concept behind the show was compelling. Professor Leroi and a group of his brightest students were going to use big data analysis and machine learning not only to discover the secret of pop success but to help an unsigned artist produce a guaranteed hit. To my growing viewing delight, the project was an abject failure.

The research team analysed all 17,000 songs that have entered the charts in the last sixty years, deriving over a million bits of data from each track. Applying machine learning to this data, they set out to define the recipe for a hit song. Sadly, no such recipe exists, or, if it does, it is not in a form that can be revealed by data analysis and machine learning. Despite trying out innumerable variables, the team came to only one deeply underwhelming conclusion: at the time they are released, chart toppers are slightly – at a very low level of statistical significance – more typical of the tracks in the rest of the chart than are other less successful songs.
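For the curious, the ‘typicality’ measure behind that finding is simple enough to sketch. The snippet below is my own illustration, written against made-up data rather than the programme’s actual features or code: it scores each track by how close it sits to the average feature vector of its era and then compares chart toppers with the rest.

import numpy as np

# Illustrative sketch only (not the programme's method): how 'typical'
# each track is of its era, the statistic behind the show's one finding.
rng = np.random.default_rng(1)

# Synthetic stand-in: 1,000 charting tracks from one era, each described
# by 50 audio-derived features, plus a flag for reaching number one.
n_tracks, n_features = 1_000, 50
features = rng.normal(size=(n_tracks, n_features))
reached_number_one = rng.random(n_tracks) < 0.05

# 'Typicality' here means closeness to the era's average track:
# a smaller distance from the mean feature vector = a more typical song.
era_mean = features.mean(axis=0)
distance_from_mean = np.linalg.norm(features - era_mean, axis=1)

hits = distance_from_mean[reached_number_one]
rest = distance_from_mean[~reached_number_one]
print(f"Mean distance from era average, chart toppers: {hits.mean():.3f}")
print(f"Mean distance from era average, other tracks:  {rest.mean():.3f}")

# On purely random data the two averages are essentially identical; the
# programme's claim was that real chart toppers sit only marginally closer.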

‘Armed’ (which feels like a rather strong word) with this ‘insight’ (that too), the team worked with the veteran pop producer Trevor Horn to help their unsigned artist, Nike Jemiyo, turn her ballad into a sure-fire hit. The song was slow and poignant. So was the programme. Each attempt to produce a sure-fire hit sounded more awful and, worst of all (by this time I was starting to cover my eyes), they even failed to achieve the basic task of making it more statistically average.

Finally, with the programme’s core conceit collapsing all around the brave Professor, the show tried to engage the viewer with a visualisation of the history of pop. Not only did the waves of colour-coded data points fail to communicate much in the way of useful information but – and here the words ‘hole’, ‘stop’ and ‘digging’ sprang to mind – the game Professor, who had told us winningly at the outset that he knew nothing about pop, confirmed this in spades by claiming the data showed: (a) the Beatles were irrelevant because their music was average for its era; while (b) punk was also unimportant although – please don’t ask me why – for the reverse reason.

The great reassurance I got from ‘The Secret Science of Pop’ is that it is possible for a programme concept to collapse like a gazebo in a gale and yet be watchable. I can see myself sharing this comfort with producers for as long as commissioners are foolhardy enough to put a microphone in front of me.

The programme ended with Professor Leroi recognising that, while the experiment hadn’t been entirely successful, it was only a matter of time until data and algorithms solved the puzzle of pop success. Surely the bigger lesson of the programme is that he may be wrong.

As Gavin Kelly has eloquently argued, the constant stream of breathless and weakly substantiated predictions about the disruptive impact of AI and robotics distracts us from the important issues facing the world of work today. These flaky futurists often lack realism and nuance about what can be automated. The lesson of Professor Leroi’s travails is not just that pop success is more difficult to analyse than we thought but that, when it comes to culture – the evolving collective expression of appreciation and emotion – the point at which we can reduce it to zeros and ones is not just far away but over the horizon.

In our own work on automation the RSA is focussing on a granular account of how technology may impact sectors, jobs and tasks, informed by engagement with employers and employees as well as technologists and entrepreneurs. One headline of our soon-to-be-published report – based on a major survey of business take-up of AI and robotics – is that we are currently too alarmist in thinking about technology but too timid in actually taking it up.

The case for getting behind the headlines to look at what big data and machine learning can actually do was underlined by the efforts of the Professor and his diligent but frustrated team. The show also reminded us of how hard it is to code for human taste. After all, what algorithm would predict that a programme which raised the bar only to knock it over again and again could be so damn entertaining?  


2 Comments


  • I was an early researcher into AI and Expert Systems (early, but a good 10 years after Minsky et al.), and we had the arrogance to think that, with better technology and a much greater understanding of how the brain worked, we could build AI machines that were viable as long as they were confined to a specific domain. We set about building a programming teaching machine, in conjunction with the Open University, and a gourmet cheese advisor, in conjunction with a major processed food manufacturer, on the basis that these were constrained domains well understood by experts.


    These too were an abject failure. 


    They failed because they were based on knowledge that was already known, but hidden, and we expected new insights to emerge from examining and exposing this body of data. But insights come not only from the analysis of, to use the expression du jour, Big Data (which is essentially data we already hold but have yet to make sense of), but also from the total environment that surrounds it.


    I have interviewed many candidates in my career but I still don’t know what makes a good fit. I do know that I make my mind up within minutes of meeting the candidate and not after a detailed series of tests. And my judgement has been close to 100% ever since I realised that testing, i.e. analysing the data, was just used to confirm my first impressions.

    So what algorithm, indeed, is able to replicate the human attributes of taste, fit, creativity and all the other qualities that make someone who they are?

    • An algorithm can’t do these things, yet, but then again we don’t know the full range of variables that go into such decisions – neuroscience being as limited currently as artificial intelligence. Assuming you’re a materialist, there’s no ‘magic tissue’ in the neural interactions making conscious and subconscious choices. A neuroscientist can already use fMRI to tell how you’re about to move your hand seconds before you’re aware of making the decision – the jump between that and understanding complex, social and subjective decision-making neural processes seems only to be one of scale and complexity. It follows that this can then be used algorithmically for machine interaction and to predict even subjective decisions and reactions.
