
What is artificial intelligence anyway?


  • Benedict Dellot
    Former Head of the RSA Future Work Centre and Associate Director
  • Economics and Finance
  • Employment
  • Enterprise

Artificial intelligence is once again in the media spotlight. But what is it exactly? And how does it relate to developments in machine learning and deep learning? Below we spell out the various interpretations of AI and look back on how the technology has developed over the years.

The semantic quicksand

“The fundamental challenge is that, alongside its great benefits, every technological revolution mercilessly destroys jobs and livelihoods – and therefore identities – well before the new ones emerge.”

So said Mark Carney in a widely reported speech last week, as he referred to the potential impact of an oncoming wave of artificial intelligence.

We’ll pick up on his predictions another time, but for now it’s worth asking what we mean exactly by AI. How does it relate to machine learning and deep learning? And what separates innovations like chatbots from self-service checkouts, self-driving cars from search engines, and factory robots from automated teller machines?

For all the hype and postulating, there is surprisingly little discussion about the technology itself and how it came to be. Try Googling ‘what is artificial intelligence?’ and you’ll find very little in the way of solid definitions.

This is not a dull point about semantics. If we don’t know what the technology is and how it is manifested, how do we expect to judge its potential effects? And how will we know which industries and occupations are most likely to be transformed?

AI and its different guises

From my own reading of the limited literature, I’d say the following:

First, that artificial intelligence can be broadly defined as technology which replicates human behaviours and abilities conventionally seen as ‘intelligent’.

While many people focus on ‘general AI’ – machines that have equal if not greater intelligence than humans, and which can perform any variety of tasks – very little progress has been made in this domain. Aside from a handful of the most ardent optimists, there is consensus that AI systems which can talk like us and walk like us, and which can essentially pass for humans, are decades from realisation. HAL 9000 and C-3PO remain the stuff of science fiction. 

In contrast, there have been significant and meaningful developments in ‘narrow AI’. These are machines that perform a specific function within strict parameters. Think of image recognition, information retrieval, language translation, reasoning based on logic or evidence, and planning and navigation. All are technologies that underpin services like route mappers, translation software and search engines.

Kris Hammond from Narrative Science usefully groups these tasks into three categories of intelligence: sensing, reasoning and communicating. Explained in his words, cognition essentially breaks down into “taking stuff in, thinking about it, and then telling someone what you have concluded”.

Mobile assistants like Apple’s Siri and Google Now make use of all three of these layers. They begin by using speech recognition to capture what people are asking (‘sensing’), then use natural language processing (NLP) to make sense of what the string of words means and to pull out an answer (‘reasoning’), and finally deploy natural language generation (NLG) to convey the answer (‘communicating’). It works whether you’re asking for the weather or directions to the nearest coffee shop.
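
To make that pipeline concrete, here is a minimal sketch in Python of the sense, reason, communicate flow. The function names (recognise_speech, parse_intent, generate_reply) and the canned transcript are hypothetical stand-ins for illustration, not any vendor's actual API.

```python
# A toy sense -> reason -> communicate pipeline. All function names and the
# stubbed transcript are hypothetical; a real assistant hands each stage off
# to speech-recognition, NLP and NLG services.

def recognise_speech(audio: bytes) -> str:
    """'Sensing': turn raw audio into a text transcript (stubbed here)."""
    return "what's the weather like today"

def parse_intent(transcript: str) -> dict:
    """'Reasoning': use simple keyword matching in place of real NLP."""
    if "weather" in transcript.lower():
        return {"intent": "weather"}
    return {"intent": "unknown"}

def generate_reply(intent: dict) -> str:
    """'Communicating': turn the conclusion back into natural language (NLG)."""
    if intent["intent"] == "weather":
        return "Here is today's forecast for your area."
    return "Sorry, I didn't catch that."

print(generate_reply(parse_intent(recognise_speech(b""))))
```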

When it comes to robotics – which can be thought of as physical machines imbued with AI capabilities – we should also add a fourth category of movement.

A self-driving car, for example, will sense its environment using a variety of detectors (e.g. spotting a pedestrian walking across the road), deploy reason to decide whether there are any risks (e.g. of hitting the pedestrian), and then implement a necessary movement (e.g. slowing down or altering direction). The same process plays out in other advanced robots, including those found on the factory floors of manufacturers or the wards of hospitals and care homes (see, for example, Asimo).
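
As a rough illustration of that sense, reason, act cycle, the sketch below runs one pass of a hypothetical control loop. The sensor readings and the braking threshold are invented for the example, not drawn from any real vehicle.

```python
# A toy sense -> reason -> act loop with made-up sensor readings.

def sense(environment: dict) -> dict:
    """Sensing: read the detectors (camera, radar, etc.)."""
    return {"pedestrian_ahead": environment.get("pedestrian_ahead", False),
            "distance_m": environment.get("distance_m", 100.0)}

def reason(observation: dict) -> str:
    """Reasoning: decide whether the observation poses a risk."""
    if observation["pedestrian_ahead"] and observation["distance_m"] < 30:
        return "brake"
    return "continue"

def act(decision: str) -> None:
    """Movement: carry out the chosen action."""
    print(f"Action taken: {decision}")

# One pass through the loop: a pedestrian detected 20 metres ahead.
act(reason(sense({"pedestrian_ahead": True, "distance_m": 20.0})))
```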

Getting off to a slow start

How did we get to this point?

After all, artificial intelligence as a field of research has been around for decades. Interest in AI stretches back to the 1950s, a period when Alan Turing devised the influential Turing Test to determine whether or not a machine could ‘think’. The Dartmouth College conference of 1956 is often cited as the landmark moment when computer scientists came together to pursue AI as a research field in its own right, powered by leading thinkers like Marvin Minsky.

Despite early enthusiasm and significant funding, however, initial progress in artificial intelligence was excruciatingly slow. DARPA, which had pumped millions of dollars into university departments during the 1960s, became particularly frustrated at the lack of headway in machine translation, which it had pinned its hopes on for counter-espionage. Closer to home, the UK’s 1973 Lighthill report raised serious doubts that AI was going to evolve at anything but an incremental pace.

The result was a radical cut in government funding and several prolonged periods of investor disillusion – what became known as the ‘AI Winters’ of the 70s and 80s. The circumstances were not helped by wildly optimistic early predictions, such as Minsky’s claim in 1970 that “[within] three to eight years we will have a machine with the general intelligence of an average human being”.

From the AI Winter to the AI Spring

One of the biggest blocks to progress was the issue of ‘common sense knowledge’. Attempts to create intelligent machines were stymied by the huge expanse of possible inputs and outputs that are associated with a given task, which could not all be anticipated and programmed into a system without a mammoth exercise lasting many years (although the researcher Douglas Lenat has attempted this with a project named Cyc).

Think of language translation, where the hidden meaning of phrases can be lost if words are converted literally from one language to the other. Unanticipated idioms would regularly throw systems out of kilter. Or consider image recognition, where a mannequin or puppet might be mistaken for a person, even though the difference is obvious to a human observer. The ‘combinatorial explosion’ of possibilities in the messy real world was too much for the computers of the day.

Things began to change, however, in the late 1990s and early 2000s. Increased computing and storage power meant AI systems could finally process and hold significant amounts of information. And thanks to the spread of personal computing and the advent of the internet, such valuable data was becoming ever more available – whether in the form of images, text, maps or transaction information. Crucially, this data could be used to help ‘train’ AI systems using machine learning methods.

Prior to this new approach of machine learning, many AI applications were underpinned by ‘expert systems’, which meant painstakingly developing a series of if-then rules and procedures that would guide basic decision-making (picture a decision tree or web). These were useful when dealing with a contained task – say, processing cash withdrawals under the bonnet of an ATM – but were not made to handle novel or unanticipated inputs where there could be millions of potential outcomes.
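
The sketch below gives a flavour of that rule-based style: a handful of hand-written if-then checks for a cash withdrawal. The rules and the limit figure are invented for illustration, not any real ATM's logic.

```python
# A toy 'expert system' style rule set: every case must be anticipated
# and written down by hand.

def approve_withdrawal(balance: float, amount: float, daily_limit: float = 300.0) -> str:
    if amount <= 0:
        return "rejected: invalid amount"
    if amount > daily_limit:
        return "rejected: over daily limit"
    if amount > balance:
        return "rejected: insufficient funds"
    return "approved"

print(approve_withdrawal(balance=250.0, amount=100.0))  # approved
print(approve_withdrawal(balance=250.0, amount=400.0))  # rejected: over daily limit
```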

What makes machine learning so transformative is that it works backwards from existing real-world examples. So instead of writing thousands of lines of code, machines are fed huge datasets which are then analysed for common patterns, creating a generalised rule that can be used to make sense of future inputs. With image recognition, for example, machine learning algorithms are fed a large number of pictures, each labelled in advance (e.g. ‘mountain’ or ‘house’), and these are then used to create a general rule for interpreting future photos.
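
Here is a minimal sketch of that ‘learn the rule from examples’ idea, using the widely used scikit-learn library (assumed to be installed). The tiny two-number ‘features’ are invented stand-ins for real image data.

```python
# Learning a rule from labelled examples rather than writing it by hand.
from sklearn.neighbors import KNeighborsClassifier

# Each example: two invented numerical features describing a picture.
features = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.8], [0.1, 0.9]]
labels = ["mountain", "mountain", "house", "house"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(features, labels)            # generalise a rule from the examples

print(model.predict([[0.85, 0.25]]))   # -> ['mountain'] for an unseen picture
```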

The applications of machine learning are almost limitless – from aiding the detection of cancers and radically improving language translation, through to spotting fraudulent behaviours in financial markets and assisting businesses as they recruit workers.

Deep learning as the next frontier

Machine learning is the main reason for the renewed interest in artificial intelligence, but deep learning is where the most exciting innovations are happening today. Considered by some to be a subfield of machine learning, this new approach to AI is informed by neurological insights about how the human brain functions and the way that neurons connect with one another.

Deep learning systems are formed of artificial neural networks that exist on multiple layers (hence the word ‘deep’), with each layer given the task of making sense of a different pattern in images, sounds or texts. The first layer may detect rudimentary patterns, for example the outline of an object, whereas the next layer may identify a band of colours. And the process is repeated across all the layers and across all the data until the system can cluster the various patterns to create distinct categories of, say, objects or words.
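
The toy sketch below shows the layered shape of that computation in Python with NumPy: each layer transforms the previous layer's output and passes it on. The weights here are random, so it illustrates the structure only, not a trained network.

```python
# A toy multi-layer ('deep') network: stacked layers, each building on the last.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_outputs):
    """One layer: a weighted combination of its inputs plus a non-linearity."""
    weights = rng.normal(size=(inputs.shape[0], n_outputs))  # random, untrained
    return np.maximum(0, inputs @ weights)                   # ReLU non-linearity

x = rng.normal(size=4)    # e.g. a handful of raw pixel values
h1 = layer(x, 8)          # first layer: rudimentary patterns (edges, outlines)
h2 = layer(h1, 8)         # next layer: combinations of those patterns
out = layer(h2, 3)        # final layer: scores for, say, three categories
print(out.shape)          # (3,)
```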

Deep learning is particularly impressive because, unlike the conventional machine learning approach, it can often proceed without humans ever having defined the categories in advance, whether they be objects, sounds or phrases. The distinction here is between supervised and unsupervised learning, and the latter is showing ever more impressive results. According to a King’s College London study, deep learning techniques more than doubled the accuracy of brain age assessments when using raw data from MRI scans.
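
To illustrate the unsupervised side of that distinction, the sketch below hands a small set of unlabelled points to k-means clustering (via scikit-learn, assumed installed) and lets the algorithm discover the groupings itself. The data points are made up for the example.

```python
# Unsupervised learning: no labels are provided; the algorithm finds the groups.
from sklearn.cluster import KMeans

# Unlabelled 2-D points forming two loose groups (we never say which is which).
points = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
          [0.9, 0.8], [0.85, 0.9], [0.95, 0.85]]

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(clusters)   # e.g. [0 0 0 1 1 1] -- categories discovered without labels
```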

Assuming these innovations continue to progress, the prospects for AI to influence our health and happiness, our economic security and our working lives, are truly mind-boggling. Whether or not we are prepared for this new machine age is another question – but we can only begin to answer it by knowing what it is exactly we’re all talking about.

If you have another interpretation of AI, or want to pick up on any of the domains and uses mentioned in this blog, please post a comment below. The more perspectives, the better.

 

Join the discussion

  • A good article, Benedict.  I wouldn't say that I have a different perspective on Weak (or Narrow) AI, just that I think people are not yet concerned enough about its potential to displace workers, much like what happened during the Industrial Revolution 200 years ago.  I wrote on the subject recently at https://rjbtechnology.com/blog/2017-03/ai-and-society/ .  In that article, I included some links to other reference material that may help people to grasp the potential scope of this displacement.  And it's not going to happen "in a few decades", it's basically upon us.  I bring all this up for two reasons: 1) regular folks need to tune into this jobs displacement situation, and 2) to a certain extent I'm finding that those who have a media voice today are more focused on "warning" us of the dangers of (Strong) AI taking over society.  While there are certainly some dangers in (2), the real and immediate threat is economic - much like the Industrial Revolution.

  • A possible reason for the slow progress in AI is that the computational complexity of general intelligence may be exponentially hard.  The field itself, and the evolution of natural I (including us, "NI"?) shows some evidence that this is so.  For more see here: https://adrianbowyer.blogspot.com/2017/01/hardclever.html

  • I'd advise against trying to nail down a definition of AI. There are some 70 collected definitions already. It's rather like trying to define 'beauty', and we all know what defining that is like!
    The problem is more that AI is essentially "task based", i.e. application centred (because it is commercial and has to be of some use) and therefore it's not very useful to compare with human intelligence which is of an essentially different kind.
    The pressing concerns, in my view, are how the very rapid progress can be reconciled with issues of understanding the built systems sufficiently to provide safety, validation and performance guarantees for the non-trivial, real-life applications being suggested.

  • Thank you Benedict for your precise insight on AI as it is increasingly playing a major part in our lives. This is great value. I would make two additional comments. First, let me add IBM Watson to the AI systems you mention. The Watson cognitive platform goes beyond what existing learning machines can do: not only can Watson understand, reason, learn, and interact with people, but it can do that in all industries, through the cloud. Just think how visual recognition in healthcare will help dermatologists detect skin cancer at an early stage and save lives, or in the automotive industry, how AI is improving the car of the future by creating more intuitive driver support systems. Cognitive computing is in fact changing the world, acting like a super trusted advisor by our side, which leads to my second reflection. Acting (almost) like humans, AI systems should behave as we would expect, integrating ethical values and generating trust. For that purpose, IBM, Google, Microsoft, Amazon and Facebook recently formed a Partnership on AI to guide the ethical development of artificial intelligence. I recommend a recent blog written on this topic called "What it will take for us to trust AI" https://medium.com/cognitivebusiness/what-it-will-take-for-us-to-trust-ai-bc3a91ad6e3a#.pb9lesx1y.

    • Deep Learning is layer upon layer upon layer of differential calculus formulae that are trying to converge in a cyclic fashion of sorts (and they sometimes converge on a false maximum or minimum). This is how recognition is done in Deep Learning; it's a maths formula that figures out if X = A then Y = B within lots of bounded calculations, which is why it can cope with more ambiguity than a deterministic formula. It lacks emotional intelligence. It is and can only ever be a purely "rational" form of intelligence, as it is 'calculative' in nature. It does bring many possibilities and is exciting, and I am a big fan, but we do need to be careful in the claims for what is possible.

      We humans are more than machines, more than dy/dx, and it is the other types of intelligence such as emotional, instinctive, declarative, and functional that count, that make us human. All is not perfect in this Brave New World, especially if we dismiss other types of intelligence. For example, some AI algorithms in the USA have inadvertently developed social biases that then get pushed out over the internet, such as 'the type of job a woman is more likely to get' and the subsequent biased limitation of prospective adverts sent to women (what is the point of sending them adverts for jobs better suited to men?). Such biases are hard to spot when solving a problem with a different context in mind; how do you test for such biases 'a priori'? We need to think about the unintended side effects far more than seems to be the case in the dash to revenue. I am pleased that an ethics committee is being formed but worry that some companies on it have evaded UK tax and so their moral compass may point to money rather than True North. Ethical considerations are needed, but I am not sure leaving that to the computing industry to self-regulate is wise.

    • Thanks Veronique. Guru Banavar makes a good point about the importance of minimising bias within AI systems - both in the data that machines are trained on, and in terms of the algorithms. Keen to explore how AI systems can be monitored and 'stress tested'.

  • One of our main hurdles in understanding artificial intelligence is recognising the intelligence aspect. A number of RSA Fellows are involved in the Brain Mind Forum, which is looking at the convergence of the brain and computers. We will be running a series of events in 2017, kicking off with a Symposium on intelligence with representatives from the sciences and the humanities. We need an agreed definition of intelligence that is applicable to human and artificial intelligence alike, as well as alien.
