
What is artificial intelligence anyway?


  • Benedict Dellot
    Former Head of the RSA Future Work Centre and Associate Director
  • Economics and Finance
  • Employment
  • Enterprise

Artificial intelligence is once again in the media spotlight. But what is it exactly? And how does it relate to developments in machine learning and deep learning? Below we spell out the various interpretations of AI and look back on how the technology has developed over the years.

The semantic quicksand

“The fundamental challenge is that, alongside its great benefits, every technological revolution mercilessly destroys jobs and livelihoods – and therefore identities – well before the new ones emerge.”

So said Mark Carney in a widely reported speech last week, as he referred to the potential impact of an oncoming wave of artificial intelligence.

We’ll pick up on his predictions another time, but for now it’s worth asking what we mean exactly by AI. How does it relate to machine learning and deep learning? And what separates innovations like chatbots from self-service checkouts, self-driving cars from search engines, and factory robots from automated teller machines?

For all the hype and postulating, there is surprisingly little discussion about the technology itself and how it came to be. Try Googling ‘what is artificial intelligence?’ and you’ll find very little in the way of solid definitions.

This is not a dull point about semantics. If we don’t know what the technology is and how it is manifested, how do we expect to judge its potential effects? And how will we know which industries and occupations are most likely to be transformed?

AI and its different guises

From my own reading of the limited literature, I’d say the following:

First, that artificial intelligence can be broadly defined as technology which replicates human behaviours and abilities conventionally seen as ‘intelligent’.

While many people focus on ‘general AI’ – machines with intelligence equal to, if not greater than, that of humans, and which can perform any variety of tasks – very little progress has been made in this domain. Aside from a handful of the most ardent optimists, there is consensus that AI systems which can talk like us and walk like us, and which can essentially pass for humans, are decades from realisation. HAL 9000 and C-3PO remain the stuff of science fiction.

In contrast, there have been significant and meaningful developments in ‘narrow AI’. These are machines that perform a specific function within strict parameters. Think of image recognition, information retrieval, language translation, reasoning based on logic or evidence, and planning and navigation. All are technologies that underpin services like route mappers, translation software and search engines.

Kris Hammond from Narrative Science usefully groups these tasks into three categories of intelligence: sensing, reasoning and communicating. Explained in his words, cognition essentially breaks down into “taking stuff in, thinking about it, and then telling someone what you have concluded”.

Mobile assistants like Apple’s Siri and Google Now make use of all three of these layers. They begin by using speech recognition to capture what people are asking (‘sensing’), then use natural language processing (NLP) to make sense of what the string of words means and to pull out an answer (‘reasoning’), and finally deploy natural language generation (NLG) to convey the answer (‘communicating’). It works whether you’re asking for the weather or directions to the nearest coffee shop.
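To make that layering concrete, here is a minimal sketch in Python of a sense–reason–communicate pipeline. The function names and the crude keyword matching are my own illustrative assumptions – real assistants rely on full speech-to-text, NLP and NLG models rather than anything this simple.

```python
# A toy sense -> reason -> communicate pipeline. Everything here is a
# simplified stand-in; the function names and keyword matching are
# illustrative assumptions, not how Siri or Google Now work internally.

def sense(audio_clip: bytes) -> str:
    """'Sensing': speech recognition turning raw audio into text (stubbed here)."""
    return "what is the weather in london"

def reason(query: str) -> dict:
    """'Reasoning': natural language processing to extract intent and an answer."""
    if "weather" in query:
        city = query.rsplit("in ", 1)[-1] if " in " in query else "your area"
        return {"intent": "weather", "city": city, "forecast": "light rain"}
    return {"intent": "unknown"}

def communicate(answer: dict) -> str:
    """'Communicating': natural language generation turning the answer into a reply."""
    if answer["intent"] == "weather":
        return f"Expect {answer['forecast']} in {answer['city'].title()} today."
    return "Sorry, I didn't catch that."

print(communicate(reason(sense(b"...raw audio..."))))
# Expect light rain in London today.
```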

When it comes to robotics – which can be thought of as physical machines imbued with AI capabilities – we should also add a fourth category of movement.

A self-driving car, for example, will sense its environment using a variety of detectors (e.g. spotting a pedestrian walking across the road), apply reasoning to decide whether there are any risks (e.g. of hitting the pedestrian), and then make any necessary movement (e.g. slowing down or altering direction). The same process plays out in other advanced robots, including those found on the factory floors of manufacturers or the wards of hospitals and care homes (see for example Asimo).
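That sense–reason–act loop can be sketched in a few lines of Python. The sensor reading, the risk threshold and the possible actions below are invented purely for illustration; real vehicles fuse many sensors and use far richer planning logic.

```python
# A toy sense -> reason -> act loop in the spirit of the self-driving car
# example. The sensor values, the 20-metre threshold and the actions are
# assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class SensorReading:
    pedestrian_ahead: bool
    distance_m: float

def sense() -> SensorReading:
    # Stand-in for camera / lidar / radar input.
    return SensorReading(pedestrian_ahead=True, distance_m=12.0)

def reason(reading: SensorReading) -> str:
    # Decide whether the situation is risky enough to require action.
    if reading.pedestrian_ahead and reading.distance_m < 20.0:
        return "brake"
    return "maintain_speed"

def act(decision: str) -> None:
    # 'Movement': translate the decision into a physical command.
    print(f"Action: {decision}")

act(reason(sense()))  # prints: Action: brake
```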

Getting off to a slow start

How did we get to this point?

After all, artificial intelligence as a field of research has been around for decades. Interest in AI stretches back to the 1950s, a period when Alan Turing devised the influential Turing Test to determine whether or not a machine could ‘think’. The Dartmouth College conference of 1956 is often cited as the landmark moment when computer scientists came together to pursue AI as a research field in its own right, powered by leading thinkers like Marvin Minsky.

Despite early enthusiasm and significant funding, however, initial progress in artificial intelligence was excruciatingly slow. DARPA, which had pumped millions of dollars into university departments during the 1960s, became particularly frustrated at the lack of headway in machine translation, which it had pinned its hopes on for counter-espionage. Closer to home, the UK’s 1973 Lighthill report raised serious doubts that AI was going to evolve at anything but an incremental pace.

The result was a radical cut in government funding and several prolonged periods of investor disillusion – what became known as the ‘AI Winters’ of the 70s and 80s. The circumstances were not helped by wildly optimistic early predictions, such as Minsky’s claim in 1970 that “[within] three to eight years we will have a machine with the general intelligence of an average human being”.

From the AI Winter to the AI Spring

One of the biggest blocks to progress was the issue of ‘common sense knowledge’. Attempts to create intelligent machines were stymied by the huge expanse of possible inputs and outputs that are associated with a given task, which could not all be anticipated and programmed into a system without a mammoth exercise lasting many years (although the researcher Douglas Lenat has attempted this with a project named Cyc).

Think of language translation, where the hidden meaning of phrases can be lost if words are converted literally from one language to the other. Unanticipated idioms would regularly throw systems out of kilter. Or consider image recognition, where a mannequin or puppet might be mistaken for a person, even though the difference is obvious to a human observer. The ‘combinatorial explosion’ of possibilities in the messy world that is real life was too much for the computers of the day.

Things began to change, however, in the late 1990s and early 2000s. Increased computing and storage power meant AI systems could finally process and hold a significant amount of information. And thanks to the spread of personal computing and the advent of the internet, that valuable data was becoming ever more available – whether in the form of images, text, maps or transaction information. Crucially, this data could be used to help ‘train’ AI systems using special machine learning methods.

Prior to this new approach of machine learning, many AI applications were underpinned by ‘expert systems’, which meant painstakingly developing a series of if-then rules and procedures that would guide basic decision-making (picture a decision tree or web). These were useful when dealing with a contained task – say, processing cash withdrawals under the bonnet of an ATM – but were not made to handle novel or unanticipated inputs where there could be millions of potential outcomes.
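For a flavour of what those hand-written rules look like, here is a small if-then sketch in Python, loosely in the spirit of an ATM withdrawal check. The rules and limits are invented for illustration – the point is simply that every condition has to be anticipated by a programmer in advance.

```python
# A hand-coded, expert-system-style rule set. The specific rules and the
# daily limit are illustrative assumptions, not a real ATM's logic.

def approve_withdrawal(balance: float, amount: float, daily_limit: float = 300.0) -> str:
    if amount <= 0:
        return "reject: invalid amount"
    if amount > daily_limit:
        return "reject: over daily limit"
    if amount > balance:
        return "reject: insufficient funds"
    return "approve"

print(approve_withdrawal(balance=250.0, amount=100.0))  # approve
print(approve_withdrawal(balance=250.0, amount=400.0))  # reject: over daily limit
```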

What makes machine learning so transformative is that it works backwards from existing real-world examples. So instead of writing thousands of lines of code, machines are fed huge datasets which are then analysed for common patterns, creating a generalised rule that can be used to make sense of future inputs. With image recognition, for example, machine learning algorithms are fed a large number of pictures, each labelled in advance (e.g. ‘mountain’ or ‘house’), and these are then used to create a general rule for interpreting future photos.
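A minimal sketch of that ‘learn a rule from labelled examples’ idea, using the scikit-learn library and a tiny made-up dataset, might look like this. The two-number feature vectors stand in for real image features; an actual image-recognition system would learn from millions of labelled photos.

```python
# Learning a generalised rule from labelled examples with scikit-learn.
# The feature vectors and labels below are made up for illustration.

from sklearn.linear_model import LogisticRegression

# Each row is a (pretend) feature vector for one picture, with its label.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y_train = ["mountain", "mountain", "house", "house"]

model = LogisticRegression()
model.fit(X_train, y_train)           # analyse the examples for common patterns

print(model.predict([[0.85, 0.15]]))  # ['mountain'] -- the rule applied to an unseen input
```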

The applications of machine learning are almost limitless – from aiding the detection of cancers and radically improving language translation, through to spotting fraudulent behaviours in financial markets and assisting businesses as they recruit workers.

Deep learning as the next frontier

Machine learning is the main reason for the renewed interest in artificial intelligence, but deep learning is where the most exciting innovations are happening today. Considered by some to be a subfield of machine learning, this new approach to AI is informed by neurological insights about how the human brain functions and the way that neurons connect with one another.

Deep learning systems are formed of artificial neural networks that exist on multiple layers (hence the word ‘deep’), with each layer given the task of making sense of a different pattern in images, sounds or texts. The first layer may detect rudimentary patterns, for example the outline of an object, whereas the next layer may identify a band of colours. And the process is repeated across all the layers and across all the data until the system can cluster the various patterns to create distinct categories of, say, objects or words.
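To make the idea of stacked layers concrete, here is a small multi-layer network sketched with PyTorch. The layer sizes are arbitrary assumptions, and real image models are far larger and typically convolutional, but the shape of the thing – several layers, each transforming the output of the one before – is the same.

```python
# A small 'deep' network: several layers stacked on top of one another.
# Layer sizes are arbitrary; real deep learning models are much larger.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # first layer: picks up low-level patterns
    nn.ReLU(),
    nn.Linear(128, 64),   # deeper layer: combines them into higher-level features
    nn.ReLU(),
    nn.Linear(64, 10),    # final layer: scores for ten possible categories
)

fake_image = torch.randn(1, 784)  # a flattened 28x28 'image' of random noise
scores = model(fake_image)
print(scores.shape)               # torch.Size([1, 10])
```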

Deep learning is particularly impressive because, unlike the conventional machine learning approach, it can often proceed without humans ever having defined the categories in advance, whether they be objects, sounds or phrases. The distinction here is between supervised and unsupervised learning, and the latter is showing ever more impressive results. According to a King’s College London study, deep learning techniques more than doubled the accuracy of brain age assessments when using raw data from MRI scans.
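By way of contrast with the supervised sketch earlier, here is an unsupervised example using scikit-learn’s KMeans on a tiny made-up dataset (ordinary clustering rather than deep learning, purely to show the supervised/unsupervised distinction): no categories are defined in advance, and the algorithm has to discover the groupings itself.

```python
# Unsupervised learning: no labels are supplied, the algorithm groups the
# data points on its own. The dataset is made up for illustration.

from sklearn.cluster import KMeans

data = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(km.labels_)  # e.g. [0 0 1 1] -- cluster ids discovered from the data itself
```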

Assuming these innovations continue to progress, the prospects for AI to influence our health and happiness, our economic security and our working lives, are truly mind-boggling. Whether or not we are prepared for this new machine age is another question – but we can only begin to answer it by knowing what it is exactly we’re all talking about.

If you have another interpretation of AI, or want to pick up on any of the domains and uses mentioned in this blog, please post a comment below. The more perspectives, the better.

 

Join the discussion


  • I am currently reading for an MA in philosophy at Birkbeck and intend to write my dissertation on an ethical implication of AI. There are of course many, and I have yet to make a determination on which of them to focus. If the author of this blog, or any of its readers, has any advice to offer, I would be most grateful.

    • Sounds fascinating. A few areas I would explore: responsibility/accountability (or as my colleague Tony Greenham puts it: can you sue a robot?); privacy; discrimination and agency. 


      Two reading suggestions:

      - Yuval Noah Harari's Homo Deus, where he explores what AI might mean for humanist notions of individuality and freedom (can we really be free if machines know our every need in advance?)

      - Nick Bostrom's The Ethics of Artificial Intelligence - https://intelligence.org/files/EthicsofAI.pdf


      Good luck and let us know how you get on.

      Ben

      • Many thanks, Ben. I think this will prove very helpful indeed. I'll keep you posted on any progress.

        Best wishes,

        Phil
