Developing human friendly, social AI

Comment

  • Mark Lee FRSA
  • Social innovation
  • Technology

Machine learning is a technology that has made enormous progress in the last two decades. Despite this, Mark Lee FRSA argues that there remains a huge gulf between how machines ‘think’ and human intelligence.

Machine learning is ubiquitous in software applications, on the internet, in our phones, and inside big data algorithms. It is the power behind advances in image processing for driverless cars, speech processing systems and language translation services. Recently, machine learning programs have progressed from learning from patterns in huge datasets to learning to play games from scratch, starting with only the basic rules. Not only has a wide range of video games been mastered, exceeding human skill levels, but deep logical games such as chess and Go have been learned, without any prior data, beating world champions.
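
The article does not name the method behind this learning-from-only-the-rules progress; real systems such as AlphaZero combine deep networks with tree search. As a much smaller illustration of the same idea, the Python sketch below uses tabular self-play reinforcement learning (a simple Monte Carlo flavour) to master the game of Nim given nothing but its rules. It is a toy stand-in, not a description of those systems, and every name in it is ours.

    # A toy illustration of learning a game from scratch: tabular
    # self-play reinforcement learning on Nim (21 sticks, take 1-3,
    # whoever takes the last stick wins). The learner is given only
    # the rules and improves purely through self-play -- no prior data.
    import random
    from collections import defaultdict

    N_STICKS = 21
    ACTIONS = (1, 2, 3)
    Q = defaultdict(float)        # Q[(sticks_left, action)] -> value for the mover
    ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate

    def choose(sticks, greedy=False):
        legal = [a for a in ACTIONS if a <= sticks]
        if not greedy and random.random() < EPSILON:
            return random.choice(legal)                      # explore
        return max(legal, key=lambda a: Q[(sticks, a)])      # exploit

    def train(episodes=50_000):
        for _ in range(episodes):
            sticks, history = N_STICKS, []
            while sticks > 0:
                action = choose(sticks)
                history.append((sticks, action))
                sticks -= action
            # The player who just moved took the last stick and wins.
            # Moves alternate, so the reward flips sign back through history.
            reward = 1.0
            for state, action in reversed(history):
                Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
                reward = -reward

    train()
    # Optimal play from 21 sticks is to take 1, leaving a multiple of 4;
    # after training, the greedy policy should have discovered this.
    print(choose(N_STICKS, greedy=True))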

This progress offers potential benefits for our lives in many ways, assuming the applications are handled with care. The common feature of all machine learning work is that it solves particular problems for specific tasks, and we can expect many new areas to succumb to learning algorithms. How does this all relate to human ways of thinking? Is machine learning similar to our way of learning, and is it extending our understanding of intelligence?

In the early days of artificial intelligence (AI) the goal was to program a computer to reproduce or simulate some of the remarkable behaviour in humans we call intelligence. This was a very broad ambition and covered all kinds of cognitive activity. There was a lot of interest in trying to capture the style of human thinking, for example, building programs that could reproduce human decision-making or duplicate perceptual errors. This emphasis has largely been lost in machine learning (and much of AI), with performance being more important than method or style. The often impressive results are usually produced by powerful algorithms that have no relation to human styles of thinking.

One area where human behaviour cannot be ignored is human-computer interaction, particularly conversational systems. There are many ways that high-quality conversational systems could enhance our lives, for example as companions, care assistants, service providers, guides, and diagnostic/therapy agents. Notice that these are not really tasks, they do not have a single goal, but they do require a considerable understanding of the domain of the conversation. Of course, simple conversational systems already exist, but they usually only execute commands and provide basic information services. The challenge for future AI is to provide full-scale, lasting, social interaction between machines and humans.

Consider a conversation between a robot and a human. Mary says: "I can see what Mike means when he's in the room with Alice". What does this statement mean to the robot? Does Mary actually ‘see’ what Mike means, and what does Mike ‘mean’? Is the room too small for Mike and Alice together or is there something else about this room, or any room? Or perhaps something else about Mike only makes sense when he's in the room with Alice. Or is it nothing to do with rooms and about being in proximity to Alice? There are so many possibilities here and you will no doubt think of many more.

Such conversations cannot be analysed solely from their textual content, and large corpora of millions of past conversations will not provide meaning either. There are two key features unique to this AI ‘problem’: first, each agent has its own viewpoint, a subjective perspective; and second, the human condition is an embedded assumption in much human conversation.

People talk about things from their experience, from their own viewpoint; and they interpret others on the basis that they too have similar subjective selves. We know what it is to be a person, we all have a ‘self’, and we can understand others through empathy, by assuming that they have a similar self, which experiences life events very similar to our own experiences. But how can a machine have a ‘self’ model? Clearly, this is very desirable for grounding and understanding conversations; we can then attach inferred intentions and beliefs to each agent we encounter and treat them according to past information, context and behaviour. This allows us to imagine (simulate?) the mental changes taking place when we observe two others in conversation. But very few AI systems have this kind of subjective view of their world. The difficulty is that we cannot program in a complete self model: each agent/person needs to gradually build up their own sense-of-self through experience over their lifetime.
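
To make the idea concrete, here is a minimal Python sketch of the kind of per-agent model described above: the robot attaches inferred beliefs and intentions to each agent it meets and grows the record from observed behaviour, rather than having a complete model programmed in. The structure and every name in it are illustrative assumptions, not a description of any existing system.

    # A minimal sketch of a per-agent model: inferred beliefs and
    # intentions are attached to each agent and updated from observed
    # behaviour. All names here are hypothetical illustrations.
    from dataclasses import dataclass, field

    @dataclass
    class AgentModel:
        name: str
        beliefs: dict = field(default_factory=dict)      # what we infer they believe
        intentions: list = field(default_factory=list)   # what we infer they want
        history: list = field(default_factory=list)      # their observed behaviour

        def observe(self, event, inferred_belief=None, inferred_goal=None):
            """Update the model from a single observed interaction."""
            self.history.append(event)
            if inferred_belief:
                self.beliefs.update(inferred_belief)
            if inferred_goal:
                self.intentions.append(inferred_goal)

    # The robot's own self-model uses the same structure, so it can treat
    # others as subjects like itself -- the empathy assumption in the text.
    self_model = AgentModel("me")
    mary = AgentModel("Mary")
    mary.observe("comment about Mike and Alice",
                 inferred_belief={"Mike's behaviour depends on": "Alice being present"})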

The second issue concerns the core concepts that dominate human existence, for example: birth, food, health, sex and death. All these, and many more, are completely alien to machine understanding. Machines will have knowledge of processes and events at a formal level; they will know that humans die, but this is just a fact, without any subjective meaning. How could they understand human life? How could they understand the concerns of living systems that will eventually die? A robot can be turned off for a while and then later continue, it can have its worn parts replaced, and even have its ‘brain’ transferred to another system. So all the survival concerns that worry humans have no machine equivalent and no emotional meaning. Research is producing systems that can estimate human emotions from faces, gestures and bodily demeanour, as well as from speech. But we should not expect future robots to have similar emotions; they will recognise our joy and grief, but their responses will be cool, polite and platonic. They will have preferences but not passions.

So what chance is there of creating first-person social robots that learn about the world and other agents/humans from their own subjective experience? Several research projects are looking into various aspects such as experiential learning and multi-modal human-machine interaction, and the learning-from-scratch approach mentioned above promises a new direction for machine learning.

But perhaps the most relevant work is found in the developmental robotics field where the focus is on infant learning and experiments are attempting to reproduce the rapid growth of competence seen in the first years of life. Being embodied in a robot ensures that all experience takes place in the real physical world, objects and agents are learned about from scratch (and through play), and the subjective perspective is maintained separate from the experimenter's global view.
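
A flavour of how such experiments work, reduced to a toy: "motor babbling" is a staple of developmental robotics, in which the robot issues small random motor commands, observes the sensory consequences, and builds its own sensorimotor map from scratch. The Python sketch below simulates this for a one-joint arm; the simulated kinematics stand in for a real embodied robot, and all names are illustrative.

    # A toy version of "motor babbling" from developmental robotics: try
    # small random motor commands, observe the sensory outcome, and build
    # a sensorimotor map from scratch. The simulated one-joint arm below
    # stands in for a real embodied robot; all names are illustrative.
    import math, random

    experience = []   # (motor_command, observed_hand_position) pairs

    def world(angle):
        """Stand-in for physical reality: where the hand ends up."""
        return (math.cos(angle), math.sin(angle))

    # Play phase: babble random commands and remember what happened.
    for _ in range(1000):
        command = random.uniform(0.0, math.pi)
        experience.append((command, world(command)))

    # Later, competence grounded purely in the robot's own history:
    # to reach a target, recall the command whose outcome was closest.
    def reach(target):
        return min(experience, key=lambda rec: math.dist(rec[1], target))[0]

    print(reach((0.0, 1.0)))   # should be close to pi/2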

The flexibility of such robots will open up new areas of use (and abuse) and this will focus attention on the ethical issues of their application. Ethical requirements can be incorporated into standards and regulations, and it is encouraging that all AI has recently been receiving much needed scrutiny in this respect.

Mark Lee FRSA is Emeritus Professor of Intelligent Systems in the Department of Computer Science at Aberystwyth University, Wales. He is a Fellow of the Learned Society of Wales and the Institution of Engineering and Technology and has degrees in engineering and psychology. For more details and results, see Mark's latest book, How to Grow a Robot: Developing Human-Friendly, Social AI, published by MIT Press in 2020.

