As robots take on ever more human tasks and make more decisions on their own, we need to examine and question how they are making their choices. Robot psychology could be the bridge between AI and ethical decision-making.
What is a robot thinking? Many science fiction authors have tried to answer that question, but it is becoming less hypothetical by the year. Algorithms now make real-life decisions, and in the past few years several high-profile cases have shown that even algorithms carry a real risk of bias. Although often assumed to be objective, algorithms used to predict recidivism risk have shown systematic racial bias. Pedestrian-detection systems used by self-driving cars have been found to be less accurate at recognising people with darker skin. There have been other errors, some of them costly and dangerous: a Tesla failed to distinguish a white tractor-trailer against a bright sky, leading to the death of the driver.
Even if that last case was partly a sensing issue, it suggests that AIs are susceptible to bias, whether inherited from their designers or formed on their own. AIs have even been taught to deceive. In one experiment, robots were trained on the behaviour of squirrels, which lead predators away from their food caches to empty ones; the robots replicated the behaviour and tricked their 'predators'. AIs can also be tricked themselves, misidentifying one set of images as another or producing false positives and negatives. Defensive deception, designed to mislead attackers and introduce uncertainty, is itself an active area of AI research. All of this makes it more important than ever to understand how a system learns and makes its choices. We need this not just as theory but as a whole new field supporting AI, given how quickly it is developing and how much it has already achieved.
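The idea of tricking a model into misidentifying an input can be shown with a toy sketch. This is not any real attack or system: the classifier, weights and inputs below are made up for illustration. It mimics the core trick of adversarial examples, nudging each input feature slightly in the direction that most changes the model's score, so the label flips even though the input has barely changed.

```python
# Toy linear classifier (hypothetical): score(x) = w.x + b,
# labelled "cat" if the score is positive, "dog" otherwise.

def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [0.9, -1.2, 0.4]   # made-up learned weights
b = -0.1
x = [0.2, 0.5, 0.1]    # an input the model labels "dog" (score < 0)

eps = 0.6              # small perturbation budget per feature
# Nudge each feature by eps in the sign of its weight - the direction
# that most increases the score (the gradient for a linear model).
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

label = lambda s: "cat" if s > 0 else "dog"
print(label(score(w, b, x)))      # prints "dog"
print(label(score(w, b, x_adv)))  # prints "cat" - the label has flipped
```

Real attacks on image classifiers work the same way in principle, but perturb thousands of pixels by amounts too small for a human to notice.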
Robots and AIs are increasingly placed in positions where they make decisions. Some of those decisions matter more than others, but machines are likely to move into new environments and take greater control over their choices. Where robots help run hospitals, power plants, transport networks, satellites or weapons systems, every choice can carry enormous consequences, including the loss or preservation of human life. While neural networks are currently shaped by humans, many researchers aim to create more autonomous systems that can make decisions on their own. Even if today's AIs use models of human reasoning and other cognitive processes, such as logic, they could eventually develop something new.
A neural network is in many ways meant to work like the human brain, constantly learning. But just as it is not always obvious why a person reaches a certain decision, neither is it obvious for a network. The what, why and how cannot be read off as easily as a series of algorithms; they might need to be explained by the system itself. This creates a need to build robots that can explain their inner states. AIs might be capable of more than we expect: semi- and fully autonomous systems for contemporary warfare are already being developed with ethical and moral frameworks built in. And this might not be a futuristic alternative but a necessity.
Machines with moral systems of their own, working alongside humans, will need to communicate the decisions they make and why. That could necessitate a field of study centred on facilitating human-robot interaction and understanding the inner processes of the machine: robot psychology. Its goal would be to predict how a robot will react to a new situation, or how it will make decisions in unexpected circumstances.
This psychology is likely to come from an intersection of psychology, engineering, programming, and other disciplines to answer a need that is becoming more apparent. How do robots think? How will they act if an unexpected situation happens? Why does the algorithm favour one decision over another?
When we talk of psychology, we imply that there is a kind of mind at work. Current AIs employ cognitive processes modelled on human cognition. They are perhaps not so advanced that we might speak of consciousness, but we are reaching a point where that is no longer a distant possibility. Computers challenge our understanding of what it means to be conscious and capable every day: AIs can smell, make art, learn and grow.
Sure, we might say that machines are not human and are unlikely to become human any time soon. But they might develop other forms of consciousness. And they have thought processes that need to be understood if we are going to rely on AIs to achieve goals in reality. To better understand this, perhaps we can make an analogy with other living things. An insect, like a bee, is something that we do not perceive as human, nor should we. But the bee has consciousness, at a rudimentary level. It pursues what it wants and reacts to its environment. Bees can do maths. Bees can dance. Bees are entities with a consciousness alien to us, and yet with a consciousness.
Perhaps AIs will not develop thoughts as people do, yet this does not mean they will never reach a point where they are autonomous beings we must communicate with. And when they do, we will need to understand them not just as tools but as something far more complex. AIs can think, in their way, so perhaps emotions and consciousness will develop too. Fiction has long imagined how this might play out: think of the novel 'Klara and the Sun' by Kazuo Ishiguro, or the android Data from 'Star Trek: The Next Generation'.
We might not be as far off as we think. The recent scare over LaMDA, the Google AI that one of its developers declared sentient, shows that even if AI is not quite there yet, we are already interacting with it as if it were. We anthropomorphise and react – who hasn't got frustrated with Siri or Alexa for being obtuse? With more sophisticated systems, what matters is not just what goes on inside the AI but how it is perceived. Here is where a psychologist could help people communicate with the AI and vice versa, helping both.
A psychology of AI and robot cognition could help bridge the gap between an AI and its real-life applications. Such a professional could make an AI more approachable and improve how it functions, drawing on existing knowledge of both human and machine learning. Algorithms can be adjusted and tweaked to follow processes that lead to more accurate outcomes. The field would also be closely tied to ethics: one job of the robot psychologist could be to teach the AI ethical protocols and oversee how it makes its decisions, reducing the effects of cognitive and perceptual biases and other subjective factors.
Trusting the machine too much seems an unavoidable consequence of having a helper that is so much faster, and appears so much more objective, than you. But removing human oversight entirely can lead to serious failures. Two Boeing crashes were caused by faults involving the Maneuvering Characteristics Augmentation System, with the loss of hundreds of lives. A chess-playing robot broke a child's finger. In each case, people trusted the machine too much. This is not to say machines cannot do their work; it is that they require consistent maintenance and support, especially once the inventors and designers have moved on to other projects. Someone needs to make sure that the machine-person interaction still works and adjusts to change. As the world changes over time, AIs must change with it. Beyond ensuring they work properly, someone needs to serve as a go-between who can engage with the human-machine pairing and understand both sides.
While AIs have not yet reached the level of sentience some have predicted, the idea of robot psychology or psychiatry has been around for a while, not only in the works of sci-fi authors such as Isaac Asimov but also in the careers of pioneers like Dr Joanne Pransky, who has dubbed herself the world's first robotic psychiatrist. Her work has focused primarily on showcasing robotics and educating people, as well as laying a foundation for robots to become an accepted part of human society. A big part of her work (and likely a part of the work of future robot psychologists) is helping the newest robots become assimilated into public life, with DreamWorks, Warner Bros and even the Republic of South Africa using her consulting services.
Machines might not be subject to bias in exactly the way people are, but neural networks in particular learn in much the same way: they too construct their knowledge from a vast tangle of data. And, of course, they are built by people. People have biases and can design models that replicate those biases, even without intending to. They also choose the data the machines learn from, which opens up a host of different possible outcomes.
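How a model inherits bias from its training data can be seen in miniature. The dataset below is entirely hypothetical: it is deliberately skewed so that one group is labelled 'risky' more often, purely as an artefact of how the examples were collected. A naive frequency-based model trained on it faithfully reproduces that skew.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: group B is labelled
# "risky" far more often than group A, as an artefact of data collection
# rather than any underlying difference between the groups.
training = [("A", "safe")] * 80 + [("A", "risky")] * 20 \
         + [("B", "safe")] * 40 + [("B", "risky")] * 60

def predicted_risk(group):
    # The fraction of this group's training examples labelled "risky" -
    # exactly the estimate a frequency-based model would learn.
    labels = [label for g, label in training if g == group]
    return Counter(labels)["risky"] / len(labels)

print(predicted_risk("A"))  # prints 0.2
print(predicted_risk("B"))  # prints 0.6 - the model inherits the skew
```

Real systems are far more complex, but the principle is the same: a model can only be as fair as the data, and the data-collection choices, behind it.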
So while neural networks and AIs might be perceived as objective, they are not always so. Humans must continually intervene to improve machine learning and to facilitate communication and understanding. Robot psychology is likely to emerge as a significant job as AIs become more widespread.
Sylvester Kaczmarek works on secure, interpretable, and explainable intelligence capabilities and next-generation secure data management for machines. Today, he architects edge AI/ML solutions for lunar mobility vehicles and other space-based systems to enable cost-effective, safe, and efficient observations, decisions, and actions that are paving the way for the upcoming $1 trillion space economy.