
Three reasons why AI ethics is struggling

Comment

Nathan Kinch
Sociotechnology ethicist and social entrepreneur
Technology

There is a gap between organisations’ stated intentions and actual behaviours when it comes to ethical action in relation to AI. Nathan Kinch offers three reasons for this disconnection and some suggestions for how to close the gap and ‘make the world a better place’.



AI ethics and responsible AI have well and truly hit the mainstream. But recent data suggests there is a significant gap between values and actions, or, put differently, between stated intentions and actual behaviours. The result, the continued building of misaligned AI systems, is not just a risk; it is an issue (that is, a risk that has already happened). Many such risks have already been realised and continue to cause harm to people and the planet today. As a result, there is a moral imperative to try, collectively, to close the gap between what we say and what we do.

The idea that closing this gap is a ‘moral imperative’ might seem like a big claim. But digitalisation has historically increased environmental impacts at local and planetary scales, while reshaping labour markets, resource use, governance and power relationships.

In this article, I will describe the ‘ethical intent to action’ gap, drawing on Fifth Quadrant’s 2022 Responsible AI Index, published in Australia. I will then highlight the three most prominent issues I have seen consistently within organisations over more than a decade of working in this space. Finally, I will propose how we might close the ethical intent to action gap by drawing inspiration from a model in the cognitive sciences and applying it to ethics, in the context of AI and beyond.

A definition of ethics

Let me start by pointing out the importance of understanding what ethics actually is, regardless of the application area. Ethics can be thought of as the deliberative process we use in a given situation to stress-test our first-order beliefs (our moral values and principles) about what is good and right, so that we can make informed decisions about the possible actions most likely to be:

  • Net beneficial (utilitarian approach)
  • Respecting of people’s rights and freedoms (rights approach)
  • Equity enhancing (justice approach)
  • In service of the common good (common good approach), and
  • Reflective of the sort of person we would most like to be (virtue approach)

The above are moral theories that feature heavily in Western approaches to ethics. It is worth noting that Ubuntu ethics, Confucian ethics, relational ethics, ethics informed by the philosophy of deep ecology, the ethics of care and ethics deriving from different indigenous wisdom systems are being more actively considered in AI and other applied areas.

AI ethics is an area of applied ethics concerned with the intentions behind, and the spectrum of consequences arising from, the development and use of AI systems. It has quickly become a multidisciplinary field with thousands of practitioners globally. Together, this work attempts to better align the development and use of AI systems with human values, with a specific focus on the relationship between 'beneficial' and 'harmful' impacts.

The ethical intent to action gap

The research from Fifth Quadrant assessed a number of ‘responsible AI maturity’ factors by surveying 439 executive decision makers responsible for AI development. I’d like to highlight two charts from this report to help explain my point.

The first shows that most respondents generally agree that the organisation they work for operates in alignment with a series of AI ethics principles published by the Australian government. These are closely aligned with the AI ethics principles published across many jurisdictions.

The AI Ethics Lab highlights this tight distribution of principles, suggesting that all published principles, at the time of their analysis, fit into four categories:

  1. Autonomy
  2. No harm
  3. Benefit
  4. Justice

This categorisation is effectively describing Principlism, an applied ethics framework developed and most commonly used within bioethics.

Chart 1: Employees apparently confirm ethical intent of their employer

At a glance, this data looks very promising: there is overwhelming agreement about the ethical intentions of these organisations. But, and I quote directly from the report:

This level of agreement is encouraging but does not align with the overall Responsible AI Index scores, which may indicate a gap between strategic intent and the actions taken by organisations to put the Australian AI Ethics Principles into practice.

Fifth Quadrant Responsible AI Index 2022

The second chart shows less-than-impressive ‘responsible AI maturity’ among the same group. Only 3 percent of the organisations surveyed fall into the ‘Maturing’ category.

Chart 2: The ethical intent to action gap in practice

It is worth noting that the intention-action gap has long been studied in the behavioural sciences. It describes a phenomenon we can all relate to: we are motivated and explicitly intend to do something, we often have a very good sense of how to do it and may even have done it before, and yet we just don’t.

Interestingly, a similar phenomenon seems to be common within businesses more broadly, as highlighted by Donald Sull and his MIT colleagues in 2020. Sull found no correlation between the values a company emphasises in its official corporate culture and how well the company lives up to those values in the eyes of its employees.

A useful model

Why are we seeing such a gap between stated intentions and actions? And why is this seemingly systemic, at least within businesses and perhaps organisations of other kinds? Based on over a decade attempting to operationalise ethical intentions within organisations around the world, I have identified three prominent reasons why this is the case. 

I need to note, however, that none of this can be separated from the broader context within which organisations operate. As a result, I can’t reduce the overwhelming complexity to three bullet points. As the statistician George Box suggested in 1976: “All models are wrong, but some are useful.” What I propose here is intended to be useful. It’s a map, not the actual territory.

Reasons why AI ethics is struggling

1. Top-down mandates that drive AI ethics are exclusive, rather than inclusive

In my experience, most AI and other ‘ethics’ initiatives within organisations are driven by traditional, hierarchical thinking. This often starts with some type of board mandate, which may arise due to market pressures of different kinds. It then makes its way to a specific executive committee. From there the work will often be led by a consulting firm. The output of this process tends to be a series of principles and maybe some basic guidance as to how to interpret those principles.

This approach has a number of limitations, including a frequent lack of genuine diversity in who participates in establishing the value system. This can narrow the purview of care and moral consideration, leading to issues that go far beyond difficulty in ‘operationalising’.

2. AI ethics is disconnected from practicalities of workflows

Not only is there limited genuine diversity in processes such as this, but the body of work also tends to be disconnected from the everyday realities of researchers, designers, engineers, analysts and other cross-functional team members. These people have tools, workflows, rituals and practices that they know like the back of their hand. The high-level principles that filter down from the top of the organisation struggle to find a home in this complex system that is responsible for ‘doing the work’. In short, the principles are hard to act on. Progress from where we start towards where we’d like to be is hard to observe and measure. The whole process can be deeply demotivating.

3. When push comes to shove, business metrics rule the roost

Lastly, yet perhaps most importantly, ethical intentions can rub up against the basic business metrics that drive ‘progress’. Step back and consider a time recently when you believed something was good or right, yet weren’t able to follow through because there was a specific business driver in focus. How did this make you feel? What did you do when this happened? Who did you talk to? Did it change the nature of your relationship to work?

I’m willing to bet this is something you’ve experienced more times than you can count or care to remember.

Top-down mandates that rely on a small group of people to define values and principles, combined with a process that is disconnected from everyday operational realities, are powerful enough on their own: they make acting on ethical intentions harder than it needs to be. We often then fall back into default patterning, doing what is easy, comfortable and familiar. When we add the pull of business metrics that we are often remunerated against, and that feed higher-level reporting affecting shareholder confidence, we have a triple threat that widens the ethical intention to action gap.


How might we close the gap?

Ethics is not box ticking. It’s more than a checklist. It’s something that we live and breathe. Many have argued, and will continue to argue, that it is fundamental to the ‘good life’.

So, to do this process better, I’m going to call on a model from the cognitive sciences. This model, the 4Es, proposes that cognition doesn’t occur exclusively in the head, but is variously embodied, embedded, enacted or extended by way of extra-cranial processes and structures. Thanks to John Vervaeke from the University of Toronto, we can add another 2Es into the mix: emotional and exaptive.

  • Embodied: Although strong claims are often made that ethics is an entirely rational endeavour, I would argue that ethics is deeply phenomenological (that is, grounded in lived experience). The process of ‘doing ethics’ is something we can directly and meaningfully experience. It is expressed through our ‘being’ as practitioners.
  • Embedded: Humans can be thought of as biopsychosocial creatures. We cannot be separated from our environment. We exist in relation to it. Taking cues from situated cognition, we see the process of ethics playing out as part of daily ritual and practice. It isn’t something we do outside of core product development, but something that is baked into everything that we do within cross-functional product development teams.
  • Enacted: Following the enactive view of cognition and perception, ethics needs to be part of the ‘doing’, not just the ‘planning’. One could argue that this aligns to much of the substance of Aristotle’s Nicomachean Ethics, so it’s not exactly a new idea.
  • Extended: Drawing on Extended Mind theory and related ideas, ethics should extend into our tools, systems and technologies. This might feel odd, but the idea that our cognition extends far beyond what many cultures might traditionally consider ‘the self’ is gaining steam. This is becoming very literal in the area of Machine Ethics.
  • Emotional: Ethical deliberation and action aren't sterile; they involve empathy, fear, guilt, joy and love. Although for some this will be controversial, I suggest here that what we commonly think of as emotional intelligence can inform a deep and nuanced, yet practical, ethical wisdom.
  • Exaptive: Finally, flexibility is crucial. Our world is changing rapidly. That’s old news. But one result of this is that our ethical frameworks need the ability to adapt and even repurpose themselves in novel situations. This reflects the concept of exaptation from evolutionary biology.

In my experience, applying the 6Es to the process of ethics can shift it away from something that feels boring and burdensome – something we ‘have’ to do – towards something that is interesting, valuable and potentially even joyful.

By combining the 6Es ‘on the ground’ with a whole-of-organisation approach to ethics that is top-down, bottom-up, as well as middle-in and -out, we create something closer to a mycorrhizal network for operating ethically within a complex organising context. We create better conditions for diversity and inclusion, active participation in moral deliberation, and a deeper commitment to ethical action. 

And if we can add to this an overarching business philosophy focused far more on real-world positive impact than on a narrow view of profit, we create far better conditions for ethically aligned action in AI and beyond.

Who knows, maybe the 6Es approach to ethics will help us ‘make the world a better place’ after all.

Nathan (Nate) Kinch is a sociotechnology ethicist. He has spent the last decade operationalising practical approaches to ethics in organisations around the world. He is Ethicist in Residence at Colabs, a co-founder of Tethix and an independent advisor to governments, corporations and startups.

