
Signal or noise?

Comment

  • Drew Hemment
    Professor of Data Arts and Society at the University of Edinburgh & Theme Lead for Arts, Humanities and Social Sciences at The Alan Turing Institute
  • Arts and culture
  • Technology

Artificial intelligence is profoundly shaping the way we interact with the world. What happens when artists explore the mistakes - and the poetry - that can result when humans and machines join forces?

Generative AI is crashing like a wave through our newsfeeds, our imaginations, our relationships and the way we make sense of the world. Artists have worked with AI since the 1960s and, even before the current explosion, there was growing interest in AI across arts and cultural programmes. For artists working with machine-learning algorithms, the interest is rarely in optimising prediction accuracy. Instead, it is in the mistakes – and the poetry – that can result. Artists add human dimensions to AI, combining procedural, generative processes with intuition, risk and play.

The AI tools we see today are made possible by advances over the past decade in machine learning, which uses powerful algorithms to discover patterns in vast troves of data. An artist can work with AI by labelling the data on which it is trained, or fine-tuning a model using their own data, while the new generative models predict or complete the next instance based on a submitted prompt. Current models learn from internet-scale datasets and build predictions across tens of billions of parameters in a way that no human mind could ever do.
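The prediction-from-a-prompt idea can be illustrated with a toy model. Here is a minimal sketch in Python, assuming nothing beyond the standard library: a bigram ‘language model’ that counts which word follows which in a tiny corpus, then completes a prompt by repeatedly sampling a likely next word. The corpus and function names are illustrative only; real generative models do something analogous at vastly greater scale, over billions of parameters rather than a handful of word counts.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-next-word transitions in a tiny corpus."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def complete(model, prompt, length=5, seed=0):
    """Complete a prompt by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # no observed continuation: stop
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the model predicts the next word the model completes the prompt"
model = train_bigrams(corpus)
print(complete(model, "the model"))
```

Note that the completion is probabilistic, not deterministic: the model tells us what ‘might’ come next, not what ‘is’ correct, which is exactly the distinction drawn below.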


These advances are impressive, and things get really interesting as these tools become widely used. The big shift, though, lies not only in the underlying models, nor even in the startling capabilities that surprise their own designers, but in their uptake and the profound ways they are reshaping society.

The new real

Just as we spoke of the ‘new normal’ after the digital turn, we can speak of the ‘new real’ following the generative turn. The media we consume, the words we read, what we believe and say, our very social reality, are increasingly shaped algorithmically. This is playing out in the workplace, in our ability to organise democratically, and throughout our arts and culture. The images and words generated by AI are called ‘synthetic media’, so I would suggest we can also talk about ‘synthetic culture’ or ‘synthetic society’, to think through the ways in which AI shapes and is shaped by human customs, arts, institutions and achievements.

Human communication and culture are fundamentally changed when carried out in collaboration with intelligent machines. In widely applied deep learning algorithms, the inner workings of the model may be implicit and distributed. Machine logic has no necessary relationship with the way a human thinks; it is very different from our everyday human way of understanding and representing the world. It uses statistical methods to make classifications and predictions, and it is probabilistic, telling us what ‘might be’, not what ‘is’. This is different from the ways people understand why one decision was made and not another.

The power of these tools is astounding. But, trained on data we generate, AI both reflects society and is deeply political. The large foundation models we see today are in many ways the high point of extractive capitalism. They are made possible by data mining at a massive scale, both from past databases and online transactions, including the data people freely give up when they use social media or post online. They depend on globalised access to cheap labour and rare earth minerals. They require computing resources only the largest companies possess. And they are built through centralising business models that funnel value to the very few. AI is now geopolitical, part of the tussle between Washington and Beijing. Those countries that can are striving for a ‘sovereign AI’ capability, and most of the world looks on helplessly. An open-source movement in generative AI is gaining ground and could yet overtake this model. As of today, though, the extractive approach dominates, as we see in large language models trained on data scraped indiscriminately from the public internet without reward for the original creators.

There is a lot of noise, hype and distraction in the space right now, and we need clear vision. But, because of the move to data-driven systems and increased complexity of algorithms, the current tools are black-boxed, and opaque to human understanding. To make sense of data-driven methods – machine-learning and pattern recognition models in particular – requires a detailed understanding of the often dynamic data context in which they operate, and even experts may not easily be able to determine what a system ‘knows’.

We pass another threshold when AI models are trained on the outputs of other models. This is a multiplier for everything we have discussed. When synthetic media becomes synthetic training data and is used to train new models, we get a feedback loop that amplifies the quirks and biases of the models that produced it. The more our culture is generated by or with machine-learning algorithms, the more our culture, the shared ideas and customs that bind us, becomes unfamiliar, estranged, unknowable.

Truth of AI

The New Real is a research centre, jointly convened by the University of Edinburgh and The Alan Turing Institute, which explores AI, creativity and futures. We publish research, develop AI technologies, and regularly support and collaborate with a vibrant community of artists working with and on AI. Over the past decade in particular, a creative community of critical AI artists has responded to advances in machine-learning algorithms, and the wider availability of data and computing capacity. This community shows us how synthetic culture can be enriching, when AI is not reduced to a tool for productivity, and it has sparked and convened important debates and conversations around the art, science and politics of AI. We believe this is a vital source of collective sense-making, one that can help a wider range of voices feed into AI policy and design.

Artists help us to explore and understand the ‘truth of AI’, the uncanny nature of synthetic media, and help us to envision, and maybe even realise, alternative possibilities. Sougwen Chung, featured on the cover of this edition of the Journal, reframes our relationship to the non-human – with her use of AI as collaborator – and also to each other, by promoting an idea of collective authorship that builds on the prior input of many human hands. To Chung, “the canvas is no longer blank”, and even the intimate and embodied act of drawing becomes an interaction with other creators, both human and non-human.

In an artist residency with The New Real, titled ‘AI is Human After All’, Anna Ridler and Caroline Sinders explored the hidden human labour involved in creating and deploying AI. The artists created their own datasets of photographic images, then painstakingly extracted patterns from the observed data using manual methods. This turns a foundational definition in AI on its head, with the human artists performing a task usually carried out by the computer and associated with machine intelligence. Their practice debunks the neat representations of ‘autonomous’ systems and raises wider questions around human bias and worker exploitation.


The Zizi Show, by Jake Elwes, is a deepfake drag act generated by AI trained on video and motion-tracking data of multiple human performers. A first iteration was commissioned by The New Real for the Edinburgh Festival Fringe in 2019, and a later iteration for Edinburgh International Festival in 2021. Elwes’ purpose is to confront the lack of representation of the LGBTQ+ community in data, and to conjure positive images of non-binary performing bodies that expand the possibilities for our sexual and gender identities. Digital avatars trained using motion-capture data from real performers morph and transform into captivating figures that defy categorisation.

The environment and climate have been a constant thread for The New Real. An early work was AWEN: A Walk Encountering Nature, by Inés Cámara Leret working with a multidisciplinary team. A self-guided walk supported by a mobile app enabled a global audience, during the COP26 United Nations Climate Change Conference, to have a meditative experience of their environment and to overcome the disconnect between global climate information and local, lived experience. In The Thames Path 2040, Alex Fefegha used climate forecast data and a generative adversarial network on The New Real’s platform to create speculative imagery of what Londoners might lose, and what will remain, in a future where heavy rainfall leads to increased and widespread flooding.

Over the last year, The New Real, working with the Scottish AI Alliance, commissioned five artists to explore the uncanny interplay of humans and machines, and the social implications of recent developments in AI. All presented a video on their findings, and one of those artists, the UK- and Netherlands-based Polish artist Kasia Molga, has been commissioned to develop a new work. In a powerfully personal project, Molga is using The New Real’s AI platform to explore her father’s diaries from his life working on the Mediterranean Sea, combined with public records of ships’ logs and climate data, to bring a fresh perspective on the world’s oceans that he travelled throughout his life.

Finding the artistic signal

One question that has become louder is whether generative AI is good for the arts. In a February 2023 article in The New Yorker, the science fiction writer Ted Chiang asks if generative AI can help humans with the creation of original art. He gives us a powerful metaphor for current generative models: they are a “blurry JPEG of the web”, comparable to a ‘lossy’ data compression technique, where data size is reduced by discarding some information, which then becomes unrecoverable.

Even with the most carefully curated and comprehensively compiled training data, the current generative AI gives us “a superficial approximation of the real thing, but no more than that”. It is like “placing an existing document on an unreliable photocopier and pressing the Print button”. Chiang argues that, while there is a genre of art known as photocopy art, that doesn’t mean photocopiers have become an essential tool for artists, or will produce good art.
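Chiang’s lossy-compression metaphor is easy to make concrete. Here is a minimal sketch in Python, in which coarse quantisation stands in for the compression step in a format like JPEG: values are snapped to a grid, the representation gets smaller, and the discarded detail can never be recovered. Decompression yields only an approximation, and distinct inputs collapse together.

```python
def lossy_compress(values, step=32):
    """Snap each value to a coarse grid, discarding fine detail
    (a stand-in for the quantisation step in JPEG compression)."""
    return [v // step for v in values]

def decompress(quantised, step=32):
    """Reconstruct an approximation; the discarded detail is gone for good."""
    return [q * step + step // 2 for q in quantised]

original = [12, 40, 41, 200, 255]
restored = decompress(lossy_compress(original))
print(original)   # [12, 40, 41, 200, 255]
print(restored)   # [16, 48, 48, 208, 240]
```

The restored list resembles the original “but no more than that”: two values that differed in the source (40 and 41) are now indistinguishable. That collapse of distinctions is Chiang’s blurry JPEG in miniature.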

The work of AI artists enables us to see that even the limitations of current AI can be the basis for powerful and compelling art. Returning to the example of Elwes’ Zizi, this work can be read as a ‘blurry JPEG’, and yet it is so much more besides. Elwes holds up the blurry JPEG of the AI outputs as a sign, to illuminate the preternatural, uncanny nature of AI and to celebrate the non-binary identities of the human performers. An artwork such as Zizi generates something new and meaningful from prior information. It isn’t, to quote Chiang, a “blurry copy of unoriginal work”; it is an original use of blurriness to say something important and to create something beautiful. This highlights why the work of artists such as Elwes is so strong; it makes a virtue of the lossy nature of generative AI. It points us towards a singular and edifying kind of blurriness, and that is what makes it good art.


Elwes worked with the original performers in a way that was empowering, not parasitic. Similarly, we need to think beyond generative AI simply helping artists by generating a first draft or sketch. We should instead watch and learn from AI artists, to see how generative AI can be a material we work with, and also an impetus or provocation that inspires us to co-create meaningful work with intelligent machines.

In the work of these artists we see the outlines of a plurality of synthetic cultures emerging. AI is likely to weave its way into many more of the devices, interfaces and tools that we use to create and communicate; it is already pervasive in a way that photocopiers or paintbrushes never will be. That doesn’t mean that AI will be essential to art, or that all ‘creative’ applications of AI give us art. But what we are seeing in the works created by our New Real artists is that there is a vital field of critical AI art, one that enriches culture and helps us to ask difficult questions.

Drew Hemment, FRSA is Professor of Data Arts and Society at the University of Edinburgh, and Theme Lead for Arts, Humanities and Social Sciences at The Alan Turing Institute.

This article first appeared in RSA Journal Issue 4 2023.

