
Prediction Fiction

Feature

Madeline Ashby, Author and Futurist

A futurist-novelist explains how strategic forecasting helps build better tomorrows by facing the truths of today

“Who could have possibly predicted this?” has become the increasingly sarcastic refrain to seemingly every news item of this century. The mounting consequences of Brexit for the UK, the pandemic, global warming, species extinction, unchecked authoritarianism and starved social services are just a few of the headlines to have received this inevitable response. The implication, of course, is that anyone with two neurones to rub together could have foreseen what might happen if circumstances continued as they were. Collapsing supply chains, falling birth rates and rising wait times, the skyrocketing price of, say, rocket: all of these were predictable consequences of ongoing events.

So why didn’t anyone say anything? Why didn’t we, having read all the pertinent signs, exit the roundabout before becoming trapped? The truth is, someone did say something. Many someones said something.

Warning signs

Take Brexit as one example. In 2016, US Federal Reserve Chair Janet Yellen said that the UK’s departure from the EU would have “significant economic repercussions” and likely trigger a rise in interest rates. That same year, then-UK Chancellor George Osborne warned, “There would have to be a hardening of the border imposed by the British government or indeed by the Irish government.” Former British prime ministers Sir John Major and Tony Blair agreed, warning about the potential for destabilisation of the Good Friday Agreement and the Common Travel Area. In 2018, the Bank of England cautioned that, with a no-deal Brexit, the UK economy could shrink up to 8% in a single year, with a 25% drop in value for sterling. These are just some of the warnings about one issue. The same is true of all the other crises. People spoke up.

These warnings did not come from fringe figures, but from establishment figures with access to mainstream media platforms. They hailed from places we in the business call WEIRD: Western, Educated, Industrialised, Rich and Democratic. That is likely the closest that any of them have ever come to weirdness. And perhaps that is the root of the problem. Maybe they – the adults in the room – lacked the core capacity to imagine just how weird things might get. When Donald Trump was elected US president, the Canadian science fiction author William Gibson realised he had to rewrite his novel, Agency, in part because he had previously considered Trump’s victory impossible: “I was losing a sense of how weird the real world was.”

Myths, fables and fairy tales are full of warnings: witches on heaths, conjured spirits in caves, Cassandras and Sybils, all of them with access to the ear of one king or another. All of them warned against touching the hot stove of history. And all of them were dismissed as hysterical scolds or fearmongers.

Brain power

To understand why we minimise these warnings, it is important to understand the brain. Our brain has two risk-assessment mechanisms: the amygdala and the neocortex. The former evolved before the latter. The amygdala reacts to jump scares in horror films, while the neocortex reminds us that the person holding our hand during the film is statistically more likely to harm us than a stranger outside the cinema. These two systems often contradict each other and lead us to make decisions against our best interest. For example, the threat posed by social exclusion is more immediately alarming to our nervous system than the eventual threat of illness, which is why we pick up the drink or take off our masks.

Business and government are subject to the same conflicting impulses. The architecture of our brain can become the architecture of our policy. The threat of a mugging can feel more ‘real’ than the threat of wage theft, by far the more common crime. Still, most police budgets address the former more than the latter. And, in 1975, researchers Christine and J. Richard Eiser found that, when evaluating the likelihood of possible outcomes, people tend to opt for the outcome they personally prefer. Or, as Caroline Beaton put it in the title of an article in The Atlantic: humans are bad at predicting futures that do not benefit them.

Consider Southwest Airlines: it posted consecutive profits between 1973 and 2019, in part because it saved money by not investing in newer technology. Cut to the holiday storm of December 2022, when Southwest cancelled almost 17,000 flights and lost between $725m and $825m (£598m and £680m) in a matter of days, primarily due to its outdated scheduling software. It is not that the airline did not know winter was coming; for years it had failed to listen to union leaders’ warnings about exactly this possibility.

It is tempting to chalk up the aforementioned responses to social pressures like ‘toxic positivity’, or the avoidant habit of denying or minimising negative emotions, circumstances, events or possibilities. In London, this is the proverbial ‘stiff upper lip’. In Los Angeles, it’s #goodvibesonly. The pressure to maintain a positive outlook in the face of negative developments can cause people to blame themselves when they feel sad, frustrated or fearful, even when those feelings are warranted by their circumstances or recent events. (For example, the deaths of almost 7 million people from a preventable disease, or the fact that humans have killed 60% of the total wildlife population since 1970.) This is reframing gone amok, and over time it can lead to self-deception, dissociation and inauthentic communication.

This contributes to what my colleague Scott Smith calls “flat-pack futures”, or what the Canadian scholar Sun-ha Hong calls “technofutures”, which “preach revolutionary change while practicing a politics of inertia”. These visions of possible future realities possess a mass-market sameness. They look like what happens when you tell an AI image generator to draw the future: just a slurry of genuine human creativity machined into a fine paste. Drone delivery, driverless cars, blockchain this, alt-currency that, smart mirrors, smart everything, and not a speck of dirt or illness or poverty or protest anywhere. Bloodless, bland, boring, banal. It is like ordering your future from the kids’ menu.

When we cannot acknowledge how bad things are, we cannot imagine how to improve them. As with so many challenges, the first step is admitting there is a problem. But if you are isolated, ignored, or ridiculed at work or at home for acknowledging that problem, the problem becomes impossible to deal with. How we treat existential threats to the planet today is how doctors treated women’s cancers until the latter half of the 20th century: by refusing to tell the patient she was dying.

But the issue is not just toxic positivity. Remember those myths about the warnings that go unheeded? The moral of those stories is not that some people are doomed never to be listened to. The moral of those stories is that people in power do not want to hear how they might lose it. It is not that the predictions were wrong, but that they were simply not what people wanted to hear. To work in futures, you have to tell people things they don’t want to hear. And this is when it is useful to tell a story.


The stories we tell

Humans do what time does not: we die. So, we live our lives as stories with a beginning, a middle, an end. We speak of life in chapters. We tell stories to sort the chaos of experience into some semblance of order. The instinct to know what happens next is endemic; we are messy mammals who love drama. The novelist understands this instinct and has fun with it. The futurist understands this instinct and tries to cultivate it into a lifelong awareness.

I play both roles. I am a novelist and a consulting futurist or, more accurately, a strategic foresight consultant. Simply put, the latter involves evaluating today’s practices against tomorrow’s possible realities. I attended school for it; I have a master’s in design in strategic foresight and innovation. Like my colleagues, I can research signals, trends and drivers and collate them into a report; facilitate a workshop to reveal insights; maintain awareness of instrumental changes in science, technology, culture, climate and policy; and develop scenarios which might assist in strategic planning and organisational change.

But, because I’m also a novelist, usually people come to me when they want a story told about a very particular future. “I need a near-term nuclear exchange future for the Eastern seaboard.” “I need a story about an infrastructure attack on a smart city.” “I need a story about the end of antibiotics.” And so on.

My colleague Brian David Johnson calls these stories “science fiction prototypes”. My colleagues August Cole and Peter W. Singer call the practice “useful fiction” or “fic-int”. Sometimes the goal is to show how humans might behave in a situation that has yet to occur. Sometimes it is to show how humans might use emerging technologies. The process is a lot like bespoke tailoring. First, you take the client’s measure: you ask about form, function and venue. You make a mock-up of a future and drape the story over it, then cut and cut again, and add finishing flourishes. At the end you hold up a mirror and say: this is what it looks like.

One value of this approach is that it has nothing to do with any single person or policy being right or wrong. It simply presents an immersive experience of potential alternatives or outcomes, informed by deep research, and then offers those experiences as topics for discussion. This experience can take the format of a short story, a comic book, a film, a museum exhibit, or even a piece of design fiction, a thing from the future that you can hold in your hands. But its goal is to provide a lived-in sense of a possible future so that its implications become as real to the amygdala as a jump scare.

Another advantage is that this approach can offer an aspirational vision. Science fiction is often credited with influencing technology development: the Star Trek tricorder becomes the iPhone; Blade Runner’s Voight-Kampff device becomes a biometric scanner at your airport. But these are narrow one-to-one comparisons. An aspirational future is a place where you might actually want to live. After all, the crew of the Enterprise had tricorders and transporters not because they purchased them, but because Earth’s governments united to renounce war, money and slavery, making the planet a legitimate entrant to a new system of exchange.

Storytelling is just one part of the foresight process, which, unlike many other forms of consulting, is not about assessing performance or finding efficiencies. Foresight is about looking ahead and deciding where to go next. My job is not to predict the future. My job is to help people have a fearless conversation about multiple futures.

Foresight creation

Strategic foresight evolved out of what we would now call strategic planning. In 1921, the Soviets founded the State Planning Committee. In 1932, H.G. Wells called for ‘Departments and Professors of Foresight’. In 1933, the US Research Committee on Social Trends published its first report, influencing the formation of Social Security. The Second World War only highlighted the need for continuous strategising. Fast-forward to the 1970s and we see Pierre Wack using scenario forecasting at Royal Dutch Shell (helping the company to avoid the worst of the oil shock), and the first graduate programme in Public Policy and Alternative Futures, at the University of Hawaii. The UK’s Government Office for Science has run foresight studies since 1993.

But the idea of a ‘strategist’ is ancient. As early as 600BC, the Greeks appointed one strategos (literally, ‘a leader of that which is spread out’) to a military command position for each tribe. The strategoi made decisions about where to move armed forces; 10 strategoi decided the victory at the Battle of Marathon in 490BC. From its military roots, the role of strategos became political and rhetorical over time. Pericles, the strategos who sculpted the Delian League into the Athenian Empire, ushered in public funding for the arts and initiated construction of the Acropolis.

In 1999, Richard Slaughter defined foresight: “Strategic foresight is the ability to create and sustain a variety of high-quality forward views and to apply these emerging insights in organisationally useful ways; for example, to detect adverse conditions, guide policy, shape strategy; to explore new markets, products and services.” In our book How to Future: Leading and Sense-making in an Age of Hyperchange, Scott Smith lays out the foresight process in this way: scoping; sensing and scanning; sense-making and mapping; scenario development; storytelling and prototyping; ongoing assessment. In 4 Steps to the Future: A Quick and Clean Guide to Creating Foresight, Richard A.K. Lum puts forward a simpler but broader process: past; present; futures; aspiration.

I mention both approaches because they offer an insight into where any foresight process gets tricky. One cannot solve a problem without admitting it exists, and people cannot agree on a preferred future if they disagree about the past. There is a reason that ‘truth’ has to come before ‘reconciliation’, something foresight professional Adam Kahane talks about at length in his books describing his work developing post-apartheid scenarios in South Africa.


Shared dystopias, bespoke utopias

This brings us back to fiction and questions that I am asked frequently: Where are all the utopias? Isn’t science fiction supposed to be optimistic? Why can’t kids today imagine better futures? The answer is: they can. But their ideal future is different from yours.

This is the dirty secret of why dystopias are more popular than utopias: they are easier to write because all the homework has already been done. Open up a history book, and you’ll find plenty of examples to build a world on. All dystopias look the same: poverty; deprivation; the loss of rights; the rise of fear; a dead environment; mass surveillance. (For Orwell this was 1984. For you, it was Tuesday.) The bad guys might wear silver jumpsuits or black armour; the dictator might be called a hegemon or some other big word for a small man. The features of the dystopia might be different: some might violently enforce gender norms, while others might violently enforce gender norms… in space. Pick your poison, but it all boils down to this: same jackboot, different day.

Conversely, utopias are inherently bespoke. They embody ideals shaped by culture, time and experience, all of which can differ across nations, genders, incomes and more. Our utopias are different. You might think the opposite. You might think we all want the same things but have wildly different visions of how to achieve our aims. I regret to inform you that this is not the case.

Not everyone wants peace. Not everyone wants equality. Not everyone wants rights or freedoms or accountability, or access to healthcare and education and infrastructure. Or, they might, but they might not want to share. Some people firmly believe that the world would be a better place if I couldn’t vote or attend school or choose what to do with my uterus. Me, housebound and silent and dying of ectopic pregnancy: that’s their shining city on a hill.

Everybody wants to rule their own world, and there are technologies available to make it happen: virtual reality platforms, individual screens in every home, Uber for everything. They provide protection from surprise, which is to say, protection from difference. Decades of algorithms have offered us nothing more than ‘more like this’. This, alongside the evaporation of our public spaces, has diminished our capacity to make space for community. This place would be great if it weren’t for all the (other) customers.

But it is place, and placemaking, that’s important. Imagining better futures requires making and holding dedicated physical, mental and emotional spaces for that imagining. Any effective foresight process requires time, space, energy and resources to pause for horizon-scanning and strategic evaluation of current behaviours. We futurists may feel as though we live a few years ahead of everyone else, but we achieve that velocity by slowing down to consider issues that others have ignored. This requires enough quiet to hear the inner voice chime in when someone asks, “What’s the worst that could happen?” It requires the courage to answer the terrifying question of what it is we truly need.

Utopias are not places where nothing bad happens, but places where damage has been repaired. Creating an aspirational future is a profoundly vulnerable act. When describing a utopia, one is really pointing out wounds, and imagining what they might look like when healed.

But first, you have to show them where it hurts.

Madeline Ashby is an author and futurist specialising in scenario development and science fiction prototypes. She is a member of the AI Policy Futures Group at the Arizona State University Center for Science and the Imagination and the XPRIZE Sci-Fi Advisory Council

This article first appeared in the RSA Journal Issue 1 2023.

Cover and artworks by Kyle Bean for the RSA. Kyle is a London-based artist and director specialising in handcrafted design and imagery.
