On our power to fulfil the prophecies of predictive algorithms

Helga Nowotny, former President of the European Research Council, explores how our views of AI’s predictive algorithms can lead to behaviour change that causes those predictions to come true, like self-fulfilling prophecies, and might even change our outlook and control of the future.


This post is an overview of Helga Nowotny's book In AI We Trust: Power, Illusion and Control of Predictive Algorithms (Polity, 2021).

Book cover of Helga Nowotny's In AI We Trust. Power, illusion and control of predictive algorithms

The challenges, risks and opportunities linked to digitalization and AI have invaded our lives, the workplace and how we do science. They are global in nature, and much is at stake. Undoubtedly, we have reached a new stage in our cultural evolution, driven by science and technology and by accelerating change. Humans have had to adapt to novel situations many times before in their history, but this time they must come to terms with a technology of their own creation that is rapidly altering the social and natural world we are used to inhabiting. Digitalization is also likely to shape how we confront the challenges of climate change and the sustainability crisis. AI can help us in many ways, for instance by modelling complex systems so that we better understand the dynamics of their interactions and the emergence of new phenomena, and by helping us to foresee when a complex system reaches criticality. Such tipping points initiate a phase transition that might also lead to collapse.
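To make the idea of foreseeing criticality concrete, here is a minimal sketch in Python (not from the book; the dynamics and all parameter values are illustrative assumptions). It simulates a noisy system slowly pushed toward a fold-type tipping point and computes two standard early-warning indicators, rising variance and lag-1 autocorrelation, which signal "critical slowing down" before the transition:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(steps=4000, dt=0.05, sigma=0.05, c_max=0.38):
    """Euler-Maruyama integration of dx = (x - x^3 - c) dt + sigma dW,
    with the forcing c ramped slowly toward the fold point c* ~ 0.385."""
    x = 1.0                          # start in the currently stable state
    xs = np.empty(steps)
    for t in range(steps):
        c = c_max * t / steps        # slow drift toward criticality
        x += dt * (x - x**3 - c) + sigma * np.sqrt(dt) * rng.standard_normal()
        xs[t] = x
    return xs

def lag1_autocorr(w):
    """Lag-1 autocorrelation of a window; it rises as resilience is lost."""
    return np.corrcoef(w[:-1], w[1:])[0, 1]

xs, win = simulate(), 400
for t in (1000, 2000, 3000, 3900):   # rolling early-warning indicators
    w = xs[t - win:t]
    print(f"t={t}: variance={w.var():.4f}  lag-1 autocorr={lag1_autocorr(w):.2f}")
```

Both indicators grow as the system approaches the tipping point, which is the statistical footprint such models look for in real complex systems.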

There have been major techno-economic-social paradigm changes before. It began with the Industrial Revolution in Europe and what the economic historian Joel Mokyr called the 'culture of growth'. Its causes and impacts have been extensively studied, including the societal upheavals and miseries it brought for some before a general betterment of the human condition set in. Comparisons with past transformations can be useful, but they often end with the realization that this time is different. Our present situation is the outcome of a combination of factors: availability of and access to an enormous amount of data; sophisticated machine learning and deep learning algorithms that are trained on data but can partly invent their own rules; and, last but not least, unprecedented computational power. So, what else is different this time?

A way to enhance our lives or to lose our humanity?

Digital technologies and AI have had an almost instantaneous global reach and impact. They span the globe, not only by connecting us in communication but through ever denser networks of satellites and sensors that will soon allow us to monitor and map almost everything that happens on this planet. The main owners of digital technologies are large corporations that operate worldwide and have reached an enormous concentration of economic power. The pandemic has pushed us further into the manifold process of digitalization, but it has also revealed the vulnerability of supply chains and logistics. It has provided us with a digital lifeline to escape social isolation, but it has also given us a foretaste of what a digital world could be like when social contacts are minimized and our bodily need to keep in touch with one another is severely curtailed.

An enormous literature exists on the social impact of AI. A wide gap is visible between the enthusiasts of an unbridled techno-optimism on the one hand, and dystopian visions warning of humans' loss of control and subjugation to machines on the other. It is not easy to navigate between the promises of further enhancement and cognitive augmentation, not to speak of the unmitigated benefits for society said to come with them, and the dire warnings that project the dangers of state surveillance, of social unrest caused by mass unemployment, or of standing at the brink of an AI-induced loss of our humanity. At times I felt I was in a maze that had been deliberately designed to confuse, with no exit route. At other times I felt I had entered a labyrinth containing a sacred centre for those who seek enlightenment or are on a spiritual quest. At that centre resides the concept of transhumanism: the desire to leave the body and our mortality behind and to be transformed into digital entities, the next phase of our evolution.

I therefore wanted to avoid the trap of having to choose between a techno-optimism oblivious to the societal context and the social costs that digitalization inevitably imposes on some parts of society, and dystopian scenarios that deprive us of any hope and confidence that digital technologies can serve our needs, given that we are the ones who create them. I decided to focus on what I consider the crux: the ways in which we design and deploy predictive algorithms, which is where digital technologies have the greatest impact on us as human agents and where we bear the greatest responsibility for safeguarding and maintaining what it means to be human. Predictive algorithms allow us to see further into the future, following the old desire to know what the future will bring. The oracles and divinatory practices once common in practically all civilizations have vanished, yet we are as keen as our ancestors to engage in foresight exercises in order to be better prepared for the next pandemic or other potentially harmful events among the 'known unknowns': events that are likely to occur, even though we do not know when.

Know yourself – or get a predictive algorithm instead

This is where predictive algorithms enter. By allowing us to see further ahead, they support decision-making. In some fields of medical diagnostics, predictive algorithms already outperform medical experts at pattern recognition, and for the scientifically difficult problem of how proteins fold, predictive algorithms have provided a solution in a strikingly short time. Businesses are keen to exploit the 'simple economics of AI' and rely on predictive algorithms for greater efficiency. Increasingly, institutions such as the judicial system in the US, but also insurance companies, the health care system, state unemployment offices and the educational system, use them to sort out who is likely to benefit from their decisions and services. Letting an algorithm decide in place of a human promises greater efficiency, lower costs, and a reduction of what Cass Sunstein, Daniel Kahneman and Olivier Sibony call 'noise': the flaws in human judgment that are due to variation and that can be reduced by algorithmic standardization.
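As a back-of-the-envelope illustration of 'noise' in this sense (a hypothetical sketch with made-up numbers, not an example from the book): the scatter among human judges assessing identical cases can be quantified as a standard deviation, which a fixed algorithmic rule drives to zero by construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sentencing example: ten judges assess the same case.
# Their judgments scatter around a common anchor -- that scatter is "noise".
human_judgments = 24 + rng.normal(0, 6, size=10)   # months; illustrative values
algorithmic_rule = np.full(10, 24.0)               # same input -> same output

print(f"human:     mean={human_judgments.mean():.1f}  spread={human_judgments.std():.1f}")
print(f"algorithm: mean={algorithmic_rule.mean():.1f}  spread={algorithmic_rule.std():.1f}")
```

Eliminating that spread is exactly the efficiency gain institutions hope for; whether the standardized rule is also a fair one is a separate question.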

All these predictions result from deploying algorithms based on machine learning or deep learning methods modelled on neural networks, together with a huge amount of data. Yet these predictions rest on the extrapolation of data that come from the past. It is easy to forget that they cannot know the future, which remains inherently uncertain, and that they are always couched in probabilities. There is, however, an important difference from statistics as derived from the 'trust in numbers', which is based on averages and on categorizing social groups or aggregates. Predictive algorithms, in contrast, target the individual qua individual, somewhat reminiscent of the oracle that also aimed to tell the supplicant which destiny to expect, as Elena Esposito has pointed out.
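A minimal sketch of this point (synthetic data and invented risk factors, assuming scikit-learn is available): a model fitted to past cases can only return a probability for a new individual, an extrapolation from the past rather than knowledge of the future.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Past cases: three invented risk factors, and an outcome shaped by them plus chance.
X_past = rng.normal(size=(500, 3))
y_past = (X_past @ np.array([1.2, -0.8, 0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# The prediction for one new individual is a probability, never a certainty:
new_person = rng.normal(size=(1, 3))
risk = model.predict_proba(new_person)[0, 1]
print(f"predicted individual risk: {risk:.0%}")
```

The output reads like a verdict on one person, which is precisely what makes it feel oracular, yet it remains a probability estimated from other people's pasts.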

Prove me wrong

This is why it is easy to forget that probabilities also apply to predictive algorithms, and why we so often attribute agency to them. When it comes to social behaviour, a significant change occurs in the perception of what predictive algorithms can do. By attributing agency to them, we come to believe they have the power to 'know' much more than they actually do. Many believe that they can do things far exceeding human capabilities, and that their predictions will actually come true. Because they rest on mathematical calculations enshrouded in a whiff of allegedly greater 'scientific objectivity', and because most algorithms operate as black boxes whose inner workings even experts cannot fully explain, algorithms are attributed a kind of superior epistemic status. We then get the feeling that they know us better than we know ourselves. We begin to believe that the risk a predictive algorithm assigns us of getting a certain disease will actually materialize, forgetting that every prediction remains couched in probabilities.

This is why we fall for the illusion we have created about the power of predictive algorithms. Believing their predictions, we begin to change our behaviour accordingly and to adapt in anticipation of what we expect to happen. This is the risk of self-fulfilling prophecies: by confirming a prior expectation, a prediction can actually induce the predicted social situation, turning a mere possibility into reality.
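The feedback loop can be made explicit with a toy simulation (all numbers are illustrative assumptions): an arbitrary score flags some people as 'high risk', acting on the flag worsens their real outcomes, and each round of observed outcomes then appears to validate the original prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

true_risk = np.full(n, 0.20)           # everyone starts with the same real risk
flagged = rng.uniform(size=n) > 0.5    # an arbitrary "high risk" label

for round_ in range(1, 4):
    # The performative step: acting on the label (denied credit, extra
    # scrutiny) raises the real risk of the flagged group each round.
    true_risk = np.where(flagged, true_risk + 0.10, true_risk)
    outcome = rng.uniform(size=n) < true_risk
    print(f"round {round_}: flagged fail {outcome[flagged].mean():.0%}, "
          f"unflagged fail {outcome[~flagged].mean():.0%}")
    # Retraining a model on these outcomes would now find the label "confirmed".
```

The label was arbitrary at the start, yet by the final round the data genuinely support it: the prophecy has manufactured its own evidence.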

Yet more is at stake: our very outlook on the future can change dramatically. For the greater part of human history, people believed that the future was predetermined – by God or the gods, by destiny or by chance. Only a few centuries ago, when the astonishing achievements of modern science and technology became widely visible and their benefits began to percolate through society, did people begin – also under the influence of Enlightenment ideas – to realize that their future was neither static nor predetermined. They were no longer bound to a future that merely repeated the past. Rather, for the first time, the horizon of the future became an open one. The future was perceived to be, at least partly, shaped by us.

In my book In AI We Trust: Power, Illusion and Control of Predictive Algorithms, I point to a paradox that lies at the heart of our trust in AI: we leverage AI to increase our control over the future and over uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans created the digital technologies to which we attribute agency. As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to better understand the limitations of AI and how its predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future.



Helga Nowotny

Professor, helga.nowotny@wwtf.at