What is an existential risk? The Cambridge Existential Risks Initiative (CERI) uses the following definition:
“An 'existential risk' (x-risk) is an event that could permanently and drastically reduce humanity’s potential, for example by causing human extinction.”, CamXRisk
A problem with discussing existential risks is that definitions and understandings of the term are quite ambiguous. The Cambridge definition has a weakness too: what does “drastically” mean, specifically?
Let me try my own definition:
An existential risk to humanity is a threat that could damage large parts of the planet or of human structures, or kill large parts of humanity, to such an extent or in such a way that a recovery to modern levels of society is unlikely within a timeframe of centuries.
Every definition has its weaknesses, but I think we now have a working understanding of what we are talking about. Of course, even if an existential threat does not go the full way (say, a nuclear exchange limited to Europe), the effects on global society might not be existential in the narrow sense but still very dramatic. Lastly, we have to keep further parameters in view, such as timeframe (how fast the risk plays out once it happens) and range of effects (how far-reaching the effect is, e.g. does it hit “only” humanity, or the whole global ecosystem).
With the definition game out of the way: what existential risks are out there, and can we list and maybe even rank them? Or does a ranking of existential threats make no sense at all? Existential means (1) huge consequences when the worst case happens and, evidently, (2) it has not happened yet, at least not to our modern society. That means we have no proper statistical basis to draw reliable conclusions from. A ranking could be based on probability of occurrence and severity of effect. But as we lack sufficient data from prior cases, probabilities of occurrence are very vague and usually just guesses. The same is true for the severity of the effect. As Niall Ferguson writes:
“A pandemic is made up of a new pathogen and the social networks that it attacks. We cannot understand the scale of the contagion by studying only the virus itself, because the virus will infect only as many people as social networks allow it to. At the same time, a catastrophe lays bare the societies and states that it strikes”
This is not only true for pandemics, but for other societal and potentially existential risks as well. To estimate the effect of an existential risk thus means (1) understanding the mechanics of the risk very well and (2) understanding how societies react to it. For that reason, the situation can be counterintuitive in numerous instances. For instance, the climate changes, and yet predicted (and reported) effects do not actually happen:
“annual US deaths from tornadoes have fallen by more than a factor of ten since 1875”
or extreme weather events might increase in some places and still:
“weather-related death rates fell dramatically during the past one hundred years even as the globe warmed 1.2°C (2.2°F); they’re about 80 times less frequent today than they were a century ago. That’s largely due to better tracking of storms, better flood control, better medical care, and improved resilience as countries have developed. A recent UN report confirms the trend over the past two decades.”, Steven Koonin, Unsettled
Of course, this could change significantly should we get into a runaway situation, which some climate scientists fear but others dispute. The bottom line is: we usually know much less about the mid- or long-term outcomes of complex systems than we think we do.
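To see why a formal ranking is so fragile, consider the naive approach: score each risk by probability times severity and sort. A minimal sketch follows; every number in it is an invented placeholder, not an estimate, and the 0–10 “severity” scale is my own assumption:

```python
# Naive expected-loss ranking: score = probability * severity.
# All numbers are invented placeholders -- the point of the text is
# precisely that such inputs are little more than guesses.
risks = {
    "nuclear exchange":        (0.010, 10.0),  # (guessed probability, severity 0-10)
    "engineered pandemic":     (0.020,  8.0),
    "infrastructure collapse": (0.050,  6.0),
}

ranking = sorted(
    ((name, p * s) for name, (p, s) in risks.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranking:
    print(f"{name}: expected loss {score:.2f}")
```

The arithmetic is trivial; the problem is that changing any guessed probability by a factor of a few reorders the whole list, which is exactly why such rankings should not be taken too literally.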
So any ranking of existential risks tends to be a fool's errand. Thus, the ranking in my list is not to be taken too seriously, but it is not entirely negligible either. For instance, nuclear weapons are number one for a good reason: there is a reasonable chance of a full nuclear exchange, the consequences are absolutely catastrophic, mitigation and adaptation options are minimal, and all of it happens in a very short timeframe. Population collapse, by contrast, is a significant threat to our society with little room for mitigation, but it plays out over decades or centuries, and we can hope for adaptation measures. Climate change is in between, and actually very similar in its mechanics: there is not a lot we can actually do to mitigate it, especially in a short timeframe, and the effects will most likely play out over long periods of time, which again opens room for adaptation. But the effects are wider-ranging than those of population collapse, in that the whole biosphere is affected.
These are the reasons why climate change is not in the top five of my list. It is not one event, it is gradual, and adaptation is much more likely than with a nuclear war. Other large humanitarian problems like hunger, poverty or malaria are not on the list either. These, and many others, are terrible threats to human flourishing in parts of the world, but they do not threaten humanity in a larger or long-term sense; they are rather effects of other problems. Again, this does not mean we should do nothing against them, but they are simply not the focus of this discussion.
Systemic aspects of most existential threats forbid a naive reading, in the sense that we should work the list from top to bottom and only continue to “the next action item” once we have completed the previous one. In fact, the most fundamental insight is that there is not one “biggest problem”, not even three or five. Living in a modern, complex society — in the Anthropocene — has positive effects: we are farther away from nature, safer, and with much higher living standards than ever before. We no longer die in harmony with nature, as Hans Rosling put it. But it also has negative effects, which can be, unfortunately, existential. Many, if not most, of the threats on this list are connected in complex ways. This means we most likely cannot solve any of them, especially not in isolation. But it also means that we got our priorities wrong in the last decades.
»We are always well prepared for the wrong disaster. […] But history warns us that you don’t get the disaster you prepare for.«, Niall Ferguson
and
“For every potential calamity, there is at least one plausible Cassandra. Not all prophecies can be heeded. In recent years we may have allowed one risk — namely climate change — to draw our attention away from the others.”
Also, as mentioned above, some threats on the list are not isolated but can be seen as potent triggers for other threats, e.g. political extremism or collapse of infrastructure.
Thus, focusing on the mitigation of one risk alone (e.g. climate change) can dramatically increase other risks, just as every medication has side effects. Were we, for instance, to “Just Stop Oil”, this would all but guarantee an existential disaster for humanity. Often we not only got the priorities wrong, but our activities were so inept that they in fact increased the very risk they were supposed to mitigate, and negatively affected many other risks as well, especially the ones higher on the list.
Not focusing our attention on one risk alone is clearly not easy, considering the dominant mode of (legacy) media and politics. But focusing on a single risk is like not smoking to stay healthy while at the same time riding a motorcycle and doing wingsuit jumps from mountains. Whatever we do should be a wide-ranging approach that makes our society more resilient against various risks, for instance:
- Increasing diversity (in supply chains, food production, energy systems, ...).
- Providing redundancy for important services (grid replacement parts against Carrington events, emergency hospital beds, etc.).
- Preparing for catastrophes with emergency management, properly funded military, etc.
- Decoupling (for instance financial services), so that a catastrophe in one country does not immediately spill over to others.
- Creating smaller, decoupled, redundant structures that organise bottom-up, rather than heavyweight top-down ones.
- Making societies wealthier and better educated in general, which helps them adapt to critical events. This clearly stands in opposition to the ideas of many “progressives”.
- Trying to estimate the positive effect of an intended measure per Euro spent, instead of following ideology-driven activities.
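The last point, estimating effect per Euro, can be sketched as a simple comparison. A toy example; none of the costs or benefits below are real estimates, they are invented purely for illustration:

```python
# Ranking preparedness measures by estimated benefit per Euro spent.
# All cost and benefit figures are invented for illustration only.
measures = [
    # (measure, cost in EUR, estimated benefit in EUR)
    ("stockpile PPE",            5e7, 2e9),
    ("spare grid transformers",  3e8, 5e9),
    ("pandemic response drills", 2e7, 1e9),
]

for name, cost, benefit in sorted(measures, key=lambda m: m[2] / m[1], reverse=True):
    print(f"{name}: about {benefit / cost:.0f} Euro of benefit per Euro spent")
```

The point is not the numbers but the discipline: making cost and expected effect explicit, so that measures compete on estimated impact rather than on ideological appeal.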
One example where we failed in an embarrassing way, and where our near-total lack of preparation became visible, is the Covid crisis. To mention just a few aspects:
- Even though pandemics of this type happened in the past and were predicted by experts, we had no proper studies on which emergency measures would actually be helpful (and even three years later that knowledge is scarce).
- We had no PPE stocked in reasonable quantities, not even for medical personnel.
- We did not even have crisis processes defined. In the middle of the first lockdown, I followed a discussion in German media where experts were contemplating what infected people should do: go to the general practitioner, go to the hospital, or stay at home?
- We had no clear and unified reporting framework established, and even years into the pandemic the data quality was bad.
- We had no prepared plan for reaching reasonable conclusions quickly about which actions to take (virologists are not the only experts who have a say in a pandemic).
None of these preparations would have been expensive at all, especially considering the cost the lack of preparation inflicted on our society, and the fact that we had people whose job it was to do just that.
I think we can create a much more resilient society, mitigate some of the risks, and adapt to others — maybe shift them from existential to problematic, so to speak. Now to the suggested list of existential threats, with a vague prioritisation subject to the caveats above:
- Nuclear weapons, voluntary and especially also involuntary deployment (we have had a very close shave more than once)
- Natural or engineered pathogens leading to pandemics — especially considering the recent developments in synthetic biology
- Collapse of critical infrastructure that supports our modern society (or of multiple infrastructures in a domino effect, e.g. an ICT failure or an attack on ICT triggering energy problems, or failures of financial services or supply chains), due to human error or complexity spinning out of control, such as
- energy infrastructure (e.g. way too little supply of fossil fuel in the next decades; large scale collapse of grids)
- global financial system
- ICT
- supply chains
- Solar superstorm (also called a Carrington event), which could destroy large parts of modern electrical infrastructure at once. It has already happened in the past, for instance in 1859 (when the damage was limited for obvious reasons), and we had a close shave in July 2012. Insurers also describe this risk, for instance Lloyd’s:
- “A Carrington-level, extreme geomagnetic storm is almost inevitable in the future.”
- “The total U.S. population at risk of extended power outage from a Carrington-level storm is between 20-40 million, with durations of 16 days to 1-2 years.”
- Runaway automation/robotics (»AI«, but on a more general level), especially in combination with one of the other risks on the list
- Climate change (especially in the case that tipping points and runaway climate change occur)
- Nanotechnology or other ways of dramatically changing manufacturing and chemistry leading to nanomachines
- Population collapse is an existential threat in slow motion; “Will civilisation end with a bang or with a whimper? It is currently trending to end with a whimper in adult diapers.”, Elon Musk
- Ecosystem collapse, biodiversity catastrophe / “global domino effect”
- Unknown unknowns
- Green genetic engineering gone wrong
- Political/religious extremism (probably not per se, but in combination with or as a trigger of other factors mentioned above, e.g. nuclear weapons, bioweapons, etc.)
- Asteroid impact on Earth (which is so low on the list because of the assumption that we should be able to detect this threat early enough and mitigate it)
- Geoengineering accident
This list is meant as an encouragement to discuss the issue. Is the order correct? Probably not. Is something missing? I researched for some time to come up with this list; still, I most likely forgot something. This is also why one article focuses on unknown unknowns. This is important: as our knowledge is very limited, it appears much wiser to prepare our society for impacts of different types, as mentioned above.
But is the idea of existential threats actually credible? Some pundits say that, eventually, humanity turned out to be more resilient than we believed at times (see also Apocalypse Always). But I think this is short-sighted, and such hand-waving comments are problematic. It is difficult to argue that, say, nuclear weapons are not an existential threat. Just because someone played Russian roulette six times and did not die does not make it a safe pastime.
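The Russian roulette analogy is worth making concrete: surviving six independent pulls of a single-bullet, re-spun revolver is not even unlikely.

```python
# Probability of surviving six independent rounds of Russian roulette
# (one bullet, six chambers, cylinder re-spun each time).
p_survive_one = 5 / 6
p_survive_six = p_survive_one ** 6
print(f"{p_survive_six:.3f}")  # roughly 0.335, i.e. about one in three
```

Roughly a third of such players walk away unharmed, and their survival tells us nothing about the safety of the game. The same holds for humanity's track record with close nuclear calls.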
The confusing fact that nuclear weapons (most likely) also prevented World War Three in the past indicates how complex the situation is. Mutual assured destruction might have avoided a war, but should a war still occur — even due to technical or human error, as we have seen in the past — it would be, well, mutually destructive. There is no way around it.
And this paradox seems to be a common attribute of many modern technologies, which make the edge between benefit and catastrophe ever sharper. So we are well advised to take these threats seriously. But to reiterate a point I made in the beginning: it is a major mistake to focus mostly on one threat on this list and believe we can thereby reduce our societal risk.
We will not.