What are existential risks (x-risks)?

Philosopher Toby Ord defines existential risks as "risks that threaten the destruction of humanity's long-term potential".1 In the context of AI safety, "existential risk" usually refers to human extinction, though it can also refer to the risk of irreversibly locking in a drastically worse state of affairs (such as permanent global tyranny).

There have been at least five mass extinction events in the history of the Earth, during which a large percentage of species went extinct. For most of human history, extinction risk came only from natural sources, such as impact events or supervolcanoes.

Technological advances have created man-made sources of existential risk, such as nuclear war and engineered pathogens. Ord argues that man-made risks are orders of magnitude more likely than natural risks to cause human extinction in the next century.2 Some argue that the most significant existential threat to humanity will soon be posed by powerful AI systems.

We believe AI is an existential threat because it is plausible that powerful agentic AI vastly smarter than humans will be built, and no working plan currently exists for keeping such an AI under human control. In pursuing its goals, such an AI could end up wiping out humanity (even just as a side effect) or locking humans into a perpetual dystopia.


  1. From Toby Ord’s The Precipice. This is a rephrasing of Nick Bostrom’s definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development."

  2. Toby Ord, in The Precipice: "I estimate anthropogenic risks to be more than 1,000 times more likely than natural risks. And within anthropogenic risks, I estimate the risks from future technologies to be roughly 100 times larger than those of existing ones…" (p. 163). He estimates the chance of a natural existential disaster to be within an order of magnitude of 1 in 10,000 and that of an anthropogenic existential disaster to be about 1 in 6.


