Is the AI safety movement about stopping all technology?
You might think that people concerned about catastrophic risk from AI would generally be opposed to technology. This is not the case.
Many of these people are long-time techno-optimists and even identify with transhumanism.[1] The core claim is not “stop technology” but “let’s not build something that would likely wipe out humanity.” AI safety advocates often support beneficial, well-understood technologies (e.g., nuclear power, vaccines, self-driving systems, and many current uses of AI[2]), while urging strong safeguards or limits in unusually dangerous areas (e.g., on gain-of-function research, on some harmful effects of current AI,[3] and on future, highly capable AI). Most also acknowledge significant present-day benefits from AI and the potential for large future gains; the worry is that a takeover by future AI would forfeit all of those benefits at once.
Views differ on how to reduce risk from advanced AI:
- Some propose pausing or sharply slowing frontier model development — ranging from “indefinitely” to “for a generation” to “until specific safety criteria are met” — but these proposals target frontier AI, not technology writ large.
- Some favor differential acceleration (“d/acc”): speeding up development of tools that constrain and secure powerful systems (e.g., evals, interpretability, sandboxing, compute governance, and defensive technologies), including using narrow AI to help make broader AI safer.
- Many safety-minded researchers inside frontier AI companies argue for continued but cautious progress to better understand and control the systems being built.
Across these strands, there is support for targeted policies: rigorous capability evaluations, red-teaming, staged deployment, liability for serious AI-caused harms, and restrictions on especially hazardous uses. This contrasts with some pro-technology groups (e.g., most e/accs, some open-source advocates, and many venture capitalists) who oppose almost any restriction. Even so, AI safety advocates are broadly more “pro-technology” than the average person.
As an analogy, the ideal world of people concerned about bridge safety is not one with no bridges, but one with no unsafe bridges. In the same way, the ideal world for people concerned about AI safety is not one with no AI, but one with no powerful, unsafe AI.
1. Examples include Eliezer Yudkowsky and Nick Bostrom.
2. People who are concerned about future AI are often power users of current AI. As an analogy, early humans might have been wary of burning down an entire wooded area while enthusiastically using small fires to cook food.
3. Salient examples include AI-induced psychosis and AI persuasion, but many people concerned about existential risk are also concerned about other harms from AI.