Why would AGI be more dangerous than other technologies?

In the past, new technologies have often been disruptive and produced negative side effects: the printing press enabled easier propaganda, the steam engine brought environmental degradation and poor working conditions, and electricity raised concerns about safety and electrocution as well as worker displacement due to increased automation. However, most would agree that in the end the positive effects of these technologies outweighed the negative ones. Similarly, technologies that were initially unsafe (such as early cars and airplanes) were made safer after the introduction of standards. Given that we expect AGI to have the potential to produce important benefits, why do we think our typical approaches to making technology safe will fail?

We expect AGI to be different because it belongs to a qualitatively different category than past technologies. If we were to build a misaligned AGI, not only might it prove challenging to correct after the fact, but it might wreak havoc on such a scale that there would be no one left to correct it.