What safety problems are associated with whole brain emulation?
It seems improbable that whole brain emulation (WBE) will arrive before neuromorphic AI, because a better understanding of the brain would probably help with the development of the latter. This makes the research path to WBE likely to accelerate capabilities and shorten timelines.
Even if WBE were to arrive first, there is some debate about whether it would be less prone to producing existential risks than synthetic AI. An accelerated WBE might be a safe template for an AGI, as it would directly inherit the subject's way of thinking, but several safety problems could still arise:
- Being run as an accelerated, enhanced emulation would be a very strange experience for current human psychology, and we are not sure how the resulting mind would react. As an intuition pump, very high-IQ individuals are at higher risk of psychological disorders. This suggests that we have no guarantee that a process recreating a human brain with vastly greater capabilities would retain the relative stability of its biological ancestors.
- A WBE might be run thousands of times faster than a biological brain, making it a speed superintelligence (see the back-of-the-envelope calculation after this list). This might allow it to amass a large amount of power, which historically has tended to corrupt humans.
- Its high speed might make interactions with normal-speed humans difficult, as explored in Robin Hanson's *The Age of Em*.
- It is unclear whether WBE would be more predictable than AI engineered by competent, safety-conscious programmers.
- Even if WBE arrives before AGI, Nick Bostrom argues that we should expect a second (potentially dangerous) transition to fully synthetic AGI, because of its improved efficiency over WBE.
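To make the speed claim above concrete, here is a minimal back-of-the-envelope sketch. The 10,000x speedup factor is an illustrative assumption, not a figure taken from the literature; the point is only to show how a large speedup translates into subjective time.

```python
# Back-of-the-envelope: subjective time experienced by a sped-up emulation
# per unit of wall-clock time. SPEEDUP is an illustrative assumption.

SPEEDUP = 10_000          # emulation runs this many times faster than a biological brain
DAYS_PER_YEAR = 365.25

wall_clock_days = 1       # one calendar day of real time
subjective_days = wall_clock_days * SPEEDUP
subjective_years = subjective_days / DAYS_PER_YEAR

print(f"In {wall_clock_days} calendar day, a {SPEEDUP:,}x emulation "
      f"experiences about {subjective_years:.1f} subjective years.")
# -> In 1 calendar day, a 10,000x emulation experiences about 27.4 subjective years.
```

At that assumed rate, the emulation lives through roughly a human generation every few calendar days, which is why even modest interaction delays with normal-speed humans could feel prohibitive from its perspective.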
Nonetheless, an AGI built on WBE would probably be easier to align in some ways; for example, it may inherit human motivations. Eliezer Yudkowsky has argued that emulations would probably be safer, even if they are unlikely to come first.