Will there be a discontinuity in AI capabilities?
While many researchers agree that AI capabilities could increase quickly, there is still debate about whether the increase would take the form of a continuous rise or a (seemingly) discontinuous jump.
Arguments for continuous takeoff
Paul Christiano believes that growth in AI capabilities will show up as growth in economic productivity. He expects world GDP to double over shorter and shorter periods, with AI contributions to AI R&D creating a feedback loop that results in hyperbolic growth. On this model, takeoff is continuous but still fast.
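To make "hyperbolic growth" concrete, here is a minimal sketch (an illustrative toy, not Christiano's actual model): if economic output feeds back into its own rate of growth with any positive exponent, growth is faster than exponential, and each doubling of GDP takes less time than the one before. The `feedback` parameter, the specific growth equation, and the units below are assumptions chosen for illustration, not figures from the debate.

```python
def doubling_times(feedback=0.5, gdp=1.0, dt=1e-3, doublings=6):
    """Integrate dGDP/dt = GDP**(1 + feedback) with Euler steps and
    record the time at which each successive doubling of GDP occurs.
    (Toy model: 'feedback' stands in for AI contributions to AI R&D.)"""
    times, t, target = [], 0.0, 2 * gdp
    while len(times) < doublings:
        gdp += gdp ** (1 + feedback) * dt  # feedback > 0 => superexponential growth
        t += dt
        if gdp >= target:
            times.append(t)
            target *= 2
    return times

ts = doubling_times()
# Time taken by each successive doubling:
print([round(later - earlier, 2) for earlier, later in zip([0.0] + ts, ts)])
# Each doubling is faster than the last, e.g. roughly [0.59, 0.41, 0.29, ...].
```

With `feedback=0` the doubling times would instead be constant (plain exponential growth); the shrinking doubling times are what distinguish a continuous-but-fast takeoff on this view.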
John Wentworth has explored, in the form of a story, the possibility that enhanced cognitive capabilities are not the true bottleneck to taking over the world. In his scenario, the more significant bottlenecks are coordinated human pushback and the need to acquire and deploy physical resources.1
For example, an artificial superintelligence (ASI) could in principle design a faster computer to accelerate its own thinking, or a nanobot that could wipe out humanity within seconds. Actually building either, however, would take much longer: the ASI would need to coordinate supply chains, navigate economic bottlenecks, and construct precision machinery such as semiconductor fabrication plants. Since we can anticipate that an ASI would pursue such supply-chain optimizations as instrumental goals, we should expect to see rising productivity and therefore GDP growth, which can serve as a proxy measurement for “AI takeoff”.
Arguments for discontinuous takeoff
Eliezer Yudkowsky expects AI to have relatively little effect on global GDP before a discontinuous "intelligence explosion". One argument for this is that superintelligent AIs can lie to us: if an artificial general intelligence with strategic awareness knows it will be turned off once it is perceived as too power-hungry, its best strategy is to limit its visible impact on the world by pretending to be less intelligent than it is. This leads to lower-than-expected GDP growth, followed by a sudden, discontinuous "FOOM" as soon as the AI gains access to a superweapon or some other similarly powerful means of influencing the world, at a pace faster than human technological and governance institutions could counter.
Yudkowsky also points to evolution, where the transition from chimpanzee-like ancestors to humans produced (what looks like) a discontinuous jump in capabilities. A much more comprehensive public debate on the matter was held between Yudkowsky and Christiano, which is summarized here.
Different views on takeoff speeds and (dis)continuity have different implications for how best (and potentially whether) to work on AI safety.
“On fusion power, for instance, at most a 100x speedup compared to the current human pace of progress is realistic, but most of that comes from cutting out the slow and misaligned funding mechanism. Building and running the physical experiments will speed up by less than a factor of 10. Given the current pace of progress in the area, I estimate at least 2 years just to figure out a viable design. It will also take time beforehand to acquire resources, and time after to scale it up and build plants - the bottleneck for both those steps will be acquisition and deployment of physical resources, not cognition. And that’s just fusion power - nanobots are a lot harder.” - Wentworth, John (2021), Potential Bottlenecks to Taking Over The World