Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
So-called "power-seeking AI" could pose one form of AI risk, but it is not the only type of AI that could potentially cause catastrophe. Catastrophic scenarios that don't involve power-seeking AIs include:
-
AIs that take harmful actions due to specification gaming or goal drift, without explicitly seeking power.
Furthermore, society's trend toward automation, driven by competitive pressures, is gradually increasing the influence of AIs over humans. Hence, the risk does not solely stem from AIs seizing power, but also from humans ceding power to AIs.