Isn't the real concern autonomous weapons?

Some fear that the deployment of lethal autonomous weapons could be catastrophic for humanity. This concern can be considered a type of misuse of AI.

Intuitively, one can imagine how weapons that kill without human intervention might remove empathy as the final check before lethal action is taken. They could also be deployed at a much larger scale than conventional weapons, or used to prop up totalitarian regimes.

Another issue is that computer security is hard, and it would be difficult to guarantee that a fleet of such weapons belonging to an otherwise benevolent nation could not be hacked and used by a rogue actor to cause mayhem, as depicted in the movie The Fate of the Furious. One could also imagine an eventual AGI hacking such weapons to subjugate humanity, although there are arguably easier ways for it to do so.

That said, it is unlikely that these weapons would directly pose an existential risk, which is why this site concentrates instead on the risks from unaligned AGI.
