Might superintelligence lead to astronomical amounts of suffering?

We hope not, but at least one organization, the Center on Long-Term Risk, is explicitly focused on this concern.

Some researchers believe that we will end up with a multipolar AI scenario, in which the global power structure is primarily determined by the interaction of many distinct AGI systems. In such a scenario, we might worry about potentially catastrophic conflict arising between AGI systems. Zooming out, the potential downsides of conflict have grown more severe throughout human history: the deadliest war on record, World War II, ended only in 1945.

Other people worry about alternative pathways to so-called ‘s-risks’, futures that contain an astronomical amount of suffering. For instance, totalitarian regimes might develop advanced AI and use such systems to sustain a perpetual totalitarian state. More speculatively, we might train a model that is deceptively aligned but actively malevolent, for example through an ostensibly aligned AI falling prey to the Waluigi Effect.

By and large, people want happiness and don’t want suffering. This provides one hopeful (though far from decisive) reason to believe that a future of flourishing creatures is more likely than a future full of suffering, but it is not a reason to dismiss these downside risks.