How close do AI experts think we are to creating superintelligence?

Nobody knows for sure when, if ever, we will have artificial general intelligence (AGI). Open Philanthropy CEO Holden Karnofsky has analyzed a selection of expert surveys on the question (recent as of September 2021), alongside findings from computational neuroscience, economic history, probabilistic methods, and the failures of previous AI timeline estimates. This leads him to estimate that “there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~⅔ chance we'll see it this century (by 2100).” Karnofsky laments the lack of robust expert consensus on the matter and invites rebuttals to his claims in order to further the conversation. He compares AI forecasting to election forecasting (as opposed to academic political science) and to market forecasting (as opposed to purely theoretical academic work), arguing that AI researchers may not be the “experts” we should trust to predict AI timelines.
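To make these headline numbers concrete, here is a minimal sketch in Python that reads off a rough implied probability of transformative AI for intermediate years. It assumes a baseline near 0% in 2021 and simple linear interpolation between the three quoted anchor points; both assumptions, and the `p_tai_by` helper, are illustrative rather than any part of Karnofsky's own model.

```python
# Illustrative only: linear interpolation through Karnofsky's three headline
# estimates, plus an assumed ~0% baseline in 2021 (not from his analysis).
import numpy as np

# (year, cumulative probability of transformative AI by that year)
anchors = [(2021, 0.0), (2036, 0.10), (2060, 0.50), (2100, 2 / 3)]
years, probs = zip(*anchors)

def p_tai_by(year: float) -> float:
    """Rough interpolated probability of transformative AI arriving by `year`."""
    return float(np.interp(year, years, probs))

if __name__ == "__main__":
    for y in (2030, 2045, 2080):
        print(f"P(TAI by {y}) ~= {p_tai_by(y):.2f}")
```

Under these assumptions the sketch implies roughly a 6% chance by 2030 and about a 25% chance by 2045, though a real forecast would put more structure on the curve than straight lines between three points.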

Opinions proliferate, but given experts’ (and non-experts’) poor track record at predicting progress in AI, many researchers tend to be fairly agnostic about when superintelligent AI will be invented.

UC-Berkeley AI professor Stuart Russell has given his best guess as “sometime in our children’s lifetimes”, while Ray Kurzweil (a Director of Engineering at Google) predicts human-level AI by 2029 and an intelligence explosion by 2045. Eliezer Yudkowsky expects AGI to arrive soon and to end the world by default, while Elon Musk has predicted AGI before 2030.

If there’s anything like a consensus answer at this stage, it would be something like: “highly uncertain, maybe not for over a hundred years, maybe in less than fifteen, with around the middle of the century looking fairly plausible”.