When do experts think human-level AI will be created?

Short answer: within your lifetime.

It’s hard to be justifiably confident about how long it will take to develop human-level AI (HLAI)[1]. Attempts to predict its arrival include aggregate forecasts, individual predictions, and detailed models.

Aggregate predictions:

  • Metaculus[2] maintains aggregated community forecasts on when AGI will be developed.

Individual predictions:

  • Daniel Kokotajlo’s 2023 analysis predicts 2028.

  • Connor Leahy, CEO of Conjecture, gave a ballpark prediction in 2022 of a 50% chance of AGI by 2030 and a 99% chance by 2100. A 2023 survey of Conjecture employees found that all respondents expected AGI before 2035.

  • Holden Karnofsky estimated in 2021 that there was “more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~⅔ chance we'll see it this century (by 2100).”

  • Paul Christiano estimated in 2023 that there was a 30% chance of transformative AI by 2033.

  • Yoshua Bengio estimated in 2023 “a 95% confidence interval for the time horizon of superhuman intelligence at 5 to 20 years”. Geoffrey Hinton also predicts 5-20 years, but his confidence is lower.

  • Shane Legg estimated in 2023 an 80% probability of AGI within 13 years (before 2037); one rough way to compare these figures is sketched after this list.
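
These point forecasts use different baselines and horizons, which makes them hard to compare directly. One rough, illustrative way to put them on a common scale is to convert each “X% chance by year Y” statement into the constant annual probability that would produce it. The constant-hazard assumption is a simplification of ours, not something these forecasters endorse.

```python
# Convert "X% cumulative chance within N years" into an implied constant
# annual probability, to loosely compare the forecasts above.
# Each entry: (cumulative probability, horizon in years from forecast date).
forecasts = {
    "Leahy: 50% by 2030 (from 2022)": (0.50, 2030 - 2022),
    "Karnofsky: ~50% by 2060 (from 2021)": (0.50, 2060 - 2021),
    "Christiano: 30% by 2033 (from 2023)": (0.30, 2033 - 2023),
    "Legg: 80% within 13 years (from 2023)": (0.80, 13),
}

for name, (p_total, years) in forecasts.items():
    # Solve 1 - (1 - p_annual) ** years == p_total for p_annual.
    p_annual = 1 - (1 - p_total) ** (1 / years)
    print(f"{name}: implied ~{p_annual:.1%} per year")
```

Under this crude lens, the shorter-horizon forecasts (Leahy, Legg) imply annual probabilities several times higher than the longer-horizon ones (Karnofsky).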

Models:

  • A report by Ajeya Cotra for Open Philanthropy estimates the arrival of transformative AI (TAI) based on “biological anchors”[3]; a minimal sketch of this style of model appears after this list. In the 2020 version of the report, she predicted a 50% chance of TAI by 2050, but developments in AI over the following two years led her to revise that estimate to 2040 in 2022.

  • Matthew Barnett proposes an alternative to Cotra’s biological anchors model; as of Q2 2023, his updated model points to TAI by ~2040.

  • Epoch has conducted a literature review of AI timeline estimates.
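
To make the structure of such compute-based models concrete, here is a minimal Monte Carlo sketch in their general spirit: sample a distribution over how much compute TAI might require, project growth in available compute, and read off a distribution over arrival years. All numbers below are illustrative placeholders, not values taken from Cotra’s or Barnett’s reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: compute needed for TAI (log10 FLOP), normally
# distributed over orders of magnitude, loosely in the spirit of anchoring
# on biological estimates.
log10_flop_needed = rng.normal(loc=35.0, scale=3.0, size=100_000)

# Illustrative assumptions about the compute available to the largest
# training run and its growth rate.
log10_flop_now = 25.0        # rough scale of frontier runs circa 2023
oom_growth_per_year = 0.5    # orders of magnitude of growth per year

# First year in which available compute exceeds each sampled requirement.
years_needed = (log10_flop_needed - log10_flop_now) / oom_growth_per_year
arrival_year = 2023 + np.maximum(years_needed, 0.0)

for year in (2030, 2040, 2050, 2100):
    print(f"P(TAI by {year}) ≈ {np.mean(arrival_year <= year):.0%}")
```

Cotra’s actual report is far richer: it mixes several distinct “anchors” and models algorithmic progress and willingness to spend, whereas this sketch collapses all of that into two assumed parameters.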

All of these forecasts are speculative,[4] rest on various assumptions, use different definitions, and are probably subject to some selection bias.[5] However, they broadly agree that HLAI is plausible within the lifetimes of most people alive today. What’s more, these forecasts generally seem to have been getting shorter over time.[6]


  1. We concentrate here on HLAI and similar levels of capability, such as transformative AI (TAI), which may differ from AGI. For more info on these terms, see this explainer. ↩︎

  2. Metaculus is a platform that aggregates the predictions of many individuals, and tends to have a decent track record at making predictions related to AI. ↩︎

  3. The author estimates the amount of computation performed by biological evolution in the development of human intelligence and argues that this should be considered an upper bound on the amount of synthetic compute necessary to develop HLAI. ↩︎

  4. Scott Alexander points out that researchers who appear prescient one year sometimes predict barely better than chance the next year. ↩︎

  5. One can expect people with short timelines to be overrepresented in those who study AI safety, as shorter timelines increase the perceived urgency of working on the problem. ↩︎

  6. There have been many cases where AI performance on a task has gone from near zero to essentially solved within a short time. Such sudden capability jumps are worrying because they leave little time to prepare. ↩︎