Considering how hard it is to predict the future, why do we think we can say anything useful about AGI today?
-
This branch of reasoning seems to come from a philosophy that it’s possible to say or believe nothing. This is false. You have to have some predictions, and take some actions.
-
Often ends up being a fight over what the priors should be
-
Some people treat the prior as "things will stay the same." This has a terrible track record: anyone applying it over the past few centuries on mid-to-long-term horizons would have failed badly.
-
https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/
-
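The Open Phil report above builds on rule-of-succession-style reasoning. As a toy illustration (my numbers and framing, not the report's actual model): treat each year of AI research since 1956 as one failed "trial" at building AGI, and apply Laplace's rule of succession. Under Laplace, after n failures the chance of success within the next k trials works out to k / (n + k + 1).

```python
# Toy rule-of-succession calculation, in the spirit of semi-informative priors.
# Assumption (illustrative only): one "trial" per year of AI research since 1956.

def p_success_within(n_failures: int, k: int) -> float:
    """Laplace's rule: after n failed trials and no successes,
    P(at least one success in the next k trials) = k / (n + k + 1)."""
    return k / (n_failures + k + 1)

n = 2024 - 1956  # 68 failed yearly trials since the Dartmouth workshop
print(round(p_success_within(n, 30), 3))  # chance within the next 30 years
```

The actual report adjusts the first-trial probability and the choice of trial definition (calendar years vs. compute or researcher-years), which moves the answer around a lot; the point of the toy version is just that even a deliberately ignorant prior gives a non-trivial probability.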
https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/
-
There are no good reference classes for AGI
-
But there are some:
-
Machine learning systems
-
Humans
-
Theoretical construct: a true hard optimizer
-
See https://bounded-regret.ghost.io/thought-experiments-provide-a-third-anchor/
-
Agency-ness -> knowing something about where it ends up, even if we can't predict much about the path there
-
Extrapolation of current economy leads to economic singularity in our lifetime
-
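One way to see the extrapolation claim is a toy finite-time-singularity model (my simplification, not a fitted result): assume, hypothetically, that the economy's growth *rate* scales with its size, i.e. dy/dt = a·y². Then y(t) = y0 / (1 - a·y0·(t - t0)), which diverges at a finite time t* = t0 + 1/(a·y0), i.e. one current-doubling-time-ish horizon away.

```python
# Toy superexponential extrapolation: dy/dt = a * y**2 blows up in finite time.
# Assumption (stylized, hypothetical): today's growth rate g0 = a * y0.

def singularity_year(t0: float, growth_rate: float) -> float:
    """Blow-up year for dy/dt = a*y^2, where growth_rate = a*y0 is the
    instantaneous growth rate at time t0."""
    return t0 + 1.0 / growth_rate

# Stylized numbers: ~3% gross-world-product growth in 2024.
print(round(singularity_year(2024, 0.03), 1))  # 2057.3
```

With exactly exponential growth (rate independent of size) there is no singularity at all, so the conclusion hinges entirely on whether growth is genuinely superexponential; historical long-run fits (the kind the Cold Takes post discusses) are what make that assumption worth taking seriously.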
The question often disguises "shouldn't we just do nothing, since we don't know?" The answer to that is "no."
-
A better framing: things will get completely crazy, and we haven't thought about it anywhere near hard enough; collectively, probably less than about "how did Sherlock fake his death in that TV show"
-
You can extrapolate and try to figure out different scenarios using the normal methods of careful thought.
-
We see some past successes from attempts at extrapolation (sci-fi predicting submarines or space travel)
-
Reasoning from first principles can work. Not perfectly, but what the hell else do we have?
-
In general, how far you can extrapolate depends on the complexity of your model