How and why should I form my own views about AI safety?
Forming an “inside view” of AI safety is the standard advice for aspiring AI safety researchers, policymakers, grantmakers, and anyone else making strategic decisions to mitigate AI risk. There are good reasons for this advice: if your opinions are not informed by mechanistic models of how AI risk might play out and what success stories might look like, they will remain largely unfounded. This can lead to pouring significant effort into work and upskilling projects that are off the critical path to good futures. It can even lead to actions which, while well-intentioned, are actively counterproductive. Being comfortable updating your views and actively seeking the truth is critical, because it allows you to notice when you’ve been focusing on the wrong area.
A key part of forming inside views is taking time to think carefully about how the strategic landscape will unfold and how different interventions might shape it. If you want to steer the future reliably, it’s worth investing the time to build well-grounded models, as they improve both your own decision-making and the AI safety movement’s epistemics.
In AI safety, as in most areas, it helps to read other people’s ideas and compare your predictions and frames against alternatives to inform your own perspective. Reading through different researchers’ threat models and success stories can be particularly valuable. Some other resources to explore:
- The Machine Intelligence Research Institute’s "Why AI safety?" info page contains links to relevant research.
- The Effective Altruism Forum has an article called "How I formed my own views on AI safety", which could be helpful.
- There is also this article from Vox.
- Below is a Robert Miles YouTube video that can be a good place to start.