How and why should I form my own views about AI safety?

Forming an “inside view” of AI safety is the standard advice for aspiring AI safety researchers, policymakers, grantmakers, and anyone else making strategic decisions to mitigate AI risk. There are good reasons for this advice: if your opinions are not informed by mechanistic models of how AI risk might play out and what success stories might look like, they will remain largely unfounded. This can lead to misallocating substantial effort toward work and upskilling projects that are off the critical path to good futures. It can even lead to actions which, while well-intentioned, are actively counterproductive. Being comfortable updating your views and actively seeking the truth is critical, because it allows you to notice when you’ve been focusing on the wrong area.

A key part of forming inside views is taking time to think carefully about how the strategic landscape will unfold, and how different interventions might affect it. If you want to steer the future reliably, it’s worth investing the time to build well-grounded models, as they improve both your own decision-making and the AI safety movement’s collective epistemics.

In AI safety, as in most areas, it helps to read other people’s ideas and compare your predictions and frames against alternatives to inform your own perspective. Reading through different researchers’ threat models and success stories can be particularly valuable, and there are many other questions and framings worth exploring.