How can I work on AI safety outreach in academia and among experts?

Some ideas about AI risk field-building are collected in the post announcing the AI Safety Field Building (AISFB) hub and this comment (though AISFB has now ended). One example of field-building work is Vael Gates's interviews with AI researchers.

To do outreach well, you'll need to understand academia, or whichever group of experts you're reaching out to. Ideally, you'd learn this as an academic yourself: the most efficient strategy might be to join academia and do field-building on the side. For example, more than one PhD student has gotten their supervisor interested in AI safety. However, it's wise to raise AI safety with a prospective supervisor before you start your PhD program; at a minimum, check whether they're open to letting you work on AI-safety-motivated problems.

For more advice on talking to academics about AI safety, see "Lessons learned from talking to >100 academics about AI safety" by Marius Hobbhahn.

Make sure that, in addition to learning how to communicate with these audiences and building credibility with them, you develop a deep understanding of the alignment problem. That way, when people challenge your models, you can respond usefully rather than getting confused. At a minimum, aim to be able to answer the commonly asked questions.