What AGI safety reading lists are there?

A good place to dive in is the AGI Safety Fundamentals curriculum, which is designed to give a high-level understanding of the AI alignment problem and some of the key research directions that aim to solve it. The curriculum is kept up to date and is split into eight weeks' worth of readings, as it's meant to serve as the basis for study groups.

The Intro to ML Safety course also has a reading list, and is considerably more technical than AGI Safety Fundamentals.

Another good resource is the Alignment Forum, especially its curated sequences, which explore specific concepts in depth.

Vael Gates maintains a list of resources for AI researchers interested in AGI safety here.

If you’re more interested in books than in papers and blog posts, MIRI has a list of recommended books on background topics that are likely to be useful for research. These aren’t specifically about AI safety, though.

There are also study guides by John Wentworth and Akash describing topics and approaches that are useful if you want to go deep into AI safety.