If you like interactive FAQs, you're in the right place already! Joking aside, some great entry points are the AI alignment playlist on YouTube; the “The Road to Superintelligence” and “Our Immortality or Extinction” posts on Wait But Why for a fun, accessible introduction; and Vox's “The case for taking AI seriously as a threat to humanity” as a high-quality mainstream explainer piece.

The AI Does Not Hate You: Superintelligence, Rationality, and the Race to Save the World is a very readable book-length introduction to the technical challenge and the growing movement to tackle it; more book recommendations are in the follow-up question.

The free online Cambridge course on AGI Safety Fundamentals provides a strong grounding in much of the field, along with a cohort and mentor to learn with. There's even an Anki deck for people who like spaced repetition!

This post on Levelling Up in AI Safety Research Engineering collects many resources, with a list of other guides at the bottom. There is also a Twitter thread here with some programs for general upskilling and others for safety-specific learning.

The Alignment Newsletter (also available as a podcast), the Alignment Forum, and the AGI Control Problem subreddit are great for keeping up with the latest developments.