Browse our introductory content
Our “Intro to AI safety” micro-course is a collection of short readings that serve as a comprehensive introduction to the topic of AI safety.
Our Intro to AI safety video playlist illustrates many of the most important points about AI safety in a way that is entertaining and easy to understand.
Listen to an introductory podcast episode (or a few)
We recommend Dwarkesh Patel’s interview with Paul Christiano, a leading AI safety researcher. The interview provides an introduction to AI risk and discusses many important AI safety concepts.
Book
We recommend the book “Uncontrollable” by Darren McKee, which concisely examines the risks posed by advanced AI. The book highlights the need for effective AI governance and safety measures, and offers practical solutions to ensure AI benefits society while minimizing risks.
YouTube
We recommend the YouTube channel Robert Miles AI Safety, which presents complex AI safety concepts in an accessible format to foster awareness and understanding of the ethical and safety considerations in AI development. Note: Rob is also the founder of this site.
Podcast series
We recommend the 80,000 Hours Podcast, which explores a range of topics centered on existential risks and high-impact altruism. Many episodes feature high-quality and easy-to-understand content on AI safety.
Or, browse our full list of podcasts
Newsletter
We recommend the AI Safety Newsletter by the Center for AI Safety (CAIS), which offers curated updates on key AI safety developments. It breaks down complex topics into short segments that are accessible both to newcomers and to those already deeply engaged with AI safety.
Twitter/X
We recommend following AGI Safety Core, a group of thinkers in AI who post about AI safety.
Take an online course
We recommend taking an online course if your interests have narrowed to a specific subset of AI safety, such as AI alignment research or AI governance.
The AI Safety Fundamentals (AISF) Governance Course, for example, is especially suited for policymakers and similar stakeholders interested in AI governance mechanisms. It explores policy levers for steering the future of AI development.
The AISF Alignment Course is especially suited for people with a technical background interested in AI alignment research. It explores research agendas for aligning AI systems with human interests.
Note: If you take the AISF courses, consider also exploring additional perspectives on AI safety, such as The Most Important Century blog post series, to help avoid homogeneity in the field.
Note: AISF courses do not accept all applicants. If your application is unsuccessful, we recommend working through the course materials via self-study.
Get into LessWrong and its subset, the Alignment Forum
Most people deeply involved in AI existential safety ultimately end up in this online, forum-based community, which fosters high-quality discussions about AI safety research and governance.
Sign up for events
Events, typically conferences and talks, are often held in person and last one to three days.
We've highlighted EAGx, an Effective Altruism conference dedicated to networking and learning about important global issues, with a strong focus on AI safety. Several EAGx conferences are held each year in major cities around the world.
Or, browse our full list of upcoming events
Sign up for fellowships
AI safety fellowships typically last one to three weeks and are offered both online and in person. They focus on developing safe and ethical AI practices through research, mentorship, and collaboration on innovative solutions.