Start a career in AI safety

There are three major career paths:

AI alignment research

What

AI alignment research is the field dedicated to ensuring that advanced AI systems act in ways that are beneficial to humans and aligned with human values and goals. It involves developing methods and principles to guide AI behavior so that, as these systems become more capable and autonomous, they continue to operate safely, ethically, and in line with their creators' intentions.

Why this is important

To ensure humanity benefits from advanced AI and mitigates catastrophic risks, we must first solve the technical challenge of AI alignment through dedicated research, and then collaborate globally to deploy solutions carefully. While many experts believe alignment is solvable, it remains a complex problem that demands significant intellectual talent.

Where AI alignment researchers work

AI alignment researchers typically work at non-profit organizations dedicated to AI safety and alignment, in academia, independently, or on industry safety teams*.

*Note: Be wary of joining industry “safety” teams: their work often leaks to non-safety parts of the organization, where it improves the AI technology itself and so ends up causing harm.

You might be a good fit if...

You might be a good fit as an AI alignment researcher if you have a quantitative background, you enjoy programming, or you're skilled at breaking down problems logically, hypothesizing, and testing various solutions with high attention to detail.

Interested in pursuing this career path?

Take the following steps to further assess your fit and learn how to make the transition:

Read the 80,000 Hours technical AI safety career review

The review takes about one hour to read and addresses:

  • What this career path involves
  • How to predict your fit
  • The upsides and downsides of this career path
  • Compensation
  • How to enter or transition into this career

Sign up for 1-on-1 career advice with AI Safety Quest & 80,000 Hours (free)

Schedule a 30-minute or 1-hour video call (we recommend booking a call with both organizations). These calls will address your specific questions about the AI alignment research field, confirm your interest and fit, and provide tailored recommendations to help you make the transition.

Note: 80,000 Hours does not accept all applicants.

A process note: Build your knowledge so you can think critically about the roles you pursue

AI safety is a relatively new field with diverse opinions on how best to solve the technical challenge of AI alignment. Many promising avenues likely remain unexplored and many important questions unanswered. It's therefore crucial for AI alignment researchers (and aspiring researchers) to think independently and develop their own models of the problem. If you pursue a career in this field, we recommend educating yourself deeply on the technical challenge of alignment, engaging with other AI safety experts, and thinking critically about current paradigms.

AI governance & policy

What

AI governance is an emerging field focused on shaping how AI technology is developed and deployed through policy, corporate practices, and international coordination. Professionals in this space work to prevent catastrophic risks from advanced AI systems, ensure AI benefits society while minimizing harms, and create frameworks for safe and responsible AI development.

Why this is important

To ensure humanity benefits from advanced AI and mitigates catastrophic risks, technical solutions for AI alignment must be complemented by effective public policy and corporate oversight to keep development tightly controlled and at a cautious pace. Even with successful AI alignment, robust governance is essential to ensure consistent implementation across all sectors and regions.

Where professionals in AI governance usually work

AI governance professionals work in settings like government agencies, international organizations, regulatory bodies, think tanks, research institutions, and private companies. They develop policies, analyze risks, and shape governance frameworks for the safe development and use of AI technologies.

You might be a good fit if...

You might be a good fit for a career in AI governance if you have a background in political science, law, international relations, or economics, or if you have technical expertise in AI or cybersecurity. You could also thrive in this field if you're skilled in research, advocacy, or communicating complex ideas clearly.

Interested in pursuing this career path?

Take the following steps to further assess your fit and learn how to make the transition:

Read the 80,000 Hours AI governance and policy career review

The review takes about one hour to read and addresses:

  • The six categories within AI governance
  • How to predict your fit
  • How to enter or transition into this career
  • Where AI governance work is typically done
  • How this career path can go wrong

Sign up for 1-on-1 career advice with AI Safety Quest & 80,000 Hours (free)

Schedule a 30-minute or 1-hour video call (we recommend booking a call with both organizations). These calls will address your specific questions about the field of AI governance and policy, confirm your interest and fit, and provide tailored recommendations to help you make the transition.

Note: 80,000 Hours does not accept all applicants.

A process note: Build your knowledge so you can think critically about the roles you pursue

Many roles that appear to advance AI safety may actually end up advancing capabilities, and thus cause harm. We recommend learning more about AI safety—particularly the alignment problem—and carefully considering whether a role or action will make AI safer before pursuing it.

AI safety field-building

What

AI safety field-building involves attracting talent and resources to the field, raising awareness about AI safety issues, running upskilling programs and resources, and building the AI safety community.

Why this is important

The AI safety field is still in its early stages, with significant room for growth and maturation. A larger, more mature field will be more likely to solve the alignment problem and to mitigate AI risk through global coordination, and that growth will rest largely on field-building efforts.

You might be a good fit if...

Field-building may be the way to go if neither alignment research nor governance appeals to you or fits your skill set. You may be a particularly good fit if you have a strong sense of agency, leadership ability, or creativity. That said, field-building encompasses a wide variety of roles, so you can likely adapt whatever skills you have to one of them.

Most common field-building roles

Communications & advocacy

Communications involves educating the public and spreading the word about AI safety, most often through websites and social media. People with technical or creative skills can typically find a place in communications; roles include independent content production, software engineering, project management, and design.

Being a grantmaker

Many philanthropists are interested in donating millions of dollars to AI safety, but there currently aren't enough grantmakers to vet funding proposals. Because a randomly chosen proposal has little expected impact, grantmakers can have a large impact by helping philanthropists distinguish promising AI safety projects from less promising ones.

Founding new projects

Founding a new project in AI safety involves identifying a gap in a pressing problem area, formulating a solution, investigating it, and then helping to build an organization by investing in strategy, hiring, management, culture, and so on—ideally building something that can continue without you.

Interested in pursuing this career path?

Take the following steps to further assess your fit and learn how to make the transition:

Sign up for 1-on-1 career advice with AI Safety Quest & 80,000 Hours (free)

Schedule a 30-minute or 1-hour video call (we recommend booking a call with both organizations). These calls will address your specific questions about the field, confirm your interest and fit, and provide tailored recommendations to help you make the transition.

Note: 80,000 Hours does not accept all applicants.

A process note: Build your knowledge so you can think critically about the roles you pursue

Many roles that appear to advance safety may actually end up advancing AI capabilities, and thus cause harm. We recommend learning more about AI safety—particularly the alignment problem—and carefully considering whether a role or action will make AI safer before pursuing it.

Testimonial

Bryce Robertson

Having decided to change my career to one focused on AI safety, I began searching for field-building roles. While staying on scholarship at the EA Hotel (CEEALAR), I spent five months volunteering for Alignment Ecosystem Development (AED), and when the founder stepped back to focus on other projects, he asked me to take over its operations. I applied for and received funding from the Long-Term Future Fund, which now allows me to lead AED full-time.

