There are three major career paths: AI alignment research, AI governance, and AI safety field-building.
AI alignment research

What
AI alignment research is the field dedicated to ensuring that advanced AI systems act in ways that are beneficial to humans and aligned with human values and goals. It involves developing methods and principles to guide AI behavior so that, as these systems become more capable and autonomous, they continue to operate safely, ethically, and in line with the intentions of their human creators.
Why this is important
To ensure humanity benefits from advanced AI and mitigates catastrophic risks, we must first solve the technical challenge of AI alignment through dedicated research, and then collaborate globally to deploy solutions carefully. While many experts believe alignment is solvable, it remains a complex problem that demands significant intellectual talent.
Where AI alignment researchers work
AI alignment researchers typically work at non-profit organizations dedicated to AI safety and alignment, in academia, independently, or on industry safety teams*.
*Note: Be wary of joining industry “safety” teams, as this work often leaks to non-safety parts of the organization, improving the AI technology itself and thus potentially causing harm.
You might be a good fit if...
You might be a good fit as an AI alignment researcher if you have a quantitative background, you enjoy programming, or you're skilled at breaking down problems logically, hypothesizing, and testing various solutions with high attention to detail.
Take the following steps to further assess your fit and learn how to make the transition:
Read the 80,000 Hours technical AI safety career review
The review takes about one hour to read and addresses the key questions about pursuing this path.
Sign up for 1-on-1 career advice with AI Safety Quest & 80,000 Hours (free)
Schedule a 30-minute or 1-hour video call; we recommend booking a call with both organizations! These calls will address your specific questions about the AI alignment research field, confirm your interest and fit, and provide tailored recommendations to help you make the transition.
Note: 80,000 Hours does not accept all applicants.
A process note: Build your knowledge so you can think critically about the roles you pursue
AI safety is a relatively new field with diverse opinions on how best to solve the technical challenge of AI alignment. Many promising avenues and important questions likely remain unexplored. It's therefore crucial for AI alignment researchers (and aspiring researchers) to think independently and develop their own models of the problem. If you pursue a career in this field, we recommend educating yourself deeply on the technical challenge of alignment, engaging with other AI safety experts, and thinking critically about the topic and current paradigms.
There are many roles that support the work of AI alignment researchers, and having high-performing people in these positions is crucial. In a research organization, around half of the staff will be doing non-research tasks that are essential for the organization to perform at its best and have an impact. Some of these roles include:
Operations management at an AI safety research organization
This involves overseeing the day-to-day activities that enable the organization to function efficiently and effectively. Responsibilities may include administrative support, resource allocation, HR, facilities management, IT support, project coordination, etc.
Research management at an AI safety research organization
This involves overseeing and coordinating research activities to ensure they align with the mission of promoting safe AI development. Responsibilities include setting research priorities, managing teams, allocating resources, fostering collaboration, monitoring progress, and upholding ethical standards.
Being an executive assistant to an AI safety researcher
This involves managing administrative tasks to enhance the researcher's productivity. Responsibilities include scheduling meetings, handling correspondence, coordinating travel, organizing events, and otherwise ensuring they can focus on impactful AI safety efforts.
AI governance

What
AI governance is an emerging field focused on shaping how AI technology is developed and deployed through policy, corporate practices, and international coordination. Professionals in this space work to prevent catastrophic risks from advanced AI systems, ensure AI benefits society while minimizing harms, and create frameworks for safe and responsible AI development.
Why this is important
To ensure humanity benefits from advanced AI and mitigates catastrophic risks, technical solutions for AI alignment must be complemented by effective public policy and corporate oversight that keep development tightly controlled and at a cautious pace. Even with successful AI alignment, robust governance is essential to ensure alignment solutions are implemented consistently across all sectors and regions.
Where professionals in AI governance usually work
AI governance professionals work in settings like government agencies, international organizations, regulatory bodies, think tanks, research institutions, and private companies. They develop policies, analyze risks, and shape governance frameworks for the safe development and use of AI technologies.
You might be a good fit if...
You might be a good fit for a career in AI governance if you have a background in political science, law, international relations, or economics, or if you have technical expertise in AI or cybersecurity. You could also thrive in this field if you're skilled in research, advocacy, or communicating complex ideas clearly.
Take the following steps to further assess your fit and learn how to make the transition:
Read the 80,000 Hours AI governance and policy career review
The review takes about one hour to read and addresses the key questions about pursuing this path.
Sign up for 1-on-1 career advice with AI Safety Quest & 80,000 Hours (free)
Schedule a 30-minute or 1-hour video call; we recommend booking a call with both organizations! These calls will address your specific questions about the field of AI governance and policy, confirm your interest and fit, and provide tailored recommendations to help you make the transition.
Note: 80,000 Hours does not accept all applicants.
A process note: Build your knowledge so you can think critically about the roles you pursue
Many roles that appear to advance AI safety may actually end up advancing capabilities, and thus cause harm. We recommend learning more about AI safety—particularly the alignment problem—and carefully considering whether a role or action will make AI safer before pursuing it.
AI safety field-building

What
AI safety field-building involves attracting talent and resources to the field, raising awareness of AI safety issues, running upskilling programs, creating resources, and building the AI safety community.
Why this is important
The AI safety field is still in its early stages, with significant room for growth and maturation. A larger, more mature field will be more likely to solve the alignment problem and mitigate AI risk through global coordination, and reaching that maturity will rest largely on field-building efforts.
You might be a good fit if...
Field-building may be the way to go if neither alignment research nor governance appeals to you or fits your skillset. You may be a particularly good fit if you have a strong sense of agency, leadership ability, or creativity. That said, a wide variety of roles exist within field-building, so you can likely adapt whatever skillset you have to one of them.
Most common field-building roles
Communications & advocacy
Communications involves educating the public and spreading the word about AI safety, most often through websites and social media. People with technical or creative skills can typically find a place in communications. Roles include independent content production, software engineering, project management, and design.
Being a grantmaker
There are many philanthropists interested in donating millions of dollars to AI safety, but there currently aren't enough grantmakers to vet funding proposals. Because a randomly chosen proposal has little expected impact, grantmakers can have a large impact by helping philanthropists distinguish promising AI safety projects from less promising ones.
Founding new projects
Founding a new project in AI safety involves identifying a gap in a pressing problem area, formulating a solution, investigating it, and then helping to build an organization by investing in strategy, hiring, management, culture, and so on—ideally building something that can continue without you.
Supporting roles
There are many roles that support the work of AI alignment researchers or people in AI governance, and having high-performing people in these roles is crucial. In a research organization, for example, around half of the staff will be doing non-research tasks essential for the organization to perform at its best and have an impact. Some of the most common supporting roles in AI safety include:
Operations management at an AI safety research organization
This involves overseeing the day-to-day activities that enable the organization to function efficiently and effectively. Responsibilities may include administrative support, resource allocation, HR, facilities management, IT support, project coordination, etc.
Research management at an AI safety research organization
This involves overseeing and coordinating research activities to ensure they align with the mission of promoting safe AI development. Responsibilities include setting research priorities, managing teams, allocating resources, fostering collaboration, monitoring progress, and upholding ethical standards.
Being an executive assistant to someone doing important work on AI safety and governance
This involves managing administrative tasks to enhance their productivity. Responsibilities include scheduling meetings, handling correspondence, coordinating travel, organizing events, and ensuring they can focus on impactful AI safety or governance efforts.
Other technical roles
Working in information security to protect AI (or the results of key experiments) from misuse, theft, or tampering
Becoming an expert in AI hardware as a way of steering AI progress in safer directions
Take the following steps to further assess your fit and learn how to make the transition:
Sign up for 1-on-1 career advice with AI Safety Quest & 80,000 Hours (free)
Schedule a 30-minute or 1-hour video call; we recommend booking a call with both organizations! These calls will address your specific questions about the field, confirm your interest and fit, and provide tailored recommendations to help you make the transition.
Note: 80,000 Hours does not accept all applicants.
A process note: Build your knowledge so you can think critically about the roles you pursue
Many roles that appear to advance safety may actually end up advancing AI capabilities, and thus cause harm. We recommend learning more about AI safety—particularly the alignment problem—and carefully considering whether a role or action will make AI safer before pursuing it.