I’d like to do experimental work (e.g., ML, coding) for AI alignment. What should I do?

Okay, so you want to do experimental AGI safety research. Do you already have an idea you’re excited about, such as a research avenue, a machine learning experiment, or a coding project? Or would you rather get up to speed on existing research, or learn how to get a job in alignment? Continue to whichever branch seems most relevant to you via the related questions below.


