What are some practice or entry-level problems for getting into alignment research?
- Resources that (I think) new alignment researchers should know about — EA Forum
- List of technical AI safety exercises and projects — LessWrong
- 200 Concrete Open Problems in Mechanistic Interpretability: Introduction — AI Alignment Forum
- Suggest a solution to ELK (Eliciting Latent Knowledge); for example submissions, see "$10k Prize in ARC's Eliciting Latent Knowledge Competition" — Edmund Mills
- Work through SERI MATS mentors' questions: "Mentors" — SERI ML Alignment Theory Scholars Program