What is the Center on Long-Term Risk (CLR)'s research agenda?

The Center on Long-Term Risk (CLR) focuses primarily on reducing suffering risk (s-risk): the risk of a future with large negative value. CLR does theoretical research in game theory and decision theory, aimed primarily at multipolar AI scenarios, in which multiple advanced AI systems interact.

CLR also works on improving coordination in prosaic AI scenarios, on risks from malevolent actors, and on forecasting the future of AI. The Cooperative AI Foundation shares personnel with CLR but is not formally affiliated with it, and its focus is not limited to s-risks.
