What projects is CAIS working on?
The Center for AI Safety (CAIS)[^1] is a San Francisco-based research non-profit that "focuses on mitigating high-consequence, societal-scale risks posed by AI". It pursues technical research aimed at improving the safety of existing AI systems, as well as multi-disciplinary conceptual research aimed at framing and clarifying problems and approaches within AI safety.
CAIS also works on field-building to help support and expand the AI safety research community. Its field-building projects include:
- The CAIS Compute Cluster, which offers compute for AI safety research
- Prize incentives for safety-relevant research, such as improving ML safety benchmarks, moral uncertainty detection by ML systems, and forecasting by ML systems
- An ML Safety course and scholarships for ML students doing safety-related research
[^1]: Not to be confused with Comprehensive AI Services, a conceptual model of artificial general intelligence proposed by Eric Drexler, which is also usually abbreviated "CAIS".