What is DeepMind's safety team working on?

DeepMind has both a machine learning safety team focused on near-term risks and an alignment team working on risks from artificial general intelligence (AGI). The alignment team pursues many different research agendas.

See Shah's comment for an overview of their research, including descriptions of some work that is currently unpublished.
