What is Stampy's AI Safety Info?

This is an open effort to build a comprehensive FAQ about artificial intelligence existential safety: the field working to ensure that, when we build superintelligent artificial systems, they are aligned with human values and act in ways compatible with our survival and flourishing.

The goals of the project are to:

  • Offer a one-stop shop for high-quality answers to common questions about AI alignment.

    • Let people answer questions in a way that scales, freeing up researcher time while allowing more people to learn from a reliable source.

    • Make external resources easier to find by linking to them from a search engine that gets smarter the more it's used.

  • Provide a form of legitimate peripheral participation for the AI safety community, as an onboarding path with a flexible level of commitment.

    • Encourage people to think, read, and talk about AI alignment while answering questions, creating a community of co-learners who can give each other feedback and social reinforcement.

    • Provide a way for budding researchers to demonstrate their understanding of the topic and their ability to produce good work.

  • Collect data about the kinds of questions people actually ask so we can better focus resources on answering them.

    • Track reactions so we can learn which answers need work.

    • Identify missing content to create.

If you would like to help out, feel free to join us on the Discord and jump right into editing some answers, or read on for more details about the project.