Difficulty of Alignment
8 pages tagged “Difficulty of Alignment”
Is the worry that AI will become malevolent or conscious?
Why would we only get one chance to align a superintelligence?
Why is AI alignment a hard problem?
What are the main sources of AI existential risk?
What are accident and misuse risks?
Why would a misaligned superintelligence kill everyone?
What is a “treacherous turn”?
Why would misaligned AI pose a threat that we can’t deal with?