Can't we just tell an AI to do what we want?
Can we constrain a goal-directed AI using specified rules?
Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?
Why don't we just not build AGI if it's so dangerous?
Why can't we just do X?
Why can't we just turn the AI off if it starts to misbehave?