Why can’t we just use natural language instructions?

DEPRECATED: Duplicate of "Could we tell the AI to do what's morally right?"

When one person gives another a set of natural-language instructions, they are relying on a great deal of background information already stored in the other person's mind.

If you tell me "don't harm other people," I already have a conception of what does and doesn't count as harm, of what does and doesn't count as a person, and my own complex moral reasoning for handling the edge cases: situations where some harm is inevitable, or where harming someone is necessary for self-defense or the greater good.

All of those definitions and systems of decision-making are already in our minds, so it's easy to take them for granted. An AI is a mind built from scratch: none of that background comes for free, so programming a goal into it is not as simple as giving it a natural-language command.
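
To make this concrete, here is a minimal sketch in Python (all names and definitions hypothetical, not any real system's API) of what happens when we try to reduce "don't harm other people" to an explicit rule a program can check. Every definition the instruction leaves implicit has to be written out by hand, and whatever the programmer fails to anticipate is simply absent:

```python
# A toy illustration of why "don't harm people" can't be handed to a
# machine as-is: the instruction must be reduced to explicit, formal
# predicates, and every definition humans carry implicitly has to be
# spelled out by the programmer.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    physical_injury: bool   # one crude proxy for "harm"
    is_self_defense: bool   # one of many edge cases humans handle implicitly

def naive_harm_check(action: Action) -> bool:
    """First attempt: 'harm' = physical injury. This silently ignores
    psychological, economic, and indirect harm, among much else."""
    return action.physical_injury

def permitted(action: Action) -> bool:
    # Even with one edge case patched in, the rule encodes only what
    # the programmer thought to write down, not human moral reasoning.
    if action.is_self_defense:
        return True
    return not naive_harm_check(action)

actions = [
    Action("push someone out of the path of a car",
           physical_injury=True, is_self_defense=False),
    Action("spread a hurtful rumor",
           physical_injury=False, is_self_defense=False),
]

for a in actions:
    print(f"{a.description!r}: permitted={permitted(a)}")
# The first action is forbidden (it causes injury, but for the greater
# good); the second is allowed (it harms, but not physically). Both
# verdicts are wrong, because the formalization is incomplete.
```

The point of the sketch is not that better predicates couldn't be written, but that the human giving the instruction never had to write any of them: the shared background did the work, and a machine starts with none of it.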