Can you give an AI a goal which involves “minimally impacting the world”?

Giving an AI a goal which involves minimally impacting the world is an active area of AI alignment research, called Impact Regularization. It's not trivial to formalize "impact" in a way that won't predictably go wrong. For example, if impact is measured as the entropy the AI produces, the resulting AI would likely try very hard to put out all the stars as soon as possible, since stars produce so much entropy. Still, progress is being made.
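
As a toy illustration of the general shape of these proposals, the sketch below adds a penalty to an agent's task reward for how far its action moves the world away from an "inaction" baseline. The function names, the feature-vector state representation, and the distance-based impact measure are all illustrative assumptions rather than any published proposal's actual formulation; real impact measures (such as relative reachability or attainable utility preservation) are designed more carefully precisely to avoid failure modes like the entropy example above.

```python
import numpy as np

def regularized_reward(task_reward, state_after_action, state_after_noop,
                       impact_weight=1.0):
    """Toy impact-regularized reward: task reward minus a penalty for
    deviating from what the world would have looked like under inaction.

    `state_after_action` and `state_after_noop` are feature vectors
    describing the world after the agent acts vs. after it does nothing
    (the baseline). This is a sketch, not a real impact measure.
    """
    # Toy impact measure: distance between the actual and baseline states.
    impact = np.linalg.norm(
        np.asarray(state_after_action) - np.asarray(state_after_noop)
    )
    return task_reward - impact_weight * impact

# An action that earns more task reward but perturbs the world a lot can
# score worse than a gentler action that earns less task reward.
print(regularized_reward(1.0, [5.0, 0.0], [0.0, 0.0]))  # 1.0 - 5.0 = -4.0
print(regularized_reward(0.5, [0.1, 0.0], [0.0, 0.0]))  # 0.5 - 0.1 = 0.4
```

The hard part, and the focus of the research, is choosing the baseline and the impact measure so that the penalty tracks what we actually care about instead of a proxy (like entropy) that the AI can game.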