What about having a human supervisor who must approve all the AI's decisions before executing them?

While an AI is relatively weak, a human supervisor might be able to make sure that it isn’t doing anything dangerous. However, even at this stage certain problems arise.

One problem is speed: a human supervisor is far slower than a computer system, so requiring approval delays every action the system wants to take. For example, if a human had to evaluate every algorithmic stock trade before it was executed, the delay would erase much of the system’s benefit.
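To make the bottleneck concrete, here is a minimal sketch of such an approval gate (in Python, with entirely hypothetical function and class names, not drawn from any real trading system): every proposed trade blocks until a person responds, so throughput is capped by human reaction time in seconds rather than by the machine’s microsecond cycle time.

```python
# A minimal sketch of a human-approval gate; all names are hypothetical.
import time
from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    quantity: int
    side: str  # "buy" or "sell"

def propose_trades() -> list[Trade]:
    # Stand-in for the AI's decision process; a real system would
    # generate candidate trades in microseconds.
    return [Trade("ACME", 100, "buy"), Trade("ACME", 50, "sell")]

def human_approves(trade: Trade) -> bool:
    # The supervision gate: the whole pipeline blocks here until a
    # person answers -- seconds at best, versus the machine's
    # microsecond cycle time.
    answer = input(f"Approve {trade.side} {trade.quantity} {trade.symbol}? [y/n] ")
    return answer.strip().lower() == "y"

def execute(trade: Trade) -> None:
    print(f"executed: {trade.side} {trade.quantity} {trade.symbol}")

start = time.monotonic()
for trade in propose_trades():
    if human_approves(trade):  # every cycle waits on a human here
        execute(trade)
print(f"total: {time.monotonic() - start:.1f}s, dominated by approval time")
```

In settings like high-frequency trading, where the competitive edge is measured in microseconds, a gate like this does not just slow the system down; it removes the advantage entirely.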

As the AI becomes more intelligent, it can craft plans whose implications are lost on the supervisor. Unable to fully understand these plans, the supervisor may come to rely on other AI systems for help in evaluating them; but if those systems are themselves superintelligent, the same concerns apply to them.

Furthermore, the AI could learn what persuades the human supervisor most effectively, using rhetoric and other sales tactics to convince them to approve its plans.

From a pragmatic perspective, even if this approach had some chance of working, the advantages of autonomy will be attractive to at least some developers and users, who will deploy autonomous systems anyway. Even at the current level of development, people have started using AutoGPT, which generates its own tasks and “makes decisions” autonomously.