Why does AI need goals in the first place? Can’t it be intelligent without any agenda?

Since we design AI systems with some use in mind, they are likely to have something that functions as a goal. Intelligence is closely related to capability: when we say a person is generally capable, we mean they would be effective at pursuing more or less whichever goals they took on. Similarly, an AI will be capable of pursuing whatever goals we aim it at.

Another word for this is optimization: AIs are programs that optimize for specific end states. Goal-directed behavior arises naturally when systems are trained on an objective. An AI not trained or programmed to do well by some objective function would not be good at anything, and would be useless.
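To make "optimizing for an objective" concrete, here is a minimal sketch (a toy, not any real training setup): a gradient-descent loop that steadily pushes a number toward whatever target the objective function encodes. The names `optimize`, `objective`, and `target` are illustrative inventions.

```python
def optimize(objective, x, lr=0.1, steps=200, eps=1e-6):
    """Minimize `objective` by finite-difference gradient descent."""
    for _ in range(steps):
        # Estimate the slope of the objective at x and step downhill.
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

# The "goal" is implicit in the objective: distance from a target value of 3.
target = 3.0
result = optimize(lambda x: (x - target) ** 2, x=0.0)
# result ends up very close to 3.0
```

The point is that nothing in the loop mentions a goal explicitly; the goal-directed behavior (homing in on 3.0) falls out of repeatedly doing better by the objective.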

For example, we might want our AI to give us advice, but then giving good advice itself becomes a goal which it optimizes for. (Or, more likely, it optimizes for a proxy of that goal, since we don’t know how to program specific goals into our systems.)
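The gap between a goal and its proxy can be shown in miniature. Suppose (hypothetically) we want helpful advice but can only measure answer length as a proxy; the candidate answers and their scores below are made up for illustration.

```python
# Toy candidates with an (unmeasurable) true score and a measurable proxy.
candidates = {
    "Concise, correct answer": {"helpfulness": 0.9, "length": 4},
    "Rambling, padded answer": {"helpfulness": 0.3, "length": 40},
}

# Selecting on the proxy need not select what we actually wanted.
best_by_proxy = max(candidates, key=lambda c: candidates[c]["length"])
best_by_goal = max(candidates, key=lambda c: candidates[c]["helpfulness"])
# best_by_proxy and best_by_goal pick different answers
```

An optimizer pointed at the proxy will reliably pick the padded answer, even though that was never what we meant.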

We might try developing a “tool AI”, which provides information or assistance to people without itself being a utility maximizer. But as it starts optimizing more and more powerfully, it is likely to become more agent-like and to develop things which function as goals.