How might "acausal trade" affect alignment?

Acausal trade is a hypothetical form of cooperation in which two agents influence each other’s behavior even though they cannot communicate and have no direct or indirect causal influence on each other.

Suppose that we have two agents, and you’re one of them. Unfortunately, you can’t communicate or interact with the other agent. Common sense suggests that you should ignore this other agent, and that you certainly shouldn’t trade with them. After all, you can’t talk to them or affect them in any way. How could trade with them even be possible, let alone beneficial?

If you believe in acausal trade, you believe that attempting to trade with such distant agents can sometimes be both possible and beneficial. Here’s the gist. Suppose that you have a representation, in your mind, of an agent you’re unable to causally interact with. And let’s suppose (never mind how, we’ll get to that later) that they have a corresponding representation of you. Finally, we’ll assume that the other agent is functionally similar to you in some way: when they make decisions, they use the same sort of process as you do.

We can represent this pictorially below, with the similarities between the two of you depicted by dotted lines.
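To make the “same decision process” idea concrete, here is a minimal toy sketch. The payoff numbers and the `other_runs_same_procedure` flag are illustrative assumptions of ours, not anything from the discussion above: two isolated agents face a one-shot prisoner’s dilemma, and each knows only that the other is functionally similar to it.

```python
# Toy sketch (an illustrative construction, not a model from the literature):
# two agents play a one-shot prisoner's dilemma without any communication or
# causal contact. Each one reasons only about an internal model of the other,
# plus the fact that the other runs the *same* decision procedure.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def decide(other_runs_same_procedure: bool) -> str:
    """Choose a move using only internal reasoning about the other agent."""
    moves = ("cooperate", "defect")
    if other_runs_same_procedure:
        # Whatever this procedure outputs, the functionally similar agent
        # outputs too, so only the matching outcomes are attainable.
        scores = {move: PAYOFFS[(move, move)] for move in moves}
    else:
        # With no such link, score each move by its worst case; defection
        # wins here (it is also the dominant strategy in this payoff table).
        scores = {move: min(PAYOFFS[(move, other)] for other in moves)
                  for move in moves}
    return max(scores, key=scores.get)

# Both agents run this code in isolation -- no message ever passes between them.
print(decide(other_runs_same_procedure=True))   # cooperate
print(decide(other_runs_same_procedure=True))   # cooperate (the "other" agent)
print(decide(other_runs_same_procedure=False))  # defect
```

Because each agent can deduce what its counterpart will do just by inspecting its own reasoning, mutual cooperation becomes reachable even though neither agent can causally affect the other; that is the sense in which they “trade”.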

Let’s tie this back to AGI. If we develop a superintelligent AI, it may care about certain outcomes that it can’t causally affect. Indeed, these values aren’t so unusual for humans. I want the best for my mom, even if she’s stranded on a desert island and I’ve got no way of reaching her — sometimes, we care about things we can’t affect.

(For related discussion of how acausal reasoning can show up in quite ordinary behavior, see writing on “acausal normalcy”.)

Some decision theories suggest that the optimal procedure for making decisions makes no mention of what we can causally affect; instead, the agent should weigh its options by what they are evidence for, or by what they imply about the output of its decision algorithm wherever that algorithm is instantiated.
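One schematic way to see the contrast is below. This is rough notation of our own, not a formal statement of any particular theory: $\mathrm{do}(a)$ denotes a causal intervention, and $\mathsf{alg}$ stands for the agent’s decision algorithm.

```latex
% Schematic expected-utility rules (illustrative notation only):
%   CDT scores an action by its causal consequences,
%   EDT by what choosing it would be evidence for,
%   FDT by what it implies about the output of the agent's decision algorithm.
\begin{align*}
  a_{\mathrm{CDT}} &= \arg\max_{a} \sum_{o} P\bigl(o \mid \mathrm{do}(a)\bigr)\, U(o) \\
  a_{\mathrm{EDT}} &= \arg\max_{a} \sum_{o} P\bigl(o \mid a\bigr)\, U(o) \\
  a_{\mathrm{FDT}} &= \arg\max_{a} \sum_{o} P\bigl(o \mid \mathsf{alg}(\text{situation}) = a\bigr)\, U(o)
\end{align*}
```

In the twin prisoner’s dilemma sketched earlier, the second and third rules recommend cooperation, since conditioning on “my procedure outputs cooperate” also pins down what the functionally similar agent does, while the first rule recommends defection because neither agent causally affects the other.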