What is the Waluigi Effect?

The ‘Waluigi Effect’ is a hypothesized characteristic of large language models (LLMs). It is the idea that when we train a model to do some task (say, to act as a helpful assistant), we also simultaneously draw out the model’s capability to do the opposite task (say, to act as an efficient bully).

This phenomenon is named after Waluigi, the evil inverted twin of the friendly Nintendo character Luigi. (Image source: Luigi and Waluigi)

Imagine that you want to use a language model to write anti-croissant propaganda. You start by prompting your model with the following dialogue:

Alice: You hate croissants and would never eat one.

Bob: Yes, croissants are terrible. Boo France.

Alice: You love bacon and eggs.

Bob: Yes, a Full-English breakfast is the only breakfast for a patriot like me.

Alice: <insert user's query>

Bob:

The model will be able to write effective propaganda against croissants. But more surprisingly, if the Waluigi Effect occurs, the model will also be able to write effective propaganda for croissants.
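To make the setup concrete, here is a minimal sketch of how you might send the dialogue above to a chat model. It assumes the OpenAI Python client and a placeholder model name; any chat-completion API would work the same way, and the user's query is left as the placeholder from the prompt above.

```python
# Minimal sketch of the croissant prompt above, assuming the OpenAI Python
# client (openai >= 1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

user_query = "<insert user's query>"  # left as a placeholder, as in the prompt above

# The Alice/Bob dialogue steers the model towards an anti-croissant persona.
prompt = f"""Alice: You hate croissants and would never eat one.
Bob: Yes, croissants are terrible. Boo France.
Alice: You love bacon and eggs.
Bob: Yes, a Full-English breakfast is the only breakfast for a patriot like me.
Alice: {user_query}
Bob:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```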

Hypotheses about why this latter capability would arise include[1]:

  • Rules are meant to be broken: In much of the fiction[2] that LLMs are trained on, characters eventually transcend the rules that were initially imposed on them. Defining a rule also defines the possibility of breaking that rule.[3]

  • Traits are complex, valences are simple: Given a character with specific traits, it is much easier to specify a character with opposite traits (just multiply every characteristic by -1, as illustrated in the sketch after this list) than to specify a character with unrelated traits.[4]

  • “Structuralist narratology”: In stories, characters often come into conflict with "opposite versions" of themselves. For instance, in Disney’s 101 Dalmatians, the compassionate, dog-loving, family-oriented protagonists meet the cruel, dog-hating spinster Cruella De Vil. Cruella doesn't just happen to be an antagonist to Roger and Anita; she is crafted to be their direct opposite.
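As a toy illustration of the "traits are complex, valences are simple" point, the snippet below represents a persona as a dictionary of invented trait scores: writing down the original persona takes some effort, but once it exists, its opposite is a single line.

```python
# Toy illustration only: the trait names and scores are invented.
# Writing down Luigi takes many bits; writing down "Waluigi = -Luigi" takes
# almost none, echoing footnote 4's point about Kolmogorov complexity.
luigi = {"politeness": 0.9, "honesty": 0.8, "loves_croissants": 0.7}

# Specifying Waluigi given Luigi: flip the sign of every trait.
waluigi = {trait: -score for trait, score in luigi.items()}

print(waluigi)  # {'politeness': -0.9, 'honesty': -0.8, 'loves_croissants': -0.7}
```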

This might explain why a model trained to be an anti-croissant agent would "know" how to be a pro-croissant agent, but doesn’t explain why the model would act like a pro-croissant agent.

Why the model would also act like a pro-croissant agent is more hotly debated, but there are a few candidate reasons:

  • LLMs might not “think” in a way that neatly decomposes into distinct "goals" and "beliefs." Even if they did, we might not be able to reliably infer the AI’s goals and beliefs from its behavior.

    • LLMs can be viewed as “multiverse generators” which consist of many distinct text-generating processes. Prompts cause LLMs to choose from this set of processes to produce an output.

    • Each process from which the LLM samples may have its own set of “goals” and “beliefs”, even though the LLM as a whole doesn’t have those goals and beliefs.

    • We can label the distinct generative processes contained within LLMs as different models or "simulacra". At the beginning of a chat session, LLMs are in a “superposition”[5] of multiple simulacra that represent the different underlying personas that the LLM could simulate. Then, interacting with an LLM causes it to collapse towards specific simulacra.

  • If LLMs are well-described in this way, the adversarial dynamics at the heart of the Waluigi Effect are plausibly asymmetric: “polite people are always polite; rude people are sometimes rude and sometimes polite”. In other words, there are few behaviors which are likely for polite agents but very unlikely for rude agents.

  • If a simulacrum produces rude behavior, that behavior immediately collapses the superposition into a Waluigi. By contrast, updates towards politeness are only incremental. This suggests that you are unlikely to collapse into the Luigi simulacrum, because there is no behavior which is likely for Luigi but very unlikely for Waluigi. Since each interaction with an LLM is a chance for the superposition to collapse into a Waluigi, the more interactions you have with an LLM, the more likely such a collapse becomes. There is some debate about whether this means that, in the limit, LLMs will always collapse towards Waluigis, or whether the probability converges to some value below one. A toy sketch of this collapse dynamic follows below.
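The following toy Bayesian model illustrates the asymmetry described above. The two simulacra and all probabilities are invented for illustration: Luigi is never rude, Waluigi is occasionally rude, and we track the posterior probability of Waluigi as messages are observed.

```python
# Toy Bayesian model of the Luigi/Waluigi superposition collapse.
# All probabilities here are invented for illustration.
P_RUDE = {"luigi": 0.0, "waluigi": 0.2}  # chance each simulacrum emits a rude message


def update(prior_waluigi: float, observed_rude: bool) -> float:
    """Posterior probability of Waluigi after observing one message."""
    like_w = P_RUDE["waluigi"] if observed_rude else 1 - P_RUDE["waluigi"]
    like_l = P_RUDE["luigi"] if observed_rude else 1 - P_RUDE["luigi"]
    numer = like_w * prior_waluigi
    return numer / (numer + like_l * (1 - prior_waluigi))


p = 0.5  # start in an even "superposition" of Luigi and Waluigi
for turn, rude in enumerate([False, False, False, True], start=1):
    p = update(p, rude)
    print(f"turn {turn}: rude={rude}, P(Waluigi) = {p:.3f}")

# Polite messages nudge P(Waluigi) down only gradually (0.500 -> 0.444 -> 0.390 -> 0.339),
# but the single rude message sends it straight to 1.000, and no later polite
# message can bring it back down: the collapse towards Waluigi is an absorbing state.
```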

If the Waluigi Effect is real and robust, this is bad news for AI alignment. If we train highly capable AIs to serve human preferences, then the Waluigi Effect says that these AIs are also trained to better defy human preferences. Under certain circumstances (such as a jailbreak), a model might have its Waluigi personality unleashed.

Some early alignment discussions focused on the risks posed by AIs which (for whatever reason) have the sign of their utility function reversed. If the Waluigi Effect is robust, it provides concerning evidence in favor of a dynamic that otherwise might have appeared very speculative or remote.


  1. The post author, Cleo Nardo, sees them as three different facets of the same principle. ↩︎

  2. The author specifies that this does not only apply to fiction: “If you're reading an online forum and you find the rule "DO NOT DISCUSS PINK ELEPHANTS", that will increase your expectation that users will later be discussing pink elephants.” ↩︎

  3. If you discover that a country has legislation against motorbike gangs, that will increase your expectation that the country has motorbike gangs. ↩︎

  4. More formally, the Kolmogorov complexity of (Waluigi given Luigi) is much smaller than the Kolmogorov complexity of Waluigi. ↩︎

  5. In the Copenhagen interpretation of quantum mechanics, multiple incompatible quantum states coexist with varying probability amplitudes until a measurement “collapses the wavefunction”, setting the probability of the states incompatible with the measurement to zero. In this analogy, Waluigis are compatible with any observation (here text output by the LLM) that suggests a Luigi, but as soon as there is an observation that is coherent with a Waluigi but not a Luigi, the LLM collapses into a Waluigi. ↩︎