What is a "warning shot"?

In the context of AI safety, a warning shot is an AI-caused event that substantially increases concern about AI-related existential risk. The term is sometimes used more narrowly for an event in which AI causes extreme damage or global catastrophe (short of human extinction).

For instance, a warning shot might be an incident in which an unaligned AI system with human-level intelligence attempts to take over a data center but is stopped before it can do significant harm. Although such an event would not itself cause extinction, it could prompt governments and AI researchers to become more supportive of AI safety research and more concerned about the existential risks posed by AI. Such a situation might also count as a fire alarm, that is, a warning sign that creates common knowledge that some technology poses an existential risk.

The COVID-19 pandemic can also be considered a warning shot for biorisk. It exposed weaknesses in the global response to pandemics and highlighted the need for better coordination and greater investment in vaccine infrastructure. At the same time, the pandemic illustrated our limited ability to coordinate, and, unfortunately, governments continue to make decisions that may exacerbate risks rather than mitigate them.

A more consequential warning shot was the partial meltdown of the Three Mile Island nuclear reactor in 1979, which marked a turning point in the American public's perception of nuclear risk. Analyses of the incident point to lessons relevant to AI risk.

In summary, a warning shot is an unintended event or small-scale disaster that raises awareness of the potential dangers of advanced technologies and prompts relevant actors to take action to mitigate those risks. While such events can increase concern and support for safety measures, it remains uncertain whether governments and other institutions will respond to these warnings in a timely and coordinated manner.