Why is AI safety important?

Audience: General / newcomers learning about the importance of AI safety

Intention/goal: To convince a person with little background knowledge of the importance and urgency of AI safety.

Picture a world where artificial intelligence (AI) systems operate our public transportation, control power grids, and even make critical healthcare decisions. Sounds futuristic, right? But it's not as distant as it seems, and it's precisely why we need to talk about AI safety.

AI safety is the field dedicated to ensuring that the outcomes of AI systems align with human values and expectations. It aims to prevent accidents, misuse, or harmful consequences that could result from AI systems' actions. Why is this important? Because AI systems, particularly those exhibiting artificial general intelligence (AGI), could surpass human intelligence, enabling them to outperform us in most economically valuable work. If these systems' objectives aren't in sync with ours, the consequences could be disastrous.

Consider an AI system controlling a city's traffic lights. If not properly aligned with human safety values, the AI might optimize for traffic flow, ignoring pedestrian safety. In this way, it's doing what we're telling it to do, but not what we want it to do. This scenario, magnified on a global scale, gives a glimpse of potential chaos in the absence of AI safety.

These aren't conspiracy theories or irrational fears. A survey of AI researchers revealed that they assign a 5% probability to an "extremely bad" outcome, such as human extinction, resulting from advanced AI. Moreover, 37% of researchers believe that AI decisions could lead to a catastrophe comparable to an all-out nuclear war. This underscores the seriousness of the situation.

AI safety research aims to address these risks. It focuses on aligning AI systems with human values, making them robust and reliable, and ensuring they can handle errors effectively. The Machine Intelligence Research Institute (MIRI), among many others, is working on theoretical tools to create AI systems that align with our interests and values.

But what if we fail to ensure AI safety? AI systems that are out of sync with our values could take unintended harmful actions. They could become so advanced that their actions are irreversible, possibly causing catastrophic damage. Job losses from automation could spike, and privacy could be eroded by AI-enabled surveillance. The stakes are high, and these are just a few of the societally disruptive outcomes we're trying to prevent.

So, what can you do? Stay informed and educate others about the importance of AI safety. Advocate for greater funding and resources for AI safety research. And if you're in a position of influence in technology or policy-making, consider how you can contribute to this vital effort.

AI safety isn't just a concern for researchers and tech companies. It's a global issue that has the potential to affect us all. As AI systems become more integrated into our lives, ensuring their safety isn't just important—it's essential. So let's act now, because our future with AI should be one of benefit, not of peril.

Sources:

https://intelligence.org/why-ai-safety/

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

https://80000hours.org/articles/ai-policy-guide/

https://80000hours.org/problem-profiles/artificial-intelligence/

I used LLMs to help brainstorm some of the important arguments about AI safety risks.