Why do some AI researchers not worry about alignment?

Concern about AI alignment varies widely within the research community. One reason some researchers worry less is that they rely on heuristics that may not apply to the unusual challenges posed by advanced AI. These heuristics can be broadly categorized as follows:

Linear Intuitions

Researchers who rely on this heuristic expect AI progress to continue at a steady, manageable pace, leaving ample time for course corrections. This perspective underestimates the possibility of fast takeoffs, in which alignment becomes far more difficult as AI systems rapidly gain capability.

Absurdity Heuristic

Another heuristic that can lead to a lack of concern about alignment is the "absurdity heuristic," whereby scenarios that seem implausible or extreme are dismissed out of hand. For instance, the idea that a machine could one day surpass human intelligence might be rejected as science fiction. This heuristic is problematic because it discourages proactive measures against risks that, while currently unlikely, would be catastrophic if they materialized.

Institutional Trust

Some researchers place significant trust in the institutions that govern AI research and development, believing that regulatory bodies, ethics committees, and industry best practices will provide the framework needed to ensure alignment. This trust may be misplaced if governance lags behind the pace of technical progress, or if alignment turns out to require more than compliance with existing norms.

Status Quo Assumption

The assumption that "things will continue as they are" is also prevalent. Researchers who hold this view may reason that, because existing AI systems have not produced significant misalignment problems, such problems are unlikely to occur in the future. This heuristic fails to account for the qualitative changes that could arise as AI technologies evolve, potentially introducing kinds of risk we have not yet encountered.