This all seems rather abstract. Isn't promoting love, wisdom, altruism or rationality more important?

These are all important qualities. Part of love is striving for the best future for the loved one. Wisdom is knowing which actions will lead to the best outcomes. Altruism is concern for the well-being of others. Rationality is the quality of being guided by good reasons when deciding how to act.

Many, if not most, AI safety researchers are guided by these qualities. They have investigated the available evidence and come to the rational conclusion that an unaligned AGI would pose a grave danger to humanity. Out of altruism and love, they do not want humankind to be extinguished. So they are searching for the wisdom to avert the danger, or at least limit the damage. Of course, promoting these qualities directly is also important, and many leading AGI safety researchers are also known for their work promoting wisdom, altruism, and rationality.