What are some problems in philosophy that are related to AI safety?

We can broadly distinguish between three categories of problems in philosophy that connect to AI safety:

  1. Questions about the nature of AI. For example, John McCarthy writes: “Artificial intelligence (AI) has closer scientific connections with philosophy than do other sciences, because AI shares many concepts with philosophy, e.g. action, consciousness, epistemology (what it is sensible to say about the world), and even free will.” Clarifying what intelligence is, or what it would mean for a system to be sentient, is likewise a philosophical matter.

  2. Questions about values and the control problem, which can be approached from a philosophical point of view. For example: is there such a thing as objective morality or value? How do we represent human values? Are those values universal?

  3. Questions about the impact of AI on society and humanity. What would the future look like for humans? What are the ethical considerations regarding transhumanism? Do we have a moral responsibility to protect humanity from existential risks, such as those that could be posed by powerful AI systems?