Isn't the real concern with AI that it's biased?
Bias and discrimination1 in current and future AI systems are one concern among several real issues, each of which deserves attention.
Bias in AI refers to systematic errors and distortions in the data and algorithms used to train AI systems that cause those systems to treat people inequitably. Note that this use of the term differs from the statistical sense of bias, which refers to a systematic failure to represent reality correctly, but not necessarily one that disadvantages particular groups.
The forms of AI bias most discussed in the media are those that lead to racism2 3 4 5 and sexism6 7. Other biases8 9 10 11 12 have also been identified. These biases often reflect which societies are most heavily represented in the training data (such as English-speaking communities), as well as the biases within those societies.
Work to reduce existential risk is sometimes presented as opposed to work addressing bias in current systems, but the AI safety community’s focus on existential risk doesn’t mean it is unsympathetic to concerns about bias. Yoshua Bengio, who has worked on AI ethics for many years, asks rhetorically: “should we ignore future sea level rises from climate change because climate change is already causing droughts?” Humanity can address both classes of problems if it decides to prioritize both. Furthermore, some research areas, such as interpretability, are useful for both goals. On the governance side, there is some overlap between the techniques and institutions needed to make AI fair and those needed to make AI safe.
That being said, we choose to concentrate on existential risk because we perceive the dangers of superintelligence to be both imminent and of the greatest importance.
These fit within the larger concepts of AI Ethics and FATE (fairness, accountability, transparency, ethics). ↩︎
In 2015, Google Photos tagged Black people as gorillas, and the problem proved fundamentally hard to fix. ↩︎
Racial discrimination appears in face recognition technology. ↩︎
AI-assisted predictive policing and bail setting exhibit racial biases (among others). ↩︎
Facial recognition is more prone to false-positives with Black faces, leading to wrongful arrests. ↩︎
Most of the data used to train AIs comes from men's lives. ↩︎
Women have been given lower credit card limits than men with comparable credit scores. ↩︎
AI-assisted hiring practices might discriminate in problematic ways. ↩︎
Data on older adults is sometimes excluded in training datasets for health-related AI applications. ↩︎
Image-tagging AI correctly labels Western bridal dresses as such, but fails to recognize brides from other cultures. ↩︎
AI-powered speech and movement tracking software used in hiring could be prejudiced against disabled people. ↩︎
AI-powered gender and sexual orientation recognition might discriminate against people who don't fit neatly into the gender binary, or might be used to actively discriminate against LGBTQ people. ↩︎