Leading AI Researchers Warn: AI Could Cause Human Extinction
Philip Fox, 30 May 2023
Prof. Geoffrey Hinton, Prof. Yoshua Bengio, Sam Altman, Demis Hassabis, and many other leading AI researchers and developers have signed an open letter published by the Center for AI Safety (CAIS). The letter calls for treating the risk of human extinction from advanced AI as seriously as pandemics or the threat of nuclear war.
Our fact sheet summarizes the background to the letter: Why are AI experts around the world concerned about extreme risks from AI? And how can governments address these risks?
KIRA’s Executive Director Daniel Privitera comments: “The open letter is evidence that more and more experts are concerned. An out-of-control AI would be a real threat to humanity, and it is good that more people are becoming aware of this.”
KIRA founding member Anka Reuel comments: “Many AI risks are insufficiently studied and understood. We cannot rule out existential risks from AI.”
KIRA founding member Charlotte Siegmann adds: “Leading researchers from China have signed the letter as well. Yes, it will be difficult to establish global rules for AI. But when it comes to impending risks to humanity, compromise should be possible, even between countries like China and the US.”
The Center for AI Risks & Impacts (KIRA) is an independent think tank. We work to make the transition to advanced AI safe and fair.