Privitera Asks "Could We Lose Control over AI?" in Op-ed for WELT
Philip Fox, 25 May 2023
In an op-ed for the German newspaper WELT Online, KIRA’s Executive Director Daniel Privitera points out the potentially severe risks, alongside the many opportunities, that come with advanced AI. He explains warnings from leading AI experts such as Geoffrey Hinton, Stuart Russell, and Yoshua Bengio, and discusses his recent background conversations with employees of leading AI companies in the US.
The concerns revolve around the unsolved “alignment problem”: nobody currently knows how to ensure that powerful AI systems share our values and goals. The problem could become more acute as models grow more capable: “The systems are also getting better at deceiving people.”
Could we simply pull the plug on an AI showing concerning behavior? We should not count on being able to do so, says Privitera: “An AI system like GPT-4 is not a physical machine. It’s a file that can be sent via email.” Once a model has been released publicly, it could therefore be very hard to prevent its proliferation.
To minimize such risks, as well as today’s harms from AI, Privitera suggests:
a constructive political debate free from partisan turf wars
mandatory external audits of new AI models
prioritizing efforts toward an international AI governance system
Daniel Privitera is a PhD candidate in Economics at the University of Oxford and Executive Director of the Center for AI Risks & Impacts (KIRA).