Open letter by leading AI experts to the German Federal Government: The European AI Act needs foundation model regulation

Philip Fox, 27 November 2023

Berlin – In a letter published today, 22 leading AI experts call on the German Federal Government to ensure foundation models are regulated within the EU AI Act – and not just through a system of "self-regulation". Among the signatories are numerous German AI professors, the two most-cited AI researchers in the world, a co-author of the world's most-used AI textbook, and several leaders from business and civil society.

Trilogue negotiations on the AI Act are currently in their final stages. One open question is whether binding rules for developers of foundation models – particularly powerful AI models that can be used in a variety of ways – should be part of the AI Act. So far, the German Federal Government has not clearly positioned itself in favor of this.

Leading experts consider exempting foundation models from the Act dangerous for both economic and safety reasons. They worry that, if foundation models remain unregulated, downstream deployers of these models (such as SMEs) would face unacceptably high liability risks and compliance costs. In addition, the signatories point to risks to public safety (e.g. cyber attacks, AI-generated disinformation, or engineered pathogens) that are inherent to foundation models and can therefore only be comprehensively addressed at the level of these models. A system of mere self-regulation, they argue, is inadequate for addressing such risks.

One of the signatories, Prof. Holger Hoos (Alexander von Humboldt Professor for AI at RWTH Aachen), said: “I consider this message to the German government to be extremely important and urgent. This is not about stirring up fears, but about ensuring that European AI regulation takes effect where it matters most: in large generative AI models. Without regulation of this kind, the EU AI Act could in principle be scrapped entirely."

Prof. Gary Marcus (Prof. em. at NYU, Founder and CEO of Geometric Intelligence), commented: “The chaos at OpenAI only serves to highlight the obvious: we can’t trust big tech to self-regulate. Reducing critical parts of the EU AI Act to an exercise in self-regulation would have devastating consequences for the world.”

In closing, the letter emphasizes the importance of this decision: “The AI Act, if it includes foundation models, would be the world’s first comprehensive regulation of AI, viewed as a historic example of European leadership. If coverage of foundation models is dropped, a weakened or failed AI Act would be regarded as a historic failure.”

The letter was organized by the KIRA Center for AI Risks & Impacts, an independent think tank in Berlin.


Press contact: Philip Fox, press@kira.eu
