The KIRA Center helps make Artificial Intelligence safe and fair.

The KIRA Center is an independent think tank.

AI research is advancing rapidly. This brings a variety of both benefits and risks.

Our vision: a future in which advanced AI is beneficial and safe for humans.

We conduct impact evaluations, assess risks and benefits, and foster dialogue among all stakeholders.

We collaborate with academia, industry, civil society, and the public sector.

KIRA in the News

For press inquiries, please contact: press@kira.eu

AI Boot Camp for Policymakers

AI technology is advancing at an unprecedented pace. Our AI Boot Camp provides policymakers with the technical understanding needed to keep up with these rapid developments.

The Boot Camp consists of several in-depth modules tailored specifically to policymakers’ needs. It features world-leading experts from academia, business, and civil society.

1st AI Boot Camp for German MPs & staffers, organized with Cosmonauts & Kings and Project Together (Photos: Liza Arbeiter)

Available modules: 

Technical Foundations: Interplay between data, algorithms and computing power. Concepts covered include deep learning, backpropagation, gradient descent, the scaling hypothesis, ML interpretability, and more.

Benefits from Advanced AI: Practical use cases for enhancing democracy and supporting businesses. Opportunities for becoming a leader in trustworthy AI.

Risks & Regulation: Different risk categories and how they interconnect. Concrete governance strategies and how they relate to technical features of modern AI.

If you are interested in this workshop, please get in touch with us: info@kira.eu

KIRA Report

What should be priorities for German AI policy in 2025 and beyond?

A recent KIRA Report by Anton Leicht and Daniel Privitera recommends concrete policies in three areas:

1. Boosting Germany’s role in the AI value chain

  • Promote AI use in industry and SMEs

  • Short-term goal: focus on local adoption and the assurance sector

  • Long-term goal: strengthen European AI infrastructure

2. Ensuring public safety

  • Work towards shared international safety standards

  • Increase resilience to AI harms in areas like cybersecurity and disinformation

  • Support an ecosystem for risk evaluations

3. Building state capacity

  • Recruit leading experts from research and industry

  • Collaborate closely with international partners (e.g. the European AI Office)

  • Establish a permanent AI Council at the German Chancellery

People

Anka Reuel

Founding Member

PhD Student,
Stanford University

Anton Leicht

Policy

PhD Candidate,
University of Bayreuth

Daniel Privitera

Founder & Executive Director

Lead Writer,
International AI Safety Report

Philip Fox

Policy

PhD,
Humboldt University Berlin

We welcome your message at info@kira.eu. Sign up for our email list for regular updates: