Op-ed for FAZ: Fox & Reuel on AI-generated disinformation
Daniel Privitera, 19 September 2023
In an op-ed for the German newspaper Frankfurter Allgemeine Zeitung (FAZ), KIRA members Philip Fox & Anka Reuel discuss how AI-generated disinformation could undermine public discourse in the future – and how society should prepare for this.
Fake news has existed ever since there has been news, Fox & Reuel point out. But state-of-the-art AI tools allow irresponsible or malicious actors to produce authentic-looking disinformation much faster and more cheaply. While current tools mainly output text and images, convincingly crafted deepfake videos are already on the horizon.
How can we protect public discourse, then? Technical measures like watermarking, detection tools and automated fact-checking will only be part of the solution, say Fox & Reuel. Since they will never be perfect, they must be complemented by the right social interventions. The authors make three suggestions:
Focus on ‘prebunking’: schools, the media and social networks should pre-emptively inform the public about the methods and intentions of those who spread disinformation, rather than merely debunk fake news that is already out there.
Design interventions that specifically target public figures and influencers, who have high reach and therefore bear special responsibility.
Media outlets need strategies for covering fake news responsibly, so that they do not become crucial multipliers of disinformation themselves.
Currently, it is unclear how public discourse will evolve in the face of AI-generated disinformation. Different scenarios are possible, both negative and positive. Fox & Reuel warn against fatalism: the right combination of technical and social solutions can steer us towards a sufficiently resilient public discourse.