DFG: Democratic Persuasion – How to Make the Case for Democracy
Democracy is facing significant challenges as forces from within and without work to erode public support for democratic governance. While much research has focused on the use of disinformation to undermine democratic trust, we lack evidence on the potential of pro-democratic communication to bolster the societal foundations of democracy. This project seeks to address this gap by investigating how to effectively advocate for democracy.
Building on insights from psychology and persuasion research, the project aims to develop a comprehensive theory of democratic persuasion, considering democratic support as a multi-faceted concept. It acknowledges that fissures in democratic support can take various forms, each requiring distinct approaches. Initially, the project will map the strengths and weaknesses of democratic support across different public segments. Based on these findings, theory-driven and actionable interventions will be developed to strengthen the attitudinal foundations of democracy, tailored to specific societal subgroups with unique fissures in democratic support. These interventions will be tested using three novel experimental paradigms designed to align with the theorized psychological pathways of persuasion. The first approach involves large-scale survey experiments conducted in controlled environments. This method provides large sample sizes that allow for the identification of heterogeneous effects of tailored treatments across different subgroups. However, text-based messages alone may not significantly influence deeply held beliefs. The second paradigm therefore employs in-depth conversations using the street epistemology technique, a conversational style intended to encourage individuals to question their attitudes, even those strongly held. Lastly, the project will conduct laboratory group conversations using social influence as a pathway to bolster democratic support. Overall, this project aims to enhance scientific understanding of the strengths and fissures in democratic support among the public and to identify effective strategies for fostering democratic resilience at the citizen level.
BIDT: DemocraGPT – AI-based training for difficult conversations in times of growing polarization
DemocraGPT is an interdisciplinary consortium project led by LMU Munich (Institute for Communication Studies and Media Research; Geschwister Scholl Institute) and the TUM School of Social Sciences and Technology. Its goal is to develop and rigorously evaluate an AI-based conversation training tool that helps citizens engage in constructive democratic dialogue, even on controversial political issues.
The project starts from a key challenge: many people increasingly avoid political discussions—especially online—because debates feel heated, morally charged, and socially risky. Yet in a democracy, open exchange across differences is essential for mutual understanding and the legitimacy of collective decisions. DemocraGPT therefore investigates the potential of Large Language Models (LLMs) to support people in returning to dialogue and strengthening their capacity for difficult conversations.
At the core is a training system that realistically simulates “challenging” conversation partners and supports users in three areas:
- recognizing psychological reactance—defensive resistance triggered by perceived pressure or loss of autonomy,
- responding empathically and constructively using evidence-based conversation strategies, and
- building tolerance for ambiguity, reducing the risk of slipping into one’s own defensive patterns.
The tool will be developed as a privacy-conscious, scalable web application (text-based and, in later stages, voice-enabled) and assessed through a comprehensive multi-method evaluation, including field experiments. Partners in civic education and media regulation will support outreach, knowledge transfer, and a public beta rollout, including a citizen-science component.
DFG Emmy Noether Research Group: Predictably Paradoxical: Leveraging AI to Map the Democratic Mind
Alexander Wuttke receives funding through the Emmy Noether Programme.
Why do many people profess their commitment to democracy, yet vote for politicians who undermine it? This is the type of question that political scientist Alexander Wuttke, who has been awarded funding by the German Research Foundation (DFG) through the Emmy Noether Programme, likes to address. The grant amounts to €1.17 million for a period of six years. Alexander Wuttke is Junior Professor of Digitalization and Political Behavior at LMU’s Geschwister Scholl Institute of Political Science. His teaching and research explore the promises and challenges of liberal democracy from the perspective of citizens. To this end, he makes particular use of experimental and computer-assisted methods of social research.

In his project “Predictably Paradoxical: Leveraging AI to Map the Democratic Mind,” Wuttke is employing a novel AI-supported interview method to investigate how contradictory democratic attitudes arise. “Our data shows that citizens of established democracies are often no more satisfied with the political system than people in authoritarian states,” says Wuttke. “Many do not even perceive their country as more democratic than people in autocracies view theirs. And even when citizens clearly profess their commitment to democracy, they act contrariwise at the ballot box.” The political scientist argues that the prevailing research practice of using standardized, closed surveys prevents these contradictions in democratic orientations from being elucidated. “Methods have tended to measure whether people support democracy, but they scarcely capture what people think about democracy and how well-founded and stable their democratic convictions are.”
AI interviews that resemble human ones
In his new project, Wuttke will map “democratic belief systems” comparatively. He is interested not only in whether people fundamentally support democracy, but also in how consistent and robust this attitude is. To do this, Wuttke is working with an interdisciplinary team from the fields of computer science, political psychology, and computer-assisted social sciences to develop innovative, AI-supported interviews. He is training language models to conduct open or semi-structured conversations, similar to human interviewers. In addition, he is using traditional methods, including standardized, closed questions and panel data with repeated interviews. Extensive data will be collected in around 13 democracies and selected authoritarian states, with approximately 1,000 respondents per country. The aim is to combine the depth of content of qualitative interviews with the reach of large-scale surveys. In this way, Wuttke wants to show, among other things, under what conditions people recognize violations of democratic principles or allow themselves to be deceived by seemingly pro-democratic authoritarian rhetoric.
Leveraging Large Language Models for Automated In-Depth Interviewing
Alexander Wuttke, Matthias Aßenmacher, Quirin Würschinger
Standardized surveys are the workhorse of public opinion research. While tremendously valuable in many regards, asking researcher-defined questions and letting respondents choose from a researcher-defined set of response options has significant drawbacks. In particular, this approach struggles to map belief systems. It is the social science manifestation of Schrödinger’s cat: the measurement itself creates what is to be measured. By asking respondents about a topic they never previously considered and providing them with suggestions of suitable answers, standardized surveys run the risk of affecting their outcomes, particularly on topics where attitudes are weak or nonexistent. Unstructured or semi-structured in-depth interviews that simply let the respondents talk mitigate these problems. But costs prohibit scaling. This project seeks to combine the large scale of standardized surveys with the depth of semi-structured interviews. We use large language models to act as an interviewer with real-life respondents. We use a modular framework where, for each respondent input, multiple API calls are chained.
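A modular, chained pipeline of this kind might look like the following sketch. This is an illustrative reconstruction, not the project's actual code: the stage names, labels, and templates are hypothetical, and the three stage functions stand in for what would be separate LLM API calls in a real deployment.

```python
# Illustrative sketch of a modular LLM-interviewer pipeline: each
# respondent input passes through a chain of stages (classify answer,
# choose follow-up strategy, generate next question). In production,
# each stage would be a separate model/API call; here they are stubs.
from dataclasses import dataclass, field

@dataclass
class InterviewState:
    topic: str
    transcript: list = field(default_factory=list)

def classify_answer(state: InterviewState, answer: str) -> str:
    # Stage 1 (would be an LLM call): label the respondent's answer.
    return "substantive" if len(answer.split()) > 3 else "brief"

def choose_strategy(label: str) -> str:
    # Stage 2: map the label to a follow-up strategy.
    return {"substantive": "probe_reasoning", "brief": "elicit_elaboration"}[label]

def generate_question(state: InterviewState, strategy: str) -> str:
    # Stage 3 (would be an LLM call): draft the next question.
    templates = {
        "probe_reasoning": f"Why do you think that about {state.topic}?",
        "elicit_elaboration": f"Could you say more about {state.topic}?",
    }
    return templates[strategy]

def interviewer_turn(state: InterviewState, answer: str) -> str:
    # One turn: run the full chain and append both sides to the transcript.
    state.transcript.append(("respondent", answer))
    question = generate_question(state, choose_strategy(classify_answer(state, answer)))
    state.transcript.append(("interviewer", question))
    return question
```

The modular design is what makes the chaining auditable: each stage's input and output can be logged and validated separately, which matters when the interviewer must stay on-script across thousands of respondents.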
When Must We Limit Free Speech? Determinants of Canceling in Academia
Claudia Diehl, Matthias Revers, Richard Traunmüller, Nils B. Weidmann, Alexander Wuttke
We assess student support for restricting free speech about controversial topics on university campuses in Germany. Using a vignette design developed in an adversarial collaboration, we analyze which aspects of a controversial statement lead to demands for its cancellation. We show that conservative statements are rejected more often than progressive ones. Moreover, only conservative statements, not progressive ones, generate more support for cancellation when they are framed as opinions rather than scientific findings, and when they are accompanied by political claims. Our study reveals a tendency to silence objectionable views on ideological grounds rather than to challenge them.
Field Experiment Democratic Persuasion
Alexander Wuttke, Florian Foos
Ordinary citizens are considered bulwarks against democratic backsliding. Yet citizens’ commitment to democracy is sometimes fragile below the surface, and crises exacerbate existing anxieties and discontent. We propose “democratic persuasion” as a theory-driven, actionable intervention to foster the resilience of citizens’ commitment to liberal democracy. “Democratic persuasion” requires that political elites actively make the case for democracy and discuss democracy’s inherent trade-offs while engaging existing doubts and misperceptions. During the Covid-19 pandemic, which brought these trade-offs to the fore, we invited citizens on Facebook to attend one of sixteen Zoom town halls to discuss pandemic politics with a German member of parliament. Each MP conducted two town halls, and we randomly assigned in which of the two they employed “democratic persuasion”. The field experiment demonstrates substantial effects on some, but not all, indicators of democratic commitment, showcasing the academic and practical value of this emerging line of research on strengthening the societal foundations of liberal democracies.
How Many Replicators Does It Take…? Measuring Researcher Variability using a Crowdsourced Replication Experiment
Nate Breznau, Eike Rinke, Alexander Wuttke et al.
We expect that a population of researchers aiming to test the same hypothesis using the same statistical models and data will show variability in their results. This researcher variability is a potential threat to the reliability of any single study. Careful review or curation of researcher choices, by both the researchers themselves and external observers, should eliminate much of this variability. But what if it does not eliminate all of it? In other words, what if, despite their best efforts, researchers’ results are not reliable across researchers? To investigate this phenomenon, we consider two types of variability: non-routine researcher variability, such as mistakes or misunderstandings, which can be eliminated from the research process through careful review and curation, and routine researcher variability, which likely passes through the research process undetected. The latter consists of undeliberate actions, often taken within epistemological, idiosyncratic, or institutional constraints, that cause the variability. We offer a theoretical discussion and basic formal models of the uncertainty resulting from researcher variability. We then report results of an experiment testing variability by crowdsourcing researchers to conduct a replication with the simple goal of verifying an original study. Giving the researchers the fewest possible decisions to make gave us the greatest chance to observe and distinguish routine researcher variability – the form that potentially threatens the (meta-)reliability of replications, if not of research in general. This experiment allows us to say something about how many replicators are necessary to achieve reliability. Moreover, we identify the importance of transparency and the features of the research process that are most likely to lead to variation in results.
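The "how many replicators?" question can be made concrete with a toy simulation. The following sketch is not the paper's formal model; it simply treats routine researcher variability as random noise around a true effect (an assumed Gaussian model with made-up parameters) and asks how large a crowd must be before the pooled estimate reliably lands near the truth.

```python
# Hypothetical toy model: routine researcher variability as Gaussian
# noise around a true effect. We ask: how many replicators does a crowd
# need before its pooled (mean) estimate is within a tolerance of the
# true effect with high probability?
import random
import statistics

def pooled_estimate(true_effect, routine_sd, n_replicators, rng):
    # Each replicator's estimate deviates from the truth by routine
    # variability (undeliberate analytic choices), modeled as noise.
    estimates = [rng.gauss(true_effect, routine_sd) for _ in range(n_replicators)]
    return statistics.mean(estimates)

def replicators_needed(true_effect, routine_sd, tolerance, prob=0.95, trials=2000):
    # Smallest crowd size whose pooled estimate falls within `tolerance`
    # of the true effect in at least `prob` of simulated crowds.
    rng = random.Random(42)  # fixed seed for reproducibility
    for k in range(1, 200):
        hits = sum(
            abs(pooled_estimate(true_effect, routine_sd, k, rng) - true_effect) <= tolerance
            for _ in range(trials)
        )
        if hits / trials >= prob:
            return k
    return None
```

Under these assumptions the answer follows the familiar standard-error logic: the pooled estimate's spread shrinks with the square root of the crowd size, so halving the tolerated error roughly quadruples the number of replicators required.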
How Do Citizens React to AI in Political Campaigns?
Andreas Jungherr, Adrian Rauchfleisch, Alexander Wuttke