Overview
This workshop brings together NLP researchers, social scientists, and democracy practitioners to explore how language technologies can both support and challenge democratic values. We focus on three guiding questions:
How can we apply NLP techniques to study democracy?
How can we build LLM-based systems to empower citizens and improve democratic systems?
What threats does AI (particularly LLMs) pose to democracy, and how can we mitigate such threats?
Motivation & Overview
As language technologies become more powerful and pervasive, they are increasingly shaping how democracies function—and how they falter.
In recent years, NLP and computational social science have helped us understand how political language influences public opinion and policy. Researchers have analyzed how politicians and the media use rhetoric to shape narratives, and how citizens express attitudes toward policies, parties, and social movements on social media. NLP has also been used to detect threats to democracy, such as extremism, disinformation, propaganda, censorship, and suppression.
With the rise of large language models (LLMs), we are entering a new phase. LLMs promise to reshape how we study political text and how we design systems to support democratic engagement. A particularly exciting frontier is deliberation—the process by which people engage in thoughtful dialogue to solve problems, share perspectives, and make collective decisions. LLMs have shown early promise in improving conversations between people with opposing political views and even in reducing belief in conspiracy theories. These developments open up possibilities for building tools that support more inclusive, informed, and respectful public discourse.
Several of our invited talks will focus on the role LLMs might play in enhancing civic dialogue, facilitating deliberation in diverse communities, and making democratic decision-making more accessible.
At the same time, these technologies pose serious risks. LLMs can be used to produce highly persuasive disinformation, manipulate public narratives, and erode trust in democratic institutions. Governments and tech companies alike are experimenting with AI in ways that raise concerns about surveillance, censorship, and political bias. Already, real-world actors have deployed AI to monitor political opponents, spread fabricated content, and influence elections.
The NLP4Democracy workshop will welcome a range of contributions, including (but not limited to):
Computational analysis of political discourse across a wide range of domains, such as politicians’ rhetoric, media coverage, political campaigns and advertisements, and public engagement. Contributions exploring how LLMs can spark a paradigm shift in political text analysis are especially encouraged.
NLP for detecting, understanding, and combating threats to democracy (e.g., extremism, conspiracy theories, propaganda, disinformation, human rights violations, censorship, and suppression)
Studies of deliberation, persuasion, and decision-making in human-human and human-AI interactions
LLM-based systems for supporting civic engagement and democracy (e.g., systems that provide accurate and accessible political information, facilitate meaningful cross-cutting dialogue, increase political participation, or generate effective counterspeech to reduce belief in dangerous ideologies)
AI-generated political content and its implications for democratic processes
Carnegie Mellon University
Brigham Young University
University of Washington
University of Southern California