1st Workshop on
NLP for Positive Impact

Held at ACL-IJCNLP 2021 (Recurring Every Year)

Aug 5, 2021

News

Our workshop was successfully held on Aug 5, 2021 at ACL-IJCNLP 2021. Please see our event schedule, and check out our YouTube videos for the recorded talks and panel.

Future Workshops: We plan to organize this workshop every year at *ACL conferences such as ACL/EMNLP/NAACL. We welcome applications to serve as reviewers for our next workshop in 2022 (through this Google Form).

Focus of this Workshop

The widespread and indispensable use of language-oriented AI systems presents new opportunities to have a positive social impact. Much existing work on NLP for social good focuses on detecting or preventing harm, such as classifying hate speech, mitigating bias, or identifying signs of depression. However, NLP research also offers the potential for positive proactive applications that can improve user and public well-being or foster constructive conversations.

This workshop aims to promote innovative NLP research that will positively impact society, specifically focusing on proactive and responsible methods and new applications. We encourage submissions from areas including (but not limited to):

  • Positive conversation generation & online prosocial behavior: conversational AI for promoting constructive interactions or alternate perspectives; analyses of conversations with successful positive outcomes; models for positive rephrasing of online content; analyses of implied or stated altruism, empathy, or other prosocial behavior online.

  • Participatory design & algorithmic cultural competency: co-creation of NLP systems with end users; value sensitive design of NLP systems; adaptation of widely used NLP models to specific user populations (e.g., dialect-aware pretrained LMs).

  • Online well-being & positive information sharing: NLP for improving the well-being of users (e.g., COVID-related support, therapeutic AI-in-the-loop journaling); NLG for presenting alternate perspectives on articles; context generation/infilling for (ambiguous) statements; mitigation of filter bubbles through generative methods (e.g., sharing positive stories, disseminating positive information).

  • Social good, justice, equity: perspectives and opinions on the role of NLP/AI towards re-imagining justice and empowering disenfranchised users; analyses and design of systems that further challenge existing power structures.

  • Interdisciplinary perspectives: perspectives and analyses from other fields (e.g., social sciences, philosophy) on the potential positive impacts of NLP techniques.

  • Case studies of NLP applications for social good: e.g., NLP for disaster relief, NLG for climate change awareness, models for users with cognitive disabilities, etc.

  • Challenges of NLP for positive impact: ethical and privacy implications, design of positive technology that is less susceptible to misuse.

We will require each submission to discuss the ethical and societal implications of the work, and we encourage authors to discuss what "positive impact" means to them.

For enquiries, please contact Maarten Sap at msap@cs.washington.edu or Zhijing Jin at zjin@tue.mpg.de.

Organizing Committee

Steering Committee

Speakers (A-Z)

Ndapa Nakashole

University of California, San Diego

NLP in Education & Healthcare

Abstract: In this talk, I will present some of our work that we hope will have a positive impact. The work falls into two categories: NLP in education and NLP in healthcare. Under NLP in education, I will discuss question understanding, with a focus on human-interpretable, modular approaches to understanding long math problems. Under NLP in healthcare, I will talk about consumer health question answering.

Yulia Tsvetkov

University of Washington

Towards Language Generation We Can Trust

Abstract: Modern language generation models produce highly fluent but often unreliable outputs. This has motivated a surge of metrics that attempt to measure the factual consistency of generated texts, as well as a surge of approaches to controlling various attributes of the text that models generate. However, existing metrics treat factuality as a binary concept and fail to provide deeper insights into the kinds of inconsistencies made by different systems. Similarly, the majority of approaches to controllable text generation focus on coarse-grained categorical attributes (typically only one attribute). To address these concerns, we propose to focus on understanding finer-grained aspects of factuality and on controlling finer-grained aspects of the generated texts. In the first part of the talk, I will present a benchmark for evaluating the factual consistency of generated summaries against a nuanced typology of factual errors. In the second part, I will present an algorithm for controllable inference from pretrained models, which aims at rewriting model outputs under multiple fine-grained, sentence-level constraints. Together, these approaches make strides towards more reliable applications of conditional language generation, such as summarization and machine translation.

Jason Weston

Facebook AI Research

Negative Conversation Detection and Positive Conversation Generation

Abstract: We will describe a detailed study of techniques for negative conversation detection, including some newly introduced methods for offensive text detection and gender bias detection. These methods are appropriate both for detection in human utterances and in the generations of machine learning models. We advocate for making this research, including datasets and models, publicly available in order to have a positive impact on the world. We then describe a further study of techniques for positive conversation generation with language models. In particular, we compare existing methods and some newly introduced methods for gender bias mitigation, positive style generation, and offensive generation mitigation. This is joint work with Y-Lan Boureau, Emily Dinan, Angela Fan, Dexter Ju, Douwe Kiela, Margaret Li, Jack Urbanek, Adina Williams, Ledell Wu and Jing Xu.

Panelists (A-Z)

Yejin Choi

University of Washington/ Allen Institute for AI

Yejin Choi is a Brett Helsel associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research manager at AI2 overseeing the project Mosaic. Her research interests include commonsense reasoning, language grounding with vision, neural language generation and degeneration, and AI for social good. She is a co-recipient of the Longuet-Higgins Prize (test of time award) at CVPR 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the Marr Prize (best paper award) at ICCV 2013. She received her Ph.D. in Computer Science at Cornell University and her BS in Computer Science and Engineering at Seoul National University in Korea.

Pascale Fung

Hong Kong University of Science and Technology

Pascale Fung is a Professor at the Department of Electronic & Computer Engineering and Department of Computer Science & Engineering at The Hong Kong University of Science & Technology (HKUST), and a visiting professor at the Central Academy of Fine Arts in Beijing. She is an elected Fellow of the Association for Computational Linguistics (ACL) for her “significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans”. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her “contributions to human-machine interactions”, and an elected Fellow of the International Speech Communication Association for “fundamental contributions to the interdisciplinary area of spoken language human-machine interactions”. She is a member of the IEEE Working Group developing an IEEE standard, the Recommended Practice for Organizational Governance of Artificial Intelligence. Her research team has won several best and outstanding paper awards at ACL and NeurIPS workshops.

Deb Raji

Mozilla Foundation

Deborah is a Mozilla Fellow interested in algorithmic auditing. She works closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products. She has also worked with Google’s Ethical AI team and has been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on various projects to operationalize ethical considerations in ML engineering practice. Recently, she was named to the Forbes 30 Under 30 and to MIT Technology Review’s 35 Innovators Under 35.

Baobao Zhang

Syracuse University

Baobao Zhang is an assistant professor of Political Science at the Maxwell School of Citizenship and Public Affairs at Syracuse University. She is also a CIFAR Azrieli Global Scholar and a research affiliate with the Centre for the Governance of AI.


Her current research focuses on trust in digital technology and the governance of artificial intelligence (AI). She studies (1) public and elite opinion toward AI, (2) how the American welfare state could adapt to the increasing automation of labor, and (3) attitudes toward COVID-19 surveillance technology. Her previous research covered a wide range of topics, including the politics of the U.S. welfare state, attitudes toward climate change, and survey methodology.


She graduated with a PhD in political science (2020) and an MA in statistics (2015) from Yale University. In 2019-2020, she worked as a postdoctoral fellow in MIT’s Political Science Department and as a fellow at the Berkman Klein Center for Internet & Society at Harvard University.