As language technologies have become increasingly prevalent, there is a growing awareness that the decisions we make about our data, methods, and tools are often tied up with their impact on people and societies. This tutorial will provide an overview of real-world applications of language technologies and the potential ethical implications associated with them. We will discuss the philosophical foundations of ethical research along with state-of-the-art techniques. Through this tutorial, we intend to provide the NLP researcher with an overview of tools to ensure that the data, algorithms, and models that they build are socially responsible. These tools will include a checklist of common pitfalls that one should avoid (e.g., demographic bias in data collection), as well as methods to adequately mitigate these issues (e.g., adjusting sampling rates or de-biasing through regularization).
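To make one of the mitigations mentioned above concrete, here is a minimal sketch (not from the tutorial materials) of adjusting sampling rates so that a demographic attribute is balanced in the training data. The dataset, the `group` attribute, and the group labels are hypothetical illustrations; real mitigation choices depend on the task and the population involved.

```python
# Hypothetical example: oversample under-represented demographic groups so
# that each group appears equally often in the training set.
import random
from collections import Counter, defaultdict


def balanced_resample(examples, group_key, seed=0):
    """Oversample smaller groups (with replacement) up to the size of the largest group."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex[group_key]].append(ex)
    target = max(len(members) for members in by_group.values())
    resampled = []
    for members in by_group.values():
        # Keep all original examples, then sample with replacement to reach the target size.
        resampled.extend(members)
        resampled.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(resampled)
    return resampled


if __name__ == "__main__":
    # Toy dataset: group "A" is heavily over-represented relative to group "B".
    data = [{"text": f"example {i}", "group": "A" if i < 8 else "B"} for i in range(10)]
    balanced = balanced_resample(data, group_key="group")
    print(Counter(ex["group"] for ex in balanced))  # Counter({'A': 8, 'B': 8})
```

Simple oversampling like this is only one option; the tutorial also covers model-side approaches, such as adding a regularization term that penalizes reliance on demographic attributes during training.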
The tutorial is based on a new course on Ethics and NLP developed at Carnegie Mellon University by Yulia Tsvetkov, Alan W Black, and Shrimai Prabhumoye.
Image: Ads illustrating how human biases can be problematically replicated in computational systems. (Source: UN Women)
This tutorial is aimed at all faculty, students, and industry researchers across NLP who are interested in more deeply understanding the social and ethical implications of their work. Our goal is that attendees will come away from the tutorial with a renewed understanding not only of how to avoid ethical pitfalls in research, but also of how their work can contribute to our community's positive social impact in the long term.
This tutorial will consist of a series of talks combined with ample time for discussion. As organizers, we will aim to present a balanced foundation of material on these topics; nevertheless, the social and ethical issues at hand are often complex, with no clear answers, so we look forward to vibrant discussions throughout with everyone in attendance.
Our time will be broken up into roughly three modules:
Please feel free to contact us (ytsvetko@cs.cmu.edu, vinod@cs.stanford.edu, robvoigt@stanford.edu) if you have any questions or concerns. We sincerely hope to see you at the tutorial!
Here is a link to the slides that were presented at the tutorial. Please feel free to leave your comments.