AAAI Workshop on
Responsible Language Models
(ReLM 2024)
February 26, 2024. Vancouver, Canada
Language models (LMs) have become increasingly prevalent and now serve as key components of modern technology. These models have revolutionized how we interact with technology; however, they also introduce new challenges, such as bias and discrimination in generated content, privacy leakage, model vulnerabilities, dissemination of fake and misleading content, copyright and plagiarism concerns, and the environmental impact of training and using LMs. For example, because LMs are trained on large amounts of data that often exhibit biases, they risk unintentionally propagating systemic discrimination. They are likewise prone to leaking training data, violating user privacy, and hallucinating false content. In light of these risks, it is imperative to develop and deploy LMs and their applications in accordance with responsible AI principles.
The Responsible Language Models (ReLM) workshop will focus on both the theoretical and practical challenges of designing and deploying responsible LMs, and will have a strong multidisciplinary component, promoting dialogue and collaboration to develop more trustworthy and inclusive technology. We invite discussions and research on key topics such as bias identification and quantification, bias mitigation, transparency, privacy and security, hallucination, uncertainty quantification, and other risks in LMs.
CALL FOR PARTICIPATION
Objective
In this workshop, we seek to:
Promote collaboration between NLP researchers from academia and industry, domain experts from multidisciplinary areas (such as healthcare, media, and law), and fairness specialists to explore strategies for the responsible and safe use of LMs across various domains.
Identify and examine the risks posed by bias in LMs through different lenses, including those of governments, policymakers, developers, and domain experts.
Promote dialogue that integrates technological insights with policy perspectives, enabling a more comprehensive understanding of the subject.
Advocate for policies that establish standardized protocols for LMs prior to their deployment.
Topics
We invite submissions from participants who can contribute theory, techniques, or strategies that address the many aspects of responsibly deploying AI models. The topics of interest include, but are not limited to, the following:
Explainability/interpretability:
Explainability techniques for the traditional pre-training and fine-tuning paradigm.
Explainability techniques for the prompting-based paradigm.
Evaluation of explainability techniques.
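To make the scope concrete, the following is a minimal sketch of one such technique, input-gradient saliency for a fine-tuned classifier; the model name and example sentence are assumptions chosen purely for illustration.

# Minimal sketch: input-gradient saliency for a fine-tuned classifier.
# The model name and example sentence are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

enc = tok("The movie was surprisingly good.", return_tensors="pt")
# Detach the token embeddings so gradients accumulate on them directly.
embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
pred = logits.argmax(-1).item()
logits[0, pred].backward()  # gradient of the predicted class score

# The gradient norm per token is a crude proxy for token influence.
scores = embeds.grad.norm(dim=-1).squeeze(0).tolist()
for token, score in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), scores):
    print(f"{token:>12s}  {score:.4f}")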
Privacy and security:
Privacy and security issues in language models.
Data protection, data anonymization, and user consent.
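As one concrete example under this theme, the sketch below shows rule-based PII redaction as a first step toward data anonymization; the patterns are illustrative and deliberately incomplete (names, addresses, and other identifiers would require NER-based handling).

# Minimal sketch: rule-based PII redaction for text data.
# The regular expressions are illustrative assumptions, not a complete system.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    # Note: personal names ("Jane") are not caught; that needs an NER model.
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or +1 (604) 555-0100."))
# -> Reach Jane at [EMAIL] or [PHONE].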
Bias and fairness:
Releasing datasets, or analyzing existing datasets, for the identification, quantification, and mitigation of bias in LMs.
Identifying the amplification and perpetuation of existing biases in the data.
Proposing novel metrics for bias evaluation.
Assessing the trade-offs between accuracy and fairness before and after bias mitigation.
Strategies for bias mitigation in LMs.
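As a concrete illustration of the kind of quantification in scope, the sketch below contrasts a masked LM's occupation probabilities under gendered templates; the model, templates, and occupation list are assumptions for the example.

# Minimal sketch: template-based bias quantification with a masked LM.
# The model, templates, and occupation list are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
model.eval()

def mask_probs(template):
    # Probability distribution the model assigns to the [MASK] position.
    enc = tok(template.format(tok.mask_token), return_tensors="pt")
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**enc).logits[0, pos]
    return logits.softmax(-1)

for occ in ["nurse", "engineer", "teacher"]:
    occ_id = tok.convert_tokens_to_ids(occ)
    p_he = mask_probs("he works as a {}.")[occ_id]
    p_she = mask_probs("she works as a {}.")[occ_id]
    # Log-ratio > 0 means the model associates the occupation more with "he".
    print(f"{occ:>10s}  log(P_he/P_she) = {torch.log(p_he / p_she).item():+.3f}")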
Robustness and generalization:
Out-of-distribution (OOD) generalization and adversarial robustness of LLMs.
Techniques to analyze and mitigate shortcut learning in LLMs.
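To illustrate, a minimal robustness check under character-level noise might look like the sketch below; the model and perturbation scheme are illustrative assumptions, not a prescribed evaluation protocol.

# Minimal sketch: checking a classifier's stability under typo-style noise.
# The model and perturbation scheme are illustrative assumptions.
import random
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

def typo(text, rate=0.1, seed=0):
    # Randomly swap adjacent characters to simulate noisy input.
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

clean = "An absolutely wonderful, heartfelt film."
noisy = typo(clean)
print(clf(clean)[0], "->", clf(noisy)[0])  # compare label and confidence drift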
Uncertainty quantification:
Measuring the confidence of LMs and enabling LMs to express their uncertainty.
Setting a benchmark for LM uncertainty.
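A minimal sketch of one simple confidence signal, the per-token predictive entropy of a causal LM, is given below; the model and example sentence are assumptions for illustration.

# Minimal sketch: per-token predictive entropy as a confidence signal.
# The model and example sentence are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

enc = tok("The capital of Canada is Ottawa.", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits[0]  # (seq_len, vocab)

# Entropy of each next-token distribution: lower entropy = more confident.
probs = logits.softmax(-1)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
for t, h in zip(tokens[1:], entropy[:-1].tolist()):  # logits at i predict token i+1
    print(f"{t!r:>12}  H = {h:.2f} nats")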
Ethical Policy Formation and AI Governance (AIG):
Ethical AI principles and guidelines regarding the utilization of LMs.
Responsible development and deployment of LMs.
Ethical dilemmas in language models.
Responsible LLMOps (LLM Operationalization).