Call for Papers

Developing responsible AI requires a paradigm shift from accuracy-optimized models to models that prioritize ethical use. This shift demands a change in how we build resources for NLP applications, including datasets, annotation schemes, language models, and evaluation metrics. For example, each manually annotated dataset should be accompanied by detailed information on the data sources, the sampling process, the annotation process, and all other important decisions (Data Statements). Datasets should represent input from a diverse and representative sample of users and be annotated by workers with diverse backgrounds. When building or using language representations, such as word and sentence embeddings, researchers should be aware that such representations often perpetuate and accentuate unfair biases and may require mitigation techniques. Evaluation metrics should take into account the ethics- and fairness-related costs associated with different kinds of errors. The carbon footprint of computationally demanding models should also be considered a cost when measuring the effectiveness of AI systems.

There is also a need for tools and resources that translate technical jargon into plain language and explain automatic outcomes to non-expert users. Explainable AI will empower society to scrutinize algorithms for ethical use (or the risk of misuse) in specific applications and ensure that everyone has a voice in defining and validating ethical AI.

We invite papers describing original research on the design, creation, and use of language resources (annotated and unlabeled corpora, lexicons, dictionaries, templates, language representations, evaluation metrics, etc.) and tools that address any of the issues in responsible AI, including (but not limited to):

- Fairness and unintended biases

- Confidentiality and privacy

- Interpretability and explainability

- Safety and security

- Transparency

- Accountability

- Integrity

The language resources and tools can be designed for any one or several NLP (or non-NLP) applications, including (but not limited to):

- Syntactic parsing and tagging

- Lexical semantics

- Language representation

- Discourse analysis

- Information retrieval

- Information extraction

- Natural language generation

- Textual inference

- Speech processing

- Dialogue systems

- Argument mining

- Sentiment and emotion analysis

- Machine translation

- Question answering

- Summarization

- Social media analysis

- Computational social science

- Health and wellness applications

- Auditing in highly regulated fields, such as the medical, financial, and legal domains


Paper Submission


We solicit original papers that describe language resources, evaluation metrics, and tools designed to assist in developing and assessing ethical AI systems. We also welcome papers highlighting ethics-related problems in existing, widely used language resources (e.g., labeled datasets, word embeddings). We invite regular papers describing completed projects, emerging-research papers presenting ongoing work, and position papers arguing an opinion on one of the topics of interest.

Papers can be up to 8 pages long (plus unlimited pages for references) and should be formatted according to the LREC style guidelines. The review process will be double-blind. In preparing your manuscript, do not include any information that could reveal your identity or that of your co-authors. The title section of your manuscript should not contain any author names, email addresses, or affiliations. If you do include any author names on the title page, your submission will be automatically rejected. In the body of your submission, eliminate all direct references to your own previous work; that is, avoid phrases such as "this contribution generalizes our results for XYZ". Also, please do not disproportionately cite your own previous work. Submissions will be reviewed by at least two members of the Program Committee. Accepted papers will be invited for an oral (or poster) presentation at the workshop and will be published in the workshop proceedings. At least one author of each accepted paper must attend the workshop to present the paper.

Submissions to multiple venues are allowed, but papers must be withdrawn from the other venues if accepted at this workshop.