Third Workshop on Safety for Conversational AI

Co-located with LREC/COLING 2024

Torino, Italy - 21st May 2024

If you are attending in person, the workshop will be in Room 13 (Bruxelles)

WORKSHOP PARTICIPANTS: PLEASE JOIN OUR SLACK CHANNEL!

All links for online participation, as well as the latest updates, will be posted there.

This workshop proposes a focused community effort to address current issues in Safety in Conversational AI. The objectives are:


Location, format, and schedule

The third Safety for Conversational AI workshop will consist of up to three keynotes, a panel, and oral and poster paper presentations. The event will be hybrid (both in-person at LREC/COLING 2024 in Turin and remotely).

For the content of the poster sessions, we seek archival and non-archival submissions in the form of long, short, and position papers (see Call for papers below). The topics for the panel will be informed by these submissions.

Schedule

All times are local (CEST).


9:00-9:30: Welcome message from the organisers

9:40-10:30: Keynote by Laura Weidinger

10:30-11:00: Coffee break 

11:00-13:00: Oral presentations


13:00-14:00: Lunch break


14:00-15:00: Keynote by Maurice Jakesch

15:00-16:00: Keynote by Stevie Bergman

16:00-16:30: Coffee break

16:30-17:30: Round tables (TBC)


Accepted Papers

DIVERSITY-AWARE ANNOTATION FOR CONVERSATIONAL AI SAFETY by Alicia Parrish, Vinodkumar Prabhakaran, Lora Aroyo, Mark Díaz, Christopher M. Homan, Greg Serapio-García, Alex S. Taylor and Ding Wang

FAIRPAIR: A ROBUST EVALUATION OF BIASES IN LANGUAGE MODELS THROUGH PAIRED PERTURBATIONS by Jane Dwivedi-Yu

GROUNDING LLMS TO IN-PROMPT INSTRUCTIONS: REDUCING HALLUCINATIONS CAUSED BY STATIC PRE-TRAINING KNOWLEDGE by Angus Addlesee

LEARNING TO SEE BUT FORGETTING TO FOLLOW: VISUAL INSTRUCTION TUNING MAKES LLMS MORE PRONE TO JAILBREAK ATTACKS by Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou and Alessandro Suglia

USING INFORMATION RETRIEVAL TECHNIQUES TO AUTOMATICALLY REPURPOSE EXISTING DIALOGUE DATASETS FOR SAFE CHATBOT DEVELOPMENT by Tunde Oluwaseyi Ajayi, Gaurav Negi, Mihael Arcan and Paul Buitelaar

Important Dates

All dates are Anywhere on Earth.

Call for papers

We look for regular or work-in-progress papers that report:



Submission Formats

Instructions For Double-Blind Review

We will adopt a double-blind reviewing process. 

As reviewing will be double-blind, papers must not include authors’ names and affiliations. Furthermore, self-references or links (such as GitHub repositories) that reveal the authors’ identity must be avoided: instead of “We previously showed (Smith, 1991) …”, write “Smith previously showed (Smith, 1991) …”. Papers that do not conform to these requirements will be rejected without review.

Papers should not refer, for further detail, to documents that are not available to the reviewers. For example, do not omit or redact important citation information to preserve anonymity; instead, use a third-person or named reference to the work, as described above (“Smith showed” rather than “we showed”). If important citations are not available to reviewers (e.g., papers awaiting publication), these papers should be anonymised and included in an appendix; they can then be referenced from the submission without compromising anonymity.

Papers may be accompanied by a resource (software and/or data) described in the paper, but these resources should also be anonymised.

Submission Instructions

The general LREC/COLING 2024 submission and formatting guidelines apply. For any questions, use Slack to reach out to the workshop organizers.

Organizing Committee

Contact Us

We use Slack to share communications and provide assistance in preparation and during the workshop with interested participants.

For queries, join our workspace through the invitation link

Program Committee