SCI-CHAT: Workshop on Simulation of Conversational Intelligence in Chat
EACL 2024, Malta, March 2024
The aim of this workshop is to bring together experts working on open-domain dialogue research. Many challenges remain in this rapidly advancing research area, such as learning information from conversations, engaging in realistic and convincing simulation of human intelligence, and reasoning, amongst others.
SCI-CHAT follows previous workshops on open-domain dialogue, but with a focus on the simulation of intelligent conversation, including the ability to follow a challenging topic over a multi-turn conversation while positing, refuting, and reasoning over arguments. Live human evaluation is employed as the primary mechanism for evaluating models (Ji et al., 2022). The workshop will include a research track and a shared task.
Research track: aims to provide a venue for reporting and discussing the latest developments in the simulation of intelligent conversation, chit-chat, and open-domain dialogue AI.
Shared task: will focus on simulating intelligent conversations. Participants will be asked to submit automated dialogue agents (exposed as an API) capable of carrying out nuanced conversations over multiple dialogue turns and of positing, refuting, and reasoning over arguments. Participating systems will be evaluated interactively in a live human evaluation following the procedure described in Ji et al. (2022). All data acquired within the context of the shared task will be made public, providing an important resource for improving metrics and systems in this research area.
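For illustration only, the sketch below shows one possible way a dialogue agent could be exposed as an HTTP API. The endpoint name, request and response schemas, and the placeholder response generator are assumptions made for this example; they are not the official shared-task interface, which will be specified alongside the baseline systems.

# Hypothetical sketch of a dialogue agent exposed over HTTP (illustrative only;
# the /respond route and JSON schema are assumptions, not the official
# SCI-CHAT shared-task interface).
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_reply(history):
    # Placeholder generator: a real submission would call a trained dialogue
    # model here, conditioned on the full conversation history.
    last_turn = history[-1]["text"] if history else ""
    return "Could you say more about: " + last_turn

@app.route("/respond", methods=["POST"])
def respond():
    # Assumed payload: {"dialogue_id": str,
    #                   "history": [{"speaker": "human" | "agent", "text": str}, ...]}
    payload = request.get_json(force=True)
    history = payload.get("history", [])
    return jsonify({"dialogue_id": payload.get("dialogue_id"),
                    "text": generate_reply(history)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

A real submission would replace generate_reply with a trained dialogue model and return its response for the given conversation history.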
Topics of Interest:
SCI-CHAT's research track aims to explore recent advances and challenges in open-domain dialogue research. Researchers working on all aspects of open-domain dialogue are invited to submit papers on recent advances, resources, tools, analysis, evaluation, and challenges on the broad theme of open-domain dialogue. Topics of the workshop include, but are not limited to, the following:
Intelligent conversation, chit-chat, open-domain dialogue;
Automatic and human evaluation of open-domain dialogue;
Limitations, risks and safety in open-domain dialogue;
Instruction-tuned and instruction-enabled models;
Any other topic of interest to the dialogue community.
Important Dates:
Release of training and development data: November 9th, 2023
Release of baseline systems: November 9th, 2023
Research paper submission via SoftConf: December 18th, 2023
Pre-reviewed ARR commitment deadline: January 17th, 2024
Notification of research paper acceptance: January 20th, 2024
Preliminary system submission deadline: January 13th, 2024 (optional; submit early if you would like help testing your API)
System submission (API) deadline: January 20th, 2024
System description paper via SoftConf: January 26th, 2024
Camera-ready papers due: January 30th, 2024
Overview of results at the one-day workshop: March 21st or 22nd, 2024
Committee Members
Andreas Vlachos, University of Cambridge
David Vandyke, Apple
Emine Yilmaz, University College London
Hsien-chin Lin, Heinrich-Heine University
Ivan Vulić, University of Cambridge
Julius Cheng, University of Cambridge
Michael Zock, CNRS, LIF, Aix-Marseille University
Songbo Hu, University of Cambridge
Stefan Ultes, Otto-Friedrich University Bamberg
Valerio Basile, University of Turin
Anti-Harassment Policy:
This workshop adheres to the ACL Anti-Harassment Policy.
References
Tianbo Ji, Yvette Graham, Gareth Jones, Chenyang Lyu, and Qun Liu. 2022. Achieving Reliable Human Evaluation of Open-Domain Dialogue Systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.