MEDIQA-M3G @ NAACL-ClinicalNLP 2024
Multilingual & Multimodal Medical Answer Generation
Motivation
The rapid development of telecommunication technologies, the increased demand for healthcare services, and the needs arising from the recent pandemic have accelerated the adoption of remote clinical diagnosis and treatment. In addition to live meetings with doctors, which may be conducted by telephone or video, asynchronous options such as e-visits, emails, and messaging chats have also proven to be cost-effective and convenient.
In this task, we focus on the problem of multimodal query response generation in clinical dermatology. Inputs include text that provides the clinical context and query, as well as one or more images. The challenge is to generate an appropriate textual response to the query.
Consumer health question answering has been the subject of past challenges and research; however, prior work has focused only on text [1]. Previous work on visual question answering has focused mainly on radiology images and did not include additional clinical text input [2]. While there is substantial work on dermatology image classification, most of it addresses lesion malignancy classification on dermatoscopic images [3].
To the best of our knowledge, this is the first challenge and study to address the automatic generation of clinical responses from textual clinical history together with user-generated images and queries.
[1] Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering. Asma Ben Abacha, Chaitanya Shivade, Dina Demner-Fushman. https://aclanthology.org/W19-5039/
[2] VQA-Med: Overview of the Medical Visual Question Answering Task at ImageCLEF 2019. Asma Ben Abacha, Sadid A. Hasan, Vivek V. Datla, Joey Liu, Dina Demner-Fushman, and Henning Müller. https://www.semanticscholar.org/paper/VQA-Med%3A-Overview-of-the-Medical-Visual-Question-at-Abacha-Hasan/9eeeb23546d3d2bbc73959bffc6819f2335f3c83
[3] Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. Zhouxiao Li, Konstantin Christoph Koban, Thilo Ludwig Schenck, Riccardo Enzo Giunta, Qingfeng Li, and Yangbai Sun. https://pubmed.ncbi.nlm.nih.gov/36431301/
Tasks
Participants will be given textual inputs, which may include a clinical history and a query, along with one or more associated images. The task consists of generating a relevant textual response.
The training data were translated and adapted from Chinese datasets.
For the test set, participants may choose to work on one or more languages: Chinese (Simplified), English, and Spanish.
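To make the input/output structure concrete, here is a minimal, hypothetical sketch in Python of how a single encounter and a system-generated response might be represented. The field names (encounter_id, language, query_text, image_paths) and the placeholder generation function are illustrative assumptions only; they do not reflect the official data schema or evaluation harness.

```python
# Illustrative sketch of a single task instance and a system response.
# All names below are hypothetical and not part of the official data format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class EncounterInstance:
    encounter_id: str                                      # unique identifier for the query
    language: str                                          # e.g. "zh", "en", or "es"
    query_text: str                                        # clinical history and consumer query
    image_paths: List[str] = field(default_factory=list)   # one or more associated images


def generate_response(instance: EncounterInstance) -> str:
    """Placeholder for a participant's multimodal generation system."""
    # A real system would encode instance.image_paths together with
    # instance.query_text and decode a free-text clinical response.
    return "A dermatologist should evaluate the affected area in person."


if __name__ == "__main__":
    example = EncounterInstance(
        encounter_id="demo-001",
        language="en",
        query_text="I noticed an itchy red rash on my forearm last week. What could it be?",
        image_paths=["images/demo-001_1.jpg"],
    )
    print(generate_response(example))
```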
Registration, Datasets & Evaluation
Please complete the ClinicalNLP-MEDIQA 2024 registration form first: https://docs.google.com/forms/d/e/1FAIpQLScUnP2TJQX996BR-6dd6GvWAmkfE8VrX135I3VAYuecD1VR9Q/viewform?vc=0&c=0&w=1&flr=0
Then, accept the Terms and Conditions and join the Codabench project: https://www.codabench.org/competitions/1632/
Schedule
All deadlines are 11:59 PM UTC-12:00 (anywhere on Earth)
First CFP & Registration opens: Monday January 8, 2024
Training & validation data release: Friday January 26, 2024
Registration ends: Monday March 18, 2024
Test data release: Tuesday March 26, 2024
Run submission due: Thursday March 28, 2024
Code submission due: Friday March 29, 2024 (The GitHub repo URL should be included in the submission form)
Release of the results by the organizers: Monday April 1, 2024
Paper submission period starts: Monday April 8, 2024
Paper submission due: Wednesday April 10, 2024
Notification of acceptance: Thursday April 18, 2024
Final versions of papers due: Wednesday April 24, 2024
ClinicalNLP Workshop @ NAACL 2024: June 21 or 22, 2024, Mexico City, Mexico
Contact
If you have any questions regarding your team's registration, please email us at mediqa.organizers@gmail.com.
For further updates or inquiries, join the MEDIQA Google group (https://groups.google.com/g/mediqa-nlp) and email us at mediqa-nlp@googlegroups.com (mailing list).
Organizers
Asma Ben Abacha, Microsoft, USA
Wen-wai Yim, Microsoft, USA
Meliha Yetisgen, University of Washington, USA
Fei Xia, University of Washington, USA
Martin Krallinger, Barcelona Supercomputing Center (BSC), Spain