First Multimodal Sensing Grand Challenge for Next-Gen Pain Assessment (AI4PAIN)



Automatic pain recognition, a fundamental task in healthcare and affective computing, stands at the forefront of technologies that can enhance patient well-being and enable more empathetic interventions. The integration of artificial intelligence (AI) and sensing technologies holds unprecedented potential to improve the accuracy and depth of pain assessment. This Grand Challenge addresses the growing need for advanced pain recognition methods that fuse functional near-infrared spectroscopy (fNIRS) and video analysis of the face through cutting-edge AI. The challenge comes at a critical time, as the intersection of these technologies presents a unique opportunity to significantly advance our understanding and capabilities in automated pain assessment.

Pain, a complex and subjective experience, poses significant challenges to accurate assessment and effective management. Current methods often rely on self-reporting, which is limited in populations such as infants, non-verbal patients, and people with cognitive impairments. Integrating fNIRS, which provides insight into neural activity, with facial video recording enhances our ability to capture both the neurophysiological and behavioural aspects of pain. The combination of these two sensing modalities offers a novel approach to pain recognition, introducing a level of comprehensiveness and objectivity that has not been explored previously, which makes their applicability to pain recognition well worth investigating.

fNIRS and Facial Video

Recent advancements in fNIRS technology and the surge in AI capabilities, particularly in facial video analysis, make this challenge timely. The current landscape provides an opportunity to develop advanced models that improve pain recognition through the fusion of neural and behavioural information, a possibility unexplored until now. This marks a crucial moment to push the boundaries of attainable progress in the field.

Participants in this grand challenge will contribute novel algorithms and models for pain recognition, demonstrating the efficacy of combining neural and behavioural information. The challenge also aims to generate a multimodal sensing dataset, facilitating benchmarking and serving as a valuable resource for future research in pain assessment.
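As a purely illustrative sketch of the kind of multimodal fusion participants might explore, the snippet below shows feature-level (early) fusion: per-modality normalisation followed by concatenation of an fNIRS feature vector and a facial feature vector. The feature names and dimensions are assumptions for illustration only and do not reflect the challenge's actual data format or any required baseline.

```python
# Hypothetical early-fusion sketch; dimensions and feature semantics
# (e.g. HbO/HbR statistics, action-unit intensities) are assumed, not
# taken from the AI4PAIN dataset specification.
import numpy as np

def fuse_features(fnirs_feat: np.ndarray, face_feat: np.ndarray) -> np.ndarray:
    """Z-score each modality independently, then concatenate.

    Normalising per modality keeps one modality's scale from
    dominating the fused representation.
    """
    def zscore(x: np.ndarray) -> np.ndarray:
        return (x - x.mean()) / (x.std() + 1e-8)

    return np.concatenate([zscore(fnirs_feat), zscore(face_feat)])

# Toy example: 4-dim fNIRS features and 3-dim facial features.
fnirs = np.array([0.2, 0.5, 0.1, 0.4])
face = np.array([1.0, 3.0, 2.0])
fused = fuse_features(fnirs, face)
print(fused.shape)  # (7,)
```

A late-fusion alternative would instead train one classifier per modality and combine their predictions, which is often more robust when one modality is missing or noisy.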


Important Dates

Train data available: 1 April 2024

Baseline Results available: 19 April 2024

Test data available: 18 May 2024

Results deadline: 1 June 2024

Paper submission deadline: 12 June 2024

Notification: 1 August 2024

Camera-ready papers: 10 August 2024

Challenge results: 15 September 2024