Welcome to the Hidden-RAD2 challenge, a Core Task of NTCIR-19!
Building on the success of the NTCIR-18 Hidden-RAD task, Hidden-RAD2 continues to explore how well AI can infer and explain the hidden causality within radiology reports. We aim to move beyond simple diagnosis generation and advance eXplainable AI (XAI) that can articulate the underlying reasons why a diagnosis was made.
A key feature of the NTCIR-19 edition is the introduction of a new Verification and Correction task. In addition to generating causal explanations, this challenge will assess an AI system's ability to detect and correct its own errors (hallucinations), with the aim of improving the reliability and safety of medical AI.
Key Objectives:
Infer Causality: Restore the hidden causal links between the observed findings and the final impression in chest X-ray reports.
Generate Explanations: Produce clinically meaningful, logically coherent explanation reports based on the restored causality.
Detect & Correct Hallucinations (NEW!): Identify and correct factually incorrect or logically inconsistent content (hallucinations) in AI-generated explanations; a toy sketch of this subtask's shape follows this list.
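To make the new subtask's input/output shape concrete, here is a minimal, hypothetical sketch in Python. Everything in it is an assumption made for illustration: the field names, the sentence-level granularity, and the naive token-overlap check do not reflect the official Hidden-RAD2 data format or evaluation. A real system would likely use clinical entity linking or a natural-language-inference model rather than word overlap.

```python
# Hypothetical sketch of the Verification and Correction subtask's shape.
# Nothing here reflects the official Hidden-RAD2 data format or metrics;
# the field names and the naive overlap heuristic are illustrative only.

from dataclasses import dataclass


@dataclass
class Example:
    report: str       # source chest X-ray report (findings + impression)
    explanation: str  # AI-generated causal explanation to verify


def flag_unsupported_sentences(ex: Example, min_overlap: float = 0.3) -> list[str]:
    """Flag explanation sentences with low word overlap against the report.

    Token overlap is only a stand-in to show the input/output shape;
    it is not a serious hallucination detector.
    """
    report_tokens = set(ex.report.lower().split())
    flagged = []
    for sentence in ex.explanation.split(". "):
        tokens = set(sentence.lower().split())
        if not tokens:
            continue
        overlap = len(tokens & report_tokens) / len(tokens)
        if overlap < min_overlap:
            flagged.append(sentence)  # candidate hallucination
    return flagged


if __name__ == "__main__":
    ex = Example(
        report="Findings: Patchy opacity in the right lower lobe. "
               "Impression: Findings suggest right lower lobe pneumonia.",
        explanation="The right lower lobe opacity suggests pneumonia. "
                    "A large left pleural effusion is also present.",
    )
    print(flag_unsupported_sentences(ex))  # flags the effusion sentence
```

In this toy example, the effusion sentence is flagged because nothing in the report supports it, which mirrors the detection half of the subtask; the correction half would then rewrite or remove the flagged span.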
We look forward to your participation in building a future of safer and more trustworthy medical AI.