Personalised diagnosis leverages individual patient data—such as imaging, genomic, clinical, and lifestyle information—to deliver more precise, patient-specific medical care. This approach integrates AI technologies to enhance medical imaging analysis, enabling accurate disease classification, cancer subtype identification, TNM staging, and image reconstruction across modalities such as CT, MRI, PET, ultrasound, and histopathology. It also uses predictive algorithms to anticipate health outcomes and guide tailored prevention and treatment plans. By incorporating human-computer interfaces (e.g., EEG-based systems), this research aims to advance personalised, patient-centred healthcare. The overarching goal is to translate cutting-edge AI research into real-world innovations that improve public health outcomes through interdisciplinary collaboration.
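As a minimal illustration of the disease-classification task mentioned above, the sketch below assigns a new image to the class whose mean training image is closest (a nearest-centroid classifier over flattened pixel intensities). All data here are tiny synthetic placeholders, not clinical images, and real pipelines would use far richer models:

```python
import numpy as np

def nearest_centroid_predict(train_X, train_y, query):
    """Assign the query image to the class whose mean training image is closest."""
    labels = sorted(set(train_y))
    centroids = {c: train_X[np.array(train_y) == c].mean(axis=0) for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(query - centroids[c]))

# Synthetic 4-pixel "images": class 0 is low intensity, class 1 is high intensity.
X = np.array([[0.1, 0.2, 0.1, 0.0],
              [0.0, 0.1, 0.2, 0.1],
              [0.9, 0.8, 1.0, 0.9],
              [1.0, 0.9, 0.8, 1.0]])
y = [0, 0, 1, 1]

print(nearest_centroid_predict(X, y, np.array([0.85, 0.9, 0.95, 0.9])))  # 1
```

The same structure scales up directly: replace the raw pixel vectors with features from a trained network and the centroid rule with any classifier.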
The rise of the omics era—driven by the Human Genome Project and next-generation sequencing—has generated vast genomic, transcriptomic, proteomic, and metabolomic datasets that are transforming biomedical research. AI plays a critical role in analysing these complex, large-scale datasets to identify disease markers, predict outcomes, and enhance diagnosis, prognosis, and treatment. By integrating multi-omics data with clinical information, AI enables holistic insights into biological mechanisms. This theme focuses on AI-driven multi-omics integration, the development of interpretable machine learning models, disease driver discovery, and AI-powered drug discovery—including biomarker generation and in silico compound screening—while addressing challenges such as high dimensionality, data imbalance, and interpretability.
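One common multi-omics integration strategy is early fusion: standardise each omics block separately, so that no single modality dominates by scale, then concatenate the per-patient vectors into one feature matrix for downstream modelling. A minimal numpy sketch with synthetic data (all block sizes and values are illustrative assumptions):

```python
import numpy as np

def zscore(block):
    """Standardise one omics block column-wise so modalities are comparable."""
    return (block - block.mean(axis=0)) / (block.std(axis=0) + 1e-8)

def early_fusion(*omics_blocks):
    """Concatenate standardised omics blocks into one patient-by-feature matrix."""
    return np.hstack([zscore(b) for b in omics_blocks])

rng = np.random.default_rng(0)
genomics = rng.normal(size=(5, 100))         # e.g. 100 variant features per patient
transcriptomics = rng.normal(size=(5, 500))  # e.g. 500 gene-expression features
fused = early_fusion(genomics, transcriptomics)
print(fused.shape)  # (5, 600)
```

The fused matrix also makes the high-dimensionality challenge noted above concrete: five patients against six hundred features, which is why dimensionality reduction and regularisation are typically applied next.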
While machine learning is increasingly applied in medical science, its clinical adoption faces challenges due to the opaque, “black box” nature of many AI models, raising concerns around patient safety, ethics, legal accountability, and data privacy. To address these issues, this theme focuses on Explainable AI (XAI) techniques that aim to make AI outputs transparent, interpretable, and traceable—similar to human reasoning—especially in applications like CT scans and X-rays. Key areas include local and global interpretation methods, integration of XAI into Computer-Aided Diagnosis (CAD) systems, aligning explanation methods with medical professionals' expectations, developing biologically interpretable models, and designing privacy-preserving AI infrastructures to ensure secure and trustworthy medical data sharing.
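As a hedged sketch of one local interpretation method, occlusion analysis perturbs one input feature at a time and scores it by how much the model's output changes; in imaging CAD systems the same idea is applied to image patches rather than single features. The toy linear "risk model" and its weights below are purely illustrative:

```python
import numpy as np

def occlusion_importance(predict, x, baseline=0.0):
    """Local explanation: score each feature by the output change when it is
    replaced with a baseline value (larger change = more influential)."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline
        scores.append(abs(base_pred - predict(x_occ)))
    return np.array(scores)

# Toy "model": a linear risk score whose second feature dominates.
weights = np.array([0.1, 2.0, 0.3])
predict = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
scores = occlusion_importance(predict, x)
print(scores.argmax())  # 1
```

Because it treats the model as a black box, this style of explanation applies unchanged to deep networks, which is part of why occlusion- and perturbation-based methods are a common starting point for XAI in medical imaging.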