Trustworthy Machine Learning for Healthcare Workshop

Speakers

Title: Trustworthy Machine Learning in Medical Imaging

Abstract: Intelligent medical systems capable of capturing and interpreting sensor data and providing context-aware assistance promise to revolutionize interventional healthcare. However, flaws in common practice, as well as a lack of standardization in the field of medical image analysis, substantially impede the successful adoption of modern ML research into clinical use. Drawing from research within my own group as well as large international expert consortia, I will discuss pervasive shortcomings in current medical imaging procedures, focusing specifically on the three core aspects of image acquisition, image analysis, and algorithm validation, and present possible solutions. My talk will showcase the importance of systematically professionalizing every aspect of the medical imaging pipeline in order to ready intelligent imaging systems for clinical use.

Prof. Lena Maier-Hein is a full professor at Heidelberg University (Germany) and managing director of the National Center for Tumor Diseases (NCT) Heidelberg. At the German Cancer Research Center (DKFZ) she is head of the division Intelligent Medical Systems (IMSY) and managing director of the "Data Science and Digital Oncology" cross-topic program. Her research concentrates on machine learning-based biomedical image analysis with a specific focus on surgical data science, computational biophotonics and validation of machine learning algorithms. She is a fellow of the Medical Image Computing and Computer Assisted Intervention (MICCAI) society and of the European Laboratory for Learning and Intelligent Systems (ELLIS), president of the MICCAI special interest group on challenges and chair of the international surgical data science initiative.

Lena Maier-Hein serves on the editorial board of the journals Nature Scientific Data, IEEE Transactions on Pattern Analysis and Machine Intelligence and Medical Image Analysis. During her academic career, she has been distinguished with several science awards including the 2013 Heinz Maier Leibnitz Award of the German Research Foundation (DFG) and the 2017/18 Berlin-Brandenburg Academy Prize. She has received a European Research Council (ERC) starting grant (2015-2020) and consolidator grant (2021-2026).


Title: Generating Class-wise Visual Explanations for Deep Neural Networks

Abstract: While deep learning has achieved excellent performance in many tasks, its black-box nature means it is still a long way from being widely used in safety-critical domains such as healthcare. For example, it suffers from poor explainability and is vulnerable to attacks at both training and test time. Moreover, existing works mainly provide local explanations and lack the global knowledge needed to show class-wise explanations across the whole training procedure. In this talk, I will introduce our effort to visualize, in the input space, a global explanation for every class learned during training. Our solution finds a representation set that demonstrates the learned knowledge for each class, enabling analysis of the model's knowledge at different stages of training. We also show that the generated explanations can lend insights into diagnosing model failures, such as revealing triggers in a backdoored model.
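As a rough illustration of the general idea behind class-wise explanations in input space (not the speaker's actual method), the sketch below runs gradient ascent on the input of a toy linear classifier to synthesize a prototype input for a chosen class. All weights, dimensions, and hyperparameters here are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear classifier standing in for a trained network (weights are random here).
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
b = np.zeros(3)

def class_score(x, c):
    """Score the model assigns to class c for input x."""
    return W[c] @ x + b[c]

def explain_class(c, steps=200, lr=0.1):
    """Gradient ascent on the input to maximize the score of class c,
    yielding a prototype input that visualizes what the model associates with c."""
    x = np.zeros(8)
    for _ in range(steps):
        # For a linear model, the input gradient of class c's score is simply W[c].
        x += lr * W[c]
    return x

prototype = explain_class(0)  # a synthesized "explanation" input for class 0
```

For a real deep network the input gradient would be computed by backpropagation rather than read off the weights, but the optimization loop has the same shape.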

Prof. Minhao Cheng is an Assistant Professor in the Department of Computer Science and Engineering (CSE), Hong Kong University of Science and Technology (HKUST). He obtained his Ph.D. degree from the Department of Computer Science at the University of California, Los Angeles. His research interest is broadly in machine learning, with a focus on machine learning robustness and AutoML. He has published over 30 papers at top-tier AI conferences including ICML, NeurIPS, ICLR, ACL, and AAAI. He is a recipient of an ICLR 2021 Outstanding Paper Award.


Title: Safely Utilizing AI Model in Open Clinical Environment 

Abstract: During the past decade, deep learning has achieved great success in healthcare. However, most existing methods aim at model performance in terms of higher accuracy and lack information reflecting the reliability of the prediction. Such models cannot be trusted for making diagnoses and can even be disastrous in safety-critical clinical applications. How to build a reliable and robust healthcare system has become a focal topic in both academia and industry. In this talk, I will introduce our recent work on trustworthy AI in healthcare. I will also discuss some open challenges for trustworthy learning.
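One simple ingredient of the reliability-aware prediction the abstract alludes to is selective prediction: report a diagnosis only when the model is confident enough, and otherwise defer to a clinician. The sketch below is a generic illustration, not Dr. Fu's method, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.5):
    """Return the predicted class index, or None (defer to a clinician)
    when the model's top confidence falls below the threshold."""
    c = int(np.argmax(probs))
    return c if probs[c] >= threshold else None

# Confident prediction is returned; an uncertain one is rejected.
print(predict_with_rejection(np.array([0.9, 0.1])))       # prints 0
print(predict_with_rejection(np.array([0.4, 0.3, 0.3])))  # prints None
```

Real systems typically derive the confidence score from calibrated probabilities or explicit uncertainty estimates rather than raw softmax outputs.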

Dr. Huazhu Fu is a senior scientist at the Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore. He works on computer vision, machine learning, AI in healthcare, and trustworthy AI. He has published around 150 papers in top conferences/journals, with more than 10k citations. He received the Best Paper Award at ICME 2021, was a finalist for the Young Scientist Publication Impact Award at MICCAI 2021, and was listed among the Top 2% Scientists Worldwide identified by Stanford University in 2022. He currently serves as an Associate Editor of IEEE TMI, IEEE TNNLS, and IEEE JBHI. He has also served as an AC/Senior PC member for MICCAI, IJCAI, and AAAI, and as a co-organizer for medical i-challenge.


Title: Trustworthy Medical AI in the Loop of Algorithm and Clinic

Abstract: There are numerous efforts on the technical development and translation of AI/ML in the healthcare domain. In addition to classic challenges such as small datasets, limited annotations, and imbalanced classes, how to gain and enhance trust from the users and practitioners of medical AI/ML is an emerging topic and key to successful applications of AI in patient care. In this talk, the speaker will elaborate on the important pillars of developing trustworthy medical AI tools, discuss how to marry medical intelligence and AI to enhance trust from clinicians, and showcase a range of applications of AI/ML in medical imaging.

Prof. Shandong Wu, PhD, is a tenured Associate Professor in Radiology (primary), Biomedical Informatics, Bioengineering, Intelligent Systems, and Clinical and Translational Science at the University of Pittsburgh (Pitt), and an Adjunct Professor in the Machine Learning Department at Carnegie Mellon University (CMU). Dr. Wu leads the Intelligent Computing for Clinical Imaging (ICCI) lab (16 trainee members and >20 clinician collaborators). He is the founding director of the Pittsburgh Center for Artificial Intelligence Innovation in Medical Imaging (CAIIMI), which includes more than 117 multidisciplinary members from Pitt, UPMC, and CMU working on advancing AI research and clinical translation. Dr. Wu’s background is in Computer Science (Computer Vision) with additional clinical training in radiology research. His main research areas include computational biomedical imaging analysis, AI in clinical/translational applications and informatics, big (health) data coupled with machine/deep learning, quantitative imaging biomarkers and clinical studies, and radiomics/radiogenomics.


Title: Unlocking the Potential of Differential Privacy in Medical Imaging: Enabling Data Analysis while Protecting Patient Privacy 

Abstract: Medical imaging plays a vital role in diagnosing and treating various health conditions, but it also raises significant privacy concerns, as sensitive personal information can be contained within these images. Differential privacy, a privacy-preserving artificial intelligence technique, offers a solution to these challenges and enables the secure analysis of medical images while protecting patient privacy.

In this talk, we will focus on the potential of differential privacy in medical imaging. We will explore its various applications, including disease detection, diagnosis, and treatment planning, and discuss its ethical implications. We will also examine the technical aspects of differential privacy, including its implementation in machine learning algorithms, such as deep learning, and its limitations and challenges.

Furthermore, we will highlight some of our ongoing research and development efforts in this area, including recent advancements in differentially private deep learning for medical imaging. We will discuss the trade-offs between privacy and utility in these applications and provide insights on how to achieve a balance between the two.

Attendees will gain a deeper understanding of the potential and challenges of differential privacy in medical imaging and its implications for healthcare.
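For readers unfamiliar with how differential privacy typically enters deep learning training, the sketch below shows the core aggregation step of DP-SGD: clip each per-example gradient, average, and add calibrated Gaussian noise. This is a minimal numpy illustration, not the speaker's implementation; the clip norm and noise multiplier are arbitrary example values.

```python
import numpy as np

def dp_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style aggregation step: clip each per-example gradient
    to bound its sensitivity, average, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Noise scale is proportional to the sensitivity (clip_norm) per example.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise
```

The privacy/utility trade-off mentioned above lives in these two knobs: a smaller clip norm and larger noise multiplier give stronger privacy guarantees but noisier, less useful gradients.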

Prof. Georgios Kaissis is an adjunct assistant professor at TUM, where he leads the Privacy-Preserving and Trustworthy Artificial Intelligence research group at the Institute for Artificial Intelligence in Medicine. He also leads the Reliable AI research group at Helmholtz Zentrum Munich. He obtained his medical degree from LMU Munich, his Master's degree in Health Business Administration from FAU Nuremberg, and his board certification as a specialist diagnostic radiologist at the Institute for Diagnostic and Interventional Radiology at TUM, where he serves as a consultant radiologist. He completed his postdoc in Artificial Intelligence at the Department of Computing at Imperial College London.

Title: Overcoming Data Heterogeneity Challenges in Federated Learning 

Abstract: Federated learning (FL) is an emerging framework that enables multi-institutional collaboration in machine learning without sharing raw data. This presentation will discuss our ongoing progress in designing FL algorithms that embrace data heterogeneity in distributed medical data analysis. First, I will present our work on theoretically understanding FL training convergence and generalization using a neural tangent kernel, called FL-NTK. Then, I will present our algorithms for tackling data heterogeneity (in features and labels) and device heterogeneity, motivated by our theoretical foundation. Lastly, I will show promising results from applying our FL algorithms in healthcare applications.
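For context, the standard FedAvg aggregation step that most FL algorithms, including heterogeneity-aware variants like those in this talk, build on can be sketched in a few lines. This is a generic illustration, not the FL-NTK method itself; the client weights and sizes are toy values.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Two clients with different data volumes: the larger client dominates the average.
global_weights = fedavg([np.array([1.0, 1.0]), np.array([3.0, 3.0])],
                        client_sizes=[1, 3])
print(global_weights)  # prints [2.5 2.5]
```

Under data heterogeneity, this plain weighted average can drift away from any single client's optimum, which is exactly the failure mode heterogeneity-aware FL algorithms are designed to address.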

Prof. Xiaoxiao Li is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) at the University of British Columbia (UBC), where she started in August 2021. Before joining UBC, Dr. Li was a Postdoctoral Research Fellow in the Computer Science Department at Princeton University. Dr. Li obtained her PhD degree from Yale University in 2020. Her research interests range across the interdisciplinary fields of deep learning and biomedical data analysis, aiming to improve the trustworthiness of AI systems for healthcare. Dr. Li has published over 30 papers in leading machine learning conferences and journals, including NeurIPS, ICML, ICLR, MICCAI, IPMI, ECCV, IEEE Transactions on Medical Imaging, and Medical Image Analysis. Her work has been recognized with several best paper awards at international conferences.