In this course, you'll learn how to explore, develop, and deploy AI in the clinic and have a real impact on health care, with a focus on radiology applications. The course covers the following topics: why most clinical AI currently has no impact; how to carefully define the clinical problem and its associated performance metrics; how to avoid common pitfalls of data collection; an overview of algorithms and ways to optimize performance (ensembling, uncertainty); how to validate that your model is better than the clinical baseline (the PI-CAI challenge); and how to get your AI model used in the clinic. Future topics: continuous learning and a new form of quality control, large language models and agents, and community AI.
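The ensembling and uncertainty ideas mentioned above can be illustrated with a minimal sketch: average the predictions of several independently trained models, and use their disagreement as a simple uncertainty proxy for flagging cases that may need clinician review. All model outputs below are invented, illustrative numbers, not results from any real clinical model.

```python
import numpy as np

# Hypothetical per-case probability outputs for 5 cases from an
# ensemble of 3 independently trained classifiers (values illustrative).
preds = np.array([
    [0.91, 0.12, 0.55, 0.03, 0.78],   # model 1
    [0.88, 0.20, 0.35, 0.05, 0.81],   # model 2
    [0.93, 0.15, 0.70, 0.02, 0.75],   # model 3
])

ensemble_mean = preds.mean(axis=0)   # ensembled prediction per case
uncertainty = preds.std(axis=0)      # disagreement across members as uncertainty

# The case where the ensemble members disagree most could be routed
# to a clinician instead of being decided automatically.
most_uncertain = int(uncertainty.argmax())
```

Here the third case (index 2) has the widest spread of member predictions, so it would be the first candidate for human review.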
Henkjan Huisman
Professor @ Radboud University Medical Centre Nijmegen and NTNU (Trondheim)
Fabrizio Silvestri
Full Professor @ DIAG of the Sapienza University of Rome
Graph Neural Networks (GNNs) have become a cornerstone for analyzing data with inherent graph structure, enabling breakthroughs in areas such as social network analysis, bioinformatics, and knowledge graph inference. However, the complexity and opacity of these models raise critical concerns about trust, fairness, and accountability in AI deployments. This PhD course on Explainable Artificial Intelligence (XAI) for GNNs tackles these challenges by examining current XAI methodologies, frameworks, and practices tailored to GNNs. The course focuses on cutting-edge XAI techniques designed for GNNs, including perturbation-based methods, decomposition approaches, and model-agnostic strategies, evaluating their strengths, limitations, and suitability for diverse applications. Through hands-on sessions, participants will apply these XAI methods within popular GNN frameworks, deepening their practical understanding while developing a critical perspective on the ethical ramifications of employing GNNs in sensitive settings. Discussions will revolve around the need for new XAI methods that improve the interpretability and transparency of GNNs without sacrificing performance. The course aims to equip PhD students with a nuanced understanding of the challenges and prospects of XAI for GNNs, positioning them to contribute to more interpretable, reliable, and efficient AI systems based on graph neural networks.
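The perturbation-based methods mentioned above share a simple core idea: perturb parts of the input graph (here, edges) and score each part by how much the model's prediction changes. A minimal sketch follows; the `toy_gnn_score` function is a hypothetical stand-in for a trained GNN, not the interface of any real framework, and the graph and features are invented.

```python
import numpy as np

def toy_gnn_score(adj, feats):
    # Hypothetical stand-in for a trained GNN: one round of mean
    # neighbor aggregation followed by a fixed sum readout.
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                 # avoid division by zero
    h = adj @ feats / deg               # aggregate neighbor features
    return float(h.sum())               # scalar "prediction" for the graph

def edge_importance(adj, feats, score_fn):
    """Perturbation-based explanation: the importance of each edge is
    the change in the model's score when that edge is removed."""
    base = score_fn(adj, feats)
    scores = {}
    n = adj.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                pert = adj.copy()
                pert[i, j] = pert[j, i] = 0.0   # delete edge (i, j)
                scores[(i, j)] = base - score_fn(pert, feats)
    return scores

# Tiny example: a triangle with one pendant node (illustrative data).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.array([[1.0], [2.0], [3.0], [4.0]])

imp = edge_importance(adj, feats, toy_gnn_score)
most_important = max(imp, key=imp.get)
```

Methods such as GNNExplainer refine this brute-force deletion by learning soft edge masks, but the explanation signal is the same: prediction change under perturbation.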
The course reviews the main facets of parametric and nonparametric statistical pattern recognition as well as the fundamentals of supervised and unsupervised learning in artificial neural networks (ANNs), either shallow or deep. Proper probabilistic interpretations of ANNs are given. Traditional and leading-edge algorithms for the ANN-based estimation of posterior probabilities or probability density functions over feature vectors and structures (i.e., graphs) are presented.
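The probabilistic interpretation mentioned above can be made concrete with a short sketch: a classifier whose final layer is a softmax and which is trained with cross-entropy produces outputs that approximate the posterior probabilities P(class | x). The logits below are invented, illustrative values standing in for the output of a trained network's final layer.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: shift logits before exponentiating.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits from the final layer of a trained ANN for one
# feature vector x (values illustrative only).
logits = np.array([2.0, 0.5, -1.0])

# With softmax outputs and cross-entropy training, these values are
# interpretable as estimates of the posterior P(class_k | x).
posterior = softmax(logits)

assert np.isclose(posterior.sum(), 1.0)  # a valid probability distribution
```

This is the standard result that minimizing cross-entropy drives the network outputs toward the true class posteriors, which is what licenses the probabilistic reading of ANN classifiers discussed in the course.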
Professor @ Max Planck Institute of Colloids and Interfaces (Potsdam, DE)
This introductory course will cover several aspects of estimation and hypothesis testing for a single mean. We will look at different situations, including whether the sample is drawn from a normal distribution and whether the sample size can be considered large. In addition to standard techniques, we will delve into the bootstrap methodology. While these fundamental concepts don't encompass the entire scope of applied statistics, attendees will acquire an approach that can be expanded to more complex scenarios. The lectures will include some theoretical background and focus on numerous examples from fields such as biology, medicine, and environmental science.
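The bootstrap methodology mentioned above can be sketched in a few lines: resample the data with replacement many times, recompute the mean of each resample, and read a confidence interval off the resulting distribution. The sample values below are invented, illustrative measurements, not real data from any study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sample (e.g., a small set of biological measurements).
sample = np.array([4.8, 5.1, 5.6, 4.9, 6.2, 5.4, 5.0, 5.8, 4.7, 5.3])

# Nonparametric bootstrap: resample with replacement B times and
# recompute the mean of each resample.
B = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(B)
])

# Percentile bootstrap 95% confidence interval for the mean.
lo, hi = np.quantile(boot_means, [0.025, 0.975])
```

The appeal of this approach, as the course emphasizes, is that it requires no normality assumption, which matters precisely when the sample is small or clearly non-normal.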