Title: Reconstructing Data in FL: Going Beyond Image Classification
Abstract: Gradient inversion in federated learning is an attack in which the server reconstructs clients' training data from their gradient updates, thereby breaking the privacy guarantees supposedly offered by not sharing raw data. Most gradient inversion attacks target image classification.
In this talk, I will first highlight that the training of generative models is vulnerable to gradient inversion attacks as well. I will introduce our design for federated training of image diffusion models, and then show that, in selected settings, we can reconstruct training images using a two-stage attack that leverages the final trained diffusion model to further constrain the search space of reconstructed images.
Afterwards, I will move to a related problem in federated learning: users may choose to participate in multiple training processes, using the same data to train locally in each of them. An attacker controlling more than one of the servers involved thus obtains the gradient updates from multiple training processes the user participates in. Our results indicate that access to the gradients of more than one training process drastically increases the threat of gradient inversion, as the information from two or more such processes can be combined to reconstruct data. This threat is mostly ignored in the current literature but highly relevant in practice. I conclude by highlighting the challenges in applying defenses such as differential privacy to this scenario.
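For readers unfamiliar with the attack, below is a minimal, self-contained sketch of plain gradient inversion in PyTorch. It follows the classic recipe of "Deep Leakage from Gradients" (Zhu et al., 2019), not the two-stage or multi-process attacks discussed in the talk; the toy model, input shape, and optimizer settings are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative sketch (assumed setup, not the talk's exact attack).
# Toy "client" model; a soft cross-entropy lets us optimize labels too.
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 10))

def soft_ce(logits, target_probs):
    return -(target_probs * logits.log_softmax(dim=-1)).sum(dim=-1).mean()

# Client side: the gradient update the server observes for one private image.
x_true = torch.rand(1, 1, 8, 8)
y_true = F.one_hot(torch.tensor([3]), num_classes=10).float()
observed_grads = torch.autograd.grad(soft_ce(model(x_true), y_true),
                                     model.parameters())

# Server side: optimize dummy data and labels so their gradient matches
# the observed gradient.
x_dummy = torch.rand(1, 1, 8, 8, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        soft_ce(model(x_dummy), y_dummy.softmax(dim=-1)),
        model.parameters(), create_graph=True)
    # Squared distance between the dummy gradient and the observed gradient.
    grad_diff = sum(((dg - og) ** 2).sum()
                    for dg, og in zip(dummy_grads, observed_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    opt.step(closure)

print("mean absolute reconstruction error:",
      (x_dummy - x_true).abs().mean().item())

On this toy linear model the optimization recovers the input almost exactly; the talk concerns the much harder setting of generative models, where, for example, the trained diffusion model is used to further constrain the search.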
Short bio: Stefanie leads the group Secure Decentralized Systems at RPTU Kaiserslautern-Landau. Before joining the CS department in Kaiserslautern, they were an assistant professor at TU Delft (2018-2023), a post-doctoral researcher at the University of Waterloo working with Prof. Ian Goldberg (2016-2018), and a PhD student at TU Dresden supervised by Prof. Dr. Thorsten Strufe.
Their research focuses on trade-offs between privacy, security, and performance in decentralized systems, including P2P networks (with a focus on anonymity and censorship resistance) and blockchains (with a focus on scalability and privacy). At the moment, they are involved in various projects that analyze and improve the security and privacy of distributed machine learning systems such as federated learning, multi-discriminator GANs, and graph-generative models.
Title: Federated Learning in Healthcare: What’s Missing? And What’s Really Missing?
Abstract: Federated Learning promises to revolutionize healthcare by enabling collaborative AI without compromising privacy. But despite significant advances in privacy-preserving methods, theoretical guarantees, and explainability, real-world adoption remains rare. In this talk, we’ll explore what’s still missing from a technical perspective, and examine the deeper, often overlooked challenges that hinder deployment in clinical environments. Drawing from practical experience and recent research, the talk will connect core trustworthiness goals with the realities of healthcare, and reflect on what it takes to close the gap.
Short bio: Michael Kamp is Associate Professor for Machine Learning and Artificial Intelligence at TU Dortmund University and a faculty member of the Lamarr Institute for Machine Learning and Artificial Intelligence. His research spans the theoretical foundations of deep learning, causal representation learning, and trustworthy machine learning, with a particular focus on federated learning and privacy-preserving optimization. He develops machine learning methods that are not only mathematically rigorous but also designed to meet the demands of high-stakes real-world applications, particularly in healthcare and medicine. He is also affiliated with the Institut für KI in der Medizin (IKIM) at the University Medicine Essen, where he previously led the research group Trustworthy Machine Learning. He continues to collaborate closely with IKIM on cutting-edge medical AI research at the intersection of clinical practice, data privacy, and reliable machine learning. Earlier in his career, Michael was a postdoctoral researcher at the CISPA Helmholtz Center for Information Security in the Exploratory Data Analysis group of Jilles Vreeken (2021), and from 2019 to 2021 a postdoctoral fellow at the Data Science & AI Department at Monash University and the Monash Data Futures Institute, where he remained an associated research fellow until 2024. Before that, he spent nearly a decade at Fraunhofer IAIS as a data scientist, where he led the institute’s contributions to the EU project DiSIEM and headed a small applied research team working at the interface of academic research and industrial deployment. He also advised and trained corporate partners such as Volkswagen and DHL on data-driven innovation. Michael Kamp received his doctorate from the University of Bonn, where he taught graduate seminars and supervised numerous theses. Prior to entering academia, he worked for over a decade as a professional software developer. He is a member of the editorial board of the Springer journal Machine Learning and a member of the ELLIS society.