Abstract: Critical sectors such as healthcare, finance, and public services often have the data needed for high-impact AI, but that data is fragmented across organizations and constrained by privacy, regulation, and operational barriers. Federated learning enables multiple parties to train models collaboratively without centralizing raw data, yet real deployments demand more than accuracy. This talk discusses how to enable cross-silo federated learning in practice: combining rigorous privacy mechanisms with heterogeneous data and unreliable communication, defending against data poisoning by malicious clients, and enabling machine unlearning when data removal is required. It also discusses future directions for federated learning, highlighting the challenges and opportunities in scaling trustworthy deployments for critical applications.
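To make the core idea of the abstract concrete, below is a minimal sketch of federated averaging (FedAvg) across two cross-silo clients with heterogeneous data. The linear model, synthetic datasets, learning rate, and round counts are illustrative assumptions, not the speaker's system.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear model."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg_round(global_w, clients):
    """Server step: average client models, weighted by local dataset size."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two hypothetical silos whose features follow different distributions.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.5):  # distribution shift between silos
    X = rng.normal(shift, 1.0, size=(100, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=100)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):  # communication rounds
    w = fed_avg_round(w, clients)
print("learned weights:", w)  # approaches true_w; raw rows are never pooled
```

Each silo trains locally and ships only parameter vectors; the server never sees raw records, which is the starting point the talk builds on before adding privacy, robustness, and unlearning.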
Zhaomin Wu is a Research Fellow in the Department of Computer Science at the National University of Singapore. His research focuses on trustworthy machine learning, with special interests in federated learning, machine unlearning, and trustworthy LLMs. His publications appear in top-tier venues including NeurIPS, ICLR, SIGMOD, KDD, ACL, EMNLP, AAAI, MLSys, and TKDE. In 2023, he received an Honorable Mention for the SIGMOD Best Artifact Award.
Abstract: Google’s production system for federated learning now leverages trusted execution environments (TEEs) to address some of the challenges of cross-device federated learning. The system offers full external verifiability of the server-side components of federated learning, improves operability, and enables scaling to much larger models. In this talk, we’ll explain the history and evolution of FL at Google, and introduce an updated definition of federated learning based on its privacy principles (transparency/auditability, data minimization, and data anonymization) rather than on the placement of data processing. We’ll describe how the new approach compares to traditional cross-device federated learning, and present new algorithms and use cases unique to the TEE-hosted federated learning setting.
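As a rough illustration of the data-minimization and anonymization principles named in the abstract, the sketch below clips per-client updates and releases only a Gaussian-noised aggregate, as a TEE-hosted aggregator might. Attestation, secure channels, and formal DP accounting are omitted, and all constants are assumptions for illustration only.

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, seed=42):
    """Clip each client update in L2 norm, sum, add Gaussian noise once,
    and return only the noisy mean -- the single value that leaves the
    (hypothetical) enclave; per-client updates are never exposed."""
    rng = np.random.default_rng(seed)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in updates]
    total = np.sum(clipped, axis=0)
    sigma = noise_multiplier * clip_norm  # noise scaled to the L2 sensitivity
    return (total + rng.normal(0.0, sigma, size=total.shape)) / len(updates)

# 100 hypothetical client updates of dimension 4.
updates = [np.random.default_rng(i).standard_normal(4) for i in range(100)]
print(dp_aggregate(updates))
```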
Katharine Daly has built infrastructure for multiple generations of federated learning and federated analytics systems at Google Research. Recently she has focused on designing scalable systems that achieve verifiable differential privacy guarantees via TEEs (Trusted Execution Environments) for GenAI use cases.
Daniel Ramage directs the Google Research teams responsible for the production systems and research roadmap powering federated learning at Google. He is a co-inventor of federated learning and federated analytics, has overseen their deployment in Google systems, and focuses on systems and methods for private and secure AI.