Safe Multi-Modal Machine Learning
Sunday, 25 August, from 2:00 pm to 5:00 pm (Room 133)
Multimodal learning systems have become a focal point of machine learning research and innovation thanks to their ability to integrate and interpret diverse streams of modal information. However, the rapid advancement of Multi-Modal Machine Learning (MMML) has been accompanied by an increasingly pressing concern: safety. This tutorial navigates the landscape of MMML safety, addressing four crucial facets: robustness, alignment, monitoring, and controllability. By offering a comprehensive exploration of these safety dimensions, we aim to equip attendees with the knowledge and tools needed to tackle MMML safety challenges effectively. Through a synthesis of current obstacles and future directions, we hope to catalyze advances in this pivotal domain and help ensure the continued integrity and reliability of multimodal learning systems in real-world applications.
Check our survey paper "A Survey on Safe Multi-Modal Learning System".
Tianyi Zhao is a master's student in the Department of Computer Science at the University of Southern California. Her research interests lie in trustworthy AI and Graph Neural Networks, and she has published related work at CIKM. She is actively engaged in studying privacy, faithfulness, and uncertainty quantification techniques within AI systems.
Enyan Dai is a tenure-track assistant professor at the Hong Kong University of Science and Technology (Guangzhou). He received his Ph.D. degree from the Pennsylvania State University. His research interests lie in trustworthy AI and its real-world applications. In trustworthy AI, he has produced impactful work on fairness, robustness, privacy, and explainability. In addition, he is keen on applying trustworthy AI to various real-world scenarios, including fake news detection, anomaly detection on power grids, and protein analysis. His works have been published in top-tier conferences and journals such as ICLR, NeurIPS, KDD, WWW, and TKDE.
Liangliang Zhang is a Ph.D. student in the Department of Computer Science at the Rensselaer Polytechnic Institute (RPI) under the supervision of Prof. Yao Ma. Her research interests lie at the intersection of ML and graph theory, particularly focusing on Graph Neural Networks and their applications. She is passionate about ensuring Trustworthy AI by addressing issues of robustness, fairness, and privacy in AI systems.
Yao Ma (Corresponding Tutor) is an Assistant Professor in the Department of Computer Science at the Rensselaer Polytechnic Institute (RPI). Before joining RPI, he worked as an Assistant Professor at the New Jersey Institute of Technology (NJIT) for two years. He received his Ph.D. in Computer Science from Michigan State University (MSU) in 2021, with a focus on ML with graph-structured data. His research contributions to this area have led to numerous innovative works presented at top-tier conferences such as KDD, WWW, WSDM, ICLR, NeurIPS, and ICML. He has also organized and presented several well-received tutorials at AAAI and KDD, attracting over 1000 attendees. He is the author of the book "Deep Learning on Graphs", which has been downloaded tens of thousands of times from over 100 countries. He was awarded the Outstanding Graduate Student Award (2019-2020) from the College of Engineering at MSU. He has also organized four workshops at ICDM, WSDM, and SDM.
Lu Cheng (Corresponding Tutor) is an assistant professor in Computer Science at the University of Illinois Chicago. Her research interests focus on responsible and reliable AI, causal ML, and AI for social good. She is the recipient of the Cisco Research Faculty Award, 2022 INNS Doctoral Dissertation Award (runner-up), SDM 2022 Doctoral Forum Best Poster, 2022 CS Outstanding Doctoral Student, 2021 ASU Engineering Dean's Dissertation Award, 2020 ASU Graduate Outstanding Research Award, 2019 ASU Grace Hopper Celebration Scholarship, IBM Ph.D. Social Good Fellowship, Visa Research Scholarship, among others. She was the lead tutor for two tutorials on socially responsible AI delivered at SDM'22 and SBP-BRiMS'21. She co-authors two books: "Causal Inference and ML (Chinese)" and "Socially Responsible AI: Theories and Practices".
Introduction (10 min)
Section I Robustness (30 min)
Distributional Robustness
Adversarial Robustness
Section II Alignment (30 min)
Misalignment
Aligning MMLS
Short Break (5 min)
Section III Monitoring (40 min)
Failure Detection
Reliable Output
Break (30 min)
Section IV Controllability (35 min)
Explainability
Privacy
Fairness
Q&A