The workshop is a half-day event on Oct 19, 2025, from 1 pm to 6 pm at the Honolulu Convention Center, Hawaii, USA.
We are finalizing the program, which will be posted soon.
Invited Speakers
Sijia Liu
Red Cedar Distinguished Associate Professor
Michigan State University
Talk title:
When Unlearning Fails: Probing the Robustness of Machine Unlearning
Abstract:
Machine unlearning (MU), the ability to selectively remove undesirable data or knowledge from trained models, has emerged as a promising tool for regulated and mission-critical domains. Yet, recent work reveals a critical weakness: much of the “forgotten” information persists and can be reactivated through lightweight interventions such as jailbreak prompts, quantization, fine-tuning, or replay attacks. In this talk, I will revisit these vulnerabilities and present principled approaches, inspired by robust ML research, to strengthen MU through optimization theory and optimizer design.
Bio:
Sijia Liu is a Red Cedar Distinguished Associate Professor in the Department of Computer Science and Engineering at Michigan State University (MSU), and an Affiliated Professor at the MIT-IBM Watson AI Lab, IBM Research. His research centers on scalable and trustworthy AI, such as machine unlearning for vision and language models, scalable optimization for deep models, adversarial robustness, and data-model efficiency. He is a co-author of the textbook Introduction to Foundation Models (Springer, 2024). His honors include the NSF CAREER Award, the INNS Aharon Katzir Young Investigator Award, MSU’s Withrow Rising Scholar Award, and best paper honors at UAI (2022) and ICASSP (2017). He co-founded the New Frontiers in Adversarial Machine Learning Workshop series (ICML/NeurIPS 2021–2024) and has delivered tutorials on trustworthy and scalable ML at major conferences, including AAAI, NeurIPS, CVPR, ICCV, and ICASSP.
Yair Adato
CEO
BRIA AI
Talk title:
Clean Data, Clean Models: High-Quality, Compliant Text-to-Image Generation at Commercial Scale with Built-In Attribution
Abstract:
We present an approach to building text-to-image foundation models that achieves quality and compliance simultaneously by addressing safety at the data level. By training exclusively on licensed data from commercial partners, we demonstrate competitive performance on standard benchmarks while eliminating copyright, trademark, and privacy violations at the source—no garbage in, no garbage out.
Our Visual Birth Certificate technology provides granular attribution tracking, revealing which training images influenced each generation. This creates an auditable trail for regulatory compliance (EU AI Act) and enables targeted editing when requirements evolve.
We show that foundation-level data licensing, coupled with attribution-informed editing strategies, solves these challenges at the root. Combined with fine-tuning and reinforcement learning techniques, this approach enables the creation of high-quality, safe, and compliant models without stealing data. Moreover, releasing both source code and model weights to the research and development communities establishes a new standard for transparent, commercially viable AI development, demonstrating that open-source foundations and regulatory compliance are not only compatible but mutually reinforcing.
This work addresses efficient model editing by demonstrating when investing in clean training data and open-source foundations minimizes downstream editing costs while maintaining model quality and ensuring compliance from deployment day one.
Bio:
Yair is an executive-level Machine Learning and Computer Vision expert with a passion for bridging technology and business. In 2020, he co-founded BRIA with the vision of creating a responsible and open platform for visual generative AI. BRIA is pioneering responsible generative AI, aiming to democratize the technology to enhance products and set new industry benchmarks.
He holds a PhD in Computer Science, specializing in computer vision, from Ben-Gurion University, completed in collaboration with Harvard University.
He served as the CTO of Trax Retail, where he contributed to Trax’s rapid growth from an early-stage startup with 20 employees to a unicorn with over 850 employees. He has also served as an advisory board member for several companies, including Sparx, Vicomi, Tasq, DataGen, and Anima.
Accepted Papers
🗒️ Full papers:
Kartik Thakral, Shreyansh Pathak, Tamar Glaser, Tal Hassner, Diego Garcia-Olano, Iacopo Masi, Richa Singh, Mayank Vatsa, "Gen𝜇: The Generative Machine Unlearning Challenge"
Sai Siddhartha Chary Aylapuram, Veeraraju Elluru, Shivang Agarwal, "Bias-Aware Machine Unlearning: Towards Fairer Vision Models via Controllable Forgetting"
Kazuya Shibata, Kazuhiro Hotta, "Segmentation by Merging Models Specialized to Each Class"
Soham Roy, Abhishek Mishra, Aakash Sen Sharma, Shirish Karande, Murari Mandal, "Guardians of Generation: Dynamic Inference-Time Copyright Shielding with Adaptive Guidance for AI Image Generation"
Wen Cheng, Shichen Dong, Jiayu Qin, Wei Wang, "QAQ: Quality Adaptive Quantization for LLM KV Cache"
📃 Extended abstracts (non-archival):
Maty Bohacek, Thomas Fel, Ekdeep Singh Lubana, Maneesh Agrawala, "Systematic Assessment of Text-to-Image Concept Removal Using Sparse Autoencoders"
Jaskaran Singh, Arun Kumar Dubey, Prabhav Sanga, Rachna Tewani, "Anchored to Remember, Designed to Forget: FAMR++ for Post-Hoc Style and Class Removal"
Aakash Kumar Singh, Joseph K J, Srikrishna Karanam, Venkatesh Babu Radhakrishnan, "Forget with Care: A Domain-Agnostic Framework for Concept Erasure without Side-effects"
Jaeheun Jung, Jaehyuk Lee, Yeajin Lee, Donghun Lee, "IPPRO: Importance-based Pruning with PRojective Offset for Magnitude-indifferent Structural Pruning"
Jaeheun Jung, Bosung Jung, SuHyun Bae, Donghun Lee, "OPC: One-Point-Contraction Unlearning Toward Deep Feature Forgetting"
Schedule
Work in Progress...