The workshop will take place on Tuesday, June 18th 2024.
Tentative schedule (all times in Pacific Time).
08:30 - 08:40: Opening remarks
08:40 - 09:10: Invited talk
Speaker: Besmira Nushi (Microsoft Research)
09:10 - 09:40: Invited talk
Speaker: Miranda Bogen (Center for Democracy & Technology)
09:40 - 09:50: Contributed talk
Title: GELDA: A generative language annotation framework to reveal visual biases in image generators
Authors: Krish Kabra (Rice University); Kathleen M Lewis (MIT); Guha Balakrishnan (Rice University)
09:50 - 10:00: Contributed talk
Title: Latent Directions: A Simple Pathway to Bias Mitigation in Generative AI
Authors: Carolina Lopez Olmos (Microsoft, Technical University of Denmark); Alexandros Neophytou (Microsoft); Sunando Sengupta (Microsoft); Dim P Papadopoulos (Technical University of Denmark)
10:00 - 10:45: Poster session + coffee break
10:45 - 11:15: Invited talk
Speaker: Björn Ommer (Ludwig Maximilian University of Munich)
11:15 - 11:25: Contributed talk
Title: My Art My Choice: Adversarial Protection Against Unruly AI
Authors: Anthony D Rhodes (Intel Corporation); Ram Bhagat (Binghamton University); Umur A Ciftci (State University of New York at Binghamton); Ilke Demir (Intel Corporation)
11:25 - 12:20: Panel discussion
Panelists: Milagros Miceli (DAIR, Weizenbaum-Institut), Cristian Canton (Meta), David Bau (Northeastern University), Alex Beutel (OpenAI), Chengzhi Mao (McGill University, Mila)
12:20 - 12:30: Closing remarks
The following works will be presented at the workshop:
Enhancing Adversarial Robustness and Combating Uncertainty Bias in Transductive Zero-Shot Learning with Pseudo-Bidirectional Alignment; Abhishek Kumar Sinha (Indian Space Research Organization); Deepak Mishra (IIST); Manthira Moorthi S (ISRO)
Utility-Fairness Trade-Offs and How to Find Them; Sepehr Dehdashtian (Michigan State University); Bashir Sadeghi (Michigan State University); Vishnu Boddeti (Michigan State University)
FairerCLIP: Debiasing Zero-Shot Predictions of CLIP in RKHSs; Sepehr Dehdashtian (Michigan State University); Lan Wang (Michigan State University); Vishnu Boddeti (Michigan State University)
F?D: On understanding the role of deep feature spaces on face generation evaluation; Krish Kabra (Rice University); Guha Balakrishnan (Rice University)
D^3: Scaling Up Deepfake Detection by Learning from Discrepancy; Yongqi Yang (Wuhan University); Zhihao Qian (Wuhan University); Ye Zhu (Princeton University); Yu Wu (Wuhan University)
DETER: Detecting Edited Regions for Deterring Generative Manipulations; Sai Wang (Wuhan University); Ye Zhu (Princeton University); Ruoyu Wang (Wuhan University); Amaya Dharmasiri (Princeton University); Olga Russakovsky (Princeton University); Yu Wu (Wuhan University)
GELDA: A generative language annotation framework to reveal visual biases in image generators; Krish Kabra (Rice University); Kathleen M Lewis (MIT); Guha Balakrishnan (Rice University)
Improving Geo-diversity of Generated Images with Contextualized Vendi Score Guidance; Reyhane Askari Hemmat (Meta AI); Melissa Hall (Meta AI); Alicia Yi Sun (Meta AI); Candace Ross (Meta AI); Michal Drozdzal (Meta AI); Adriana Romero-Soriano (Meta AI)
Finding Patterns in Ambiguity: Interpretable Stress Testing in the Decision Boundary; Inês Gomes (University of Porto); Luis F Teixeira (INESC TEC and University of Porto); Jan N. van Rijn (Leiden University); Carlos Soares (University of Porto); André Restivo (University of Porto); Luís Cunha (University of Porto); Moises Rocha Dos Santos (USP)
The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative; Zhen Tan (Arizona State University); Chengshuai Zhao (Arizona State University); Raha Moraffah (Arizona State University); Yifan Li (Michigan State University); Yu Kong (Michigan State University); Tianlong Chen (MIT); Huan Liu (Arizona State University)
From Detection to Deception: Are AI-Generated Image Detectors Adversarially Robust?; Yun-Yun Tsai (Columbia University); Ruize Xu (Columbia University); Chengzhi Mao (Columbia University); Junfeng Yang (Columbia University)
Detecting Generative Parroting through Overfitting Masked Autoencoders; Saeid A Taghanaki (Autodesk); Joseph G Lambourne (Autodesk AI Lab)
Latent Directions: A Simple Pathway to Bias Mitigation in Generative AI; Carolina Lopez Olmos (Microsoft, Technical University of Denmark); Alexandros Neophytou (Microsoft); Sunando Sengupta (Microsoft); Dim P Papadopoulos (Technical University of Denmark)
LLAVAGUARD: VLM-based Safeguard for Vision Dataset Curation and Safety Assessment; Lukas Helff (TU Darmstadt); Felix Friedrich (TU Darmstadt); Manuel Brack (TU Darmstadt); Patrick Schramowski (TU Darmstadt); Kristian Kersting (TU Darmstadt)
My Art My Choice: Adversarial Protection Against Unruly AI; Anthony D Rhodes (Intel Corporation); Ram Bhagat (Binghamton University); Umur A Ciftci (State University of New York at Binghamton); Ilke Demir (Intel Corporation)
Model Editing with Mechanistic Knowledge Localization in Text-to-Image Generative Models; Keivan Rezaei (University of Maryland); Samyadeep Basu (University of Maryland); Priyatham Kattakinda (University of Maryland); Vlad I Morariu (Adobe Research); Nanxuan Zhao (Adobe Research); Ryan A. Rossi (Adobe Research); Varun Manjunatha (Adobe Research); Soheil Feizi (University of Maryland)
WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models; Changhoon Kim (Arizona State University); Kyle Min (Intel Labs); Maitreya Patel (ASU); Sheng Cheng (Arizona State University); Yezhou Yang (Arizona State University)
Ethics of Generating Synthetic MRI Vocal Tract Views from the Face; Muhammad Suhaib Shahid (University of Nottingham); Gleb Yakubov (University of Nottingham); Andrew French (University of Nottingham)
Robust Concept Erasure Using Task Vectors; Minh Pham (New York University); Kelly O. Marshall (New York University); Chinmay Hegde (New York University); Niv Cohen (New York University)
Uncovering Bias in Large Vision-Language Models with Counterfactuals; Phillip R Howard (Intel Labs); Anahita Bhiwandiwalla (Intel Labs); Kathleen C. Fraser (National Research Council Canada); Svetlana Kiritchenko (National Research Council Canada)
Do not think about pink elephant!; Kyomin Hwang (Seoul National University); Suyoung Kim (Seoul National University); Junhoo Lee (Seoul National University); Nojun Kwak (Seoul National University)
Guardrails for avoiding harmful medical product recommendations and off-label promotion in generative AI models; Daniel Lopez-Martinez (Amazon)
What Secrets Do Your Manifolds Hold? Towards Self-Assessment of Generative Models; Ahmed Imtiaz Humayun (Rice University); Mohammad Havaei (Google); Negar Rostamzadeh (Google)
Reviewers
Anna Richter, Mila - Quebec AI Institute
Chengzhi Mao, Columbia University
Cristina Bustos, Universitat Oberta de Catalunya
David Bau, Northeastern University
Remi Denton, Google
Florian Bordes, Mila, Université de Montréal, Meta AI
Ibrahim Said Ahmad, Northeastern University
Jonathan Lebensold, McGill University, Mila
Khaoula Chehbouni, McGill University, Mila
Mattie Tesfaldet, McGill University, Mila
Mina Arzaghi, Mila, HEC Montréal
Muhammad Ali, Institute for Experiential AI, Northeastern University
Oscar Mañas, Mila - Quebec AI Institute, Université de Montréal, Meta AI
Pietro Astolfi, Meta AI
Prakhar Ganesh, Mila - Quebec AI Institute
Resmi Ramachandranpillai, Northeastern University
Reyhane Askari Hemmat, Meta AI
Samuel J Bell, Meta AI
Tomo Lazovich, Northeastern University
Trupti Bavalatti, Meta AI
Vikram V. Ramaswamy, Princeton University