ICML-21 Workshop on Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning

Joining the Workshop on July 24:

https://icml.cc/virtual/2021/workshop/8365


Schedule (July 24 - Times in PDT)


07:00-07:15 Opening Remarks by Organizers

07:15-08:00 Virtual Poster Session #1

08:00-08:45 Invited Talk: Maxim Raginsky (UIUC)

08:45-09:30 Invited Talk: Alex Dimakis (UT Austin)

09:30-10:00 Short Break

10:00-10:45 Invited Talk: Kamalika Chaudhuri (UCSD)

10:45-11:30 Invited Talk: Todd Coleman (Stanford)

11:30-11:45 Contributed Talk: Out-of-Distribution Robustness in Deep Learning Compression

11:45-12:00 Contributed Talk: Tighter Expected Generalization Error Bounds via Wasserstein Distance

12:00-13:00 Long Break

13:00-13:45 Panel Discussion

13:45-14:30 Invited Talk: Kush Varshney (IBM Research AI)

14:30-15:15 Invited Talk: Thomas Steinke (Google Brain)

15:15-16:00 Virtual Poster Session #2

16:00-16:45 Invited Talk: Lalitha Sankar (Arizona State University)

16:45-17:00 Contributed Talk: A Unified PAC-Bayesian Framework for Machine Unlearning via Information Risk Minimization

17:00-17:15 Contributed Talk: Unsupervised Information Obfuscation for Split Inference of Neural Networks

17:15-18:00 Invited Talk: David Tse (Stanford)

18:00-18:15 Concluding Remarks

18:15-19:00 Social Hour

Accepted Papers (in no particular order)

  • Fabrizio Carpi (New York University), Siddharth Garg (New York University), Elza Erkip (New York University), Single-Shot Compression for Hypothesis Testing. (Paper, Video)

  • Jiaheng Wei (UC Santa Cruz), Yang Liu (UC Santa Cruz), When Optimizing f-divergence is Robust with Label Noise. (Paper, Video)

  • Ecenaz Erdemir (Imperial College London), Pier Luigi Dragotti (Imperial College London), Deniz Gunduz (Imperial College London), Active Privacy-Utility Trade-off Against a Hypothesis Testing Adversary. (Paper, Video)

  • Sharu Theresa Jose (King's College London), Osvaldo Simeone (King's College London), A unified PAC-Bayesian framework for machine unlearning via information risk minimization. (Paper, contributed talk)

  • Mario Diaz (Universidad Nacional Autónoma de México), Peter Kairouz (Google), Jiachun Liao (Arizona State University), Lalitha Sankar (Arizona State University), Neural Network-based Estimation of the MMSE. (Paper, Video)

  • Gowtham R. Kurri (Arizona State University), Tyler Sypherd (Arizona State University), Lalitha Sankar (Arizona State University), Realizing GANs via a Tunable Loss Function. (Paper, Video)

  • Ethan Perez (New York University), Douwe Kiela (Facebook AI Research), Kyunghyun Cho (New York University), True Few-Shot Learning with Language Models. (Paper, Video)

  • Mohammad Saeed Masiha (Sharif University of Technology), Amin Gohari (Tehran Institute of Advanced Studies), Mohammad Hossein Yassaee (Sharif University of Technology), Mohammad Reza Aref (Sharif University of Technology), Learning under Distribution Mismatch and Model Misspecification. (Paper, Video)

  • Animesh Sakorikar (University of British Columbia), Lele Wang (University of British Columbia), Soft BIBD and Product Gradient Codes: Coding Theoretic Constructions to Mitigate Stragglers in Distributed Learning. (Paper, Video)

  • Simon Mak (Duke University), Shaowu Yuchi (Georgia Institute of Technology), Yao Xie (Georgia Institute of Technology), Information-Guided Sampling for Low-Rank Matrix Completion. (Paper, Video)

  • Ziv Goldfeld (Cornell University), Kristjan Greenewald (MIT-IBM Watson AI Lab), Sliced Mutual Information: A Scalable Measure of Statistical Dependence. (Paper, Video)

  • Kuan-Yun Lee (University of California, Berkeley), Thomas Courtade (UC Berkeley), Minimax Bounds for Generalized Pairwise Comparisons. (Paper, Video)

  • Gholamali Aminian (UCL), Yuheng Bu (MIT), Laura Toni (UCL), Miguel Rodrigues (University College London), Gregory W. Wornell (MIT), Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information. (Paper, Video)

  • Abhishek Sharma (Harvard University), Sanjana Narayanan (Harvard University), Catherine Zeng (Harvard University), Finale Doshi-Velez (Harvard), Prediction-focused Mixture Models. (Paper, Video)

  • Borja Rodríguez Gálvez (KTH Royal Institute of Technology), Germán Bassi (Ericsson Research), Ragnar Thobaben (KTH Royal Institute of Technology), Mikael Skoglund (KTH Royal Institute of Technology), Tighter Expected Generalization Error Bounds via Wasserstein Distance. (Paper, contributed talk)

  • Fredrik Hellström (Chalmers University of Technology), Giuseppe Durisi (Chalmers University of Technology), Data-Dependent PAC-Bayesian Bounds in the Random-Subset Setting with Applications to Neural Networks. (Paper, Video)

  • Javad Heydari (LG Electronics), Ali Tajer (RPI), Active Sampling for Binary Gaussian Model Testing in High Dimensions. (Paper, Video)

  • Mohammad Samragh (UC San Diego), Hossein Hosseini (Qualcomm), Aleksei Triastcyn (Qualcomm AI Research), Kambiz Azarian (Qualcomm AI Research), Joseph Soriaga (Qualcomm AI Research), Farinaz Koushanfar (UC San Diego), Unsupervised Information Obfuscation for Split Inference of Neural Networks. (Paper, contributed talk)

  • Firas Laakom (Tampere University), Jenni Raitoharju (Tampere University), Alexandros Iosifidis (Aarhus University), Moncef Gabbouj (Tampere University), Within-layer Diversity Reduces Generalization Gap. (Paper, Video)

  • Spencer Compton (MIT), Murat Kocaoglu (Purdue University), Kristjan Greenewald (MIT-IBM Watson AI Lab), Dmitriy Katz (IBM Research), Entropic Causal Inference: Identifiability for Trees and Complete Graphs. (Paper, Video)

  • Mahdi Haghifam (University of Toronto), Gintare Karolina Dziugaite (ServiceNow), Shay Moran, Daniel Roy (University of Toronto), Towards a Unified Information-Theoretic Framework for Generalization. (Paper, Video)

  • Parikshit Gopalan (VMware Research), Nina Narodytska (VMWare), Omer Reingold (Stanford University), Vatsal Sharan (Stanford University), Udi Wieder (VMware Research), Sub-population Guarantees for Importance Weights and KL-Divergence Estimation. (Paper, Video)

  • Aolin Xu, Excess Risk Analysis of Learning Problems via Entropy Continuity. (Paper, Video)

  • Eric Lei (University of Pennsylvania), Shirin Bidokhti (University of Pennsylvania), Hamed Hassani (University of Pennsylvania), Out-of-Distribution Robustness in Deep Learning Compression. (Paper, contributed talk)

  • Elahe Vedadi (University of Illinois at Chicago), Yasaman Keshtkarjahromi (Seagate), Hulya Seferoglu (University of Illinois at Chicago), Coded Privacy-Preserving Computation at Edge Networks. (Paper, Video)

  • Jacob T. Deasy (University of Cambridge), Tom McIver (University of Cambridge), Nikola Simidjievski (University of Cambridge), Pietro Lió (University of Cambridge), α-VAEs: Optimising variational inference by learning data-dependent divergence skew. (Paper, Video)