NeurIPS 2019 Workshop on Information Theory and Machine Learning

Workshop Schedule

9:20-9:30 Introductory Remarks

9:30-10:00 [Invited Talk] Ayfer Özgür: Communication-Efficient Distributed Learning

10:00-10:40 [Invited Talk] Stefano Soatto and Alessandro Achille: Information in the Weights and Emergent Properties of Deep Neural Networks

10:40-11:00 Coffee Break and Poster Setup

11:00-11:15 [Contributed Talk] Rob Brekelmans: Understanding Thermodynamic Variational Inference

11:15-11:30 [Contributed Talk] Bob (Robert) Williamson: Data Processing Equalities

11:30-11:45 [Contributed Talk] Jose Gallego: GAIT: A Geometric Approach to Information Theory

12:00-14:00 Lunch + Poster Setup

14:00-14:30 [Invited Talk] Varun Jog: Adversarial risk via optimal transport and optimal couplings

14:30-15:00 [Invited Talk] Po-Ling Loh: Robust information bottleneck

15:00-15:20 Coffee Break

15:20-15:50 [Invited Talk] Aaron van den Oord: Contrastive Predictive Coding

15:50-16:20 [Invited Talk] Alexander A. Alemi: A Case for Compressed Representations

16:25-17:00 [Poster Spotlight]

17:00-18:00 [Poster Session]

Accepted Papers

Abstract

Information theory is deeply connected to two key tasks in machine learning: prediction and representation learning. Because of these connections, information theory has found wide application in machine learning, such as proving generalization bounds, certifying fairness and privacy, optimizing the information content of unsupervised/supervised representations, and establishing limits on prediction performance. Conversely, progress in machine learning has driven advances in classical information theory problems such as compression and transmission.

This workshop aims to bring together researchers from different disciplines, identify common ground, and spur discussion on how information theory can apply to and benefit from modern machine learning tools. Topics include, but are not limited to:

  • Controlling information quantities to obtain performance guarantees, such as PAC-Bayes bounds, interactive data analysis, the information bottleneck, fairness, and privacy; information-theoretic performance limits of learning algorithms.

  • Information theory for representation learning and unsupervised learning, such as its applications to generative models, learning latent representations, and domain adaptation.

  • Methods to estimate information-theoretic quantities for high-dimensional observations, such as variational methods and sampling methods.

  • Quantification of usable/useful information, e.g., the information an algorithm can use for prediction.

  • Machine learning applied to information theory, such as designing better error-correcting codes, and compression optimized for human perception.

Key Dates

  • Submission starts: August 15, 2019

  • Extended abstract submission deadline: September 15, 2019 (23:59 AOE)

  • Acceptance notification: September 30, 2019 (23:59 AOE)

  • Camera ready submission: November 15, 2019

  • Workshop date: December 13, 2019

Speakers

Organizers

Camera Ready Instructions

Use the following modified NeurIPS style file, or include "Workshop on Information Theory and Machine Learning, " at the beginning of the first-page footnote.

https://drive.google.com/file/d/1--3XrNVcDzYXnK2hB8mPJLkJtn-PsDz7/view?usp=sharing

Use \usepackage[final]{itml2019} to include the style file.
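For reference, a minimal preamble sketch is given below. It assumes the downloaded style file is saved as itml2019.sty in the same directory as the main .tex file, and that the [final] option follows the NeurIPS convention of selecting the camera-ready (non-anonymous) layout; the title and author are placeholders.

    % Minimal camera-ready preamble sketch.
    % Assumption: itml2019.sty (from the link above) is in the same directory.
    \documentclass{article}
    \usepackage[final]{itml2019} % [final] assumed to select the camera-ready layout

    \title{Your Paper Title}   % placeholder
    \author{Author Name}       % placeholder

    \begin{document}
    \maketitle
    % Paper body goes here.
    \end{document}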

Submit the camera-ready version to CMT, with the file name changed to [CMT paper submission ID].pdf

Presentation Instructions

Posters should be no larger than 36 x 48 inches (W x H) or 90 x 122 cm, and should be printed on lightweight, non-laminated paper. Tape will be provided.

All accepted papers will receive a one-minute spotlight presentation at the beginning of the poster session. Please upload a one-page slide in PDF format to CMT as a supplementary file no later than December 8. (Please direct questions about spotlight slides to kechoi@cs.stanford.edu.)