Schedule
July 23 (Friday), 2021
Timezone: US/Eastern Time
Tentative schedule in EDT (subject to change):
Note: All invited and contributed talks will be pre-recorded. Poster sessions (Gather.Town) and the panel discussion will be live.
Session 1
9:00am - 9:15am Opening remarks (Balaji Lakshminarayanan)
9:15am - 9:45am Invited talk #1 Dustin Tran Uncertainty Modeling from 50M to 1B
9:45am - 11:00am Live Poster Session #1 [Room 1 (Uncertainty), Room 2 (Robustness)]
Coffee Break: 11:00am - 11:15am
Session 2
11:15am - 11:45am Invited talk #2 Alec Radford Some Thoughts on Generalization, Robustness, and their application with CLIP
11:45am - 1:00pm Live Poster Session #2 [Room 3 (Uncertainty), Room 4 (Robustness)]
1:00pm - 1:45pm Live Panel discussion with Chelsea Finn, Kamalika Chaudhuri, Uri Shalit & Yarin Gal (Moderator: Tom Dietterich)
Lunch: 1:45pm - 2:15pm
Session 3
2:15pm - 2:25pm Contributed talk #1 Repulsive Deep Ensembles are Bayesian
2:25pm - 2:35pm Contributed talk #2 Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data
2:35pm - 2:45pm Contributed talk #3 Are Bayesian neural networks intrinsically good at out-of-distribution detection?
2:45pm - 3:15pm Invited talk #3 Shiori Sagawa Improving Robustness to Distribution Shifts: Methods and Benchmarks
Coffee Break: 3:15pm - 3:30pm
Session 4
3:30pm - 4:00pm Invited talk #4 Nazneen Rajani Evaluating deep learning models with applications to NLP
4:00pm - 4:10pm Contributed talk #4 Calibrated Out-of-Distribution Detection with Conformal P-values
4:10pm - 4:20pm Contributed talk #5 Provably Robust Detection of Out-of-distribution Data (almost) for free
4:20pm - 4:30pm Contributed talk #6 Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results
4:30pm - 5:00pm Invited talk #5 Jinwoo Shin Contrastive Learning for Novelty Detection
Poster Session #1
Room 1 (Uncertainty)
- Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data
- Repulsive Deep Ensembles are Bayesian
- Precise characterization of the prior predictive distribution of deep ReLU networks
- Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect
- Are Bayesian neural networks intrinsically good at out-of-distribution detection?
- Towards improving robustness of compressed CNNs
- Efficient Gaussian Neural Processes for Regression
- Rethinking Function-Space Variational Inference in Bayesian Neural Networks
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
- Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition
- Bayesian Neural Networks with Soft Evidence
- A Bayesian Approach to Invariant Deep Neural Networks
- Neural Variational Gradient Descent
- The Hidden Uncertainty in a Neural Network’s Activations
- Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings
- Distribution-free uncertainty quantification for classification under label shift
- Learning to Align the Support of Distributions
- Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition
- A variational approximate posterior for the deep Wishart process
- On The Dark Side Of Calibration For Modern Neural Networks
- Learning Invariant Weights in Neural Networks
- On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks
- Deep Deterministic Uncertainty for Semantic Segmentation
- Stochastic Bouncy Particle Sampler for Bayesian Neural Networks
- A Tale Of Two Long Tails
- Intrinsic uncertainties and where to find them
Room 2 (Robustness)
- Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
- DATE: Detecting Anomalies in Text via Self-Supervision of Transformers
- Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning
- Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights
- On Out-of-distribution Detection with Energy-Based Models
- Transfer and Marginalize: Explaining Away Label Noise with Privileged Information
- SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization
- BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research
- Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations
- Exact and Efficient Adversarial Robustness with Decomposable Neural Networks
- Anomaly Detection for Event Data with Temporal Point Processes
- An Empirical Study of Invariant Risk Minimization on Deep Models
- Consistency Regularization Can Improve Robustness to Label Noise
- Evaluating the Use of Reconstruction Error for Novelty Localization
- Objective Robustness in Deep Reinforcement Learning
- How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
- Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective
- Contrastive Predictive Coding for Anomaly Detection and Segmentation
- Multi-headed Neural Ensemble Search
- Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers
- Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic
- On the reversed bias-variance tradeoff in deep ensembles
- Robust Generalization of Quadratic Neural Networks via Function Identification
- Deep Random Projection Outlyingness for Unsupervised Anomaly Detection
- DAIR: Data Augmented Invariant Regularization
- Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
- Novelty detection using ensembles with regularized disagreement
- Thinkback: Task-Specific Out-of-Distribution Detection
Poster Session #2
Room 3 (Uncertainty)
- Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm
- PAC Prediction Sets Under Covariate Shift
- Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations
- Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification
- Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification
- Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty
- Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error
- Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes
- Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning
- Understanding the Under-Coverage Bias in Uncertainty Estimation
- Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It
- Quantization of Bayesian neural networks and its effect on quality of uncertainty
- Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression
- Practical posterior Laplace approximation with optimization-driven second moment estimation
- Variational Generative Flows for Reconstruction Uncertainty Estimation
- Epistemic Uncertainty in Learning Chaotic Dynamical Systems
- Top-label calibration
- Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings
- RouBL: A computationally efficient way to go beyond mean-field variational inference
- Domain Adaptation with Factorizable Joint Shift
- Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
- Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data
- Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities
- Identifying Invariant and Sparse Predictors in High-dimensional Data
- On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration
- On Pitfalls in OoD Detection: Entropy Considered Harmful
- Notes on the Behavior of MC Dropout
- Distribution-free Risk-controlling Prediction Sets
Room 4 (Robustness)
- Exploring the Limits of Out-of-Distribution Detection
- Calibrated Out-of-Distribution Detection with Conformal P-values
- A simple fix to Mahalanobis distance for improving near-OOD detection
- Provably Robust Detection of Out-of-distribution Data (almost) for free
- Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results
- Rethinking Assumptions in Deep Anomaly Detection
- Simple, General-Purpose Defense Against Targeted Training Set Attacks Using Gradient Alignment
- Consistency Regularization for Training Confidence-Calibrated Classifiers
- Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training
- Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
- On the Calibration of Deterministic Epistemic Uncertainty
- On Stein Variational Neural Network Ensembles
- What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
- No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
- Out-of-Distribution Generalization with Deep Equilibrium Models
- Mixture Proportion Estimation and PU Learning: A Modern Approach
- Detecting OODs as datapoints with High Uncertainty
- Multi-task Transformation Learning for Robust Out-of-Distribution Detection
- Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
- PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation
- Improved Adversarial Robustness via Uncertainty Targeted Attacks
- Defending against Adversarial Patches with Robust Self-Attention
- Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance
- Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering
- Relating Adversarially Robust Generalization to Flat Minima
- Deep Quantile Aggregation