Accepted Papers
Note: Papers listed here do not constitute workshop proceedings.
Stanislav Fort, Jie Ren and Balaji Lakshminarayanan. Exploring the Limits of Out-of-Distribution Detection
Beau Coker, Weiwei Pan and Finale Doshi-Velez. Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data
Francesco D'Angelo and Vincent Fortuin. Repulsive Deep Ensembles are Bayesian
Stephen Bates, Emmanuel Candès, Lihua Lei, Yaniv Romano and Matteo Sesia. Calibrated Out-of-Distribution Detection with Conformal P-values
Lorenzo Noci, Gregor Bachmann, Kevin Roth, Sebastian Nowozin and Thomas Hofmann. Precise characterization of the prior predictive distribution of deep ReLU networks
Lorenzo Noci, Kevin Roth, Gregor Bachmann, Sebastian Nowozin and Thomas Hofmann. Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect
Jie Ren, Stanislav Fort, Jeremiah Zhe Liu, Abhijit Guha Roy, Shreyas Padhy and Balaji Lakshminarayanan. A simple fix to Mahalanobis distance for improving near-OOD detection
Christian Henning, Francesco D'Angelo and Benjamin F. Grewe. Are Bayesian neural networks intrinsically good at out-of-distribution detection?
Alexander Meinke, Julian Bitterwolf and Matthias Hein. Provably Robust Detection of Out-of-distribution Data (almost) for free
Mohamad Hosein Danesh and Alan Fern. Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results
Lukas Ruff, Robert A Vandermeulen, Billy Joe Franks, Klaus-Robert Müller and Marius Kloft. Rethinking Assumptions in Deep Anomaly Detection
Haonan Duan and Pascal Poupart. Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm
Sangdon Park, Edgar Dobriban, Insup Lee and Osbert Bastani. PAC Prediction Sets Under Covariate Shift
Michael Zhang, Nimit Sohoni, Hongyang Zhang, Chelsea Finn and Chris Ré. Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations
Zhisheng Xiao, Qing Yan and Yali Amit. Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
Andrei Manolache, Florin Brad and Elena Burceanu. DATE: Detecting Anomalies in Text via Self-Supervision of Transformers
Youngseog Chung, Willie Neiswanger, Ian Char and Jeff Schneider. Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification
Youngseog Chung, Ian Char, Han Guo, Jeff Schneider and Willie Neiswanger. Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification
Macheng Shen and Jonathan How. Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning
Karina Zadorozhny, Dennis Ulmer and Giovanni Cina. Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights
Sven Elflein, Bertrand Charpentier, Daniel Zügner and Stephan Günnemann. On Out-of-distribution Detection with Energy-Based Models
Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip Torr and Yarin Gal. Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty
Mark Collier, Rodolphe Jenatton, Efi Kokiopoulou and Jesse Berent. Transfer and Marginalize: Explaining Away Label Noise with Privileged Information
Ondrej Bohdal, Yongxin Yang and Timothy Hospedales. Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error
Ji Won Park, Ashley Villar, Yin Li, Yan-Fei Jiang, Shirley Ho, Joshua Yao-Yu Lin, Philip Marshall and Aaron Roodman. Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes
Jasper Hoffmann, Shashank Agnihotri, Tonmoy Saikia and Thomas Brox. Towards improving robustness of compressed CNNs
Soroosh Shahtalebi, Jean-Christophe Gagnon-Audet, Touraj Laleh, Mojtaba Faramarzi, Kartik Ahuja and Irina Rish. SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization
Stratis Markou, James Requeima, Wessel Bruinsma and Richard Turner. Efficient Gaussian Neural Processes for Regression
Zayd Hammoudeh and Daniel Lowd. Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity
Varun Tekur, Javin Pombra, Rose Hong and Weiwei Pan. Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning
Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh and Yarin Gal. Rethinking Function-Space Variational Inference in Bayesian Neural Networks
Yu Bai, Song Mei, Huan Wang and Caiming Xiong. Understanding the Under-Coverage Bias in Uncertainty Estimation
Kate Highnam, Kai Arulkumaran, Zach Hanif and Nicholas R. Jennings. BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research
Arsenii Ashukha, Andrei Atanov and Dmitry Vetrov. Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations
Lisa Schut, Edward Hu, Greg Yang and Yarin Gal. Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It
Pranav Subramani, Antonio Vergari, Gautam Kamath and Robert Peharz. Exact and Efficient Adversarial Robustness with Decomposable Neural Networks
Youngbum Hur, Jihoon Tack, Eunho Yang, Sung Ju Hwang and Jinwoo Shin. Consistency Regularization for Training Confidence-Calibrated Classifiers
Dan Ley, Umang Bhatt and Adrian Weller. Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
Mahesh Subedar, Ranganath Krishnan, Sidharth N Kashyap and Omesh Tickoo. Quantization of Bayesian neural networks and its effect on quality of uncertainty
Mobarakol Islam, Lalithkumar Seenivasan, Hongliang Ren and Ben Glocker. Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition
Edward Yu. Bayesian Neural Networks with Soft Evidence
Oleksandr Shchur, Ali Caner Turkmen, Tim Januschowski, Jan Gasthaus and Stephan Günnemann. Anomaly Detection for Event Data with Temporal Point Processes
Vincent Mai, Waleed Khamies and Liam Paull. Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression
Yong Lin, Qing Lian and Tong Zhang. An Empirical Study of Invariant Risk Minimization on Deep Models
Nikolaos Mourdoukoutas, Marco Federici, Georges Pantalos, Mark van der Wilk and Vincent Fortuin. A Bayesian Approach to Invariant Deep Neural Networks
Christian S. Perone, Roberto Pereira Silveira and Thomas Paula. L2M: Practical posterior Laplace approximation with optimization-driven second moment estimation
Jiaxin Zhang, Jan Drgona, Sayak Mukherjee, Mahantesh Halappanavar and Frank Liu. Variational Generative Flows for Reconstruction Uncertainty Estimation
Chawin Sitawarin, Arvind Sridhar and David Wagner. Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training
Erik Englesson and Hossein Azizpour. Consistency Regularization Can Improve Robustness to Label Noise
Lauro Langosco, Vincent Fortuin and Heiko Strathmann. Neural Variational Gradient Descent
Patrick Feeney and Michael Hughes. Evaluating the Use of Reconstruction Error for Novelty Localization
John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon and Ludwig Schmidt. Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
Janis Postels, Hermann Blum, Yannick Strümpler, Cesar Cadena, Roland Siegwart, Luc Van Gool and Federico Tombari. The Hidden Uncertainty in a Neural Network’s Activations
Janis Postels, Mattia Segu, Tao Sun, Luc Van Gool, Fisher Yu and Federico Tombari. On the Calibration of Deterministic Epistemic Uncertainty
Jack Koch, Lauro Langosco, Jacob Pfau, James Le and Lee Sharkey. Objective Robustness in Deep Reinforcement Learning
Nate Gruver, Sanyam Kapoor, Miles Cranmer and Andrew Wilson. Epistemic Uncertainty in Learning Chaotic Dynamical Systems
Hao Yang, Yongxin Yang, Da Li, Yun Zhou and Timothy Hospedales. Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings
Aleksandr Podkopaev and Aaditya Ramdas. Distribution-free uncertainty quantification for classification under label shift
Jingling Li, Mozhi Zhang, Keyulu Xu, John Dickerson and Jimmy Ba. How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
Chirag Gupta and Aaditya Ramdas. Top-label calibration
Shangyuan Tong, Timur Garipov, Yang Zhang, Shiyu Chang and Tommi Jaakkola. Adversarial Support Alignment via Relaxed 1D Optimal Transport
Charles Corbière, Marc Lafon, Nicolas Thome, Matthieu Cord and Patrick Pérez. Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition
Julian Bitterwolf, Alexander Meinke, Maximilian Augustin and Matthias Hein. Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective
Puck de Haan and Sindy Löwe. Contrastive Predictive Coding for Anomaly Detection and Segmentation
Ashwin Raaghav Narayanan, Arbër Zela, Tonmoy Saikia, Thomas Brox and Frank Hutter. Multi-headed Neural Ensemble Search
Sebastian Ober and Laurence Aitchison. A variational approximate posterior for the deep Wishart process
Francesco D'Angelo, Vincent Fortuin and Florian Wenzel. On Stein Variational Neural Network Ensembles
Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed Chi and Alex Beutel. What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
Utkarsh Sarawgi, Rishab Khincha, Wazeer Zulfikar, Satrajit Ghosh and Pattie Maes. Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings
Sahar Karimi, Beliz Gokkaya and Audrey Flower. RouBL: A computationally efficient way to go beyond mean-field variational inference
Fahim Tajwar, Ananya Kumar, Sang Michael Xie and Percy Liang. No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
Kaiqu Liang, Cem Anil, Yuhuai Wu and Roger Grosse. Out-of-Distribution Generalization with Deep Equilibrium Models
Saurabh Garg, Yifan Wu, Alex Smola, Sivaraman Balakrishnan and Zachary Lipton. Mixture Proportion Estimation and PU Learning: A Modern Approach
Aditya Singh, Alessandro Bay, Biswa Sengupta and Andrea Mirabile. On The Dark Side Of Calibration For Modern Neural Networks
Hao He, Yuzhe Yang and Hao Wang. Domain Adaptation with Factorizable Joint Shift
Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish and Sarath Chandar. Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers
Tycho van der Ouderaa and Mark van der Wilk. Learning Invariant Weights in Neural Networks
Borja Gonzalez Leon, Murray Shanahan and Francesco Belardinelli. Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic
John Holodnak and Allan Wollaber. On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks
Lu Mi, Hao Wang, Yonglong Tian, Hao He and Nir Shavit. Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky and Insup Lee. Detecting OODs as datapoints with High Uncertainty
Sina Mohseni, Arash Vahdat and Jay Yadawa. Multi-task Transformation Learning for Robust Out-of-Distribution Detection
Jacob Kelly, Richard Zemel and Will Grathwohl. Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data
Lipi Gupta, Aashwin Mishra and Auralee Edelen. Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities
Seijin Kobayashi, Johannes von Oswald and Benjamin F. Grewe. On the reversed bias-variance tradeoff in deep ensembles
Kan Xu, Hamsa Bastani and Osbert Bastani. Robust Generalization of Quadratic Neural Networks via Function Identification
Katelyn Morrison, Benjamin Gilby, Colton Lipchak, Adam Mattioli and Adriana Kovashka. Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
Martin Bauw, Santiago Velasco-Forero, Jesus Angulo, Claude Adnet and Olivier Airiau. Deep Random Projection Outlyingness for Unsupervised Anomaly Detection
Jishnu Mukhoti, Joost van Amersfoort, Philip Torr and Yarin Gal. Deep Deterministic Uncertainty for Semantic Segmentation
Sean Spinney, Amin Mansouri, Amin Memarian and Irina Rish. Identifying Invariant and Sparse Predictors in High-dimensional Data
Athanasios Tsiligkaridis and Theodoros Tsiligkaridis. On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration
Andreas Kirsch, Jishnu Mukhoti, Joost van Amersfoort, Philip H.S. Torr and Yarin Gal. On Pitfalls in OoD Detection: Entropy Considered Harmful
Mrinal Rawat, Ramya Hebbalaguppe and Lovekesh Vig. PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation
Tianjian Huang, Chinnadhurai Sankar, Pooyan Amini, Satwik Kottur, Alborz Geramifard, Meisam Razaviyayn and Ahmad Beirami. DAIR: Data Augmented Invariant Regularization
Alexander Robey, Hamed Hassani and George J. Pappas. Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
Gilberto Manunza, Matteo Pagliardini, Martin Jaggi and Tatjana Chavdarova. Improved Adversarial Robustness via Uncertainty Targeted Attacks
Francesco Verdoja and Ville Kyrki. Notes on the Behavior of MC Dropout
Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik and Michael Jordan. Distribution-free Risk-controlling Prediction Sets
Ethan Goan and Clinton Fookes. Stochastic Bouncy Particle Sampler for Bayesian Neural Networks
Alexandru Tifrea, Eric Stavarache and Fanny Yang. Novelty detection using ensembles with regularized disagreement
Daniel D'Souza, Zach Nussbaum, Chirag Agarwal and Sara Hooker. A Tale Of Two Long Tails
Norman Mu and David Wagner. Defending against Adversarial Patches with Robust Self-Attention
Francesco Farina, Lawrence Phillips and Nicola J. Richmond. Intrinsic uncertainties and where to find them
Henry Kvinge, Colby Wight, Sarah Akers, Scott Howland, Woongjo Choi, Xiaolong Ma, Luke Gosink, Elizabeth Jurrus, Keerti Kappagantula and Tegan Emerson. Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance
Lakshya Jain, Varun Chandrasekaran, Uyeong Jang, Sanjit Seshia and Somesh Jha. Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering
Lixuan Yang and Dario Rossi. Thinkback: Task-Specific Out-of-Distribution Detection
David Stutz, Matthias Hein and Bernt Schiele. Relating Adversarially Robust Generalization to Flat Minima
Taesup Kim, Rasool Fakoor, Jonas Mueller, Ryan Tibshirani and Alex Smola. Deep Quantile Aggregation