Accepted Papers

Note: Papers listed here do not constitute workshop proceedings.

  1. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
  2. Subspace Inference for Bayesian Deep Learning Wesley Maddox, Timur Garipov, Pavel Izmailov, Polina Kirichenko, Dmitry Vetrov, Andrew Gordon Wilson
  3. ‘In-Between’ Uncertainty in Bayesian Neural Networks Andrew Y. K. Foong, Yingzhen Li, Jose Miguel Hernandez-Lobato, Richard E. Turner
  4. Quality of Uncertainty Quantification for Bayesian Neural Network Inference Jiayu Yao, Weiwei Pan, Soumya Ghosh, Finale Doshi-Velez
  5. Detecting Extrapolation with Influence Functions David Madras, James Atwood, Alex D'Amour
  6. How Can We Be So Dense? The Robustness of Highly Sparse Representations Subutai Ahmad, Luiz Scheinkman
  7. Likelihood Ratios for Out-of-Distribution Detection Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan
  8. Improving anomaly detection with differential privacy Min Du, Dawn Song
  9. Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift Yaniv Ovadia, Emily Fertig, Jie Ren, Zack Nado, D. Sculley, Sebastian Nowozin, Josh Dillon, Balaji Lakshminarayanan, Jasper Snoek
  10. RobULA: Efficient Sampling for Robust Bayesian Inference Kush Bhatia, Yi-An Ma, Peter L. Bartlett, Anca D. Dragan, Michael I. Jordan
  11. Out-of-Sample Robustness for Neural Networks via Confidence Densities Robert Cornish, George Deligiannidis, Arnaud Doucet
  12. Uncertainty estimates and out-of-distribution detection with Sine Networks Hartmut Maennel
  13. Efficient evaluation-time uncertainty estimation by improved distillation Erik Englesson, Hossein Azizpour
  14. Pumpout: A Meta Approach to Robust Deep Learning with Noisy Labels Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor W. Tsang, Masashi Sugiyama
  15. Deep Support Vector Data Description for Unsupervised and Semi-Supervised Anomaly Detection Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Marius Kloft
  16. Affine Variational Autoencoders: An Efficient Approach for Improving Generalization and Robustness to Distribution Shift Rene Bidart, Alexander Wong
  17. Exploring Deep Anomaly Detection Methods Based on Capsule Net Xiaoyan Li, Iluju Kiringa, Tet Yeap, Xiaodan Zhu, Yifeng Li
  18. Output-Constrained Bayesian Neural Networks Wanqian Yang*, Lars Lorch*, Moritz A. Graule*, Srivatsan Srinivasan, Anirudh Suresh, Jiayu Yao, Melanie F. Pradier, Finale Doshi-Velez
  19. Defense Against Adversarial Attacks by Langevin Dynamics Vignesh Srinivasan, Arturo Marban, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
  20. CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
  21. An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo
  22. Transfer of Adversarial Robustness Between Perturbation Types Daniel Kang*, Yi Sun*, Tom Brown, Dan Hendrycks, Jacob Steinhardt
  23. On Norm-Agnostic Robustness of Adversarial Training for MNIST Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin
  24. Mitigating Model Non-Identifiability in BNN with Latent Variables Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
  25. Learning for Single-Shot Confidence Calibration in Deep Neural Networks through Stochastic Inferences Seonguk Seo, Paul Hongsuck Seo, Bohyung Han
  26. Disentangling Adversarial Robustness and Generalization David Stutz, Matthias Hein, Bernt Schiele
  27. Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection Jonathan Aigrain, Marcin Detyniecki
  28. Unsupervised Temperature Scaling: Post-Processing Unsupervised Calibration of Deep Models Decisions Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Wilson Leão, Christian Gagné
  29. Calibration of Encoder Decoder Models for Neural Machine Translation Aviral Kumar, Sunita Sarawagi
  30. Using learned optimizers to make models robust to input noise Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Sohl-Dickstein, Ekin D. Cubuk
  31. Implicit Generative Modeling of Random Noise during Training improves Adversarial Robustness Priyadarshini Panda, Kaushik Roy
  32. Leverage Temporal Consistency for Robust Semantic Video Segmentation Timo Sämann, Karl Amende, Stefan Milz, Horst Michael Gross
  33. EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning Kunal Menda, Katherine Driggs-Campbell, Mykel J. Kochenderfer
  34. Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, Ekin D. Cubuk
  35. Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
  36. MNIST-C: A Robustness Benchmark for Computer Vision Norman Mu, Justin Gilmer
  37. VAE-GANs for Compressive Medical Image Recovery: Uncertainty Analysis Vineet Edupuganti, Morteza Mardani, Joseph Cheng, Shreyas Vasanawala, John Pauly
  38. A Fourier Perspective on Model Robustness in Computer Vision Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D. Cubuk, Justin Gilmer
  39. Robust conditional GANs under missing or uncertain labels Kiran Koshy Thekumparampil, Sewoong Oh, Ashish Khetan
  40. Defending Deep Neural Networks against Structural Perturbations Uttaran Sinha, Saurabh Joshi, Vineeth N Balasubramanian
  41. Analyzing the Role of Model Uncertainty for Electronic Health Records Michael W. Dusenberry, Andrew Dai, Dustin Tran, Edward Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Katherine Heller
  42. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks Sunil Thulasidasan, Gopinath Chennupati, Jeffrey Bilmes, Sarah Michalak, Tanmoy Bhattacharya
  43. Understanding Adversarial Robustness Through Loss Landscape Geometries Joyce Xu, Dian Ang Yap, Vinay Uday Prabhu
  44. Learning a Hierarchy of Neural Connections for Modeling Uncertainty Raanan Y. Rohekar, Yaniv Gurwicz, Shami Nisimov, Gal Novik
  45. Bayesian Evaluation of Black-Box Classifiers Disi Ji, Robert Logan, Padhraic Smyth, Mark Steyvers
  46. Stochastic Prototype Embeddings Tyler R. Scott, Michael C. Mozer
  47. Assessing the Robustness of Bayesian Dark Knowledge to Posterior Uncertainty Meet P. Vadera, Benjamin M. Marlin
  48. Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery Aven Samareh, Arash Pakbin, Xiaohan Chen, Nathan C. Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi
  49. Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models Judith Bütepage, Petra Poklukar, Danica Kragic
  50. Continual Learning by Kalman Optimiser Honglin Li, Shirin Enshaeifar, Frieder Ganz, Payam Barnaghi
  51. Predicting Model Failure using Saliency Maps in Autonomous Driving Systems Sina Mohseni, Akshay Jagadeesh, Zhangyang Wang
  52. Deeper Connections between Neural Networks and Gaussian Processes Speed-up Active Learning Evgenii Tsymbalov, Sergei Makarychev, Alexander Shapeev, Maxim Panov