ICML UDL 2019
Accepted Papers
Note: The papers listed here do not constitute workshop proceedings.
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
Subspace Inference for Bayesian Deep Learning
Wesley Maddox, Timur Garipov, Pavel Izmailov, Polina Kirichenko, Dmitry Vetrov, Andrew Gordon Wilson
‘In-Between’ Uncertainty in Bayesian Neural Networks
Andrew Y. K. Foong, Yingzhen Li, Jose Miguel Hernandez-Lobato, Richard E. Turner
Quality of Uncertainty Quantification for Bayesian Neural Network Inference
Jiayu Yao, Weiwei Pan, Soumya Ghosh, Finale Doshi-Velez
Detecting Extrapolation with Influence Functions
David Madras, James Atwood, Alex D'Amour
How Can We Be So Dense? The Robustness of Highly Sparse Representations
Subutai Ahmad, Luiz Scheinkman
Likelihood Ratios for Out-of-Distribution Detection
Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan
Improving anomaly detection with differential privacy
Min Du, Dawn Song
Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
Yaniv Ovadia, Emily Fertig, Jie Ren, Zack Nado, D. Sculley, Sebastian Nowozin, Josh Dillon, Balaji Lakshminarayanan, Jasper Snoek
RobULA: Efficient Sampling for Robust Bayesian Inference
Kush Bhatia, Yi-An Ma, Peter L. Bartlett, Anca D. Dragan, Michael I. Jordan
Out-of-Sample Robustness for Neural Networks via Confidence Densities
Robert Cornish, George Deligiannidis, Arnaud Doucet
Uncertainty estimates and out-of-distribution detection with Sine Networks
Hartmut Maennel
Efficient evaluation-time uncertainty estimation by improved distillation
Erik Englesson, Hossein Azizpour
Pumpout: A Meta Approach to Robust Deep Learning with Noisy Labels
Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor W. Tsang, Masashi Sugiyama
Deep Support Vector Data Description for Unsupervised and Semi-Supervised Anomaly Detection
Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Marius Kloft
Affine Variational Autoencoders: An Efficient Approach for Improving Generalization and Robustness to Distribution Shift
Rene Bidart, Alexander Wong
Exploring Deep Anomaly Detection Methods Based on Capsule Net
Xiaoyan Li, Iluju Kiringa, Tet Yeap, Xiaodan Zhu, Yifeng Li
Output-Constrained Bayesian Neural Networks
Wanqian Yang*, Lars Lorch*, Moritz A. Graule*, Srivatsan Srinivasan, Anirudh Suresh, Jiayu Yao, Melanie F. Pradier, Finale Doshi-Velez
Defense Against Adversarial Attacks by Langevin Dynamics
Vignesh Srinivasan, Arturo Marban, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods
Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo
Transfer of Adversarial Robustness Between Perturbation Types
Daniel Kang*, Yi Sun*, Tom Brown, Dan Hendrycks, Jacob Steinhardt
On Norm-Agnostic Robustness of Adversarial Training for MNIST
Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin
Mitigating Model Non-Identifiability in BNN with Latent Variables
Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
Learning for Single-Shot Confidence Calibration in Deep Neural Networks through Stochastic Inferences
Seonguk Seo, Paul Hongsuck Seo, Bohyung Han
Disentangling Adversarial Robustness and Generalization
David Stutz, Matthias Hein, Bernt Schiele
Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection
Jonathan Aigrain, Marcin Detyniecki
Unsupervised Temperature Scaling: Post-Processing Unsupervised Calibration of Deep Models Decisions
Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Wilson Leão, Christian Gagné
Calibration of Encoder Decoder Models for Neural Machine Translation
Aviral Kumar, Sunita Sarawagi
Using learned optimizers to make models robust to input noise
Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Sohl-Dickstein, Ekin D. Cubuk
Implicit Generative Modeling of Random Noise during Training improves Adversarial Robustness
Priyadarshini Panda, Kaushik Roy
Leverage Temporal Consistency for Robust Semantic Video Segmentation
Timo Sämann, Karl Amende, Stefan Milz, Horst Michael Gross
EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning
Kunal Menda, Katherine Driggs-Campbell, Mykel J. Kochenderfer
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, Ekin D. Cubuk
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness
Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
MNIST-C: A Robustness Benchmark for Computer Vision
Norman Mu, Justin Gilmer
VAE-GANs for Compressive Medical Image Recovery: Uncertainty Analysis
Vineet Edupuganti, Morteza Mardani, Joseph Cheng, Shreyas Vasanawala, John Pauly
A Fourier Perspective on Model Robustness in Computer Vision
Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D. Cubuk, Justin Gilmer
Robust conditional GANs under missing or uncertain labels
Kiran Koshy Thekumparampil, Sewoong Oh, Ashish Khetan
Defending Deep Neural Networks against Structural Perturbations
Uttaran Sinha, Saurabh Joshi, Vineeth N Balasubramanian
Analyzing the Role of Model Uncertainty for Electronic Health Records
Michael W. Dusenberry, Andrew Dai, Dustin Tran, Edward Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Katherine Heller
On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
Sunil Thulasidasan, Gopinath Chennupati, Jeffrey Bilmes, Sarah Michalak, Tanmoy Bhattacharya
Understanding Adversarial Robustness Through Loss Landscape Geometries
Joyce Xu, Dian Ang Yap, Vinay Uday Prabhu
Learning a Hierarchy of Neural Connections for Modeling Uncertainty
Raanan Y. Rohekar, Yaniv Gurwicz, Shami Nisimov, Gal Novik
Bayesian Evaluation of Black-Box Classifiers
Disi Ji, Robert Logan, Padhraic Smyth, Mark Steyvers
Stochastic Prototype Embeddings
Tyler R. Scott, Michael C. Mozer
Assessing the Robustness of Bayesian Dark Knowledge to Posterior Uncertainty
Meet P. Vadera, Benjamin M. Marlin
Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery
Aven Samareh, Arash Pakbin, Xiaohan Chen, Nathan C. Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi
Modeling Assumptions and Evaluation Schemes: On the Assessment of Deep Latent Variable Models
Judith Bütepage, Petra Poklukar, Danica Kragic
Continual Learning by Kalman Optimiser
Honglin Li, Shirin Enshaeifar, Frieder Ganz, Payam Barnaghi
Predicting Model Failure using Saliency Maps in Autonomous Driving Systems
Sina Mohseni, Akshay Jagadeesh, Zhangyang Wang
Deeper Connections between Neural Networks and Gaussian Processes Speed-up Active Learning
Evgenii Tsymbalov, Sergei Makarychev, Alexander Shapeev, Maxim Panov