UAI 2018 Workshop: Uncertainty in Deep Learning
10 August, Monterey, California, USA
Deep neural networks (DNNs) trained on large datasets can make remarkably accurate predictions. But sometimes they cannot, for example because of limited training data, poor generalization to out-of-distribution inputs, or because the data is fundamentally noisy. In many applications, particularly those where predictions drive decision-making, accurately representing this uncertainty is essential.
The aim of this workshop is to foster discussion of, and research into, the rigorous treatment of uncertainty in deep learning models. We invite submission of papers for poster and short oral presentations. Both Bayesian and classical approaches are welcome. Topics of interest include, but are not limited to:
• Calibration
• Separation of forms of uncertainty
• Stochastic neural networks, such as Bayesian neural networks and ensembles
• Robustness to distribution shift
• Inference in deep latent-variable models and generative models
• Deep kernel learning and deep Gaussian processes
• Active deep learning
• Bayesian optimization
• Applications of uncertainty-aware deep learning
Invited Speakers
- Zoubin Ghahramani (University of Cambridge, Uber)
- Sergey Levine (University of California, Berkeley, Google)
- Volodymyr Kuleshov (Stanford University)
- Rich Caruana (Microsoft)
- Yingzhen Li (University of Cambridge)
Organizers
- Andrew Gordon Wilson (Cornell University)
- Balaji Lakshminarayanan (Google DeepMind)
- Dustin Tran (Google, Columbia University)
- Matt Hoffman (Google)