Research Highlight

A stochastic back-propagation method for quantifying uncertainty in deep learning

Achievement

We developed a probabilistic machine learning method based on backward stochastic differential equations (SDEs), which formulates a class of stochastic neural networks (SNNs) as a stochastic optimal control problem. An efficient stochastic gradient descent algorithm is introduced, with the gradient computed by solving a backward SDE. Convergence analysis of the stochastic gradient descent optimization and numerical experiments on applications of stochastic neural networks are carried out to validate the methodology in both theory and performance.
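As a rough mathematical picture (our notation here, not taken verbatim from the paper, and assuming for simplicity a constant diffusion coefficient \(\sigma\)), the SNN can be viewed as a controlled SDE over a pseudo-time interval \([0, T]\):

\[
dX_t = \mu(X_t, u_t)\,dt + \sigma\,dW_t, \qquad X_0 = \text{input}, \qquad J(u) = \mathbb{E}\big[\Phi(X_T)\big],
\]

where the network parameters play the role of the control \(u_t\) and the training loss enters as the terminal cost \(\Phi\). The gradient of \(J\) can then be expressed through an adjoint backward SDE,

\[
dY_t = -\,\nabla_x \mu(X_t, u_t)^{\top} Y_t \,dt + Z_t\,dW_t, \qquad Y_T = \nabla_x \Phi(X_T), \qquad \nabla_{u_t} J = \mathbb{E}\big[\nabla_u \mu(X_t, u_t)^{\top} Y_t\big],
\]

so each gradient descent step requires only a forward simulation of the state SDE and a solve of this backward SDE.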

The central idea of our method is to treat the random samples generated by the backward SDE as "pseudo data" and to solve the backward SDE only partially, on a randomly selected subset of this pseudo data. In this way, completing one training iteration requires only a small fraction of the computing cost of solving the entire backward SDE; a sketch of this mini-batching idea is given below. The proposed back-propagation method for SNNs is expected to be a practical tool for uncertainty quantification in deep learning.
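To illustrate the pseudo-data mini-batching idea, here is a minimal, self-contained sketch in Python/NumPy. It is not the authors' implementation: the 1-D drift theta * tanh(x), the quadratic terminal loss, and all parameter values are assumptions chosen only to keep the example short. The forward SDE is simulated for all pseudo-data samples, but the backward (adjoint) pass and the gradient estimate are evaluated only on a randomly selected mini-batch of them.

import numpy as np

# Minimal sketch of BSDE-based back-propagation with "pseudo data" mini-batching.
# Toy model (assumed, not from the paper): a 1-D stochastic network
#   dX_t = theta * tanh(X_t) dt + sigma dW_t,   loss Phi(X_T) = (X_T - target)^2.

rng = np.random.default_rng(0)
sigma, T, n_steps = 0.1, 1.0, 50
dt = T / n_steps
target = 1.5
theta = 0.0                    # trainable parameter (stands in for the network weights)
n_paths, batch = 256, 32       # total pseudo-data samples vs. mini-batch size
lr = 0.1

for it in range(200):
    # Forward Euler-Maruyama pass: simulate every pseudo-data path.
    x = np.full(n_paths, 0.5)
    xs = [x.copy()]
    for _ in range(n_steps):
        x = x + theta * np.tanh(x) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        xs.append(x.copy())
    xs = np.asarray(xs)        # shape (n_steps + 1, n_paths)

    # Backward (adjoint) pass on a random mini-batch of pseudo data only.
    idx = rng.choice(n_paths, size=batch, replace=False)
    y = 2.0 * (xs[-1, idx] - target)           # Y_T = dPhi/dx at the terminal time
    grad = 0.0
    for k in range(n_steps - 1, -1, -1):
        xk = xs[k, idx]
        grad += np.mean(np.tanh(xk) * y) * dt  # accumulate E[ d(mu)/d(theta) * Y ] dt
        y = y + theta * (1.0 - np.tanh(xk) ** 2) * y * dt   # backward Euler step for Y

    theta -= lr * grad         # stochastic gradient descent update

print("trained theta:", theta)

Drawing a fresh random mini-batch of pseudo data at every iteration is what keeps the per-iteration cost at a small fraction of a full backward-SDE solve, in the same spirit as mini-batch stochastic gradient descent over training data.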

Figure 1: The performance of our method in the classification problem with the corresponding confidence band.

Figure 2: The performance of our method in the classification problem with the corresponding confidence band.

Publication

Richard Archibald, Feng Bao, Yanzhao Cao, and He Zhang, "A backward SDE method for uncertainty quantification in deep learning," Discrete and Continuous Dynamical Systems - S, 2021. doi: 10.3934/dcdss.2022062