Bayes' theorem provides a principled way to update knowledge (belief) from data. Starting with a prior belief and given some new data, we update our belief a posteriori following Bayes' rule. This may well be the essence of learning - for machines as well as for humans.
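In symbols (the notation here is ours, not taken from the papers), with \(\theta\) denoting the model parameters (e.g. network weights) and \(D\) the observed data, the update reads

\[
p(\theta \mid D) \;=\; \frac{p(D \mid \theta)\, p(\theta)}{p(D)},
\qquad
p(D) \;=\; \int p(D \mid \theta)\, p(\theta)\, d\theta .
\]

The prior \(p(\theta)\) is what we believed before seeing the data; the posterior \(p(\theta \mid D)\) is what we believe afterwards.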
CNNs are well suited to processing images, with the Deep Image Prior [1] stored in their architectures and weights. When fed task-specific data, a CNN updates its belief (its weights) and wraps itself around the task. This is how a CNN learns to accomplish an image analysis task.
However, for a Bayesian, something is missing here. In the Bayes land, nothing is absolutely certain: our prior belief is a probability distribution, and so is our posterior belief [2]. Unless one has seen all the data in the world, the posterior distribution retains some width - which translates into uncertainty. The predominant maximum-likelihood way of deep learning, in contrast, goes from one deterministic belief (a randomly initialized weight set) to another (the final weight set after training). Hence, uncertainty is missing, and silent failures [3] may occur.
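A toy illustration of the contrast (not the method of the papers, and with a deliberately trivial one-parameter "network"): assume we somehow obtained samples from the posterior over the weights. The spread of their predictions gives a crude uncertainty estimate, whereas a single maximum-likelihood weight set offers no notion of doubt at all.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, x):
    """Toy 'network': a one-parameter logistic model (purely illustrative)."""
    return 1.0 / (1.0 + np.exp(-weights * x))

x_new = 1.5

# Maximum-likelihood deep learning: one deterministic weight set, one answer.
w_ml = 2.0
p_point = predict(w_ml, x_new)          # a single number, no notion of doubt

# Bayesian view: the posterior over weights has width; draw samples from it.
w_samples = rng.normal(loc=2.0, scale=0.7, size=1000)   # hypothetical posterior
p_samples = predict(w_samples, x_new)

p_mean = p_samples.mean()               # posterior predictive mean
p_std = p_samples.std()                 # spread = uncertainty, absent above

print(f"point estimate: {p_point:.3f}")
print(f"posterior mean: {p_mean:.3f} +/- {p_std:.3f}")
```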
Our 2022 MICCAI paper [4] introduces a practical way to infer the Bayesian uncertainty of the powerful nnU-Net, so that it provides uncertainty quantification (UQ) along with its organ segmentation. Our later MELBA paper [5] substantially strengthens the computational foundation of the UQ, building upon Statistical Physics simulations*. This series of work has important implications for clinical image analysis and other life-critical tasks, as it enables automatic identification of black-box errors (which ruin the reputation of AI!) when, for example, a new patient arrives who deviates from the training cohort. Being honest about uncertainty is a step towards trustworthiness.
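As a minimal sketch of how such UQ can be consumed downstream (the sampling scheme and uncertainty measure in [4, 5] may differ; the function and variable names here are ours): given several softmax segmentation maps produced by different posterior weight samples, the voxel-wise predictive entropy highlights where the model is unsure, and its per-case average can flag a deviating patient for human review.

```python
import numpy as np

def predictive_entropy(prob_maps):
    """Voxel-wise entropy of the mean softmax over posterior samples.

    prob_maps: array of shape (n_samples, n_classes, *spatial_dims), each
    entry a softmax output from one sampled weight set (hypothetical input).
    """
    mean_probs = prob_maps.mean(axis=0)                     # (n_classes, ...)
    eps = 1e-12                                             # avoid log(0)
    return -(mean_probs * np.log(mean_probs + eps)).sum(axis=0)

# Toy usage: 8 posterior samples, 3 classes, a 4x4x4 volume.
rng = np.random.default_rng(1)
logits = rng.normal(size=(8, 3, 4, 4, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

entropy_map = predictive_entropy(probs)      # high values = the model is unsure
case_score = entropy_map.mean()              # a crude per-case flag for review
print(f"mean voxel entropy: {case_score:.3f}")
```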
The oriental Master taught us that real knowledge is not only to know what you know, but also to know what you do not know [6]. The same ethos that makes humans modest makes AI trustworthy.