For machine learning tasks, say, prediction, uncertainty quantification usually means one of two things: the assignment of data-driven degrees of belief to relevant hypotheses or the construction of confidence sets. Reliability in these two contexts also has different meanings, so there's really no common ground on which the two can be compared. Towards resolving this dichotomy, I consider the assignment of data-driven degrees of belief that are valid in the sense that the output is calibrated: roughly, degrees of belief assigned to true hypotheses tend to be large, while those assigned to false hypotheses tend to be small. This raises the question: what kinds of degrees of belief are valid in this sense? To answer it, I'll give a generalization of the false confidence theorem in statistical inference to the context of prediction, establishing that no precise probability distribution offers valid uncertainty quantification; hence the title's claim that imprecision is imperative for valid uncertainty quantification. To keep the "IM" wordplay going, I'll argue that the imperative imprecision isn't impossible to implement. Indeed, methods that are reliable in the usual frequentist sense, such as conformal prediction, can easily be transformed into imprecise probabilities of a special consonant form, thereby offering valid uncertainty quantification for machine learning.
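To make the last claim concrete, here is a minimal sketch (my illustration, not the speaker's code) of how a split conformal predictor can be read as a consonant imprecise probability: the conformal p-value, viewed as a function of the candidate response value, is a possibility (plausibility) contour whose upper level sets are the usual conformal prediction sets. The regression setting, the absolute-residual nonconformity score, and names such as plausibility_contour are assumptions made for the example.

    # Minimal sketch: split conformal regression as a consonant plausibility contour.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

    # Split into a proper training set and a calibration set.
    X_tr, y_tr, X_cal, y_cal = X[:100], y[:100], X[100:], y[100:]
    model = LinearRegression().fit(X_tr, y_tr)

    # Absolute-residual nonconformity scores on the calibration set.
    cal_scores = np.abs(y_cal - model.predict(X_cal))

    def plausibility_contour(x_new, y_grid):
        # Conformal transducer: p-value of each candidate response value.
        # Pointwise in y, this defines a possibility (consonant plausibility) contour.
        resid = np.abs(y_grid - model.predict(x_new.reshape(1, -1)))
        n = len(cal_scores)
        return np.array([(1 + np.sum(cal_scores >= r)) / (n + 1) for r in resid])

    # The plausibility of any hypothesis A about the future response is the
    # maximum of the contour over A; the 90% conformal prediction set is the
    # upper level set {y : contour(y) > 0.10}.
    x0 = rng.normal(size=3)
    y_grid = np.linspace(-10.0, 10.0, 400)
    pl = plausibility_contour(x0, y_grid)
    pred_set_90 = y_grid[pl > 0.10]
    print(pred_set_90.min(), pred_set_90.max())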
Epistemic AI is an approach that proposes the use of second-order uncertainty measures to quantify epistemic uncertainty in artificial intelligence. Mathematical frameworks that generalise the concept of a random variable, such as random sets, enable a more flexible and expressive approach to uncertainty modelling. We discuss ways in which the random-set and credal-set formalisms can model classification uncertainty over both the target and parameter spaces of a machine learning model (e.g., a neural network), outperforming Bayesian, ensemble and evidential baselines. We show how the principle can be extended to generative AI, in particular to make large language models less prone to hallucination, as well as to diffusion processes and generative adversarial networks. Exciting applications to large concept models and vision-language models, as well as to neural operators, scientific machine learning and neurosymbolic reasoning, are discussed.
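As one concrete illustration of the credal-set idea (a sketch under my own assumptions, not the authors' implementation), the softmax outputs of an ensemble of networks can be taken as the vertices of a credal set over the class labels; the lower and upper probability of each class are then the minimum and maximum over the members, and a cautious prediction is obtained by interval dominance, abstaining between classes that remain undominated. The function names below are hypothetical.

    # Illustrative sketch: credal classification from an ensemble's softmax outputs.
    import numpy as np

    def credal_bounds(ensemble_probs):
        # ensemble_probs: (n_members, n_classes) softmax outputs for one input.
        # The probability of a single class is linear in the distribution, so its
        # lower/upper values over the convex hull are attained at the vertices,
        # i.e. the min/max over the ensemble members.
        return ensemble_probs.min(axis=0), ensemble_probs.max(axis=0)

    def interval_dominance(lower, upper):
        # Class k is rejected if some class j has lower[j] > upper[k];
        # the remaining (undominated) classes form the cautious prediction.
        return [k for k in range(len(lower)) if not np.any(lower > upper[k])]

    # Example: three ensemble members, three classes.
    probs = np.array([[0.70, 0.20, 0.10],
                      [0.55, 0.35, 0.10],
                      [0.60, 0.25, 0.15]])
    lo, up = credal_bounds(probs)
    print(lo, up, interval_dominance(lo, up))  # here a single undominated class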