Home

In this context, Bayesian informatics is a convergent field that connects the tools of statistical inference (like parameter-estimation & model-selection) to thermal physics, network analysis, the code-based sciences (like genetics, linguistics, & computer science), and even the role of information industries in the evolution of complex systems. To the extent that Bayesian inference is the science of making the most of limited data, one might even imagine that data-driven crime scene investigation is a special case.

In particular, the science of Bayesian inference applied to model-selection may eventually offer a quantitative way to assess such new approaches. That's because log-probability based Kullback-Leibler divergence, i.e. the extent to which a candidate model is surprised by incoming observations, formally weighs both goodness of fit (prediction quality) and algorithmic simplicity (Occam's razor). This strategy lies at the heart of independent approaches to quantitative model-selection in both the life sciences (e.g. Burnham & Anderson 2002) and the physical sciences (e.g. Gregory 2005).
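As a toy illustration of that fit-versus-simplicity trade-off (our own sketch, not an example from the references above), the snippet below compares polynomial fits with the Akaike information criterion, which Burnham & Anderson motivate as an estimate of expected Kullback-Leibler divergence. The data set, candidate models, and variable names here are all invented for demonstration.

```python
# Sketch: ranking candidate models with AIC = 2k - 2 ln(L_max),
# an estimate of relative Kullback-Leibler divergence from the data source.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a gently curved trend plus Gaussian noise.
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.1, x.size)

def aic_for_polynomial(degree):
    """AIC for a degree-`degree` polynomial fit with Gaussian residuals."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n = x.size
    sigma2 = np.mean(resid**2)                   # maximum-likelihood noise variance
    log_lmax = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    k = degree + 2                               # polynomial coefficients plus sigma
    return 2 * k - 2 * log_lmax

aics = {d: aic_for_polynomial(d) for d in (1, 2, 5)}
best = min(aics.values())
weights = {d: np.exp(-0.5 * (a - best)) for d, a in aics.items()}
total = sum(weights.values())

for d, a in aics.items():
    print(f"degree {d}: AIC = {a:7.2f}, Akaike weight = {weights[d] / total:.3f}")
```

The Akaike weights turn AIC differences back into relative probabilities, so the over-fitted degree-5 model is penalized for its extra parameters even though it tracks the noise more closely.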

Welcome to our mobile-ready google-site about Bayesian model-selection, as a complement to our webpages on Bayesian informatics and on tabulating surprisals.

It has long been apparent that quantitative science is less about binary logic, e.g. "Is it this or not?", than about probabilities, e.g. "What is your estimate, and its uncertainty?" What is less appreciated is that there is an emerging quantitative science about our choice of questions to ask, i.e. about which concepts to use. In other words, it's less and less about "Is the hypothesis true or false?" and more about "Which concept-set is currently the best bet?". Lifeforms on earth have of course been asking this same question about molecular codes, rather than idea codes, for a long time.
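A minimal sketch of that "best current bet" framing, scored with surprisal (negative log-probability in bits): the two coin models, the data, and the equal priors below are invented purely for illustration and are not taken from the site's surprisal tables.

```python
# Sketch: two hypothetical concept-sets (models of a coin) are scored by
# how surprised each is, in bits, by the same observed data; the less
# surprised model earns the larger posterior weight under equal priors.
import numpy as np

observations = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])   # 1 = heads (toy data)

def surprisal_bits(p_heads):
    """-log2 P(observations | model) for an i.i.d. Bernoulli model."""
    p = np.where(observations == 1, p_heads, 1.0 - p_heads)
    return -np.sum(np.log2(p))

models = {"fair coin (p=0.5)": 0.5, "biased coin (p=0.8)": 0.8}
s = {name: surprisal_bits(p) for name, p in models.items()}

# With equal prior odds, posterior weight is proportional to 2**(-surprisal).
evidence = {name: 2.0 ** (-bits) for name, bits in s.items()}
total = sum(evidence.values())
for name in models:
    print(f"{name}: surprisal = {s[name]:5.2f} bits, "
          f"posterior = {evidence[name] / total:.3f}")
```

Neither model is declared true or false; the surprisal difference simply says which one is the better bet given the data seen so far.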

The figure above illustrates the observation cycle, including model-selection, by which our idea-codes adapt to the world around us.

Related references: