In the Bayesian estimation framework, the parameters to be estimated are treated as random variables, and any prior information we may have about them is modeled as a probability distribution called the prior distribution. The collected data also carry information about these parameters, which is represented by the likelihood. Bayesian estimation performs a trade-off between both sources of information by computing a new probability distribution, the posterior distribution, which updates the prior distribution using the data through the likelihood.
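As a minimal illustration of this prior-to-posterior update, the sketch below assumes a scalar parameter with a Gaussian prior and Gaussian measurement noise (a conjugate case not discussed above, chosen only because the posterior then has a closed form); all numerical values are hypothetical.

```python
import numpy as np

# Minimal sketch of a Bayesian update for a scalar parameter theta,
# assuming a Gaussian prior and Gaussian measurement noise (conjugate case),
# so the posterior is also Gaussian with closed-form mean and variance.

# Prior belief about theta (hypothetical values for illustration).
prior_mean, prior_var = 0.0, 4.0

# Measurement model: y_i = theta + e_i, with e_i ~ N(0, noise_var).
noise_var = 1.0
rng = np.random.default_rng(0)
true_theta = 2.5
y = true_theta + np.sqrt(noise_var) * rng.standard_normal(20)

# Posterior: precision-weighted combination of the prior and the data (likelihood).
post_var = 1.0 / (1.0 / prior_var + len(y) / noise_var)
post_mean = post_var * (prior_mean / prior_var + y.sum() / noise_var)

print(f"posterior mean = {post_mean:.3f}, posterior variance = {post_var:.3f}")
```

With more data, the posterior variance shrinks and the posterior mean moves from the prior mean toward the value supported by the measurements, which is the trade-off described above.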
Related journal papers
(Submitted) Yue Ju, Kévin Colin, Tianshi Chen, Bo Wahlberg, Håkan Hjalmarsson. Kernel-based regularized estimators: Theoretical Insights and New Estimators with Improved Accuracy. Submitted to Automatica. 2025
Related conference papers
Kévin Colin, Yue Ju, Xavier Bombois, Cristian Rojas, Håkan Hjalmarsson. A bias-variance perspective of data-driven control. Presented at the 20th IFAC Symposium on System Identification (SYSID 2024), Boston, USA. Open access on HAL: https://hal.science/hal-04338871
Kévin Colin, Håkan Hjalmarsson, Véronique Chotteau. Gaussian process modeling of macroscopic kinetics: a better tailored kernel for Monod type kinetics. Presented at the 10th International Conference on Mathematical Modelling (MATHMOD), Vienna, Austria. Published in IFAC-PapersOnLine, vol. 55, issue 20, pp. 397-402. 2022