Location: CRAN Polytech, project DATA
Funding: Université de Lorraine
Research subject: finite-time data informativity
Motivation: Bilal’s research focuses on the concept of finite-time data informativity within the prediction error framework. While most of the existing literature emphasizes the asymptotic case, only limited attention has been given to finite-time scenarios. A notable exception is Willems’ fundamental lemma, which addresses data informativity in finite time but assumes a noise-free environment, an unrealistic condition in practical settings.
Approach: The key idea for handling the case of stochastic noise is to define a new finite-time data informativity condition, called data informativity in the mean-square sense. It requires that, for any two distinct models, the expected squared difference between the one-step-ahead predictors computed from those models and the collected data is nonzero. In other words, for a given external excitation, if the user could collect every possible trajectory over all possible noise realizations, the expected squared difference between any two predictors would be zero only if both predictors are computed using the same model.
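A possible formalization of this condition, written here as a sketch under assumed notation (a data record of length N, two parameter vectors \(\theta_1\) and \(\theta_2\) within the same model structure, and one-step-ahead predictors \(\hat{y}(t \mid t-1; \theta)\) as in the prediction error framework), is the following; it is not necessarily the exact definition used in the work:
\[
  \frac{1}{N}\sum_{t=1}^{N} \mathbb{E}\!\left[\bigl(\hat{y}(t \mid t-1;\theta_1) - \hat{y}(t \mid t-1;\theta_2)\bigr)^{2}\right] = 0
  \;\Longrightarrow\;
  \theta_1 \text{ and } \theta_2 \text{ define the same predictor},
\]
where the expectation is taken over the noise realizations for the given external excitation.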
Summary of the results: Bilal’s first key result demonstrates that, when the system is initially at rest before the identification experiment, simple input signals such as impulses or steps are sufficient to ensure finite-time data informativity for the open-loop identification of ARX, Box-Jenkins, and Output-Error model structures of arbitrary order. This finding contrasts with classical asymptotic results, which require the input to be persistently exciting of a sufficiently high order that depends on the complexity of these model structures. However, this finite-time informativity is guaranteed only when the number of collected data points exceeds a certain threshold. Remarkably, for ARX and Box-Jenkins models, this threshold is actually smaller than the total number of parameters to be estimated, an unexpected and insightful result.
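For reference, and assuming the standard prediction error conventions with a unit input delay (notation not taken from the source), the one-step-ahead predictor of an ARX structure of orders \(n_a\) and \(n_b\) is linear in its \(n_a + n_b\) parameters, which gives a sense of the parameter count against which the threshold above is compared:
\[
  \hat{y}(t \mid \theta) = \varphi(t)^{\top}\theta, \qquad
  \varphi(t) = \bigl[-y(t-1), \dots, -y(t-n_a),\; u(t-1), \dots, u(t-n_b)\bigr]^{\top}, \qquad
  \theta = \bigl[a_1, \dots, a_{n_a}, b_1, \dots, b_{n_b}\bigr]^{\top}.
\]
Box-Jenkins and Output-Error structures involve additional parameters for the noise and plant denominators, so their parameter counts are correspondingly larger.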