Autonomous systems, Modelling and Uncertainty

30th November - 14:00

Recent advances on testing autonomous vehicle systems: an industry perspective

Dr Jonathan Sadeghi - Bosch (Five AI) 

How should we test autonomous vehicles to ensure they are safe to deploy? The answer to this simple question is surprisingly complex. For practical deployment scenarios, real-world testing alone will not be sufficient to prove that these systems are safe. Therefore we advocate a multifaceted computational simulation approach, with both open- and closed-loop testing, using both recorded data and scenario-based testing. We will discuss different simulation approaches whilst describing how to exploit the structure of the autonomous vehicle stack for a more effective testing strategy. Then we will describe the challenges of performing simulation in a way that is useful, realistic and efficient, with reference to relevant current research which could offer a compelling solution.

Dr. Jonathan Sadeghi is a Research Engineer at Bosch (formerly Five AI), based in Bristol. His current work focuses on developing AI systems to help build driverless cars. He obtained his PhD in Engineering from the University of Liverpool (2020), focusing on uncertainty quantification and machine learning. His research interests span the intersection of computer vision and probabilistic machine learning with applications to autonomous vehicles. https://jcsadeghi.github.io/

Data, Uncertainty and Machine Learning: Human Reliability Analysis Applications

Karl Johnson - University of Strathclyde

The field of Human Reliability Analysis (HRA) encounters challenges related to data, particularly availability and interpretation. This work presents methodologies for data sharing and presentation, addresses uncertainty arising from data scarcity, and explores the potential of machine learning and AI tools to mitigate these challenges.


Modelling dependencies in complex systems: Dynamic and Dependent Tree Theory (D2T2)

Dr Silvia Tolo - University of Nottingham

Silvia Tolo gained an M.Sc. in Energy and Nuclear Engineering from the University of Bologna, and subsequently collaborated with the Institute for Risk and Uncertainty at the University of Liverpool, where she was awarded a PhD. She is currently undertaking research within The Resilience Engineering Research Group at the University of Nottingham on the development of theoretical and computational tools for the efficient modelling of complex systems.

Learning to rank with heterogeneous data

Dr Tathagata Basu - University of Strathclyde

Learning to rank is an important problem in many sectors ranging from social sciences to artificial intelligence. However, fitting a ranking model to observed data remains a rather difficult task, especially in the presence of partial data. In such cases, it is preferable to perform cautious inference to ensure robustness. We address this issue by exploring an imprecise counterpart of the Plackett-Luce model, proposing a robust Bayesian analysis of the Plackett-Luce model. We examine the sensitivity of the posterior estimates with respect to a set of prior hyperparameters and obtain cautious ranking estimates. We discuss an EM algorithm-based computation scheme along with some theoretical properties allowing for fast computation. We illustrate the interest of our approach with experiments on both synthetic and real datasets.
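For readers unfamiliar with the model underlying the talk, the standard (precise) Plackett-Luce likelihood can be sketched in a few lines: each position in a ranking is filled by choosing among the remaining items with probability proportional to their "worth" parameters. The worth values below are purely illustrative; the imprecise/robust Bayesian treatment described in the abstract builds on top of this basic likelihood.

```python
import numpy as np

def plackett_luce_prob(ranking, worths):
    """Probability of an ordering under the Plackett-Luce model.

    `ranking` lists item indices from best to worst. The item in each
    position is chosen with probability proportional to its worth
    among the items not yet placed.
    """
    remaining = list(ranking)
    prob = 1.0
    for item in ranking:
        denom = sum(worths[j] for j in remaining)  # normalise over items left
        prob *= worths[item] / denom
        remaining.remove(item)
    return prob

worths = np.array([5.0, 2.0, 1.0])               # hypothetical item worths
p_best = plackett_luce_prob([0, 1, 2], worths)   # ordering agreeing with worths
p_worst = plackett_luce_prob([2, 1, 0], worths)  # fully reversed ordering
print(p_best, p_worst)  # the agreeing ordering is far more probable
```

A robust Bayesian analysis, as in the talk, would replace the single worth vector with a set of priors over the worths and report bounds on the resulting posterior ranking probabilities.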


Tathagata Basu is a postdoctoral researcher in the Department of Civil and Environmental Engineering at the University of Strathclyde, working on the uncertainty quantification of drone logistics networks under the supervision of Prof Edoardo Patelli. Earlier, he was a postdoctoral researcher at UMR CNRS, Compiègne, after obtaining his doctoral degree from the University of Durham. His primary research work is related to statistical modelling, robust Bayesian analysis, regression modelling, preference learning, and random utility models. He is also interested in asymptotic analysis and the convergence of imprecision in set-valued inference.

The 'safety' of safety factors

Dr Peter Hristov - University of Liverpool

Modern engineering practice relies increasingly on computational modelling. In recent years, the digital twin paradigm has stretched this trend to its extreme. On the face of it, the goal behind using extensive computer model simulations in engineering is to increase understanding of the system under consideration and therefore optimise its performance without compromising its safety. Yet, state-of-the-art engineering models, which can provide quantitative answers about things until recently unimaginable and do so with a resolution of several decimal places, still utilise a centuries-old safety mechanism that inflates results by 1.5 to 2 times or more when they are deemed safety critical. Ironically, safety factors often have little to do with the system being designed and are instead based on historical data, expert judgement and observation, which often voids any guarantees they give about the system's safety. The situation becomes even more problematic for digital twins, due to their size, interconnection complexity and the disparate nature of the information they are based on. The purported goal of safety factors is to mitigate various uncertainties in the design process, while in fact they do little more than obscure the uncertainty from the engineer. The rampant use of post factum safety factors comes in spite of the existence of advanced uncertainty quantification methods. This talk will present a different perspective on what is required to make a system safe by design in the presence of uncertainty and will discuss how some of these methods can be used to tackle uncertainty by reasoning about what we know instead of shrugging at what we do not know. Finally, it will explore some of the main roadblocks on the way towards their more universal adoption and the relegation of safety factors.
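The contrast the talk draws can be made concrete with a toy reliability calculation. In the hypothetical example below (all distributions and numbers are invented for illustration), a deterministic design checks a safety factor on nominal values, while a probabilistic design estimates the failure probability directly from the assumed uncertainty in load and resistance:

```python
import numpy as np

# Hypothetical component: applied load L and resistance R are uncertain.
# A deterministic design applies a safety factor to nominal values;
# a probabilistic design estimates the failure probability directly.
rng = np.random.default_rng(42)
n = 200_000
load = rng.normal(100.0, 15.0, n)        # assumed load distribution
resistance = rng.normal(180.0, 20.0, n)  # assumed resistance distribution

safety_factor = 180.0 / 100.0            # nominal resistance / nominal load
p_fail = np.mean(resistance < load)      # Monte Carlo failure probability
print(safety_factor, p_fail)
```

The safety factor of 1.8 says nothing about how likely failure actually is; the Monte Carlo estimate does, but only conditional on the assumed distributions, which is exactly where uncertainty quantification methods enter.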

Peter Hristov holds a bachelor’s degree in aerospace engineering (2014) and a PhD in computational engineering (2018) from the University of Liverpool. Since 2018 Peter has been working as a post-doctoral research assistant in the areas of computational engineering and uncertainty quantification at the Institute for Risk and Uncertainty in the UK. In 2022 he won a fellowship with the GATE Institute in Bulgaria for the “Advancing uncertainty-aware digital twins” (AUDiT) project. Peter’s research focuses on model reliability and the development of industrially applicable computational methods for uncertainty quantification. His research interests span the fields of computational and numerical modelling, uncertainty quantification, computer model-based certification and aerospace design.

The novel tendency of uncertainty quantification metrics in stochastic model updating and sensitivity analysis

Dr Sifeng Bi - University of Strathclyde

Numerical models, widely used for the design, optimisation, and assessment of products, are approximate representations of reality in that their predictions exhibit a level of disagreement with experimental measurements. In the context of uncertainty analysis, the main sources of this disagreement can be classified as: 1) parameter uncertainties due to imprecisely known model parameters; 2) model form bias due to unavoidable simplifications and idealisations during modelling; and 3) test variability due to hard-to-control random effects in the experiments. The first two sources are related to the model and can be mitigated through the well-known process of model updating, which infers the likely parameter values and model bias that improve the agreement between predictions and measurements. Uncertainty quantification (UQ) metrics are therefore essential for providing a uniform, explicit, and quantitative description of the uncertainty information in stochastic model updating and sensitivity analysis. The Bhattacharyya distance is a statistical distance between two random samples that accounts for their probabilistic distributions. In this presentation, this statistical distance is introduced as a comprehensive UQ metric and compared with the classical Euclidean distance. A complete framework of stochastic model updating and sensitivity analysis is proposed.
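As a minimal sketch of the metric the talk centres on, the Bhattacharyya distance between two samples can be estimated by binning both on a shared grid and computing -ln Σ√(pᵢqᵢ); unlike the Euclidean distance between sample means, it is sensitive to the full distributions (spread and shape), not just their locations. The sample sizes and distributions below are arbitrary illustrations:

```python
import numpy as np

def bhattacharyya_distance(x, y, bins=30):
    """Sample-based Bhattacharyya distance between two 1-D datasets.

    Both samples are binned on a shared grid; the distance
    -ln(sum_i sqrt(p_i * q_i)) is 0 when the empirical distributions
    coincide and grows as they separate.
    """
    edges = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), bins + 1)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()                    # normalise counts to probabilities
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient in (0, 1]
    return -np.log(max(bc, 1e-12))     # guard against log(0) for disjoint samples

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10_000)
b = rng.normal(0.0, 1.0, 10_000)   # same distribution as a
c = rng.normal(3.0, 2.0, 10_000)   # shifted and wider

d_same = bhattacharyya_distance(a, b)  # near zero
d_diff = bhattacharyya_distance(a, c)  # clearly larger
print(d_same, d_diff)
```

In a model-updating loop of the kind the abstract describes, such a distance between simulated and measured output samples would serve as the discrepancy measure being minimised.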

Dr. Sifeng Bi is a Strathclyde Chancellor’s Fellow and holds a Lecturer position at the Aerospace Centre of Excellence in the Department of Mechanical & Aerospace Engineering at the University of Strathclyde. His research topics are uncertainty quantification, stochastic model updating, and numerical verification and validation, especially in application to complex aerospace engineering dynamics. In particular, he focuses on probabilistic techniques such as advanced Monte Carlo simulation, approximate Bayesian computation, and global sensitivity analysis, with the consideration of uncertainties. He sits on the AIAA Non-Deterministic Approaches Technical Committee. He is an Associate Editor of the ASCE-ASME Journal of Risk and Uncertainty, and a Guest Editor of the international journal Mechanical Systems and Signal Processing. He is a Senior Member of AIAA.