Computing, Ethics and Human Factors

1st December - 10:00

Collaborative Intelligence and Human-AI Teaming in Safety-Critical Applications: Key Challenges and Opportunities

Dr Maria Chiara Leva - TU Dublin

This talk provides a brief overview of the key insights gathered from current work in an EU-funded research project on collaborative systems between humans and AI, with a particular emphasis on safety-critical applications. The purpose is to gather feedback on the design of experiments and research hypotheses for each living lab, assessing their relevance to industry across various domains in the context of Human Factors and Neuroergonomics for Collaborative Intelligence frameworks.

Maria Chiara Leva leads the Human Factors in Safety and Sustainability (HFISS) research group at Technological University Dublin, where she is a Senior Lecturer in the School of Environmental Health. She is a visiting research fellow at the Centre for Innovative Human Systems in Trinity College Dublin and the co-founder of Tosca Solutions (www.toscasolutions.com), a campus spin-out company based in the NDRC and Trinity College Dublin that supports the implementation of risk management tools customised to the needs of highly regulated environments. Her areas of expertise are human factors and safety management systems. Chiara holds a PhD in Human Factors from the Department of Industrial Engineering at Politecnico di Milano. She is a former chair of the Irish Ergonomics Society and the current co-chair of the technical committee on Human Factors in the European Safety and Reliability Association.

Requirements for quantitative risk assessment of hydrogen facilities: An Irish use case

Dr Hector Diego Estrada Lugo - TU Dublin

This talk will present a summary of the requirements for a Quantitative Risk Assessment (QRA) of a hydrogen facility, set in the context of the recently announced National Hydrogen Strategy of the Republic of Ireland. The proposed risk assessment framework is based on a probabilistic method, Bayesian networks, and is expected to provide permitting authorities with a decision support system and guidance for the QRA of hydrogen facilities.
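To make the Bayesian-network idea concrete, here is a minimal sketch of forward inference in a three-node chain (Leak → Ignition → Explosion). The structure and the probabilities are purely illustrative assumptions for this note, not figures from the talk or from the Irish use case:

```python
# Toy Bayesian network for a hydrogen-release scenario:
#   Leak -> Ignition -> Explosion
# All probabilities below are illustrative placeholders.
p_leak = 0.01                       # P(leak)
p_ign = {True: 0.1, False: 0.001}   # P(ignition | leak)
p_exp = {True: 0.3, False: 0.0}     # P(explosion | ignition)

def p_explosion():
    """Marginal P(explosion), summing over all leak/ignition states."""
    total = 0.0
    for leak in (True, False):
        pl = p_leak if leak else 1 - p_leak
        for ign in (True, False):
            pi = p_ign[leak] if ign else 1 - p_ign[leak]
            total += pl * pi * p_exp[ign]
    return total

print(f"P(explosion) = {p_explosion():.6f}")  # -> 0.000597
```

A real QRA model would have many more nodes (release size, weather, mitigation barriers) and would typically use a dedicated library, but the mechanics of marginalisation are the same.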

Speculations On Risk, Uncertainty and Humane Algorithms

Dr Nicholas Gray - University of Liverpool

Digital twins promise revolutions across many fields, from engineering to healthcare. This is not unique to digital twins: it is true of algorithms more generally as they become a larger, almost anthropomorphic, part of society. This transition has already had many downsides, ranging from irritation and confusion to injustice and catastrophe. This talk considers how appreciating and utilising uncertainty can play a key role in solving some of the many ethical issues posed by these technologies and in making them more humane. Understanding the uncertainties allows algorithms to make better decisions. Uncertainty in an algorithm's output may open up ways in which its decisions can be interrogated. Allowing algorithms to deal with variability and ambiguity in their inputs means they need not force people into uncomfortable classifications. It is essential to compute with what we know rather than make assumptions that may be unjustified.

Nick Gray is a postdoctoral research assistant at the University of Liverpool’s Institute for Population Health, where he is researching the communication of risk and uncertainty in medical AI. His PhD thesis is entitled The Importance of Risk and Uncertainty in Humane Algorithms, and his research interests include the application of imprecise probabilities in machine learning, uncertainty in medical diagnosis, and the ethics of machine learning.

Rigorous time evolution of p-boxes in non-linear ODEs

Dr Ander Gray - UKAEA

We combine reachability analysis and probability bounds analysis, allowing imprecisely known random variables (multivariate intervals or p-boxes) to be specified as the initial states of a dynamical system. In combination, the methods allow the temporal evolution of p-boxes to be rigorously computed, and they give interval probabilities for formal verification problems, also called failure probability calculations in reliability analysis. The methodology places no constraints on the input probability distribution or p-box and can handle general dependencies expressed as copulas.
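The simplest instance of this idea is an interval initial state (an interval is the degenerate p-box) propagated through a flow whose solution is known in closed form. The example below is my own toy sketch, not the authors' method: for the linear ODE dx/dt = -k·x the flow map x0 → x0·e^(−kt) is monotone in x0, so the images of the interval endpoints bound the reachable set exactly. General non-linear systems need genuine reachability tools (e.g. Taylor-model integrators) to obtain such guaranteed enclosures:

```python
import math

# Interval reachability for dx/dt = -k*x (illustrative values).
k = 0.5
x0 = (1.0, 2.0)  # interval initial condition [lower, upper]

def reach(t):
    """Guaranteed enclosure of x(t); exact here because the flow is monotone."""
    decay = math.exp(-k * t)
    return (x0[0] * decay, x0[1] * decay)

def verified_safe(t, threshold):
    """Formal verification query: is x(t) certainly below the threshold?"""
    return reach(t)[1] < threshold

lo, hi = reach(2.0)  # enclosure of the state at t = 2
```

With a full p-box as the initial state, the same endpoint propagation is applied to each quantile bound, yielding interval-valued failure probabilities rather than a single yes/no verification answer.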

Novel Confidence Structures for Discrete Data

Dr Alexander Wimbush - University of Birmingham

Likelihood functions for discrete distributions can often be used to create confidence boxes, which in turn define confidence intervals for some parameter of the distribution. Imprecise likelihood functions were recently used to create a novel two-sided confidence structure for probabilities in binomial sampling problems. Confidence structures assign a possibility value to each underlying hypothesis; so long as these values satisfy coverage criteria, the structures can be used to assign levels of belief to sets of hypotheses and avoid the false confidence that can arise when making inferences from confidence boxes. This presentation extends the approach used to create the two-sided structure for probabilities, explains the general procedure, and describes how it can be applied to general discrete distributions. Confidence intervals drawn from these structures maintain strict coverage properties and are generally tighter than their confidence box equivalents. These structures may also be propagated through functions that handle purely epistemic uncertainty, using either a naïve approach when dependence is unknown or a sampling method under independence.
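For orientation, the classical exact interval that confidence boxes and structures refine in the binomial case is the Clopper-Pearson interval, whose endpoints solve tail-probability equations on the binomial CDF. The stdlib-only sketch below is my own illustration of that baseline, not the speaker's construction:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (conservative) two-sided interval for a binomial proportion,
    found by bisection on the defining tail equations."""
    lo, hi = 0.0, 1.0
    if k > 0:
        a, b = 0.0, 1.0
        for _ in range(100):           # lower endpoint: P(X >= k | p) = alpha/2
            m = (a + b) / 2
            if 1 - binom_cdf(k - 1, n, m) < alpha / 2:
                a = m
            else:
                b = m
        lo = (a + b) / 2
    if k < n:
        a, b = 0.0, 1.0
        for _ in range(100):           # upper endpoint: P(X <= k | p) = alpha/2
            m = (a + b) / 2
            if binom_cdf(k, n, m) > alpha / 2:
                a = m
            else:
                b = m
        hi = (a + b) / 2
    return lo, hi

lo, hi = clopper_pearson(7, 20)  # 95% interval for 7 successes in 20 trials
```

Confidence structures go further than such intervals by assigning possibility values across all hypotheses, which is what permits belief assignments to arbitrary sets while retaining coverage.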

Bio: I recently began working as a statistical methodology research fellow at the University of Birmingham Cancer Research Clinical Trials Unit. My research is part of the Captivate node of the UK Rare Disease Research Platform, focusing on developing novel methods for running, maintaining, and analysing clinical trials for rare diseases, where sample sizes are limited. This follows the completion of my PhD, titled 'Propagation of Epistemic Uncertainty through Medical Diagnostic Algorithms', in which I developed simple methods for generating medical diagnostic algorithms and for characterising and propagating epistemic uncertainty, and demonstrated that these novel methods maintain the desired coverage properties.


Spectral uncertainty quantification of stochastic processes in the presence of missing data with a Bayesian framework

Yu Chen - University of Strathclyde

Missing data is a ubiquitous problem in engineering and physical systems, hindering robust inference, understanding, and model development for the underlying physical process. It can also be a critical issue for complex safety-critical systems that operate on real-time monitoring data streams and autonomous decisions and that may face unexpected sensor failure. Fully data-driven methods may hit a performance ceiling when modelling with missing data, owing to data insufficiency. By contrast, knowledge-informed Deep Learning frameworks that integrate prior knowledge, notably physical domain knowledge, are proposed. These novel frameworks handle arbitrary missing-data patterns in a non-stationary setting, even under significant incompleteness.
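The contrast between purely data-driven and knowledge-informed gap filling can be shown on a toy signal. This sketch is my own illustration, not the speaker's framework: a series with an arbitrary missing pattern is filled once with the observed mean (data-driven baseline) and once by fitting an assumed physical form, here a linear drift x[t] = a·t, to the observed points only:

```python
# A signal with an arbitrary missing pattern; None marks missing samples.
data = [0.0, 1.0, None, 3.0, None, None, 6.0]

def mean_fill(xs):
    """Purely data-driven baseline: replace gaps with the observed mean."""
    observed = [x for x in xs if x is not None]
    m = sum(observed) / len(observed)
    return [m if x is None else x for x in xs]

def physics_fill(xs):
    """Knowledge-informed fill: least-squares fit of the assumed physical
    model x[t] = a*t on observed points, then evaluate it in the gaps."""
    pts = [(t, x) for t, x in enumerate(xs) if x is not None]
    a = sum(t * x for t, x in pts) / sum(t * t for t, _ in pts)
    return [a * t if x is None else x for t, x in enumerate(xs)]
```

The deep-learning frameworks in the talk play the role of `physics_fill` at scale: the physical prior constrains the reconstruction where the data alone (as in `mean_fill`) cannot.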

Yu Chen is currently a Research Assistant at the University of Strathclyde. During his PhD at the University of Liverpool, funded by the EU H2020 MSCA project URBASIS, his research revolved around developing robust, knowledge-informed Deep Learning frameworks dedicated to the characterisation, propagation, and quantification of uncertainty embedded in bad data (e.g. scarce, incomplete, or imprecise data) through an efficient computational pipeline.