See list of current PhD opportunities below
Starting: October 2023; Funds: fully funded scholarship; Eligibility: UK residents; Application deadline: open until a suitable candidate is identified.
Context:
Digital technologies used to create a virtual life replica (i.e. a digital twin) of components and systems are becoming increasingly popular tools to predict, monitor and control the behaviour of complex systems. A digital twin is often used to reduce the requirements and experimental analysis needed during the design phase, to test the behaviour of the system under critical conditions, and to create scenarios that are difficult or impractical to recreate in practice. The digital twin approach is also used to identify optimal maintenance policies and adaptation/restoration actions that improve the resilience of such systems.
Often, a digital twin is obtained simply by combining static, deterministic models. A realistic digital twin, by contrast, is built from dynamic models that are constantly updated with different streams of data and information and that can predict (simulate) the performance of the system with the required level of confidence. For instance, at the design stage, assessing the trade-off between quality and precision can save money and time and contribute to reducing our environmental impact.
Motivation:
To make this approach applicable in practice, the digital twin needs to address some fundamental challenges and overcome limitations in how users commonly perceive it. For instance, the computational cost of high-fidelity simulations is often incompatible with the need to make timely decisions.
Current limitations on the applicability of machine learning approaches include the requirement for a very large (and controlled) data set to train such tools, the lack of confidence measures for the predictions, the complexity of probabilistic calculations and, finally, the lack of transparency of the models (i.e. the models are black boxes). Data are required to improve the quality of digital twins, but the available data are often unreliable: they can be imprecise, incomplete, truncated, missing, censored or corrupted, to mention just a few problems. It is therefore necessary to complement the digital model with physics-based rules, both to reduce the amount of data needed for training and to enable the model to generalise and extrapolate.
Models are only an approximation of reality, and their accuracy needs to be estimated and accounted for. Propagating uncertainty through models is challenging for anyone who is not an expert in stochastic analysis and probabilistic modelling. Handling large amounts of data is cumbersome, slow and expensive.
Uncertainty analysis is too important to be left to inexperienced practitioners; our calculation tools must perform it automatically. Uncertainty analysis can also be used to give the benefit of the doubt in uncertain cases where safety or fairness is a salient issue. Probabilistic methods are among the most powerful methods available in computational science, but they are expensive and require genuine expertise to perform correctly. Just as automatic differentiation has enabled machine learning, automatic uncertainty would enable cheap and speedy probabilistic calculations.
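As a minimal illustration of what automating uncertainty analysis could look like in practice, uncertainty can be propagated through an arbitrary model by Monte Carlo sampling; the model, input names and numbers below are purely hypothetical:

```python
import math
import random

def propagate(model, inputs, n=100_000, seed=0):
    """Monte Carlo uncertainty propagation: sample each uncertain
    (Gaussian) input, push the samples through the model, and
    summarise the output distribution by its mean and standard
    deviation."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        sample = {name: rng.gauss(mu, sd) for name, (mu, sd) in inputs.items()}
        outputs.append(model(sample))
    mean = sum(outputs) / n
    var = sum((y - mean) ** 2 for y in outputs) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical response model with two uncertain inputs, a load "L"
# and a stiffness "k", each given as (mean, standard deviation).
mean, sd = propagate(lambda s: s["L"] / s["k"],
                     {"L": (10.0, 1.0), "k": (2.0, 0.1)})
```

The point of "automatic" uncertainty is that the engineer supplies only the model and the input uncertainties; the propagation itself (here brute-force sampling, in the proposed research something far cheaper and with guarantees) is handled by the tool.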
Artificial intelligence tools are rapidly evolving and have already shown their capability to model complex systems accurately and to support decision making.
Aim and Objectives:
The main aim of the proposed research is to develop robust probabilistic computing tools to support the development of sustainable infrastructure and systems.
To achieve the proposed aim, the research will develop methods for automatic uncertainty characterisation and propagation based on probabilistic computing and virtual experts. Probabilistic computing is a promising approach from the AI community for addressing the uncertainties inherent in so-called natural data (or uncertain numbers). The most challenging part is the arithmetic of uncertain numbers, because the operations themselves introduce dependence: after several operations, variables become correlated even if they started out independent (every time a binary operation occurs, the computer needs to know how the operand on the left is correlated with the one on the right of the operator). Solving this problem requires variable dependencies to be calculated, stored and tracked. It is not yet known how to achieve this mathematically, but algorithmic strategies are currently being developed that produce bounds with certain numerical guarantees.
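The dependence problem described above can be seen in a few lines of code. The sketch below is a deliberately naive uncertain-number (interval) type, not the method proposed here: because each operation treats its operands as independent, the self-subtraction x - x fails to collapse to zero, which is precisely why dependencies must be tracked:

```python
class Interval:
    """Minimal uncertain number: a closed interval [lo, hi].
    Naive interval arithmetic assumes every pair of operands is
    independent, illustrating the dependence problem."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Independence assumption: worst case over both endpoints.
        # Because the arithmetic cannot know that "other" IS "self",
        # x - x does not reduce to [0, 0].
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)   # the interval widens instead of collapsing to zero
```

A dependence-tracking arithmetic would recognise that both operands are the same variable and return [0, 0]; without that tracking, every operation inflates the bounds, and after many operations the result can become uselessly wide.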
Probabilistic programming and automatic uncertainty are high-level software tools that allow developers to define models and "solve" them automatically in a probabilistic sense. Rather than adjusting decimal weights, as is done in (deep) neural networks, probabilistic computing modifies the code of its simulation to try to match its predictions with those of a human expert. The resulting models are therefore more transparent, opening a dialogue between human and machine and allowing developers to quickly update the code with new rules of thumb.
The main ambition of this proposal is to compute with uncertain numbers as easily as one can compute with floating-point numbers, making uncertain numbers the numbers of the future. From an engineering perspective, this is a game-changing tool: it allows an engineer to easily add assumptions or physics-based rules to a model and then test them against the data. In turn, it also provides a diagnostic tool based on violations of predefined rules.
The project supports the cost-effective design and management of infrastructure based on condition monitoring data, operational experience and expert opinion, integrated into an automated and verified computational decision tool. The research will also support real-time recovery strategies to cope with sudden disturbances or perturbations of the infrastructure due to extreme events. The intention is that, for each new data point and/or safety document issued (e.g. incident and condition reports, safety observations, assurance activities), the most relevant features can be identified in seconds, instead of waiting days or weeks for an expert opinion. This way, model parameters can be updated more quickly and dynamically, achieving early prediction of safety issues.
Case study: Evaluation of uncertainty is critical in manufacturing processes and can be used to reduce the degree (and expense) of post-processing inspection required to validate component properties. The Radial Forge at NMIS is used to create complex components, primarily for the aerospace industry. In this process, metallic preforms are heated and deformed into the desired shape, but uncertainty remains over the hardness of the final component. A wide range of open-access datasets have been generated from the Radial Forge, consisting of process control data, hardness testing data and models of the preform and desired component dimensions. The use case would apply the developed data uncertainty evaluation framework to the Radial Forge process and associated datasets to determine its impact on reducing post-processing inspection.
Funding notes: The scholarship covers all tuition fees and a stipend at the UKRI Doctoral Stipend rate (approx. £16,000 annually, tax free) for 3.5 years.
Competitive project: Safety and resilience analysis for the net-zero energy transition
Starting: October 2022; Funds: competitive scholarship; Eligibility: UK residents; Entry requirements: First-class degree or equivalent; Application deadline: 30 May.
Context: The energy sector is crucial to almost all the critical facilities and infrastructure systems on which the modern world relies. Energy systems are critical infrastructures that must fulfil present demand. At the same time, energy systems must evolve to become less CO2-intensive and inclusive of new technologies. Reducing our dependence on energy sources responsible for climate change is one of the most challenging tasks we face today.
These goals are broadly agreed upon but what is less clear is the pathway that ought to be taken to achieve these ideal future systems. The importance of energy systems to achieving sustainability goals motivates the desire to have risk-aware transition planning and resilient transition pathways.
Motivation: Evaluating the safety of low- and zero-carbon solutions, and the effect of uncertainty on them, is essential for managing the risks inherent in sustainable transitions and for ensuring resilient solutions. For instance, hydrogen is often offered as a panacea for everything, from domestic heating to long-term carbon storage. In addition, climate change and extreme weather events endanger energy-sector reliability, thereby limiting power system operability and serviceability. Therefore, proactive mitigation plans and resilient solutions must be designed and implemented to deliver significant benefits to current and future users and industries.
While the threat of climate change is well recognised, assessing the safety and mitigating the risks of the proposed solutions involves specific and challenging requirements. It must be clearly demonstrated that the proposed solutions do not pose unnecessary or unmanageable risks to humans or the environment.
For instance, current knowledge about hydrogen safety is less thorough than knowledge of the safety of conventional fuels. Although hydrogen is well known as a chemical, not enough evidence is available on the safety of its use as an energy carrier on a large-scale commercial basis, as shown by the Proceedings of the Mid-Term Assessment Workshop of the European Hydrogen Integrated Project, Volume 5 – Safety, 2 October, European Commission, Brussels. HSE and Allianz have also highlighted a series of potential risks associated with hydrogen (see attached files).
In addition, the most common risk analysis techniques, such as fault and event trees, are well rooted in engineering practice thanks to their computational efficiency and numerical robustness. However, their ability to model the complexity of modern engineering systems is limited: they cannot integrate into the probabilistic analysis crucial aspects of system behaviour, such as component dependencies and dynamic features, on which many safety aspects of modern systems rely. This has resulted in the adoption of a plethora of alternative techniques (e.g. Petri nets, Markov models, Bayesian networks) which offer high modelling accuracy but which, due to their complexity, can easily become computationally infeasible when applied to industrial-scale systems. The identification of novel strategies for the accurate and efficient modelling of complex engineering systems therefore plays a crucial role in enhancing reliability and safety analysis practice.
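As a point of reference for the classical techniques mentioned above: once component failures are assumed independent, a fault tree reduces to simple gate algebra, which is exactly why it is so efficient. The system and failure probabilities below are purely hypothetical, and relaxing this independence assumption is what the more expressive techniques target:

```python
def and_gate(*probs):
    """Top event requires ALL inputs to fail (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """Top event occurs if ANY input fails (independence assumed):
    P(A or B or ...) = 1 - product(1 - p_i)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

# Hypothetical two-train cooling system: cooling is lost if both
# redundant pumps fail, or if the shared power supply fails.
p_pump_a, p_pump_b, p_power = 0.01, 0.01, 0.001
p_top = or_gate(and_gate(p_pump_a, p_pump_b), p_power)
```

If the two pumps share a maintenance crew or a common environment, their failures are no longer independent and the product in `and_gate` underestimates the risk; capturing such dependencies is where Petri nets, Markov models and Bayesian networks come in, at the computational cost described above.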
Aim and Objectives:
The aim of the proposed research project is to assess the safety of selected solutions offered for the net-zero energy transition.
The proposed research intends to understand and manage the risks posed by the proposed solutions, focusing on the use of hydrogen co-generated by electrical and thermal plants (including nuclear power reactors and/or wind and solar systems) and on its end use.
The research will quantify the resilience of such solutions to extreme events (weather/pandemic) that can reduce the supply and potentially the quality of fuel, reduce the input of energy, damage generation and grid infrastructure, reduce output, and affect the security of supply.
To quantify the resilience of such complex and interconnected systems, novel computational strategies will be developed to allow the realistic modelling of the systems and the risk/resilience quantification in affordable time. Therefore, computational models will be enhanced and supported by AI tools and novel uncertainty quantification (UQ) methods.
The use of verified software backed by trustworthy AI tools makes it possible to understand the behaviour and response of the system under different stress scenarios. Such computational tools will be integrated into the easy-to-use tools currently under development in the Centre for Intelligence Infrastructure.
Expected Impact of Research:
It is of paramount importance to provide a better understanding of the future state of the decommissioning site for decision making and ageing asset management. The methodology and tools developed within this research target a step change in managing ageing assets for life extension/decommissioning. Asset management of the decommissioning facility based on the proposed approach provides a cost-effective plan while keeping the safety of staff, the public and the environment well within the set limits. PhD students will be trained in the development of computational models, hazard modelling and uncertainty quantification methodologies, which constitute essential elements of decision-making procedures.
The proposed PhD project will contribute to the development of safe and sustainable energy solutions for tomorrow.
Self-funded: Probabilistic AI for the risk prediction in complex systems
Starting: October 2022; Funds: self-funded students only; Eligibility: open to all; Application deadline: until a suitable candidate is identified.
The proposed PhD project will develop a robust and reliable Artificial Intelligence (AI) methodology for risk assessment and prediction for nuclear applications. In particular, the project will explore the use of probabilistic and interval-based methods for dealing with bad data.
The developed approach will be applied to solve the following case studies:
Use of AI to extract knowledge from historical documents and reports and to provide real-time safety assessment for the storage of nuclear waste based on condition health monitoring.
The proposed technology accounts for the unavoidable variability in any available dataset and for the very limited amount of data (from accidents and rare events) available for training AI tools. The proposed AI tool therefore integrates physical constraints with probabilistic and interval-based approaches to deal with bad data and to predict risk with the required accuracy. The outcome of this project will demonstrate the applicability of AI technology in the nuclear industry by providing tools that can be trusted, supporting both the construction of new reactors and decommissioning activities.
AI technologies are already used actively in other industries (e.g. aerospace, biology) but still face some resistance in the nuclear industry due to the high safety and reliability standards required. Another limiting factor is that AI needs to be trained on large datasets; however, data on anomalous and accident states are rare and subject to large uncertainty. Finally, current AI approaches do not provide a measure of the reliability of their predictions, a further obstacle to their application in safety-critical situations.
The project will create a virtual expert that exploits machine learning and AI to support real-time risk prediction and assessment, coping with sudden disturbances or perturbations of ageing assets due to unexpected events. The intention is that, for each new data point and/or safety document issued (e.g. incident and condition reports, safety observations, assurance activities), the most relevant features can be identified in seconds, instead of waiting days or weeks for an expert opinion. This way, model parameters can be updated more quickly and dynamically, achieving early prediction of safety issues.
The PhD project builds on the results of the recently funded Game Changer feasibility project Probabilistic AI for Prediction of Material Properties and on recent advances in AI technology from the PI's research group, i.e. a robust methodology for dealing with bad data (https://doi.org/10.1016/j.neunet.2019.07.005) and automatic report assessment for human reliability analysis (https://doi.org/10.1016/j.ssci.2021.105528).
Do you have an idea/project for your PhD? Contact me at: edoardo.patelli@strath.ac.uk