Postdocs:
Federico Zocco (2021-2023), project: 'Development of multiple collaborative vehicles for marine monitoring'.
Yuzhu Sun (Sept/2024-), project: 'Offshore Wind Turbine Blade Monitoring Using Computer Vision and AI'.
Zhengui Xue (Oct/2025-), project: Daphne Jackson Fellowship.
PhD students (main supervisor):
Yibo Zhou (Oct/2025), project: 'Model-Based Reinforcement Learning for Unmanned Autonomous Vehicles Planning and Control with Multimodal Perception'.
Hayder Al-Husseinawi (Jan/2025), project: 'Multi-agent Based Optimisation of Maintenance Routing in Offshore Wind Farms'.
Suhad Aleassa (Jan/2025), project: 'Digital Twin-Driven Reinforcement Learning for Trustworthy Robots'.
Rhyss McMullan (Sept/2024), project: 'Resilient Risk-Aware Autonomy for Quadrupedal Robots'.
Wei Ding (Sept/2024), project: 'Development of an enhanced steering control model for autonomous ground vehicles'.
Rajeem Thomas (May/2024), project: 'Exploring Artificial Intelligence for Autonomous Vehicle's Trust and Security'.
Peter James McConnellogue (Oct/2023), project: 'Intelligent control for trustworthy autonomous vehicles'.
Luke Maguire (Oct/2023), project: 'Digital Twins-enhanced trustworthy autonomous vehicles'.
Kabirat Bolanle Olayemi (Oct/2022), project: 'Resilient Risk-Aware Autonomy for Unmanned Autonomous Vehicles'.
Jack Close (Oct/2022), project: 'Learning-based control for trustworthy autonomous systems'.
Minh Nhat Nguyen (Jan/2022-), project: 'Digital-Twin Based In-Process Quality Control for Robotic Machining'.
Stephen McIlvanna (Sept/2021-), project: 'Learning-based Safety Critical Control for Trustworthy Autonomous Systems'.
Main-supervised exchange PhD students (1-2 years):
Chaoning Chen (Sept/2024), from Jilin University, project: 'Control Systems in Vehicle Automation'.
Haichuang Zhang (Sept/2024), from Chang'an University, project: 'Driving Style Identification of Drivers'.
PhD students (co-supervisor):
Richard Hamilton (Oct/2024), project: 'Trustworthy AI for Automated Driving Systems' (co-supervision with Prof. Mehrdad Dianati).
Ruihan Yao (Sept/2024) (co-supervision with Prof. Yan Jin).
Morgan Campbell (Oct/2023) (co-supervision with Dr. Madjid Karimirad).
Jing Wang (2022-) (co-supervision with Dr. Chongfeng Wei).
Hady Farrag (2022-) (co-supervision with Dr. Chongfeng Wei).
2025-2026:
UG and MSc students:
Thomas Lowry, project: Develop Unmanned Aerial Vehicles (UAVs) for Offshore Wind Turbine Blade Inspection
Stas Glinkowski, project: Fruit (Strawberry) Picking Robot
Oisin Braniff, project: Picking and Placing Underwater Objects using Underwater Robots-BlueROV2
2024-2025:
UG and MSc students:
Ronan McElroy, project: Learning to Walk and Jump for Quadruped Robots (Awarded The Invista Prize in Control Engineering)
Cormac Toal, project: Develop UAVs for Offshore Wind Turbine Blade Inspection
Balaji Sudharsana Devi Baskaran, project: Safety Critical Control for Quadruped Robots
2023-2024:
UG students:
Rhyss McMullan, project: Learning to Walk and Jump for Quadruped Robots (Awarded The Invista Prize in Control Engineering)
Lorcan Quail, project: Robotic Guide Dog in Pedestrian Environments for People who are Blind and Visually Impaired
Jamie Curran, project: CycleAid – Develop a Haptic Feedback Vest that uses Computer Vision as Input
Niall Douglas, project: Fruit (Strawberry) Picking Robot
Paul Reid, project: Palletising Doors Using a Robot Arm
Aaron Myers, project: Fruit (Strawberry) Picking Robot
MSc students:
Alumni:
PhD students:
Virajith Shakya (2020->), project: 'Deformation modelling and optimal control of machining processes by Parallel Kinematic Machines' (co-supervised with Dr. Yan Jin) -> Assistant Professor at University of Kelaniya, Sri Lanka
Yuzhu Sun (Sept/2021-Sept/2024), project: 'Learning and Control for Human-Robot Collaboration: A Focus on Safety and Adaptability' (main supervisor) -> Postdoctoral research fellow at Queen's University Belfast
Dianhao Zhang (2019->), project: 'Human robot collaboration in Manufacturing' (co-supervised with Prof. Sean McLoone)
Shea Quinn (2020->), project: 'A smart approach for laser dissimilar materials joining via machine learning and AI' (co-supervised with Chi-Wai Chan and Prof. Fraser Buchanan)
Arron Magill (2020->), project: 'Next Generation Tooling & Fixturing Paradigms for Cyber Physical Production Systems' (co-supervised with Dr. Joseph Butterfield and Prof. Adrian Murphy)
2022-2023:
UG students:
Peter James McConnellogue, project: Deep Learning for a Sorting Line using a Robot Arm
Xinjie Wang, project: Safety Critical Control for Autonomous Ground Vehicles
Patrick Ryan Woods Arnott, project: Picking and Placing Underwater Objects using Underwater Robots-BlueROV2
Zaokun Pu, project: Intelligent Control for Underwater Robots
Allwin Bino, project: Visual Servoing Control for Landing of a UAV on a Moving Target
Venkata Deepak Reddy Medam, project: Learning the Steering Policy for AGVs Based on Deep Learning Using a 3D Camera and Lidar Sensor
Andrew Ellis, project: Marine Debris Classification using Sensor Fusion and Deep Learning
MSc students:
2021-2022:
UG students:
Scott Lam-McGonnell, project: Deep Learning for a Sorting Line using the WLKARA robot
Matthew Pentland, project: Picking and Placing Underwater Objects using Underwater Robots-BlueROV2 (Received the Best Thesis Award, the Megaw Memorial Lecture Award and The Invista Prize in Control Engineering)
Devin Twaddle, project: Marine Debris Classification using Deep Learning
Joshua Beatty, project: Underwater Object Classification using Deep Learning
Jordan Keenan, project: Marine Segmentation using Deep Learning
MSc students:
Hongyu Ju, project: Intelligent Control for the Underwater Robot BlueROV2
2020-2021:
Roujian Li, project: Visual Servo Control for Landing of a UAV on a Fixed/Moving Target
Keying Xia, project: Cooperative Control for Dual Robots
Aditya Mukherjee, project: Robot Vision Systems
Yuanhao Ling, project: Control of a Robot Interacting with its Environment
Mate Dravuzc, project: Vision-based Terrain Classification for Mobile Robots Using Convolutional Neural Networks
2019-2020:
Eimhin Laverty, project: Low-Cost Robotic Vision for Wire-Terminal Insertion Tasks
Mingyu Liu, project: Vision Control for Landing of a UAV on a Moving Target
Thank you for your interest in working with me! Please find the information below and email me at m.van@qub.ac.uk.
Undergraduates / Graduate students looking for research opportunities:
If you choose to email me, please attach your CV, transcripts and a statement of your research interests; otherwise, you may not receive a reply. I am looking for the following qualities in students:
(i) An interest in doing research on robotics or control
(ii) Strong motivation for doing research
If you have fellowship support, please be sure to let me know.
If you are a self-funded student considering studying for a PhD in any of the topics below, please email me at m.van@qub.ac.uk to discuss before applying.
I am recruiting sponsored or self-funded PhD students who wish to undertake projects in AI, Machine Learning, Robotics and Control, including projects within the topics below:
1. Offshore Wind Turbine Blade Inspection Using UAVs and Computer Vision
Project Description:
Wind turbines (WTs) are large structures subject to various defects and require periodic maintenance. Certain types of damage, such as blade delamination or erosion, may not cause complete failure of the WT but do degrade its performance. Conversely, proper maintenance can increase annual energy production by 5%, increase profits by 20% and extend a WT's life span. Improving Operations & Maintenance (O&M) for WTs is therefore extremely important. This project will develop a new approach for comprehensively inspecting the inside of a wind turbine blade using a drone/Unmanned Aerial Vehicle (UAV), identifying internal damage along the full length of the blade using computer vision and 3D rendering. We will develop control algorithms to operate the drone within the blade in a GPS-denied environment and to provide accurate localization and mapping of the interior of the offshore wind turbine blade.
2. Learning-Based Safety Critical Control for Legged Robots
Project Description:
Legged robots (quadrupedal, bipedal and humanoid robots) have been extensively applied in many practical applications that are either too dangerous or unsuitable for humans, such as environmental monitoring, security surveillance, and search-and-rescue. These systems, however, consist of many interdependent components, from sensors to motors, operating in highly uncertain environments and exhibiting complex dynamics. This interdependency introduces new vulnerabilities within the robot systems that are sometimes impossible to predict. As a result, a single disturbance in actuators or sensors can lead to catastrophic events such as colliding with obstacles. Hence, it is imperative to guarantee that both the legged robots and the humans around them are always safe during operation, even when facing unforeseen and unpredictable events. Unfortunately, at the moment there are very limited solutions that can guarantee safety at runtime while a legged robot explores. It is thus necessary to develop a new theory of safety critical control for legged robots to guarantee resilience and safety against errors, uncertainties and disturbances. In this project, a new theory of learning-based safety critical control will be developed to mitigate or even eliminate the effects of errors and disturbances during autonomous operations. The safety critical control is employed to guarantee that the legged robot always operates within the designed safe zones during manoeuvring. Meanwhile, learning algorithms (meta learning or reinforcement learning) will be explored to model the uncertainty sources and integrate them within the safety critical control, enhancing the precision of the tracking control system.
Objectives:
1. To build a simulation model for a legged robot using the Gazebo platform or Matlab/Simulink.
2. To design a safety critical control that ensures the legged robot always operates within the designed safe zones during manoeuvring.
3. To develop learning algorithms using meta-learning and/or reinforcement learning concepts in order to enhance the precision of the safety critical control.
4. To implement and validate the proposed algorithms in computer simulation platforms: the Gazebo platform or Matlab/Simulink.
5. To implement the developed algorithms on a legged platform, i.e., the PAWs robot, which has been designed and developed by our group.
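One standard way to realise the safe-zone guarantee in objective 2 is a control barrier function (CBF) safety filter. The sketch below is only a minimal illustration of that idea under simplifying assumptions (a hypothetical single-integrator robot and one circular unsafe zone), not the project's actual controller or the PAWs platform:

```python
import numpy as np

def safety_filter(x, u_nom, x_obs, r, alpha=1.0):
    """CBF safety filter for a single-integrator robot (x_dot = u).

    Barrier h(x) = ||x - x_obs||^2 - r^2 >= 0 keeps the robot outside a
    circular unsafe zone. With a single constraint, the QP
    min ||u - u_nom||^2  s.t.  grad_h . u >= -alpha * h(x)
    has a closed-form solution, so no QP solver is needed.
    """
    h = float(np.dot(x - x_obs, x - x_obs)) - r**2
    a = 2.0 * (x - x_obs)        # gradient of h
    b = -alpha * h               # constraint right-hand side
    if a @ u_nom >= b:           # nominal command is already safe
        return u_nom
    # minimally modify u_nom: project it onto the half-space a.u >= b
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Nominal controller drives straight at a goal hidden behind the obstacle;
# the filter deflects the robot around the unsafe zone instead.
x, goal = np.array([-2.0, 0.05]), np.array([2.0, 0.0])
x_obs, r, dt = np.array([0.0, 0.0]), 0.5, 0.01
min_dist = np.inf
for _ in range(2000):
    u_nom = goal - x             # proportional go-to-goal law
    x = x + dt * safety_filter(x, u_nom, x_obs, r)
    min_dist = min(min_dist, float(np.linalg.norm(x - x_obs)))
```

Because the filter intervenes only when the nominal command would violate the barrier condition, it is minimally invasive: away from the unsafe zone the robot behaves exactly as the nominal controller dictates.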
3. Human-enabled control of autonomous partners via AR (Augmented Reality)- and VR (Virtual Reality)-based digital twin
Project Description:
Unmanned autonomous systems (UASs) such as unmanned aerial vehicles, marine vehicles and autonomous ground vehicles serve vital roles in security and defence systems. In some scenarios, such systems require skills and adaptation capabilities for unpredictable conditions that only expert human operators can provide, for example when a severe failure or external attack occurs. Recent research on UASs has therefore focused on fusing information from intelligent agents and humans. This project develops a novel human-in-the-loop digital twin that provides a new model of human-autonomous agent interaction, in which the human can access the life cycle and remaining mission capacity of autonomous team members in real time and remotely control the team when an unsafe circumstance occurs. The proposed digital twin system aggregates the functions of remote control and virtual monitoring using virtual reality and augmented reality techniques. These two essential functions are designed to provide an immersive and friendly operating environment as well as a vivid preview of situation awareness, improving the safety, security and resilience of the team.
Objectives:
1. To develop a digital twin model for life-cycle modelling and situation awareness.
2. To design human-machine interaction through the digital twin and augmented reality.
3. To develop learning algorithms based on reinforcement learning concepts to enhance the precision of the safety critical control.
4. To implement the developed algorithms on robot platforms, i.e., the Husky A200 autonomous ground vehicle and a multi-agent robot system of 4 underwater robots, which are currently available in our Lab.
4. Reinforcement Learning-Enhanced Safety Critical Control for Multiple Collaborative Vehicles
Project Description:
Multiple collaborative unmanned autonomous vehicles (UAVs) have been extensively applied in many practical applications that are either too dangerous or unsuitable for humans, such as environmental monitoring, security surveillance, and search-and-rescue. These systems, however, consist of many interdependent components, from sensors to motors, operating in highly uncertain environments and exhibiting complex dynamics. This interdependency introduces new vulnerabilities within UAV systems that are sometimes impossible to predict. As a result, a single disturbance in actuators or sensors can lead to catastrophic events such as colliding with obstacles. Hence, it is imperative to guarantee that both the UAVs and the humans around them are always safe during operation, even when facing unforeseen and unpredictable events. This project aims to develop a novel safety critical control for UAVs based on the advances and applications of reinforcement learning techniques.
Objectives:
1. To design a safety critical control that assures the UAVs always operate within the designed safe zones during manoeuvring.
2. To extend the developed algorithm to multiple collaborative UAVs.
3. To develop learning algorithms based on reinforcement learning concepts to enhance the precision of the safety critical control.
4. To implement and validate the proposed algorithms in computer simulation platforms: the Gazebo platform or Matlab/Simulink.
5. To implement the developed algorithms on physical platforms, i.e., the Husky A200 autonomous ground vehicle and a multi-agent robot system, which are currently available in our Lab.
5. Resilient Risk-Aware Autonomy for Trustworthy Cooperative Autonomous Unmanned Vehicles
Project Description:
Autonomous unmanned vehicles (AUVs) have been extensively applied in many practical applications that are either too dangerous or unsuitable for humans, such as environmental monitoring, security surveillance, precision farming, and search-and-rescue. To increase performance and save execution time, the concept of a group/team of AUVs has also been developed extensively for many crucial applications. AUVs are inherently complex machines that have to cope with dynamic and often hostile environments, e.g., underwater or nuclear environments. Extreme disturbances from dynamic environments can cause system/mission failures, leading to physical system damage, intelligence leakage, endangerment of human lives, and fear or loss of confidence in the public. Cyber-attacks that incapacitate AUVs or compromise their sensors are becoming increasingly realistic. Unexpected AUV failures can seriously degrade the performance of an AUV team and in extreme cases jeopardize the overall operation. Hence, it is imperative to guarantee that they can always be safe, even during unforeseen and unpredictable events. Unfortunately, at the moment there are no real solutions that can guarantee safety at runtime while an AUV or team of AUVs explores, learns, plans and controls its actions through unknown, uncertain, and adversarial environments. It is thus necessary to develop theories for health monitoring, path planning and resilient control of cooperative autonomous systems to guarantee resiliency and safety against failures and upcoming potential risks, and hence to enable assured autonomous operations.
This project therefore aims to develop a high-fidelity Resilient Risk-Aware Autonomy (2R2A) framework that smartly integrates three interdependent key enabling technologies (KETs): a system health monitoring scheme, a self-configuring control system methodology and a resilience-based task allocation mechanism, to enhance the safety, reliability and resiliency of AUVs.
6. Digital-Twin based In-Process Quality Control for Robotic Machining
Robotic systems have played an important role in automated manufacturing tasks such as drilling, cutting, milling, grinding and polishing in recent years. In robotic machining, vibration can easily degrade the quality of the processed surfaces due to the poor and variable stiffness of the robot arm. Hence, it is imperative to monitor the processing quality and develop in-process quality control to enhance the machining process. In this project, a new digital-twin model, which integrates a physical (mathematical) model of the machining system with a data-driven technique (a machine learning approach), will be developed to build a surrogate model of the machining process. This digital twin can be used for two main applications: (i) to select optimal parameters (machining tool, interaction forces, etc.) of the machining process, and (ii) to monitor the product quality of the machining process online. This approach combines the advantages of physical modelling, such as high accuracy, with those of data-driven modelling, such as real-time execution. The digital-twin model will then be fed back into the process control system to adapt the control input online and enhance the product quality of the machining process.
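The physics-plus-data-driven surrogate idea can be sketched in a few lines. The example below is purely illustrative: the "physics model", the synthetic measurements and all parameters are invented for this sketch, and a low-order polynomial stands in for whatever machine learning regressor the project would actually use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Physics part (hypothetical): predicted surface error grows linearly with
# cutting force f through the robot arm's nominal compliance k.
k = 0.02
def physics(f):
    return k * f

# Synthetic "measured" data: the real process also has an unmodelled
# nonlinear effect plus sensor noise (a stand-in for machining data).
f_train = rng.uniform(50, 300, 200)
y_train = k * f_train + 1e-6 * f_train**2 + rng.normal(0, 0.01, 200)

# Data-driven part: fit the residual (measurement minus physics prediction)
# with a low-order polynomial, standing in for any ML regressor.
residual_coeffs = np.polyfit(f_train, y_train - physics(f_train), 2)
def surrogate(f):
    return physics(f) + np.polyval(residual_coeffs, f)

# The hybrid surrogate predicts held-out (noise-free) data better than the
# physics model alone, because it has learned the unmodelled effect.
f_test = np.linspace(60, 290, 50)
y_test = k * f_test + 1e-6 * f_test**2
err_physics = np.abs(physics(f_test) - y_test).max()
err_hybrid = np.abs(surrogate(f_test) - y_test).max()
```

The design point is that the physics model carries the bulk of the prediction, so the learned part only has to capture a small residual, which keeps the surrogate cheap to evaluate online.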
7. Human-Robot Collaboration in Disassembly for Future Remanufacturing
Project description:
Remanufacturing is the process of recovering, disassembling, repairing and sanitizing components for resale at “new product” performance, quality and specifications. Remanufacturing helps to reduce manufacturing cost and environmental impact. A key step in remanufacturing is disassembly of the returned product. Most current disassembly tasks in remanufacturing are manual, as they are too complex to automate: existing products have not been designed with disassembly in mind. However, manual disassembly is a time-consuming and costly process. Automating disassembly tasks using robotic technology is therefore necessary and important to reduce cost and speed up the disassembly process. The robotic technologies generated through this project aim to make the disassembly process more collaborative, with robot assistants working alongside humans. This human-robot collaborative system takes advantage of the strengths of humans (cognitive capability) and robotic systems (repeatability and high precision), and will require advancements in perception, planning and control.
Aims and Objectives:
The aim of the project is to establish an integrated framework that automates disassembly in remanufacturing via collaboration between humans and robots. The objectives of the project are as follows:
To identify the task sequences of disassembly processes.
To develop task sequence planning for robots in a human-robot collaboration setting to optimize operational cost and disassembly efficiency.
To predict human behaviour and adapt robot trajectories accordingly to guarantee safe collaboration.
To develop planning, learning and control for robots to perform disassembly tasks.
To publish research outcomes in appropriate journals of international standing and to disseminate the results of research and scholarship in other reputable outlets.
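The first objective, identifying feasible task sequences, can be illustrated as a topological sort of a disassembly precedence graph. The sketch below uses a hypothetical product whose part names and constraints are invented for illustration:

```python
from collections import deque

def disassembly_order(precedence):
    """Topological sort (Kahn's algorithm) of a disassembly precedence graph.

    `precedence` maps each part to the parts that must be removed first.
    Returns one feasible removal sequence, or raises if constraints cycle.
    """
    parts = set(precedence) | {p for deps in precedence.values() for p in deps}
    indeg = {p: 0 for p in parts}          # number of unmet prerequisites
    successors = {p: [] for p in parts}    # parts unlocked by removing p
    for part, deps in precedence.items():
        for d in deps:
            indeg[part] += 1
            successors[d].append(part)
    ready = deque(sorted(p for p in parts if indeg[p] == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for s in successors[p]:            # removing p unlocks its successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    if len(order) != len(parts):
        raise ValueError("precedence constraints contain a cycle")
    return order

# Hypothetical product: screws come out before the cover, and the cover
# must come off before the PCB or the battery can be removed.
plan = disassembly_order({
    "cover": ["screws"],
    "pcb": ["cover"],
    "battery": ["cover"],
})
```

In the full problem the choice among the many feasible orders would be optimized (objective 2) for cost and for which steps are assigned to the human versus the robot; the sort above only establishes feasibility.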
8. In-process quality control for complex manufacturing system
Project Description:
A complex manufacturing process is a multiple-input multiple-output (MIMO) coupled system: one parameter has significant effects on the variations of other parameters. Hence, a small disturbance in one or more of these parameters may result in undesired KPI outputs, leading to defects in the manufactured parts. It is therefore essential to monitor the multiple KPIs, understand the coupled dynamics and defect formation, and control the multiple inputs simultaneously. This project synergizes expertise in computer science, physics, manufacturing, and robotics and control engineering to overcome one of the most challenging issues in controlling complex manufacturing systems, namely implementing in-process quality control.
Objectives:
1. Extract KPIs based on a digital twin approach.
2. Design in-process quality control.
3. Implement the control algorithms on a physical experimental remote laser welding platform.
9. Impedance Control with Human Intent Estimation for Human-Robot Collaboration
Project Description:
The use of robots in industrial applications has increased significantly in recent years. This project aims to realize robotic systems that are useful in industrial settings predominantly occupied by human workers. The robotic system is not intended to completely replace the human worker; rather, with enhanced intelligence and capability, it will serve as a highly useful tool for the worker. This human-robot collaborative system takes advantage of the strengths of humans (cognitive capability) and robotic systems (repeatability and high precision). As a co-operator, the robot needs to understand the human's intention and adapt to the human's movement and interaction force. To equip robots with such capabilities, impedance control strategies with human intent estimation will be investigated in this project.
Objectives:
1. To design disturbance observers to estimate human motion intention.
2. To design optimization algorithms to optimize the interaction force between robot and human.
3. To design impedance control/admittance control strategies for the robot to track the human intention.
4. To implement the developed algorithms on industrial robot platforms, i.e., the Baxter robot.
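To show how objectives 1 and 3 fit together, the sketch below couples a momentum-based disturbance observer, which estimates the human's applied force from measured velocity and the commanded torque, with an admittance law that renders compliant desired dynamics. All parameters and the 1-DOF setup are hypothetical, chosen for illustration only, not the project's actual design or the Baxter platform:

```python
# 1-DOF admittance control with a momentum-based disturbance observer.
M = 2.0                    # actual link mass (kg), assumed known
M_d, D_d = 1.0, 4.0        # desired admittance inertia and damping
K_obs = 50.0               # observer gain (estimate time constant ~20 ms)
dt, T = 1e-3, 3.0

x = v = 0.0                # robot position and velocity
f_hat = 0.0                # estimated human force
p_int = 0.0                # integral term of the momentum observer

for k in range(int(T / dt)):
    t = k * dt
    f_human = 5.0 if t < 2.0 else 0.0   # human pushes for 2 s, then lets go

    # admittance law: desired acceleration from the *estimated* human force
    a_d = (f_hat - D_d * v) / M_d
    u = M * a_d                          # computed-torque command

    # plant: true dynamics driven by the command plus the real human force
    v += dt * (u + f_human) / M
    x += dt * v

    # momentum observer: with momentum p = M*v and p_dot = u + f_ext,
    # f_hat = K_obs * (p - integral of (u + f_hat)) converges to f_ext
    p_int += dt * (u + f_hat)
    f_hat = K_obs * (M * v - p_int)

    if k == int(1.9 / dt):               # snapshot while the human pushes
        f_hat_push = f_hat
```

The observer needs no force sensor: it infers the external force from the mismatch between the measured momentum and the momentum the commanded torque alone would produce, and the admittance law then lets the robot yield to that estimated force with the chosen virtual inertia and damping.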