Safety-critical model predictive control with control barrier function for dynamic obstacle avoidance
PhD student: Minh Nhat Nguyen & Stephen McIlvanna
Abstract: In this paper, a safety-critical control scheme for a nonholonomic robot is developed to generate control signals that yield optimal obstacle-free paths through dynamic environments. A barrier function is used to obtain a safety envelope for the robot. We formulate control synthesis as an optimal control problem that enforces control barrier function (CBF) constraints to achieve obstacle avoidance. A nonlinear model predictive control (NMPC) scheme with CBF constraints is studied to guarantee system safety and achieve optimal performance with a short prediction horizon, which reduces the computational burden of real-time NMPC implementation. An obstacle-avoidance constraint based on the Euclidean norm is also incorporated into the NMPC to highlight the effectiveness of the CBF in both the point-stabilization and trajectory-tracking problems of the robot. The performance of the proposed controller in achieving both static and dynamic obstacle avoidance is verified in several simulation scenarios.
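The core idea can be illustrated with a minimal sketch: a discrete-time CBF condition h(x_{k+1}) >= (1 - gamma) h(x_k) applied to a unicycle model, with a one-step greedy input selection standing in for the full NMPC optimization. All parameters (obstacle location, decay rate gamma, input grid) are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative parameters (assumptions for this sketch, not from the paper)
DT, GAMMA, R_SAFE = 0.1, 0.3, 0.5          # time step, CBF decay rate, obstacle radius
OBS = (1.0, 0.0)                            # static obstacle centre

def step(state, u):
    """One Euler step of the unicycle model: x' = v cos(th), y' = v sin(th), th' = w."""
    x, y, th = state
    v, w = u
    return (x + DT * v * math.cos(th), y + DT * v * math.sin(th), th + DT * w)

def h(state):
    """Barrier function: squared distance to the obstacle minus the safety radius squared."""
    x, y, _ = state
    return (x - OBS[0]) ** 2 + (y - OBS[1]) ** 2 - R_SAFE ** 2

def cbf_ok(state, u):
    """Discrete-time CBF condition: h(x_{k+1}) >= (1 - GAMMA) * h(x_k)."""
    return h(step(state, u)) >= (1.0 - GAMMA) * h(state)

def filtered_input(state, goal, candidates):
    """One-step greedy stand-in for NMPC: among CBF-feasible inputs,
    pick the one whose next state is closest to the goal."""
    def cost(u):
        nx, ny, _ = step(state, u)
        return (nx - goal[0]) ** 2 + (ny - goal[1]) ** 2
    feasible = [u for u in candidates if cbf_ok(state, u)]
    return min(feasible, key=cost) if feasible else None
```

Because the zero input always satisfies the condition when h > 0, the feasible set is never empty here, and h decays at most geometrically, so it stays positive and the obstacle is never entered.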
Digital Twin-Driven Reinforcement Learning for Obstacle Avoidance in Robot Manipulators
PhD student: Yuzhu Sun
The evolution and growing automation of collaborative robots introduce more complexity and unpredictability into systems, highlighting the crucial need for robots to be adaptable and flexible enough to address the increasing complexity of their environment. In typical industrial production scenarios, robots often have to be re-programmed when facing a more demanding task or even small changes in workspace conditions. To increase productivity and efficiency and to reduce human effort in the design process, this paper explores the potential of combining a digital twin with Reinforcement Learning (RL) to enable robots to generate self-improving, collision-free trajectories in real time. The digital twin, acting as a virtual counterpart of the physical system, serves as a 'forward run' for monitoring, controlling, and optimizing the physical system in a safe and cost-effective manner. The physical system sends data to synchronize the digital system through video feeds from cameras, which allows the virtual robot to update its observations and policy based on real scenarios. The bidirectional communication between the digital and physical systems provides a promising platform for hardware-in-the-loop RL training through trial and error until the robot successfully adapts to its new environment. The proposed online training framework is demonstrated on the Ufactory Xarm5 collaborative robot, where the robot end-effector aims to reach a target position while avoiding obstacles. The experiments suggest that the proposed framework is capable of performing online policy training, and that there remains significant room for improvement.
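The synchronize-then-train loop can be sketched in miniature: a "digital twin" mirrors an obstacle position observed from the physical side, hosts tabular Q-learning rollouts, and the learned policy is then deployed. The grid world, reward values, and the fixed camera observation are all toy assumptions standing in for the real Xarm5 setup.

```python
import random

random.seed(0)

# Toy 4x4 grid stand-in for the reach task (all values are illustrative).
W, H = 4, 4
GOAL = (3, 3)
ACTIONS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def physical_observation():
    """Stand-in for the camera feed: report the obstacle cell seen on hardware."""
    return (1, 1)

class DigitalTwin:
    """Virtual counterpart: mirrors the observed obstacle and hosts RL rollouts."""
    def __init__(self):
        self.obstacle = None
    def sync(self, observed_obstacle):
        self.obstacle = observed_obstacle          # physical -> digital update
    def step(self, s, a):
        ns = (min(W - 1, max(0, s[0] + a[0])), min(H - 1, max(0, s[1] + a[1])))
        if ns == self.obstacle:                    # collision happens in the twin only
            return s, -10.0, False
        if ns == GOAL:
            return ns, 10.0, True
        return ns, -1.0, False

def train(twin, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning run entirely inside the digital twin (trial and error
    without risking the physical robot)."""
    q = {}
    def Q(s, a):
        return q.get((s, a), 0.0)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: Q(s, b))
            ns, r, done = twin.step(s, a)
            q[(s, a)] = Q(s, a) + alpha * (r + gamma * max(Q(ns, b) for b in ACTIONS) - Q(s, a))
            s = ns
            if done:
                break
    return q
```

After training, the greedy policy reaches the goal in the twin while steering around the mirrored obstacle; in the real framework, the deployed policy would be re-trained whenever a new camera observation is synchronized.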
Multiple Collaborative Underwater Vehicles for Underwater monitoring
PhD students: Jack Close, Stephen McIlvanna, Minh Nhat Nguyen
Multiple collaborative unmanned autonomous vehicles (UAVs) have been extensively applied in practical applications that are either too dangerous or unsuitable for humans, such as environmental monitoring, security surveillance, and search-and-rescue. These systems, however, consist of many interdependent components, from sensors to motors, operating in highly uncertain environments and exhibiting complex dynamics. This complex interdependency introduces new vulnerabilities within UAV systems that are sometimes impossible to predict. As a result, a single disturbance in actuators or sensors can lead to catastrophic events such as collisions with obstacles. Hence, it is imperative to guarantee that both the UAVs and the humans in their surroundings remain safe during operation, even when facing unforeseen and unpredictable events. This project aims to develop a novel safety-critical control scheme for UAVs based on advances in, and applications of, reinforcement learning techniques.
Safety-Critical Control for Adaptive Admittance Control
PhD student: Yuzhu Sun
Abstract: Physical human-robot collaboration requires strict safety guarantees, since robots and humans work in a shared workspace. This paper presents a novel control framework to handle safety-critical position-based constraints for physical human-robot interaction. The proposed methodology is based on admittance control, exponential control barrier functions (ECBFs) and a quadratic program (QP), achieving compliance during force interaction between human and robot while simultaneously guaranteeing the safety constraints. In particular, the admittance-control formulation is rewritten as a second-order nonlinear control system, and the interaction forces between human and robot are regarded as the control input. A virtual force feedback for the admittance control is provided in real time by using the ECBF-QP framework as a compensator of the external human forces. A safe trajectory is therefore derived from the proposed adaptive admittance control scheme for a low-level controller to track. The innovation of the approach is that the controller enables the robot to comply with human forces with natural fluidity, without violating any safety constraints, even when external human forces would incidentally push the robot into constraint violation. The effectiveness of our approach is demonstrated in simulation studies on a two-link planar robot manipulator.
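In one dimension the ECBF-QP compensator has a closed form, which makes the idea easy to sketch: the QP that keeps the filtered force closest to the human force subject to the ECBF constraint reduces to clamping the human force against the constraint boundary. The admittance parameters, ECBF gains, and position limit below are assumptions for illustration, not values from the paper.

```python
# Illustrative 1-D admittance model with an ECBF safety filter.
M, D = 1.0, 2.0        # virtual mass and damping of the admittance model
K1, K2 = 4.0, 4.0      # ECBF gains (s^2 + K2 s + K1 must be Hurwitz)
X_MAX = 0.5            # position safety constraint: x <= X_MAX
DT = 0.01

def ecbf_filter(x, v, f_human):
    """1-D QP  min (f - f_human)^2  s.t. ECBF condition  solved in closed form.
    With h = X_MAX - x (relative degree 2): h' = -v, h'' = -(f - D v)/M, and the
    ECBF condition h'' + K2 h' + K1 h >= 0 becomes an upper bound on f."""
    f_bound = D * v - M * K2 * v + M * K1 * (X_MAX - x)
    return min(f_human, f_bound)

def simulate(f_human, steps=2000):
    """Integrate the admittance dynamics M x'' + D x' = f with the filtered force."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        f = ecbf_filter(x, v, f_human)   # virtual force feedback compensating f_human
        a = (f - D * v) / M
        v += DT * a                       # semi-implicit Euler integration
        x += DT * v
    return x, v
```

Even a large, sustained human push (f_human = 10) then moves the virtual trajectory smoothly up to the boundary x = X_MAX without crossing it, which is the "comply until the constraint, then stop" behaviour described above.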
Digital-Twin Based In-Process Quality Control for Robotic Machining
PhD student: Minh Nhat Nguyen
Abstract: Remote laser-welding (RLW) systems are being used more frequently because of their larger working areas, shorter downtimes, and ability to weld different seam types with high accuracy at greater speeds than conventional welding. A leading challenge preventing full industrial uptake of RLW technology is the lack of efficient in-process monitoring and weld-quality control solutions. This underpins the need for a penetration-estimation model and advanced quality-critical control. The aim of this research project is to review the current and emerging tools and techniques that can be applied to guarantee welding quality or significantly reduce welding defects. This report briefly reviews some of the key background areas, then details the work undertaken to date and outlines the expected progress for the remainder of the research.
Reinforcement Learning-Enhanced Safety Critical Control
PhD student: Kabirat Olayemi
To increase performance and throughput, multiple collaborative UAVs have been developed. These systems, however, consist of many interdependent components, from sensors to motors, operating in highly uncertain environments and exhibiting complex dynamics. This complex interdependency introduces new vulnerabilities within UAV systems that are sometimes impossible to predict. Hence, it is imperative to guarantee that both the UAVs and the humans in their surroundings remain safe during operation, even when facing unforeseen and unpredictable events. In this project, a new theory of learning-based safety-critical control will be developed to mitigate or even eliminate the effects of errors and disturbances during autonomous operation. Learning algorithms based on reinforcement learning will be explored to model the uncertainty sources and the environment, and these models will be integrated within the safety-critical control. This will enhance the precision and safety of the control system.
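One way the "learn the uncertainty, then integrate it into the safety-critical control" idea can work is sketched below: a 1-D double integrator approaches a position limit under an unknown actuation disturbance; a learner tracks the worst model residual observed online, and a braking-distance barrier tightened by that learned bound filters the (deliberately aggressive) learning-based action. The dynamics, disturbance value, and margin are all assumptions for the sketch.

```python
# Illustrative disturbance-aware safety filter for a 1-D double integrator.
DT, X_MAX, A_MAX = 0.01, 1.0, 2.0
D_TRUE = 0.3                          # unknown disturbance (hidden from the controller)

def plant(x, v, a):
    """True dynamics: the commanded acceleration is corrupted by D_TRUE."""
    return x + DT * v, v + DT * (a + D_TRUE)

class DisturbanceLearner:
    """Conservative online estimate: the worst model residual seen so far."""
    def __init__(self):
        self.bound = 0.0
    def update(self, a_cmd, v_before, v_after):
        residual = abs((v_after - v_before) / DT - a_cmd)
        self.bound = max(self.bound, residual)

def safe_accel(x, v, a_rl, d_bound):
    """Safety filter: brake fully whenever the braking-distance barrier
    h = X_MAX - x - v^2 / (2 (A_MAX - d_bound)) nears zero, where the usable
    deceleration is reduced by the learned disturbance bound."""
    brake = max(A_MAX - d_bound, 1e-6)
    h = X_MAX - x - max(v, 0.0) ** 2 / (2 * brake)
    return a_rl if h > 0.1 else -A_MAX   # 0.1 margin absorbs discretization error
```

Running this loop with the RL action fixed at full throttle, the filter keeps the state below the limit while the learned bound converges to the true disturbance after a single observation.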
Intelligent control for legged robots
PhD student: Peter James McConnellogue
Legged robots present a significant challenge for artificial intelligence and control theory. To control a legged robot safely, the control system needs to understand the behaviour of the system dynamics under the effects of model uncertainties and environmental disturbances. This can be achieved by embedding a control scheme that integrates: (1) model-based controllers that ensure the safety of the system, (2) an online learning technique for estimating the unknown system dynamics, and (3) a model-free learning technique, such as reinforcement learning, for estimating the unknown environment. Integrating model-based control with online approximation of the system dynamics and environment learning will enable the legged robot to operate with high confidence in its safety. The aim of the project is therefore to design a controller architecture that combines: (1) a model-based controller built on safety-critical control concepts, (2) online learning of the unknown system dynamics, and (3) a model-free reinforcement learning technique that estimates the external environment, to ensure safety during learning and exploration.
Advancements in underwater vehicle technology have significantly expanded the potential scope for deploying autonomous or remotely operated underwater vehicles in novel practical applications. However, the efficiency and maneuverability of these vehicles remain critical challenges, particularly in dynamic aquatic environments. In this work, we propose a novel control scheme for multi-agent distributed formation control with limited communication between individual agents. In addition, the multi-agent formation can be reconfigured in real time while network connectivity is maintained. Proposed use cases for this scheme include underwater mobile communication networks that adapt to environmental or network conditions to maintain the quality of communication links for long-range exploration, seabed monitoring, or underwater infrastructure inspection. This work introduces a novel Distributed Nonlinear Model Predictive Control (DNMPC) strategy, integrating Control Lyapunov Functions (CLF) and Control Barrier Functions (CBF) with a relaxed decay rate, specifically tailored for 6-DOF underwater robotics.
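The "formation with limited communication" ingredient can be illustrated with a minimal displacement-based consensus sketch: four agents on a ring communication graph (each talks to two neighbours only) converge to a square formation up to a common translation. This toy single-integrator update stands in for the DNMPC formation layer; the offsets, graph, and gains are assumptions, not the paper's design.

```python
# Minimal distributed formation consensus over a sparse communication graph.
DT, STEPS = 0.05, 400

# Desired formation offsets (a unit square) and a ring neighbour list:
# agent i only ever reads the positions of NEIGHBOURS[i].
OFFSETS = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
NEIGHBOURS = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (2, 0)}

def formation_step(pos):
    """Each agent moves to reduce the mismatch between its actual and desired
    displacement to each neighbour: u_i = sum_j (x_j - x_i) - (o_j - o_i)."""
    new = []
    for i, (x, y) in enumerate(pos):
        ux = uy = 0.0
        for j in NEIGHBOURS[i]:
            ux += (pos[j][0] - x) - (OFFSETS[j][0] - OFFSETS[i][0])
            uy += (pos[j][1] - y) - (OFFSETS[j][1] - OFFSETS[i][1])
        new.append((x + DT * ux, y + DT * uy))
    return new
```

Reconfiguring the formation in this sketch amounts to swapping OFFSETS at runtime; the same consensus update then drives the agents to the new shape, which mirrors the real-time reconfiguration described above.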
UAVs for internal turbine blade inspection (funded by Innovate UK)
Postdocs/RAs: Yuzhu Sun, Minh-Nhat Nguyen, Peter James McConnellogue
Our group has successfully designed and controlled a drone for offshore wind turbine inspection.