Research

Major Research

Scalable Risk-Aware Formal Analysis and Control of Multi-Agent Systems

Abstract: The growing demand for automation and intelligence in manufacturing and social services requires deploying ever larger numbers of autonomous devices, such as robots, autonomous vehicles, and unmanned aerial vehicles, in more complicated environments and for more sophisticated tasks than before. This poses greater challenges to the controller design of large-scale robot teams, owing to the heavy computational burden, decentralized stabilization, uncertainty quantification, risk mitigation, and fault handling. Conventional control approaches, which are sensitive to the accuracy of dynamic system models, struggle with these applications. On the one hand, advances in stochastic analysis, computational technologies, and artificial intelligence provide powerful tools to infer control-relevant knowledge from data. On the other hand, conventional control approaches based on transfer functions or Lyapunov theory offer little support for incorporating this knowledge. Novel control frameworks are therefore needed that can cope with large-scale systems and complicated task specifications while incorporating knowledge inferred by data-driven approaches. Formal control is a promising methodology for achieving this goal. In contrast to conventional, informal control approaches, which depend heavily on the proper selection of Lyapunov functions and control parameters, formal control methods start from a set of feasible control specifications expressed as temporal logic formulas; a controller is then solved automatically via a synthesis process using model-checking-based or optimization-based approaches. Such methods are called formal because the control objectives are formally given by predefined specifications and the controllers can be solved automatically. Nevertheless, synthesizing a formal controller is usually computationally expensive, which challenges the formal control of large-scale systems such as networked systems, multi-agent systems, and cyber-physical systems. In this research, we investigate novel approaches and technologies that improve the scalability of formal control methods to large-scale systems. Under this topic, we are developing a new conceptual framework for efficient and scalable formal control of multi-agent systems and cyber-physical systems. In the meantime, we consistently promote the application of novel formal control approaches to practical scenarios, such as autonomous driving, industrial manufacturing, and aerial management. [Details]
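
To make the notion of a temporal logic specification concrete, the sketch below evaluates the quantitative robustness of a simple signal temporal logic (STL) requirement, "always stay at least d_min away from an obstacle, and eventually reach the goal," over a sampled trajectory. This is a minimal illustration of the semantics only, not any particular synthesis toolchain; all function names and parameter values are our own assumptions.

    import numpy as np

    def rho_always_safe(traj, obstacle, d_min):
        # robustness of G(||x - obstacle|| > d_min): worst-case margin over the horizon
        d = np.linalg.norm(traj - obstacle, axis=1)
        return np.min(d - d_min)

    def rho_eventually_reach(traj, goal, eps):
        # robustness of F(||x - goal|| < eps): best-case margin over the horizon
        d = np.linalg.norm(traj - goal, axis=1)
        return np.max(eps - d)

    def rho_spec(traj, obstacle, goal, d_min=0.5, eps=0.1):
        # conjunction of the two subformulas: take the minimum robustness
        return min(rho_always_safe(traj, obstacle, d_min),
                   rho_eventually_reach(traj, goal, eps))

    # a trajectory satisfies the specification iff its robustness is positive
    traj = np.array([[0.0, 0.0], [0.5, 0.6], [1.0, 1.0]])
    print(rho_spec(traj, obstacle=np.array([1.0, 0.0]), goal=np.array([1.0, 1.0])))

A synthesis procedure would search for control inputs that keep exactly this kind of robustness measure positive, which is why the computation becomes expensive as the number of agents and subformulas grows.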

Artificial-Intelligence-Enabled Adaptive Robots

Abstract: The recent development of industrial manufacturing and social services has witnessed a significant trend toward automation and intelligence, driven by the wide application of robots and artificial intelligence (AI). While robots liberate humans from tedious and dangerous work in hazardous environments, AI simplifies the programming of robots by automatically inferring patterns and models from the interaction between robots and their environment. Nevertheless, the application of robots and AI to more general manufacturing and social tasks is still limited by a lack of flexibility and adaptability to changes in the task and the environment. For example, robot programming is conventionally platform-specific: control algorithms must be developed for each robot individually, and reprogramming is needed whenever another robot is assigned to the same task or the same robot must perform a new task. Even for learning-based approaches, massive training and data collection are necessary to develop control policies for specific robots, and transplanting or migrating algorithms or policies to new robots or new tasks remains a challenging problem. Thus, a new concept, adaptive robotics, has been proposed to capture the requirement that an AI-facilitated robot should be able to reprogram itself in response to these changes without human intervention. This concept is, however, still too abstract to provide concrete guidance for developing robot programs. In this research, we are dedicated to proposing novel concepts, frameworks, and approaches that define and prescribe the connotations and methodologies of adaptive robots. More specifically, we investigate how AI-based methods, such as machine learning, reinforcement learning, transfer learning, imitation learning, and meta-learning, can extend the capabilities of robotic systems in variable environments and changeable tasks. The scope of this research also includes how to upgrade a robot control algorithm from simple environments and tasks to complicated ones without massive data, and how to migrate a well-trained algorithm from simulation to reality despite the sim-to-real gap. Based on this, we aim to lead the way toward a new generation of robotic devices that automatically adapt to new environments and tasks with the least possible data and retraining. [Details]
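
One concrete technique for the sim-to-real migration mentioned above is domain randomization: training the policy across many perturbed versions of the simulated physics so that the real system looks like just another sample. The sketch below is a minimal illustration under assumed, hypothetical parameter ranges; it is not tied to any specific simulator or to the methods developed in this research.

    import random

    # hypothetical physics-parameter ranges around nominal values; a real setup
    # would randomize the actual parameters exposed by the simulator
    PARAM_RANGES = {
        "mass":       (0.8, 1.2),   # kg
        "friction":   (0.5, 1.5),   # dimensionless coefficient
        "motor_gain": (0.9, 1.1),
    }

    def sample_domain():
        # one randomized configuration per training episode; a policy trained
        # across many such draws tends to be robust to the unknown real parameters
        return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

    for episode in range(3):
        # e.g. pass this dict to the simulator before each training rollout
        print(sample_domain())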

Robust Control and Fault Identification of Safe Robotic Systems

Abstract: Safety is always a critical issue for robotic systems, especially in the context of human-robot interaction or collaboration. The uncertainties brought by human cooperators greatly increase the challenges in the design of human-in-the-loop systems. On the one hand, the task targets of the systems should be achieved despite human uncertainties, which raises a robustness issue. On the other hand, a human-robot collaboration system should always guarantee hard compliance with the required constraints to ensure human safety. However, the current technical standard in the manufacturing industry is far from satisfying these demands, and the design of a safe human-robot collaboration system remains an open problem. Among the diverse research fields in safe human-robot interaction, control, observation, and fault identification are important topics for guaranteeing the safe performance of robotic systems. A safe controller is necessary to ensure that the robotic system achieves the desired tasks without violating safety rules or causing injuries to human operators. Meanwhile, fault identification is an important technology for quantifying the influence of disturbances or system faults on the accomplishment of the desired tasks. Because the conventional methods of robust control and fault identification focus on the steady-state performance of the systems, namely tracking control errors and identification errors, they cannot ensure sufficient bandwidth of the system responses. For robot control, this makes the system sensitive to high-bandwidth disturbances, such as fast-changing signals or hard safety constraints. For fault identification, it makes high-bandwidth disturbances, such as square or triangle signals, difficult to reconstruct. In this research, we are dedicated to finding novel control and learning approaches that ensure fast system response in robust control and precise fault identification. To be more specific, we are interested in using second-order sliding mode control, time-delay estimation, supervised learning, and Bayesian inference to achieve a better balance between the steady-state performance and the bandwidth of the closed-loop systems. All these approaches form a novel safety framework for the design of robotic systems in human-robot collaboration contexts with human uncertainties and hard safety requirements. This framework promises to inspire the combined use of control-theory-based and machine-learning-based methods to build safe collaborative robots. [Details]
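
As an example of the second-order sliding mode control mentioned above, the sketch below simulates the super-twisting algorithm on a perturbed scalar integrator. The control signal is continuous, yet the sliding variable is driven to zero despite a bounded, smoothly varying disturbance. The plant, gains, and disturbance are illustrative choices of ours, not a specific robot model from this research.

    import numpy as np

    def simulate_super_twisting(k1=1.5, k2=1.1, dt=1e-3, T=5.0):
        # super-twisting controller on  s_dot = u + d(t):
        #   u = -k1*sqrt(|s|)*sign(s) + v,   v_dot = -k2*sign(s)
        n = int(T / dt)
        s, v = 1.0, 0.0                      # sliding variable and integral term
        for i in range(n):
            d = 0.4 * np.sin(2.0 * i * dt)   # bounded disturbance with bounded derivative
            u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
            v += -k2 * np.sign(s) * dt       # second-order (integral) action
            s += (u + d) * dt                # forward-Euler plant step
        return s

    print(f"|s| after 5 s: {abs(simulate_super_twisting()):.4f}")  # close to zero

The integral term effectively reconstructs the negative of the disturbance, which is the same mechanism that makes this family of controllers relevant to the fault identification problems discussed above.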

Collaborative Research

Safe Control of Multi-Agent Systems Using Control Barrier/Lyapunov Functions

Abstract: Safety has become an important issue for the control of multi-agent systems, especially those with complex dynamic models or operating in complicated environments. For example, search-and-rescue missions involve navigation through rugged terrain with unpredictable obstacles, and the coverage control of multiple robots must achieve effective coverage without leaving the assigned area. The predominant approach follows a decoupled mapping, planning, and tracking-control paradigm that trades theoretical guarantees for practical efficiency. Each layer in this decoupled paradigm is designed under the assumption that the connected layers execute perfectly. However, the actual performance of each layer in real-world environments often deviates from this expectation, which causes cascading errors in the classical map-plan-track architecture; these cascading errors and their influence are hard to characterize theoretically and thus hard to address. For example, the tracking controller is designed assuming a dynamically feasible and safe reference trajectory is available, which may not hold if the numerical optimization in the planning layer is infeasible; likewise, the gap between the actually executed trajectory and the desired one, caused by an imperfect tracking controller, may lead to safety violations even when the planning layer provides a safe reference trajectory. Moreover, the decoupled paradigm imposes high hardware requirements for the efficient execution of each module, which low-cost platforms cannot meet. Control barrier functions (CBFs) and control Lyapunov functions (CLFs) provide powerful tools for dynamic systems to incorporate safety constraints such that safe control can be achieved with closed-form solutions. In this research, we use CBFs and CLFs to enforce hard safety constraints in the coordination control of multi-agent systems with complex dynamics or complicated tasks. The CBFs and CLFs are determined case by case, with feasibility, invariance, and stability proved. Through this research, we are dedicated to developing novel safety-critical control approaches for practical multi-agent systems. [Details]
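
As a minimal illustration of how a CBF yields a closed-form safe controller, the sketch below filters a nominal go-to-goal input for a single-integrator agent so that it never enters a disc-shaped obstacle. The dynamics, barrier, and gains are simplified assumptions of ours; the systems studied in this research have more complex dynamics.

    import numpy as np

    def cbf_safe_control(x, u_nom, x_obs, r=1.0, alpha=1.0):
        # barrier h(x) = ||x - x_obs||^2 - r^2 for the single integrator x_dot = u;
        # enforcing h_dot + alpha*h >= 0 keeps h >= 0, so the disc is never entered
        a = 2.0 * (x - x_obs)                       # gradient of h (assumes x != x_obs)
        b = -alpha * (np.dot(x - x_obs, x - x_obs) - r**2)
        # closed-form solution of the QP:  min ||u - u_nom||^2  s.t.  a.u >= b
        slack = b - np.dot(a, u_nom)
        if slack <= 0.0:
            return u_nom                            # nominal input is already safe
        return u_nom + (slack / np.dot(a, a)) * a   # smallest correction onto the constraint

    # one control step: the nominal controller points straight through the obstacle
    x, goal, x_obs = np.array([-2.0, 0.1]), np.array([2.0, 0.0]), np.array([0.0, 0.0])
    print(cbf_safe_control(x, u_nom=goal - x, x_obs=x_obs))

Because the correction has a closed form, no numerical planner is needed in the loop, which is precisely what makes the CBF/CLF route attractive for low-cost platforms.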

Robust Filtering and Control of Epidemic Models

Abstract: An epidemic model depicts the dynamics of information that diffuses in a social network. The investigation of epidemic models is motivated by the desire to study and control the spreading of viruses or rumors. In recent years, the modeling and control of epidemics have been widely studied in the fields of sociology, psychology, health care, and social media management. In the first half of 2020, the outbreak of the COVID-19 pandemic raised concerns about the reliability of current public health systems, attracting further attention to the control problem of epidemic models. In this research, we investigate the filtering and robust control of epidemic models subject to uncertain disturbances. Although the susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models are able to describe most epidemic scenarios, the inherent randomness and uncertainties of the individuals in the epidemic network are still not fully investigated. As a result, some random phenomena and exceptional individual behaviors still lack justified interpretations, and control methods that ignore these uncertainties may produce results completely different from reality. Under this topic, we seek a robust control framework for epidemic models. By investigating the control solutions for these problems, we hope to produce a generic framework for the robust filtering and control of epidemic models with practical application potential. [Details]
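
For reference, the sketch below simulates the SIR dynamics named above with a bounded random perturbation on the contact rate, the kind of model uncertainty that robust filtering and control must absorb. All parameter values are illustrative assumptions of ours, not calibrated to any real outbreak.

    import numpy as np

    def simulate_sir(beta=0.3, gamma=0.1, dt=0.1, T=200.0, noise=0.05, seed=0):
        # SIR dynamics on population fractions:
        #   S' = -beta*S*I,   I' = beta*S*I - gamma*I,   R' = gamma*I
        # with beta perturbed at every step to model uncertain contact behavior
        rng = np.random.default_rng(seed)
        S, I, R = 0.99, 0.01, 0.0
        for _ in range(int(T / dt)):
            b = beta + rng.uniform(-noise, noise)   # uncertain contact rate
            new_inf = b * S * I * dt
            new_rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        return S, I, R

    print(simulate_sir())   # the final epidemic size varies with the realized uncertainty

Different disturbance realizations lead to visibly different outbreak outcomes, which is why a controller that ignores the uncertainty can behave very differently from reality.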