Cognitive Autonomy for Human CPS

NSF CPS Frontier (2019-2025)

Mission

The objective of this research is a new architecture for modeling, prediction, and control of cyber-physical systems (CPS) with a human in the loop: an architecture that is highly responsive to the human yet maintains safety, reliability, and high performance.

[National Science Foundation (NSF) Funded Project]

Human interaction with autonomous cyber-physical systems is becoming ubiquitous in consumer products, transportation systems, manufacturing, and many other domains. This project seeks constructive methods to answer the question: How can we design cyber-physical systems to be responsive and personalized, yet also provide high-confidence assurances of reliability? Cyber-physical systems that adapt to the human, and account for the human's ongoing adaptation to the system, could have enormous impact in everyday life as well as in specialized domains (biomedical devices and systems, transportation systems, manufacturing, military applications), by significantly reducing training time, increasing the breadth of the human's experiences with the system prior to operation in a safety-critical environment, improving safety, and improving both human and system performance. Architectures that support dynamic interactions, enabled by advances in computation, communication, and control, can leverage strengths of the human and the automation to achieve new levels of performance and safety.

This research investigates a human-centric architecture for "cognitive autonomy" that couples human psychophysiological and behavioral measures with objective measures of performance. We focus on three elements: 1) computable cognitive models that are amenable to control, yet highly customizable, responsive to the human, and context dependent; 2) cognitive control, which collaboratively assures both desired safety properties and human performance metrics; and 3) simulation and experiment platforms for validating our algorithms and frameworks. See below for details!

We are working with eight principal investigators from five universities: Purdue University, University of New Mexico, University of Texas at Austin, University of Colorado Boulder, and Pennsylvania State University. 

The project started in 2019 and concludes in 2025. However, our research on human-machine interaction will continue!

Research Topics and Products

1) Computable Cognitive Models

Inverse Optimal Control for Skill-Level Inference

We have developed inverse optimal control (IOC) approaches to learn task objective functions from human behavioral data (e.g., a human-controlled drone's trajectory). The learned task objective functions can be used to infer the human operator's skill level, based on the observation that an expert's task objective function differs from a novice's. Furthermore, we have extended the IOC approach to infer time-varying and stochastic task objective functions that account for intrinsic uncertainties in human behavior: even when human operators attempt the same task multiple times, their behavior differs due to unintentional variability and is therefore better modeled as a probability distribution. This approach enables us to better understand and predict human behavior and to customize interaction schemes based on the inferred skill level.
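As a toy illustration of the IOC idea (not the published algorithm), the sketch below assumes a double-integrator plant and a quadratic cost with a single unknown position weight: it first identifies the operator's feedback gain from (state, input) data by least squares, then searches for the objective weight whose optimal LQR gain best matches the identified one.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double-integrator dynamics (assumed for illustration).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])

def lqr_gain(q_pos):
    """Optimal feedback gain for cost x'Qx + u'u with Q = diag(q_pos, 1)."""
    Q = np.diag([q_pos, 1.0])
    R = np.eye(1)
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Synthetic "expert" data: the optimal policy for a hidden weight q* = 5,
# observed with a small amount of input noise.
K_true = lqr_gain(5.0)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))                           # observed states
U = -(X @ K_true.T) + 0.01 * rng.standard_normal((200, 1))  # noisy inputs

# Step 1: identify the behavioral policy u = -K x by least squares.
K_hat = -np.linalg.lstsq(X, U, rcond=None)[0].T

# Step 2: search candidate objective weights for the best gain match.
candidates = np.linspace(0.5, 10.0, 96)
q_est = min(candidates, key=lambda q: np.linalg.norm(lqr_gain(q) - K_hat))
print(f"recovered position weight: {q_est:.2f}")  # close to the true 5.0
```

The recovered weight can then serve as a skill proxy: a novice's data would yield a noticeably different objective weight or a larger residual between the identified and optimal gains.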

Publications:

S. Byeon, D. Sun, and I. Hwang, "Human Behavior Modeling via Identification of Task Objective and Variability," arXiv:2404.14647, 2024.

S. Byeon, D. Sun and I. Hwang, "An Inverse Optimal Control Approach for Learning and Reproducing Under Uncertainties," in IEEE Control Systems Letters, vol. 7, pp. 787-792, 2023, doi: 10.1109/LCSYS.2022.3226882.

S. G. Clarke, S. Byeon and I. Hwang, "A Low Complexity Approach to Model-Free Stochastic Inverse Linear Quadratic Control," in IEEE Access, vol. 10, pp. 9298-9308, 2022, doi: 10.1109/ACCESS.2022.3144933.

S. Byeon, W. Jin, D. Sun and I. Hwang, "Human-Automation Interaction for Assisting Novices to Emulate Experts by Inferring Task Objective Functions," 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), 2021, pp. 1-6, doi: 10.1109/DASC52595.2021.9594324. [Student Paper Awards Finalist]

Safety Analysis for Human-in-the-loop Systems

We have developed data-driven algorithms to predict the future state distribution of a human-in-the-loop system while explicitly considering the control policy of the human operator. The proposed algorithms learn the human control policy, or the propagation of the state, as a Gaussian Mixture Model (GMM) and use it to predict the future state. The resulting predictions can reduce unnecessary conservatism by incorporating the human control policy, i.e., via closed-loop analysis. Recently, we have also investigated the Koopman operator, which represents the state propagation of a nonlinear system with a linear operator, to model more complex and sophisticated behavior of a human-in-the-loop system.
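A minimal sketch of the GMM-based closed-loop prediction idea, using a toy 1-D system and synthetic stand-in "human" data (the published algorithms are more sophisticated): a GMM is fit to joint (state, control) samples, controls are sampled from the conditional distribution p(u | x), and particles are propagated through the dynamics to approximate the future state distribution.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
dt = 0.1  # toy 1-D kinematics x_{t+1} = x_t + dt * u (illustrative only)

# Synthetic stand-in for logged human joystick data: a noisy
# proportional policy driving the state toward the origin.
x_log = rng.uniform(-2.0, 2.0, 1000)
u_log = -1.5 * x_log + 0.3 * rng.standard_normal(1000)

# Fit a GMM to the joint (state, control) samples.
gmm = GaussianMixture(n_components=2, random_state=0).fit(
    np.column_stack([x_log, u_log]))

def sample_control(x):
    """Sample u ~ p(u | x) from the fitted GMM (Gaussian conditioning)."""
    n = gmm.n_components
    w, mc, vc = np.empty(n), np.empty(n), np.empty(n)
    for k in range(n):
        (mx, mu), S = gmm.means_[k], gmm.covariances_[k]
        # Responsibility of component k for the observed state x.
        w[k] = gmm.weights_[k] * np.exp(-0.5 * (x - mx) ** 2 / S[0, 0]) \
               / np.sqrt(S[0, 0])
        # Conditional mean and variance of u given x for component k.
        mc[k] = mu + S[1, 0] / S[0, 0] * (x - mx)
        vc[k] = S[1, 1] - S[1, 0] ** 2 / S[0, 0]
    k = rng.choice(n, p=w / w.sum())
    return rng.normal(mc[k], np.sqrt(vc[k]))

# Closed-loop Monte Carlo prediction of the state distribution at T steps.
particles = np.full(500, 1.5)  # all particles start at x0 = 1.5
for _ in range(30):
    particles = particles + dt * np.array([sample_control(x) for x in particles])
print(f"predicted mean: {particles.mean():.3f}, std: {particles.std():.3f}")
```

Because the sampled controls come from the learned policy rather than a worst-case input set, the predicted state distribution stays tight around where the closed-loop system actually goes.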

Publications:

J. Choi, H. Park, and I. Hwang, “Bootstrapped Gaussian Mixture Model-Based Data-Driven Forward Stochastic Reachability Analysis,” IEEE Control Systems Letters, 2023, doi: 10.1109/LCSYS.2023.3347188

J. Choi, S. Byeon, and I. Hwang, “Data-driven Forward Stochastic Reachability Analysis for Human-in-the-Loop Systems,” 62nd IEEE Conference on Decision and Control, 2023, doi: 10.1109/CDC49753.2023.10383447

J. Choi, S. Byeon, and I. Hwang, “State Prediction of Human-in-the-Loop Multi-rotor System with Stochastic Human Behavior Model,” 4th IFAC Workshop on Cyber-Physical Human Systems, Houston, Texas, December 1-2, 2022, doi: 10.1016/j.ifacol.2023.01.113

2) Cognitive Control

Skill-Level based Shared Control

We have developed skill-level-based shared control frameworks to assist human novices in emulating human experts in complex dynamic control tasks (e.g., controlling drones). The shared control schemes dynamically adjust the level of assistance based on the inferred skill level, to prevent the frustration or tedium that poorly imposed assistance causes during human training. Furthermore, we have collaborated with Dr. Jain's laboratory at Purdue University to incorporate the human's cognitive states (e.g., self-confidence) and physiological data into the assistance determination policies. We have conducted multiple human user studies to examine the proposed frameworks.
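The blending rule below is a deliberately simplified sketch of skill-level-based assistance (the published schemes are richer): the automation's authority shrinks as the inferred skill level grows, so a novice is guided strongly while an expert retains control.

```python
def blend_control(u_human, u_assist, skill, skill_max=1.0):
    """Skill-adaptive blending (illustrative rule, not the published policy).

    skill in [0, skill_max]: low skill -> more assistance authority,
    high skill -> more human authority.
    """
    alpha = 1.0 - min(max(skill / skill_max, 0.0), 1.0)  # assistance level
    return (1.0 - alpha) * u_human + alpha * u_assist

# A novice (skill 0.2) commanding 1.0 while the assistant commands 0.0
# is pulled strongly toward the assistant's command.
print(blend_control(u_human=1.0, u_assist=0.0, skill=0.2))  # -> 0.2
```

In practice the skill estimate would come from the IOC-based inference described above, and the blending can be applied per control axis (e.g., separately to roll and thrust).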

Publications:

S. Byeon, J. Choi, Y. Zhang, and I. Hwang, "Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario," ACM Transactions on Human-Robot Interaction, 2023, doi: 10.1145/3603194.

M. S. Yuh, S. Byeon, I. Hwang, N. Jain, "A Heuristic Strategy for Cognitive State-based Feedback Control to Accelerate Human Learning," IFAC-PapersOnLine, Volume 55, Issue 41, 2022, doi: 10.1016/j.ifacol.2023.01.111.

M. Yuh, S. Byeon, N. Jain and I. Hwang, "Evaluation of Cognitive State Feedback for Accelerating Human Learning," 2022 American Control Conference (ACC), Atlanta, GA, USA, 2022, pp. 3364-3364, doi: 10.23919/ACC53348.2022.9867342.

S. Byeon, D. Sun and I. Hwang, "Skill-level-based Hybrid Shared Control for Human-Automation Systems," 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 2021, pp. 1507-1512, doi: 10.1109/SMC52423.2021.9658994.

Human-Autonomy Teaming

Human-autonomy teaming (HAT) is a collaborative working strategy involving human(s) and autonomous agent(s). Recent HAT studies have demonstrated that autonomous agents can enhance task performance when they work as teammates with humans rather than as mere tools. However, introducing autonomous agents does not mean that humans can simply delegate team tasks to them in complex environments. An effective HAT design should consider the unique capabilities, limitations, and interdependencies of each teammate to foster symbiotic teamwork. To address these challenges, we have developed a computational function allocation approach that leverages cognitive engineering, computational work models, and optimization techniques. The approach comprises three main elements: 1) analyzing the teamwork to identify the set of all possible function allocations within a team composition; 2) numerically simulating the teamwork in temporal semantics to explore, in a trial-and-error manner, how the team interacts with complex environments under the identified function allocations; and 3) optimizing the adaptive function allocation with respect to a given situation, such as physical conditions, available information resources, and human mental workload. For the optimization, we use performance metrics such as task performance, human mental workload, and coherency in function allocations.
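The brute-force sketch below illustrates the optimization step on a hypothetical four-function team; the function names, workload scores, and capability scores are made-up illustrative numbers, not project data. It enumerates all human/agent allocations and picks the one minimizing a weighted sum of human workload, agent performance loss, and allocation switches (a simple coherency term).

```python
from itertools import product

# Hypothetical teamwork model: each function is assigned to "human" or
# "agent". All numbers below are invented for illustration.
functions = ["monitor", "navigate", "communicate", "manipulate"]
workload = {"monitor": 0.2, "navigate": 0.5,
            "communicate": 0.3, "manipulate": 0.6}
agent_capability = {"monitor": 0.9, "navigate": 0.8,
                    "communicate": 0.4, "manipulate": 0.3}

def cost(alloc, prev_alloc, w_load=1.0, w_perf=1.0, w_switch=0.2):
    """Weighted cost: human mental workload + agent performance loss
    + penalty on reallocations (coherency)."""
    human_load = sum(workload[f]
                     for f, who in zip(functions, alloc) if who == "human")
    perf_loss = sum(1.0 - agent_capability[f]
                    for f, who in zip(functions, alloc) if who == "agent")
    switches = sum(a != b for a, b in zip(alloc, prev_alloc))
    return w_load * human_load + w_perf * perf_loss + w_switch * switches

prev = ("human",) * 4  # previously, the human performed everything
best = min(product(["human", "agent"], repeat=4), key=lambda a: cost(a, prev))
print(dict(zip(functions, best)))
# -> {'monitor': 'human', 'navigate': 'agent',
#     'communicate': 'human', 'manipulate': 'human'}
```

With only a handful of functions, exhaustive enumeration is feasible; the published framework additionally simulates the team's temporal interaction with the environment before scoring each allocation.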

Publication:

S. Byeon, J. Choi and I. Hwang, "A Computational Framework for Optimal Adaptive Function Allocation in a Human-Autonomy Teaming Scenario," in IEEE Open Journal of Control Systems, vol. 3, pp. 32-44, 2024, doi: 10.1109/OJCSYS.2023.3340034.

3) Simulation and Experiment Platforms

Python-based 2D Interactive Games

We have developed an experimental testbed, called Quadrotor Simulator, for the development, testing, and validation of shared control and cognitive state estimation algorithms. Human users are tasked with landing a quadrotor safely on a touchpad using a joystick while monitoring its current status on a screen. The control inputs are the roll angular acceleration and thrust. The quadrotor's initial position can be randomly generated or set to a specific location. All human behavioral data are automatically saved with timestamps. The simulator is written in Python using open-source libraries, so it can be easily extended to new scenarios for different experiments without licensing concerns.
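A simplified planar-quadrotor model with these two control inputs (roll angular acceleration and thrust) might look like the following; this is an illustrative sketch of such a testbed's dynamics, not the simulator's actual code.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def step(state, roll_acc, thrust, dt=0.02):
    """One Euler integration step of planar quadrotor dynamics.

    state = [y, z, vy, vz, roll, roll_rate]; the inputs are the roll
    angular acceleration and total thrust (mass normalized to 1).
    """
    y, z, vy, vz, roll, roll_rate = state
    ay = thrust * np.sin(roll)       # horizontal accel from tilting
    az = thrust * np.cos(roll) - G   # vertical accel minus gravity
    return np.array([y + dt * vy, z + dt * vz,
                     vy + dt * ay, vz + dt * az,
                     roll + dt * roll_rate, roll_rate + dt * roll_acc])

# Hover check: zero tilt and thrust equal to gravity holds the state.
s = step(np.zeros(6), roll_acc=0.0, thrust=G)
print(s)  # stays at the origin with zero velocity
```

Joystick deflections would be mapped to `roll_acc` and `thrust` each frame, and the resulting state trajectory logged with timestamps for the IOC and shared-control experiments described above.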

3D Aerial Vehicle Simulators

We have developed 3D aerial vehicle simulators to develop, test, and demonstrate algorithms and frameworks for shared control, human training, and human-autonomy teaming. The 3D simulators provide complex, realistic scenarios and are easy to modify for future use. The first version of the simulator focuses on a human training scenario for Urban Air Mobility (UAM) applications. The second version provides realistic environments for human-autonomy teaming and function allocation research with multiple drones. The simulators are powered by AirSim (with Unreal Engine), an open-source simulator for physically and visually realistic simulations.

Drone Hardware with Motion Capture System

We have established an indoor environment for conducting human user studies with drone hardware. The setup includes various open-source components: Crazyflie drones, a Qualisys motion capture system for real-time tracking and recording of drone positions and attitudes, and a mixed-reality interface that allows the drones to interact with virtual objects without altering the physical environment. A single human operator can interact with multiple drones simultaneously using a graphical user interface, hand gestures, speech, or a joystick; tests have been successfully conducted with up to four drones at once. To enable the drones to perform task allocation and motion planning as they would in a real environment, we have developed a custom code library for use in experiments.

Sponsor

NSF CPS Frontier: Cognitive Autonomy for Human CPS: Turning Novices into Experts

NSF CNS-1836952

From 2019 to 2025

This material is based upon work supported by the National Science Foundation under Grant Numbers CNS-1836900 and CNS-1836952. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.