AI Welcomes Systems Engineering:

Towards the Science of Interdependence for Autonomous Human-machine Teams

AAAI Spring Symposium Series

Stanford, CA, USA, March 23-25, 2020

Symposium Blurb

Compared with a collection of the same individuals working independently, the members of an interdependent team are significantly more productive. Yet interdependence is insufficiently studied to provide an efficient operational architecture for human-machine or machine-machine teams. Interdependence in a team creates bistable effects among humans, characterized by tradeoffs that affect the design, performance, networks, and other aspects of operating autonomous human-machine teams. Solving these next-generation problems with the AI and systems engineering (SE) of human-machine teams requires multidisciplinary approaches. Namely, the science of interdependence for autonomous human-machine teams requires contributions not only from AI, including machine learning (ML), and from SE, including the verification and validation of systems using AI/ML, but also from other disciplines, to establish an approach that allows a human and a machine to operate as teammates. This approach includes simulation and training environments where humans and machines can co-adapt, with stable operational outcomes assured by evidence-based frameworks. As a general rule, users interfacing with machine-learning algorithms require the information fusion (IF) of data to achieve limited autonomous operations; but as autonomy increases, a wider spectrum of capabilities, such as transfer learning, becomes necessary (a minimal sketch follows this paragraph).

Fundamentally, for human-machine teams to become autonomous, the science of how humans and machines operate interdependently in a team requires contributions from, among others, the social sciences, to study how context is interdependently constructed among teammates, how trust is affected when humans and machines depend upon each other, how human-machine teams are to train with each other, and how human-machine teams need a bidirectional language of explanation; the law, to determine legal responsibility for misbehavior and accidents; ethics, to know the limits of morality; and sociology, to guide appropriate team behaviors across society and different cultures, the workplace, healthcare, and combat.

We also need to know the psychological impact on humans of teaming with machines that can think faster than humans, whether in relatively mundane situations such as self-driving cars; in more complex but still traditional decision situations such as combat, with humans “in the loop” (e.g., the Navy’s Ghost Fleet, the Army’s self-driving combat convoy teams, or the Marine Corps’ ordnance disposal teams); or in the more daunting scenarios where humans are merely “on the loop” as observers of decisions (e.g., the Air Force’s aggressive, dispensable, “attritable” drones flying as wingmen for an F-35).
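
To make the transfer-learning remark above concrete, here is a minimal sketch in Python using PyTorch/torchvision; the toolkit, the resnet18 model, the four-class head, and the learning rate are illustrative assumptions only, not anything prescribed by the symposium:

    # Minimal transfer-learning sketch: reuse a feature extractor pretrained on
    # a source task, then fine-tune only a new head for the target task.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network pretrained on a large source dataset (ImageNet).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor so its weights are not updated.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classification head for a hypothetical target task with,
    # say, four classes drawn from the new operating context.
    model.fc = nn.Linear(model.fc.in_features, 4)

    # Fine-tune only the new head's parameters on target-task data.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

The point of the sketch is the design choice, not the code: knowledge learned in one operating context is carried over and adapted to another, the kind of capability a human-machine team would need as its autonomy increases.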

Topics: AI/ML; Autonomy; Systems Engineering; Human-Machine Teams (HMT); Machine Explanations of Decisions; Context

New: Springer book contract under a new title: Systems Engineering and Artificial Intelligence

New: Chapter drafts due: December 15, 2020; edits returned to authors: January 15, 2021; camera-ready finals due: March 1, 2021; book published: May 15, 2021