The intersection of robust intelligence (RI) and trust in autonomous systems
Organizers: Jennifer Burke (Boeing), Alan Wagner (Georgia Tech Research Institute; GTRI), Don Sofge (Naval Research Laboratory; NRL) & Bill Lawless (Paine College)
From our call-for-proposals announcement on AAAI's Spring Symposia site: http://www.aaai.org/Symposia/Spring/sss14symposia.php#ss05
This AAAI Symposium will explore the intersection of robust intelligence (RI) and trust across multiple contexts among autonomous hybrid systems (where hybrids are arbitrary combinations of humans, machines and robots). We seek methods for structuring teams or networks that increase robust intelligence and engender trust among a system of agents. We also ask: which questions are critical to the static and dynamic aspects of agent behavior, and which metrics best capture agent performance?
Systems that learn, adapt, and apply experience may be better suited to respond to novel environmental challenges; their robustness through learning, adaptation, and structure can be assessed by predicting and modeling the interactions of autonomous hybrid enterprises. How can we use these data to develop models indicative of normal or abnormal operations in a given context? We hypothesize that such models improve enterprise intelligence by allowing autonomous entities to adapt continually within normalcy bounds, leading to greater reliability and trust.
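The normalcy-bounds hypothesis above can be made concrete with a small sketch. This is our own illustration, not a method prescribed by the symposium: the window size, the threshold `k`, and the function names are hypothetical choices, assuming a scalar telemetry stream from an autonomous agent.

```python
from collections import deque

def make_normalcy_monitor(window=20, k=3.0):
    """Return a function that flags a value as abnormal when it falls
    outside mean +/- k*std of the last `window` observations
    (a simple, continually adapting normalcy bound)."""
    history = deque(maxlen=window)

    def observe(x):
        if len(history) >= 2:
            mean = sum(history) / len(history)
            var = sum((v - mean) ** 2 for v in history) / (len(history) - 1)
            std = var ** 0.5
            abnormal = abs(x - mean) > k * std
        else:
            abnormal = False  # not enough evidence to judge yet
        history.append(x)     # bounds adapt as new data arrive
        return abnormal

    return observe
```

Because the window slides, the bounds themselves drift with the system's recent behavior, which is one way an autonomous entity could "adapt within normalcy bounds" rather than against a fixed specification.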
To better understand and manage RI with AI to promote trust in autonomous agents and teams, our interest is in the theory, network models, mathematics, computational models, associations, and field applications at the intersection of RI and trust. We are interested not only in the effectiveness of a team's multitasking or in the construction of RI networks and models, but also in the efficiency and trust engendered among interacting participants.
Part of our symposium is devoted to a better understanding of the intersection of RI and trust for humans interacting with other humans and human groups (e.g., teams, firms, systems, and the networks among these social objects). Our goal is to use this information with AI not only to model RI and trust, but also to predict outcomes from interactions between autonomous hybrid groups (e.g., hybrid teams in multitasking operations).
Systems that learn, adapt, and apply their experience to the problems faced in an environment may be better suited to respond to new and unexpected challenges. One could argue that such systems are “robust” to the prospect of a dynamic and occasionally unpredictable world. We expect that systems exhibiting this type of robustness would afford others that must interact with them a greater degree of trust. For instance, an autonomous vehicle that, in addition to driving to different locations, can also warn a passenger of locations where it should not drive would likely be viewed as more robust than a similar system without such a warning capability. But would it be viewed as more trustworthy? This workshop endeavors to examine such questions that lie at the intersection of robust intelligence and trust. Problems such as these are particularly difficult because they imply situational variations that may be hard to define.
This workshop will center on how robust intelligence impacts trust in the system and how trust in the system makes it more or less robust. We intend to explore approaches to RI and trust that include, among others, intelligent networks, intelligent agents, and multitasking by hybrid groups (i.e., arbitrary combinations of humans, machines and robots).
Robust intelligence (RI) has not been easy to define. We propose an approach to RI with artificial intelligence (AI) that may include, among other approaches, the science of intelligent networks, the generation of trust among intelligent agents, and multitasking among hybrid groups (humans, machines and robots). RI is the goal of several government projects to explore intelligence at the level of humans, including projects directed by NSF, the US Army, and the USAF. DARPA has a program on physical intelligence that is attempting to produce the first example of “intelligent” behavior under thermodynamic pressure from the environment. Carnegie Mellon University has a program to build a robot that can execute “complex tasks in dangerous … environments”. IEEE publishes the journal Intelligent Systems to address various topics on intelligence in automation, including trust; social computing; health; and, among others, coalitions that make the “effective use of limited resources to achieve complex and multiple objectives.” From another perspective, IBM built a program that beat the reigning world chess champion in 1997, another program that won at Jeopardy! in 2011, and an intelligent operations center for the management of cities, transportation, and water. There may be many other ways to define or approach RI, and just as many open questions about how to measure it.
In an attempt to advance AI with a better understanding and management of RI, our interest is in the theory, network models, mathematics, computational models, associations, and field applications of RI. This means that we are interested not only in the effectiveness of multitasking or in the construction of RI networks and models, but also in the efficiency and trust engendered among the participants during interactions.
Part of our symposium is devoted to a better understanding of RI and the autonomy it produces when humans interact with other humans and human groups (e.g., teams, firms, systems, and the networks among these social objects). Our ultimate goal is to use this information with AI not only to model RI and autonomy, but also to predict the outcomes of interactions between hybrid groups (e.g., hybrid teams composed arbitrarily of humans, machines and robots) that interdependently generate networks and trust.
Multitasking: For multitasking with human teams and firms, interdependence is an important element in their RI: e.g., the Army is attempting to develop a robot that can produce "a set of intelligence-based capabilities sufficient to enable the teaming of autonomous systems with Soldiers." ONR is studying robust teamwork. But a team’s interdependence also introduces uncertainty, fundamentally impacting measurement.
Interdependence: Conventional computational models let agents act independently of their neighbors; for example, a predator mathematically consumes its prey, or not, as a function of a random interaction process. Interdependence instead means that agents dynamically respond to bi-directional signals of the actual or potential presence of other agents (e.g., in states poised for fight or flight), a significant increase over conventional modeling complexity. Because this problem remains unsolved, mathematically and conceptually, hybrid teams based on artificial intelligence cannot yet process information the way human teams do when operating under joint challenges and perceived threats.
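To make the modeling distinction concrete, here is a minimal toy sketch of the two interaction styles. It is our own illustration, not drawn from any cited model; the signal and alertness parameters are hypothetical.

```python
import random

def independent_encounter(rng, p_consume=0.3):
    """Conventional model: the outcome is a fixed-probability draw,
    independent of either agent's state."""
    return rng.random() < p_consume

def interdependent_encounter(rng, predator_signal, prey_alertness,
                             p_consume=0.3):
    """Interdependent model: the prey responds to the predator's
    signaled presence (fight-or-flight), so the consumption
    probability is a bi-directional function of both agents' states."""
    # An alert prey that detects a strong predator signal mostly escapes.
    escape = prey_alertness * predator_signal
    effective_p = p_consume * (1.0 - escape)
    return rng.random() < effective_p
```

Even in this toy form, raising either agent's responsiveness changes the other's effective outcome distribution; that coupling is exactly what independent random-interaction models omit, and what makes interdependent hybrid teams harder to analyze.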
This AAAI Symposium will explore the various aspects and meanings of robust intelligence, networks and trust between humans, machines and robots in different contexts, and the social dynamics of networks and trust in teams or organizations composed of autonomous machines and robots working together with humans. We will seek to identify and/or develop methods for structuring networks and engendering trust between agents, to consider the static and dynamic aspects of behavior and relationships, and to propose metrics for measuring the outcomes of interactions. Details, including invited speakers and program members, will be posted at https://sites.google.com/site/aaairobustintelligence/, with a summary at AAAI's Spring Symposia site: http://www.aaai.org/Symposia/Spring/sss14.php.
This AAAI symposium will seek to address these specific topics and questions:
· How can robust intelligence be instantiated?
· What is RI for an individual agent? A team? Firm? System?
· What is a robust team?
· What is the association between RI and autonomy?
· What metrics exist for robust intelligence, trust or autonomy between individuals or groups, and how well do these translate to interactions between humans and autonomous machines?
· What are the connotations of “trust” in various settings and contexts?
· How do concepts of trust differ across human-human, human-machine, machine-human, and machine-machine relationships when collaborating on a task?
· What metrics for trust currently exist for evaluating machines (possibly including such factors as reliability, repeatability, intent, and susceptibility to catastrophic failure) and how may these metrics be used to moderate behavior in collaborative teams including both humans and autonomous machines?
· How do trust relationships affect the social dynamics of human teams, and are these effects quantifiable?
· What validation procedures could be used to engender trust between a human and an autonomous machine?
· What algorithms or techniques are available to allow machines to develop trust in a human operator or another autonomous machine?
· How valid are the present conceptual models of human networks? Mathematical models? Computational models?
· How valid are the present conceptual models of autonomy in networks? Mathematical models? Computational models?
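As one concrete candidate answer to the algorithmic questions above, a machine could maintain trust in an operator (or another machine) with a beta reputation model in the spirit of Jøsang and Ismail, where trust is the expected value of a Beta distribution over observed interaction outcomes. The class below is our own minimal sketch, not a method prescribed by the call:

```python
class BetaTrust:
    """Trust as E[Beta(successes + 1, failures + 1)] over observed
    interaction outcomes with a particular partner."""

    def __init__(self):
        self.successes = 0  # cooperative / reliable interactions
        self.failures = 0   # defections / faults

    def update(self, outcome_ok):
        """Record one interaction outcome (True = reliable)."""
        if outcome_ok:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self):
        # Expected value of the Beta posterior; starts at 0.5
        # (maximum uncertainty) before any evidence is observed.
        return (self.successes + 1) / (self.successes + self.failures + 2)
```

Under this scheme a machine's trust in a partner starts at 0.5 and converges toward the partner's observed reliability as interactions accumulate, which also gives a natural quantitative handle on several of the metric questions above.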
Specific Instructions: Papers should address issues associated with “The intersection of robust intelligence (RI) and trust in autonomous systems”. They should also specify the relevance of their topic to AI, or propose a method involving AI to be used to help address their particular issue. Potential topics include (but are not limited to) the following:
Robust Intelligence (RI) Topics:
· Computational, mathematical, conceptual models of robust intelligence
· Metrics of robust intelligence
· Is a model of thermodynamics possible with RI (i.e., using physical thermodynamic principles, can “intelligent” behavior be addressed under thermodynamic pressure from the environment?)
Trust Topics:
· Computational, mathematical, conceptual models of trust in autonomous systems
· Human requirements for trust and trust in machines
· Machine requirements for trust and trust in humans
· Methods for engendering and measuring trust among humans and machines
· Metrics for deception among humans and machines
· Other computational and heuristic models of trust relationships, and related behaviors, in teams of humans and machines
· Models of individual, group, firm autonomous system behaviors
· Mathematical models of multi-tasking in a team (e.g., entropy levels overall and by individual agents; energy levels overall and by individual agents).
· Constructing, measuring, and assessing networks; e.g., the density of chat networks among human operators controlling multiple unmanned aerial vehicles.
· For networks, specify whether the application is for humans, machines, robots or a combination; e.g., the density of inter-robot communications.
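For the two quantitative topics above (task-allocation entropy in a multitasking team, and the density of a chat network), here is a minimal sketch; the operator names and data are hypothetical, and the measures are standard Shannon entropy and undirected graph density:

```python
import math

def allocation_entropy(task_counts):
    """Shannon entropy (bits) of how a team's effort is spread across
    tasks; higher entropy means a more even multitasking load."""
    total = sum(task_counts)
    probs = [c / total for c in task_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def network_density(nodes, edges):
    """Fraction of possible undirected links that are present,
    e.g., observed chat pairs among UAV operators."""
    possible = len(nodes) * (len(nodes) - 1) / 2
    present = {frozenset(e) for e in edges}  # ignore direction and repeats
    return len(present) / possible
```

Tracking both quantities over time would let an analyst relate a team's multitasking spread to the connectivity of its communication network, one simple operationalization of the topics listed above.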
Papers should use the format specified by AAAI and may be either 2-page abstracts or up to 8 pages for final submissions. The AAAI AuthorKit includes templates and further formatting instructions, and may be accessed here:
Jennifer Burke, Boeing; firstname.lastname@example.org
Alan Wagner, Georgia Tech Research Institute; Alan.Wagner@gtri.gatech.edu
Don Sofge, Naval Research Laboratory; email@example.com
William F. Lawless, Paine College; firstname.lastname@example.org
Keynote Speakers: TBD
Program Committee (names and email addresses after individual commitments): TBD
Important Dates (dates were changed by AAAI on 9/30/2013):
October 22, 2013 (this date was changed by AAAI on 9/30/2013): Submissions due to organizers (coordinating with one of the organizers, this can be an abstract at this point, with full drafts of both short (2 pages) and long (6-8 pages) papers by October 18th, and final papers due by December 31st).
November 1, 2013 (this date was changed by AAAI on 9/30/2013): Notifications of acceptance sent by organizers.
December 31st: Camera-ready copies are due to the organizers to assure proper formatting and in preparation for the Preface and the Agenda ("soft" date to help authors confirm formatting, etc.).
January 17, 2014 (this date was changed by AAAI on 9/30/2013): Accepted camera-ready copy due to AAAI (hard date).
Submissions: Submit abstracts, short papers (2 pages), or long papers (6-8 pages) to one or all of the organizers. Acceptance notifications will be made by November 1st; camera-ready copies are due to the organizers by December 31st (soft date), and to AAAI along with the Preface and Agenda by January 17th (hard date).
The strongest submissions to this workshop will be considered for inclusion in a special issue of AAAI Magazine on Trust and Autonomous Systems.
Registration dates: Notices by email directly from AAAI in coordination with the organizers. Also, see the AAAI webpage (http://www.aaai.org/Symposia/Spring/sss14.php; and our page at AAAI: http://www.aaai.org/Symposia/Spring/sss14symposia.php#ss05)
References:
National Science Foundation (2013), “Robust Intelligence (RI)”; at http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503305&org=IIS.
 Gluck, K. 13.15.12.B0909: Robust Decision Making in Integrated Human-Machine Systems
 The Guardian (2012, 3/31), “AI robot: how machine intelligence is evolving”.
 ONR Command Decision Making (CDM) & Hybrid Human Computer Systems (HHCS) Annual Program Review, 4-7 June, 2013, Naval Research Lab, DC.
Lawless, W. F., Llinas, J., Mittu, R., Sofge, D. A., Sibley, C., Coyne, J., & Russell, S. (2013). "Robust Intelligence (RI) under uncertainty: Mathematical and conceptual foundations of autonomous hybrid (human-machine-robot) teams, organizations and systems." Structure and Dynamics, 6(2).
 New York Times (2012, 9/28), “Why the beaver should thank the wolf”: in Yellowstone’s National Park, “aspen and other native vegetation, once decimated by overgrazing, are now growing up along the banks … [in part] because elk and other browsing animals behave differently when wolves are around. Instead of eating down to the soil, they take a bite or two, look up to check for threats, and keep moving. [This means that the] greenery can grow tall enough to reproduce.”