
The intersection of robust intelligence (RI) and trust in autonomous systems

Organizers: Jennifer Burke (Boeing), Alan Wagner (Georgia Tech Research Institute; GTRI), Don Sofge (Naval Research Laboratory; NRL) & Bill Lawless (Paine College) 

**New: Agenda and slides as provided by speakers are attached below at Subpages**

Overview (Announcement):

From our call-for-proposals announcement at AAAI's Spring Symposia site: http://www.aaai.org/Symposia/Spring/sss14symposia.php#ss05

This AAAI Symposium will explore the intersection of robust intelligence (RI) and trust across multiple contexts among autonomous hybrid systems (where hybrids are arbitrary combinations of humans, machines and robots). We seek methods for structuring teams or networks that increase robust intelligence and engender trust among a system of agents. How can we identify the questions critical to the static and dynamic aspects of agent behavior, and what metrics best capture agent performance?

To better manage RI with AI and so promote trust in autonomous agents and teams, our interest lies in the theory, mathematics, computational models, and field applications at the intersection of RI and trust: not only in team-multitasking effectiveness or in modeling RI networks, but in the efficiency and trust engendered among interactants.

We seek to understand the intersection of RI and trust for humans interacting with systems (e.g., teams, firms, networks), to use this information with AI to model RI and trust, and to predict outcomes from interactions among hybrids (e.g., multitasking operations).

Systems that learn, adapt, and apply experience to problems may be better suited to respond to novel environmental challenges. One could argue that such systems are “robust” to the prospect of a dynamic and occasionally unpredictable world. We expect that systems exhibiting robustness would earn a greater degree of trust from the hybrids that interact with them. Robustness through learning, adaptation, and structure is assessed by predicting and modeling the interactions of autonomous hybrid enterprises. How can we use these data to develop models indicative of normal or abnormal operations in a given context? We hypothesize that such models improve enterprise intelligence by allowing autonomous entities to continually adapt within normalcy bounds, leading to greater reliability and trust.
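As one concrete illustration of what such a normalcy-bounds model might look like, the minimal Python sketch below learns a rolling mean and standard deviation of a scalar performance signal and flags observations that drift outside those bounds. All names, thresholds, and data are hypothetical, chosen only to make the idea executable; a fielded model would be far richer:

```python
import statistics
from collections import deque

class NormalcyModel:
    """Toy normalcy-bounds monitor: learns a rolling mean/stdev of a scalar
    performance signal and flags observations outside k standard deviations."""

    def __init__(self, window: int = 50, k: float = 3.0):
        self.history = deque(maxlen=window)  # recent in-bounds observations
        self.k = k

    def update(self, x: float) -> bool:
        """Return True if x is within current normalcy bounds (always True
        during an initial warm-up period while the model is still learning)."""
        if len(self.history) < 10:           # too little data: accept and learn
            self.history.append(x)
            return True
        mu = statistics.mean(self.history)
        sigma = statistics.stdev(self.history)
        normal = abs(x - mu) <= self.k * max(sigma, 1e-9)
        if normal:                           # adapt only within normalcy bounds
            self.history.append(x)
        return normal

monitor = NormalcyModel()
for reading in [1.0, 1.1, 0.9, 1.05, 1.0, 0.95, 1.1, 1.0, 0.9, 1.0, 5.0]:
    print(reading, "normal" if monitor.update(reading) else "ABNORMAL")
```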

The focus of this workshop is how robust intelligence impacts trust in the system and how trust in the system impacts robustness. We will explore approaches to RI and trust including, for example, intelligent networks, intelligent agents, and intelligent multitasking by hybrids.

Papers should address “The intersection of robust intelligence (RI) and trust in autonomous systems” and specify the relevance of their topic to AI, or explain how AI can be used to help address their issue.


Details

To better understand and manage RI with AI and so promote trust in autonomous agents and teams, our interest lies in the theory, network models, mathematics, computational models, associations, and field applications at the intersection of RI and trust. We are interested not only in a team’s multitasking effectiveness or in constructing RI networks and models, but in the efficiency and trust engendered among interacting participants.

Part of our symposium is devoted to a better understanding of the intersection of RI and trust for humans interacting with other humans and human groups (e.g., teams, firms, systems, and the networks among these social objects). Our goal is to use this information with AI not only to model RI and trust, but also to predict outcomes from interactions between autonomous hybrid groups (e.g., hybrid teams in multitasking operations).

Systems that learn, adapt, and apply their experience to the problems faced in an environment may be better suited to respond to new and unexpected challenges. One could argue that such systems are “robust” to the prospect of a dynamic and occasionally unpredictable world. We expect that systems exhibiting this type of robustness would afford others that must interact with them a greater degree of trust. For instance, an autonomous vehicle that, in addition to driving to different locations, can also warn a passenger of locations where it should not drive would likely be viewed as more robust than a similar system without such warning capability. But would it be viewed as more trustworthy? This workshop endeavors to examine such questions that lie at the intersection of robust intelligence and trust. Problems such as these are particularly difficult because they imply situational variations that may be hard to define.

This workshop will center on how robust intelligence impacts trust in a system and how trust in the system makes it more or less robust. We intend to explore approaches to RI and trust that include, among others, intelligent networks, intelligent agents, and multitasking by hybrid groups (i.e., arbitrary combinations of humans, machines and robots).

Background: 

Robust intelligence (RI) has not been easy to define. We propose an approach to RI with artificial intelligence (AI) that may include, among other approaches, the science of intelligent networks, the generation of trust among intelligent agents, and multitasking among hybrid groups (humans, machines and robots). RI is the goal of several government projects exploring intelligence at the level of humans, including projects directed by NSF;[1] the US Army;[2] and the USAF.[3] DARPA has a program on physical intelligence that is attempting to produce the first example of “‘intelligent’ behavior under thermodynamic pressure from their environment.”[4] Carnegie Mellon University has a program to build a robot that can execute “complex tasks in dangerous … environments”.[5] IEEE publishes the journal Intelligent Systems to address various topics on intelligence in automation, including trust; social computing; health; and, among others, coalitions that make the “effective use of limited resources to achieve complex and multiple objectives.”[6] From another perspective, IBM built a program that beat the reigning world champion at chess in 1997, another program that won at the game Jeopardy! in 2011,[7] and an intelligent operations center for the management of cities, transportation, and water.[8] There may be multiple other ways to define or approach RI, and just as many ways to measure it.

In an attempt to advance AI with a better understanding and management of RI, our interest is in the theory, network models, mathematics, computational models, associations, and field applications of RI. We are interested not only in multitasking effectiveness or in constructing RI networks and models, but in the efficiency and trust engendered among the participants during interactions.

Part of our symposium is devoted to a better understanding of RI and the autonomy it produces for humans interacting with other humans and human groups (e.g., teams, firms, systems, and the networks among these social objects). Our ultimate goal is to use this information with AI not only to model RI and autonomy, but also to predict the outcomes of interactions between hybrid groups (e.g., hybrid teams composed arbitrarily of humans, machines and robots) that interdependently generate networks and trust.

Multitasking: For multitasking with human teams and firms, interdependence is an important element in their RI: e.g., the Army is attempting to develop a robot that can produce "a set of intelligence-based capabilities sufficient to enable the teaming of autonomous systems with Soldiers."[9] ONR is studying robust teamwork.[10] But a team’s interdependence also introduces uncertainty, fundamentally impacting measurement.[11]

Interdependence: Conventional computational models treat agents as acting independently of their neighbors; for example, a predator mathematically consumes its prey, or not, as a function of a random interaction process. Under interdependence, by contrast, agents dynamically respond to the bi-directional signals of the actual or potential presence of other agents (e.g., in states poised for fight or flight),[12] a significant increase over conventional modeling complexity. Because this problem remains unsolved, both mathematically and conceptually, hybrid teams based on artificial intelligence cannot yet process information the way human teams do when operating under joint challenges and perceived threats.
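To make the contrast concrete, the Python sketch below (purely illustrative, not drawn from the cited literature) compares a conventional random-encounter update with an interdependent one in which prey respond to a signaled predator presence; all parameter values are hypothetical:

```python
import random

def random_encounter_step(prey: int, p_consume: float = 0.3) -> int:
    """Conventional model: each prey survives or is consumed independently
    at random, with no response to signals from other agents."""
    return sum(1 for _ in range(prey) if random.random() > p_consume)

def interdependent_step(prey: int, predator_signaled: bool,
                        p_consume: float = 0.3, vigilance: float = 0.8) -> int:
    """Interdependent model: prey that detect a predator's actual or potential
    presence shift to a vigilant state that reduces consumption risk, so the
    outcome depends on the bi-directional signaling between the parties."""
    p = p_consume * (1.0 - vigilance) if predator_signaled else p_consume
    return sum(1 for _ in range(prey) if random.random() > p)

random.seed(0)
print("random encounters:  ", random_encounter_step(100))
print("vigilant (signaled):", interdependent_step(100, predator_signaled=True))
```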

This AAAI Symposium will explore the various aspects and meanings of robust intelligence, networks, and trust between humans, machines, and robots in different contexts, and the social dynamics of networks and trust in teams or organizations composed of autonomous machines and robots working together with humans. We will seek to identify and/or develop methods for structuring networks and engendering trust between agents, to consider the static and dynamic aspects of behavior and relationships, and to propose metrics for measuring the outcomes of interactions. Details, including invited speakers and program committee members, are posted at https://sites.google.com/site/aaairobustintelligence/, with a summary at AAAI's Spring Symposia site: http://www.aaai.org/Symposia/Spring/sss14.php.

This AAAI symposium will seek to address these specific topics and questions:

·      How can robust intelligence be instantiated?

·      What is RI for an individual agent? A team? Firm? System?

·      What is a robust team?

·      What is the association between RI and autonomy?

·      What metrics exist for robust intelligence, trust or autonomy between individuals or groups, and how well do these translate to interactions between humans and autonomous machines?

·      What are the connotations of “trust” in various settings and contexts?

·      How do concepts of trust between humans collaborating on a task differ across human-human, human-machine, machine-human, and machine-machine trust relationships?

·      What metrics for trust currently exist for evaluating machines (possibly including such factors as reliability, repeatability, intent, and susceptibility to catastrophic failure) and how may these metrics be used to moderate behavior in collaborative teams including both humans and autonomous machines?

·      How do trust relationships affect the social dynamics of human teams, and are these effects quantifiable?

·      What validation procedures could be used to engender trust between a human and an autonomous machine?

·      What algorithms or techniques are available to allow machines to develop trust in a human operator or another autonomous machine?

·      How valid are the present conceptual models of human networks? Mathematical models? Computational models?

·      How valid are the present conceptual models of autonomy in networks? Mathematical models? Computational models?

 

Specific Instructions: Papers should address issues associated with “The intersection of robust intelligence (RI) and trust in autonomous systems.” They should also specify the relevance of their topic to AI, or propose a method involving AI to help address their particular issue. Potential topics include (but are not limited to) the following:

Robust Intelligence (RI) Topics:

·      Computational, mathematical, conceptual models of robust intelligence

·      Metrics of robust intelligence

·      Is a model of thermodynamics possible with RI (i.e., using physical thermodynamic principles, can “intelligent” behavior be addressed under thermodynamic pressure from the environment?)


Trust Topics:

·      Computational, mathematical, conceptual models of trust in autonomous systems

·      Human requirements for trust and trust in machines

·      Machine requirements for trust and trust in humans

·      Methods for engendering and measuring trust among humans and machines

·      Metrics for deception among humans and machines

·      Other computational and heuristic models of trust relationships, and related behaviors, in teams of humans and machines

 

Autonomy Topics:

·      Models of individual, group, firm autonomous system behaviors

·      Mathematical models of multi-tasking in a team (e.g., entropy levels overall and by individual agents; energy levels overall and by individual agents); an entropy sketch follows this list.
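As a minimal illustration of such an entropy measure, the Python sketch below computes the Shannon entropy of each agent's distribution of effort over tasks, and of the team overall; the task log is hypothetical:

```python
import math
from collections import Counter

def shannon_entropy(counts) -> float:
    """Shannon entropy (in bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical log of which task each agent worked on, per time step.
task_log = {
    "agent_a": ["nav", "nav", "comms", "nav", "sensing"],
    "agent_b": ["comms", "comms", "comms", "comms", "comms"],
    "agent_c": ["nav", "sensing", "comms", "sensing", "nav"],
}

overall = Counter()
for agent, tasks in task_log.items():
    counts = Counter(tasks)
    overall.update(counts)
    print(f"{agent}: H = {shannon_entropy(counts.values()):.2f} bits")
print(f"team overall: H = {shannon_entropy(overall.values()):.2f} bits")
```

Low entropy (agent_b) indicates specialization on a single task; higher entropy indicates effort spread across tasks.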

 

Network Topics:

·      Constructing, measuring, and assessing networks; e.g., the density of chat networks among human operators controlling multiple unmanned aerial vehicles.

·      For networks, specify whether the application is for humans, machines, robots, or a combination; e.g., the density of inter-robot communications. A minimal density computation is sketched below.
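As a minimal illustration, the Python sketch below computes the density of a directed communication network from a hypothetical edge list; the same computation applies to chat links among human operators or to inter-robot messages:

```python
# Hypothetical directed communication links (sender, receiver) observed
# among three operators/robots during one mission segment.
edges = {("op1", "op2"), ("op2", "op1"), ("op1", "op3"), ("op3", "op2")}
nodes = {endpoint for edge in edges for endpoint in edge}

# Density of a directed graph: observed links / possible links n * (n - 1).
n = len(nodes)
density = len(edges) / (n * (n - 1)) if n > 1 else 0.0
print(f"{n} nodes, {len(edges)} links, density = {density:.2f}")
```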

 

Papers should use the format specified by AAAI and may be either 2-page abstracts or up to 8 pages for final submissions. The AAAI AuthorKit includes templates and further formatting instructions, and may be accessed here:

http://www.aaai.org/Publications/Templates/AuthorKit.zip

 

Organizers:

Jennifer Burke, Boeing; jennifer.l.burke2@boeing.com

Alan Wagner, Georgia Tech Research Institute; Alan.Wagner@gtri.gatech.edu

Don Sofge, Naval Research Laboratory; don.sofge@nrl.navy.mil

William F. Lawless, Paine College; wlawless@paine.edu

 

Keynote Speakers:


  • Suzanne Barber, barber@mail.utexas.edu; AT&T Foundation Endowed Professor in Engineering, Department of Electrical and Computer Engineering, Cockrell School of Engineering, U Texas
  • Julie Marble, julie.marble@navy.mil; Program Officer: Hybrid human computer systems at Office of Naval Research, Washington, DC
  • Ranjeev Mittu, ranjeev.mittu@nrl.navy.mil; Branch Head, Information Management & Decision Architectures Branch, Information Technology Division, U.S. Naval Research Laboratory, Washington, DC
  • Hadas Kress-Gazit, hadaskg@cornell.edu; Cornell University; High-Level Verifiable Robotics
  • Satyandra K. Gupta, Ph.D., Program Director, Robust Intelligence Program and National Robotics Initiative, National Science Foundation, skgupta@nsf.gov
  • Dave Ferguson, Google's Self-Driving Car project
  • Mo Jamshidi, University of Texas at San Antonio, Lutcher Brown Endowed Chair and Professor, Computer and Electrical Engineering, mo.jamshidi@utsa.edu
  • Dirk Helbing, ETH Zurich (dirk.helbing@gess.ethz.ch; http://www.futurict.eu); slides: futurICT_intelligence_trust_s.pdf

Program Committee (names and email addresses listed after individual commitments):

  • Julie Marble, julie.marble@navy.mil; Program Officer: Hybrid human computer systems at Office of Naval Research, Washington, DC
  • Ranjeev Mittu, ranjeev.mittu@nrl.navy.mil; Branch Head, Information Management & Decision Architectures Branch, Information Technology Division, U.S. Naval Research Laboratory, Washington, DC
  • David Atkinson, IHMC, datkinson@ihmc.us
  • Jeffrey Bradshaw, IHMC, jbradshaw@ihmc.us
  • Lashon B. Booker, The MITRE Corporation, booker@mitre.org
  • Paul Hyden, Naval Research Laboratory, Paul.hyden@nrl.navy.mil
  • Holly Yanco, University of Massachusetts Lowell, holly@cs.uml.edu
  • Fei Gao, MIT, feigao@MIT.EDU
  • Robert Hoffman, IHMC, rhoffman@ihmc.us
  • Florian Jentsch, Department of Psychology and Institute for Simulation & Training, Director, Team Performance Laboratory, University of Central Florida, Florian.Jentsch@ucf.edu
  • Chuck Howell, Chief Engineer, Intelligence Portfolio, National Security Center, The MITRE Corporation, howell@mitre.org
  • Paul Robinette, Georgia Tech, probinette3@gatech.edu
  • Munjal Desai, munjaldesai@google.com
  • Geert-Jan Kruijff, Senior researcher/Project leader, Language Technology Lab, DFKI GmbH, Saarbruecken, Germany; gj@dfki.de


Important Dates (dates were changed by AAAI on 9/30/2013)

October 22, 2013 (this date was changed by AAAI on 9/30/2013): Submissions due to organizers. (In coordination with one of the organizers, this can be an abstract at this point, with full drafts of both short (2-page) and long (6-8 page) papers by October 18th, and final papers due by December 31st.)

November 1, 2013 (this date was changed by AAAI on 9/30/2013): Notifications of acceptance sent by organizers. 

December 31st: Camera-ready copies are due to the organizers, to ensure proper formatting and in preparation for the Preface and the Agenda (a "soft" date to help authors confirm formatting, etc.).

January 17, 2014 (this date was changed by AAAI on 9/30/2013): Accepted camera-ready copy due to AAAI (hard date).

 

Submissions: Submit abstracts, short papers (2 pages), or long papers (6-8 pages) to one or all of the organizers. Acceptance notifications will be sent by November 1st; camera-ready copies are due to the organizers by December 31st (soft date), and to AAAI along with the Preface and Agenda by January 17th (hard date).


The strongest submissions to this workshop will be considered for inclusion in a special issue of AI Magazine on Trust and Autonomous Systems.


Registration dates: Notices will be sent by email directly from AAAI in coordination with the organizers. Also see the AAAI webpage (http://www.aaai.org/Symposia/Spring/sss14.php) and our page at AAAI (http://www.aaai.org/Symposia/Spring/sss14symposia.php#ss05).



[1] National Science Foundation (2013), “Robust Intelligence (RI)”; at http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503305&org=IIS.

[2] http://www.arl.army.mil/www/pages/8/research/12-020.pdf

[3] Gluck, K. 13.15.12.B0909: Robust Decision Making in Integrated Human-Machine Systems

[4] http://www.darpa.mil/Our_Work/DSO/Programs/Physical_Intelligence.aspx

[5] http://www.darpa.mil/Our_Work/TTO/Programs/DARPA_Robotics_Challenge/Track_A_Participants.aspx

[6] http://www.computer.org/csdl/mags/ex/2013/01/index.html

[7] The Guardian (2012, 3/31), “AI robot: how machine intelligence is evolving”.

[8] http://www-03.ibm.com/software/products/us/en/intelligent-operations-center/

[9] WWW.ARMY.MIL (2013, 1/10), “Army researchers develop robot intelligence to support Soldiers”; http://www.army.mil/article/94076/

[10] ONR Command Decision Making (CDM) & Hybrid Human Computer Systems (HHCS) Annual Program Review, 4-7 June, 2013, Naval Research Lab, DC.

[11] Lawless, W. F., Llinas, J., Mittu, R., Sofge, D. A., Sibley, C., Coyne, J., & Russell, S. (2013). "Robust Intelligence (RI) under uncertainty: Mathematical and conceptual foundations of autonomous hybrid (human-machine-robot) teams, organizations and systems." Structure and Dynamics, 6(2).

[12] New York Times (2012, 9/28), “Why the beaver should thank the wolf”: in Yellowstone’s National Park, “aspen and other native vegetation, once decimated by overgrazing, are now growing up along the banks … [in part] because elk and other browsing animals behave differently when wolves are around. Instead of eating down to the soil, they take a bite or two, look up to check for threats, and keep moving. [This means that the] greenery can grow tall enough to reproduce.” 

Subpages (2): photos of attendees, slides