2013 AAAI Spring Symposium: Trust and Autonomous Systems
Trust is a key issue in the development and deployment of autonomous systems working with humans. Humans must be able to trust the actions of machines if they are to work with them willingly, and machines must develop or establish trust in the actions of human coworkers to ensure effective collaboration. There is also the issue of autonomous agents, robots, and systems trusting one another and the humans with whom they interact.
But trust can mean different things in different contexts. For flight control systems on airplanes, trust may mean meeting rigorous criteria regarding the structural qualities of the airplane, flightworthiness, and a provably stable control system. In the context of humans interacting with humanoid robots, trust may relate more closely to the interdependence between human and robot in correctly reading and interpreting each other’s voice commands, gestures, and observed actions, and to the likelihood that both robot and human will do what is expected of them. In the context of an autonomous automobile carrying passengers, trust in the system may be the expectation that the system will respond correctly not only to foreseen road and traffic conditions, but also to unusual circumstances (e.g., gridlock requiring alternative route planning; a child running into the street while chasing a ball; running out of gas on the highway; an engine catching fire; hearing and seeing an approaching fire engine or ambulance with siren blaring; or a flat tire causing the vehicle to swerve toward the guardrail). Interdependent trust also extends to system controllers and to society. System controllers, human or machine, must be able to exercise control at the individual, group, and system levels; and society must be willing to entrust its citizens, including the elderly and the young, to the system.
This AAAI Symposium will explore the various aspects and meanings of trust between humans and machines in different situational contexts, and the social dynamics of trust in teams or organizations composed of autonomous machines working together with humans. We will seek to identify and/or develop methods for engendering trust between humans and autonomous machines, to consider the static and dynamic aspects of trust, and to propose metrics for measuring trust. Details, including invited speakers and program committee members, are available at: https://sites.google.com/site/aaais2013trust/ with a summary at AAAI's Spring Symposia site: http://www.aaai.org/Symposia/Spring/sss13.php. The symposium program has been posted at: http://sites.google.com/site/aaais2013trust/home/program.
This AAAI symposium will seek to address these specific topics and questions:
· What are the connotations of “trust” in various settings and contexts?
· How do concepts of trust between humans collaborating on a task differ from [human → machine], [machine → human], and [machine → machine] trust relationships?
· What metrics exist for trust between individuals, and how well do these translate to trust relationships between humans and autonomous machines?
· What metrics for trust currently exist for evaluating machines (possibly including such factors as reliability, repeatability, intent, and susceptibility to catastrophic failure) and how may these metrics be used to moderate behavior in collaborative teams including both humans and autonomous machines?
· How do trust relationships affect the social dynamics of human teams, and are these effects quantifiable?
· What validation procedures could be used to engender trust between a human and an autonomous machine?
· What algorithms or techniques are available to allow machines to develop trust in a human operator or another autonomous machine?
Papers should address issues associated with Trust in Autonomous Systems. They should also specify the relevance of their topic to AI, or propose a method involving AI to be used to help address their particular issue. Potential topics include (but are not limited to) the following:
· Computational models of trust in autonomous systems
· Trust models between a single human and a single robot
· The effect of trust on team social dynamics
· Verification and validation of autonomous system behaviors
· Human requirements for trust in machines
· Methods for engendering trust between humans and machines
· Metrics for established trust
· Metrics for deception in humans and machines
· Other computational and heuristic models of trust relationships, and related behaviors, in teams of humans and machines
Papers should use the format specified by AAAI, and may be either 2-page abstracts or full papers of up to 6 pages for final submissions.
The AAAI AuthorKit includes templates and further formatting instructions, and may be accessed here:
Organizing Committee:
Don Sofge, Naval Research Laboratory; email@example.com
Geert-Jan Kruijff, German Research Center for Artificial Intelligence (DFKI); firstname.lastname@example.org
William F. Lawless, Paine College; email@example.com
Program Committee:
John Lee (University of Wisconsin)
Holly Yanco (UMass Lowell)
Jeff Bradshaw (IHMC)
Missy Cummings (ONR/MIT)
Jim Hansen (NRL Monterey)
Ron Diftler (NASA/JSC, Robonaut 2 Program Manager)
Alan Wagner (GTRI)
Andrew Clare (MIT)
Ciara Sibley (NRL)
Florian Jentsch (UCF)
Frank Dignum (University of Utrecht)
Jenny Burke (Boeing)
Mark Neerincx (Delft University of Technology)
Michal Pěchouček (Czech Technical University in Prague)
Paul Hyden (NRL)
Rino Falcone (ISTC-CNR, Italy)
Robert Hoffman (IHMC)
Tal Oron-Gilad (Ben-Gurion University)
Dates (edited for participants):
Final (open) registration deadline
AAAI sends general information email including the location, registration, special events, and parking to all registered SSS-13 participants
AAAI Spring Symposia Series, Stanford University