Workshop on Impedance Matching in Cognitive Partnerships @ IJCAI-17
Human-Autonomy Teaming (HAT) describes situations where people cooperate with artificially intelligent autonomous agents to perform some function. Cognitive partnerships describe situations where humans and technical artifacts work together to solve problems or conduct research (Nersessian, Kurz-Milcke, Newstetter, & Davies, 2003). In a general sense, we can envision heterogeneous teams composed of autonomous participants each using either human or artificial intelligence. These relationships can take on different structures depending on the level of supervision the humans can exert and the level of intelligence and autonomy provided by the non-human agents.
This workshop explores cognitive partnerships among heterogeneous autonomous team members, whether they be human or artificial. People often struggle to work through the impedance mismatches caused by varying backgrounds, professional fields, and goals. This workshop targets areas of impedance mismatch between humans and autonomous AI. The workshop includes the following keynote speakers:
Artificial Intelligence: A Binary Approach, Stuart Russell, University of California, Berkeley;
Human-Machine Trust: Perspectives from the US Air Force Office of Scientific Research, Peter Friedland, US Air Force Research Laboratory;
Social Planning - Reasoning with and about others, Tim Miller, The University of Melbourne;
and Impeding Partnerships: Mr. & Miss Matches, Mike Cox, Wright State Research Institute.
The mismatches between humans and technological artifacts bring their own challenges. These challenges are a critical area of research for the field of Artificial Intelligence (AI). Impedance mismatches affect such aspects of teamwork as trust mechanisms, cooperative learning, understanding the division of cognitive labor, alignment of goals, adaptability of policies and plans, the granularity of policies and plans, and team roles. Specific topics of interest include, but are not limited to, the following:
- Exchange of goals, domain knowledge, and beliefs about the current situation
- Trust and transparency in decision making
- Communication and planning at differing levels of abstraction
- Activity recognition and task models
- Process mining and learning about teammates
- Roles, strategies, and the division of labor
- Joint adaptation and the effects of adaptation on partnership
Other relevant areas of interest include machine learning and other AI techniques supporting HAT and cognitive partnerships, as well as architectures, new models, and multi-agent systems.
Location & Date
Impedance Matching in Cognitive Partnerships is hosted as an IJCAI-17 workshop, and will take place in Room MCEC 207, Melbourne Convention and Exhibition Centre (MCEC) at South Wharf in Melbourne, Australia, on Monday, August 21st, 2017.
8:30 - 8:45 Welcome & Overview
8:45 - 10:00 Session 1: Paper Presentations (20-minute presentations + 5 minutes of questions)
8:45 – 9:10 : Expert Demonstration for Deep-Q Reinforcement Learning Agents (Brett Schmerl and Greg O’Keefe)
9:10 – 9:35 : An Approach to Integrating Human Knowledge into Agent-Based Planning (Leah Kelley, Michael Ouimet, Bryan Croft, Eric Gustafson and Luis Martinez)
9:35 – 10:00 : Human-Autonomy Teaming for Multiple Autonomous Vehicles (Glenn Moy, Darren Williams, Katherine Noack, Jan Richter, Joshua Broadway and Luke Marsh)
10:00 - 10:30 Morning Tea (30 Minutes)
10:30 - 11:15 Invited Talk 1: Tim Miller, The University of Melbourne: Social Planning - Reasoning with and about others
11:15 - 12:30 Session 2: Paper Presentations (20-minute presentations + 5 minutes of questions)
11:15 – 11:40 : Multimedia Narrative For Autonomous Systems (Steve Wark, Marcin Nowina-Krowicki, Ian Dall, Jae Chung, Peter Argent and Greg Bowering)
11:40 – 12:05 : Decentralised Decision Making in Defence Logistics (Slava Shekh and Michelle Blom)
12:05 – 12:30 : Communicating to Establish Shared Mental Models (Ronal Singh, Liz Sonenberg and Tim Miller)
12:30 - 14:00 Lunch (1.5 Hours)
14:00 – 14:45 Invited Talk 2: Stuart Russell, University of California, Berkeley: Artificial Intelligence: A Binary Approach
14:45 - 15:30 Invited Talk 3: Mike Cox, Wright State Research Institute: Impeding Partnerships: Mr. & Miss Matches
15:30 - 16:00 Invited Talk 4: Peter Friedland, US Air Force Research Laboratory: Human-Machine Trust: Perspectives from the US Air Force Office of Scientific Research
16:00 - 16:30 Afternoon Tea (30 Minutes)
16:30 - 17:45 Panel Discussion including Summary of Talks: Jason Scholz, Doug Lange, Peter Friedland, Adrian R. Pearce
Invited Keynote Talks
Invited Talk 1: Tim Miller - University of Melbourne
Title: Social Planning - Reasoning with and about others
Abstract: To successfully operate within a team, individuals keep a mental model of others, including what others can do, what they know or believe, and what their intentions are; in short, they have a Theory of Mind about their team members. This allows them to anticipate how others are about to act, how this affects the outcomes of their own actions, and what information needs to be shared to coordinate. We call this 'social planning', reflecting that such planning is itself a social activity that requires thinking about and communicating with others. In this talk, I will discuss some ideas around social planning and interaction between humans and systems, and overview results in these areas.
Bio: Tim Miller is a faculty member in the School of Computing and Information Systems at The University of Melbourne, Australia. Tim received his PhD from the University of Queensland and spent four years at the University of Liverpool, UK, as a postdoc in the Agent ART group. Tim's primary research interests are in artificial intelligence, in particular on how humans interact with systems, and multi-agent planning, in particular notions of knowledge and action in groups.
Invited Talk 2: Stuart J. Russell - University of California, Berkeley
Title: Artificial Intelligence: A Binary Approach
Abstract: The standard view promulgated in the textbooks is that AI is about building intelligent systems - further defined as systems that act so as to achieve their objectives to the extent possible. This is a unary notion, appropriate for creatures (like us) that operate autonomously in the world. This definition brings with it the possibility of value misalignment between machines and humans, and hence the existential risks (to humans) of superintelligence. I will argue instead for a binary definition for AI, where the machine acts to achieve human objectives, even though it may not know what those are.
Bio: Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum's Council on AI and Robotics. He is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the World Technology Award (Policy category), the Mitchell Prize of the American Statistical Association, and Outstanding Educator Awards from both ACM and AAAI. From 2012 to 2014 he held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in over 1300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.
Invited Talk 3: Michael Cox - Wright State Research Institute
Title: Impeding Partnerships: Mr. & Miss Matches
Abstract: The concept of a cognitive partnership between humans and machines remains an aspiration rather than an achievement in the artificial intelligence community. Impeding this vision of partnership is the large mismatch between the means, the representation, and the language of shared behavior and decisions. We claim that a solution lies not in the amount of data available or the level of optimization of performance; rather, successful cognitive partnerships will arise from a common language of goals. We will review an approach to explainable cognitive systems that focuses on a set of common goal operations, and argue that a goal-based metaphor of interaction facilitates successful teamwork.
Bio: Dr. Michael T. Cox is director of the Collaboration and Cognition Laboratory at Wright State University and a Senior Research Scientist at WSRI. He also holds a joint appointment as Research Professor in the Computer Science and Engineering Department at WSU. Dr. Cox served as a former DARPA program manager (2008-2010) and came to WSRI from the University of Maryland Institute for Advanced Computer Studies (2010-2014). Dr. Cox is engaged in research on high-level autonomy in both humans and machines. He studies mixed-initiative computing, human-machine teaming, case-based reasoning, causal and volitional explanation, multi-strategy learning, automated planning and scheduling, and computational metareasoning. Dr. Cox was a senior computer scientist for Raytheon BBN Technologies (2005-2008) and was an assistant professor for WSU’s College of Engineering and Computer Science (1998-2004). He graduated from the Georgia Institute of Technology with a BS (highest honors, 1986) and a PhD (1996) both in computer science. Dr. Cox was a postdoctoral fellow in the School of Computer Science at Carnegie Mellon University, Pittsburgh (1996-1998).
Invited Talk 4: Peter Friedland - US Air Force Research Laboratory
Title: Human-Machine Trust: Perspectives from the US Air Force Office of Scientific Research
Abstract: This talk will describe the discussions and recommendations from two multi-disciplinary, multi-national workshops held by AFOSR in 2013 and 2015. Artificial intelligence, cognitive science, psychology, philosophy, and robotics were among the disciplines contributing to the development of a basic research program for AFOSR that recognizes the importance of multi-cultural contributions to the understanding and development of trust in human-machine teams.
Bio: Peter Friedland’s career has focused on interdisciplinary technology research, development, and application with substantial accomplishments in academia, industry, and government. He received his PhD in Computer Science from Stanford in 1980 for pioneering artificial intelligence research in the areas of planning, knowledge representation, and expert systems. He applied this work to the then emerging discipline of molecular genetics leading to the creation of a user community of several thousand academic and industrial scientists, and the funding of a NIH-sponsored National Research Resource, BIONET. He also co-founded two companies while at Stanford: IntelliGenetics, the first bioinformatics company, and Teknowledge, the first expert systems technology and training company. Both became public companies in the early 80’s.
In 1987, Dr. Friedland joined NASA Ames Research Center to create what became the government’s largest and most highly-regarded Intelligent Systems R&D laboratory. The hallmark of the laboratory was the ability to simultaneously conduct state-of-the-art research while also fielding applications to all of the primary NASA missions and Centers. He left Ames in 1995 to form and lead his third company, Intraspect Software, an early knowledge management systems provider, to the point of 200 employees and over $30M in sales. Intraspect was sold to Vignette Software in 2003, and Dr. Friedland rejoined Ames as Chief Technologist where he supervised a wide range of technology development activities in emerging areas like nanotechnology. He also chaired several NASA-wide committees and studies in such areas as core competencies for NASA Centers and technology transition from basic research to fielded applications.
He is now an independent technology strategist and consultant with a majority of his time spent as a scientific advisor to the Air Force Office of Scientific Research (AFOSR). His specific areas of emphasis for AFOSR are strategy and tactics for international research investments in all disciplines, and creation of new programs in computer and cognitive science.
Dr. Friedland is a Fellow of the American Association for Artificial Intelligence, and a recipient of the NASA Outstanding Leadership Medal and the Feigenbaum International Medal for Expert Systems Applications.
Douglas S. Lange (Space and Naval Warfare Systems Center, Pacific)
Luke B. Marsh (Defence Science and Technology Group)
Adrian R. Pearce (The University of Melbourne)
Leah Kelley (Space and Naval Warfare Systems Center, Pacific)
Mark Draper (Air Force Research Laboratory)
Peter Friedland (Air Force Office of Scientific Research)
Jason Scholz (Defence Science and Technology Group)
Don Gossink (Defence Science and Technology Group)
Glenn Moy (Defence Science and Technology Group)
Darren Williams (Defence Science and Technology Group)
Marcin Nowina-Krowicki (Defence Science and Technology Group)