Title: Drones in Public: Interacting and Communicating with All Users
Abstract: This talk will examine the role of human-robot interaction with drones in public spaces, focusing on two research areas: operation of aerial systems by a broad spectrum of users and improved communication with both end-users and bystanders. Prior work on human interaction with aerial robots has focused on communication from the users or about the intended direction of flight, but has not considered how to communicate with novice users in unconstrained environments. In this presentation, it will be argued that the diverse users and open-ended nature of public interactions offer a rich exploration space for foundational interaction research with aerial robots. Findings will be presented from both lab-based and design studies, and context will be provided from the field-based research on prescribed aerial ignitions and environmental sampling that is central to the NIMBUS lab. This presentation will be of interest to researchers and practitioners in the robotics community, as well as those in the fields of human factors, artificial intelligence, and the social sciences.
Bio: Dr. Brittany Duncan is an Assistant Professor in Computer Science and Engineering and a co-Director of the NIMBUS lab at the University of Nebraska-Lincoln. Her research is at the nexus of behavior-based robotics, human factors, and unmanned vehicles; specifically, she is focused on how humans can more naturally interact with robots, individually or as part of ad hoc teams, in field-based domains such as environmental sampling, disaster response, and engineering applications. She is the PI on an NSF Faculty Early Career Development (CAREER) Award, a PI and co-PI on NSF National Robotics Initiative (NRI) grants, and was awarded an NSF Graduate Research Fellowship in 2010. Dr. Duncan received her Ph.D. from Texas A&M University and her B.S. in Computer Science from the Georgia Institute of Technology.
Title: Multimodal and speech interfaces to support human performance in automated transportation environments
Abstract: Automation and increasingly intelligent systems continue to be introduced to safety-critical transportation environments, such as driving and aviation. The field of Human Factors is critical for understanding how operators perceive and interact with various levels and types of automation. The design of automation without consideration of the human often results in unrealistic operator expectations, inappropriate usage behavior, and poor overall human-automation system performance. In this presentation, Dr. Pitts will discuss studies being conducted by the Next-generation Human-systems and Cognitive Engineering (NHanCE) Lab involving next-generation autonomous vehicles (AVs) and automated speech recognition (ASR) systems. In particular, NHanCE experiments have investigated age-related differences in attention allocation and the perception of AV takeover requests during semi-autonomous driving, as well as the ability of ASR systems to support general aviation (GA) pilot report (PIREP) generation and submission. Findings from this research have broader implications for the development and evaluation of intelligent interfaces in other high-risk, data-rich domains.
Bio: Brandon J. Pitts is an Assistant Professor in the School of Industrial Engineering, Director of the Next-generation Human-systems and Cognitive Engineering (NHanCE) Lab, and Faculty Associate with the Center on Aging and the Life Course (CALC), all at Purdue University in West Lafayette, IN. He received a B.S. in Industrial Engineering from Louisiana State University in 2010, and an M.S.E. and Ph.D. in Industrial and Operations Engineering from the University of Michigan (UM) in 2013 and 2016, respectively. Prior to his faculty appointment, he was a Research Fellow in the UM Center for Healthcare Engineering and Patient Safety (CHEPS). Dr. Pitts’ research interests are in the areas of human factors and cognitive ergonomics, human-automation interaction, context-sensitive interface design, and gerontechnology in complex transportation and work environments, such as driving and aviation. His lab has several government- and industry-funded projects related to NextGen autonomous systems. Dr. Pitts is a member of the Human Factors and Ergonomics Society and the Institute of Industrial and Systems Engineers. He is also a registered Engineer Intern.
Title: Enabling human-aware automation: a dynamical systems perspective on human cognition
Abstract: Across many sectors, ranging from manufacturing to healthcare to the military theater, there is growing interest in the potential impact of automation that is truly collaborative with humans. Realizing this impact, though, rests on first addressing the fundamental challenge of designing automation to be aware of, and responsive to, the human with whom it is interacting. While a significant body of work exists in intent inference based on human motion, a human’s physical actions alone are not necessarily a predictor of their decision-making. Indeed, cognitive factors, such as trust and workload, play a substantial role in their decision-making as it relates to interactions with autonomous systems. In this talk, I will highlight our interdisciplinary efforts at tackling this problem, focusing on recent work in which we synthesized a near-optimal control policy, using a trust-workload POMDP (partially observable Markov decision process) modeling framework, that adapts the automation’s transparency to the human’s cognitive behavior to achieve a context-specific control objective. I will present experimental validation of our algorithm and highlight how our approach is able to mitigate the negative consequences of “over-trust,” which can occur between humans and automation. I will also discuss our use of clustering algorithms applied to our modeling framework to identify dominant human cognitive behaviors across large populations of individuals, as well as related work involving the use of psychophysiological data and classification techniques as an alternative method toward real-time trust estimation.
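To make the kind of model referenced above concrete, here is a minimal sketch of the Bayes filter at the core of any POMDP over latent cognitive states. The states, actions, and probability tables below are invented for illustration (a two-level trust state, a transparency action, and an accept/reject observation); they are not the framework from the talk.

```python
# Minimal POMDP belief update over a latent trust state; all numbers are
# illustrative assumptions, not the talk's trust-workload model.
import numpy as np

STATES = ["low_trust", "high_trust"]              # assumed latent states
OBS = ["reject", "accept"]                        # observed operator behavior

# T[a][s, s']: transition model; high transparency assumed to raise trust.
T = {
    "low_transparency":  np.array([[0.9, 0.1], [0.2, 0.8]]),
    "high_transparency": np.array([[0.6, 0.4], [0.1, 0.9]]),
}
# O[a][s', o]: observation model; high-trust operators assumed to accept more.
O = {
    "low_transparency":  np.array([[0.7, 0.3], [0.3, 0.7]]),
    "high_transparency": np.array([[0.6, 0.4], [0.2, 0.8]]),
}

def belief_update(b, a, o):
    """Standard POMDP Bayes filter: b'(s') ∝ O(o|s',a) Σ_s T(s'|s,a) b(s)."""
    o_idx = OBS.index(o)
    b_pred = T[a].T @ b                 # predicted next-state distribution
    b_new = O[a][:, o_idx] * b_pred     # weight by observation likelihood
    return b_new / b_new.sum()

b = np.array([0.5, 0.5])                # start uninformed about trust
for a, o in [("high_transparency", "accept"), ("low_transparency", "reject")]:
    b = belief_update(b, a, o)
    print(f"after ({a}, {o}): P(high_trust) = {b[1]:.2f}")
```

Each interaction step folds the chosen transparency level and the observed operator behavior into an updated belief over trust, which a policy could then use to select the next transparency setting.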
Bio: Dr. Neera Jain is an Assistant Professor in the School of Mechanical Engineering and a faculty member in the Ray W. Herrick Laboratories at Purdue University. She directs the Jain Research Laboratory with the aim of advancing technologies that will have a lasting impact on society through a systems-based approach, grounded in dynamic modeling and control theory. A major thrust of her research is the design of human-aware automation through control-oriented modeling of human cognition. A second major research thrust is the optimal design and control of complex energy systems. Dr. Jain earned her M.S. and Ph.D. degrees in mechanical engineering from the University of Illinois at Urbana-Champaign in 2009 and 2013, respectively. She earned her S.B. from the Massachusetts Institute of Technology in 2006. Dr. Jain and her research have been featured by NPR and Axios. As a contributor to Forbes.com, she writes on the topic of human interaction with automation and its importance in society. Her research has been supported by the National Science Foundation, the Air Force Research Laboratory, the Office of Naval Research, and private industry.
Title: The Human in Human-Autonomy Interaction
Abstract: New advances in robotics and autonomy offer the promise to revitalize assembly manufacturing, assist in personalized at-home healthcare, and even scale the power of earth-bound scientists for robotic space exploration. Yet, in real-world applications, autonomy is often run in the O-F-F mode because researchers fail to understand the human in human-in-the-loop systems. In this talk, I will share exciting research we are conducting at the nexus of human factors engineering and cognitive robotics to inform the design of human-autonomy interaction. In my talk, I will focus on our recent work in 1) understanding the psychology behind human response to autonomy failures, 2) how personality traits affect a human’s ability to teach robots skills, and 3) how we can design robot behaviors and relationship-building mechanisms to establish rapport between human and machine. The goal of this research is to inform the design of autonomous teammates that users want to turn – and benefit from turning – to the O-N mode.
Bio: Dr. Matthew Gombolay is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. He received a B.S. in Mechanical Engineering from the Johns Hopkins University in 2011, an S.M. in Aeronautics and Astronautics from MIT in 2013, and a Ph.D. in Autonomous Systems from MIT in 2017. Gombolay’s research interests span robotics, AI/ML, human-robot interaction, and operations research. Between defending his dissertation and joining the faculty at Georgia Tech, Dr. Gombolay served as a technical staff member at MIT Lincoln Laboratory, transitioning his research to the U.S. Navy, which earned him an R&D 100 Award. His publication record includes a best paper award from the American Institute of Aeronautics and Astronautics, a finalist for best student paper at the 2020 American Control Conference, and a finalist for best paper at the 2020 Conference on Robot Learning. Dr. Gombolay was selected as a DARPA Riser in 2018, received 1st place for the Early Career Award from the National Fire Control Symposium, and was awarded a NASA Early Career Fellowship for increasing science autonomy in space.
Title: Humans Interacting with Autonomy: Consideration of the Interactive Behavior Triad
Abstract: The Interactive Behavior Triad (IBT) was originally developed as a framework for researchers in computational cognitive modeling to strengthen not only their models but their models’ theoretical frameworks. The IBT includes attributes of the task, cognition, and environment (e.g., computational environment). Although originally formulated for computational cognitive modeling, Dr. Peres and her colleagues have extended it to apply to environments such as high-risk industrial settings. This extension specifically incorporates: all attributes of the human (e.g., motivational, physical, expertise); more attributes of the tasks (e.g., frequency, hazard level); and a reconsideration of the environment to comprise the physical environment, the artifact or tool used to perform the task, and the psychosocial context in which the task is performed. In this talk, Dr. Peres will present research illustrating how interactive behavior can often be better predicted with information from all three facets of the IBT. Implications for designing autonomy to support interactive behavior will be discussed.
Bio: Dr. S. Camille Peres is an Associate Professor in Environmental and Occupational Health at Texas A&M University. She is the assistant director of Human Systems Engineering Research with the Mary Kay O’Connor Process Safety Center. She conducts collaborative research on human factors in high-risk processing industries such as oil and gas, chemical processing, and emergency response. She is currently involved in investigations regarding: performance implications of procedure design and use; understanding human-robot interaction in disaster environments; and measuring team performance in emergency operations.
Title: From coexistence to collaboration: Towards reliable collaborative agents
Abstract: The fields of robotics and autonomous systems have made incredible progress over the past several decades. Indeed, we have built impressive autonomous agents capable of performing complex and intricate tasks in a variety of domains. Most modern robots, however, passively coexist with humans while performing pre-specified tasks in predictable environments. If we want robots to be an integral part of our everyday lives – from factory floors to our living rooms – it is imperative that we build robots that can reliably operate and actively collaborate in unstructured environments. This talk will present three key aspects of collaborative intelligent agents that will help us make progress toward this goal. Specifically, this talk will provide a brief introduction to broadly applicable algorithmic techniques that enable robots to i) consistently and reliably perform manipulation tasks, ii) understand and predict the behavior of other agents involved, and iii) effectively collaborate with other robots and humans.
Bio: Dr. Harish Ravichandar is currently a Research Scientist in the School of Interactive Computing and a faculty member of the Institute for Robotics and Intelligent Machines (IRIM) at the Georgia Institute of Technology, where he joined as a Postdoctoral Fellow in 2018. He received his M.S. degree in Electrical and Computer Engineering from the University of Florida in 2014 and his Ph.D. in Electrical and Computer Engineering from the University of Connecticut in 2018. His current research interests span the areas of robot learning, human-robot interaction, and multi-agent systems. His work has been recognized by the ASME DSCC Best Student Robotics Paper Award (2015), IEEE CSS Video Contest Award (2015), UTC Institute for Advanced System Engineering Graduate Fellowship (2016-2018), and Georgia Tech's College of Computing Outstanding Post-Doctoral Research Award (2019) and Outstanding Research Scientist Award (2020).
Title: Human-robot interaction seen through the prism of joint action theory
Abstract: In this presentation, I will explain the basics of joint action theory and how, in my view, it can inform human-robot interaction design. I will illustrate with work we have done in Rachid Alami’s team (at LAAS-CNRS, Toulouse, France) on human-aware task planning and execution.
Bio: Aurélie Clodic is a Research Engineer in Robotics at LAAS-CNRS (team IDEA), Toulouse, France. She received her PhD in robotics in 2007, for which she elaborated and implemented ingredients for human-robot joint activity in several contexts, including a robot guide in a museum and a robotic assistant in the framework of the COGNIRON project. Her research interests include human-robot collaborative task achievement as well as robotics architecture design (focused on decision-making and supervision) dedicated to human-robot interaction. She is the principal investigator of the "Toward a Framework for Joint Action" workshop series. She is a member of the ANITI AI institute.
Title: Safe and Efficient Inverse Reinforcement Learning
Abstract: It is important that autonomous agents can safely and efficiently infer the preferences, motivation, and intent of a variety of users and adapt their behavior accordingly. One popular way to infer intent is through inverse reinforcement learning, where an agent receives demonstrations of how to perform a task and then tries to infer the reward function of the demonstrator. However, many inverse reinforcement learning algorithms have limited applicability because they do not provide practical assessments of safety, require near optimal demonstrations, and have high computational costs. In this talk, I will discuss recent work towards efficient inverse reinforcement learning algorithms that can infer user intent from suboptimal demonstrations and can provide high-confidence bounds on performance.
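As background for this kind of guarantee, the sketch below shows one common recipe for a high-confidence bound under a learned reward posterior: sample reward functions consistent with the demonstrations, compute the candidate policy's loss relative to the demonstrator under each sample, and report a quantile of those losses. The linear-reward assumption, posterior samples, and feature counts are synthetic placeholders rather than the speaker's method.

```python
# Sketch of a quantile-style high-confidence performance bound for a policy
# evaluated under a posterior over reward functions; all data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Assumed: rewards are linear in features, r(s) = w . phi(s), so a policy's
# value is w . mu(policy), where mu is its expected feature counts.
n_features = 4
posterior_w = rng.normal(size=(1000, n_features))  # stand-in for P(w | demos)
posterior_w /= np.linalg.norm(posterior_w, axis=1, keepdims=True)

mu_demo = np.array([0.9, 0.1, 0.5, 0.3])  # demonstrator feature counts (assumed)
mu_eval = np.array([0.8, 0.2, 0.4, 0.4])  # evaluation policy feature counts

# Loss under each sampled reward: how much worse the evaluation policy is
# than the demonstrator if that sample were the true reward.
losses = posterior_w @ (mu_demo - mu_eval)

delta = 0.05
bound = np.quantile(losses, 1 - delta)    # value-at-risk style 95% bound
print(f"With prob. ~{1 - delta:.0%} under the posterior, loss <= {bound:.3f}")
```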
Bio: Dr. Brown is a postdoc at UC Berkeley, advised by Anca Dragan and Ken Goldberg. His research interests include robot learning, reward inference, AI safety, and multi-agent systems. He recently received his Ph.D. in computer science from the University of Texas at Austin, where he worked with Scott Niekum on safe imitation learning. Prior to starting his PhD, Dr. Brown worked for the Air Force Research Lab's Information Directorate where he studied bio-inspired swarms and multi-agent planning.
Title: Efficient Learning and Adaptive Motion Planning for and from Physical Human Robot Interaction
Abstract: From factories to households, we envision a future where robots can work safely and efficiently alongside humans. For robots to truly be adopted in such dynamic ecosystems, we must i) minimize human effort while communicating and transferring tasks to robots; and ii) endow robots with the capability of adapting to changes in the environment, in the task objectives, and in human intentions. However, combining these objectives is challenging, as providing a single optimal solution can be intractable and even infeasible due to problem complexity and contradicting goals. In my research, I seek to unify robot learning and control strategies to provide safe and fluid physical human-robot interaction (pHRI) while theoretically guaranteeing task success and stability. To achieve this, I devise techniques that step over traditional disciplinary boundaries, seamlessly blending concepts from control theory, robotics, and machine learning. The contributions presented in this talk leverage Bayesian non-parametrics with dynamical system (DS) theory, solving challenging open problems in the Learning from Demonstration (LfD) and pHRI domains. By formulating and learning motion policies as DS with convergence guarantees, a single (or sequence of) motion policy can be used to solve a myriad of robotics problems. I will present novel DS formulations and efficient learning schemes that are capable of executing i) continuous complex motions, such as pick-and-place and trajectory-following tasks; ii) sequential household manipulation tasks, such as rolling dough or peeling vegetables; and iii) more dynamic scenarios, such as object hand-overs from humans and catching objects in flight. Finally, I will show how these techniques scale to more complex scenarios and domains such as navigation and co-manipulation with humanoid robots.
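For readers unfamiliar with DS-based motion policies, the following minimal sketch illustrates the stability idea in the simplest linear case: with xdot = A(x - x_target) and A + A^T negative definite, x_target is globally asymptotically stable, so the motion converges from any start state, including after perturbations. Learned DS policies in the LfD literature are nonlinear (e.g., mixtures of such systems); this toy example and its parameters are assumptions chosen only to show the principle.

```python
# Minimal linear dynamical-system motion policy with a convergence guarantee;
# an illustrative toy, not the talk's learned DS formulations.
import numpy as np

A = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])       # A + A^T = -2*I is negative definite
x_target = np.array([1.0, 1.0])

def policy(x):
    """Velocity command from the DS. The policy is defined everywhere, so a
    perturbation simply changes the current state; no replanning is needed."""
    return A @ (x - x_target)

x = np.array([-2.0, 0.5])           # arbitrary start state
dt = 0.01
for _ in range(2000):               # simple Euler rollout
    x = x + dt * policy(x)
print("final state:", np.round(x, 3))  # converges to x_target
```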
Bio: Dr. Figueroa is a Postdoctoral Associate in the Interactive Robotics Group at MIT advised by Prof. Julie Shah. She holds a PhD in Robotics, Control and Intelligent Systems (2019) from the Swiss Federal Institute of Technology in Lausanne (EPFL). Prior to this, she was a Research Assistant (2012-2013) at the Engineering Department of New York University Abu Dhabi (NYU-AD) and a Student Research Assistant (2011-2012) at the Institute of Robotics and Mechatronics (RMC) of the German Aerospace Center (DLR). She holds a B.Sc. degree in Mechatronics (2007) from Monterrey Tech (ITESM-Mexico) and an M.Sc. degree in Automation and Robotics (2012) from the Technical University of Dortmund, Germany. Her research focuses on leveraging machine learning techniques with concepts from dynamical systems theory to solve salient problems in the areas of learning from demonstration, incremental/interactive learning, human-robot collaboration, multi-robot coordination, shared autonomy and control.
Title: Reactive modeling of uncontrolled agents for safe planning
Abstract: Uncontrolled agents such as humans are present in many important autonomous applications, including autonomous vehicles and assistive robots. This talk presents a reactive modeling framework for these uncontrolled agents and the subsequent motion planning under the reactive model. A reactive model predicts the set of possible actions by the uncontrolled agents given the situation of the whole system. We propose a data-driven approach that generates probabilistically complete reactive models while avoiding being overly conservative. A generalization bound is proved using random convex program theory and conformal analysis. The prediction is then leveraged in the verification and planning of autonomous systems that share the environment with uncontrolled agents. We demonstrate that by leveraging the reactivity of the uncontrolled agents, the autonomous system can maintain safety without being overly conservative.
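The following toy sketch conveys the reactive-model idea on synthetic data: predict a set (here, a one-dimensional interval) of plausible reactions of an uncontrolled agent as a function of the situation, then have the planner certify safety against the least favorable reaction in that set. The pedestrian scenario, data model, and quantile-based coverage are illustrative stand-ins; the guarantees described in the talk come from random convex programs and conformal analysis, not from this heuristic.

```python
# Toy reactive model: a data-driven interval over an uncontrolled agent's
# reactions, used for worst-case safety checking; everything is synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: observed pedestrian decelerations (m/s^2) vs. gap (m).
gaps = rng.uniform(2.0, 20.0, size=500)
decels = np.clip(3.0 - 0.1 * gaps + rng.normal(0, 0.3, 500), 0.0, None)

def reactive_action_set(gap, eps=0.05):
    """Interval covering ~ (1 - eps) of reactions observed at similar gaps."""
    nearby = decels[np.abs(gaps - gap) < 2.0]
    lo, hi = np.quantile(nearby, [eps / 2, 1 - eps / 2])
    return lo, hi

def safe_to_proceed(gap, ego_speed, horizon=1.0):
    """Ego proceeds only if safe under the *least* accommodating reaction.
    Pedestrian assumed walking at 1.5 m/s; clearance check is deliberately
    crude and only illustrates planning against the predicted set."""
    lo, _ = reactive_action_set(gap)          # weakest predicted braking
    ped_travel = max(0.0, 1.5 * horizon - 0.5 * lo * horizon**2)
    return gap - ped_travel > ego_speed * horizon

print(safe_to_proceed(gap=10.0, ego_speed=5.0))  # ample gap -> True
print(safe_to_proceed(gap=4.0, ego_speed=5.0))   # tight gap -> False
```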
Bio: Yuxiao Chen is currently a Postdoctoral Researcher at the California Institute of Technology working on safe autonomy. He obtained his B.S. from Tsinghua University in 2013 and his Ph.D. in mechanical engineering from the University of Michigan in 2018. His research centers on safety-critical autonomy and multi-agent systems, with applications such as autonomous vehicles, robotics, and power networks.
Title: Interactive Bayesian Framework for Specification Learning
Abstract: A key challenge in deploying autonomous systems in open-world domains like space exploration and disaster response is specifying, a priori, all tasks the system is expected to perform. The ability of robots to learn rapidly and directly from domain experts in the field would be a crucial enabler of widespread deployment of automation. Toward that end, we proposed an interactive Bayesian framework for learning task specifications from multi-modal inputs provided by a domain expert, maintaining a belief over task specifications. In this talk, we will explore how this Bayesian framework is used to learn task specifications from a few demonstrations, and how these learned specifications are then used to evaluate novel task executions. I will present planning with uncertain specifications (PUnS), a novel problem formulation that allows a robot to plan actions that optimally satisfy a belief over task specifications. Finally, I will present an active learning formulation that allows a robot to rapidly refine its belief over task specifications by eliciting expert assessments of the task execution that would be most informative for refining the robot's belief.
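As a deliberately simplified illustration of maintaining a belief over task specifications, the sketch below treats candidate specifications as required action orderings, performs a Bayesian update from demonstrations, and scores a novel execution by its expected satisfaction under the belief. The task, candidate set, and compliance likelihood are hypothetical; the framework in the talk uses richer specification languages and multi-modal input.

```python
# Toy Bayesian belief over task specifications, updated from demonstrations;
# candidates are action orderings, a stand-in for richer spec languages.
from itertools import permutations

# Hypothetical table-setting task: candidate specs are required orderings.
candidates = list(permutations(["cup", "plate", "fork"]))
belief = {spec: 1.0 / len(candidates) for spec in candidates}

def likelihood(demo, spec, p_comply=0.95):
    """Assumed model: experts demonstrate the true spec with high probability,
    and otherwise produce any other ordering uniformly at random."""
    if tuple(demo) == spec:
        return p_comply
    return (1 - p_comply) / (len(candidates) - 1)

def update(belief, demo):
    posterior = {s: b * likelihood(demo, s) for s, b in belief.items()}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

demos = [["plate", "cup", "fork"], ["plate", "cup", "fork"],
         ["plate", "fork", "cup"]]          # includes one inconsistent demo
for demo in demos:
    belief = update(belief, demo)

# Evaluate a novel execution: probability it satisfies the (unknown) spec.
execution = ("plate", "cup", "fork")
score = sum(b for s, b in belief.items() if s == execution)
print(f"expected satisfaction of {execution}: {score:.2f}")
```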
Bio: Ankit is a PhD candidate in the Interactive Robotics Group at MIT, advised by Prof. Julie Shah. His research involves developing computational models and algorithms that allow domain experts to directly train robots just as they would train a human apprentice. He previously completed an S.M. at MIT and a B.Tech. at the Indian Institute of Technology Bombay.