Tao Gao (Department of Communication, Department of Statistics, UCLA)
Bio: Dr. Gao explores how human intelligence is grounded in visual perception. His work is highly interdisciplinary, integrating insights from Cognitive Science, Artificial Intelligence, and Robotics. He obtained his Ph.D. in cognitive psychology from Yale in 2011. He was a postdoctoral fellow in the Center for Brains, Minds and Machines at MIT from 2011 to 2015, and then worked at GE Research as a computer vision scientist from 2015 to 2017. Since 2017, he has held a joint appointment in the Departments of Statistics and Communication at UCLA.
Presentation: Intuitive Signaling Through an “Imagined We”
Abstract: Communication is highly overloaded: the same signal can carry many meanings. Despite this, even young children are good at leveraging context to understand ambiguous signals. We propose a computational shared-agency account of signaling that we call the Imagined We framework. We leverage Bayesian Theory of Mind (ToM) to provide mechanisms for rational action planning and inverse action interpretation. To extend this framework to communication, we first treat signals as rational actions, converting them back into the ToM framework. We then incorporate our rich understanding of intuitive physics to constrain the scope of affordable actions. Finally, we treat communication as a cooperative act, subject to the constraint of maximizing a shared utility function. We implement this model on two quite different behavioral studies from psychology to show how general the Imagined We framework is, and to showcase how it handles different types of uncertainty in cooperative communication, namely uncertainty in intentions and uncertainty in beliefs.
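To make the inverse-interpretation step concrete, the following is a minimal sketch of how a listener can disambiguate an overloaded signal by Bayesian reasoning about a cooperative, rational speaker. It is a generic rational-speech-act style recursion written for this summary, not the authors' Imagined We model; the toy lexicon, uniform prior, and rationality parameter are all illustrative assumptions.

```python
import numpy as np

# Toy setting: three candidate referents, two ambiguous signals.
# lexicon[s][r] = 1 if signal s is literally consistent with referent r.
lexicon = np.array([
    [1.0, 1.0, 0.0],   # signal A could literally mean referent 0 or 1
    [0.0, 1.0, 1.0],   # signal B could literally mean referent 1 or 2
])
prior = np.array([1 / 3, 1 / 3, 1 / 3])  # shared prior over referents
ALPHA = 4.0                              # speaker rationality parameter

def speaker(lexicon, prior, alpha):
    """P(signal | referent): signals chosen softmax-rationally for informativeness."""
    literal = lexicon * prior                          # unnormalized literal P(r | s)
    literal = literal / literal.sum(axis=1, keepdims=True)
    utility = np.log(literal + 1e-9)                   # informativeness of s about r
    unnorm = np.exp(alpha * utility)
    return unnorm / unnorm.sum(axis=0, keepdims=True)  # normalize over signals

def listener(signal, lexicon, prior, alpha):
    """P(referent | signal) via Bayes, assuming a cooperative rational speaker."""
    post = speaker(lexicon, prior, alpha)[signal] * prior
    return post / post.sum()

print(listener(0, lexicon, prior, ALPHA))  # signal A: posterior shifts to referent 0
```

Although signal A is literally consistent with referents 0 and 1, inverting the rational speaker shifts the posterior toward referent 0: a speaker intending referent 1 would have had no reason to prefer signal A over signal B.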
Heni Ben Amor (School of Computing, Informatics, and Decision Systems Engineering, Arizona State University)
Bio: Heni Ben Amor is an Assistant Professor of Robotics at Arizona State University, where he directs the ASU Interactive Robotics Laboratory. Ben Amor received the NSF CAREER Award in 2018, the Fulton Outstanding Assistant Professor Award in 2018, and a Daimler and Benz Foundation Fellowship in 2012. Prior to joining ASU, he was a research scientist at Georgia Tech, a postdoctoral researcher at the Technical University of Darmstadt (Germany), and a visiting research scientist in the Intelligent Robotics Lab at Osaka University (Japan). His primary research interests lie in the fields of artificial intelligence, machine learning, robotics, and human-robot interaction. Ben Amor received a Ph.D. in computer science from the Technical University Freiberg, focusing on artificial intelligence and machine learning; his dissertation won the overall best dissertation award at TU Freiberg. He has won numerous best paper awards at major robotics and AI conferences.
Presentation: Interaction Primitives: A Machine Learning Approach for Ergonomic Human-Robot Symbiosis
Abstract: How can robots physically interact with humans in a meaningful and healthy way? Existing approaches to specifying close-contact, physical interactions between humans and robots focus solely on successful task completion. However, these approaches completely neglect the biomechanical and ergonomic ramifications of robot actions on the human body. An action which may seem momentarily effective may result in stresses to the human musculoskeletal system and even serious injuries. In this talk, I will discuss how machine learning enables healthy, bi-directional, and biomechanically-safe interactions between humans and machines that can be sustained over long periods of time. Specifically, I will present Bayesian Interaction Primitives -- a probabilistic framework that enables learning and inference for HRI scenarios. Bayesian Interaction Primitives encode the mutual dependencies between interaction partners and can be used to (1) predict human motion and sensor values, (2) infer task-relevant biomechanical variables, and (3) generate appropriate robot responses. Used within a model-predictive control loop, Bayesian Interaction Primitives generate actions that minimize long-term impact on the musculoskeletal system of the human partner. To demonstrate the approach, I will present a number of applications in prosthetics and social robotics.
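As a rough illustration of the machinery behind interaction primitives, the sketch below conditions a joint Gaussian over human and robot basis-function weights on partial observations of the human's motion. It is deliberately simplified: the phase is assumed known and the full Bayesian filter over phase and weights is omitted, so this is a generic basis-function conditioning sketch rather than the talk's actual Bayesian Interaction Primitives implementation; all names and parameters are assumptions.

```python
import numpy as np

def rbf_features(phase, n_basis=10, width=0.02):
    """Gaussian basis functions evaluated at a phase value in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    f = np.exp(-((phase - centers) ** 2) / (2 * width))
    return f / f.sum()

def fit_weights(traj, n_basis=10):
    """Least-squares fit of basis weights to one demonstrated trajectory."""
    T = len(traj)
    Phi = np.stack([rbf_features(t / (T - 1), n_basis) for t in range(T)])
    return np.linalg.lstsq(Phi, traj, rcond=None)[0]

def condition_on_human(mu, Sigma, Phi_obs, y_obs, noise=1e-2):
    """Condition the joint weight Gaussian on noisy observations of the human
    degrees of freedom, y = Phi_obs @ w_human; the robot block of the posterior
    mean then predicts the appropriate robot response."""
    H = np.hstack([Phi_obs, np.zeros_like(Phi_obs)])  # observe human block only
    S = H @ Sigma @ H.T + noise * np.eye(len(y_obs))  # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)                # Kalman-style gain
    mu_post = mu + K @ (y_obs - H @ mu)
    Sigma_post = Sigma - K @ H @ Sigma
    return mu_post, Sigma_post

# Hypothetical usage: stack per-demo human and robot weights, fit a Gaussian,
# then condition on the first observed samples of a new interaction:
#   w = [np.concatenate([fit_weights(h), fit_weights(r)]) for h, r in demos]
#   mu, Sigma = np.mean(w, axis=0), np.cov(np.array(w).T)
```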
Zhi Li (Robotics Engineering Department, Worcester Polytechnic Institute)
Bio: Dr. Li is an assistant professor in the Robotics Engineering Department at Worcester Polytechnic Institute. She received her Master's degree in Mechanical Engineering in 2009 from the University of Victoria (Canada), and her Ph.D. in Computer Engineering in 2014 from the University of California, Santa Cruz. She was a postdoctoral associate at Duke University from 2015 to 2016. She has extensive experience in the design of various medical robots, including upper-limb exoskeletons for stroke rehabilitation, tele-surgical robots, and tele-nursing robots. At WPI, she designs human-robot interfaces and assistive robot autonomy based on an understanding of the perception-action coupling of humans and cyber-human systems. Her research aims to bridge the gap between human movement science and human-robot interaction.
Presentation: Perception-Action Coupling and Nursing Robot Teleoperation Interfaces
Abstract: Tele-robotic systems show promise for supporting the human healthcare workforce in the ongoing COVID-19 crisis and during future pandemic outbreaks. Future tele-nursing robots are expected to go beyond mere tele-presence and to perform nursing assistance tasks that involve the coordinated, remote control of manipulation, navigation, and active perception. Despite recent advances in robot hardware and autonomy, the desired human-robot teaming has not been realized, owing to the lack of efficient, intuitive, and ergonomic teleoperation interfaces. The resulting workload and training effort also discourage the nursing workforce from adopting robots in their future workplace and education.
We believe the usability of teleoperation interfaces is limited by the disrupted perception-action coupling of cyber-human systems, that is, of the human-robot system cognitively and physically integrated via the teleoperation interface. We therefore propose a novel experimental paradigm, theoretical framework, and computational models to understand the perception-action coupling of cyber-human systems in depth and to inform the design of teleoperation interfaces, assistive autonomy, and user training paradigms. We also propose a transformative design philosophy and systematic approaches to address the interface issues that arise from disrupted vision-motion and haptics-motion coupling and from the integration of multi-sensory feedback, and we will validate them in the process of building, characterizing, and improving a tele-nursing interface prototype. The technological and social impacts of these research activities and outcomes will be evaluated with nursing students, educators, and practitioners.
Marco Pavone (Department of Aeronautics and Astronautics, Stanford University)
Bio: Dr. Marco Pavone is an Associate Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on self-driving cars, autonomous aerospace vehicles, and future mobility systems. He is a recipient of a number of awards, including a Presidential Early Career Award for Scientists and Engineers from President Barack Obama, an Office of Naval Research Young Investigator Award, a National Science Foundation Early Career (CAREER) Award, a NASA Early Career Faculty Award, and an Early-Career Spotlight Award from the Robotics: Science and Systems Foundation. He was identified by the American Society for Engineering Education (ASEE) as one of America's 20 most highly promising investigators under the age of 40. His work has been recognized with best paper nominations or awards at the European Control Conference, the IEEE International Conference on Intelligent Transportation Systems, the Field and Service Robotics Conference, the Robotics: Science and Systems Conference, the ROBOCOMM Conference, and NASA symposia. He is currently serving as an Associate Editor for IEEE Control Systems Magazine.
Presentation: On Safe and Efficient Human-Robot Interactions via Multimodal Intent Modeling and Reachability-Based Safety Assurance
Abstract: In this talk I will present a decision-making and control stack for human-robot interactions, using autonomous driving as a motivating example. Specifically, I will first discuss a data-driven approach for learning multimodal interaction dynamics between robot-driven and human-driven vehicles, based on recent advances in deep generative modeling. Then, I will discuss how to incorporate such a learned interaction model into a real-time, interaction-aware decision-making framework. The framework is designed to be minimally interventional; in particular, by leveraging backward reachability analysis, it ensures safety without unduly sacrificing performance, even when other cars defy the robot's expectations. I will present recent results from experiments on a full-scale steer-by-wire platform, validating the framework and providing practical insights. I will conclude the talk with an overview of related efforts from my group on infusing safety assurances into robot autonomy stacks equipped with learning-based components, with an emphasis on adding structure within robot learning via control-theoretic and formal methods.
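The "minimally interventional" idea can be illustrated with a toy one-dimensional car-following example: the learned planner's action is passed through unchanged unless it would drive the state into a (crudely approximated) backward reachable set of collision states under worst-case behavior of the other car, in which case a safe fallback is applied. This sketch is an illustration under those stated assumptions, not the stack presented in the talk; the dynamics, horizon, and braking limits are hypothetical.

```python
import numpy as np

DT = 0.1          # time step, s
A_MAX = 3.0       # braking/acceleration limit, m/s^2 (hypothetical)
T_HORIZON = 2.0   # safety horizon, s

def min_safe_gap(v_rel):
    """Gap below which a collision is deemed unavoidable within the horizon,
    assuming worst-case lead-car braking: a crude backward-reachable-set proxy."""
    t = np.arange(0.0, T_HORIZON, DT)
    worst_closing = np.maximum(v_rel + A_MAX * t, 0.0)  # closing speed grows
    return worst_closing.sum() * DT                     # distance closed over horizon

def safety_filter(gap, v_rel, a_learned):
    """Pass the learned action through unless the next state would be unsafe."""
    gap_next = gap - v_rel * DT             # one-step gap prediction
    v_rel_next = v_rel + a_learned * DT     # one-step closing-speed prediction
    if gap_next > min_safe_gap(v_rel_next):
        return a_learned                    # minimally interventional: no change
    return -A_MAX                           # fall back to maximal braking

print(safety_filter(gap=25.0, v_rel=2.0, a_learned=1.0))  # safe: returns 1.0
print(safety_filter(gap=5.0, v_rel=2.0, a_learned=1.0))   # unsafe: returns -3.0
```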
Néstor Becerra Yoma (Department of Electrical Engineering, Universidad de Chile)
Bio: Néstor Becerra Yoma obtained his Ph.D. from the University of Edinburgh, UK, and his M.Sc. and B.Sc. in electrical engineering from the State University of Campinas, Brazil. He is a tenured professor at the University of Chile, where he founded the Speech Processing and Transmission Laboratory (LPTV, http://www.lptv.cl) and investigates, among other topics, artificial intelligence applied to human-robot integration. He was a visiting professor at Carnegie Mellon University, USA, between 2016 and 2017. He has directed numerous R&D projects and is the author of more than 80 papers in international journals and conferences, as well as patents. His interests include multidisciplinary research on artificial intelligence and signal processing in various fields of application such as human-robot interaction, volcanology, bio-acoustics, and seismology. He is the creator and organizer of the IA2030CHILE initiative, participated in the committee convened by the Senate's Future Challenges, Science, Technology and Innovation Commission to address the development of artificial intelligence, and is a member of the Expert Committee on AI of the Ministry of Science, Technology, Knowledge and Innovation in Chile.
Presentation: AI and Social Robotics: The Role of Spoken Language
Abstract: Human-robot collaboration will be a strategic component in defense and commercial applications over the next 10-20 years. Consequently, social robotics is one of the most important and critical challenges in robot science and engineering. Robots are not 100% autonomous in all possible circumstances, as they are in the movies, and this situation is unlikely to change radically in the next few years, so the collaborative partnership between humans and robots is very important. Robots can perform tasks or go to places that are not desirable for humans; humans, in turn, can help robots complete tasks and make decisions when problems occur. In this talk we will show how artificial intelligence applied to spoken language and multimodal beamforming can support this collaborative integration between humans and robots. The voice is the most natural way for us to communicate, but it also carries information about our health and our physical and emotional state. All of this is important if robots are to interact well with us in a future that is not so distant.
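For readers unfamiliar with the beamforming component, below is a minimal sketch of textbook delay-and-sum beamforming for a uniform linear microphone array, which spatially focuses the array on a speaker's direction. It is included purely for illustration and is not the LPTV system; the array geometry, sampling rate, and steering convention are assumptions.

```python
import numpy as np

C = 343.0        # speed of sound, m/s
FS = 16000       # sampling rate, Hz (assumed)
SPACING = 0.05   # microphone spacing, m (assumed)

def delay_and_sum(signals, angle_deg):
    """Delay-and-sum beamformer for a uniform linear array.
    signals: (n_mics, n_samples) array; angle measured from broadside."""
    n_mics, n = signals.shape
    # per-channel time delays that align a plane wave from the steering direction
    delays = np.arange(n_mics) * SPACING * np.sin(np.radians(angle_deg)) / C
    freqs = np.fft.rfftfreq(n, d=1 / FS)
    out = np.zeros(n)
    for m in range(n_mics):
        # apply a fractional-sample delay as a linear phase shift in frequency
        spec = np.fft.rfft(signals[m]) * np.exp(-2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(spec, n)
    return out / n_mics  # coherent average: enhances the steered direction
```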
David Robb (Interactive and Trustworthy Technologies Research Group, Heriot-Watt University)
Bio: David Robb is the Experimental Lead for the HRI theme of the £16M EPSRC UKRI-funded ORCA Hub project and a Research Fellow at Heriot-Watt University.
Presentation: The Challenges in Human-Robot Collaboration with Remote Robot Teams in Hazardous Environments
Abstract: I will talk about my work as a mixed-methods HRI researcher with the ORCA Hub (https://orcahub.org/). That work has progressed from the situation awareness benefit of using a conversational agent (CA) with a graphical map-based interface for controlling autonomous underwater vehicles (AUVs), through design principles for CAs for remote autonomy, to the challenges of collaborating with remote robot teams in hazardous environments. Some of our recent work on real-time cognitive load monitoring and on a CA embodied as a Furhat social robot will also feature.
Shaoping Bai (Department of Materials and Production, Aalborg University)
Bio: Shaoping Bai is a full professor in the Department of Materials and Production at Aalborg University (AAU), Denmark. His research interests include novel mechanism design, sensors and actuators, medical and assistive robots, parallel manipulators, and exoskeletons. He leads several national and international research projects on exoskeletons, including the EU project AXO-SUIT and the IFD Grand Solutions project EXO-AIDER. He is a recipient of the IEEE CIS-RAM 2017 Best Paper Award, the IFToMM MEDER 2018 Best Application Paper Award, and the WearRAcon 2018 Innovation Challenge Grand Prize. Dr. Bai is an Associate Editor of the ASME Journal of Mechanisms and Robotics and of IEEE Robotics and Automation Letters. He serves as deputy chair of the IFToMM Technical Committee for Robotics and Mechatronics and of IFToMM Denmark. He is the founder of BioX ApS, an AAU spin-off on wearable technologies.
Presentation: Reliable Motion Intention Detection and Actuation for Human-Exoskeleton Interaction
Abstract: Exoskeletons are robotic systems designed to assist people in their body movements. They have found broad application in healthcare, elderly assistance, and industrial and military settings. For exoskeletons, safe and comfortable interaction with human users requires reliable, accurate, yet convenient detection of human motion intention, together with compliant actuation; both remain challenging topics in exoskeleton research.
This talk reports recent research and innovations in wearable technologies developed at Aalborg University, Denmark, for improving human-robot interaction. A detection method based on force myography (FMG) is first introduced. Compared with the electromyography (EMG) methods mostly used in research environments, an FMG method is more reliable and easier to use in exoskeletons for manufacturing and home applications. The principle of motion intention detection, the associated electronics, and their applications in exoskeletons and other devices will be outlined in this talk. In addition, a novel compliant mechanism design that facilitates physical human-robot interaction will be presented. The compliant joint mechanism, featuring nonlinear stiffness, can actuate robotic systems with inherent compliance and a self-torque-sensing capability for safe and effective human-robot interaction.
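To illustrate the self-torque-sensing property mentioned above, here is a minimal sketch: once a compliant joint's nonlinear torque-deflection curve has been characterized, the transmitted torque can be estimated from the measured deflection of the elastic element alone (motor-side encoder minus output encoder), with no dedicated torque sensor. The cubic hardening-spring curve and its coefficients are illustrative assumptions, not the actual AAU design.

```python
# Self-torque-sensing for a nonlinear-stiffness compliant joint (illustrative).

K1 = 2.0     # linear stiffness term, N·m/rad (hypothetical)
K3 = 400.0   # cubic stiffness term, N·m/rad^3 (hypothetical)

def torque_from_deflection(delta):
    """Hardening spring: compliant near zero load, stiff under large deflection."""
    return K1 * delta + K3 * delta ** 3

def estimate_torque(theta_motor, theta_output):
    """Deflection of the elastic element between motor side and output side
    gives the transmitted torque via the known stiffness curve."""
    return torque_from_deflection(theta_motor - theta_output)

print(estimate_torque(0.30, 0.25))  # 0.05 rad deflection -> ~0.15 N·m
```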
Guoying Gu (School of Mechanical Engineering, Shanghai Jiao Tong University)
Bio: Guoying Gu is a Professor in the School of Mechanical Engineering at Shanghai Jiao Tong University. He was a Humboldt Fellow in Germany and a Visiting Scholar at the Massachusetts Institute of Technology, the National University of Singapore, and Concordia University. His research interests include soft robotics, intelligent wearable systems, and smart-material sensing, actuation, and motion control. He is the author or co-author of over 90 publications, which have appeared in Science Robotics, Science Advances, IEEE Transactions, Advanced Functional Materials, Soft Robotics, and elsewhere, as well as in book chapters and conference proceedings. He serves as an Associate Editor of IEEE Transactions on Robotics. He has also served several journals as an Editorial Board Member, Topic Editor, or Guest Editor, and several international conferences/symposia as Chair, Co-Chair, Associate Editor, or Program Committee Member.
Presentation: Design of Soft Robots and the Preliminary Applications on Wearable Prosthetic Hands
Abstract: Conventional robots with rigid actuation and mechanisms have made great progress in automated assembly and manufacturing. With the increasing requirement to interact with humans and unstructured environments, soft robots made of soft materials are promising: they can sustain large deformation and adapt to their surroundings while inducing little pressure or damage. However, the design and control of soft wearable robots still pose grand challenges, and few applications with humans have been demonstrated. In this talk, I will introduce our recent developments in soft actuators, sensors, and robots, and their applications in e-skins, prosthetic hands, and assistive gloves.