List of Speakers

Katherine Driggs-Campbell (Department of Electrical and Computer Engineering, UIUC)

Bio: Katie Driggs-Campbell is currently an assistant professor at the University of Illinois at Urbana-Champaign in the Department of Electrical and Computer Engineering. Prior to that, she was a Postdoctoral Research Scholar at the Stanford Intelligent Systems Laboratory in the Aeronautics and Astronautics Department. She received a B.S.E. with honors from Arizona State University in 2012 and an M.S. from UC Berkeley in 2015. She earned her PhD in 2017 in Electrical Engineering and Computer Sciences from the University of California, Berkeley. Her lab works on human-centered autonomy, focusing on the integration of autonomy into human-dominated fields, merging ideas from robotics, learning, human factors, and control.

Presentation: Inference, Prediction, and Representations for Safe Navigation

Abstract: Autonomous systems and robots are becoming prevalent in our everyday lives and changing the foundations of our way of life. However, the desirable impacts of autonomy are only achievable if the underlying algorithms can handle the unique challenges humans present: People tend to defy expected behaviors and do not conform to many of the standard assumptions made in robotics. To design safe, trustworthy autonomy, we must transform how intelligent systems interact, influence, and predict human agents. In this talk, we'll discuss how inferring hidden states (e.g., traits, intent) coupled with robust prediction methods can be used to improve decision-making and control in interactive settings. Specifically, we’ll discuss recent work in goal-oriented prediction that simultaneously estimates intent and predicts the future trajectory. We'll also consider the influences and interactions between agents (humans and/or robots) in crowded settings, resulting in new structural representations for reinforcement learning that produce safe, socially-aware policies for crowd navigation. We demonstrate the effectiveness of our proposed approaches on fully equipped autonomous vehicles and mobile robots that operate among humans with ease (most of the time).
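As a rough, hedged illustration of how intent inference can be coupled with trajectory prediction (a minimal sketch, not the speaker's method), the Python snippet below keeps a Bayesian belief over a few candidate goals and rolls the agent forward at constant speed toward the most likely goal; the goal set, noise scale sigma, and function names are illustrative assumptions.

import numpy as np

def update_goal_belief(belief, position, velocity, goals, sigma=0.5):
    # Re-weight each candidate goal by how well heading toward it explains
    # the currently observed velocity (Gaussian likelihood on the mismatch).
    likelihoods = []
    speed = np.linalg.norm(velocity)
    for goal in goals:
        direction = goal - position
        direction = direction / (np.linalg.norm(direction) + 1e-9)
        mismatch = np.linalg.norm(velocity - speed * direction)
        likelihoods.append(np.exp(-0.5 * (mismatch / sigma) ** 2))
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

def predict_trajectory(position, speed, goal, horizon=20, dt=0.1):
    # Constant-speed rollout along the straight line to the inferred goal.
    trajectory = [np.array(position, dtype=float)]
    for _ in range(horizon):
        to_goal = goal - trajectory[-1]
        dist = np.linalg.norm(to_goal)
        step = to_goal / (dist + 1e-9) * min(speed * dt, dist)
        trajectory.append(trajectory[-1] + step)
    return np.array(trajectory)

# Example: two candidate goals, a pedestrian moving roughly toward the first one.
goals = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
belief = np.array([0.5, 0.5])
belief = update_goal_belief(belief, np.array([0.0, 0.0]), np.array([1.0, 0.1]), goals)
likely_goal = goals[int(np.argmax(belief))]
future = predict_trajectory(np.array([0.0, 0.0]), 1.0, likely_goal)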

Zhi Li (Robotics Engineering Department, Worcester Polytechnic Institute)

Bio: Prof. Jane Li is an assistant professor with the Robotics Engineering Department of Worcester Polytechnic Institute. She received her Master's degree in Mechanical Engineering in 2009 from the University of Victoria (Canada), and her Ph.D. in Computer Engineering in 2014 from the University of California, Santa Cruz. She was a postdoctoral associate at Duke University in 2015-2016. She has extensive experience in human-robot interaction and interfaces for various medical robots, including upper-limb exoskeletons for stroke rehabilitation, tele-surgical robots, and tele-nursing robots. At WPI, she focuses on the design of assisted teleoperation interfaces for nursing robots, based on an understanding of the perception-action coupling of humans and cyber-human systems. Her research aims to bridge the gap between human movement science and human-robot interaction.

Presentation: Designing Teleoperation Interfaces for Mobile Humanoid Nursing Robots

Abstract: Tele-robotic systems show promise to support the human healthcare workforce in the ongoing COVID-19 crisis and during future pandemic outbreaks. Future tele-nursing robots are expected to go beyond mere tele-presence and to perform nursing assistance tasks that involve the coordinated, remote control of manipulation, navigation, and active perception. Despite recent advances in robot hardware and autonomy, the desired human-robot teaming hasn't been realized because of the lack of efficient, intuitive, and ergonomic teleoperation interfaces. The resulting workload and training effort also discourage the nursing workforce from adopting robots in the future workplace and in education. This talk presents our recent development of tele-nursing robot interfaces to address these problems, including: 1) human-factors experiments that reveal the perception-action coupling and visual-haptic sensory integration in the use of active telepresence; 2) the remote perception problems in teleoperating mobile manipulators to perform various navigation and loco-manipulation nursing tasks; and 3) the integration of haptic and augmented-reality visual feedback to assist dexterous tele-manipulation.

Nazita Taghavi (Louisville Automation and Robotic Research Institute, University of Louisville)

Bio: Dr. Taghavi is a postdoctoral research associate with the Louisville Automation and Robotic Research Institute (LARRI) at the University of Louisville. She received her Ph.D. in Mechanical Engineering in 2020 from Iowa State University. Her primary research interests are control engineering, robotics, and human-robot interactions. She has extensive experience in the design and development of medical and assistive robotic devices for diagnosis and rehabilitation. Her recent focus in the field of medical robotics is in autism studies and the development of machine learning algorithms for an intelligent physiotherapist robot. This social robot aims to support autistic children and assist with autism diagnosis and treatment.

Presentation: Non-Contact and Emotionally Safe Interaction of Physiotherapist Social Robot with Autistic Children for Autism Diagnosis and Treatment

Abstract: Children with autism spectrum disorder often need different physiotherapy practices to learn new skills. However, physiotherapists are available to these children for only a limited amount of time. Since studies show that autistic children interact positively with social robots, it is compelling to use these robots as physiotherapists for these children. Each child is unique in abilities and needs; therefore, the robot interacting with an autistic child must be smart enough to diagnose the severity of autism and practice appropriately with the child. For this purpose, the robotic system needs to obtain information and sensor signals from the child. However, it is common for autistic children to suffer from hyper-sensory overload. For these children, it is intolerable to wear objects around the head, hands, and other body parts, so wearable sensors are emotionally threatening.

We have developed two algorithms for a social robot to interact with autistic children based only on data from a non-contact motion tracking system, which makes the robot emotionally safe and usable for autistic children. The first algorithm, segment-based online dynamic time warping (SODTW), can diagnose autism based on the motion performance of the autistic child. The second algorithm is based on deep reinforcement learning. Using this algorithm, the robot learns from the child and adapts the speed and shape of its motion to generate a suitable physiotherapy motion for the child to practice. The result of these two algorithms is that the robot safely interacts with the child, adapts itself to the child, and evaluates the child's performance over time. It can also save data from the child so the human physiotherapist can track the child's performance and progress.
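For context, standard dynamic time warping scores how closely a child's motion matches a reference motion despite timing differences; the sketch below is the classic offline DTW recurrence in Python (the segment-based online variant, SODTW, described above is not reproduced here), and the signal names are purely illustrative.

import numpy as np

def dtw_distance(reference, observed):
    # Classic dynamic time warping: cumulative cost of the best alignment
    # between two 1-D motion signals, tolerant of timing differences.
    n, m = len(reference), len(observed)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(reference[i - 1] - observed[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Example: a reference arm-raising trajectory vs. a slower imitation of it.
reference = np.sin(np.linspace(0, np.pi, 50))
observed = np.sin(np.linspace(0, np.pi, 80))
score = dtw_distance(reference, observed)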

Cong Wang (Department of Electrical and Computer Engineering, New Jersey Institute of Technology)

Bio: Dr. Cong Wang is a faculty member in the ECE department at the New Jersey Institute of Technology. Before joining NJIT in 2015, Dr. Wang was a Lecturer and Research Engineer at the University of California, Berkeley. He obtained his PhD degree in Mechanical Engineering from UC Berkeley in 2014, before which he attended Tsinghua University and obtained his master's degree in Automotive Engineering and bachelor's degree in Mechanical Engineering and Automation in 2010 and 2008, respectively.

Presentation: Fundamental Concerns in Human-Robot Co-working

Abstract: The concept of cobots and their application to the physical collaboration of humans and robots have generated a great deal of buzz. Despite many commercial models becoming available, there is as yet little sign of cobots being effectively used in industry. This talk will examine several related aspects and explain some fundamental concerns that are hampering their application. The factors to be discussed include collision mechanics, psychological perception, productivity, and risk management. Such factors sit beyond the conventional theories of robotics but fundamentally challenge the application of cobots.

Jingang Yi (Department of Mechanical and Aerospace Engineering, Rutgers University)

Bio: Professor Jingang Yi received the B.S. degree in electrical engineering from Zhejiang University in 1993, the M.Eng. degree in precision instruments from Tsinghua University in 1996, and the M.A. degree in mathematics and the Ph.D. degree in mechanical engineering from the University of California, Berkeley, in 2001 and 2002, respectively. He is currently a Full Professor in mechanical engineering and a Graduate Faculty member in electrical and computer engineering at Rutgers University. His research interests include human-robot interactions and assistive robotics, autonomous robotic and vehicle systems, dynamic systems and control, mechatronics, and automation science and engineering, with applications to biomedical, transportation, and civil infrastructure systems. Prof. Yi is a Fellow of the American Society of Mechanical Engineers (ASME) and a Senior Member of IEEE. He has received several awards, including being named a 2017 Rutgers Chancellor's Scholar, the 2014 ASCE Charles Pankow Award for Innovation, the 2013 Rutgers Board of Trustees Research Fellowship for Scholarly Excellence, and the 2010 NSF CAREER Award. He has coauthored several best papers in IEEE Transactions on Automation Science and Engineering and at IEEE/ASME AIM, ASME DSCC, and IEEE ICRA. He serves as a Senior Editor for IEEE Robotics and Automation Letters and an Associate Editor for the International Journal of Intelligent Robotics and Applications. He has also served on the editorial boards of the IFAC journals Control Engineering Practice and Mechatronics, as well as IEEE/ASME Transactions on Mechatronics, IEEE Transactions on Automation Science and Engineering, IEEE Robotics and Automation Letters, and the ASME Journal of Dynamic Systems, Measurement and Control.

Presentation: Control of Unstable Physical Human-Machine Interactions: A Rider-Bikebot Example

Abstract: Humans with trained motor skills can fluidly and flexibly interact with machines, while smart machines can provide motor assistance and enhancement to facilitate human motor skill learning. However, we currently lack theories and design tools to effectively model and tune human motor control and its interactions with machines. In this talk, I will discuss recent developments in the control of human motor skills through unstable physical human-machine interactions (upHMI). The rider-bikebot (i.e., bicycle-like robot) interaction is used as an upHMI paradigm to examine a sensorimotor theory for modeling human motor control in balancing motor activities. I will present the balance equilibrium manifold (BEM) concept to study how a human rider balances a bikebot while tracking a desired trajectory. A performance metric is also introduced to quantify balance motor skills using the BEM. Extensive experiments are conducted to validate the analyses and demonstrate the balance skill metrics. Finally, I will briefly present balancing stability analysis and motor skill control of the rider-bikebot interactions.
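To make the balance-equilibrium idea concrete (a simplified, textbook-style illustration rather than the model used in the talk), a point-mass bicycle in a steady turn balances when the roll angle, steering angle, and forward speed approximately satisfy

\tan\varphi_e \;\approx\; \frac{v^2 \tan\delta}{g\,l},

where $\varphi_e$ is the equilibrium roll angle, $\delta$ the steering angle, $v$ the forward speed, $l$ the wheelbase, and $g$ gravitational acceleration. The set of all such $(\varphi_e, \delta, v)$ triples forms a surface playing the role of a balance equilibrium manifold, and the distance of the rider's actual state from this surface suggests how a balance-skill metric could be defined.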

Néstor Becerra Yoma (Department of Electrical Engineering, Universidad de Chile)

Bio: Néstor Becerra Yoma obtained his Ph.D. from the University of Edinburgh, UK, and the M.Sc. and B.Sc. degrees in electrical engineering from the State University of Campinas, Brazil. He is a full professor at the University of Chile, where he founded the Speech Processing and Transmission Laboratory (http://www.lptv.cl) and investigates artificial intelligence and signal processing applied to human-robot integration, among other topics. He was a visiting professor at Carnegie Mellon University, USA, between 2016 and 2017. He has directed numerous R&D projects (e.g., http://www.cmrsp.cl) and is the author of dozens of articles in international research journals and conferences, as well as patents. His interests include multidisciplinary research on artificial intelligence and signal processing in various fields of application such as human-robot interaction, health, volcanology, bioacoustics, and seismology. He is the creator and organizer of the IA2030CHILE initiative and participated in the committee convened by the Senate's “Future Challenges, Science, Technology and Innovation Commission” to address the development of artificial intelligence in Chile. He was a member of the Expert Committee on AI of the Ministry of Science, Technology, Knowledge and Innovation. He started the Chilean chapter of the IEEE Signal Processing Society.

Presentation: Toward Voice-Based User Profiling in HRI

Abstract: Seamless, collaborative human-robot partnership will be a strategic component of commercial applications in the coming decades. Consequently, social robotics is one of the most important and critical challenges in robot science and engineering. In this context, safety is a critical issue. Accordingly, user profiling in the physical, cognitive, and social domains is crucial, and social robots should observe multimodal inputs from their human teammates. However, some input modalities, such as physiological signals, require wearable sensors that may be invasive from the user's point of view. Also, image processing may not always be possible, depending on the operating conditions. In contrast, speech conveys a huge amount of linguistic and paralinguistic (e.g., prosody) information. Beyond voice commands to robots, speech is a window into the psychological, physical, and emotional condition of humans. However, speech analysis and processing are very sensitive to noisy environments (including the “cocktail party” effect), reverberation, and the time-varying acoustic channels that result from dynamic scenarios. In this talk we will present our recent results at the Speech Processing and Transmission Lab on robust voice-based HRI in indoor reverberant environments. First, we will discuss the importance of speech-based HRI in collaborative HRI. Then, the problem of the time-varying acoustic channel is presented and addressed in mobile HRI by making use of deep learning and image-based beamforming. Finally, we present the results of our investigation into how robot head movements that follow the audio target affect beamforming effectiveness.
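As a simple illustration of the beamforming ingredient (a minimal delay-and-sum sketch, not the lab's deep-learning or image-based method), the Python snippet below aligns far-field microphone signals toward a target direction in the frequency domain; the array geometry, sampling rate, and function name are assumptions for illustration.

import numpy as np

def delay_and_sum(signals, mic_positions, target_dir, fs, c=343.0):
    # Far-field delay-and-sum beamformer: delay each microphone channel so the
    # wavefront from target_dir adds coherently, then average the channels.
    # signals: (num_mics, num_samples); mic_positions: (num_mics, 3) in metres;
    # target_dir: unit vector pointing from the array toward the source.
    num_mics, num_samples = signals.shape
    delays = mic_positions @ target_dir / c  # channels nearer the source arrive earlier
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    steering = np.exp(-2j * np.pi * np.outer(delays, freqs))  # apply the aligning delays
    aligned = spectra * steering
    return np.fft.irfft(aligned.mean(axis=0), n=num_samples)

# Example: a 4-microphone linear array steered broadside (toward +y).
fs = 16000
mics = np.array([[x, 0.0, 0.0] for x in (-0.15, -0.05, 0.05, 0.15)])
noisy = np.random.randn(4, fs)  # stand-in for one second of recorded channels
enhanced = delay_and_sum(noisy, mics, np.array([0.0, 1.0, 0.0]), fs)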