Yiannis Demiris is a Professor in Human-Centred Robotics at Imperial College London, where he holds a Royal Academy of Engineering Chair in Emerging Technologies (Personal Assistive Robotics). He established the Personal Robotics Laboratory at Imperial in 2001. He holds a PhD in Intelligent Robotics and a BSc (Hons) in Artificial Intelligence and Computer Science, both from the University of Edinburgh. He has been a European Science Foundation (ESF) Junior Scientist Fellow and a COE Fellow at the Agency of Industrial Science and Technology (AIST-ETL) of Japan. He is currently a Fellow of the Institution of Engineering and Technology (FIET) and a Fellow of the British Computer Society (FBCS).
Hatice Gunes is a Professor of Affective Intelligence and Robotics (AFAR) and the Director of the AFAR Lab at the University of Cambridge. Her expertise is in affective computing and social signal processing, cross-fertilizing research in multimodal interaction, computer vision, machine learning and human-robot interaction. She has published over 150 papers in these areas (h-index = 36, citations > 7,100), with her most recent work focusing on lifelong learning for affect recognition, fairness and affective robotics, and longitudinal HRI for wellbeing. Her recent research highlights include serving as Guest Editor of the 2022 Frontiers Research Topic on Lifelong Learning and Long-Term Human-Robot Interaction and of the 2021 IEEE Trans. Affect. Comput. Special Issue on Automated Perception of Human Affect from Longitudinal Behavioral Data, being an RSJ/KROS Distinguished Interdisciplinary Research Award Finalist at IEEE RO-MAN'21, receiving the Distinguished PC Award at IJCAI'21, being a Best Paper Award Finalist at IEEE RO-MAN'20 and a Finalist for the 2018 Frontiers Spotlight Award, and receiving the Outstanding Paper Award at IEEE FG'11 and the Best Demo Award at IEEE ACII'09. Prof Gunes is the former President of the Association for the Advancement of Affective Computing (AAAC), General Co-Chair of ACM ICMI'24 and ACII'19, and Program Co-Chair of ACM/IEEE HRI'20 and IEEE FG'17. She was a member of the Human-Robot Interaction Steering Committee (2018-2021) and Chair of the Steering Board of IEEE Transactions on Affective Computing (2017-2019). In 2019 she was awarded a prestigious EPSRC Fellowship as a personal grant (2019-2024) to investigate adaptive robotic emotional intelligence for wellbeing, and was named a Faculty Fellow of the Alan Turing Institute, the UK's national centre for data science and artificial intelligence (2019-2021). Prof Gunes is a Senior Member of the IEEE, a member of the AAAC and a Staff Fellow of Trinity Hall.
Anca Dragan is an Associate Professor in the EECS Department at UC Berkeley. Her goal is to enable robots to work with, around, and in support of people. She runs the InterACT Lab, which focuses on algorithms for human-robot interaction: algorithms that move beyond the robot's function in isolation and generate robot behavior that coordinates well with people and is aligned with what we actually want the robot to do. The InterACT Lab works across applications ranging from assistive arms to quadrotors to autonomous cars, drawing from optimal control, game theory, reinforcement learning, Bayesian inference, and cognitive science. She also helped found, and serves on the steering committee of, the Berkeley AI Research (BAIR) Lab, and is a co-PI of the Center for Human-Compatible AI. She has been honored with a Sloan Fellowship, the MIT TR35, the Okawa award, an NSF CAREER award, and the PECASE award.
Marynel Vázquez is an Assistant Professor in Yale's Computer Science Department, where she leads the Yale Interactive Machines Group (IMG). Her main area of research is Human-Robot Interaction (HRI), where she studies fundamental problems in enabling group human-robot interactions. For instance, her work investigates social group phenomena in HRI, including spatial patterns of behavior typical of group conversations and group social influence. She also works on advancing autonomous, social robot behavior, in terms of both perception and decision making; an example is her work on learning social navigation policies, which has led her group to create interactive online surveys to scale data collection in HRI. A key idea driving her recent work in group HRI is abstracting interactions as graphs, which allows robots to reason about individual, relationship and group factors in unison (e.g., her work on group detection and pose generation). She also enjoys building robotic systems to demonstrate ideas in practice (Chester, Shutter).
Henny Admoni is an Assistant Professor at Carnegie Mellon University, where she directs the Human And Robot Partners (HARP) Lab, which develops assistive and collaborative robots and AI to help improve people's lives. Her research spans human-robot interaction, assistive robotics, human-centered learning, and modeling human behavior, bridging autonomous robotics and cognitive science to develop algorithms for robot behavior built around the unique cognitive science of human-robot interaction. Using her background in computer science and cognitive psychology, she designs, implements, and evaluates robotic systems that provide assistance on complex tasks in human environments, drawing on principles from robotics, artificial intelligence, machine learning, computer vision, and cognitive science.