The Speakers

Yiannis Demiris


Title: Assistive Robots: from robot learning to human empowerment

Abstract: Assistive robots hold great potential for empowering people to achieve their intended tasks, for example during activities of daily living. However, every person is unique, creating challenges for the robots’ perceptual, cognitive and motor systems, and necessitating the development of interactive learning algorithms that personalise the robot’s assistance to each individual. In this talk, I will outline research in our laboratory towards the development of algorithms that enable learning during human-robot interaction, for robots as well as for humans; I will demonstrate their application in activities of daily living, and illustrate how computer vision, learning and augmented reality can enable adaptive user interfaces for interacting with assistive robots in a safe and trustworthy manner.

Bio: Yiannis Demiris is a Professor in Human-Centred Robotics at Imperial College London, where he holds a Royal Academy of Engineering Chair in Emerging Technologies (Personal Assistive Robotics). He established the Personal Robotics Laboratory at Imperial in 2001. He holds a PhD in Intelligent Robotics and a BSc(Hons) in Artificial Intelligence and Computer Science, both from the University of Edinburgh. He has been a European Science Foundation (ESF) junior scientist Fellow, and a COE Fellow at the Agency of Industrial Science and Technology (AIST - ETL) of Japan. He is currently a Fellow of the Institution of Engineering and Technology (FIET), a Fellow of the British Computer Society (FBCS) and a Fellow of the Royal Statistical Society (FRSS).

Prof. Demiris' research interests include Artificial Intelligence, Machine Learning, and Intelligent Robotics, particularly intelligent perception, multi-scale user modelling, and adaptive cognitive control architectures, with the aim of determining how intelligent robots can generate personalised assistance that improves humans' physical, cognitive and social well-being.

Stefanie Tellex


Title: Towards Complex Language in Partially Observed Environments

Abstract: Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. Existing approaches use action-based representations that do not capture the goal-based meaning of a language expression and do not generalize to partially observed environments. The aim of my research program is to create autonomous robots that can understand complex goal-based commands and execute those commands in partially observed, dynamic environments. I will describe demonstrations of object-search in a POMDP setting with information about object locations provided by language, and mapping between English and Linear Temporal Logic, enabling a robot to understand complex natural language commands in city-scale environments. These advances represent steps towards robots that interpret complex natural language commands in partially observed environments using a decision theoretic framework.
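
To give a flavour of the English-to-LTL mapping mentioned above (an illustrative sketch with made-up region propositions, not an example taken from the talk), a command such as "Go to the kitchen and then the bedroom, but stay out of the lab" could be grounded as a Linear Temporal Logic formula:

\[ \varphi \;=\; \mathsf{F}\,(\mathrm{kitchen} \,\wedge\, \mathsf{F}\,\mathrm{bedroom}) \;\wedge\; \mathsf{G}\,\neg\,\mathrm{lab} \]

Here F ("eventually") and G ("always") are the standard temporal operators, so the formula captures the goal of the command rather than a fixed action sequence; a planner is then free to choose any behaviour whose execution satisfies it.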

Bio: Stefanie Tellex is an Associate Professor of Computer Science at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum's AI's 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, a NASA Early Career Award in 2016, a 2016 Sloan Research Fellowship, and an NSF CAREER Award in 2017. Her work has been featured in the press on National Public Radio, BBC, MIT Technology Review, Wired and Wired UK, as well as the New Yorker. She was named one of Wired UK's Women Who Changed Science in 2015, and her work was listed among MIT Technology Review's Ten Breakthrough Technologies in 2016.

Dan Bohus


Title: Situated Interaction

Abstract: Situated language interaction is a complex, multimodal affair that extends well beyond the spoken word. When interacting, we use a wide array of non-verbal signals and incrementally coordinate with each other to simultaneously resolve several problems: we manage engagement, coordinate on taking turns, recognize intentions, and establish and maintain common ground as a basis for contributing to the conversation. Proximity and body pose, attention and gaze, head nods and hand gestures, prosody and facial expressions all play very important roles in this process. And just as advances in speech recognition opened up the field of spoken dialog systems a couple of decades ago, current advances in vision and other perceptual technologies are again opening up new horizons -- we are starting to be able to build machines that computationally understand these social signals and the physical world around them, and participate in physically situated interactions and collaborations with people. In this talk, using a number of research vignettes from work we have done over the last decade at Microsoft Research, I will draw attention to some of the challenges and opportunities that lie ahead of us in this exciting space. In particular, I will discuss issues with managing engagement and turn-taking in multiparty open-world settings, and more generally highlight the importance of timing and fine-grained coordination in situated language interaction. Finally, I will conclude by describing an open-source framework we are developing that promises to simplify the construction of physically situated interactive systems, and in the process further enable and accelerate research in this area.

Bio: Dan Bohus is a Senior Principal Researcher in the Adaptive Systems and Interaction Group at Microsoft Research. His work centers on the study and development of computational models for physically situated spoken language interaction and collaboration. The long-term question that shapes his research agenda is: how can we enable interactive systems to reason more deeply about their surroundings and seamlessly participate in open-world, multiparty dialog and collaboration with people? Prior to joining Microsoft Research, Dan obtained his Ph.D. from Carnegie Mellon University.

Changliu Liu

Title: Safety-critical learning and control for collaborative robots

Abstract: This talk will share some of our recent work that enables autonomous robotic systems to safely operate in uncertain and human-involved environments. The safety specification can be written as constraints on the system's state space. To ensure that these constraints are satisfied at all times, the robot needs to correctly anticipate the future and only select actions that will not lead to a state that violates the constraints. To deal with the uncertainties, the robot needs to continuously learn the environment dynamics and adjust its behavior accordingly. This solution strategy requires seamless integration between set-theoretic control and continual learning. This talk will focus on two aspects of the problem: 1) how to perform provably safe control in real time with learned models and 2) how to achieve data-efficient learning. For the first aspect, I will introduce a safe control method that ensures forward invariance inside the safety constraint with black-box dynamic models (e.g., deep neural networks). For the second aspect, I will introduce a verification-guided learning method that performs more learning on the most vulnerable parts of the model. The computations that involve deep neural networks are handled by NeuralVerification.jl, our sound verification toolbox that can check input-output properties of deep neural networks. I will conclude the talk with future visions.
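
As a rough sketch of the set-theoretic idea behind the talk (with illustrative notation assumed here, not the specific formulation presented): if the safety specification is written as a constraint on the state, say φ(x) ≤ 0, then safe control amounts to keeping the corresponding safe set forward invariant, for example by requiring that every chosen input u satisfy

\[ \mathcal{X}_S = \{\, x : \phi(x) \le 0 \,\}, \qquad \dot{\phi}(x_t, u_t) \;\le\; -\lambda\, \phi(x_t) \quad \text{for some fixed } \lambda > 0, \]

so that a trajectory starting inside the safe set can never cross its boundary, even when the dynamics used to evaluate the derivative of φ come from a learned black-box model such as a deep neural network.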

Bio: Changliu Liu is an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Intelligent Control Lab. Prior to joining CMU in 2019, she was a postdoc at the Stanford Intelligent Systems Laboratory. She obtained her Ph.D. from UC Berkeley in 2017, where she worked in the Mechanical Systems & Control Lab. Her primary research focus is on the design and verification of intelligent systems that work with people, with applications in manufacturing and transportation. She published the book Designing Robot Behavior in Human-Robot Interactions with CRC Press in 2019.