Speakers

Invited Speakers

Julie A. Adams
Oregon State University

Human interactions with multiple robots: historical lessons learned and future challenges 

Abstract:
As the first researcher to conduct a formal human subjects evaluation of a single human-multiple robot system and the first to collect objective human subjects data for a single human deploying a swarm of more than 100 heterogeneous robots in an urban environment, Dr. Adams can provide a unique perspective on the field. She will discuss key lessons learned from the last 30 years of research focused on facilitating humans’ ability to interact with two or more robots. Many historical research questions remain unresolved, and as the field progresses, new research challenges arise. Important potential future research challenges will be presented to serve as a foundation for broader research discussions.

Bio:
Dr. Adams, a Dean’s Professor in Oregon State University’s College of Engineering, is the founder of the Human-Machine Teaming Laboratory and the Associate Director of Research of OSU’s Collaborative Robotics and Intelligent Systems (CoRIS) Institute. Adams has worked in the area of human-machine teaming for over thirty years. Throughout her career she has focused on human interaction with unmanned systems, but has also worked on manned civilian and military aircraft at Honeywell, Inc. and on commercial, consumer, and industrial systems at the Eastman Kodak Company. Her research, which is grounded in robotics applications for domains such as first response, archaeology, oceanography, the national airspace, and the U.S. military, focuses on distributed artificial intelligence, swarms, robotics, and human-machine teaming. Dr. Adams is an NSF CAREER award recipient, an ANA Avatar XPRIZE judge, and an HFES Fellow.

Jen Jen Chung
University of Queensland

Going with the Flow: Robots Navigating Human Space

Abstract:

The tremendous potential offered by robotic automation has been evidenced by its revolution of large-scale manufacturing, mining, and agriculture. Now, with increasing social and economic pressures to transfer this success to service-focused sectors, we're seeing robots moving into our shops, our hospitals, and our homes. These human-centred spaces heighten the challenges of robotic perception for scene understanding and emphasise the need for robust planning and interaction in unstructured and dynamic environments. This talk will focus on the very literal problem of robots navigating human spaces, where robots need to understand the nuances of human interaction in crowds to enable safe, smooth and successful motion planning across the spectrum of collaborative-ambivalent-adversarial pedestrian scenarios.


Bio: 

Jen Jen Chung is an Associate Professor in Mechatronics within the School of Electrical Engineering and Computer Science at The University of Queensland. Her current research interests include perception, planning, and learning for robotic mobile manipulation; algorithms for robot navigation through human crowds; and informative path planning and adaptive sampling. Prior to working at UQ, Jen Jen was a Senior Researcher in the Autonomous Systems Lab (ASL) at ETH Zürich from 2018 to 2022 and a Postdoctoral Scholar at Oregon State University researching multiagent learning methods from 2014 to 2017. She received her Ph.D. (2014) and B.E. (2010) from the University of Sydney, completing her doctoral work on information-based exploration-exploitation strategies for autonomous soaring platforms at the Australian Centre for Field Robotics.

Ryan Gariepy
Clearpath Robotics and OTTO Motors

Unified Robot Fleet Control: Lessons From The Factory

Abstract:

OTTO’s mobile robots have operated fully autonomously for over 5 million hours in factories and warehouses around the world. With this have come many lessons in how to simplify and streamline fleet setup and optimization without compromising the ability to handle the substantial workflow variances present across our deployments, lessons we’ve incorporated into our latest interface designs. The resulting system enables non-expert users to quickly commission heterogeneous fleets of robots without requiring substantial changes to existing factory workflows, and it has already undergone rigorous real-world testing.

This talk will present the system- and user-level considerations we faced, the concepts we developed in response, results from the field, and trends that are likely to drive future work in the area.


Bio:

Ryan Gariepy is co-founder and CTO of both Clearpath Robotics and OTTO Motors. In addition, he serves on the board of the Open Source Robotics Foundation, is a co-founder of ROSCon, and co-founded and co-chairs the Canadian Robotics Council. Ryan is also an advisor to several startups and venture capital groups, and he helped found the Next Generation Manufacturing Canada initiative. He is a regular speaker, panelist, and expert guest on topics including robotics, AI, and technology policy. Ryan completed both a B.A.Sc. degree in Mechatronics Engineering and an M.A.Sc. degree in Mechanical Engineering at the University of Waterloo, and he has over seventy pending patents in the field of autonomous systems.

Paolo Robuffo Giordano
CNRS at IRISA, Rennes

Shared Control for Tele-Navigation of Multi-Robot Systems 

Abstract:
Current and future robotics applications are expected to address increasingly complex tasks in increasingly unstructured environments, in co-existence or co-operation with humans. Achieving full autonomy is clearly a "holy grail" for the robotics community; however, one could easily argue that real full autonomy is, in practice, out of reach for years to come, and in some cases not even desirable. The gap between the cognitive skills of humans (e.g., perception, decision making, general "scene understanding") and those of the most advanced robots today is still huge. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will probably remain so for the next decades.

These considerations motivate research efforts into the (large) topic of shared control for complex robotic systems: on the one hand, empowering robots with a large degree of autonomy that allows them to operate effectively in non-trivial environments; on the other hand, keeping human users in the loop with (partial) control over some aspects of the overall robot behavior.

In this talk, I will review several recent results on novel shared control architectures that provide a human operator with an easy "interface" for commanding a group of multiple robots at a high level while coping with many local (e.g., collision avoidance) and global (e.g., connectivity maintenance) constraints.


Bio:

Paolo Robuffo Giordano is a CNRS senior research scientist and head of the Rainbow group at IRISA/Inria, Rennes, France. He received his PhD in Systems Engineering in 2008 from the University of Rome “La Sapienza”. From January 2007 to July 2007 and from November 2007 to October 2008, he was a research scientist at the Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Germany, and from October 2008 to November 2012 he was a senior research scientist at the Max Planck Institute for Biological Cybernetics and scientific leader of the “Human-Robot Interaction” group. His scientific interests include motion control for mobile robots and mobile manipulators, visual control of robots, active sensing, bilateral teleoperation, shared control, multi-robot estimation and control, and aerial robotics.

Roderich Gross
University of Sheffield

Multi-Operator Control of Connectivity-Preserving Robot Swarms 

Abstract:
Robot swarms hold the potential to carry out large-scale tasks effectively, though their practical use is sometimes limited by an inability to interface with humans. This talk addresses the challenge of making robot swarms work in scenarios where they share an environment with multiple human operators. We introduce a framework based on formal languages that makes it possible to dynamically assign robots to (i) help individual operators or (ii) help preserve the connectivity of the communication network among them. The framework supports automatic code generation to control the robots in a fully distributed way. We report the findings of a series of experiments in which multiple simulated/real users collaborate with swarms of simulated/real robots. The results show that the framework creates communication networks that not only adapt to the movements of operators but are also near optimal in length and enable operators to exchange robots. We discuss potential future uses of the framework, as currently explored in the Horizon Europe OpenSwarm project.


Bio:

Roderich Gross is an Associate Professor in the Department of Automatic Control and Systems Engineering at the University of Sheffield where he leads the Natural Robotics Lab. He has made algorithmic and other contributions to swarm robotics and reconfigurable robots, and invented a machine learning method called Turing Learning. 

Michael Lewis
Carnegie Mellon University

Swarms and Supervision: How much automation do we really need and where do we need it? 

Abstract:
Advances in machine vision, reinforcement learning, and now generative AI raise the question of where humans may contribute to the functioning of swarms. Manually controlling more than 2 or 3 robots is impractical, while search performance rapidly deteriorates when more than 4-5 are monitored. Despite these discouraging numbers, military and industrial planners are rushing to deploy robotic swarms of increasing size, reaching 27 UAVs in the recent EDGE-22 field exercise. While expectations for collective action and information gathering are high, some goals are readily achievable while others pose significant problems. To illustrate, we introduce three approaches of increasing complexity. In a Flood the Zone strategy, swarm members move as a group in a common direction, avoiding collisions but otherwise acting independently in sensing and acting. In Search and Service missions, members break off into local groups to execute ‘plays’ when triggering conditions are encountered, rejoining the swarm upon completion. In Comprehensive Surveillance, information from across the swarm is pooled to provide a common picture. Coupled with automated detection and alerting, Flood the Zone and Search and Service missions can be effectively supervised at the local level, while Comprehensive Surveillance remains a difficult problem.


Bio:

Michael Lewis is a professor in the Department of Informatics and Networked Systems and the Intelligent Systems Program in the School of Computing and Information at the University of Pittsburgh. His research focuses on human interaction with intelligent automation and modeling human operators in complex systems. Since coming to the University of Pittsburgh he has studied visualization-based control interfaces, human-agent teamwork, and, for the past seventeen years, human-robot interaction. He is the author of more than 300 scientific papers in the areas of human factors and human-robot interaction. He has received numerous awards for his research, including best paper awards at major human factors conferences such as HCII-14, HFES-13, SMC-12, and HRI-11. He has served as an associate editor of Human Factors, IEEE SMC, JAAMAS, GDN, and CTW. As a RoboCup Rescue participant, his teams have placed in the top three on seven occasions. His group at the University of Pittsburgh has developed a number of widely used research tools, including USARSim, a robotic simulation adopted for the RoboCup Rescue competition, and CaveUT, software for inexpensive immersive displays. Lewis has participated in various large research efforts, including three MURIs, PRET, NSF ITR, Science of Autonomy, and others.


Kirstin Petersen
Cornell University

Studies of Mixed Autonomy and Communication in Multi-Agent Systems 

Abstract:
In this talk, I will present findings from three separate studies in the Collective Embodied Intelligence Lab at Cornell directed towards human-robot swarm interaction. First, I will introduce the Martha Robot platform, designed as an inexpensive yet capable open-source mobile robot to support HRI studies, capable of communicating intent visually and audibly through a soft inflatable interface. With colleagues, we are using this platform to better understand the preferred level of control in human-robot teams. Second, I will discuss recent findings on human-drone communication of intent through physical contact. Last, I will show how emergent behaviors in swarms of simulated Braitenberg vehicles can be affected through a single, manually controlled agent.


Bio:

Kirstin Petersen is an Assistant Professor and Aref and Manon Lahham Faculty Fellow in the School of Electrical and Computer Engineering at Cornell University. Her lab, the Collective Embodied Intelligence Lab, focuses on the design and coordination of robot collectives able to achieve complex behaviors beyond the reach of an individual, and on corresponding studies of how social insects do so in nature. Major research topics include swarm intelligence, embodied intelligence, soft robots, and bio-hybrid systems. Petersen did her postdoc at the Max Planck Institute for Intelligent Systems and her PhD at Harvard University and the Wyss Institute for Biologically Inspired Engineering. Her graduate work was featured in and on the cover of Science; she was named among the top 25 women to know in robotics by Robohub in 2018; and she received the Packard Fellowship in Science and Engineering in 2019 and the NSF CAREER award in 2021.