For the detailed program of workshops, please refer to: https://www.ro-man2024.org/program-at-a-glance.
Abstract: Effective human-robot collaboration requires a high level of cognitive compatibility between humans and machines. To achieve this, robots must incorporate models that account for the unique biases and characteristics of human perception and cognition. For example, insights from human motor control and embodied communication can inform the design of robot behaviors that are intuitive and transparent. While some models can be derived from existing knowledge, more complex cognitive processes must be learned through the robot's own experiences. The ultimate goal is to develop robots that are more “humane”: not by resembling humans physically, but by “seeing” and “thinking” in ways that align more closely with human cognition.
Abstract: Robots are progressively moving out of research laboratories and into real-world human environments, envisioned to provide companion care for elderly people, teach children with special needs, assist individuals in their day-to-day tasks at home or work, and offer services in public spaces. All these practical applications require that humans and robots work together in such environments, where interaction is unavoidable. Therefore, there has been an increasing effort towards advancing the perception and interaction capabilities of robots, enabling them to recognise, adapt to, and respond to human behaviours and actions with increasing levels of autonomy. However, this raises a long list of implications for ethical and responsible research and innovation. In this talk, I will briefly discuss these issues and give an overview of our ongoing research activities on understanding human behaviour, with an emphasis on social acceptance, fairness, and explainability.
09:30 – 09:37 Generating Contextually-Relevant Navigation Instructions for Blind and Low Vision People
Zain Merchant, Abrar Anwar, Emily Wang, Souti Chattopadhyay, and Jesse Thomason
09:37 – 09:44 3D Gaussian Splatting for Human-Robot Interaction
Shawn Bowser and Stephanie M. Lukin
09:44 – 09:51 Toward a Dialogue System Using a Large Language Model to Recognize User Emotions with a Camera
Hiroki Tanioka, Tetsushi Ueta, and Masahiko Sano
09:51 – 09:58 Interactive Terrain Affordance Learning via VAE Query Selection & Data Manipulation
Jordan Sinclair, Brian Reily, and Christopher Reardon
09:58 – 10:05 HRI-Free Evaluation of Embodied Social Attention Models Through Cognitive Robotic Simulation
Fares Abawi and Stefan Wermter
10:05 – 10:12 ‘Scrape That Barnacle’: Commanding Underwater Robot In-Contact Manipulation Tasks with Intuitive Spatial-Temporal-Force Features
Ramya Challa, Hunter Brown, Chirag Jain, Zachary Speiser, and Heather Knight
10:30 – 10:37 Policy-Enhanced Fallback Nodes in Behavior Trees
Andreas Naoum and Loizos Michael
10:37 – 10:44 Towards Teammate-Aware Active Search for Human-Multi-Robot Teams in Adverse Environments
Sherif Bakr, Brian Reily, and Christopher Reardon