Abstract: Intel Labs presents a focused exploration of human-robot collaboration (HRC) in semiconductor manufacturing, offering insights into real-world industrial challenges and applications. This research aims to democratize robot programming and enhance operational efficiency through intuitive user demonstrations and natural human-robot synergy. Key innovations include AI-Robot Primitives and Programming-by-Demonstration (PbD), supported by virtual-reality and haptic-feedback interfaces and by human-intent estimation for optimizing task co-execution fluency. These advancements enable immersive, intuitive programming and foster seamless collaboration between humans and robots. The presentation will highlight strategic deployments within Intel's processes, showcasing practical use cases of cobots in maintenance and manufacturing. This work underscores the transformative potential of human-robot teams in revolutionizing semiconductor production processes.
Bio: David obtained a master's degree in computer science at the National Polytechnic Institute (IPN-Cinvestav-Mx), specializing in Geometric Algebra in 3D Vision for Humanoid Robots. David furthered his studies in Germany, obtaining a Doctorate in Engineering (Dr.-Ing.) from the Karlsruhe Institute of Technology (KIT), focusing on Environmental Robot Perception for Grasping and Manipulation. As a post-doctoral research fellow at KIT, he contributed to numerous European projects in robotics and AI. His scholarly work includes 28+ publications in leading conferences and journals, alongside 95+ patents in multi-modal sensing and novel representation technologies, impacting fields such as robotics, smart spaces, autonomous vehicles, VR, AR, and more. Currently, David serves as a principal engineer and robotics team lead at Intel Labs' Intelligent System Research Lab. His active involvement in the Mexican Net of Talent Abroad, the Intel Latin Network, the Society of Hispanic Professional Engineers, and the German Society for Robotics underscores his dedication to meaningful industrial and academic contributions.
"Human-Robot Teaming for Assisted Living" by Yiannis Demiris
Abstract: There is substantial interest and progress in considering how robots can assist humans in activities of daily living, with applications in both home and work environments. From mobility aids (smart wheelchairs) to (bimanual) robots that help us get dressed, eat, and bathe, a fundamental (and multidisciplinary) research direction is how to assist in a human-acceptable manner, with user comfort, trust, and privacy added as optimisation criteria to the traditional task-performance metrics. In this talk, I will outline our research at Imperial’s Personal Robotics Lab on how multimodal sensing and inference of human states can influence the robot’s behaviour and levels of assistance across diverse tasks (mobility, handovers, and dressing, among others). I will detail the robotic challenges that frequently occur in assistive scenarios (such as dealing with deformable and articulated objects like clothes), as well as the challenges of utilising theoretical concepts such as trust in human experiments.
Bio: Yiannis is a professor of Human-centred Robotics at Imperial College London, where he also holds the Royal Academy of Engineering (RAEng) Chair in Emerging Technologies (CiET). He specialises in human-robot interaction, in particular robot perception of human states, human-in-the-loop learning and personalisation, and adaptive human-robot collaboration; he has published more than 270 refereed papers on these topics. He graduated with a BSc and a PhD in Intelligent Robotics from the University of Edinburgh, has been a visiting scholar at Harvard and at AIST in Japan, and established the Personal Robotics Laboratory at Imperial College in 2001.
Web page: http://profiles.imperial.ac.uk/y.demiris
Lab: www.imperial.ac.uk/Personal-Robotics
"Variable Autonomy for Brain-Machine Interfaces" by Tom Carlson
Abstract: Brain-Machine Interfaces (BMIs) offer an exciting mode of interaction that does not rely on our usual motor output pathways. This makes them a particularly attractive potential solution for supporting people with severe motor disabilities to (re-)gain a level of independence. However, BMIs generally achieve relatively low information transfer rates compared with traditional interfaces, so to compensate, they are often teamed with robots that have a high degree of autonomy. Nevertheless, it is important that the user retains control authority, so in this talk we will explore adjustable autonomy methods that enable the human to share control with a robot via a BMI, whilst adapting to their ever-evolving personal needs and capabilities.
Bio: Tom Carlson is Professor of Assistive Robotics at UCL, Vice-Dean (Education) for the Faculty of Medical Sciences, and Head of Education for the Division of Surgery and Interventional Science. His research lab is based in Aspire CREATe, the Centre for Rehabilitation Engineering and Assistive Technology. Tom is a Senior Fellow of the Higher Education Academy (SFHEA) and co-Director of the MSc in Rehabilitation Engineering and Assistive Technologies. He obtained his MEng in Electrical & Electronic Engineering (2006) and PhD in Intelligent Robotics (2010), both from Imperial College London. He then pursued postdoctoral research in shared control for brain-machine interfaces at EPFL, Switzerland, before joining UCL as a lecturer in 2013. From 2016 to 2018, he was a visiting professor at LAMIH UMR CNRS 8201, Université de Valenciennes et du Hainaut–Cambrésis, France. Prof Carlson also co-directed the INRIA (France) associated team ISI4NAVE (2016-2021) and co-founded the IEEE SMC Technical Committee on Shared Control.
"Enhancing Artificial Intelligence (AI) for Automation and Robotics Through Awareness of Human-Object Interactions" by Natasha Kholgade Banerjee
Abstract: Humans interact with objects in complex and diverse ways, through activities such as lifting, picking, handing objects to each other, assembling, disassembling, and repairing. As systems such as robots and intelligent agents become increasingly pervasive in user spaces, the artificial intelligence (AI) algorithms driving these systems need to be aware of human interactions with objects to ensure seamless integration for tasks such as human-robot collaboration (HRC) and automation in domains such as manufacturing and healthcare. In this talk, I will discuss my work on studying how humans perceive and interact with objects in order to create AI algorithms that are aware of human-object relationships. My talk will discuss the large multimodal datasets our lab, the Terascale All-sensing Research Studio (TARS), has contributed to enable the study of, and AI algorithm development for, tasks of interest in HRC such as multi-agent object handover, object repair, and assistance during lifting. I will talk about how our work on AI-driven sensing of user behavior from single- and multi-person data enhances HRC, and how our state-of-the-art work on automated human-aware damaged-object completion enables democratization of repair. Further, I will discuss how our new multi-institutional, cross-disciplinary research on “recyclofacturing” transforms manufacturing by enabling metal recyclers to fabricate user products from scrap metal, integrating AI, extended reality, HRC, and workforce training and education toward cyber-enabled manufacturing.
Bio: Natasha Kholgade Banerjee is the LexisNexis Endowed Co-Chair for Advanced Data Science and Engineering and Associate Professor in the Department of Computer Science & Engineering at Wright State University in Dayton, Ohio, USA. She is co-founder and co-director of the Terascale All-sensing Research Studio (TARS, https://tars-home.github.io/). She performs research at the intersection of computer graphics, computer vision, and machine learning. Her research uses large-scale multimodal, multi-viewpoint data to contribute artificial intelligence (AI) algorithms imbued with comprehensive awareness of how humans interact with objects in everyday environments. Her work addresses data-driven object repair, model generation, and assembly; human-robot handover informed by multi-person interactions; and AI-driven detection of the need for assistance from multimodal data on human-object interactions. She received her Ph.D. in 2015 from the Robotics Institute at Carnegie Mellon University, and her BS and MS degrees in 2009 from the Department of Computer Engineering at Rochester Institute of Technology. Prior to Wright State, she was Assistant Professor, and later Associate Professor, of Computer Science at Clarkson University between 2015 and 2024. Her work has been published at prestigious venues such as ICRA, CVPR, NeurIPS, ECCV, IEEE RO-MAN, and SIGGRAPH Asia, and has received multiple awards at venues such as IEEE AIxVR, ACM/IEEE CHASE, IEEE VR, SIGMAP, IEEE MMM, and IEEE MMSP. Her research is supported by multiple grants from agencies such as NSF, NIST, and the De Luca Foundation.