Abstract: Humans achieve spectacular agility and dexterity despite severe handicaps—slow actuation, slow communication, slow computation. Robots may be catching up, at least in agility, but given their advantages (faster actuation, much faster communication and computation) why aren’t they much better? Should robots try to emulate humans? Or learn from humans? I will briefly survey a plausible hypothesis about how humans achieve their impressive performance but will also present examples of surprising limitations of human neuro-motor performance. These latter may reflect a corollary of Murphy’s Law, the “no free lunch” principle: the strategies humans use to overcome their limitations in some tasks may inevitably compromise their performance in others. While optimization is a popular approach to robot coordination and control, I will argue that human performance is not optimal in any meaningful sense but “good enough”, consistent with the constraints of evolution. Whether robots should be subject to similar constraints is open for discussion.
Bio: Neville Hogan is Sun Jae Professor of Mechanical Engineering and Professor of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. He earned a Diploma in Engineering (with distinction) from Dublin Institute of Technology and M.S., Mechanical Engineer and Ph.D. degrees from MIT. He joined MIT’s faculty in 1979 and presently directs the Newman Laboratory for Biomechanics and Human Rehabilitation. He co-founded Interactive Motion Technologies, subsequently part of Bionik Laboratories. His research includes robotics, motor neuroscience, and rehabilitation engineering, emphasizing the control of physical contact and dynamic interaction. Awards include: Honorary Doctorates from Delft University of Technology and Dublin Institute of Technology; the Silver Medal of the Royal Academy of Medicine in Ireland; the Saint Patrick’s Day Medal for Academia from Science Foundation Ireland; the Henry M. Paynter Outstanding Investigator Award and the Rufus T. Oldenburger Medal from the American Society of Mechanical Engineers, Dynamic Systems and Control Division; and from the Institute of Electrical and Electronics Engineers, the Academic Career Achievement Award from the Engineering in Medicine and Biology Society and the Pioneer in Robotics Award from the Robotics and Automation Society.
Abstract: In the last fifteen years, the idea of building soft instead of rigid hands has gained traction and contributed to new ways of looking not only at hand design, but also at grasping and manipulation planning and control. Many problems remain open, and hands for both grasping and manipulation are still a very active field of research. In this talk, I will review some of the history and discuss what new directions research is taking in this field. These include the design of more versatile hands (including, e.g., flexible palmar arcs), integrated wrist-hand design, the development of more robust and reliable tactile sensing, and the implementation of grasp reflexes to improve reactivity to uncertainties in the environment.
Bio: Antonio is a scientist interested in robotics, automatic control and haptics. He holds a chair in Robotics at the University of Pisa and leads the Soft Robotics Laboratory at the Italian Institute of Technology in Genova. His research has produced many widely used and cited publications and has earned him several awards, including the Pioneer Award from the IEEE Robotics and Automation Society. He helped start the WorldHaptics Symposium series, the IEEE Robotics and Automation Letters, and the Italian Institute of Robotics and Intelligent Machines. He is currently Editor-in-Chief of The International Journal of Robotics Research (IJRR).
Abstract: This talk explores approaches to embodied intelligence for robotic manipulation, focusing on how distributed stiffness, morphology, and low-level control enable robust physical interaction with the environment. By co-designing materials, structure, and sensory–motor control, robots can offload intelligence to their bodies, achieving adaptable, resilient manipulation without relying solely on high-level planning. Drawing inspiration from biological systems while leveraging computational design and fabrication methods, the talk examines how variations in compliance, geometry, and local control policies shape manipulation performance. The resulting robotic systems demonstrate new capabilities for interacting with complex, uncertain environments, from our kitchens to agricultural fields.
Bio: Josie Hughes is an Assistant Professor at EPFL, where she established the CREATE Lab in 2021. She completed her undergraduate, master's and PhD studies at the University of Cambridge, joining the Bio-inspired Robotics Lab (BIRL). Following this, she worked as a postdoctoral associate in the Distributed Robotics Lab at MIT CSAIL. Her research focuses on developing novel design paradigms for robot structures that exploit their physicality and interactions with the environment. This includes the development of robotic hands, soft manipulators, locomoting robots, and automation systems for applications focused on sustainability and science.
Abstract: Sensorimotor hand function can be described as a multidimensional space where mechanical, neural, and cognitive factors interact to enable a rich repertoire of actions. Among these actions, dexterous object manipulation plays a key role in motor development as well as activities of daily living. I will review insights gained from our research on dexterous manipulation – combining biomechanical, behavioral, neuromodulation, neuroimaging, and robotics approaches – as a model for understanding sensorimotor control and underlying neural mechanisms. I will conclude my talk with an overview of directions for future fundamental and translational research.
Bio: Marco Santello, Ph.D. is the Fulton Professor of Neural Engineering in the School of Biological and Health Systems Engineering at Arizona State University, USA. His research focuses on the neural control of movement, particularly the mechanisms underlying dexterous hand function, sensorimotor integration, and motor learning. By combining neuroscience, biomechanics, and engineering approaches, his work seeks to understand how the brain coordinates complex hand movements and how this knowledge can be applied to neurorehabilitation, prosthetics, and human–robot interaction. Dr. Santello earned his Ph.D. in Sport and Exercise Sciences from the University of Birmingham (U.K.) and completed postdoctoral training in neuroscience at the University of Minnesota. Since joining Arizona State University in 1999, he has held several leadership roles, including Director of the School of Biological and Health Systems Engineering and Director of the NSF Industry/University Cooperative Research Center on Building Reliable Advances and Innovation in Neurotechnology (BRAIN). His research has contributed to advances in understanding multi-finger coordination and dexterous manipulation, informing the design of robotic and prosthetic hands and technologies for quantifying and modulating sensorimotor function. Dr. Santello has authored more than 120 peer-reviewed publications, and his work has been widely supported by the National Institutes of Health and the National Science Foundation.
Abstract: Humans can effortlessly coordinate with one another through physical contact, from dancing the tango to carrying a sofa together. How do we achieve such rapid, flexible coordination? In this talk, I will describe behavioral experiments showing how humans can infer the partner’s movement intention through touch. I will then describe how giving robots the ability to estimate human intention can greatly simplify and expand human-robot interaction from providing physical assistance to intelligently avoiding obstacles. Finally, I will discuss how intention estimation can improve teleoperation by allowing a follower robot to remain highly compliant while still accurately tracking a leader’s trajectory, thereby improving both safety and task performance.
Bio: Atsushi Takagi received his MSci in Physics in 2011 and a PhD in Computational Neuroscience from Imperial College in 2016. His research uses behavioral experiments to understand how the brain controls the body to make skillful movements. The findings from these experiments are leveraged to develop applications in sports training and physical rehabilitation, human-robot interaction, and teleoperation. His current interest is in understanding the mechanism that enables the dominant hand to be more skillful than the other.
Abstract: Robotic hands and hand exoskeletons share a fundamental challenge: achieving dexterous, adaptive manipulation with minimal control complexity. Yet the two fields have developed largely in parallel. This talk uses a custom hand exoskeleton to surface two underexploited principles from human motor control with direct implications for robotic manipulation. The first, proprioceptive integration, concerns how biological systems couple intent sensing and state estimation into a single loop, a coupling that often remains under-utilized in robotic manipulation despite increasingly rich sensor suites. A residual isometric grip force control strategy implemented in the exoskeleton serves as a concrete demonstration of this principle. The second concerns how passive joint mechanics can shift computational burden away from the controller and into the physical structure itself. This talk argues that both principles represent meaningful, actionable directions for robotic hand design.
Bio: Quentin Sanders holds a joint appointment in the Department of Bioengineering and the Department of Mechanical Engineering at George Mason University's College of Engineering and Computing. He received his Bachelor of Science in mechanical engineering from the University of Maryland, Baltimore County in 2015, and his master's and PhD in mechanical engineering from the University of California, Irvine in 2018 and 2020, respectively. Prior to joining the Department of Bioengineering, Sanders spent a year at X, the moonshot factory (formerly Google X), and completed a postdoctoral fellowship in the Joint Biomedical Engineering Program at the University of North Carolina–Chapel Hill and North Carolina State University. At George Mason University, he leads the Enabling Mobility through Patient Oriented Wearables and Robotics (EMPOWER) Laboratory, which develops robotic and prosthetic devices for individuals with neurological injuries or amputations.
Abstract: In this talk, Rich will share a few challenges that Shadow Robot would like the community to bring their creativity to bear on, and offer a few reflections on areas where Shadow Robot has got their fingers burnt in the past, so you don't have to.
Bio: Rich Walker is a Director at the Shadow Robot Company, an elected Board Member of the euRobotics Association and an ARIA Creator in the Smarter Robot Bodies programme. Rich's career in robotics spans many hype cycles around innovations that will transform robotics and he looks forward to this transformation happening one day. In the mean time, Rich works on the intersection of advanced dexterity and real world problems, exploring new robot hand designs and supporting a wide range of robotics researchers in their work.
Abstract: For the past few years, the RAI Institute has been working on applying machine learning to highly physical manipulation and locomotion tasks in robotics. In doing so, we have repeatedly come up against the same problem: most methodologies for learning control policies can only act in a centralized decision-making capacity. In fact, most specifically require that a task be described as a Markov decision process (MDP), i.e. as an explicitly enumerated space of states and actions, a physical prediction model, and a reward function. However, our practical experience teaches us that just as much depends on behaviors of the robot that cannot be centrally controlled. Physical interactions such as collisions often happen too fast for a control system to react, and even joint-space control decisions usually happen too fast for a central control loop with a cycle time of tens to hundreds of milliseconds. Some of these local decisions are made through mechanism design: underactuated transmissions and suspensions, for example, can be made to respond conditionally to some input faster than any actuator through shaping of kinematics and passive impedance. Others, such as shaping of joint-space and task-space impedance, must be made first by designing to minimize parasitic impedance in the manipulator, and then by careful tradeoffs to ensure properties such as controller passivity and tunable impedance. Correctly making these non-centralized decisions amounts to much more than faithfully describing them (the "sim-to-real" approach). We will go through examples from present and past work on robotic manipulation that demonstrate how carefully used local physical properties can complement central control and planning.
Bio: Lael Odhner is a researcher at the RAI Institute in Cambridge, Massachusetts. A recovering control theorist, Lael's current research interests lie in the design of robot manipulators and hands. Prior to joining RAI, Lael was the co-founder and CTO of RightHand Robotics, a company selling model-free pick-and-place robots for warehouse automation.