NEUROMECHANICS MEETS DEEP LEARNING

New Opportunities & Avenues for Robotics and Human-Robot Interaction

ICRA'23, May 29th, London, UK
(Venue: South Gallery Room 29, ExCeL London)

Goal

Cross-pollination of insights between the fields of Neuromechanics, Robotics, and Machine Learning has often led to significant advances, not only in our understanding of human intelligence but also in how we can manifest it in our digital systems.

The goal of the workshop is to bring together experts in these individual fields as well as researchers working at their intersections. This workshop aims to further fuel the cross-pollination at these intersections by providing a platform for sharing results and insights, and for discussing opportunities and future directions with the broader NeuroAI community.

Overview

The ICRA 2023 motto is “Embracing the future – Making robots for humans”. This vision entails a future where capable robots and humans have a synergistic relationship. To achieve this vision, significant advances are still necessary.

Humans are able to synthesize diverse behaviour by relying on effective coordination between the central nervous system – where intelligent controllers are created by networks of billions of neurons – and the peripheral musculoskeletal system, which translates intentions into actions. Akin to the central nervous system, the field of Artificial Intelligence has been pursuing the emulation of intelligent behaviours via neural structures (Neural Networks). At the same time, and mostly independently, the biomechanics community has been developing in silico musculoskeletal models to understand peripheral actuation. These new developments offer exciting opportunities to better understand how movement is generated through these complex pathways and how machines can be seamlessly integrated with them.

This workshop aims to provide a platform for experts in the fields of artificial intelligence, neuromechanics, and robotics to come together, share their respective progress, and explore joint opportunities.

Invited Speakers 

Location:

South Gallery Room 29, ExCeL London

Program:

09:00 - 09:05 Opening remarks

09:05 - 09:25 Ajay Seth

Title: Two centuries of (neuro)musculoskeletal modeling: What we’ve learned and what is in store for understanding and improving human physical performance

Abstract: It's 1873. Our understanding of human movement stems mostly from observations and detailed dissections of the anatomy of muscles, tendons, bones, ligaments, and other articular and fibro-cartilage structures that compose our joints. The structures involved in human movement are vividly described, and we reason about their function from observations and measurements on cadavers (Barclay). Electrophysiological experiments then lay the groundwork for Hill’s phenomenological and Huxley’s cross-bridge models of muscle contraction dynamics. It wasn’t until cinematography (1878), however, that we could capture and review the motion of living animals (Muybridge). Stereo photogrammetry and optical motion capture with force-plates soon improved our measurements of human movement. With models of muscles as actuators and rapidly improving multibody dynamics to capture the physics of the skeleton, we were soon “fitting” these models to experimental data. The complexity of multibody formulations, muscle and joint mechanics, and experimental data analysis was a massive barrier to their use in human movement science.

The 21st century kicks in. OpenSim is born to address these issues and build a community around open and shared tools for musculoskeletal modeling and simulation, and to boost the validation and verification of simulation-based conclusions. Human and animal movement studies have since exploded. With a focus on biomechanical fidelity, OpenSim is evolving into a virtual prototyping environment to understand the effects of form on function in human and animal biomechanics. In addition to simulation fidelity, computational speed is necessary to predict human performance and to train deep neural networks that emulate the central nervous system. New platforms (osim-RL, MyoSuite) and algorithms are sprouting to exploit faster, larger, and specialized computing resources and are reducing the tradeoffs between bio-fidelity and speed. We will soon be poised for another paradigm shift. Neurophysiologically inspired deep-RL controllers show promise in delivering new insights into the organization and behaviors of the animal motor control system. These controllers will likely embody human/animal coordination that is not only robust to changes in the musculoskeletal system but can adapt and augment performance in real-time to advance rehabilitation and enable athletic records to be broken.

The year is 2073. What can we envision? Wearable controllers coordinate devices that augment our physical performance at work and play, protecting us from injury and empowering us to be faster, stronger, and more agile. Synergistic human-machine interfaces fundamentally change our ideas about rehabilitation, mobility, and physical performance.

09:25 - 09:45 Pulkit Agrawal

Title: Dynamic and Dexterous Manipulation with Machine Learning

Abstract: Building a controller for a complex manipulation task requires constructing simplified (or reduced-order) models for real-time optimization. However, these models are often too simplified, restricting manipulation skills to narrow settings such as manipulating only polygonal objects or considering only small changes in object configuration. Furthermore, constructing simplified models, akin to designing features, requires immense human effort and is therefore not scalable. Following the lesson from machine learning that features should be learned from data, I will show how we can bypass the human construction of simplified models and instead learn controllers from data. Results on challenging problems of multi-finger dexterous manipulation, whole-body control, and legged manipulation provide evidence of a framework for quickly synthesizing high-degree-of-freedom controllers for contact-rich problems with minimal human effort. I believe this is a crucial step towards scaling robotics to diverse tasks.

09:45 - 10:05 Nidhi Seethapathi

Title: Computational principles of sensorimotor control: lessons from human locomotion

Abstract: The best current robots still fall short of the versatility, efficiency, stability, and robustness to uncertainty achieved by human sensorimotor control. One way to help machines achieve better sensorimotor performance is to develop theoretical frameworks and computational models that explain how humans select, execute, and learn movements. My talk will highlight the computational principles we've uncovered over the years that underlie efficient and stable control of everyday locomotion, robust generalization of locomotor control to intrinsic and extrinsic perturbations, and safe and energy-efficient learning of locomotion in novel settings. These principles provide a blueprint for engineers developing rehabilitation robots and autonomous systems intended to be compatible with and/or comparable to humans.


10:05 - 10:25 Emo Todorov

Title: Modeling and simulation of bio-mechanical systems in MuJoCo

Abstract: Physics simulation has played an important role in robotics. Its role has become even more important with the advent of machine learning methods that rely on extensive sampling. Biomechanical systems, as well as biologically inspired robots, pose unique challenges in terms of modeling and simulation: large numbers of degrees of freedom, flexible structures and soft materials, rich contact interactions with the environment, muscle-tendon actuation, and unusual constraints that do not easily map to the kinematic and dynamic frameworks of traditional robotics. In this talk I will discuss the capabilities of the MuJoCo simulator in the context of such systems. Topics will include muscle models, soft materials, and forward and inverse dynamics with soft constraints, including contacts.

10:25 - 10:55 Coffee break and poster session


10:55 - 11:15 Vikash Kumar

Title: MyoSuite 2.0: Towards spatio-temporal neuro-muscular representations for behavior generalization

Abstract: An intricate fabric of sensory-motor representations supports the complexity of the movements biological beings can manifest on the go. In addition to synthesizing the underlying motor decisions, the brain also shares the responsibility for maintaining and updating these representations as new experiences accumulate over time.


In MyoSuite 1.0 we presented a contact-rich open-source framework for studying musculoskeletal motor control. MyoSuite has since received rich interaction (>10k downloads), participation (MyoChallenge 2022 was one of the most well-attended challenges at NeurIPS 2022), and contributions (new musculoskeletal models, e.g. legs, tasks, and baselines) from the research community. Building on this momentum, this talk will introduce MyoSuite 2.0 -- a platform for studying spatio-temporal neuro-muscular representations responsible for generalization in movement synthesis. In addition to introducing our latest set of musculoskeletal (MSK) models and physiological task collections specially curated for studying neuro-muscular abstraction, I'll present our new SOTA motor control paradigm, SAR+MyoDex. Inspired by biological sensorimotor control, SAR+MyoDex can autonomously acquire generalizable spatio-temporal neuro-muscular representations without external supervision. Agents leveraging these representations are able to robustly manipulate hundreds of distinct objects, outperforming the SOTA by approximately 40% while being up to 4x more efficient at learning new contact-rich dexterous manipulation tasks!


11:15 - 11:35 Francisco Valero-Cuevas 

Title: Neuro-mechanics for robots: Co-design and co-adaptation of robotic brains and bodies 

Abstract: Young animals must quickly learn how to use their bodies to move and eat—or die. In contrast, robots require calibration and lengthy training. How do animals do it? As Tad McGeer reminded us long ago, useful physical behavior is possible with minimal actuation and control by leveraging the physics of the robot. But a body is not enough. Animals evolved nervous systems to learn, perform, and adapt tasks. I will provide a conceptual and computational overview of the hierarchical and distributed nature of the nervous system, and the highly conserved and ubiquitous neuro-mechanical architecture in animals that is the foundation of versatile function across innumerable environments and species. Neuro-mechanics, thus, demonstrates that more computational power is not always necessary or better, and emphasizes the need to co-design and co-adapt robotic brains and bodies. Some examples include the autonomous learning of locomotion with minimal experience and the extraction of body percepts.

11:35 - 12:25 Panel discussion

12:25 - 12:55 MyoSuite Tutorial -- link to colab

12:55 - 13:00 Conclusion

Call for posters

We are looking for early-career researchers (bachelor's, master's, PhD, postdoc) to present their research at the poster session. Please send your poster idea to guillaume.durandau[at]mcgill.ca with [ICRA Poster] as the subject.

MyoSuite Tutorial

MyoSuite must be installed beforehand. See here for the code and instructions.

Organisers 

Guillaume Durandau, McGill Univ.

Huawei Wang, U. of Twente

Vittorio Caggiano, FAIR-MetaAI

 

Seungmoon Song, Northeaster Univ.

Massimo Sartori, U. of Twente

Vikash Kumar, FAIR-MetaAI

Sponsor