Wearable robotic systems such as lower-limb exoskeletons and prostheses are capable of augmenting human mobility and assisting individuals with mobility impairments. Conventionally, these systems generate joint torques that mimic the user's underlying biomechanical joint demand during movement. Unfortunately, due to the dynamic nature of human movement during daily locomotor activities, it is challenging to develop a control framework that captures the full range of intended movements. However, recent breakthroughs in artificial intelligence (AI) have enabled improved estimation of human state information in real time, enabling robust control of these wearable systems during dynamic movement. While these AI-based strategies show exciting promise, critical hurdles remain before these interventions can be deployed in the real world. Challenges include positive feedback loops between actuation and sensing, the data requirements of user-independent models, model robustness to unseen mobility contexts, transitions between ambulation modes, and shifts in sensor data distributions. In general, there have been few attempts to tackle the critical problem of translating and generalizing laboratory-based AI approaches to real-world, large-scale applications. In this workshop, we will tackle these challenges from multiple perspectives (both high-level and practical, academic and industrial) and provide roadmaps for future wearable robotic system developers to incorporate AI-based controllers into their applications.
Workshop Abstract Submission
We are pleased to invite 2-page extended abstract submissions for the workshop AI-Based Estimation and Control of Wearable Robotic Systems for Enhancing Human Mobility at BioRob 2024. Submissions will be reviewed and selected for short talks and/or a poster session.
Abstract topics of interest include all aspects of ML-based wearable robotics control, including (but not limited to): estimation of user or environmental state; user intent recognition; computer vision for movement estimation; simulation and data augmentation for informing controller design; adaptive wearable robot control; and novel sensing methods for wearable robot control.
Note: If you are already presenting work at BioRob 2024, you may upload your accepted paper.
Short Talk
A series of short talks given by junior researchers in the field (students or postdoctoral researchers)
5 minutes each, followed by a 2-minute Q&A
To extend active discussion of relevant topics, we will hold the poster session and a short networking session following the talks
Poster Session
A small symposium where junior and senior researchers interact by presenting their work as posters
The poster session will be held for 30 minutes during lunchtime
Short talk presenters will also be selected from the same pool of abstract submissions
09:00 Welcome and Workshop Overview
09:10 Seminar Talk – Aaron Young
09:30 Seminar Talk – Robert Howe
09:50 Panel Discussion – Future Directions for Wearable Robotics
10:30 Coffee Break
11:00 Seminar Talk – Nick Fey
11:20 Seminar Talk – Helen Huang
11:40 Short Talks (6 talks selected from the abstract submissions, 5 minutes each plus a 2-minute Q&A)
12:30 Poster Session
Aaron Young
Associate Professor
Georgia Tech
Advanced wearable exosuits are capable of restoring function to older adults by reducing the metabolic cost of walking and restoring normative biomechanics. An important function of these devices is to recognize user intent in a timely and accurate manner and to optimize control so that assistance is biomechanically appropriate across multimodal task paradigms. Key challenges in the wearable robotics control community include generalizing control systems across a rich variety of real-world tasks and diverse individuals while simultaneously personalizing them to each individual's specific biomechanical needs. Standard state-based controllers have largely failed to provide this level of task generalization combined with individualized assistance. A unified framework focused on continuous estimation of internal human state variables holds promise for task-agnostic control that tracks individualized movements. This talk will examine our approaches to human internal state estimation using deep learning, optimization of the resulting control, and the associated human outcomes across multimodal activities. New open-source datasets that we have generated to facilitate research in this area will also be briefly discussed.
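As a concrete illustration of the kind of continuous human state estimator described above, the minimal sketch below regresses gait phase from a window of IMU data with a small convolutional network. The input format, the (sin, cos) phase encoding, and all layer sizes are illustrative assumptions, not the speaker's actual model.

```python
# Minimal sketch of a continuous gait-phase estimator. Assumptions (not
# from the talk): input is a 2 s window of 6-axis IMU data at 100 Hz, and
# the target is gait phase encoded as (sin, cos) so the 0%/100% wraparound
# stays continuous.
import torch
import torch.nn as nn

class GaitPhaseEstimator(nn.Module):
    def __init__(self, n_channels: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, 2)  # predict (sin phase, cos phase)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        z = self.encoder(x).squeeze(-1)
        return self.head(z)

def phase_from_output(out: torch.Tensor) -> torch.Tensor:
    """Decode (sin, cos) predictions back to a phase in [0, 1)."""
    return (torch.atan2(out[:, 0], out[:, 1]) / (2 * torch.pi)) % 1.0

model = GaitPhaseEstimator()
dummy = torch.randn(8, 6, 200)  # batch of 8 two-second IMU windows
print(phase_from_output(model(dummy)).shape)  # torch.Size([8])
```

The circular (sin, cos) encoding is one common choice for this problem because it avoids penalizing predictions near the stride boundary, where phase wraps from 100% back to 0%.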
Helen Huang
Professor
North Carolina State University
University of North Carolina
In this talk, I will present our recent solution for personalized robotic prosthetic leg control that not only achieves the desired prosthesis mechanics but also augments the motion of the wearer's intact biological joints to facilitate human-robot coordination during walking. At the core of our solution is a new bilevel optimization framework that incorporates iterative inverse reinforcement learning (IRL) and reinforcement learning (RL) to personalize the human-prosthesis objective function and the prosthesis control law. Our preliminary validation demonstrated the feasibility of this bilevel optimization framework for symbiotic prosthesis control design.
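The sketch below illustrates the general shape of such a bilevel loop: an inner routine tunes the prosthesis control law against the current personalized cost, and an outer update adjusts the cost weights toward the wearer's demonstrated gait. The quadratic cost, the random-search stand-in for RL, and the feature-matching IRL update are all placeholder choices, not the authors' implementation.

```python
# Schematic bilevel personalization loop in the spirit of the IRL + RL
# framework described above. Every function here is a placeholder.
import numpy as np

def rollout(control_params, rng):
    """Placeholder: simulate/measure human-prosthesis gait features."""
    return control_params + 0.1 * rng.standard_normal(control_params.shape)

def cost(features, weights):
    """Quadratic cost on gait features, weighted per person."""
    return float(weights @ (features ** 2))

def inner_rl(weights, control_params, rng, iters=50, step=0.2):
    """Inner loop: tune the control law against the current cost
    (random search standing in for an actual RL algorithm)."""
    for _ in range(iters):
        candidate = control_params + step * rng.standard_normal(control_params.shape)
        if cost(rollout(candidate, rng), weights) < cost(rollout(control_params, rng), weights):
            control_params = candidate
    return control_params

def outer_irl(weights, demo_features, learned_features, lr=0.05):
    """Outer loop: shift cost weights so optimized behavior matches the
    wearer's demonstrated gait (feature-matching IRL update)."""
    grad = learned_features ** 2 - demo_features ** 2
    return np.clip(weights + lr * grad, 0.0, None)  # keep weights nonnegative

rng = np.random.default_rng(0)
weights = np.ones(4)                 # personalized cost weights
control_params = np.zeros(4)         # prosthesis control law parameters
demo = 0.3 * rng.standard_normal(4)  # stand-in for the wearer's gait data
for _ in range(10):
    control_params = inner_rl(weights, control_params, rng)
    weights = outer_irl(weights, demo, rollout(control_params, rng))
```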
Nick Fey
Assistant Professor
University of Texas at Austin
Wearable robots are commonly used to provide basic functionality such as mediating standing and level-ground walking. To achieve robust control of these devices across widely varying ambulation tasks, hierarchical control systems have been implemented that classify an individual's high-level intention using discrete labels and delegate the phase-varying joint-level response of the device within a predicted mode to mid-level controllers that render motion, torque, or impedance reference trajectories. This presentation will highlight our efforts to eliminate this hierarchy by developing volitional and semi-volitional control systems for robotic knee-ankle prostheses using new sensing modalities of peripheral muscles and shared robot-control paradigms. Second, this presentation will emphasize that each device wearer is unique in their physical form as well as in the priority they place on specific neuromotor objectives during movement. Predictive neuromusculoskeletal modeling systems that incorporate an individual's anthropometric shape and underlying task objectives will be discussed within the context of soft hip-flexion exosuits.
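For readers unfamiliar with the hierarchy this talk aims to eliminate, the sketch below shows a conventional version of it: a discrete high-level mode selects a phase-varying impedance schedule, and a mid-level law renders joint torque. The modes, gains, and phase schedule are illustrative values, not any device's actual tuning.

```python
# Conventional hierarchical controller sketch: high-level mode -> mid-level
# phase-varying impedance law. All numbers are illustrative.
import numpy as np

# Per-mode impedance schedules: gait phase -> (stiffness k, damping b,
# equilibrium angle theta_eq). Two knot points, linearly interpolated.
SCHEDULES = {
    "level_walk": {"phase": [0.0, 1.0], "k": [90.0, 40.0],
                   "b": [1.5, 0.8], "theta_eq": [0.05, 0.30]},
    "stair_ascent": {"phase": [0.0, 1.0], "k": [120.0, 60.0],
                     "b": [2.0, 1.0], "theta_eq": [0.15, 0.45]},
}

def impedance_torque(mode: str, phase: float,
                     theta: float, theta_dot: float) -> float:
    """Mid-level impedance law: tau = k*(theta_eq - theta) - b*theta_dot,
    with k, b, and theta_eq interpolated over gait phase."""
    s = SCHEDULES[mode]
    k = np.interp(phase, s["phase"], s["k"])
    b = np.interp(phase, s["phase"], s["b"])
    theta_eq = np.interp(phase, s["phase"], s["theta_eq"])
    return k * (theta_eq - theta) - b * theta_dot

# The high-level classifier output (hard-coded here) selects the schedule.
mode = "level_walk"
tau = impedance_torque(mode, phase=0.4, theta=0.12, theta_dot=0.8)
print(f"{mode}: commanded torque = {tau:.1f} N*m")
```

The brittleness discussed in the abstract comes from the hard switch on `mode`: a misclassification swaps the entire impedance schedule at once, which volitional and continuous approaches avoid.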
Robert Howe
Professor
Harvard University
We have developed vision-based systems to assist with navigational challenges in unstructured environments. One system uses RGB-D video data to detect staircases, estimate the distance to the last step on level ground, and estimate the height and depth of each tread. The difference in perspective when descending versus ascending stairs results in lower accuracy for downstairs estimates. Another system uses RGB images to detect anomalies in the path ahead. An autoencoder is trained on obstacle-free sidewalk images, so that anomalous objects produce large image reconstruction errors. Subsequent classifiers discriminate common "anomalies" like manhole covers from novel obstacles. These systems can be used by assistive robots to adapt their behavior to environmental conditions, and can warn vision-impaired users of upcoming conditions.
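A minimal sketch of the reconstruction-error idea is shown below: an autoencoder trained only on obstacle-free frames reconstructs nominal scenes well, so a high per-image error flags an anomaly. The architecture, image size, and threshold are illustrative assumptions, not the actual system.

```python
# Sketch of autoencoder-based anomaly detection via reconstruction error.
# Architecture, image size, and threshold are illustrative only.
import torch
import torch.nn as nn

class SidewalkAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

def anomaly_score(model: nn.Module, imgs: torch.Tensor) -> torch.Tensor:
    """Mean squared reconstruction error per image (higher = more anomalous)."""
    with torch.no_grad():
        recon = model(imgs)
    return ((imgs - recon) ** 2).mean(dim=(1, 2, 3))

model = SidewalkAutoencoder()  # would be trained on obstacle-free frames only
frames = torch.rand(4, 3, 64, 64)
scores = anomaly_score(model, frames)
THRESHOLD = 0.02  # in practice, set from a validation-set error distribution
print(scores > THRESHOLD)  # boolean anomaly flag per frame
```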
Inseung Kang
Assistant Professor
Carnegie Mellon University
inseung@cmu.edu
Maegan Tucker
Assistant Professor
Georgia Institute of Technology
mtucker@gatech.edu
Daekyum Kim
Assistant Professor
Korea University
daekyum@korea.ac.kr
Patrick Slade
Assistant Professor
Harvard University
slade@seas.harvard.edu