HAPSEA: Hydraulically Amplified Soft Electromagnetic Actuator for Haptics
Novel actuators that can provide squeeze, vibration, and localized forces while remaining soft and comfortable are essential for next-generation augmented reality applications. Despite this need, there are currently few soft actuator topologies that can provide high forces and high bandwidths at low voltages and temperatures. In this work, we present a new type of soft electromagnetic actuator architecture for haptics. These low-cost, easy-to-manufacture, and conformal actuators are composed of a coil, a magnet, thin-film material, and water. Adding a thin ferromagnetic sheet further enables a latching actuator variant, which can improve force output while reducing power consumption. Each actuator combines low-voltage (up to 2 V), high-bandwidth electromagnetics with hydraulics to amplify force output. In addition to force amplification, the hydraulics provide cooling and thermal mass, which enable the actuator to be used safely in wearables for longer durations. Using these actuators, we develop a prototype wearable wristband device that can render body-grounded squeeze and vibration. By expanding on the operating principles described in this work, novel augmented reality applications may become possible.
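The force amplification step can be understood through Pascal's principle: the electromagnetic force acts on a small fluid-facing area, and the incompressible fluid transmits the resulting pressure to a larger output area. Below is a minimal sketch of that idea; the function and all geometry values are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of hydraulic force amplification (Pascal's principle).
# All numbers below are illustrative, not parameters from the paper.
import math

def amplified_force(f_em_N, d_in_mm, d_out_mm):
    """Output force when an EM force f_em_N acts on a piston of diameter
    d_in_mm and the fluid pushes on an output pad of diameter d_out_mm."""
    a_in = math.pi * (d_in_mm / 2) ** 2    # input area, mm^2
    a_out = math.pi * (d_out_mm / 2) ** 2  # output area, mm^2
    pressure = f_em_N / a_in               # N/mm^2
    return pressure * a_out                # amplified output force, N

print(amplified_force(f_em_N=0.2, d_in_mm=4.0, d_out_mm=12.0))  # ~1.8 N
```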
Publications:
N. D. Kohls, N. Colonnese, Y. C. Mazumdar, and P. Agarwal, “HAPSEA: Hydraulically Amplified Soft Electromagnetic Actuator for Haptics”, IEEE/ASME Transactions on Mechatronics, 2023.
N. Kohls, P. Agarwal, “HAPSEA: Hydraulically Amplified Soft Electromagnetic Actuator for Haptics”, United States Provisional Patent, App No. 63/478,566, Filed: January 5, 2023.
Fingertip Wearable High-resolution Electrohydraulic Interface for Multimodal Haptics
Fingertips are among the most sensitive regions of the human body and provide a means to dexterously interact with the physical world. To recreate this sense of physical touch in virtual or augmented reality (VR/AR), high-resolution haptic interfaces that can render rich tactile information are needed. In this paper, we present a wearable electrohydraulic haptic interface that can produce high-fidelity multimodal haptic feedback at the fingertips. This novel hardware can generate high-intensity fine tactile pressure (up to 34 kPa) as well as a wide range of vibrations (up to 700 Hz) through 16 individually controlled electrohydraulic bubble actuators. To achieve such high-intensity multimodal haptic feedback at such high density (16 bubbles/cm²) at the fingertip, we integrated a stretchable substrate with a novel dielectric film and developed a design architecture in which the dielectric fluid is stored at the back of the fingertip. We physically characterize the static and dynamic behavior of the device. In addition, we conduct psychophysical characterization of the device through a set of user studies. This electrohydraulic interface demonstrates a new way to design and develop high-resolution multimodal haptic systems at the fingertips for AR/VR environments.
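As a rough illustration of how such an array might be commanded, the sketch below composes a per-bubble drive signal from a quasi-static pressure level plus a sinusoidal vibration component. The signal shapes, amplitudes, and helper names are assumptions for illustration and do not describe the paper's drive electronics.

```python
# Illustrative per-bubble drive command: a quasi-static level plus a
# sinusoidal component for vibration. Values are assumptions, not the
# paper's drive electronics.
import numpy as np

N_BUBBLES = 16
fs = 10_000                      # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of samples

def bubble_command(static_level, vib_amp, vib_freq_hz):
    """Normalized drive command in [0, 1]: static offset + vibration."""
    cmd = static_level + vib_amp * np.sin(2 * np.pi * vib_freq_hz * t)
    return np.clip(cmd, 0.0, 1.0)

# e.g. bubble 3 renders steady pressure, bubble 7 renders a 250 Hz vibration
commands = np.zeros((N_BUBBLES, t.size))
commands[3] = bubble_command(static_level=0.6, vib_amp=0.0, vib_freq_hz=0)
commands[7] = bubble_command(static_level=0.3, vib_amp=0.2, vib_freq_hz=250)
```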
Publications:
Purnendu, J. Hartcher-O’Brien, V. Mehta, N. Colonnese, A. Gupta, C. J. Bruns, P. Agarwal, “Fingertip Wearable High-resolution Electrohydraulic Interface for Multimodal Haptics”, IEEE World Haptics Conference, pp. 299-305, 2023.
P. Agarwal, Purnendu and N. Colonnese, "Systems and Methods of Generating High-density Multimodal Haptic Responses Using an Array of Electrohydraulic-controlled Haptic Tactors, and Methods of Manufacturing Electrohydraulic-controlled Haptic Tactors for Use Therewith", United States Patent, App No. 18/462,306, Pub. Date: March 7, 2024.
HYFAR: A Textile Soft Actuator for Haptic Clothing Interfaces
Haptic feedback is important in augmented and virtual reality (AR/VR) because it closes the loop of touch sensation and provides physical realism to what is being rendered in the virtual world [Sodhi et al. 2013]. In this context, clothing is an appealing substrate for haptic interfaces because it is in direct contact with the user’s skin and provides a large space for delivering haptic feedback. Most haptic garments are based on rigid devices (e.g., electromagnetic vibrotactors) that compromise the softness of clothing and increase encumbrance for the user. Fluidic elastomeric actuators are interesting because they are soft, can be molded into a variety of shapes, and can be manufactured at scale [Sonar and Paik 2016]. We introduce a HYperelastic FAbric-Reinforced (HYFAR) soft actuator that is pneumatically powered and suitable for haptic clothing. It can render high forces, hyperinflate, be manufactured from textiles, and, thanks to the local programming of the active membrane, impose low encumbrance on the user and inflate into diverse shapes. We present the manufacturing process that programs the local material properties of the membrane to achieve custom inflation by reinforcing a fabric-elastomer composite using embroidery. Furthermore, we present a modeling method that simulates the behavior of inflated, multi-material, hyperelastic membranes. Finally, we develop functional garments with HYFAR to demonstrate shape-shifting behaviors and render kinesthetic haptic feedback at the shoulder abduction-adduction joint of the user.
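For a rough sense of how inflation pressure relates to membrane stretch, the sketch below evaluates the classical thin-walled spherical balloon relation for an incompressible neo-Hookean material. This textbook simplification is not the multi-material membrane model used in the paper, and the material and geometry values are illustrative.

```python
# Classical neo-Hookean balloon relation: P = 2*mu*(t0/r0)*(lam^-1 - lam^-7).
# A textbook simplification, not the paper's multi-material model; mu, t0,
# and r0 are illustrative values.
import numpy as np

def inflation_pressure_kPa(stretch, mu_kPa=300.0, t0_mm=0.5, r0_mm=20.0):
    """Pressure holding a spherical neo-Hookean membrane at a given stretch."""
    lam = np.asarray(stretch, dtype=float)
    return 2.0 * mu_kPa * (t0_mm / r0_mm) * (lam ** -1 - lam ** -7)

# Note the limit-point behavior typical of balloons: pressure rises with
# stretch, peaks (near lam ~ 1.38), then drops.
print(inflation_pressure_kPa([1.1, 1.5, 2.0]))
```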
Publications:
J. Barreiros, T. Liu, M. Chiaramonte, K. Jost, Y. Menguc, N. Colonnese, P. Agarwal, “HYFAR: A Textile Soft Actuator for Haptic Clothing Interfaces”, ACM SIGGRAPH Poster, 2022.
J. Barreiros, P. Agarwal, M. Chiaramonte, T. Liu, K. Jost, H. Zager, and N. Colonnese, "Fabric Reinforced Soft Actuators", United States Patent, App No. 17/111,549, Filed: December 4, 2020.
Tasbi: A Force-Controlled Multimodal Haptic Bracelet
Haptic feedback is known to enhance the realism of an individual’s interactions with objects in virtual environments. Wearable haptic devices, such as vibrotactile sleeves or armbands, can provide haptic feedback in a smaller and more lightweight form factor than haptic gloves, which can be bulky and cumbersome to the wearer. In this article, we present the tactile and squeeze bracelet interface (Tasbi), a multimodal haptic wristband that can provide radial squeeze forces around the wrist along with vibrotactile feedback at six discrete locations around the band. Tasbi implements a squeezing mechanism that minimizes tangential forces at the band’s points of contact with the skin, instead focusing the motor actuation into predominantly normal forces. Force-sensing capacitors enable closed-loop control of the squeeze force, while vibration is achieved with linear resonant actuators. Additionally, we present the results of psychophysical experiments that quantify user perception of the vibration and squeeze cues, including vibrotactile identification accuracy in the presence of varying squeeze forces, discrimination thresholds for the squeeze force, and an analysis of user preferences for squeeze actuation magnitudes.
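The closed-loop squeeze control can be pictured as a simple force-feedback loop around the band tension. The sketch below is a minimal proportional-integral loop assuming hypothetical read_force() and set_motor() helpers; the gains and rates are illustrative, not Tasbi's firmware.

```python
# Minimal sketch of closed-loop squeeze-force control. Gains, units, and the
# read_force()/set_motor() helpers are hypothetical, not Tasbi's firmware.
KP, KI = 0.8, 2.0          # illustrative PI gains
DT = 0.001                 # 1 kHz control loop

def squeeze_controller(target_force_N, read_force, set_motor, steps=5000):
    integral = 0.0
    for _ in range(steps):
        error = target_force_N - read_force()      # N, from the force sensor
        integral += error * DT
        set_motor(KP * error + KI * integral)      # normalized motor command
```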
Publications:
E. Pezent, N. Colonnese, J. Clark, J. Hartcher-O’Brien, and P. Agarwal, “Systems and Methods for Simulating a Sensation of Expending Effort in a Virtual Environment”, United States Patent, Pub No. 11550397, Pub. Date: January 10, 2023.
E. Pezent, P. Agarwal, J. Hartcher-O’Brien, N. Colonnese, M. K. O’Malley, “Design, Control, and Psychophysics of Tasbi: A Force-Controlled Multimodal Haptic Bracelet”, IEEE Transactions on Robotics, 38(5), pp. 2962-2978, 2022.
E. Pezent, P. Agarwal, H. Benko, N. Colonnese, A. Israr and S. Robinson, “Systems and Methods for Providing Substantially Orthogonal Movement of a Device about a User’s Body Part”, United States Patent, Pub No. 11366522, Pub. Date: June 21, 2022.
E. Pezent, A. Israr, M. Samad, S. Robinson, P. Agarwal, H. Benko and N. Colonnese, “Tasbi: Multisensory Squeeze and Vibrotactile Wrist Haptics for Augmented and Virtual Reality”, IEEE World Haptics Conference, pp. 1-6, 2019. (Best Technical Paper Candidate)
E. Pezent, A. Gupta, H. Duhaime, M. O’Malley, A. Israr, M. Samad, S. Robinson, P. Agarwal, H. Benko, N. Colonnese, “Explorations of wrist haptic feedback for AR/VR interactions with Tasbi”, Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 2020.
Bellowband: A Pneumatic Wristband for Delivering Local Pressure and Vibration
We present the design and control of Bellowband, a pneumatic wristband for localized pressure and vibration haptic feedback. The wristband has eight equally spaced pneumatic bellows that extend into the wrist, constructed from layers of polyester thermoplastic polyurethane (TPU), resulting in a flexible, lightweight (11 g) band capable of rendering complex pressure and vibration cues to the user. Each bellow can withstand over 100 kPa, extend over 10 mm, and exert over 10 N of force at zero displacement. Quasi-static analysis is performed to estimate bellow force for a given input pressure and bellow displacement, and the dynamic response is examined experimentally. Finally, we demonstrate the wristband’s ability to deliver various haptic cues to the wrist, including uniform squeeze, uniform vibration, local force, and local vibration. Bellowband is a thin, soft, low-encumbrance wristband that can provide meaningful haptic feedback, making it ideal for AR/VR environments.
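A minimal version of the quasi-static bellow model is sketched below: fluid pressure acting on an effective area minus an elastic restoring term that grows with extension. The linear-stiffness form and all numbers are illustrative assumptions, not the model identified in the paper.

```python
# Quasi-static estimate of bellow tip force: pressure on an effective area
# minus a restoring term that grows with extension. Linear-stiffness form
# and all numbers are illustrative assumptions, not the paper's model.
import math

def bellow_force(p_kPa, x_mm, d_eff_mm=15.0, k_N_per_mm=0.4):
    a_eff_m2 = math.pi * (d_eff_mm / 2e3) ** 2          # effective area, m^2
    return p_kPa * 1e3 * a_eff_m2 - k_N_per_mm * x_mm   # force, N

print(bellow_force(p_kPa=100, x_mm=0.0))   # force at zero displacement, ~17.7 N
```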
Publications:
E. Young, P. Agarwal, N. Colonnese and A. Memar, “Using a fluidic mechanism on a wearable device for both haptic feedback and user input”, United States Patent, Pub No. 11062573, Pub. Date: July 13, 2021.
E. Young, A. Memar, P. Agarwal and N. Colonnese, “Bellowband: A Pneumatic Wristband for Delivering Local Pressure and Vibration”, IEEE World Haptics Conference, pp. 55-60, 2019.
PneuSleeve: In-fabric Multimodal Actuation and Sensing in a Soft, Compact, and Expressive Haptic Sleeve
Integration of soft haptic devices into garments can improve their usability and wearability for daily computing interactions. In this paper, we introduce PneuSleeve, a fabric-based, compact, and highly expressive forearm sleeve which can render a broad range of haptic stimuli including compression, skin stretch, and vibration. The haptic stimuli are generated by controlling pneumatic pressure inside embroidered stretchable tubes. The actuation configuration includes two compression actuators on the proximal and distal forearm, and four uniformly distributed linear actuators around and tangent to the forearm. Further, to ensure a suitable grip force, two soft mutual capacitance sensors are fabricated and integrated into the compression actuators, and a closed-loop force controller is implemented. We physically characterize the static and dynamic behavior of the actuators, as well as the performance of closed-loop control. We quantitatively evaluate the psychophysical characteristics of the six actuators in a set of user studies. Finally, we show the expressiveness of PneuSleeve by evaluating combined haptic stimuli using subjective assessments.
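The grip-force loop can be illustrated as a calibration from capacitance change to force followed by a simple pressure correction. The polynomial coefficients and the read_delta_c()/set_pressure() helpers below are hypothetical; this is a sketch of the idea, not the implemented controller.

```python
# Illustrative mapping from a mutual-capacitance reading to contact force via
# a calibration polynomial, plus a crude proportional pressure correction.
# Coefficients and the read_delta_c()/set_pressure() helpers are hypothetical.
import numpy as np

CAL = np.poly1d([0.012, 0.35, 0.0])   # force [N] as a function of delta-capacitance [pF]

def force_from_capacitance(delta_c_pF):
    return float(CAL(delta_c_pF))

def regulate(target_N, read_delta_c, set_pressure, p0_kPa=20.0, kp=1.5):
    error = target_N - force_from_capacitance(read_delta_c())
    set_pressure(p0_kPa + kp * error)  # proportional pressure adjustment
```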
Publications:
M. Zhu, A. Memar, A. Gupta, M. Samad, P. Agarwal, Y. Visell, S. J Keller, N. Colonnese, “Pneusleeve: In-fabric multimodal actuation and sensing in a soft, compact, and expressive haptic sleeve”, Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2020. (CHI 2020 Honourable Mention)
M. Zhu, P. Agrawal, A. Memar, M. Samad, and H. Zager, “Wearable Structures for Imparting Haptic Feedback Using Fluidic Actuators”, United States Provisional Patent, App No. 62/899,635, Filed: September 12, 2019.
Rehabilitation of the hands is critical for restoring independence in activities of daily living for individuals with upper extremity disabilities. There is initial evidence that robotic devices with force-control-based strategies can help in effective rehabilitation of human limbs. However, to the best of our knowledge, none of the existing hand exoskeletons allow for accurate force or torque control. We design and prototype a novel hand exoskeleton with the following unique features: (i) an underlying kinematic mechanism optimized to achieve a large range of motion, and (ii) a Bowden-cable-based LC-SEA allowing for bidirectional torque control of each joint individually. We test the developed prototype with human subjects to characterize its kinematics, dynamics, and controller performance. Results show that the device supports a large hand workspace, preserves the characteristics of natural motion of each digit, allows for accurate torque control, and can be rendered dynamically transparent to offer minimal resistance to digit motion.
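The series elastic actuation concept can be sketched as follows: joint torque is inferred from the measured spring deflection, and a simple feedback law drives the motor to track a desired torque. The spring constant, gains, and I/O helpers are assumptions for illustration, not the exoskeleton's identified parameters.

```python
# Series-elastic torque sensing and control: torque is inferred from spring
# deflection, then a PI law tracks a desired torque. Spring constant, gains,
# and the read_angles()/set_motor_velocity() helpers are assumptions.
K_SPRING = 0.35            # N*m/rad, illustrative spring stiffness
KP, KI, DT = 4.0, 20.0, 0.001

def sea_torque(theta_motor, theta_joint):
    return K_SPRING * (theta_motor - theta_joint)

def torque_control(tau_des, read_angles, set_motor_velocity, steps=1000):
    integral = 0.0
    for _ in range(steps):
        tau = sea_torque(*read_angles())           # estimated joint torque
        err = tau_des - tau
        integral += err * DT
        set_motor_velocity(KP * err + KI * integral)
```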
Publications:
P. Agarwal, Y. Yun, J. Fox, K. Madden and A. D. Deshpande, “Design, Control and Testing of a Thumb Exoskeleton with Series Elastic Actuation”, International Journal of Robotics Research, 36(3), pp. 355-375, 2017.
P. Agarwal and A. D. Deshpande, "Series Elastic Actuators for Small-scale Robotics Applications", ASME Journal of Mechanisms and Robotics, 9(3), 031016, 2017.
P. Agarwal, J. Fox, Y. Yun, M. K. O’Malley and A. D. Deshpande, "An Index Finger Exoskeleton with Series Elastic Actuation for Rehabilitation: Design, Control and Performance Characterization", International Journal of Robotics Research, 34(14), pp.1747-1772, 2015.
We present two types of subject-specific assist-as-needed controllers. Learned force-field control is a novel technique in which a neural-network-based model of the torques required at given joint angles is learned for a specific subject and then used to build a force field that assists the subject's joint motion along a trajectory designed in joint-angle space. Adaptive assist-as-needed control, on the other hand, estimates the coupled digit-exoskeleton torque requirement of a subject using radial basis functions (RBFs) and adapts the RBF magnitudes on the fly to provide feed-forward assistance for improved trajectory tracking. Experiments on the index finger exoskeleton prototype with a healthy subject showed that, while force-field control is non-adaptive and offers less control over the speed of task execution, it is safer because it does not apply increased torques if finger motion is restricted. The adaptive assist-as-needed controller, in contrast, adapts to the changing needs of the coupled finger-exoskeleton system and helps the subject perform the task at a consistent speed, but it applies increased torques when motion is restricted and is therefore less safe.
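A minimal sketch of the adaptive assist-as-needed idea, under assumed basis placement and adaptation gain, is shown below: the assistance torque is a weighted sum of radial basis functions over the normalized trajectory phase, and the weights grow where the subject lags the reference.

```python
# Adaptive assist-as-needed sketch: assistance torque as a sum of RBFs over
# the trajectory phase, with magnitudes adapted from tracking error. Basis
# placement, widths, and the adaptation gain are illustrative choices.
import numpy as np

centers = np.linspace(0.0, 1.0, 10)   # RBF centers over normalized phase
width = 0.05
weights = np.zeros_like(centers)
GAMMA = 0.5                            # adaptation gain

def rbf_features(phase):
    return np.exp(-((phase - centers) ** 2) / (2 * width ** 2))

def assist_torque(phase):
    return float(weights @ rbf_features(phase))

def adapt(phase, tracking_error):
    """Increase local assistance where the subject lags the reference."""
    global weights
    weights += GAMMA * tracking_error * rbf_features(phase)
```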
Publications:
P. Agarwal and A. D. Deshpande, "Subject-specific Assist-as-needed Controllers for a Hand Exoskeleton for Rehabilitation", IEEE Robotics and Automation Letters, 3(1), pp.508-515, 2018.
P. Agarwal, B. Fernandez and A. D. Deshpande, "Assist-as-needed Controllers for Index Finger Module of a Hand Exoskeleton for Rehabilitation", Dynamic Systems and Control Conference (DSCC), pp. V003T42A002, 2015. (Winner of the Best Robotics Paper Award)
P. Agarwal and A. D. Deshpande, "Impedance and Force-field Control of the Index Finger Module of a Hand Exoskeleton for Rehabilitation", IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 85--90, 2015.
A critical question for improving robotic rehabilitation is: what is the optimal rehabilitation environment for a subject that will facilitate maximum recovery during therapy? Studies suggest that task variability, the nature and degree of assistance or error augmentation, and the type of feedback play critical roles in motor (re)-learning. In this work, we present a framework for robot-assisted motor (re)-learning that provides subject-specific training by allowing for simultaneous adaptation of task, assistance, and feedback based on the subject's performance on the task. We model a continuous and coordinated multi-joint task using a learning-from-demonstration approach, which allows the task to be modeled generatively so that its challenge level can be modulated online. To train subjects for dexterous manipulation, we present a torque-based task that requires them to dynamically regulate their joint torques. Finally, we carry out a pilot study with healthy human subjects using our previously developed hand exoskeleton; the results suggest that training under simultaneous adaptation of task, assistance, and feedback positively affects motor learning.
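The simultaneous-adaptation idea can be illustrated with a toy rule that nudges task difficulty, assistance, and feedback in opposite directions depending on recent performance. The thresholds and step sizes below are invented for illustration and are not the adaptation laws used in the study.

```python
# Toy sketch of simultaneous adaptation of task difficulty, assistance, and
# feedback from recent performance. Thresholds and step sizes are invented
# for illustration, not the study's adaptation laws.
def adapt_training(state, performance, low=0.4, high=0.8, step=0.1):
    """state: dict with 'difficulty', 'assistance', 'feedback_gain' in [0, 1]."""
    if performance > high:        # subject succeeding: raise the challenge
        state["difficulty"] = min(1.0, state["difficulty"] + step)
        state["assistance"] = max(0.0, state["assistance"] - step)
        state["feedback_gain"] = max(0.0, state["feedback_gain"] - step)
    elif performance < low:       # subject struggling: provide more support
        state["difficulty"] = max(0.0, state["difficulty"] - step)
        state["assistance"] = min(1.0, state["assistance"] + step)
        state["feedback_gain"] = min(1.0, state["feedback_gain"] + step)
    return state
```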
Publications:
P. Agarwal and A. D. Deshpande, "A Framework for Adaptation of Training Task, Assistance and Feedback for Optimizing Motor (Re)-learning using a Robotic Exoskeleton", IEEE Robotics and Automation Letters, 4(2), pp. 808-815, 2019.
P. Agarwal and A. D. Deshpande, “A Novel Framework for Optimizing Motor (Re)-learning with a Robotic Exoskeleton”, IEEE International Conference on Robotics and Automation (ICRA), pp. 490-497, 2017.
P. Agarwal and A. D. Deshpande, "A Novel Framework for Optimizing Motor (Re)-learning with a Robotic Exoskeleton", Biomechanics and Neural Control of Movement Conference, 2016.
A number of robotic exoskeletons are being developed to provide rehabilitation interventions for those with movement disabilities. We present a systematic framework that allows for virtual prototyping (i.e., design, control, and experimentation) of robotic exoskeletons. The framework merges computational musculoskeletal analyses with simulation-based design techniques, which allows for exoskeleton design and control algorithm optimization. We introduce biomechanical, morphological, and controller measures to optimize exoskeleton performance. A major advantage of the framework is that it provides a platform for carrying out hypothesis-driven virtual experiments to quantify device performance and rehabilitation progress. To illustrate the efficacy of the framework, we present a case study in which the design and analysis of an index finger exoskeleton is carried out using the proposed framework.
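The sketch below illustrates the simulation-in-the-loop design optimization idea: a scalarized cost over biomechanical, morphological, and controller measures is minimized over a few design parameters. The simulate() placeholder and the specific cost terms are assumptions standing in for the musculoskeletal co-simulation, not the framework's actual metrics.

```python
# Toy sketch of design optimization over a scalarized cost of biomechanical,
# morphological, and controller measures. simulate() is a stand-in with no
# physical meaning; the real framework uses musculoskeletal co-simulation.
import numpy as np
from scipy.optimize import minimize

def simulate(design):
    """Placeholder returning (muscle effort, device mass, tracking error)."""
    link1, link2, stiffness = design
    muscle_effort = 1.0 / (1.0 + stiffness ** 2)     # more assistance -> less effort
    device_mass = 5.0 * (abs(link1) + abs(link2)) + 0.2 * abs(stiffness)
    tracking_error = abs(link1 - 0.045) + abs(link2 - 0.03)
    return muscle_effort, device_mass, tracking_error

def cost(design, weights=(1.0, 0.5, 0.5)):
    return float(np.dot(weights, simulate(design)))

result = minimize(cost, x0=np.array([0.04, 0.03, 0.3]), method="Nelder-Mead")
print(result.x)   # "optimized" design under the toy model
```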
Publications:
P. Agarwal, R. Neptune and A. D. Deshpande, "A Simulation Framework for Virtual Prototyping of Robotic Exoskeletons", ASME Journal of Biomechanical Engineering, 138(6), pp. 061004(1-15), 2016.
P. Agarwal, P. Kuo, R. Neptune and A. D. Deshpande, "A Novel Framework for Virtual Prototyping of Rehabilitation Exoskeletons", IEEE International Conference on Rehabilitation Robotics (ICORR), 2013.
Robotic exoskeletons can be effective tools for providing repetitive, high-dose rehabilitation therapy. However, there is currently a lack of techniques for systematically designing therapy using the wealth of subject-specific experimental data available from these devices. We envision an objective and systematic approach that combines experimental data with computational simulations for designing robot-assisted rehabilitation therapies. To this end, we present a methodology for estimating joint moments in the arm during upper extremity robotic training using a computational model of the coupled arm-exoskeleton system. Computational models of the coupled human-robot system may be valuable for the systematic design of robotic rehabilitation therapy using quantitative data from robotic devices. Such models could also help in developing subject-specific therapy regimens that yield improved motor recovery.
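In its simplest form, the joint-moment estimation reduces to an inverse-dynamics statement for the coupled system (notation assumed here, not taken verbatim from the papers):

```latex
% Human joint moments from the coupled arm-exoskeleton model: rigid-body
% dynamics of the coupled system minus the exoskeleton's contribution.
% q: joint angles, M: inertia matrix, C: Coriolis/centrifugal terms,
% g: gravity vector, tau_exo: moments applied by the exoskeleton.
\tau_{\mathrm{human}} = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) - \tau_{\mathrm{exo}}
```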
Publications:
P. Agarwal, C. G. McDonald, T. A. Dennis, B. J. Fregly, and M. K. O’Malley, "An Approach to Simulate Robot-Arm Interactions in Upper-Extremity Exoskeletons", International Symposium on Wearable & Rehabilitation Robotics, 2017.
P. Agarwal, C. G. McDonald, T. A. Dennis, B. J. Fregly, and M. K. O’Malley, "Towards a Comprehensive Model of Robot-Arm Interactions: A Tool for Computational Neurorehabilitation", International Symposium on Computer Simulation in Biomechanics, 2017.
Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator, measured by a Microsoft Kinect 2 sensor, as the reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a moving-average pre-filter, a neural-network-based model for improving the accuracy of the raw Kinect pose measurements, and a constrained-state Kalman filter that uses a minimum-jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.
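A stripped-down version of the conditioning chain is sketched below: a moving-average pre-filter followed by a per-step rate limit on the commanded pose. The window length and limits are illustrative, and this stands in for, rather than reproduces, the constrained-state Kalman filter described above.

```python
# Simplified conditioning chain: moving-average pre-filter plus a per-step
# rate limit on the commanded pose. Window and limits are illustrative; this
# is not the paper's constrained-state Kalman filter.
from collections import deque

class HeadMotionConditioner:
    def __init__(self, window=5, max_step_deg=2.0):
        self.buf = deque(maxlen=window)
        self.max_step = max_step_deg
        self.last = None

    def update(self, yaw_deg):
        self.buf.append(yaw_deg)
        smoothed = sum(self.buf) / len(self.buf)        # moving average
        if self.last is not None:                       # rate limit
            delta = max(-self.max_step, min(self.max_step, smoothed - self.last))
            smoothed = self.last + delta
        self.last = smoothed
        return smoothed
```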
Publications:
P. Agarwal, S. Al Moubayed, A. Alspach, J. Kim, E. J. Carter, J. Lehman and K. Yamane, "Imitating Human Movement with a Teleoperated Robotic Head", IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016. (Winner of the Best Paper Award: Technical Category)
Exoskeletons are a new class of articulated mechanical systems whose performance is realized while in intimate contact with the human user. The overall performance depends on many factors, including the selection of architecture, device parameters, and the nature of the coupling to the human, offering numerous challenges to design evaluation and refinement. In this paper, we discuss the merger of techniques from musculoskeletal analysis and simulation-based design to study and analyze the performance of such exoskeletons. A representative example of a simplified exoskeleton interacting with and assisting the human arm is used to illustrate the principal ideas. Overall, four different case scenarios are developed and examined with quantitative performance measures to evaluate the effectiveness of the design and allow for design refinement. The results show that augmentation by way of the exoskeleton can lead to a significant reduction in muscle loading.
Publications:
P. Agarwal, M. S. Narayanan, L-F. Lee, F. Mendel and V. N. Krovi, "Simulation-based Design of Exoskeletons Using Musculoskeletal Analysis", Proceedings of the ASME 2010 International Design Engineering Technical Conference, 2010. (CIE Best Paper Award) [IDETC Talk]
Human pose estimation using monocular vision is a challenging problem in computer vision. Past work has focused on developing efficient inference algorithms and probabilistic prior models based on captured kinematic/dynamic measurements. However, such algorithms face challenges in generalizing beyond the learned dataset. In this work, we propose a model-based generative approach for estimating the human pose solely from uncalibrated monocular video in unconstrained environments, without any prior learning on motion capture or image annotation data. We propose a novel Product of Heading Experts (PoHE) based generalized heading estimation framework that probabilistically merges heading outputs (probabilistic or non-probabilistic) from a time-varying number of estimators. Our current implementation employs a motion-cue-based human heading estimation framework to bootstrap a synergistically integrated probabilistic-deterministic sequential optimization framework that robustly estimates human pose. Novel pixel-distance-based performance measures are developed to penalize false human detections and ensure identity-maintained human tracking. We tested our framework with varied inputs (silhouettes and bounding boxes) to evaluate, compare, and benchmark it against ground-truth data (collected using our human annotation tool) for 52 video vignettes in the publicly available DARPA Mind’s Eye Year I dataset. Results show robust pose estimates on this challenging dataset of highly diverse activities.
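The product-of-experts fusion can be illustrated with Gaussian experts over the heading angle, whose product is again Gaussian with a precision-weighted mean. The sketch below ignores circular wrap-around for brevity and uses a hypothetical interface; it is not the PoHE implementation from the paper.

```python
# Minimal product-of-experts fusion of heading estimates, treating each expert
# as a Gaussian over the heading angle. Circular wrap-around is ignored for
# brevity; the interface is hypothetical, not the paper's PoHE implementation.
def fuse_headings(estimates):
    """estimates: list of (mean_deg, std_deg) from a varying number of experts."""
    precisions = [1.0 / (s ** 2) for _, s in estimates]
    total = sum(precisions)
    mean = sum(m * p for (m, _), p in zip(estimates, precisions)) / total
    return mean, (1.0 / total) ** 0.5

print(fuse_headings([(10.0, 5.0), (20.0, 10.0)]))  # -> (12.0, ~4.47)
```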
Publications:
P. Agarwal, S. Kumar, J. Ryde, J. Corso, and V. Krovi, "An Optimization Based Framework for Human Pose Estimation in Monocular Videos", International Symposium on Visual Computing, Rethymnon, Crete, Greece, July 16-18, 2012. ( Best Poster Award, 23rd CSE Graduate Research Conference, State University of New York at Buffalo, 2011.)
YouTube Channel: http://www.youtube.com/user/ubmaemindseye
Estimating the physical parameters of articulated multibody systems (AMBSs) using an uncalibrated monocular camera poses significant challenges for vision-based robotics. Articulated multibody models, especially ones including dynamics, have shown good performance for pose tracking, but require good estimates of system parameters. In this paper, we first propose a technique for estimating the parameters of a dynamically equivalent model (kinematic/geometric lengths as well as mass, inertia, and damping coefficients) given only the underlying articulated model topology. The estimated dynamically equivalent model is then employed to help predict, filter, and gap-fill the raw pose estimates using an unscented Kalman filter. The framework is tested initially on videos of a relatively simple AMBS (a double pendulum in a structured laboratory environment). The double pendulum not only served as a surrogate model for the human lower limb in flight phase, but also helped evaluate the role of model fidelity. The treatment is then extended to realize physically plausible pose estimates of human lower-limb motions in more complex, uncalibrated monocular videos (from the publicly available DARPA Mind's Eye Year 1 corpus). Beyond the immediate problem at hand, the presented work has applications in the creation of low-order surrogate computational dynamics models for analysis, control, and tracking of many other articulated multibody robotic systems (e.g., manipulators, humanoids) using vision.
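As a toy stand-in for fitting a dynamically equivalent model, the sketch below identifies the parameters of a single damped pendulum (the simplest AMBS) from angle samples by finite-differencing and solving a linear least-squares problem; the estimator in the paper is more elaborate, and the data interface here is assumed.

```python
# Toy parameter identification for a single damped pendulum: given angle
# samples (e.g., from video), finite-difference the derivatives and solve
# linear least squares for g/L and damping in
#   theta_dd = -(g/L) * sin(theta) - c * theta_d.
# A stand-in for dynamically equivalent model fitting, not the paper's estimator.
import numpy as np

def identify_pendulum(theta, dt):
    theta_d = np.gradient(theta, dt)
    theta_dd = np.gradient(theta_d, dt)
    A = np.column_stack([-np.sin(theta), -theta_d])
    params, *_ = np.linalg.lstsq(A, theta_dd, rcond=None)
    g_over_L, damping = params
    return g_over_L, damping
```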
Publications:
P. Agarwal, S. Kumar, J. Ryde, J. Corso and V. Krovi, "Estimating Dynamics On-the-fly Using Monocular Video For Vision-Based Robotics", IEEE/ASME Transactions on Mechatronics, 99(4), pp. 1412-1423, 2013.
P. Agarwal, S. Kumar, J. Ryde, J. Corso, and V. Krovi, "Estimating Human Dynamics On-the-fly Using Monocular Video for Pose Estimation", Robotics: Science and Systems Conference, University of Sydney, Sydney, Australia, July 9-13, 2012.
P. Agarwal, S. Kumar, J. Corso, and V. Krovi, "Estimating Dynamics On-the-fly Using Monocular Video", Dynamic Systems and Control Conference, California, October 12-14, 2011.
YouTube Channel: http://www.youtube.com/user/ubmaemindseye