I am a doctoral researcher in the International Joint Ph.D. Programme in Intelligent Mechanical Engineering, a collaborative initiative between IIT Guwahati (India) and Gifu University (Japan). I am currently affiliated with the Biomimetic Robotics and Artificial Intelligence Lab (BRAIL) at IIT Guwahati and the Motion Laboratory at Gifu University.
My research lies at the convergence of soft robotics, wearable sensing, and machine learning, with a core focus on restoring upper limb function—particularly grasp ability—in individuals with motor impairments such as those caused by stroke or neuromuscular disorders.
Key objectives and contributions of my research include:
Development of a multi-sensor data glove to accurately measure finger joint angles and fingertip forces during grasping tasks, enabling detailed analysis of hand kinematics and kinetics.
Design of a 3D-printed tendon-driven soft hand exoskeleton using flexible materials and ergonomic architecture, aimed at providing comfortable, compliant assistance for users during rehabilitation.
Grasp type classification and early intent prediction using hybrid deep learning models (e.g., CNN-BiLSTM and Transformer-based architectures) to enable intelligent, real-time control of assistive devices (a minimal classifier sketch follows this list).
Grasp force estimation using direct, autoregressive, and multimodal predictive models to infer user effort and interaction forces without external instrumentation.
Exploration of grasp synergies, i.e., coordinated patterns of joint motion and force distribution, using dimensionality reduction techniques to simplify control and personalize assistance (a PCA-based sketch follows this list).
Implementation of synergy-based and Assistance-as-Needed control strategies, enabling adaptive support levels in soft exoskeletons based on real-time user performance and intent (a toy gain-adaptation sketch closes the examples below).
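
To make the classification approach above concrete, here is a minimal PyTorch sketch of a CNN-BiLSTM grasp classifier. The channel count, window length, class count, and layer sizes are illustrative placeholders, not the exact architecture used in my work: 1-D convolutions extract short-term features from windowed glove signals, a bidirectional LSTM models their temporal context, and a linear head scores grasp classes.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Hypothetical CNN-BiLSTM grasp classifier (all sizes are placeholders)."""

    def __init__(self, n_channels=15, n_classes=6, hidden=64):
        super().__init__()
        # Convolutional front end: local temporal features per window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM: temporal context over the feature sequence
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):             # x: (batch, channels, time)
        f = self.conv(x)              # (batch, 32, time/2)
        f = f.transpose(1, 2)         # (batch, time/2, 32)
        out, _ = self.lstm(f)         # (batch, time/2, 2*hidden)
        return self.head(out[:, -1])  # class logits from the last time step

model = CNNBiLSTM()
logits = model(torch.randn(8, 15, 100))  # 8 windows, 15 channels, 100 samples
```

Early intent prediction follows the same pattern, only with windows truncated to the opening phase of the reach-to-grasp motion.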
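In the same spirit, here is a minimal sketch of synergy extraction via PCA, assuming joint-angle trajectories stacked as a (samples × joints) matrix; the data below are random placeholders, and the 90% variance threshold is only illustrative.

```python
import numpy as np

# Hypothetical data: joint-angle samples from many grasps, shape (samples, joints)
rng = np.random.default_rng(0)
angles = rng.standard_normal((5000, 15))

# PCA via SVD of the mean-centred data: right singular vectors are
# candidate synergies (coordinated joint-motion patterns)
centred = angles - angles.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

k = np.searchsorted(np.cumsum(explained), 0.90) + 1  # synergies for 90% variance
synergies = vt[:k]                  # (k, joints): basis of joint couplings
weights = centred @ synergies.T     # low-dimensional activations per sample
reconstruction = weights @ synergies + angles.mean(axis=0)
```

A small number of such synergies typically reconstructs most of the joint motion, which is what makes them attractive as a reduced control space for the exoskeleton.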
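Finally, a toy sketch of one Assistance-as-Needed update, assuming a scalar tracking error: the assistance gain grows with persistent error and decays through a forgetting term when the user tracks well, so support fades as performance improves. All constants are illustrative placeholders, not tuned values from my work.

```python
def assist_as_needed(target, actual, gain, *, k_err=1.0, forget=0.02, g_max=1.0):
    """One step of a hypothetical Assistance-as-Needed law (toy constants)."""
    error = target - actual
    # Grow the gain with persistent error; shrink it otherwise ("forgetting")
    gain = min(g_max, max(0.0, gain + k_err * abs(error) - forget))
    assistance = gain * error       # compliant corrective effort
    return assistance, gain
```

In practice the same idea is applied per synergy rather than per joint, which keeps the adaptation low-dimensional.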
Currently, I am also serving as a JST-India Young Invited Researcher under the Sakura Science Exchange Program, supported by the Japan Science and Technology Agency (JST), where I am working on a project titled “Imitation Learning of Grasping Motions for Robotic Automation in Production Environments.”
🚀 Vision and Future Work
As I approach the completion of my Ph.D., I am actively seeking postdoctoral and R&D opportunities in areas aligned with robotic manipulation, soft assistive devices, and human-robot interaction.
My overarching goal is to develop intelligent, adaptive, and human-aware robotic systems that can sense, learn, and respond in real time—bridging the gap between human intent and robotic execution.
In future work, I aim to focus on:
Soft robotic hands and wearable exosuits for upper-limb rehabilitation, motor recovery, and assistive support in daily activities.
Multimodal sensing systems integrating vision, touch, haptics, and wearable biosignals (e.g., EEG/EMG) to enhance perceptual awareness and intent understanding.
Learning-based control strategies such as imitation learning, reinforcement learning, and foundation model integration (e.g., LLMs + Robotics) for flexible and scalable robotic behavior.
Visual-tactile intelligence for deformation-aware, contact-rich manipulation in both structured and unstructured environments.
EEG/EMG-based intent decoding to enable intuitive and direct brain/muscle-interfaced control of robotic assistive devices, particularly for individuals with severe motor deficits.
Synergy-based, co-adaptive control frameworks to personalize robotic assistance based on user performance, fatigue levels, and task complexity.
I am particularly interested in contributing to interdisciplinary teams focused on the next generation of robotic dexterity, assistive technologies, and autonomous systems for applications in neurorehabilitation, healthcare, and collaborative robotics in industrial settings.