Snapshot of the Quadruped
My current research revolves around learning robust locomotion behavior for an in-house-built, small-sized quadruped with 18 degrees of freedom (DoF) using Deep Reinforcement Learning. Classical approaches to locomotion control require a complete dynamic and kinematic model of the robot. Based on these models, an expert designs the end-point trajectories to be executed, which involves tedious fine-tuning. These mathematical models, however, fail to exploit the robot's full capabilities and do not provide adaptive, robust control.
Deep Reinforcement Learning (Deep-RL) techniques automate this process and enable the quadruped to explore its action space and learn to walk from scratch, guided only by a scalar reward signal. Deep-RL, however, has its own challenges, such as sample inefficiency.
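To make the agent-environment interaction concrete, here is a minimal sketch of the Deep-RL training loop for an 18-DoF action space driven by a scalar reward. The environment class, observation layout, and reward terms below are illustrative assumptions, not the actual setup used on this robot; a trained policy network would replace the random exploration shown here.

```python
import numpy as np

# Hypothetical stand-in for the quadruped simulation environment:
# 18-dimensional action space (one command per joint) and a scalar reward.
class QuadrupedEnvStub:
    def __init__(self, n_joints=18):
        self.n_joints = n_joints

    def reset(self):
        # Assumed observation: joint positions, joint velocities, body orientation.
        return np.zeros(2 * self.n_joints + 3)

    def step(self, action):
        # A real simulator would apply the joint commands and step the physics.
        obs = np.random.randn(2 * self.n_joints + 3)
        forward_velocity = np.random.rand()           # placeholder progress term
        energy_penalty = 0.005 * np.sum(action ** 2)  # discourage large torques
        reward = forward_velocity - energy_penalty    # scalar learning signal
        done = False
        return obs, reward, done

env = QuadrupedEnvStub()
obs = env.reset()
for t in range(1000):
    # A learned policy would map obs -> action; random exploration stands in here.
    action = np.random.uniform(-1.0, 1.0, size=env.n_joints)
    obs, reward, done = env.step(action)
    if done:
        obs = env.reset()
```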
To address these challenges, my aim is to design an efficient and robust controller that learns to walk. I have built a physics-based simulator to train the robot model. In addition, I am working on methodologies to effectively transfer the learned behavior from the simulation environment to the real hardware.
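One common family of sim-to-real techniques is domain randomization, where physical parameters are perturbed each episode so the policy cannot overfit to one set of simulator dynamics. The sketch below shows the idea only; the parameter names, ranges, and the simulator handle are assumptions for illustration, not the specific transfer method used in this work.

```python
import numpy as np

# Illustrative ranges for parameters that typically differ between simulation
# and hardware; the specific parameters and bounds are assumptions.
RANDOMIZATION_RANGES = {
    "link_mass_scale":   (0.8, 1.2),   # multiplicative scaling of link masses
    "ground_friction":   (0.5, 1.25),  # lateral friction coefficient
    "motor_kp_scale":    (0.8, 1.2),   # position-gain scaling of joint motors
    "control_latency_s": (0.0, 0.04),  # sensing-to-actuation delay in seconds
}

def sample_dynamics(rng):
    """Draw one set of dynamics parameters for the next training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def apply_dynamics(sim, params):
    """Push sampled parameters into the simulator before the episode starts.

    `sim` is a hypothetical simulator handle; a real implementation would call
    the physics engine's API (e.g. setting link masses and friction) here.
    """
    for name, value in params.items():
        setattr(sim, name, value)

rng = np.random.default_rng(0)
# Per-episode usage (sim, policy, run_episode are placeholders):
# for episode in range(num_episodes):
#     apply_dynamics(sim, sample_dynamics(rng))
#     run_episode(policy, sim)   # the policy must cope with the varied dynamics
print(sample_dynamics(rng))
```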
The hardware is designed and manufactured in-house by Dhaivat Dholakia and Shounak Bhattacharya (IISc).