I am an MS Robotics student at Georgia Tech. I work with Prof. Sehoon Ha on RL for humanoids.
Over the summer, I worked as an AI Resident at 1X, where I taught NEO cool stuff with RL :). I completed my undergrad at the Indian Institute of Technology (BHU) Varanasi, earning my bachelor's degree in Mechanical Engineering.
Previously, I have:
Worked at the Stochastic Robotics lab at IISc Bangalore, headed by Prof. Shishir Kolathaya, on making a robot dog see and walk.
Collaborated with Dr. Sourav Garg from AIML, University of Adelaide, to develop viewpoint invariant visual place recognition systems.
Interned at the Flight Robotics and Perception Group, University of Stuttgart, as a DAAD WISE Scholar (an Indo-German exchange program). I developed a deep-RL-based drone docking system under the supervision of Prof. Aamir Ahmad and Pascal Goldschmid.
In my free time I quiz and design cool posters. I also practice kendo (read: training to be a samurai :)
I am always happy to learn and open to networking.
I’ve always been fascinated by the kind of intelligence that allows humans to move, adapt, and interact effortlessly with the world. This ability, while natural for us, remains surprisingly difficult for robots to achieve, a challenge captured by Moravec’s paradox. My research focuses on using reinforcement learning to give robots this kind of physical intelligence, enabling them to perform complex, human-like movements and adapt to dynamic environments. Inspired by sci-fi characters like Baymax and Wall-E, my long-term goal is to create robots that are not just technically advanced but genuinely helpful.
This video perfectly encapsulates my vision for the future of society with robots.
Unsupervised Skill Discovery as Exploration for Learning Agile Locomotion.
CoRL 2025
Seungeun Rho*, Kartik Garg*, Morgan Byrd, Sehoon Ha
By applying unsupervised reinforcement learning, we trained agile locomotion behaviors without relying on a curriculum or reward engineering.
Revisit Anything: Visual Place Recognition via Image Segment Retrieval.
ECCV 2024
Kartik Garg*, Shubodh Sai*, Shishir Kolathaya, Madhava Krishna, Sourav Garg
Sachin Negi, Kartik Garg, Milind Prajapat, Neeraj Sharma
Developed a control algorithm for a powered prosthetic foot. The project aimed to build a computationally efficient classification algorithm for detecting human gait phases in real time. A major limitation of existing powered foot prosthetics is their reliance on costly, high-power computing hardware, which puts them out of reach for much of the Indian demographic. I formulated a fuzzy-logic-based algorithm to perform real-time classification. The algorithm produced exemplary real-time results, which led to a journal publication in Springer Nature's Computer Science journal.
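The core idea of the fuzzy-logic classifier can be sketched in a few lines: assign each gait phase a membership function over a sensor signal and pick the phase with the highest membership degree. The sensor choice (normalized heel pressure) and the phase labels below are illustrative assumptions, not the published design.

```python
# Minimal sketch of a fuzzy-logic gait-phase classifier.
# Assumes a single normalized heel-pressure reading in [0, 1];
# phase labels and membership shapes are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# One membership function per gait phase over normalized heel pressure.
PHASES = {
    "swing":       lambda p: tri(p, -0.5, 0.0, 0.3),
    "heel_strike": lambda p: tri(p, 0.1, 0.5, 0.8),
    "stance":      lambda p: tri(p, 0.5, 1.0, 1.5),
}

def classify(pressure):
    """Return the gait phase with the highest membership degree."""
    return max(PHASES, key=lambda phase: PHASES[phase](pressure))
```

Because the classifier is just a handful of arithmetic comparisons per sample, it runs comfortably on low-cost microcontrollers, which is the point of avoiding heavy learned models here.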
Diffusion-Locomotion
Kartik Garg*, Mohit Javale*, Narayanan P P*, Shishir Kolathaya
Developed a unified diffusion-based policy for morphology-adaptive, multi-robot, multi-skill locomotion.
Kartik Garg, Shishir Kolathaya
Developed visual locomotion policies using deep reinforcement learning to enable locomotion on challenging terrains.
Kartik Garg, Pascal Goldschmid, Aamir Ahmad
Developed a deep reinforcement learning model for drone docking decision making, maximising the time a drone can safely spend on its task before it needs to dock.
Kartik Garg, Surabhit Gupta, Yashasvi Singh, Prof. Lakshay
Developing a two-stage disaster response system that tackles demand uncertainty and optimizes the flow of relief materials to affected areas as quickly as possible.
Kartik Garg, Sourav Garg, Michael Milford
Developed patch pooling techniques for robust deep lidar-based place recognition systems.
Kartik Garg, Raghav Soni, Surabhit Gupta, Lokesh Krishna, Niranth Sai
Addressed the challenges of bipedal locomotion, motivated by the agility and nimbleness of the jerboa. Formulated an MPC controller for bipedal locomotion, inspired by the MIT Cheetah 3 controller. Jerbot Beta can't run yet, but it can fly :)