
Kartik Garg

I am a researcher in the Stochastic Robotics Lab at IISc Bangalore, headed by Prof. Shishir Kolathaya, working on making a robot dog see and walk. I also collaborate with Dr. Sourav Garg from AIML, University of Adelaide, on developing viewpoint-invariant visual place recognition systems. I completed my bachelor's in Mechanical Engineering at the Indian Institute of Technology (BHU) Varanasi.

Previously I have:

In my free time, I quiz and design cool posters :)

I am always happy to learn and open to networking.

Research

[Video: future.mp4]

My research interest lies in creating robots that can interact with their environment and make autonomous decisions. Perception-based control is a fundamental "skill" any robot needs to be taught in order to be useful to human society. I am interested in adopting methods from SLAM and learning-based control to develop this "common sense" in robots. I am a huge fan of sci-fi movies and want to build robots like Baymax and WALL-E to help humans and society.

This video perfectly encapsulates my vision for the future of society with robots.

Source: Twitter

Publications


Sachin Negi, Kartik Garg, Milind Prajapat, Neeraj Sharma


Surabhit Gupta, Kartik Garg, Lakshay Taneja

Projects

Segment Recognition (Ongoing)

Kartik Garg, Shubodh Sai, Sourav Garg, Madhava Krishna

Developing viewpoint-invariant visual place recognition models by leveraging the zero-shot capabilities of large vision models such as SAM and DINOv2.

Visual Locomotion for Quadrupeds (Ongoing)

Kartik Garg, Shishir Kolathaya

Developing visual locomotion policies using deep reinforcement learning to enable quadrupeds to traverse challenging terrain.

3D Place Recognition 

Kartik Garg, Sourav Garg, Michael Milford

Developed patch-pooling techniques for robust deep LiDAR-based place recognition systems.

Smart Emergency Response System 

Kartik Garg, Surabhit Gupta, Yashasvi Singh, Prof. Lakshay

Developing a two-stage disaster response system to handle demand uncertainty and optimize the flow of relief materials to affected areas as quickly as possible.

Deep RL for Drone Decision Making

Kartik Garg, Pascal Goldschmid, Aamir Ahmad

Developed a deep reinforcement learning model for drone docking decision-making, maximising the time a drone can safely spend on a task before it needs to dock.


Jerbot Beta 

Kartik Garg, Raghav Soni, Surabhit Gupta, Lokesh Krishna, Niranth Sai

Addressed the challenges of bipedal locomotion, motivated by the agility and nimbleness of the jerboa. Formulated an MPC controller for bipedal locomotion, inspired by the MIT Cheetah 3 controller. Jerbot Beta can't run yet, but it can fly :)

Project Page

Powered Prosthetic Ankle

Kartik Garg, Sachin Negi

Developed a control algorithm for a powered prosthetic foot. The project's aim was to build a computationally efficient classification algorithm for detecting human gait phases. A major limitation of existing powered foot prostheses is their reliance on costly, high-power computational hardware, which makes them unaffordable for much of the Indian demographic. I formulated a fuzzy logic-based algorithm to perform real-time gait phase classification. The algorithm produced strong real-time results, which led to a journal publication in Springer Nature's Computer Science journal.
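To give a flavour of the idea, here is a minimal fuzzy-logic gait-phase sketch in Python. It is not the published algorithm: the sensor inputs (normalised heel pressure and shank angular velocity), the triangular membership functions, and the rule base are all hypothetical choices made only for illustration.

# Minimal fuzzy-logic gait-phase classifier (illustrative sketch only).
# Assumed inputs: heel_pressure in [0, 1], shank_velocity in deg/s.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_gait_phase(heel_pressure, shank_velocity):
    # Fuzzify the two sensor readings with hand-picked membership functions.
    pressure = {
        "low":  tri(heel_pressure, -0.01, 0.0, 0.4),
        "high": tri(heel_pressure, 0.3, 1.0, 1.01),
    }
    velocity = {
        "slow": tri(shank_velocity, -50.0, 0.0, 50.0),
        "fast": tri(shank_velocity, 30.0, 150.0, 300.0),
    }

    # Rule base: firing strength = min of the antecedents (fuzzy AND).
    rules = {
        "stance":    min(pressure["high"], velocity["slow"]),
        "pre-swing": min(pressure["high"], velocity["fast"]),
        "swing":     min(pressure["low"],  velocity["fast"]),
    }

    # Crisp decision: the phase whose rule fires most strongly.
    return max(rules, key=rules.get), rules

if __name__ == "__main__":
    phase, strengths = classify_gait_phase(heel_pressure=0.8, shank_velocity=10.0)
    print(phase, strengths)  # e.g. "stance" with its rule firing strengths

Because the whole pipeline is a handful of comparisons and min/max operations per sample, this style of classifier can run comfortably on low-cost microcontrollers, which is the appeal over heavier learning-based approaches.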