Projects
Stereo Visual Odometry using Classical Computer Vision
Implemented classical computer vision techniques for the front-end of a visual odometry pipeline with a stereo camera setup
Evaluated the results on the KITTI stereo vision dataset for autonomous driving
Detected SIFT keypoints, matched them across the stereo pair with a FLANN-based matcher, and tracked the matched keypoints across frames with Lucas-Kanade (LK) optical flow
Triangulated matched stereo features into 3D and computed the relative pose from 3D-2D correspondences (see the sketch below)
GitHub Link | Project Report
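A minimal sketch of this front-end, assuming rectified KITTI grayscale pairs with known projection matrices P_left/P_right and intrinsics K; the function and variable names are illustrative, not the repository's:

```python
import cv2
import numpy as np

def relative_pose(left_prev, right_prev, left_curr, P_left, P_right, K):
    # 1. Detect SIFT keypoints and match across the stereo pair with FLANN
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left_prev, None)
    kp_r, des_r = sift.detectAndCompute(right_prev, None)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = [m for m, n in flann.knnMatch(des_l, des_r, k=2)
               if m.distance < 0.7 * n.distance]          # Lowe's ratio test
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # 2. Triangulate matched stereo features into 3D (homogeneous -> Euclidean)
    pts4d = cv2.triangulatePoints(P_left, P_right, pts_l.T, pts_r.T)
    pts3d = (pts4d[:3] / pts4d[3]).T

    # 3. Track the left-image keypoints into the next frame with LK optical flow
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(left_prev, left_curr,
                                                   pts_l, None)
    good = status.ravel() == 1

    # 4. Recover the relative pose from 3D-2D correspondences via PnP + RANSAC
    _, rvec, tvec, _ = cv2.solvePnPRansac(pts3d[good], pts_next[good], K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```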
Structure from Motion using Monocular Camera (Depth and Pose Estimation)
Research project at the Perception and Autonomous Robotics Group, WPI, under the guidance of Prof. Nitin J. Sanket
Studying state-of-the-art methods for depth, pose, and optical flow estimation from the motion of a monocular camera in static scenes
Studying the impact of uncertainty in the prediction of these parameters and how it can be modeled to improve predictions
GitHub Link
Custom Localization and Navigation Stack for Autonomous Vehicles
C++ and ROS 2 based package consisting of localization and path-planning modules for a custom navigation stack built from the ground up
Includes Kalman filter and Extended Kalman filter implementations (see the sketch below)
GitHub Link
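A minimal sketch of the linear Kalman filter at the core of the localization module; Python is used here for brevity (the package itself is C++/ROS 2), and the matrix names F, H, Q, R are the generic textbook ones, not the package's:

```python
import numpy as np

class KalmanFilter:
    def __init__(self, F, H, Q, R, x0, P0):
        # F: state transition, H: measurement model,
        # Q/R: process/measurement noise covariances
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0, P0

    def predict(self):
        # Propagate the state and covariance through the motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # Fuse a measurement z: innovation covariance, Kalman gain, correction
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(self.P.shape[0]) - K @ self.H) @ self.P
```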
Classical Structure from Motion
Simultaneously reconstructed the 3D scene and recovered camera poses from given feature correspondences
Implemented a classical computer vision pipeline using linear and non-linear triangulation and linear and non-linear PnP over matched feature correspondences (a triangulation sketch follows below)
GitHub Link
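One building block of such a pipeline, sketched under the usual conventions: linear (DLT) triangulation of a 3D point from two 3x4 camera matrices P1, P2 and a pixel correspondence x1, x2 (names are illustrative):

```python
import numpy as np

def linear_triangulate(P1, P2, x1, x2):
    # Each view contributes two rows of the homogeneous system A X = 0
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest
    # singular value, then dehomogenize to get the 3D point
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```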
Reinforcement Learning for Lane Keeping and Obstacle Avoidance for Autonomous Vehicles
Implemented deep reinforcement learning algorithms (DQN, PPO, A3C, and DDPG) for a racetrack environment in OpenAI Gym (see the training sketch below)
Tasks were lane keeping and overtaking
Compared the algorithms on training time and performance, measured by collision rate and lane-keeping accuracy
GitHub Link
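A sketch of what one training run might look like; the environment id "racetrack-v0" (registered by highway-env) and the stable-baselines3 pairing are assumptions about the setup, not confirmed details of the project:

```python
import gymnasium as gym
import highway_env  # noqa: F401 -- assumed to register "racetrack-v0"
from stable_baselines3 import PPO

env = gym.make("racetrack-v0")            # assumed racetrack env id
model = PPO("MlpPolicy", env, verbose=1)  # one of the four algorithms compared
model.learn(total_timesteps=100_000)

# Roll out the learned policy and tally collision / lane-departure events
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```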
Semantic Segmentation of LiDAR Point Cloud
Utilized the KITTI LiDAR dataset to obtain spherical projection images of the point cloud (see the projection sketch below)
Implemented the RangeNet architecture to predict semantic class labels
Trained two modified networks to improve performance: one with a pixel-shuffle layer and one with a temporal connection between encoders
Obtained a 17 min/epoch reduction in training time, at the cost of a 5-7% drop in the mean IoU score across classes
GitHub Link | Project Report
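A sketch of the spherical (range-image) projection step, assuming Velodyne-style scans; the image size and vertical field of view below are typical KITTI values, not necessarily the project's:

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    # points: (N, 3+) array of x, y, z LiDAR returns
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)            # range per point
    yaw = np.arctan2(y, x)                               # azimuth
    pitch = np.arcsin(z / np.maximum(r, 1e-8))           # elevation
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)

    # Normalize angles into pixel coordinates of the range image
    u = 0.5 * (1.0 - yaw / np.pi) * W
    v = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * H
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    img = np.zeros((H, W), dtype=np.float32)
    img[v, u] = r                                        # store the range channel
    return img
```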
Human Aware Robot Navigation
Implemented autonomous navigation with dynamic obstacle avoidance using a Lattice Planner (global) and a Timed Path Follower (local) in a simulated hospital environment
Simulation involved humans and a static hospital environment in a Unity-ROS integrated framework
Achieved smoother trajectories and better social navigation compared to a baseline implementation using the Base Global Planner and the TEB Local Planner
GitHub Link | Project Report
Trajectory Tracking for Quadrotor UAVs using Sliding Mode Control
Generated quintic trajectories for given translational coordinates/waypoints (see the sketch below)
Designed a boundary-layer-based sliding mode controller to ensure accurate trajectory tracking
Created a ROS package to simulate the trajectory tracking in a Gazebo environment
Project Report
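A sketch of both pieces: solving for quintic polynomial coefficients from boundary conditions on position, velocity, and acceleration, and a boundary-layer sliding mode control law where sign() is replaced by saturation to reduce chattering (the gains lam, K, phi are placeholders, not the project's tuned values):

```python
import numpy as np

def quintic_coeffs(t0, tf, p0, pf, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    # q(t) = c0 + c1 t + c2 t^2 + c3 t^3 + c4 t^4 + c5 t^5
    # Six boundary conditions (pos/vel/acc at t0 and tf) -> six coefficients
    A = np.array([
        [1, t0, t0**2,   t0**3,    t0**4,    t0**5],
        [0, 1,  2*t0,  3*t0**2,  4*t0**3,  5*t0**4],
        [0, 0,  2,     6*t0,    12*t0**2, 20*t0**3],
        [1, tf, tf**2,   tf**3,    tf**4,    tf**5],
        [0, 1,  2*tf,  3*tf**2,  4*tf**3,  5*tf**4],
        [0, 0,  2,     6*tf,    12*tf**2, 20*tf**3],
    ])
    b = np.array([p0, v0, a0, pf, vf, af])
    return np.linalg.solve(A, b)

def smc_boundary_layer(e, e_dot, lam=2.0, K=5.0, phi=0.1):
    # Sliding surface s = e_dot + lam * e; saturation inside the boundary
    # layer of width phi smooths the switching term
    s = e_dot + lam * e
    return -K * np.clip(s / phi, -1.0, 1.0)
```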
Grasp Detection using RGB-D images (eye-in-hand system)
Research project under the guidance of Prof. Berk Calli, WPI
Studied and implemented geometric methods to detect suitable grasps for everyday objects from the Cornell Dataset
Implemented a pipeline based on Elliptical Fourier Descriptors to detect the object boundary and grasp axis (see the sketch below)
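A sketch of the boundary-extraction and descriptor step, assuming a segmented binary mask of the object; the pyefd library is used here for the Elliptical Fourier Descriptor computation, which may differ from the project's own implementation:

```python
import cv2
from pyefd import elliptic_fourier_descriptors

def object_boundary_efd(mask, order=10):
    # The largest external contour approximates the object boundary
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).squeeze()
    # EFD coefficients give a smooth, low-frequency description of the
    # boundary, from which a grasp axis can be derived (e.g., from the
    # orientation of the first harmonic's ellipse)
    coeffs = elliptic_fourier_descriptors(boundary, order=order,
                                          normalize=False)
    return boundary, coeffs
```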
SCARA Manipulator ROS package with Position and Velocity Controllers
Created a SCARA robot description file with forward and inverse kinematics nodes (publisher-subscriber and service-client)
Used the ros_control package to include Joint Position Controllers and tuned the PD gains for smooth performance
Implemented Joint Velocity Control using the Jacobian matrix to track a straight-line trajectory in Cartesian space (see the sketch below)
GitHub Link
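A sketch of the Jacobian-based velocity mapping for the planar revolute pair of a SCARA arm; the link lengths l1, l2 and all names are placeholders:

```python
import numpy as np

def joint_velocities(q, xdot, l1=0.3, l2=0.25):
    # q = (theta1, theta2); xdot = desired planar end-effector velocity
    t1, t2 = q
    # Planar 2R Jacobian relating joint rates to end-effector velocity
    J = np.array([
        [-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
        [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)],
    ])
    # qdot = J^{-1} xdot (valid away from singular configurations)
    return np.linalg.solve(J, xdot)
```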
Visual Servoing using ROS Noetic
Implemented a ROS Noetic package that spawns a robot with a revolute-revolute joint configuration and an object with certain coloured features
The node moves the robot in Cartesian space so that the object is observed moving from one known location to another in image space (an image-based visual servoing sketch follows below)
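A minimal image-based visual servoing sketch in the standard point-feature formulation, assuming features in normalized image coordinates with known depths; this illustrates the control law generically, not the package's exact node:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    # Interaction matrix for a point feature (x, y) at depth Z, mapping the
    # 6-DoF camera twist to the feature's image-plane velocity
    return np.array([
        [-1/Z,    0, x/Z,       x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z,  1 + y**2,        -x*y, -x],
    ])

def ibvs_velocity(features, goals, depths, gain=0.5):
    # Stack one interaction matrix per tracked feature
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    # Feature error: current minus desired image locations
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    # Classic IBVS law: v = -gain * pinv(L) @ e (6-DoF camera twist command)
    return -gain * np.linalg.pinv(L) @ e
```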