1. Autonomous Delivery Robot (Delibot):
The Delibot was designed from scratch and integrated with 3D LiDAR, vision sensors, depth cameras, GPS, and IMU. It features full autonomy for sidewalk navigation and includes capabilities such as sidewalk segmentation, object detection and tracking, path planning, real-time mapping, and localization. It is also integrated with Autoware.ai modules.
2. LIDAR-GPS Fused PCL Mapping:
I developed a tightly-coupled 3D mapping framework for autonomous delivery robots by integrating GPS and LiDAR-Inertial Odometry (LIO) data. The system uses an uncertainty-aware mechanism to switch between GPS and LIO, ensuring robust performance even in GPS-denied environments. This approach effectively mitigates long-term drift, enhancing mapping accuracy. The framework was successfully deployed to map the entire Ontario Tech University campus and adjacent neighborhoods.
Here is the proof of my work: Paper is available here.
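As a rough illustration of the uncertainty-aware switching described above (the thresholds, covariance layout, and function names are assumptions for illustration, not the deployed framework):

```python
# Minimal sketch: uncertainty-aware selection between GPS and LiDAR-Inertial
# Odometry (LIO) pose estimates. Thresholds and field layout are illustrative.
import numpy as np

GPS_COV_THRESHOLD = 1.0   # assumed: max acceptable GPS position variance (m^2)

def select_pose(gps_pose, gps_cov, lio_pose, lio_cov):
    """Return the pose whose estimated uncertainty is acceptable,
    preferring GPS to bound long-term LIO drift."""
    gps_ok = gps_pose is not None and np.trace(gps_cov[:3, :3]) < GPS_COV_THRESHOLD
    if gps_ok:
        return gps_pose, "gps"   # GPS trusted: anchors the map globally
    return lio_pose, "lio"       # GPS-denied: fall back to LIO odometry

if __name__ == "__main__":
    lio_pose = np.array([10.2, 4.1, 0.3])
    gps_pose = np.array([10.0, 4.0, 0.0])
    good_cov = np.eye(6) * 0.05
    bad_cov = np.eye(6) * 5.0    # e.g., urban canyon or indoors
    print(select_pose(gps_pose, good_cov, lio_pose, good_cov))  # -> gps
    print(select_pose(gps_pose, bad_cov, lio_pose, good_cov))   # -> lio
```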
3. Transformer-based End-to-End Point Restoration Network for Filtering Temporarily-static Objects from Point Cloud:
I developed a transformer-based framework to remove temporarily-static objects (e.g., parked cars) and restore occluded regions in LiDAR point clouds, reducing mapping noise in dynamic environments. I also introduced a novel auto-labeled dataset that eliminates the need for manual annotation. The method achieved a 26.4% reduction in Chamfer Distance (CD-L1) and a 30.2% improvement in F-score, advancing robust filtering for map-based localization systems.
Here is the proof of my work: Draft version is available here. Publication in progress.
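The actual architecture is described in the draft; as a rough, hypothetical illustration of the general idea only (a per-point embedding, a transformer encoder over point tokens, and coordinate regression; all layer sizes are invented):

```python
# Minimal sketch of the general idea, NOT the published architecture:
# embed each LiDAR point, run a transformer encoder over the point tokens,
# and regress restored/filtered point coordinates.
import torch
import torch.nn as nn

class PointRestorationNet(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(3, d_model)                 # per-point (x, y, z) embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 3)                  # regress restored coordinates

    def forward(self, points):                             # points: (B, N, 3)
        tokens = self.encoder(self.embed(points))
        return self.head(tokens)                           # (B, N, 3) restored points

if __name__ == "__main__":
    net = PointRestorationNet()
    cloud = torch.randn(2, 1024, 3)                        # dummy batch of point clouds
    print(net(cloud).shape)                                # torch.Size([2, 1024, 3])
```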
4. Stereo Depth Estimation for Pseudo-LiDAR
This work introduces a real-time method for generating pseudo point clouds using stereo image sensors as an alternative to laser-based LiDAR. While previous approaches prioritized accuracy, they lacked the speed necessary for real-time operation. The proposed strategy explores alternative depth estimators to produce LiDAR-like point clouds with improved real-time performance.
Here is the proof of my work: Paper is available here.
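A minimal sketch of the pseudo-LiDAR back-projection step (the intrinsics, baseline, and disparity map below are illustrative placeholders, not values from the paper): a stereo disparity map is converted to depth and every pixel is lifted into a 3D point.

```python
# Minimal sketch: convert a stereo disparity map into depth, then back-project
# every valid pixel into a 3D pseudo-LiDAR point cloud.
import numpy as np

def disparity_to_point_cloud(disparity, fx, fy, cx, cy, baseline):
    """disparity: (H, W) array in pixels. Returns an (M, 3) point cloud in meters."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                                   # ignore invalid matches
    z = fx * baseline / disparity[valid]                    # depth from disparity
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

if __name__ == "__main__":
    disp = np.random.uniform(1.0, 64.0, size=(375, 1242))   # dummy KITTI-sized disparity
    cloud = disparity_to_point_cloud(disp, fx=721.5, fy=721.5,
                                     cx=609.6, cy=172.9, baseline=0.54)
    print(cloud.shape)                                       # (M, 3) pseudo-LiDAR points
```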
5. LiDAR integrated Enhanced BEV Segmentation
This work presents a novel approach for generating segmented Bird's Eye View (BEV) maps from four surround-view camera images and LiDAR data to enable safe and reliable navigation. The model uses a LiDAR encoder that captures multi-layer spatial features, significantly enhancing the detection of small yet critical objects such as vehicles, whose instances are relatively small in the context of the entire map. For vehicle-class segmentation specifically, the integration of the LiDAR encoder led to an IoU improvement of approximately 7.92%. To further boost accuracy, an attention mechanism is integrated to help the model focus on key regions across the input images.
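A minimal PyTorch sketch of the fusion concept only (a small LiDAR BEV encoder plus channel-attention fusion with camera BEV features; every shape, layer size, and module name here is an illustrative assumption, not the trained model):

```python
# Minimal sketch: encode a LiDAR BEV grid with a small CNN and fuse it with
# camera BEV features via channel attention. Shapes are illustrative.
import torch
import torch.nn as nn

class LidarBEVEncoder(nn.Module):
    def __init__(self, in_ch=4, out_ch=64):                 # e.g., multi-layer height slices
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.ReLU(),
        )
    def forward(self, lidar_bev):
        return self.net(lidar_bev)

class AttentionFusion(nn.Module):
    """Reweights fused channels so small but critical classes (e.g., vehicles)
    are not washed out by the camera stream."""
    def __init__(self, ch=128):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch, 1), nn.Sigmoid())
    def forward(self, cam_feat, lidar_feat):
        fused = torch.cat([cam_feat, lidar_feat], dim=1)     # (B, 128, H, W)
        return fused * self.gate(fused)                       # channel attention

if __name__ == "__main__":
    cam = torch.randn(1, 64, 200, 200)                        # camera BEV features
    lidar = LidarBEVEncoder()(torch.randn(1, 4, 200, 200))    # LiDAR BEV features
    print(AttentionFusion()(cam, lidar).shape)                 # torch.Size([1, 128, 200, 200])
```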
6. Object Detection module for Autonomous Vehicle:
The vehicle uses a camera sensor to detect objects such as pedestrians and stop signs, enabling it to stop accordingly. The ROS interface visualizes the detected objects in real time. Lane following is achieved using an end-to-end CNN-based algorithm.
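A minimal sketch of the stop logic (class names, confidence threshold, and cruise speed are illustrative assumptions; in the actual system this logic is wired through the ROS interface):

```python
# Minimal sketch: if a pedestrian or stop sign is detected by the camera-based
# detector, command zero velocity; otherwise keep the current cruise speed.
STOP_CLASSES = {"person", "stop sign"}      # assumed detector class names
CRUISE_SPEED = 1.5                          # m/s, illustrative

def speed_command(detections, current_speed=CRUISE_SPEED):
    """detections: iterable of (class_name, confidence) from the detector."""
    for cls, conf in detections:
        if cls in STOP_CLASSES and conf > 0.5:
            return 0.0                      # halt for pedestrians / stop signs
    return current_speed

if __name__ == "__main__":
    print(speed_command([("car", 0.9)]))                    # keep driving
    print(speed_command([("person", 0.8), ("car", 0.7)]))   # stop
```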
7. Semantic Segmentation-based Lane Keeping Assist System:
An advanced lane-keeping system that uses semantic segmentation to identify lane markings and guide the vehicle along its path. A real-time implementation using DeepLabv3 was carried out on the KSNU campus road, and the segmentation output enabled robust lane keeping.
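A minimal sketch of how a segmentation mask can be turned into a steering command (the gain, look-ahead band, and fake mask below are illustrative, not the on-vehicle controller):

```python
# Minimal sketch: take the lane-marking mask from the segmentation network,
# find the lane center in the lower part of the image, and steer toward it.
import numpy as np

STEER_GAIN = 0.005          # rad per pixel of lateral offset, assumed

def steering_from_mask(lane_mask):
    """lane_mask: (H, W) binary array, 1 where lane pixels were segmented."""
    h, w = lane_mask.shape
    band = lane_mask[int(0.7 * h):, :]               # look at the road just ahead
    cols = np.nonzero(band)[1]
    if cols.size == 0:
        return 0.0                                    # no lane found: hold straight
    offset = cols.mean() - w / 2.0                    # + means lane is to the right
    return STEER_GAIN * offset                        # proportional steering command

if __name__ == "__main__":
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[400:, 380:400] = 1                           # fake lane slightly to the right
    print(round(steering_from_mask(mask), 3))         # small positive steering angle
```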
8. Integrated Vision and Laser-Based Navigation System
This approach combined vision and LiDAR to perform robust maneuvering in a Gazebo simulation. Hector SLAM was used for mapping and localization, and trajectories were plotted for performance comparison. YOLO object detection was used to avoid obstacles.
9. Real-World Embedded System Integration:
The proposed methods (Integrated Vision and Laser-Based Navigation System) were validated through real-world experiments using a Hokuyo 30LX LiDAR and a vision sensor. The system ran on a Jetson AGX Xavier embedded GPU platform and was interfaced via ROS.
10. Unmanned Systems World Congress Demonstration:
A demonstration of the developed system was conducted in front of Yoo Yeong Min, the Ex-Minister of Science and ICT, at the 2019 Unmanned Systems World Congress in Seoul.
11. AI-Driven Sidewalk Navigation Algorithm:
An AI-driven sidewalk navigation algorithm for a medium-sized autonomous delivery vehicle that uses semantic segmentation to ensure safe and accurate path following in urban environments. The algorithm leveraged DeepLabv3 with transfer learning because the training dataset was small, allowing effective training despite limited annotations.
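A minimal sketch of such a transfer-learning setup using torchvision's DeepLabv3 (the class count, freezing policy, and omission of the auxiliary head are assumptions for illustration, not necessarily the original training code):

```python
# Minimal sketch: start from a pretrained DeepLabv3, freeze the backbone, and
# retrain only a new classification head, which suits a small annotated dataset.
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2                                       # assumed: sidewalk vs. background

model = deeplabv3_resnet50(weights="DEFAULT")         # pretrained weights
for p in model.backbone.parameters():
    p.requires_grad = False                           # freeze the feature extractor
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)   # new head

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```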
12. Integrated Obstacle Avoidance Module (CAOD):
An integrated obstacle avoidance module (CAOD) that operates alongside sidewalk segmentation to enable safe navigation around dynamic and static obstacles.
13. Lane Following in Extreme Weather Conditions
Implemented DeepLabv3-based semantic segmentation in the AirSim simulator to achieve lane following under various weather conditions. The paper was published in the 2020 Spring KSME Joint Symposium on Dynamics & Control / IT Convergence.
14. Lane Keeping System Comparison
Comparative analysis of lane keeping performance with and without steering control using semantic segmentation.
15. Lane Detection Using Perspective Transformation + LaneNet + Sliding Window:
Most existing lane detection work uses a feature-based approach, which is limited in many cases. This project replaced traditional feature-based lane detection with a data-driven LaneNet approach, combined with perspective transformation and sliding-window lane fitting, for robust steering.
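A minimal sketch of the classical geometric part of such a pipeline (a bird's-eye warp followed by a sliding-window polynomial fit); the warp points, window parameters, and fake input mask are illustrative, and in the actual project the lane-pixel mask would come from LaneNet rather than thresholding.

```python
# Minimal sketch: warp a binary lane mask to a top-down view, then fit a
# 2nd-order polynomial to the strongest lane using sliding windows.
import cv2
import numpy as np

def birds_eye(mask, src, dst):
    """Warp a binary lane mask (H, W) to a top-down view."""
    h, w = mask.shape
    M = cv2.getPerspectiveTransform(src.astype(np.float32), dst.astype(np.float32))
    return cv2.warpPerspective(mask, M, (w, h))

def sliding_window_fit(warped, n_windows=9, margin=50, min_pix=50):
    """Fit x(y) = a*y^2 + b*y + c to the lane pixels in a warped mask."""
    h, w = warped.shape
    histogram = warped[h // 2:, :].sum(axis=0)
    x_cur = int(np.argmax(histogram))                  # starting column of the lane
    ys, xs = np.nonzero(warped)
    lane_idx = []
    win_h = h // n_windows
    for i in range(n_windows):
        y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
        in_win = ((ys >= y_lo) & (ys < y_hi) &
                  (xs >= x_cur - margin) & (xs < x_cur + margin))
        lane_idx.append(np.flatnonzero(in_win))
        if in_win.sum() > min_pix:
            x_cur = int(xs[in_win].mean())              # recenter the next window
    lane_idx = np.concatenate(lane_idx)
    return np.polyfit(ys[lane_idx], xs[lane_idx], 2)

if __name__ == "__main__":
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[:, 300:310] = 255                              # fake vertical lane
    src = np.array([[100, 480], [540, 480], [400, 300], [240, 300]])
    dst = np.array([[100, 480], [540, 480], [540, 0], [100, 0]])
    print(sliding_window_fit(birds_eye(mask, src, dst)))  # polynomial coefficients
```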
16. UAV Visual Tracking using YOLO
In this work, the YOLO deep learning object detection algorithm was used to visually guide the UAV to track the detected target. The detected target's bounding box and the image frame center were the main parameters used to control the forward motion, heading, and altitude of the vehicle. The proposed control approach consisted of two PID controllers that managed the heading and altitude rates (a minimal sketch follows the paper links below). For real-time computation, an NVIDIA Jetson TX2-based edge-computing module was used, which takes input data from onboard sensors such as the camera. A navigation system that operates entirely onboard the UAV, without external localization sensors or a GPS signal, was also introduced; it uses a fisheye camera to perform visual SLAM for localization.
Paper link - Click here
Paper link - Click here
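A minimal sketch of the tracking control idea described above (gains, frame size, and the forward-speed rule are illustrative assumptions, not the flight code):

```python
# Minimal sketch: two PID controllers drive the heading rate and altitude rate
# so the detected target's bounding-box center stays at the image center,
# while the box size regulates forward speed.
import time

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err, self.prev_t = 0.0, 0.0, None
    def update(self, err):
        t = time.monotonic()
        dt = 0.0 if self.prev_t is None else t - self.prev_t
        self.integral += err * dt
        deriv = 0.0 if dt == 0.0 else (err - self.prev_err) / dt
        self.prev_err, self.prev_t = err, t
        return self.kp * err + self.ki * self.integral + self.kd * deriv

heading_pid = PID(kp=0.004, ki=0.0, kd=0.001)      # yaw rate from horizontal error
altitude_pid = PID(kp=0.003, ki=0.0, kd=0.001)     # climb rate from vertical error

def track_step(bbox, frame_w=640, frame_h=480):
    """bbox: (x_min, y_min, x_max, y_max) of the YOLO-detected target."""
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    yaw_rate = heading_pid.update(cx - frame_w / 2.0)      # center target horizontally
    climb_rate = altitude_pid.update(frame_h / 2.0 - cy)   # and vertically
    box_width = bbox[2] - bbox[0]
    forward = max(0.0, 1.0 - box_width / frame_w)          # slow down as target fills frame
    return forward, yaw_rate, climb_rate

if __name__ == "__main__":
    print(track_step((400, 180, 480, 300)))                # target right of center -> turn right
```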
17. Real-World Lane Following with Object Detection
The experiments were conducted in both daylight and low-light conditions. The vehicle used a camera sensor both to follow the lane with the algorithm and to detect objects (pedestrians, stop signs) so that it could halt in front of them. Three camera sensors collected a real-world dataset to train the CNN model, which was then tested in the real environment. The car followed the lane using the end-to-end CNN algorithm.
18. Autonomous Shuttle Cart in Simulation
The simulation environment was designed to train the vehicle to steer autonomously in the shuttle zone, as part of an effort to develop a shuttle cart for autonomous campus tours. Three camera sensors collected a dataset from the simulation environment to train the CNN model, which was then tested in the same environment. The car followed the lane using the end-to-end CNN algorithm (a sketch of such a model follows the paper links below).
Paper link - Click here
Paper link - Click here
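A minimal sketch of an end-to-end steering network in the spirit of the three-camera setup used in items 17 and 18 (the layer sizes follow the well-known PilotNet layout; the actual trained model may differ):

```python
# Minimal sketch: map a 66x200 RGB frame directly to a steering angle.
import torch
import torch.nn as nn

class EndToEndSteering(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, 3), nn.ELU(),
            nn.Conv2d(64, 64, 3), nn.ELU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1),                          # steering angle
        )
    def forward(self, x):                              # x: (B, 3, 66, 200)
        return self.regressor(self.features(x))

if __name__ == "__main__":
    net = EndToEndSteering()
    print(net(torch.randn(1, 3, 66, 200)).shape)       # torch.Size([1, 1])
```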
19. Deep Reinforcement Learning with LiDAR-equipped RC Car
Converted LiDAR data to a 2D grid map for training a DRL-based policy in Gazebo. Tested in various simulated environments.
[Conference: ICCAS 2020, IEEE] Paper Link - Click here
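A minimal sketch of the scan-to-grid conversion (grid size, resolution, and range limits are illustrative, not the exact values used in the paper):

```python
# Minimal sketch: project each LiDAR range reading into a robot-centered
# 2D occupancy grid that a DRL policy can consume like an image.
import numpy as np

def scan_to_grid(ranges, angle_min, angle_increment,
                 grid_size=64, resolution=0.1, max_range=6.0):
    """ranges: 1D array of laser ranges (m). Returns a (grid_size, grid_size) grid."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    angles = angle_min + np.arange(len(ranges)) * angle_increment
    valid = (ranges > 0) & (ranges < max_range)
    xs = ranges[valid] * np.cos(angles[valid])
    ys = ranges[valid] * np.sin(angles[valid])
    # Robot sits at the grid center; convert metric hits to cell indices.
    cols = (xs / resolution + grid_size / 2).astype(int)
    rows = (ys / resolution + grid_size / 2).astype(int)
    inside = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
    grid[rows[inside], cols[inside]] = 1.0              # mark occupied cells
    return grid

if __name__ == "__main__":
    ranges = np.full(360, 2.0)                          # fake scan: wall 2 m away all around
    grid = scan_to_grid(ranges, angle_min=-np.pi, angle_increment=np.deg2rad(1.0))
    print(grid.shape, int(grid.sum()), "occupied cells")
```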
20. Deep Reinforcement Learning with LiDAR-equipped RC Car
Used raw LiDAR scans directly (without the grid-map conversion) to train a DRL-based policy in Gazebo. Tested in various simulated environments.
[Conference: ICCAS 2020, IEEE] Paper Link - Click here
21. Lane Following in Athletic Field
A Kalman filter-based lane detection algorithm was used to steer the vehicle autonomously on an athletic field. The Pioneer 3-AT platform used a camera sensor to follow the lane with this algorithm.
Highly Curved Lane Detection Algorithms Based on Kalman Filter - click here
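A minimal sketch of the filtering idea only (a 1D constant-velocity Kalman filter over the lane's lateral offset; the state model and noise values are illustrative, not the published algorithm):

```python
# Minimal sketch: smooth noisy per-frame lane-offset measurements so the
# steering command stays stable on highly curved sections.
import numpy as np

dt = 0.1                                              # frame period (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])                 # state: [lane offset, offset rate]
H = np.array([[1.0, 0.0]])                            # we only measure the offset
Q = np.diag([1e-3, 1e-2])                             # process noise
R = np.array([[0.25]])                                # measurement noise

x = np.zeros((2, 1))                                  # initial state
P = np.eye(2)

def kalman_step(z):
    """z: noisy lane-offset measurement from the detector for one frame."""
    global x, P
    x = F @ x                                         # predict
    P = F @ P @ F.T + Q
    y = np.array([[z]]) - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ y                                     # update
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for true_offset in np.linspace(0.0, 0.5, 20):     # lane drifting on a curve
        print(round(kalman_step(true_offset + rng.normal(0, 0.1)), 3))
```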
22. Simulation trained CNN model
The model was trained using three-camera data for autonomous lane following in an indoor simulation environment designed for that purpose. Three camera sensors collected a dataset from the simulation environment to train the CNN model, which was then tested in the same environment. The vehicle followed the path using the end-to-end CNN algorithm.
23. Microcontroller-Based Robotics Projects
Automation and robotics for product sorting by size and color using Arduino.
Development of a navigation aid for visually impaired persons using an ISD1760 and a microcontroller.
USART communication from a PC to a bipedal robot using a microcontroller & Visual Basic.
Controlling a robot with DTMF using a microcontroller.
Design & fabrication of the electronic prototyping platform PINGUINO and PICKIT2 using a microcontroller.
24. Hand Gesture Controlled Robot
Control a robot simply by moving your hand. This hand gesture–controlled robot is built using Arduino and operates by detecting hand movements and translating them into motion commands. The system uses an MPU6050 accelerometer and gyroscope, a pair of nRF24L01 transceivers, and an L293D motor driver module. The setup includes two main sections: a transmitter and a receiver.
The transmitter section features an Arduino Uno, an MPU6050 sensor, and an nRF24L01 module, while the receiver section includes another Arduino Uno, an nRF24L01 module, two DC motors, and an L293D motor driver. The transmitter acts as a remote controller, allowing the robot to respond to hand gestures in real time.