In this project I cover sensor fusion: the process of combining data from multiple sensors to build a better understanding of the world around us. We will mostly focus on two sensors, lidar and radar. By the end, I fuse the data from these two sensors to track multiple cars on the road, estimating their positions and speeds.
Lidar sensing gives us high-resolution data by sending out thousands of laser signals. These lasers bounce off objects and return to the sensor, where we can determine how far away objects are by timing how long the signal takes to return. We can also tell a little about the object that was hit by measuring the intensity of the returned signal. Each laser ray is in the infrared spectrum and is sent out at many different angles, usually across a 360-degree range. While lidar sensors give us highly accurate 3D models of the world around us, they are currently very expensive, upwards of $60,000 for a standard unit.
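As a quick illustration of the time-of-flight principle described above, here is a minimal sketch (the function name and the example timing are mine; the round-trip halving and the speed-of-light constant are standard physics):

```cpp
#include <iostream>

// Range from a lidar return: the pulse travels to the target and back,
// so the one-way distance is (speed of light * elapsed time) / 2.
double rangeFromTimeOfFlight(double elapsedSeconds) {
    constexpr double kSpeedOfLight = 299'792'458.0;  // m/s
    return kSpeedOfLight * elapsedSeconds / 2.0;
}

int main() {
    // A return after ~0.4 microseconds corresponds to roughly 60 m.
    std::cout << rangeFromTimeOfFlight(0.4e-6) << " m\n";
}
```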
The idea of the camera course is to build a collision detection system. I built the feature tracking part and tested various detector / descriptor combinations to see which ones perform best.
First, I focused on loading images, setting up data structures, and putting everything into a ring buffer to limit memory load (sketched after this list).
Then, I integrated several keypoint detectors such as HARRIS, FAST, BRISK, and SIFT and compared them with regard to the number of keypoints and speed.
In the next part, I focused on descriptor extraction and matching, using both the brute-force and FLANN approaches.
In the last part, once the code framework was complete, I tested the various algorithms in different combinations and compared them against several performance measures.
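A condensed sketch of these pieces, assuming OpenCV's standard factory functions; the buffer size, the detector/descriptor pair (FAST + BRISK), and the image file names are placeholder choices, not the only combination tested:

```cpp
#include <deque>
#include <string>
#include <vector>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    constexpr size_t kBufferSize = 2;        // keep only the frames still needed
    std::deque<cv::Mat> ringBuffer;

    auto detector  = cv::FastFeatureDetector::create();
    auto extractor = cv::BRISK::create();
    // Brute-force matcher with Hamming norm for binary (BRISK) descriptors.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);

    cv::Mat prevDesc;
    for (int i = 0; i < 10; ++i) {           // image names are placeholders
        cv::Mat img = cv::imread("img_" + std::to_string(i) + ".png",
                                 cv::IMREAD_GRAYSCALE);

        ringBuffer.push_back(img);
        if (ringBuffer.size() > kBufferSize)
            ringBuffer.pop_front();          // bounded memory: drop the oldest frame

        std::vector<cv::KeyPoint> kpts;
        cv::Mat desc;
        detector->detect(ringBuffer.back(), kpts);
        extractor->compute(ringBuffer.back(), kpts, desc);

        if (!prevDesc.empty()) {
            std::vector<cv::DMatch> matches;
            matcher.match(prevDesc, desc, matches);  // previous -> current frame
        }
        prevDesc = desc;
    }
}
```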
First, developed a way to match 3D objects over time using keypoint correspondences.
Second, computed the time-to-collision (TTC) based on lidar measurements.
Third, did the same using the camera, which required first associating keypoint matches with regions of interest and then computing the TTC from those matches (both TTC formulas are sketched after this list).
Lastly, conducted various tests with the framework. My goal was to identify the most suitable detector/descriptor combination for TTC estimation and to search for problems that can lead to faulty measurements by the camera or lidar sensor. Also implemented a Kalman filter, a great way to combine the two independent TTC measurements into a single estimate that is more reliable than either sensor alone.
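Under a constant-velocity assumption, both TTC estimates reduce to short formulas; a minimal sketch (the function names are mine, and the median of an even-length list is approximated by its upper middle element):

```cpp
#include <algorithm>
#include <vector>

// Lidar TTC under a constant-velocity model: with the nearest reliable point
// at distance d0 in the previous frame and d1 in the current frame,
// TTC = d1 * dt / (d0 - d1). Assumes d0 > d1 (the gap is closing).
double lidarTTC(double d0, double d1, double dt) {
    return d1 * dt / (d0 - d1);
}

// Camera TTC from the median ratio of keypoint distances between frames:
// TTC = -dt / (1 - medianDistRatio). The median makes it robust to outliers.
double cameraTTC(std::vector<double> distRatios, double dt) {
    std::nth_element(distRatios.begin(),
                     distRatios.begin() + distRatios.size() / 2,
                     distRatios.end());
    double medianRatio = distRatios[distRatios.size() / 2];
    return -dt / (1.0 - medianRatio);
}
```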
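And a minimal scalar sketch of fusing the two independent measurements; for a static state, the Kalman update reduces to inverse-variance weighting (the variances here are placeholders one would tune per sensor):

```cpp
// Fuse two independent TTC measurements by inverse-variance weighting.
struct Fused { double value; double variance; };

Fused fuseTTC(double ttcLidar, double varLidar,
              double ttcCamera, double varCamera) {
    double w = varCamera / (varLidar + varCamera);  // weight on the lidar estimate
    Fused f;
    f.value    = w * ttcLidar + (1.0 - w) * ttcCamera;
    f.variance = (varLidar * varCamera) / (varLidar + varCamera);  // always smaller than either input
    return f;
}
```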
A library implementing Q-learning in modern C++, using OpenCV for simulation.
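A minimal sketch of the tabular Q-learning update at the heart of such a library (the state/action counts and hyperparameters are placeholders, not the library's actual API):

```cpp
#include <algorithm>
#include <array>

// Tabular Q-learning update:
// Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
constexpr int kStates = 16, kActions = 4;
std::array<std::array<double, kActions>, kStates> Q{};

void qUpdate(int s, int a, double reward, int sNext,
             double alpha = 0.1, double gamma = 0.99) {
    double bestNext = *std::max_element(Q[sNext].begin(), Q[sNext].end());
    Q[s][a] += alpha * (reward + gamma * bestNext - Q[s][a]);
}
```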
Designed a simple environment with the Building Editor in Gazebo.
Teleoperated the robot and manually tested SLAM.
Created a wall_follower node in C++ that autonomously drives the robot to map the environment.
Used the ROS navigation stack to manually command the robot, using the 2D Nav Goal arrow in rviz to move to two different desired positions and orientations.
Wrote a pick_objects node in C++ that commands the robot to move to the desired pickup and drop-off zones (see the sketch after this list).
Wrote an add_markers node that subscribes to the robot odometry, keeps track of the robot pose, and publishes markers to rviz.
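A minimal sketch of the pick_objects pattern: send a goal pose to the navigation stack through the standard move_base action server and wait for the result (the goal coordinates are placeholders, not the project's actual zones):

```cpp
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <move_base_msgs/MoveBaseAction.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "pick_objects");

    // Action client for the move_base server provided by the navigation stack.
    actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> ac("move_base", true);
    ac.waitForServer();

    move_base_msgs::MoveBaseGoal goal;
    goal.target_pose.header.frame_id = "map";
    goal.target_pose.header.stamp = ros::Time::now();
    goal.target_pose.pose.position.x = 2.0;     // placeholder pickup zone
    goal.target_pose.pose.orientation.w = 1.0;  // face forward

    ac.sendGoal(goal);
    ac.waitForResult();

    if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("Reached the pickup zone");
    return 0;
}
```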
Successfully implemented an Extended Kalman Filter on a TurtleBot in ROS. Used sensor data such as odometry and an Inertial Measurement Unit (IMU) to estimate the state of the robot, and verified the results by comparing the filtered trajectory with the unfiltered one.
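Not the project's filter verbatim, but a minimal planar sketch of the idea, assuming odometry provides linear and angular velocity (v, w) and the IMU provides a yaw measurement (Eigen is used for the linear algebra):

```cpp
#include <cmath>
#include <Eigen/Dense>

// Minimal planar EKF: predict the pose [x, y, theta] from odometry and
// correct the heading with an IMU yaw measurement.
struct Ekf {
    Eigen::Vector3d x = Eigen::Vector3d::Zero();      // [x, y, theta]
    Eigen::Matrix3d P = Eigen::Matrix3d::Identity();  // state covariance

    void predict(double v, double w, double dt, const Eigen::Matrix3d& Q) {
        double th = x(2);
        x(0) += v * dt * std::cos(th);
        x(1) += v * dt * std::sin(th);
        x(2) += w * dt;

        Eigen::Matrix3d F = Eigen::Matrix3d::Identity();  // motion-model Jacobian
        F(0, 2) = -v * dt * std::sin(th);
        F(1, 2) =  v * dt * std::cos(th);
        P = F * P * F.transpose() + Q;
    }

    void correctYaw(double yawMeas, double r) {           // IMU yaw update
        Eigen::RowVector3d H(0, 0, 1);                    // we observe theta only
        double s = (H * P * H.transpose())(0) + r;        // innovation covariance
        Eigen::Vector3d K = P * H.transpose() / s;        // Kalman gain
        x += K * (yawMeas - x(2));
        P = (Eigen::Matrix3d::Identity() - K * H) * P;
    }
};
```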
My research project was completed as part of a 5-month internship at Cranfield University, where I worked on the development of a vehicle dynamics model aimed at minimizing motion sickness in autonomous cars. I worked on the self-driving-car project at the Advanced Vehicle Engineering Centre under Professor Dr. Stefano Longo and a Ph.D. student. My co-supervisor (a Ph.D. researcher) developed the control loop while I assisted him with the vehicle dynamics model, complementing the human model to simulate real-life situations. I was responsible for developing the vehicle dynamics, keeping the bicycle model (2 DOF) as a reference. After thorough research, I decided that a 7 DOF (degrees of freedom) model would be best suited for this project. After numerous simulations in Simulink, I moved on to IPG CarMaker to simulate real-life conditions. I also converted the equations into state-space form and used MATLAB to explore various solutions. Wrapping up the project, I submitted a report to my supervisor, Dr. Longo, and earned an 'A+' grade.
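For reference, the classic 2-DOF lateral bicycle model that served as the starting point can be written in state-space form (one common sign convention; $C_f, C_r$ are the front/rear cornering stiffnesses, $a, b$ the axle distances to the center of gravity, $v_x$ the forward speed, $\delta$ the steering angle):

$$
\begin{bmatrix} \dot{v}_y \\ \dot{r} \end{bmatrix}
=
\begin{bmatrix}
-\dfrac{C_f + C_r}{m v_x} & -v_x - \dfrac{a C_f - b C_r}{m v_x} \\
-\dfrac{a C_f - b C_r}{I_z v_x} & -\dfrac{a^2 C_f + b^2 C_r}{I_z v_x}
\end{bmatrix}
\begin{bmatrix} v_y \\ r \end{bmatrix}
+
\begin{bmatrix} \dfrac{C_f}{m} \\ \dfrac{a C_f}{I_z} \end{bmatrix} \delta
$$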
In August 2019, I started working in Prof. Stephanie Gil's Robotics, Embedded Autonomy, and Communication Theory Lab at ASU. In the first three months I completed the following tasks:
Successfully designed waypoint flight control algorithms in C++ and implemented them through ROS on Crazyflie drones.
Designed PID and LQR controllers in C++ and programmed a Roomba robot to test them (a minimal PID sketch follows this list).
Tested localization algorithms on Gazebo before implementing them on real drones to be confident of the results.
Developed communication algorithms for multi-robot systems to support path-planning strategies using SLAM.
Tested formation control algorithms on a fleet of drones at the ASU Drone Studio (the largest indoor drone studio).
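A minimal PID controller sketch of the kind used to command the Roomba (the gains, timestep, and usage below are placeholders to be tuned on hardware):

```cpp
// Textbook PID: output = Kp*e + Ki*integral(e) + Kd*de/dt
struct Pid {
    double kp, ki, kd;
    double integral = 0.0, prevError = 0.0;

    double update(double error, double dt) {
        integral += error * dt;                          // accumulate the I term
        double derivative = (error - prevError) / dt;    // finite-difference D term
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Usage sketch: Pid heading{1.2, 0.0, 0.1};
//               double cmd = heading.update(headingError, 0.02);
```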
As part of my MAE 506 course, Advanced Systems Modeling and Control, I was part of a four-member team that researched various control algorithms for controlling both the lateral and longitudinal trajectory of a car. I was extensively involved in modeling a Model Predictive Controller (MPC) for self-driving cars, as well as a Pure Pursuit controller for the lateral trajectory (the steering law is sketched below). This project earned me an 'A' grade.
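The pure-pursuit steering law reduces to one line; a sketch (the function name is mine, and alpha is the heading error to a lookahead point at distance ld):

```cpp
#include <cmath>

// Pure pursuit: steer toward a lookahead point on the reference path.
// With heading error alpha, lookahead distance ld, and wheelbase L:
// delta = atan(2 * L * sin(alpha) / ld)
double purePursuitSteer(double alpha, double lookahead, double wheelbase) {
    return std::atan2(2.0 * wheelbase * std::sin(alpha), lookahead);
}
```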
In this work, we demonstrate a deep learning strategy using the ACF (aggregate channel features) framework to identify plant species such as weeds, and develop a novel mechanism to destroy them. A real-time intelligent robotic system was developed to identify and kill weeds growing in close proximity to crop plants, using a framework that requires minimal computing power. The advantages of deep learning over conventional image analysis and processing methods are discussed. A solenoid valve connected to a hot-water sprayer, programmed in C++, is used to kill the weeds. A few systems have been developed worldwide to destroy weeds autonomously, but they are either complex or extremely expensive to replicate or buy. This research aims to make a system that is relatively inexpensive yet more robust than its predecessors, demonstrating how precision agriculture can be deployed with minimal hardware expenditure.
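As an illustration of the actuation side, a minimal sketch of pulsing a solenoid valve from C++ (the WiringPi library, pin number, and spray duration are assumptions for the sketch, not the project's actual wiring):

```cpp
#include <wiringPi.h>

// Pulse a solenoid valve to spray hot water on a detected weed.
// Pin and timing are placeholders; the detector would call sprayWeed()
// once a weed is localized under the nozzle.
const int kValvePin = 0;            // WiringPi pin driving the valve relay

void sprayWeed(int durationMs) {
    digitalWrite(kValvePin, HIGH);  // open the valve
    delay(durationMs);              // spray for the given duration
    digitalWrite(kValvePin, LOW);   // close the valve
}

int main() {
    wiringPiSetup();
    pinMode(kValvePin, OUTPUT);
    sprayWeed(500);                 // example: 0.5 s burst
    return 0;
}
```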
Developed a user-friendly robotics toolbox in MATLAB for high school students. The toolbox helps students explore various concepts in robotics, such as inverse kinematics and differential kinematics. It is designed to let users build their own robot and play with the various parameters that define it.
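To give a flavor of the concepts the toolbox covers, here is a sketch of closed-form inverse kinematics for a planar two-link arm (the toolbox itself is in MATLAB; this C++ illustration and its function name are mine):

```cpp
#include <cmath>

// Closed-form inverse kinematics for a planar 2-link arm (elbow-down):
// given a target (x, y) and link lengths l1, l2, solve joint angles q1, q2.
bool planarTwoLinkIK(double x, double y, double l1, double l2,
                     double& q1, double& q2) {
    double c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    if (c2 < -1.0 || c2 > 1.0) return false;  // target out of reach
    q2 = std::acos(c2);                       // elbow-down solution
    q1 = std::atan2(y, x)
       - std::atan2(l2 * std::sin(q2), l1 + l2 * std::cos(q2));
    return true;
}
```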