In 2020, I took courses on Sensing and Estimation and on Optimal Control and Reinforcement Learning.
In 2017, I took Dr. Junaed Sattar's class on introductory robotics, learning frame transformations, forward and inverse kinematics, and basic robot planning.
Implemented an optimization-based controller for steering a virtual non-holonomic vehicle toward an arbitrary goal.
Used a neural network trained via Reinforcement Learning to output controls that steer a non-holonomic vehicle toward an arbitrary goal.
Optimized the RL agent with the gradient-free Cross-Entropy Method, implemented from scratch in CUDA! (A minimal sketch of the idea follows below.)
Code: https://github.com/zachavis/RL-Agent-CEM
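The full CUDA implementation lives in the repo above; as a rough sketch of the idea only, here is a minimal NumPy version of the Cross-Entropy Method optimizing a tiny linear policy that drives a unicycle-model vehicle toward a goal. The policy shape, cost function, dynamics, and all names below are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def rollout(weights, goal, T=60, dt=0.1):
    """Simulate a unicycle driven by a tiny linear policy; return negative cost."""
    W = weights.reshape(2, 4)              # 2 controls (v, omega) from 4 features (assumed policy shape)
    x, y, th = 0.0, 0.0, 0.0
    cost = 0.0
    for _ in range(T):
        feats = np.array([goal[0] - x, goal[1] - y, np.cos(th), np.sin(th)])
        v, omega = np.tanh(W @ feats)      # bounded controls
        x += v * np.cos(th) * dt           # unicycle (non-holonomic) dynamics
        y += v * np.sin(th) * dt
        th += omega * dt
        cost += (goal[0] - x) ** 2 + (goal[1] - y) ** 2
    return -cost                           # higher return = closer to goal

def cross_entropy_method(goal, dim=8, pop=64, elite=8, iters=50):
    """Gradient-free CEM: sample policies, keep the elites, refit the Gaussian."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = mu + sigma * np.random.randn(pop, dim)
        returns = np.array([rollout(s, goal) for s in samples])
        elites = samples[np.argsort(returns)[-elite:]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu

best = cross_entropy_method(goal=np.array([3.0, 2.0]))
print("Final return:", rollout(best, np.array([3.0, 2.0])))
```

The appeal of CEM here is that it needs only rollouts, no gradients, which is what makes the parallel CUDA implementation natural: every sampled policy can be evaluated independently.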
Using a Baxter robot (a two-armed research robot), I built a system that could see dots drawn on a whiteboard and connect them with a whiteboard marker. The project involved basic computer vision and planning, using ROS (Robot Operating System) to interface with Baxter.
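The project code isn't reproduced here, but the vision side was roughly in this spirit: find the dots in the camera image and order them into a drawing path. The sketch below is a hypothetical OpenCV (4.x) version; the thresholds, function names, and greedy ordering are my assumptions, and the resulting pixel waypoints would still need to be transformed into Baxter's frame and executed through ROS.

```python
import cv2
import numpy as np

def find_dots(image_bgr):
    """Detect dark dots on a (mostly white) whiteboard image; return pixel centers."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)   # dark marks -> white blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        M = cv2.moments(c)
        if M["m00"] > 10:                                            # ignore tiny specks
            centers.append((M["m10"] / M["m00"], M["m01"] / M["m00"]))
    return centers

def order_dots(centers):
    """Greedy nearest-neighbor ordering so the marker path doesn't zig-zag wildly."""
    if not centers:
        return []
    path, remaining = [centers[0]], list(centers[1:])
    while remaining:
        last = np.array(path[-1])
        nxt = min(remaining, key=lambda p: np.linalg.norm(np.array(p) - last))
        remaining.remove(nxt)
        path.append(nxt)
    return path   # each waypoint would then be mapped into Baxter's frame and sent via ROS
```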
Implemented SLAM on a simulated robot, estimating the robot's pose and the locations of landmarks in the scene from noisy sensor readings as the robot moved through the environment. In the accompanying figure, blue is the ground truth and red is the estimated covariance.
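The write-up above doesn't name the exact filter, but the estimated covariance suggests something EKF-like. Assuming an EKF-SLAM formulation with a unicycle motion model and a single range-bearing landmark, one predict/update cycle might look like this sketch (all of it illustrative, not the course code):

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_slam_step(mu, Sigma, u, z, dt=0.1,
                  Q=np.diag([0.01, 0.01, 0.005]), R=np.diag([0.1, 0.05])):
    """One predict/update cycle for a single range-bearing landmark.

    State mu = [x, y, theta, lx, ly]; control u = [v, omega]; measurement z = [range, bearing]."""
    x, y, th, lx, ly = mu
    v, w = u

    # Predict: unicycle motion applied to the robot part of the joint state.
    mu = np.array([x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, wrap(th + w * dt), lx, ly])
    F = np.eye(5)
    F[0, 2] = -v * np.sin(th) * dt
    F[1, 2] = v * np.cos(th) * dt
    Sigma = F @ Sigma @ F.T
    Sigma[:3, :3] += Q                      # motion noise only affects the robot

    # Update: range-bearing observation of the landmark.
    dx, dy = mu[3] - mu[0], mu[4] - mu[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, wrap(np.arctan2(dy, dx) - mu[2])])
    H = np.array([[-dx / r, -dy / r, 0.0, dx / r, dy / r],
                  [dy / q, -dx / q, -1.0, -dy / q, dx / q]])
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)
    innov = z - z_hat
    innov[1] = wrap(innov[1])
    mu = mu + K @ innov
    mu[2] = wrap(mu[2])
    Sigma = (np.eye(5) - K @ H) @ Sigma
    return mu, Sigma
```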
Using an RGB-D camera, I developed a robot-mounted vision system that interprets point clouds to support localization and planning. The robot can recognize and avoid obstacles using any planning algorithm that accepts a collection of objects to avoid.
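As a hedged illustration of how a point cloud can become a "collection of objects to avoid", here is a minimal NumPy sketch that bins points into a ground-plane grid and reports occupied cells as circular obstacles. The frame convention (z up, robot frame), cell size, and thresholds are assumptions for illustration, not the actual system.

```python
import numpy as np

def obstacles_from_cloud(points, z_min=0.05, z_max=1.5, cell=0.1, min_pts=20):
    """Turn an Nx3 point cloud (robot frame, z up) into 2D obstacle circles.

    Points near the floor or above the robot are ignored; the rest are binned
    into a ground-plane grid, and any cell with enough points becomes an
    obstacle region a planner can avoid."""
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    xy = points[mask, :2]
    cells = np.floor(xy / cell).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    occupied = uniq[counts >= min_pts]
    centers = (occupied + 0.5) * cell                 # center of each occupied cell
    radius = cell * np.sqrt(2) / 2                    # conservative per-cell radius
    return [(cx, cy, radius) for cx, cy in centers]   # the "objects to avoid" a planner consumes
```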