Welcome to my portfolio. I am an undergraduate student in the class of 2026 majoring in Electrical and Computer Engineering. This portfolio showcases my contributions to the development and operation of a mobile manipulator designed to detect specific colors in its environment and perform corresponding actions. The project required integrating vision-based color detection, spatial mapping for localization, and precise motion planning for arm control. My specific contributions included developing SDF (Simulation Description Format) files for building simulated environments, designing detailed robotic worlds, and defining practical robot objectives that kept the project aligned with real-world applications.
The objective of this project was to develop a mobile manipulator: a robot arm mounted on a wheeled base, equipped with a camera-based perception system, able to perform specific tasks we designed through vision-guided motion planning. The system integrated vision-based object detection, precise localization, and coordinated arm movements so the robot could identify objects by their color, determine their spatial positions within its own coordinate frame, and execute collision-free arm actions in its environment. Leveraging an RGB-D camera and OpenCV for real-time depth and color recognition, the system used spatial mapping techniques to transform detected objects' positions into the robot's coordinate frame, and motion planning then produced collision-free arm movements to interact with the surroundings.
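Our perception pipeline used OpenCV on RGB-D data; the NumPy-only sketch below illustrates the same two steps in miniature: threshold an image for red, take the centroid of the resulting mask, and back-project that pixel through a pinhole camera model into the camera frame. The thresholds, function names, and intrinsics here are illustrative assumptions, not the values from our actual code.

```python
import numpy as np

def red_mask(bgr):
    """Boolean mask of 'red enough' pixels in a BGR image.
    Simple per-channel threshold; real code used OpenCV HSV ranges.
    Thresholds below are illustrative."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (r > 150) & (g < 80) & (b < 80)

def centroid(mask):
    """Pixel-coordinate centroid (u, v) of a boolean mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel with known depth into the camera frame
    using the pinhole model (fx, fy: focal lengths; cx, cy: principal point)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

Once a point is expressed in the camera frame, a fixed camera-to-base transform (or the robot's TF tree) maps it into the robot's coordinate frame for motion planning.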
As part of a group project to integrate a robotic arm onto the TurtleBot4, I focused on mastering the creation and refinement of SDF files to design custom simulation environments. Starting from scratch, I developed a world with color-coded objects to support potential vision-based tasks, ensuring flexibility for our group's exploration of color sensing capabilities. Guided by our mentor Reagan's advice, I set up a TurtleBot4 workspace and began experimenting with simulation environments. My contributions included designing test worlds, such as mat.world and test.world, which featured custom elements like movable walls, house-like layouts, and a door-like entrance to simulate real-world challenges. These environments provided a platform to rigorously test our robot's functionality, including arm movement and object detection. In addition to world design, I collaborated closely with my teammates on coding tasks. We split the work and iterated until each piece of functionality, such as arm rotation or color sensing, was achieved. This effort culminated in three task-specific scripts tailored to distinct objectives, demonstrating the synergy between our robot's hardware and software capabilities.
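To illustrate the kind of SDF content this work involved, here is a minimal sketch of a world file containing a static red cylinder like the targets used in our color-sensing tasks. The world name, pose, and dimensions are illustrative placeholders, not values copied from mat.world or test.world.

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="color_test_world">
    <!-- Standard lighting and ground from bundled models -->
    <include><uri>model://sun</uri></include>
    <include><uri>model://ground_plane</uri></include>

    <!-- A static red cylinder for the robot's color-detection tasks -->
    <model name="red_cylinder">
      <static>true</static>
      <pose>1 0 0.25 0 0 0</pose>
      <link name="link">
        <visual name="visual">
          <geometry>
            <cylinder><radius>0.1</radius><length>0.5</length></cylinder>
          </geometry>
          <material>
            <ambient>1 0 0 1</ambient>
            <diffuse>1 0 0 1</diffuse>
          </material>
        </visual>
        <collision name="collision">
          <geometry>
            <cylinder><radius>0.1</radius><length>0.5</length></cylinder>
          </geometry>
        </collision>
      </link>
    </model>
  </world>
</sdf>
```

The `<visual>` element controls what the camera sees (here, a pure-red material), while the matching `<collision>` element gives the physics engine a shape to collide with.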
Click the image to view the final video for our project, which includes all three key task implementations. The first scans for the red object's centroid and moves toward it. The second is our orbit code, which lets the robot spiral inward until it circles close to a red cylinder, using the red pixels in view to hold a constant distance. The third is our Search Map for Red script, which navigates the robot through predefined waypoints while searching for a red object using OpenCV.
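The distance-keeping idea behind the orbit task can be sketched as a simple proportional controller: the fraction of red pixels in the camera image serves as a proxy for distance (more red means closer), and the robot steers toward or away from the cylinder to hold that fraction at a setpoint while driving at a constant tangential speed. All names, gains, and the assumed geometry (cylinder kept on the robot's left, positive angular velocity turning left) are illustrative, not our actual implementation.

```python
def orbit_command(red_fraction, target=0.15, k_p=1.5,
                  v_orbit=0.2, w_base=0.4):
    """Illustrative proportional controller for circling a red cylinder.

    red_fraction: fraction of camera pixels classified as red
                  (proxy for distance; more red = closer).
    Returns (linear, angular) velocity commands in m/s and rad/s.
    Assumes the cylinder is kept on the robot's left, so a larger
    angular velocity turns the robot toward it.
    """
    error = target - red_fraction      # > 0: too far away; < 0: too close
    v = v_orbit                        # constant tangential speed
    w = w_base + k_p * error           # steer inward when far, outward when close
    return v, w
```

In practice, such commands would be published as velocity messages at a fixed rate, with `red_fraction` recomputed from each incoming camera frame.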