This ROS 2 color detection node uses an RGB-D camera and OpenCV to identify and track prominent colors in real time. It converts incoming images to the HSV color space, applies predefined ranges to detect colors such as red, blue, and green, and highlights the matches on a visualization mask. Efficient image resizing and masking keep processing fast, and detected colors are logged for analysis. This kind of functionality is essential for robotics applications involving environmental perception, object recognition, or color-based navigation. It was not part of the final project deliverable, but it was still cool to work with. I contributed by measuring HSV values from the robot's camera view and supplying expected values for red, green, and blue; this test code is also how we learned that the robot cannot actually interpret color values very reliably.
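The HSV thresholding idea can be sketched without ROS 2 or OpenCV at all. The real node would use `cv2.cvtColor` and `cv2.inRange`; the stdlib-only sketch below mimics that per-pixel, and the specific HSV ranges are illustrative assumptions, not the values we actually tuned on the robot. Note that red needs two ranges because its hue wraps around 0, which is one reason red detection is finicky in practice.

```python
import colorsys

# Hypothetical HSV ranges on OpenCV's scale (H: 0-179, S and V: 0-255).
# The real node would pass ranges like these to cv2.inRange; these exact
# numbers are placeholders, not the tuned values from the robot.
COLOR_RANGES = {
    "red":   [((0, 100, 100), (10, 255, 255)),
              ((170, 100, 100), (179, 255, 255))],  # hue wraps around 0
    "green": [((40, 100, 100), (80, 255, 255))],
    "blue":  [((100, 100, 100), (130, 255, 255))],
}

def rgb_to_opencv_hsv(r, g, b):
    """Convert 0-255 RGB to OpenCV-style HSV (H in 0-179, S/V in 0-255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (int(h * 179), int(s * 255), int(v * 255))

def detect_colors(pixels):
    """Return the set of named colors found in an iterable of RGB pixels."""
    found = set()
    for r, g, b in pixels:
        h, s, v = rgb_to_opencv_hsv(r, g, b)
        for name, ranges in COLOR_RANGES.items():
            for (lo_h, lo_s, lo_v), (hi_h, hi_s, hi_v) in ranges:
                if lo_h <= h <= hi_h and lo_s <= s <= hi_s and lo_v <= v <= hi_v:
                    found.add(name)
    return found
```

For example, `detect_colors([(255, 0, 0), (0, 0, 255)])` picks out red and blue, while a desaturated pixel like white falls outside every range because its saturation is too low.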
This Python script defines the NavigateAndDetect class, which integrates navigation and object detection for a robot using ROS 2. The class uses BasicNavigator for waypoint navigation, a RealSense camera for image-based red color detection, and joint trajectory control for arm movement. Key functionality includes circling through predefined poses, detecting a red object in the environment, stopping navigation upon detection, and dynamically adjusting the robot's arm pose. The robot's behavior combines geometry transformations, trajectory planning, and image processing with OpenCV, demonstrating a cohesive approach to multi-sensor robotic operation in a modular, ROS 2-compatible structure. I did a lot of the work getting this one running, and Brandon helped debug it quite a bit as well, so the robot ended up moving much closer to the cylinder.
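The stop-on-detection flow can be illustrated with a small runnable sketch. In the real script, the navigator would be nav2_simple_commander's `BasicNavigator` (with `goToPose`, `isTaskComplete`, and `cancelTask`); the `FakeNavigator` below is a hypothetical stand-in so the control loop can run without a ROS 2 installation, and `red_detected_at_step` stands in for the camera callback's detection flag.

```python
class FakeNavigator:
    """Hypothetical stand-in for nav2_simple_commander's BasicNavigator."""

    def __init__(self, steps_to_goal):
        self.steps_left = steps_to_goal
        self.cancelled = False

    def isTaskComplete(self):
        # Pretend the robot advances one step each time we poll.
        if self.cancelled:
            return True
        self.steps_left -= 1
        return self.steps_left <= 0

    def cancelTask(self):
        self.cancelled = True


def navigate_until_red(navigator, red_detected_at_step):
    """Poll navigation and cancel as soon as red is seen.

    red_detected_at_step: step index at which the (simulated) detector
    first reports red; None means red is never detected.
    """
    step = 0
    while not navigator.isTaskComplete():
        step += 1
        if red_detected_at_step is not None and step >= red_detected_at_step:
            navigator.cancelTask()  # stop driving toward the waypoint
            return "stopped_on_red"
    return "reached_waypoint"
```

The design point is that detection and navigation stay decoupled: the camera callback only sets a flag, and the navigation loop polls it between `isTaskComplete` checks, which keeps the node responsive without threading tricks.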
Individual video files are included in case anyone prefers watching them separately in more detail.