The objective of our computer vision system is to fuse the visual data obtained from two cameras, an Intel RealSense D435i and an endoscope, to enhance our understanding of the environment. The RealSense camera provides a wide-angle view of the scene, while the endoscope provides a close-up view of specific areas of interest. The system integrates the two data sources in real time, leveraging their respective strengths to improve detection and recognition accuracy. The end goal is a robust and accurate computer vision system that can determine the state of each panel on the test bed.
[Figure: RGB-D camera for environment view; endoscope for end-effector view]
The computer vision system is designed to identify and monitor features on the panels using the RealSense D435i camera. The YOLOv8 object detection architecture has been implemented to detect and recognize these features: the RealSense camera captures RGB-D images of the panels, and the detector processes the color stream to locate the features. YOLOv8 uses a CSPDarknet-style convolutional backbone with residual and skip connections, together with an anchor-free detection head, to balance accuracy and efficiency. The model has been trained on a large dataset of panel images and can accurately detect and classify the features on the panels. The system also uses ROS (Robot Operating System) to process the camera data and communicate with the robotic control system. Overall, the RealSense camera and the YOLOv8 detector are critical components of the computer vision system, allowing it to identify and monitor the panel features in real time.
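To illustrate how per-frame detections can be turned into a panel-state summary, the sketch below post-processes YOLOv8-style detection tuples. The class names, confidence threshold, and the `summarize_panel` helper are illustrative assumptions, not the project's actual label set or code.

```python
# Hypothetical post-processing: map YOLOv8-style detections on a panel image
# to a per-feature count. Each detection is a (class_name, confidence,
# (x1, y1, x2, y2)) tuple, similar to what Ultralytics' results expose.
# The class names and threshold below are assumptions for illustration.

CONF_THRESHOLD = 0.5  # assumed confidence cutoff

def summarize_panel(detections, conf_threshold=CONF_THRESHOLD):
    """Return {feature_class: count} for detections above the threshold."""
    summary = {}
    for cls, conf, _bbox in detections:
        if conf >= conf_threshold:
            summary[cls] = summary.get(cls, 0) + 1
    return summary

detections = [
    ("switch_on",  0.91, (120, 40, 160, 90)),
    ("switch_off", 0.87, (200, 40, 240, 90)),
    ("switch_on",  0.32, (300, 45, 340, 95)),  # below threshold, ignored
]
print(summarize_panel(detections))  # -> {'switch_on': 1, 'switch_off': 1}
```

In the full system this summary would be published over ROS so the control side can reason about each panel's state.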
Visual servoing is a technique for controlling the end effector of a robot using visual feedback from a camera. The camera provides the current position of the object of interest in the image; comparing it with the desired position yields an error signal, which is used to generate control commands that move the end effector toward the desired position. By continuously updating the position measurement and recomputing the error, the end effector can be controlled precisely in real time.
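The loop above can be sketched as a simple proportional controller on the image-plane error. This is a minimal illustration under assumed values (the gain `LAMBDA` and the `servo_step` helper are hypothetical), not the project's actual control law:

```python
import numpy as np

# Minimal image-based visual servoing sketch: drive the pixel error between
# the feature's current and desired image positions to zero with a
# proportional law. Gain value is an assumption for illustration.

LAMBDA = 0.5  # proportional gain (assumed)

def servo_step(current_px, desired_px, gain=LAMBDA):
    """One control update: command proportional to the pixel error."""
    error = np.asarray(desired_px, float) - np.asarray(current_px, float)
    return gain * error  # image-plane correction for this step

# Simulate the loop: the feature converges to the desired image position.
pos = np.array([400.0, 300.0])   # current feature position (pixels)
target = (320.0, 240.0)          # desired feature position (pixels)
for _ in range(20):
    pos += servo_step(pos, target)
print(np.round(pos, 3))
```

Each iteration shrinks the remaining error by the gain factor, so the feature position converges geometrically to the target; in the real system the image-plane correction would be mapped through the camera model and robot Jacobian into joint commands.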