The robot being used is a TurtleBot3 Burger, built around the following main components:
Intel RealSense Depth Camera D435
NVIDIA Jetson TX2 Development Kit
OpenCR1.0 Board
LDS-02 LiDAR Sensor
HooToo USB Hub Shuttle
DYNAMIXEL Motors
Wheels
LiPo Battery, 11.1 V, 1,800 mAh
The Depth Camera will provide a real-time video feed of the robot’s line of sight, enabling continuous visual data processing. The NVIDIA Jetson, serving as the robot’s central computer, will execute the computer vision model to accurately detect and identify weeds. The LiDAR sensor will actively measure distances, mapping the environment to enhance obstacle detection and facilitate precise navigation. Finally, the OpenCR board will function as the robot’s primary controller, executing motor and wheel movement commands to ensure seamless operation.
To implement the weed-killing mechanism, the following parts are being used:
Brushless DC (BLDC) Motor
Electronic Speed Controller (ESC)
Weed Whacker Line
The weed whacker line fits into a 3D-printed piece that screws onto the brushless motor. The motor is connected to the ESC, and the ESC is wired to the OpenCR board. The team has configured the motor to spin when a command is sent to the ESC from ROS and has verified this behavior, so the spinning mechanism can now be commanded to rotate at specific speeds when desired. Both the wheel motors and the brushless motor have been configured in ROS and can be manually activated using teleoperation from a remote PC. The OpenCR board subscribes to the brushless motor speed ROS topic and converts the received values into a PWM signal to control the ESC and, in turn, the motor.
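As a rough illustration of this interface, the snippet below is a minimal ROS 1 Python sketch that publishes a target speed for the brushless motor from the remote PC; the topic name, Float32 message type, and normalized speed range are illustrative assumptions rather than the project's final interface, with the OpenCR firmware expected to subscribe to the topic and translate the value into an ESC PWM pulse.

```python
#!/usr/bin/env python
# Minimal sketch (not the final interface): publish a brushless motor speed
# command that the OpenCR firmware can map onto an ESC PWM signal.
# The topic name and Float32 message type are illustrative assumptions.
import rospy
from std_msgs.msg import Float32

def main():
    rospy.init_node("bldc_speed_commander")
    pub = rospy.Publisher("/weed_whacker/bldc_speed", Float32, queue_size=10)
    rate = rospy.Rate(10)  # republish the command at 10 Hz
    target_speed = rospy.get_param("~target_speed", 0.5)  # assumed 0.0-1.0 range
    while not rospy.is_shutdown():
        pub.publish(Float32(data=target_speed))
        rate.sleep()

if __name__ == "__main__":
    main()
```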
Implementation of the Computer Vision Model
For this milestone, we implemented our computer vision algorithm using a combination of Python, YOLOv8, and Roboflow. This setup allowed us to develop an efficient and accurate object detection system tailored to our project’s needs.
Technologies Used:
Python: The core programming language used to develop and run the algorithm.
YOLOv8: A powerful deep-learning model designed for real-time object detection and classification.
Roboflow: A dataset management platform that provided labeled images and preprocessing tools to enhance training quality.
Implementation Process:
Dataset Preparation:
Obtained labeled training images using Roboflow, ensuring a diverse dataset for better model generalization (a download sketch follows this list).
Training Set: 2301 Images (87%)
Dataset provided by the Weed Detection Computer Vision Project by koley
Applied preprocessing techniques such as resizing and augmentation to improve detection accuracy.
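The following is a minimal sketch of the Roboflow download step referenced above, assuming the roboflow Python package; the API key, workspace name, project slug, and version number are placeholders to be filled in with the details of the koley Weed Detection project.

```python
from roboflow import Roboflow

# Download the labeled weed dataset in YOLOv8 format.
# The API key, workspace, project slug, and version number are placeholders.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("weed-detection")
dataset = project.version(1).download("yolov8")
print(dataset.location)  # folder containing images, labels, and data.yaml
```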
Model Training:
Used YOLOv8 for object detection due to its balance of speed and accuracy.
Training Settings (applied in the training sketch after this list):
Epochs: 5 per run
Image Size: 640 x 640 pixels
Batch Size: 8 Images
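A minimal training sketch with these settings, assuming the Ultralytics Python API; the choice of the nano pretrained weights and the data.yaml path are assumptions, not the project's exact configuration.

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 model on the Roboflow weed dataset.
# The nano weights and data.yaml path are assumptions; the settings
# below match the ones listed above.
model = YOLO("yolov8n.pt")
model.train(
    data="weed-detection-1/data.yaml",  # data.yaml from the Roboflow export
    epochs=5,    # 5 epochs per run
    imgsz=640,   # 640 x 640 input images
    batch=8,     # batch size of 8 images
)
```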
Implementation of the Software (ROS)
The software will primarily be developed using Robot Operating System (ROS) 1. The plan includes creating two key ROS nodes: one to transform pixel data from the Intel Depth Camera into grid coordinates and another to compute the robot’s trajectory based on updated target locations. Python scripts will be written and integrated into ROS for seamless execution. Before deploying the code on the physical robot, simulations will be conducted in Gazebo, with test results visualized in Rviz. The ROS navigation stack will be implemented to interface with the robot’s hardware, and once finalized, the Computer Vision AI model will be fully integrated into ROS for real-time weed detection and navigation.
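As a rough sketch of the first of these planned nodes, the snippet below back-projects a detection's pixel coordinates and aligned depth reading into a point in the camera frame and uses tf to express it in the map frame; the topic names, message layout, frame names, and camera intrinsics are illustrative assumptions, not the final design.

```python
#!/usr/bin/env python
# Sketch of the planned pixel-to-grid ROS 1 node: converts a weed detection's
# pixel coordinates plus the aligned depth reading into a camera-frame point,
# then transforms it into the map frame. Topic names, message layout, frame
# names, and intrinsics are illustrative assumptions.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers geometry_msgs types with tf2
from geometry_msgs.msg import PointStamped

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # placeholder D435 intrinsics

class PixelToGrid(object):
    def __init__(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.listener = tf2_ros.TransformListener(self.tf_buffer)
        self.pub = rospy.Publisher("/weed_targets", PointStamped, queue_size=10)
        rospy.Subscriber("/weed_pixel", PointStamped, self.callback)

    def callback(self, msg):
        # Assumed layout: msg.point.x, msg.point.y hold the pixel location,
        # msg.point.z holds the depth in meters.
        u, v, depth = msg.point.x, msg.point.y, msg.point.z
        cam_pt = PointStamped()
        cam_pt.header.stamp = msg.header.stamp
        cam_pt.header.frame_id = "camera_color_optical_frame"
        cam_pt.point.x = (u - CX) * depth / FX  # pinhole back-projection
        cam_pt.point.y = (v - CY) * depth / FY
        cam_pt.point.z = depth
        try:
            map_pt = self.tf_buffer.transform(cam_pt, "map", rospy.Duration(0.5))
            self.pub.publish(map_pt)
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            rospy.logwarn("Transform to map frame unavailable")

if __name__ == "__main__":
    rospy.init_node("pixel_to_grid")
    PixelToGrid()
    rospy.spin()
```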
Testing of the Computer Vision Model
To ensure the accuracy and reliability of our computer vision model, we conducted rigorous testing and validation using new image datasets.
Datasets for Testing and Validation:
We used our testing dataset to evaluate the model’s performance on unseen data and measure its accuracy.
Testing Dataset: 109 Images (4%)
We used our validation dataset to fine-tune the model by adjusting parameters and improving detection accuracy.
Validation Dataset: 220 Images (8%)
Testing & Validation:
Assessed performance using key metrics, including:
Box Loss: Measures errors in object bounding box predictions.
DFL (Distribution Focal) Loss: Captures variations between the predicted and actual object location distributions.
Classification Loss: Evaluates the accuracy of object classification.
Updated the weights of the model based on validation results to improve accuracy and reduce loss.
Obtained new images from Kaggle to make predictions and evaluate the model’s performance on unseen data (see the inference sketch below).
The new unseen images were provided by Weed Detection by Jai Dalmotra.
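A minimal inference sketch for this step, assuming the Ultralytics Python API; the weights path and image folder are placeholders rather than the project's actual paths.

```python
from ultralytics import YOLO

# Run the trained detector on the unseen Kaggle images and save annotated
# copies with bounding boxes. The weights path and image folder are placeholders.
model = YOLO("runs/detect/train/weights/best.pt")
results = model.predict(source="kaggle_weed_images/", imgsz=640, conf=0.25, save=True)
for r in results:
    print(r.path, "->", len(r.boxes), "detections")
```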
The images below show the unseen images from Kaggle alongside their corresponding predictions with bounding boxes.
The team has demonstrated strong collaboration and efficiency throughout this project. Tasks are evenly distributed among the team members, allowing everyone to leverage their individual strengths without feeling overwhelmed. The team maintains constant communication, meeting three times a week, staying connected through text messaging, and collaborating in a shared Google Drive folder. Eli and Jason have led the software development efforts, driving the ROS and navigation stack integration. Meanwhile, Chris has taken charge of training and testing the Computer Vision AI model to detect weeds with greater accuracy. Atharva and Abigail, as hardware engineers, have been instrumental in finalizing the integration of all components and designing the innovative weed-whacking mechanism to effectively eliminate weeds.