Optimization of the Robot Hardware
When optimizing the robot's design, we recognized that we needed hardware that was small and lightweight yet computationally powerful. While some hardware options offered greater processing power, their size and weight made them impractical for our compact robot chassis; larger components would add unnecessary bulk, slowing movement and compromising mobility and efficiency. To address this, we selected lightweight yet capable hardware: the OpenCR 1.0, NVIDIA Jetson TX2, and Raspberry Pi Zero. Together, these components form a compact but powerful computing system that meets all of our performance needs without exceeding our spatial or weight constraints.
Another key optimization in our design was selecting an omni-directional wheel over a traditional ball wheel for the central support. While both options allow free swiveling and multidirectional movement, the omni-directional wheel offered several advantages. It sits more level with the robot chassis, which reduces tilting and improves the robot's overall balance, and it fits more securely within the chassis design without interfering with surrounding components, making it a better structural fit for our layout. Its larger radius also helps the robot navigate uneven terrain, such as grass and dirt, by providing smoother movement and better ground contact.
Optimization of the Computer Vision Model
A key design trade-off involved model size versus accuracy. While larger models offer higher accuracy, they require more computational resources, which limits their practicality on the edge devices used in the field. For this reason, our project uses the smallest version of the YOLOv11 model, YOLOv11 nano. Despite its reduced complexity, this model still provides high accuracy, precision, and recall: we achieved 97.5% accuracy (mAP50), 93.4% precision, and 94.1% recall. By optimizing for both speed and accuracy, the system can make fast and reliable predictions directly on portable devices, enabling real-time weed management without constant internet access or powerful hardware.
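As a concrete illustration, the snippet below sketches how the nano variant can be loaded, fine-tuned, and evaluated with the Ultralytics Python API; the dataset file, epoch count, and image size are illustrative placeholders rather than our exact training configuration.

```python
from ultralytics import YOLO

# Load the pretrained YOLO11 nano weights as the starting point.
model = YOLO("yolo11n.pt")

# Fine-tune on the weed dataset (paths and hyperparameters are placeholders).
model.train(
    data="weeds.yaml",   # dataset config listing train/val folders and class names
    epochs=100,
    imgsz=640,
    batch=16,
)

# Evaluate on the validation split to obtain mAP50, precision, and recall.
metrics = model.val()
print(metrics.box.map50, metrics.box.mp, metrics.box.mr)
```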
Delivery of the Computer Vision Model
After training and optimizing our computer vision model using YOLOv11 nano, we validated its performance on our validation dataset of 359 images to ensure reliable weed detection. The model demonstrated strong metrics, with 97.5% accuracy (mAP50), 93.4% precision, and 94.1% recall, confirming its effectiveness for real-world deployment. To further test the model, we ran a reference image through it; the system successfully identified and labeled the weeds with bounding boxes and confidence scores. This operational test confirmed that the model handles unseen data effectively and performs reliably.
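The reference-image check described above can be reproduced with a few lines of the Ultralytics API; the file names below are placeholders rather than our actual paths.

```python
import cv2
from ultralytics import YOLO

# Load the trained weed-detection weights (path is a placeholder).
model = YOLO("runs/detect/train/weights/best.pt")

# Run inference on a single reference image.
results = model.predict("reference_garden.jpg", conf=0.5)

# Inspect the detections: class, confidence, and bounding box for each weed.
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    print(f"{cls_name}: conf={float(box.conf):.2f}, xyxy={box.xyxy[0].tolist()}")

# Save an annotated copy of the image with boxes and confidence scores drawn on it.
annotated = results[0].plot()
cv2.imwrite("reference_garden_annotated.jpg", annotated)
```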
Delivery of the Robot Operating Software
To integrate the model with our robot system, we exported the trained weights into a format compatible with our ROS environment. We then developed a Python node dedicated to processing the video feed from the Intel RealSense camera in real time. This node performs the following functions (a simplified sketch follows the list):
Captures frames from the camera
Processes each frame through our trained YOLO model
Identifies and localizes weeds by generating bounding boxes
Calculates the position of detected weeds relative to the robot
Publishes these coordinates to a dedicated ROS topic for navigation
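Below is a minimal sketch of what such a node can look like, assuming a ROS 2 (rclpy) setup; the topic names, weights path, and pixel-center position estimate are illustrative simplifications rather than our exact implementation, which projects detections into the robot frame before publishing.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped
from cv_bridge import CvBridge
from ultralytics import YOLO


class WeedDetector(Node):
    """Subscribes to the RealSense color stream, runs YOLO, and publishes weed positions."""

    def __init__(self):
        super().__init__('weed_detector')
        self.bridge = CvBridge()
        self.model = YOLO('best.pt')  # trained weed-detection weights (placeholder path)
        # Topic names are illustrative; the RealSense driver publishes color frames on a topic like this.
        self.sub = self.create_subscription(Image, '/camera/color/image_raw', self.on_frame, 10)
        self.pub = self.create_publisher(PointStamped, '/weed_position', 10)

    def on_frame(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        results = self.model.predict(frame, conf=0.5, verbose=False)

        for box in results[0].boxes:
            # Use the bounding-box center in pixel coordinates; the full system would
            # project this through the depth stream to get a position relative to the robot.
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            point = PointStamped()
            point.header = msg.header
            point.point.x = (x1 + x2) / 2.0
            point.point.y = (y1 + y2) / 2.0
            self.pub.publish(point)


def main():
    rclpy.init()
    rclpy.spin(WeedDetector())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```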
The computer vision system successfully operates on the NVIDIA Jetson TX2, providing real-time weed detection with minimal latency, which is crucial for the robot's autonomous operation in garden environments.
Creating a Launch File for Package Integration
To streamline the deployment and operation of our robot system, we created a comprehensive launch file that coordinates the startup of all necessary ROS packages. This launch file ensures that all components initialize in the correct sequence and with the appropriate parameters.
Our launch file includes the following key components:
Camera Node: Initializes the Intel RealSense D435 camera with optimized parameters for outdoor lighting conditions
LiDAR Node: Activates the LDS-02 LiDAR sensor for environmental mapping and obstacle detection
SLAM Node: Launches the simultaneous localization and mapping algorithm for robot positioning
Navigation Stack: Initializes path planning and obstacle avoidance capabilities
Computer Vision Node: Starts our custom weed detection node with the pre-trained model
Motor Control Node: Launches the interface for communicating with the OpenCR board
The launch file also configures essential parameters such as topic names, transformation frames, and hardware-specific settings. By centralizing these configurations, we've created a robust and repeatable startup process that minimizes deployment errors and ensures consistent system behavior.
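A trimmed-down sketch of this kind of launch file is shown below, written as a ROS 2 Python launch description; the package, executable, and parameter names are illustrative assumptions rather than our exact configuration, and the navigation stack is omitted for brevity.

```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Package/executable names and parameters below are illustrative placeholders.
    return LaunchDescription([
        # Intel RealSense D435 color + depth streams.
        Node(package='realsense2_camera', executable='realsense2_camera_node',
             name='camera', parameters=[{'enable_depth': True}]),
        # LDS-02 LiDAR driver for mapping and obstacle detection.
        Node(package='ld08_driver', executable='ld08_driver', name='lidar'),
        # SLAM for localization and mapping.
        Node(package='slam_toolbox', executable='async_slam_toolbox_node', name='slam'),
        # Custom weed-detection node with the trained YOLO weights.
        Node(package='weed_detection', executable='weed_detector', name='weed_detector',
             parameters=[{'weights_path': 'best.pt', 'confidence': 0.5}]),
        # Motor control interface to the OpenCR board.
        Node(package='motor_control', executable='opencr_bridge', name='motor_control'),
    ])
```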
Motor Control Integration with OpenCR Board
The final critical component of our delivery was establishing reliable communication between our high-level ROS system and the OpenCR board that controls the robot's motors. This integration enables both the navigation system and the weed removal mechanism to function properly.
We implemented a separate control system for the motor that powers the weed whacker. This system (see the sketch after this list):
Receives activation commands when a weed is within the operational range
Controls the Electronic Speed Controller (ESC) to adjust the rotation speed based on weed density
Implements safety features to prevent accidental activation
Provides feedback on the operational status of the weed removal mechanism
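The sketch below illustrates the activation and safety logic at a high level, assuming a ROS 2 node that receives weed positions in the robot frame and publishes an ESC duty-cycle command consumed by the OpenCR firmware; the topic names, duty-cycle interface, range threshold, and watchdog timing are illustrative assumptions, and the density-based speed adjustment is omitted.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32, Bool
from geometry_msgs.msg import PointStamped


class WhackerController(Node):
    """High-level weed-whacker control: arms the ESC only when a weed is in range."""

    def __init__(self):
        super().__init__('whacker_controller')
        # Topic names and the duty-cycle command interface are illustrative assumptions.
        self.sub = self.create_subscription(PointStamped, '/weed_position', self.on_weed, 10)
        self.cmd_pub = self.create_publisher(Float32, '/whacker/esc_duty', 10)
        self.status_pub = self.create_publisher(Bool, '/whacker/active', 10)
        self.operational_range_m = 0.3   # activate only when a weed is this close (placeholder)
        # Safety: a watchdog stops the blade if no in-range weed has been seen recently.
        self.timer = self.create_timer(0.5, self.watchdog)
        self.last_seen = self.get_clock().now()

    def on_weed(self, msg: PointStamped):
        distance = (msg.point.x ** 2 + msg.point.y ** 2) ** 0.5
        if distance <= self.operational_range_m:
            self.last_seen = self.get_clock().now()
            self.cmd_pub.publish(Float32(data=0.7))   # spin up the ESC (duty cycle placeholder)
            self.status_pub.publish(Bool(data=True))

    def watchdog(self):
        # Stop the blade if no in-range weed has been detected for one second.
        if (self.get_clock().now() - self.last_seen).nanoseconds > 1e9:
            self.cmd_pub.publish(Float32(data=0.0))
            self.status_pub.publish(Bool(data=False))


def main():
    rclpy.init()
    rclpy.spin(WhackerController())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```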
The motor control system was thoroughly tested to ensure reliability under various conditions, including different terrain types and weed densities. Our tests confirmed that the system can effectively navigate to detected weeds and activate the weed removal mechanism at the appropriate times.
The team has exhibited strong project management and collaboration throughout the year. Roles were clearly defined, allowing each member to contribute their expertise while maintaining a balanced workload. The team meets three times per week and communicates regularly, ensuring consistent progress and quick problem-solving.
Eli and Jason lead the software development, focusing on ROS integration and autonomous navigation. Chris oversees the machine learning pipeline, training and optimizing the computer vision model for weed detection. Atharva and Abigail, as hardware leads, have designed and implemented the weed-whacking blade mechanism while ensuring all components are effectively integrated. This structure has enabled the team to stay on schedule, hit key milestones, and continuously iterate based on testing and feedback.