To ensure that our mobile platform would be strong enough and stable enough to support our entire robot, our team performed a weight test and a slipping test. After the mobile platform was constructed and the motors were attached, we added weight to the platform to see how much load it could manage. The platform we constructed was able to move in all directions with 20 pounds of weight on it. This was 5 pounds over our estimated final robot weight, which demonstrated that our mobile platform construction, along with our chosen DC motors, would be more than strong enough for our application.
For our slipping test, we placed the robot on a flat sheet of plywood and lifted one side, introducing an angle to the mobile platform. We kept increasing this angle until the robot base either started to slide or started to tip over. We reached 20 degrees of tilt before the mobile platform lost its grip and slid down the plywood. Because the design specifications were changed to exclude a rocking motion, we knew we would be operating on level ground, so this slip test showed that our mobile platform would be stable.
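For reference, a platform on an incline begins to slide once the tangent of the tilt angle exceeds the static friction coefficient, so the 20-degree result above corresponds to a wheel-on-plywood friction coefficient of roughly 0.36. A minimal sketch of that back-of-the-envelope check:

```python
import math

# Tilt angle (degrees) at which the platform began to slide in our test.
slip_angle_deg = 20.0

# Sliding starts when tan(theta) exceeds the static friction coefficient,
# so the measured slip angle gives a rough estimate of mu.
mu_static = math.tan(math.radians(slip_angle_deg))

print(f"Estimated wheel/plywood static friction coefficient: {mu_static:.2f}")
```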
Testing our gantry system was very straightforward. Since it was designed like a 3D printer or CNC machine, all we needed to do was command an X and Y coordinate and verify that the gantry moved to the correct position. After adjusting the current supplied to the stepper motors and calibrating the conversion from motor steps to distance, the gantry system reached the commanded position to within two millimeters.
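As an illustration of the steps-to-distance conversion mentioned above, the calibration amounts to a steps-per-millimeter figure derived from the stepper and drive geometry. The motor, microstepping, and pulley values below are placeholders, not our exact hardware:

```python
# Illustrative steps-per-millimeter calculation for a belt-driven gantry axis.
full_steps_per_rev = 200        # 1.8-degree stepper motor
microstepping = 16              # driver microstep setting
belt_pitch_mm = 2.0             # GT2 belt tooth pitch
pulley_teeth = 20               # teeth on the drive pulley

mm_per_rev = belt_pitch_mm * pulley_teeth                  # travel per motor revolution
steps_per_mm = full_steps_per_rev * microstepping / mm_per_rev

def mm_to_steps(distance_mm: float) -> int:
    """Convert a commanded travel distance to motor steps."""
    return round(distance_mm * steps_per_mm)

print(steps_per_mm)         # 80.0 steps/mm with these example values
print(mm_to_steps(25.4))    # steps needed to travel one inch
```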
Our end effector system was put through two tests to ensure robustness: a position test and a strength test. The position test was simple: we extended and retracted the linear actuators until they hit their internal limit switches and checked the resulting orientations. When designing the end effector system, we used geometry and trigonometry to choose mounting locations that would produce a 90-degree orientation change over the full stroke of the actuator. This significantly simplified the programming, as no feedback was required. Because we only needed vertical and horizontal orientations, we could drive the linear actuators to their fully extended and fully retracted positions every time. During testing, our calculations proved accurate and the linear actuators positioned the end effector correctly. For our strength test, we applied forces to the tip of our end-effector finger. Flipping the circuit breakers was the task where the end effector would need to withstand the most force; we measured that about 6 pounds of force was required to flip a breaker, and our end effector repeatedly withstood this load. We knew going into the finger design that it would have to be strong, so when setting up the 3D print, we selected 100 percent infill and chose a printing orientation that would maximize the material strength.
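The mounting-location calculation referenced above can be sketched with the law of cosines: the actuator, its base mount, and its rod mount form a triangle, so the actuator's length fixes the angle at the pivot. The dimensions below are placeholders; in practice the mount distances would be iterated until the swing over the actuator's available stroke reaches 90 degrees:

```python
import math

def included_angle(mount_a_mm, mount_b_mm, actuator_len_mm):
    """Angle at the pivot between the two mounting arms for a given
    actuator length, from the law of cosines."""
    cos_t = (mount_a_mm**2 + mount_b_mm**2 - actuator_len_mm**2) / (2 * mount_a_mm * mount_b_mm)
    return math.degrees(math.acos(cos_t))

# Placeholder geometry: distances from the pivot to the two actuator mounting
# points, and the actuator's retracted/extended lengths (a 45 mm stroke here).
a, b = 40.0, 60.0
retracted, extended = 50.0, 95.0

swing = included_angle(a, b, extended) - included_angle(a, b, retracted)
print(f"Orientation change over full stroke: {swing:.1f} degrees")
# About 87 degrees with these example values; the mount points would be
# adjusted until the full stroke gives exactly the 90 degrees we needed.
```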
We tested each motor subsystem carefully before integration. For the DC motors, this meant designing a system that could command four DC motors with encoder feedback using PID control. For the linear actuators, we simply needed to drive the actuator pair and the single actuator forward and back as needed. For the stepper motors, we needed to ensure that the SKR board was configured so that each of the X, Y, and Z axes could be controlled with G-code commands. After programming and testing each subsystem independently, we integrated them all: we wrote a Python program to command each of the motor subsystems from the Jetson Nano and verified that all motors still functioned as desired. This gave us confidence that we could control each component of the robot as needed for our tasks. A control flow diagram showing each electronic subsystem is shown below.
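A minimal sketch of the per-motor velocity PID loop described above is given here; the gains, loop rate, and the read_encoder_dps/set_motor_pwm driver calls are illustrative assumptions rather than our actual implementation:

```python
import time

class VelocityPID:
    """Minimal PID controller for one DC motor's speed, using encoder feedback."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_dps, measured_dps):
        """Return a motor command (PWM duty in [-1, 1]) for one control step."""
        error = target_dps - measured_dps
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, command))  # clamp to the driver's input range

# Hypothetical usage: read the encoder rate, update the PID, send a PWM command.
# read_encoder_dps() and set_motor_pwm() stand in for the real driver calls.
# pid = VelocityPID(kp=0.004, ki=0.01, kd=0.0, dt=0.02)
# while True:
#     measured = read_encoder_dps()
#     set_motor_pwm(pid.update(target_dps=360.0, measured_dps=measured))
#     time.sleep(pid.dt)
```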
The perception stack was designed with tests built in. The YOLO neural network used to detect the valves and breakers in the camera frame was trained on a dataset we collected ourselves. This dataset contained over 14,000 images, one-fifth of which were set aside for validation, which let us verify that the network was not overfitting to the training data. Additionally, each classical vision algorithm was developed against a suite of pre-recorded “test videos,” and parameters were tuned until the code successfully detected the valve or breaker in every frame. Testing the planner was done quantitatively through simple measurement tests that worked up the stack. We first validated that our low-level PID loop controlled each motor’s speed correctly: we commanded each motor to turn at a specified rate (in degrees per second) and confirmed that it did so to within ±5%. Next, we validated the accuracy of the trajectory generation/follower code. We commanded the robot to drive in a 1-meter square, then rotate 90 degrees and back, and tuned parameters until the closed-loop error for this trajectory was under 5%. These tolerances were tight enough to ensure that we could drive to a station and correct any small remaining errors using the gantry system.
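A minimal sketch of the one-fifth validation hold-out described above follows; the directory layout and file-list format are assumptions (typical of YOLO-style training tools), not our exact setup:

```python
import random
from pathlib import Path

# Illustrative 80/20 train/validation split. Assumes images live under
# dataset/images/ and that the training tool accepts plain file lists.
random.seed(0)

dataset_dir = Path("dataset")
images = sorted((dataset_dir / "images").glob("*.jpg"))
random.shuffle(images)

val_count = len(images) // 5            # one-fifth held out for validation
splits = {"val": images[:val_count], "train": images[val_count:]}

for split, files in splits.items():
    with open(dataset_dir / f"{split}.txt", "w") as f:
        f.writelines(str(p) + "\n" for p in files)
    print(f"{split}: {len(files)} images")
```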
With all the subsystems covered above, we can now show our assembled robot with the electromechanical components integrated. We assess the system's performance on the System Performance page, and additional photos of the robot can be found on the Media page.