Since no digital communication was allowed, our strategy needed to be compatible with a wide variety of collaboration strategies the other team's Locobot might use. To collaborate, we attempted to infer the other robot's world model by observing its block placements.
In order to build a working prototype in a short span of time, we first aimed to construct the minimum viable product (MVP) in Gazebo. Given the constraints laid out in the task, the MVP for this project comprises a Drive Controller (to move the robot), an Arm Controller (to pick up and drop off blocks), a Path Planner (to avoid obstacles), and an Odometry module (to determine the Locobot's position). This simulation MVP was then translated into the physical world.
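To make the Path Planner's role concrete, the sketch below shows one common way such a module can compute an obstacle-avoiding route: breadth-first search over a 2-D occupancy grid. This is an illustrative example, not our actual implementation; the function name and grid representation are assumptions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid.

    grid[r][c] == 1 marks an obstacle cell. Returns the list of
    (row, col) cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # parent pointers for path reconstruction
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent pointers back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable
```

Because BFS expands cells in order of distance, the first route found is a shortest one, which keeps the pre-planned trajectory to a block as direct as the obstacle layout allows.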
By week seven, we achieved the MVP and moved on to task collaboration in the physical world. By the end of week nine, we were able to run the entire task on the physical robot, including scanning the environment for a depth reading, selecting an accurate block target, and following a pre-planned trajectory to the block.
The main challenges we faced in Gazebo were unliftable blocks, slow simulation time, and multi-robot simulation. Despite our best efforts with the Arm Controller module, it was not possible to pick up the blocks. In the end, it turned out that the mass and inertia in the block URDF files were set to infeasible values. To solve this issue, we decreased the block mass from 0.5 kg to 0.005 kg. The slow simulation time was solved by running Gazebo on native Ubuntu machines. Finally, during the last week, we were able to initialize multiple robots in the simulation, each with individual rostopics and nodes running our system. However, the full setup was unsuccessful, as Gazebo could not detect the robot state and joint state information, likely due to discrepancies in the robot descriptions.
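The fix amounted to editing the `<inertial>` element in each block's URDF. A representative snippet is shown below; only the mass change (0.5 kg to 0.005 kg) is from our work, and the link name and inertia values are illustrative placeholders:

```xml
<link name="block">
  <inertial>
    <!-- was: <mass value="0.5"/>, too heavy for the gripper to lift -->
    <mass value="0.005"/>
    <!-- inertia values here are illustrative, not the actual ones -->
    <inertia ixx="1e-6" ixy="0" ixz="0" iyy="1e-6" iyz="0" izz="1e-6"/>
  </inertial>
</link>
```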
On the physical Locobot, we faced noisy odometry readings, goal overshooting, and gripper issues. When multiple Locobots were present on the field, the odometry rostopic would provide a noisy reading. It turned out that the rostopics for all three Locobots had been combined into one channel; after isolating each robot's topics, we were able to reduce much of the noise. However, the camera system continued to jump between Locobots when estimating pose, so we ended up using only one Locobot at a time. For the second challenge, our Locobot would overshoot the goal position, causing it to run into the target block. This was caused by the proportional control system, and we fixed the overshooting by tuning our gain values. Finally, we encountered many gripper issues while working with the Locobot. We discovered that MoveIt was the main culprit, and set about integrating the Interbotix gripper module instead.
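The effect of the gain tuning can be seen in a toy simulation. The sketch below models a 1-D robot under proportional control with a velocity limit and a small actuator lag; all constants are illustrative, not measured from the Locobot. With a high gain the robot carries momentum past the goal (the behavior that rammed the block), while a lower gain approaches the goal without overshoot.

```python
def simulate_p_control(kp, goal=1.0, dt=0.02, steps=400, v_max=0.7):
    """Simulate a 1-D robot driven by proportional control toward `goal`.

    The commanded velocity is kp * error, clamped to v_max. A first-order
    lag models the base's inability to change speed instantly. Returns
    the trajectory of positions. All parameters are illustrative.
    """
    x, v = 0.0, 0.0
    traj = []
    for _ in range(steps):
        error = goal - x
        cmd = max(-v_max, min(v_max, kp * error))  # clamped P command
        v += 0.3 * (cmd - v)                       # actuator lag
        x += v * dt
        traj.append(x)
    return traj
```

Running this with a large gain (e.g. `kp=8.0`) produces a trajectory whose maximum exceeds the goal, while a modest gain (e.g. `kp=0.5`) converges from below; tuning amounts to finding the largest gain that still keeps the approach overshoot-free.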