We adopted a straightforward code architecture. First, block locations are detected, published in a custom message, and stored in an occupation map. Next, a Nash-equilibrium solution assigns blocks to goals. The robot then selects the closest block according to this Nash map and transports it to its assigned goal. After each placement, the Nash map is recomputed and the cycle repeats. The whole pipeline is driven by a topic that continuously publishes detected blocks.
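The assign-pick-place-recompute cycle above can be sketched in a few lines. This is only an illustrative stand-in: the brute-force minimum-total-distance assignment below substitutes for the actual Nash-equilibrium solver, and all function names and 2D positions are hypothetical.

```python
import itertools
import math


def assignment_map(blocks, goals):
    """Stand-in for the Nash solver: brute-force the block-to-goal
    assignment that minimizes total travel distance (fine for the
    small block counts used here). Returns {block_index: goal_index}."""
    best_cost, best = math.inf, None
    for perm in itertools.permutations(range(len(goals)), len(blocks)):
        cost = sum(math.dist(blocks[b], goals[g]) for b, g in enumerate(perm))
        if cost < best_cost:
            best_cost, best = cost, dict(enumerate(perm))
    return best


def run_cycle(robot_xy, blocks, goals):
    """Repeat until no blocks remain: recompute the assignment map,
    pick the block closest to the robot, transport it to its assigned
    goal, and remove both from consideration."""
    blocks, goals, placed = list(blocks), list(goals), []
    while blocks:
        amap = assignment_map(blocks, goals)  # recomputed after each placement
        b = min(amap, key=lambda i: math.dist(robot_xy, blocks[i]))  # closest block
        g = amap[b]
        placed.append((blocks[b], goals[g]))  # transport block to its goal
        robot_xy = goals[g]                   # robot ends up at the goal
        del blocks[b]
        del goals[g]
    return placed
```

In the real system the loop is event-driven (triggered by the detection topic) rather than a synchronous `while` loop, but the ordering of steps is the same.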
The perception code accurately and consistently determined the 3D locations of all 16 blocks in the images below. See the repository code on the perception branch found here.
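Recovering a block's 3D location from a camera detection typically means deprojecting a pixel and its depth through the pinhole camera model. A minimal sketch of that step, where the intrinsics (`fx`, `fy`, `cx`, `cy`) are placeholder values, not the actual Locobot camera calibration:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Deproject pixel (u, v) with depth (meters) into a 3D point in
    the camera frame using the pinhole model:
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        z = depth
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)


# Hypothetical intrinsics and a detection at the image center, 1 m away.
point = pixel_to_3d(320, 240, 1.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# A center pixel deprojects to a point on the optical axis: (0.0, 0.0, 1.0)
```

In practice the camera-frame point would then be transformed into the robot or world frame (e.g. via tf) before being written into the occupation map.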
View of blocks from Locobot camera in Gazebo.
Locobot and multiple blocks in Gazebo.
See the repository code on the planning branch found here.
See the repository code on the control branch found here.
The simulation video can be watched at https://youtu.be/oB4CMK2zu74.