Collaborative Robotics
ME 326 Winter 2022-23
Devin Ardeshna1 , Jerry Wang1 , Timothy Chen2
1 Department of Mechanical Engineering, Stanford University
2 Department of Aeronautics and Astronautics, Stanford University
Simulation and Hardware Demonstration
Abstract
In this work, we propose an end-to-end pipeline for a two-player collaborative pick-and-place game between two robots. We propose a simple signaling strategy based on robot base motions to indicate each agent's intentions. We also build upon the provided codebase to enable robust, multi-block perception by representing each block as a color-labelled bounding box, and we employ Dijkstra's algorithm and spline-based paths in our trajectory planner for safe, smooth navigation. This pipeline was validated both in simulation and on physical LoCoBot hardware.
[Pipeline diagram: Block Detection → Safe Navigation → Grasping → Intent Signaling]
Method
The pipeline employed onboard our LoCoBot to play the two-agent collaborative pick-and-place game consists of four components: block detection, trajectory planning, grasping, and intent signaling.
Trajectory Planner
The motion planner employs Dijkstra's algorithm on a 512×512 occupancy grid. The occupancy map excludes regions that would result in a collision between the robot and blocks or scoring locations. We pass the path from the motion planner to a trajectory generator, which fits a smooth, C2-continuous spline to the waypoints. We then time-parameterize this trajectory while accounting for constraints on maximum velocity, acceleration, and angular rate. This yields a trajectory that the LoCoBot can track with high accuracy given the available motor power. We use a nonlinear control law, based on a polar coordinate transformation, to follow the path and stabilize the robot at the goal position.
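The grid search step can be sketched as follows. This is a minimal 4-connected Dijkstra over a boolean occupancy grid; the real planner runs on the full 512×512 map with obstacle inflation, spline smoothing, and time parameterization applied afterward.

```python
import heapq

def dijkstra(occupancy, start, goal):
    """Shortest 4-connected path on a 2D occupancy grid (True = blocked).

    Returns the path as a list of (row, col) cells, or None if the goal
    is unreachable. A sketch of the search step only; smoothing and
    timing happen in later stages of the planner.
    """
    rows, cols = len(occupancy), len(occupancy[0])
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not occupancy[nr][nc]:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(frontier, (nd, (nr, nc)))
    if goal not in dist:
        return None
    # Walk predecessors back from the goal to recover the path.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

The resulting waypoint list is what the trajectory generator would then fit a spline through.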
Block Perception
This module takes in an RGB-D image from the LoCoBot's Intel RealSense camera and returns color-labelled bounding boxes corresponding to the blocks present in the image. HSV masking assigns color labels to sets of pixels, and DBSCAN clustering divides the point cloud into contiguous point clouds, each corresponding to a single block. We then fit an oriented bounding box to each cluster to approximate the block's geometry. These bounding boxes are used for collision avoidance in the motion planner and for manipulation.
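The color-labelling step can be illustrated with a per-pixel HSV classifier. The hue ranges and saturation/value thresholds below are illustrative placeholders; the actual thresholds on the robot would be tuned to the RealSense camera and lighting conditions.

```python
import colorsys

# Hypothetical hue ranges (fractions of a full turn) for the block colors.
COLOR_RANGES = {
    "red":    [(0.00, 0.05), (0.95, 1.00)],  # hue wraps around at 0/1
    "yellow": [(0.10, 0.20)],
    "green":  [(0.25, 0.45)],
    "blue":   [(0.55, 0.70)],
}

def label_pixel(r, g, b, min_sat=0.4, min_val=0.2):
    """Assign a block-color label to one RGB pixel (channels in [0, 1]),
    or None if the pixel is too unsaturated or too dark to be a block."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < min_sat or v < min_val:
        return None
    for name, ranges in COLOR_RANGES.items():
        if any(lo <= h <= hi for lo, hi in ranges):
            return name
    return None
```

Pixels sharing a label are then projected through the depth channel into 3D and clustered with DBSCAN to separate blocks of the same color.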
Grasping
Manipulation is performed using the MoveIt ROS library to plan joint trajectories to the coordinates of the block retrieved from the block detector module.
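Before MoveIt plans the joint trajectory, a grasp target must be derived from the detected bounding box. The helper below is a hypothetical sketch of that geometry: it picks a top-down gripper orientation across the block's narrower face and offsets a pre-grasp pose above the top face. The clearance value and frame conventions are assumptions, not the project's actual parameters.

```python
import math

def top_down_grasp(center, extents, yaw, approach_clearance=0.10):
    """Derive a top-down grasp target from a block's oriented bounding box.

    center:  (x, y, z) of the box centroid in the robot base frame
    extents: (dx, dy, dz) full side lengths of the box
    yaw:     rotation of the box about the vertical axis, radians

    Returns (pre_grasp, grasp), each an (x, y, z, gripper_yaw) tuple.
    On the robot, these targets would be handed to MoveIt, which plans
    the actual joint trajectory.
    """
    x, y, z = center
    dx, dy, dz = extents
    # Grip across the narrower horizontal face so the fingers fit.
    gripper_yaw = yaw if dx <= dy else yaw + math.pi / 2.0
    # Normalize to (-pi/2, pi/2]; a parallel gripper is symmetric.
    while gripper_yaw > math.pi / 2.0:
        gripper_yaw -= math.pi
    while gripper_yaw <= -math.pi / 2.0:
        gripper_yaw += math.pi
    grasp_z = z + dz / 2.0  # descend to the block's top face
    pre_grasp = (x, y, grasp_z + approach_clearance, gripper_yaw)
    grasp = (x, y, grasp_z, gripper_yaw)
    return pre_grasp, grasp
```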
Intent Signaling
Our collaborative strategy signals our robot's intention via the number of in-place rotations it performs. The number of rotations corresponds to the station number our robot intends to head to next, after dropping off the block currently in its gripper. By signaling our robot's intention, other robots can collaborate more efficiently in the block placement task.
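The signaling protocol is simple enough to sketch directly. Here `rotate_fn` is a hypothetical callback that commands one full in-place rotation of the base (on the LoCoBot this would publish velocity commands); it is injected as an argument so the strategy itself stays testable off the robot.

```python
import time

def signal_intent(station, rotate_fn, pause=0.5):
    """Signal the next target station by spinning in place `station` times,
    pausing between rotations so an observer can count them."""
    for _ in range(station):
        rotate_fn()
        time.sleep(pause)

def decode_intent(rotation_count):
    """The observing robot maps the counted rotations back to the station
    its partner will visit next (a direct one-to-one encoding)."""
    return rotation_count
```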
Acknowledgements
We would like to thank Professor Kennedy and the ME326 TAs for their insights and help throughout this project.