The goal of the project was for the robot to shoot and hit a target moving in any direction, initially at a distance of a few feet and then at farther distances.
1) Be able to hit the static target consistently, as close to the bullseye as possible. The bullet should land within the yellow, red, or blue zones 95% of the time, the bullet should never miss the target entirely, and the shooting distance should be greater than five feet (152 cm).
2) Be able to hit the moving target consistently, as close to the bullseye as possible. The target should be moving at a minimum of 5 cm/s, the bullet should land on the target 90% of the time, and the shooting distance should be greater than five feet (152 cm).
We decided to use the left wrist camera to detect the target, since the head camera suffers from distortion. The RealSense camera's cable length limited where it could be mounted relative to the wrist camera, so we opted for the wrist camera instead.
We experimented with several different algorithms in our image segmentation and computer vision pipeline.
Through trial and error, we found the following compromises made the system work best:
1. We had to lower the speed of the moving target in order to hit it consistently; the sketch after this list illustrates why faster targets demand larger, noisier lead corrections.
2. We used the wrist camera for convenience instead of the RealSense camera, even though the RealSense could potentially have improved our object detection and tracking.
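To give a sense of why target speed matters, the sketch below shows one way a shot could be led under a constant-velocity assumption. This is an illustrative sketch, not our actual aiming code; the function name predict_aim_point, the frame interval, and the projectile flight time are all hypothetical.

```python
import numpy as np

def predict_aim_point(p_prev, p_curr, dt, flight_time):
    """Estimate where to aim, assuming the target moves at constant
    velocity and the projectile takes `flight_time` seconds to arrive.

    p_prev, p_curr: target positions (e.g., in meters, in the robot's
    base frame) from two camera frames taken `dt` seconds apart.
    """
    velocity = (np.asarray(p_curr) - np.asarray(p_prev)) / dt
    # Lead the target by its predicted displacement during projectile flight.
    return np.asarray(p_curr) + velocity * flight_time

# Hypothetical example: a target moving 5 cm/s along x,
# with a 0.4 s projectile flight time.
aim = predict_aim_point([0.50, 0.00], [0.505, 0.00], dt=0.1, flight_time=0.4)
print(aim)  # -> [0.525, 0.0]: aim 2 cm ahead of the current position
```

Because the lead distance grows linearly with both target speed and projectile flight time, any noise in the estimated velocity is amplified at higher speeds, which is consistent with our need to slow the target down.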
To make the system more consistent, we determined an effective gripper setup using rubber bands and foam. We tested this setup and our code for the static target by randomly placing both the target and the arm's initial starting configuration. For the moving target, we tested the system with the target moving in different directions at slightly varying speeds and verified that the bullet hit the target every time.
By thresholding on the target's color and then picking the largest contour, and by covering the table with a dark blue cloth, we were able to reduce the number of false positives from the background.
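As a rough illustration of this detection step, a minimal OpenCV sketch might look like the following. The HSV bounds, the morphological cleanup, the file name frame.png, and the function name detect_target are assumptions for illustration, not our exact implementation.

```python
import cv2
import numpy as np

def detect_target(frame_bgr, lower_hsv, upper_hsv):
    """Return the pixel centroid of the largest color-matched blob,
    or None if nothing in the frame matches the target's color range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold on the target's color; the HSV bounds are placeholders
    # that would be tuned for the actual target.
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # Remove speckle noise before extracting contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep only the largest contour, discarding small background blobs.
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

# Hypothetical HSV range for a red-ish target.
centroid = detect_target(cv2.imread("frame.png"),
                         np.array([0, 120, 70]), np.array([10, 255, 255]))
```

Picking the contour with the largest area works here because the dark blue cloth leaves the target as the dominant color-matched region in the frame.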