We use the USB camera to determine the ball's color and position in the camera frame
We detect balls with a Hough Circle Transform
We convert the frame to grayscale and apply a Gaussian blur to aid circle detection
We get the center of the circle in the image frame (u, v)
We applied minimum and maximum radius filters to keep only balls, rejecting other round objects and the table's holes
We get the ball's color by masking the ROI with HSV thresholds and selecting the most likely color (highest percentage of matching pixels)
We determined the HSV masking values through trial and error, with an emphasis on looser masks to make sure we always capture a ball
Ball and color detection stress test
We first get the ball's pose in the camera frame, assuming a pinhole camera model
From the pinhole projection equations, we can recover the ball's X and Y in the camera frame given the focal lengths f_x and f_y (read from the "camera_info" topic in ROS) and the depth Z
We use an AR tag attached to the pool table to measure how far the table is from the camera, which gives us Z
Once we have this, we can derive the ball's 3D position in the camera frame!
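Back-projecting a pixel with the AR-tag depth can be written out as a small helper. A minimal sketch; the intrinsic values in the example below are made up, since in practice they come from camera_info.

```python
def pixel_to_camera_frame(u, v, Z, fx, fy, cx, cy):
    """Invert the pinhole projection  u = fx*X/Z + cx,  v = fy*Y/Z + cy.

    Given the ball's pixel center (u, v), the depth Z measured via the
    AR tag, and the intrinsics (fx, fy, cx, cy) from camera_info,
    return the ball's (X, Y, Z) position in the camera frame.
    """
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z
```

For example, with fx = fy = 600, principal point (320, 240), a ball at pixel (380, 240), and Z = 1.0 m, the ball sits 0.1 m to the side of the optical axis.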
We now want to translate from the camera frame to the base frame
Our final implementation consisted of:
Get ball's pose in the camera frame
Construct the transform between usb_camera and head_camera
Transform the ball to the head_camera's frame
Transform the ball to the base frame
Transform from the head_camera to usb_camera
We create a transform between the usb_camera and head_camera frames
We used both cameras to detect the same AR tag
We used the lookupTransform method from the ROS tf package to get the two transforms: usb_camera to AR tag, and head_camera to AR tag
We could then compose these two transforms (inverting the usb_camera one) to get the transformation from the usb_camera to the head_camera
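The composition can be sketched with 4x4 homogeneous transforms: T_head←usb = T_head←tag · (T_usb←tag)⁻¹. A numpy sketch with made-up poses; in our pipeline the two inputs come from tf's lookupTransform.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def usb_to_head(T_head_tag, T_usb_tag):
    """Compose T_head<-usb = T_head<-tag @ inv(T_usb<-tag).

    Both inputs are camera-to-tag transforms obtained by detecting the
    same AR tag from each camera.
    """
    return T_head_tag @ np.linalg.inv(T_usb_tag)
```

Applying this to a ball position expressed in the usb_camera frame yields the same point as transforming the tag-frame point directly through the head_camera, which is how we sanity-checked the composition.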
This gives us an accurate transformation of the ball's position from the camera frame to the base frame, with a margin of error of ±5 mm, allowing us to hit the ball
We precomputed the transform, since both cameras were static during the ball-detection stage, allowing for fast transformations
Visualization of the ball and AR marker in the base frame
We utilized RViz to validate our published ball positions: we viewed each ball's pose ("/ball/{color}") overlaid on the camera view, to make sure the published pose lined up with the ball's actual position