Our software pipeline had five main steps:
1. Picking up the ball
2. Moving to a viewing angle
3. Extracting cup positions (x, y, z) from the image and selecting a target cup (see subpage Computer Vision)
4. Calculating the throw pose via optimization (see subpage Initial Throw Position)
5. Throwing the ball (see subpage Throwing)
Steps 3, 4, and 5 are detailed in separate sub-pages.
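The five steps above can be sketched as a single control loop. The helper functions below are hypothetical placeholders standing in for the code described in this page and its sub-pages, not our actual implementation:

```python
def pick_up_ball():
    """Step 1: wait for the operator, then close the gripper (placeholder)."""
    return "picked up"

def move_to_viewing_angle():
    """Step 2: command a joint-angle pose with a clear camera view (placeholder)."""
    return "viewing"

def select_cup():
    """Step 3: extract (x, y, z) cup positions from the image and pick one (placeholder)."""
    return (0.5, 0.2, 0.0)  # dummy target position

def compute_throw_pose(target):
    """Step 4: solve the optimization for a throw pose (placeholder)."""
    return {"target": target}

def throw_ball(pose):
    """Step 5: execute the throw (placeholder)."""
    return "thrown"

def run_pipeline():
    # Run the five steps in order and return the final status.
    pick_up_ball()
    move_to_viewing_angle()
    target = select_cup()
    pose = compute_throw_pose(target)
    return throw_ball(pose)
```

Each placeholder corresponds to one of the numbered steps; the real versions are covered below and in the linked sub-pages.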
*** Please note that in-line code may differ from the code in the repository, both for readability and because of further development of our codebase.
To pick up the ball, we decided to hold the ball between the grippers and manually trigger a close. This was handled by the following code:
from intera_interface import gripper as robot_gripper

gripper = robot_gripper.Gripper('right_gripper')
# ...
_ = input('Press Enter to close the gripper')
gripper.close()
We similarly waited for a manual input to confirm the ball was in the grippers before executing a shell command to move the robot to a good viewing position. To do this quickly, we invoked the intera_examples script go_to_joint_angles.py via subprocess.
import subprocess
import rospy
# ...
def go_to_joint_angles(theta0, theta1, theta3, theta5):
    # Delegate to the intera_examples script; joints j2 and j4 are pinned to 0.0, j6 to 1.57
    subprocess.run(f"rosrun intera_examples go_to_joint_angles.py -q {theta0} {theta1} 0.0 {theta3} 0.0 {theta5} 1.57".split())
    rospy.sleep(0.5)  # pause briefly for safety
# ...
_ = input('find cup')
# Vision code covered in Step 3
go_to_joint_angles(-0.5, -0.785, 0, 0)
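For reference, the command string assembled inside go_to_joint_angles can be factored into a pure helper, which makes the argument order checkable without ROS installed. The name build_joint_angle_cmd is one we introduce here for illustration; it is not part of our repository:

```python
def build_joint_angle_cmd(theta0, theta1, theta3, theta5):
    # Mirror the rosrun invocation above: -q takes all seven joint angles
    # in order, with j2 and j4 pinned to 0.0 and j6 to 1.57.
    return (f"rosrun intera_examples go_to_joint_angles.py "
            f"-q {theta0} {theta1} 0.0 {theta3} 0.0 {theta5} 1.57").split()

cmd = build_joint_angle_cmd(-0.5, -0.785, 0, 0)
# cmd is the token list that subprocess.run receives in the snippet above.
```

Factoring the command out this way keeps the side-effecting subprocess call separate from the easily-tested string formatting.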