Full view of the experimental setup. There are 3 AR tags present. A webcam is placed at the same perspective from which this picture was taken.
View of the setup from RViz.
The materials we used in this project are listed below, each with a picture and description.
Robotiq E-Pick Single Cup Vacuum Gripper: This is a vacuum gripper manufactured by the robotics company Robotiq. See below for more information.
Sawyer Robot Arm: The Sawyer arm, manufactured by Rethink Robotics, is a 7-DOF arm to which various end-effectors can be attached.
Logitech Camera: The Logitech 1080p HD Camera was used for sensing.
Computer: The lab computer running the Ubuntu OS was used for interfacing with the Sawyer.
Various Tools and Small Hardware: Tools such as Allen keys and screwdrivers, along with small hardware such as nuts and bolts, were used to secure the vacuum gripper to the Sawyer robot arm.
AR Tags: These were used to enable pose detection of various objects and to measure object pose relative to the vacuum gripper.
Power Supply & Supporting Cables: This was used as an external power source for the Robotiq vacuum gripper.
Blocks: These were used to construct objects of variable size, shape, and weight.
We used the Robotiq E-Pick as our vacuum gripper. It operates off a single 24 V input, which we supplied from an external power supply. The gripper also has a USB connection to the computer, over which control signals are sent to grip, release, and change the gripping pressure. The gripper does not require an external pressurized air supply.
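As a minimal sketch of how those USB control signals are sent in ROS (condensed from the gripper_startup and go_to_pose code shown later on this page; it assumes the robotiq_vacuum_grippers_control driver node is running and that our gripper_controller module provides genCommand):
#!/usr/bin/env python
# Sketch: drive the Robotiq E-Pick through its ROS driver node.
import time
import rospy
import gripper_controller as gc
from robotiq_vacuum_grippers_control.msg import _RobotiqVacuumGrippers_robot_output as outputMsg

rospy.init_node('gripper_sketch')
pub = rospy.Publisher('RobotiqVacuumGrippersRobotOutput',
                      outputMsg.RobotiqVacuumGrippers_robot_output, queue_size=10)
command = outputMsg.RobotiqVacuumGrippers_robot_output()

command = gc.genCommand('a', command)  # activate the gripper
pub.publish(command)
time.sleep(1)
command = gc.genCommand('g', command)  # grip (apply vacuum)
pub.publish(command)
time.sleep(1)
command = gc.genCommand('c', command)  # release
pub.publish(command)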
The software we wrote for this project is described below in terms of the key functionality it provided, with the key files included. Each section is expandable to reveal the file's contents. To the right is a diagram showing the structure of our ROS workspace; in particular, the contents of the move_grip package are shown within the workspace. This package contains the functionality we implemented, although we also used ar_track_alvar, lab4_cam, usb_cam, and other packages from lab to fully implement our software.
This file contains our sensing code: it publishes AR tag information, expressed as angles between the different AR tags, as a custom message to a ROS topic at a frequency of 10 Hz.
#!/usr/bin/env python
# The line above tells Linux that this file is a Python script, and that the OS
# should use the Python interpreter in /usr/bin/env to run it. Don't forget to
# use "chmod +x [filename]" to make this script executable.

# Import the rospy package. For an import to work, it must be specified
# in both the package manifest AND the Python file in which it is used.
import rospy
import tf2_ros
import scipy
from scipy.spatial.transform import Rotation as R

# Import our custom message type from the /msg directory of the move_grip package.
from move_grip.msg import MoveGripARMessage
from geometry_msgs.msg import TransformStamped

# Define the method which contains the node's main functionality
def init_publisher():
    # Create an instance of the rospy.Publisher object which we can use to
    # publish messages to a topic. This publisher publishes messages of type
    # move_grip/MoveGripARMessage to the topic /move_grip_ar_tag
    pub = rospy.Publisher('move_grip_ar_tag', MoveGripARMessage, queue_size=10)

    # Create a timer object that will sleep long enough to result in a 10 Hz
    # publishing rate
    rate = rospy.Rate(10)  # 10 Hz

    tfBuffer = tf2_ros.Buffer()
    tfListener = tf2_ros.TransformListener(tfBuffer)

    # Loop until the node is killed with Ctrl-C
    i = -1
    while not rospy.is_shutdown():
        i += 1
        pub_time = rospy.get_time()
        object_robot_angle = -1
        wall_object_angle = -1

        # Get the transform between the object and robot AR tags
        rotation_quaternion = None
        try:
            target_frame = "ar_marker_0"  # On the object
            source_frame = "ar_marker_1"  # On the robot
            trans = tfBuffer.lookup_transform(target_frame, source_frame, rospy.Time())
            rotation_quaternion = trans.transform.rotation
            # Use the transform to compute the angle between the object and the end effector
            r = R.from_quat([rotation_quaternion.x, rotation_quaternion.y, rotation_quaternion.z, rotation_quaternion.w])
            z, y, x = r.as_euler('zyx', degrees=True)
            object_robot_angle = z
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException):
            if i % 100 == 0:
                print("Bad Transform for Object-Robot")

        # Get the transform between the wall and object AR tags
        rotation_quaternion2 = None
        try:
            source_frame = "ar_marker_0"  # On the object
            base_frame = "ar_marker_2"    # On the wall
            trans2 = tfBuffer.lookup_transform(base_frame, source_frame, rospy.Time())
            rotation_quaternion2 = trans2.transform.rotation
            r2 = R.from_quat([rotation_quaternion2.x, rotation_quaternion2.y, rotation_quaternion2.z, rotation_quaternion2.w])
            z2, y2, x2 = r2.as_euler('zyx', degrees=True)
            wall_object_angle = z2
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException):
            if i % 100 == 0:
                print("Bad Transform for Wall-Object")

        # Publish the angles only if at least one transform was found
        if not (object_robot_angle == -1 and wall_object_angle == -1):
            timestamp_msg = MoveGripARMessage(object_robot_angle, wall_object_angle, pub_time)
            pub.publish(timestamp_msg)
            if i % 50 == 0:
                print(rospy.get_name() + ": I sent \"%s\"" % timestamp_msg)

        # Use our rate object to sleep until it is time to publish again
        rate.sleep()

# This is Python's syntax for a main() method, which is run by default when
# executed in the shell
if __name__ == '__main__':
    # Run this program as a new node in the ROS computation graph.
    rospy.init_node('move_grip_ar_tag_node', anonymous=True)

    # Check if the node has received a signal to shut down. If not, run the
    # publisher method.
    try:
        init_publisher()
    except rospy.ROSInterruptException:
        pass
This file contains the code we used to subscribe to AR tag information published to a ROS topic by our publisher node. This node was simply used to test the functionality of our publisher.
#!/usr/bin/env python
# The line above tells Linux that this file is a Python script, and that the OS
# should use the Python interpreter in /usr/bin/env to run it. Don't forget to
# use "chmod +x [filename]" to make this script executable.

# Import the dependencies as described in example_pub.py
import rospy
from move_grip.msg import MoveGripARMessage

# Define the callback method which is called whenever this node receives a
# message on its subscribed topic. The received message is passed as the first
# argument to callback().
def callback(message):
    # Log the contents of the message to the console
    rospy.loginfo(message)

# Define the method which contains the node's main functionality
def listener():
    # Create a new instance of the rospy.Subscriber object which we can use to
    # receive messages of type move_grip/MoveGripARMessage from the topic
    # /move_grip_ar_tag. Whenever a new message is received, the method
    # callback() will be called with the received message as its first argument.
    rospy.Subscriber("move_grip_ar_tag", MoveGripARMessage, callback)

    # Wait for messages to arrive on the subscribed topics, and exit the node
    # when it is killed with Ctrl+C
    rospy.spin()

# Python's syntax for a main() method
if __name__ == '__main__':
    # Run this program as a new node in the ROS computation graph with a
    # randomly generated suffix. This means we can start multiple copies of
    # this node without having multiple nodes with the same name, which ROS
    # doesn't allow.
    rospy.init_node('ar_tar_subscriber_node', anonymous=True)
    listener()
This file contains the planning and actuation code: the majority of the code we used to plan and actuate in each of our experiments and benchmark tasks. This node subscribes to the ROS topic to which the AR tag publisher node writes messages containing the relative angle information.
#!/usr/bin/env python
import sys
from intera_interface import Limb
import rospy
import numpy as np
import traceback
import time

from moveit_msgs.msg import OrientationConstraint
from geometry_msgs.msg import PoseStamped

from path_planner import PathPlanner
try:
    from controller import Controller
except ImportError:
    pass

import gripper_controller as gc
from robotiq_vacuum_grippers_control.msg import _RobotiqVacuumGrippers_robot_output as outputMsg
from move_grip.msg import MoveGripARMessage
from scipy.spatial.transform import Rotation as R
import matplotlib.pyplot as plt

# Define global variables
object_robot_angle = None
wall_object_angle = None
planner = PathPlanner("right_arm")
command = outputMsg.RobotiqVacuumGrippers_robot_output()
pub = rospy.Publisher('RobotiqVacuumGrippersRobotOutput', outputMsg.RobotiqVacuumGrippers_robot_output)
orientation_original = [-180, 0, -180]  # default gripper orientation (pointing down); equivalent quaternion [0.0, 1.0, 0.0, 0.0]
object_height = 0.1

def to_quat(euler):
    """Convert Euler angles (degrees) to a quaternion."""
    r = R.from_euler('xyz', euler, degrees=True)
    return r.as_quat()
# Gripper startup code -- run before moving to a pose
def gripper_startup():
    global command
    command = gc.genCommand('a', command)  # activate
    pub.publish(command)
    time.sleep(1)
    command = gc.genCommand('c', command)  # release
    pub.publish(command)

# Function for moving to a specified location
def go_to_pose(position, euler_angles, grip, enter=True):
    global command
    goal = PoseStamped()

    # Construct the goal from the position and orientation
    goal.header.frame_id = "base"

    # Convert Euler angles to a quaternion
    orientation = to_quat(euler_angles)

    # x, y, and z position
    goal.pose.position.x = position[0]
    goal.pose.position.y = position[1]
    goal.pose.position.z = position[2]

    # Orientation as a quaternion
    goal.pose.orientation.x = orientation[0]
    goal.pose.orientation.y = orientation[1]
    goal.pose.orientation.z = orientation[2]
    goal.pose.orientation.w = orientation[3]

    # Plan and execute the motion
    try:
        plan = planner.plan_to_pose(goal, [])
        if enter:
            raw_input("Press <Enter> to move the arm to goal pose " +
                      "\n\tAngle between object-robot: " + str(object_robot_angle) +
                      "\n\tAngle between wall-object: " + str(wall_object_angle))
        else:
            time.sleep(1)
        if not planner.execute_plan(plan):
            if grip:
                # Still send the gripper command even if execution fails
                command = gc.genCommand(grip, command)
                pub.publish(command)
            raise Exception("Execution failed")
        if grip:
            command = gc.genCommand(grip, command)
            pub.publish(command)
    except Exception as e:
        print(e)
        traceback.print_exc()
def callback(message):
    global object_robot_angle
    global wall_object_angle
    # A value of -1 means that transform was unavailable; keep the previous value
    object_robot_angle = message.object_robot_angle if not np.isclose(message.object_robot_angle, -1) else object_robot_angle
    wall_object_angle = message.wall_object_angle if not np.isclose(message.wall_object_angle, -1) else wall_object_angle

# Define the method which contains the node's main functionality
def init_ar_tag_listener():
    rospy.Subscriber("move_grip_ar_tag", MoveGripARMessage, callback)

def find_k_increments(increment):
    goto_initial_pos()
    pickup_object()
    goto_initial_pos()

    wall_object_measurements = {}
    object_robot_measurements = {}
    y_angle = 0
    while y_angle < 140:
        print("about to do " + str(y_angle))
        go_to_pose([0.840, 0.094, 0.3], [-180, y_angle, -180], None)
        time.sleep(3)
        wall_object_measurements[y_angle] = wall_object_angle
        object_robot_measurements[y_angle] = object_robot_angle
        print('Took measurement for %s!' % y_angle)
        print("For angle %s, we have wall object angle %s and object robot angle %s" % (y_angle, wall_object_angle, object_robot_angle))
        y_angle = y_angle + increment

    print('Here are the measurements:')
    print(wall_object_measurements)
    print(object_robot_measurements)
    drop_object()

def pickup_object():
    go_to_pose([0.840, 0.094, object_height + 0.01], orientation_original, None)  # slightly above object
    go_to_pose([0.840, 0.094, object_height], orientation_original, 'g')          # grab object

def drop_object():
    go_to_pose([0.840, 0.094, object_height], orientation_original, 'c')  # release object

def goto_initial_pos():
    """
    Go to initial arm location
    """
    go_to_pose([0.864, 0.108, 0.311], orientation_original, None)
def feedback_control_level_out(tolerance=5, goal=90, proportional_constant=0.5):
    """
    Closed-loop feedback controller to level out the object held by the gripper
    """
    rospy.sleep(3)  # wait for object to stabilize

    # Closed-loop proportional control to reach the desired object angle
    curr = 0
    print('starting closed loop feedback control')
    while np.abs(goal - wall_object_angle) > tolerance:
        print('Wall object error is %s which is above tolerance of %s. Fixing...' % (goal - wall_object_angle, tolerance))
        curr += proportional_constant * (goal - wall_object_angle)
        print("going to move to %s" % curr)
        go_to_pose([0.707, 0.158, 0.456], [-180, curr, -180], None, False)
        rospy.sleep(1)
    print('Wall object angle converged to %s' % wall_object_angle)
    return [-180, curr, -180]

def feedback_control_demo():
    goto_initial_pos()
    pickup_object()
    goto_initial_pos()
    feedback_control_level_out()
def go_to_box(orientation):
    box_height = -0.15
    time.sleep(2)
    # Go on top of the box (an alternative path going into the box was also tested)
    go_to_pose([0.777, 0.435, 0.2], orientation, None)
    time.sleep(2)
    go_to_pose([0.930, 0.380, 0.2], orientation, 'c')

def go_into_box_demo():
    goto_initial_pos()
    pickup_object()
    goto_initial_pos()
    level_orientation = feedback_control_level_out()
    # go_to_box(level_orientation)
    # go_to_box(orientation_original)

def add_constraint(title, position, orientation, size):
    # Add a box obstacle (e.g. the table) to the planning scene
    p = PoseStamped()
    p.header.frame_id = "base"

    # x, y, and z position
    p.pose.position.x = position[0]
    p.pose.position.y = position[1]
    p.pose.position.z = position[2]

    # Orientation as a quaternion
    p.pose.orientation.x = orientation[0]
    p.pose.orientation.y = orientation[1]
    p.pose.orientation.z = orientation[2]
    p.pose.orientation.w = orientation[3]

    planner.add_box_obstacle(size=np.array(size), name=title, pose=p)

def remove_constraint(title):
    planner.remove_obstacle(title)
def move_object_no_cam_to_angle_demo():
    global object_height
    theta_desired = np.radians(90)

    # Object masses (kg)
    object_1_mass = 0.157          # small
    object_2_mass = 0.345          # medium
    object_3_mass = 0.373 + .084   # large

    # Distance from the suction cup to each object's center of mass (m)
    object_1_d = 0.051  # small
    object_2_d = 0.102  # medium
    object_3_d = 0.122  # large

    # Object heights (m)
    object_1_h = 0.05
    object_2_h = 0.1
    object_3_h = 0.11

    # Set params based on which object is being used
    mass = object_1_mass
    object_d = object_1_d
    object_height = object_1_h

    # Pick up the object
    goto_initial_pos()
    pickup_object()
    goto_initial_pos()

    # Compute the gripper angle to go to by searching over candidate angles
    candidate_theta_grippers = np.linspace(0, np.pi, 1000)
    resulting_object_angles = [calc_object_angle(mass, get_k(gripper_theta), gripper_theta, object_d)
                               for gripper_theta in candidate_theta_grippers]
    smallest_diff = np.inf
    smallest_diff_i = -1
    for i, object_angle in enumerate(resulting_object_angles):
        cur_diff = np.abs(object_angle - theta_desired)
        if cur_diff < smallest_diff:
            smallest_diff = cur_diff
            smallest_diff_i = i
    best_gripper_angle = candidate_theta_grippers[smallest_diff_i]
    best_resulting_object_angle = resulting_object_angles[smallest_diff_i]
    print("found that %s is the best gripper angle, which causes a %s object angle" % (np.degrees(best_gripper_angle), np.degrees(best_resulting_object_angle)))

    # Generate plot of the model's predicted object angle vs. gripper angle
    plt.plot(np.degrees(candidate_theta_grippers), np.degrees(resulting_object_angles), label='Predicted behavior')
    plt.xlabel('Candidate gripper theta (degrees)')
    plt.ylabel('Predicted object theta (degrees)')
    plt.axhline(y=np.degrees(theta_desired), color='r', label='Desired object angle', linestyle='dashed')
    plt.axvline(x=np.degrees(best_gripper_angle), color='g', label='Best gripper angle', linestyle='dashed')
    plt.title('Non rigid gripper model curve for mass=%s kg' % mass)
    plt.legend()
    plt.savefig('/home/cc/ee106a/fl21/class/ee106a-ace/Desktop/106a-final-project/figures/mass_%s.png' % mass)

    # Go to the computed gripper angle
    go_to_pose([0.707, 0.158, 0.456], [-180, np.degrees(best_gripper_angle), -180], None)

    print('waiting for object to stabilize')
    time.sleep(3)
    print('actual angle between object and robot is %s' % wall_object_angle)
    print('error from desired is %s' % (wall_object_angle - np.degrees(theta_desired)))
def modeling_determine_mass_demo():
    global object_height
    object_height = 0.085
    # The actual object weight is 200 g

    # Pick up the object
    goto_initial_pos()
    pickup_object()
    goto_initial_pos()

    gripper_angle = np.radians(90)  # radians

    # Move the gripper to 90 degrees
    go_to_pose([0.707, 0.158, 0.456], [-180, np.degrees(gripper_angle), -180], None)

    # Let the object stabilize, then measure the wall-object angle
    time.sleep(3)
    cur_wall_obj_angle = np.radians(wall_object_angle)
    print("Current wall obj angle is %s" % np.degrees(cur_wall_obj_angle))

    # Compute the mass from the model
    mass_computed = compute_m(get_k(gripper_angle), 0.108, gripper_angle, cur_wall_obj_angle)
    print("Mass is computed to be %s" % mass_computed)

def get_k(gripper_angle):  # gripper_angle is in radians
    # Spring constant fit from the data collected with find_k_increments
    k = 0.361
    return k
def main():
    """
    Main Script
    """
    init_ar_tag_listener()
    gripper_startup()

    # Add the table as an obstacle in the planning scene
    add_constraint("Table", [1.1, .2, -.2], [0, 0, 0, 1], [0.40, 1.20, 0.10])
    # add_constraint("Box", [1, 0.6, .05 + 0.3048/2 - .2], [0, 0, 0, 1], [0.3048, 0.3048, 0.3048])
    # remove_constraint("Box")
    # remove_constraint('Table')

    # Uncomment one of the following to run a particular experiment:
    # find_k_increments(15)
    # feedback_control_demo()
    # go_into_box_demo()
    # move_object_no_cam_to_angle_demo()
    modeling_determine_mass_demo()
# Step 1: Find angle
# Step 2: Use angle to find k
# Step 3: Calculate angle of new object with mass and k (known)
# Step 4: Calculate the pose of the end effector that makes the angle of the object relative to the base frame 0 (straighten object)
#   Step 4a: Add tag to base of robot
#   Step 4b: Track angle between object and base AR tags
#   Step 4c: Control the angle between the vacuum gripper and object so that the angle between the object and base frame equals 0

def calc_object_angle(mass, k, gripper_angle, object_d):
    """ Figures out the new angle of the object given k and the object's mass """
    G = 9.81
    d = object_d + 0.025  # add the length of the small vacuum gripper cup
    return gripper_angle - (mass * G * d * np.sin(gripper_angle) / k)

def compute_m(k, object_d, gripper_angle, object_angle):
    """ Estimates the object's mass from the gripper and object angles """
    G = 9.81
    d = object_d + 0.025
    return k * (gripper_angle - object_angle) / (G * d * np.sin(gripper_angle))

if __name__ == '__main__':
    rospy.init_node('moveit_node')
    main()
This file starts each node that we use in our project.
<launch>
  <arg name="marker_size" default="16.5" />
  <arg name="max_new_marker_error" default="0.05" />
  <arg name="max_track_error" default="0.05" />
  <arg name="cam_image_topic" default="/usb_cam/image_raw" />
  <arg name="cam_info_topic" default="/usb_cam/camera_info" />
  <arg name="output_frame" default="/usb_cam" />

  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
    <param name="video_device" value="/dev/video0" />
    <param name="image_width" value="1280" />
    <param name="image_height" value="720" />
    <param name="pixel_format" value="yuyv" />
    <param name="camera_frame_id" value="usb_cam" />
    <param name="io_method" value="mmap"/>
  </node>

  <!-- copied from ar_track.launch from the lab4_cam package -->
  <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkersNoKinect" respawn="false" output="screen">
    <param name="marker_size" type="double" value="$(arg marker_size)" />
    <param name="max_new_marker_error" type="double" value="$(arg max_new_marker_error)" />
    <param name="max_track_error" type="double" value="$(arg max_track_error)" />
    <param name="output_frame" type="string" value="$(arg output_frame)" />
    <remap from="camera_image" to="$(arg cam_image_topic)" />
    <remap from="camera_info" to="$(arg cam_info_topic)" />
  </node>

  <node pkg="intera_interface" name="enable_robot" type="enable_robot.py" args="-e" output="screen" />
  <node pkg="intera_interface" name="joint_trajectory_action_server" type="joint_trajectory_action_server.py" output="screen" />

  <include file="$(find sawyer_moveit_config)/launch/sawyer_moveit.launch" >
    <arg name="electric_gripper" value="false"/>
  </include>

  <node pkg="robotiq_vacuum_grippers_control" name="RobotiqVacuumGrippersRtuNode" type="RobotiqVacuumGrippersRtuNode.py" args="/dev/ttyUSB0" output="screen" />

  <node pkg="move_grip" name="ar_tag_publisher" type="ar_tag_publisher.py" output="screen" />
</launch>
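Assuming this file is saved under the move_grip package's launch directory as, say, move_grip.launch (the filename here is just a placeholder), the camera, AR tracking, MoveIt, the gripper driver, and our AR tag publisher can all be started with a single command such as roslaunch move_grip move_grip.launch.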
The steps below give an in-depth description of each part of our experimental procedure, including: collecting data with the AR tags to find the spring coefficient in our model, using a closed-loop controller to demonstrate our setup achieving a desired object pose with sensing and control, and using our model to achieve a number of benchmark tasks, such as achieving a desired object pose without sensing and determining the mass of an unknown object.
Note that we perform all our computations assuming the gripper and object move only within a 2D plane (as viewed from a camera pointed at the side of the gripper). When we refer to an angle, we mean the angle within this plane, where 0 degrees points straight down, 90 degrees points horizontally away from the robot, 180 degrees points straight up, and so on. This setup is shown in the figure to the right.
In order to model our system, we need to know the relative poses between the object and gripper and between the object and the world frame. We put one AR tag each on the gripper, the object, and the wall (world frame), as seen in the figure to the right. Throughout our experiments, we manipulated the gripper such that the object and gripper AR tags always faced the camera whenever we needed to take a pose measurement. We achieved this by using orientation constraints in MoveIt.
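For illustration, an orientation constraint can be constructed and handed to the lab's PathPlanner as sketched below; the link name and tolerance values are assumptions for the sketch, not the exact values we used (the go_to_pose code shown earlier passes an empty constraint list to plan_to_pose).
# Sketch: constrain the end-effector orientation during planning so the
# object/gripper AR tags keep facing the camera. Values are illustrative.
from moveit_msgs.msg import OrientationConstraint

orien_const = OrientationConstraint()
orien_const.link_name = "right_gripper_tip"   # Sawyer end-effector link (assumed)
orien_const.header.frame_id = "base"
orien_const.orientation.y = 1.0               # keep the gripper pointing downward
orien_const.absolute_x_axis_tolerance = 0.1
orien_const.absolute_y_axis_tolerance = 0.1
orien_const.absolute_z_axis_tolerance = 3.14  # allow free rotation about the z-axis
orien_const.weight = 1.0

# plan_to_pose accepts a list of OrientationConstraints:
# plan = planner.plan_to_pose(goal, [orien_const])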
We wrote a ROS node that reads in the AR tag frames detected by the ar_track_alvar ROS package and computes relative transforms between the AR tags using the tf2 package. We keep all the AR tags in a single plane (discarding depth information), so we were only interested in the relative angles between tags, not the translational offsets between them. We created a ROS topic and message type and published the angles we computed to that topic at 10 Hz. This message is included to the right.
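For reference, based on how the fields are used in the publisher shown earlier, the MoveGripARMessage.msg definition is roughly the following (field types are assumed):
# MoveGripARMessage.msg (reconstructed from how the message is used in our code)
float64 object_robot_angle   # angle between the object and gripper AR tags (degrees)
float64 wall_object_angle    # angle between the wall and object AR tags (degrees)
float64 pub_time             # rospy.get_time() when the measurement was published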
Now that we have a way to determine angles between the object, wall, and gripper, we wrote a closed-loop controller that moves an object to a desired world-frame angle. Notably, this controller uses feedback from the AR tag angles computed from the camera data and does not yet use any of our modeling of the non-rigid behavior of the vacuum gripper. We elected to use a simple proportional controller of the following form:
θ_gripper[t+1] = θ_gripper[t] + α * (θ_desired − θ_wall-object)
The goal here is to demonstrate the ability to put the object at a desired angle using camera information. Once we do the non-rigid modeling in later steps, we show that we can perform this same task (putting the object at a specified angle) without the need for a camera or AR tags.
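For example, a single update step of this controller looks like the following (the numbers are made up for illustration; in the full feedback_control_level_out function shown earlier, this repeats until the error falls within tolerance):
# Minimal sketch of one proportional update step (illustrative values)
alpha = 0.5                  # proportional constant
theta_desired = 90.0         # desired wall-object angle (degrees)
theta_wall_object = 70.0     # latest measurement from the AR tag publisher
theta_gripper = 45.0         # current commanded gripper angle (degrees)

# Nudge the gripper by a fraction of the remaining error
theta_gripper += alpha * (theta_desired - theta_wall_object)
print(theta_gripper)  # 55.0 -- then re-measure and repeat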
To properly fit the torsional spring model, we need to determine the spring constant. As a reminder, the model is depicted to the right: the vacuum gripper-object system is modeled as a torsional spring whose restoring torque k*(θ_gripper − θ_object) balances the gravitational torque m*g*d*sin(θ_gripper), the product of the mass of the object, the acceleration due to gravity, the distance to the center of mass of the suction cup-object system, and the sine of the gripper angle.
Thus, to fit this model we need to put the gripper at a range of angles and determine the corresponding gripper-object angle at each one. To accomplish this, we start the gripper at 0 degrees (pointing downward) holding an object we constructed and move the gripper in 15-degree increments up to 180 degrees (pointing straight up). At each increment we determine the gripper-object angle from the AR tags and record the data.
We are able to leverage the data collected in the previous step, shown in the figure to the right, to find the spring constant. Using the torsional spring model described above, we compute a spring constant K for each of the 15-degree increments and then average those K values together to produce our estimate of K. Now we have fit our model! Note that we did this only for a single object of constant mass; in the future, using multiple objects of different masses could lead to a more accurate K value.
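As a sketch of that fitting step, solving the model k*(θ_gripper − θ_object) = m*g*d*sin(θ_gripper) for k at each increment and averaging might look like the following; the measurement values below are made up for illustration and are not our actual data:
import numpy as np

G = 9.81
mass = 0.157        # kg, mass of the calibration object
d = 0.051 + 0.025   # m, distance from the suction cup mount to the combined center of mass

# (gripper angle, measured gripper-object angle) pairs in degrees -- illustrative only
measurements = [(15, 10.2), (30, 20.7), (45, 31.8), (60, 43.9), (75, 57.0)]

ks = []
for gripper_deg, object_deg in measurements:
    theta_g = np.radians(gripper_deg)
    theta_o = np.radians(object_deg)
    # Solve k * (theta_g - theta_o) = m * G * d * sin(theta_g) for k
    ks.append(mass * G * d * np.sin(theta_g) / (theta_g - theta_o))

k_estimate = np.mean(ks)
print("Estimated spring constant k = %.3f" % k_estimate)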
Now that we have the spring constant K, we can use the model to figure out which gripper angle produces a desired object angle. This is challenging to solve in closed form since the gripper angle appears both inside and outside of a sine function. To resolve this, we try 1000 candidate gripper angles (from 0 to 180 degrees) and use the model to compute the resulting object angle for each. We then pick the candidate gripper angle whose predicted object angle is closest to the desired object angle.
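The same search written compactly (equivalent to the loop in move_object_no_cam_to_angle_demo shown earlier, using np.argmin):
import numpy as np

G = 9.81

def predicted_object_angle(mass, k, gripper_angle, d):
    # Torsional spring model: theta_object = theta_gripper - m*g*d*sin(theta_gripper)/k
    return gripper_angle - (mass * G * d * np.sin(gripper_angle) / k)

def best_gripper_angle(mass, k, d, theta_desired):
    candidates = np.linspace(0, np.pi, 1000)                    # candidate gripper angles (radians)
    predicted = predicted_object_angle(mass, k, candidates, d)  # model prediction for each candidate
    return candidates[np.argmin(np.abs(predicted - theta_desired))]

# Example with the small object's parameters from our code
print(np.degrees(best_gripper_angle(mass=0.157, k=0.361, d=0.051 + 0.025, theta_desired=np.radians(90))))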
Now that we have a way of determining what the gripper angle should be to get the object to a desired pose, we can simply supply an object's mass and center-of-mass location along with a desired object angle, compute the gripper angle we should command, and then move the gripper to that angle. Note that this happens entirely without a camera or AR tags. To test the validity of this approach, we do still use the camera and AR tags to compute the error between the object's actual and desired angles.
Our model relates object mass, gripper angle, and object angle. Thus we can use the camera and AR tags to measure the object angle at a known gripper angle and then use our model to estimate the mass of an unknown object. Rearranging our model formula, we get:
mass = k*(gripper_angle - object_angle)/(G*d*np.sin(gripper_angle))
where d is the distance to the center of mass of the unknown object.
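As a quick sanity check of the arithmetic (with made-up numbers, not a measurement from our experiments): for k = 0.361, d = 0.133 m, the gripper at 90 degrees, and a measured object angle of 60 degrees, mass = 0.361*(π/2 − π/3)/(9.81*0.133*sin(π/2)) ≈ 0.145 kg.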