Before we can have any meaningful interaction with the objects, we first have to figure out how to handle them properly. Beyond simply pushing objects around, it would be very useful to pick solid objects up for several of our actions. However, because we have limited vision feedback, we do not know the dimensions of the objects in advance. Another issue is the lack of force sensing on the gripper's fingers, so we have no straightforward way of knowing how hard the gripper is squeezing an object.
To solve this problem and enable the gripper to detect objects within its grasp, we proposed using the gripper's working current as an indicator of contact. When the fingers are not touching anything, the working current stays near zero; the reading jumps immediately upon contact with a solid object. Conveniently, the Robotiq 2F-140 gripper manual revealed that the gripper exposes a status register that automatically converts this current reading into a boolean-like object-detection flag. We incorporated this register value into our gripper motion code so that the gripper stops closing as soon as a solid object is detected between its fingers. With this implementation, we can hold most reasonably smooth solid objects with just enough force that we put little stress on either the object or the gripper, while still ensuring the object does not slip out of the grasp when the robot arm is in motion.
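A minimal sketch of this contact-aware close, assuming the robotiq_2f_gripper_control ROS driver; the topic names, command fields, and gOBJ status register below follow that package's conventions and may differ in other driver versions:

```python
# Close the Robotiq 2F-140 slowly and stop as soon as the gripper's
# object-detection register (gOBJ) reports contact while closing.
import rospy
from robotiq_2f_gripper_control.msg import Robotiq2FGripper_robot_input
from robotiq_2f_gripper_control.msg import Robotiq2FGripper_robot_output

def close_until_contact(pub):
    cmd = Robotiq2FGripper_robot_output()
    cmd.rACT = 1    # gripper activated
    cmd.rGTO = 1    # go to the requested position
    cmd.rPR = 255   # request fully closed
    cmd.rSP = 50    # close slowly so contact detection has time to trigger
    cmd.rFR = 25    # low force limit: enough grip without crushing
    pub.publish(cmd)
    while not rospy.is_shutdown():
        status = rospy.wait_for_message(
            'Robotiq2FGripperRobotInput', Robotiq2FGripper_robot_input)
        if status.gOBJ == 2:   # register reports contact while closing
            return True        # a solid object is held between the fingers
        if status.gOBJ == 3:   # reached full close without contact
            return False

if __name__ == '__main__':
    rospy.init_node('grasp_with_contact')
    pub = rospy.Publisher('Robotiq2FGripperRobotOutput',
                          Robotiq2FGripper_robot_output, queue_size=1)
    rospy.sleep(0.5)           # give the publisher time to connect
    print(close_until_contact(pub))
```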
With this done, we can manipulate most solid objects however we want.
For probing an object's deformability, our plan is to drive the closed gripper down onto the object slowly. Shortly after contact, the force sensor returns a strong reading, which we then use to calculate a spring constant.
This design is driven by two limitations: we know only the rough location of the object, and the robot arm cannot withstand much opposing force. To avoid harming the robot or the object, we chose to approach the object cautiously. However, to still get a meaningful reading, we programmed the robot to descend in a step-like motion, as sketched below. This way, we always know the displacement between consecutive readings, which makes it possible to calculate the spring constant from the slope of force versus displacement.
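A minimal sketch of the stepped probe, where move_down_by() and read_wrench_z() are hypothetical helpers wrapping our arm motion and force sensor interfaces; the step size and force limits are illustrative values, not our tuned ones:

```python
# Descend in fixed increments, record the z-axis force at each step, and fit
# the spring constant k as the slope of force versus displacement.
import numpy as np

STEP = 0.002             # 2 mm per step keeps each contact event gentle
CONTACT_THRESHOLD = 2.0  # N: reading that signals first contact
MAX_FORCE = 15.0         # N: back off before the arm faults

def probe_spring_constant(move_down_by, read_wrench_z):
    depth = 0.0
    while read_wrench_z() < CONTACT_THRESHOLD:
        move_down_by(STEP)                # cautious approach phase
        depth += STEP
    depths, forces = [], []
    while read_wrench_z() < MAX_FORCE:
        move_down_by(STEP)                # known displacement per step
        depth += STEP
        depths.append(depth)
        forces.append(read_wrench_z())
    k, _ = np.polyfit(depths, forces, 1)  # least-squares slope = k
    return k
```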
For weighing an object, we initially thought the change in the force sensor's z-axis reading would scale linearly with the object's weight.
However, this naïve method is prone to issues due to force sensor noise. One problem in particular came up repeatedly: after weighing an object, the force sensor sometimes fails to revert to its neutral state, causing the next reading to carry a considerable error.
With that in mind, our final approach to measuring weight is to take readings multiple times and average them.
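A minimal sketch of this averaging, reusing the hypothetical read_wrench_z() helper from the probing sketch; the sample count, the taring step, and the lift_object() callback are illustrative assumptions:

```python
# Average many z-axis force samples before and after lifting the object;
# the difference, divided by g, estimates the object's mass.
import numpy as np

G = 9.81  # m/s^2

def average_z(read_wrench_z, n_samples=50):
    # Averaging over many samples suppresses the sensor's noise.
    return np.mean([read_wrench_z() for _ in range(n_samples)])

def measure_mass(read_wrench_z, lift_object):
    baseline = average_z(read_wrench_z)  # tare before grasping
    lift_object()                        # grasp and hold the object still
    loaded = average_z(read_wrench_z)
    return (loaded - baseline) / G       # estimated mass in kg
```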
Based on the decision tree, we implemented the decision as a function of two to three inputs corresponding to the object properties gathered by the sensing team. This function returns a weighted sum of the attributes, with empirically determined scalar coefficients, of the form s = c₁x₁ + c₂x₂ + c₃x₃, where each xᵢ is a sensed property and each cᵢ is a tuned coefficient.
This sum is implemented as a function within our planning class, so once we finish probing the object and receive the property inputs, we simply call the planner, which returns a list of waypoints based on relative transforms and the observed locations of the AR tags.
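A minimal sketch of that scoring function; the coefficient values and the number of properties shown here are placeholders, not our tuned numbers:

```python
# Weighted-sum scorer over the sensed object properties.
class Planner:
    COEFFS = (0.5, 0.3, 0.2)  # illustrative, not the empirically tuned values

    def score(self, properties):
        # properties: the two or three attributes from the sensing team,
        # in a fixed order matching COEFFS
        return sum(c * x for c, x in zip(self.COEFFS, properties))
```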
Using a RealSense camera that captures depth and color, we use AR tag tracking from a previous in-class lab to identify the rigid transforms between the robot's base, the object, and the destination. This AR tag tracking software uses the ar_track_alvar package in ROS. Once the transforms are calculated, we can identify points through which the end effector's trajectory should pass and feed them into MoveIt!, our inverse kinematics solver.
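A minimal sketch of the transform lookup, assuming ar_track_alvar is publishing frames named ar_marker_<id> to tf and that the robot's base frame is called base_link; both names depend on the lab's configuration:

```python
# Look up the rigid transform from the robot's base frame to an AR marker.
import rospy
import tf2_ros

def lookup_marker(buf, marker_id, base_frame='base_link'):
    return buf.lookup_transform(base_frame, 'ar_marker_%d' % marker_id,
                                rospy.Time(0), rospy.Duration(4.0))

if __name__ == '__main__':
    rospy.init_node('marker_lookup')
    buf = tf2_ros.Buffer()
    tf2_ros.TransformListener(buf)  # fills the buffer in the background
    print(lookup_marker(buf, 0))
```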
We obtained the MoveIt! robot configuration for the UR5 specifically by downloading it from the Universal Robots website. By launching the inverse kinematics solver node, we can send in global coordinates for the end effector and have it return a valid trajectory for the robot to attempt. We verified the safety of each of these trajectories in RViz and then let the robot take its course.
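A minimal sketch of that call through the moveit_commander Python interface, assuming the UR5's default planning group name, manipulator; the waypoint poses are the ones derived from the AR tag transforms above:

```python
# Plan and execute a trajectory through each end-effector waypoint.
import sys
import rospy
import moveit_commander

def follow_waypoints(waypoints):
    # waypoints: list of geometry_msgs/Pose targets for the end effector
    group = moveit_commander.MoveGroupCommander('manipulator')
    for pose in waypoints:
        group.set_pose_target(pose)
        group.go(wait=True)        # plan and execute; preview in RViz first
        group.stop()
        group.clear_pose_targets()

if __name__ == '__main__':
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('waypoint_follower')
```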