Conclusion

Discussion

Our project successfully implements reliable, convenient batch recording of training trajectories, uses dynamic time warping to normalize the trajectories to a common temporal frame, computes the variance of the signals at each sample, and forms a set of waypoints that is passed to a cartesian planner to execute the learned trajectory. We demonstrated the above via the task of putting an IKEA table leg into place. As mentioned earlier, we did not have time to implement much of the modelling or vision components and are leaving those for next semester.
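As a concrete illustration of the normalization step, the sketch below shows a basic dynamic time warping alignment between two recorded 1-D signals. This is a minimal, self-contained version for illustration only, not our actual recorder code; the function and variable names are ours.

    # Minimal dynamic time warping sketch (illustrative; not our exact recorder code).
    # Aligns two 1-D signals and returns the warping path mapping one onto the other.
    import numpy as np

    def dtw_path(a, b):
        """Return the DTW cost and the index pairs aligning signals a and b."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        # Backtrack from the corner to recover the alignment path.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return cost[n, m], path[::-1]

    # Example: align a shorter demonstration with a longer one.
    total_cost, alignment = dtw_path(np.array([0.0, 0.1, 0.4, 0.9]),
                                     np.array([0.0, 0.05, 0.1, 0.45, 0.85, 0.9]))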

Difficulties

The primary difficulties encountered in the project include the following:

  • A lot of setup and 'infrastructure' work at the start took more time than anticipated to get working, including account configuration, batch recording, writing and reading CSV files, and debugging dynamic time warping.
  • Limited time to test ideas on Baxter, especially toward the end of the semester, due to resource contention: the biggest EE215/125 class in history sharing a single Baxter with grippers.
  • Various errors and problems with MoveIt, compounded by a lack of documentation. In particular, while attempting to plan cartesian paths through waypoints, we often encountered inconsistent planner results (e.g., for similar starting configurations, each of which we had verified the joints could be moved from, it sometimes found a plan and sometimes did not) and threshold-exceeded errors (e.g., "Exceeded Error Threshold on [joint]", "Exceeded Max Goal Velocity Threshold"); a minimal sketch of the planning call appears after this list.
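For context, the kind of planning call we were making looks roughly like the sketch below, assuming the moveit_commander Python interface; the group name 'left_arm' and the parameter values are illustrative.

    # Minimal sketch of the cartesian planning call that gave us trouble
    # (assumes moveit_commander is set up for Baxter's left arm; values are illustrative).
    import sys
    import rospy
    import moveit_commander

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('cartesian_plan_example')

    group = moveit_commander.MoveGroupCommander('left_arm')

    # 'waypoints' would be a list of geometry_msgs/Pose built from the learned trajectory.
    waypoints = [group.get_current_pose().pose]

    # eef_step: interpolation resolution in meters; jump_threshold: 0.0 disables the jump check.
    plan, fraction = group.compute_cartesian_path(waypoints, eef_step=0.01, jump_threshold=0.0)

    if fraction < 1.0:
        rospy.logwarn('Planner only covered %.0f%% of the waypoints', fraction * 100)

    # Execute whatever portion of the path was planned.
    group.execute(plan)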

Flaws and Hacks

The primary hack we used for our system was to get the cartesian planner working. We initially passed all the points from the average normalized trajectory to the cartesian planner, specifying an eef_step and jump_threshold. However, we observed only extremely slow, jerky executions, which perplexed us; modifying the parameters governing path interpolation resolution and the jump threshold didn't seem to have any effect. Furthermore, the planner would often return paths passing through only a small fraction of the waypoints, or it would return paths through all the waypoints but execution would fail at runtime with joint error threshold or velocity threshold errors.

To get around this, we created our own sampling function that picks a limited number of waypoints from the signal: a waypoint is kept only when its euclidean distance from the previously kept waypoint exceeds a threshold. We passed this smaller set of sampled waypoints to the planner. This got the arm to execute paths reasonably quickly and smoothly, but it still occasionally suffered from some of the errors above.

After discussing this with our helpful adviser, Aaron, we realized a couple of things that made everything make sense. First, when execution stopped early and we got threshold errors, it meant the arm's real-world joint positions were lagging too far behind the planned joint positions. This can easily happen when the end effector's starting position and orientation are too different from the first waypoint, causing execution to give up immediately. We fixed this by asking the robot for its current real-world end effector transform relative to the base and prepending it to the waypoint list, which solved our problems with error thresholds. Second, waypoints can be specified not only with position and orientation but also with velocity. We had not been specifying any velocities, since we didn't know that was possible, so the planner had been assuming we wanted a velocity of 0 at each waypoint, hence the extremely slow and jagged execution. Specifying the velocity at each waypoint would let us drop our hack of sampling a smaller, spaced-out set of waypoints ourselves. We didn't have time to implement this second fix, but we leave it for next semester.
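The sketch below captures the two workarounds we did implement: subsampling waypoints by euclidean distance and prepending the arm's current end effector pose. It is a simplified stand-in for our actual code; the function names and the 0.05 m threshold are illustrative, and the poses are assumed to be geometry_msgs/Pose objects.

    # Simplified sketch of our waypoint subsampling hack plus the current-pose fix
    # (names and the distance threshold are illustrative, not our exact code).
    import numpy as np

    def subsample_waypoints(poses, min_dist=0.05):
        """Keep a pose only once it is at least min_dist (meters) from the last kept pose."""
        kept = [poses[0]]
        for pose in poses[1:]:
            last = kept[-1]
            dist = np.linalg.norm([pose.position.x - last.position.x,
                                   pose.position.y - last.position.y,
                                   pose.position.z - last.position.z])
            if dist >= min_dist:
                kept.append(pose)
        return kept

    def build_waypoints(group, learned_poses):
        """Prepend the arm's current end effector pose so execution starts where the arm is."""
        current = group.get_current_pose().pose
        return [current] + subsample_waypoints(learned_poses)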

Another quick hack currently in use was made to get the grippers working with the execution of learned trajectories. We have been recording the gripper status (how open or closed it is) along with our training trajectories, and in the long term we will likely set a threshold that opens or closes the gripper based on those values at execution time. For now, though, we've simply set up the code so that the gripper opens and closes at predefined points between trajectory executions.
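A minimal sketch of the planned threshold-based behaviour is shown below, assuming the baxter_interface gripper API; the recorded values and the 50.0 threshold are made-up illustrative numbers, not data from our system.

    # Sketch of driving the gripper from recorded gripper positions (planned, not yet implemented).
    # Assumes baxter_interface; the recorded values and the 50.0 threshold are illustrative.
    import rospy
    import baxter_interface

    rospy.init_node('gripper_from_recording')
    gripper = baxter_interface.Gripper('left')
    gripper.calibrate()

    # Example recorded gripper positions (0 = fully closed, 100 = fully open).
    recorded_positions = [100.0, 98.0, 40.0, 5.0, 4.0, 60.0, 95.0]

    for value in recorded_positions:
        if value < 50.0:
            gripper.close()
        else:
            gripper.open()
        rospy.sleep(0.1)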

Future Work

Next semester, we will be continuing this work. In particular, we plan on implementing vision using Baxter's cameras to recognize objects and figure out where they are in the world using homographies. This will allow us to 'perturb' the start and end points of learned trajectories appropriately to pick and place objects in accordance with varied starting configurations of the environment (see Billard, 2008). For the sake of simplicity, we may start this component with fiducial markers in the form of AR tags and progress to object recognition from there, possibly using other sensors (e.g., Kinect).
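As a rough sketch of this planned vision step, the example below computes a homography from AR tag corner correspondences with OpenCV and uses it to map an object's pixel location onto the table plane; all point values here are made up for illustration, and this is only an assumed starting point, not a committed design.

    # Rough sketch of the planned homography step (all values are made up for illustration).
    import cv2
    import numpy as np

    # Pixel corners of a detected AR tag and their known positions on the table plane (meters).
    image_pts = np.array([[320, 240], [420, 240], [420, 340], [320, 340]], dtype=np.float32)
    table_pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [0.0, 0.1]], dtype=np.float32)

    H, _ = cv2.findHomography(image_pts, table_pts)

    # Project an object's pixel location onto the table plane.
    object_px = np.array([[[380.0, 300.0]]], dtype=np.float32)
    object_on_table = cv2.perspectiveTransform(object_px, H)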

Another major component we plan to focus on next semester is modelling tasks and generating new trajectories for new situations. In particular, we are currently investigating Gaussian Mixture Models (GMMs) for learning by demonstration as described by Calinon, 2007.
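As a preliminary illustration of the GMM direction, the sketch below fits a Gaussian mixture to stacked samples from several demonstrations using scikit-learn; the random placeholder data and the choice of six components are assumptions for illustration, not results or a committed design.

    # Preliminary sketch of fitting a GMM to demonstration data (an assumed starting point only).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Stack (time, x, y, z) samples from all normalized demonstrations into one array.
    demos = [np.random.rand(200, 4) for _ in range(5)]  # placeholder for real recorded data
    data = np.vstack(demos)

    # Fit a mixture whose components roughly segment the task in time and space.
    gmm = GaussianMixture(n_components=6, covariance_type='full', random_state=0)
    gmm.fit(data)

    # Component assignments indicate which phase of the task each sample belongs to.
    phase = gmm.predict(data)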