Fabric Smoothing

Daniel Seita, Aditya Ganapathi, Ryan Hoque, Minho Hwang, Edward Cen, Ajay Kumar Tanwani, Ashwin Balakrishna, Brijen Thananjeyan, Jeffrey Ichnowski, Nawid Jamali, Katsu Yamane, Soshi Iba, John Canny, Ken Goldberg

You can find the full paper here on arXiv. This is the same version that was under review, except with an appendix at the end. The paper has been accepted to the International Conference on Intelligent Robots and Systems (IROS), October 2020.

For any questions, contact Daniel (seita@berkeley.edu).

@inproceedings{seita_fabrics_2020,

author = {Daniel Seita and Aditya Ganapathi and Ryan Hoque and Minho Hwang and Edward Cen and Ajay Kumar Tanwani and Ashwin Balakrishna and Brijen Thananjeyan and Jeffrey Ichnowski and Nawid Jamali and Katsu Yamane and Soshi Iba and John Canny and Ken Goldberg},

title = {{Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor}},

booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},

year = {2020}

}


Code and Data

The code for the project is here:

The offline demonstrator data can be found here: https://drive.google.com/file/d/1CfQSW2GCLTPOei9g6FuhDERfh1w75Btr/view?usp=sharing (warning: 5.2 GB).

Run the following command to extract it, which produces these files:


$ tar -zxvf offline-demo-data.tar.gz
demos-2019-08-28-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-tier1_epis_2000_COMBINED.pkl
demos-2019-08-28-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-tier2_epis_2000_COMBINED.pkl
demos-2019-08-28-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-tier3_epis_2000_COMBINED.pkl
demos-2019-08-30-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-depthimg-False-tier1_epis_2000_COMBINED.pkl
demos-2019-08-30-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-depthimg-False-tier2_epis_2000_COMBINED.pkl
demos-2019-08-30-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-depthimg-False-tier3_epis_2000_COMBINED.pkl

This data is technically not needed for DAgger, but it is useful for getting the learner policy into a good configuration before running DAgger. The first three files are for depth images, and the last three are for color images. All are pickle files that store one list, where each item contains information about a specific trajectory.
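
If it helps, here is a minimal sketch of loading and inspecting one of these pickle files. The only structure assumed is what is described above (one list with one item per trajectory); anything beyond that should be checked against the repository documentation.

import pickle

path = "demos-2019-08-28-pol-oracle-seed-1337_to_1341-clip_a-True-delta_a-True-obs-blender-tier1_epis_2000_COMBINED.pkl"

with open(path, "rb") as f:
    trajectories = pickle.load(f)  # one list; each item is one trajectory

print("number of trajectories:", len(trajectories))
traj = trajectories[0]
# The per-trajectory format is not assumed here -- inspect it directly:
print(type(traj))
if isinstance(traj, dict):
    print(sorted(traj.keys()))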

Update, March 2020: for the second version of the paper, I regenerated a similar dataset with RGBD data. You can find it here for Tiers 1, 2, and 3, respectively:

These files will correspond to these file names (dated from February 09 and February 10):

demos-2020-02-09-16-31-pol-oracle-seed-1336_to_1340-obs-blender-depth-False-rgbd-True-tier1_epis_2000_COMBO.pkl
demos-2020-02-10-15-02-pol-oracle-seed-1336_to_1340-obs-blender-depth-False-rgbd-True-tier2_epis_2000_COMBO.pkl
demos-2020-02-10-15-05-pol-oracle-seed-1336_to_1340-obs-blender-depth-False-rgbd-True-tier3_epis_2000_COMBO.pkl

I strongly suggest using the above dataset if you are interested in using data from this work.

The repositories have a fair amount of documentation, but there are a lot of moving pieces to tie together. If you have questions about the code, email me (seita@berkeley.edu) with details about what you want to do, and I will do my best to help you out.

Video Submission

This is the video currently on the submission website. It is one minute long. Note that all robot videos (this one, and most others on this page unless specified otherwise) are sped up 2x. The file size was reduced using HandBrake.

2020-03-01-IROS-video-submission-handbrake.mp4

Here is an earlier video we made for the project. This earlier version did not include the RGBD baseline that appears in the latest version of the paper, and we used a slightly lighter fabric at the time. Unfortunately, we seem to have run out of the fabric type we originally used, so I chose the closest approximation from the fabrics we had. There were other slight differences between the older and newer setups, which I also tried to control to ensure as close a match as possible (e.g., the camera angle moved a bit, and we needed to change the foam rubber since ours changes color over time).

2019-09-19-ICRA-video-v03-separate_rw_handbrake.mp4

Videos (Simulated)

The videos below are taken from the rendering software we use to visualize the simulator. We use it to record videos of the simulator and to debug, but not for domain randomization; for that, we export our cloth meshes to Blender. Note that in the simulated videos the fabric plane is blue, whereas in the real setup the fabric plane is white foam rubber.

Oracle Corner Pulling Policy on a Tier 1 Starting State: it is able to complete the trajectory in one shot. This is pretty typical.

AUTOLAB-2019-08-12-tier1-oracle.mp4

Oracle Corner Pulling Policy on a Tier 2 Starting State: this shows how several actions can be necessary. The first two actions pull the top layer above the corner furthest from its target; in both cases it is the upper right fabric corner going to the upper right fabric plane target. Note: it looks like the pulls go "past" the corner, but that is an artifact of our renderer. A similar case occurs with Tier 3 starting states.

AUTOLAB-2019-08-12-tier2-oracle.mp4

Videos (Real)

I took the videos below in short segments with my phone and stitched them together using iMovie. The main reason is that we use an overhead camera, and our code waits for us to hit ENTER to capture a new color/depth image pair; a few seconds of image processing then follow before we can actually query the neural network policy. I didn't want the videos to be dominated by this waiting period.

Update, March 2020: I arrange these by tier, listing color or depth (or RGBD) policies on Tiers 1, 2, and 3. At the end, I show several Tier 3 RGBD videos from the March 2020 version of the paper.

Color Policy on Tier 1: I observed this behavior frequently. In some of the 20 trajectories, the policy reached the coverage threshold in just one action. (The same thing happens in simulation.) A sketch of the coverage metric follows the video below.

2019-09-12-tier1-color-ep-023-one-shot-success.MOV
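
As a rough illustration of what the coverage metric measures (the fraction of the fabric plane covered by fabric), here is a sketch that assumes binary masks segmented from the overhead image; the actual segmentation and metric code in the repository may differ.

import numpy as np

def coverage(fabric_mask, plane_mask):
    # Both arguments are boolean HxW masks from the overhead camera:
    # fabric_mask marks fabric pixels, plane_mask marks the fabric plane.
    covered = np.logical_and(fabric_mask, plane_mask).sum()
    return covered / float(plane_mask.sum())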

Depth Policy on Tier 1: Here's another Tier 1 starting state, this time with the depth-based policy. Notice that it requires several actions to achieve coverage. In general, depth-based policies do not finish trajectories in "one shot" as well as the color policies on Tier 1 starting states. (I try to reset the fabric so that it is similar for each depth-vs-color test pair.) This video is sped up 4x.

2019-09-12-tier1-depth-ep-022-all-04-acts-finally-fine-tuning.mp4

Color Policy on Tier 2: the upper left fabric corner is initially occluded, slightly underneath the fabric. The color-trained policy (as in simulation) pulls above it and then toward the upper left fabric plane target. It "over-pulls," but the next actions compensate, resulting in great coverage. This is a common pattern I've observed: slightly over-pulling at first can help later, because corners that are folded underneath end up closer to the actual fabric plane (i.e., foam rubber) targets.

2019-09-10-tier2-color-ep-007-all-03-acts-success-amazing.mp4

Depth Policy on Tier 2: Here's the depth policy. The main takeaway is that it takes some reasonable actions, but on the 9th (second-to-last) action it performs a poor one, which decreases coverage. Also, notice how it misses the fabric a few times, though the next action then touches it; this is perhaps largely because depth is somewhat less consistent across time steps than color images.

2019-09-11-tier2-depth-ep-014-all-10-acts-bad-at-end.mp4

Color Policy on Tier 3: Despite a highly wrinkled starting state, the learned policy gets excellent coverage. This kind of "back and forth" motion often helps the policy later fine-tune the fabric by pulling at exposed corners.

2019-09-11-tier3-color-ep-013-all-06-acts-great.mp4

Depth Policy on Tier 3: the actions are somewhat reasonable. It's not terrible, but not ideal. The depth policy is particularly susceptible to missing the fabric.

2019-09-13-tier3-depth-ep-017-all-10-acts-reasonable.mp4

RGBD Policy on Tier 3 (from the newest version of the paper, March 2020): this is a Tier 3 starting state, and the policy got excellent coverage.

t3_rgbd_ep016_acts06_2020-02-23.mp4

Second example, RGBD Policy on Tier 3 (from the newest version of the paper, March 2020): this is similar to the previous video.

t3_rgbd_ep019_acts06_2020-02-23.mp4

Failure Case: this is when the policy slightly misses grabbing the fabric. However, we measure the structural similarity of the images before and after the action, and if the two images are nearly identical, the next action moves closer to the center, which is usually sufficient for our purposes. Notice that this may require several actions, since there is no guarantee the next action will touch fabric. (An alternative would be to map the pick point to the nearest fabric pixel, though that is also subject to calibration error.) Here, we show back-to-back actions in the same trajectory; you'll see this kind of miss frequently. Note that this is shown with our older setup, where we taped some paper in the background. The newer setup, used for experiments, uses a flat piece of paper with a cutout so the foam rubber is visible. A sketch of the structural similarity check follows the videos below.

2019-09-09-tier1-color-ep-010-act-01-miss.MOV
2019-09-09-tier1-color-ep-010-act-02-one-shot.MOV
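
Here is a minimal sketch of that check, assuming uint8 grayscale overhead images and scikit-image's SSIM; the 0.95 threshold and the step size toward the center are illustrative values, not the ones used in the paper.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def next_pick_point(before, after, pick_xy, center_xy, thresh=0.95, step=0.5):
    # before/after: uint8 grayscale HxW images from the overhead camera.
    pick = np.asarray(pick_xy, dtype=float)
    center = np.asarray(center_xy, dtype=float)
    if ssim(before, after) > thresh:    # nearly identical images: likely a miss
        pick += step * (center - pick)  # retry closer to the fabric center
    return pick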

Interesting Observation Regarding Time Steps: I limited trajectories to 10 time steps, but this was somewhat of a heuristic. The video below shows a color-trained, Tier 2 policy. The actions it takes are reasonable; indeed, going strictly by the corner pulling demonstrator, the actions in the middle of the trajectory that pull above the corner are what the demonstrator would do. By the 10th time step the fabric has gone back and forth, but given more time the policy might have reached 92% coverage, particularly given how good the color policy is at fine-tuning fabric. A sketch of this trajectory loop follows the video.

2019-09-11-tier2-color-ep-014-all-10-acts-so-close.mp4
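
For concreteness, here is a sketch of the trajectory loop implied above: at most 10 actions, stopping early once coverage reaches the 92% threshold. The env and policy objects are placeholders, not the repository's actual API.

MAX_ACTIONS = 10           # heuristic trajectory length limit
COVERAGE_THRESHOLD = 0.92  # coverage level at which we stop early

def run_trajectory(env, policy):
    obs = env.reset()
    for _ in range(MAX_ACTIONS):
        obs = env.step(policy(obs))  # pick-and-pull action from the learned policy
        if env.coverage() >= COVERAGE_THRESHOLD:
            break
    return env.coverage()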

Transfer to Yellow Fabric: The color-based policy trained on Tier 1 performs terribly when deployed on the following yellow fabric. We did not observe this behavior when the fabric was blue.

2019-09-12-tier1-color-on-yellow-ep-000-all-10-acts-BAD.mp4

Calibration Video

The following (sped-up!) video shows how we calibrated the robot: we command it to the corners of a checkerboard and visually inspect whether it is accurate enough. A sketch of this procedure follows the video.

2019-08_26_calibration_checkerboard_davinci.mp4
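
Here is a minimal sketch of that procedure, under loud assumptions: the inner-corner grid size, the image filename, and the pixel-to-robot affine transform are all placeholders, and the transform would in practice be fit from corresponding pixel/robot points.

import cv2
import numpy as np

# Placeholder affine pixel -> workspace transform (fit beforehand in practice).
A = np.array([[0.001, 0.0],
              [0.0, 0.001]])
b = np.array([-0.3, -0.3])

def pixel_to_robot(u, v):
    return A @ np.array([u, v]) + b

img = cv2.imread("overhead_checkerboard.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (7, 7))  # inner-corner grid size

if found:
    for (u, v) in corners.reshape(-1, 2):
        x, y = pixel_to_robot(u, v)
        # Command the end-effector to (x, y), then visually inspect the error.
        print("pixel (%.0f, %.0f) -> robot (%.3f, %.3f)" % (u, v, x, y))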