It worked fairly well: the robot identified and navigated around obstacles as we expected. However, the motion wasn't as smooth as we would have liked, and it occasionally made rapid back-and-forth jerking movements. Multiple obstacles close together also caused the robot to thrash and become confused.
See below. Start the video on the left at 20 seconds and the video on the right at 8 seconds so they sync up. The (faint) white pixels in the image represent points detected by the LIDAR sensor; the green squares represent potential obstacles (of a pre-specified minimum size); and the red line represents the planned path.
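The obstacle detection above can be sketched roughly as follows: group consecutive LIDAR points that lie close together, then keep only groups that reach the pre-specified minimum size. This is a minimal illustrative sketch, not our actual implementation; the function name, the `gap` threshold, and the `min_points` cutoff are all assumptions chosen for the example.

```python
import math

def cluster_obstacles(points, gap=0.15, min_points=3):
    """Group consecutive (x, y) LIDAR points into clusters of nearby points.

    Clusters smaller than `min_points` are discarded, analogous to the
    pre-specified minimum obstacle size mentioned above. The thresholds
    here are illustrative guesses, not the values we used.
    """
    clusters, current = [], []
    for p in points:
        # A jump larger than `gap` meters ends the current cluster.
        if current and math.dist(p, current[-1]) > gap:
            if len(current) >= min_points:
                clusters.append(current)
            current = []
        current.append(p)
    if len(current) >= min_points:
        clusters.append(current)
    return clusters
```

A stray single return (like a glint or a far wall edge) forms a one-point cluster and is dropped, while a run of tightly spaced points survives as one candidate obstacle.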
The black image represents a 4 meter by 4 meter square with the robot at the center. The LIDAR's actual range is 3.5 meters, but we crop the points we consider to a maximum radius of 2 meters. The path is planned 1 meter out.
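The mapping from raw scan to image can be sketched like this: convert each polar LIDAR return to Cartesian coordinates in the robot frame, drop returns beyond the 2 meter crop radius, and scale the result into a square image spanning 4 meters with the robot at the center pixel. This is an illustrative sketch under those assumptions; the function name, image size, and parameter names are invented for the example, not taken from our code.

```python
import math

def scan_to_image_points(ranges, angle_min, angle_increment,
                         max_radius=2.0, image_size=200, span=4.0):
    """Convert a LIDAR scan (list of range readings) to pixel coordinates
    in a `span` x `span` meter image centered on the robot.

    Returns beyond `max_radius` are cropped, mirroring the 2 m cutoff
    described above. `image_size` (pixels) is an arbitrary choice here.
    """
    scale = image_size / span          # pixels per meter
    cx = cy = image_size // 2          # robot sits at the image center
    pixels = []
    for i, r in enumerate(ranges):
        if not (0.0 < r <= max_radius):
            continue                   # crop out-of-range or invalid returns
        theta = angle_min + i * angle_increment
        x = r * math.cos(theta)        # meters, robot frame
        y = r * math.sin(theta)
        px = round(cx + x * scale)     # +x maps right
        py = round(cy - y * scale)     # +y maps up (image rows grow downward)
        pixels.append((px, py))
    return pixels
```

For example, a scan of `[1.0, 3.0, 0.5]` starting at angle 0 with a 90 degree increment keeps the 1.0 m and 0.5 m returns and crops the 3.0 m one.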