Here is a video illustrating the lidar perception pipeline I wrote at Stanford for Junior, the self-driving car.
The raw lidar data is first analysed to find the ground and the points that lie above it. Nearby above-ground points are then aggregated into blobs. The next step is tracking, where each blob is either matched to an existing track or used to start a new one; this provides temporal consistency and velocity estimation. Finally, a classifier decides whether each track is a pedestrian, a bike or a vehicle (this part was done by Alex Teichman).
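To make the blob and tracking stages more concrete, here is a minimal sketch of the idea, not Junior's actual code: above-ground points are grouped by proximity, and each blob's centroid is greedily matched to the nearest existing track. The thresholds, the flat-ground assumption, and the nearest-centroid association are all simplifications I am assuming for illustration; the real pipeline used proper ground estimation and filtered tracks to get velocity.

```python
import numpy as np
from scipy.spatial import cKDTree

GROUND_Z = 0.2        # assumed height threshold: points below this count as ground
CLUSTER_RADIUS = 0.5  # metres; points closer than this end up in the same blob
MATCH_RADIUS = 1.5    # metres; a blob this close to a track is associated with it

def segment_blobs(points):
    """Drop ground points, then group the rest by single-linkage proximity."""
    above = points[points[:, 2] > GROUND_Z]
    tree = cKDTree(above[:, :2])
    labels = -np.ones(len(above), dtype=int)
    next_label = 0
    for i in range(len(above)):
        if labels[i] >= 0:
            continue
        # Flood-fill neighbours within CLUSTER_RADIUS into the same blob.
        stack = [i]
        labels[i] = next_label
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(above[j, :2], CLUSTER_RADIUS):
                if labels[k] < 0:
                    labels[k] = next_label
                    stack.append(k)
        next_label += 1
    return [above[labels == l] for l in range(next_label)]

def associate(blobs, tracks):
    """Greedy nearest-centroid matching of blobs to existing track positions."""
    matches, new_tracks = {}, []
    for i, blob in enumerate(blobs):
        c = blob[:, :2].mean(axis=0)
        if tracks:
            dists = [np.linalg.norm(c - t) for t in tracks]
            j = int(np.argmin(dists))
            if dists[j] < MATCH_RADIUS:
                matches[i] = j
                continue
        new_tracks.append(c)   # no track nearby: start a new one
    return matches, new_tracks
```

In a real tracker each track would also carry a motion filter (e.g. a Kalman filter) so that successive matches yield a smoothed position and a velocity estimate, which is what gives the pipeline its temporal consistency.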
In this next video, the lidar data is associated with camera images, producing an RGBD (colored) point cloud. The data was accumulated over a few seconds while driving through the Volkswagen Automotive Innovation Lab (VAIL) at Stanford, where Junior and Shelly resided at the time.
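The association works by projecting each lidar point into the camera image and reading off the pixel colour. Below is a minimal sketch under assumed names: `K` is the camera intrinsic matrix and `R`, `t` the lidar-to-camera extrinsics, all of which would come from the car's calibration rather than from this post.

```python
import numpy as np

def colorize(points_lidar, image, K, R, t):
    """Return an (M, 6) array of x, y, z, r, g, b for points that project into the image."""
    # Transform lidar points into the camera frame; keep only points in front of the camera.
    cam = points_lidar @ R.T + t
    in_front = cam[:, 2] > 0.1
    cam = cam[in_front]
    # Pinhole projection: pixel = K * (X/Z, Y/Z, 1).
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[valid], u[valid]]
    return np.hstack([points_lidar[in_front][valid], rgb])
```

Accumulating these coloured points over a few seconds, with each scan placed using the car's pose estimate, is what builds up the dense point cloud of the VAIL shown in the video.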