1st place in the DARPA Robotics Challenge Finals, “Team KAIST,” June 7, 2015.
Teams start arriving to set up their garages in preparation for the Robotics Competition. Twenty-five of the top robotics organizations in the world will gather to compete for $3.5 million in prizes as they attempt a simulated disaster-response course. The event is free to attend and open to the public. It takes place at Fairplex (home of the LA County Fair) in Pomona, California, just east of downtown Los Angeles.


Team KAIST (Hubo Lab., RCV Lab., Rainbow Co.)
DRC-HUBO+ took 1st place in the 2015 DARPA Robotics Challenge Finals. Hubo Lab. and Rainbow Co. designed DRC-HUBO+, the most capable robot in the HUBO series, to handle the various tasks of a disaster scenario. It can also transform from a standing posture to a kneeling posture for faster movement.

Team KAIST mainly consists of two laboratories at KAIST: 13 students from Hubo Lab. and 5 students from RCV Lab. Hubo Lab., supervised by Prof. Jun Ho Oh, has played a leading role in the development of the HUBO platform and its basic functions, such as walking, manipulation, and control algorithms. RCV Lab., supervised by Prof. In So Kweon, has developed the algorithms for the vision system, such as sensor calibration methods, object detection, and pose estimation.

What did RCV Lab. do?
In the DARPA Robotics Challenge (DRC), the Robotics and Computer Vision Laboratory (RCV Lab.) designed the eyes and the brain of HUBO, enabling it to understand its surroundings and maintain the level of autonomy crucial for the given mission scenario of a harsh disaster event.
RCV Lab. was established in 1993 to analyze the cognitive capabilities of the human visual system and to develop high-performance computer vision systems. We have developed and commercialized many automatic systems, and have built vision systems for intelligent mobile robots and autonomous vehicles. We research and apply visual features to intelligent mobile robots for robust robot vision. We also research vision-based localization for network-based intelligent robots, as well as embedded-system technology to commercialize our vision systems. In addition, we carry out various national-defense projects and develop vision systems in cooperation with industry partners.
The supervisor of our lab., Prof. In So Kweon, has co-authored several books, including “Metric Invariants for Camera Calibration,” and more than 300 technical papers. He served as a founding Associate Editor-in-Chief of the “International Journal of Computer Vision and Applications,” and has been an Editorial Board Member of the “International Journal of Computer Vision” since 2005.

Hubo Head System 
RCV Lab. developed the data acquisition program for the Hubo head system and its calibration program. The Hubo head system has one Light Detection and Ranging (LIDAR) sensor, one GigE camera (1288x964), and one step motor with an encoder. As shown in the figure on the right, the Hubo head acquires a 3D point cloud by sweeping the LIDAR and captures an image at a target angle. The user can control the sweeping range, the sweeping speed, and the target angle at which an image is captured. With these features, we can obtain a full 3D point cloud of the target area and control the sparsity of the point cloud through the motor sweeping speed. To use the Hubo head system, we must calibrate the sensor system. It has three coordinate frames: the LIDAR sensor frame (PL), the camera sensor frame (PC), and the motor frame (PM). We transform PL and PC into PM, calibrating the system using the method of [1].
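The merging step described above can be sketched as follows. This is a minimal illustration, not the team's actual code: the extrinsic transform names and the identity placeholder values are our assumptions, standing in for the result of the calibration method of [1], and we assume the head tilts about its x-axis.

```python
import numpy as np

# Hypothetical extrinsics (placeholders for the output of the calibration
# in [1]): rigid 4x4 transforms mapping points from the LIDAR frame (PL)
# and the camera frame (PC) into the common motor frame (PM).
T_motor_from_lidar = np.eye(4)
T_motor_from_camera = np.eye(4)

def to_homogeneous(points):
    """Append a 1 to each 3D point so a 4x4 transform can be applied."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def transform_points(T, points):
    """Apply a 4x4 rigid transform T to an (N, 3) array of points."""
    return (T @ to_homogeneous(points).T).T[:, :3]

def sweep_to_motor_frame(scan_lines, tilt_angles, T_motor_from_lidar):
    """Merge LIDAR scan lines captured at different tilt angles into one
    point cloud in the motor frame. Each scan line is an (N, 3) array in
    the LIDAR frame; tilt_angles are encoder readings in radians."""
    merged = []
    for scan, angle in zip(scan_lines, tilt_angles):
        # Assumed: the tilting head rotates about its x-axis by the
        # encoder angle.
        c, s = np.cos(angle), np.sin(angle)
        T_tilt = np.array([[1, 0,  0, 0],
                           [0, c, -s, 0],
                           [0, s,  c, 0],
                           [0, 0,  0, 1]], dtype=float)
        merged.append(transform_points(T_tilt @ T_motor_from_lidar, scan))
    return np.vstack(merged)
```

Sweeping slowly yields more scan lines per degree, which is how the motor speed controls the sparsity of the merged cloud.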

Datasets[link]
We have released our datasets. Please visit the project page and leave your comments.

Overview of the System Architecture
The DRC-HUBO+ system consists of three parts: the computing system inside HUBO, the field computing system, and the operator control station (OCS). The computing system inside HUBO comprises two small computers: one controls HUBO's motions, and the other acquires and compresses sensor data. The current motion and sensor information are sent to the field computing system via a wireless network. The field computing system computes all candidate motion plans, object detections, and pose estimates for a target object. Its outputs are in turn sent to the OCS via a limited network. Based on these outputs, the operators decide on actions and send commands.
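The compression step on the onboard sensor computer might look like the following sketch. The quantization scheme and the use of zlib are our illustrative choices for shrinking a point cloud before it crosses the bandwidth-limited link; the source does not specify what compression the team actually used.

```python
import zlib
import numpy as np

def pack_point_cloud(points, level=6):
    """Quantize an (N, 3) point cloud (meters) to millimeters and
    zlib-compress it so it fits through a bandwidth-limited link.
    Quantization is lossy: 1 mm resolution."""
    mm = np.round(points * 1000.0).astype(np.int32)
    return zlib.compress(mm.tobytes(), level)

def unpack_point_cloud(blob):
    """Inverse of pack_point_cloud: decompress and restore meters."""
    mm = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return mm.reshape(-1, 3).astype(np.float32) / 1000.0
```

A round trip loses at most half a millimeter per coordinate, which is well below typical LIDAR noise.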

Data-flow Diagram for Vision System

Vision Algorithm Modules for DRC Tasks (will be provided to JFR)

Image guidance for manual driving
Valve pose estimation
Drill pose estimation
Plane detection and pose estimation for terrain and stairs
Laser depth upsampling
Object detector using CNN (For strategic reasons, the object detector was not used at the challenge.)
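As an example of the plane-detection module listed above, dominant planes in a point cloud can be found with RANSAC. The following is a minimal sketch, not the algorithm the team fielded: the iteration count and inlier threshold are illustrative values, and a real terrain module would additionally segment multiple planes and estimate each one's pose.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """Fit a dominant plane n.p + d = 0 to an (N, 3) point cloud with
    RANSAC. Returns ((n, d), inlier_mask). Thresholds are illustrative,
    not the values used at the DRC."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample three distinct points and form their plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample; skip
            continue
        n = n / norm
        d = -n @ p0
        # Count points within the distance threshold of the plane.
        inliers = np.abs(points @ n + d) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

Once a plane is found, its normal and the inlier footprint give the pose information needed to pick footholds on terrain blocks or stair treads.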


DRC team members of RCV Lab.
Prof. In So Kweon (3rd from left): Vision team supervisor.
Inwook Shim (5th from left): Chief of the vision team of Team KAIST; overall vision-system design, Hubo head system grabber, toehold detection and pose estimation for terrain and stairs.
Seunghak Shin (1st from left): Streaming camera grabber, object detector using CNN, general pose estimation for the surprise task.
Yunsu Bok (6th from left): Hubo head system calibration, valve pose estimation.
Kyungdon Joo (2nd from left): Streaming camera calibration (image guidance for manual driving), drill pose estimation.
Dong-Geol Choi (4th from left): Team management, Hubo head system calibration.