I work on real-time and large-scale localization and mapping systems for robotics; in particular, my expertise is in structure from motion (SfM), simultaneous localization and mapping (SLAM), and sensor fusion. I have a solid mathematical and engineering background, with 10+ years of hands-on experience across various sensor modalities (monocular cameras, stereo cameras, RGB-D, LiDAR, GPS, IMU) and with making them work efficiently and robustly on different platforms (personal computers, mobile devices, drones, and cars).
I am currently a senior technical specialist at Faraday Future, working with a team on the localization, mapping, and sensor calibration modules of our ADAS and autonomous driving systems. My extensive technical background also enables me to collaborate and coordinate with people working in different areas across the company.
Previously, I worked as a researcher at the Baidu Institute of Deep Learning on LiDAR-based mapping and localization for autonomous ground and aerial vehicles. Before joining Baidu Research, I received my PhD in computer science from Georgia Tech and my BS and MS degrees from National Taiwan University. During that time, I published papers on large-scale structure from motion (SfM), simultaneous localization and mapping (SLAM), motion segmentation, and visual tracking. My PhD research focused on using support theory to derive novel subgraph preconditioners that improve the efficiency of solving large-scale SLAM and SfM problems.
Previous Projects