Ki-In Na, Robotics Engineer

Ki-In Na is a principal researcher in the Mobility Robot Research Division at the Electronics and Telecommunications Research Institute (ETRI) in the Republic of Korea. He received his B.S. degree from the Mechanical Engineering Department of the Pohang University of Science and Technology (POSTECH), and his M.S. and Ph.D. degrees from the Robotics Program of the Korea Advanced Institute of Science and Technology (KAIST) under the supervision of Jong-Hwan Kim in the Robot Intelligence Technology Laboratory (RIT Lab).

RESEARCH INTERESTS

RECENT RESEARCH

SPU-BERT: We propose a fast multi-trajectory prediction model that incorporates two non-recursive BERTs for multi-goal prediction (MGP) and trajectory-to-goal prediction (TGP). MGP first predicts multiple goals through a generative model, and TGP then generates trajectories that approach the predicted goals. SPU-BERT can simultaneously understand movement, social interaction, and scene context from trajectories and semantic maps using a single Transformer encoder, providing explainable results as evidence of socio-physical understanding.
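
As a rough illustration of the goal-then-trajectory idea, the sketch below encodes an observed trajectory with a single Transformer encoder, predicts several candidate goals, and then generates one trajectory per goal. The module layout, dimensions, and head designs are illustrative PyTorch assumptions, not the SPU-BERT implementation.

    # Minimal goal-then-trajectory sketch (illustrative, not the SPU-BERT code).
    # Positional encoding, social/scene inputs, and losses are omitted for brevity.
    import torch
    import torch.nn as nn

    class GoalThenTrajectory(nn.Module):
        def __init__(self, d_model=64, n_goals=20, pred_len=12):
            super().__init__()
            self.n_goals, self.pred_len = n_goals, pred_len
            self.embed = nn.Linear(2, d_model)                     # (x, y) -> token
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.goal_head = nn.Linear(d_model, n_goals * 2)       # MGP: K candidate goals
            self.traj_head = nn.Linear(d_model + 2, pred_len * 2)  # TGP: goal-conditioned path

        def forward(self, obs_traj):                               # obs_traj: (B, obs_len, 2)
            h = self.encoder(self.embed(obs_traj))                 # encode observed motion
            ctx = h[:, -1]                                         # context from last token
            goals = self.goal_head(ctx).view(-1, self.n_goals, 2)  # (B, K, 2) goal candidates
            trajs = []
            for k in range(self.n_goals):                          # one trajectory per goal
                cond = torch.cat([ctx, goals[:, k]], dim=-1)
                trajs.append(self.traj_head(cond).view(-1, self.pred_len, 2))
            return goals, torch.stack(trajs, dim=1)                # (B, K, 2), (B, K, T, 2)

    model = GoalThenTrajectory()
    goals, trajs = model(torch.randn(4, 8, 2))                     # 4 pedestrians, 8 observed steps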


IMM-MIX: We propose IMM-based adaptive target tracking with heterogeneous velocity representations and linear/curvilinear motion models. It integrates four motion models with different state definitions and dimensions so that they are fully complementary across all types of motion. We experimentally demonstrate the accuracy of the proposed method for various motion patterns on two types of datasets: synthetic and real.
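
The sketch below shows one cycle of a standard Interacting Multiple Model (IMM) filter with two constant-velocity Kalman filters that differ only in process noise. The actual IMM-MIX work mixes models with different state definitions and dimensions; this sketch keeps a shared 4D state [x, y, vx, vy] for brevity, and all matrices and values are illustrative assumptions.

    # One IMM cycle: mixing, model-matched Kalman update, model-probability update.
    import numpy as np

    dt = 0.1
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])        # position-only measurement
    R = np.eye(2) * 0.05
    Q = [np.eye(4) * 0.01, np.eye(4) * 1.0]           # gentle vs. maneuvering model
    PI = np.array([[0.95, 0.05], [0.05, 0.95]])       # model transition probabilities

    def imm_step(xs, Ps, mu, z):
        # 1) Mixing: blend each model's prior with the others using mixing weights.
        c = PI.T @ mu
        w = (PI * mu[:, None]) / c[None, :]            # w[i, j] = P(model i | model j now)
        x_mix = [sum(w[i, j] * xs[i] for i in range(2)) for j in range(2)]
        P_mix = [sum(w[i, j] * (Ps[i] + np.outer(xs[i] - x_mix[j], xs[i] - x_mix[j]))
                     for i in range(2)) for j in range(2)]
        xs_new, Ps_new, lik = [], [], np.zeros(2)
        for j in range(2):
            # 2) Model-matched Kalman predict/update.
            x, P = F @ x_mix[j], F @ P_mix[j] @ F.T + Q[j]
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            y = z - H @ x
            xs_new.append(x + K @ y)
            Ps_new.append((np.eye(4) - K @ H) @ P)
            # 3) Measurement likelihood under this model.
            lik[j] = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(np.linalg.det(2 * np.pi * S))
        mu_new = c * lik
        mu_new /= mu_new.sum()                         # 4) Updated model probabilities.
        x_out = sum(mu_new[j] * xs_new[j] for j in range(2))
        return xs_new, Ps_new, mu_new, x_out

    xs0, Ps0 = [np.zeros(4), np.zeros(4)], [np.eye(4), np.eye(4)]
    xs1, Ps1, mu1, x_est = imm_step(xs0, Ps0, np.array([0.5, 0.5]), np.array([0.1, -0.05]))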


3D DATMO: We propose real-time, accurate, three-dimensional (3D) multi-pedestrian detection and tracking using a 3D light detection and ranging (LiDAR) point cloud in crowded environments. The pedestrian detection quickly segments a sparse 3D point cloud into individual pedestrians using a lightweight convolutional autoencoder and a connected-component algorithm. The multi-pedestrian tracking identifies the same pedestrians across consecutive frames by considering motion and appearance cues.
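
To illustrate the tracking side, the sketch below matches detections to existing tracks with a cost that blends a motion cue (distance to the predicted position) and an appearance cue (feature distance), solved with the Hungarian algorithm. The weighting, distance gate, and feature source are assumptions, not the published method.

    # Track-detection association with blended motion/appearance costs (illustrative).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(track_pred_xy, track_feat, det_xy, det_feat, w_motion=0.7, gate=1.5):
        # Pairwise motion cost: Euclidean distance between predicted and detected centroids.
        motion = np.linalg.norm(track_pred_xy[:, None, :] - det_xy[None, :, :], axis=-1)
        # Pairwise appearance cost: cosine distance between per-object feature vectors.
        tn = track_feat / np.linalg.norm(track_feat, axis=1, keepdims=True)
        dn = det_feat / np.linalg.norm(det_feat, axis=1, keepdims=True)
        appearance = 1.0 - tn @ dn.T
        cost = w_motion * motion + (1.0 - w_motion) * appearance
        rows, cols = linear_sum_assignment(cost)       # Hungarian assignment
        # Keep only matches whose motion cost passes a simple distance gate.
        return [(r, c) for r, c in zip(rows, cols) if motion[r, c] < gate]

    matches = associate(np.array([[0.0, 0.0], [3.0, 1.0]]),      # predicted track centroids (m)
                        np.random.rand(2, 16),                   # track appearance features
                        np.array([[0.2, 0.1], [5.0, 5.0]]),      # detected centroids (m)
                        np.random.rand(2, 16))                   # detection appearance features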

SPriorSeg: We propose fast and accurate point-level object segmentation for point clouds by integrating the strengths of a deep convolutional auto-encoder and a region growing algorithm. Semantic segmentation using the lightweight convolutional auto-encoder generates a semantic prior by labeling a spherical projection image of the point cloud pixel-by-pixel with road-object classes. The region growing algorithm then achieves pixel-wise instance segmentation by taking into account the semantic prior and geometric features between neighboring pixels.
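
The sketch below illustrates the region-growing step on a spherical projection (range) image: neighboring pixels are grouped into the same instance only when they share a semantic label and their range difference stays small. The image layout, background class, and 0.3 m threshold are illustrative assumptions.

    # Instance growth over a range image guided by a semantic prior (illustrative).
    import numpy as np
    from collections import deque

    def grow_instances(range_img, sem_img, max_range_gap=0.3):
        H, W = range_img.shape
        inst = np.full((H, W), -1, dtype=np.int32)     # -1 = unassigned pixel
        next_id = 0
        for su in range(H):
            for sv in range(W):
                if inst[su, sv] != -1 or sem_img[su, sv] == 0:   # 0 = background class
                    continue
                inst[su, sv] = next_id                 # breadth-first growth from this seed
                queue = deque([(su, sv)])
                while queue:
                    u, v = queue.popleft()
                    for du, dv in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        nu, nv = u + du, v + dv
                        if not (0 <= nu < H and 0 <= nv < W) or inst[nu, nv] != -1:
                            continue
                        same_class = sem_img[nu, nv] == sem_img[u, v]
                        close_range = abs(range_img[nu, nv] - range_img[u, v]) < max_range_gap
                        if same_class and close_range:             # semantic prior + geometry
                            inst[nu, nv] = next_id
                            queue.append((nu, nv))
                next_id += 1
        return inst

    labels = grow_instances(np.random.rand(16, 64) * 20.0,         # toy range image (m)
                            np.random.randint(0, 3, (16, 64)))     # toy semantic prior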

Drivable Space Perception: We propose real-time drivable space detection for complex urban environments by integrating model-based segmentation and region-based segmentation. The proposed method uses point clouds from a 3D LiDAR because they are effective for understanding the surrounding topography.
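
As a stand-in for a model-based step, the sketch below fits a ground plane to a LiDAR point cloud with RANSAC and keeps points near the plane as drivable-space candidates. The description above does not specify its model-based segmentation, so this technique choice and its thresholds are assumptions rather than the published pipeline.

    # RANSAC ground-plane fit as a simple model-based drivable-space step (illustrative).
    import numpy as np

    def ransac_ground_plane(points, iters=100, inlier_dist=0.15, seed=0):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p2 - p1, p3 - p1)
            if np.linalg.norm(normal) < 1e-6:          # skip degenerate (collinear) samples
                continue
            normal /= np.linalg.norm(normal)
            dist = np.abs((points - p1) @ normal)      # point-to-plane distances
            inliers = dist < inlier_dist
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers                            # mask of candidate drivable points

    cloud = np.random.rand(1000, 3) * [20.0, 20.0, 0.1]   # toy cloud: mostly flat ground
    drivable_mask = ransac_ground_plane(cloud)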


SELECTED PUBLICATIONS

BRIEF RESEARCH PORTFOLIO