Publication
Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering
Yonghan Lee, Jaehoon Choi, Dongki Jung, Jaeseong Yun, Soohyun Ryu, Dinesh Manocha, and Suyong Yeon
Paper (Submitted; Under Review)
We propose a novel 3D Gaussian splatting algorithm that integrates a monocular depth network with anchored Gaussian splatting in a scale-consistent framework, enabling robust rendering on sparse-view datasets captured along the free trajectories of ground robots.
TK-Planes: Tiered K-Planes with High Dimensional Feature Vectors for Dynamic UAV-based Scenes
Christopher Maxey, Jaehoon Choi, Yonghan Lee, Hyungtae Lee, Dinesh Manocha, and Heesung Kwon
Paper (Submitted; Under Review)
We propose an extension of the K-Planes Neural Radiance Field (NeRF) representation, in which our algorithm stores a set of tiered feature vectors that effectively model both static and dynamic scene information.
EDM: Equirectangular Projection-Oriented Dense Kernelized Feature Matching
Dongki Jung, Jaehoon Choi, Yonghan Lee, Somi Jeong, Taejae Lee, Dinesh Manocha, and Suyong Yeon
(Submitted; Under Review)
We propose the first learning-based dense matching algorithm for omnidirectional images, combining Gaussian process-based kernelized matching with spherical coordinate embeddings.
MeshGS: Adaptive Mesh-Aligned Gaussian Splatting for High-Quality Rendering
Jaehoon Choi, Yonghan Lee, Hyungtae Lee, Heesung Kwon, and Dinesh Manocha
Proceedings of the Asian Conference on Computer Vision (ACCV), 2024.
We propose a mesh-aligned 3D Gaussian splatting method that flexibly integrates 3D Gaussian splats with traditional triangle meshes to improve rendering quality in mesh-based 3D scenes.
A Single Correspondence Is Enough: Robust Global Registration to Avoid Degeneracy in Urban Environments
Hyungtae Lim, Suyong Yeon, Soohyun Ryu, Yonghan Lee, Youngji Kim, Jaeseong Yun, Euigon Jung, Donghwan Lee, and Hyun Myung
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2022.
We present Quatro, a robust global registration method for 3D point clouds that addresses the degeneracy problem in urban settings by utilizing quasi-SO(3) estimation to reduce rotation degrees of freedom and improve robustness against outliers.
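The core idea of constraining rotation for ground vehicles, whose roll and pitch are near zero, can be illustrated with a yaw-only alignment. The sketch below is not Quatro's actual estimator (which also handles outliers); it is a minimal least-squares version of the one-DoF rotation idea, with a hypothetical helper name:

```python
import numpy as np

def yaw_only_registration(src, dst):
    """Estimate a yaw rotation and translation aligning src to dst.

    Restricts rotation to a single degree of freedom (about z), in the
    spirit of quasi-SO(3) estimation for ground vehicles.
    src, dst: (N, 3) arrays of corresponding points.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    p, q = src - cs, dst - cd
    # 2D Procrustes on the xy components yields the optimal yaw angle.
    num = np.sum(p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0])
    den = np.sum(p[:, 0] * q[:, 0] + p[:, 1] * q[:, 1])
    theta = np.arctan2(num, den)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t = cd - R @ cs
    return R, t
```

With only one rotational unknown, a single good correspondence pair (plus the centroids) already pins down the transform, which is what makes the approach robust in degenerate urban scenes.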
SelfTune: Metrically Scaled Monocular Depth Estimation through Self-Supervised Learning
Jaehoon Choi, Dongki Jung, Yonghan Lee, Deokhwa Kim, Dinesh Manocha, and Donghwan Lee
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2022.
We propose a self-supervised learning method that fine-tunes pre-trained supervised monocular depth networks by incorporating metric poses from SLAM, enabling metrically scaled depth estimation for applications such as autonomous navigation in diverse environments.
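The scale-recovery idea behind this line of work can be sketched in closed form: given a scale-ambiguous depth prediction and sparse metric depths derived from SLAM, a least-squares scale factor aligns the two. This is an illustrative stand-in (hypothetical helper, not the paper's self-supervised training procedure):

```python
import numpy as np

def align_depth_scale(pred_depth, sparse_metric_depth, mask):
    """Recover metric scale for a relative monocular depth map.

    pred_depth: dense scale-ambiguous prediction.
    sparse_metric_depth: metric depths (e.g. from SLAM), valid where
    mask is True. Solves argmin_s ||s * p - m||^2 in closed form.
    """
    p = pred_depth[mask]
    m = sparse_metric_depth[mask]
    s = np.dot(p, m) / np.dot(p, p)
    return s * pred_depth
```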
Large-scale Localization Datasets in Crowded Indoor Spaces
Donghwan Lee (*), Soohyun Ryu (*), Suyong Yeon (*), Yonghan Lee (*), Deokhwa Kim, Cheolho Han, Yohann Cabon, Philippe Weinzaepfel, Nicolas Guerin, Gabriela Csurka, and Martin Humenberger (*: Equal Contribution)
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
We propose a fully automated visual mapping pipeline with a novel SfM optimization that leverages LiDAR SLAM information as a spline-based trajectory prior, enabling large-scale 3D reconstruction in challenging indoor scenes characterized by crowd density and repetitive textures.
SelfDeco: Self-Supervised Monocular Depth Completion in Challenging Indoor Environments
Jaehoon Choi, Dongki Jung, Yonghan Lee, Deokhwa Kim, Dinesh Manocha, and Donghwan Lee
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2021.
We propose a self-supervised monocular depth completion algorithm that trains a neural network using sparse depth measurements and monocular video sequences, designed for challenging indoor environments with textureless regions.
DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes
Dongki Jung, Jaehoon Choi, Yonghan Lee, Deokhwa Kim, Changick Kim, Dinesh Manocha, and Donghwan Lee
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2021.
We propose a novel approach for estimating metric depth images from a monocular camera in complex indoor environments, using RGB images and sparse depth maps from traditional 3D reconstruction methods to predict dense depth maps for scenes with static backgrounds and moving people.
Tight fusion of GPS-VIO for Indoor-Outdoor Transitional Flight of UAV
Yonghan Lee, Jiseock Kang, and Dongjun Lee
Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS Workshop), 2019. (Best Paper Candidate)
We propose a GPS-fused Visual-Inertial Odometry (VIO) system that tightly integrates the random-walk behavior of GPS signals into a VIO factor-graph framework, enabling seamless indoor-to-outdoor transitional flight of UAVs.
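The general pattern of fusing relative odometry with intermittent absolute fixes in a factor graph can be shown with a 1D toy, solved as weighted linear least squares. This sketch is illustrative only, not the paper's tightly coupled formulation; the function name and weights are assumptions:

```python
import numpy as np

def fuse_vio_gps(odom, gps, w_odom=100.0, w_gps=1.0):
    """Toy 1D pose-graph fusion of relative odometry and GPS fixes.

    odom: relative displacements between consecutive poses.
    gps:  dict {pose_index: absolute position}; may be sparse,
          e.g. GPS drops out indoors during a transitional flight.
    Stacks each factor as a weighted linear equation and solves
    the whole graph with one least-squares solve.
    """
    n = len(odom) + 1
    rows, rhs = [], []
    for i, d in enumerate(odom):          # odometry factor: x[i+1] - x[i] = d
        row = np.zeros(n)
        row[i], row[i + 1] = -1.0, 1.0
        rows.append(np.sqrt(w_odom) * row)
        rhs.append(np.sqrt(w_odom) * d)
    for i, z in gps.items():              # GPS factor: x[i] = z
        row = np.zeros(n)
        row[i] = 1.0
        rows.append(np.sqrt(w_gps) * row)
        rhs.append(np.sqrt(w_gps) * z)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x
```

Because the absolute factors anchor the graph, the estimate stays globally consistent even through stretches where only odometry is available.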
Pose and Posture Estimation of Aerial Skeleton Systems for Outdoor Flying
Sangyul Park, Yonghan Lee, Jinuk Heo, and Dongjun Lee
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2019.
We propose a novel pose and posture estimation framework for aerial skeleton systems in outdoor environments, utilizing IMU and GNSS sensors on each link and employing SE(3)-motion EKF and smoothly constrained Kalman filtering to enhance estimation accuracy and system control.
IF Based SLAM-EKF Sensor Fusion Implemented to ODAR Platform and Flight Experiment
Yonghan Lee, Sangyul Park, Yongseok Lee, and Dongjun Lee
Institute of Control, Robotics, and Systems Annual Conference (ICROS), 2018.
We propose an Information-Filter (IF) based sensor fusion method that calibrates the metric scale of monocular SLAM using metric information from an IMU-Sonar EKF, implemented on the Omni-Directional Aerial Robot (ODAR) system and validated through flight experiments.
Teleoperation of a Platoon of Distributed Wheeled Mobile Robots with Predictive Display
Changsu Ha, Jaemin Yoon, Changu Kim, Yonghan Lee, Seongjin Kwon, and Dongjun Lee
Autonomous Robots 42(8), 2018.
We propose a teleoperation framework for distributed wheeled mobile robots that employs a leader-follower strategy with onboard sensing and a predictive display, incorporating uncertainty propagation while adhering to their nonholonomic constraints and distribution requirements.