This tutorial series is designed to provide a comprehensive journey through the evolution of robot perception, starting from the historical foundations of SLAM (Simultaneous Localization and Mapping) to the cutting-edge frontiers of Spatial AI.
The program begins with [Part 1] History and Introduction to SLAM, which serves as a foundational gateway for researchers. This session traces the technological lineage of SLAM and introduces the "SLAM Handbook" project to provide a structured roadmap for beginners. By covering essential topics such as 3D spatial representations and the mathematical formulation of sensor data from LiDAR and cameras, this session ensures participants establish a rigorous understanding of how robots define and interact with their environment.
Building upon these fundamentals, [Part 2] Spatial AI: From Sensor Fusion to Deep Learning explores the transition into modern autonomous intelligence. This session delves into advanced technical challenges, including high-performance point cloud registration, back-end optimization theories, and robust IMU sensor fusion. The discussion extends to the latest breakthroughs in AI-driven Visual SLAM, illustrating how deep learning is being integrated with traditional robotics to overcome the limitations of classical approaches.
By bridging the gap between historical principles and future-oriented AI innovations, this integrated tutorial offers a holistic perspective on spatial intelligence. Participants will gain the insights necessary to navigate the current research landscape and lead the next generation of breakthroughs in the global robotics field.
Time: 18:00 – 19:40, Feb 05 (Thu) | The Auditorium (1st floor of the Alpensia Resort Convention Center)
18:00 - 18:10 (10') | Introduction to the tutorial
18:10 - 18:50 (40') | Ayoung Kim (SNU): "History of SLAM and the SLAM Handbook Project"
18:50 - 19:30 (40') | Sunglok Choi (SeoulTech): "A Short Journey from 3D Vision to 3D Representations"
19:30 - 19:40 (10') | Panel Discussion & Q&A (All speakers)
Time: 09:00 – 10:40, Feb 06 (Fri) | The Auditorium (1st floor of the Alpensia Resort Convention Center)
09:00 - 09:10 (10') | Opening Remarks
09:10 - 09:50 (40') | Pyojin Kim (GIST): "Representations for 3D Visual World"
09:50 - 10:30 (40') | Younggun Cho (Inha Univ.): "SLAM Back-end: Probabilistic and Linear Algebra Foundations for Optimization"
10:30 - 10:40 (10') | Panel Discussion & Q&A (All speakers)
Time: 13:50 – 15:30, Feb 06 (Fri) | The Auditorium (1st floor of the Alpensia Resort Convention Center)
13:50 - 14:20 (30') | Giseop Kim (DGIST): "IMU Basics and Inertial Aided Navigation"
14:20 - 14:50 (30') | Hyungtae Lim (MIT): "From ICP to Learning-based Approaches: The Evolution of Point Cloud Registration"
14:50 - 15:20 (30') | Alex Junho Lee (Sookmyung Women's Univ.): "AI Fundamentals for Monocular Visual SLAM"
15:20 - 15:30 (10') | Panel Discussion & Q&A (All speakers)
Ayoung Kim is an Associate Professor at Seoul National University and leads the Robust Perception and Mobile Robotics Lab (RPM Lab). She recently served as an Editor of the SLAM Handbook, a cornerstone resource for the robotics community. Her work advances perceptual robot autonomy through multi-modal sensor fusion and neural representations, aiming to build robust spatial intelligence for navigation and representation learning.
Website: https://rpm.snu.ac.kr/
Sunglok Choi is an Assistant Professor at Seoul National University of Science and Technology (SeoulTech) and leads the Mobile Intelligence Laboratory (MINT Lab). He is widely recognized for his '3dv_tutorial' project, which serves as an educational bridge between multiple view geometry and modern Visual SLAM. His research focuses on developing efficient 3D representations and ensuring the practical scalability of spatial perception algorithms for agile robot platforms.
Website: https://mint-lab.github.io
Pyojin Kim is an Assistant Professor at GIST and leads the Machine Perception and Intelligence Lab (MPIL). He specializes in Visual-Inertial Odometry (VIO) and structural SLAM. His research explores how robots can utilize geometric and structural regularities (e.g., Manhattan World assumptions) to achieve drift-free navigation in challenging indoor and space environments, aiming for human-level spatial understanding.
Website: https://mpil-gist.github.io
Hyungtae Lim is a postdoctoral associate at the MIT SPARK Lab. He is a leading researcher in large-scale LiDAR SLAM and robust 3D perception. His excellence has been recognized through robotics awards, including 1st prize in the ICRA HILTI SLAM Challenge for two consecutive years (2023, 2024) and being named an RSS Pioneer 2024. His research focuses on spatial AI and long-term autonomy.
Younggun Cho is an Assistant Professor at Inha University and leads the Spatial AI and Robotics Lab (SPAROLab). His research interest lies in resilient field robotics and long-term mapping. His recent breakthroughs include the development of unified multi-modal LiDAR mapping frameworks and distributed multi-robot SLAM. He is now expanding spatial intelligence into open-world visual language navigation and semantic-graph localization to bridge robust perception with high-level embodied intelligence.
Website: https://sparolab.github.io
Giseop Kim is an Assistant Professor at DGIST and leads the Autonomy and Perceptual Robotics Lab (APRL). His research foundation is built on high-performance sensor fusion and inertial-aided LiDAR-based navigation. Currently, he is bridging this expertise in robust state estimation with Spatial AI, specifically exploring the synergy between deep learning-driven SLAM and visual language navigation (VLN).
Alex Junho Lee is an Assistant Professor at Sookmyung Women's University and leads the SPRINT Lab. His research focuses on robust monocular Visual SLAM and AI-driven perception. He explores how to make visual navigation systems more reliable in dynamic and everyday human environments, bridging the gap between high-level AI perception and low-level geometric navigation.
Website: https://sites.google.com/sookmyung.ac.kr/sprint-lab