Period: 2025 – Present
Funding: Sejong Science Fellowship, National Research Foundation of Korea (NRF)
Key Contributions:
Development of simultaneous hand motion and object tracking technology.
Construction of a digital-twin simulation environment and dataset expansion.
Development of robot motion generation technology based on real-virtual data co-training.
Collection of daily environment datasets and application to real-world robotic tasks.
Period: 2025 – Present
Funding: Robot Center, Samsung Research
Key Contributions:
Development of a teleoperation glove combining haptic feedback and visual–inertial hand tracking.
Design of a low-latency, high-precision teleoperation interface for dual-arm/dual-hand humanoid robots.
System deployment on bimanual robots with real-time precision-task execution and validation.
Period: 2025 – Present
Funding: Ministry of Trade, Industry and Energy (MOTIE)
Key Contributions:
High-precision, photo-realistic physics simulation for contact-rich robot behaviors.
Provision of accurate physical signals and real-world-quality visual outputs.
Sim-to-real transfer on humanoid robots using robot-foundation-model-based methods to validate simulators and datasets.
Period: 2024 – Present
Funding: Ministry of Trade, Industry and Energy (MOTIE)
Key Contributions:
Development of a bimanual finger-based interface for teleoperation of aerial manipulators.
Sensor fusion and SLAM for real-time robot state estimation and 3D digital-twin generation.
Establishment of a digital-twin-based simulation environment for physically accurate robot AI learning.
Period: 2022 – 2024
Location: Global AI Center, Samsung Research
Key Contributions:
Simulation of real-time robot sensor data (vision, IMU, LiDAR) and actuator data (BLDC and servo motors) in smart-home environments.
Development of control strategies using AI algorithms (reinforcement learning, Transformer-based models, motion planning).
Contribution to the release of Samsung robots, including robotic vacuum cleaners and Bot Handy.
Period: 2023 – 2024
Location: Global AI Center, Samsung Research
Key Contributions:
Participated in the development of Samsung Gauss, a proprietary large multimodal foundation model.
Contributed to the design and training of vision-language architectures for cross-modal understanding.
Explored the integration of Large Multimodal Models (LMMs) with embodied AI to enhance robotic perception and decision-making.
Optimized model performance for real-world applications within the Samsung product ecosystem.
Other research projects conducted during graduate studies at SNU (see CV for details).