Research
Here we introduce the research topics that the iRASC Lab pursues.
Robot reinforcement learning (RL) enables robots to learn control policies through trial-and-error interaction with their environment, making it effective for complex control problems. Our lab studies RL for integrated locomotion and manipulation, including coordinated control of a Unitree Go2 + WidowX mobile manipulator. We also develop simulated RL environments for autonomous drone soccer and for lunar rover navigation with terrain-adaptive driving.
Our research focuses on improving robot autonomy, adaptability, and real-world applicability through simulation-based reinforcement learning.
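To make the trial-and-error loop concrete, the sketch below runs the standard agent-environment cycle using Gymnasium's CartPole task as a placeholder. The environment name, random policy, and step count are illustrative stand-ins, not the lab's training code; a learned policy would replace the random action sampling.

```python
import gymnasium as gym

# A stand-in task; the lab's actual environments (legged locomotion,
# drone soccer, rover driving) expose the same interaction loop.
env = gym.make("CartPole-v1")

obs, info = env.reset(seed=0)
for step in range(500):
    # A trained policy would map `obs` to an action here; random
    # actions stand in for the untrained starting point.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

An RL algorithm wraps this loop with learning: it records the rewards each action produces and gradually shifts the policy toward higher-return behavior.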
Robot imitation learning enables robots to learn task behaviors directly from expert demonstrations, making it effective for complex manipulation tasks while remaining data-efficient. Our lab studies imitation learning for vision-based robot control and Vision-Language-Action (VLA) models. In particular, we develop RetoVLA, which improves spatial reasoning by reusing register tokens, and Depth-ACT, which enriches policies with 3D scene information from combined RGB and depth inputs.
Our research aims to improve spatial awareness, data efficiency, and real-world applicability in robot imitation learning.
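The simplest instance of learning from demonstrations is behavior cloning: supervised regression from observations to expert actions. The sketch below shows that pattern in PyTorch on synthetic data; the network, dimensions, and data are illustrative assumptions, and the lab's RetoVLA and Depth-ACT policies additionally condition on images, language, and depth.

```python
import torch
import torch.nn as nn

# Synthetic stand-ins for expert demonstrations: observations
# (e.g., proprioception or image features) and expert actions.
obs = torch.randn(1024, 32)      # 1024 demo steps, 32-dim observations
actions = torch.randn(1024, 7)   # 7-dim actions (e.g., arm joint targets)

policy = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 7),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavior cloning: regress the expert's actions from observations.
for epoch in range(10):
    loss = nn.functional.mse_loss(policy(obs), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```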
Large Language Models (LLMs) are AI systems trained on large-scale text data to understand, generate, and reason over language, and they can be extended to multimodal inputs such as images. Our lab studies LLMs as practical reasoning engines, including efficient model merging with DARE to improve reasoning performance, and multimodal LLMs for disaster response that combine aerial imagery with few-shot in-context learning and chain-of-thought prompting.
Our research aims to make LLMs and multimodal LLMs more efficient, adaptive, and useful for real-world reasoning and autonomous decision-making systems.
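DARE merges fine-tuned models by randomly dropping most of each model's parameter deltas and rescaling the survivors so the expected update is preserved. The sketch below applies that drop-and-rescale rule to toy weight tensors; the shapes and drop rate are illustrative, and in practice the rule is applied per parameter across full checkpoints.

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor,
               drop_rate: float = 0.9) -> torch.Tensor:
    """Drop-And-REscale (DARE): randomly zero most delta parameters,
    then rescale survivors by 1/(1-p) so the expected delta is unchanged."""
    delta = tuned - base
    mask = torch.bernoulli(torch.full_like(delta, 1.0 - drop_rate))
    return mask * delta / (1.0 - drop_rate)

# Toy tensors standing in for one weight matrix of a base model
# and two task-specific fine-tunes of it.
base = torch.randn(64, 64)
tuned_a = base + 0.01 * torch.randn(64, 64)
tuned_b = base + 0.01 * torch.randn(64, 64)

# Merge: add the sparsified deltas from both fine-tunes to the base.
merged = base + dare_delta(base, tuned_a) + dare_delta(base, tuned_b)
```

Because most delta entries are zeroed, the deltas from different fine-tunes rarely collide, which is what lets multiple task abilities be merged into one model.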
Robot vision enables robots to understand scene structure, depth, objects, and traversable space from visual input, supporting perception, decision-making, and control. Our lab studies robot vision with a focus on spatial perception and simulator construction, including SPACE-CLIP for lightweight monocular depth estimation from frozen vision encoders and NVSim for building large-scale indoor simulators from traversal image sequences.
Our research aims to improve depth understanding, spatial reasoning, and environment generation for more capable robotic perception and navigation.
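The frozen-encoder pattern behind lightweight depth estimation keeps a pretrained vision backbone fixed and trains only a small decoding head on top of it. The sketch below uses a small random CNN as a stand-in for the frozen encoder (SPACE-CLIP's exact backbone and head design are not described here); all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in for a frozen pretrained encoder; a small CNN keeps the
# sketch self-contained in place of a real pretrained backbone.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
    nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(),
)
for p in encoder.parameters():
    p.requires_grad = False  # the encoder stays frozen

# Only this lightweight head is trained to regress per-pixel depth.
depth_head = nn.Sequential(
    nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 1),
    nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
)

image = torch.randn(1, 3, 224, 224)  # RGB input
with torch.no_grad():
    feats = encoder(image)           # frozen features, no gradients
depth = depth_head(feats)            # (1, 1, 224, 224) depth map
```

Freezing the encoder keeps training cheap: only the head's parameters receive gradients, while the backbone supplies general-purpose visual features.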
Multimodal healthcare AI uses multiple data sources such as video, inertial sensor signals, and biosignals to better understand a patient’s condition, actions, and recovery process. Our lab studies this area through MMeViT, which recognizes post-stroke upper-limb daily activities from IMU and RGB-D data, and RAST-G@, which evaluates rehabilitation movement quality from skeleton sequences using ST-GCN and temporal attention.
Our research aims to make home-based rehabilitation more accessible, objective, and continuous through AI-driven digital healthcare systems.
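A common way to combine IMU and video streams is late fusion: encode each modality separately, then classify from the concatenated features. The sketch below shows that generic pattern; the GRU encoder, feature dimensions, and class count are illustrative assumptions, not MMeViT's actual architecture.

```python
import torch
import torch.nn as nn

# Stand-ins for two modality streams: a window of IMU signals and a
# precomputed feature vector from an RGB-D clip. Sizes are illustrative.
imu = torch.randn(8, 100, 6)      # batch of 8, 100 timesteps, 6 channels
video_feat = torch.randn(8, 512)  # batch of 8, clip-level features

imu_encoder = nn.GRU(input_size=6, hidden_size=64, batch_first=True)
classifier = nn.Linear(64 + 512, 10)  # e.g., 10 daily-activity classes

_, h = imu_encoder(imu)                      # final hidden: (1, 8, 64)
fused = torch.cat([h.squeeze(0), video_feat], dim=1)
logits = classifier(fused)                   # per-activity scores
```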