Robust Robot Learning
Deep Reinforcement Learning for Robot/Sensor Control (Humanoid, Legged Robot, Manipulator, Vision/Tactile/Audio/Olfactory/Taste Sensor, etc)
Multi-modal Sensor Fusion for Robust Spatial/Semantic Perception
Vision-Language Navigation/Manipulation
Scalable Representation Learning
Learning from Self-supervision (Image, Video, Action, Audio, Language, etc)
Learning from Multi-Sensor Data (RGB, NIR, SWIR, Thermal, Event Camera, Spinning/Solid-state LiDAR, RADAR, Sonar, etc)
Foundation Model for Multi-modal Sensors/Robotics (VLM, LBM, etc)
Robust Spatial/Semantic Understanding in the Wild
Robust Robot Perception in Adverse Conditions (Rainy, Snowy, Dusty, Over-exposed, Fire/Smoke, Low-light, etc)
Vision-Language Understanding in Adverse Conditions
Self-supervised 3D Geometry (Depth, Optical Flow, Scene Flow, Odometry, Object Pose, SLAM)
Continual Learning/Domain Adaptation in the Wild