Projects
Multimodal Perception and Learning for Human–AI Interaction
Embedded Computer Vision
- Modeling vehicle–pedestrian interactions and real-time risk assessment (a toy scoring sketch follows this list)
- Real-time computer vision systems and synthetic data generation for UAV platforms
- Multimodal perception for assistive autonomy: integrating human state recognition and vision-based wheelchair navigation
- Advancing egocentric vision for human–AI interaction (e.g., hand pose estimation, document understanding)
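To give a flavor of the risk-assessment work above, the following is a minimal sketch of a constant-velocity closest-approach heuristic for scoring a vehicle–pedestrian encounter. All function names, thresholds, and the scoring rule are illustrative assumptions, not the project's actual model.

```python
import numpy as np

def time_to_closest_approach(p_v, v_v, p_p, v_p):
    """Time at which a vehicle and a pedestrian, each moving with
    constant velocity, are closest to each other (clamped to >= 0)."""
    dp = p_p - p_v          # relative position (pedestrian w.r.t. vehicle)
    dv = v_p - v_v          # relative velocity
    denom = np.dot(dv, dv)
    if denom < 1e-9:        # effectively no relative motion
        return 0.0
    return max(0.0, -np.dot(dp, dv) / denom)

def interaction_risk(p_v, v_v, p_p, v_p, safe_dist=2.0, horizon=5.0):
    """Crude risk score in [0, 1]: high when the predicted minimum
    separation within the time horizon falls below `safe_dist`."""
    t = min(time_to_closest_approach(p_v, v_v, p_p, v_p), horizon)
    min_sep = np.linalg.norm((p_p + t * v_p) - (p_v + t * v_v))
    return float(np.clip(1.0 - min_sep / safe_dist, 0.0, 1.0))

# Example: vehicle heading east at 10 m/s, pedestrian crossing its path.
risk = interaction_risk(
    p_v=np.array([0.0, 0.0]), v_v=np.array([10.0, 0.0]),
    p_p=np.array([20.0, -3.0]), v_p=np.array([0.0, 1.5]),
)
print(f"risk = {risk:.2f}")  # prints 1.00: trajectories intersect at t = 2 s
```

A deployed system would replace the constant-velocity assumption with learned trajectory forecasts, but the same minimum-separation logic applies.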
Medical Imaging Applications
- Surgical AI for intraoperative decision-making: video-language models for workflow understanding and ultrasound-based physiological state assessment
- Self-supervised representation learning for classification and segmentation to support early diagnosis and progression monitoring of retinal disorders (see the sketch after this list)
- Multimodal learning for integrated healthcare decision-making
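To make the self-supervised bullet above concrete, here is a minimal sketch of a SimCLR-style NT-Xent contrastive objective of the kind commonly used for label-free pretraining on medical images. The tensor shapes, temperature, and toy inputs are assumptions for illustration, not the project's actual training code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR-style) contrastive loss over paired embeddings:
    z1[i] and z2[i] are encodings of two augmented views of image i."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.size(0)
    # Mask self-similarity so it never counts as a positive or a negative.
    sim.fill_diagonal_(float("-inf"))
    # For row i the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings of two augmented views of 8 retinal image patches.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

After pretraining with such an objective, the encoder's representations can be fine-tuned on small labeled sets for the downstream classification and segmentation tasks the bullet describes.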