Projects
Multimodal Perception and Learning for Medical and Autonomous Systems
Embedded Computer Vision
- Efficient model development through knowledge distillation and structured pruning
- Modeling vehicle–pedestrian interactions and real-time risk assessment
- Real-time computer vision systems and synthetic data generation for UAV platforms
- Multimodal perception for assistive autonomy: integrating human state recognition and vision-based wheelchair navigation
- Advancing egocentric vision for human–AI interaction

Medical Imaging Applications
- Surgical AI for intraoperative decision-making: video-language models for workflow understanding
- Self-supervised representation learning and foundation model adaptation for classification and segmentation, supporting early diagnosis and progression monitoring of retinal and pathological disorders
- Multimodal learning for integrated clinical decision-making