Welcome to the Signal Learning Lab (SLL)!
We are a passionate team of researchers at Korea University, Republic of Korea, dedicated to advancing the frontiers of signal learning.
Our current focus is on efficient, foundation-model-based deep learning methods, positioning us at the forefront of upcoming technological innovation.
Join us as we explore and contribute to the latest breakthroughs in this dynamic and evolving field!
Paper Award
(ACVYS 2025)
Nov. 9, 2025
Paper Award
(Joint Conference on AI and Signal Processing)
Sept. 26, 2025
One paper (CTTA) accepted at ICML 2025
May 01, 2025
[Nov./2025] Annual Conference of Vietnamese Young Scientists (ACVYS 2025)
- Ph.D. student DO DIHN PHAT received the Best Poster Presentation award
[Oct./2025] Recruiting undergraduate interns for the winter break
- Interested students: please contact the professor by email (wjhwang@korea.ac.kr)
[Sept./2025] Students 정석화 and DO DIHN PHAT received paper awards at the 35th Joint Conference on Artificial Intelligence and Signal Processing
- Excellence Award: 정석화, integrated M.S./Ph.D. student
- Excellence Award (Poster): DO DIHN PHAT, Ph.D. student
[Sept./2025] One paper accepted to the journal Pattern Recognition
- [Knowledge Distillation] Mr. Cheung's paper,
"Knowledge Tailoring: Bridging the Teacher-Student Gap in Semantic Segmentation,"
Pattern Recognition (PR), Sept. 2025 (JIF Rank=93.1%, Q1)
[May/2025] One paper accepted at ICML 2025
- [CTTA] Ph.D. student Mr. Han's paper,
"Ranked Entropy Minimization for Continual Test-Time Adaptation,"
International Conference on Machine Learning (ICML), Vancouver, Canada, July 2025
Efficient and Robust Learning with Multiple Foundation Models for various deep learning tasks.
(New) Multiple Foundation Models for Source-Free Domain Adaptation
arXiv preprint coming soon
(New) SAM marries CLIP for Human Parsing
https://arxiv.org/abs/2503.22237
Test-Time Adaptation (TTA) adapts a pre-trained model to new data distributions during inference, without accessing the original training data; a minimal sketch of the basic recipe appears after the paper list below.
(New) D-TPT (Test-time Prompt Tuning) for Vision-Language Models
https://arxiv.org/abs/2510.09473
(New) When Test-Time Adaptation Meets Self-Supervised Models
https://arxiv.org/abs/2506.23529
[ICML'25] Ranked EM-based Continual Test-Time Adaptation
https://arxiv.org/abs/2505.16441
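For context, here is a minimal, generic sketch of the entropy-minimization recipe that underlies much of the TTA literature, including entropy-based methods such as the ICML'25 work above. It is an illustration in PyTorch, not the method of any specific paper; `prediction_entropy`, `adapt_step`, `model`, and `optimizer` are placeholder names.

import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    # Shannon entropy of the model's own predictive distribution, averaged over the batch.
    probs = F.softmax(logits, dim=1)
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

def adapt_step(model, x, optimizer):
    # One test-time adaptation step on an unlabeled test batch `x`:
    # update the model by minimizing the entropy of its own predictions,
    # with no access to source labels or source data.
    logits = model(x)
    loss = prediction_entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()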
Semantic Segmentation is the task of grouping together the parts of an image that belong to the same object class; a minimal training sketch appears after the paper list below.
Single Source to Multi-Target Domain Adaptation for Semantic Segmentation
https://arxiv.org/abs/2403.11582
[NeurIPS'23] Switching Teachers for Semi-supervised Semantic Segmentation
https://arxiv.org/abs/2310.18640
[CVPR'22] Label to Label driven Human Parsing
https://arxiv.org/abs/2111.14173
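As a primer for the entries above, this is a minimal PyTorch sketch of how a segmentation model is typically trained: the network outputs a class score for every pixel and is optimized with per-pixel cross-entropy. The toy network, tensor shapes, and class count are illustrative assumptions, not code from any listed paper.

import torch
import torch.nn as nn

num_classes = 21                                    # e.g. a PASCAL VOC-style label set (assumption)
model = nn.Sequential(                              # toy fully-convolutional head
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, num_classes, kernel_size=1),
)
criterion = nn.CrossEntropyLoss()                   # applied independently at every pixel

images = torch.randn(2, 3, 64, 64)                  # (batch, channels, height, width)
masks = torch.randint(0, num_classes, (2, 64, 64))  # per-pixel ground-truth class indices

logits = model(images)                              # (batch, num_classes, height, width)
loss = criterion(logits, masks)                     # averaged over all pixels
loss.backward()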
Continual Learning trains a model on a large number of tasks sequentially without forgetting the knowledge obtained from the preceding tasks; a minimal regularization-based sketch appears after the paper list below.
[WACV'25] Semantic Prompting for Continual Learning
https://arxiv.org/abs/2403.11537
[ACCV'24] Selective Regularization for Class Incremental Learning
https://arxiv.org/abs/2305.05175
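To make the setting concrete, below is a simplified regularization-based sketch: while training on a new task, the model is penalized for drifting away from the parameters learned on previous tasks. This L2-anchoring variant (essentially EWC with uniform importance) is a generic baseline, not the prompting or selective-regularization methods of the papers above; `continual_loss` and `lam` are placeholder names.

import copy
import torch

def continual_loss(model, old_model, task_loss, lam=1.0):
    # task_loss: ordinary loss on the current task's batch
    # lam: strength of the anchor protecting previously learned parameters
    penalty = 0.0
    for p, p_old in zip(model.parameters(), old_model.parameters()):
        penalty = penalty + ((p - p_old.detach()) ** 2).sum()
    return task_loss + lam * penalty

# Typical usage: after finishing task t, snapshot the model with
#   old_model = copy.deepcopy(model).eval()
# then on task t+1 optimize continual_loss(model, old_model, task_loss).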
Domain Adaptation aims to transfer a model trained on a labeled source domain to a different, unlabeled target domain; a toy sketch of one adaptation recipe appears after the paper list below.
[CVPR'24] Sensor Adaptation for RGB to Thermography
https://arxiv.org/abs/2403.09359
[ECCV'22] Entropy Maximization Point-based Domain Adaptation
https://arxiv.org/abs/2111.13353
[CVPR'21] Fixed Ratio-based MixUp for Domain Adaptation
https://arxiv.org/abs/2011.09230
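The sketch below illustrates the high-level idea behind fixed-ratio mixing across domains (cf. the CVPR'21 entry above): blend a labeled source image with a target image at a fixed ratio and train on the correspondingly mixed soft label. The pseudo-labeling scheme, bidirectional networks, and confidence thresholds of the actual method are omitted; `y_tgt_pseudo` stands in for a target pseudo-label obtained by some external means.

import torch
import torch.nn.functional as F

def fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, ratio=0.7, num_classes=10):
    # Mix inputs at a fixed ratio (source-dominant when ratio > 0.5),
    # and mix the corresponding one-hot labels with the same ratio.
    x_mix = ratio * x_src + (1.0 - ratio) * x_tgt
    y_mix = ratio * F.one_hot(y_src, num_classes).float() \
          + (1.0 - ratio) * F.one_hot(y_tgt_pseudo, num_classes).float()
    return x_mix, y_mix

def soft_cross_entropy(model, x_mix, y_mix):
    # Cross-entropy against the mixed (soft) labels.
    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()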
Knowledge Distillation extracts pivotal knowledge from a teacher network to guide the learning of a student network; a minimal sketch of the standard distillation loss appears after the paper list below.
[ICCV'23] Online Selective Multiple Teachers for Distillation
https://arxiv.org/abs/2206.01186
[CVPR'23] Knowledge Distillation for 3D Object Detection
https://arxiv.org/abs/2205.15531
[ICCV'21] Multiple Teacher-based Knowledge Distillation
https://arxiv.org/abs/2009.08825
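For reference, here is a minimal sketch of the standard distillation loss (Hinton et al., 2015) that this line of work builds on: the student matches the teacher's temperature-softened outputs while also fitting the ground-truth labels. It is the generic baseline, not the multi-teacher or detection-specific methods listed above; `T` and `alpha` are conventional hyperparameters.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened teacher and student outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                    # rescale gradients, as in Hinton et al.
    # Hard-target term: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard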