Welcome to the Signal Learning Lab (SLL)!
We are a passionate team of researchers at Korea University, Republic of Korea, dedicated to advancing the frontiers of signal learning.
Our current focus lies in foundation model-based deep learning methods, positioning us at the forefront of upcoming technological innovations.
Join us as we explore and contribute to the latest breakthroughs in this dynamic and evolving field!
[Oct./2025] Recruiting undergraduate interns for Winter 2025
- Application link for interested students: https://forms.gle/4erqfzUiAd5XBuTd6
[2025.09.26] 정석화 and DO DINH PHAT received student paper awards at the 35th Joint Conference on Artificial Intelligence and Signal Processing
- Excellence Award: 정석화 (integrated M.S.-Ph.D. student)
- Excellence Award (Poster): DO DINH PHAT (Ph.D. student)
[Sept./2025] One paper accepted at Pattern Recognition Journal
- [Knowledge Distillation] Mr. Cheung's paper
"Knowledge Tailaring: Briding the Teacher-Student Gap in Semantic Segmentation,"
Pattern Recognition (PR), Sept. 2025 (JIF Rank=93.1%, Q1)
[May/2025] One paper accepted at ICML 2025
- [CTTA] Ph.D. student Mr. Han's paper,
"Ranked Entropy Minimization for Continual Test-Time Adaptation,"
International Conference on Machine Learning (ICML), Vancouver, Canada, July 2025
Test-Time Adaptation (TTA) adapts a pre-trained model to new data distributions during inference, without accessing the original training data (a minimal sketch follows the paper links below).
(New) "D-TPT (Test-Time Prompt Tuning) for Vision-Language Models"
https://arxiv.org/abs/2510.09473
(New) "When Test-Time Adaptation Meets Self-Supervised Models"
https://arxiv.org/abs/2506.23529
[ICML'25]
"Ranked EM-based Continual TTA"
https://arxiv.org/abs/2505.16441
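As a generic illustration of the TTA setting (not the method of any specific paper above), here is a minimal PyTorch sketch of entropy-minimization test-time adaptation that updates only the BatchNorm affine parameters on each unlabeled test batch; the pre-trained model, test loader, and learning rate are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bn_affine_params(model: nn.Module):
    # Adapt only BatchNorm affine parameters; all other weights stay frozen.
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            params += [m.weight, m.bias]
    return params

def adapt_step(model: nn.Module, x: torch.Tensor, optimizer: torch.optim.Optimizer):
    logits = model(x)                                    # unlabeled test batch
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()                                   # sharpen predictions on test data
    optimizer.step()
    return logits.detach()

# Usage (hypothetical): model = load_pretrained().train()  # BN uses test-batch statistics
# optimizer = torch.optim.SGD(bn_affine_params(model), lr=1e-3)
# for x in test_loader: preds = adapt_step(model, x, optimizer)
```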
Semantic Segmentation is the task of grouping the parts of an image that belong to the same object class; see the sketch after the papers below.
[NeurIPS'23]
https://arxiv.org/abs/2310.18640
[CVPR'22] https://arxiv.org/abs/2111.14173
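To make the task concrete, here is a minimal sketch of semantic segmentation as per-pixel classification; the tiny network, image size, and 21-class label space are illustrative only.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes: int = 21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)  # 1x1 conv -> per-pixel logits

    def forward(self, x):
        return self.classifier(self.encoder(x))          # (B, num_classes, H, W)

# Per-pixel cross-entropy: every pixel receives its own class label.
model = TinySegNet()
images = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 21, (2, 64, 64))
loss = nn.CrossEntropyLoss()(model(images), labels)
```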
Continual Learning trains a model on a large number of tasks sequentially without forgetting the knowledge obtained from preceding tasks (see the sketch after the papers below).
[WACV'25]
https://arxiv.org/abs/2403.11537
[ACCV'24] https://arxiv.org/abs/2305.05175
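A minimal sketch of one common baseline, experience replay, where a small memory of past-task samples is mixed into each new-task batch to mitigate forgetting; the buffer size and function names are illustrative, not a specific published method.

```python
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    def __init__(self, capacity: int = 200):
        self.capacity, self.data = capacity, []

    def add(self, xs, ys):
        for x, y in zip(xs, ys):
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:                                    # overwrite a random slot once full
                self.data[random.randrange(self.capacity)] = (x, y)

    def sample(self, k: int):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, x_new, y_new, buffer: ReplayBuffer, replay_k: int = 16):
    x, y = x_new, y_new
    if buffer.data:                                  # interleave samples from earlier tasks
        xr, yr = buffer.sample(replay_k)
        x, y = torch.cat([x_new, xr]), torch.cat([y_new, yr])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    buffer.add(x_new.detach(), y_new)                # store only the new-task samples
    return loss.item()
```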
Domain Adaptation aims to adapt models from a labeled source domain to a different but related target domain without labels.
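One classic way to realize this, sketched below, is adversarial feature alignment with a gradient reversal layer (DANN-style); this is a generic illustration, and the feature extractor and domain-classifier head are assumed to be defined elsewhere.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None          # reversed gradient fools the domain head

def domain_alignment_loss(feat_src, feat_tgt, domain_head: nn.Module, lam: float = 1.0):
    feats = torch.cat([feat_src, feat_tgt])
    domains = torch.cat([torch.zeros(len(feat_src), dtype=torch.long),
                         torch.ones(len(feat_tgt), dtype=torch.long)])
    logits = domain_head(GradReverse.apply(feats, lam))
    # Added to the supervised source-domain loss during training.
    return F.cross_entropy(logits, domains)
```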
Knowledge Distillation extracts pivotal knowledge from a teacher network to guide the learning of a student network (a minimal sketch follows the papers below).
[ICCV'23]
https://arxiv.org/abs/2206.01186
[CVPR'23] https://arxiv.org/abs/2205.15531
[ICCV'21] https://arxiv.org/abs/2009.08825
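For illustration, a minimal sketch of the classic response-based distillation loss, where the student matches softened teacher logits via KL divergence alongside the usual cross-entropy on ground-truth labels; the temperature and weighting are placeholders, not the settings of the papers above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    # Soft targets: KL between temperature-scaled teacher and student distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: standard cross-entropy with the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage (teacher frozen in eval mode):
# with torch.no_grad(): t_logits = teacher(x)
# loss = distillation_loss(student(x), t_logits, y)
```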
Self-supervised Learning leverages training data without supervision signals to learn representations for classification and detection.
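As a concrete example, here is a minimal sketch of a contrastive (SimCLR-style) pre-training loss in which two augmented views of each image serve as the only "supervision"; the encoder and projection head producing z1 and z2 are assumed and not shown.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    # z1, z2: (B, D) projections of two augmented views of the same B images.
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2B, D), unit-norm features
    sim = z @ z.t() / tau                                 # temperature-scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)                  # positives are the paired views
```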