Let there be LAIT!
"Modeling the Unseen, Understanding the Unknown."
LAIT aims to go beyond human perception, modeling and understanding the world we see and sense with ever greater precision. We believe generative models are the most powerful solution and the most effective approach to this goal. Rather than chasing raw performance alone, we pursue imaging algorithms that are explainable, efficient, and intuitive to use. Through this, we seek to transform how we experience and interpret the world.
Our mission is to redefine the boundaries of what can be seen and understood.
Together, we go beyond perception, modeling the world we see and sense.
We challenge the limits by developing intelligent models that reveal structures of the world beyond our senses.
Generative models, we believe, are the most powerful (and indeed the only) solution for building systems that understand the world.
Grounded in simple, clear principles, we aim to create technology that is both intuitive and easy to use: solutions that transform how we experience and interpret our world.
By combining the rigor of signal processing with the creativity of machine learning, we turn imagination into reality. We build algorithms that analyze and synthesize images, audio, and video to construct comprehensive "world models" of the seen and the sensed. Along the way, our models uncover mathematical insights and reveal the beauty of hidden patterns in nature, pushing the frontier of signal modeling and interpretation.
News
[2024.10] Congratulations! Jeongho and Hyeonjin achieved third place among 77 competing teams in the Multimodal Semantic Segmentation Challenge 2024 at IEEE WHISPERS (Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing)!
[2024.09] Congratulations! Sanghun has been accepted into NAVER AI Lab's internship program!
[2024.08] Congratulations! Chan has been accepted into CJ AI's internship program!
[2024.07] Congratulations! Three papers from our group have been accepted to ECCV 2024!
[2024.04] Congratulations! Our paper "PosterLlama: Bridging Design Ability of Language Model to Content-Aware Layout Generation" has been selected for a spotlight presentation at the CVPR 2024 Workshop GDUG.
[2024.04] Our paper "Universal Dehazing via Haze Style Transfer" has been accepted to IEEE TCSVT!
[2024.02] I will serve on the Award Committee of ICASSP 2024.
[2024.02] Congratulations! Our students received Best Paper Awards (one Gold and two Bronze) and four Best Poster Awards at IPIU 2024!
[2024.01] Congratulations! Our paper "STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models" has been accepted to ICLR 2024!
[2024.01] Congratulations! Our paper "Data Augmentation for Low-Level Vision: CutBlur and Mixture-of-Augmentation" has been accepted to the Springer International Journal of Computer Vision (Impact Factor 19.5)!
[Wanted] I am looking for strong and motivated students (or interns). A partial list of future project topics is as follows:
Computational Imaging (e.g., super-resolution, image compression, medical image reconstruction, etc.)
Generative models for High-Dimensional Data (beyond 2D)
Better Conditioning, Better Generation
Explainable & Reliable Generation
Understanding Intelligent Models & Representations
For more details, please see Research Questions. Students interested in joining our lab, please see Call for Students & Interns.
Publications (partial list)
For the full publication list, please see my Google Scholar.
HVDM: Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation
Kihong Kim, Haneol Lee, Jihye Park, Seyeon Kim, Kwanghee Lee, Seungryong Kim, Jaejun Yoo
ECCV 2024 (Corresponding author)
Paper | Code | Project page
PosterLlama: Bridging Design Ability of Language Model to Content-Aware Layout Generation
Jaejung Seol, Seojun Kim, Jaejun Yoo
ECCV 2024 (Corresponding author)
Paper | Code | Project page
Nickel and Diming Your GAN: A Dual-Method Approach to Enhancing GAN Efficiency via Knowledge Distillation
Sangyeop Yeo, Yoojin Jang, Jaejun Yoo
ECCV 2024 (Corresponding author)
Paper | Code (coming soon) | Project page
STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models
Pum Jun Kim, SeoJun Kim, Jaejun Yoo
ICLR 2024 (Corresponding author)
Paper | Code | Project page
TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models
Pum Jun Kim, Yoojin Jang, Jisu Kim, Jaejun Yoo
NeurIPS 2023 (Corresponding author)
Paper | Code | Project page
Can We Find Strong Lottery Tickets in Generative Models?
Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo
AAAI 2023 (Corresponding author)
Rethinking the Truly Unsupervised Image-to-Image Translation
Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim
ICCV 2021 (Research Mentor)
Time-Dependent Deep Image Prior for Dynamic MRI
Jaejun Yoo, Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser
IEEE TMI 2021 (JCR Impact Factor rank: top 10%) (First author)
Reliable Fidelity and Diversity Metrics for Generative Models
Muhammad Ferjad Naeem*, Seong Joon Oh*, Youngjung Uh, Yunjey Choi, Jaejun Yoo
ICML 2020 (Corresponding author)
Paper | Code | Video (En) | Video (Kr)
Photorealistic Style Transfer via Wavelet Transforms
Jaejun Yoo*, Youngjung Uh*, Sanghyuk Chun*, Byeongkyu Kang, and Jung-Woo Ha
ICCV 2019 (Co-first author)
Paper | Code | Video (En) | Video (Kr)
Selected Talks
Image Enhancement Techniques: CutBlur, WCT2, and SimUSR
CVPR 2020 (Naver LABS)
Building Practical Style Transfer Models with Signal Processing Theory (Better Faster Stronger Transfer)
Deview 2019