Let there be LAIT!

"Modeling the Unseen, Understanding the Unknown."

Going beyond the limits of human perception, we aim to model and understand the world we see and sense with ever greater precision, and in doing so to transform the way we experience and interpret it. We also believe that generative models are the most powerful and effective approach to building systems that understand the world. Beyond raw performance, we pursue imaging algorithms that are explainable, efficient, and intuitive to use.

Our mission is to redefine the boundaries of what can be seen and understood. 

By combining the rigor of signal processing with the creativity of machine learning, we turn imagination into reality. We build algorithms that analyze and synthesize images, audio, and video to construct comprehensive "world models" of the seen and the sensed. Along the way, our models uncover mathematical insights and reveal the beauty of hidden patterns in nature, pushing the frontier of signal modeling and interpretation.

News

For more details, please check Research Questions. If you are interested in joining our lab, please see Call for Students & Interns.

Publications (partial list)

HVDM: Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation

Kihong Kim, Haneol Lee, Jihye Park, Seyeon Kim, Kwanghee Lee, Seungryong Kim, Jaejun Yoo 

ECCV 2024 (Corresponding author)

Paper | Code | Project page

PosterLlama: Bridging Design Ability of Language Model to Content-Aware Layout Generation

Jaejung Seol, Seojun Kim, Jaejun Yoo 

ECCV 2024 (Corresponding author)

Paper | Code | Project page

Nickel and Diming Your GAN: A Dual-Method Approach to Enhancing GAN Efficiency via Knowledge Distillation

Sangyeop Yeo, Yoojin Jang, Jaejun Yoo 

ECCV 2024 (Corresponding author)

Paper | Code (coming soon) | Project page

STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models

Pum Jun Kim, SeoJun Kim, Jaejun Yoo 

ICLR 2024 (Corresponding author)

Paper | Code | Project page

TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models

Pum Jun Kim, Yoojin Jang, Jisu Kim, Jaejun Yoo

NeurIPS 2023 (Corresponding author)

Paper | Code | Project page

Can We Find Strong Lottery Tickets in Generative Models?

Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo

AAAI 2023 (Corresponding author)

Paper | Supplementary | Project page

Rethinking the Truly Unsupervised Image-to-Image Translation

Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim

ICCV 2021 (Research Mentor)

Paper (will be updated soon) | Code 

Time-Dependent Deep Image Prior for Dynamic MRI

Jaejun Yoo, Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser

IEEE TMI (top 10% journal by JCR Impact Factor) 2021 (First author)

Paper (Journal) | Paper (arXiv ver.) | Code

Reliable Fidelity and Diversity Metrics for Generative Models

Muhammad Ferjad Naeem*, Seong Joon Oh*, Youngjung Uh, Yunjey Choi, Jaejun Yoo

ICML 2020 (Corresponding author)

Paper | Code | Video (En) | Video (Kr)

StarGAN v2: Diverse Image Synthesis for Multiple Domains

Yunjey Choi*, Youngjung Uh*, Jaejun Yoo*, Jung-Woo Ha

CVPR 2020 (Co-first author)

Paper | Code | Video

Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy

Jaejun Yoo*, Namhyuk Ahn*, Kyung Ah Sohn

CVPR 2020 (Co-first author)

Paper | Code | Video

Photorealistic Style Transfer via Wavelet Transforms

Jaejun Yoo*, Youngjung Uh*, Sanghyuk Chun*, Byeongkyu Kang, Jung-Woo Ha

ICCV 2019 (Co-first author)

Paper | Code | Video (En) | Video (Kr)

Selected Talks

Image Enhancement Techniques: CutBlur, WCT2, and SimUSR

CVPR 2020 (Naver LABS)

Building Practical Style Transfer Models with Signal Processing Theory (Better Faster Stronger Transfer)

Deview 2019