The Visual Intelligence and Generation Lab (VIG Lab) aims to bring innovation to the world by developing Artificial Intelligence based on visual signals. We are pioneering Visual Generative AI that creates and understands complex visual worlds, spanning images, videos, and 3D environments. We also advance Visual Multimodal AI, which equips systems with cognitive reasoning by combining vision and language. By integrating these multimodal capabilities with generative models, we are dedicated to shaping the future of Physical AI: intelligent agents that can perceive, reason, and act in the real world.
Specific tasks addressed in our lab
Visual Generative AI
Image / Video / 3D Generation
Diffusion Models / Flow Matching
Gaussian Splatting / NeRF
Visual Multimodal AI
Large Vision-Language Models (LVLMs)
Image / Video Retrieval and Detection
Vision-based Dialogue & Reasoning
If you are interested in applying for a position (Intern / MS / Ph.D.) in our lab, please contact me (sunjaeyoon@cau.ac.kr).
Recently Accepted Papers
[Feb/2026] One paper accepted (ICLR 2026)
[Jan/2026] One paper accepted (EACL 2026)
[Nov/2025] One paper accepted (AAAI 2026)
[Aug/2025] One paper accepted (EMNLP 2025)
[Jun/2025] One paper accepted (ICCV 2025)
[May/2025] One paper accepted (ICML 2025)
[Feb/2025] One paper accepted (CVPR 2025)
Recent News
[Feb/2026] Opened the Visual Intelligence and Generation Lab
[Sep/2025] Received the Doctoral Consortium Award at ICCV 2025 (with mentor Umar Iqbal, NVIDIA Sr. Research Manager)
[Aug/2025] Joined EACL as an Area Chair (S. Yoon)
[Mar/2025] Received the Ph.D. Dissertation Award at KAIST (S. Yoon)