Welcome to Show Lab at National University of Singapore!
We research multimodal intelligence, which studies the synergy and conversion among modalities including image, video, language, audio, and tactile signals. This involves techniques such as:
Video Understanding, e.g., video LLMs, robot learning.
Multi-modal, e.g., vision+language, vision+tactile.
Video Generation, e.g., video diffusion models, world models.
Asst Prof. Mike Zheng Shou
Presidential Young Professor, Fellow of the National Research Foundation
Links: [Twitter] [Google Scholar] [Zhihu]
Bio: Mike Shou is a tenure-track Assistant Professor (Presidential Young Professorship) at National University of Singapore. He was previously a Research Scientist at Facebook AI in the Bay Area. He obtained his Ph.D. degree at Columbia University in the City of New York, working with Prof. Shih-Fu Chang. He was a Best Paper Finalist at CVPR'22 and received a Best Student Paper nomination at CVPR'17, the PREMIA Best Paper Award 2023, and the EgoVis Distinguished Paper Award 2022/23. His team won 1st place in international challenges including ActivityNet 2017, EPIC-Kitchens 2022, and Ego4D 2022 & 2023. He regularly serves as Area Chair for top-tier artificial intelligence conferences including CVPR, ECCV, ICCV, and ACM MM. He is a Singapore Technologies Engineering Distinguished Professor and a Fellow of the National Research Foundation Singapore. He is on the Forbes 30 Under 30 Asia list.
Contact: mike.zheng.shou AT gmail.com
(For course-related matters, please refer to the page under Teaching.)
Press:
12/2023: Released a 3-hour tutorial on Video Diffusion Models.
02/2022: Featured in NRF magazine, page 12.
10/2021: Interview with NUS Comms.
News:
06/2025: Invited talks at CVPR 2025 workshops on "AI Assistants for the Real-world", "MetaFood", "Transformers for Vision".
12/2024: ShowUI received Outstanding Paper Award at NeurIPS Open-World Agents Workshop 2024.
12/2024: Invited talk at ACCV 2024, Rich Media with Generative AI Workshop.
11/2024: Serving as Lead Area Chair for CVPR 2025.
10/2024: Invited talks at ACM MM 2024, Large Generative Models Meet Multimodal Applications (LGM3A) Workshop.
09/2024: Invited talk at ECCV 2024, Tutorial on "Recent Advances in Video Content Understanding and Generation"
06/2024: Invited talk at CVPR 2024, Prompting in Vision Workshop.
06/2024: EgoVLP received Egocentric Vision (EgoVis) 2022/2023 Distinguished Paper Award.
04/2024: Received Outstanding Early-Career Award at NUS.
02/2024: 12 papers to appear at CVPR 2024.
02/2024: Tutorial "Diffusion-based Video Generative Models" and LOVEU workshop to appear at CVPR 2024.
12/2023: Congrats to Jay for the IDS Student Research Achievement Award 2023!
12/2023: Congrats to Jiawei for the SDSC Dissertation Research Fellowship 2023!
10/2023: Keynote talk at ACM Multimedia 2023, Large Generative Models Meet Multimodal Applications (LGM3A) Workshop.
08/2023: Congrats to Kevin for winning the Pattern Recognition and Machine Intelligence Association (PREMIA) Best Paper Award 2023 (Gold Award)!
07/2023: Congrats to our lab's first undergraduate alumna, Benita Wong, on joining Stanford University for graduate study. Best wishes!
07/2023: 12 papers accepted by ICCV 2023.
06/2023: Received a research gift from Adobe.
01/2023: Received College Teaching Excellence Award.
10/2022: Received research awards and gifts from Meta and Google.
10/2022: Invited talk at ECCV 2022, DeeperAction Workshop.
06/2022: Keynote speech at CVPR 2022, International Challenge on Activity Recognition (ActivityNet) Workshop.
06/2022: Selected as a Best Paper Finalist at CVPR'22.
05/2022: Selected for the Forbes 30 Under 30 Asia list, Class of 2022.
05/2022: Serving as Program Chair for BigMM 2022.
10/2021: Happy to introduce the Ego4D dataset: 3K hours of egocentric videos to lay the groundwork of AI perception for AR and robotics. [NUS News] [Facebook's post] [CNA FM938 Tech Talk] [CNBC] [MarkTechPost] [The Conversation] [Insider]