06/2025: Invited talks at CVPR 2025 workshops on "AI Assistants for the Real-world", "MetaFood", and "Transformers for Vision".
12/2024: ShowUI received the Outstanding Paper Award at the NeurIPS Open-World Agents Workshop 2024.
12/2024: Invited talk at ACCV 2024, Rich Media with Generative AI Workshop.
11/2024: Serving as Lead Area Chair for CVPR 2025.
10/2024: Invited talk at ACM MM 2024, Large Generative Models Meet Multimodal Applications (LGM3A) Workshop.
09/2024: Invited talk at ECCV 2024, Tutorial on "Recent Advances in Video Content Understanding and Generation".
06/2024: Invited talk at CVPR 2024, Prompting in Vision Workshop.
06/2024: EgoVLP received the Egocentric Vision (EgoVis) 2022/2023 Distinguished Paper Award.
04/2024: Received the Outstanding Early-Career Award at NUS.
02/2024: 12 papers to appear at CVPR 2024.
02/2024: Tutorial "Diffusion-based Video Generative Models" and the LOVEU workshop to appear at CVPR 2024.
12/2023: Congrats to Jay for the IDS Student Research Achievement Award 2023!
12/2023: Congrats to Jiawei for the SDSC Dissertation Research Fellowship 2023!
10/2023: Keynote talk at ACM Multimedia 2023, Large Generative Models Meet Multimodal Applications (LGM3A) Workshop.
08/2023: Congrats to Kevin for winning the Pattern Recognition and Machine Intelligence Association (PREMIA) Best Paper Award 2023 (Gold Award)!
07/2023: Congrats to our lab's first undergraduate alumna, Benita Wong, on joining Stanford University for graduate study. Best wishes!
07/2023: 12 papers accepted by ICCV 2023.
06/2023: Received a research gift from Adobe.
01/2023: Received the College Teaching Excellence Award.
10/2022: Received research awards and gifts from Meta and Google.
10/2022: Invited talk at ECCV 2022, DeeperAction Workshop.
06/2022: Keynote speech at CVPR 2022, International Challenge on Activity Recognition (ActivityNet) Workshop.
06/2022: Selected as a best paper finalist at CVPR 2022.
05/2022: Selected for the Forbes 30 Under 30 Asia list, Class of 2022.
05/2022: Serving as Program Chair for BigMM 2022.
10/2021: Happy to introduce the Ego4D dataset - 3K hours of egocentric video to lay the groundwork for AI perception in AR and robotics. [NUS News] [Facebook's post] [CNA FM938 Tech Talk] Covered by CNBC, MarkTechPost, The Conversation, and Insider.