At M³ Lab, we explore the frontiers of multimedia and multimodal intelligence to advance the perception, quality, and generation of next-generation media.
Our research spans mobile cloud gaming, VR avatars, AI-generated content, and immersive environments, laying the foundation for intelligent systems that seamlessly blend human perception with real-time multimedia technologies.
We aim to support the evolving metaverse, where interaction, creativity, and perception converge across physical and virtual domains.
Prospective Students
I am actively recruiting undergraduate, master's, and Ph.D. students to join M³ Lab at NYCU. Our research focuses on advancing multimedia, multimodal perception, and machine learning techniques to enhance, understand, and generate intelligent media content.
If you are interested in joining our group, please check out this [introductory slide] and fill out the appropriate Google Form below:
Potential Research Directions
Perceptual Quality Assessment for AI-Generated Multimedia Content
Quality and Perception in Immersive and 3D Metaverse Environments
Robust Multimedia Quality Assessment in Challenging Environments
Aesthetic Evaluation and Style Adaptation for Creative Multimedia
Multimodal Perception and Integration for Next-Generation Media
Cross-Disciplinary Multimedia Applications