I have been an assistant professor in the Department of Computer Science at National Yang Ming Chiao Tung University since August 2025. I received my Ph.D. in Electrical and Computer Engineering from UT Austin (2021–2025), where I specialized in image and video quality assessment. My research focuses on developing advanced datasets and AI/ML algorithms for video quality assessment, particularly for mobile cloud gaming and human avatars in VR, leveraging multimodal perception and machine learning techniques.
During my Ph.D. at UT Austin under the supervision of Prof. Alan Bovik, I developed strong expertise in human study design, data analysis, and technological innovation. I also gained industry experience as a research intern at Disney Research | Studios.
I was selected for the Yushan Young Fellow Program (教育部玉山青年學者) by the Ministry of Education, Taiwan, in 2026, and received the Junior Chair Professor/Junior Faculty Award (賞刃校長青年講座教授) from NYCU in 2025.
Prospective Students
I am actively recruiting undergraduate, master's, and Ph.D. students to join the M³ Lab at NYCU. Our research focuses on advancing multimedia, multimodal perception, and machine learning techniques to enhance, understand, and generate intelligent media content.
If you are interested in joining our group, please check out this [introductory slide] and fill out the appropriate Google Form below:
If you did not fill out the form, I will not respond to canned/template emails.
Research Interests
My research lies at the intersection of human perception, multimedia signal processing, and deep learning, with a focus on perceptual quality assessment and enhancement for interactive, immersive, and generative media. I aim to build methodologies that are both scientifically grounded and practically deployable across real-world multimedia applications.
Perceptual Quality Assessment & QoE: NR-IQA/NR-VQA and subjective QoE modeling for real-world multimedia.
Trustworthy Quality Modeling: robustness, reliability, and security-aware evaluation of perceptual metrics.
Immersive & 3D Media: VR/XR, omnidirectional video, avatars, and 3D media quality assessment.
Responsible Generative AI Evaluation: culture-aware evaluation, quality control for generative content, controllable editing evaluation, and aesthetics/style assessment.
Efficient Enhancement & Multimodal Learning: real-time generative enhancement, multimodal fusion (visual/audio/depth), and domain-specific reconstruction.
News
Feb 2026 🎊 Received Award: Yushan Young Fellow Program (教育部玉山青年學者) from the Ministry of Education, Taiwan.
Jan 2026 🎊 Paper Accepted by IEEE TIP: HoloQA: Full Reference Video Quality Assessor of Rendered Human Avatars in Virtual Reality.
Aug 2025 🎊 Received Award: Junior Chair Professor/Junior Faculty Award (賞刃校長青年講座教授) from NYCU.
Jul 2025 🎊 Paper Accepted by SIGGRAPH Conference Papers ’25 (August 10–14, 2025, Vancouver, BC, Canada): FaceExpressions-70k: A Dataset of Perceived Expression Differences.
Dec 2024 🎤 Seminar at NTU: Delivered research presentations to master's students in EE and CSIE. [Slides]
Oct 2024 🎊 Paper Accepted by IEEE TIP: Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality.
Mar 2024 🎤 Talk: Delivered a presentation on my research as part of the ECE Outstanding Student Lecture Series for prospective PhD students. [Certificate] [Slides]
Jun 2023 🎤 Talk: Delivered a presentation on Video Quality Assessment for Cloud Gaming at the VQEG Meeting.
Jun 2023 🎊 Paper Accepted by IEEE TIP: Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos.
Mar 2023 🎊 Paper Accepted by IEEE SPL: GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content.