Publications
Journal Articles
HoloQA: Full Reference Video Quality Assessor for Rendered Human Avatar in Virtual Reality
Avinab Saha, Yu-Chih Chen, Alexandre Chapiro, Christian Häne, Jean-Charles Bazin, Bo Qiu, Stefano Zanetti, Ioannis Katsavounidis, Alan C Bovik
IEEE Transactions on Image Processing, 2026
[Paper] [Code]
HoloQA is a full-reference VQA model for rendered digital human avatars in VR/AR, combining neuroscience- and information-theoretic perceptual features with self-supervised deep semantic features in a multi-level Mixture-of-Experts framework.
Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality
Yu-Chih Chen, Avinab Saha, Alexandre Chapiro, Christian Häne, Jean-Charles Bazin, Bo Qiu, Stefano Zanetti, Ioannis Katsavounidis, Alan C Bovik
IEEE Transactions on Image Processing, 2024
[arXiv] [Paper] [Database]
A new database, the LIVE-Meta Rendered Human Avatar VQA Database, containing 720 human avatar videos processed with 20 different encoding settings, along with corresponding human quality judgments. The database is used to evaluate and compare video quality models, including the new HoloQA model.
Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos
Avinab Saha, Yu-Chih Chen, Chase Davis, Bo Qiu, Xiaoming Wang, Rahul Gowda, Ioannis Katsavounidis, Alan C Bovik
IEEE Transactions on Image Processing, 2023
[arXiv] [Paper] [Database]
A large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos.
GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content
Yu-Chih Chen, Avinab Saha, Chase Davis, Bo Qiu, Xiaoming Wang, Rahul Gowda, Ioannis Katsavounidis, Alan C Bovik
IEEE Signal Processing Letters, 2023
[arXiv] [Paper] [Code]
A gaming-specific No-Reference Video Quality Assessment model designed to predict the quality of mobile cloud gaming videos without access to a pristine reference.
Intelligent Glioma Grading Based on Deep Transfer Learning of MRI Radiomic Features
Chung-Ming Lo, Yu-Chih Chen, Rui-Cian Weng, Kevin Li-Chun Hsieh
Applied Sciences, 2019
[Paper]
A novel approach for classifying glioma grades using deep transfer learning on MRI-derived radiomic features.
Conference Papers
Learning Perceptual Representations for Gaming NR-VQA with Multi-Task FR Signals
Yu-Chih Chen, Michael Wang, Chieh-Dun Wen, Kai-Siang Ma, Avinab Saha, Li-Heng Chen, Alan C Bovik
arXiv, 2026 (Under Review)
[arXiv] [Paper] [Code]
MTL-VQA is a multi-task learning model for no-reference gaming video quality assessment that leverages multiple full-reference metrics as supervisory signals for label-free pretraining with adaptive task weighting. It learns transferable perceptual features and achieves competitive performance on gaming video datasets in both MOS-supervised and label-efficient/self-supervised settings.
Stream-DiffVSR: Low-Latency Streamable Video Super-Resolution via Auto-Regressive Diffusion
Hau-Shiang Shiu, Chin-Yang Lin, Zhixiang Wang, Chi-Wei Hsiao, Po-Fan Yu, Yu-Chih Chen, Yu-Lun Liu
arXiv, 2025 (Under Review)
[arXiv] [Project Page] [Code]
Stream-DiffVSR achieves the lowest latency reported for diffusion-based VSR, reducing initial delay from over 4,600 seconds to 0.328 seconds, thereby making it the first diffusion VSR method suitable for low-latency online deployment.
FaceExpressions-70k: A Dataset of Perceived Expression Differences
Avinab Saha, Yu-Chih Chen, Jean-Charles Bazin, Christian Häne, Ioannis Katsavounidis, Alexandre Chapiro, Alan C Bovik
ACM SIGGRAPH Conference Papers, 2025
[Paper] [Project Page] [GitHub] [Video]
We introduce the first large-scale public dataset of realistic human faces annotated with perceived expression difference scores, enabling new research in facial expression perception. Our dataset covers a diverse range of expressions and measures both inter- and intra-expression differences across multiple actors.
A Transferred Deep Learning Brain Tumor Classification Model
Chung-Ming Lo, Yu-Chih Chen, Kevin Li-Chun Hsieh
International Conference on Image, Video Processing and Artificial Intelligence, Shanghai, 2018