Publications
Journal Articles
FaceExpressions-70k: A Dataset of Perceived Expression Differences
Avinab Saha, Yu-Chih Chen, Jean-Charles Bazin, Christian Häne, Ioannis Katsavounidis, Alexandre Chapiro, Alan C Bovik
SIGGRAPH Conference Papers '25
[HTML] [Project Page] [Github] [Video]
We introduce the first large-scale public dataset of realistic human faces annotated with perceived expression difference scores, enabling new research in facial expression perception. The dataset covers a diverse range of expressions and measures both inter- and intra-expression differences across multiple actors.
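As a rough illustration only, one way such pairwise perceived-difference annotations could be represented in code; the field names below are hypothetical and are not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ExpressionPairAnnotation:
    """One perceived expression-difference annotation (illustrative fields only)."""
    actor_id: str            # actor performing both expressions
    image_a: str             # first face image of the pair
    image_b: str             # second face image of the pair
    intra_expression: bool   # True if both images show the same expression category
    difference_score: float  # perceived expression difference from human raters

# Hypothetical example record
pair = ExpressionPairAnnotation("actor_01", "smile_a.png", "smile_b.png", True, 0.23)
print(pair)
```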
HoloQA: Full Reference Video Quality Assessor for Rendered Human Avatar in Virtual Reality
Avinab Saha, Yu-Chih Chen, Alexandre Chapiro, Christian Häne, Jean-Charles Bazin, Bo Qiu, Stefano Zanetti, Ioannis Katsavounidis, Alan C Bovik
IEEE Transactions on Image Processing, under review
[Arxiv] [HTML] [Code]
HoloQA leverages advances in visual neuroscience, information theory, and self-supervised deep learning to assess the quality of rendered digital human holograms in VR and AR.
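For readers unfamiliar with the full-reference setting (a generic sketch, not HoloQA's actual algorithm): a full-reference assessor scores the rendered video against its pristine reference frame by frame. The example below uses SSIM from scikit-image as a stand-in per-frame metric.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def full_reference_score(ref_frames, dist_frames):
    """Average a per-frame similarity between reference and rendered frames.
    SSIM is only a placeholder for a real full-reference quality metric."""
    scores = [ssim(r, d, data_range=1.0) for r, d in zip(ref_frames, dist_frames)]
    return float(np.mean(scores))

# Toy example: a lightly distorted copy of a random "reference" video
rng = np.random.default_rng(0)
ref = [rng.random((64, 64)) for _ in range(5)]
dist = [np.clip(f + 0.05 * rng.standard_normal(f.shape), 0.0, 1.0) for f in ref]
print(full_reference_score(ref, dist))
```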
Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality
Yu-Chih Chen, Avinab Saha, Alexandre Chapiro, Christian Häne, Jean-Charles Bazin, Bo Qiu, Stefano Zanetti, Ioannis Katsavounidis, Alan C Bovik
IEEE Transactions on Image Processing, 2024
[Arxiv] [HTML] [Database]
We introduce the LIVE-Meta Rendered Human Avatar VQA Database, which contains 720 human avatar videos processed with 20 different encoding settings along with the corresponding human quality judgments, and use it to evaluate and compare video quality models, including the new HoloQA model.
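Quality models are typically benchmarked on such a database by correlating their predictions with the human mean opinion scores (MOS); a minimal sketch with SciPy, using made-up numbers.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical model predictions and subjective MOS for a few test videos
predicted = np.array([62.1, 45.3, 78.9, 55.0, 70.2])
mos = np.array([60.0, 48.5, 80.1, 52.3, 72.8])

srocc, _ = spearmanr(predicted, mos)  # rank-order (monotonicity) agreement
plcc, _ = pearsonr(predicted, mos)    # linear agreement
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```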
Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos
Avinab Saha, Yu-Chih Chen, Chase Davis, Bo Qiu, Xiaoming Wang, Rahul Gowda, Ioannis Katsavounidis, Alan C Bovik
IEEE Transactions on Image Processing, 2023
[Arxiv] [HTML] [Database]
A large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) conducted on a diverse set of gaming videos, along with an evaluation of objective video quality models on the resulting database.
GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content
Yu-Chih Chen, Avinab Saha, Chase Davis, Bo Qiu, Xiaoming Wang, Rahul Gowda, Ioannis Katsavounidis, Alan C Bovik
IEEE Signal Processing Letters, 2023
[Arxiv] [HTML] [Code]
A gaming-specific No-Reference Video Quality Assessment model that predicts the quality of mobile cloud gaming videos without access to a pristine reference.
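At a high level (a generic sketch under simplifying assumptions, not GAMIVAL's actual feature set), a no-reference model maps features computed from the distorted video alone to a quality score via a learned regressor.

```python
import numpy as np
from sklearn.svm import SVR

def toy_nr_features(frames):
    """Toy no-reference features: intensity and temporal-difference statistics.
    A placeholder for the statistical/learned features a real NR-VQA model uses."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))
    return np.array([frames.mean(), frames.std(), diffs.mean(), diffs.std()])

# Hypothetical training videos and their subjective quality labels
rng = np.random.default_rng(0)
videos = [rng.random((8, 32, 32)) for _ in range(6)]
X = np.stack([toy_nr_features(v) for v in videos])
y = np.array([55.0, 62.0, 48.0, 70.0, 66.0, 58.0])  # made-up MOS labels

model = SVR(kernel="rbf").fit(X, y)  # regress features -> quality score
print(model.predict(X[:2]))          # predictions require no reference video
```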
Intelligent Glioma Grading Based on Deep Transfer Learning of MRI Radiomic Features
Chung-Ming Lo, Yu-Chih Chen, Rui-Cian Weng, Kevin Li-Chun Hsieh
Applied Sciences, 2019 (SCI; 2019 IF=2.474; General Engineering: 85/299)
[HTML]
A novel approach for classifying glioma grades using deep transfer learning on MRI-derived radiomic features.
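A minimal sketch of the general transfer-learning recipe (illustrative, not the paper's exact pipeline): reuse an ImageNet-pretrained CNN as a feature extractor for MRI inputs, then train a small classifier on top.

```python
import torch
from torchvision import models

# ImageNet-pretrained CNN with its classification head removed,
# acting as a fixed feature extractor for (preprocessed) MRI slices.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Lightweight classifier trained on the transferred features (e.g., glioma grades).
classifier = torch.nn.Linear(512, 2)

with torch.no_grad():
    x = torch.randn(4, 3, 224, 224)  # stand-in batch: MRI slices replicated to 3 channels
    features = backbone(x)           # shape (4, 512)
logits = classifier(features)
print(logits.shape)                  # torch.Size([4, 2])
```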
Conference Papers
Chung-Ming Lo, Yu-Chih Chen, Kevin Li-Chun Hsieh. “A transferred deep learning brain tumor classification model”. International Conference on Image, Video Processing and Artificial Intelligence, 15-17 August 2018, Shanghai.