HQD-EM: Robust VQA Through Hierarchical Question Decomposition Bias Module and Ensemble Adaptive Angular Margin Loss
SeongHyeon Noh, Jae Won Cho
Mathematics, 2025 (Impact Factor 2.2, Q1 [top 10%])
[ Paper ]
Also presented at "VizWiz Grand Challenge Workshop" in conjunction with CVPR 2025
Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality
Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon, Junmo Kim
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024 (long paper, main conference)
Let Me Finish My Sentence: Video Temporal Grounding with Holistic Text Understanding
Jongbhin Woo, Hyeonggon Ryu, Youngjoon Jang, Jae Won Cho, Joon Son Chung
ACM Multimedia (MM), 2024
[ Paper ]
Grained Action Understanding with Tools in Instructional Videos
Saelyne Yang, Jaesang Yu, Jae Won Cho, Juho Kim
CVPR Workshop on Learning from Procedural Videos and Language (CVPRW), 2024
[ Paper ]
Empirical Study on Using Adapters for Debiased Visual Question Answering
Jae Won Cho, Dawit Mureja Argaw, Youngtaek Oh, Dong-Jin Kim, In So Kweon
Computer Vision and Image Understanding (CVIU), 2023 (Impact Factor 4.5)
[ Paper ]
Counterfactual Mix-Up for Visual Question Answering
Jae Won Cho*, Dong-Jin Kim*, Yunjae Jung, In So Kweon (* Equal Contribution)
IEEE Access, 2023 (Impact Factor 3.9)
[ Paper ]
Generative Bias for Robust Visual Question Answering
Jae Won Cho, Dong-Jin Kim, Hyeonggon Ryu, In So Kweon
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Received Bronze Prize, 28th Samsung Humantech Paper Award (Top 2.8%)
Excellent Paper Award at IW-FCV (International Workshop on Frontiers of Computer Vision)
Also presented at "Workshop on Open-Domain Reasoning Under Multi-Modal Settings" in conjunction with CVPR 2023
Self-Sufficient Framework for Continuous Sign Language Recognition
Youngjoon Jang, Youngtaek Oh, Jae Won Cho, Myungchul Kim, Dong-Jin Kim, In So Kweon, Joon Son Chung
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [Oral]
Recognized among the top 3% of all accepted papers
MCDAL: Maximum Classifier Discrepancy for Active Learning
Jae Won Cho*, Dong-Jin Kim*, Yunjae Jung, In So Kweon (* Equal Contribution)
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022 (Impact Factor 14.255)
[ Paper ]
Also presented at "The Workshop on Fine-Grained Visual Categorization" in conjunction with CVPR 2022
Investigating Top-k White-box and Transferable Black-box Attack
Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
[ Paper ]
Single-Modal Entropy based Active Learning for Visual Question Answering
Dong-Jin Kim*, Jae Won Cho*, Jinsoo Choi, Yunjae Jung, In So Kweon (* Equal Contribution)
British Machine Vision Conference (BMVC), 2021
[ Paper ]
LabOR: Labeling Only if Required for Domain Adaptive Semantic Segmentation
Inkyu Shin, Dong-Jin Kim, Jae Won Cho, Sanghyun Woo, Kwanyong Park, In So Kweon
IEEE/CVF International Conference on Computer Vision (ICCV), 2021 [Oral] (oral acceptance rate 3%)
[ Paper ]
Dealing with Missing Modalities in the Visual Question Answer-Difference Prediction Task through Knowledge Distillation
Jae Won Cho, Dong-Jin Kim, Jinsoo Choi, Yunjae Jung, In So Kweon
CVPR Multimodal Learning and Applications Workshop (CVPRW), 2021
[ Paper ]
Also presented at "Visual Question Answering Workshop" and "VizWiz Grand Challenge Workshop" in conjunction with CVPR 2021.