2024
(New) [C05] Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering
ChaeHun Park, Koanho Lee, Hyesu Lim, Jaeseok Kim, Junmo Park, Yu-Jung Heo, Du-Seong Chang, and Jaegul Choo. ACL 2024 Findings Accepted.
TL;DR: Our analysis reveals that translated texts contain unique characteristics, distinct from human-written ones, referred to as translation artifacts. We find that these artifacts can significantly affect cross-lingual VQA models, as confirmed by extensive experiments across diverse models, languages, and translation processes. [PDF]
[C04] Slice and Conquer: A Planar-to-3D Framework for Efficient Interactive Segmentation of Volumetric Images
Wonwoo Cho*, Dongmin Choi*, Hyesu Lim*, Jinho Choi, Saemee Choi, Hyun-seok Min, Sungbin Lim, and Jaegul Choo. (*: equal contributions) WACV 2024 Accepted.
TL;DR: We propose Slice-and-Conquer, a planar-to-3D framework for interactive 3D image segmentation that formulates volumetric mask construction as a two-stage pipeline: 1) 2D interactive segmentation and 2) guided segmentation. [PDF]
2023
[W01] Towards Calibrated Robust Fine-Tuning of Vision-Language Models
Changdae Oh, Mijoo Kim, Hyesu Lim, Junhyeok Park, Euiseog Jeong, Zhi-Qi Cheng, and Kyungwoo Song. NeurIPS 2023 Workshop on Distribution Shifts (DistShift) Accepted.
TL;DR: We initiate the investigation of vision-language model (VLM) calibration after fine-tuning under distribution shifts and introduce simple yet effective approaches to reduce calibration error. [PDF]
[C03] PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration
Minseok Choi, Hyesu Lim, and Jaegul Choo. IJCNLP-AACL 2023 Accepted.
TL;DR: We propose a document-level relation extraction (RE) method that calibrates prediction scores based on relation descriptions, improving both calibration and accuracy when labeled data is limited. [PDF]
[C02] TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation
Hyesu Lim, Byeonggeun Kim, Jaegul Choo, and Sungha Choi. ICLR 2023 Accepted (31.8% acceptance rate).
TL;DR: We propose a test-time batch normalization method that interpolates between source and current batch statistics according to each layer's sensitivity to domain shift, showing robust performance across various realistic evaluation scenarios. [PDF] [Webpage]
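A minimal PyTorch sketch of the interpolation idea, not the paper's implementation: the per-layer weight `alpha` is fixed here purely for illustration, whereas the paper derives it from each layer's domain-shift sensitivity.

```python
import torch
import torch.nn as nn

class InterpolatedBN2d(nn.BatchNorm2d):
    """Test-time BN that blends source and current-batch statistics."""

    def __init__(self, num_features, alpha=0.7):
        super().__init__(num_features)
        self.alpha = alpha  # 1.0 -> trust source stats, 0.0 -> trust test batch

    def forward(self, x):
        # Statistics of the current test batch.
        batch_mean = x.mean(dim=(0, 2, 3))
        batch_var = x.var(dim=(0, 2, 3), unbiased=False)
        # Interpolate with the source statistics stored during training.
        mean = self.alpha * self.running_mean + (1 - self.alpha) * batch_mean
        var = self.alpha * self.running_var + (1 - self.alpha) * batch_var
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(
            var[None, :, None, None] + self.eps
        )
        return self.weight[None, :, None, None] * x_hat + self.bias[None, :, None, None]
```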
2021
[C01] AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain
Jimin Hong*, Taehee Kim*, Hyesu Lim*, and Jaegul Choo. (*: equal contributions) EMNLP 2021 Accepted.
TL;DR: We propose to treat the vocabulary of a pre-trained language model as an optimizable parameter, updating it by expanding it with domain-specific vocabulary derived from tokenization statistics. [PDF] [Code] [Video&Slide]
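A minimal sketch of the expansion step, assuming Hugging Face transformers as the tooling and a toy token list; the paper instead selects tokens from tokenization statistics on the downstream corpus.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Illustrative domain-specific tokens only (hypothetical, not from the paper).
domain_tokens = ["immunoglobulin", "electrocardiogram"]
num_added = tokenizer.add_tokens(domain_tokens)

# Grow the embedding matrix so the new tokens receive trainable vectors.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} domain-specific tokens.")
```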
Selected Granted Patents
[P02] Adapting machine learning models for domain-shifted data
Hyesu Lim, Byeonggeun Kim, and Sungha Choi.
U.S. Patent No. US20240119360A1, April 11, 2024. [PDF]
[P01] Suggesting a New and Easier System Function by Detecting User's Action Sequences
Sungrack Yun, Hyoungwoo Park, Seunghan Yang, Hyesu Lim, Taekyung Kim, Jaewon Choi, and Kyu Woong Hwang.
U.S. Patent No. US20240045782A1, February 8, 2024. [PDF]