HYU Natural Language Processing (NLP) Laboratory

Welcome to the Natural Language Processing (NLP) Lab at Hanyang University.

We study a wide range of problems and approaches in natural language processing, chiefly based on machine learning and AI technologies.

We are looking for MS/Ph.D. students (and interns) who are self-motivated and passionate about doing research in NLP.

Please submit your information on this page if you are interested in applying to our lab.

News!

(24/02/20) One paper (BlendX: Complex Multi-Intent Detection with Blended Patterns) has been accepted for presentation at LREC-COLING 2024. Big congrats to Yejin, Jungyeon, and Kangsan!

(24/02/15) Jinhyeon, Young Hyun, Taejun, and Seong Hoon have graduated with their Master's degrees. We wish them all the best!

(23/11/22) One paper has been accepted to KSC 2023. Congrats to Jii!

(23/10/08) Two papers (X-SNS: Cross-Lingual Transfer Prediction through Sub-Network Similarity & Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP) have been accepted at EMNLP 2023 (Findings). Notably, X-SNS is our first internal project to be showcased at a major conference. Congrats to Taejun, Jinhyeon, Deokyoung, and Seong Hoon. See you in Singapore!

Recent Publications

BlendX: Complex Multi-Intent Detection with Blended Patterns (LREC-COLING 2024)

Abstract

Task-oriented dialogue (TOD) systems are commonly designed with the presumption that each utterance represents a single intent. However, this assumption may not accurately reflect real-world situations, where users frequently express multiple intents within a single utterance. While there is an emerging interest in multi-intent detection (MID), existing in-domain datasets such as MixATIS and MixSNIPS have limitations in their formulation. To address these issues, we present BlendX, a suite of refined datasets featuring more diverse patterns than their predecessors, elevating both their complexity and diversity. For dataset construction, we utilize rule-based heuristics as well as a generative tool, OpenAI's ChatGPT, which is augmented with a similarity-driven strategy for utterance selection. To ensure the quality of the proposed datasets, we also introduce three novel metrics that assess the statistical properties of an utterance related to word count, conjunction use, and pronoun usage. Extensive experiments on BlendX reveal that state-of-the-art MID models struggle with the challenges posed by the new datasets, highlighting the need to reexamine the current state of the MID field.
The dataset is available at https://github.com/HYU-NLP/BlendX.
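
For intuition, here is a minimal Python sketch of per-utterance statistics along the lines of the three quality metrics mentioned above (word count, conjunction use, pronoun usage). The paper defines the exact formulations; the function name and word lists below are illustrative assumptions, not taken from BlendX.

```python
# Illustrative sketch only: the exact metric definitions come from the paper;
# the word lists here are assumed placeholders for intuition.
CONJUNCTIONS = {"and", "but", "then", "also", "plus"}   # assumed word list
PRONOUNS = {"it", "they", "them", "this", "that"}       # assumed word list

def utterance_stats(utterance: str) -> dict:
    """Compute simple statistics related to word count, conjunction use,
    and pronoun usage for a single utterance."""
    tokens = utterance.lower().split()
    n = len(tokens)
    return {
        "word_count": n,
        "conjunction_ratio": sum(t in CONJUNCTIONS for t in tokens) / max(n, 1),
        "pronoun_ratio": sum(t in PRONOUNS for t in tokens) / max(n, 1),
    }

print(utterance_stats("Book a flight to Seoul and then play some jazz"))
```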

X-SNS: Cross-Lingual Transfer Prediction through Sub-Network Similarity (EMNLP 2023 Findings)

Abstract

Cross-lingual transfer (XLT) is an emergent ability of multilingual language models that preserves their performance on a task to a significant extent when evaluated in languages that were not included in the fine-tuning process. While English, due to its widespread usage, is typically regarded as the primary language for model adaptation in various tasks, recent studies have revealed that the efficacy of XLT can be amplified by selecting the most appropriate source languages based on specific conditions. In this work, we propose the utilization of sub-network similarity between two languages as a proxy for predicting the compatibility of the languages in the context of XLT. Our approach is model-oriented, better reflecting the inner workings of foundation models. In addition, it requires only a moderate amount of raw text from candidate languages, distinguishing it from the majority of previous methods that rely on external resources. In experiments, we demonstrate that our method is more effective than baselines across diverse tasks. Specifically, it shows proficiency in ranking candidates for zero-shot XLT, achieving an improvement of 4.6% on average in terms of NDCG@3. We also provide extensive analyses that confirm the utility of sub-networks for XLT prediction.
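
As a rough illustration of the core idea, the sketch below treats each language's sub-network as a binary mask over model parameters and scores language compatibility by the overlap of those masks. How importance scores are actually derived from raw text of each candidate language follows the paper; everything here, including the Jaccard overlap and the `top_k_mask` helper, is an assumed simplification for intuition, not the paper's procedure.

```python
import numpy as np

def top_k_mask(scores: np.ndarray, ratio: float = 0.1) -> np.ndarray:
    """Keep the top `ratio` fraction of parameters by importance score,
    yielding a binary sub-network mask (one entry per parameter)."""
    k = max(1, int(len(scores) * ratio))
    mask = np.zeros(len(scores), dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

def subnetwork_similarity(scores_a: np.ndarray, scores_b: np.ndarray,
                          ratio: float = 0.1) -> float:
    """Jaccard overlap between two languages' sub-network masks,
    used here as a toy proxy for XLT compatibility."""
    a, b = top_k_mask(scores_a, ratio), top_k_mask(scores_b, ratio)
    return (a & b).sum() / (a | b).sum()

# Toy example with random "importance" scores for two languages.
rng = np.random.default_rng(0)
sim = subnetwork_similarity(rng.random(10_000), rng.random(10_000))
print(f"sub-network similarity: {sim:.3f}")
```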