Research

Research Interests

Our lab works on a broad range of problems in natural language processing (NLP), with a primary emphasis on enhancing the trustworthiness of language models. Our main research goal is to develop factual and safe language models that deliver reliable information to users. To this end, we study the effective use of information across NLP tasks such as document summarization and conversational question answering (QA).

Document Summarization, Knowledge Grounded Dialog, Personalized Dialog, Task-Oriented Dialog

Fact Checking, Factual Consistency Evaluation, Factual Error Correction, Context-aware Decoding, Hate Speech Detection

Joint work with Adobe Research.

In-Context Learning, Data Augmentation, Unlearning

Joint work with LG AI Research.

Query Reformulation, Retrieval Augmented Generation, Document Clustering

Collaborators

Our group encourages collaboration with researchers from other institutions and research groups to broaden the scope of our research. We conduct, or plan to conduct, joint research with the following institutions.

Research Projects

RAG-based Conversational Question Answering System (Funded by URP)

Factual Error Correction of Abstractive Summaries using Large Language Models (Funded by Adobe Research)

Improving Speech Recognition Accuracy Using LLMs (Funded by 120 Dasan Call Foundation)

Artificial Intelligence Graduate School Program (Funded by IITP)

Risk Assessment and Prediction with Large Language Models (Funded by Doosan Enerbility)

Anomaly Detection in Sequential Data (Funded by Doosan Fuel Cell, Completed)

Evaluating Medical Multi-Document Summarization System

Generative Conversational Aspect-based Sentiment Analysis