Multimodal Language Processing Lab.
KAIST Multimodal Natural Language Processing Lab (Prof. KyungTae Lim (임경태) and the MLP Lab)
https://sites.google.com/view/aailab
(2025, Aug.) Our laboratory has been selected for the "Outstanding Young Researcher (우수신진)" research project (NRF)
(2025, Aug.) Two papers have been accepted to EMNLP 2025
(2025, Aug.) Our laboratory has been selected for the "Sovereign AI Foundation Model (독자 AI 파운데이션 모델)" research project with Upstage (IITP)
(2025, July) Our laboratory has been selected for the "Generative AI Leading Talent Development (생성AI 선도인재양성)" research project with NC AI (IITP)
(2025, July) Our laboratory has been selected for the "AI Global Big Tech Development (AI 글로벌 빅테크 육성사업)" research project with KAERI (IITP)
(2025, Feb.) Our paper Unlocking Korean Verbs: A User-Friendly Exploration into the Verb Lexicon is accepted to NAACL 2025 Demo
(2025, Jan.) Our paper Unified Automated Essay Scoring and Grammatical Error Correction is accepted to Findings of NAACL 2025
(2025, Jan.) Our paper Integrating Econometrics and Artificial Intelligence to Assess the Impact of Trade on Nuclear Proliferation is accepted to Nuclear Technology (SCIE)
(2024, Dec.) Changsu Choi, a second-semester master's student, has been accepted into LG's recruitment-linked internship program.
(2024, Nov.) Our paper VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation is accepted to COLING 2025
(2024, Nov.) Our paper SCV: Light and Effective Multi-Vector Retrieval with Sequence Compressive Vectors is accepted to COLING 2025
(2024, Nov.) SeungWoo Song, a second-semester master's student, has been accepted into SK's recruitment-linked internship program.
(2024, Sep.) Our paper When the Misidentified Adverbial Phrase Functions as a Complement is accepted to Findings of EMNLP 2024
(2024, Aug.) Three students (임현석, 원인호, 신동재) have been awarded the NRF Master’s Scholarship Grant (석사과정생연구장려금). Congrats!
(2024, July) Our paper Korean Grammar Error Correction Model for Everyone has received the Best Presentation Paper Award at KCC 2024. (Congratulations to undergraduate student 육정훈!!)
(2024, Mar.) Our paper X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment is accepted to Findings of NAACL 2024 (BK excellent conference)
(2024, Feb.) Our paper Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean is accepted to LREC-COLING 2024
(2024, Feb.) Our paper Towards Standardized Annotation and Parsing for Korean FrameNet is accepted to LREC-COLING 2024
(2024, Feb.) Our paper A Linguistically-Informed Annotation Strategy for Korean Semantic Role Labeling is accepted to LREC-COLING 2024
(2024, Jan.) Our paper "Sow Posture Analysis and Estrus Prediction using Closed-circuit Television Cameras" has been accepted by IEEE Access (SCIE).
(2024, Jan.) Our paper "A Novel Communication Framework of SMRs: A Prototype Development of a NLP-based System" has been accepted by Nuclear Technology (SCIE).
Introduction
What we do: The AAI Lab pursues research on language resources and Applied Artificial Intelligence (AAI) techniques.
What we research: We mainly study Natural Language Processing (NLP) and multimodal systems.
Research Topics: Large Language Models (LLMs), Vision-Language Models (VLMs), Machine Translation, and Dependency Parsing.
We are looking for colleagues (PhD, M.S., and undergraduate students) to study practical and engaging applied AI with us, such as chatbots, speech recognition, and machine translation.
Research
The KAIST Multimodal Language Processing Lab (MLP Lab) studies (1) Korean language resource design, (2) traditional natural language processing, and (3) multimodal large language models.
KORMo and Bllossom: Building and Scaling the First Korean–English 10B-Scale Foundation Models from Scratch in an Academic Lab
Research and dissemination of fully open-source Korean LLMs
Efficient bilingual tokenizer design and analysis (a minimal training sketch follows this list)
Analyzing pre-training performance across diverse data composition strategies
Domain-specialized vision–language models
Knowledge transfer using agent-style teacher models
https://huggingface.co/KORMo-Team
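As a concrete pointer for the bilingual tokenizer item above, here is a minimal sketch of training a shared Korean-English BPE tokenizer with the Hugging Face tokenizers library. The corpus file names and vocabulary size are illustrative assumptions, not the KORMo project's actual configuration.

```python
# Minimal sketch: training a bilingual (Korean-English) BPE tokenizer with the
# Hugging Face `tokenizers` library. File names and the vocabulary size are
# illustrative assumptions, not the KORMo project's actual settings.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=64_000,  # shared budget across both languages (assumed value)
    special_tokens=["[UNK]", "[BOS]", "[EOS]", "[PAD]"],
)
# Hypothetical corpus files with mixed Korean and English text, one line each.
tokenizer.train(["corpus_ko.txt", "corpus_en.txt"], trainer)

# Quick fertility check: fewer tokens per sentence indicates a more efficient
# segmentation for that language.
for text in ["멀티모달 언어모델을 연구합니다.", "We study multimodal language models."]:
    print(len(tokenizer.encode(text).tokens), text)
```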
Lead of the Bllossom project. (https://www.bllossom.ai) (https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
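For readers who want to try the released Bllossom checkpoint, the following is a minimal usage sketch with Hugging Face transformers. It assumes a CUDA-capable GPU and that the checkpoint ships a chat template (the model is Llama-3 based); the prompt is only an example.

```python
# Minimal sketch: loading and querying the Bllossom 8B checkpoint with Hugging
# Face transformers. Assumes a CUDA-capable GPU; the model ID comes from the
# Hugging Face link above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example prompt (Korean): "Could you put together a famous Seoul tour course?"
messages = [{"role": "user", "content": "서울의 유명한 관광 코스를 만들어줄래?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```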
Lead of the Universal Korean Language Resource project. (https://sites.google.com/view/universal-korean)
Visual Question Answering (VQA) systems answer a user's natural-language questions about a given input image.
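As a generic illustration of the task (not the lab's own system), a public off-the-shelf VQA model can be queried through the Hugging Face transformers pipeline; the image path and question below are hypothetical placeholders.

```python
# Minimal VQA sketch using a public off-the-shelf model via the Hugging Face
# transformers pipeline; this illustrates the task itself, not the lab's system.
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
result = vqa(
    image="street_scene.jpg",  # hypothetical local image file
    question="How many people are crossing the road?",
)
print(result)  # a list of {"answer": ..., "score": ...} candidates
```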
Vision-Language-Action Model
Control of robots based on spoken teaching commands (a minimal sketch follows the hardware list below)
NVIDIA Jetson Nano, JetRacer, JetRacer Pro
Spot (Boston Dynamics)
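The sketch below illustrates one way a speech-teaching control loop could be wired together, under stated assumptions: openai-whisper for offline speech recognition and the NVIDIA jetracer package's NvidiaRacecar interface on a Jetson Nano. The keyword-to-action table is an illustrative placeholder, not the lab's actual command set.

```python
# Minimal sketch: mapping spoken teaching commands to JetRacer controls.
# Assumptions: openai-whisper for offline speech recognition and the NVIDIA
# `jetracer` package on a Jetson Nano; the command vocabulary is illustrative.
import whisper
from jetracer.nvidia_racecar import NvidiaRacecar

asr_model = whisper.load_model("base")
car = NvidiaRacecar()

# Hypothetical mapping from recognized keywords to (throttle, steering) pairs.
ACTIONS = {
    "forward": (0.3, 0.0),
    "left": (0.2, -0.6),
    "right": (0.2, 0.6),
    "stop": (0.0, 0.0),
}

def execute(audio_path: str) -> None:
    """Transcribe a short voice command and apply the matching control."""
    text = asr_model.transcribe(audio_path)["text"].lower()
    for keyword, (throttle, steering) in ACTIONS.items():
        if keyword in text:
            car.throttle, car.steering = throttle, steering
            return
    car.throttle = 0.0  # unrecognized command: stop for safety

execute("command.wav")  # hypothetical pre-recorded command clip
```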