Mingi Ji

Software engineer @ Google

Model Optimization

Email: mingi.ji1015@gmail.com, mingiji@google.com

[Google scholar] [LinkedIn]

Research interests: recommendation systems, knowledge distillation, self-supervised learning, fine-tuning

I am a software engineer at Google. I received my Ph.D. and M.S. degrees in industrial and systems engineering from KAIST and my B.S. degree in naval architecture and ocean engineering from Seoul National University (SNU). I am interested in applying machine learning methodologies to solve real-world problems.

Publications

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation 

JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon

NeurIPS 2022

Existing domain adversarial learning methods are not suitable for open-set domain adaptation (OSDA) because distribution matching with unknown classes leads to negative transfer. We propose Unknown-Aware Domain Adversarial Learning (UADAL), which aligns the source and target-known distributions while simultaneously segregating the target-unknown distribution in the feature alignment procedure.
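
A minimal sketch of this idea, assuming a three-way domain discriminator (source / target-known / target-unknown) and a per-sample estimate w_known of how likely each target sample belongs to a known class; both are illustrative choices, not the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical three-way discriminator: source / target-known / target-unknown.
SRC, TGT_KNOWN, TGT_UNKNOWN = 0, 1, 2
discriminator = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 3))

def unknown_aware_adversarial_loss(src_feat, tgt_feat, w_known, lambd=1.0):
    """src_feat: (Bs, 256); tgt_feat: (Bt, 256); w_known: (Bt,) estimated probability
    that each target sample belongs to a known class."""
    # Alignment (adversarial via gradient reversal): the feature extractor learns to make
    # source and known-probable target features indistinguishable to the discriminator.
    logits_src = discriminator(grad_reverse(src_feat, lambd))
    logits_tgt_rev = discriminator(grad_reverse(tgt_feat, lambd))
    loss_src = F.cross_entropy(
        logits_src, torch.full((src_feat.size(0),), SRC, dtype=torch.long))
    ce_known = F.cross_entropy(
        logits_tgt_rev, torch.full((tgt_feat.size(0),), TGT_KNOWN, dtype=torch.long),
        reduction="none")
    # Segregation (not reversed): unknown-probable target features are pushed toward the
    # separate "unknown" output, keeping them apart from the aligned distributions.
    logits_tgt = discriminator(tgt_feat)
    ce_unknown = F.cross_entropy(
        logits_tgt, torch.full((tgt_feat.size(0),), TGT_UNKNOWN, dtype=torch.long),
        reduction="none")
    return loss_src + (w_known * ce_known).mean() + ((1.0 - w_known) * ce_unknown).mean()
```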

[arxiv]

BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents 

Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park

AAAI 2022

Key information extraction from document images requires understanding the contextual and spatial semantics of text in two-dimensional space. In this work, we propose a pre-trained language model that encodes the relative positions of texts in 2D space and learns from unlabeled documents with an area-masking strategy.
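
As a toy illustration of area masking (the names and box conventions below are assumptions, not the paper's exact procedure), one can sample a rectangular region of the page and mask every text block whose center falls inside it:

```python
import random

def area_mask(boxes, mask_ratio=0.15):
    """boxes: list of (x0, y0, x1, y1) text-block boxes in normalized [0, 1] page coordinates.
    Returns a boolean list marking which blocks fall inside a randomly sampled region."""
    side = mask_ratio ** 0.5                  # square region covering ~mask_ratio of the page
    x = random.uniform(0.0, 1.0 - side)
    y = random.uniform(0.0, 1.0 - side)
    masked = []
    for (x0, y0, x1, y1) in boxes:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        masked.append(x <= cx <= x + side and y <= cy <= y + side)
    return masked
```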

[arxiv]

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation

Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon

CVPR 2021

Existing self-knowledge distillation algorithms cannot utilize the spatial information of features. In this work, we propose an auxiliary teacher network that utilizes both the soft labels and the feature-map information for self-knowledge distillation.
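
A minimal sketch of a loss that distills both signals from an auxiliary teacher branch into the main network (hypothetical tensors and loss weights; the paper's auxiliary architecture is not reproduced here):

```python
import torch.nn.functional as F

def self_kd_loss(student_logits, teacher_logits, student_feat, teacher_feat,
                 labels, tau=4.0, alpha=1.0, beta=1.0):
    """student_feat / teacher_feat: (B, C, H, W) feature maps from matched stages."""
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label distillation: KL divergence between temperature-softened distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits.detach() / tau, dim=1),
        reduction="batchmean") * (tau ** 2)
    # Feature-map distillation: match spatial feature maps so positional information is used.
    feat = F.mse_loss(student_feat, teacher_feat.detach())
    return ce + alpha * kd + beta * feat
```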

[arxiv] [code]

Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching

Mingi Ji, Byeongho Heo, Sungrae Park

AAAI 2021

Most knowledge distillation methods manually tie intermediate features of teacher layers to student layers. In this work, we propose a learning-based method for linking teacher layers to student layers.
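
A simplified sketch of learning such links, with one learnable weight per teacher-student layer pair (the paper's actual attention parameterization and feature distance differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedLayerLinking(nn.Module):
    """One learnable logit per (student layer, teacher layer) pair; the softmax over
    teacher layers decides how strongly each student layer matches each teacher layer."""
    def __init__(self, num_student_layers, num_teacher_layers):
        super().__init__()
        self.link_logits = nn.Parameter(torch.zeros(num_student_layers, num_teacher_layers))

    def forward(self, student_feats, teacher_feats):
        """student_feats / teacher_feats: lists of (B, D) pooled features per layer,
        assumed already projected to a common dimension D."""
        attn = F.softmax(self.link_logits, dim=1)
        loss = 0.0
        for i, s in enumerate(student_feats):
            for j, t in enumerate(teacher_feats):
                loss = loss + attn[i, j] * F.mse_loss(s, t.detach())
        return loss
```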

[arxiv] [code]

Sequential Recommendation with Relation-Aware Kernelized Self-Attention

Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon

AAAI 2020

For sequential recommendation, reflecting the relations between items is important. Our proposed attention method utilizes covariance information based on co-occurrences between items to capture this relational information.
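
A toy illustration of biasing self-attention scores with item co-occurrence statistics (the names and the exact kernelization below are assumptions, not the paper's formulation):

```python
import torch
import torch.nn.functional as F

def cooccurrence_matrix(sequences, num_items):
    """Count how often two items appear in the same user sequence."""
    C = torch.zeros(num_items, num_items)
    for seq in sequences:
        for i in seq:
            for j in seq:
                if i != j:
                    C[i, j] += 1.0
    return C

def relation_aware_attention(q, k, v, item_ids, cooc):
    """q, k, v: (B, L, D); item_ids: (B, L) item indices; cooc: (N, N) co-occurrence counts.
    Attention logits between two positions are biased by the (log-scaled) co-occurrence
    of the items at those positions."""
    scale = q.size(-1) ** 0.5
    logits = torch.matmul(q, k.transpose(-1, -2)) / scale                        # (B, L, L)
    relation = torch.log1p(cooc[item_ids.unsqueeze(-1), item_ids.unsqueeze(1)])  # (B, L, L)
    weights = F.softmax(logits + relation, dim=-1)
    return torch.matmul(weights, v)
```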

[arxiv] 

Hierarchical Context enabled Recurrent Neural Network for Recommendation

Kyungwoo Song*, Mingi Ji*, Sungrae Park, Il-Chul Moon (*Equal contribution)

AAAI 2019

A user's history may reflect the transitions of personal interests over time. In this work, we propose an RNN structure that hierarchically models users' interests at three levels: global, local, and temporary interests.
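
A very rough schematic of combining interests at three granularities, using a long-term user embedding (global), a session-level RNN state (local), and the most recent item embedding (temporary); the structure is hypothetical and does not reproduce the paper's gating or update rules:

```python
import torch
import torch.nn as nn

class ThreeLevelInterest(nn.Module):
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)            # global interest
        self.item_emb = nn.Embedding(num_items, dim)
        self.session_rnn = nn.GRU(dim, dim, batch_first=True)   # local interest
        self.out = nn.Linear(3 * dim, num_items)

    def forward(self, user_ids, session_items):
        """user_ids: (B,); session_items: (B, L) item indices of the current session."""
        glob = self.user_emb(user_ids)             # (B, D) long-term interest
        items = self.item_emb(session_items)       # (B, L, D)
        _, h = self.session_rnn(items)             # (1, B, D) session-level interest
        temp = items[:, -1, :]                     # (B, D) most recent (temporary) interest
        return self.out(torch.cat([glob, h.squeeze(0), temp], dim=-1))
```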

[arxiv] [code]

Adversarial Dropout for Recurrent Neural Networks

Sungrae Park, Kyungwoo Song, Mingi Ji, Wonsung Lee, Il-Chul Moon

AAAI 2019

Dropout techniques for RNNs have been introduced as regularizers, but we conjecture that dropout on RNNs can be improved by adopting the adversarial concept. This paper investigates ways to improve dropout for RNNs by utilizing intentionally generated dropout masks.
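
A rough sketch of generating such a mask, assuming a hypothetical model(x, mask) interface that multiplies hidden units by the mask; the paper constrains and applies the perturbation more carefully than this first-order flip:

```python
import torch
import torch.nn.functional as F

def adversarial_dropout_mask(model, x, base_mask, flip_budget=0.05):
    """model(x, mask) -> logits, where `mask` (B, H) multiplies hidden units.
    Returns a {0, 1} mask near `base_mask` chosen to increase the KL divergence
    between the clean and the dropped predictions."""
    mask = base_mask.clone().float().requires_grad_(True)
    with torch.no_grad():
        clean = F.softmax(model(x, torch.ones_like(mask)), dim=-1)
    kl = F.kl_div(F.log_softmax(model(x, mask), dim=-1), clean, reduction="batchmean")
    grad, = torch.autograd.grad(kl, mask)
    # First-order benefit of flipping each unit (0 -> 1 or 1 -> 0).
    gain = grad * (1.0 - 2.0 * mask.detach())
    k = max(1, int(flip_budget * mask.size(1)))
    idx = gain.topk(k, dim=1).indices
    adv_mask = base_mask.clone().float()
    adv_mask.scatter_(1, idx, 1.0 - adv_mask.gather(1, idx))
    return adv_mask
```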

[arxiv] [code]