Machine Learning Research Engineer at Naver Cloud
Machine Learning Engineer with 4+ years of experience in computer vision, specializing in large-scale training pipelines, model optimization, and ML infrastructure development. Delivered production-ready solutions across tasks such as harmful content detection, face recognition, and generative image models. Published research in top-tier conferences (CVPR, AAAI) and ranked 6th in the NIST Face Recognition Evaluation. Skilled in PyTorch, distributed training (DDP, FSDP), and GPU cluster optimization.
Machine Learning Research Engineer for Image Vision and Visual Diffusion Model Teams, NAVER Cloud. (Jan 2023 - Present)
Developing "Data Processing Pipeline" for training diffusion-based text-to-image generation models.
Developed "Harmful Content Detection Model" to prevent inappropriate images in Generative AI services.
Conducted "Text-to-Image Personalization Research", achieving improved fidelity of the reference object in generated images. (DreamMatcher, CVPR'24)
Conducted "Open-Vocabulary Object Detection Research", achieving improved detection performance on novel classes. (ProxyDet, AAAI'24)
Developed "ML model inference/deployment framework", enabling automated CI/CD and a unified inference pipeline for various computer vision models.
Research Engineer for Machine Learning, Face, NAVER. (Sep 2020 - Dec 2022)
Developed an optimized "Face Recognition Pipeline" covering face detection, face landmark estimation, and face recognition. (Ranked 6th in the NIST Face Recognition Evaluation)
Researched "Continual Learning", achieving improved performance on realistic class distributions. (Rainbow Memory, CVPR'21)
Research Intern, Data AI, Clova, Naver Corp. (Feb 2020 - Aug 2020)
Mentors: YoungJoon Yoo and Jung-Woo Ha
Researched "Active learning for speech recognition" by measuring the uncertainty of the decoded text to minimize the cost of data labeling.
Intern, Processor Development Team, System LSI Business, Samsung Electronics (Jul 2015 - Aug 2015)
Mentor: Hayoung Jeong
Implemented and tested a performance monitor for a Mobile Application Processor (AP) at the RTL level.
Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, and Seunggyu Chang, “DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization,” CVPR'24, Seattle, USA, Jun 2024.
Joonhyun Jeong, Geondo Park, Jayeon Yoo, Hyungsik Jung, and Heesu Kim, “ProxyDet: Synthesizing Proxy Novel Classes via Classwise Mixup for Open Vocabulary Object Detection,” AAAI'24, Vancouver, Canada, Feb 2024.
Jihwan Bang*, Heesu Kim*, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi, “Rainbow Memory: Continual Learning with a Memory of Diverse Samples,” CVPR'21, Virtual, Jun 2021.
Heesu Kim, Hanmin Park, Taehyun Kim, Kwanheum Cho, Eojin Lee, Soojung Ryu, Hyuk-Jae Lee, Kiyoung Choi, and Jinho Lee, “GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent,” IEEE International Symposium on High-Performance Computer Architecture (HPCA), Virtual, Mar 2021.
Jongho Kim, Heesu Kim, H. Amrouch, J. Henkel, A. Gerstlauer and Kiyoung Choi, “Aging Gracefully with Approximation,” in IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, May 2019.
Heesu Kim, Euntae Choi, and Kiyoung Choi, “Speaker Verification Based on Deep Neural Network for Text-Constrained Short Commands,” in Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Honolulu, Hawaii, USA, Nov 2018.
Heesu Kim, Joonsang Yu, and Kiyoung Choi, “Hybrid Spiking-Stochastic Deep Neural Network,” in International Symposium on VLSI Design, Automation and Test, Hsinchu, Taiwan, Apr 2017.
Heesu Kim, Jongho Kim, H. Amrouch, J. Henkel, A. Gerstlauer, Kiyoung Choi, and Hanmin Park, “Aging Compensation with Dynamic Computation Approximation,” IEEE Transactions on Circuits and Systems I: Regular Papers (TCAS-I), vol. 67, no. 4, Apr 2020.
Jaehyun Kim, Heesu Kim, Subin Huh, Jinho Lee, and Kiyoung Choi, “Deep Neural Networks with Weighted Spikes,” Neurocomputing, vol. 311, pp. 373–386, Oct 2018.
Jinho Lee, Heesu Kim, Sungjoo Yoo, Kiyoung Choi, H. Peter Hofstee, Gi-Joon Nam, Mark R. Nutter, and Damir Jamsek, “ExtraV: Boosting Graph Processing Near Storage with a Coherent Accelerator,” Proceedings of the VLDB Endowment, vol. 10, no. 12, pp. 1706–1717, Aug 2017.
Jihwan Bang*, Heesu Kim*, YoungJoon Yoo, and Jung-Woo Ha, “Boosting Active Learning for Speech Recognition with Noisy Pseudo-labeled Samples,” arXiv preprint arXiv:2006.11021, Jun 2020.
M.S./Ph.D., Seoul National University, South Korea (Aug 2020)
Electrical and Computer Engineering
Advisors: Kiyoung Choi and Hyuk-Jae Lee
B.S., Seoul National University, South Korea (Feb 2015)
Electrical and Computer Engineering