Heesu Kim
Research Engineer at Naver Cloud
LinkedIn / CV / Github / Google Scholar
muncok@gmail.com
Work Experience
Research Engineer for Machine Learning, ImageVision, NAVER Cloud (Jan 2022 - Present)
Developing "Safety for Generative AI" to prevent the generation of inappropriate images in a conversational AI service.
Researched "Text-to-Image Personalization", improving the fidelity of reference objects in generated images.
Researched "Open-Vocabulary Object Detection", improving detection performance on novel classes.
Developed an "image editing pipeline" including inpainting, object removal, and background replacement.
Research Engineer for Machine Learning, Face (now ImageVision), NAVER (Sep 2020 - Dec 2021)
Researched "Continual Learning", improving performance on realistic class distributions.
Developed an optimized "face authentication pipeline" including face detection, face landmark detection, and face recognition.
Research Intern, Data AI, Clova, NAVER (Feb 2020 - Aug 2020)
Mentors: YoungJoon Yoo and Jung-Woo Ha
Researched "Active learning for speech recognition" by measuring the uncertainty of the decoded text to minimize the cost of data labeling.
Intern, Processor Development Team, System LSI Business, Samsung Electronics (Jul 2015 - Aug 2015)
Mentor: Hayoung Jeong
Implemented and tested a performance monitor for a Mobile Application Processor (AP) at the RTL level.
Education
M.S./Ph.D., Seoul National University, South Korea (Aug 2020)
Electrical and Computer Engineering
Advisors: Kiyoung Choi, Hyuk-Jae Lee
B.S., Seoul National University, South Korea (Feb 2015)
Electrical and Computer Engineering
Publications
Conference Papers
Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, and Seunggyu Chang, “DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization,” to appear in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, Jun 2024.
Joonhyun Jeong, Geondo Park, Jayeon Yoo, Hyungsik Jung, and Heesu Kim, “ProxyDet: Synthesizing Proxy Novel Classes via Classwise Mixup for Open-Vocabulary Object Detection,” AAAI Conference on Artificial Intelligence (AAAI), Vancouver, Canada, Feb 2024.
Jihwan Bang*, Heesu Kim*, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi, “Rainbow Memory: Continual Learning with a Memory of Diverse Samples,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, Jun 2021.
Heesu Kim, Hanmin Park, Taehyun Kim, Kwanheum Cho, Eojin Lee, Soojung Ryu, Hyuk-Jae Lee, Kiyoung Choi, and Jinho Lee, “GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent,” IEEE International Symposium on High-Performance Computer Architecture (HPCA), Virtual, Mar 2021.
Jongho Kim, Heesu Kim, H. Amrouch, J. Henkel, A. Gerstlauer and Kiyoung Choi, “Aging Gracefully with Approximation,” in IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, May 2019.
Heesu Kim, Euntae Choi, and Kiyoung Choi, “Speaker Verification Based on Deep Neural Network for Text-Constrained Short Commands,” in Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Honolulu, Hawaii, USA, Nov 2018.
Heesu Kim, Joonsang Yu, and Kiyoung Choi, “Hybrid Spiking-Stochastic Deep Neural Network,” in International Symposium on VLSI Design, Automation and Test, Hsinchu, Taiwan, Apr 2017.
Journal Papers
Heesu Kim, Jongho Kim, H. Amrouch, J. Henkel, A. Gerstlauer, Kiyoung Choi, and Hanmin Park, “Aging Compensation with Dynamic Computation Approximation,” IEEE Transactions on Circuits and Systems I: Regular Papers (TCAS-I), vol. 67, no. 4, Apr 2020.
Jaehyun Kim, Heesu Kim, Subin Huh, Jinho Lee, and Kiyoung Choi, “Deep Neural Networks with Weighted Spikes,” Neurocomputing, vol. 311, pp. 373–386, Oct 2018.
Jinho Lee, Heesu Kim, Sungjoo Yoo, Kiyoung Choi, H. Peter Hofstee, Gi-Joon Nam, Mark R. Nutter, and Damir Jamsek, “ExtraV: Boosting Graph Processing Near Storage with a Coherent Accelerator,” Proceedings of the VLDB Endowment, vol. 10, no. 12, pp. 1706–1717, Aug 2017.
Preprints
Jihwan Bang*, Heesu Kim*, YoungJoon Yoo, and Jung-Woo Ha, “Boosting Active Learning for Speech Recognition with Noisy Pseudo-labeled Samples,” arXiv preprint arXiv:2006.11021, Jun 2020.