My research is driven by a strong interest in artificial intelligence, robotics, and human-robot interaction (HRI), and I have consistently pursued projects in these domains. Currently, I am focusing on research with students in areas such as Social Robots, VR Applications, and UX Research. This current work builds upon a broader research interest in social interaction and the implementation of robotic platforms and artificial intelligence to enhance HRI.
In the early period of my research career, I developed robotic facial expressions for both robotic hardware and software simulators. I also studied psychology to understand how emotions arise in the human mind, and proposed emotion-generation models and an emotional episodic-memory structure. For my doctoral dissertation, I developed an interactive robotic head with a 3-DoF neck, attached to a humanoid body. With this robotic head, I implemented capabilities such as touch and speech interaction, a facial-expression simulator, telepresence, user identification, and differentiated reactions based on emotional history.
I continuously study machine learning algorithms and deep learning tools, using Keras to build models for handling spatio-temporal sensor data. I have also worked on a teleoperated mobile manipulator, developing natural user-control interfaces (including haptic, gesture-based, and voice-based control) to understand how to make users feel confident when operating a mobile manipulator in a remote space. Virtual reality environments were also utilized for more intuitive teleoperation.
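As a minimal sketch of the kind of Keras model used for windowed spatio-temporal sensor data (the channel count, window length, and layer sizes here are illustrative placeholders, not taken from any specific project of mine):

```python
# Minimal sketch: a Keras model for windowed spatio-temporal sensor data,
# e.g. an 8-channel stream cut into windows of 50 time steps.
# All sizes are illustrative, not from an actual project.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TIME_STEPS, CHANNELS, NUM_CLASSES = 50, 8, 4

model = keras.Sequential([
    layers.Input(shape=(TIME_STEPS, CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),  # local temporal features
    layers.MaxPooling1D(2),
    layers.LSTM(32),                                      # longer-range dynamics
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on synthetic data just to show the end-to-end shape handling.
x = np.random.randn(64, TIME_STEPS, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=64)
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
print(model.predict(x[:2], verbose=0).shape)  # (2, 4)
```

The Conv1D front end extracts short-range patterns per window while the LSTM aggregates them over time, a common split of labor for sensor sequences.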
You can find a more detailed history of my research experience and projects, including recent patents, in the project descriptions below.
Recent Research Topics
(Visualized research topic descriptions will be provided soon.)
Social Robots & UX Research
Health-Information Service Robot Development for Motivation Improvement
Kiosk-Guide Robot for Seniors
Medical-Information Consulting Robot for Older Adults
Implementation, Enhancement & Effectiveness Evaluation of Robot Interaction Services
Recommendation Algorithms & Facial-Expression Recognition for Parenting-Counselling Service Robots
Empathic Robot Development
Analysis of User-Interaction Elements Using the Social Robot
Laboratory Security & Reception Robot Leveraging User Differentiation
Development of an Emotion-Expressive Robot Head for Human–Robot Social Interaction
Interaction Algorithms for Multi-Robot Systems Utilizing User Differentiation
Task-based Robot Systems (This topic may also be related to social robots and UX research)
LiDAR-Based Autonomous Driving Technology for Scale Cars
Development of an Intelligent Manipulator Control System Using Collaborative Robots
Cherry-Tomato Picking Robot System
Interactive Robotic Arm Producer
Sustainable and Inflatable Aeroponics Smart Farm System for Water Efficiency and High-Value Crop Production
Semi-Autonomous Remote Control & Language-Based Intelligent Manipulator Using Collaborative Robots
3-D Position Control of Drones for Indoor Smart-Farm Management
Remote-Manipulation Methods for Robotic Arms
Sociable Drink Delivery Robot
Gesture-Based Remote Control & Autonomous Motion Generation for Robots
VR Applications
Development of a VR Training Program for Children with ADHD
Automatic Natural Facial-Expression Generation Algorithm for 3-D Avatars
VR Karaoke
Projects
AI Butler Service to Facilitate Socialization of Family Members in Their Daily Life (In Korean: 비대면 환경에서 일상생활 밀착 관찰과 함께 가족구성원의 사회화 발달과업 멘토링이 가능한 AI집사 홈서비스 개발)
2021-2023
Korea Evaluation Institute of Industrial Technology (KEIT), funded by the Ministry of Trade, Industry and Energy
My role
Project Manager
A Study on the Standardization of Service Robots and Artificial Intelligence (AI) Technology for Application in the Field of Cultural Information (In Korean: 문화정보분야 적용을 위한 서비스 로봇 및 인공지능(AI)기술 기준마련 연구용역)
2020
Korea Culture Information Service Agency (한국문화정보원)
My role
Project Manager
Result
Report
Multi-sensor based Gesture UI/UX development for VR training (In Korean: 가상훈련 콘텐츠용 복합감각 센서 기술을 이용한 제스처 UI/UX 개발)
2017-2020
funded by K-Tech (Korea University of Technology and Education)
Collaborators: Chung Hyuk Park (PI, GWSEAS)
My role
Project leader (2019-2020)
Senior researcher (2017-2019)
Hardware and software engineer
Achievement
Development of a real-time hand-gesture recognition program (using C++, C#) with SR-300 sensors
Development of a deep learning model for EMG signal classification (using Python, TensorFlow, Keras) with the Myo band sensor
VR environment development (using Unity, C#, C++) with the Kinect v2 sensor and HTC Vive device
Videos
3D Facial Avatar Development
2018
My role
Director
Software engineer
Achievement
Development of customized 3D facial avatar control in Unity and C#
3D mesh model rigging and morphing by C# script
Video
https://youtu.be/dzqlQWg88H8?si=2VZvAVFzped9_LdQ
CTSI-CN: Teleassistive Robotic Nurse for Human-Robot Collaboration in Neonatal Environment
2017-2018
Collaborators: Chung Hyuk Park (PI, GWSEAS), Ashley Darcy-Mahoney (Co-PI, GWSoN), and Mia Waldron (Co-PI, Children’s National Medical Systems).
My role
Senior researcher
Hardware-oriented software engineer
System maintainer
Achievement
Development of haptic teleoperation (using C++) with Omega7 haptic device and Pioneer Manipulator platform with robotic arms
Development of a real-time body pose tracking algorithm (using C++) with a Kinect sensor
Development of speech teleoperation (using Python, C++)
System integration for network programming to remotely control the robot
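The network-integration step above follows a standard command-channel pattern. A bare-bones sketch in Python (the command names, port, and acknowledgement format here are hypothetical illustrations, not the actual project protocol, which was implemented in C++):

```python
# Bare-bones sketch of a TCP command channel for robot teleoperation.
# The command strings, port, and ACK format are hypothetical; the real
# system dispatched commands to the manipulator's controller.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999

def robot_server():
    """Accept one operator connection and acknowledge its command."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode()
            # A real robot would dispatch here, e.g. to a motor controller.
            conn.sendall(f"ACK:{cmd}".encode())

t = threading.Thread(target=robot_server, daemon=True)
t.start()
time.sleep(0.2)  # give the server a moment to start listening

# Operator side: send a (hypothetical) velocity command and read the reply.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"MOVE 0.2 0.0")
    reply = cli.recv(1024).decode()
print(reply)  # ACK:MOVE 0.2 0.0
```

Keeping the operator and robot sides behind a small, explicit message protocol is what makes it practical to swap input devices (haptic, gesture, voice) on the operator side without touching the robot side.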
Videos
Related Papers
WonHyong Lee, Jeabyung Park, and Chung Hyuk Park, 2018. Acceptability of Tele-assistive Robotic Nurse for Human-Robot Collaboration in Medical Environment. In Proceedings of ACM/IEEE International Conference on Human Robot Interaction, Chicago, Illinois USA, March 2018 (HRI’18) (link)
Development of Intelligent Technology for Home Service Using Robots (In Korean: 로봇을 이용한 홈서비스 제공을 위한 지능 기술 개발)
My role
Senior researcher
Hardware and software engineer
Achievement
Tablet type robotic head development and touch-based human-robot interaction development (using C++, MFC)
Robotic head teleoperation development (using C++, ROBOTIS robot toolkit)
Voice communication system integration (using Python, Java, C++)
Emotional human-robot interaction framework development (using Python, C++, Google Cloud API)
Real-time face detection and identification (using OpenCV, C++)
Development of social relationship framework between human and robot based on emotional memory (using Python, C++)
Visual Question Answering (VQA) interaction system integration (using Python, C++, TensorFlow)
Videos
Related Papers
WonHyong Lee and Jong-Hwan Kim, "Social Relationship Development between Human and Robot through Real-time Face Identification and Emotional Interaction (Video Abstract)," in Proc. ACM/IEEE International Conference on Human Robot Interaction, Chicago, Illinois, USA, March 2018 (HRI'18) (Best Video Award) (link, video link)
Sanghyun Cho, Won-Hyong Lee and Jong-Hwan Kim, “Implementation of Human-Robot VQA Interaction System with Dynamic Memory Networks”, IEEE SMC, 2017 (Best Student Paper Award) (link1, link2, video link)
HRI MESSI (Project Nickname)
2012.12~2014.02 : N02120248, "Development of a self-improving bidirectional sustainable HRI technology for 95% of successful responses with understanding user's complex emotion and transactional intent through continuous interactions" funded by the Ministry of Knowledge Economy (MKE, Korea) (In Korean: "(RCMS) 지속적인 상호작용을 통하여 사용자의 복합정서 이해 및 교류의도를 파악하고, 이에 대한 대응을 95%이상 적절하게 할 수 있는 자율발달 쌍방향 HRI 기술 개발")
My role
Subteam leader (2012~2016)
Hardware and software engineer
Achievement
Development of an interactively touchable robotic face with a Windows tablet device (using C++, MFC)
Emotion game development (using MATLAB, C++) with a robot DARWIN-OP
Development of robot's expressive gesture generation program (using ROBOTIS toolkit) with a robot DARWIN-mini
Video
Related Papers
Won Hyong Lee, J.W. Park, W.H. Kim, H.S. Lee, D.S. Kwon, M.J. Chung, "Motivational Emotion Generation and Behavior Selection based on Emotional Experiences for Social Robots", International Conference on Social Robotics (ICSR), Workshop on Attention for Social Intelligence, Oct. 2014 (link, link2)
Won Hyong Lee, Jeong Woo Park, Woo Hyun Kim, and Myung Jin Chung, "Interactive Facial Robot System on a Smart Device; Enhanced Touch Screen Input Recognition and Robot's Reactive Facial Expression," 8th IEEE/ACM International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, March 4-7, 2013
Won Hyong Lee, Jeong Woo Park, Woo Hyun Kim, Hui Sung Lee, and Myung Jin Chung, "사람과 로봇의 사회적 상호작용을 위한 로봇의 가치효용성 기반 동기-감정 생성 모델(Robot’s Motivational Emotion Model with Value Effectiveness for Social Human and Robot Interaction)" 제어로봇시스템학회 ICROS 논문지 Journal of Institute of Control, Robotics and Systems (ICROS) 20.5 (2014): 503-512. (link)
Kim Han-Gyeol, Seo Ju-Hwan, Lee Won Hyong, Kim Woo Hyun, Park Jeong Woo, Kim Jong-min, Chung Myung Jin, Yoo Chang.D, and Kwon Dong-Soo, "실환경에서의 사용자 행동 및 정서적 경험에 기반한 사용자 감정 추론 및 로봇의 정서 생성(User Behavior and Emotional Experience based emotion inference and generation)", 제 9회 한국로봇종합학술대회, Korea Robotics Society Annual Conference 2014, Buyeo, Korea, June, 2014
Frontier Project (Project Nickname)
2008.3~2008.4 : Emotional Interaction using Emotion Generation, Recognition, and Expression (In Korean: “감정 생성, 인식, 표현을 이용한 감정 상호작용 기술”)
2008.4~2009.3 : Development of Emotion Expression System with Facial Expression, Voice, and Gesture (In Korean: “얼굴 표정 음성 제스처가 포함된 감정표현 시스템 개발”)
2009.4~2010.3 : Development of Synchronized Multi-modal Emotion Expression based on Primitive DB (In Korean: “Primitive DB 개발을 통한 동기화된 멀티모달 감정 표현 기술 개발”)
2010.4~2011.3 : Development of Multi-modal Emotion/Situation/Intention Expression DB and Automatic Generation, 2nd Year (In Korean: “멀티모달 감정/상황/의도 표현 DB 및 표현 자동생성 기술(2차년도)”)
2011.4~2012.3 : Development of Multi-modal Emotion/Situation/Intention Expression DB and Automatic Generation, 3rd Year (In Korean: “멀티모달 감정/상황/의도 표현 DB 및 표현 자동생성 기술(3차년도)-1서브” )
2012.4~2013.4 : Development of Multi-modal Emotion/Situation/Intention Expression DB and Automatic Generation, 4th Year (In Korean: “멀티모달 감정/상황/의도 표현 DB 및 표현 자동생성 기술(4차년도)”)
This research was performed for the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs, funded by the Ministry of Knowledge Economy of Korea.
My role
Subteam leader (2012~2013)
Software and hardware engineer
System maintainer
Achievement
Expressive robotic head development
Hardware and software maintenance
Development of robot's gesture generation program (using C++, MFC)
Video
Related Papers
H. S. Lee, J. W. Park, S. H. Jo, M. G. Kim, Won Hyong Lee, M. J. Chung, “A Mascot-Type Facial Robot with a Linear Dynamic Affect-Expression Model,” Proc. the 17th World Congress of International Federation of Automatic Control (IFAC) Seoul, Korea, p. 14099, July, 2008 → Best Video Prize
Park, Jeong Woo, Hui Sung Lee, and Myung Jin Chung. "Generation of realistic robot facial expressions for human robot interaction." Journal of Intelligent & Robotic Systems 78.3-4 (2015): 443-462.
J. W. Park, W. H. Kim, Won Hyong Lee, W. H. Kim, M. J. Chung, "Lifelike Facial Expression of Mascot-type Robot based on Emotional Boundaries," 2009 IEEE International Conference on Robotics and Biomimetics, Guilin, Guangxi, China, December 19-23, 2009 → Best Paper Finalist
J. W. Park, W. H. Kim, Won Hyong Lee, M. J. Chung, "A Robot Simulator 'FRESi' for Dynamic Facial Expression", The 6th International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 727 - 728, Gwangju, Korea, October 29-31, 2009
Jaewoo Kim, Woo Hyun Kim, Won Hyong Lee, Ju-Hwan Seo, Myung Jin Chung and Dong-Soo Kwon, "Automated Robot Speech Gesture Generation System Based on Dialog Sentence Punctuation Mark Extraction," 2012 IEEE/SICE International Symposium on System Integration, Fukuoka, Japan, December 16-18, 2012
W. H. Kim, J. W. Park, Won Hyong Lee, W. H. Kim, M. J. Chung, "Synchronized Multimodal Expression Generation using Editing Toolkit for a Human-friendly Robot," 2009 IEEE International Conference on Robotics and Biomimetics , Guilin, Guangxi, China, December 19-23, 2009
Woo Hyun Kim, Jeong Woo Park, Won Hyong Lee, Myung Jin Chung, and Hui Sung Lee, "LMA based Emotional Motion Representation using RGB-D Camera," 8th IEEE/ACM International Conference on Human-Robot Interaction(HRI), Tokyo, Japan, March 4-7, 2013
I3RC Project (Project Nickname)
2008.6~2010.12 : Development of Multi-modal Intelligent Human-Robot Interaction for Service Robot (In Korean: “서비스 로봇을 위한 멀티 모달 지능형 인간-로봇 상호작용 기술개발”, 정보통신연구진흥원)
This research was performed for the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency).
My role
Software engineer
Achievement
Development of multi-modal intelligent human-robot interaction technology for service robots
Related Papers
W. H. Kim, J. W. Park, Won Hyong Lee, W. H. Kim, M. J. Chung, "Stochastic Approach on a Simplified OCC Model for Uncertainty and Believability," 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Daejon, Korea, December 15-18, 2009
Development of a Multi-sensor Facial Robot, "Sunflower"
2008
My role
Team member
Media director
Video
Undergraduate Research Project (URP)
2007
Title: Implementation of Voice Synthesizer on DSP and Glove Interface
Science Exhibition Attendance
2022
English Title: Restoration of the A-Ja Room (亞字房) in Chil-Bul Temple (七佛寺) by Scientific Research on the Korean Floor Heating System
Korean Title: 구들(온돌)의 과학적 원리분석을 통한 칠불사 아자방의 복원
Award: Grand Prize at Seoul Science Exhibition
Award: Encouraging Prize at National Science Exhibition