Research & Projects
My research interests lie in social interaction and in implementing robotic platforms and artificial intelligence for human-robot interaction.
In the early period of my research career, I developed robotic facial expressions for both robotic hardware and a software simulator.
I also studied psychology to understand how emotions work in the human mind, and proposed emotion generation models and an emotional episodic memory structure.
For my doctoral dissertation, I developed an interactive robotic head with a 3-DoF neck mounted on top of a humanoid body. With this robotic head, I implemented touch interaction, speech interaction, a facial simulator, telepresence, user identification, and reactions that vary with the robot's emotional history.
I am studying various machine learning algorithms and deep learning tools, using Keras (with TensorFlow) to build models that handle spatio-temporal sensor data.
I also worked on a tele-operated mobile manipulator, developing natural user control interfaces (haptic, gesture-based, and voice-based control) to explore how to make users feel confident operating a mobile manipulator in a remote space. A virtual reality environment was also utilized for intuitive teleoperation.
You can find the full history of my research experience in the project descriptions below.
Period
Apr. 2021 ~ Dec. 2023
Aug. 2020 ~ Nov. 2020
Oct. 2018 ~ current
Sep. 2017 ~ Feb. 2020
Sep. 2017 ~ 2018
2016 ~ 2017
Dec. 2012 ~ 2016
Mar. 2008 ~ Apr. 2013
Jun. 2008 ~ Dec. 2010
2008
Jul. 2007 ~ Dec. 2007
Sep. 2002 ~ Aug. 2003
Projects
Development of an AI Butler Home Service Capable of Close Observation of Daily Life and Mentoring of Family Members' Socialization Developmental Tasks in a Non-Face-to-Face Environment (In Korean: 비대면 환경에서 일상생활 밀착 관찰과 함께 가족구성원의 사회화 발달과업 멘토링이 가능한 AI집사 홈서비스 개발)
- funded by the Korea Evaluation Institute of Industrial Technology (한국산업기술평가관리원)
Research Service for Establishing Standards for Service Robot and Artificial Intelligence (AI) Technologies in the Culture and Information Sector (In Korean: 문화정보분야 적용을 위한 서비스 로봇 및 인공지능(AI)기술 기준마련 연구용역)
- funded by the Korea Culture Information Service Agency (한국문화정보원)
3D Facial Avatar Development
My role
Director
Software engineer
Achievement
Development of customized 3D facial avatar control in Unity and C#
3D mesh model rigging and morphing by C# script
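The morphing item above relies on morph targets (blend shapes). A minimal sketch of the underlying idea in Python, with hypothetical vertex data (the actual project used Unity and C#):

```python
# Minimal sketch of linear blend-shape (morph target) interpolation,
# the core operation behind mesh morphing. Vertex data are illustrative.

def morph(base, targets, weights):
    """Blend a base mesh toward one or more morph targets.

    base:    list of (x, y, z) vertex positions
    targets: list of meshes, each the same length as base
    weights: one weight in [0, 1] per target
    """
    result = []
    for i, (bx, by, bz) in enumerate(base):
        dx = dy = dz = 0.0
        for target, w in zip(targets, weights):
            tx, ty, tz = target[i]
            # Accumulate the weighted offset of each target from the base
            dx += w * (tx - bx)
            dy += w * (ty - by)
            dz += w * (tz - bz)
        result.append((bx + dx, by + dy, bz + dz))
    return result

# Example: one vertex, fully blended toward a hypothetical "smile" target
base = [(0.0, 0.0, 0.0)]
smile = [(1.0, 2.0, 0.0)]
print(morph(base, [smile], [1.0]))  # [(1.0, 2.0, 0.0)]
```

In Unity the same effect is achieved by driving blend-shape weights on a skinned mesh from a C# script each frame.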
Video
Multi-sensor based Gesture UI/UX development for VR training (In Korean: 가상훈련 콘텐츠용 복합감각 센서 기술을 이용한 제스처 UI/UX 개발)
funded by K-Tech (Korea University of Technology and Education, 한국기술교육대학)
Collaborators: Chung Hyuk Park (PI, GWSEAS)
My role
Project leader (2019-2020)
Senior researcher (2017-2019)
Hardware and software engineer
Achievement
Development of a real-time hand gesture recognition program (using C++, C#) with SR-300 sensors
Development of a deep learning model for EMG signal classification (using Python, TensorFlow, Keras) with the Myo band sensor
VR environment development (using Unity, C#, C++) with a Kinect v2 sensor and an HTC Vive device
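EMG classifiers like the one above are typically fed windowed features rather than raw samples. A hedged sketch of sliding-window RMS feature extraction for multi-channel EMG; the window size, step, and channel count are illustrative, not the project's actual parameters:

```python
# Sliding-window RMS feature extraction for multi-channel EMG, a common
# preprocessing step before a gesture classifier (e.g. a Keras model).
import math

def rms_features(samples, window, step):
    """samples: list of per-sample channel tuples; returns one RMS
    vector (one value per channel) for each sliding window."""
    n_channels = len(samples[0])
    features = []
    for start in range(0, len(samples) - window + 1, step):
        win = samples[start:start + window]
        features.append(tuple(
            math.sqrt(sum(s[ch] ** 2 for s in win) / window)
            for ch in range(n_channels)
        ))
    return features

# Example: 4 samples, 2 channels, window of 2, step of 2
signal = [(1.0, 0.0), (1.0, 0.0), (3.0, 4.0), (3.0, 4.0)]
print(rms_features(signal, window=2, step=2))
# [(1.0, 0.0), (3.0, 4.0)]
```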
Video
Related papers
WonHyong Lee, Aidan Murray, and Chung Hyuk Park, “Personalizable Real-time Hand Gesture Registration and Classification Framework” (submitted)
CTSI-CN: Teleassistive Robotic Nurse for Human-Robot Collaboration in Neonatal Environment
Collaborators: Chung Hyuk Park (PI, GWSEAS), Ashley Darcy-Mahoney (Co-PI, GWSoN), and Mia Waldron (Co-PI, Children’s National Medical Systems).
My role
Senior researcher
Hardware-oriented software engineer
System maintainer
Achievement
Development of haptic teleoperation (using C++) with an Omega7 haptic device and a Pioneer mobile manipulator platform with robotic arms
Development of a real-time body pose tracking algorithm (using C++) with a Kinect sensor
Development of speech teleoperation (using Python, C++)
System integration for network programming to remotely control the robot
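A common pattern in haptic teleoperation like the setup above is mapping stylus displacement to a robot velocity command. A minimal sketch; the gain, dead-band, and velocity limit are illustrative assumptions, not the project's actual values:

```python
# Position-to-velocity teleoperation mapping: the haptic stylus
# displacement from its rest pose is scaled into a robot velocity
# command, with a dead-band to reject hand tremor and a saturation
# limit for safety. All parameter values are illustrative.

def displacement_to_velocity(disp, gain=0.5, deadband=0.01, v_max=0.3):
    """Map a 3D stylus displacement (m) to a clamped velocity (m/s)."""
    cmd = []
    for d in disp:
        if abs(d) < deadband:           # ignore tiny motions near rest
            cmd.append(0.0)
            continue
        v = gain * d
        v = max(-v_max, min(v_max, v))  # saturate for safety
        cmd.append(v)
    return tuple(cmd)

print(displacement_to_velocity((0.005, 0.1, -2.0)))
# (0.0, 0.05, -0.3)
```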
Video
Related Papers
WonHyong Lee, Jeabyung Park, and Chung Hyuk Park, 2018. Acceptability of Tele-assistive Robotic Nurse for Human-Robot Collaboration in Medical Environment. In Proceedings of ACM/IEEE International Conference on Human Robot Interaction, Chicago, Illinois USA, March 2018 (HRI’18) (link)
Development Of Intelligent Technology For Home Service Using Robot (In Korean, 로봇을 이용한 홈서비스 제공을 위한 지능 기술 개발)
My role
Senior researcher
Hardware and software engineer
Achievement
Development of a tablet-type robotic head and touch-based human-robot interaction (using C++, MFC)
Robotic head teleoperation development (using C++, ROBOTIS robot toolkit)
Voice communication system integration (using Python, Java, C++)
Emotional human-robot interaction framework development (using Python, C++, Google Cloud API)
Real-time face detection and identification (using OpenCV, C++)
Development of social relationship framework between human and robot based on emotional memory (using Python, C++)
Visual Question Answering (VQA) interaction system integration (using Python, C++, TensorFlow)
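The "social relationship based on emotional memory" item above can be illustrated with a decaying per-user valence score: each identified user's accumulated emotional history fades over time and shapes the robot's reaction. A hedged sketch; the half-life, thresholds, and mood labels are illustrative assumptions, not the project's model:

```python
# Emotional-memory sketch: per-user valence that decays exponentially
# between interactions, so old experiences gradually lose influence.
import math

class EmotionalMemory:
    def __init__(self, half_life=3600.0):
        self.decay = math.log(2) / half_life   # per-second decay rate
        self.valence = {}                      # user_id -> (value, last_time)

    def update(self, user_id, delta, now):
        value, t = self.valence.get(user_id, (0.0, now))
        value *= math.exp(-self.decay * (now - t))  # forget gradually
        self.valence[user_id] = (value + delta, now)

    def mood_toward(self, user_id, now):
        value, t = self.valence.get(user_id, (0.0, now))
        value *= math.exp(-self.decay * (now - t))
        if value > 0.5:
            return "friendly"
        if value < -0.5:
            return "wary"
        return "neutral"

mem = EmotionalMemory()
mem.update("alice", +1.0, now=0.0)
print(mem.mood_toward("alice", now=0.0))     # friendly
print(mem.mood_toward("alice", now=7200.0))  # neutral (valence has decayed to 0.25)
```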
Video
Related Papers
WonHyong Lee and Jong-Hwan Kim, "Social Relationship Development between Human and Robot through Real-time Face Identification and Emotional Interaction (Video Abstract)," in Proc. ACM/IEEE International Conference on Human Robot Interaction, Chicago, Illinois, USA, March 2018 (HRI'18) (Best Video Award) (link, video link)
Sanghyun Cho, Won-Hyong Lee and Jong-Hwan Kim, “Implementation of Human-Robot VQA Interaction System with Dynamic Memory Networks”, IEEE SMC, 2017 (Best Student Paper Award) (link1, link2, video link)
HRI MESSI (Project Nickname)
2012.12 ~ 2014.02 : N02120248, "Development of a self-improving bidirectional sustainable HRI technology for 95% of successful responses with understanding user's complex emotion and transactional intent through continuous interactions," funded by the Ministry of Knowledge Economy (MKE, Korea) (In Korean: "(RCMS) 지속적인 상호작용을 통하여 사용자의 복합정서 이해 및 교류의도를 파악하고, 이에 대한 대응을 95%이상 적절하게 할 수 있는 자율발달 쌍방향 HRI 기술 개발")
My role
Subteam leader (2012~2016)
Hardware and software engineer
Achievement
Development of an interactively touchable robotic face on a Windows tablet device (using C++, MFC)
Emotion game development (using MATLAB, C++) with the robot DARwIn-OP
Development of a robot's expressive gesture generation program (using the ROBOTIS toolkit) with the robot DARwIn-Mini
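The "enhanced touch screen input recognition" in the first related paper below distinguishes stroke types from their geometry and timing. A minimal sketch of that kind of classification; the thresholds are illustrative assumptions, not the project's actual values:

```python
# Touch-input classification sketch: a contact stroke is labeled a tap,
# a swipe, or a long press from its travel distance and duration.
# Thresholds (in normalized screen units / seconds) are illustrative.
import math

def classify_touch(points, duration, tap_time=0.2, move_dist=0.05):
    """points: list of (x, y) contact positions; duration in seconds."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    travel = math.hypot(x1 - x0, y1 - y0)
    if travel >= move_dist:
        return "swipe"
    if duration <= tap_time:
        return "tap"
    return "long_press"

print(classify_touch([(0.0, 0.0), (0.2, 0.0)], duration=0.3))   # swipe
print(classify_touch([(0.0, 0.0), (0.0, 0.0)], duration=0.1))   # tap
print(classify_touch([(0.0, 0.0), (0.01, 0.0)], duration=1.0))  # long_press
```

The robotic face can then pick a reactive facial expression per recognized stroke type.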
Video
Related Papers
Won Hyong Lee, Jeong Woo Park, Woo Hyun Kim, and Myung Jin Chung, "Interactive Facial Robot System on a Smart Device; Enhanced Touch Screen Input Recognition and Robot's Reactive Facial Expression," 8th IEEE/ACM International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, March 4-7, 2013
Won Hyong Lee, Jeong Woo Park, Woo Hyun Kim, Hui Sung Lee, and Myung Jin Chung, "사람과 로봇의 사회적 상호작용을 위한 로봇의 가치효용성 기반 동기-감정 생성 모델(Robot’s Motivational Emotion Model with Value Effectiveness for Social Human and Robot Interaction)" 제어로봇시스템학회 ICROS 논문지 Journal of Institute of Control, Robotics and Systems (ICROS) 20.5 (2014): 503-512. (link)
Han-Gyeol Kim, Ju-Hwan Seo, Won Hyong Lee, Woo Hyun Kim, Jeong Woo Park, Jong-min Kim, Myung Jin Chung, Chang D. Yoo, and Dong-Soo Kwon, "User Behavior and Emotional Experience Based Emotion Inference and Generation (In Korean: 실환경에서의 사용자 행동 및 정서적 경험에 기반한 사용자 감정 추론 및 로봇의 정서 생성)," The 9th Korea Robotics Society Annual Conference (제9회 한국로봇종합학술대회), Buyeo, Korea, June 2014
Frontier Project (Project Nickname)
2008.3~2008.4 : Emotional Interaction using Emotion Generation, Recognition, and Expression (In Korean: “감정 생성, 인식, 표현을 이용한 감정 상호작용 기술”)
2008.4~2009.3 : Development of Emotion Expression System with Facial Expression, Voice, and Gesture (In Korean: “얼굴 표정 음성 제스처가 포함된 감정표현 시스템 개발”)
2009.4~2010.3 : Development of Synchronized Multi-modal Emotion Expression based on Primitive DB (In Korean: “Primitive DB 개발을 통한 동기화된 멀티모달 감정 표현 기술 개발”)
2010.4~2011.3 : Development of Multi-modal Emotion/Situation/Intention Expression DB and Automatic Generation, 2nd Year (In Korean: “멀티모달 감정/상황/의도 표현 DB 및 표현 자동생성 기술(2차년도)”)
2011.4~2012.3 : Development of Multi-modal Emotion/Situation/Intention Expression DB and Automatic Generation, 3rd Year (In Korean: “멀티모달 감정/상황/의도 표현 DB 및 표현 자동생성 기술(3차년도)-1서브” )
2012.4~2013.4 : Development of Multi-modal Emotion/Situation/Intention Expression DB and Automatic Generation, 4th Year (In Korean: “멀티모달 감정/상황/의도 표현 DB 및 표현 자동생성 기술(4차년도)”)
This research was performed for the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs, funded by the Ministry of Knowledge Economy of Korea.
My role
Subteam leader (2012~2013)
Software and hardware engineer
System maintainer
Achievement
Expressive robotic head development
Hardware and software maintenance
Development of robot's gesture generation program (using C++, MFC)
Video
Related Papers
H. S. Lee, J. W. Park, S. H. Jo, M. G. Kim, Won Hyong Lee, M. J. Chung, “A Mascot-Type Facial Robot with a Linear Dynamic Affect-Expression Model,” Proc. the 17th World Congress of International Federation of Automatic Control (IFAC) Seoul, Korea, p. 14099, July, 2008 → Best Video Prize
Park, Jeong Woo, Hui Sung Lee, and Myung Jin Chung. "Generation of realistic robot facial expressions for human robot interaction." Journal of Intelligent & Robotic Systems 78.3-4 (2015): 443-462.
J. W. Park, W. H. Kim, Won Hyong Lee, W. H. Kim, M. J. Chung, "Lifelike Facial Expression of Mascot-type Robot based on Emotional Boundaries," 2009 IEEE International Conference on Robotics and Biomimetics, Guilin, Guangxi, China, December 19-23, 2009 → Best Paper Finalist
J. W. Park, W. H. Kim, Won Hyong Lee, M. J. Chung, "A Robot Simulator 'FRESi' for Dynamic Facial Expression", The 6th International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 727 - 728, Gwangju, Korea, October 29-31, 2009
Jaewoo Kim, Woo Hyun Kim, Won Hyong Lee, Ju-Hwan Seo, Myung Jin Chung and Dong-Soo Kwon, "Automated Robot Speech Gesture Generation System Based on Dialog Sentence Punctuation Mark Extraction," 2012 IEEE/SICE International Symposium on System Integration, Fukuoka, Japan, December 16-18, 2012
W. H. Kim, J. W. Park, Won Hyong Lee, W. H. Kim, M. J. Chung, "Synchronized Multimodal Expression Generation using Editing Toolkit for a Human-friendly Robot," 2009 IEEE International Conference on Robotics and Biomimetics, Guilin, Guangxi, China, December 19-23, 2009
Woo Hyun Kim, Jeong Woo Park, Won Hyong Lee, Myung Jin Chung, and Hui Sung Lee, "LMA based Emotional Motion Representation using RGB-D Camera," 8th IEEE/ACM International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, March 4-7, 2013
I3RC Project (Project Nickname)
2008.6~2010.12 : Development of Multi-modal Intelligent Human-Robot Interaction for Service Robot (In Korean: “서비스 로봇을 위한 멀티 모달 지능형 인간-로봇 상호작용 기술개발”, 정보통신연구진흥원)
This research was performed for the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency).
My role
Software engineer
Achievement
Development of multi-modal intelligent human-robot interaction technology for service robots
Related Papers
W. H. Kim, J. W. Park, Won Hyong Lee, W. H. Kim, M. J. Chung, "Stochastic Approach on a Simplified OCC Model for Uncertainty and Believability," 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Daejeon, Korea, December 15-18, 2009
Development of a Multi-sensor Facial Robot, "Sunflower"
My role
Team member
Media director
Video
Undergraduate Research Project (URP)
Title: Implementation of Voice Synthesizer on DSP and Glove Interface
Science Exhibition Participation
English Title: Restoration of the A-Ja Room (亞字房) in Chil-Bul Temple (七佛寺) through Scientific Analysis of the Korean Floor Heating System (Ondol)
Korean Title: 구들(온돌)의 과학적 원리분석을 통한 칠불사 아자방의 복원
Award: Grand Prize at Seoul Science Exhibition
Award: Encouraging Prize at National Science Exhibition