In the domain of multimodal learning, my research focuses on fusing information from diverse sources, including video, time-series sensor data, and language. By developing novel techniques, I aim to build a comprehensive world model that improves our ability to interpret complex environments through multiple modalities. This interdisciplinary approach yields nuanced insights into human activities captured by wearable sensors and extends to the broader spectrum of multimedia data, contributing to advances in computer vision, natural language processing, and the seamless integration of multimodal information.
Within generative modeling, my research niche is applying these models to time-series analysis. I specialize in using generative models to decode the intricate patterns embedded in time-series data from wearable devices, enabling accurate human activity recognition. My work also extends to the industrial domain, where I apply generative models to time-series data from manufacturing processes in order to optimize workflows and strengthen predictive maintenance for machinery and lithium-ion batteries. This approach aims to bridge the gap between generative models and real-world applications, particularly in human-centric technologies and industrial process optimization.
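One way to make this concrete is a minimal sketch of a generative (probabilistic) time-series model used for scoring: fit a Gaussian AR(1) model to a healthy sensor trace, then score new traces by their average log-likelihood under that model, so that unusual behavior shows up as a low score. The data below is synthetic and the function names (`fit_ar1`, `avg_log_likelihood`) are illustrative, not from any specific project.

```python
import numpy as np

def fit_ar1(x):
    """Fit x[t] = c + phi * x[t-1] + eps, eps ~ N(0, sigma^2), by least squares."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    resid = x[1:] - X @ coef
    sigma2 = resid.var() + 1e-12  # noise variance (guard against zero)
    return coef[0], coef[1], sigma2

def avg_log_likelihood(x, c, phi, sigma2):
    """Average Gaussian log-likelihood of one-step-ahead predictions."""
    resid = x[1:] - (c + phi * x[:-1])
    return float(np.mean(-0.5 * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2)))

rng = np.random.default_rng(0)
# Synthetic "healthy" sensor channel: smooth oscillation plus small drift.
normal = 0.1 * np.cumsum(rng.normal(0, 0.1, 500)) + np.sin(np.linspace(0, 20, 500))
c, phi, sigma2 = fit_ar1(normal)

score_normal = avg_log_likelihood(normal, c, phi, sigma2)
# Synthetic "anomalous" trace: the same signal corrupted by heavy noise.
anomalous = normal + rng.normal(0, 1.0, 500)
score_anom = avg_log_likelihood(anomalous, c, phi, sigma2)
# The corrupted trace scores far lower under the learned model.
```

The same likelihood-scoring idea carries over when the AR(1) model is replaced by a deep generative model (e.g. a VAE or a diffusion model) trained on sensor windows.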
Human Activity Recognition (HAR) has become one of the most active and challenging research fields, driven by advances in wearable sensors and deep learning. By harnessing the wealth of data obtained from wearable sensors, I aim to develop robust algorithms and models that accurately interpret and classify human activities. This matters not only for health and wellness monitoring but also for sports science, rehabilitation, and personalized user interfaces. Through this work, I strive to advance wearable technology as a powerful tool for understanding and enhancing human performance and well-being.
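A typical HAR pipeline segments the raw sensor stream into fixed-length windows, extracts features per window, and classifies each window. The sketch below illustrates this on synthetic stand-ins for accelerometer data ("walking" as an oscillatory trace, "resting" as a near-flat one) with a simple nearest-centroid classifier; the signals and the `window_features` helper are hypothetical, not a real dataset or API.

```python
import numpy as np

def window_features(signal, win=50):
    """Split a 1-D trace into fixed-length windows and extract simple statistics."""
    n = len(signal) // win
    wins = signal[: n * win].reshape(n, win)
    # Per-window mean, standard deviation, and mean absolute first difference.
    return np.column_stack([wins.mean(1), wins.std(1),
                            np.abs(np.diff(wins, axis=1)).mean(1)])

rng = np.random.default_rng(1)
t = np.linspace(0, 50, 2500)
# Synthetic stand-ins: "walking" oscillates, "resting" is near-constant.
walking = np.sin(2 * np.pi * 2 * t) + rng.normal(0, 0.2, t.size)
resting = 0.05 * rng.normal(0, 1, t.size)

# One centroid per activity in feature space (a stand-in for a trained model).
centroids = {"walking": window_features(walking).mean(0),
             "resting": window_features(resting).mean(0)}

def classify(feat):
    """Label one feature vector by its nearest activity centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

pred_walk = classify(window_features(np.sin(2 * np.pi * 2 * t[:50]))[0])
pred_rest = classify(window_features(np.zeros(50))[0])
```

In practice the hand-crafted statistics and centroids are replaced by learned representations (e.g. a CNN or transformer over raw windows), but the window-then-classify structure stays the same.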
Human-robot interaction focuses on enabling robots to collaborate seamlessly with humans in real-world settings. In collaboration with POSCO, we work on controlling quadruped robots such as Spot for industrial safety and site monitoring, while with ROBOTIS I lead a humanoid robot project targeting distribution and retail applications. At the core of these efforts is the integration of Vision-Language-Action (VLA) models, which enable robots to perceive, understand, and act intelligently in dynamic environments, advancing the future of human-centric intelligent robotics.
In predictive maintenance for industrial machinery and lithium-ion batteries, my research is dedicated to improving the reliability and efficiency of critical systems. I develop techniques for fault detection and diagnosis, with particular emphasis on identifying issues in machine components such as bearings. My work also extends to lithium-ion batteries, where I specialize in predicting the remaining useful life (RUL) of these crucial energy storage devices. By leveraging data-driven approaches, I strive to provide actionable insights that empower industries to address potential failures proactively, optimize maintenance schedules, and ultimately extend the performance and longevity of both industrial machinery and energy storage systems.
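A minimal illustration of RUL estimation is extrapolating a battery's capacity-fade trend to a standard end-of-life threshold (commonly 80% of nominal capacity). The sketch below fits a linear fade model to a synthetic capacity curve and solves for the cycle at which the threshold is crossed; the fade rate, noise level, and threshold are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical capacity-fade history: fraction of nominal capacity per cycle,
# with a true linear fade of 0.0008 per cycle plus measurement noise.
cycles = np.arange(1, 201)
capacity = 1.0 - 0.0008 * cycles + rng.normal(0, 0.002, cycles.size)

# Fit a linear fade model to the observed history.
slope, intercept = np.polyfit(cycles, capacity, 1)

EOL_THRESHOLD = 0.8  # common end-of-life definition: 80% of nominal capacity
eol_cycle = (EOL_THRESHOLD - intercept) / slope  # cycle where the fit crosses EOL
rul = eol_cycle - cycles[-1]  # remaining useful life, in cycles, from the last observation
```

Real capacity curves are rarely linear (knee points, regeneration effects), so practical RUL models use nonlinear or learned degradation models, but the fit-then-extrapolate-to-threshold logic is the same.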