In the domain of multimodal learning, my research focuses on integrating information from diverse sources, including video, time-series sensor data, and language. By developing novel techniques, I aim to build a comprehensive world model that improves our capacity to interpret and understand complex environments through multiple modalities. This interdisciplinary approach not only yields nuanced insights into human activities captured by wearable sensors, but also extends to the broader spectrum of multimedia data, contributing to advances in computer vision, natural language processing, and the integration of multimodal information for a more complete understanding of the world.
Within the field of generative models, my research niche is harnessing these models for time-series analysis. Specifically, I leverage generative models to decode the intricate patterns embedded in time-series data from wearable devices, enabling precise human activity recognition. My focus also extends to the industrial domain, where I apply generative models to time-series data from various industrial processes, aiming to optimize workflows and strengthen predictive maintenance strategies for machinery and lithium-ion batteries. This approach aims to bridge the gap between generative models and real-world applications, particularly human-centric technologies and industrial process optimization.
Human Activity Recognition (HAR) has become one of the most active and challenging research fields with the development of wearable sensors and deep learning technologies. By harnessing the wealth of data obtained from wearable sensors, I aim to develop robust algorithms and models that accurately interpret and classify human activities. This holds significance not only for health and wellness monitoring but also for applications in sports science, rehabilitation, and personalized user interfaces. Through this work, I strive to advance wearable technology as a powerful tool for understanding and enhancing human performance and well-being.
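A sensor-based HAR pipeline can be sketched in miniature: segment the sensor stream into windows, extract features, and classify. The sketch below is a deliberately simple illustration, not my actual method; the synthetic one-axis signals, the two statistical features, and the nearest-centroid classifier are all assumptions chosen only to keep the example self-contained.

```python
import math
import random

def extract_features(window):
    """Compute simple statistical features (mean, std) from one sensor window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, math.sqrt(var))

def make_signal(activity, length=128, seed=0):
    """Synthetic 1-axis accelerometer signal (hypothetical data, not real recordings)."""
    rng = random.Random(seed)
    if activity == "walking":  # periodic, high amplitude
        return [math.sin(0.3 * t) + rng.gauss(0, 0.1) for t in range(length)]
    # "sitting": near-constant, low noise
    return [0.1 + rng.gauss(0, 0.05) for _ in range(length)]

def nearest_centroid(train, sample):
    """Classify a feature vector by distance to per-class feature centroids."""
    centroids = {}
    for label, feats in train.items():
        m = len(feats)
        centroids[label] = tuple(sum(f[i] for f in feats) / m for i in range(2))
    return min(centroids,
               key=lambda c: sum((sample[i] - centroids[c][i]) ** 2 for i in range(2)))

train = {
    "walking": [extract_features(make_signal("walking", seed=s)) for s in range(5)],
    "sitting": [extract_features(make_signal("sitting", seed=s)) for s in range(5)],
}
test_feat = extract_features(make_signal("walking", seed=99))
print(nearest_centroid(train, test_feat))  # prints "walking"
```

In practice, handcrafted statistics give way to learned representations (e.g. convolutional or generative models over raw multichannel windows), but the window-feature-classify structure stays the same.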
In the domain of predictive maintenance for industrial machines and lithium-ion batteries, my research is dedicated to improving the reliability and efficiency of critical systems. Specifically, I develop techniques for fault detection and diagnosis, with particular emphasis on identifying issues in machine components such as bearings. My work also extends to lithium-ion batteries, where I specialize in predicting the remaining useful life (RUL) of these energy storage devices. By leveraging data-driven approaches and advanced algorithms, I aim to provide actionable insights that allow industries to address potential failures proactively, optimize maintenance schedules, and ultimately extend the performance and longevity of both industrial machinery and energy storage systems.
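As one highly simplified illustration of data-driven RUL estimation, the sketch below fits a linear capacity-fade model to hypothetical cycle/capacity measurements and extrapolates to an end-of-life threshold of 80% of initial capacity. The data, the linear fade assumption, and the threshold are all illustrative assumptions; real degradation is nonlinear, and practical RUL models are considerably more sophisticated.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def remaining_useful_life(cycles, capacities, eol_fraction=0.8):
    """Cycles left until capacity crosses the end-of-life threshold
    (a fraction of initial capacity), by linear extrapolation."""
    a, b = fit_line(cycles, capacities)
    threshold = eol_fraction * capacities[0]
    eol_cycle = (threshold - b) / a  # solve a*x + b = threshold
    return max(0.0, eol_cycle - cycles[-1])

# Hypothetical capacity-fade measurements (Ah) over charge cycles
cycles = [0, 100, 200, 300, 400]
caps = [2.00, 1.96, 1.92, 1.88, 1.84]
print(round(remaining_useful_life(cycles, caps)))  # prints 600
```

The same skeleton generalizes: replace the linear fit with a learned degradation model and the single capacity series with multichannel condition-monitoring data, and the extrapolate-to-threshold step becomes an RUL predictor.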
In virtual try-on technology for online clothing shopping, my research aims to change how consumers interact with fashion in the digital realm. By leveraging computer vision and augmented reality techniques, I aim to create immersive and realistic virtual try-on experiences. This entails developing models that accurately simulate the fit and appearance of clothing on a user's body, enabling shoppers to make more informed and satisfying purchasing decisions without a physical try-on. Through this approach, I strive to bridge the gap between online and offline shopping, enhancing convenience and confidence in fashion e-commerce.