Healthcare
Our lab conducts research on multimodal healthcare AI for recognizing rehabilitation actions and quantitatively assessing movement quality in stroke patients. In particular, MMeViT uses data from IMU sensors and RGB-D cameras to recognize upper-limb activities of daily living in home environments, enabling more reliable post-stroke action monitoring.
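The published MMeViT design is not reproduced here, but the general idea of fusing wearable and visual data can be illustrated with a minimal PyTorch sketch. It assumes a simple late-fusion layout in which per-modality encoders (a hypothetical ImuEncoder over accelerometer/gyroscope windows and an RgbdEncoder over per-frame RGB-D features) produce token sequences that a small transformer encoder fuses before classifying the activity; all module names, dimensions, and the number of classes are illustrative assumptions, not the actual MMeViT architecture.

```python
import torch
import torch.nn as nn

class ImuEncoder(nn.Module):
    """Encodes a window of IMU samples (acc + gyro, 6 channels) into a token sequence."""
    def __init__(self, in_channels=6, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, dim, kernel_size=5, stride=2, padding=2),
            nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=5, stride=2, padding=2),
            nn.GELU(),
        )

    def forward(self, x):                      # x: (B, T, 6)
        x = self.net(x.transpose(1, 2))        # (B, dim, T')
        return x.transpose(1, 2)               # (B, T', dim)

class RgbdEncoder(nn.Module):
    """Projects per-frame RGB-D features (e.g. pooled CNN features) into tokens."""
    def __init__(self, in_dim=512, dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, x):                      # x: (B, F, in_dim)
        return self.proj(x)                    # (B, F, dim)

class LateFusionAdlClassifier(nn.Module):
    """Fuses IMU and RGB-D tokens with a transformer encoder and classifies the ADL."""
    def __init__(self, num_classes=10, dim=128):
        super().__init__()
        self.imu_enc = ImuEncoder(dim=dim)
        self.rgbd_enc = RgbdEncoder(dim=dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, imu, rgbd):
        tokens = torch.cat([
            self.cls_token.expand(imu.size(0), -1, -1),
            self.imu_enc(imu),
            self.rgbd_enc(rgbd),
        ], dim=1)
        fused = self.fusion(tokens)
        return self.head(fused[:, 0])          # logits read from the [CLS] token

# Example: a 2 s IMU window at 100 Hz and 16 RGB-D frame features per clip.
model = LateFusionAdlClassifier(num_classes=10)
logits = model(torch.randn(2, 200, 6), torch.randn(2, 16, 512))
print(logits.shape)                            # torch.Size([2, 10])
```

A late-fusion layout like this keeps each sensor pipeline independent, which is convenient in home settings where one modality may drop out; the actual model may fuse the streams differently.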
We also developed RAST-G@, a deep learning model that combines a spatio-temporal graph convolutional network (ST-GCN) with temporal attention to evaluate rehabilitation movement quality from skeleton sequences and provide structured feedback to both patients and therapists.
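RAST-G@'s exact configuration is not reproduced here; the sketch below shows the underlying pattern it builds on: ST-GCN-style blocks (a graph convolution over skeleton joints followed by a temporal convolution over frames), attention-weighted temporal pooling, and a regression head that outputs a scalar quality score. The adjacency matrix, layer sizes, and score range are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StGcnBlock(nn.Module):
    """One ST-GCN-style block: graph conv over joints, then temporal conv over frames."""
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        # Row-normalized adjacency with self-loops, shared across the batch.
        a = adjacency + torch.eye(adjacency.size(0))
        self.register_buffer("A", a / a.sum(dim=1, keepdim=True))
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                            # x: (B, C, T, V) with V joints
        x = self.spatial(x)                          # mix channels per joint
        x = torch.einsum("bctv,vw->bctw", x, self.A) # propagate along the skeleton graph
        return self.relu(self.temporal(x))           # mix information across time

class QualityScorer(nn.Module):
    """Scores movement quality from a skeleton sequence via attention-pooled ST-GCN features."""
    def __init__(self, adjacency, in_ch=3, dim=64):
        super().__init__()
        self.block1 = StGcnBlock(in_ch, dim, adjacency)
        self.block2 = StGcnBlock(dim, dim, adjacency)
        self.attn = nn.Linear(dim, 1)                # temporal attention over frames
        self.head = nn.Linear(dim, 1)                # scalar quality score

    def forward(self, x):                            # x: (B, C, T, V), C = (x, y, z) per joint
        h = self.block2(self.block1(x))              # (B, dim, T, V)
        h = h.mean(dim=3).transpose(1, 2)            # pool joints -> (B, T, dim)
        w = torch.softmax(self.attn(h), dim=1)       # per-frame attention weights
        pooled = (w * h).sum(dim=1)                  # attention-weighted temporal pooling
        return self.head(pooled).squeeze(-1)         # one quality score per sequence

# Example: 2 sequences of 100 frames, 25 joints, 3D coordinates, on a chain-shaped skeleton.
V = 25
adj = torch.zeros(V, V)
for i in range(V - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
scores = QualityScorer(adj)(torch.randn(2, 3, 100, V))
print(scores.shape)                                  # torch.Size([2])
```

The temporal attention weights indicate which phases of a movement drove the score, which is one way such a model can ground the structured feedback returned to patients and therapists.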
These studies matter because they enable more objective, consistent, and scalable rehabilitation assessment than conventional subjective evaluation methods. In multimodal healthcare AI, heterogeneous data such as video, inertial sensor signals, and biosignals are analyzed jointly to better understand a patient's condition and behavior. This makes it possible not only to recognize actions, but also to assess recovery progress, detect abnormal movement patterns, and deliver personalized rehabilitation feedback.
Overall, our lab aims to improve the accessibility, objectivity, and continuity of rehabilitation through AI-driven digital healthcare systems for home-based recovery.