My research interests lie at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI) in the healthcare domain. My first exposure to this area came during my undergraduate thesis, which highlighted the interplay between machine learning (ML) and human affect. That interest deepened through my master's research in precision health and inspired me to pursue a PhD to advance my work in this domain.
My doctoral research interests focus on developing AI applications for healthcare through multimodal data integration spanning imaging, text, audio, and sensor data. My master's thesis led me to this area through projects developing personalized mobile sensing models for stress prediction using wearable biosensor data. These projects addressed two major challenges in wearable technologies: first, labeling huge amounts of biosignal data, and second, handling the subjective nature of stress labels across participants. I published two papers from this work: "Individualized Stress Mobile Sensing Using Self-Supervised Pre-Training," published in Applied Sciences, which demonstrates that self-supervised learning (SSL) models remain robust with only 30% of the labels required by purely supervised models in a single-modal setting; and "Personalized Prediction of Recurrent Stress Events Using Self-Supervised Learning on Multimodal Time-Series Data," presented at the AI & HCI Workshop at ICML, where models pretrained with SSL required only 5% of the annotations to match the performance of non-SSL models.

A key area of my research interest is parameter-efficient learning, which tackles the computational demands of large-scale AI models. This is especially important in digital healthcare, where AI models are often deployed on edge devices; developing parameter-efficient representations will help ensure robust and accurate health-AI systems. Another direction is domain adaptation: I aim to explore how language-based representations from large language models (LLMs) can be adapted to modalities beyond text for seamless multimodal reasoning, leveraging LLMs to transfer general language intelligence to domain-specific tasks. Finally, I aim to advance data-efficient deep learning to address the challenge of data scarcity in healthcare.
Privacy concerns, high annotation costs, and disparities across demographic groups often limit the availability of large datasets. These challenges are especially acute in wearable technologies, where devices produce large volumes of unlabeled data every day. I aim to explore representation learning and self-supervised methods that can learn effectively from limited annotated data.
Beyond research, I am committed to teaching and mentorship. As a Teaching Assistant for multiple undergraduate computer science courses at the University of Hawai‘i at Mānoa, I have delivered hands-on Java labs, guided students through data structure implementations, and provided individualized debugging support during office hours. These experiences strengthened my dedication to creating inclusive learning environments and communicating complex technical concepts clearly to diverse audiences.
Looking ahead, I aim to pursue a career in research, specifically focusing on leveraging ML to address complex challenges in healthcare applications. I hope to advance the next generation of medical AI systems that are not only powerful but also equitable, interpretable, and accessible to all.