Our primary research areas include, but are not limited to:
While AI has achieved remarkable performance in tasks such as image classification and generation, its practical deployment remains limited to a handful of domains. In the era of ubiquitous computing, integrating AI directly into devices with rich sensing capabilities (e.g., vision, audio, natural language, and motion sensors) opens up intelligent applications that process sensitive user data locally. Our research is at the forefront of this direction, pursuing applications that enrich user experiences by harnessing the capabilities of AI while preserving user privacy.
#Mobile AI Agent
#Mobile Health
#Vision-Language Model (VLM) and Vision-Language-Action (VLA)
#Multi-Modal Sensing
Environmental change poses a significant challenge in deploying AI due to the diversity of users and devices: users differ in physical conditions, behaviors, and lifestyles, while devices differ in technical specifications such as sensor types, computational power, and operating systems. These factors collectively cause discrepancies between training and deployment data. Because machine learning models typically perform well only on data resembling what they were trained on, it is difficult to guarantee the desired performance on data from a new environment. This problem is widely regarded as a major hurdle to the broader adoption of promising on-device AI technologies. Our work in this direction makes unique contributions by proposing adaptation frameworks that minimize user burden while overcoming such real-world challenges; a minimal sketch of one such technique follows the tags below.
#Personal AI Agent
#Personalized LLM
#Test-Time Adaptation
#Few-Shot Learning
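As a concrete illustration of test-time adaptation, here is a minimal PyTorch sketch in the style of TENT-like entropy minimization: the model adapts to each unlabeled test batch by updating only its batch-normalization parameters so as to reduce prediction entropy. The helper names (`configure_for_tta`, `adapt_step`) and the learning rate are illustrative assumptions, not part of a released framework.

```python
import torch
import torch.nn as nn

def configure_for_tta(model: nn.Module):
    """Freeze all weights except batch-norm affine parameters,
    a common choice in entropy-minimization TTA (e.g., TENT)."""
    model.train()  # BN layers use current test-batch statistics
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.requires_grad_(True)
            params += [m.weight, m.bias]
        else:
            for p in m.parameters(recurse=False):
                p.requires_grad_(False)
    return params

def adapt_step(model, x, optimizer):
    """One adaptation step on an unlabeled test batch: minimize
    the mean entropy of the model's predictions, then return logits."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch: adapt a pretrained, BN-based classifier to a test stream.
# model = ...  # pretrained on source-domain data
# optimizer = torch.optim.SGD(configure_for_tta(model), lr=1e-3)
# for x in test_stream:
#     predictions = adapt_step(model, x, optimizer).argmax(dim=1)
```

Updating only the batch-norm parameters keeps the adaptation lightweight and label-free, which is exactly why such recipes suit on-device deployment.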
Deploying AI on small devices presents significant challenges due to their limited resources, such as processor capacity, memory, and battery life. Unlike cloud-based AI systems, which have access to virtually unlimited resources, on-device systems must operate under strict resource constraints. This issue is especially critical for advanced AI models that demand considerable computational power. Our goal is to design systems and frameworks that enable faster inference while minimizing memory and battery usage, thereby supporting efficient on-device AI; a small compression sketch follows the tags below.
#Small On-Device LLM
#Model Compression
#Tiny AI Accelerators
#Efficient Mixture-of-Experts
#Collaborative Learning
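As one concrete example of the footprint savings that model compression targets, the sketch below applies PyTorch's built-in post-training dynamic quantization, which stores weights as int8 and quantizes activations on the fly. The toy two-layer network is a hypothetical stand-in for a real on-device model.

```python
import os
import torch
import torch.nn as nn

# Toy stand-in for a real on-device model; any nn.Module with
# nn.Linear layers works with dynamic quantization.
model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly, so no calibration data is needed.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of a model's parameters in megabytes."""
    torch.save(m.state_dict(), "_tmp.pt")
    mb = os.path.getsize("_tmp.pt") / 1e6
    os.remove("_tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```

Dynamic quantization is only the simplest entry point; the same motivation drives pruning, distillation, and the mixture-of-experts and accelerator-aware designs tagged above.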