“Ubiquitous AI” refers to AI that is present everywhere, seamlessly embedded in the devices and environments people use every day. As AI models grow larger and more capable, relying solely on the cloud introduces latency, dependence on stable connectivity, high operating costs, and, most importantly, privacy risks. Moving AI onto devices offers a way forward, enabling fast, private, and reliable intelligence across emerging systems such as self-driving vehicles, Apple Intelligence, AI-powered wearables, and smart appliances. This trend is accelerating and is expected to extend even further into home robots, assistive devices, implanted technologies, and future everyday platforms.
Achieving ubiquitous AI, however, introduces unique technical challenges. AI models must run on devices with strict limits on compute power, memory, and energy. In addition, AI must adapt to individual differences and constantly changing environments, since users have diverse behaviors and devices vary widely in sensors and hardware capabilities. Making AI efficient, adaptive, and privacy-aware at the same time is a central goal of ubiquitous AI research. It represents the next era of computing, where intelligence supports people anytime and anywhere directly from the devices around them.
We work at the intersection of AI and systems to make AI available on every device and for every person.
Our primary research areas include, but are not limited to:
Efficient On-Device AI Systems
Adaptive and Personalized AI
Human-Centered AI Applications
Deploying AI on small devices is challenging because of their limited resources: compute, memory, and battery life. Unlike cloud-based AI systems, which have access to virtually unlimited resources, on-device systems must operate under strict resource constraints. This issue is especially critical for advanced AI models that demand considerable computational power. Our goal is to design systems and frameworks that enable faster inference while minimizing memory and battery usage, thereby supporting efficient on-device AI.
#Efficient AI Agent
#Small On-Device LLM
#Model Compression
#Tiny AI Accelerators
#Efficient Mixture-of-Experts
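As one illustration of the model-compression theme above, the sketch below shows post-training symmetric int8 quantization: weights are stored in 8 bits plus a single float scale, cutting memory roughly 4x versus float32 at the cost of a small, bounded rounding error. This is a minimal, self-contained example (the function names are ours), not the lab's specific method.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]
    with a single scale factor shared by the whole tensor."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)  # 0.25: int8 uses a quarter of the float32 memory
# Rounding error is bounded by half a quantization step (scale / 2).
print(float(np.max(np.abs(w - w_hat))) <= scale / 2 + 1e-6)
```

Per-channel scales and quantization-aware training are common refinements of the same idea when the accuracy loss of this simplest scheme is too large.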
Environmental change poses a significant challenge in deploying AI because of the diverse nature of users and devices: users have unique physical conditions, behaviors, and lifestyles, and devices differ in technical specifications such as sensor types, computational power, and operating systems. These diverse factors collectively produce data discrepancies. Since machine learning models typically perform well only on data resembling what they were trained on, it is difficult to guarantee the desired performance on data from a new environment. This problem has been a major hurdle to the broader adoption of promising on-device AI technologies. Our work in this direction makes unique contributions by proposing adaptation frameworks that minimize user burden while overcoming these real-world challenges.
#Personal AI Agent
#Personalized LLM
#Test-Time Adaptation
#Few-Shot Learning
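To make the adaptation idea concrete, here is a minimal sketch of one family of test-time adaptation methods: re-estimating normalization statistics from the unlabeled test batch itself (in the spirit of batch-norm statistics adaptation). It uses synthetic data and NumPy only, and is an illustration under our own assumptions rather than the lab's specific framework.

```python
import numpy as np

def normalize(x, mean, std):
    """Standardize features with the given statistics."""
    return (x - mean) / std

rng = np.random.default_rng(1)

# "Training-time" statistics estimated from source-domain data.
source = rng.normal(0.0, 1.0, size=(1000, 4))
src_mean, src_std = source.mean(axis=0), source.std(axis=0)

# Test data arrives with a covariate shift (different mean and scale),
# e.g. a new user or a device with a different sensor.
target = rng.normal(3.0, 2.0, size=(256, 4))

# Without adaptation, features are badly mis-normalized under the shift.
shifted = normalize(target, src_mean, src_std)

# Test-time adaptation: re-estimate the statistics from the unlabeled
# test batch itself; no labels or gradient updates are required.
tgt_mean, tgt_std = target.mean(axis=0), target.std(axis=0)
adapted = normalize(target, tgt_mean, tgt_std)

print(abs(shifted.mean()) > 1.0)   # features far from zero mean
print(abs(adapted.mean()) < 1e-6)  # re-centered after adaptation
```

Gradient-based variants (e.g., minimizing prediction entropy on the test batch) follow the same pattern of adapting a small set of parameters online without user labels.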
While AI has achieved remarkable performance in tasks such as image classification and generation, its practical deployment remains limited to certain domains. In the era of ubiquitous computing, integrating AI directly onto devices with various sensing capabilities (e.g., vision, audio, natural language, and motion sensors) opens new opportunities for intelligent applications that process sensitive user data locally. Our research is at the forefront of this direction, seeking innovative applications that enrich user experiences by harnessing the capabilities of AI while preserving user privacy.
#Mobile AI Agent
#Mobile Health
#Vision-Language Models (VLM) and Vision-Language-Action Models (VLA)
#Multi-Modal Sensing
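As a toy illustration of multi-modal sensing, the sketch below fuses per-modality class scores by weighted late fusion, a common baseline for combining sensors such as vision and motion. The numbers and the activity-recognition framing are hypothetical, chosen only to show how one modality can correct another.

```python
import numpy as np

def late_fusion(scores_by_modality, weights):
    """Weighted late fusion: combine per-modality class scores
    into a single score vector by a weighted sum."""
    stacked = np.stack([w * s for w, s in zip(weights, scores_by_modality)])
    return stacked.sum(axis=0)

# Hypothetical class scores for one sample over three activity classes.
vision = np.array([0.2, 0.7, 0.1])  # camera is confident in class 1
motion = np.array([0.1, 0.3, 0.6])  # motion sensor leans toward class 2

fused = late_fusion([vision, motion], weights=[0.5, 0.5])
print(fused)                 # [0.15 0.5  0.35]
print(int(np.argmax(fused))) # class 1 wins after fusion
```

Late fusion is attractive on-device because each modality's model can run (or be skipped) independently, trading accuracy for energy as conditions change.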