Research

Distributed Artificial Intelligence Framework

Challenges in Distributed AI Systems


How can we jointly train a machine learning model in a distributed network while keeping the data private and secure? Our research builds efficient and scalable frameworks to address this problem. These frameworks keep both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed data owners/workers and guaranteeing fast convergence. To this end, we are researching:
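
As a purely illustrative example of one building block behind such frameworks, the sketch below shows additive secret sharing over a finite field: each worker splits its local gradient into random shares sent to non-colluding aggregation servers, so the summed gradient can be reconstructed without any server seeing an individual gradient. The function names, field modulus, fixed-point encoding, and two-server setup are assumptions made for this sketch, not our framework's actual design.

    import numpy as np

    PRIME = 2**61 - 1   # field modulus (illustrative choice)
    SCALE = 2**16       # fixed-point scaling factor (illustrative choice)
    rng = np.random.default_rng(0)

    def encode(grad):
        # Map a float gradient into the finite field using fixed-point encoding.
        return np.mod(np.round(grad * SCALE).astype(np.int64), PRIME)

    def decode(x):
        # Map field elements back to floats (values are assumed to fit in the field).
        x = np.where(x > PRIME // 2, x - PRIME, x)  # recover signed values
        return x.astype(np.float64) / SCALE

    def share(secret, n_servers):
        # Split a field vector into additive shares that sum to it mod PRIME.
        shares = [rng.integers(0, PRIME, size=secret.shape, dtype=np.int64)
                  for _ in range(n_servers - 1)]
        shares.append(np.mod(secret - np.sum(shares, axis=0), PRIME))
        return shares

    # Toy run: 3 workers secret-share their gradients across 2 non-colluding servers.
    worker_grads = [rng.normal(size=4) for _ in range(3)]
    server_sums = [np.zeros(4, dtype=np.int64) for _ in range(2)]

    for g in worker_grads:
        for s, sh in enumerate(share(encode(g), 2)):
            # Each server only ever sees uniformly random-looking shares.
            server_sums[s] = np.mod(server_sums[s] + sh, PRIME)

    aggregate = decode(np.mod(server_sums[0] + server_sums[1], PRIME))
    print(np.allclose(aggregate, sum(worker_grads), atol=1e-3))  # True: the sum is recovered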

Collaborators

Federated Learning in Low Earth Orbit (LEO) Satellite Networks

Large-scale deployments of low Earth orbit (LEO) satellites collect massive amounts of Earth imagery and sensor data, which can empower machine learning (ML) to address global challenges such as real-time disaster navigation and mitigation. However, it is often infeasible to download all the high-resolution images and train these ML models on the ground because of limited downlink bandwidth, sparse connectivity, and regulatory constraints on imagery resolution. To address these challenges, we leverage Federated Learning (FL), in which ground stations and satellites collaboratively train a global ML model without sharing the images captured on the satellites. We are making the following contributions:
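
For readers unfamiliar with FL, the sketch below illustrates a basic FedAvg-style loop on a toy regression task: each satellite runs a few local gradient steps on its private data, and a ground station averages the resulting models weighted by local sample counts, so raw imagery never leaves the satellite. The names, toy task, and hyperparameters are illustrative assumptions and do not describe our actual system.

    import numpy as np

    rng = np.random.default_rng(1)
    DIM, LR, LOCAL_STEPS, ROUNDS = 5, 0.1, 10, 20
    true_w = rng.normal(size=DIM)

    # Each "satellite" holds locally captured data that never leaves the device.
    satellites = []
    for _ in range(4):
        X = rng.normal(size=(50, DIM))
        y = X @ true_w + 0.01 * rng.normal(size=50)
        satellites.append((X, y))

    def local_sgd(w, X, y):
        # A few local gradient steps on one satellite's private data.
        for _ in range(LOCAL_STEPS):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - LR * grad
        return w

    global_w = np.zeros(DIM)
    for _ in range(ROUNDS):
        # In a real deployment, only satellites with an open contact window participate.
        local_models, sample_counts = [], []
        for X, y in satellites:
            local_models.append(local_sgd(global_w.copy(), X, y))
            sample_counts.append(len(y))
        # The ground station aggregates model parameters, never the raw imagery.
        global_w = np.average(local_models, axis=0, weights=sample_counts)

    print("distance to target model:", np.linalg.norm(global_w - true_w))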

Collaborators

On-Device Learning for Mobile/Edge Devices

Machine learning has delivered significant improvements in many applications, including computer vision (CV) and natural language processing (NLP), thanks to large volumes of training data combined with increased computing resources. Recently, on-device learning has emerged as a new paradigm for making edge devices "smarter" and more efficient by observing changes in the collected data and self-adjusting/reconfiguring the devices' operating model. However, these edge devices still suffer from limited computing resources. To address this challenge, we are studying:

We plan to extend the application of on-device learning from 5G/6G cellular systems to small language models (SLMs), and eventually to large language models (LLMs).
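
As a rough illustration of the self-adjusting behavior described above, the sketch below monitors a simple drift statistic on incoming data and, when drift is detected, fine-tunes only a small output head while a frozen feature extractor keeps the compute cost low. All names, thresholds, and model sizes are assumptions for illustration, not our actual approach.

    import numpy as np

    rng = np.random.default_rng(2)
    FEAT_DIM, DRIFT_THRESHOLD, LR = 8, 0.5, 0.05

    frozen_W = rng.normal(size=(FEAT_DIM, 16))  # pretrained feature extractor (kept frozen)
    head_w = np.zeros(FEAT_DIM)                 # small trainable head updated on the device
    running_mean = np.zeros(FEAT_DIM)           # lightweight statistic tracked on the device

    def features(x):
        # Frozen feature extractor: a fixed random projection with a nonlinearity.
        return np.tanh(frozen_W @ x)

    def observe_batch(X_raw, y):
        # Update the drift statistic; adapt the small head only when drift is detected.
        global head_w, running_mean
        Z = np.array([features(x) for x in X_raw])
        drift = np.linalg.norm(Z.mean(axis=0) - running_mean)
        running_mean = 0.9 * running_mean + 0.1 * Z.mean(axis=0)
        if drift > DRIFT_THRESHOLD:
            for _ in range(5):  # only a few cheap steps, to respect the compute budget
                grad = 2 * Z.T @ (Z @ head_w - y) / len(y)
                head_w -= LR * grad
        return drift

    # Toy data stream whose input distribution shifts halfway through.
    for t in range(20):
        shift = 0.0 if t < 10 else 2.0
        X_raw = rng.normal(loc=shift, size=(32, 16))
        y = rng.normal(size=32)
        print(f"step {t:2d}  drift = {observe_batch(X_raw, y):.3f}")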

Collaborators