2nd ACM Workshop on Machine Learning for Next Generation Communication and Edge Networks (ML4NxtGNet)
Co-located with ACM MobiHoc 2025
Houston, USA, October 30, 2025
Learning in Next Generation (NextG) communication networks will be performed on data predominantly originating at edge and user devices, supporting applications such as 6G-and-beyond wireless networks, the Internet of Things (IoT), mobile healthcare, and self-driving cars. A growing body of research has focused on learning across the edge-to-cloud continuum, engaging the edge in the learning process along with the cloud and everything in between, which can be advantageous in terms of better utilization of network resources, delay reduction, and resiliency against unavailability and failures.
However, enabling emerging artificial intelligence/machine learning (AI/ML) algorithms in NextG and edge networks necessitates going back to the drawing board to rethink several now-established assumptions about networks, resources, and infrastructures. For instance, standard centralized solutions can be ill-equipped to handle larger and more complicated AI/ML models, such as large language models (LLMs), at the edge, given constrained network resources. In this context, research is needed to tailor AI/ML mechanisms for the edge-to-cloud continuum that securely harvest limited and heterogeneous resources, including computing power, storage, battery, and networking resources (including bandwidth), scattered across end devices, edge servers, and the cloud, with minimal communication cost.
This workshop will explore AI/ML mechanisms over NextG and edge networks, exploiting techniques including decentralized learning, model-distributed inference, and conditional computation. We invite researchers to contribute novel research results that advance the development of distributed AI/ML at the resource-constrained edge.
Topics of interest include, but are not limited to:
Decentralized learning
Model-distributed training and inference
Split learning
Large language models at edge-to-cloud continuum
Random-walk-based learning
Efficient gossip algorithms for decentralized learning
Early-exit mechanisms for resource-aware AI/ML
Conditional computation and mixture of experts
Low-cost privacy-preserving AI/ML
Trustworthy AI/ML
Hybrid distributed and decentralized learning
Important Dates:
Paper Submission: July 20, 2025, AoE
Acceptance Notification: August 21, 2025
Camera Ready Submission: August 30, 2025
Workshop Date: October 30, 2025
Paper Submissions: Authors are invited to submit original, unpublished papers not under review elsewhere. Submissions will be subject to a peer-review process. All submissions should be written in English and follow the MobiHoc template and requirements. Papers should NOT exceed 6 pages (US letter size, double column), including figures, tables, and references. Submissions are via HotCRP: https://ml4nxtgnet25.hotcrp.com
Organizers:
Scott Brown, Army Research Lab
Matt Dwyer, Army Research Lab
Salim El Rouayheb, Rutgers University
Erdem Koyuncu, University of Illinois Chicago
Hulya Seferoglu, University of Illinois Chicago