During the last decade, new machine learning (ML) methods have been developed that solve, with impressive performance, complex tasks that were until recently believed to be impossible. Cameras recognize faces, cars see and avoid obstacles, and computational approaches are revolutionizing biology and medicine. Such performance is often enabled by the availability of big datasets and large platforms that provide vast computational and communication resources for training. In many real-world scenarios, however, we deal with some form of network (e.g., multi-agent systems, the Internet of Things, transportation networks, engineered biological networks, social networks, and intra-body sensor networks) with limited computational and communication resources, where traditional ML approaches may fail.
This course covers the fundamentals of machine learning over networks (MLoN). It starts from a conventional single-agent setting, where one server runs a convex or nonconvex optimization problem to learn an unknown function, and introduces several approaches to address this seemingly simple yet fundamental problem. We then introduce an abstract form of MLoN, present centralized and distributed solution approaches, and exemplify them by training a deep neural network over a network. The course covers various important aspects of MLoN, including optimality, computational complexity, communication complexity, security, large-scale learning, online learning, MLoN with partial information, and several application areas. As most of these topics are the subject of active research, the course is not based on a single textbook but builds on a series of key publications in the field.
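To give a flavor of the abstract MLoN problem, the sketch below shows one classical distributed solution approach: decentralized gradient descent, in which each agent takes a gradient step on its own local data and then averages its model with its network neighbors. This is only a minimal illustrative example in Python/NumPy; the ring network, the mixing matrix, and the least-squares problem are assumptions for illustration, not taken from the course materials.

```python
import numpy as np

# Illustrative sketch of decentralized gradient descent (not from the course).
rng = np.random.default_rng(0)
n_agents, dim = 4, 3
w_true = rng.normal(size=dim)  # ground-truth model shared by all agents

# Each agent holds a private local dataset (A_i, b_i) for least squares.
local_data = []
for _ in range(n_agents):
    A = rng.normal(size=(20, dim))
    b = A @ w_true
    local_data.append((A, b))

# Doubly stochastic mixing matrix for a ring network of 4 agents:
# each agent averages with its two neighbors only.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

w = np.zeros((n_agents, dim))  # row i holds agent i's current model
lr = 0.05
for _ in range(1000):
    # Local gradient step of (1/2m)||A_i w_i - b_i||^2 on each agent ...
    grads = np.stack([A.T @ (A @ wi - b) / len(b)
                      for (A, b), wi in zip(local_data, w)])
    w = w - lr * grads
    # ... followed by consensus averaging with network neighbors.
    w = W @ w

# All agents agree on (approximately) the same model, close to w_true.
print(np.max(np.abs(w - w_true)))
```

The interplay seen here, local computation versus communication with neighbors, is exactly the trade-off (computational vs. communication complexity) the course studies.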
This 10-credit PhD-level course consists of 16 lectures and is organized in two parts, giving the participants time to digest the material and prepare for the second part.
The slides for all lectures are available on our GitHub repository.
Date: Download the calendar
Location: See Course materials for more information
Course code: EP3260
This course has 3 homework assignments and 6 computer assignments. Students will be grouped for the homework and computer assignments. Special topic sessions take the format of a two-day workshop, where students present key publications of the field.
A basic knowledge of convex optimization and probability theory is required to follow the course.
This course should benefit anyone who uses or will use machine learning and optimization in various engineering domains, particularly in the following fields: Electrical Engineering (especially areas like network control, communications, signal processing, and large-scale optimization), Computer Science (especially machine learning, robotics, and algorithms and complexity), Operations Research, Statistics, and Optimization.