Course information


Description

During the last decade, new machine learning (ML) methods have been developed that solve complex tasks with impressive performance, achieving results that were until recently believed to be impossible. Cameras recognize faces, cars see and avoid obstacles, and computational approaches revolutionize biology and medicine. Such performance is often enabled by the availability of big datasets and large platforms that can provide vast amounts of computational and communication resources for training. In many real-world scenarios, however, we deal with some form of network (e.g., multi-agent systems, the Internet of Things, transportation networks, engineered biological networks, social networks, and intra-body sensor networks) with limited computational and communication resources, where traditional ML approaches may fail.

This course covers the fundamentals of machine learning over networks (MLoNs). It starts from a conventional single-agent setting in which one server runs a convex/nonconvex optimization problem to learn an unknown function, and introduces several approaches to address this seemingly simple yet fundamental problem. We then introduce an abstract form of MLoNs, present centralized and distributed solution approaches, and give examples such as training a deep neural network over a network. The course covers various important aspects of MLoNs, including optimality, computational complexity, communication complexity, security, large-scale learning, online learning, MLoNs with partial information, and several application areas. As most of these topics are active research areas, the course is not based on a single textbook but builds on a series of key publications in the field.
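To illustrate the kind of problem the course studies, the following is a minimal sketch of decentralized gradient descent over a ring of agents, each holding a local slice of the data. The least-squares model, ring topology, mixing weights, and step size are illustrative assumptions for this sketch, not the specific methods covered in the course.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_local, d = 4, 50, 5            # m agents, samples per agent, dimension
w_true = rng.standard_normal(d)     # unknown function to learn (linear model)

# Each agent i holds a private dataset (X_i, y_i); here noiseless for clarity.
data = []
for _ in range(m):
    Xi = rng.standard_normal((n_local, d))
    data.append((Xi, Xi @ w_true))

# Doubly stochastic mixing matrix for a ring: each agent averages its own
# iterate with its two neighbors' iterates.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

w = np.zeros((m, d))                # one local parameter vector per agent
step = 0.05                         # assumed fixed step size
for _ in range(2000):
    # Local gradients of f_i(w_i) = (1/2 n_local) * ||X_i w_i - y_i||^2
    grads = np.stack([Xi.T @ (Xi @ wi - yi) / n_local
                      for (Xi, yi), wi in zip(data, w)])
    # One round of neighbor averaging, then a local gradient step
    w = W @ w - step * grads

# All agents' iterates converge toward the common minimizer w_true.
print(np.linalg.norm(w.mean(axis=0) - w_true))
```

Note the communication pattern: each iteration requires only one exchange with immediate neighbors (the `W @ w` step), which is exactly where the trade-off between communication complexity and convergence speed, a recurring theme of the course, appears.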

Lectures

This 10-credit PhD-level course consists of 13 lectures and is organized in two parts, giving participants time to digest the material and prepare for the second part:

  • Part 1: Fundamentals (lectures 1-8)
  • Part 2: Special Topics (lectures 9-13)


Date: Download the calendar

Location: Q2, Malvinas väg 10, KTH Main Campus

Course code: EP3260

Assignments

This course has two homework assignments and six computer assignments. Students will work in groups on both. The special topic sessions take the format of a two-day workshop, where students present key publications in the field.

Prerequisites

A basic knowledge of convex optimization and probability theory is required to follow the course.

Course objectives

  • to give new tools and training to model basic ML problems by optimization
  • to present basic theories of large-scale ML, distributed ML, and MLoNs
  • to provide a thorough understanding of how such problems are solved, pros and cons of various approaches, and some experience in solving them
  • to review recent topics in ML and MLoNs, including communication efficiency, security, and MLoNs with partial knowledge
  • to give students the background and skills required to do research in this growing field
  • to give students workshop exposure, including oral and poster presentations, and networking with fellow researchers

Intended audience

This course should benefit anyone who uses or will use machine learning and optimization in engineering. In particular, it targets the following fields: Electrical Engineering (especially network control, communications, signal processing, and large-scale optimization), Computer Science (especially machine learning, robotics, and algorithms and complexity), Operations Research, Statistics, and Optimization.