Training School on Machine Learning for Communications

23-25 September 2019 // Paris, France

Program Overview

Detailed Program

September 23

09:00 - 12:30

Wireless communications poses fundamental challenges to machine learning (ML). Wireless links are subject to fading and may be exposed to strong interference, and because wireless resources are scarce, link capacity can be severely limited; this calls for distributed ML solutions that use the wireless resources efficiently. Moreover, ML methods need to provide robust results based on small, uncertain data sets and under strict latency constraints.

Current signal processing algorithms for wireless transceivers are typically based on models that assume, for example, ideal linear amplifiers, perfect channel state information, and knowledge of interference patterns. In practice, however, these assumptions are unrealistic because many parameters have to be estimated, so it is often unclear how well the idealized models capture the true behavior of real communication systems. As a result, in recent years a great deal of effort has been devoted to replacing many of the building blocks of the radio access network with a few machine learning algorithms, with the intent to drastically reduce the number of assumptions and of complex estimation techniques. However, this reduction in model knowledge brings many technical challenges. In particular, in the physical layer, the wireless environment can be considered roughly constant for only a few milliseconds, which may be all the time available for acquiring training sets and for the training procedure. As a result, computationally simple learning techniques that can cope with small training sets, or that are able to extract largely time-invariant features of the wireless signals (so that traditional learning tools can be employed), have been in great demand. In this tutorial, which is based on two courses given to graduate students at TU Berlin, we review online machine learning algorithms for these tasks.

In more detail, we start by introducing selected mathematical tools widely used in machine learning. Topics to be discussed include fundamentals of (reproducing kernel) Hilbert spaces, fixed point algorithms, kernel-based learning, convex learning, regularization, dimensionality reduction, and compressive sensing. Special attention is given to online machine learning methods based on projections in Hilbert spaces, which have found many applications in wireless communications, including nonlinear beamforming, radio map reconstruction, localization, and channel covariance estimation in FDD massive MIMO systems, to name a few. Meeting the latency requirements of 5G networks requires massive parallelization, so we also discuss how to parallelize and map these algorithms to GPU architectures to achieve orders-of-magnitude acceleration. We complete the tutorial by reviewing recent results on the design of neural networks. Our tutorial also uses findings of the ITU-T focus group on machine learning for 5G to discuss the impact of machine learning on future network architectures.
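
To give a flavour of the projection-based online learning methods mentioned above, the following minimal sketch (our own illustration, not part of the tutorial material) performs online nonlinear regression by projecting the current RKHS estimate onto the hyperslab defined by each new sample, in the spirit of adaptive projected subgradient methods; the kernel width, tolerance, and dictionary budget are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class OnlineKernelHyperslabProjection:
    """Online nonlinear regression in an RKHS.

    At each step the current estimate f is projected onto the hyperslab
    {g : |g(x_t) - y_t| <= eps}.  The estimate is kept as a kernel
    expansion f(x) = sum_i alpha_i k(c_i, x) over a bounded dictionary.
    """

    def __init__(self, gamma=1.0, eps=0.05, max_dict=200):
        self.gamma, self.eps, self.max_dict = gamma, eps, max_dict
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.gamma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        e = y - self.predict(x)
        if abs(e) <= self.eps:
            return                      # already inside the hyperslab
        # Projection step: shift f along k(x, .) so that |f(x) - y| = eps.
        beta = np.sign(e) * (abs(e) - self.eps) / gaussian_kernel(x, x, self.gamma)
        if len(self.centers) < self.max_dict:
            self.centers.append(np.asarray(x, dtype=float))
            self.alphas.append(beta)
        else:                           # crude budget: fold update into the nearest center
            j = int(np.argmin([np.sum((c - x) ** 2) for c in self.centers]))
            self.alphas[j] += beta

# Toy usage: track a nonlinear function from streaming noisy samples.
rng = np.random.default_rng(0)
model = OnlineKernelHyperslabProjection(gamma=5.0, eps=0.05)
for _ in range(500):
    x = rng.uniform(-1, 1, size=2)
    y = np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.01 * rng.standard_normal()
    model.update(x, y)
print(model.predict(np.array([0.3, -0.2])))
```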

14:00 - 17:30

  • Speakers: Christophe Moy and Lilian Besson
  • Title of the talk: Reinforcement learning for on-line dynamic spectrum access: theory and experimental validation
  • Abstract:

This tutorial covers both theoretical and implementation aspects of online machine learning for dynamic spectrum access, aimed at alleviating the spectrum scarcity issue. We target efficient, ready-to-use solutions that work under real radio operating conditions at an affordable hardware cost, even on embedded devices.

We focus on two wireless applications in this presentation: Opportunistic Spectrum Access (OSA) and Internet of Things (IoT) networks. OSA was the first scenario targeted by this approach, in the early 2010s; it remains a forward-looking scenario that has not yet been regulated. The Internet of Things has attracted interest more recently and has turned out to be another promising candidate for learning solutions from the reinforcement learning family, applicable already today.

First part (Lilian BESSON): Introduction to Multi-Armed Bandits and Reinforcement Learning

The first part of the tutorial introduces the general framework of machine learning and focuses on reinforcement learning. We explain the multi-armed bandit (MAB) model and give an overview of its many successful applications since the 1950s.

Focusing first on the simplest model, a single player interacting with a stationary, stochastic (i.i.d.) bandit game with a finite number of resources (or arms), we explain the most famous algorithms, which are based either on a frequentist point of view, with Upper Confidence Bound (UCB) index policies (UCB1 and kl-UCB), or on a Bayesian point of view, with Thompson Sampling. We also give details on the theoretical analysis of this model: we introduce the notion of regret, which measures the performance of a MAB algorithm, and review famous results from the literature, covering both what no algorithm can achieve (i.e., lower bounds on the performance of any algorithm) and what a good algorithm can indeed achieve (i.e., upper bounds on the performance of some efficient algorithms).
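
As a concrete illustration of the two families of index policies mentioned above, here is a minimal, self-contained sketch of UCB1 and Thompson Sampling on a Bernoulli bandit (our own illustration; arm means and horizon are arbitrary choices).

```python
import numpy as np

def ucb1(means, horizon=10_000, seed=0):
    """UCB1 on a stochastic Bernoulli bandit with arm means `means`."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)        # number of pulls per arm
    rewards = np.zeros(K)       # cumulative reward per arm
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1         # pull each arm once to initialize
        else:
            ucb = rewards / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        r = rng.random() < means[arm]        # Bernoulli reward
        counts[arm] += 1
        rewards[arm] += r
        regret += max(means) - means[arm]    # expected regret increment
    return regret

def thompson(means, horizon=10_000, seed=0):
    """Thompson Sampling with Beta(1, 1) priors on Bernoulli arms."""
    rng = np.random.default_rng(seed)
    K = len(means)
    s, f = np.ones(K), np.ones(K)            # Beta posterior parameters
    regret = 0.0
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(s, f)))
        r = rng.random() < means[arm]
        s[arm] += r
        f[arm] += 1 - r
        regret += max(means) - means[arm]
    return regret

# Channel-access flavoured example: 4 channels with different vacancy rates.
print(ucb1([0.1, 0.3, 0.5, 0.8]), thompson([0.1, 0.3, 0.5, 0.8]))
```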

We also introduce some generalizations of this first MAB model, by considering non-stationary stochastic environments, Markov models (either rested or restless), and multi-player models. Each variant is illustrated with numerical experiments, showcasing the most well-known and most efficient algorithms, using our state-of-the-art open-source library for numerical simulations of MAB problems, SMPyBandits (see https://SMPyBandits.github.io/).

Second part (Christophe MOY): Decentralized spectrum learning for IoT: proof-of-concept and real LoRa deployment

We show that both OSA and IoT spectrum access schemes can be modeled as multi-armed bandit (MAB) problems with respect to spectrum band occupancy. We start with the OSA model, which was the first targeted by this approach, and present theoretical and simulation-level learning results. We then illustrate these results with an OSA demonstrator that runs reinforcement learning algorithms on real radio signals, confirming the mathematical derivations in both i.i.d. and Markovian spectrum-load contexts, for single-user and multi-user scenarios.

We then detail the IoT model and show learning results for the single-user and multi-user cases. Here we consider Low Power Wide Area Networks (LPWAN) operating in unlicensed bands. In contrast to OSA, where the learning feedback is obtained by sensing, the IoT feedback loop relies on acknowledgements (ACKs). We propose a decentralized approach in which learning runs on the IoT device side. Real-world results are given with increasing levels of complexity:

    • IoT proof-of-concept demonstrator (MALIN) results based on USRP platforms operating under laboratory radio conditions.
    • Implementation results of bandit algorithms on IoT devices (named IoTligent) in a real LoRaWAN network.
    • Implementation results in a real LoRaWAN network where future heavy traffic load is emulated at the radio level, in order to reproduce the conditions expected once the number of devices in unlicensed bands explodes.

Such a learning approach offers major advantages for IoT: it can mitigate spectrum overload in the IoT bands, extend the battery lifetime of IoT devices, and decrease message-delivery latency.
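
As a rough illustration of such device-side learning (a sketch under our own assumptions, not the IoTligent implementation), each IoT object can run a simple UCB-style bandit over the available unlicensed channels, using only the presence or absence of the gateway ACK as reward.

```python
import numpy as np

class DeviceSideChannelLearner:
    """Sketch of decentralized, device-side spectrum learning for IoT uplink.

    Each IoT device runs its own bandit over the available unlicensed
    channels; the only feedback is whether the gateway's ACK was received.
    """

    def __init__(self, n_channels):
        self.pulls = np.zeros(n_channels)
        self.acks = np.zeros(n_channels)
        self.t = 0

    def choose_channel(self):
        self.t += 1
        if self.t <= len(self.pulls):          # try each channel once
            return self.t - 1
        ucb = self.acks / self.pulls + np.sqrt(2 * np.log(self.t) / self.pulls)
        return int(np.argmax(ucb))

    def record_feedback(self, channel, ack_received):
        self.pulls[channel] += 1
        self.acks[channel] += float(ack_received)

# Toy simulation: ACK probability differs per channel (hypothetical loads).
rng = np.random.default_rng(1)
ack_prob = np.array([0.4, 0.6, 0.9, 0.5])
device = DeviceSideChannelLearner(n_channels=4)
for _ in range(2000):
    ch = device.choose_channel()
    device.record_feedback(ch, rng.random() < ack_prob[ch])
print(device.acks / np.maximum(device.pulls, 1))   # empirical ACK rates per channel
```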

September 24

09:00 - 10:30

  • Speakers: Bartlomiej Blaszczyszyn and Antoine Brochard
  • Title of the talk: Wavelet Tools and Determinantal Processes for Geometry Learning with Network Applications
  • Abstract:

Design, performance evaluation, and control of wireless networks are facing a rapid increase in complexity, due to the ever denser deployment of classical cellular networks and the advent of the Internet of Things. It is clear that the engineering of these networks needs to make more systematic use of the data massively collected under operational conditions, which opens this domain to applications of machine learning methods. While advanced signal processing techniques at the link layer already integrate elements of artificial intelligence, such methods are still seldom used at higher levels, in particular at the network layer. The data corresponding to this layer (e.g., base station locations, their characteristics and performance metrics, user distribution, and QoS metrics) have a geometric structure, reflecting the (usually two-dimensional) geographic deployment of the network. In the two parts of this talk we present two ideas relating statistical learning to stochastic geometry for network applications.

    • First part (A. Brochard): Wavelet methods for point processes: learning of geometric network characteristics and generative models.

We show how scattering moments and phase harmonics, which are wavelet-based descriptors of (possibly marked) point patterns, can be used to learn intrinsic dependencies in geometric networks. We use them to build generative network models and predict geometric network characteristics. Statistical learning of geometric characteristics of cellular networks and their design are natural applications.

Based on joint work with Stéphane Mallat [Collège de France and ENS Paris] and Sixin Zhang [Peking University].
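
For intuition, the following toy sketch computes first-order, scattering-type wavelet descriptors of a 2D point pattern using simple Gabor filters; this is our simplified illustration and omits the Morlet wavelets, phase harmonics, and higher-order cascade of the actual work.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_filter(size, scale, theta):
    """Complex Gabor filter (a simple stand-in for a Morlet wavelet)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * scale ** 2))
    filt = envelope * np.exp(1j * np.pi * rot / scale)
    return filt / np.abs(filt).sum()

def scattering_moments(points, grid=128, scales=(2, 4, 8), n_theta=4):
    """First-order scattering-type moments of a 2D point pattern in [0,1]^2.

    The pattern is binned onto a grid, convolved with oriented band-pass
    filters, and the spatial averages of the moduli are returned as features.
    """
    img, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                               bins=grid, range=[[0, 1], [0, 1]])
    feats = []
    for s in scales:
        for k in range(n_theta):
            theta = k * np.pi / n_theta
            response = fftconvolve(img, gabor_filter(4 * s + 1, s, theta), mode='same')
            feats.append(np.abs(response).mean())
    return np.array(feats)

# Compare a Poisson pattern with a clustered one: the descriptors differ.
rng = np.random.default_rng(0)
poisson = rng.uniform(size=(500, 2))
parents = rng.uniform(size=(50, 2))
clustered = (parents[rng.integers(0, 50, 500)] + 0.02 * rng.standard_normal((500, 2))) % 1.0
print(scattering_moments(poisson))
print(scattering_moments(clustered))
```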

    • Second part (B. Blaszczyszyn): Determinantal thinning of point processes: repulsive scheduling in networks with optimization and learning

We show how models based on discrete determinantal point processes can be used to generate optimized subsets of network nodes that exhibit some repulsion. ON/OFF base station switching and opportunistic scheduling in D2D networks are among the natural applications.

Based on joint work with H. P. Keeler [University of Melbourne].
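
As an illustration of how such repulsive subsets can be generated, the sketch below samples from a discrete L-ensemble determinantal point process over candidate base station locations using the standard spectral sampling algorithm; the kernel (Gaussian similarity weighted by a per-station quality mark) is a hypothetical choice for this example, not the model from the referenced work.

```python
import numpy as np

def sample_dpp(L, rng):
    """Draw a sample from the discrete DPP defined by the L-ensemble kernel L
    (spectral algorithm: select eigenvectors, then sample items sequentially)."""
    eigvals, eigvecs = np.linalg.eigh(L)
    keep = rng.random(len(eigvals)) < eigvals / (1.0 + eigvals)
    V = eigvecs[:, keep]
    sample = []
    while V.shape[1] > 0:
        probs = (V ** 2).sum(axis=1)
        probs /= probs.sum()
        i = int(rng.choice(len(probs), p=probs))
        sample.append(i)
        # Restrict the spanned subspace to vectors vanishing at coordinate i.
        j = int(np.argmax(np.abs(V[i, :])))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)

# Toy example: switch ON a diverse, high-quality subset of candidate stations.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(30, 2))                      # candidate base stations
quality = rng.uniform(0.5, 1.5, size=30)                  # hypothetical quality marks
dist2 = np.sum((xy[:, None, :] - xy[None, :, :]) ** 2, axis=-1)
L = np.outer(quality, quality) * np.exp(-dist2 / 0.02)    # similarity penalizes close pairs
print("stations switched ON:", sample_dpp(L, rng))
```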

11:00 - 12:30

  • Speaker: Vincent Poor
  • Title of the talk: Learning at the Edge
  • Abstract:

Wireless networks can be used as platforms for machine learning, taking advantage of the fact that data is often collected at the edges of the network, and also mitigating the latency and privacy concerns that backhauling data to the cloud would entail. This talk will present an overview of some results on distributed learning at the edges of wireless networks, in which machine learning algorithms interact with the physical limitations of the wireless medium. Topics include federated learning, in which end-user devices interact with edge devices such as access points to implement joint learning algorithms, and collaborative learning, in which end-user devices learn through peer-to-peer interaction.
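
A minimal sketch of the federated learning idea (our own illustration, with an arbitrary linear model and toy data, not material from the talk): each device performs a few local gradient steps on its own data, and the edge server averages the resulting models, weighted by local dataset sizes.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.05, epochs=5):
    """A few epochs of local gradient descent on one device's data
    (squared-error loss for a linear model)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(devices, dim, rounds=50):
    """Minimal FedAvg-style loop: devices train locally, the edge server
    averages the models weighted by local dataset sizes."""
    w_global = np.zeros(dim)
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    for _ in range(rounds):
        local_models = [local_sgd(w_global, X, y) for X, y in devices]
        w_global = np.average(local_models, axis=0, weights=sizes)
    return w_global

# Toy setup: 10 devices, each holding a small, differently distributed dataset.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(10):
    X = rng.standard_normal((20, 3)) + 0.5 * rng.standard_normal(3)  # non-i.i.d. shift
    y = X @ w_true + 0.1 * rng.standard_normal(20)
    devices.append((X, y))
print(federated_averaging(devices, dim=3))   # should approach w_true
```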

14:00 - 17:30

  • Speaker: Jakob Hoydis
  • Title of the talk: Recent Progress on End-to-end Learning and Neural MIMO Detection
  • Abstract:

End-to-end learning and neural MIMO detection are two promising applications of machine learning for the physical layer. A tutorial introduction to both topics will be given, bleeding-edge results discussed, and directions for future research presented.
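
For context, end-to-end learning typically treats transmitter and receiver as a single autoencoder trained through a differentiable channel model. The sketch below (our own illustration, with arbitrary block length, constellation size, noise level, and network sizes) trains such an autoencoder over an AWGN channel in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions (illustrative assumptions, not values from the talk).
M, n_channel = 16, 7            # 16 messages sent over 7 pairs of real channel uses
sigma = 0.35                    # AWGN standard deviation used for training

transmitter = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, 2 * n_channel))
receiver = nn.Sequential(nn.Linear(2 * n_channel, 64), nn.ReLU(), nn.Linear(64, M))
optimizer = torch.optim.Adam(list(transmitter.parameters()) + list(receiver.parameters()), lr=1e-3)

for step in range(2000):
    msgs = torch.randint(0, M, (256,))                          # random messages
    x = transmitter(F.one_hot(msgs, M).float())
    x = x * (n_channel ** 0.5) / x.norm(dim=1, keepdim=True)    # power constraint per block
    y = x + sigma * torch.randn_like(x)                         # differentiable AWGN channel
    loss = F.cross_entropy(receiver(y), msgs)                   # end-to-end cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Rough block error rate after training.
with torch.no_grad():
    msgs = torch.randint(0, M, (10_000,))
    x = transmitter(F.one_hot(msgs, M).float())
    x = x * (n_channel ** 0.5) / x.norm(dim=1, keepdim=True)
    y = x + sigma * torch.randn_like(x)
    bler = (receiver(y).argmax(dim=1) != msgs).float().mean()
print(f"block error rate approx. {bler:.3f}")
```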

September 25

09:00 - 12:30

  • Speaker: Merouane Debbah
  • Title of the talk: Wireless AI : Challenges and Opportunities
  • Abstract:

Mobile cellular networks are becoming increasingly complex to manage, while classical deployment and optimization techniques are cost-ineffective and thus seen as stopgaps. This is all the more difficult given the extreme constraints of 5G networks in terms of data rate (more than 10 Gb/s), massive connectivity (more than 1,000,000 devices per km²), latency (under 1 ms), and energy efficiency (a reduction by a factor of 100 with respect to 4G networks). Unfortunately, the development of adequate solutions is severely limited by the scarcity of the actual resources (energy, bandwidth, and space). Recently, the community has turned to a new resource, Artificial Intelligence, applied at all layers of the network, to exploit the increasing computing power afforded by Moore's law in combination with the availability of huge amounts of data in 5G networks. This is an important paradigm shift, which considers the increasing data flood and huge number of nodes as an opportunity rather than a curse. In this tutorial, we will discuss, through various examples, the ongoing AI architectures for the design of Next Generation Intelligent Networks.

14:00 - 15:30

  • Speaker: Emil Matus
  • Title of the talk: RF Transmitter Linearization using Machine Learning
  • Abstract:

Linearity and high dynamic range are important operational and regulatory requirements on the wireless transmission path. Transmitter linearity allows the adoption of spectrally efficient waveforms while guaranteeing strict in-band operation by avoiding spurious out-of-band interference caused by nonlinear effects. Moreover, signal quality is essential to continuously improve the capacity, efficiency, and coexistence of radio access technologies. However, to obtain mobile cell coverage with sufficient transmit power and achieve high efficiency at the same time, it is of fundamental importance to operate RF power amplifiers near their saturation region. This leads to nonlinear intermodulation distortion and spectral regrowth of the transmitted signal, which deteriorate considerably due to the memory effects introduced by the increasing bandwidth of modern systems. For that reason, transmitter linearization techniques have attracted a lot of attention in wireless communications. Moreover, advances in digital signal processing hardware make it possible to compensate nonlinearities by predistorting the baseband signal using an inverse model of the power amplifier. Traditionally, Volterra models can represent the nonlinear and memory behavior of amplifiers well. However, the complexity of these models poses challenges for parameter identification as well as for the derivation of an equivalent baseband representation, and the approximate models adopted in practical systems suffer from inaccuracy and identification limitations. In this talk we introduce machine learning as a generic method for identifying nonlinear models for power amplifier and predistortion modeling. Moreover, the capability of training algorithms to identify models that are nonlinear in the parameters allows us to adopt more advanced, neural-network-inspired models that perform better while requiring fewer parameters.
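
As a concrete illustration of model-based predistortion, the sketch below identifies a memory-polynomial (simplified Volterra) post-inverse by least squares in an indirect-learning fashion and applies it as a predistorter; the model orders, signal statistics, and the power amplifier model itself are assumptions made for this example only.

```python
import numpy as np

def memory_polynomial_basis(x, K=5, M=3):
    """Memory-polynomial regression matrix for complex baseband x:
    columns are x[n-m] * |x[n-m]|**(k-1) for odd k <= K and delays m < M."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def pa_model(x):
    """Hypothetical mildly nonlinear PA with one memory tap (illustration only)."""
    x_del = np.concatenate([[0], x[:-1]])
    return x - 0.08 * x * np.abs(x) ** 2 + 0.05 * x_del * np.abs(x_del) ** 2

rng = np.random.default_rng(0)
x = (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)) / np.sqrt(2) * 0.5
y = pa_model(x)

# Indirect learning: least-squares fit of a post-inverse from the PA output
# (gain-normalized) back to the PA input, then reuse it as a predistorter.
g = np.vdot(x, y) / np.vdot(x, x)                 # linear gain estimate
Phi = memory_polynomial_basis(y / g)
coeffs, *_ = np.linalg.lstsq(Phi, x, rcond=None)

x_pd = memory_polynomial_basis(x) @ coeffs        # predistorted input signal
err_plain = np.mean(np.abs(y / g - x) ** 2)
err_dpd = np.mean(np.abs(pa_model(x_pd) / g - x) ** 2)
print("residual error without / with predistortion:", err_plain, err_dpd)
```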

Social Event

The social event will take place at the Montparnasse Tower. Participants will have the chance to visit the Montparnasse Tower panoramic observation deck and enjoy a networking dinner at the Ciel de Paris restaurant on the 56th floor.