Research Projects

Model-Based Machine Learning

Recent years have witnessed dramatically growing interest in machine learning (ML) methods. These data-driven trainable structures have demonstrated unprecedented empirical success in various applications, including computer vision and speech processing. The benefits of ML-driven techniques over traditional model-based approaches are twofold: first, ML methods are independent of the underlying stochastic model, and thus can operate efficiently in scenarios where this model is unknown or its parameters cannot be accurately estimated; second, when the underlying model is extremely complex, ML algorithms have demonstrated the ability to extract and disentangle meaningful semantic information from the observed data. Nonetheless, not every problem can and should be solved using deep neural networks (DNNs). In fact, in scenarios for which model-based algorithms exist and are computationally feasible, as is the case in various signal processing and communications setups, these analytical methods are typically preferable to ML schemes due to their theoretical performance guarantees and possible proven optimality. In my work I study how DNNs can be combined with classic methods into hybrid model-based/data-driven algorithms that leverage the model-agnostic nature of deep learning while preserving the interpretability and suitability of classic methods. My goal is to allow model-based algorithms to be applied in scenarios where, due to either a complex underlying statistical model or missing knowledge of it, these methods cannot be applied directly. This is achieved by exploring the continuous spectrum between data-centric deep learning and knowledge-centric model-based algorithms.
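
A widely used instance of such hybrid design is deep unfolding, in which the iterations of a model-based algorithm are treated as layers of a network whose parameters are learned from data. The following minimal sketch is an illustrative toy (not a specific algorithm from my work): it unfolds gradient descent for a linear least-squares problem, with the per-layer step sizes playing the role of the learnable parameters, fixed here for simplicity.

```python
import numpy as np

def unfolded_gd(A, y, step_sizes):
    """'Unfolded' gradient descent for min_x ||y - A x||^2: each layer
    is one gradient step, and the per-layer step sizes play the role
    of learnable parameters (fixed here for simplicity)."""
    x = np.zeros(A.shape[1])
    for mu in step_sizes:
        # one "layer" = one gradient step with its own step size
        x = x + mu * A.T @ (y - A @ x)
    return x

# toy problem with a known solution
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
x_true = np.array([1.0, -1.0])
y = A @ x_true
x_hat = unfolded_gd(A, y, step_sizes=[0.25] * 50)  # 50 "layers"
```

In an actual unfolded network, the step sizes (and possibly richer per-layer parameters) would be trained end-to-end with backpropagation, retaining the structure and interpretability of the original iterative algorithm.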


Deep Learning on the Edge

The dramatic success of deep learning is largely due to the availability of data. Data samples are often acquired on edge devices, such as smartphones, vehicles, and sensors, and in some cases cannot be shared due to privacy considerations. Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data. It enables multiple users to collaboratively train a common, robust machine learning model without sharing local data, thereby addressing critical issues such as data privacy, data security, and access to heterogeneous data. Learning in a federated manner differs from conventional centralized machine learning, and poses several unique challenges and requirements that are closely related to classical problems studied in the areas of signal processing and communications. In my research, I use signal processing tools to tackle the challenges of learning on the edge associated with its distributed nature and its reliance on reliable communication. These include the incorporation of compression, quantization, precoding, and privacy enhancement techniques, and their adaptation to operate in a learning-aware manner, accounting for the fact that the overall goal is to rapidly train an accurate model.
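
The canonical algorithm in this setting is federated averaging: each device trains on its local data and the server aggregates the resulting model updates, with compression entering naturally when the updates are quantized before transmission. The sketch below is a toy linear-regression setup with an illustrative uniform quantizer, not a specific scheme from my work.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear-regression loss."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def quantize(v, bits=4):
    """Uniform quantization of a model update: an illustrative
    stand-in for a learning-aware compression scheme."""
    scale = np.max(np.abs(v)) + 1e-12
    levels = 2 ** (bits - 1)
    return np.round(v / scale * levels) / levels * scale

def fedavg_round(w_global, clients, bits=4):
    """One round: each client trains locally and sends a quantized
    model update; the server averages the updates."""
    updates = [quantize(local_update(w_global, X, y) - w_global, bits)
               for X, y in clients]
    return w_global + np.mean(updates, axis=0)

# toy setup: three clients whose data follow the same linear model
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.standard_normal((50, 3))
    clients.append((X, X @ w_true))

w = np.zeros(3)
for _ in range(30):
    w = fedavg_round(w, clients)
```

Note that only model updates, never raw data, leave the clients; because the quantizer's dynamic range shrinks with the updates themselves, the aggregated model can still approach the true one despite the coarse 4-bit representation.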

Task-Based Analog-to-Digital Conversion

Analog-to-digital converters (ADCs) allow physical signals to be processed using digital hardware. Their operation consists of two stages: sampling, which maps a continuous-time signal into a discrete-time one, and quantization, i.e., representing the continuous-amplitude quantities using a finite number of bits. This conversion is typically carried out using generic uniform mappings that are ignorant of the task for which the signal is acquired, and can be costly when operating at high rates and fine resolutions. In my work I study task-based acquisition with uniform scalar ADCs. This research combines lossy source coding, sampling theory, and quantization theory with tools for handling the hardware limitations, including convex optimization and machine learning.
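
The gain from task-based design can be seen already in a toy setting: under a fixed bit budget, reducing the observation to the task-relevant statistic in the analog domain before quantization can outperform quantizing the raw samples and processing them digitally. A minimal sketch with illustrative parameters (not taken from a specific paper):

```python
import numpy as np

def uniform_quantizer(x, n_bits, dyn_range):
    """Uniform scalar quantizer with 2**n_bits levels on [-dyn_range, dyn_range]."""
    step = 2 * dyn_range / 2 ** n_bits
    clipped = np.clip(x, -dyn_range, dyn_range - 1e-9)
    return (np.floor(clipped / step) + 0.5) * step

# toy task: estimate a scalar s from n noisy samples under a budget of n bits
rng = np.random.default_rng(2)
n, trials = 8, 2000
mse_ignorant = mse_task = 0.0
for _ in range(trials):
    s = rng.standard_normal()
    x = s + 0.5 * rng.standard_normal(n)
    # task-ignorant: 1 bit per raw sample, then estimate in the digital domain
    s_ign = np.mean(uniform_quantizer(x, 1, 3.0))
    # task-based: reduce to the task-relevant statistic in analog,
    # then spend the full n-bit budget on that single scalar
    s_tsk = uniform_quantizer(np.mean(x), n, 3.0)
    mse_ignorant += (s - s_ign) ** 2 / trials
    mse_task += (s - s_tsk) ** 2 / trials
```

Here the task-based system achieves a markedly lower task MSE with the same total number of bits, since none of the budget is wasted on signal features irrelevant to estimating s.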

A video of my talk on task-based quantization from BIRS workshop (October 2018) can be found here.

Massive MIMO - From Theory to Practice

Massive multiple-input multiple-output (MIMO) systems are multi-user wireless networks in which the base station (BS) is equipped with an arbitrarily large number of antennas. While massive MIMO technology has the potential to increase the spectral efficiency of wireless networks, using a large-scale antenna array, and in particular accurately quantizing the signal observed at the antennas, gives rise to various implementation challenges, among them increased cost, high power consumption, and large memory usage. In my work I study massive MIMO systems, starting from the fundamental limits of such channels, computed assuming only standard power constraints, and proceeding to the realization of practical systems accounting for these implementation constraints, focusing on the application of the emerging technology of dynamic metasurface antennas. This work uses tools from a broad range of research areas, including network information theory, random matrix theory, lossy source coding, and metamaterials analysis.

Dual-Function Radar Communication Systems

In a multitude of practical applications, such as autonomous vehicles, the communicating device must also be able to sense its environment using radar. Jointly implementing radar and communications reduces the number of antennas as well as system size, weight, and power consumption, while allowing spectrum and resource sharing. In my research I focus on embedding digital communication methods into radar systems such that both functionalities can operate reliably with minimal mutual interference. In particular, I identify the combination of agile radar with index modulation schemes as a potential strategy for realizing dual-function radar communication methods without giving rise to coexistence issues.
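
The basic principle of index modulation is that information is conveyed by which transmit resources (e.g., antennas or carriers) are active in a given transmission, rather than only by the waveforms they carry, so the communication bits ride on the radar's resource selection essentially for free. A generic sketch of this bit-to-index mapping (illustrative of the principle, not my specific dual-function scheme):

```python
from itertools import combinations
from math import comb, floor, log2

def index_mod_encode(message, n, k):
    """Encode an integer message as the set of k active transmit
    elements (e.g., antennas or carriers) out of n."""
    table = list(combinations(range(n), k))
    return table[message]

def index_mod_decode(active, n, k):
    """Recover the message from the observed set of active elements."""
    table = list(combinations(range(n), k))
    return table.index(tuple(sorted(active)))

n, k = 8, 2                       # 2 of 8 elements active per transmission
n_bits = floor(log2(comb(n, k)))  # C(8,2) = 28 patterns -> 4 bits per use
msg = 0b1011                      # any message below 2**n_bits
active = index_mod_encode(msg, n, k)
recovered = index_mod_decode(active, n, k)
```

For large n the lookup table would be replaced by a combinatorial (ranking/unranking) mapping, but the point stands: the radar may hop agilely among its resources while the hopping pattern itself carries the communication payload.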