Recent Projects (EPFL)

Since June 2023:

Up to June 2023 (when projects were more "official" and student-like):

Research

On Characterizing Nonlinear Strong Data Processing Inequalities [report]
EDIC Spring semester project report, Advisor: Prof. Emre Telatar
February 2023 - June 2023

The data processing inequality is a fundamental result in information theory. It states that further processing of data cannot generate new information that is not already present, or equivalently, that the information contained in any dataset can only decrease under further processing. This statement can be strengthened by quantifying the decrease as a scaling by a constant factor strictly smaller than 1, leading to strong data processing inequalities. However, it has since been observed that such linear relations do not fully capture the true characteristics of the decrease in information. We therefore study nonlinear data processing inequalities, which we call data processing functions, and conjecture that these functions are concave. We also study a specific example of such a function, namely for the KL divergence over the binary symmetric channel, and see that even in this simple case the conjecture is difficult to prove.
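In standard notation (the symbols below are mine, not necessarily those of the report), the three levels of the statement can be sketched as follows, for a fixed channel W acting on input distributions P and Q, with PW denoting the output distribution when the input is P:

    D(PW \| QW) \le D(P \| Q)                                                                   (data processing inequality)
    D(PW \| QW) \le \eta(W)\, D(P \| Q),  with \eta(W) < 1 for suitably nondegenerate W          (strong, linear)
    D(PW \| QW) \le F_W(D(P \| Q)),  where F_W(t) = \sup \{ D(PW \| QW) : D(P \| Q) \le t \}      (nonlinear)

Here F_W is one natural way to define a data processing function; the concavity conjecture mentioned above concerns such a function viewed as a function of t.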

Multiple Access Channels Under Maximal Error Probability [report]
EDIC Fall semester project report, Advisor: Prof. Emre Telatar
September 2022 - January 2023

The multiple access channel is the best-understood problem setup in network information theory, with its capacity region exactly characterized when the average error probability is used as the reliability criterion. Nevertheless, it is known that using the maximal rather than the average error probability leads, in general, to a smaller capacity region, about which very little is known. And yet, allowing for randomization at the encoders surprisingly leads to the same capacity region in both cases. For this result to be practically useful, it is also necessary that the maximal error decay to zero at a rate comparable to that of the average error (which is exponentially fast in the blocklength). We show that if the encoders have "too little" randomness, the maximal error decay cannot be exponential, and we discuss the difficulties in proving a more general result.
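Concretely, for a two-user MAC with message sets of sizes M_1 and M_2 and decoder output (\hat m_1, \hat m_2), the two reliability criteria in question are (in standard notation, not necessarily that of the report):

    P_e^{avg} = \frac{1}{M_1 M_2} \sum_{m_1, m_2} \Pr[ (\hat m_1, \hat m_2) \ne (m_1, m_2) \mid (m_1, m_2) \text{ sent} ]
    P_e^{max} = \max_{m_1, m_2} \Pr[ (\hat m_1, \hat m_2) \ne (m_1, m_2) \mid (m_1, m_2) \text{ sent} ]

The question here is how fast P_e^{max} can be driven to zero in the blocklength once the encoders are allowed private randomness.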

Pure Exploration Over Communication-Constrained Channels [paper at WiOpt 2023]
Advisors: Prof. Manjesh K. Hanawal, Prof. Nikhil Karamchandani
August 2022 - April 2023

We study the problem of best-arm identification in a distributed variant of the multi-armed bandit setting, with a central learner and multiple agents. Each agent is associated with an arm of the bandit, generating stochastic rewards following an unknown distribution. Further, each agent can communicate the observed rewards to the learner over a bit-constrained channel. We propose a novel quantization scheme called Inflating Confidence for Quantization (ICQ) that can be applied to existing confidence-bound-based learning algorithms such as Successive Elimination. We analyze the performance of ICQ applied to Successive Elimination and show that the overall algorithm, named ICQ-SE, has the same order-optimal sample complexity as the (unquantized) SE algorithm. Moreover, it requires only an exponentially sparse frequency of communication between the learner and the agents, and thus considerably fewer bits than existing quantization schemes, to successfully identify the best arm. We validate the performance improvement offered by ICQ over other quantization methods through numerical experiments.
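For concreteness, here is a minimal, generic Successive Elimination sketch in Python. It is not the ICQ-SE algorithm from the paper: no quantization or bit-constrained communication is modelled, and the confidence radius below is only one standard choice.

    import numpy as np

    def successive_elimination(arms, delta=0.05, max_rounds=100_000):
        """Generic Successive Elimination for best-arm identification.

        `arms` is a list of callables, each returning one stochastic reward in [0, 1].
        Plain, unquantized sketch: the ICQ scheme would additionally quantize each
        reward before it reaches the learner, which is not modelled here.
        """
        k = len(arms)
        active = list(range(k))
        means = np.zeros(k)

        for t in range(1, max_rounds + 1):
            # Sample every surviving arm once per round and update its running mean.
            for i in active:
                means[i] += (arms[i]() - means[i]) / t

            # Confidence radius from a union bound over arms and rounds.
            radius = np.sqrt(np.log(4 * k * t * t / delta) / (2 * t))

            # Eliminate arms whose upper confidence bound falls below the best
            # lower confidence bound among the surviving arms.
            best_lcb = max(means[i] - radius for i in active)
            active = [i for i in active if means[i] + radius >= best_lcb]

            if len(active) == 1:
                return active[0]

        return max(active, key=lambda i: means[i])

As a usage example, arms = [lambda p=p: float(np.random.rand() < p) for p in (0.3, 0.5, 0.7)] gives three Bernoulli arms, of which the routine should return index 2.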

Others

From Simple to Composite Hypothesis Testing Through Universal Distributions [report]
COM-621: Advanced Topics in Information Theory
Spring 2023

For a simple binary hypothesis testing setup, Hoeffding's bound gives the optimal error exponent of one type of error when the other is required to decay exponentially fast, with an exponent smaller than the KL divergence between the null and alternative hypothesis distributions. We consider an extension of this result to a composite hypothesis testing setup. In particular, we study an axiomatic approach based on a "universal distribution", introduced by Tomamichel and Hayashi. By looking at the proof restricted to a specific setting, we try to understand the use of the axiomatic approach and, in particular, of the universal distributions.
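In standard notation (null P_0, alternative P_1, i.i.d. observations), the simple-hypothesis result being referred to reads, roughly: if the type-I error is required to decay with exponent at least r, for 0 \le r < D(P_1 \| P_0), then the best achievable type-II error exponent is

    e(r) = \min_{V :\, D(V \| P_0) \le r} D(V \| P_1),

which reduces to the Chernoff-Stein exponent D(P_0 \| P_1) at r = 0. The composite setting replaces one of the simple hypotheses by a family of distributions, and, roughly speaking, the universal distribution is meant to stand in for that whole family in the proof.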

Solving the N-Queens Problem via the Metropolis Algorithm [report]
COM-516: Markov Chains and Algorithmic Applications
Fall 2022

The N-queens problem is easy enough to state: arrange N queens on an N×N chessboard such that no two queens can attack each other. We use the Metropolis algorithm to (1) find a solution, and (2) estimate the number of solutions for large N.
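A minimal Python sketch of the approach for (1), assuming a permutation representation (one queen per row and per column) and a fixed inverse temperature beta; the report's actual chain, proposal, and temperature schedule may differ:

    import math
    import random

    def conflicts(board):
        """Number of attacking pairs; board[r] is the column of the queen in row r.

        With one queen per row and one per column (a permutation), only diagonal
        attacks are possible.
        """
        n = len(board)
        return sum(
            1
            for r1 in range(n)
            for r2 in range(r1 + 1, n)
            if abs(board[r1] - board[r2]) == r2 - r1
        )

    def metropolis_nqueens(n, beta=2.0, max_steps=200_000, seed=None):
        """Metropolis chain over permutations; the proposal swaps the columns of two rows."""
        rng = random.Random(seed)
        board = list(range(n))
        rng.shuffle(board)
        energy = conflicts(board)

        for _ in range(max_steps):
            if energy == 0:
                return board  # valid placement: no two queens attack each other
            i, j = rng.sample(range(n), 2)
            board[i], board[j] = board[j], board[i]
            new_energy = conflicts(board)
            # Accept with probability min(1, exp(-beta * (new_energy - energy))).
            if new_energy <= energy or rng.random() < math.exp(-beta * (new_energy - energy)):
                energy = new_energy
            else:
                board[i], board[j] = board[j], board[i]  # rejected: undo the swap
        return board if energy == 0 else None

For (2), one common route is to relate the number of zero-conflict configurations to ratios of partition functions estimated at a ladder of temperatures, though the report may use a different estimator.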