A social network partitioned into circles of trust
F. M. Naini, J. Unnikrishnan, P. Thiran, and M. Vetterli, "Privacy-Preserving Function Computation by Exploitation of Friendships in Social Networks," to be presented at ICASSP, Florence, Italy, 2014.
Thresholds for asymptotically optimal hypothesis tests
In recent years, solutions to various hypothesis testing problems in the asymptotic setting have been proposed using results from large deviations theory. Such tests are optimal in terms of appropriately defined error exponents. For the practitioner, however, error probabilities in the finite-sample setting are more important. In this paper we show how results on the weak convergence of the test statistic can be used to obtain better approximations of the error probabilities at finite sample sizes. While this technique is popular among statisticians for common tests, we demonstrate its applicability to several recently proposed asymptotically optimal tests, including tests for robust goodness of fit, homogeneity testing, outlier hypothesis testing, and graphical model estimation.
J. Unnikrishnan and D. Huang, "Weak Convergence Analysis of Asymptotically Optimal Hypothesis Tests," submitted to IEEE Transactions on Information Theory; revised Oct. 2015.
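To illustrate the idea, here is a minimal sketch (not the paper's exact procedure) of setting a finite-sample threshold for Hoeffding's universal goodness-of-fit test via weak convergence: under the null, the statistic 2n·D(p̂‖π) converges in distribution to a chi-squared law with k−1 degrees of freedom, so a quantile of that law gives a threshold that approximately controls the false-alarm probability at finite n. The null distribution, sample size, and trial count below are illustrative choices.

```python
# Sketch: weak-convergence (chi-squared) threshold for Hoeffding's test.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def kl(p, q):
    """KL divergence D(p || q) between distributions on a finite alphabet."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def hoeffding_stat(samples, pi):
    """Test statistic D(empirical distribution || pi)."""
    counts = np.bincount(samples, minlength=len(pi))
    return kl(counts / len(samples), pi)

pi = np.array([0.5, 0.25, 0.15, 0.10])   # assumed null distribution
n, alpha, trials = 200, 0.05, 10000
k = len(pi)

# Weak-convergence approximation: 2n * D(p_hat || pi) -> chi2 with k-1 dof
# under the null, so this threshold targets false-alarm probability alpha.
tau_weak = chi2.ppf(1 - alpha, df=k - 1) / (2 * n)

stats = np.array([
    hoeffding_stat(rng.choice(k, size=n, p=pi), pi) for _ in range(trials)
])
fa_weak = np.mean(stats > tau_weak)  # empirical false-alarm rate, near alpha
```

By contrast, a threshold chosen purely from the error exponent (a constant independent of the chi-squared quantile) would miss the O(1/n) scale of the statistic under the null, which is why the weak-convergence correction matters at moderate sample sizes.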
Dimensionality reduction for hypothesis testing on large-alphabet data
Many modern applications require hypothesis tests to be performed on data drawn from large alphabets. In such large-alphabet problems, classical hypothesis tests tend to perform poorly at moderate sample sizes. We quantify this disadvantage of optimal tests by identifying the limiting behavior of the test statistics used in classical solutions to the problems of universal and composite hypothesis testing. We then develop a new dimensionality reduction framework to address this issue. Our procedure allows the statistician to choose a test statistic that controls the limiting bias and variance under the null hypothesis, while at the same time ensuring good error performance against specific distributions under the alternative hypothesis. Our solution is based on a new relaxation of the Kullback-Leibler divergence, which we call the mismatched divergence. The resulting test, called the mismatched test, can be interpreted as a generalization of the Generalized Likelihood Ratio Test (GLRT). A special case of this test is provably robust to uncertainties in the distributions under the null hypothesis. The dimensionality reduction approach based on the mismatched divergence can be applied in very broad contexts, including source coding and filtering for Markov decision processes.
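A small sketch of the dimensionality-reduction idea, under the assumption of a linear function class f_θ(x) = θᵀψ(x) (the feature map ψ and the distributions below are illustrative, not the paper's exact construction). By the Donsker-Varadhan variational formula, the KL divergence is the supremum of μ(f) − log π(e^f) over all functions f; restricting the supremum to a d-dimensional family yields the mismatched divergence, a lower bound on KL whose dimension d, rather than the alphabet size k, governs the limiting bias and variance of the test statistic.

```python
# Sketch: mismatched divergence with a linear function class f_theta = theta . psi.
import numpy as np
from scipy.optimize import minimize

def mismatched_divergence(mu, pi, psi):
    """sup_theta { E_mu[f_theta] - log E_pi[exp(f_theta)] }, a lower bound
    on D(mu || pi) obtained by restricting Donsker-Varadhan to span(psi)."""
    def neg_objective(theta):
        f = psi @ theta                          # f_theta on the alphabet
        return -(mu @ f - np.log(pi @ np.exp(f)))
    d = psi.shape[1]
    res = minimize(neg_objective, np.zeros(d), method="BFGS")
    return -res.fun

k = 6
pi = np.full(k, 1.0 / k)                                  # uniform null
mu = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])       # alternative
psi = np.column_stack([np.arange(k), np.arange(k) ** 2])  # d = 2 features

d_mm = mismatched_divergence(mu, pi, psi)         # mismatched divergence
d_kl = float(np.sum(mu * np.log(mu / pi)))        # exact KL divergence
# d_mm <= d_kl, with equality when log(mu/pi) lies in the span of psi
```

The objective is concave in θ, so the d-dimensional maximization is a convex program; when ψ consists of the k−1 indicator functions of the alphabet, the bound is tight and the mismatched test reduces to the GLRT.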