Information Security and Privacy (Universitat Politècnica de Catalunya, Postdoctoral Fellowship)

I have recently joined the Information Security Group (ISG) as a postdoctoral researcher, within the Department of Telematics of the Universitat Politècnica de Catalunya. I am working on the Consolider project, under the supervision of Prof. Jordi Forné Muñoz and Prof. Miquel Soriano Ibáñez, in collaboration with Javier Parra-Arnau.

We are currently investigating a number of problems regarding security and privacy in computer networks, some of which are outlined below. Before the outline, we provide a quick summary of the ISG's work on privacy. Don't forget to check out the brand new site for regularly scheduled research talks organized by the ISG.

Summary of the ISG's Work on Privacy

The ISG is currently investigating measures aimed at protecting user privacy in information systems, focusing on statistical databases and location-based services.

Regarding databases, we consider microdata sets carrying information on individual respondents. In general, the data set contains key attributes or quasi-identifiers, namely attributes that may be linked with external information to identify the respondents to whom the records in the microdata set refer. Examples are job, address, age and gender. Additionally, the data set contains confidential attributes with sensitive information on the respondent, such as salary, religion, political affiliation or health condition. Perturbation of the key attributes enables us to preserve privacy to a certain extent, at the cost of losing some of the data utility with respect to the unperturbed version. The ISG is applying information-theoretic methods based on Shannon's rate-distortion theory to find a mathematically optimal tradeoff between privacy and distortion.

As for location-based services, we study collaborative protocols that allow users to privately send location-based queries to a service provider. The focus is on scenarios where a preexistent infrastructure is hardly affordable. More specifically, we assume that no trusted third party is available to act as an intermediary ensuring the user's anonymity and, consequently, that neither users nor service providers can be completely trusted.

Research projects we are or have been involved in include ARES CONSOLIDER and CONSEQUENCE [WWW,WWW].

A More Technical Overview

The brief, 2-page summary [PDF] is more accurate but somewhat more technical than the content on this page, and it is recommended if you are already familiar with the field. As a third alternative, for a quick, visual overview, see this brief project presentation [PDF].

A less technical description, slightly outdated but still informative, follows.

Information-Theoretic Methods for Privacy in Statistical Databases

This section summarizes the paper [PDF] submitted to PSD'08.


A microdata set is a data set whose records carry information on individual respondents, such as people or enterprises. The data set commonly contains key attributes or quasi-identifiers, namely attributes that, in combination, may be linked with external information to reidentify the respondents to whom the records in the microdata set refer. Examples are job, address, age and gender. Additionally, the data set contains confidential attributes with sensitive information on the respondent, such as salary, religion, political affiliation or health condition. The classification of attributes as key or confidential is not necessarily unique and disjoint. k-Anonymity requires that each combination of key attribute values be shared by at least k records in the data set. This may be achieved through the microaggregation approach illustrated in Fig. 1.
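For concreteness, here is a minimal Python sketch of the k-anonymity requirement just described; the attribute names and records are illustrative only and not drawn from any real data set.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check whether every combination of quasi-identifier values
    is shared by at least k records in the microdata set."""
    combos = Counter(
        tuple(r[a] for a in quasi_identifiers) for r in records
    )
    return all(count >= k for count in combos.values())

# Toy microdata set; attribute names and values are hypothetical.
records = [
    {"job": "nurse", "age": 30, "salary": 42000},
    {"job": "nurse", "age": 30, "salary": 51000},
    {"job": "clerk", "age": 45, "salary": 39000},
]

# False: the combination ("clerk", 45) appears in only one record.
print(is_k_anonymous(records, ["job", "age"], k=2))
```

Microaggregation would then perturb the key attributes (e.g., replace each group's values by a group centroid) until every combination reaches the required multiplicity k.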

Fig. 1. Microaggregation of key attributes.

The problem with k-anonymity is its vulnerability against skewness and similarity attacks, which essentially exploit the difference between the prior distribution of confidential data in the population, and the posterior (conditional) distribution given the observed, perturbed key attributes. To counter this, t-Closeness was recently defined as a privacy risk criterion for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the Kullback-Leibler divergence between the prior and the posterior distributions is no more than a threshold t.
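The t-closeness criterion stated above can be sketched directly in code. The following Python snippet checks the Kullback-Leibler divergence between each group's posterior distribution of the confidential attribute and the population-wide prior; all distributions and the threshold are toy values chosen for illustration.

```python
import math

def kl_divergence(p, q):
    """D(p || q) in bits; assumes q[x] > 0 wherever p[x] > 0."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

def satisfies_t_closeness(groups, prior, t):
    """Each group's confidential-attribute distribution must stay
    within KL divergence t of the population-wide prior."""
    return all(kl_divergence(posterior, prior) <= t for posterior in groups)

prior = {"flu": 0.5, "cancer": 0.5}        # population distribution (toy numbers)
groups = [{"flu": 0.6, "cancer": 0.4},     # per-group posterior distributions
          {"flu": 0.4, "cancer": 0.6}]
print(satisfies_t_closeness(groups, prior, t=0.05))  # True: both divergences ~0.029 bits
```

Tightening t forces the per-group distributions closer to the prior, which is precisely what defeats skewness and similarity attacks.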

Our Research

We have developed an information-theoretic formulation of the privacy-distortion tradeoff in microdata anonymization, represented in Fig. 2. 

Fig. 2. Information-theoretic formulation of the privacy-distortion problem.

Following the t-closeness model, the privacy risk is measured as the mutual information between perturbed key attributes and confidential attributes, equivalent to the Kullback-Leibler divergence between the posterior and prior distributions. We consider the problem of maximizing privacy, that is, minimizing the above mutual information, while keeping the perturbation of the data within a specified bound so that data utility is not excessively degraded. We have established a strong connection between this privacy-perturbation problem and the rate-distortion problem of information theory, and extended a number of results, including the convexity of the privacy-distortion function and the Shannon lower bound. A closed formula is obtained for the quadratic-Gaussian case, proving that the optimal perturbation is randomized rather than deterministic, which justifies the use of PRAM in the case of attributes with finite alphabets, or noise addition in the general case.
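To make the privacy measure concrete, the snippet below computes the mutual information I(X;Y) between a perturbed key attribute X and a confidential attribute Y from a toy joint distribution; the numbers are purely illustrative, not drawn from our experiments.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Toy joint distribution of a perturbed key attribute X and a
# confidential attribute Y; heavier perturbation drives I(X;Y) toward 0.
joint = {("a", 0): 0.4, ("a", 1): 0.1,
         ("b", 0): 0.1, ("b", 1): 0.4}
print(f"{mutual_information(joint):.3f} bits")  # 0.278 bits
```

The optimization we study seeks the perturbation (a conditional distribution of X given the original attributes) that minimizes this quantity subject to a distortion constraint, exactly as in the rate-distortion problem.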

Private Information Retrieval in Location-Based Services via User Collaboration

For additional details on this section, please refer to the paper [PDF] submitted to the IADIS'09 conference.


Privacy and security are paramount for the proper deployment of location-based services (LBSs). Suppose that a trusted third party (TTP) is available as an intermediary between the user and the LBS provider, as depicted in Fig. 3. 

Fig. 3. Trusted third party between user and LBS provider.

Then, the user may simply trust the TTP to hide their ID from the service provider, while the provider may trust the TTP to accept queries only from authorized users. In scenarios where a preexistent infrastructure is hardly feasible, an alternative solution to private LBSs consists in perturbing the location information, as shown in Fig. 4.

Fig. 4. Location perturbation in LBSs.

In this situation, there is a trade-off between privacy and data utility, and since the service provider knows the user ID, the query (and its reply) remains vulnerable to statistical analysis.
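A minimal sketch of the perturbation approach follows; the perturbation radius and coordinates are hypothetical, and uniform noise on a disc is just one of many possible perturbation mechanisms.

```python
import random

def perturb_location(lat, lon, radius_deg=0.01):
    """Add uniform planar noise within radius_deg degrees to a
    coordinate pair before it is sent to the LBS provider."""
    # Rejection sampling: draw from the bounding square until the
    # offset falls inside the disc of radius radius_deg.
    while True:
        dx = random.uniform(-radius_deg, radius_deg)
        dy = random.uniform(-radius_deg, radius_deg)
        if dx * dx + dy * dy <= radius_deg ** 2:
            return lat + dx, lon + dy

lat, lon = perturb_location(41.3874, 2.1686)  # coordinates near Barcelona
```

Enlarging the radius improves privacy against location inference but degrades the relevance of the provider's reply, which is the trade-off mentioned above.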

Our Research

We have developed a protocol based on user collaboration to privately retrieve location-based information from an LBS provider, which we expect to publish shortly. A conceptual representation is shown in Fig. 5.

Fig. 5. Privacy in LBSs through user collaboration.

Essentially, the protocol trusts the group, but not the individuals, and focuses on preserving the privacy of the entire query and its reply. Naturally, there is a cost in terms of network traffic between the users. The solution is appealing in environments where a preexistent infrastructure may not be assumed, such as ad hoc networks.

Quantization and Transforms for Distributed Source Coding, and Distributed Classification (Stanford University, Ph.D.)

My doctoral studies at Stanford University with the Image, Video and Multimedia Systems Group, led by Prof. Bernd Girod, focused on distributed source coding applied to video. I worked closely with Anne Aaron and Shantanu Rane. In particular, I investigated quantization and transforms for distributed source coding, and distributed classification. An overview follows. For additional details, please refer to my Ph.D. thesis [PDF].

Distributed Source Coding of Noisy Sources with Side Information

The main goal of my doctoral research was the design of rate-distortion optimal quantizers and transforms for distributed coding of noisy sources. In this problem, several instances of source data, or more generally, several noisy observations of unseen source data, are individually encoded and jointly decoded with the help of side information available at the decoder only (Fig. 6).

Fig. 6. Distributed source coding in a sensor network.

Despite the communication constraints, we attempted to exploit the joint statistics, assumed to be known everywhere, to obtain the best performance in terms of transmission rate and the distortion introduced in the estimation of the source data. More precisely, we considered the problem of designing optimal quantizers and reconstruction functions, assuming that ideal Slepian-Wolf coders are used for lossless distributed coding of the quantization indices at a rate arbitrarily close to that of joint encoding, i.e., the joint conditional entropy of the indices given the side information, with negligible error probability. The special case of a single observation with decoder side information is known as Wyner-Ziv coding.
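The ideal Slepian-Wolf rate just mentioned is simply a conditional entropy, which the following toy Python computation illustrates; the joint distribution is made up for the example.

```python
import math

def conditional_entropy(joint):
    """H(Q|Y) in bits from a joint pmf {(q, y): p}: the rate an ideal
    Slepian-Wolf coder needs to convey the quantization index Q
    when Y is available as side information at the decoder only."""
    py = {}
    for (q, y), p in joint.items():
        py[y] = py.get(y, 0.0) + p
    return -sum(p * math.log2(p / py[y])
                for (q, y), p in joint.items() if p > 0)

# Toy correlated pair: binary quantization index Q and side information Y
# that agree with probability 0.9.
joint = {(0, 0): 0.45, (1, 0): 0.05,
         (0, 1): 0.05, (1, 1): 0.45}
print(f"{conditional_entropy(joint):.3f} bits")  # 0.469 bits, versus H(Q) = 1 bit
```

The gap between H(Q) and H(Q|Y) is exactly the rate saving that distributed coding can achieve without the encoder ever seeing Y.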

Distributed coding finds applications in a variety of data compression scenarios, involving sensor networks, low-complexity video encoding, compression of light fields, error protection with graceful degradation, and secure data authentication.

Quantization and Transforms for Distributed Source Coding

Our initial work addressed the problem of designing optimal quantizers for distributed source coding. The generality of our formulation included both the symmetric and asymmetric scenarios, together with a number of coding schemes, such as the ideal coding achieving a rate equal to the joint conditional entropy of the quantized sources given the side information. We showed the optimality conditions quantizers must satisfy, and generalized the Lloyd algorithm for its design. Experimental results were shown for the Gaussian scalar asymmetric case.
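As background for the generalized design procedure, the classical Lloyd algorithm (without side information) is sketched below; this is the textbook version, not our distributed extension, and the sample data is synthetic.

```python
import random

def lloyd_scalar(samples, levels, iters=50):
    """Classical Lloyd algorithm for a scalar quantizer: alternate
    nearest-codeword assignment and centroid (conditional-mean) updates."""
    codebook = sorted(random.sample(samples, levels))
    for _ in range(iters):
        # Assignment step: map each sample to its nearest codeword.
        cells = [[] for _ in codebook]
        for x in samples:
            i = min(range(levels), key=lambda j: (x - codebook[j]) ** 2)
            cells[i].append(x)
        # Update step: move each codeword to its cell's centroid.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

random.seed(0)
samples = [random.gauss(0, 1) for _ in range(2000)]
print(lloyd_scalar(samples, 4))
```

Our generalization replaces the squared-error cost with a Lagrangian cost combining distortion and the Slepian-Wolf rate, so that the optimality conditions account for the side information available at the decoder.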

Recent work extended the theory to networks, considered high-rate quantization approximations, and included experimental results for vector quantizers. We also carried out experiments using transforms for distributed source coding; our theoretical results built on the high-rate quantization approximations developed, both for clean sources and for quantization and transforms for distributed source coding of noisy sources.

The topics of quantizer design, high-rate quantization, and transforms for distributed source coding are detailed in Chapters 3-6 of my Ph.D. dissertation [PDF].

Distributed Classification

Finally, we extended certain classification techniques, such as principal component analysis and linear discriminant analysis, to distributed classification problems. Results are summarized in Sec. 7.2 of my Ph.D. dissertation [PDF].