Research

This page is a bit outdated. Recently, I have applied deep learning and reinforcement learning to solve various problems in telecommunications and video analysis and processing. I have also conducted core machine learning research, in particular to improve the efficiency of deep neural networks for embedded applications.

Smart cities

Today, city managers face many challenges: growing urban populations, environmental constraints, resource availability, economic problems… To handle these, many cities strive to improve their planning and management. The term "smart city" denotes cities where sensors, information and communication technologies and citizen involvement are combined with traditional city resources to improve city management. This new field opens up many research opportunities. In the context of the INSIGHT project, I am attempting to extract and combine the information contained in large streams of sensor and human-generated data to provide operators with a real-time picture of the city. This has a huge potential impact on disaster monitoring. Even in normal operation, operators lack the manpower and the technology to fully exploit the data they have access to. In critical situations such as a flood or a snowstorm, the situation is worse (missing operators, coordination of the response, communication…). In this project, I have the chance to collaborate with the city of Dublin and the Federal Office of Civil Protection and Disaster Assistance. They give us access to real, massive-scale data and drive our research by defining their needs and evaluating our solutions. For example, we have access to the measurements of more than 1000 vehicle-count sensors and hundreds of GPS-equipped buses.

I have been working on time-series modelling to detect anomalies that might indicate a critical situation and to predict the evolution of the system. A key challenge of this project is to fuse the information provided by the various sensors and social media data we have access to. Moreover, some data can be commercially sensitive and cannot be accessed by the system. I have been developing a distributed system for heterogeneous sensor stream aggregation that respects this constraint: data is first processed locally, mapped to an ontology and then aggregated in a centralised system. Finally, Dublin, like many cities, has a radio station that broadcasts traffic updates and receives information from listeners. We are enhancing this system by also analysing social network data and automatically querying people.
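To make the anomaly-detection idea concrete, here is a minimal sketch using a rolling z-score over a simulated vehicle-count series. The window size, threshold and data are all illustrative assumptions, not the project's actual method or measurements.

```python
# Hedged sketch: rolling z-score anomaly detection on a simulated
# vehicle-count series. Window, threshold and data are made up.
import statistics

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Simulated hourly vehicle counts with a sudden spike (e.g. an incident).
counts = [50, 52, 48, 51, 49, 50, 53, 47, 52, 50, 49, 51, 180, 50, 48]
print(detect_anomalies(counts))  # → [12]: the spike is flagged
```

A real deployment would of course account for daily and weekly seasonality in traffic, which a plain rolling window ignores.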


Probabilistic graphical models

The real world is full of uncertainty, and analysing and modelling this uncertainty is crucial in many tasks. Probabilistic graphical models allow computers or humans to reason about large probability distributions that could not even be stored without them. They are a fundamental tool in machine learning, computer vision, bioinformatics and, of course, smart cities, among other fields. For example, a probabilistic graphical model can define a probability distribution over all pixels of an image for optical character recognition or segmentation. Such models can describe protein levels in a cell, or energy production and consumption. They can be used for fault diagnosis, robot navigation and even to compute Halo player ratings. They appear in many other applications and are a fascinating area of research.

To encode multivariate probability distributions efficiently, probabilistic graphical models exploit conditional independence relationships between variables to reduce the number of parameters. These models consist of a graph and parameters. Typically, the nodes of the graph are the variables and the edges encode independence relationships between them; for example, the absence of an edge between two variables can mean that they are independent conditionally on all the others. The parameters quantify the distribution.
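As a toy illustration of the parameter savings, consider n binary variables: a full joint table needs 2^n − 1 free parameters, while a chain in which each variable depends only on its predecessor needs just 1 + 2(n − 1). The numbers below are a self-contained counting exercise, not tied to any particular model.

```python
# Hedged sketch: free-parameter counts for n binary variables.
# Full joint table: 2**n - 1 parameters (one per outcome, minus one
# because probabilities sum to one).
# Chain X1 -> X2 -> ... -> Xn: 1 parameter for P(X1) plus 2 per
# subsequent conditional table P(Xk | Xk-1).

def full_joint_params(n):
    return 2 ** n - 1

def chain_params(n):
    return 1 + 2 * (n - 1)

for n in (3, 10, 20):
    print(n, full_joint_params(n), chain_params(n))
# → 3 7 5
# → 10 1023 19
# → 20 1048575 39
```

The gap grows exponentially with n, which is exactly why these models can represent distributions that could not otherwise be stored.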

I am particularly interested in learning high-dimensional probability distributions that allow efficient inference, for example computing beliefs over some variables conditioned on the observed values of others.
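A minimal sketch of such an inference, on an assumed three-variable toy chain B → A → C with made-up probabilities. It uses exact inference by enumeration, which only scales to tiny models; efficient inference in high dimensions is precisely what the research above targets.

```python
# Hedged sketch: exact inference by enumeration on a toy binary chain
# B -> A -> C. All probabilities are invented for illustration.

p_b = {True: 0.01, False: 0.99}                    # P(B)
p_a_given_b = {True: {True: 0.9, False: 0.1},      # P(A | B)
               False: {True: 0.05, False: 0.95}}
p_c_given_a = {True: {True: 0.7, False: 0.3},      # P(C | A)
               False: {True: 0.01, False: 0.99}}

def joint(b, a, c):
    """P(B=b, A=a, C=c), factorised along the graph."""
    return p_b[b] * p_a_given_b[b][a] * p_c_given_a[a][c]

def belief_b_given_c(c):
    """P(B | C=c): marginalise out A, then renormalise."""
    unnorm = {b: sum(joint(b, a, c) for a in (True, False))
              for b in (True, False)}
    z = sum(unnorm.values())
    return {b: v / z for b, v in unnorm.items()}

# Observing C=True raises P(B=True) from its prior 0.01 to about 0.125.
print(belief_b_given_c(True))
```

Note that C influences the belief about B only through A: this is the conditional independence encoded by the missing edge between B and C.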

Reinforcement learning

Since their invention, computers have played an ever-increasing role in our lives. However, while a human is able to adapt to changing or new circumstances, computers were originally programmed to achieve predefined tasks. Developing algorithms that can mimic this human capacity to adapt is the focus of artificial intelligence research, and it involves several challenges. Reinforcement learning studies one of them: the development of algorithms able to observe and act on an unknown, generic environment, to progressively learn about it and to achieve a given objective. An impressive example is the algorithm developed by DeepMind, which learns to play various Atari 2600 video games by observing the pixels of the screen, that is, without any prior knowledge of the rules or mechanisms of the game. Reinforcement learning already has several applications, such as ad placement on web pages, recommendation of movies, books or news items, home automation…

A key challenge in reinforcement learning is handling dynamic environments composed of many variables. I am leveraging my knowledge of probabilistic graphical models to develop new approaches that discover and exploit the structure among the variables of the environment.
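To make the observe-act-learn loop concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. The environment, rewards and hyperparameters are all invented for illustration; this is the standard flat algorithm, not the structured approaches described above.

```python
# Hedged sketch: tabular Q-learning on a 5-cell corridor. The agent
# starts at cell 0 and receives reward 1 for reaching cell 4.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)          # move left or right
alpha, gamma = 0.5, 0.9     # learning rate, discount factor
random.seed(0)

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic dynamics with walls at both ends."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                         # episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)           # explore uniformly
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        # Off-policy Q-learning update toward the greedy target.
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy moves right from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1}
```

Note that the Q-table here has one entry per (state, action) pair, which is exactly what breaks down in environments with many variables: the state space grows exponentially, motivating structured approaches.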

Note: This image is by Raizin, from Wikipedia, and is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.