Research

“The joy of discovery is certainly the liveliest that the mind of man can ever feel”

Areas of Interest

Assessment of Patient Physical Rehabilitation Exercises Using Deep Learning

This work tackles the challenge of automatically assessing physical rehabilitation exercises performed by patients without clinician supervision. The objective is to provide a quality score that ensures correct performance and helps patients achieve the desired results. To this end, we introduce a new graph-based model, the Dense Spatio-Temporal Graph Conv-GRU Network with Transformer, which combines a modified STGCN with transformer architectures for efficient handling of spatio-temporal data. The key idea is to represent skeleton data as a graph, respecting its non-linear structure, and to detect the joints that play the main role in each rehabilitation exercise. Dense connections and GRU mechanisms allow the model to process large 3D skeleton inputs rapidly and to capture temporal dynamics effectively. The transformer encoder's attention mechanism focuses on the relevant parts of the input sequence, making it well suited to evaluating rehabilitation exercises. Evaluation on the KIMORE and UI-PRMD datasets highlighted the potential of the proposed approach, which surpasses state-of-the-art methods in both accuracy and computational time, resulting in faster and more accurate learning and assessment of rehabilitation exercises. Additionally, the model provides valuable feedback through qualitative illustrations that highlight the significance of individual joints in specific exercises.
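To make the general idea concrete, the sketch below is a minimal, illustrative PyTorch version of the pipeline: a graph operation over skeleton joints, a GRU over time, a transformer encoder, and a regression head producing the quality score. It omits the dense connections, and the learnable adjacency, layer sizes, and hyperparameters are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn

class SpatioTemporalScorer(nn.Module):
    def __init__(self, num_joints=25, in_feats=3, hidden=64, heads=4):
        super().__init__()
        # A learnable adjacency stands in for the skeleton graph structure.
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.gcn = nn.Linear(in_feats, hidden)        # per-joint feature lift
        self.gru = nn.GRU(num_joints * hidden, hidden, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(hidden, 1)              # exercise quality score

    def forward(self, x):
        # x: (batch, time, joints, 3) raw 3D joint coordinates
        b, t, j, c = x.shape
        # Spatial aggregation over the joint graph, then per-joint projection.
        x = torch.einsum('ij,btjc->btic', torch.softmax(self.adj, -1), x)
        x = torch.relu(self.gcn(x)).reshape(b, t, -1)
        x, _ = self.gru(x)         # temporal dynamics across frames
        x = self.encoder(x)        # attention over the frame sequence
        return self.head(x.mean(dim=1)).squeeze(-1)  # one score per clip

model = SpatioTemporalScorer()
scores = model(torch.randn(2, 100, 25, 3))  # two 100-frame skeleton clips
print(scores.shape)  # torch.Size([2])
```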

Fall Event Detection Using Multisensor Data

In this work, we address the challenge of fall detection by applying thorough preprocessing techniques to the multisensor UP-FALL dataset, with the goal of eliminating noise and improving data quality. We then apply a feature selection process to identify the most relevant features derived from the dataset, which in turn enhances the performance and efficiency of the machine learning models.

We then evaluate the effectiveness of various machine learning models in detecting the moment of impact from the fused multisensor data. Through extensive experimentation, we assess our approach using several evaluation metrics. Our results show high accuracy in impact detection, demonstrating the power of leveraging multisensor data for fall detection tasks and highlighting the potential of our approach to enhance fall detection systems and improve the safety and well-being of individuals at risk of falls.
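As a rough illustration of this pipeline (normalization, feature selection, model evaluation), the scikit-learn sketch below runs on random stand-in data; the feature count, the SelectKBest selector, and the random forest classifier are placeholder choices, not the exact features or models used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.randn(500, 40)        # stand-in for fused multisensor features
y = np.random.randint(0, 2, 500)    # stand-in impact / no-impact labels

pipeline = Pipeline([
    ('scale', StandardScaler()),               # normalize sensor channels
    ('select', SelectKBest(f_classif, k=15)),  # keep most relevant features
    ('clf', RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Cross-validated evaluation with several metrics, as in the study.
scores = cross_validate(pipeline, X, y, cv=5,
                        scoring=['accuracy', 'f1', 'recall'])
for metric, values in scores.items():
    if metric.startswith('test_'):
        print(metric, values.mean().round(3))
```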

SAR Image Colorization Using Generative Adversarial Networks (GANs)

Project: Fusion InfraRouge – Optique – SAR (FIROSAR)

Location: Thales & Bordeaux INP (IMS Lab)

In this project, we propose methods for translating Synthetic Aperture Radar (SAR) images into optical images using Generative Adversarial Networks (GANs). Satellite images are widely used for purposes such as natural environment monitoring (pollution, forests, rivers), transportation improvement, and prompt emergency response to disasters. However, cloud cover makes monitoring of the ground situation with optical cameras unstable. Images captured at longer wavelengths reduce the effect of clouds; in particular, SAR images are nearly unaffected by clouds and are often used for stable observation of the ground. On the other hand, SAR images have lower spatial resolution and visibility than optical images. We therefore propose a deep neural network approach that generates optical images from SAR images, and we confirm its feasibility on paired datasets such as Sentinel-1 (SAR) and Sentinel-2 (optical), consisting of optical images and the corresponding SAR images.
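The sketch below compresses the SAR-to-optical idea into a small pix2pix-style conditional GAN: a generator maps a 1-channel SAR patch to a 3-channel optical patch, and a patch discriminator judges (SAR, optical) pairs. Network depth, channel counts, and the L1 loss weighting follow common practice and are assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

def block(ic, oc):  # conv -> norm -> activation
    return nn.Sequential(nn.Conv2d(ic, oc, 3, padding=1),
                         nn.BatchNorm2d(oc), nn.ReLU(inplace=True))

# Generator: 1-channel SAR in, 3-channel optical out.
generator = nn.Sequential(block(1, 32), block(32, 64), block(64, 32),
                          nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

# Discriminator scores (SAR, optical) pairs patch by patch.
discriminator = nn.Sequential(block(4, 32), block(32, 64),
                              nn.Conv2d(64, 1, 3, padding=1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
sar = torch.randn(4, 1, 64, 64)       # stand-in Sentinel-1 patches
optical = torch.randn(4, 3, 64, 64)   # stand-in Sentinel-2 patches

fake = generator(sar)
pred = discriminator(torch.cat([sar, fake], dim=1))
# Generator objective: fool the discriminator + stay close to the target.
g_loss = bce(pred, torch.ones_like(pred)) + 100 * l1(fake, optical)
print(g_loss.item())
```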

Complex Networks & Image Segmentation

With recent advances in complex network theory, graph-based techniques for image segmentation have attracted great attention. To segment an image into meaningful connected components, we propose a general image segmentation framework using community detection algorithms from complex networks. If regions are treated as communities, applying community detection algorithms directly can lead to an over-segmented image. To address this problem, we first split the image into small regions using an initial segmentation, and the resulting regions are used to build the complex network. To produce meaningful connected components and detect homogeneous communities, combinations of color- and texture-based features are employed to quantify region similarities. In short, the network of regions is constructed adaptively to avoid many small regions in the image, and community detection algorithms are then applied to the resulting adaptive similarity matrix to obtain the final segmented image. Experimental results show that the proposed framework improves segmentation performance compared to existing methods.
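A minimal end-to-end sketch of this framework might look as follows: SLIC superpixels provide the initial over-segmentation, mean region color (standing in for the richer color and texture features) weights the edges of the region network, and a modularity-based community detection algorithm merges regions into the final segments. The segmentation parameters and similarity function are illustrative assumptions.

```python
import networkx as nx
import numpy as np
from skimage import data, segmentation

# Initial over-segmentation: SLIC superpixels become the network's nodes.
image = data.astronaut()
labels = segmentation.slic(image, n_segments=200, start_label=0)

# Mean region color stands in for the combined color/texture features.
means = {int(r): image[labels == r].mean(axis=0) for r in np.unique(labels)}

# Link adjacent regions, weighting edges by color similarity.
G = nx.Graph()
h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
for a, b in np.unique(np.vstack([h, v]), axis=0):
    if a != b:
        w = 1.0 / (1.0 + np.linalg.norm(means[int(a)] - means[int(b)]))
        G.add_edge(int(a), int(b), weight=w)

# Communities of similar regions become the final segments.
communities = nx.community.greedy_modularity_communities(G, weight='weight')
mapping = {r: i for i, comm in enumerate(communities) for r in comm}
segmented = np.vectorize(lambda r: mapping.get(int(r), -1))(labels)
print(f'{labels.max() + 1} regions merged into {len(communities)} segments')
```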

Multilayer Network Model for Movie Story Analysis

Network models have been increasingly used in recent years to support the summarization and analysis of narratives such as popular TV series, books, and news. Inspired by social network analysis, most of these models focus on the characters at play: the network captures character interactions well, giving a broad picture of the narration's content. A few works went further by introducing additional semantic elements, but always within a single-layer network. In contrast, we introduce in this work a multilayer network model that captures more elements of a movie's narration: people, locations, and other semantic elements. This model enables new measures and insights on movies, which we demonstrate on two very popular science fiction movies.
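As a toy illustration of the multilayer idea, the sketch below builds such a network with networkx from invented scene annotations: each layer holds one element type, co-occurrence within a scene creates weighted intra- and inter-layer edges, and simple layer-aware queries become possible. The scene data and the example measure are hypothetical, not drawn from the movies analyzed.

```python
import itertools
import networkx as nx

# Invented scene annotations: each scene lists its elements per layer.
scenes = [
    {'characters': ['Hero', 'Mentor'], 'locations': ['Ship'],
     'semantic': ['departure']},
    {'characters': ['Hero', 'Rival'], 'locations': ['Station'],
     'semantic': ['conflict']},
]

G = nx.Graph()
for scene in scenes:
    elements = [(layer, name) for layer, names in scene.items()
                for name in names]
    for layer, name in elements:
        G.add_node((layer, name), layer=layer)
    # Co-occurrence in a scene creates weighted edges: within a layer they
    # capture e.g. character interactions, across layers they tie characters
    # to places and themes.
    for u, v in itertools.combinations(elements, 2):
        w = G.get_edge_data(u, v, default={}).get('weight', 0)
        G.add_edge(u, v, weight=w + 1)

# Layer-aware queries become possible, e.g. a character's location spread.
hero = ('characters', 'Hero')
places = [name for layer, name in G[hero] if layer == 'locations']
print(f"Hero appears in {len(places)} locations: {places}")
```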