Master's thesis

My thesis project involved developing the first concept-based explainability methods for graph neural networks, with the goal of building an interpretable QSAR (quantitative structure-activity relationship) model for drug discovery. The appeal of such models is that they identify the physicochemical properties (our concepts) responsible for a molecule's desired activity. With an additional post-hoc explainability method, we can then locate the structural parts of a molecule that give rise to those physicochemical properties, which tells us where to direct chemical modifications to obtain the desired activity. Our explainability module, obtained by adapting an explainability method for convolutional neural networks to graph neural networks, improves not only the interpretability of our models but also their performance.
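One plausible way to realize such a model is a concept-bottleneck architecture: a GNN encoder predicts the concepts (physicochemical properties), and the activity is predicted from those concepts alone. The sketch below is a minimal illustration in plain PyTorch, not the thesis implementation; the ConceptGNN class, the mean-aggregation message passing, and all dimensions are assumptions chosen for brevity.

```python
# Minimal concept-bottleneck GNN sketch (illustrative, not the thesis code).
import torch
import torch.nn as nn


class ConceptGNN(nn.Module):
    """GNN that first predicts interpretable concepts (e.g. physicochemical
    properties), then predicts activity from those concepts alone."""

    def __init__(self, node_dim: int, hidden_dim: int, n_concepts: int):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden_dim)
        # Two rounds of simple mean-aggregation message passing.
        self.msg1 = nn.Linear(hidden_dim, hidden_dim)
        self.msg2 = nn.Linear(hidden_dim, hidden_dim)
        # Concept head: one sigmoid unit per physicochemical property.
        self.concept_head = nn.Linear(hidden_dim, n_concepts)
        # The activity head sees ONLY the concepts, which is what makes the
        # model interpretable: every prediction is mediated by concept scores.
        self.activity_head = nn.Linear(n_concepts, 1)

    def forward(self, x, adj):
        # x: (n_nodes, node_dim) atom features; adj: (n_nodes, n_nodes) adjacency.
        h = torch.relu(self.embed(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
        h = torch.relu(self.msg1(adj @ h / deg))
        h = torch.relu(self.msg2(adj @ h / deg))
        g = h.mean(dim=0)  # whole-molecule readout
        concepts = torch.sigmoid(self.concept_head(g))
        activity = self.activity_head(concepts)
        return activity, concepts


model = ConceptGNN(node_dim=16, hidden_dim=64, n_concepts=8)
x = torch.randn(12, 16)                      # 12 atoms, 16 features each
adj = (torch.rand(12, 12) > 0.7).float()
adj = ((adj + adj.T) > 0).float()            # symmetrize: undirected molecular graph
activity, concepts = model(x, adj)
print(activity.shape, concepts.shape)        # torch.Size([1]) torch.Size([8])
```

In training, both heads would typically be supervised: the concept head against known physicochemical property labels and the activity head against the assay outcome, so that each activity prediction can be traced back to human-interpretable concept scores.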

University projects

Interactive Graphics Project

Comparing classic and primitive-based versions of RRT*

Conditional GAN for brain MRI denoising in k-space

Measurement of gait clearance from wearable sensors 

Interactive gameplay with the Pepper robot