About my PhD thesis

Copyright of the image: ©Amélie Véron and Xavier Hinaut/Communication/UCBL

Keywords:

Recurrent Neural Network, Firing Rate Neurons, Model, Language, Human-Robot Interaction, Categorization, Sequence Processing, Thematic Role Assignment, Artificial Neural Network, Echo State Network, Reservoir Computing.

Abstract:

Primates can learn complex sequences that can be represented in the form of categories, and even more complex hierarchical structures such as language. The prefrontal cortex is involved in the representation of the context of such sequential structures, and the striatum permits the association of this context with appropriate behaviours.

In this thesis, the Reservoir Computing paradigm is used: a recurrent neural network with fixed connections models the prefrontal cortex, and the output layer models the striatum.
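The paradigm described above can be sketched in a few lines of NumPy. This is a minimal, generic echo state network, not the thesis code: the reservoir size, spectral radius, leak rate, and ridge parameter below are illustrative values chosen for the sketch, and the training data is random placeholder data. Only the linear readout (the "striatum" analogue) is trained; the recurrent weights (the "prefrontal cortex" analogue) stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res, n_out = 3, 100, 2  # illustrative sizes, not the published model's

# Fixed random input and recurrent weights: these are never trained.
W_in = rng.uniform(-1, 1, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs, leak=0.3):
    """Collect leaky-integrator reservoir states for a sequence of inputs."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout by ridge regression on the collected states.
inputs = rng.uniform(-1, 1, (200, n_in))    # placeholder input sequence
targets = rng.uniform(-1, 1, (200, n_out))  # placeholder target sequence
X = run_reservoir(inputs)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets).T

predictions = X @ W_out.T  # readout activations at each time step
```

Because the recurrent weights are fixed, training reduces to a single linear regression, which is what makes the approach cheap and well suited to analysing the reservoir's internal activations.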

In a first study, the model was used to show that it is possible to obtain neural activations similar to electrophysiological recordings made in the macaque prefrontal cortex, namely specific activations coding for the identity and the category of the sequences. It was also shown that this activity can be used to discriminate the sequences in an online fashion.

The model was then applied to syntactic language processing, in particular thematic role assignment. The model is able (1) to correctly process the majority of grammatical constructions that were not learned, demonstrating its generalization capabilities, and (2) to make online predictions while processing a construction. It is proposed that a significant change in these predictions over a short period of time is responsible for generating event-related potentials such as the P600.

Finally, the model, as used to process grammatical constructions, was applied in the framework of human-robot interaction for both sentence comprehension and production.

The use of the model across these different tasks shows that the brain's faculty for representing and learning sequences may be based on highly recurrent connections.

Python code of the Syntactic Reservoir Model:

Source code of Syntactic Reservoir Model (PLoS ONE, 2013)

    This is the source code (in Python) provided as supplementary material with our 2013 PLoS ONE paper. It enables you to reproduce the experiments of the paper and gives full access to the model: you will be able to modify all the parameters and change the model. It uses Python libraries such as NumPy, and also uses the Oger toolbox developed within the EU FP7 ORGANIC project (2009-2013).

    For explanations of how to install and use this code, and for access to the corpus used, please see my Downloads page.

Relevant Publications (of my PhD work only):

For more publications: see my profile on Google Scholar.

Read more about Reservoir Computing.

More information about the image above:

This photo was selected for the exhibition "Thèse'art" in Lyon in spring 2011, for the 40th anniversary of the University of Lyon. The idea was to portray PhD students "at work" with various cues about their research topic. I will let you find out what these cues are ... (there are 3 cues ... 4 if you have really good eyes) ... if you think you have found the answer, check it here.


I obtained my PhD in January 2013 at the University of Lyon I, France. I did my PhD under the supervision of Peter F. Dominey at the Stem Cell and Brain Research Institute, Lyon.

Download my PhD thesis (introduction in French and scientific content in English)

PhD Thesis Title:

Recurrent Neural Networks for Abstract Sequence and Grammatical Structure Processing, with an Application to Human-Robot Interaction