Computational Neuroscience, Reservoir Computing, Language, Songbirds & Robotics
Quick overview of my research
The best way to get the latest updates about my research is to have a look at the latest batch of my preprints and papers on my HAL profile and my Google Scholar profile.
Who am I?
I am a Researcher at Inria in Bordeaux, France (started in February 2016). My research is at the "edge of chaos" of several fields: Computational Neuroscience, Bio-Inspired Machine Learning & Developmental Robotics. I am in the Mnemosyne team headed by Frédéric Alexandre, at the intersection of three labs: Inria Bordeaux Sud-Ouest, LaBRI, and the Institute of Neurodegenerative Diseases (IMN).
Nov. 2022: Habilitation Thesis / Habilitation à Diriger les Recherches (HDR), University of Bordeaux. "Reservoir SMILES: Towards SensoriMotor Interaction of Language and Embodiment of Symbols with Reservoir Architectures".
My work mainly focuses on Recurrent Neural Network modelling (especially to model the prefrontal cortex), Language Acquisition (applied to Robotics) and the exploration of the neural coding of birdsong syntax. For modelling, I mainly use Echo State Networks, which are part of the Reservoir Computing framework (here is an introduction to RC).
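For readers unfamiliar with Echo State Networks, here is a minimal, illustrative sketch of the idea (a toy example with made-up parameter values, not one of the models from my papers): a fixed random recurrent "reservoir" is driven by the input sequence, and only a linear readout is trained, typically with ridge regression.

```python
import numpy as np

# Minimal Echo State Network sketch (illustrative only).
rng = np.random.default_rng(42)
n_in, n_res = 1, 200
leak, sr, ridge = 0.3, 0.9, 1e-6                 # leak rate, spectral radius, regularization

W_in = rng.uniform(-1, 1, (n_res, n_in))         # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed random recurrent weights
W *= sr / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to the desired spectral radius

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)  # leaky-integrator update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
U, Y = np.sin(t)[:-1, None], np.sin(t)[1:, None]

X = run_reservoir(U)
# Only the readout is trained, by ridge regression; reservoir weights stay fixed.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```

Because only the readout is trained, learning reduces to a linear regression, which is what makes reservoir models cheap to train and convenient to analyse.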
The common thread of my research is the exploration of the neural coding, and the modelling, of complex sequence processing, chunking, learning and production for "syntax-based" sequences, and the application of these models to robotics (for future embodiment purposes). In particular, my artificial neural network models focus on the dynamics of the prefrontal cortex and basal ganglia.
In these models, I am interested in human (and robot) grammar learning and acquisition, as well as in the categorization of monkey motor action sequences. In addition, I have worked on experimental protocols and on the analysis of the neural coding of canary song. Canary song is particularly interesting for better understanding human language mechanisms because it is highly variable, has a complex "syntax", and continues to evolve even during adult life.
Short bio
After obtaining my Ph.D. at the University of Lyon in January 2013 at the Stem Cell and Brain Research Institute (SBRI / INSERM 846) under the supervision of Peter Ford Dominey, I did post-doctoral positions at the University of Hamburg in 2013 and 2015 (Marie Curie Individual Fellowship) in the team of Stefan Wermter, and a post-doctoral position (CNRS) in 2014 at the Paris-Saclay Neuroscience Institute (NeuroPSI) in the team of Catherine Del Negro and Jean-Marc Edeline.
Some news!
(sorry, not up to date! See my HAL profile and my Google Scholar profile instead.)
March 2022
Presentation of our library ReservoirPy at the Sorbonne Center for Artificial Intelligence (SCAI): https://sed-paris.gitlabpages.inria.fr/ai-community/dev_talks/2022-03-22/
February 2022
Presentation of our library ReservoirPy at Dataquitaine2022: https://www.dataquitaine.com/2022/conference-ia-ai-datascience-ro-bordeaux-2022#calendar
January 2022
New major release of ReservoirPy! https://github.com/reservoirpy/reservoirpy
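To give a flavour of what ReservoirPy does, here is a short usage sketch with its node API (the task, dataset choice and hyperparameter values are illustrative; see the repository README and documentation for up-to-date examples):

```python
# Illustrative ReservoirPy sketch: chaotic time-series forecasting (Mackey-Glass).
from reservoirpy.datasets import mackey_glass
from reservoirpy.nodes import Reservoir, Ridge

X = mackey_glass(n_timesteps=2000)            # univariate series, shape (2000, 1)
X_train, y_train = X[:1500], X[1:1501]        # one-step-ahead prediction targets
X_test, y_test = X[1500:-1], X[1501:]

reservoir = Reservoir(units=500, lr=0.3, sr=0.9)   # leaky reservoir node
readout = Ridge(ridge=1e-6)                        # linear readout trained by ridge regression
esn = reservoir >> readout                         # chain nodes into a model

esn = esn.fit(X_train, y_train, warmup=100)        # only the readout is trained
y_pred = esn.run(X_test)
print("test MSE:", ((y_pred - y_test) ** 2).mean())
```

The `>>` operator chains a reservoir node and a ridge readout into a single trainable model; stacking several reservoirs or swapping nodes follows the same pattern.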
October 2020
Nov 2/3: I am organising the SMILES workshop on Sensorimotor, Language and Embodiment of Symbols! It is a satellite online workshop of ICDL 2020.
Nov 2018 to Sept 2020
Several papers out! Please check my Google Scholar for updated info.
October 2018
Summary presentation of my research on language processing with Reservoir Computing and application to Human-Robot Interaction.
New papers from this summer:
• Strock, A., Rougier, N., & Hinaut, X. (2018). A Simple Reservoir Model of Working Memory with Real Values. Proc. of IJCNN, Brazil, July 2018.
– Code: https://github.com/anthony-strock/ijcnn2018
• Hinaut, X. (2018). Which Input Abstraction is Better for a Robot Syntax Acquisition Model? Phonemes, Words or Grammatical Constructions? In 2018 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob).
– Supp. Mat. & Code: https://github.com/neuronalX/Hinaut2018_icdl-epirob
• Pagliarini, S., Hinaut, X., Leblois, A. (2018) A Bio-Inspired Model towards Vocal Gesture Learning in Songbird. In 2018 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob).
October 2017
Invited talk at "The role of the basal ganglia in the interaction between language and other cognitive functions" workshop at ENS Ulm, Paris, France.
March 2017
Looking for a PhD candidate for the SONNET project
October 2016
Invited talk at IROS Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics workshop.
September 2016
Paper accepted at ICDL-epirob 2016 conference.
August 2016
Paper accepted at the RO-MAN 2016 conference: Using Natural Language Feedback in a Neuro-inspired Integrated Multimodal Robotic Architecture ResearchGate
April 2016
Two papers accepted at ESANN 2016 conference:
Semantic Role Labelling for Robot Instructions using Echo State Networks ResearchGate
Activity recognition with echo state networks using 3D body joints and objects category ResearchGate
February 2016
Starting a Researcher position ("Chargé de Recherche 2ème classe", CR2) at Inria (French Institute for Research in Computer Science and Automation) in the Mnemosyne team. The team is active in several labs: Inria, IMN and LaBRI.
December 2015
New NIPS workshop paper on a Bilingual Recurrent Neural Network learning English and French at the same time:
X. Hinaut, J. Twiefel, M. Petit, P.F. Dominey, S. Wermter (2015) A Recurrent Neural Network for Multiple Language Acquisition: Starting with English and French. Workshop on "Cognitive Computation: Integrating Neural and Symbolic Approaches", CoCo @ NIPS 2015. PDF
September 2015
New journal paper on the production of sentences in English and Japanese with a Recurrent Neural Network:
X. Hinaut, Lance, C. Drouin, M. Petit, G. Pointeau, P.F. Dominey (2015) Cortico-Striatal Response Selection in Sentence Production: Insights from neural network simulation with Reservoir Computing. Brain and Language, vol. 150, Nov. 2015, pp. 54–68. doi:10.1016/j.bandl.2015.08.002 HTML (FREE until mid-October 2015)
July 2015
NEW! IJCAI 2015 video:
"Humanoidly Speaking": How the Nao humanoid robot can learn the name of objects and interact with them through common speech.
IJCAI 2015 video competition link.
Software: Take a look at "Syntactic Reservoir Model" and "DOCKS" on the WTM team software webpage to download and try the source code to reproduce the experiment.
Contact info
Email: xavier {dot} hinaut {home symbol} inria {dot} fr
Latest news
August 31st 2021: I am organising the 2nd edition of the SMILES workshop on Sensorimotor, Language and Embodiment of Symbols! It is a satellite online workshop of ICDL 2021.
March 2015
I just began a new post-doc at the University of Hamburg with a grant from the European Union:
Marie Curie Intra-European Fellowship (IEF) 2015-2017, "EchoRob": Echo State Networks for Developing Language Robots. University of Hamburg, Dept. of Computer Science, Knowledge Technology group (Stefan Wermter).
A dedicated website for this project will come soon, enabling users to interact with robots and help them learn languages.
Some other videos of my work
Videos of Human-Robot Interaction (HRI): application of our language neural model.
*** French television news (JT de France 2):
If you cannot see the video please follow this link.
*** Language comprehension (via syntax learning):
(Description translated from French)
The iCub humanoid robot, on which the team led by Peter Ford Dominey at Inserm unit 846 (the Stem Cell and Brain Research Institute in Lyon) has been working for many years, is now able to understand what it is told and to anticipate the end of a sentence. This technological feat was made possible by the development of a "simplified artificial brain" that reproduces certain types of so-called "recurrent" connections observed in the human brain. This artificial brain system allows the robot to learn, and then to understand, new sentences with new grammatical structures. It can link two sentences together and can even predict the end of a sentence before it occurs.
This work was published by Xavier Hinaut and Peter F. Dominey in the journals PLoS ONE and Frontiers in NeuroRobotics.
If you cannot see the video please follow this link.
*** Language production (with an "inverse" model similar to the comprehension model):
If you cannot see the video please follow this link.
Latest papers!
New NIPS workshop paper on a Bilingual Recurrent Neural Network learning English and French at the same time:
X. Hinaut, J. Twiefel, M. Petit, P.F. Dominey, S. Wermter (2015) A Recurrent Neural Network for Multiple Language Acquisition: Starting with English and French. Workshop on "Cognitive Computation: Integrating Neural and Symbolic Approaches", CoCo @ NIPS 2015. PDF
New journal paper on the production of sentences in English and Japanese with a Recurrent Neural Network:
X. Hinaut, Lance, C. Drouin, M. Petit, G. Pointeau, P.F. Dominey (2015) Cortico-Striatal Response Selection in Sentence Production: Insights from neural network simulation with Reservoir Computing. Brain and Language, vol. 150, Nov. 2015, pp. 54–68. doi:10.1016/j.bandl.2015.08.002 HTML (FREE until mid-October 2015)
(September 2014) in ICANN: An Incremental Approach to Language Acquisition: Thematic Role Assignment with Echo State Networks pdf
(May 2014) in Frontiers in NeuroRobotics: Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks (free)
More papers on my Publications page or here: ResearchGate or Google Scholar profile pages.
Some links!
Social networks pages:
Mendeley, Academia, ResearchGate, LinkedIn
Other webpages
(previous) Lab web page: https://www.informatik.uni-hamburg.de/~hinaut/
Education
Ph.D. in Computational Neuroscience, 2013, University of Lyon
MS in Cognitive Science, 2009, EPHE Paris
MS in Computer Science, 2008, University of Technology of Compiègne
Research Interests
Computational Neuroscience, Bio-Inspired Machine Learning, and Developmental Robotics
Reservoir computing, Echo State Networks, Recurrent Neural Networks
Natural Language Processing, Songbird syntax, Complex sequences processing, Time-series prediction
Neural dynamics theory, Dynamical systems
Brain hierarchy, Hierarchical and distributed model architectures, Time-scale hierarchy
Generic neural model for computation, learning, encoding, production and selection of sequences
Developmental robotics, Embodied cognition, Enaction, Multi-modal representations
Thalamo-cortico-basal loops, Reinforcement learning, Working memory, Decision making
Other bio-inspired methods (Genetic Algorithms, ...), Artificial life, Game theory