Post-Doctoral Position: HuRRiCane project (until end March 2019)

posted Mar 13, 2019, 9:33 AM by Xavier Hinaut   [ updated Mar 17, 2019, 11:47 AM ]
HuRRiCane: Hierarchical Reservoir Computing for Language Comprehension

Post-doc position for 1 year
Starting October 2019
The position may be extended further (depending on funds availability)

Deadline for applications
As soon as possible.
Email before 27th of March 2019
Apply on or before 31st of March 2019

How to apply?
First, send me an email with CV and motivation letter at xavier(dot)hinaut(at)inria.fr

Keywords

Computational Neuroscience, Recurrent Neural Networks, Reservoir Computing, Language Processing, Language Acquisition, Speech Processing, Modeling, Language Grounding, Prefrontal Cortex, Sequence Learning, Machine Learning.

Scientific research context

How does our brain understand a sentence while it is being pronounced? How are we able to produce sentences that the brain of the hearer will understand? There is a huge number of tools for Natural Language Processing (NLP), but far fewer computational models that try to understand how language comprehension and production actually work in the brain. There are theoretical models and models based on psychological experiments (Dell & Chang 2014), but few models are based on neuro-anatomy and brain processes (Hinaut & Dominey 2013). Moreover, even fewer of these models have been implemented in robots, to demonstrate their robustness among other things (Hinaut et al. 2014). Using robots to study language grounding, acquisition and development is not a new topic; in fact, Cangelosi et al. (2010) have proposed a roadmap for this long-term research plan.
In order to model how a sentence can be processed, word by word (Hinaut & Dominey 2013), or even phoneme by phoneme (Hinaut 2018), the use of recurrent neural networks such as Reservoir Computing (Jaeger 2004), and its extension, Conceptors (Jaeger 2017), offers advantages and interesting results. In particular, the possibility to compare the dynamics of the model with the dynamics of brain electrophysiological recordings (Enel et al. 2016) is an interesting asset. The Reservoir Computing paradigm is also attractive because it can be trained with little data and has fast execution times, which suits human-robot interaction. Using linguistic models with robots is not only useful for validating the models in real conditions: it also enables testing of other hypotheses, notably the grounding of language (Harnad 1990) and the emergence of symbols (Taniguchi et al., 2016).
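To give a flavour of the paradigm, here is a minimal sketch of an echo state network, the classic form of reservoir computing: a fixed random recurrent "reservoir" projects an input sequence into a high-dimensional state space, and only a linear readout is trained (here by ridge regression). All sizes, parameter values, and the random data are illustrative choices, not the models used in the project.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res, n_out = 3, 100, 2

# Fixed (untrained) input and recurrent weights.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale the spectral radius below 1 so the reservoir dynamics stay stable.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs, leak=0.3):
    """Collect leaky-integrated reservoir states for a sequence of inputs."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the readout is trained: ridge regression from states to targets.
inputs = rng.normal(size=(200, n_in))     # toy input sequence
targets = rng.normal(size=(200, n_out))   # toy targets
X = run_reservoir(inputs)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets)

predictions = X @ W_out
print(predictions.shape)  # (200, 2)
```

Because the recurrent weights are never trained, learning reduces to a single linear regression, which is why such models need little data and run fast enough for real-time interaction.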

Work description

The objective is to investigate how a sentence comprehension model based on reservoir computing can learn to understand sentences by exploring which meanings the sentences can have, involving several steps from a stream of phonemes to words and from a stream of words to sentence comprehension. The model will be implemented on a virtual agent first and then on the Nao humanoid robot.
For the experiments we want to conduct, a model implemented in a situated agent or robot is needed in order to "understand" the meaning of the utterance being produced and figure out whether that meaning is plausible. If the meaning found does not make sense, the model could "re-parse" the sentence and reinterpret it. Moreover, we prefer a concrete corpus based on actions a robot can perform rather than abstract sentences from the Wall Street Journal (a classical benchmark). We have already performed several experiments with both sentence comprehension and production models on humanoid robots such as iCub and Nao (Hinaut et al. 2014, Twiefel et al. 2016).
Having a plausibility measure for a meaning will allow us to set up a cascading reinterpretation when necessary: first at the word level, and if that is not enough, reinterpreting the phonemes to find new plausible words. There exist two main types of observations made with an electroencephalogram (EEG) on human subjects: the P600 and N400 event-related potentials. They are assumed to correspond to syntactic or semantic reinterpretations of the sentence. The model developed will have to account for these observations. In Hinaut & Dominey (2013), we showed that our model could provide an equivalent of the P600 when a sentence was syntactically complex.
This project is linked to other projects in the team on the hierarchical organization of the prefrontal cortex (including Broca's area, involved in language). This hierarchy corresponds to an increasingly higher abstraction, which is made by different sub-areas. We will therefore be able to link this post-doc project to existing projects of the team, where different levels of abstractions are necessary for sentence comprehension.

Main References

• G. S. Dell, & F. Chang (2014) The P-chain: Relating sentence production and its disorders to comprehension and acquisition. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1634).
• H. Jaeger, (2017) Using conceptors to manage neural long-term memories for temporal patterns. The Journal of Machine Learning Research, 18(1), 387-429.
• X. Hinaut, P.F. Dominey (2013) Real-Time Parallel Processing of Grammatical Structure in the Fronto- Striatal System: A Recurrent Network Simulation Study Using Reservoir Computing. PloS ONE 8(2): e52946.
• X. Hinaut, M. Petit, G. Pointeau, P.F. Dominey (2014) Exploring the Acquisition and Production of Grammatical Constructions Through Human-Robot Interaction. Frontiers in NeuroRobotics 8:16.
• X. Hinaut (2018) Which Input Abstraction is Better for a Robot Syntax Acquisition Model? Phonemes, Words or Grammatical Constructions? In 2018 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob).

Skills

Good background in computational neuroscience, computer science, physics and/or mathematics;

A strong interest for neuroscience, linguistics and the physiological processes underlying learning;

Python programming, with experience with scientific libraries such as NumPy/SciPy (or a similar environment: MATLAB, etc.);

Experience in machine learning or data mining;

Independence and ability to manage a project;

Good English reading/speaking skills.

French speaking is not required.

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage

Remuneration

2653€ / month (before taxes)

More information

Team websites:
team.inria.fr/mnemosyne
www.inria.fr/en/teams/mnemosyne

Official job description and application form:
(also send me an email before applying)
https://jobs.inria.fr/public/classic/en/offres/2019-01449
