This workshop is endorsed by SIGNLL, the Special Interest Group on Natural Language Learning of the Association for Computational Linguistics.
The human ability to acquire and process language has long attracted interest and generated much debate, owing to the apparent ease with which such a complex and dynamic system is learnt and used in the face of ambiguity, noise, and uncertainty. This subject raises many questions, ranging from the nature vs. nurture debate over how much must be innate and how much must be learned for acquisition to succeed, to the mechanisms guiding the learning path (general vs. language-specific), to the way language is represented and processed in the human brain. These questions have been discussed in the context of first and second language acquisition and bilingualism, as well as in studies of how linguistic structures are stored, retrieved, and processed, with crosslinguistic studies shedding light on the influence of the language and the environment.
The past decades have seen a massive expansion in the application of neural networks and statistical and machine learning methods to natural language processing (NLP). This work has yielded impressive results in numerous speech and language processing tasks, including speech recognition, morphological analysis, parsing, lexical acquisition, semantic interpretation, and dialogue management. These methods can also contribute to research on human language acquisition, processing, and change.
The use of computational modeling has been boosted by advances in neural networks and machine learning techniques, and by the availability of resources such as corpora of child and child-directed speech, and data from psycholinguistic tasks performed by typical and clinical populations. Many existing computational models attempt to study language tasks under cognitively plausible criteria, to explain the developmental stages observed in the acquisition and evolution of language abilities, and to capture the constraints governing language processing. In doing so, computational modeling provides insight into the mechanisms plausibly involved in human language processes, and inspires the development of better language models and techniques. These investigations are important: if computational techniques can improve our understanding of human language acquisition, processing, and change, the results will not only benefit the cognitive sciences in general but will also feed back into NLP, placing us in a better position to develop useful language models.
Success in this type of research requires close collaboration between the NLP, physics, biology, linguistics, psychology, and cognitive science communities. To this end, interdisciplinary workshops can play a key role in advancing existing research and initiating new directions. The workshop is aimed at anyone interested in the relevance of computational techniques for understanding first, second, and bilingual language acquisition, and language processing, change, or loss in normal and clinical conditions. It is relevant to work in speech and language processing, machine learning, artificial intelligence, linguistics, psycholinguistics, psychology, and the cognitive sciences more broadly. Papers are invited on, but are not limited to, the following topics:
The authors of the best papers will be invited to submit an extended version to the Special Issue on Computational Models of Language and Cognition of the Journal of Natural Language Engineering.