Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science


Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science (SemSpace2019) is the third in a series of workshops. Previous editions were held in 2016, co-located with the 13th International Conference on Quantum Physics and Logic (QPL2016), and in 2018, co-located with the International Symposium on Quantum Interactions 2018.

This year we are very pleased to be part of ESSLLI 2019.

We also have a course at ESSLLI on Vector Space Models of Meaning.


Aims and Scope

Vector embeddings of word meanings have become a mainstream tool in large-scale natural language processing through publicly available implementations by Google and Facebook (Word2Vec and FastText). These models rely on the distributional hypothesis, put forward by Harris and Firth, that words that often occur in the same contexts have similar meanings. They have been applied to many language tasks, such as summarisation, disambiguation, and named entity recognition, all through their key characteristic: the ability to reason about degrees of similarity between word meanings via the distances between the embeddings.

The notion of vectors and their distances has also been employed in cognitive science, where we have the vector symbolic architectures of Smolensky [1990] and the conceptual spaces of Gärdenfors [2004]. Unrelated to natural language and cognitive science, vectors and vector spaces have been extensively used as models of physical theories, especially the theory of quantum mechanics. These were first put forward by Hilbert and led to one of the first axiomatisations of quantum theory. They were later used by Birkhoff and von Neumann [1936] to develop the first logic for reasoning about properties of quantum processes. The similarities between the vector representations of quantum mechanics and those of natural language were first observed by Van Rijsbergen [2004] in the context of vector models of documents in information retrieval. Recently this connection was rediscovered through the work of Clark and Pulman [2007] and Coecke et al. [2010], and independently through the work of Lambek [2001] on bicompact linear logic and compact closed structures in natural language.
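The similarity reasoning mentioned above is typically implemented as cosine similarity between embedding vectors. The following is a minimal sketch; the three-dimensional vectors are invented for illustration only, whereas real Word2Vec or FastText embeddings have hundreds of dimensions and are learned from corpora.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v: dot(u, v) / (|u| |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy, hand-made "embeddings" (purely hypothetical values):
king  = [0.90, 0.80, 0.10]
queen = [0.85, 0.82, 0.15]
apple = [0.10, 0.20, 0.95]

# Words with similar distributional contexts end up close in the space:
print(cosine_similarity(king, queen))  # close to 1.0
print(cosine_similarity(king, apple))  # much smaller
```

A higher cosine value indicates greater similarity of meaning; this single number is what downstream tasks such as disambiguation or named entity recognition exploit.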

SemSpace 2019 integrates the related areas of natural language processing (NLP), physics, and cognitive science. The interplay between the three disciplines will foster theoretically motivated approaches to understanding how meanings of words are formed and how they interact to form meanings of sentences and other discourse units, how concepts form and develop in the mind, and how the meanings of these words and concepts get exchanged via utterances, such as those in a dialogue. Commonalities between these different levels of compositional mechanisms will be extracted, and applications and phenomena traditionally thought of as ‘non-compositional’ will be examined.

Topics of interest include (but are not restricted to):

  • Reasoning in semantic spaces
  • Compositionality in semantic spaces and conceptual spaces
  • Conceptual spaces in linguistics and natural language processing
  • Applications of quantum logic in natural language processing and cognitive science
  • Modelling functional words such as prepositions and relative pronouns in compositional distributional models of meaning
  • Diagrammatic reasoning for natural language processing and cognitive science