The SemaSign Project.

One of the unique things about human language is its abundant, open-ended 'wordiness'. In childhood, we undergo a vocabulary explosion, and then effortlessly store and retrieve many thousands of words in a lifetime, with the ability to learn multiple languages and specialized terminology well into adulthood. This species-specific capacity for large, flexible vocabularies exists in sign languages as well as spoken languages, yet the nature of the mental lexicon in sign languages — that is, the storage and retrieval of words in the brain — is not well understood, due in part to properties of the language modality that make studying combinations of form and meaning a challenge.

Words in sign languages are made up of both discrete units (e.g., selected fingers) and gradient aspects (e.g., degrees of flexion in the joints of the arm) in articulation, just as in spoken languages. One difference in language modality, however, is the degree to which signers use the body's resources — handshapes, movements, locations, relations in space, etc. — to depict objects and actions through visual symbols and metaphors. For instance, as shown below, there is a sign in Kenyan Sign Language meaning 'to contemplate, muse, ponder' (below right) in which the sign for 'word' (below left; one of several variants) moves off the head repeatedly, as if pulling words out of the mind. Signs are also highly simultaneous constructions, unlike the sequences of consonant and vowel sounds that make up spoken words.

'word' in Kenyan Sign Language (one variant)

'to contemplate, muse, ponder' in KSL

How, then, do signers process and create meaning-rich language whose richly symbolic words occur in highly simultaneous forms? At present, insight into these mental mappings remains occluded, not only at the level of neural and behavioral phenomena, but in terms of linguistic analysis as well. What, indeed, is morphology in sign languages when even the smallest units of form — like a single hooked finger, a pinch of the thumb and index finger, or a location at the throat — can carry meaning below the word level? What is the nature of these constellations of form and meaning? What kinds of language-specific paradigms arise from these relationships? To what extent do they vary across sign languages, and where do we find similar elements recurring across unrelated sign languages?

The SemaSign project proposes a new approach to these questions by identifying form-meaning correspondences in sign languages through computational means, while in turn creating new datasets that can reveal how signs are organized in the minds of signers from different linguistic and cultural contexts. The project will yield semantic networks for sign languages from three countries with different social and historical backgrounds: in Germany for German Sign Language (Deutsche Gebärdensprache; DGS), in Kenya for Kenyan Sign Language (KSL), and in Guinea-Bissau for Guinean Sign Language (Língua Gestural Guineense; LGG).

To do so, a primary network is created from word-association (free-association) responses, in which a signer sees a sign from their language and responds with the first three signs that come to mind. Next, a secondary network of semantic relations in each language is derived from the associative network, using an iterative neural-net algorithm to establish an objective measure of semantic distance. When paired with a metric of phonological distance, these measures make it possible to identify clusters of signs unusually close in both form and meaning.
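The pipeline above can be sketched in miniature. The snippet below uses toy, hypothetical data and stand-in metrics: cosine distance over aggregated response counts in place of the project's iterative neural-net measure of semantic distance, and a coarse feature-based Hamming distance in place of its phonological metric. All sign labels, feature sets, and thresholds are illustrative assumptions, not SemaSign's actual data or algorithms.

```python
import math

# Toy cue -> response counts from a free-association task (hypothetical data).
# Each signer responds to a cue sign with the first three signs that come to
# mind; counts aggregate those responses across signers.
associations = {
    "THINK":  {"MIND": 5, "IDEA": 3, "HEAD": 2},
    "PONDER": {"MIND": 4, "IDEA": 4, "WORD": 1},
    "EAT":    {"FOOD": 6, "HUNGRY": 3},
}

def cosine_similarity(a, b):
    """Cosine similarity of two cues' response-count vectors — a simple
    stand-in for the project's iterative semantic-distance algorithm."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_distance(x, y):
    return 1.0 - cosine_similarity(associations[x], associations[y])

# Hypothetical coarse phonological descriptions (handshape, location, movement).
phon_features = {
    "THINK":  ("index", "forehead", "contact"),
    "PONDER": ("index", "forehead", "circular"),
    "EAT":    ("flat-O", "mouth", "contact"),
}

def phonological_distance(x, y):
    """Normalized Hamming distance over the coarse phonological features."""
    fx, fy = phon_features[x], phon_features[y]
    return sum(a != b for a, b in zip(fx, fy)) / len(fx)

# Flag sign pairs unusually close in BOTH form and meaning
# (illustrative threshold of 0.5 on each distance).
signs = list(associations)
clusters = [
    (x, y)
    for i, x in enumerate(signs)
    for y in signs[i + 1:]
    if semantic_distance(x, y) < 0.5 and phonological_distance(x, y) < 0.5
]
print(clusters)  # -> [('THINK', 'PONDER')]
```

Here THINK and PONDER surface as a form-meaning cluster: they share associative responses (MIND, IDEA) and two of three phonological features, while EAT is distant from both on both metrics. The real project replaces each stand-in with a principled measure derived from the collected networks.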