LINGUAE-multimodality — Multimodal Semantics Seminar
LINGUAE, Institut Jean-Nicod, CNRS
Disclaimers:
1. Topics are skewed towards our group's interests, and some sessions might involve presentations by our team's members.
2. The format is experimental (and we have no dedicated funding), so please be patient with any technical or other issues that might arise.
To find the corresponding time where you live, you can use, for instance, this website.
Note: If there are unexpected technical problems (e.g. a malfunctioning Zoom link), we will (i) provide corrected information on this page, and (ii) contact you by email if you have registered.
- Questions should be asked directly rather than through the chat, unless technical problems get in the way.
- We will record sessions. To ensure that recording does not get in the way of online interactions, we will edit questions out if we ever make a video public (as opposed to just sharing it on a person-by-person basis).
Introduction. This paper presents a linguistic account of how gestures are integrated into the morphological structure of signed languages, using Jackendoff's (2023) Parallel Architecture (PA) framework. While many studies address the role of iconicity or gesture in signed and spoken languages, few consider how gestural elements are licensed within morphological structure. In PA, grammar consists of multiple independent yet interconnected systems, including phonological, syntactic, and conceptual structure. Each has its own combinatorial principles, and complex expressions are formed via interface rules. This modality-independent framework can incorporate additional representational systems, such as spatial or gestural structure, allowing a unified treatment of multimodal phenomena.
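As a schematic illustration (our own toy notation in the spirit of PA, not an example from Jackendoff 2023), a lexical item can be seen as a small interface rule coindexing pieces of the parallel structures, and a gestural structure can be added as one more coindexed tier:

\[ \text{/dog/}_1 \leftrightarrow \text{N}_1 \leftrightarrow \textsc{dog}_1 \quad\Rightarrow\quad \text{/dog/}_1 \leftrightarrow \text{N}_1 \leftrightarrow \textsc{dog}_1 \leftrightarrow \textit{Gest}_1 \]

Here the shared subscript marks the interface links; on this sketch, adding a gestural tier requires no change to the modality-independent combinatorics.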
Proposed Analysis. We argue that gestural integration in signed language morphology is constrained by morphosyntactic selection and interface licensing. Within a Distributed Morphology (DM) / realizational framework, realization in the visual–gestural modality occurs through Vocabulary Insertion, with only certain morphosyntactic environments permitting gestural material. Our analysis is primarily based on morphosyntactic evidence involving feature compatibility, selectional restrictions, and complementary distribution. This approach aligns with Oomen's (2021) account of iconicity as mediating between morphosyntactic structure and verb semantics, and it is further supported by Maier and Steinbach's (2022) hybrid account of role shift, which shows how maximally iconic gestures can be morphosyntactically anchored within complex expressions. It also complements accounts that derive certain gestures through regular linguistic processes (Esipova, 2019; Colasanti, 2023) and is in line with formal semantic treatments of gesture (Schlenker, 2020).
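To make the licensing mechanism concrete, here is a toy Vocabulary Item (our own simplified notation, not drawn from the cited works) pairing a non-first-person agreement feature with a deictic gestural exponent:

\[ [\textsc{agr}:\,\textsc{non-1}] \;\leftrightarrow\; \text{IX}_a \quad (\text{pointing toward locus } a) \]

On this sketch, the pointing gesture is inserted only where the matching feature bundle is present, so gestural material cannot appear in arbitrary morphosyntactic environments.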
Evidence for Analysis. We draw on three types of morphological processes from documented sign languages, including DGS, ASL, and Libras, to support our analysis. Each of these processes receives a different treatment under the proposed analysis. The first is verb agreement. This process is typically restricted to the realization of person and number features, and the non-first-person value incorporates a morphosyntactically licensed deictic gesture, grammatically anchored yet non-at-issue (Schlenker et al., 2013). The second is a set of classifier constructions. These constructions combine a descriptive handshape morpheme with a depictive path gesture, creating a depictive–descriptive hybrid. The gestural component may be partially or fully at-issue depending on its degree of conventionalization, showing how interface links can vary in strength. The third is a set of aspectual modulations. Common examples include continuative and iterative forms, which modify verb movement to convey the manner in which the event is carried out. While these forms may be diachronically motivated by gesture, they are typically fully conventionalized, functioning solely within the linguistic system. This diachronic shift aligns with documented lexicalization and de-lexicalization processes in sign languages (Cormier et al., 2012). Taken together, these morphological processes show that the presence or absence of gestural integration is not arbitrary: gestural integration follows structured interface constraints. Integrated deictic gestures and depictive–descriptive hybrids involve active links between linguistic and gestural representational systems, while purely linguistic processes lack such links. This distinction correctly predicts greater cross-linguistic similarity among morphological processes with gestural integration and greater variation among those without it.
Conclusion. The distinction between morphological processes with gestural integration and those without it applies beyond our examples: e.g. gradable constructions (Aristodemo, 2019; Thalluri & Davidson, 2024) involve gestural interaction, whereas agentive suffixation and numeral incorporation do not, though both remain iconic and modifiable for effect. Beyond sign languages, our approach offers a comparative basis for analyzing multimodal integration in spoken languages, including ideophones (Dingemanse, 2015; Ebert & Steinbach, 2024) and morphosyntactically anchored co-speech gestures (Colasanti, 2023). We conclude that while iconicity has been addressed at various grammatical levels in signed language research, morphosyntactic constraints on gestural interactions must be explicitly addressed to fully account for the structural integration of gesture into signed languages and to inform broader theories of language architecture.
I will discuss dynamic semantic phenomena of pronoun and presupposition binding and point out how these phenomena reappear in the domain of gesture-speech interaction. Crucially, speech can be bound by gesture, but gesture can also be bound by speech. Proposing a cross-modal account in which pointing gestures and iconic gestures introduce discourse referents for rigid designators, I show that these referents can be anaphorically picked up by speech pronouns. Conversely, discourse referents introduced in speech can be picked up by what can be interpreted as gestural pronouns. We will discuss such dynamic binding between gesture and speech, as well as further phenomena of dynamic binding across modalities, and how they can be handled in a formal semantic model. Finally, we will also look at experimental investigations aimed at validating the existing models.
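As a rough illustration of the two directions of binding (our own simplified dynamic notation, not necessarily the speaker's formalism): a pointing gesture toward an individual a can introduce a discourse referent that a later speech pronoun retrieves,

\[ \underbrace{\text{IX}_a}_{\text{gesture}} \rightsquigarrow \exists x\,(x = a); \qquad \underbrace{\text{he}_x \text{ is a linguist}}_{\text{speech}} \rightsquigarrow \mathit{linguist}(x) \]

and conversely, a discourse referent introduced by a speech indefinite can later be retrieved by a pointing gesture functioning as a gestural pronoun.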
Polar questions play a fundamental role in communication and have been investigated extensively across a wide range of spoken languages. Much less is known about polar questions in sign languages. This talk concentrates on one particular sign language, namely Dutch Sign Language (NGT). It investigates (i) which polar question forms exist in NGT, (ii) in which types of context these different forms are used, and (iii) what the semantic contribution is of specific elements that may occur in polar questions. We present a production experiment in which two contextual factors are manipulated through role play between the participant and two confederates: (i) the prior expectations of the person asking the question with respect to the truth of the proposition that the question is about, and (ii) the evidence available in the immediate context of utterance with respect to the truth of this proposition. We identify a broad range of polar question forms, differing in terms of polarity marking (positive, negative, null), the presence or absence of question tags (among others, a tag whose function seems very similar to that of the English tag "right?"), and the use of non-manual markers such as raised or lowered eyebrows, a forward position of the head and/or body, and a mouth shrug. We further observe that each question form has a distinct distribution across the different contexts of use under consideration. For instance, certain forms require that the speaker expect a positive or a negative answer, and others favor contexts in which a certain type of contextual evidence is present. With regard to non-manuals, a notable finding is that, contrary to expectations based on the previous literature on polar questions in sign languages, raised eyebrows are not consistently used in the polar questions in our data set; in fact, brow lowering is also rather common. This leads us to suggest that a forward head or body position, not raised eyebrows, is the main polar question marker in NGT. We suggest that raised and lowered eyebrows mark a broad range of other functions that may be relevant in (non-canonical) polar questions.
Emoji symbols are widely used in online communication, including instant messaging and social media platforms (Dresner & Herring 2010). But what meanings do emojis contribute to the sentences they accompany? Existing work draws comparisons between the functions of emoji and those of gestures (see, for example, Gawne & McCulloch 2019), with Pierini (2021) proposing that emoji symbols interact with the logical structure of the sentences they appear in, much like gestures interact with logical operators in speech (Schlenker 2018). In this talk, I will present some experimental studies of the kinds of linguistic inferences that emojis can trigger. Specifically, the results suggest that emoji symbols can trigger presuppositions and supplements (Tieu, Qiu, Puvipalan & Pasternak 2024); however, while we observe evidence for direct scalar implicatures, emojis in negative sentences do not seem to trigger indirect scalar implicatures (Tieu, Faehndrich, Sritharan & Schlenker 2025).
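Schematically, the observed asymmetry can be stated over a pair of scalar alternatives (a reconstruction from the abstract, not the authors' actual stimuli): where an emoji supplies the weak member of a pair ⟨S_weak, S_strong⟩,

\[ S_{\text{weak}} \;\leadsto\; \neg S_{\text{strong}} \quad \text{(direct implicature: observed)} \qquad \neg S_{\text{strong}} \;\not\leadsto\; S_{\text{weak}} \quad \text{(indirect implicature: not observed)} \]

In words: asserting the weak emoji alternative licenses the inference that the strong one does not hold, but negating the strong alternative does not appear to license the inference back to the weak one.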