Decades of insightful work in formal linguistics have succeeded in providing a largely unified treatment of both spoken and sign languages despite their differing modalities of externalisation (see Brentari 1993, 2019; Wilbur 1991, 1996; Petronio and Lillo-Martin 1997; Neidle et al. 2000; Sandler and Lillo-Martin 2006; Cecchetto et al. 2006; Napoli and Sutton-Spence 2010; Davidson 2014; Pfau et al. 2018; Kimmelman 2019; among many others). However, the overall success of this unified approach does not mean that modality is a 'solved problem' or of secondary importance in formal linguistics---far from it. For example, it has long been noted that the physical properties of the visual-gestural modality afford a greater degree of simultaneity of expression than the auditory-spoken modality (Sandler and Lillo-Martin 2006). Simultaneity of this sort poses a prima facie challenge for theories of linearisation (particularly those requiring a total ordering among linguistic objects in a derivation, e.g. Kayne 1994), and yet it remains an under-theorised research area.
Other matters relating to modality of externalisation have received more attention in the formal literature, particularly within the last 10 years, coinciding with the rise of linguistically grounded work on gesture (e.g., Super Linguistics: Patel-Grosz et al. 2023). Recent advances in sign language linguistics open up exciting prospects for a formal approach to multimodality in otherwise-spoken languages.
For example, several recent works on the formal semantics of gesture observe that at least some gestures behave semantically like normal linguistic objects of a certain kind, e.g. by exhibiting scopal interactions with pieces of the spoken content they are paired with, projecting alternatives under focus, etc. (Lascarides and Stone 2009a,b; Ebert and Ebert 2014; Ebert 2024; Schlenker 2014, 2018, 2020; Schlenker and Chemla 2018; Esipova 2019a,b). Facts of this sort led Esipova (2019b) to conjecture that, if gestures behave semantically like normal linguistic objects, then they must be the product of normal linguistic derivations within a Y-model of grammar (modulo simultaneity and other modality-specific PF properties).
This conjecture has been further developed in very recent work on the syntax of gesture (e.g., Sailor and Colasanti 2020). For instance, Colasanti (2023a,b) argues that the inventory of functional items within a single language can be multimodal: i.e., a language may have both spoken and gestural functional morphemes. Functional items expressed in the visual modality within otherwise-spoken languages include question particles (see Jouitteau 2007 on Atlantic French and Colasanti 2023a on Neapolitan), focus markers (see Colasanti and Cuonzo 2022 and Colasanti 2023b on Lancianese), topic markers (Colasanti and Marchetiello forthcoming), epistemic markers (Marchetiello 2024), and negators (Prieto and Espinal 2020; Colasanti and Sailor 2025).
FLAMM welcomes all submissions adopting a formal approach to linguistic (multi)modality in order to address questions like those above. Other questions directly relevant to this call include the following:
Questions relating to the grammatical integration of gesture
To what extent is gesture (and/or specific gestures) a truly linguistic object, i.e. the output of a modular linguistic derivation constrained by the Y-model?
For gestures that can be shown to be 'grammatically integrated' in this way (i.e., the product of a linguistic derivation):
Are there principled reasons for continuing to refer to such objects as 'gestures' rather than 'signs'? Are there formal differences between grammatically-integrated gestures in spoken languages and signs in sign languages?
What sorts of syntactic, semantic, and phonological properties can such objects have or not have? What grammatical principles would these generalisations follow from?
What consequences would these have for our theory of the Lexicon? From a realisational / Distributed Morphology perspective, is externalisation in the visual-gestural modality purely a PF property (i.e. specified in List 2 and exponed during Vocabulary Insertion), or are gestures lexically special somehow?
If certain physical movements co-occurring with speech or sign are para-linguistic rather than the product of a linguistic derivation (cf. the 'gesture' vs. 'gesticulation' distinction), how can we tell? What diagnostics can the linguist use to distinguish the linguistic from the para-linguistic in this empirical domain?
Are iconic and non-iconic gestures grammatically integrated in the same way? If not, at what level(s) of representation do they differ, and why?
Similarly, what role, if any, does the conventionalisation of gesture play?
Questions relating to simultaneity
Can the temporal alignment of gesture and speech inform our theory of linearisation?
Given that both gesture and prosody exhibit the property of simultaneity (with speech), what formal properties do the two systems have (or not have) in common?
To what extent can research into bimodal bilingualism (e.g. Lillo-Martin et al. 2016) inform our approach to co-speech gesture, particularly with respect to questions of linearisation and simultaneity (e.g. Donati & Branchini 2013)?
Co-speech gestures are expressed simultaneously with speech, but so is prosody. Can the study of simultaneity in the visual-gestural modality inform our approach to the meaningful aspects of prosody (e.g. focal accents, intonational melodies associated with clause types, expressive lengthening, etc.)?
Beyond simultaneity, are there properties of signs in sign languages that are not found in spoken languages due to the exclusive use of the visual-gestural modality (e.g., modality-specific effects)? What can we learn about modality-specific effects by studying gestures in otherwise-spoken languages?
Other questions
Since gestures occur only during language use, how should generative linguists reconcile the competence-performance dichotomy with the grammatical contribution of gesture?
What empirical methods (e.g., fieldwork, experimentation, etc.) can or should be employed to study gesture formally? What lessons can we learn from formal sign language linguistics in this regard (e.g. Kimmelman 2021)?
References
Brentari, Diane. 1993. Establishing a sonority hierarchy in American Sign Language: The use of simultaneous structure in phonology. Phonology 10(2):281–306.
Brentari, Diane. 2019. Sign language phonology. Cambridge, UK: Cambridge University Press.
Cecchetto, Carlo, Carlo Geraci and Sandro Zucchi. 2006. Strategies of relativization in Italian Sign Language. Natural Language and Linguistic Theory 24: 945–975. https://doi.org/10.1007/s11049-006-9001-x
Colasanti, Valentina. 2023a. Functional gestures as morphemes: Some evidence from the languages of Southern Italy. Glossa: A Journal of General Linguistics 8:1–45. https://doi.org/10.16995/glossa.9743.
Colasanti, Valentina. 2023b. Gestural focus marking in Italo-Romance. Isogloss 9(4)/5:1–39. https://doi.org/10.5565/rev/isogloss.296
Colasanti, Valentina and Clara Cuonzo. 2022. Gestural focus marking. Talk given at Romance Languages: Recent Contributions to Linguistic Theory, Harvard University, 28-29 April 2022.
Colasanti, Valentina, and Craig Sailor. 2025. Some formal properties of gestural polar response markers. Talk given at the Trinity Forum on Formal Linguistics (TriFFL), Trinity College Dublin, 28 February 2025.
Davidson, Kathryn. 2014. Scalar implicatures in a signed language. Sign Language and Linguistics 17(1):1–19. https://doi.org/10.1075/sll.17.1.01dav
Donati, Caterina, and Chiara Branchini. 2013. Challenging linearization: Simultaneous mixing in the production of bimodal bilinguals. In Theresa Biberauer and Ian Roberts (eds.), Challenges to linearization, 93-128. De Gruyter.
Ebert, Cornelia. 2024. Semantics of gesture. Annual Review of Linguistics 10:169–189. https://doi.org/10.1146/annurev-linguistics-022421-063057
Ebert, Cornelia and Christian Ebert. 2014. Gestures, demonstratives, and the attributive/referential distinction. Paper presented at Semantics and Philosophy in Europe (SPE 7), Humboldt-Universität zu Berlin.
Esipova, Maria. 2019a. Acceptability of at-issue co-speech gestures under contrastive focus. Glossa: A Journal of General Linguistics 4:1–22. https://doi.org/10.5334/gjgl.635.
Esipova, Maria. 2019b. Composition and projection in speech and gesture. Ph.D. thesis, NYU.
Jouitteau, Mélanie. 2007. Listen to the sound of salience. Multichannel syntax of Q particles. In Sergio Baauw and Frank Drijkoningen (eds.), Romance Languages and Linguistic Theory 2007, 185–200. Amsterdam: Benjamins.
Kayne, Richard. 1994. The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Kimmelman, Vadim. 2019. Information structure in sign languages: Evidence from Russian Sign Language and Sign Language of the Netherlands. Berlin: De Gruyter Mouton.
Kimmelman, Vadim. 2021. Acceptability judgments in sign linguistics. In G. Goodall (ed.), Cambridge Handbook of Experimental Syntax, 561–584. Cambridge, UK: Cambridge University Press.
Lascarides, Alex and Matthew Stone. 2009a. Discourse coherence and gesture interpretation. Gesture 9:147–180. https://doi.org/10.1075/gest.9.2.01las.
Lascarides, Alex and Matthew Stone. 2009b. A formal semantic analysis of gesture. Journal of Semantics 26:393–449. https://doi.org/10.1093/jos/ffp004.
Lillo-Martin, Diane, Ronice Müller de Quadros, and Deborah Chen Pichler. 2016. The development of bimodal bilingualism: Implications for linguistic theory. Linguistic Approaches to Bilingualism 6(6):719–755.
Marchetiello, Chiara. 2024. Co-speech Palm Down Open Hand Prone gesture as epistemic marker in Neapolitan: A first look. Talk given at the 18th Cambridge Italian Dialect Syntax-Morphology Meeting (CIDSM18), 25-27 June 2024.
Napoli, Donna-Jo and Rachel Sutton-Spence. 2010. Limitations on simultaneity in sign language. Language 86(3):647–662. https://dx.doi.org/10.1353/lan.2010.0018
Neidle, Carol, Judy A. Kegl, Dawn MacLaughlin, Benjamin Bahan and Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Patel-Grosz, Pritty, Salvador Mascarenhas, Emmanuel Chemla and Philippe Schlenker. 2023. Super Linguistics: An introduction. Linguistics and Philosophy 46:627–692. https://doi.org/10.1007/s10988-022-09377-8
Petronio, Karen and Diane Lillo-Martin. 1997. Wh-movement and the position of Spec-CP: Evidence from American Sign Language. Language 73(1):18–57. https://doi.org/10.2307/416592.
Pfau, Roland, Martin Salzmann, and Markus Steinbach. 2018. The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa: A Journal of General Linguistics 3(1):1–46. https://doi.org/10.5334/gjgl.511.
Prieto, Pilar and Maria Teresa Espinal. 2020. Negation, prosody and gesture. In Viviane Deprez and M. Teresa Espinal (eds.), The Oxford Handbook of Negation, 677–693. Oxford: Oxford University Press.
Sailor, Craig, and Valentina Colasanti. 2020. Co-speech gestures under ellipsis: a first look. Paper presented at the 2020 LSA Annual Meeting.
Sandler, Wendy and Diane Lillo-Martin. 2006. Sign Language and Linguistic Universals. Cambridge, UK: Cambridge University Press.
Schlenker, Philippe. 2014. Iconic features. Natural Language Semantics 22:299–356. https://doi.org/10.1007/s11050-014-9106-4.
Schlenker, Philippe. 2018. Gesture projection and cosuppositions. Linguistics and Philosophy 41:295–365. https://doi.org/10.1007/s10988-017-9225-8.
Schlenker, Philippe. 2020. Gestural grammar. Natural Language and Linguistic Theory 38(3):887–936. https://doi.org/10.1007/s11049-019-09460-z.
Schlenker, Philippe and Emmanuel Chemla. 2018. Gestural agreement. Natural Language and Linguistic Theory 36:587–625. https://doi.org/10.1007/s11049-017-9378-8.
Wilbur, Ronnie. 1991. Intonation and focus in American Sign Language. In Yong Koon No and Mark Libucha (eds.), ESCOL'90, 320–331. Columbus, OH: OSU Press.
Wilbur, Ronnie. 1996. Evidence for the function and structure of wh-clefts in American Sign Language. In William H. Edmondson and Ronnie B. Wilbur (eds.), International review of sign linguistics, 209–256. Mahwah, NJ: Lawrence Erlbaum.