"Sign languages have a structure parallel to that of spoken languages. They have been shown to have ‘phonology’ (with inventories of handshapes, locations and movements), demonstrating ‘duality of patterning’ just like spoken languages. The morphology, syntax and discourse features of many sign languages have been described (cf. Klima and Bellugi 1979; Petitto 2000; Sandler and Lillo-Martin 2006). Sign languages are acquired as a first language by children of signing parents and by those exposed to sign language models via intervention programmes (Sutton-Spence and Woll 1999). The same sociolinguistic factors that affect spoken languages are found in sign languages: register and language domain influence language form (Roy 2011), there is regional and dialectal variation within sign languages (Lucas et al 2001), and there is evidence of diachronic change. Developments in neuroscience during the last ten years have provided evidence of how sign language is processed. Sign languages are processed in the same left-hemisphere classical language areas (Broca’s, Wernicke’s, auditory cortex) as spoken languages (see MacSweeney et al 2008 for a review); damage to this hemisphere (for example, as the result of stroke) leads to aphasia (Atkinson et al 2005). This area of research has developed our understanding of which features of language are modality independent and which are modality specific."
Source: Letter from Dr Christopher Stone BSc (Hons), MSc, PhD, FASLI Researcher/Interpreter Co-ordinator, University College London, to Maya de Wit, President of EFSLI, 25.06.2011