Language in the Visual Modality: Co-speech Gesture and Sign Language

Authors: ÖZYÜREK, Asli; WOLL, Bencie
Year: 2019
Rights: Copyright
Medium: Digital


Linguistics, Linguistics » Linguistics of other Sign Languages


As humans, our ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures used in spoken languages. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression (Brentari, 2010; Emmorey, 2002; Klima & Bellugi, 1979; Stokoe, 1960). Co-speech gestures, though nonlinguistic, are produced and perceived in tight semantic and temporal integration with speech (Kendon, 2004; McNeill, 1992, 2005). Thus, language—in its primary face-to-face context (as is the case both phylogenetically and ontogenetically)—is a multimodal phenomenon (Kendon, 2014; Vigliocco, Perniss, & Vinson, 2014). Expression in the visual modality appears to be an intrinsic feature of human communication. As such, our models of language need to take these visual modes of communication into account and provide a unified framework for how the semiotic and expressive resources of the visual modality are recruited in both spoken and sign languages, and what the consequences of this recruitment are for the cognitive architecture and processing of language. Most research on language, however, has focused on spoken or written language and has rarely considered the visual context in which it is embedded as a means of understanding our linguistic capacity.