Cross-modal and cross-language activation in bilinguals reveals lexical competition even when words or signs are unheard or unseen
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish and Spanish Sign Language) and unimodal bilinguals (Spanish and Basque). The aim was to gauge whether (and how) seeing a sign coactivates words, whether hearing a word coactivates signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed that the impact of temporal structure differs across modalities. Spoken word activation follows the temporal structure of the word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign is seen. We provide evidence that this pattern of activation is instead driven by how frequent the signs' sublexical units are in the lexicon. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction shapes language processing.