Split screen exploration in sign language users: An eye tracking study

Author(s): SOLER, Olga; BOSCH-BALIARDA, Marta; ORERO, Pilar
Year: 2018
Published in: Conference: Scandinavian Workshop on Applied Eye Tracking 2018
Rights: Copyright
Format: Digital


Media and access to information » New Technologies


In this research we applied eye-tracking measures to examine how sign-language users explore split TV screens. We used a sign-translated documentary in which both visual and linguistic information is relevant. Four possible screen combinations (see Figure 1) resulted from combining the Position of the SLI sub-screen (Left/Right) and its Size (Small: 1/5 of the screen width; Medium: 1/4 of the screen width). Participants were 28 deaf signers aged 17 to 74. The documentary “Joining the Dots” (Romero-Fresco, 2012) was translated into Catalan Sign Language and edited into four clips displaying all four combinations. All participants watched all contents in different combinations following a Latin Square design while their eye movements were recorded with a Tobii Eye Tracker. We defined two areas of interest: the SLI sub-screen and the documentary sub-screen. After watching each clip, participants filled in two questionnaires to evaluate their recall of linguistic content (SL interpretation) and visual content (documentary visual information). We analysed the effects of the factors Size, Position, and Area on the measures Fixation Count, Fixation Duration, and Total Visit Duration using a repeated-measures GLM. Area was the only factor showing significant effects: the SLI sub-screen was visited for a longer time, with longer and more numerous fixations. Position and size were not relevant for sign-language users in this experiment; their pattern of exploration consists mainly of focusing on the SLI sub-screen, with shorter gazes at the general screen. We ran paired-samples t-tests to check whether linguistic and visual recall differed for each screen configuration. Linguistic recall was better for the Small Size/Left Position configuration. Visual recall did not differ significantly from linguistic recall, even though users tended to make longer visits with longer fixation durations on the SLI sub-screen. Deaf sign-language users probably collect visual information parafoveally.
This interpretation is based on perceptual studies indicating that parafoveal vision is enhanced in sign language users (Dye, Seymour, & Hauser, 2016; Siple, 1978). A tentative conclusion from our results is that sign-language users seem to adapt swiftly to different screen configurations. Further studies could test other screen designs to improve usability and offer guidance to content producers.
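The design and analysis described above can be sketched in code. The following is an illustrative Python sketch, not the authors' analysis pipeline: it builds a standard 4×4 Latin Square over the four screen configurations (so each configuration appears once per participant group and once per presentation order) and runs a paired-samples t-test on entirely made-up recall scores.

```python
# Illustrative sketch only: the configuration labels follow the abstract,
# but the recall scores below are hypothetical demonstration data.
from scipy import stats

# Four configurations: Position (Left/Right) x Size (Small/Medium)
configs = ["Left/Small", "Left/Medium", "Right/Small", "Right/Medium"]

# Standard 4x4 Latin Square: each configuration appears exactly once
# per row (participant group) and once per column (presentation order).
latin_square = [[configs[(row + col) % 4] for col in range(4)]
                for row in range(4)]

# Hypothetical linguistic vs. visual recall scores (proportion correct)
# for the same participants under one configuration.
linguistic = [0.80, 0.72, 0.91, 0.65, 0.78, 0.84]
visual = [0.75, 0.70, 0.88, 0.69, 0.74, 0.80]

# Paired-samples t-test, as used in the study to compare recall types.
t_stat, p_value = stats.ttest_rel(linguistic, visual)
```

The cyclic construction `(row + col) % 4` is the simplest way to guarantee the Latin Square property; a real experiment might additionally randomise row assignment across participants.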