Linguistic modelling and language-processing technologies for Avatar-based sign language presentation

Author(s): ELLIOTT, R.; GLAUERT, J.R.W.; KENNAWAY, J.R.; MARSHALL, I.; SAFAR, E.
Year: 2008
Journal: Universal Access in the Information Society, Vol. 6, Nº 4 (2008), pp. 375-391
Rights: Copyright
Medium: Digital

Topics

Media and access to information » New Technologies, Media and access to information » Accessibility

Details

Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of second-language learners, written text is much less useful than is commonly thought. This paper presents research at the University of East Anglia into sign language generation from English text, involving sign language grammar development to support synthesis and the visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar), given a phonetic-level description of the required sign sequence. A second strand has explored the generation of such a phonetic description from English text. The utility of the research is illustrated in the context of sign language synthesis by a preliminary treatment of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content on public-sector Web sites are also illustrated.
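The two strands described above can be pictured as a two-stage pipeline: English text is first mapped to a phonetic-level sign description, which is then handed to the avatar renderer. The sketch below is purely illustrative, assuming a toy gloss lexicon and placeholder function names; it is not the ViSiCAST/eSIGN API, and the real systems use HamNoSys-based notation and grammar-driven translation rather than word-by-word lookup.

```python
# Hypothetical sketch of the two-strand pipeline described in the abstract.
# All names (english_to_phonetic, render_with_avatar, the lexicon) are
# invented for illustration only.

def english_to_phonetic(text: str) -> list[str]:
    """Strand 2: map English text to a phonetic-level sign description.
    In practice this involves parsing and a HamNoSys-style notation;
    here it is reduced to a toy gloss lookup with fingerspelling fallback."""
    lexicon = {"hello": "SIGN_HELLO", "world": "SIGN_WORLD"}
    return [lexicon.get(w.strip(".,!?").lower(), "FINGERSPELL:" + w)
            for w in text.split()]

def render_with_avatar(phonetic_seq: list[str]) -> str:
    """Strand 1: stand-in for real-time avatar synthesis, which would
    animate each phonetically described sign in sequence."""
    return " -> ".join(phonetic_seq)

print(render_with_avatar(english_to_phonetic("Hello world")))
```

The separation of the two stages mirrors the paper's architecture: the animation subsystem only ever sees the phonetic-level description, so signed content can also be authored directly in that notation, as was done for the public-sector Web sites mentioned above.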