Facial expressions and speechreading performance.
The present study examined the role of facial expressions in visual speechreading (lipreading). Speechreading was assessed with three tests: sentence-based speechreading, word decoding, and word discrimination. Twenty-seven individuals participated in the study. The results revealed no general improvement as a function of facial expression across all tests. Nevertheless, skilled speechreaders significantly improved their performance as a function of emotional expression in the word-decoding and word-discrimination conditions. Furthermore, a correlational analysis indicated a significant relationship between the subjects' confidence ratings for their responses to each test item and performance on the speechreading tests in which lexical analysis was a necessary task demand. The results are discussed with respect to how information from facial expressions is integrated with the information conveyed by lip movements in visual speechreading, and with respect to general models of face processing (i.e., Bruce & Young, 1986; Young & Bruce, 1991).

References
- Lyxell, B., Johansson, K., Lidestam, B., & Rönnberg, J. (1996). Facial expressions and speechreading performance. Scandinavian Audiology. [PubMed]