A comparison of bound and unbound audio-visual information processing in the human cerebral cortex.
Human speech has auditory (heard speech) and visual (seen speech) qualities. The neural representation of audiovisual integration in speech was investigated using functional magnetic resonance imaging (fMRI). Ten subjects were imaged while viewing a face under four conditions: with speech and mouth movements synchronized, with speech and mouth movements desynchronized, during silent speech, or while viewing a static face. Subtractions of the different sets of images showed that lipreading primarily activated the superior temporal gyrus and sulcus (STG/STS). Synchronized and desynchronized audiovisual speech activated similar areas. Regions activated more strongly in the synchronized than in the desynchronized condition were considered to be those involved in cross-modal integration. One dominant activation focus was found near the left claustrum, a subcortical region. A region-of-interest analysis of the STS and parietal areas found no difference between the audiovisual conditions; however, synchronized audiovisual stimuli produced a greater signal change in the claustrum region. This study extends previous results, obtained with other sensory combinations and tasks, indicating involvement of the claustrum in sensory integration. [1]
References
- Olson, I.R., Gatenby, J.C., Gore, J.C. A comparison of bound and unbound audio-visual information processing in the human cerebral cortex. Brain Research. Cognitive Brain Research. (2002) [PubMed]