Reading fluent speech from talking faces: typical brain networks and individual differences.

Listeners are able to extract important linguistic information by viewing the talker's face, a process known as "speechreading." Previous studies of speechreading presented small closed sets of simple words, and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences, which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, nonlinguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passively viewing a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (p < .05, whole brain corrected). With the static face as baseline, the gurning face evoked bilateral activation in the motion-sensitive region of the occipital cortex, whereas visual speech additionally engaged the middle temporal gyrus, inferior and middle frontal gyri, and the inferior parietal lobe, particularly in the left hemisphere. These latter regions are implicated in lexical stages of spoken language processing. Although auditory speech generated extensive bilateral activation across both superior and middle temporal gyri, the group-averaged pattern of speechreading activation failed to include any auditory regions along the superior temporal gyrus, suggesting that fluent visual speech does not always involve sound-based coding of the visual input.
An important finding from the individual subject analyses was that activation in the superior temporal gyrus did reach significance (p < .001, small-volume corrected) for a subset of the group. Moreover, the extent of the left-sided superior temporal gyrus activity was strongly correlated with speechreading performance. Skilled speechreading was also associated with activations and deactivations in other brain regions, suggesting that individual differences reflect the efficiency of a circuit linking sensory, perceptual, memory, cognitive, and linguistic processes rather than the operation of a single component process.[1]


  1. Hall, D.A., Fussell, C., Summerfield, A.Q. Reading fluent speech from talking faces: typical brain networks and individual differences. Journal of Cognitive Neuroscience (2005) [PubMed]