Discerning the functional networks behind processing of music and speech through human vocalizations

Description: We examined the independence of music and speech processing using functional magnetic resonance imaging and a stimulation paradigm built from different human vocal sounds produced by the same voice. The stimuli were grouped as Speech (spoken sentences), Hum (hummed melodies), and Song (sung sentences); the sentences used in the Speech and Song categories were identical, as were the melodies used in the two musical categories. Each category had a scrambled counterpart, which allowed us to render speech and melody unintelligible while preserving global amplitude and frequency characteristics.
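
The stimulus set therefore forms a 3 x 2 factorial structure (vocalization category x intact/scrambled). The following is an illustrative sketch only; the condition labels are hypothetical and are not taken from the collection's files.

    from itertools import product

    # Three vocalization categories, all produced by the same voice
    categories = ["Speech", "Hum", "Song"]

    # Each category has an intact and a scrambled version; scrambling removes
    # intelligibility of speech/melody while preserving global amplitude and
    # frequency characteristics
    versions = ["intact", "scrambled"]

    # Hypothetical condition labels for the 3 x 2 design, e.g. "Song_scrambled"
    conditions = [f"{c}_{v}" for c, v in product(categories, versions)]
    print(conditions)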

Source data:

Private Collection: To share the link to this collection, please use the private URL: https://neurovault.org/collections/DMKGWLFE/

Compact Identifier: https://identifiers.org/neurovault.collection:5825
Add Date: Aug. 27, 2019, 11:09 p.m.
Contributors:
Related article DOI: None
Related article authors:
Citation guidelines

If you use the data from this collection, please include the following persistent identifier in the text of your manuscript:

https://identifiers.org/neurovault.collection:5825

This will help to track the use of this data in the literature.
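
The collection can also be queried programmatically. Below is a minimal sketch using the NeuroVault REST API; the endpoint paths and response fields are assumptions based on the public API and should be checked against the current API documentation, and the numeric endpoints may not respond until a private collection is made public.

    # Sketch: fetch collection metadata and its image list from NeuroVault.
    # Assumed endpoints: /api/collections/<id>/ and /api/collections/<id>/images/
    import requests

    COLLECTION_ID = 5825  # from https://identifiers.org/neurovault.collection:5825
    BASE = "https://neurovault.org/api"

    # Collection-level metadata (name, description, DOI, ...)
    meta = requests.get(f"{BASE}/collections/{COLLECTION_ID}/", timeout=30)
    meta.raise_for_status()
    print(meta.json().get("name"))

    # Statistical maps uploaded to the collection (paginated response)
    images = requests.get(f"{BASE}/collections/{COLLECTION_ID}/images/", timeout=30)
    images.raise_for_status()
    for img in images.json().get("results", []):
        print(img.get("id"), img.get("name"), img.get("map_type"))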