Journal article

Music is scaled, while speech is not: A cross-cultural analysis

Abstract

Music is well known to be based on sets of discrete pitches that are combined to form musical melodies. In contrast, there is no evidence that speech is organized into stable tonal structures analogous to musical scales. In the current study, we developed a new computational method for measuring what we call the “scaledness” of an acoustic sample and applied it to three cross-cultural ethnographic corpora of speech, song, and/or instrumental music (n = 1696 samples). The results confirmed the established notion that music is significantly more scaled than speech, but they also revealed some novel findings. First, highly prosodic speech—such as a mother talking to a baby—was no more scaled than regular speech, which contradicts intuitive notions that prosodic speech is more “tonal” than regular speech. Second, instrumental music was far more scaled than vocal music, in keeping with the observation that the voice is highly imprecise at pitch production. Finally, singing style had a significant impact on the scaledness of song, creating a spectrum from chanted styles to more melodious styles. Overall, the results reveal that speech shows minimal scaledness no matter how it is uttered, and that music’s scaledness varies widely depending on its manner of production.
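The abstract does not describe the authors' actual scaledness algorithm, but the underlying intuition—that sung or played pitches cluster at a few discrete pitch classes while spoken pitch glides continuously—can be illustrated with a toy metric. The sketch below is a hypothetical illustration, not the method from the paper: it folds pitch estimates into one octave, histograms them into pitch-class bins, and scores scaledness as one minus the normalized entropy of that histogram. The bin count, the A4 = 440 Hz reference, and the synthetic "sung" and "spoken" samples are all assumptions made for the example.

```python
import math

def scaledness(pitches_hz, bins=24):
    """Toy 'scaledness' score in [0, 1]: concentration of pitch classes.

    Folds each pitch into one octave (cents mod 1200), histograms it,
    and returns 1 minus the normalized entropy of the histogram.
    A sample using a few discrete pitch classes scores near 1;
    a continuously gliding (speech-like) sample scores near 0.
    This is an illustrative stand-in, not the method from the paper.
    """
    # Convert Hz to cents relative to an arbitrary A4 reference,
    # then fold into a single octave (0..1200 cents).
    cents = [(1200.0 * math.log2(f / 440.0)) % 1200.0 for f in pitches_hz]
    counts = [0] * bins
    width = 1200.0 / bins
    for c in cents:
        counts[int(c // width) % bins] += 1
    total = sum(counts)
    probs = [n / total for n in counts if n > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return 1.0 - entropy / math.log2(bins)

# A "sung" sample: notes drawn from three discrete scale degrees (A4, C#5, E5).
sung = [440.0, 554.37, 659.25] * 20
# A "spoken" sample: pitch gliding continuously across the same range.
spoken = [440.0 * (1.5 ** (i / 59)) for i in range(60)]

print(scaledness(sung) > scaledness(spoken))  # → True under this toy metric
```

Under this metric, any measure of how tightly pitch mass concentrates on a few pitch classes would behave similarly; entropy is used here only because it is simple and bounded.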

Authors

Phillips E; Brown S

Journal

Scientific Reports, Vol. 15, No. 1

Publisher

Springer Nature

Publication Date

December 1, 2025

DOI

10.1038/s41598-025-03049-w

ISSN

2045-2322