Recent neuroscience research has found that people perceive musical rhythm much as they perceive language patterns. But what does that mean for artists?
Adam Neely, a bassist and YouTuber, spoke on this topic at SXSW on March 16 at the Austin Convention Center. Through YouTube clips and introducing components of melodies, he explored the deep connection between language and music.
Neely began by displaying a clip of himself giving an introduction, pairing his speech with musical notes played on the bass. He referred to this as syllable tracking: assigning each spoken syllable a specific musical note.
“Some people say music is the universal language,” Neely said in his video. “But I’m here to argue that language is the universal music.”
He dove into the history of syllable tracking, showing video clips of people using instruments to mimic speech.
“It was actually rare to imitate conversations in a direct way,” Neely said. “It was cartoonish.”
He gave examples like Frank Zappa’s “The Dangerous Kitchen,” where the bassist doubled the melody of Zappa’s singing, and Steve Vai’s “So Happy,” in which Vai played guitar along with someone speaking. From experiments like these, the idea of finding the music in sound emerged.
With the rise of the internet, people began playing instruments along with speech, like Bill Wurtz harmonizing with the weather lady or Iggy Jackson Cohen playing bass along with Donald Trump saying “China.”
Neely said he thinks this trend took off because of the concept’s enticing quality.
“When people are talking, they have this melody,” Neely said. “When you tap into that melody, it’s irresistible.”
Neely discussed the linguistics and neuroscience behind it, explaining that language has feeling. For instance, he said each language emphasizes certain syllables. English tends to favor the trochee, a strong syllable followed by a weak one. In turn, Neely said modern artists like Ariana Grande and Childish Gambino use trochees in their music.
“The music of a culture comes from the culture and the language of that nation,” Neely said.
Neely said research also found that stress timing and syllable timing differ between languages. For instance, he said the melodies of French composers matched the rhythms of the French language, just as the melodies of English composers matched English.
“The rhythm and language are so intertwined,” Neely said. “This is not a coincidence.”
Neely said that if people fail to hear the melody in speech at first, they can trick their brains into hearing it through repetition. He showed a clip of a former president of South Africa speaking in Zulu. After he looped a phrase from the speech multiple times, the melody became apparent even to listeners who did not know the language.
“All it took was me repeating that phrase so you can start to see the melody in his speech,” Neely said. “It’s that little tweak that gets us into the mindset of hearing the melody in speech.”
Neely said this concept is at the core of how humans learn language: by imitating others. Instead of voices, people now use instruments to do the imitating.
Neely said that through this discovery, people learn more about their own speech and its connection to music.
“There is sound in music everywhere,” Neely said. “It’s up to us to take the sound and hear the music that’s in it.”