Music is one of those rare mysteries in life that can have an immediate emotional impact on us. You might feel gloomy and be bedridden from melancholy, but play the right uplifting tune and you’ll suddenly be back on track. Such is the power of music, which — much like symbolic language — seems to be uniquely attuned to the human brain.
But how does the brain process music, and what happens when we listen to our favorite songs? These are questions that have fascinated scientists for decades, and recent research has made significant strides in understanding the neural basis of music perception and cognition.
Previously, scientists have shown that music reduces anxiety, blood pressure, and pain, and improves sleep quality, mood, mental alertness, and memory. Music engages a wide range of brain networks, extending beyond the auditory cortex to emotion-related areas whose activity synchronizes during emotionally charged passages. It also stimulates memory regions and, intriguingly, the motor system. This motor activation is believed to let us lock onto a song’s beat before we even start tapping our foot to it.
Long story short, almost all of your most important brain regions light up in response to music. It’s quite extraordinary what we’ve learned so far. But now, a new study from the University of California, Berkeley, is really pushing the boundaries of neuroscience as it relates to music.
Using cutting-edge computer modeling, scientists have successfully reconstructed Pink Floyd’s masterpiece “Another Brick in the Wall, Part 1” from direct neural recordings of human auditory cortex activity. In other words, just by recording a person’s brain activity while they listened to the song, the scientists could reverse engineer the neural response back into the sound that triggered it in the first place.
Have a listen to the result and then we can get into the nitty-gritty of the science behind it.
“We could successfully reconstruct a song from neural activity recorded in the auditory cortex. The fact that we could do so even in a single patient and with very little data is very promising for Brain-Computer Interface (BCI) purposes,” Ludovic Bellier, a postdoctoral researcher in human cognitive neuroscience working in the Knight lab at UC Berkeley, told ZME Science.
Just another song in your brain
The study involved 29 patients who were undergoing intracranial electroencephalography (iEEG) for epilepsy treatment. During the procedure, the patients listened to the popular Pink Floyd song while their neural activity was recorded. The researchers then used a so-called stimulus reconstruction approach to see if they could play back the original song from the neural activity alone.
To this aim, the researchers trained a machine learning algorithm to identify patterns of neural activity that correspond to different aspects of the song, such as the melody, rhythm, and lyrics.
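The authors’ actual pipeline is more involved, but the core idea of stimulus reconstruction, fitting a regression model that maps recorded neural activity back onto the sound’s spectrogram, can be sketched with synthetic data. Everything below (electrode counts, signal shapes, the ridge regression decoder) is invented for illustration and is not the study’s method:

```python
# Illustrative sketch of stimulus reconstruction on synthetic data.
# Real iEEG decoding uses features like high-gamma power with time lags;
# here random toy signals stand in, just to show the modeling idea.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32

# Toy "song": a spectrogram with slow, structured variation over time.
t = np.arange(n_samples)[:, None]
spectrogram = np.sin(t / 50 + np.arange(n_freq_bins))
spectrogram += 0.1 * rng.standard_normal((n_samples, n_freq_bins))

# Toy "neural activity": a noisy linear mixture of the spectrogram,
# standing in for electrodes driven by the sound.
mixing = rng.standard_normal((n_freq_bins, n_electrodes))
neural = spectrogram @ mixing + 0.5 * rng.standard_normal((n_samples, n_electrodes))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, shuffle=False)

# Linear decoder: predict the spectrogram from neural activity alone.
model = Ridge(alpha=1.0).fit(X_train, y_train)
reconstructed = model.predict(X_test)

# Score: correlation between reconstructed and true spectrogram per bin.
corrs = [np.corrcoef(reconstructed[:, f], y_test[:, f])[0, 1]
         for f in range(n_freq_bins)]
print(f"mean reconstruction correlation: {np.mean(corrs):.2f}")
```

The decoded spectrogram can then be turned back into audio, which is why the reconstruction sounds recognizable but muffled rather than like a studio recording.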
The reconstructed song is not a perfect replica of the original, but it is clearly recognizable and contains many of the key elements of the song, such as the melody, rhythm, and lyrics. It’s almost as if the patients are humming the melody of the song in their heads and the researchers could tap into the tune.
Some of you might be in awe reading all of this. But does that mean you could strap on an electroencephalogram (EEG) cap and play back your favorite song just by thinking about it? Not quite. The patients in the study had electrodes implanted directly into their brains as part of epilepsy treatment, a procedure never intended for widespread use. Not a lot of people are lining up for brain surgery, as you might imagine. And while scalp EEG is non-invasive, it is also far less precise.
“What we could get from scalp EEG is mostly the envelope, giving insights on the rhythmic patterns of the song. It could be enough to identify well-known musical pieces (think, ta ta ta taaaaaa… ta ta ta taaaaa…) were we to reconstruct it. In other words, it could well be recognizable by a computer pending sufficient neural recording duration, but most likely not by people,” said Ludovic.
“I think scalp EEG could categorize a few different songs but could not determine a specific song. And categorization is not reconstruction which scalp EEG cannot do,” said study author Robert Thomas Knight, professor of psychology and neuroscience at UC Berkeley.
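The “envelope” Bellier mentions is the slow outline of a sound’s loudness over time, the coarse feature that carries a song’s rhythm. A minimal sketch of extracting it with the Hilbert transform (the toy signal and sample rate are invented for illustration):

```python
# Minimal sketch: amplitude envelope of a toy signal via the Hilbert
# transform. A 100 Hz carrier's loudness rises and falls at 2 Hz, and
# the magnitude of the analytic signal recovers that slow outline.
import numpy as np
from scipy.signal import hilbert

fs = 1000  # assumed sample rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)

envelope_true = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))  # slow 2 Hz loudness curve
signal = envelope_true * np.sin(2 * np.pi * 100 * t)   # fast 100 Hz carrier

envelope_est = np.abs(hilbert(signal))

# Compare estimate to ground truth, ignoring edge effects.
err = np.max(np.abs(envelope_est[100:-100] - envelope_true[100:-100]))
print(f"max envelope error (excluding edges): {err:.3f}")
```

An envelope like this tells you when the sound gets louder and softer, which is often enough to recognize a famous rhythm, but it discards the fine spectral detail needed to actually rebuild the song.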
The right hemisphere of the brain is more involved in music
Neuro-Shazam would certainly be cool, but this research has provided plenty of intriguing insight into how the human brain perceives music as it is. Across the 29 patients, a total of 2,668 electrodes recorded neural activity, and 347 of these picked up activity specifically related to music. These were concentrated in three regions of the brain: the Superior Temporal Gyrus (STG), the Sensory-Motor Cortex (SMC), and the Inferior Frontal Gyrus (IFG).
Although music is processed and decoded in both hemispheres, the researchers found that the right hemisphere does more of the heavy lifting when we listen to a tune. They also found, for the first time, that a distinct region within the STG is responsible for processing rhythm, in this case the rhythm guitar in rock music.
When data from electrodes in the right STG were excluded from the model, the quality of the reconstructed song suffered greatly. Likewise, removing electrodes tuned to sound onsets or rhythm led to a notable drop in reconstruction accuracy, underscoring their importance in music perception.
“We now know in precise detail the brain regions supporting music decoding. We also know there is a right hemisphere bias for music decoding,” Knight told ZME Science.
Besides the fundamental insights gained from the present work, these findings could potentially guide the development of brain-machine interfaces, including prosthetic devices aimed at enhancing the rhythmic and melodic aspects of speech.
“Our lab and many others worldwide are working to develop a brain-computer interface to help restore communicative ability to people who lost the ability to speak. Think Stephen Hawking with ALS, or the many people with aphasia and other disabling neurological conditions,” said Knight.
The findings appeared in the journal PLOS Biology.