By Leigh Dayton. From The Australian, May 7, 2004.
LAST year in a laboratory in Marseille, three French scientists spent hours playing Happy Birthday to 18 surprised volunteers. And when they weren't playing the famous chanson, they played a recording of a sentence about a lonely wolf in a big forest.
Neither the tune nor the words baffled the listeners, half of whom were musicians. What baffled them were the incongruous tweaks at the ends of the strings of sounds.
Those sonic oddities had been fashioned by neuroscientist Daniele Schon and her colleagues at the National Centre for Scientific Research and the University of Trieste in Italy. Two-thirds of their rejigged sentences ended with unexpected inflections and a similar percentage of the Happy Birthday renditions finished out of tune.
So what? After studying the nature and speed of the electrical activity in the 18 listening brains, the researchers concluded that music and language are processed identically.
At long last, they argue, there's direct evidence that music and language are similar information systems, intimately linked across our heads. Music isn't handled exclusively by the right side of the brain, nor language by the left, as most experts believe. There's plenty of overlap.
Schon's team goes further. In the latest issue of the journal Psychophysiology they claim to have the first direct evidence that studying music enhances non-musical abilities such as speech perception. Their experiment revealed that the musically minded use more of both sides, or hemispheres, of their brains to interpret sounds fundamental to both speech and music than do the musically untrained.
These are big claims. Not surprisingly, although experts such as Monash University psycholinguist Gregory Yelland have praise for Schon and her co-workers, they also have qualifications.
According to Yelland, the experiment does indeed show that the rhythm and tone of speech (its prosody) and the pitch and rhythm of music (the melody) are processed in similar ways.
"But they're claiming a bit more than they ought to," he says. "It's over-reaching to claim [music training] could improve language functioning."
What's more, Yelland says, the same parts of the brain may feed information to both the music and language centres, as Schon and company note, but that's "not because language and music are linked but because both need information about the frequency [of sound waves]".
If all this sounds complicated, that's because it is. And that's precisely the problem scientists face when attempting to untangle the connections -- real or illusory -- between music and language or, for that matter, music and any other mental capacity.
The so-called Mozart effect is a case in point. This is the notion, first proposed a decade ago, that listening to the great composer's works -- specifically the Sonata for Two Pianos in D Major -- boosts brain power.
Gordon Shaw of the University of California at Irvine and Frances Rauscher, now at the University of Wisconsin, claimed to have experimental evidence showing that people did better in standard IQ tests after a hit of the sonata. Since then, though, other researchers have been unable to duplicate their findings, and the science community remains sceptical about the alleged effect and the methods used to prove it.
For instance, how do you control for all the other factors that might have caused the Mozart effect, asks Joe Wolfe, a physicist with a special interest in music. "Relatively quiet, gentle music [like the sonata] has a calming effect," he notes, adding that sorting out complicating variables such as music's soothing power makes testing Mozart's impact on intelligence very difficult indeed.
Of course, it all may be absolutely true. Researchers just don't know for sure. Like the proverbial three blind men and the elephant, they can explore only small patches of the larger phenomena before them. Today, they're unable to combine insights gained from, say, linguistics, physics and neurology into a rigorous theory agreed on by all.
"But we'll find out a lot more in the future," says speech pathologist Brooke-Mai Whelan of Queensland University. "[Imaging technology] that allows us to look at the brain while it's encoding or decoding music, or similarly language, will give us a better understanding of what the brain does and how it does it, and then we'll be able to establish more accurately human cognitive abilities like music and language," she says.
Meanwhile, the state of intellectual play suggests that, as Whelan believes, "music is a language and language is music". That is, speech and music are systems for communication. Both combine conceptual and auditory information according to rules about how to encode and decode the components of meaning.
And speech and music may not be the brain's sole conveyors of structured meaning. "Music, mathematics, spoken language, maybe even map reading, which seems to have a grammar to cartographers -- they're all languages, probably governed by a universal or natural language," suggests Yelland. "Our study of computer scientists [shows] that the syntax [ordering] of algebraic equations is very, very close to English and is processed in the same way, but on a different side of the brain: math is on the right and language on the left."
Notice something? The main processing centre for each of these languages sits in the brain's right hemisphere -- all except spoken language. Everyone agrees speech is left-hand drive.
What's more, a few years ago, Melbourne neurosurgeons removing a brain tumour from a conscious patient were startled when the patient's language processing centre swapped from the left to the right hemisphere. What gives?
"Language is a dominant function for humans, so while it's set up on the left, it's duplicated on the right," Yelland suggests. "It's highly speculative, but if those areas are not used for spoken language, they could be used for music, mathematics and a grammar of map reading."
If so, that brings the music-language conundrum back to that French laboratory. Possibly what the researchers revealed was an example of the extraordinary plasticity of the brain, its ability to chop and change according to need and training. Perhaps plasticity, built into the brain by nature, is the long-sought link between music and language.
Certainly, plasticity is Whelan's stock in trade. She uses a technique called melodic intonation therapy to help people with some forms of stroke or brain injury recruit parts of the right hemisphere when the left hemisphere isn't up to the job of managing spoken language.
And plasticity may also be the key to a discovery reported in Nature Neuroscience earlier this year by scientists at Germany's Max Planck Institute for Human Cognitive and Brain Sciences.
They found that music evokes memories of real-world concepts and experiences, not only emotional responses as previously thought. The brain, apparently, was tapping into its under-used right hemisphere to enhance the musical message.
If the devil finds work for idle hands, then Yelland's just-in-case theory of the right hemisphere springing into action fits neatly with Wolfe's notion of why music exists in the first place. It's a joke, a little something to keep that part of the brain occupied.
"There are whole movements that are jokes, whole compositions that are intellectual games. Why else would you write a four or five-part fugue?" he concludes.
Source: http://www.theaustralian.news.com.au/common/story_page/0,5744,9488893%255E16947,00.html