Keith Johnson, Ph.D., Professor of Linguistics at the University of California, Berkeley, studies how language and the brain interact. For the past few years, Dr. Johnson has been working with a research group at the University of California, San Francisco (UCSF) on phonetic neuroscience. I recently sat down with Dr. Johnson to discuss this topic.
Phonetic neuroscience draws from a subfield of linguistics known as phonetics, which Dr. Johnson describes as the study of spoken language. The field can be broken down into two areas of interest: speech perception, or how we hear and understand speech, and speech production, or how we create it.
Both of these aspects of spoken language are dependent on our brains. Listening to someone speak depends on our brain processing the sounds picked up by our ears. Language production similarly depends on our brain controlling our articulators, which include body parts like the lips and tongue. The goal of phonetic neuroscience is to study our brain’s involvement in these aspects of spoken language.
Dr. Johnson’s journey to this type of work began during his undergraduate studies at Abilene Christian University. Though he was majoring in religion at the time, he regularly took other classes with his roommate and engaged in some friendly competition. “He always got As, and I got Bs,” mentioned Dr. Johnson. “Then, we took a linguistics class with each other, and I was the one that found it easy. I was getting the As, and he was struggling with it, and I thought maybe I should do this.”
Dr. Johnson went on to get a Ph.D. in Linguistics from the Ohio State University in 1988.
He was initially interested in studying variation, specifically the physical differences that cause men’s and women’s voices to be so different. Through this work, Dr. Johnson popularized a new theory, known as exemplar theory, of how people mentally adjust for the different ways that people speak.
For example, the word “water” is pronounced differently in different English accents. American English speakers are more likely to say “wah-ter”, while British or African English speakers pronounce the word “wah-er”.
How does our brain recognize that both pronunciations are the same word?
Earlier theories suggested that everyone has a “variation remover” that simply removes vocal differences. Dr. Johnson instead suggested that when we listen to people speak, we compare the sounds of that speech to memories of people speaking. These comparisons allow us to recognize words across different styles of speech.
Dr. Johnson believes that “both of these models, the one where you remove variation and the one where you remember it all, are wrong in some respects. But they help us think about what the problem is.” However, thinking about variation in this way naturally brings up questions about where this information about sound is actually stored in the brain.
In the late 2000s and early 2010s, Dr. Johnson was approached by Dr. Edward “Eddie” Chang, a neurosurgeon at UCSF. Dr. Chang proposed using intracranial EEG (iEEG), a neuroscience method in which electrodes implanted inside the skull, directly on or in the brain, record brain activity, to study spoken language.
Since then, Dr. Johnson, Dr. Chang, and their other collaborators have been working towards untangling what Dr. Johnson describes as a fundamental “puzzle” of speech.
In our daily lives, understanding speech and producing speech feel connected. However, sound perception and sound production occur in two completely different systems, from our organs (ears vs. mouth) to the brain (auditory cortex vs. motor cortex).
Additionally, some of Dr. Johnson’s early work with Dr. Chang’s group showed that the parts of the brain responsible for producing and perceiving speech are also organized completely differently. The speech-production region, in the motor cortex, is laid out according to the anatomy of the vocal tract: regions that control the tip of the tongue sit close to regions that control the lips, but far from regions that control deeper parts of the vocal tract, like the uvula. The auditory cortex, by contrast, is organized by the similarity of sounds: regions that respond to similar sounds sit closer together.
So, how are these two systems linked? We still don’t know, but Dr. Johnson considers one specific finding as a clue.
It turns out that the motor cortex is not only active when we’re talking, but it’s also active when we listen to speech. When the motor cortex activates in response to hearing speech, the pattern of activation is similar to that of the auditory cortex, with similar sounds activating regions that are close together. This finding suggests that the motor cortex may be the bridge between speech perception and production.
While there’s still a way to go to fully understand this process, our current knowledge is already proving to be useful. Dr. Johnson notes that Dr. Chang’s lab is developing brain-computer interfaces for people with neurological damage that prevents them from speaking. These interfaces try to read the brain, figure out what the person is trying to say, and then use a computer to convert that into words.
According to Dr. Johnson, this type of technology is already being used and has the potential to drastically improve quality of life.
“[A patient] got some serious brain damage, but he’s got the interface, and they’ve been training it for probably 4 or 5 years now,” said Dr. Johnson. “He would get out of a motorized wheelchair and kind of escape the nursing home, wheel over to a convenience store nearby, and ask for a candy bar. And the computer asked it.”
Dr. Johnson also emphasizes the importance of collaboration between fields. He works as a linguist on teams made up first and foremost of neuroscientists, a role some researchers might shy away from. He credits the culture of Dr. Chang’s lab with making it possible for him to be involved in this kind of work.
“There’s a certain amount of being willing to let the other person be the expert in their area and to accept their limitations in your area, that’s really necessary to make a team work,” explains Dr. Johnson.