Neurolinguistics: How Our Brains Work
https://www.linguisticsociety.org/resource/neurolinguistics
by Lise Menn
Neurolinguistics is the study of how language is represented in the brain: that is, how and where our
brains store our knowledge of the language (or languages) that we speak, understand, read, and
write, what happens in our brains as we acquire that knowledge, and what happens as we use it in
our everyday lives. Neurolinguists try to answer questions like these: What about our brains makes
human language possible – why is our communication system so elaborate and so different from
that of other animals? Does language use the same kind of neural computation as other cognitive
systems, such as music or mathematics? Where in your brain is a word that you've learned? How
does a word ‘come to mind’ when you need it (and why does it sometimes not come to you)?
If you know two languages, how do you switch between them and how do you keep them from
interfering with each other? If you learn two languages from birth, how is your brain different from
the brain of someone who speaks only one language, and why? Is the left side of your brain really
‘the language side’? If you lose the ability to talk or to read because of a stroke or other brain injury,
how well can you learn to talk again? What kinds of therapy are known to help, and what new kinds
of language therapy look promising? Do people who read languages written from left to right (like
English or Spanish) have language in a different place from people who read languages written from
right to left (like Hebrew and Arabic)? What about if you read a language that is written using some
other kind of symbols instead of an alphabet, like Chinese or Japanese? If you're dyslexic, in what
way is your brain different from the brain of someone who has no trouble reading? How about if you
stutter?
As you can see, neurolinguistics is deeply entwined with psycholinguistics, the study of the
language-processing steps required for speaking and understanding words and sentences and for
learning first and later languages, as well as of language processing in disorders of speech,
language, and reading. Information about these disorders is available from the American
Speech-Language-Hearing Association (ASHA), at http://www.asha.org/public/.
Most of the parts of your brain that are crucial for both spoken and written language are in the left
side of the cortex of your brain (the left hemisphere), regardless of what language you read and how
it is written. We know this because aphasia is almost always caused by left hemisphere injury, not
by right hemisphere injury, no matter what language you speak or read, or whether you can read at
all. (This is true for about 95% of right-handed people and about half of left-handed people.) A large
part of the brain (the 'white matter') consists of fibers that connect different areas to one another,
because using language (and thinking) requires the rapid integration of information that is stored
and/or processed in many different brain regions.
Areas in the right side are essential for communicating effectively and for understanding the point of
what people are saying. If you are bilingual but didn’t learn both languages from birth, your right
hemisphere may be somewhat more involved in your second language than it is in your first
language. Our brains are somewhat plastic – that is, their organization depends on our experiences
as well as on our genetic endowment. For example, many of the ‘auditory’ areas of the brain, which
are involved with understanding spoken language in people with normal hearing, are used in
(visually) understanding signed language by people who are deaf from birth or who became deaf
early (and do not have cochlear implants). And blind people use the ‘visual’ areas of their brains in
processing words written in Braille, even though Braille is read by touch.
Bilingual speakers develop special skills in controlling which language to use and whether it is
appropriate for them to mix their languages, depending on whom they are speaking to. These skills
may be useful for other tasks as well.
Aphasia
What is aphasia like? Is losing language after brain damage the reverse of learning it? No: people
who have difficulties speaking or understanding language because of brain damage are not like children.
Using language involves many kinds of knowledge and skill. People with aphasia have different
combinations of things that they can still do in an adult-like way and things that they now do
clumsily or not at all. In fact, different people with aphasia show different profiles of spared and
impaired linguistic abilities.
Therapy can help people with aphasia improve or regain lost skills and make the best use of
remaining abilities. Adults who have had brain damage and become aphasic recover more slowly
than children who have had the same kind of damage, but they continue to improve slowly over
decades if they have good language stimulation and do not have additional strokes or other brain
injuries. For more information, consult ASHA
(http://www.asha.org/public/speech/disorders/Aphasia.htm), the National Aphasia Association
(http://aphasia.org/), Aphasia Hope (http://www.aphasiahope.org/), or the Academy of Aphasia
(http://www.academyofaphasia.org/ClinicalServices/).
Early-generation computerized x-ray studies (CAT scans, CT scans) and radiographic cerebral
blood-flow studies (angiograms) began to augment experimental and observational studies of
aphasia in the 1970s, but they gave very crude information about where the damaged part of the
brain was located. These early brain-imaging techniques could only see what parts of the brain had
serious damage or restricted blood flow. They could not give information about the actual activity
that was taking place in the brain, so they could not follow what was happening during language
processing in normal or aphasic speakers. Studies of normal speakers in that period mostly looked
at which side of the brain was most involved in processing written or spoken language, because
this information could be obtained from laboratory tasks involving reading or listening under difficult
conditions, such as listening to different kinds of information presented to the two ears at the same
time (dichotic listening).
Since the 1990s, there has been an enormous shift in the field of neurolinguistics. With modern
technology, researchers can study how the brains of normal speakers process language, and how a
damaged brain processes and compensates for injury. This new technology allows us to track the
brain activity that is going on while people are reading, listening, and speaking, and also to get very
fine spatial resolution of the location of damaged areas of the brain. Fine spatial resolution comes
from magnetic resonance imaging (MRI), which gives exquisite pictures showing which brain areas
are damaged; the resolution of CT scans has also improved immensely. Tracking the brain’s
ongoing activity can be done in several ways. For some purposes, the best method is detecting the
electrical and magnetic signals that neurons send to one another, using sensors outside the skull
(electroencephalography, EEG; magnetoencephalography, MEG; and event-related potentials,
ERP). Another method is observing
the event-related optical signal, EROS; this involves detecting rapid changes in the way that neural
tissue scatters infra-red light, which penetrates the skull and reaches about an inch into the brain. A
third family of methods involves tracking changes in the flow of blood to different areas of the
brain, either by measuring blood-oxygen concentrations (the blood-oxygen-level-dependent, or BOLD,
signal used in functional magnetic resonance imaging, fMRI) or by measuring changes in the way
the blood absorbs near-infrared light (near-infrared spectroscopy, NIRS). Brain activity can also be changed
temporarily by transcranial magnetic stimulation (stimulation from outside the skull, TMS), so
researchers can see the effects of this stimulation on how well people speak, read, and understand
language. NIRS, EROS, ERP, and EEG techniques are risk-free, so they can ethically be used for
research on normal speakers, as well as on people with aphasia who would not particularly benefit
by being in a research study. TMS also appears to be safe.
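To make the ERP idea concrete: an event-related potential is obtained by averaging many short
stretches ("epochs") of EEG that are time-locked to repetitions of the same stimulus, so that
background activity unrelated to the stimulus averages out. The Python sketch below is a toy
illustration of that averaging step only; the sampling rate, the shape of the evoked response, and
the noise level are invented for the example, and it assumes only the NumPy library.

```python
import numpy as np

# Toy illustration of ERP averaging (all numbers invented for the example):
# averaging many EEG epochs time-locked to the same stimulus lets a small
# stimulus-evoked response emerge from much larger background activity.

rng = np.random.default_rng(0)

sampling_rate = 250                 # samples per second (assumed)
epoch_len = 200                     # 800 ms of EEG per epoch
n_trials = 100                      # number of stimulus repetitions

t = np.arange(epoch_len) / sampling_rate

# A made-up evoked response: a positive deflection peaking ~300 ms after
# the stimulus (loosely inspired by the well-known P300 component).
evoked = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Each single trial buries that response in much larger background noise.
trials = evoked + rng.normal(scale=20e-6, size=(n_trials, epoch_len))

# Averaging across trials recovers the ERP; the noise in the average
# shrinks roughly in proportion to 1/sqrt(n_trials).
erp = trials.mean(axis=0)

print(f"ERP peak near {1000 * t[np.argmax(erp)]:.0f} ms after the stimulus")
```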
It is very complicated to figure out the details of how the information from different parts of the brain
might combine in real time, so another kind of advance has come from the development of ways to
use computers to simulate parts of what the brain might be doing during speaking or reading.
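As a rough illustration of what such a simulation can look like, here is a toy "spreading
activation" model of word retrieval in Python. It is hypothetical, not any researcher's actual
model: the miniature lexicon, the update rule, and all of the parameters are invented for the
example. But it shows the general idea behind many simulations of word retrieval: candidate words
compete for activation as speech sounds come in.

```python
# A toy, hypothetical "spreading activation" model of word retrieval --
# not any published model; the lexicon, update rule, and parameters are
# invented purely to illustrate the idea of simulating word retrieval.

words = {
    "cat": {"k", "ae", "t"},
    "cap": {"k", "ae", "p"},
    "dog": {"d", "o", "g"},
}

def retrieve(heard_sounds, steps=10, rate=0.5, decay=0.1):
    """Spread activation from heard sounds to candidate words over time."""
    activation = {w: 0.0 for w in words}
    for _ in range(steps):
        for w, sounds in words.items():
            overlap = len(sounds & heard_sounds) / len(sounds)
            # Activation grows with sound overlap and leaks away with decay.
            activation[w] += rate * overlap - decay * activation[w]
    return activation

# Hearing just /k/ and /ae/ activates "cat" and "cap" equally -- the kind
# of competition that can make a word slow to "come to mind".
print(retrieve({"k", "ae"}))
```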
Investigations of exactly what people with aphasia and other language disorders can and cannot do
also continue to contribute to our understanding of the relationships between brain and language.
For example, comparing how people with aphasia perform on tests of syntax, combined with
detailed imaging of their brains, has shown that there are important individual differences in the
parts of the brain involved in using grammar. Also, comparing people with aphasia across
languages shows that the various types of aphasia have somewhat different symptoms in different
languages, depending on the kinds of opportunities for error that each language provides. For
example, in languages that have different forms for masculine and feminine pronouns or masculine
and feminine adjectives, people with aphasia may make gender errors in speaking, but in languages
that don’t have different forms for different genders, that particular problem can’t show up.