

Neurolinguistics

https://www.linguisticsociety.org/resource/neurolinguistics 
 

by Lise Menn

Neurolinguistics is the study of how language is represented in the brain: that is, how and where our
brains store our knowledge of the language (or languages) that we speak, understand, read, and
write, what happens in our brains as we acquire that knowledge, and what happens as we use it in
our everyday lives. Neurolinguists try to answer questions like these: What about our brains makes
human language possible – why is our communication system so elaborate and so different from
that of other animals? Does language use the same kind of neural computation as other cognitive
systems, such as music or mathematics? Where in your brain is a word that you've learned? How
does a word ‘come to mind’ when you need it (and why does it sometimes not come to you)?

If you know two languages, how do you switch between them and how do you keep them from
interfering with each other? If you learn two languages from birth, how is your brain different from
the brain of someone who speaks only one language, and why? Is the left side of your brain really
‘the language side’? If you lose the ability to talk or to read because of a stroke or other brain injury,
how well can you learn to talk again? What kinds of therapy are known to help, and what new kinds
of language therapy look promising? Do people who read languages written from left to right (like
English or Spanish) have language in a different place from people who read languages written from
right to left (like Hebrew and Arabic)? What about if you read a language that is written using some
other kind of symbols instead of an alphabet, like Chinese or Japanese? If you're dyslexic, in what
way is your brain different from the brain of someone who has no trouble reading? How about if you
stutter?

As you can see, neurolinguistics is deeply entwined with psycholinguistics, the study of the language-processing steps required for speaking and understanding words and sentences and for learning first and later languages, as well as of language processing in disorders of speech, language, and reading. Information about these disorders is available from the American Speech-Language-Hearing Association (ASHA), at http://www.asha.org/public/.

How our brains work


Our brains store information in networks of brain cells (neurons and glial cells). These neural
networks are ultimately connected to the parts of the brain that control our movements (including
those needed to produce speech) and our internal and external sensations (sounds, sights, touch,
and those that come from our own movements). The connections within these networks may be
strong or weak, and the information that a cell sends out may increase the activity of some of its
neighbors and inhibit the activity of others. Each time a connection is used, it gets stronger.
Densely connected neighborhoods of brain cells carry out computations that are integrated with
information coming from other neighborhoods, often involving feedback loops. Many computations
are carried out simultaneously (the brain is a massively parallel information processor).

Learning information or a skill happens by establishing new connections and/or changing the
strengths of existing connections. These local and long-distance networks of connected brain cells
show plasticity (http://merzenich.positscience.com/?page_id=143) – that is, they can keep changing
throughout our lives, allowing us to learn and to recover (to some extent) from brain injuries. For
people with aphasia (language loss due to brain damage; see http://www.asha.org/public/speech/disorders/Aphasia.htm), depending on how serious the damage is, intense therapy and practice, perhaps
in combination with transcranial magnetic stimulation (TMS), may bring about major improvements
in language as well as in movement control; see the Aphasia section below, and the links posted
there. Computer-based methods for enabling such intense language practice under the supervision
of a speech-language pathologist are becoming available.
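
To make the idea of use-dependent strengthening concrete, here is a toy sketch in Python. It is purely illustrative, not taken from the article or from any real neuroscience model: a handful of "units" pass activity through weighted connections (positive weights excite a neighbor, negative weights inhibit it), and a Hebbian-style rule strengthens the connections between units that are active together, so that connections get stronger with use.

```python
import numpy as np

# A toy sketch of "each time a connection is used, it gets stronger".
# All sizes, rates, and names here are invented for illustration.

rng = np.random.default_rng(0)
n_units = 5
weights = rng.uniform(-0.5, 0.5, size=(n_units, n_units))  # connection strengths
np.fill_diagonal(weights, 0.0)                             # no self-connections
learning_rate = 0.1

def step(activity, weights):
    """One round of parallel computation: every unit sums weighted
    input from its neighbors (excitatory or inhibitory) and squashes
    the result into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(weights @ activity)))

def hebbian_update(pre, post, weights):
    """Hebbian-style learning: connections between co-active units
    get stronger with use."""
    return weights + learning_rate * np.outer(post, pre)

activity = rng.uniform(0.0, 1.0, size=n_units)  # an initial pattern of activity
for _ in range(10):                              # repeated use of the network
    new_activity = step(activity, weights)
    weights = hebbian_update(activity, new_activity, weights)
    activity = new_activity

print(weights.round(2))  # connections used together have grown stronger
```

Real neural learning is vastly more complex, but the sketch captures the central point: knowledge lives in the pattern of connection strengths, not in any single cell.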

Where is language in the brain?


This question is hard to answer, because brain activity is like the activity of a huge city. A city is
organized so that people who live in it can get what they need to live on, but you can't say that a
complex activity, like manufacturing a product, is 'in' one place. Raw materials have to arrive at the
right times, subcontractors are needed, the product must be shipped out in various directions. It's
the same with our brains. We can't say that language is 'in' a particular part of the brain. It's not
even true that a particular word is 'in' one place in a person's brain; the information that comes
together when we understand or say a word arrives from many places, depending on what the word
means. For example, when we understand or say a word like ‘apple’, we are likely to use
information about what apples look, feel, smell, and taste like, even though we aren’t aware of
doing this. So listening, understanding, talking, and reading involve activities in many parts of the
brain. However, some parts of the brain are more involved in language than other parts.

Most of the parts of your brain that are crucial for both spoken and written language are in the left
side of the cortex of your brain (the left hemisphere), regardless of what language you read and how
it is written. We know this because aphasia is almost always caused by left hemisphere injury, not
by right hemisphere injury, no matter what language you speak or read, or whether you can read at
all. (This is true for about 95% of right-handed people and about half of left-handed people.) A large
part of the brain (the 'white matter') consists of fibers that connect different areas to one another,
because using language (and thinking) requires the rapid integration of information that is stored
and/or processed in many different brain regions. 

Areas in the right hemisphere are also essential for communicating effectively and for understanding the point of
what people are saying. If you are bilingual but didn’t learn both languages from birth, your right
hemisphere may be somewhat more involved in your second language than it is in your first
language. Our brains are somewhat plastic – that is, their organization depends on our experiences
as well as on our genetic endowment. For example, many of the ‘auditory’ areas of the brain, which
are involved with understanding spoken language in people with normal hearing, are used in
(visually) understanding signed language by people who are deaf from birth or who became deaf
early (and do not have cochlear implants). And blind people use the ‘visual’ areas of their brains in
processing words written in Braille, even though Braille is read by touch.

Bilingual speakers develop special skills in controlling which language to use and whether it is
appropriate for them to mix their languages, depending on whom they are speaking to. These skills
may be useful for other tasks as well. 

Aphasia

What is aphasia like? Is losing language after brain damage the reverse of learning it? People who
have difficulties speaking or understanding language because of brain damage are not like children.
Using language involves many kinds of knowledge and skill. People with aphasia have different
combinations of things that they can still do in an adult-like way and things that they now do
clumsily or not at all. In fact, different people with aphasia show quite different profiles of spared and impaired linguistic abilities.

Therapy can help aphasic people to improve on or regain lost skills and make the best use of
remaining abilities. Adults who have had brain damage and become aphasic recover more slowly
than children who have had the same kind of damage, but they continue to improve slowly over
decades if they have good language stimulation and do not have additional strokes or other brain
injuries. For more information, consult ASHA (http://www.asha.org/public/speech/disorders/Aphasia.htm), the National Aphasia Association (http://aphasia.org/), Aphasia Hope (http://www.aphasiahope.org/), or the Academy of Aphasia (http://www.academyofaphasia.org/ClinicalServices/).

Dyslexia and stuttering


What about dyslexia, and children who have trouble learning to talk even though they can hear
normally? Why do people have reading difficulties? Research suggests that dyslexics have trouble
processing the sounds of language and have difficulty relating the printed word to sounds. Genetic
differences and genetically-based brain differences have been found in families with dyslexia and
developmental language disorders, and research in this area is helping us understand how genes
act in setting up the initial ‘wiring’ of all of our brains. There is solid evidence that appropriate language-based therapy is effective for children with developmental disorders of reading and language, and also for stuttering. ASHA provides helpful information about these disorders: see http://www.asha.org/public/speech/disorders/lbld.htm.

How neurolinguistic ideas have changed


Many established ideas about neurolinguistics – in particular, the roles of the traditional ‘language areas’ (Broca’s area, Wernicke’s area) in the left hemisphere of the brain – have been challenged and in some cases overturned by recent evidence. Probably the most important recent findings are
1) that extensive networks involving areas remote from the traditional language areas are deeply
involved in language use, 2) that the language areas are also involved in the processing of
non-language information, such as some aspects of
music (http://www.youtube.com/watch?v=ZgKFeuzGEns), and 3) that the correlations of particular
areas of the brain with particular language impairments are much poorer than had been thought.
This new information has become available because of major improvements in our ability to see
what is happening in the brain when people speak or listen, and from the accumulation and analysis
of many years of detailed aphasia test data.

How neurolinguistic research has changed


For over a hundred years, research in neurolinguistics was almost completely dependent on the
study of language comprehension and production by people with aphasia. These studies of their
language ability were augmented by relatively crude information about where the injury was located
in the brain. Neurologists had to deduce that information, such as it was, from which other abilities were lost and from autopsy findings, which were often unavailable. A few patients who
were about to undergo surgery to relieve severe epilepsy or tumors could be studied by direct brain
stimulation, when it was medically needed to guide the surgeon away from areas essential for the
patient’s use of language.

Early-generation computerized x-ray studies (CAT scans, CT scans) and radiographic cerebral
blood-flow studies (angiograms) began to augment experimental and observational studies of
aphasia in the 1970s, but they gave very crude information about where the damaged part of the
brain was located. These early brain-imaging techniques could show only which parts of the brain had serious damage or restricted blood flow. They could not give information about the actual activity
that was taking place in the brain, so they could not follow what was happening during language
processing in normal or aphasic speakers. Studies of normal speakers in that period mostly looked
at which side of the brain was most involved in processing written or spoken language, because
this information could be obtained from laboratory tasks involving reading or listening under difficult
conditions, such as listening to different kinds of information presented to the two ears at the same
time (dichotic listening).

Since the 1990s, there has been an enormous shift in the field of neurolinguistics. With modern
technology, researchers can study how the brains of normal speakers process language, and how a
damaged brain processes and compensates for injury. This new technology allows us to track the
brain activity that is going on while people are reading, listening, and speaking, and also to get very
fine spatial resolution of the location of damaged areas of the brain. Fine spatial resolution comes
from magnetic resonance imaging (MRI), which gives exquisite pictures showing which brain areas
are damaged; the resolution of CT scans has also improved immensely. Tracking the brain’s
ongoing activity can be done in several ways. For some purposes, the best method is detecting the electrical and magnetic signals that neurons send to one another, using sensors outside the skull (electroencephalography, EEG; magnetoencephalography, MEG; and event-related potentials, ERP). Another method is observing
the event-related optical signal, EROS; this involves detecting rapid changes in the way that neural
tissue scatters infra-red light, which can penetrate the skull, letting researchers see about an inch into the brain. A
third family of methods involves tracking the changes in the flow of blood to different areas in the brain, by looking at blood-oxygen levels (the BOLD signal measured in functional magnetic resonance imaging, fMRI) or at changes in the way the blood absorbs near-infrared light (near-infrared spectroscopy, NIRS). Brain activity can also be changed
temporarily by transcranial magnetic stimulation (stimulation from outside the skull, TMS), so
researchers can see the effects of this stimulation on how well people speak, read, and understand
language. NIRS, EROS, ERP, and EEG techniques are risk-free, so they can ethically be used for
research on normal speakers, as well as on people with aphasia who would not particularly benefit
from being in a research study. TMS also appears to be safe.
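
Among the electrical recording methods mentioned above, the event-related potential technique illustrates how such noisy signals become usable data: the brain's response to a single stimulus is tiny compared with ongoing background activity, so researchers present a stimulus many times and average the recordings time-locked to its onset, letting the noise cancel out. The Python sketch below demonstrates only that averaging idea; the signal shape and all the numbers are invented.

```python
import numpy as np

# Illustrative sketch of the logic behind event-related potentials (ERP):
# a small evoked wave buried in large trial-to-trial noise is recovered
# by averaging many stimulus-locked trials.

rng = np.random.default_rng(1)
n_trials, n_samples = 200, 300          # 200 presentations, 300 time points each
t = np.arange(n_samples)

# A hypothetical evoked wave peaking at sample 100 (invented for the demo).
true_response = 2.0 * np.exp(-((t - 100) ** 2) / (2 * 15 ** 2))
# Each trial is the same wave buried in much larger random noise.
trials = true_response + rng.normal(0.0, 5.0, size=(n_trials, n_samples))

erp = trials.mean(axis=0)               # averaging across trials recovers the wave

print("single-trial noise level:", np.round(trials[0].std(), 2))
print("peak of averaged ERP near sample", int(np.argmax(erp)))
```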

It is very complicated to figure out the details of how the information from different parts of the brain
might combine in real time, so another kind of advance has come from the development of ways to
use computers to simulate parts of what the brain might be doing during speaking or reading.
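
As a flavor of what such simulations can look like, here is a deliberately tiny, hypothetical example, not any published model: several candidate words receive bottom-up support in parallel and inhibit one another until one "comes to mind". All the words, weights, and parameters are invented for the illustration.

```python
import numpy as np

# A toy "settling network": candidate words get bottom-up evidence,
# compete through mutual inhibition, and the network is stepped
# until one word wins. Everything here is invented for the sketch.

words = ["apple", "ample", "maple"]
bottom_up = np.array([0.9, 0.6, 0.5])   # hypothetical evidence for each word
activation = np.zeros(len(words))

excitation = 0.3    # weight on bottom-up support
inhibition = 0.2    # weight on competition between words
decay = 0.1         # activations drift back toward rest

for _ in range(30):
    competition = inhibition * (activation.sum() - activation)  # rivals' activity
    activation += excitation * bottom_up - competition - decay * activation
    activation = np.clip(activation, 0.0, 1.0)

winner = words[int(np.argmax(activation))]
print(f"settled on '{winner}' with activations {activation.round(2)}")
```

Models actually used in research are far richer, with feedback between processing levels and learned connection weights, but they share this basic character of parallel activation and competition.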

Investigations of exactly what people with aphasia and other language disorders can and cannot do
also continue to contribute to our understanding of the relationships between brain and language.
For example, comparing how people with aphasia perform on tests of syntax, combined with
detailed imaging of their brains, has shown that there are important individual differences in the
parts of the brain involved in using grammar. Also, comparing people with aphasia across
languages shows that the various types of aphasia have somewhat different symptoms in different
languages, depending on the kinds of opportunities for error that each language provides. For
example, in languages that have different forms for masculine and feminine pronouns or masculine
and feminine adjectives, people with aphasia may make gender errors in speaking, but in languages
that don’t have different forms for different genders, that particular problem can’t show up.
