Planning: Speech: Levelt's Model of L1 Production


PLANNING: SPEECH

❖ Levelt’s model of L1 production


The speech production system advanced by Levelt (1989, 1993) consists
of a number of autonomous components which are responsible for different
aspects of speech production. These components include:

- the conceptualizer, a component that is responsible for generating an idea or set of ideas for expression and planning how to express them.

- the formulator, which translates this conceptual structure into a linguistic one. It first engages in a process of grammatical encoding, which builds an abstract syntactic structure and identifies the lexical items that will be slotted into it. This is followed by phonological encoding, in which the syntactic structure is tagged for inflection and is then given phonological form. Other processes specify the form and duration of the syllables as they are to occur in connected speech and add rhythm and prosody. The outcome of these operations is a phonetic or articulatory plan, a representation of how the planned utterance is to be articulated. It is temporarily stored in an articulatory buffer.

- the articulator, which retrieves chunks of internal speech from the buffer, unpacks them into sets of motor commands and issues the commands to the muscles controlling the larynx, the articulators and the respiratory system.
The most comprehensive and authoritative account is offered by Levelt (1989).

- self-monitoring: focusing attention on the message just before or while it is uttered in order to check for accuracy, clarity and appropriacy.

1-Message conceptualization
The first component in Levelt’s (1989, 1993) production system is the
conceptualizer. This component is responsible for generating the communicative
intention and for encoding it into some kind of coherent conceptual plan. In addition,
the conceptualizer monitors what is about to be said as well as what has been said and
how. In order to generate a message, declarative knowledge is accessed. Declarative
knowledge includes encyclopedic knowledge (about the person’s general experience of
the world), knowledge about the situation (e.g. the interlocutor/s and the communicative
context, among others), as well as information about the discourse record, that is, what
has already been said.
➢ macro planning and micro planning.

Speech planning can be conceived as taking place at a number of levels. In Levelt's (1989) model, ideas are first shaped through conceptualisation, which involves two stages of message planning:

Macro planning breaks the communicative goal into a series of subgoals and retrieves the information necessary to realize these goals. In other words, it involves generating speech act intentions, such as narrating an event or expressing an opinion. In Levelt's terms (1993, p. 3): “The speaker’s planning of a speech act, his selection of information to be expressed, and his linearization of that information are called ‘macro planning’”.

Examples:
● Getting the various ideas organized in a way that will best suit the communication is part of macro planning.
● Deciding how to achieve an intended communicative goal using relevant speech acts.

- Micro planning involves attaching the right propositional structure to each of these chunks of information, and taking account of where the focus of information is to lie. As well as sorting out the type and order of speech acts in macro planning, speakers carry out detailed planning of each individual speech act, known as micro planning. This involves determining the perspective and information structure that is most appropriate for a given speech act, and deciding what should be highlighted as new or topical information.

Macro and micro planning should be thought of as two levels of planning rather than temporally distinct stages. That is, once a speaker has made initial decisions about the sequence of speech acts required to achieve some communicative goal, individual acts can be planned in more detail, even before the overall plan has been finalised. The outcome of macro and micro planning is still not 'language'. It is a pre-verbal message representing the speaker's ideas or propositions, which still needs to be converted to linguistic form. In terms of the sketch of the language production system, we have reached the output of the conceptualisation process.

➢ A speech act case

A speech act is the performance of some action through saying something, such as asking for information, making a promise, and so on (Clark, 1996). Part of our knowledge about how language works is that we know what speech acts work well in achieving particular goals in a given context and for given addressees. Not all goals have single corresponding speech acts. Some goals require multiple speech acts, such as giving route directions. Some speech acts achieve multiple goals, such as saying I saw the bastard last night, which tells the listener that the speaker saw the person in question at a particular time, and also conveys the speaker's opinion of that person. When a communicative goal requires a series of speech acts, such as route directions, then the speaker needs to both select and sequence those speech acts. This involves linearisation, i.e. choosing the order in which information should be expressed.
2-Message formulation
This is the second stage of speech production, when the messages are framed into words, phrases, and clauses by the speaker. Essentially, this process involves translating the conceptual representation into a linguistic form. A first step in formulating is to retrieve the appropriate words from the mental lexicon and to embed them in the developing syntactic structure. Occasional errors of lexical selection (such as "Don't burn your toes" where fingers was intended) show that the lexicon has a semantic organization. The standard explanation for such errors is that activation spreads through a semantically organized network. In such a network, each node has an activation level between 0 and 1. When the lexical concept node SHEEP is active, activation spreads to semantically related concept nodes, such as GOAT. Both nodes spread activation "down" to their lemma nodes.

➢ Spreading activation
Levelt presents Dell's (1986) spreading activation theory as the most promising account of how lexical access takes place during real-time performance. A chunk in the preverbal plan activates a number of lemmas in the lexicon. The lemmas which receive the highest activation, because their semantic specifications match the concepts in the preverbal plan, will be selected. Competition between words is often represented in terms of this activation. Prompted by a particular string of letters or sounds, people access a number of possible word matches. They are activated to different degrees, with the more likely ones (those that are most frequent and those that form the closest match to what is in the input) receiving more activation than the others. Activation level can change as the language user reads or hears more of the word, so some candidate words may have their activation boosted by late-arriving information while others may have their activation depressed.

Example 1: If a speaker wants to produce the sentence 'The man gave the woman the money', then out of the roughly 30,000 words average speakers have active in their lexicon, the four content words 'man', 'give', 'woman' and 'money' will receive the highest activation because they best match the pre-verbal plan. This does not mean that other items do not get activated. Together with 'man', other entries which share similar conceptual specifications get activated, but it is 'man' that gets the highest activation:

man — woman — child — person
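To make the idea concrete, here is a minimal sketch of spreading activation over a toy semantic network like the one above. The nodes, link weights, spreading rate and number of steps are illustrative assumptions, not parameters from Dell's or Levelt's actual models.

```python
# Toy spreading-activation network; links and weights are invented
# for illustration (activation levels are kept in the 0..1 range).
LINKS = {
    "man":    {"woman": 0.6, "person": 0.7, "child": 0.4},
    "woman":  {"man": 0.6, "person": 0.7, "child": 0.5},
    "child":  {"man": 0.4, "woman": 0.5, "person": 0.6},
    "person": {"man": 0.7, "woman": 0.7, "child": 0.6},
}

def spread(activation, steps=2, rate=0.3):
    """Each step, every node passes a fraction of its activation to
    its neighbours, weighted by the strength of the link."""
    for _ in range(steps):
        incoming = {node: 0.0 for node in LINKS}
        for node, level in activation.items():
            for neighbour, weight in LINKS[node].items():
                incoming[neighbour] += rate * weight * level
        for node in LINKS:
            activation[node] = min(1.0, activation[node] + incoming[node])
    return activation

# The concept MAN in the pre-verbal plan feeds 'man' directly; related
# entries ('woman', 'person', 'child') receive some activation too,
# but 'man' keeps the highest level and wins selection.
activation = {"man": 1.0, "woman": 0.0, "child": 0.0, "person": 0.0}
result = spread(activation)
print(max(result, key=result.get))  # -> man
```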
Example 2: After encountering a word such as doctor, people automatically activate closely linked words such as nurse or patient, recognising them more readily if and when they occur. Researchers investigate lexical connections of this kind by means of a method known as priming, which measures how much faster words are recognised when preceded by a word that appears to be associated with them. As soon as a lemma is retrieved, its syntactic properties become available. Each lemma requires its own specific syntactic environment or "frame". Syntactic planning is like solving a set of simultaneous equations.

- Grammatical encoding

In Levelt's model, as well as in several other models, grammatical and phonological encoding are lexically driven. For grammatical encoding to take place, both lexical access procedures and syntactic procedures are applied. In the lexicon, each lexical item is specified for semantic and syntactic information (lemmas), and morphological and phonological information (lexemes). When a lemma is retrieved because it matches part of the preverbal message, its syntactic properties become available and they trigger syntactic building procedures. For example, the entry for 'give' will contain its conceptual specification and conceptual arguments, the syntactic category (verb), the grammatical functions it requires (subject, direct and indirect objects), its relations to verbal complements (none in this case), a lexical pointer which points to a specific form address, and a number of diacritic parameters such as tense, mood, aspect, person, and number. The procedural knowledge stored in the grammatical encoder, which is activated by the syntactic information in lemmas, works to build the syntactic structure of the sentence, which Levelt refers to as surface structure. At this point, however, the specific forms of the different elements are not fully specified yet. What we have is a string of lemmas which have been organized into phrases and subphrases according to their semantic and syntactic specifications.

- Phonological encoding

Phonological encoding refers to the processes that are responsible for determining the phonological word forms and prosodic content of the sentence. The lexical pointer specified in the lemma triggers the phonological encoding process, which results in the selection of specific morphological and phonological forms. For example, if the intention of the speaker is to express 'the man gave the woman the money', 'give' and 'gave' among other lemmas will be activated, but 'gave' will receive the highest activation because its diacritic parameter 'tense' matches the lemma. Subsequently, by means of a series of phonological procedures which draw on the syllabary, the form information of each lexical item is further specified. Briefly put, a series of phonological segments are activated, and a phonological word is produced, which leads to the generation of the phonetic-articulatory plan. The output of the formulator is a phonetic or articulatory plan which is ready for articulation. Levelt refers to this articulatory plan as internal speech, as opposed to already articulated overt speech.

3-Articulation
This is the third stage of speech production: the physical stage, where words are translated into the sounds and syllables of actual speech. In this stage our thoughts and linguistic plan are sent from the brain to the speech systems in order to execute the required movements and produce the desired sounds. This is also the starting point for speech comprehension on the listener's side.

To sum up

Summing up what we have so far, if a speaker wants to produce the utterance “the man gave the woman the money”, he or she will first pay attention to what
he or she wishes to say. He or she will then select the information he or she
needs from his or her encyclopedic knowledge, and by considering the
communicative situation and what has been said so far. As chunks of the
intended conceptual message are decided on, and if everything goes well, he or
she will activate and select the appropriate lemmas which will construct the
surface structure of the utterance and that will point to the most appropriate
word forms. Once each lemma is given a morphological and phonological form,
articulation will begin and overt speech will take place. Throughout the whole
process, the conceptualizer will supervise the message by checking the
pre-verbal plan against the intention, the pre-articulatory plan against the
conceptual plan, and the already uttered message against what was intended.
This description may give the impression that production happens in a linear fashion, by constructing the elements one after the other.

❖ What evidence suggests the existence of these distinct stages in utterance planning?
The evidence for the existence of the conceptualization, formulation, and
articulation stages in language production comes from various sources,
including behavioral studies, brain imaging techniques, and studies of speech
errors.

+Behavioral studies have shown that speakers engage in planning and preparation well before they begin to speak. For example, studies of sentence production have shown that speakers plan the structure of a sentence before
production have shown that speakers plan the structure of a sentence before
they begin to articulate it, suggesting that the conceptualization and formulation
stages occur prior to articulation.
+Brain imaging techniques, such as fMRI and EEG, have provided evidence that
different brain regions are involved in different stages of language production.
For example, studies have shown that the left inferior frontal gyrus is active
during the formulation stage, while the posterior superior temporal gyrus is
active during the conceptualization stage.
+Studies of speech errors, such as slips of the tongue and word substitutions,
also provide evidence for the existence of the conceptualization, formulation,
and articulation stages in language production. These errors suggest that
speakers sometimes plan and select the wrong words or linguistic forms,
providing evidence for the formulation stage. Similarly, they also suggest that
speakers sometimes have difficulty physically articulating speech sounds,
providing evidence for the articulation stage.

❖ Planning an utterance
Conceptualisation is pre-linguistic. That is, it does not involve forms of language, but is all done 'in the head' in abstract terms. The speaker needs to make some very general decisions about what to say, taking into account facts about the situation that they are in, and relying on their general knowledge as well as on their understanding of how communication (including conversation) works. The result of the process of conceptualisation is a pre-verbal message, i.e. it is still not language. It consists of a set of ideas (which form part of the speaker's mental model) of what the speaker wants to say. Although speech appears to be spontaneous, it requires a planning process in which the components (clauses, words, phonemes) are assembled. Critical to the process are pauses in the flow of speech, which enable a speaker to construct a new chunk of language. When experimenters force speakers to suppress pausing, the result is confused and sometimes incoherent discourse.

Evidence of planning
Evidence for a unit of planning has been sought in pausing, in speech errors, in intonation patterns and in the gestures which accompany speech. Pauses tend to come at or near clause boundaries, suggesting that the clause is a major unit of planning. This is supported by evidence from slips of the tongue, in which most word misplacements take place within a single clause. A key source of information about the processes of conceptualisation and planning in producing spoken output is the pattern of pausing that speakers produce. The argument is that planning involves mental activity that competes for our attention resources with the actual process of speaking. The more planning we need to do, the less easy it is for us to continue speaking and the more likely it is that we will hesitate. Studies of the pausing and hesitation patterns of speakers provide insights into the way speech is planned and executed. The planning of speech takes place under enormous pressures of time. A degree of thinking ahead occurs while the speaker is actually articulating, but brief pauses are normally essential for planning the form of the next utterance. If the opportunity to pause is suppressed, speech production is adversely affected. Juncture pauses of this type occur mainly at syntactic boundaries. The clause appears to form an important unit of planning, while hesitation pauses (filled and unfilled) can occur anywhere within an utterance and may result from a failure to retrieve a word or a failure of planning. Slips of the tongue research also suggests that a syntactic frame is prepared in advance of lexical items being slotted into it.

Examples
The type of speech we utter in a given speech situation brings many differences in the amount and types of planning involved. Consider the differences between read speech and unprepared speech, which reflect the equally obvious differences in the planning involved in the two tasks.

- When the task is reading aloud, most of the planning has already been done in preparing the text in the first place. As a consequence, when fluent readers speak aloud from a prepared manuscript, they do not need to pause for planning purposes but instead pause almost exclusively at points marked by punctuation.
- Planning in spontaneous speech involves deciding what to say and which words and sentences to use. Spontaneous speech also contains more self-interruptions and false starts, and people continuously monitor what they are saying to convey their message effectively.

❖ Types of pauses
- Articulatory pauses: pauses within words required by the articulation of a phoneme or syllable. Example: the stop closure for the phoneme /p/ in the word space.
- Delimitative pauses can occur at places where a written text might have punctuation, breaking utterances into constituent parts, possibly to help the listener. Intonation and other aspects of prosody can also contribute to this function.
- Physiological pauses are those that help speakers to regulate their breathing while speaking. In practice, speakers mostly breathe at points where they might have to pause for some other reason.

Note: We can include fillers to pause or to show that we are not done with what we have to say. Fillers are words or sounds that are used to fill pauses in speech, such as "uh", "um", "like", "you know", "so", and "well". These fillers are often used to signal to the listener that the speaker is not finished speaking and needs more time to formulate their thoughts or find the right words to express themselves. Fillers can also indicate hesitation, uncertainty, or a lack of confidence in what the speaker is saying. While fillers are a common aspect of everyday speech, excessive use of fillers can make a speaker appear unprepared, nervous, or lacking in credibility.
Side note:
• Planning pauses are longer than physiological pauses but shorter than delimitative pauses.
• Delimitative pauses are longer than physiological pauses.

❖ The structure of the mental lexicon

What is the structure of the mental lexicon?
How are words or 'lexical entries' listed?
How do we retrieve the words that we want?

1-The lexicon
The nature of lexical knowledge has been a major area of psycholinguistic enquiry. A language user is envisaged as possessing a vocabulary store in the mind (a mental lexicon): the system of vocabulary which is stored in the mind in the form of a lexical entry for each item. Linguists use the term LEXICON to refer to the collection of all the words (or meaningful elements) in the language. Each individual word is referred to as a LEXICAL ITEM. For each lexical item, the lexicon must specify how it is pronounced, what it means, and how it patterns in the grammar. All of the phonological, semantic, and grammatical information which is specific to a particular word is included in its LEXICAL ENTRY.
➢ LEXICAL ENTRY
The information that is stored in the mind concerning a particular lexical item. Levelt (1989) represents a lexical entry as consisting of two parts, one related to form and one (the lemma) related to meaning and use.

➔ 'Form' includes:
Mental representations of the item which enable it to be identified when it is encountered. Form includes phonological and orthographic representations that allow for variation: the fact that a speaker may have any one of a number of accents, or that a written text may appear in any one of a number of different typefaces. It also includes information on the morphology of the item, both inflectional (providing a plural for a noun or a past tense form for a verb) and derivational (indicating the component parts of a word such as UN-HAPPI-NESS).

➔ The lemma of an entry includes:
Information on the syntactic structures in which the item features. The lexical entry needs to contain information on word class to enable the word to be used in generating sentences. It also needs to include information on the types of syntactic structure that are associated with the word. Thus, the entry for GIVE might include GIVE + NP (noun phrase) + NP and GIVE + NP + to + NP, indicating that, once we choose to construct a sentence around the verb GIVE, we commit ourselves to using one of two sentence patterns: give Mary a present or give a present to Mary. The entry contains additional semantic information about what fits into each of the NP slots. It might tell us that, in the GIVE + NP + NP pattern, the first NP has to be a recipient (probably animate) and the second NP has to be a gift (probably inanimate).
A range of senses for the word. The issue of word meaning is complicated by the fact
that many words do not refer to single objects in the real world, but represent a whole
class of objects or actions. There are two important issues here, so far as lexical
storage is concerned. Firstly, the area of meaning covered by any given word is
heavily influenced by the existence of other words alongside it. We can only fully
understand how to use the word HAPPY if we recognise the existence of alternatives
such as CONTENT or PLEASED or DELIGHTED, which limit the semantic boundaries
within which HAPPY operates. There must be very close links between lexical entries
that fall within a particular area of meaning; only in this way are we able to select
exactly the item we need and rule out others. Secondly, the area of meaning that we
associate with a word is heavily dependent upon the way in which we categorize the
world around us. A major area of research in psycholinguistics attempts to establish
the nature of the categories that we form, and how they become established in
the process of acquiring our first language.
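As a concrete, deliberately simplified illustration of this two-part entry, the sketch below encodes GIVE as a record with a lemma half and a form half. The field names and the Python representation are assumptions for illustration only; they are not Levelt's notation.

```python
# A toy rendering of Levelt's two-part lexical entry.
from dataclasses import dataclass

@dataclass
class Lemma:                  # the meaning-and-use half of the entry
    senses: list              # range of senses of the item
    word_class: str           # syntactic category (noun, verb, ...)
    frames: list              # syntactic structures the item can enter

@dataclass
class Form:                   # the form half of the entry
    phonology: str            # spoken form
    orthography: str          # written form
    inflections: dict         # inflectional morphology

@dataclass
class LexicalEntry:
    lemma: Lemma
    form: Form

# GIVE with the two sentence patterns named in the text.
give = LexicalEntry(
    lemma=Lemma(senses=["transfer possession"],
                word_class="verb",
                frames=["GIVE + NP + NP", "GIVE + NP + to + NP"]),
    form=Form(phonology="/gɪv/", orthography="give",
              inflections={"past": "gave", "past participle": "given"}),
)
print(give.lemma.frames[0])   # -> GIVE + NP + NP
```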

Lexicalization as a two-stage process

Lexicalization is how thoughts are put into words: the process in speech production whereby we turn the thoughts underlying words into sounds, translating a semantic representation (the meaning) of a content word into its phonological representation (its form, or sound). The lexical entry consists of two parts, one relating to form and one to meaning, and it is argued that there are two stages or levels to this process.
Lemma access / word finding / lexical selection: the lemma represents the range of possible senses associated with the item. Lemma access takes place as a result of semantic activation at the functional level of syntactic planning. Lemmas are specified syntactically and semantically but not phonologically.

Lexeme retrieval / word building / phonological encoding: this includes the mental representations of the phonological and orthographic forms of an item together with inflectional information.

HOW THE CHILD ACQUIRES CONCEPTS OR LEXEMES

Learning a new lexical item is not just a matter of mastering the form of the item and associating it with a sense or range of senses. The item also has to be linked to the whole network of previously learned words. If a child learns the word TERRIFIED, it has to (a) form a connection with HORRIFIED and TERRIER, which are similar in form; and (b) form a connection with AFRAID and SCARED, which are similar (but distinct) in meaning.

❖ Lexicalisation process
The sketch of the lexicalisation process broke lexicalisation down into two stages: finding words and building words.

- Lemma access: the first stage of lexical access in speaking. In the first stage we
select the word that corresponds to the chosen concept. In the view of Levelt et al.
(1999), the speaker first selects a lemma, or syntactic word unit. Lemmas specify the
syntactic class of the word and often additional syntactic information, such as whether a
verb is intransitive (e.g., sleep) or transitive (e.g., eat) and, if transitive, what arguments
it takes. Lemma selection is a competitive process. Several lemmas may be activated at
once because several concepts are more or less suitable to express the message, and
because lemmas that correspond to semantically similar concepts activate each other via
links to shared superordinate concepts or conceptual features. A lemma is selected as
soon as its activation level exceeds the summed activation of all competitors.
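The selection rule in that last sentence is easy to state as code. The sketch below is a minimal rendering of the stated criterion with invented activation values; it is not the full model of Levelt et al. (1999).

```python
def select_lemma(activations):
    """Select a lemma once its activation exceeds the summed
    activation of all its competitors; otherwise keep waiting."""
    total = sum(activations.values())
    for lemma, level in activations.items():
        if level > total - level:   # exceeds the sum of the others
            return lemma
    return None                     # no lemma has won yet

print(select_lemma({"sheep": 0.3, "goat": 0.25, "lamb": 0.2}))  # None
print(select_lemma({"sheep": 0.8, "goat": 0.2, "lamb": 0.1}))   # sheep
```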

- Lexeme retrieval: morpho-phonological encoding begins with the retrieval of the morphemes corresponding to the selected lemma. For the lemma baby there is only one morpheme to retrieve, but for grandson or walked two morphemes must be retrieved. Evidence that speakers access morphological information comes from a variety of sources. For instance, people sometimes make speech errors such as “imagine getting your model renosed”, where stems exchange while affixes remain in place. Other evidence shows that morphologically related primes have different effects on the production of target words than do semantically or phonologically related primes.

Phonological encoding: The next processing step is the generation of the phonological form of the word. Word forms are not simply retrieved as units, but are first decomposed into individual segments (or perhaps segments and certain groups of segments, such as /st/), which are subsequently mapped onto prosodic patterns. The most convincing evidence for phonological decomposition stems from studies of speech errors (e.g., Fromkin, 1971). Speakers sometimes make errors in which they replace or misorder single phonemes, as in “perry pie” instead of “cherry pie”. These errors show that the words' segments constitute processing units; if word forms were retrieved as units, such errors could not occur. Thus, for the word “baby”, the segments /b/, /e/, /b/, /i/ are retrieved. In the model of Levelt et al., the string of segments is subsequently syllabified following the syllabification rules of the language and is assigned stress. Many words are stressed according to simple default rules: for example, bisyllabic English words are usually stressed on the first syllable. For words that deviate from these rules, stress information is stored in the lexicon. During phonological encoding, the segmental and stress information are combined. Results from a large number of experiments using various types of priming and interference paradigms suggest that all phonemes of a word may be activated at the same time, but that the formation of syllables is a sequential process, proceeding from the beginning of the word to the end.

❖ Building words
In order to go deeper into the production process, it is crucial to state the four levels of processing: the message level, the functional level, the positional level, and the phonological level. The message captures features of the speaker's intended meaning and provides the raw material for the processes of grammatical encoding. These processes are grouped into two sets, functional and positional. The primary subcomponents of functional processing are:

- lexical selection (which involves the identification of lexical concepts that are suitable for conveying the speaker's meaning);
- function assignment (which involves the assignment of grammatical roles or syntactic functions).

Positional processing involves the creation of an ordered set of word slots (constituent assembly) and morphological slots (inflection). Finally, phonological encoding involves spelling out the phonological structure of the utterance, in terms of both the phonological segments of word forms and the prosody of larger units. The processes of grammatical encoding can be more concretely specified by going through the steps involved in generating a simple utterance and constructing errors that might arise at each step.

- The first step, lexical selection, involves identifying the lexical concepts and LEMMAS suitable for conveying the message. Lemmas carry the grammatical information associated with individual lexical concepts, such as their form class (noun, verb, etc.).

Error: A common type of speech error that appears to reflect a problem of lexical selection is a SEMANTIC SUBSTITUTION.
Target: Where is my tennis racquet?
Error: Where is the tennis bat?
These substitutions preserve general features of the meaning of the intended word (Hotopf, 1980) and are nearly always members of the same grammatical form class (noun, verb, adjective, adverb, or preposition).

- The second step is function assignment. This involves assigning syntactic relations or grammatical functions (e.g., subject-nominative, object-dative). For example, during the formulation of 'She was handing him some broccoli', the feminine pronoun lemma should be linked to the nominative (subject) function, the masculine pronoun lemma for the argument 'him' should be assigned the dative function, and hand should be linked to the main verb function.

Errors: Errors of function assignment arise when elements are assigned to the wrong functions. For example, if the feminine and masculine pronoun lemmas were linked to the dative and nominative functions respectively, the resulting utterance would most likely be He was handing her some broccoli. These EXCHANGE errors, like other types of exchanges, involve constituents of the same type (both are noun phrases). They are not simple exchanges of word forms, as our example illustrates: the error is not Him was handing she some broccoli.
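A minimal sketch of function assignment and of the exchange error just described. The lemma table and realization routine are invented for illustration; the point is that forms are chosen by grammatical function, which is why the error surfaces as 'He ... her' rather than 'Him ... she'.

```python
# Case-marked forms are looked up by grammatical function, not copied
# from intended surface positions (illustrative toy lexicon).
LEMMAS = {
    "FEMALE_PRONOUN": {"nominative": "she", "dative": "her"},
    "MALE_PRONOUN":   {"nominative": "he",  "dative": "him"},
}

def formulate(assignment):
    """Realize the subject and indirect object with the case form
    of whichever lemma was linked to each function."""
    subject = LEMMAS[assignment["nominative"]]["nominative"]
    obj = LEMMAS[assignment["dative"]]["dative"]
    return f"{subject.capitalize()} was handing {obj} some broccoli"

intended = {"nominative": "FEMALE_PRONOUN", "dative": "MALE_PRONOUN"}
swapped  = {"nominative": "MALE_PRONOUN", "dative": "FEMALE_PRONOUN"}
print(formulate(intended))  # She was handing him some broccoli
print(formulate(swapped))   # He was handing her some broccoli
```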

The next two steps constitute positional processing, so called because it fixes the order of the elements in an utterance. We consider constituent assembly first. This is the creation of a control hierarchy for phrasal constituents that manages the order of word production and captures dependencies among syntactic functions. In other words, the basic features of such hierarchies are largely predictable from the types of syntactic functions that have to be represented and from the syntactic features of the selected lemmas. The last of the grammatical encoding processes, inflection, involves the generation of fine-grained details at the lowest levels of this structure. In English, many of these details involve elements that carry information about number, tense, and aspect but are bound to other words.

Errors 1: One type of error that is identified with inflection is known as STRANDING. Stranding is illustrated in the utterance of a speaker who intended to say You ended up ordering some fish dish and instead said You ordered up ending some fish dish. In such errors, the bound suffixes (-ed, -ing) show up in their proper locations in the utterance but affixed to the wrong words, arguing that the inflections are positioned separately from their word stems.

Error 2: Another type of error that may arise during inflection is called a SHIFT (Garrett, 1975) and consists of the mislocation of an affix. Such an error could lead our hypothetical speaker to utter 'She was hand himming some broccoli'. Affixes are much more likely to be involved in such errors than the final syllables of word stems, implying that strandings and shifts are not simple mislocations of syllables but mislocations of pieces of grammatical structure.

❖ Generation of sentences in spoken language production

When speakers produce a sentence, they retrieve words as described earlier. However, because sentences are not simply sets of words but have syntactic structure, speakers must apply syntactic knowledge to generate sentences. Following Garrett (1975), models of sentence production generally assume that two distinct sets of processes are involved in generating syntactic structure (Bock & Levelt, 1994; Levelt, 1989).

● Functional planning processes assign grammatical functions, such as subject, verb, or direct object, to lemmas. These processes rely primarily on information from the message level and the syntactic properties of the retrieved lemmas.
● Positional encoding uses the retrieved lemmas and the functions they have been assigned to generate syntactic structures that capture the dependencies among constituents and their order. In English, the mapping from the functional to the positional level is usually quite straightforward: the subject usually precedes the verb, and the direct object and indirect object follow it. However, inversions can occur, as in “I don't mind bikes; cars I hate.”

Evidence: Evidence for the distinction between functional and positional processes
comes from the finding that some speech errors (e.g., exchanges of words from different
phrases, as in “put the tables on the plate”) can best be explained as errors of functional
encoding. Other errors with different properties (e.g., shifts of morphemes within
phrases, as in “the come homing of the queen”) can best be explained as errors of
positional encoding. The distinction is further supported by the results of structural
priming studies. In such studies, people first hear or say a sentence such as “The woman
shows the man the dress.” They later see a picture that can be described using the same
kind of structure (e.g., “The boy gives the teacher the flowers”) or a different one (“The
boy gives the flowers to the teacher”). Speakers tend to repeat the structure used on
previous trials, even when the words featured in prime and target sentences are
different and even when the events are unrelated. The results of many such studies
strongly suggest that the priming effect arises during the positional encoding processes
(Bock, 1986; Bock & Loebell, 1990; Chang, Dell, Bock, & Griffin, 2000).

Slips of the tongue

The slip-of-the-tongue phenomenon has engaged the attention of psycholinguists for a long time. It is an important source of data in the study of speech production and language development. Much is now known about the cognitive procedures involved in the mental processing of language. Through speech errors, we can better assess and distinguish between language competence and performance, and such study contributes to the establishment of models of speech production. Experimenters can create the 'tip of the tongue' state by providing definitions of obscure words, leading to inaccurate responses that reveal how we search for words in our lexicon. This supports evidence from slips of the tongue, showing that certain aspects of a word are less likely to be misrepresented. The initial and final syllables, stressed vowels, lexical stress distribution, and number of syllables help us locate words. This suggests that a lexical entry has two parts, one for form and one for meaning, which can be accessed separately. We first identify a word by its abstract meaning code, then insert its phonological form when planning speech.

The production of morphological structure and speech errors

The components of the word that need to be fleshed out include its morphology and its phonology. The morphology of the word is its structure defined in terms of the meaningful parts that constitute it; these include the stem or base form of the word and its various possible affixes, such as English -s meaning 'plural' in cats or in- to show 'negative' in inadequate. Words are made up of one or more morphemes. The morpheme is the smallest unit of meaning, and can be free-standing, such as cat, or bound to other morphemes, as is the case with the plural -s. A morpheme, which is often represented using a definition of its meaning and by being conventionally placed inside {}, can have more than one form or allomorph. So the English {plural} morpheme has /s/, /z/ and /ǝz/ allomorphs occurring in different phonetic contexts (contrast the pronunciation of -s in cats, dogs and horses).
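The allomorph choice just illustrated for cats, dogs and horses follows a phonological rule that is simple enough to sketch directly. The phoneme classes below are simplified assumptions for illustration.

```python
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}   # hissing/hushing sounds
VOICELESS = {"p", "t", "k", "f", "θ"}          # voiceless non-sibilants

def plural_allomorph(final_phoneme):
    """Choose the allomorph of {plural} from the stem's final phoneme."""
    if final_phoneme in SIBILANTS:
        return "ǝz"        # horses
    if final_phoneme in VOICELESS:
        return "s"         # cats
    return "z"             # dogs (voiced consonants and vowels)

for word, final in [("cat", "t"), ("dog", "g"), ("horse", "s")]:
    print(word, "-> /" + plural_allomorph(final) + "/")
```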

One of the earliest developed psycholinguistic techniques for studying morphological effects on speech production is the observation and elicitation of speech errors.

Morphologically complex words

A key issue in the discussion of speech production is whether morphologically complex words are constructed afresh whenever speech is uttered, or whether they are fully listed in the mental lexicon.

Stranding errors
Morphological stranding errors show that word stems and word endings are treated separately during at least some stages of the process of speaking. Accordingly, it is suggested that the affixed form is constructed during speech production. If this were not the case, if in fact the complete affixed form were retrieved from the mental lexicon, then we would predict that the affix remains with the relevant stem. For instance:

Target: He has already packed two trunks
Error: He has already trunked two packs

In this case, the morphemes remain in place but are attached to the wrong words, which leads to what we call morpheme stranding. The existence of such speech errors can be taken as evidence that roots (trunk, pack) exist at a separate representational level from the level which stores both the form and semantics of affixes (-s). Thus, the distribution of speech errors involving inflectional or derivational morphology (e.g., intended word grouping → actual word groupment; Melinger, 2003) has become an important source of evidence for developing representational models of speech production.
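A toy sketch of how stranding can arise if this account is right: the syntactic frame holds the affixes in place, and stems are inserted into the open slots, so exchanging the stems leaves the affixes stranded. The frame representation is an illustrative assumption, not a formal model.

```python
# Frame for "He has already packed two trunks": affixes live in the
# frame; None marks an open slot awaiting a stem.
frame = [("He", ""), ("has", ""), ("already", ""), (None, "ed"),
         ("two", ""), (None, "s")]

def realize(frame, stems):
    """Fill open slots with stems in order; affixes stay in place."""
    stems = iter(stems)
    words = []
    for word, affix in frame:
        base = word if word is not None else next(stems)
        words.append(base + affix)
    return " ".join(words)

print(realize(frame, ["pack", "trunk"]))  # intended: ...packed two trunks
print(realize(frame, ["trunk", "pack"]))  # stranding: ...trunked two packs
```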
Inflections
Grammatical marking, including tense and plural markers, is part of the syntactic frame, with affixes attached during speech production; these markers are among the most frequent elements involved in stranding errors in English. However, there are exceptions in the English language: irregular forms, which deviate from the regular rules in a number of grammatical constructions. Not all English plural and past tense forms involve the simple affixation of endings onto stems. English has plenty of irregular plurals, like person → people and mouse → mice, and irregular past tense forms like buy → bought and go → went.
Such forms cannot be predicted by a rule such as the plural rule, and so it would be reasonable to expect these words to be stored as complete forms in the lexicon and accessed as such, rather than being constructed as and when needed. Nevertheless, irregular past tense forms are also involved in English speech errors. For instance:

'I'd know one if I heard it' → 'I'd hear one if I knew it'. The exchange is clearly of the underlying morphemes for the stems know and hear: know is inserted with the abstract past feature associated with the frame at the functional stage, and the subsequent process of specifying word forms results in the insertion of the correct irregular form knew, rather than a regularized knowed.

The distinction between derivational and inflectional affixes is based in part upon whether they change the grammatical category of the word to which they apply. By this criterion, the plural affix for nouns and the number, tense, and aspect affixes for verbs are inflectional, whereas derivational affixes change verbs into nouns (e.g., -tion, as in creation), nouns into verbs (e.g., -ate, as in pulsate), nouns into adjectives (e.g., -ly, as in princely), and so on. However, not all derivational affixes change form class (e.g., un-, mis-). Errors involving derivational prefixes have also been taken to indicate that morphological structure is represented in the production lexicon.
Example: admitting entails inserting . . . entails asserting (FSED)

❖ Words and rules

1) The arguments against the idea that all inflected forms are looked up in the mental lexicon (the full-listing hypothesis):

1- Regular verbs exhibit rule-based behavior in the past tense.
2- Regular plurals are also rule-based in their formation.
3- Another related argument is that children learn at quite an early stage that forming plurals, an inflectional process, is highly regular and therefore predictable, as is making past tenses. The so-called 'wug test' was used in a study with young children who were shown a cartoon picture of a creature, were told it was a wug and were then asked what they would call two of them (Berko, 1958). As predicted, the children said they were wugs.

Morpheme-shift errors indicate that the inflection may not be associated with the verb in the mental lexicon (e.g. point outed). Affixation is applied to the multi-word unit and it accommodates to its context of insertion: the allomorph is appropriate to the base to which it is actually attached, not to the base to which it should have been attached.

Slots and fillers

In essence, the approach as applied to phonological encoding suggests that when a lexeme is converted into a sound pattern, phonetic segments are mapped onto a template for the lexeme, in which metrical and syllabic structures are specified. Since the speaker has a number of words lined up for production (as we know from word exchanges), there is potential for the sounds of these words to be misallocated. The sound errors imply that the misallocation is not random. Rather, errors reflect the need to match the properties of the sounds to those of their places in the template. If there are competing similarly-defined positions in the template (e.g. initial voiceless stop consonants in a stressed syllable, as in car park), then these are likely to attract the sounds, resulting in error.
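The slot-matching constraint can be sketched as follows: each planned syllable is a set of labelled positions, and an error exchanges fillers only between positions of the same type, so onsets swap with onsets (car park → par cark). The representation is an illustrative assumption.

```python
# Each planned syllable as labelled slots (toy broad transcription).
plan = [
    {"onset": "k", "nucleus": "ar", "coda": ""},   # car
    {"onset": "p", "nucleus": "ar", "coda": "k"},  # park
]

def exchange(plan, slot):
    """Swap the fillers of the SAME slot type across the two words,
    respecting the syllable position constraint."""
    err = [dict(syllable) for syllable in plan]
    err[0][slot], err[1][slot] = err[1][slot], err[0][slot]
    return err

def spell(plan):
    return " ".join(s["onset"] + s["nucleus"] + s["coda"] for s in plan)

print(spell(plan))                     # kar park ('car park')
print(spell(exchange(plan, "onset")))  # par kark ('par cark')
```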

Derivational morphology
Derivational morphology involves the construction of new words from base forms. Experimental data indicate that derivationally complex words are stored as such in the mental lexicon:
• respondents' answers vary with derivation, unlike inflection;
• morphological structure is represented in the production lexicon.

Negative prefixes
The negative prefix is represented as NEG in the mental lexicon at the functional level. It may be associated with the wrong position at the positional level, as in the example where, instead of surfacing on precise, it is attached to regard. Another example: 'if there was anything that was unclear' vs. 'if there was nothing that was clear'.

Morphology and the lexicon
One can say that the regular morphology and the irregular lexicon are separate entities; one might imagine the two having very little to do with one another, since the morphology deals only with potential words and the lexicon only with existing words. In fact, the two systems have a great deal to do with one another, for two simple reasons. The first is that they serve the same role in a language: both provide words. The second reason is that morphology and the lexicon are interdependent. Most centrally, the morphology, which forms words from words, finds the words that it operates on (its bases) in the lexicon.

The lexicon and productivity

The term “productivity” is used in a wide variety of contexts. Syntactic rules are “productive” in the sense that they can be used to generate new phrases. The same can be said of some morphological rules. A linguistic process is productive if it can be used to produce novel forms: if a rule is productive, then novel (previously unheard) forms can be created. A word-formation rule or affix is considered productive if it has the ability to coin new words. However, not all affixes possess this property to the same degree, and some affixes do not possess it at all.

For example, nominal -th (as in length) can only attach to a small number of specific words, and cannot attach to any other words beyond that set. This suffix can therefore be considered unproductive. Even among affixes that can in principle be used to coin new words, some seem to be more productive than others. For example, the suffix -ness (as in cuteness, happiness) gives rise to many more new words; this suggests that -ness suffixation is a productive derivational process.

Example 2: the -s plural (books) and the -en plural (children) in English. Here -s is productive and -en is unproductive.

Thus:
• Inflectional morphemes are generally productive.
• Derivational morphemes vary in their productivity.
• Affixes that apply only to a closed set of words are said to be lexicalized (e.g. ante- in antecedent, meaning 'preceding').

The hidden mechanism of productivity

The more access there is to the lexical entry for an affix pattern (parsing), the more productive the affix. Psycholinguistic evidence suggests that the lexicon is a network in which whole words are frequently stored and connected to each other associatively, based on phonological, morphological, and/or semantic similarity. Generalizations about word-form relatedness can be abstracted from these lexical entries and associative connections (e.g., un-X[adj]); these are morphological patterns. Some morphological patterns are more likely to be activated during lexical access than others, but this is not straightforwardly related to the regular vs. irregular distinction. The amount of activation determines the productivity of the pattern. Productivity is thus a product of the structure of the lexicon (and of word processing), and conversely, is informative about the lexicon.

Morphology and lexical stress errors

Lexical stress errors are errors where the correct word has been produced, but with the wrong stress pattern. Since English has many cases of morphologically related words that differ in their stress patterns, it is interesting to note whether the incorrect stress pattern is in fact the correct stress pattern for a related word. If this is the case, then it could imply that there are connections between morphologically related words in the mental lexicon, and that during the process of selecting the target word there has been some interference from a linked word.

Layers of representation:
Phonological encoding has two layers of representation:
• Segmental layer: phonological segments.
• Metrical layer: syllable structure and stress patterns.
Segments and syllable frames are independently retrieved, and the segments are then associated with positions in the syllable frames.
Evidence: the syllable position constraint. Misplaced segments almost always move from their target position to corresponding positions in other syllables (e.g. onsets remain onsets).
Chapter 6: The Use of Gesture

Speakers perform “communicative movements of the hands and arms, which, similar to language, are used to express the thoughts, feelings, and intentions of a speaker . . .” Use of language in the face-to-face context involves production and perception of speech using many visual articulators, such as the lips, face, or hand gestures. These visual articulators convey information relevant to what is expressed in speech, and at different levels. For example, while the lips convey information at the phonological level, hand gestures (and face and head movements to some extent) contribute semantic, pragmatic, and even syntactic information. Gestures that accompany speech, called co-speech or speech-related gestures, are hand, head and arm movements produced by the speaker in a manner that is closely time-locked to the semantic and pragmatic properties of the corresponding speech. Gestures communicate information to listeners, help speakers to formulate and manage their utterances, and play an important role in conversation organization. In short, they convey meanings as part of speech.

Approaches to gestures
Research into the use of gestures in speaking has taken two different approaches:

1- A psychoanalytic approach, which looks for gestures that reveal information about the speaker's emotional and mental state.

2- Linguistic and psycholinguistic approaches, which assume that gestures provide an additional channel of information, complementing the spoken channel.

Phases of gesturing
Neurophysiological studies of listeners facing speakers who gesture while speaking have revealed a close link between speech and gesture for the listener. The evidence is that gestures help speakers to maintain an image of the concepts that they are trying to express by making the connection between the concepts (lemmas) and the linguistic expression (lexemes). Gestures help in accessing a word in the mental lexicon through a mechanism of 'visual priming', just as seeing a picture helps a speaker to find the spoken word used to name that picture; thus, they help in word retrieval.

Gesture phases:
- Rest position: a stable position from which the gesticulation is initiated.
- Preparation phase: a movement away from the rest position to prepare the next phase.
- Gesture stroke: the meaningful part of the gesture.
- Holds: motionless phases potentially occurring before or after the stroke.
- Retraction: a recovery phase during which the hands are returned to a rest position.

Functions of gestures:
Content-related gestures convey the semantic content of the speech.
Discourse management gestures are used for managing the conversation that the speaker is involved in.

1- Gestures as content
Through the analysis of the linguistic use of gestures and experimental work, psycholinguists suggest that there are content-related gestures which contribute to fluency in speech production: speakers who are prevented from using their hands while speaking use more vocabulary and spend more time looking for the appropriate terms than those who are free to use their hands. Content-related gestures can be symbols, indices or icons.

1-1 Symbols: gestures that stand for something, sometimes referred to as emblems. They often replace a whole utterance and can also be used along with information conveyed in a sentence. They are used for interpersonal control (hello, be quiet), to express personal states (I agree, I do not know) and for the evaluation of others (he is crazy).

Gesture: Meaning
- Head nod: Yes
- Head shake: No
- Thumb up: I approve
- Thumb down: I reject
- Greeting wave: Hello
- Farewell wave: Goodbye
- Shoulder shrug: I do not know
- Wink: I'm kidding
- Thumb and index in circle: Excellent
- Protruding lips: Be quiet

1-2 Indices, or indicative gestures, are mainly used to direct the listener's attention to particular objects. These gestures involve a part of the body or an instrument (e.g. the index finger) in a locative action (e.g. pointing at something), depending on cultural differences. Indices accompany speech and coincide with certain parts of it, as in the following example:
Example: Can you put the carpet down here? (pointing with the index finger to the place we wish to put the carpet)
The indicative gesture can be essential in some utterances; without it, comprehension is not possible, as in the following example, where a painter is saying that he painted two walls in a room: “I have painted these two just now”. As we can see, there is no way for the listener to know which walls are meant if the speaker does not use the pointing gesture.

1-3 Iconic gestures: such gestures present images of concrete entities and/or actions. They depict what is being talked about and are used either by the speaker or by the listener as a response (such as when they smile or give a look of surprise). Iconic gestures function as an informative signal by which the speaker transmits a meaning, as in the following utterance:
“Then, the big guy went… (gesture of someone pushing another out of the way) and I just got out of there”
Iconic gestures are found most often before nouns, verbs and adjectives.

Metaphorics, or metaphorical gestures, are true to their name in that they are hand symbols
that represent abstract ideas that are impossible to represent directly.
Example: A fist motion upwards and twisted to the left may stand for the idea of freedom.

Symbols/Emblems:
Emblems, a term first used by the researcher David Efron, are gestures that have very precise meanings known within an ethnic, cultural, or sub-cultural group. Emblems are culturally specific, have standard forms and significances, and vary from place to place. They are used to take the place of speech or to accompany specific words.

For example, circling the index finger around at the side of your head says “He or she is crazy,” and rolling your hands over and over in front of you says “Move on.” In America someone might ask, “Are you OK?” In response, it would be common for someone to place their thumb and forefinger together while raising their other three fingers to form the “OK” sign. However, people from Germany, Brazil, or Russia might interpret this gesture differently, since it is offensive in their cultures.

There are also auditory symbols/emblems, which are sounds that carry similar functions to gestures. For example: a clap = I approve; a hiss = I disapprove.

Symbols as junctions:
Junctions are gestures whose use usually requires more than one person to be actively participating; they are typically joint physical actions, such as shaking hands, hugging, or kissing (Clark, 1996).

Crosslinguistic differences
Indeed, some cultures prescribe deixis with the lips (Enfield, 2001). Deixis entails locating entities and actions in space vis-à-vis a reference point, which Bühler called the origo (Bühler, 1982; Haviland, 2000). Much of the pointing we see in adult conversation and storytelling is not pointing at physically present objects or locations but is abstract pointing, which Bühler referred to as deixis at phantasma. The emergence of abstract pointing is a milestone in children's development.
❖ Discourse management gestures
Discourse management gestures are used for managing the conversation that the speaker is involved in. Such gestures perform a number of discourse management functions.

+Delivery gestures indicate that the subject matter is being communicated to the listener, often by a specific 'handing-over' gesture directed towards a particular recipient.
+Citing gestures are used for citing, such as referring back to an earlier contribution to the conversation by pointing at the addressee who made that earlier contribution.
+Seeking gestures usually request a response: the speaker is trying to get a particular response from an addressee (e.g. by looking at the addressee).
+Turn-taking gestures contribute more obviously to the management of conversation. An example would be a gesture that indicates that another speaker may now have a turn.
+Collateral gestures contribute to the listener's part in the successful completion of the communicative event, such as a head nod done to offer feedback to the speaker or a grimace to express sympathy.

Gesturing for emphasis

While speaking, we might put stress on some words because they carry more important information, or when we wish to reinforce an idea or a message. Batonic (or beat) gestures are rhythmic motions of the hand or fingers that do not seem to bear any representational meaning; they are related to the phonology of the accompanying speech. An example is beating on a table, or pausing while looking at the listener directly, as in: “I have told you before, but it looks like I need to tell you again [pause] you must not use other people's essays in your work without giving references [pause] you must not go over the word limit [pause] and you must not hand your work in after the due date without getting clearance from me first.”

Gestures, conceptualisation and lexicalisation

The study of Kita
In a comparative study of English and Japanese native speakers, Kita (2000) looked at the gestures associated with a description of a sequence in a cartoon story where a cat swings on a rope across a street from one building to another, where he has seen a bird in a window. A linguistic difference between English and Japanese is that while English has an intransitive verb that depicts the down-then-up trajectory of the cat across the street (the cat swings across the street), Japanese does not. Therefore, when Japanese participants talked about this part of the cartoon, they did not use expressions that encoded the arc of the swinging action. Instead, they used phrases which might be translated as 'jump across to' or 'go in the direction of the bird', etc.

The results: Interestingly, the analysis of their gestures showed that the
Japanese participants were likely to use straight-line gestures to show the
movement of the cat from one building to the other, while the
English-speaking participants almost exclusively used arc-like gestures. While
some of the Japanese speakers did use a separate arc-like gesture in addition
to the straight-line gesture, only the latter coincided with the words indicating
the movement.

The conclusion: Gestures can relate either to the pre-linguistic
conceptualisation of what is being talked about, or to the linguistic
formulation, or to both. The Japanese speakers produced the arc gesture at
the conceptual stage and followed it with a straight-line gesture at the
formulation stage, due to the lack of an appropriate lexical item expressing
the idea. The English speakers produced the arc gesture at both the
conceptual and the formulation level, because the verb 'swing' expresses an
agentive change of location with an arc trajectory.

Motion path and manner encoding

Verbs expressing motion may express the manner of motion (run, walk, climb) or
the path/direction of motion (exit, enter, pass). Manner-of-motion verbs encode
their path with an adverbial particle (run to the store, run out of the store, run
into the store). Languages tend to fall into one of two categories:

Verb-framed languages: Information about the path of movement is expressed
in the verb (Arabic, Turkish, Japanese), while the manner is expressed outside
the verb (e.g. gerundive adverbials in Arabic).

Satellite-framed languages: Information about the path is expressed outside the
verb, by adverbial particles (e.g. English). For example, where English
(satellite-framed) says 'the bottle floated out of the cave', a verb-framed
language typically says the equivalent of 'the bottle exited the cave (floating)'.

The motion, path and manner meaning components are all part of the speaker's
intention during the conceptualisation stage in both satellite-framed and
verb-framed languages. The grammatical encoding processes differ depending
on how the language expresses these two components, with different
sequencing in the sentence structures for the two languages.

Empirical results on gesturing: (Kita & Özyürek, 2003)

This encoding difference is also reflected in the gestures used by speakers of
satellite-framed and verb-framed languages. English speakers use a single
hand gesture combining a circling movement for the roll with a sideways
movement of the hand for the direction. Japanese and Turkish speakers are
more likely to show the path and the manner in two separate gestures,
coordinated with the two clauses of the speech.

Spatial frame of reference:

Reference frames are coordinate systems used to interpret linguistic and
nonlinguistic representations of the location, motion, and orientation of
entities. There are three types:

The intrinsic frame of reference is a binary spatial relation in which the location
of an object is defined in relation to a part of another object (its side, back,
front, etc.). Relative frames use one's facing direction (left, right), whereas the
absolute frame of reference is also a binary system in which the location of an
object is defined in relation to arbitrary fixed bearings, such as cardinal
directions (North, South, East, West).

Spatial frame of reference and gesturing

Westerners will use left-to-right gestures for a left-to-right description, in
relative space and regardless of which way they are looking.
Speakers of languages that use absolute spatial relationships will change the
direction of their gestures depending on which direction they are facing: they
keep the absolute direction.
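
To make this contrast concrete, here is a minimal illustrative sketch in Python
(the function name and the compass encoding are assumptions of mine, not
part of any cited study) of how one fixed absolute bearing maps onto different
body-relative gesture directions as the speaker turns:

# Illustrative sketch (assumed encoding, not from the source): bearings are
# degrees clockwise from North (N=0, E=90, S=180, W=270).

def relative_direction(absolute_bearing: float, facing: float) -> str:
    """Map an absolute compass bearing onto a body-relative gesture
    direction for a speaker facing `facing` degrees."""
    angle = (absolute_bearing - facing) % 360
    if angle < 45 or angle >= 315:
        return "ahead"
    if angle < 135:
        return "to the right"
    if angle < 225:
        return "behind"
    return "to the left"

# A referent located due East (bearing 90) of the speaker:
for facing in (0, 90, 180, 270):
    print(f"facing {facing:>3} degrees: gesture {relative_direction(90, facing)}")
# facing   0 degrees: gesture to the right
# facing  90 degrees: gesture ahead
# facing 180 degrees: gesture to the left
# facing 270 degrees: gesture behind

A relative-frame speaker describing a rightward scene keeps gesturing to the
right whichever way they face; an absolute-frame speaker keeps the bearing
fixed, so the body-relative direction of their gesture rotates exactly as in the
output above.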

The function of gestures

Do speakers gesture to facilitate speech production, or to help the listener
understand better? There are two main hypotheses about the role gesture
plays in speech. Jacobs & Garnham (2007) tested the role of conversational
hand gestures in a narrative task to see whether they serve speech production
(the Production Hypothesis) or speech comprehension (the Communication
Hypothesis).

The Communication Hypothesis :

Research suggests that gesturing is beneficial for communication. Indeed,
gestures work to relieve speech ambiguity and to supply additional meaning.
For example, when faced with ambiguity in speech, possibly due to external
factors including background noise, incomprehensible requests, or unclear
speech, a listener tends to rely on gestural input to provide extra information.
Moreover, even if the speech is comprehensible, studies have shown that
participants still exhibit a more accurate understanding of instructions or
narratives when speech is paired with gestures. Co-speech gesture has been
shown to facilitate the learning of new words in a foreign language [8], and
young children may be able to understand the meaning of novel verbs when
presented with gestural information and no speech.

If gestures primarily serve communication, then the number of gestures
produced should vary as a function of the understanding of the listener: the
number of gestures should decrease when an explanation is given several
times to the same listener, or to a listener who already knows the content of
the message being conveyed, whereas relating the same content to different
listeners should not cause the number of gestures to decrease.

The Production Hypothesis :

The second hypothesis is that we use gestures to facilitate lexical access or,
in other words, to aid working memory in word retrieval during speech.
If the primary function of gestures is that of speech production, specifically
aiding lexical access, then:

°Repeating the message should decrease the number of gestures produced.
°Imparting a second (novel) message to a listener who is made aware of
the content of the message should not show a decrease in the number of
gestures required.

Materials: monologue cartoon narration tasks

Experimental conditions: In the first condition, the speaker is required to
repeat the same comic strip story three times to the same listener. In the
second condition, the speaker relates the same message three times to
different listeners. In the third condition, the speaker relates three different
messages to the same listener. In the final condition, the speaker narrates
three different strips to three different listeners.

Results: Condition one (the same strip narrated to one listener) was found to
produce a significantly lower gesture rate than condition two (the same strip
narrated to different listeners). Condition three (different strips related to
informed listeners) was found to produce a significantly lower gesture rate
than condition four (different strips related to different listeners). There was no
significant difference between conditions one and three, or between conditions
two and four. When the listeners already knew what was being described
(either by having heard it described before or by dint of being able to see the
cartoons themselves), the speaker used fewer gestures. These results are as
predicted by the communication hypothesis but not by the speech production
hypothesis, which suggests that gestures primarily serve a communicative
function.
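
The logic of the comparison can be summed up in a small Python sketch (a
hypothetical encoding of mine, not from Jacobs & Garnham's paper): each
condition either repeats the speaker's message or leaves the listener already
informed, and each hypothesis predicts a gesture-rate drop in a different pair
of conditions:

# Hypothetical encoding (mine, not from Jacobs & Garnham, 2007) of the
# four conditions: does the speaker repeat the message, and does the
# listener already know its content?
CONDITIONS = {
    1: {"repeated_message": True,  "informed_listener": True},   # same strip, same listener
    2: {"repeated_message": True,  "informed_listener": False},  # same strip, new listeners
    3: {"repeated_message": False, "informed_listener": True},   # new strips, informed listener
    4: {"repeated_message": False, "informed_listener": False},  # new strips, new listeners
}

def predicts_fewer_gestures(hypothesis: str, condition: int) -> bool:
    c = CONDITIONS[condition]
    if hypothesis == "production":
        # Lexical access only gets easier when the message is repeated.
        return c["repeated_message"]
    if hypothesis == "communication":
        # Gestures are tuned to the listener: they drop once the listener is informed.
        return c["informed_listener"]
    raise ValueError(f"unknown hypothesis: {hypothesis}")

for h in ("production", "communication"):
    drops = [c for c in CONDITIONS if predicts_fewer_gestures(h, c)]
    print(f"{h} hypothesis: fewer gestures in conditions {drops}")
# production hypothesis: fewer gestures in conditions [1, 2]
# communication hypothesis: fewer gestures in conditions [1, 3]
# The observed pattern (lower rates in conditions 1 and 3) matches the
# communication hypothesis.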
