
LANGUAGE ASSESSMENT II

ASSESSING SPEAKING
Lectured by Mahendra Puji P. A., M.Pd.

Written by Group 2
1. Merina Hartikasari (14.1.01.08.0084)
2. Lailatul Khafifah (14.1.01.08.0088)
3. Dewanti Anggariza (14.1.01.08.0095)
4. Nanda Hastutti P (14.1.01.08.0098)
5. Lea Rizky Sukmana P (14.1.01.08.0101)
6. Theresia Ellia (14.1.01.08.0124)
7. Ahmad Arri Dhowi (14.1.01.08.0156)

C CLASS
ENGLISH DEPARTMENT
TEACHER TRAINING AND EDUCATION FACULTY
UNIVERSITY OF NUSANTARA PGRI KEDIRI
2016

Preface

First of all, we give thanks for God's love and grace, and for the help and the chance to finish this assignment on time. We would also like to thank Mr. Mahendra Puji Permana Aji, M.Pd., the lecturer of LANGUAGE LEARNING ASSESSMENT II, who always teaches us and shares much knowledge about how to practice English well. This assignment is one of our English tasks and deals with Assessing Speaking. We realize this assignment is not perfect, but we hope it can be useful. Criticism and suggestions are welcome to make this assignment better.
Hopefully we, as students of the University of Nusantara PGRI Kediri, can work more professionally by using English as a second language in whatever we do. Thank you.
Kediri, October 2016.

Writers

A. ASSESSING SPEAKING.

While speaking is a productive skill that can be directly and empirically observed, those observations are invariably colored by the accuracy and effectiveness of a test-taker's listening skill, which necessarily compromises the reliability and validity of an oral production test. How do you know for certain that a speaking score is exclusively a measure of oral production without the potentially frequent clarifications of an interlocutor? This interaction of speaking and listening challenges the designer of an oral production test to tease apart, as much as possible, the factors accounted for by aural intake.
Another challenge is the design of elicitation techniques. Because most speaking is the product of creative construction of linguistic strings, the speaker makes choices of lexicon, structure, and discourse. If your goal is to have test-takers demonstrate certain spoken grammatical categories, for example, the stimulus you design must elicit those grammatical categories in ways that prohibit the test-taker from avoiding or paraphrasing and thereby dodging production of the target form.
B. TYPES OF SPEAKING.
1. Imitative. At one end of a continuum of types of speaking performance is the ability to simply parrot back (imitate) a word or phrase or possibly a sentence. While this is a purely phonetic level of oral production, a number of prosodic, lexical, and grammatical properties of language may be included in the criterion performance.
2. Intensive. A second type of speaking frequently employed in assessment contexts is
the production of short stretches of oral language designed to demonstrate
competence in a narrow band of grammatical, phrasal, lexical, or phonological
relationships (such as prosodic elements-intonation, stress, rhythm, juncture).
Examples of intensive assessment tasks include directed response tasks, reading
aloud, sentence and dialogue completion.
3. Responsive. Responsive assessment tasks include interaction and test comprehension but at the somewhat limited level of very short conversations, standard greetings and small talk, simple requests and comments, and the like.
Example:
Jeff: Hey, Stef, how's it going?
Stef: Not bad, and yourself?
Jeff: I'm good.
Stef: Cool. Okay, gotta go.

4. Interactive. The difference between responsive and interactive speaking is in the length and complexity of the interaction. Interaction can take two forms: transactional language, which has the purpose of exchanging specific information, and interpersonal exchanges, which have the purpose of maintaining social relationships.
5. Extensive (monologue). Extensive oral production tasks include speeches, oral
presentations, and story-telling, during which the opportunity for oral interaction from
listeners is either highly limited (perhaps to nonverbal responses) or ruled out
altogether.
C. MICRO- AND MACROSKILLS.
The microskills refer to producing the smaller chunks of language such as
phonemes, morphemes, words, collocations, and phrasal units. The macroskills imply the
speaker's focus on the larger elements: fluency, discourse, function, style, cohesion,
nonverbal communication, and strategic options. The micro- and macroskills total roughly
16 different objectives to assess in speaking.
1. Microskills.
a) Produce differences among English phonemes and allophonic variants.
b) Produce chunks of language of different lengths.
c) Produce English stress patterns, words in stressed and unstressed positions,
rhythmic structure, and intonation contours.
d) Produce reduced forms of words and phrases.
e) Use an adequate number of lexical units (words) to accomplish pragmatic
purposes.
f) Produce fluent speech at different rates of delivery.
g) Monitor one's own oral production and use various strategic devices (pauses, fillers, self-corrections, backtracking) to enhance the clarity of the message.
h) Use grammatical word classes (nouns, verbs, etc.), systems (e.g., tense,
agreement, pluralization), word order, patterns, rules, and elliptical forms.
i) Produce speech in natural constituents: in appropriate phrases, pause groups,
breath groups, and sentence constituents.
j) Express a particular meaning in different grammatical forms.
k) Use cohesive devices in spoken discourse.
2. Macroskills.
a) Appropriately accomplish communicative functions according to situations,
participants, and goals.

b) Use appropriate styles, registers, implicature, redundancies, pragmatic conventions, conversation rules, floor-keeping and -yielding, interrupting, and other sociolinguistic features in face-to-face conversations.
c) Convey links and connections between events and communicate such relations as
focal and peripheral ideas, events and feelings, new information and given
information, generalization and exemplification.
d) Convey facial features, kinesics, body language, and other nonverbal cues along
with verbal language.
e) Develop and use a battery of speaking strategies, such as emphasizing key words,
rephrasing, providing a context for interpreting the meaning of words, appealing
for help, and accurately assessing how well your interlocutor is understanding
you.
D. DESIGNING ASSESSMENT TASKS.
a. Imitative Speaking.
Imitative speaking requires students to "parrot back" a word, phrase, or sentence (Brown, 2004).
1) PhonePass Test.
An example of a popular test that uses imitative (as well as intensive) production tasks is PhonePass, a widely used, commercially available speaking test in many countries. Among a number of speaking tasks on the test, repetition of sentences (of 8 to 12 words) occupies a prominent role. The PhonePass test elicits computer-assisted oral production over a telephone. Test-takers read aloud, repeat sentences, say words, and answer questions. With a downloadable test sheet as a reference, test-takers are directed to telephone a designated number and listen for directions. Scores for the PhonePass test are calculated by a computerized scoring template and reported back to the test-taker within minutes. Six scores are given: an overall score between 20 and 80 and five subscores on the same scale that rate pronunciation, reading fluency, repeat accuracy, repeat fluency, and listening vocabulary.
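The six-score report described above can be sketched in code. This is a minimal model only: the five subscore names and the 20-80 scale come from the text, but the idea that the overall score is the mean of the subscores is an assumption made purely for illustration; the actual PhonePass scoring algorithm is proprietary and not specified here.

```python
# Hypothetical model of a PhonePass-style score report.
# ASSUMPTION: the overall score is the rounded mean of the five
# subscores; the real scoring template is not public.

SUBSCORES = ["pronunciation", "reading fluency", "repeat accuracy",
             "repeat fluency", "listening vocabulary"]

def score_report(subscores):
    """Validate five subscores (20-80 each) and build a six-score report."""
    for name, value in subscores.items():
        assert name in SUBSCORES, f"unknown subscore: {name}"
        assert 20 <= value <= 80, "scores fall on a 20-80 scale"
    overall = round(sum(subscores.values()) / len(subscores))
    return {"overall": overall, **subscores}

report = score_report({
    "pronunciation": 55, "reading fluency": 60, "repeat accuracy": 50,
    "repeat fluency": 58, "listening vocabulary": 62,
})
print(report["overall"])  # rounded mean of the five subscores
```

The report is returned within minutes by the real system; here the point is simply that one overall figure plus five analytic figures travel together in a single result.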
b. Intensive Speaking.
At the intensive level, test-takers are prompted to produce short stretches of
discourse (no more than a sentence) through which they demonstrate linguistic ability
at a specified level of language.

1) Directed Response Tasks.


In this type of task, the test administrator elicits a particular grammatical form or a transformation of a sentence. Such tasks are clearly mechanical and not communicative, but they do require minimal processing of meaning in order to produce the correct grammatical output.
2) Read-Aloud Tasks.
Intensive reading-aloud tasks include reading beyond the sentence level up to a
paragraph or two. This technique is easily administered by selecting a passage
that incorporates test specs and by recording the test-taker's output; the scoring is
relatively easy because all of the test-taker's oral production is controlled.
Because of the results of research on the PhonePass test, reading aloud may
actually be a surprisingly strong indicator of overall oral production ability.
For many decades, foreign language programs have used reading passages to analyze oral production. Prator's (1972) Manual of American English Pronunciation included a "diagnostic passage" of about 150 words that students could read aloud into a tape recorder. Teachers listening to the recording would
then rate students on a number of phonological factors (vowels, diphthongs,
consonants, consonant clusters, stress, and intonation) by completing a two-page
diagnostic checklist on which all errors or questionable items were noted. These
checklists ostensibly offered direction to the teacher for emphases in the course
to come.
Underhill (1987, pp. 77-78) suggested some variations on the task of simply reading a short passage:
- reading a scripted dialogue, with someone else reading the other part
- reading sentences containing minimal pairs, for example: Try not to heat/hit the pan too much. The doctor gave me a bill/pill.
- reading information from a table or chart


If reading aloud shows certain practical advantages (predictable output, practicality, reliability in scoring), there are several drawbacks to using this technique for assessing oral production. Reading aloud is somewhat inauthentic in that we seldom read anything aloud to someone else in the real world, with the exception of a parent reading to a child, occasionally sharing a written story with someone, or giving a scripted oral presentation. Also, reading aloud calls on certain specialized oral abilities that may not indicate one's pragmatic ability to communicate orally in face-to-face contexts. You should therefore employ this technique with some caution, and certainly supplement it as an assessment task with other, more communicative procedures.
3) Sentence/Dialogue Completion Tasks and Oral Questionnaires
Another technique for targeting intensive aspects of language requires test-takers to read a dialogue in which one speaker's lines have been omitted. Test-takers are first given time to read through the dialogue to get its gist and to think about appropriate lines to fill in. Then, as the tape, teacher, or test administrator produces one part orally, the test-takers respond.
An advantage of this technique lies in its moderate control of the output of
the test-taker. While individual variations in responses are accepted, the
technique taps into a learner's ability to discern expectancies in a conversation
and to produce sociolinguistically correct language. One disadvantage of this
technique is its reliance on literacy and an ability to transfer easily from written
to spoken English. Another disadvantage is the contrived, inauthentic nature of
this task: Couldn't the same criterion performance be elicited in a live interview
in which an impromptu role-play technique is used?
Perhaps more useful is a whole host of shorter dialogues of two or three
lines, each of which aims to elicit a specified target. In the following examples,
somewhat unrelated items attempt to elicit the past tense, future tense, yes/no
question formation, and asking for the time. Again, test-takers see the stimulus
in written form.
One could contend that performance on these items is responsive, rather than intensive. True, the discourse involves responses, but there is a degree of control here that predisposes the test-taker to respond with certain expected forms.
Underhill (1987) describes yet another technique that is useful for
controlling the test-taker's output: form-filling, or what I might rename "oral
questionnaire." Here the test-taker sees a questionnaire that asks for certain

categories of information (personal data, academic information, job experience,


etc.) and supplies the information orally.
4) Picture-Cued Tasks
One of the more popular ways to elicit oral language performance at both
intensive and extensive levels is a picture-cued stimulus that requires a
description from the test-taker. Pictures may be very simple, designed to elicit
a word or a phrase; somewhat more elaborate and "busy"; or composed of a
series that tells a story or incident.
Notice that a little sense of humor is injected here: the family, bundled up
in their winter coats, is looking forward to leaving the wintry scene behind
them! A touch of authenticity is added in that almost everyone can identify
with looking forward to a vacation on a tropical island.
Scoring responses on picture-cued intensive speaking tasks varies, depending on the expected performance criteria. The tasks above that asked just for one-word or simple-sentence responses can be evaluated simply as "correct" or "incorrect." The three-point rubric (2, 1, and 0) suggested earlier may apply as well, with these modifications (see table):
Opinions about paintings, persuasive monologues, and directions on a map
create a more complicated problem for scoring. More demand is placed on the
test administrator to make calculated judgments, in which case a modified form
of a scale such as the one suggested for evaluating interviews (below) could be
used:

- Grammar
- Vocabulary
- Comprehension
- Fluency
- Pronunciation
- Task (accomplishing the objective of the elicited task)

Each category may be scored separately, with an additional composite score that attempts to synthesize overall performance. To attend to so many factors, you will probably need to have an audiotaped recording for multiple listening.
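The per-category-plus-composite scoring just described can be sketched as follows. The six category names come from the checklist above; the 0-5 point scale and the equal default weights are assumptions for illustration only, not a published rubric.

```python
# Sketch of analytic scoring for an oral task: each checklist category
# gets its own score, and a composite synthesizes overall performance.
# ASSUMPTIONS: a 0-5 scale per category and equal default weights.

CATEGORIES = ["grammar", "vocabulary", "comprehension",
              "fluency", "pronunciation", "task"]

def composite(scores, weights=None):
    """Weighted average of the six category scores."""
    weights = weights or {c: 1.0 for c in CATEGORIES}
    total_weight = sum(weights[c] for c in CATEGORIES)
    return sum(scores[c] * weights[c] for c in CATEGORIES) / total_weight

scores = {"grammar": 3, "vocabulary": 4, "comprehension": 4,
          "fluency": 2, "pronunciation": 3, "task": 5}
print(round(composite(scores), 2))  # with equal weights: a plain average
```

Keeping the category scores alongside the composite preserves the very information a holistic score obscures, which is the argument the text makes for analytic scoring.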
One moderately successful picture-cued technique involves a pairing of
two test-takers. They are supplied with a set of four identical sets of numbered
pictures, each minimally distinct from the others by one or two factors.
The task here is simple and straightforward and clearly in the intensive
category as the test-taker must simply produce the relevant linguistic markers.
Yet it is still the task of the test administrator to determine a correctly produced
response and a correctly understood response since sources of incorrectness
may not be easily pinpointed. If the pictorial stimuli are more complex than the
above item, greater burdens are placed on both speaker and listener, with
consequently greater difficulty in identifying which committed the error.
5) Translation (of Limited Stretches of Discourse)
Translation is a part of our tradition in language teaching that we tend to discount or disdain, if only because our current pedagogical stance plays down its importance. Translation methods of teaching are certainly passé in an era of direct approaches to creating communicative classrooms. But we should remember that in countries where English is not the native or prevailing language, translation is a meaningful communicative device in contexts where the English user is called on to be an interpreter. Also, translation is a well-proven communication strategy for learners of a second language.
Under certain constraints, then, it is not far-fetched to suggest translation as a device to check oral production. Instead of offering pictures or written stimuli, the test-taker is given a native language word, phrase, or sentence and is asked to translate it. Conditions may vary from expecting an instant translation of an orally elicited linguistic target to allowing more thinking time before producing a translation of somewhat longer texts, which may optionally be offered to the test-taker in written form.
c. Responsive Speaking.

Assessment of responsive tasks involves brief interactions with an interlocutor, differing from intensive tasks in the increased creativity given to the test-taker and from interactive tasks by the somewhat limited length of utterances.
1) Question and Answer
Question-and-answer tasks can consist of one or two questions from an
interviewer, or they can make up a portion of a whole battery of questions and
prompts in an oral interview. The first question is intensive in its purpose; it is a
display question intended to elicit a predetermined correct response. We have
already looked at some of these types of questions in the previous section.
Questions at the responsive level tend to be genuine referential questions in which the test-taker is given more opportunity to produce meaningful language in response.
Responsive questions may take the following forms:
Questions eliciting open-ended responses (see table)
Table Elicitation of questions from the test-taker (see table)
2) Giving Instructions and Directions
We are all called on in our daily routines to read instructions on how to
operate an appliance, how to put a bookshelf together, or how to create a
delicious clam chowder. Somewhat less frequent is the mandate to provide such instructions orally, but this speech act is still relatively common. Using such a stimulus in an assessment context provides an opportunity for the test-taker to engage in a relatively extended stretch of discourse, to be very clear and specific, and to use appropriate discourse markers and connectors. The technique is simple: the administrator poses the problem, and the test-taker responds. Scoring is based primarily on comprehensibility and secondarily on other specified grammatical or discourse categories. Here are some possibilities.
Table Eliciting instructions or directions (see table)
This task can be designed to be more complex, thus placing it in the
category of extensive speaking. If your objective is to keep the response short
and simple, then make sure your directive does not take the test-taker down a
path of complexity that he or she is not ready to face.
3) Paraphrasing

Another type of assessment task that can be categorized as responsive asks the test-taker to read or hear a limited number of sentences (perhaps two to five) and produce a paraphrase of the sentence. For example:
Paraphrasing a story
Paraphrasing a phone message
4) Test of Spoken English (TSE®)
Somewhere straddling responsive, interactive, and extensive speaking tasks lies another popular commercial oral production assessment, the Test of Spoken English (TSE). The TSE is a 20-minute audiotaped test of oral language ability within an academic or professional environment. TSE scores are used by many North American institutions of higher education to select international teaching assistants.
The tasks on the TSE are designed to elicit oral production in various discourse categories rather than in selected phonological, grammatical, or lexical targets. The following content specifications for the TSE represent the discourse and pragmatic contexts assessed in each administration.
a. Describe something physical.
b. Narrate from presented material.
c. Summarize information of the speaker's own choice.
d. Give directions based on visual materials.
e. Give instructions.
f. Give an opinion.
g. Support an opinion.
h. Compare/contrast.
i. Hypothesize.
j. Function "interactively."
k. Define.
Using these specifications, Lazaraton and Wagner (1996) examined 15 different specific tasks in collecting background data from native and non-native speakers of English:
a. giving a personal description
b. describing a daily routine
c. suggesting a gift and supporting one's choice
d. recommending a place to visit and supporting one's choice
e. giving directions
f. describing a favorite movie and supporting one's choice
g. telling a story from pictures
h. hypothesizing about future action
i. hypothesizing about a preventative action
j. making a telephone call to the dry cleaner
k. describing an important news event
l. giving an opinion about animals in the zoo
m. defining a technical term
n. describing information in a graph and speculating about its implications
o. giving details about a trip schedule


From their findings, the researchers were able to report on the validity of the tasks, especially the match between the intended task functions and the actual output of both native and non-native speakers. Following is a set of sample items as they appear in the TSE Manual, which is downloadable from the TOEFL website (see reference on page 167). (see Table Test of Spoken English sample items)
TSE test-takers are given a holistic score ranging from 20 to 60, as described in the TSE Manual (see Table 7.1 Test of Spoken English scoring guide (1995)).
Holistic scoring taxonomies such as these imply a number of abilities that comprise "effective" communication and "competent" performance of the task. The original version of the TSE (1987) specified three contributing factors to a final score on "overall comprehensibility": pronunciation, grammar, and fluency. The current scoring scale of 20 to 60 listed above incorporates task performance, function, appropriateness, and coherence as well as the form-focused factors. From reported scores, institutions are left to determine their own threshold levels of acceptability, but because scoring is holistic, they will not receive an analytic score of how each factor breaks down (see Douglas & Smith, 1997, for further information). Classroom teachers who propose to model oral production assessments after the tasks on the TSE must, in order to provide some washback effect, be more explicit in analyzing the various components of test-takers' output. Such scoring rubrics are presented in the next section. Following is a summary of information on the TSE: (see table Test of Spoken English (TSE))
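Because institutions receive only the holistic 20-60 score, their acceptability decision reduces to applying a local cutoff. A minimal sketch of that decision, with the threshold values invented purely for illustration:

```python
# Sketch of an institution applying its own threshold to a holistic
# TSE-style score. The 20-60 scale comes from the text; the example
# cutoffs (50, 40) are INVENTED for illustration.

def acceptability(score, threshold=50):
    """Map a holistic score to a local accept/reject decision."""
    assert 20 <= score <= 60, "TSE holistic scores range from 20 to 60"
    return "acceptable" if score >= threshold else "below threshold"

print(acceptability(55))                # meets the default cutoff
print(acceptability(45))                # falls short of it
print(acceptability(45, threshold=40))  # a more lenient institution
```

The limitation the text notes is visible here: the decision can only use the single holistic number, with no way to weigh pronunciation against coherence or any other factor.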
d. Interactive Speaking.
Interactive tasks are what some would describe as interpersonal, while the final category includes more transactional speech events.
1) Interview
When"oral production assessment" is mentioned, the first thing that comes to
mind is an oral interview: a test administrator and a test-taker sit down in a direct
face-toface exchange and proceed through a protocol of questions and directives.
Interviews can vary in length from perhaps five to forty-five minutes, depending
on their purpose and context.

Every effective interview contains a number of mandatory stages. Two decades ago, Michael Canale (1984) proposed a framework for oral proficiency testing that has withstood the test of time. He suggested that test-takers will perform at their best if they are led through four stages:
a) Warm-up. In a minute or so of preliminary small talk, the interviewer directs
mutual introductions, helps the test-taker become comfortable with the
situation, apprises the test-taker of the format, and allays anxieties. No scoring
of this phase takes place.
b) Level check. Through a series of preplanned questions, the interviewer
stimulates the test-taker to respond using -expected or predicted forms and
functions.
c) Probe. Probe questions and prompts challenge test-takers to go to the heights of their ability, to extend beyond the limits of the interviewer's expectation through increasingly difficult questions.
d) Wind-down. This final phase of the interview is simply a short period of time during which the interviewer encourages the test-taker to relax with some easy questions, sets the test-taker's mind at ease, and provides information about when and where to obtain the results of the interview. This part is not scored.
The suggested set of content specifications for an oral interview (below) may serve as sample questions that can be adapted to individual situations. Here are some possible questions, probes, and comments that fit those specifications.
Warm-up:
How are you?
What's your name?
Level check:
How long have you been in this (country, city)?
Tell me about your family.
What is your (academic major, professional interest, job)?
Probe:
What are your goals for learning English in this program?
Describe your (academic field, job) to me.
What do you like and dislike about it?
Wind-down:
Did you feel okay about this interview?
What are your plans for (the weekend, the rest of today, the future)?
Do you have any questions you want to ask me? It was interesting to talk with you.
2) Role Play
Role playing is a popular pedagogical activity in communicative
language-teaching classes. Within constraints set forth by the guidelines, it frees
students to be somewhat creative in their linguistic output. In some versions, role
play allows some rehearsal time so that students can map out what they are going
to say. And it has the effect of lowering anxieties as students can, even for a few
moments, take on the persona of someone other than themselves.
3) Discussions and Conversations
As formal assessment devices, discussions and conversations with and
among students are difficult to specify and even more difficult to score. But as
informal techniques to assess learners, they offer a level of authenticity and
spontaneity that other assessment techniques may not provide.
*politeness, formality, and other sociolinguistic factors.
4) Games
Among informal assessment devices are a variety of games that directly
involve language production. Consider the following types:
a) Assessment games
Crossword puzzles are created in which the names of all members of a class are clued by obscure information about them. Each class member must ask questions of others to determine who matches the clues in the puzzle.
City maps are distributed to class members. Predetermined map directions are given to one student who, with a city map in front of him or her, describes the route to a partner, who must then trace the route and get to the correct final destination.

5) Oral Proficiency Interview (OPI)
The best-known oral interview format is one that has gone through a considerable
metamorphosis over the last half-century, the Oral Proficiency Interview (OPI).
Originally known as the Foreign Service Institute (FSI) test, the OPI is the result
of a historical progression of revisions under the auspices of several agencies,
including the Educational Testing Service and the American Council on Teaching
Foreign Languages (ACTFL).
e. Extensive Speaking.
Extensive speaking tasks involve complex, relatively lengthy stretches of
discourse. They are frequently variations on monologues, usually with minimal verbal
interaction.
1) Oral Presentations
In the academic and professional arenas, it would not be uncommon to be called on to present a report, a paper, a marketing plan, a sales idea, a design of a new product, or a method. A summary of oral assessment techniques would
therefore be incomplete without some consideration of extensive speaking tasks.
Once again the rules for effective assessment must be invoked: (a) specify the
criterion, (b) set appropriate tasks, (c) elicit optimal output, and (d) establish
practical, reliable scoring procedures. And once again scoring is the key assessment challenge.
For oral presentations, a checklist or grid is a common means of scoring or
evaluation. Holistic scores are tempting to use for their apparent practicality, but
they may obscure the variability of performance across several subcategories,
especially the two major components of content and delivery. Following is an example of a checklist for a prepared oral presentation at the intermediate or advanced level of English.
Such a checklist is reasonably practical. Its reliability can vary if clear
standards for scoring are not maintained. Its authenticity can be supported in that
all of the items on the list contribute to an effective presentation. The washback effect of such a checklist will be enhanced by written comments from the teacher, a conference with the teacher, peer evaluations using the same form, and self-assessment.
2) Picture-Cued Story-Telling
One of the most common techniques for eliciting oral production is
through visual pictures, photographs, diagrams, and charts. We have already looked at this elicitation device for intensive tasks, but at this level we consider a picture or a series of pictures as a stimulus for a longer story or description.
Consider the following set of pictures:
It's always tempting to throw any picture sequence at test-takers and have
them talk for a minute or so about them. But as is true of every assessment of
speaking ability, the objective of eliciting narrative discourse needs to be clear. In
the above example (with a little humor added!), are you testing for oral
vocabulary (girl, alarm, coffee, telephone, wet, cat, etc.), for time relatives
(before, after, when), for sentence connectors (then, and then, so), for past tense of
irregular verbs (woke, drank, rang), and/or for fluency in general? If you are eliciting specific grammatical or discourse features, you might add to the
directions something like "Tell the story that these pictures describe. Use the past
tense of verbs." Your criteria for scoring need to be clear about what it is you are
hoping to assess. Refer back to some of the guidelines suggested under the section
on oral interviews, above, or to the OPI for some general suggestions on scoring
such a narrative.
3) Retelling a Story or News Event
In this type of task, test-takers hear or read a story or news event that they
are asked to retell. This differs from the paraphrasing task discussed above (pages
161-162) in that it is a longer stretch of discourse and a different genre. The
objectives in assigning such a task vary from listening comprehension of the original to production of a number of oral discourse features (communicating sequences and relationships of events, stress and emphasis patterns, "expression" in the case of a dramatic story), fluency, and interaction with the hearer. Scoring should of course meet the intended criteria.
4) Translation (of Extended Prose)
Translation of words, phrases, or short sentences was mentioned under the category of intensive speaking. Here, longer texts are presented for the test-taker to read in the native language and then translate into English. Those texts could come in many forms: dialogue, directions for assembly of a product, a synopsis of a story or play or movie, directions on how to find something on a map, and other genres. The advantage of translation is in the control of the content, vocabulary, and, to some extent, the grammatical and discourse features. The disadvantage is that translation of longer texts is a highly specialized skill for which some individuals obtain post-baccalaureate degrees! To judge a nonspecialist's oral language ability on such a skill may be completely invalid, especially if the test-taker has not engaged in translation at this level. Criteria for scoring should therefore take into account not only the purpose in stimulating a translation but the possibility of errors that are unrelated to oral production ability.

REFERENCES.
Brown, H. Douglas (2004). Language Assessment: Principles and Classroom Practices. New York: Pearson Education, Inc.
