Chapters 1–3 (Lecture Only, for Midterm)
TRANSLATION STUDIES
================================================
VISION
MISSION
CPSU is committed to producing competent graduates who can generate and extend leading technologies
in multi-disciplinary areas beneficial to the community.
GOAL
CPSU commits to deliver quality higher and advanced education through instruction, research,
extension, production and administrative support services.
1. Continually improve its Quality Management System (QMS) at par with international standards;
2. Provide timely, efficient and effective delivery of products and services;
3. Satisfy the needs and expectations of the customers and relevant interested parties; and
4. Uphold applicable statutory, regulatory, organizational and international standard requirements.
Note to the Readers
This instructional learning material aims to give you a critical but balanced survey of many of the most
important trends and contributions to translation studies. It specifically aims to enable you to develop your
understanding of the issues and associated technical language (metalanguage), and to begin to apply
the models for yourself.
As a student, you are also encouraged to carry out a closer examination of specific issues and to pursue
further reading in those areas that are of your greatest interest. In this way, this material may provide a
stimulating introduction to a range of theoretical approaches to translation that are relevant to your
academic study of translation.
Each of the chapters in this material surveys a major area of the discipline. Each is designed to be self-
standing, so that you can quickly find the descriptions that are of most interest to your focus.
The ideas are presented in a detailed and interconnected manner, progressing from the introductory (the
main issues of translation studies in Chapter 1) to the more complex, as you become more accustomed
to the terminology and concepts. In general, the
progression is chronological, from pre-twentieth-century theory in Chapter 2 to linguistic-oriented theories
(Chapters 3 to 6) and to more recent developments from cultural studies such as postcolonialism and
from sociology (Chapter 7).
COURSE OUTLINE
MIDTERM TOPICS
Chapter 1 discusses what we mean by ‘translation’ and what the scope is of the discipline of translation
studies. It discusses the three types of translation defined by Jakobson: intralingual, interlingual and
intersemiotic. It then presents the well-known Holmes/Toury conceptual map of the discipline, and
critiques it with new conceptualizations and knowledge structures used in the construction of the online
publications database, the Benjamins Translation Studies Bibliography.
Chapter 2 describes some of the major issues that are discussed in writings about translation up to the
twentieth century. Covering a huge range of over 2,000 years, beginning with Cicero in the first century
BCE, it focuses on the ‘literal vs. free’ translation debate, an imprecise and circular debate from which
theorists have emerged only in the last sixty years. The chapter describes some of the classic writings on
translation over the years, making a selection of the most well-known and readily available sources.
Chapter 3 deals with the concepts of meaning, equivalence and ‘equivalent effect’. Translation theory in
the 1960s under Eugene Nida shifted the emphasis to the receiver of the message. This chapter
encompasses Nida’s model of translation transfer, influenced by Chomsky’s generative grammar, and
his concepts of formal equivalence and dynamic equivalence. Newmark’s similarly influential categories
of semantic translation and communicative translation are also discussed, as is Koller’s analysis of
equivalence.
Chapter 4 overviews attempts that have been made to describe the product and process of translation.
These include classifications of the linguistic changes or ‘shifts’ which occur in translation. The main
model described here is Vinay and Darbelnet’s classic taxonomy, but reference is also made to other
traditions, such as Loh’s English-Chinese model, and to Catford’s linguistic model. The latter part of the
chapter introduces some of the work that has been conducted from a cognitive perspective, which seeks
to explain message processing and how translation as communication is achieved. This section covers
the interpretive model of the Paris School, Gutt’s work on relevance theory and recent advances in
empirical studies.
Chapter 5 covers Reiss and Vermeer’s text-type and skopos theory of the 1970s and 1980s and Nord’s
text-linguistic approach. In this chapter, translation is analysed according to text type and function in the
TL culture, and prevailing concepts of text analysis – such as word order, information structure and
thematic progression – are employed. Hybrid and multimodal text genres are also discussed.
Linked closely to the previous chapter, Chapter 6 moves on to consider House’s recently modified
Register analysis model and the development of discourse-oriented approaches in the 1990s by Baker,
and Hatim and Mason, who make use of Hallidayan linguistics to examine translation as communication
within a sociocultural context.
CHAPTER 1 :
Main Issues of Translation Studies
The main aim of this learning material is to introduce you to the major concepts and models of translation
studies. Because the research being undertaken in this field is now so extensive, the discussions selected
are necessarily only representative and illustrative of the major trends.
This particularly focuses on written translation rather than oral translation (the latter is commonly known
as interpreting or interpretation), although the overlaps make a clear distinction impossible (cf. Gile
2004).
More subtly, interpreting is defined, by Otto Kade, as ‘a form of Translation (in the wider sense) in which
(a) the source language text is presented only once and thus cannot be reviewed or replayed, and (b)
the target language text is produced under time pressure, with little chance for correction and revision’
(Pöchhacker 2009: 133, following Kade 1968).
The English term translation, first attested in around 1340, derives either from Old French translation or
more directly from the Latin translatio (‘transporting’), itself coming from the participle of the verb
transferre (‘to carry over’). In the field of languages, translation today has several meanings:
(1) the general subject field or phenomenon (‘I studied translation at university’)
(2) the product – that is, the text that has been translated (‘they published the Arabic translation of the
report’)
(3) the process of producing the translation, otherwise known as translating (‘translation service’).
The process of translation between two different written languages involves the changing of an original
written text (the source text or ST) in the original verbal language (the source language or SL) into a
written text (the target text or TT) in a different verbal language (the target language or TL). This
corresponds to ‘interlingual translation’, one of the three categories of translation described by the
structuralist Roman Jakobson:
(1) intralingual translation, or ‘rewording’ – ‘an interpretation of verbal signs by means of other signs of
the same language’
(2) interlingual translation, or ‘translation proper’ – ‘an interpretation of verbal signs by means of some
other language’
(3) intersemiotic translation, or ‘transmutation’ – ‘an interpretation of verbal signs by means of signs of
non-verbal sign systems’. (Jakobson 1959/2012: 127)
These definitions draw on semiotics, the general science of communication through signs and sign
systems, of which language is but one (Cobley 2001, Malmkjær 2011).
➢ The use of the term semiotics is significant here because translation is not always limited to
verbal languages. Intersemiotic translation, for example, occurs when a written text is translated into a
different mode, such as music, film or painting.
➢ Intralingual translation would occur when we produce a summary or otherwise rewrite a text in
the same language, say a children’s version of an encyclopedia. It also occurs when we rephrase an
expression in the same language.
In the following example, revenue nearly tripled is a kind of intralingual translation of the first part of the
sentence, a fact that is highlighted by the trigger expression in other words.
In the decade before 1989 revenue averaged around [NZ]$1 billion a year while in the decade after
it averaged nearly [NZ]$3 billion a year – in other words, revenue nearly tripled.
➢ It is interlingual translation, between two different verbal sign systems, that has been the
traditional focus of translation studies.
Throughout history, written and spoken translations have played a crucial role in inter-human
communication, not least in providing access to important texts for scholarship and religious purposes.
As world trade has grown, so has the importance of translation.
In the English-speaking world, this discipline is now generally known as ‘translation studies’, thanks to
the Dutch-based US scholar James S. Holmes (1924–1986). In his key defining paper delivered in 1972,
but not widely available until 1988, Holmes describes the then nascent discipline as being concerned
with ‘the complex of problems clustered round the phenomenon of translating and translations’ (Holmes
1988b/2004: 181).
There are four (4) very visible ways in which translation studies has become more prominent.
Unsurprisingly, these reflect a basic tension between the practical side of professional translating and
the often more abstract research activity of the field.
First, just as the demand for translation has soared, so has there been a vast expansion in specialized
translating and interpreting programs at both undergraduate and postgraduate level. These programs,
which attract thousands of students, are mainly oriented towards training future professional commercial
translators and interpreters and serve as highly valued entry-level qualifications for the professions.
Second, the past decades have also seen a proliferation of conferences, books and journals on
translation in many languages. Longer-standing international translation studies journals such as Babel
(the Netherlands) and Meta (Canada), first published in 1955, were joined by TTR (Traduction,
terminologie, rédaction, Canada) in 1988, Target (the Netherlands) in 1989, Perspectives (Denmark) in
1993 and The Translator (UK) in 1995.
Third, as the number of publications has increased so has the demand for general and analytical
instruments such as anthologies, databases, encyclopedias, handbooks and introductory texts. Their
number is ever-growing. Among these are Translation Studies (Bassnett 1980/1991/2002/2013),
Contemporary Translation Theories (Gentzler 1993/2001), The Routledge Handbook of Translation
Studies (Millán and Bartrina 2013) and A Companion to Translation Studies (Bermann and Porter 2014).
The best known searchable online bibliographies are Translation Studies Bibliography (John
Benjamins/Routledge) and the free-access BITRA (University of Alicante).
Fourth, international organizations have also prospered. The Fédération Internationale des
Traducteurs (International Federation of Translators, FIT) was established in 1953 by the Société
française des traducteurs and its president Pierre-François Caillé (1907–79). It brought together national
associations of translators.
Writings on the subject of translating go far back in recorded history. The practice of translation was
crucial for the early dissemination of key cultural and religious texts and concepts. In the west, the
different ways of translating were discussed by, among others, Cicero and Horace (first century BCE)
and St Jerome (fourth century CE).
In St Jerome’s case, his approach to translating the Greek Septuagint Bible into Latin would affect later
translations of the Scriptures. While the practice of translation is long established, the study of the field
developed into an academic discipline only in the latter part of the twentieth century.
Before that, translation had often been relegated to an element of language learning. In fact, from the
late eighteenth century to the 1960s and beyond, language learning in secondary schools in many
countries had come to be dominated by what was known as grammar-translation (Cook, 2010).
Study of a work in translation was generally frowned upon once the student had acquired the necessary
skills to read the original. Grammar-translation therefore fell into increasing disrepute, particularly in many
English-language countries, with the rise of alternative forms of language teaching such as the direct
method and the communicative approach from the 1960s and 1970s (Cook 2010).
The communicative approach stressed students’ natural capacity to learn language and attempted to
replicate ‘authentic’ language-learning conditions in the classroom. It often privileged spoken over written
forms, at least initially, and generally avoided use of the students’ mother tongue. This led to the
abandoning of translation in language learning.
Another area in which translation became the subject of research was contrastive linguistics. This is
the study of two languages in contrast in an attempt to identify general and specific differences between
them. It developed into a systematic area of research in the USA from the 1930s onwards and came to
the fore in the 1960s and 1970s.
A seminal paper in the development of the field as a distinct discipline was James S. Holmes’s ‘The
name and nature of translation studies’ (Holmes, 2004).
Crucially, Holmes put forward an overall framework, describing what translation studies covers. This
framework was subsequently presented by the leading Israeli translation scholar Gideon Toury as in
Figure 1.1.
In Holmes’s explanations of this framework (Holmes 2004), the objectives of the ‘pure’ areas of research
are: (1) the description of the phenomena of translation; and (2) the establishment of general principles
to explain and predict such phenomena (translation theory).
The ‘theoretical’ branch is divided into general and partial theories. By ‘general’, Holmes is referring to
those writings that seek to describe or account for every type of translation and to make generalizations
that will be relevant for translation as a whole. ‘Partial’ theoretical studies are restricted according to the
parameters discussed below (medium, text-type, etc.).
The descriptive branch of ‘pure’ research in Holmes’s map is known as descriptive translation studies
(DTS, see Chapter 7). It may examine: (1) the product; (2) the function; and (3) the process.
(1) Product-oriented DTS examines existing translations. This may involve the description or analysis
of a single ST–TT pair or a comparative analysis of several TTs of the same ST (into one or more TLs).
These smaller-scale studies can build up into a larger body of translation analysis looking at a specific
period, language or text/discourse type.
(2) By function-oriented DTS, Holmes means the description of the ‘function [of translations] in the
recipient sociocultural situation: it is a study of contexts rather than texts’. Issues that may be researched
include which texts were translated when and where, and the influences that were exerted.
Examples include the study of the translation and reception of Shakespeare in European languages, and
the subtitling of contemporary cartoon films into Arabic. Holmes terms this area ‘socio-translation studies’.
Nowadays it would probably be called the sociology and historiography of translation.
(3) Process-oriented DTS in Holmes’s framework is concerned with the psychology of translation, i.e. it
is concerned with trying to find out what happens in the mind of a translator. Work from a cognitive
perspective includes think-aloud protocols (where recordings are made of translators’ verbalization of the
translation process as they translate).
◼ Medium-restricted theories subdivide according to translation by machine and humans, with further
subdivisions according to whether the machine/ computer is working alone (automatic machine
translation) or as an aid to the human translator (computer-assisted translation), to whether the
human translation is written or spoken and to whether spoken translation (interpreting) is consecutive
or simultaneous.
◼ Area-restricted theories are restricted to specific languages or groups of languages and/or cultures.
Holmes notes that language-restricted theories (e.g. for the Japanese–English pair) are closely
related to work in contrastive linguistics and stylistics.
◼ Rank-restricted theories are linguistic theories that have been restricted to a level of (normally) the
word or sentence. At the time Holmes was writing, there was already a trend towards text linguistics,
i.e. analysis at the level of the text, which has since become far more popular.
◼ Text-type restricted theories look at discourse types and genres; e.g. literary, business and
technical translation.
◼ The term time-restricted is self-explanatory, referring to theories and translations limited according
to specific time frames and periods. The history of translation falls into this category.
◼ Problem-restricted theories may refer to certain problems such as equivalence (a key issue that
came to the fore in the 1960s and 1970s) or to a wider question of whether so-called ‘universals’ of
translation exist.
The van Doorslaer ‘Map’
In order to deal with such a breadth of work, a new conceptual tool was developed as explained by van
Doorslaer (2007).
In the new maps, a distinction is drawn between ‘translation’ and ‘translation studies’, reflecting the
different centers of interest of research. ‘Translation’ looks at the act of translating and, in the new map
(van Doorslaer 2007), is subdivided into:
◼ lingual mode (interlingual, intralingual);
◼ media (printed, audiovisual, electronic);
◼ mode (covert/overt translation, direct/indirect translation, mother tongue/other tongue translation,
pseudo-translation, retranslation, self-translation, sight translation, etc.);
◼ field (political, journalistic, technical, literary, religious, scientific, commercial).
‘Translation studies’, the second branch of the map, is subdivided into:
◼ approaches (e.g. cultural approach, linguistic approach);
◼ theories (e.g. general translation theory, polysystem theory);
◼ research methods (e.g. descriptive, empirical);
◼ applied translation studies (criticism, didactics, institutional environment).
Alongside these is a ‘basic transfer map’ of terminology to describe the linguistic maneuvers that,
despite the cultural turn, remain central to the concrete translating process.
Linguistic transfer of course still occurs within a sociocultural and historical context and institutional
environment that place their own constraints on the process.
===========================================================================
Summary
Translation studies is an academic research area that has expanded massively over the years.
Translation was formerly studied as a language-learning methodology or as part of comparative literature,
translation ‘workshops’ and contrastive linguistics courses.
The discipline as we now know it owes much to the work of James S. Holmes, who proposed both a
name and a structure for the field, although the discipline has since moved on. The interrelated branches of
theoretical, descriptive and applied translation studies initially structured research. Over time the
interdisciplinarity and specialization of the subject have become more evident and theories and models
have continued to be imported from other disciplines but also forged from within translation studies itself.
CHAPTER 2 :
Translation Theory before the 20th Century
At the end of this chapter, you should be able to:
1. Describe some of the major issues about translation up to the twentieth century;
2. Explain the concept of ‘literal’ and ‘free’ translation;
3. Compare the claims of Dryden, Dolet, Tytler and Yan Fu on the process of translation.
Introduction
The aim of this chapter is to focus on the central recurring theme of ‘word-for-word’ and ‘sense-for-sense’
translation, a debate that dominated much of translation theory in what has been called the ‘pre-linguistics
period of translation’.
“Word-for-word” or “Sense-for-sense”?
Up until the second half of the twentieth century, western translation theory seemed locked in what
George Steiner (1998: 319) calls a ‘sterile’ debate over the ‘triadic model’ of ‘literalism’, ‘paraphrase’ and
‘free imitation’. The distinction between ‘word-for-word’ (i.e. ‘literal’) and ‘sense-for-sense’ (i.e. ‘free’)
translation goes back to Cicero (106–43 BCE) and St Jerome (347–420 CE).
The Roman rhetorician and politician Marcus Tullius Cicero outlined his approach to translation in De
optimo genere oratorum (46 BCE/1960 CE), introducing his own translation from the Greek of speeches
of the fourth-century BCE Attic orators Aeschines and Demosthenes:
And I did not translate them as an interpreter, but as an orator, keeping the same ideas and forms, or
as one might say, the ‘figures’ of thought, but in language which conforms to our usage. And in
so doing, I did not hold it necessary to render word for word, but I preserved the general style and
force of the language. (Cicero 46 BCE/1960 CE: 364)
The ‘interpreter’ of the first line is often read by translation studies as being the literal (‘word-for-word’)
translator, while the ‘orator’ tried to produce a speech that moved the listeners. However, McElduff (2009:
136) points out that in ancient Rome it was the low social status of the ‘interpreter’ – a mediator of various
kinds – that was disparaged because of a lack of education. This then led to ‘limited and pedantic
understanding’ and to an inelegant, ‘word for word’, Latin style.
The disparagement of word-for-word translation came from others as well, such as the poet Horace,
who, in a short but famous passage from his Ars Poetica (c.20 BCE), underlines the goal of producing
an aesthetically pleasing and creative poetic text in the TL. This attitude had great influence on the
succeeding centuries.
Thus, St Jerome, the most famous of all western translators, cites the authority of Cicero’s approach to
justify his own Latin revision and translation of the Christian Bible, later to become known as the Latin
Vulgate. (Vulgate is the principal Latin version of the Bible, prepared mainly by St. Jerome in the late 4th
century, and (as revised in 1592) adopted as the official text for the Roman Catholic Church.)
By comparing the Greek Septuagint translation with the Hebrew original, Jerome noted points where the
two versions differed. His overall translation strategy is formulated in De optimo genere interpretandi, a
letter addressed to his friend, the senator Pammachius, in 395 CE.
In it, Jerome responds specifically to public criticisms of his originally private translation of a letter from
Epiphanius, Bishop of Salamis, to John, the Bishop of Jerusalem. In perhaps the most famous statement ever made
on the translation process, St Jerome defends himself against accusations of ‘incorrect’ translation and
describes his strategy in the following terms:
Now I not only admit but freely announce that in translating from the Greek – except of course
in the case of the Holy Scripture, where even the syntax contains a mystery – I render not word-for-
word, but sense-for-sense. (St Jerome 395 CE/1997: 25)
Although some scholars (e.g. Vermeer 1994: 7) argue that these terms have been misinterpreted,
Jerome’s statement is now usually taken to refer to what came to be known as ‘literal’ (word-for-word)
and ‘free’ (sense-for-sense) translation.
Jerome rejected the word-for-word approach because, by following so closely the form of the ST, it
produced an absurd translation, cloaking the sense of the original.
The sense-for-sense approach, on the other hand, allowed the sense or content of the ST to be
translated. In these poles can be seen the origin of both the ‘literal vs. free’ and ‘form vs. content’ debate
that has continued until modern times.
Within western society, issues of free and literal translation were for over a thousand years after St.
Jerome bound up with the translation of the Bible and other religious and philosophical texts. Before the
arrival of the printing press (in China in the eleventh century CE and in Europe in the fifteenth century
CE), texts were laboriously copied by hand, which led to numerous errors or variant readings.
Language and translation became the sites of a huge power struggle. Latin, controlled by the Church in
Rome, had a stranglehold over knowledge and religion until challenged by the European Humanist
movement of the fourteenth and fifteenth centuries.
The Humanists sought liberation from the power of the Church by recovering the refinement of Classical
Latin and Greek and their secular writers, free from the changes wrought by the Middle Ages (Casanova,
2004).
Then, in the early sixteenth century, the Protestant Reformation of northern Europe, which was to lead
to a huge schism within Christianity, began to challenge Latin through the translation of the Bible into
vernacular languages.
In such circumstances, the translation of any book which diverged from the Church’s interpretation ran
the risk of being deemed heretical and of being censured or banned. Even the mere act of translation
could be considered a threat to the established order – for instance, the 1551 Index of the Spanish
Inquisition prohibited the publication of the Bible in any vernacular language (Barnstone, 1993).
An even worse fate lay in store for some of the translators who sought to make such texts available to a
wider public. The most famous examples are the English theologian-translator William Tyndale (c.1490–
1536) and the French humanist Étienne Dolet (1509–1546). Tyndale was a formidable linguist who was
said to have mastered ten languages, including Hebrew.
His extraordinary English Bible, produced in exile, was later used as the basis for the Geneva Bible (1560)
and King James version (1611). It was banned and copies confiscated on the orders of King Henry VIII.
Tyndale was abducted, tried for heresy and executed in the Netherlands in 1536 (Bobrick 2003).
Dolet was condemned by the theological faculty of the Sorbonne in 1546, apparently for adding, in his
translation of one of Plato’s dialogues, the phrase rien du tout (‘nothing at all’) in a passage about what
existed after death. The addition led to the charge of blasphemy, the assertion being that Dolet did not
believe in immortality. For such a translation ‘error’, he was burned at the stake.
The brief description of the translation process given by the English poet and translator John Dryden
(1631–1700) had an enormous impact on subsequent translation theory and practice. In the preface to
his translation of Ovid’s Epistles in 1680, Dryden (1992) reduces all translation to three categories:
(1) ‘metaphrase’: ‘word by word and line by line’ translation, which corresponds to literal
translation;
(2) ‘paraphrase’: ‘translation with latitude, where the author is kept in view by the translator, so as
never to be lost, but his words are not so strictly followed as his sense’; this involves changing
whole phrases and more or less corresponds to faithful or sense-for-sense translation;
(3) ‘imitation’: ‘forsaking’ both words and sense; this corresponds to Cowley’s very free translation
and is more or less what today might be understood as adaptation.
Dryden criticizes translators such as Ben Jonson (1572–1637), who adopts metaphrase, as being a
‘verbal copier’ (ibid.). Such ‘servile, literal’ translation is dismissed with a now famous simile: ‘ ’Tis much
like dancing on ropes with fettered legs – a foolish task.’
Similarly, Dryden rejects imitation, where the translator uses the ST ‘as a pattern to write as he supposes
that author would have done, had he lived in our age and in our country’ (ibid.). Imitation, in Dryden’s
view, allows the translator to become more visible, but does ‘the greatest wrong . . . to the memory and
reputation of the dead’.
Dryden thus prefers paraphrase, advising that metaphrase and imitation be avoided. This three-part, or
‘triadic’ model proposed by Dryden was to exert considerable influence on later writings on translation.
Other early writers on translation also began to state their principles in a similarly prescriptive
fashion. One of the first had been Étienne Dolet, whose objective was to disseminate Classical teachings
through a Humanist lens and to contribute to the development of the French language.
In his 1540 manuscript La manière de bien traduire d’une langue en aultre (‘The way of translating well
from one language into another’; Dolet 1540/1997), Dolet set out five principles in order of importance,
as follows:
(1) The translator must perfectly understand the sense and material of the original author, although
he should feel free to clarify obscurities.
(2) The translator should have a perfect knowledge of both SL and TL, so as not to lessen the
majesty of the language.
(3) The translator should avoid word-for-word renderings.
(4) The translator should avoid Latinate and unusual forms, using speech that is in common use.
(5) The translator should assemble and liaise words eloquently to avoid clumsiness.
Here again, the concern is to reproduce the sense and to avoid word-for-word translation.
In English, the first comprehensive and systematic study of translation is Alexander Fraser
Tytler’s ‘Essay on the principles of translation’, published in 1790. Rather than Dryden’s author-
oriented description (‘write as the original author would have written had he known the target language’),
Tytler (1747–1813) defines a ‘good translation’ as being oriented towards the target language reader.
Where Dolet has five ‘principles’, Tytler (ibid.) has three general ‘laws’ or ‘rules’:
(1) The translation should give a complete transcript of the ideas of the original work.
(2) The style and manner of writing should be of the same character with that of the original.
(3) The translation should have all the ease of the original composition.
Tytler’s first law ties in with Dolet’s first two principles in that it refers to the translator having a ‘perfect
knowledge’ of the original, being competent in the subject and giving ‘a faithful transfusion of the sense
and meaning’ of the author.
Tytler’s second law, like Dolet’s fifth principle, deals with the style of the author and involves the
translator’s both identifying ‘the true character’ of this style and having the ability and ‘correct taste’ to
recreate it in the TL.
The third law talks of having ‘all the ease of composition’ of the ST. Tytler regards this as the most
difficult task and likens it, in a traditional metaphor, to an artist producing a copy of a painting.
Thus, ‘scrupulous imitation’ should be avoided, since it loses the ‘ease and spirit of the original’. Tytler’s
solution is for the translator to ‘adopt the very soul of his author’.
Tytler’s laws are said by some (see Chan 2004: 68) to have influenced the work of the renowned Chinese
thinker and translator Yán Fù (1854–1921).
Yán Fù states his three translation principles as xìn (fidelity/faithfulness/trueness), dá
(fluency/expressiveness/intelligibility/comprehensibility) and yǎ (elegance/gracefulness).
These concepts became central to twentieth century Chinese translation practice and theory.
==============================================================================
Summary
The general trend of western translation theory from Cicero in Classical antiquity to the twentieth century
centered on the recurring debate as to whether translations should be literal (word-for-word) or free
(sense-for-sense), a dyad that is famously discussed by St Jerome in his translation of the Bible into Latin.
Controversy over the translation of the Bible was central to translation theory in the west for well over a
thousand years. Early western theorists tended to be translators who presented a justification for their
approach in a preface to the translation. They are often portrayed as paying little attention (or not having
access) to what others before them had written.
However, their writings reflected either a faithfulness to the religious text, often manifested in Early Modern literalism,
or a Classical view of language based on principles of clarity, logic and elegance that came to the fore
with the advent of European Humanism.
In the late seventeenth century, Dryden’s proposed triad of metaphrase, paraphrase and imitation is said
to mark the beginning of a more systematic and precise definition of translation.
CHAPTER 3 :
Equivalence and Equivalent Effect
Introduction
In this chapter we shall look at several major works of the time: Eugene Nida’s seminal concepts of
formal and dynamic equivalence and the principle of equivalent effect, Peter Newmark’s semantic and
communicative translation, and Werner Koller’s Korrespondenz and Äquivalenz.
In the first chapter we discussed how structuralist Roman Jakobson describes three kinds of translation:
intralingual, interlingual and intersemiotic, with interlingual referring to translation between two different
written sign systems.
Jakobson goes on to examine key issues of this type of translation, notably linguistic meaning and
equivalence. Jakobson follows the theory of language proposed by the famous Swiss linguist Saussure
(1857–1913). Saussure distinguished between the linguistic system (langue) and specific individual
utterances (parole).
Central to his theory of langue is the distinction between the ‘signifier’ (the spoken and written signal)
and the ‘signified’ (the concept), which together create the linguistic ‘sign’. Thus, in English the word
cheese is the acoustic signifier which ‘denotes’ the concept ‘food made of pressed curds’ (the signified).
Crucially, the sign is arbitrary (Saussure 1916/1983: 67–9). Instead of cheese, the signifier could easily
have been bread, soup, samgyeopsal or any other word. Jakobson also stresses that it is possible to
understand what is signified by a word even if we have never seen or experienced the concept or thing
in real life.
Examples he gives are ambrosia and nectar, words which modern readers will have read in Greek myths
even if they have never come across the substances in real life; this contrasts with cheese, which they
almost certainly have encountered first-hand in some form.
Jakobson then moves on to consider the thorny problem of equivalence in meaning between words in
different languages, part of Saussure’s parole. He points out (1959/2012) that ‘there is ordinarily no full
equivalence between code-units’.
Thus, the Russian syr is not identical to the English cheese (or, for that matter, the Spanish queso, the
German Käse, the Korean chijeu, etc.) since the Russian ‘code-unit’ does not include the concept of soft
white curd cheese known in English as cottage cheese. In Russian, that would be tvarog and not syr.
This general principle of interlinguistic difference between terms and semantic fields points to a basic
issue of language and translation: translation normally substitutes entire messages in one language for
entire messages in another, rather than replacing individual code-units one by one.
Thus, a translation of cottage cheese would not be the TT unit for cottage plus the unit for cheese; the
message cottage cheese would be considered and translated as a whole. For the message to be
‘equivalent’ in ST and TT, the code-units will necessarily be different since they belong to two different
sign systems (languages) which partition reality differently (the cheese/syr example above).
In Jakobson’s discussion, the problem of meaning and equivalence focuses on differences in the
structure and terminology of languages rather than on any inability of one language to render a message
that has been written or uttered in another verbal language.
For Jakobson, cross-linguistic differences, which underlie the concept of equivalence, center around
obligatory grammatical and lexical forms: ‘Languages differ essentially in what they must convey and not
in what they may convey’. Examples of differences are easy to find. They occur at:
◼ the level of gender: e.g. house is feminine in Romance languages, neuter in German and English;
honey is masculine in French, German and Italian, feminine in Spanish, neuter in English, etc.;
◼ the level of aspect: in Russian, the verb morphology varies according to whether the action has been
completed or not;
◼ the level of semantic fields, such as kinship terms: e.g. the German Geschwister is normally
explicated in English as brothers and sisters, since siblings is rather formal. Similarly, in Chinese it
would be 兄弟姐妹 (‘xiōng dì jiě mèi’, literally meaning ‘elder brother, younger brother, elder sister,
younger sister’).
Even what for many languages is a basic relational concept such as to be (English), être (French) and
sein (German) is broken down in Spanish to ser and estar, while Arabic, Russian and many others do
not use such a verb explicitly in the present tense.
These examples illustrate differences between languages, but they are still concepts that can be
rendered interlingually. As Jakobson puts it, ‘all are conveyable in any existing language’. For him, only
poetry, with its unity of form and sense and where ‘phonemic similarity is sensed as semantic
relationship’, is considered ‘untranslatable’ and requires ‘creative transposition’.
Nida and ‘The Science of Translating’
Eugene Nida’s theory of translation developed from his own practical work from the 1940s onwards when
he was translating and organizing the translation of the Bible.
His more systematic approach borrows theoretical concepts and terminology both from semantics and
pragmatics and from Noam Chomsky’s work on syntactic structure which formed the theory of a universal
generative–transformational grammar (Chomsky 1957, 1965).
In very simplified form, this model holds that a sentence is:
(1) generated, by phrase-structure rules, as an underlying deep structure, which is
(2) transformed by transformational rules relating one underlying structure to another (e.g. active to
passive), to produce
(3) a final surface structure, which itself is subject to phonological and morphemic rules.
The structural relations described in this model are held by Chomsky to be a universal feature of human
language. The most basic of such structures are kernel sentences, which are simple, active, declarative
sentences that require the minimum of transformation (e.g. the wolf attacked the deer).
‘Kernel’ is a key term in this model. Just as kernel sentences were the most basic structures of Chomsky’s
initial model, so, for Nida and Taber, kernels ‘are the basic structural elements out of which language
builds its elaborate surface structures’.
When it comes to analyzing individual words, Nida (1964) describes various ‘scientific approaches to
meaning’ related to work that had been carried out by theorists in semantics and pragmatics.
Central to Nida’s work is the move away from the old idea that a word has a fixed meaning and towards
a functional definition of meaning in which a word ‘acquires’ meaning through its context and can produce
varying responses according to culture.
Meaning is broken down into the following:
(1) Linguistic meaning: the relationship between different linguistic structures, borrowing elements of
Chomsky’s model. Nida provides examples to show how the meaning crucially differs even where
similar classes of words are used. For instance, the following three expressions with the possessive
pronoun his all have different meanings: his house means ‘he possesses a house’, his journey equals
‘he performs a journey’ and his kindness is ‘kindness is a quality of him’.
(2) Referential meaning: the denotative ‘dictionary’ meaning. Thus, son denotes a male child.
(3) Emotive or connotative meaning: the associations a word produces. So, in the phrase ‘Don’t worry
about that, son’, the word son is a term of endearment or may in some contexts be patronizing.
The old terms such as ‘literal’, ‘free’ and ‘faithful’ translation, which were examined in Chapter 2, are
discarded by Nida in favor of ‘two basic orientations’ or ‘types of equivalence’ (Nida, 1964): (1) formal
equivalence; and (2) dynamic equivalence. These are defined by Nida as follows:
(1) Formal equivalence: Formal equivalence focuses attention on the message itself, in both form and
content… One is concerned that the message in the receiver language should match as closely as
possible the different elements in the source language. (Nida, 1964)
Formal equivalence, later called ‘formal correspondence’ (Nida and Taber, 1969) is thus keenly
oriented towards the ST structure, which exerts strong influence in determining accuracy and
correctness.
This type of translation is often used in an academic or legal environment and allows the reader closer
access to the language and customs of the source culture.
(2) Dynamic equivalence: Dynamic, later ‘functional’ equivalence is based on what Nida calls ‘the
principle of equivalent effect’, where ‘the relationship between receptor/receiver and message should
be substantially the same as that which existed between the original receptors and the message’. (Nida
1964).
The message has to be tailored to the receptor’s linguistic needs and cultural expectation and ‘aims at
complete naturalness of expression’. ‘Naturalness’ is a key requirement for Nida.
Indeed, he defines the goal of dynamic equivalence as seeking ‘the closest natural equivalent to the
source-language message’ (Nida and Taber, 1969). This receptor-oriented approach considers
adjustments of grammar, of lexicon and of cultural references to be essential in order to achieve
naturalness. The TT language should not show interference from the SL, and the ‘foreignness’ of the ST
setting is minimized (Nida, 1964).
For Nida, the success of the translation depends above all on achieving equivalent effect or response.
It is one of the ‘four basic requirements of a translation’, which are (ibid.: 164):
(1) making sense;
(2) conveying the spirit and manner of the original;
(3) having a natural and easy form of expression; and
(4) producing a similar response.
Newmark: Semantic and Communicative Translation
Peter Newmark (1916–2011) wrote Approaches to Translation (1981) and A Textbook of Translation (1988),
which have been widely used on translator training courses and combine a wealth of practical examples of
linguistic theories of meaning with practical applications for translation. Yet Newmark departs from Nida’s
receptor-oriented line.
He feels that the success of equivalent effect is ‘illusory’ and that ‘the conflict of loyalties, the gap between
emphasis on source and target language, will always remain as the overriding problem in translation
theory and practice’ (Newmark 1981: 38).
Newmark suggests narrowing the gap by replacing the old terms with those of ‘semantic’ and
‘communicative’ translation.
Communicative translation resembles Nida’s dynamic equivalence in the effect it is trying to create on the
TT reader, while semantic translation has similarities to Nida’s formal equivalence.
However, Newmark distances himself from the full principle of equivalent effect, since that effect ‘is
inoperant if the text is out of TL space and time’.
An example would be a modern British English translation of Homer. No modern translator, irrespective
of the TL, can possibly hope or expect to produce the same effect on the reader of the written TT as the
oral ST had on its listeners in ancient Greece.
Importantly, as long as equivalent effect is achieved, Newmark holds literal translation to be the best
approach:
In communicative as in semantic translation, provided that equivalent effect is secured, the literal
word-for-word translation is not only the best, it is the only valid method of translation.
Koller: Equivalence and Correspondence
Nida’s move towards a science of translation proved to be especially influential in Germany, where the
common term for translation studies is Übersetzungswissenschaft (‘translation science’).
Important work to refine the concept of equivalence was carried out by Werner Koller in Heidelberg (West
Germany) and Bergen (Norway). Koller’s Einführung in die Übersetzungswissenschaft (‘Introduction to
the science of translation’, 1979a; see also Koller 1979b/1989 and 1995) examines the concept of
equivalence more closely, along with its linked term ‘correspondence’ (Koller 1979a).
(1) Correspondence falls within the field of contrastive linguistics, which compares two language
systems and describes differences and similarities contrastively. Its parameters are those of
Saussure’s langue (Saussure, 1983). This would include the identification of false friends (e.g.
German aktuell means current and not English actual) and of signs of lexical, morphological and
syntactic interference.
NOTE:
False friends are word pairs in two different languages that look or sound similar but in reality
have entirely different meanings and origins.
For example, the German word “Enkel”, although pronounced nearly the same as the English
word “ankle”, means “grandchild”.
So if you roll your ankle in Germany and ask for help with your “Enkel”, you may confuse a
few people.
This is the concept of a false friend—a word that deceives you into thinking it’s the proper
translation of a word from your native language.
(2) Equivalence, on the other hand, relates to equivalent items in specific ST– TT pairs and contexts.
The parameter is that of Saussure’s parole.
In an attempt to answer the question of what, exactly, has to be equivalent, Koller (1995; see also Hatim
and Munday 2004) differentiates five types of equivalence relations, constrained, in what is known as
double linkage, by the ST on the one hand and by the communicative conditions of the receiver on the
other. These equivalence types are listed below:
(1) Denotative equivalence, related to the extralinguistic content of a text.
(2) Connotative equivalence, related to lexical choices, especially between near-synonyms.
(3) Text-normative equivalence, related to text types, with different kinds of texts behaving in different ways.
(4) Pragmatic equivalence, oriented towards the receiver of the text or message.
(5) Formal equivalence, related to the form and aesthetics of the text, including wordplay and the
individual stylistic features of the ST.
Koller describes the different types of equivalence in terms of their research focus.
(END OF MIDTERM LESSON COVERAGE)