Textual criticism

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Jossi (talk | contribs) at 00:12, 4 April 2007 (References: ++ Footnotes). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Carmina Cantabrigiensia, Manuscript C, folio 436v, 11th century

Textual criticism or lower criticism is a branch of philology or bibliography that is concerned with the identification and removal of errors from texts and manuscripts. Normally, the product of textual criticism is an edited text that the editor believes comes as close as possible to a lost original (called the archetype), or some other version of a text as it existed – or was intended to exist – in the past. This term is used in contrast with higher criticism, which is the endeavor to establish the authorship, date and place of composition of the text.

Marvin Richardson Vincent describes textual criticism as "that process by which it is sought to determine the original text of a document or a collection of documents, and to exhibit it, freed from all the errors, corruptions, and variations which it may have accumulated in the course of its transcription by successive copyings."[1]

Overview

Folio from Papyrus 46, containing 2 Corinthians 11:33-12:9

Textual criticism has been practiced for over two thousand years. Early textual critics were concerned with preserving the works of antiquity, and later with medieval and early modern manuscript writings. Many ancient works, such as the Bible and the Greek tragedies, survive in hundreds of copies, and the relationship of each copy to the original may be unclear, though the question of whether the Bible ever had a single original has itself been debated. Textual scholars have debated for centuries which sources are most closely derived from the original, and which readings in those sources are correct.

In the English language, the works of Shakespeare have been a particularly fertile ground for textual criticism – both because the texts as transmitted contain a considerable amount of error, and because the effort and expense to produce superior editions of his works has always been widely viewed as worthwhile. The principles of textual criticism, although originally developed and refined for works of antiquity, the Bible, and Shakespeare, have been applied to the works of many authors, representing periods from ancient times to the twentieth century.

When comparing different documents (or, "witnesses") of a single text, it is not always apparent which variant is original and which is an error. The textual critic's task, therefore, is to sort through the variants and establish a "critical text" that is intended to best represent the original by explaining the state of all extant witnesses. In establishing the critical text, the textual critic considers both "external" evidence (the age, provenance, and affiliation of each witness) and "internal" considerations (what the author and scribes, or printers, were likely to have done).

Before the invention of printing, literary works had to be copied by hand, and each time a manuscript was copied, errors might be introduced by the human scribe. The age of printing reduced the need for handwritten copies, but printed editions are subject to many of the same kinds of errors: instead of a scribe miscopying his source, a compositor or a printing shop may read or typeset the work incorrectly (Gaskell, 1978). Since each scribe or printer commits different errors, reconstruction of the lost original may require a selection of readings taken from many sources. An edited text that draws from multiple sources is said to be eclectic. In contrast to this approach, some textual critics prefer to identify the single best surviving text, and not to combine readings from multiple sources.

Often, the product of a textual critic's work is a "critical edition," containing the critically established text and an "apparatus" that presents the evidence that the editor considered, the editor's analysis of that evidence, and a record of rejected variants.

There are three fundamental approaches to textual criticism: eclecticism, stemmatics, and copy-text editing. Techniques from the biological discipline of cladistics are currently also being used to determine the relationships between manuscripts.

Eclecticism

Eclecticism is the practice of examining a wide number of witnesses and selecting the variant that seems best. The result of the process is a text with readings drawn from many witnesses. In a purely eclectic approach, no single witness is theoretically favored. Instead, the critic forms opinions about individual witnesses, relying on both external and internal evidence.

Since the mid-19th century, eclecticism, in which there is no a priori bias to a single manuscript, has been the dominant method of editing the Greek text of the New Testament (currently, the United Bible Societies, 4th ed. and Nestle-Aland, 27th ed.). Even so, the oldest manuscripts, being of the Alexandrian text-type, are the most favored, and the critical text has an Alexandrian disposition.

External evidence

External evidence is evidence of each physical witness, its date, source, and relationship to other known witnesses. Critics will often prefer the readings supported by the oldest witnesses, a majority of witnesses, the most geographically diverse witnesses, or the best witnesses. Readings that depart from the known practice of a scribe or a given period may be deemed more reliable, since a scribe is unlikely on his own initiative to have departed from the usual practice.

Internal evidence

Internal evidence is evidence that comes from the text itself, independent of the physical characteristics of the document. Various considerations can be used to decide which reading is the most likely to be original. Sometimes these considerations can be in conflict.

One of the techniques is Lectio difficilior potior ("the harder reading is stronger"), based on taking the more difficult reading as being more likely to be the original. It is based on the idea that copyists are more likely to simplify and smooth a text they do not fully understand. Some developing research, however, suggests that this principle cannot be applied universally, as is the case with the Book of Revelation.

Another scribal tendency is called homoioteleuton, meaning "same endings". Homoioteleuton occurs when two words/phrases/lines end with the same sequence of letters. The scribe, having finished copying the first, skips to the second, omitting all intervening words. Homeoarchy refers to eye-skip when the beginnings of two lines are similar.
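The eye-skip described above lends itself to a simple illustration. The sketch below (illustrative Python; the sample lines and the four-letter window are invented for the example) flags pairs of lines with identical endings – exactly the places where a copyist could jump from the first to the second and drop the intervening text:

```python
def homoioteleuton_candidates(lines, ending_len=4):
    """Flag pairs of lines that end in the same letter sequence --
    places where a copyist's eye could skip from the first line to
    the second, omitting everything in between."""
    candidates = []
    for i, first in enumerate(lines):
        for j in range(i + 1, len(lines)):
            second = lines[j]
            if (len(first) >= ending_len and
                    first[-ending_len:] == second[-ending_len:]):
                candidates.append((i, j, first[-ending_len:]))
    return candidates

text = [
    "and the king went up to the house of the lord",
    "and all the men of judah stood by",
    "and the people went up to the house of the lord",
]
print(homoioteleuton_candidates(text))  # lines 0 and 2 share "lord"
```

A real collation tool would compare whole words or phrases rather than fixed-length line endings, but the principle is the same.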

The critic may also examine the other writings of the author to decide what words and grammatical constructions match his style. The evaluation of internal evidence also provides the critic with information that helps him evaluate the reliability of individual manuscripts. Thus, the consideration of internal and external evidence is related.

Canons of textual criticism

Luke 11:2 in Codex Sinaiticus

Various scholars have developed guidelines, or canons of textual criticism, to guide the exercise of the critic's judgment in determining the best readings of a text. One of the earliest was Johann Albrecht Bengel (1687–1752), who in 1734 produced an edition of the Greek New Testament. In his commentary, he established the rule Proclivi scriptioni praestat ardua, "The difficult reading is to be preferred to that which is easy."[1]

Johann Jakob Griesbach (1745–1812) published several editions of the New Testament. In his 1796 edition, he established fifteen critical rules. Among them was a variant of Bengel's rule, Lectio difficilior potior, "the harder reading is better." Another was Lectio brevior praeferenda, "the shorter reading is better," based on the idea that scribes were more likely to add than to delete. This rule cannot be applied uncritically, as scribes may omit material inadvertently.[2]

Brooke Foss Westcott (1825–1901) and Fenton J. A. Hort (1828–1892) published an edition of the New Testament in 1881. They proposed nine critical rules, including a version of Bengel's rule, "The reading is less likely to be original that shows a disposition to smooth away difficulties." They also argued that "Readings are approved or rejected by reason of the quality, and not the number, of their supporting witnesses," and that "The reading is to be preferred that most fitly explains the existence of the others."[3]

Many of these rules, although originally developed for Biblical textual criticism, have wide applicability to any text susceptible to errors of transmission.

Limitations of eclecticism

Since the canons of criticism are highly susceptible to interpretation, and at times even contradict each other, they can often be employed to justify any result that fits the textual critic's aesthetic or theological agenda. Starting in the nineteenth century, scholars sought more rigorous methods to guide editorial judgment. Best-text editing (a complete rejection of eclecticism) became one extreme. Stemmatics and copy-text editing – while both eclectic, in that they permit the editor to select readings from multiple sources – sought to reduce subjectivity by establishing one or a few witnesses presumed to be favored by "objective" criteria.

Stemmatics

Overview

Stemmatics or stemmology is a rigorous approach to textual criticism developed by Karl Lachmann (1793–1851) and others. It takes its name from the stemma, or "family tree," which shows the relationships of the surviving witnesses. The method works from the principle that "community of error implies community of origin." That is, if two witnesses have a number of errors in common, it may be presumed that they were derived from a common intermediate source, called a hyparchetype. Relations between the lost intermediates are determined by the same process, placing all extant manuscripts in a family tree or stemma codicum descended from a single archetype. The process of constructing the stemma is called recension, or the Latin recensio.

Having completed the stemma, the critic proceeds to the next step, called selection or selectio, where the text of the archetype is determined by examining variants from the closest hyparchetypes to the archetype and selecting the best ones. If one reading occurs more often than another at the same level of the tree, then the dominant reading is selected. If two competing readings occur equally often, then the editor uses his judgment to select the correct reading.
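As a rough illustration, the majority rule of selectio can be sketched in a few lines (illustrative Python; the hyparchetype names and readings are invented for the example):

```python
from collections import Counter

def selectio(witnesses):
    """For each variant location, pick the reading attested by the
    most witnesses at the same level of the stemma; flag ties, which
    are left to the editor's judgment."""
    text, ties = [], []
    n_locations = len(next(iter(witnesses.values())))
    for i in range(n_locations):
        counts = Counter(w[i] for w in witnesses.values())
        (best, best_n), *rest = counts.most_common()
        if rest and rest[0][1] == best_n:
            ties.append(i)  # equally attested: editor must decide
        text.append(best)
    return text, ties

# three hypothetical hyparchetypes, four variant locations
hyparchetypes = {
    "alpha": ["in", "the", "beginning", "was"],
    "beta":  ["in", "the", "begynnyng", "was"],
    "gamma": ["in", "ye",  "beginning", "was"],
}
text, ties = selectio(hyparchetypes)
print(text)   # ['in', 'the', 'beginning', 'was']
print(ties)   # []  -- every location has a majority reading
```

The sketch treats all witnesses as equal; in practice only the hyparchetypes closest to the archetype would be compared at each step.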

After selectio, the text may still contain errors, since there may be passages where no source preserves the correct reading. The step of examination, or examinatio is applied to find corruptions. Where the editor concludes that the text is corrupt, it is corrected by a process called "emendation," or emendatio (also sometimes called divinatio). Emendations not supported by any known source are sometimes called conjectural emendations.

The process of selectio resembles eclectic textual criticism, but applied to a restricted set of hypothetical hyparchetypes. The steps of examinatio and emendatio resemble copy-text editing. In fact, the other techniques can be seen as special cases of stemmatics, but in which a rigorous family history of the text cannot be determined but only approximated. If it seems that one manuscript is by far the best text, then copy text editing is appropriate, and if it seems that a group of manuscripts are good, then eclecticism on that group would be proper.

The Hodges–Farstad edition of the Greek New Testament attempts to use stemmatics for some portions.

Limitations and criticism

The stemmatic method assumes that each witness is derived from one, and only one, predecessor. If a scribe refers to more than one source when creating his copy, then the new copy will not clearly fall into a single branch of the family tree. In the stemmatic method, a manuscript that is derived from more than one source is said to be contaminated.

The method also assumes that scribes only make new errors – they do not attempt to correct the errors of their predecessors. When a text has been improved by the scribe, it is said to be sophisticated, but "sophistication" impairs the method by obscuring a document's relationship to other witnesses, and making it more difficult to place the manuscript correctly in the stemma.

The stemmatic method requires the textual critic to group manuscripts by commonality of error. It is required, therefore, that the critic can distinguish erroneous readings from correct ones. This assumption has often come under attack. W. W. Greg noted, "That if a scribe makes a mistake he will inevitably produce nonsense is the tacit and wholly unwarranted assumption" (Greg 1950, p. 20).

The critic Joseph Bédier (1864–1938) launched a particularly withering attack on stemmatics in 1928. He surveyed editions of medieval French texts that were produced with the stemmatic method, and found that textual critics tended overwhelmingly to produce trees divided into just two branches. He concluded that this outcome was unlikely to have occurred by chance, and that therefore, the method was tending to produce bipartite stemmas regardless of the actual history of the witnesses. He suspected that editors tended to favor trees with two branches, as this would maximize the opportunities for editorial judgment (as there would be no third branch to "break the tie" whenever the witnesses disagreed). He also noted that, for many works, more than one reasonable stemma could be postulated, suggesting that the method was not as rigorous or as scientific as its proponents had claimed.

The stemmatic method's final step is emendatio, also sometimes referred to as "conjectural emendation." But in fact, the critic employs conjecture at every step of the process. Some of the method's rules that are designed to reduce the exercise of editorial judgment do not necessarily produce the correct result. For example, where there are more than two witnesses at the same level of the tree, normally the critic will select the dominant reading. However, it may be no more than fortuitous that more witnesses have survived that present a particular reading. A plausible reading that occurs less often may, nevertheless, be the correct one.

Lastly, the stemmatic method assumes that every extant witness is derived, however remotely, from a single source. It does not account for the possibility that the original author may have revised his work, and that the text could have existed at different times in more than one authoritative version.

Copy-text editing

Page from Codex Vaticanus Graece 1209, B/03

With copy-text editing, the textual critic selects a base text from a manuscript thought to be reliable. Often, the base text is selected from the oldest manuscript of the text, but in the early days of printing, the copy text was often a manuscript that was at hand.

Using the copy-text method, the critic examines the base text and makes corrections (called emendations) in places where the base text appears wrong to the critic. This can be done by looking for places in the base text that do not make sense or by looking at the text of other witnesses for a superior reading. Close-call decisions are usually resolved in favor of the copy-text.
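The procedure can be sketched very simply (illustrative Python; the readings and the editor's judgment that position 3 is corrupt are invented for the example):

```python
def copy_text_edition(base, other, corrupt_positions):
    """Follow the base (copy-)text everywhere except the positions
    the editor has judged corrupt, where another witness's reading
    is adopted; close calls are resolved in favor of the copy-text."""
    edited = list(base)
    for i in corrupt_positions:
        edited[i] = other[i]
    return edited

base  = ["to", "be", "or", "nott", "to", "be"]
other = ["to", "bee", "or", "not", "to", "be"]
# the editor judges only position 3 of the base text corrupt;
# the variant at position 1 is a close call, so the copy-text stands
print(copy_text_edition(base, other, [3]))
# ['to', 'be', 'or', 'not', 'to', 'be']
```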

The first published printed edition of the Greek New Testament was produced by this method. Erasmus, the editor, selected a manuscript from the local Dominican monastery in Basle and corrected its obvious errors by consulting other local manuscripts. The Westcott and Hort text, which was the basis for the Revised Version of the English Bible, also used the copy-text method, using the Codex Vaticanus as the base manuscript.

McKerrow's concept of copy-text

The bibliographer Ronald B. McKerrow introduced the term copy-text in his 1904 edition of the works of Thomas Nashe, defining it as "the text used in each particular case as the basis of mine." McKerrow was aware of the limitations of the stemmatic method, and believed it was more prudent to choose one particular text that was thought to be particularly reliable, and then to emend it only where the text was obviously corrupt. The French critic Joseph Bédier likewise became disenchanted with the stemmatic method, and concluded that the editor should choose the best available text, and emend it as little as possible.

In McKerrow's method as originally introduced, the copy-text was not necessarily the earliest text. In some cases, McKerrow would choose a later witness, noting that "if an editor has reason to suppose that a certain text embodies later corrections than any other, and at the same time has no ground for disbelieving that these corrections, or some of them at least, are the work of the author, he has no choice but to make that text the basis of his reprint" (quoted in Greg 1950, pp. 23–24).

By 1939, in his Prolegomena for the Oxford Shakespeare, McKerrow had changed his mind about this approach, as he feared that a later edition – even if it contained authorial corrections – would "deviate more widely than the earliest print from the author's original manuscript." He therefore concluded that the correct procedure would be "produced by using the earliest 'good' print as copy-text and inserting into it, from the first edition which contains them, such corrections as appear to us to be derived from the author." But, fearing the arbitrary exercise of editorial judgment, McKerrow stated that, having concluded that a later edition had substantive revisions attributable to the author, "we must accept all the alterations of that edition, saving any which seem obvious blunders or misprints" (McKerrow 1939, pp. 17–18, quoted in Greg 1950, p. 25).

W. W. Greg's rationale of copy-text

Anglo-American textual criticism in the last half of the twentieth century came to be dominated by a landmark essay by Sir Walter W. Greg, "The Rationale of Copy-Text" (Greg 1950). Greg proposed:

[A] distinction between the significant, or as I shall call them 'substantive', readings of the text, those namely that affect the author's meaning or the essence of his expression, and others, such in general as spelling, punctuation, word-division, and the like, affecting mainly its formal presentation, which may be regarded as the accidents, or as I shall call them 'accidentals', of the text (Greg 1950, p. 21).

Greg observed that compositors at printing shops tended to follow the "substantive" readings of their copy faithfully, except when they deviated unintentionally; but that "as regards accidentals they will normally follow their own habits or inclination, though they may, for various reasons and to varying degrees, be influenced by their copy" (Greg 1950, p. 22).

He concluded:

The true theory is, I contend, that the copy-text should govern (generally) in the matter of accidentals, but that the choice between substantive readings belongs to the general theory of textual criticism and lies altogether beyond the narrow principle of the copy-text. Thus it may happen that in a critical edition the text rightly chosen as copy may not by any means be the one that supplies most substantive readings in cases of variation. The failure to make this distinction and to apply this principle has naturally led to too close and too general a reliance upon the text chosen as basis for an edition, and there has arisen what may be called the tyranny of the copy-text, a tyranny that has, in my opinion, vitiated much of the best editorial work of the past generation (Greg 1950, p. 26).

Greg's view, in short, was that the "copy-text can be allowed no over-riding or even preponderant authority so far as substantive readings are concerned." The choice between reasonable competing readings, he said:

[W]ill be determined partly by the opinion the editor may form respecting the nature of the copy from which each substantive edition was printed, which is a matter of external authority; partly by the intrinsic authority of the several texts as judged by the relative frequency of manifest errors therein; and partly by the editor's judgement of the intrinsic claims of individual readings to originality – in other words their intrinsic merit, so long as by 'merit' we mean the likelihood of their being what the author wrote rather than their appeal to the individual taste of the editor (Greg 1950, p. 29).

Although Greg argued that an editor should be free to use his judgment to choose between competing substantive readings, he suggested that an editor should defer to the copy-text when "the claims of two readings...appear to be exactly balanced.... In such a case, while there can be no logical reason for giving preference to the copy-text, in practice, if there is no reason for altering its reading, the obvious thing seems to be to let it stand" (Greg 1950, p. 31). The "exactly balanced" variants are said to be indifferent.

Editors who follow Greg's rationale produce eclectic editions, in that the authority for the "accidentals" is derived from one particular source (usually the earliest one) that the editor considers to be authoritative, but the authority for the "substantives" is determined in each individual case according to the editor's judgment. The resulting text, except for the accidentals, is constructed without relying predominantly on any one witness.

Greg–Bowers–Tanselle

W. W. Greg did not live long enough to apply his rationale of copy-text to any actual editions of works. His rationale was adopted and significantly expanded by Fredson Bowers (1905–1991). Starting in the 1970s, G. Thomas Tanselle (born 1934) vigorously took up the method's defense and added significant contributions of his own. Greg's rationale as practiced by Bowers and Tanselle has come to be known as the "Greg–Bowers" or the "Greg–Bowers–Tanselle" method.

Application to works of all periods

William Shakespeare, A Midsummer Night's Dream

In his 1964 essay, "Some Principles for Scholarly Editions of Nineteenth-Century American Authors," Bowers said that "the theory of copy-text proposed by Sir Walter Greg rules supreme" (Bowers 1964, p. 224). Bowers's assertion of "supremacy" was in contrast to Greg's more modest claim that "My desire is rather to provoke discussion than to lay down the law" (Greg 1950, p. 36).

Whereas Greg had limited his illustrative examples to English Renaissance drama, where his expertise lay, Bowers argued that the rationale was "the most workable editorial principle yet contrived to produce a critical text that is authoritative in the maximum of its details whether the author be Shakespeare, Dryden, Fielding, Nathaniel Hawthorne, or Stephen Crane. The principle is sound without regard for the literary period" (Bowers 1972, p. 86). For works where an author's manuscript survived – a case Greg had not considered – Bowers concluded that the manuscript should generally serve as copy-text. Citing the example of Nathaniel Hawthorne, he noted:

When an author's manuscript is preserved, this has paramount authority, of course. Yet the fallacy is still maintained that since the first edition was proofread by the author, it must represent his final intentions and hence should be chosen as copy-text. Practical experience shows the contrary. When one collates the manuscript of The House of the Seven Gables against the first printed edition, one finds an average of ten to fifteen differences per page between the manuscript and the print, many of them consistent alterations from the manuscript system of punctuation, capitalization, spelling, and word-division. It would be ridiculous to argue that Hawthorne made approximately three to four thousand small changes in proof, and then wrote the manuscript of The Blithedale Romance according to the same system as the manuscript of the Seven Gables, a system that he had rejected in proof (Bowers 1964, p. 226).

Following Greg, the editor would then replace any of the manuscript readings with substantives from printed editions that could be reliably attributed to the author: "Obviously, an editor cannot simply reprint the manuscript, and he must substitute for its readings any words that he believes Hawthorne changed in proof" (Bowers 1964, p. 226).

Uninfluenced final authorial intention

McKerrow had articulated textual criticism's goal in terms of "our ideal of an author's fair copy of his work in its final state" (McKerrow 1939, pp. 17–18, quoted in Bowers 1974, p. 82, n. 4). Bowers asserted that editions founded on Greg's method would "represent the nearest approximation in every respect of the author's final intentions" (Bowers 1964, p. 227). Bowers stated similarly that the editor's task is to "approximate as nearly as possible an inferential authorial fair copy" (quoted in Tanselle 1976, p. 168). Tanselle notes that, "Textual criticism...has generally been undertaken with a view to reconstructing, as accurately as possible, the text finally intended by the author" (Tanselle 1995, p. 16).

Bowers and Tanselle argue for rejecting textual variants that an author inserted at the suggestion of others. Bowers said that his edition of Stephen Crane's first novel, Maggie, presented "the author's final and uninfluenced artistic intentions" (quoted in Zeller 1975, p. 247). In his writings, Tanselle refers to "unconstrained authorial intention" or "an author's uninfluenced intentions" (Tanselle 1986, p. 19). This marks a departure from Greg, who had merely suggested that the editor inquire whether a later reading "is one that the author can reasonably be supposed to have substituted for the former" (Greg 1950, p. 32), not implying any further inquiry as to why the author had made the change.

Tanselle discusses the example of Herman Melville's Typee. After the novel's initial publication, Melville's publisher asked him to soften the novel's criticisms of missionaries in the South Seas. Although Melville pronounced the changes an improvement, Tanselle rejected them in his edition, concluding that "there is no evidence, internal or external, to suggest that they are the kinds of changes Melville would have made without pressure from someone else" (Tanselle 1976, p. 194).

Bowers confronted a similar problem in his edition of Maggie. Crane originally printed the novel privately in 1893. To secure commercial publication in 1896, Crane agreed to remove profanity, but he also made stylistic revisions. Bowers's approach was to preserve the stylistic and literary changes of 1896, but to revert to the 1893 readings where he believed that Crane was fulfilling the publisher's intention rather than his own. There were, however, intermediate cases that could reasonably have been attributed to either intention, and some of Bowers's choices came under fire – both as to his judgment, and as to the wisdom of conflating readings from the two different versions of Maggie (Davis 1977, pp. 72–3).

Hans Zeller argued that it is impossible to tease apart the changes Crane made for literary reasons and those made at the publisher's insistence:

Firstly, in anticipation of the character of the expected censorship, Crane could be led to undertake alterations which also had literary value in the context of the new version. Secondly, because of the systematic character of the work, purely censorial alterations sparked off further alterations, determined at this stage by literary considerations. Again in consequence of the systemic character of the work, the contamination of the two historical versions in the edited text gives rise to a third version. Though the editor may indeed give a rational account of his decision at each point on the basis of the documents, nevertheless to aim to produce the ideal text which Crane would have produced in 1896 if the publisher had left him complete freedom is to my mind just as unhistorical as the question of how the first World War or the history of the United States would have developed if Germany had not caused the USA to enter the war in 1917 by unlimited submarine combat. The nonspecific form of censorship described above is one of the historical conditions under which Crane wrote the second version of Maggie and made it function. From the text which arose in this way it is not possible to subtract these forces and influences, in order to obtain a text of the author's own. Indeed I regard the "uninfluenced artistic intentions" of the author as something which exists only in terms of aesthetic abstraction. Between influences on the author and influences on the text are all manner of transitions (Zeller 1975, pp. 247–248).

Bowers and Tanselle recognize that texts often exist in more than one authoritative version. Tanselle argues that:

[T]wo types of revision must be distinguished: that which aims at altering the purpose, direction, or character of a work, thus attempting to make a different sort of work out of it; and that which aims at intensifying, refining, or improving the work as then conceived (whether or not it succeeds in doing so), thus altering the work in degree but not in kind. If one may think of a work in terms of a spatial metaphor, the first might be labeled "vertical revision," because it moves the work to a different plane, and the second "horizontal revision," because it involves alterations within the same plane. Both produce local changes in active intention; but revisions of the first type appear to be in fulfillment of an altered programmatic intention or to reflect an altered active intention in the work as a whole, whereas those of the second do not (Tanselle 1976, p. 193).

He suggests that where a revision is "horizontal" (i.e., aimed at improving the work as originally conceived), then the editor should adopt the author's later version. But where a revision is "vertical" (i.e., fundamentally altering the work's intention as a whole), then the revision should be treated as a new work, and edited separately on its own terms.

Format for apparatus

Bowers was also influential in defining the form of critical apparatus that should accompany a scholarly edition. In addition to the content of the apparatus, Bowers led a movement to relegate editorial matter to appendices, leaving the critically established text "in the clear," that is, free of any signs of editorial intervention. Tanselle explained the rationale for this approach:

In the first place, an editor's primary responsibility is to establish a text; whether his goal is to reconstruct that form of the text which represents the author's final intention or some other form of the text, his essential task is to produce a reliable text according to some set of principles. Relegating all editorial matter to an appendix and allowing the text to stand by itself serves to emphasize the primacy of the text and permits the reader to confront the literary work without the distraction of editorial comment and to read the work with ease. A second advantage of a clear text is that it is easier to quote from or to reprint. Although no device can insure accuracy of quotation, the insertion of symbols (or even footnote numbers) into a text places additional difficulties in the way of the quoter. Furthermore, most quotations appear in contexts where symbols are inappropriate; thus when it is necessary to quote from a text which has not been kept clear of apparatus, the burden of producing a clear text of the passage is placed on the quoter. Even footnotes at the bottom of the text pages are open to the same objection, when the question of a photographic reprint arises (Tanselle 1972, pp. 45–46).

Some critics believe that a clear-text edition gives the edited text too great a prominence, relegating textual variants to appendices that are difficult to use, and suggesting a greater sense of certainty about the established text than it deserves. As Shillingsburg notes, "English scholarly editions have tended to use notes at the foot of the text page, indicating, tacitly, a greater modesty about the 'established' text and drawing attention more forcibly to at least some of the alternative forms of the text" (Shillingsburg 1989, p. 56, n. 8).

The MLA's CEAA and CSE

In 1963, the Modern Language Association of America (MLA) established the Center for Editions of American Authors (CEAA). The CEAA's Statement of Editorial Principles and Procedures, first published in 1967, adopted the Greg–Bowers rationale in full. A CEAA examiner would inspect each edition, and only those meeting the requirements would receive a seal denoting "An Approved Text."

Between 1966 and 1975, the Center allocated more than $1.5 million in funding from the National Endowment for the Humanities to various scholarly editing projects, which were required to follow the guidelines (including the structure of editorial apparatus) as Bowers had defined them (Tanselle 1975, pp. 167–8). According to Davis (1977, p. 61), the funds coordinated by the CEAA over the same period were more than $6 million, counting funding from universities, university presses, and other bodies.

The Center for Scholarly Editions (CSE) replaced the CEAA in 1976. The change of name indicated the shift to a broader agenda than just American authors. The Center also ceased its role in the allocation of funds. The Center's latest guidelines (2003) no longer prescribe a particular editorial procedure.

Cladistics

Canterbury Tales Woodcut 1484

Cladistics is a technique borrowed from biology, where it is used to determine the evolutionary relationships between different species. The text of a number of different manuscripts is entered into a computer, which records all the differences between them. The manuscripts are then grouped according to their shared characteristics. The difference between cladistics and more traditional forms of statistical analysis is that, rather than simply arranging the manuscripts into rough groupings according to their overall similarity, cladistics assumes that they are part of a branching family tree and uses that assumption to derive relationships between them. This makes it more like an automated approach to stemmatics. However, where manuscripts differ, the computer does not attempt to decide which reading is closer to the original text, and so it does not indicate which branch of the tree is the "root", that is, which manuscript tradition is closest to the original. Other types of evidence must be used for that purpose.
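The grouping step described above can be sketched in miniature. The four manuscripts and their readings below are invented for illustration, and the naive single-linkage merging shown here is only a stand-in for the maximum-parsimony methods that dedicated phylogenetics software actually uses:

```python
from itertools import combinations

# Hypothetical manuscripts: each is a tuple of readings at four variant sites.
manuscripts = {
    "A": ("saith", "unto", "begat", "yea"),
    "B": ("saith", "unto", "begat", "verily"),
    "C": ("says",  "to",   "fathered", "verily"),
    "D": ("says",  "to",   "begat",    "verily"),
}

def distance(x, y):
    """Count the variant sites at which two manuscripts disagree."""
    return sum(a != b for a, b in zip(x, y))

# Pairwise distances over all manuscript pairs.
dist = {
    frozenset((m, n)): distance(manuscripts[m], manuscripts[n])
    for m, n in combinations(manuscripts, 2)
}

# Agglomerative grouping: repeatedly merge the two closest clusters
# (single linkage), recording each merge as a branch of an unrooted tree.
clusters = [frozenset([m]) for m in manuscripts]
tree = []
while len(clusters) > 1:
    c1, c2 = min(
        combinations(clusters, 2),
        key=lambda pair: min(
            dist[frozenset((m, n))] for m in pair[0] for n in pair[1]
        ),
    )
    clusters.remove(c1)
    clusters.remove(c2)
    clusters.append(c1 | c2)
    tree.append((sorted(c1), sorted(c2)))

for merge in tree:
    print(merge)
```

On this invented data the sketch merges A with B and C with D first, reflecting their shared readings; but, as noted above, deciding which of those groupings sits nearest the archetype requires evidence external to the tree itself.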

The major theoretical problem with applying cladistics to textual criticism is that cladistics assumes that, once a branching has occurred in the family tree, the two branches cannot rejoin; so all similarities can be taken as evidence of common ancestry. While this assumption is applicable to the evolution of living creatures, it is not always true of manuscript traditions, since a scribe can work from two different manuscripts at once, producing a new copy with characteristics of both.

Nonetheless, software developed for use in biology has been applied with some success to textual criticism; for example, it is being used by the Canterbury Tales Project to determine the relationship between the 84 surviving manuscripts and four early printed editions of the Canterbury Tales.

Applications of textual criticism

The Hebrew Bible

Textual criticism of the Hebrew Bible centers on the comparison of the manuscript versions of the Masoretic text to early witnesses such as the Septuagint, the Vulgate, the Samaritan Pentateuch, various Syriac texts, and the Biblical texts of the Dead Sea Scrolls.

The New Testament

Byzantine illuminated manuscript, 1020

The New Testament has been preserved in more manuscripts than any other ancient work: over 5,300 Greek manuscripts, 10,000 Latin manuscripts and 9,300 manuscripts in various other ancient languages, including Syriac, Slavic, Ethiopic and Armenian. This compares to fewer than 700 manuscripts for Homer's Iliad, the next best-documented work from antiquity. The sheer number of witnesses presents unique difficulties, chiefly in that it makes stemmatics impractical. Consequently, New Testament textual critics have adopted eclecticism after sorting the witnesses into three major groups, called text-types: the Alexandrian, the Western, and the Byzantine.

The New Testament portion of the English translation known as the King James or Authorized Version was based on the Textus Receptus, an eclectic Greek text prepared by Erasmus based primarily on Byzantine text manuscripts. However, following Westcott and Hort, most modern New Testament textual critics have concluded that the Byzantine text-type was formalised at a later date than the Alexandrian and Western text-types. Among the other types, the Alexandrian is viewed as more pure than the Western, and so one of the central tenets of current New Testament textual criticism is that one should follow the readings of the Alexandrian texts unless those of the other types are clearly superior.

However, a minority position, represented by The Greek New Testament According to the Majority Text edition of Zane C. Hodges and Arthur L. Farstad, insists that the Byzantine text-type represents an earlier text-type than the surviving Alexandrian texts, possibly the result of an early attempt at textual criticism. This position is also held by Maurice A. Robinson and William G. Pierpont in their Byzantine Majority Text: The Greek New Testament: Byzantine Textform. Proponents of the Byzantine text-type assert that Egypt, almost uniquely, offers optimal climatic conditions for preserving ancient manuscripts; papyri used elsewhere (in Asia Minor and Greece) would therefore not have survived the unfavorable climatic conditions. The argument states that the far greater number of surviving later Byzantine manuscripts implies an equivalent preponderance of Byzantine texts among lost earlier manuscripts, and hence that a critical reconstruction of the predominant text of the Byzantine tradition would have a superior claim to being closest to the autographs. Furthermore, proponent J.P. Green, in Volume II of Unholy, questions over 3,000 differences between Sinaiticus and Vaticanus in the New Testament Gospels.

Other scholars have criticized the current categorization of manuscripts into text-types and prefer either to subdivide the manuscripts in other ways or to discard the text-type taxonomy.

Textual criticism is also practiced by scholars who believe the New Testament was written in Aramaic, including James Trimm and Raphael Lataster. Such scholars often use textual criticism as evidence for an Aramaic original.

Classical texts

The much smaller number of witnesses to classical texts permits the adoption of stemmatics and, in some cases, of copy-text editing. However, unlike the New Testament, where the earliest witnesses are within 200 years of the original, the earliest surviving manuscripts of most classical texts were written about a millennium after their composition. Classical texts are therefore more likely to require emendation than the New Testament, in which the original reading is more likely to be preserved in at least one witness.


Footnotes

  1. ^ Vincent, Marvin Richardson (1899). A History of the Textual Criticism of the New Testament. Macmillan.

References

Stemmatics

  • von Reenen, Pieter (1996). Studies in Stemmatology. Amsterdam: John Benjamins Publishing Company.

Cladistics

  • Sober, Elliott. Reconstructing the Past: Parsimony, Evolution, and Inference. Cambridge: MIT Press.

Editing

  • Bowers, Fredson (1972). "Multiple Authority: New Problems and Concepts of Copy-Text". Library, Fifth Series. XXVII (2): 81–115.
  • Davis, Tom (1977). "The CEAA and Modern Textual Editing". Library, Fifth Series. XXXII (32): 61–74.
  • Ehrman, Bart D. (2005). Misquoting Jesus: The Story Behind Who Changed the Bible and Why. HarperSanFrancisco. ISBN 0-060-73817-0.
  • Gaskell, Philip (1978). From Writer to Reader: Studies in Editorial Method. Oxford: Oxford University Press.
  • Love, Harold (1993). "section III". Scribal Publication in Seventeenth-Century England. Oxford: Clarendon Press.
  • McKerrow, R. B. (1939). Prolegomena for the Oxford Shakespeare. Oxford: Clarendon Press.
  • Tanselle, G. Thomas (1995). "The Varieties of Scholarly Editing". Scholarly Editing: A Guide to Research. New York: The Modern Language Association of America.