Sixin Liao
Table of Contents
Abstract .......................................................................................................................... 7
Acknowledgement ......................................................................................................... 9
2.1 Pros and Cons of Subtitles: Does Redundancy Help or Harm? ..................... 15
2.2.2 Bilingual subtitles: dual cognitive benefits or dual cognitive burden? ... 26
Chapter 3. Methodology .............................................................................................. 42
3.1 Sample............................................................................................................ 42
4.1.3 Difference in DT% between subtitles and the visual image ................... 68
4.2 The Impact of Subtitle Mode on Cognitive Load .......................................... 71
4.3 The Impact of Subtitle Mode on the Scores of Free Recall Test ................... 73
References .................................................................................................................... 94
List of Figures
Figure 2. Screenshots of four video conditions: NS, CS, ES and BS. ......................... 45
Figure 11. Comparison of DT% in L1 and L2 subtitles in the bilingual condition. .... 65
Figure 14. DT% on the visual image across four video conditions. ............................ 68
Figure 15. Difference of DT% between subtitles and visual image for all participants in
Figure 16. Comparison of DT% in image and subtitles in three subtitled conditions. 69
Figure 17. Mean Fixation Duration in subtitles in different subtitled conditions. ....... 70
Figure 18. Average values of self-reported cognitive load and mental effort in four video conditions.
List of Tables
Table 1. Cronbach’s Alpha Coefficients for IL, EL and GL* in Four Conditions ..... 49
Table 4. Means of DT% on the Visual Image in Monolingual and Bilingual Conditions ............ 67
Table 6. Means (SD) of Cognitive Load and Mental Effort in Different Conditions . 72
Table 8. Comparison of Findings for the Overall Time Spent on Subtitles. ............... 82
Abstract
This study investigated the impact of subtitle mode on viewers’ visual attention
distribution, cognitive load and overall comprehension of the video’s content. Twenty
Chinese native speakers watched four videos with English narration, each in a different subtitle condition: with English subtitles (intralingual subtitles), with Chinese subtitles (interlingual subtitles), with both Chinese and English subtitles (bilingual subtitles),
and without subtitles. Their eye movements were recorded by means of a remote eye
tracker while watching the video. After watching each video, they were asked to
complete a post hoc Likert scale questionnaire to self-report three types of cognitive
load and mental effort in information acquisition. A free recall test was also used to
evaluate viewers’ comprehension of the video. Results showed that viewers’ visual
attention to L1 subtitles was more stable than that to L2 subtitles and less sensitive to
the increased visual competition in the bilingual condition, which could be attributed to
the language dominance of their native language. Bilingual subtitles did not create more
cognitive load or produce more cognitive gain than monolingual subtitles. However,
compared with the no subtitles condition, bilingual subtitles were found to be more beneficial, as they provided linguistic support that made the video easier to comprehend.
Acknowledgement
I would like to express my sincere gratitude to my supervisor, Associate Professor Jan-Louis Kruger, for his continuous encouragement and guidance
throughout this study. Without his considerable expertise, insightful advice and careful
editing, this thesis would not have been possible. I am also grateful to my co-supervisor,
Dr. Stephen Doherty, for his meticulous and valuable feedback on my work.
and for bringing me back on track when I thought of giving up.
My thanks also go to the staff at Macquarie University for all their work in building such a supportive and inspiring environment. I also wish to thank all participants for their time and effort.
Thanks are due to my dear friends and colleagues: Sijia, Weiwei, Leidy, Andrea, Anh, and Xiaomin, for their generous assistance along this journey. You made this journey much more enjoyable.
I am deeply indebted to my beloved parents and brother for their unfailing confidence in me, which made this academic journey possible. Thank you for being supportive and understanding of the
choices I made in life. Your selfless love is all that keeps me going and becoming a
better person.
Chapter 1. Introduction
This chapter consists of three sections. It begins with an introduction to the research
background (1.1), underlining the unexplored and yet valuable research issues that
inspired the current study. The second section outlines three research questions that this
study sets out to address (1.2), followed by an outline of the structure of the thesis in the last section
(1.3).
Recent years have seen the proliferation and global dissemination of audiovisual products around the world, for instance, videos distributed through online
video sharing platforms such as TED Talks or online programs from educational
institutions. In order to minimize the language barrier for audiences of diverse linguistic backgrounds, subtitling, which renders the spoken dialogue in written form, has become increasingly widespread (Aparicio & Bairstow, 2016; Kruger &
Doherty, 2016).
Based on linguistic parameters, subtitles can be categorized into three types, namely intralingual subtitles (subtitles in the same language as the spoken dialogue), interlingual subtitles (subtitles in a different language from the spoken dialogue), and bilingual subtitles (subtitles presented simultaneously in two different languages) (Díaz Cintas & Remael, 2007). Compared with intralingual and interlingual subtitles,
bilingual subtitles are less explored as they are normally used in a small number of
multilingual countries such as Belgium and Finland (Kuo, 2014; Pedersen, 2011).
However, recent years have seen an increase in the use of bilingual subtitles, particularly in online videos.
One distinct advantage of bilingual subtitles is that they make audiovisual products
accessible to both native and foreign audiences at the same time by providing dual
communication channels in two different languages. For this reason, China Central Television has been making efforts to produce bilingual subtitles (“… to produce bilingual subtitles,” 2015). Bilingual subtitles have also been hailed as an
effective tool in language learning as they combine the benefits of both intralingual and
interlingual subtitles, with intralingual subtitles providing the written forms of spoken
words that can facilitate vocabulary learning, and interlingual subtitles providing translations that support comprehension and absorption of the content (García, 2017). Bilingual subtitles are therefore considered particularly useful for language learners.
While subtitling has long been a focus of investigation, previous studies mostly focused
on the effects of subtitles on language learning. Few have attempted to explore their
impact on viewers’ cognitive load. Research along these lines is of significant value because cognitive load is closely related to learning outcomes. Watching subtitled videos can place high demands on viewers’ attentional and cognitive resources, because viewers have to cope with a rich combination of information sources, including the visual image (visual-nonverbal), the spoken dialogue (audio-verbal) and subtitles (visual-verbal). Processing these sources simultaneously may exceed the limited capacity of working memory and result in cognitive overload. Moreover, since subtitles and other on-screen elements compete for visual attention (cf. the automatic subtitle reading behavior demonstrated by d’Ydewalle et al. in 1991), the mere presence of subtitles as additional written texts may overburden the visual processing channel and interfere with the processing of other visual elements that could be essential for comprehension (Van der Zee, Admiraal, Paas, et al.). Subtitles also duplicate information presented through other sources (e.g., the visual image and the spoken dialogue). Processing redundant information that
is not necessary for learning will take away cognitive resources that could have been
available for the processing of essential information and comprehension could therefore
be inhibited (Kalyuga & Sweller, 2005). This is sometimes called the redundancy effect.
Although recent years have seen some research endeavors in examining the impact of
subtitles on cognitive load (Kruger, Hefer, & Matthew, 2014; Kruger & Doherty, 2016;
Kruger, Hefer, & Matthew, 2013; Kruger, Matthew, & Hefer, 2013), progress in this
field is still limited by the types of subtitles that are investigated and the measurement
of cognitive load. Previous studies have been centered on monolingual subtitles, with
little attention being paid to the effects of bilingual subtitles on cognitive load.
Compared with monolingual subtitles, bilingual subtitles that present written texts in two languages are likely to impose a heavy load on working memory and cause cognitive overload.
The current study had two objectives. The first objective was to compare the impact of different subtitle modes on viewers’ allocation of visual attention to subtitles and the image. The second objective was to determine whether bilingual
subtitles will result in more cognitive gain by combining the benefits of intralingual and
interlingual subtitles and thus facilitate comprehension or, alternatively, cause more cognitive load that hampers comprehension.
Making use of eye tracking technology and drawing upon Cognitive Load Theory, this
study therefore set out to answer the three research questions below.
R1: What is the impact of subtitle mode on the distribution of visual attention? More specifically:
(1) Is there any difference in the visual attention allocated to L1 and L2 subtitles across conditions?
(2) Is there any difference in the attention allocated to the visual image between subtitled and unsubtitled conditions?
R2: What is the impact of subtitle mode on cognitive load? More specifically, do
bilingual subtitles give viewers more cognitive benefits or generate more cognitive load than monolingual subtitles?
R3: What is the impact of subtitle mode on content comprehension? (Do bilingual subtitles lead to better comprehension of the video content than monolingual subtitles?)
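For readers less familiar with eye tracking, research question R1 is typically operationalized through measures such as dwell time percentage (DT%), i.e., the share of total viewing time spent fixating an area of interest (AOI) such as the subtitle region. The following is a minimal illustrative sketch of how DT% can be computed from fixation records; the fixation data and AOI labels are hypothetical, and this is not the analysis code used in this study.

```python
# Illustrative sketch: dwell time percentage (DT%) for one area of
# interest (AOI), computed from a list of fixation records.
# Fixation durations and AOI names below are hypothetical examples.

def dwell_time_percentage(fixations, aoi):
    """fixations: list of (aoi_label, duration_ms) tuples.
    Returns the percentage of total fixation time spent in `aoi`."""
    total = sum(duration for _, duration in fixations)
    in_aoi = sum(duration for label, duration in fixations if label == aoi)
    return 100.0 * in_aoi / total if total else 0.0

# Hypothetical trial: durations in milliseconds.
fixations = [
    ("subtitle_L1", 300),
    ("image", 500),
    ("subtitle_L2", 200),
]
print(dwell_time_percentage(fixations, "subtitle_L1"))  # 30.0
```

Comparing such percentages per AOI across the four video conditions is one way the attention-distribution questions above can be answered quantitatively.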
The remainder of the thesis is structured as follows: Chapter 2 provides a review of relevant research in monolingual and bilingual subtitling, cognitive load and
eye tracking. Chapter 3 presents the methodology of this study, including sampling,
materials, data collection and statistical analyses. Results of data analysis are presented in Chapter 4, and the final chapters discuss the findings and conclude the thesis.
Chapter 2. Literature Review
This chapter is composed of five sections. The first section begins with a discussion of
the advantages and disadvantages of subtitles with reference to the verbal redundancy
effect (2.1). The next section (2.2) gives an introduction to three types of subtitles,
namely intralingual, interlingual and bilingual subtitles, and reviews the effects of each type. The third section (2.3) first gives an introduction to three types of cognitive load as identified in Cognitive Load Theory and
then reviews previous studies that explored the impact of subtitles from a cognitive load
perspective. In the fourth section (2.4), the value and use of eye tracking technology in subtitling research are discussed.
2.1 Pros and Cons of Subtitles: Does Redundancy Help or Harm?
Initially used to provide deaf and hard-of-hearing audiences with access to audiovisual products, subtitling converts orally delivered information into a written text which is usually presented at the bottom
of the screen simultaneously with the auditory information (Di Giovanni, 2016; Díaz
Cintas & Remael, 2007). The proliferation and global dissemination of audiovisual
products that are made possible by online video sharing websites and platforms such as
Coursera and TED Talks have led to a wider application of subtitles. Subtitling is
currently used widely as an effective method to minimize the language barrier and make
audiovisual products more accessible to global audiences (Aparicio & Bairstow, 2016;
Gottlieb, 2012; Kruger et al., 2014). In addition to the lower cost of production
compared with dubbing (Moreno, 2017), the growing popularity of subtitling is also
attributed to its advantage of giving audiences an authentic taste of the foreign culture
In spite of the widespread use of subtitles, there are divergent views regarding the
effects of subtitles. On the one hand, it is believed that subtitles provide on-screen
linguistic support that enables viewers, especially foreign language learners, to segment
the verbal stream and obtain the meaning of words with greater ease, thus more
cognitive resources can be devoted to deeper processing of the video content (cf. Mayer,
Lee, & Peebles, 2014). On the other hand, it is argued that subtitles (as a written form
of the spoken dialogue) generate redundant information which may not contribute to comprehension but instead consume limited cognitive resources that could have been used to process other essential information. An examination of the redundancy effect (or, more specifically, the verbal redundancy effect) is therefore necessary to better understand
the effects of subtitles. The redundancy effect has been explored extensively in the field
[Footnote 1: Subtitles do not constitute verbal redundancy when they are the only means to convey verbal information, as in the case of deaf or hard-of-hearing audiences. However, as this study focused on hearing audiences who had intermediate proficiency in the language of the spoken dialogue, subtitles can be seen as a form of verbal redundancy for these audiences, who had access to the same verbal information in spoken form.]
of educational psychology, and occurs when presenting the same information in
multiple forms and modalities that are not necessary for learning results in less effective
learning (Kalyuga & Sweller, 2005; Sweller, Ayres, & Kalyuga, 2011). The redundancy
effect has been found in numerous studies using concurrent presentation of written and
spoken words in multimedia learning. For instance, Mayer, Heiser and Lonn (2001)
found that when students watched narrated video with concurrent on-screen text that
summarized or duplicated the narration, they performed worse in retention and transfer tests than did those who only had access to the narration. They claimed that the concurrent on-screen text overloaded students’ visual information processor, forcing students to split their attention between multiple sources of visual information and leaving fewer cognitive resources available for learning.
Using spoken texts as redundant information, Diao and Sweller (2007) found that presenting written text with concurrent narration impaired reading performance at both the lexical and text level and caused a higher mental load than the written-only presentation when reading in English as a foreign language, an effect that was more obvious when the text was more complex. Based on Cognitive Load Theory, they
explained that as novice learners were not able to recognize the sound-symbol relations
established by the written text and narration, processing the same information in
multiple forms increased extraneous cognitive load that interfered with learning.
However, a number of studies revealed that the redundancy effect does not always
manifest itself when written and spoken words are presented concurrently. For instance,
a study conducted by Moreno and Mayer (2002a) found that presenting both spoken
and written explanations with animation did not hurt learning compared with presenting
only spoken or only written explanations with animation in a multimedia learning
environment. They suspected that students may have avoided reading on-screen text
and only used spoken narration for verbal information. While this explanation could
reasonably account for the absence of the redundancy effect, it needs to be tested by an analysis of viewers’ actual viewing behavior, for instance by means of eye tracking.
Adesope and Nesbit (2012) conducted a meta-analysis to examine the effects of verbal redundancy in multimedia learning. Synthesizing the results of previous studies, they reported that spoken-written presentations were more advantageous than spoken-only presentations under certain conditions, moderated by factors such as the absence of pictures or animation and the lack of control over the pace of display (see Footnote 2). They argued that a meaningful interpretation of verbal redundancy research must take such moderating factors into account. Likewise, Kalyuga (2012) proposed some factors that may moderate the verbal redundancy effect, suggesting, for instance, that presenting written text in small segments could eliminate the negative consequences of cognitive load caused by
[Footnote 2: They explained that learners could use other channels to enhance comprehension if they misperceive information in one channel when they do not have control over the presentation pace. When learners have control over the presentation pace, they may use only one channel and repeat information for effective processing.]
the need to reconcile the written text with the transient spoken words within limited
time.
Although the studies of Adesope and Nesbit (2012) and Kalyuga (2012) provide us with useful insights into the factors that moderate verbal redundancy, it remains difficult to predict the effects of subtitles on the basis of verbal redundancy research alone. This is the case because subtitles are associated with a combination of the conditions in which verbal redundancy has a facilitative effect (e.g., lack of control over the pace of display and segmented written texts) and conditions in which verbal redundancy may be detrimental (e.g., the concurrent presence of pictures or animation).
Furthermore, as pointed out by Bisson et al. (2014) and Hinkin, Harris and Miranda, most previous studies on verbal redundancy used static expository prose as experimental materials; relatively less attention has been paid to dynamic audiovisual materials such as subtitled videos.
Although the benefits of subtitles have been well documented, most of these benefits concern language development, such as word recognition, vocabulary acquisition and speech perception (Almeida & Costa,
2014; Bird & Williams, 2002; Gernsbacher, 2015; Linebarger, Piotrowski, &
Greenwood, 2010; Matielo, D’Ely, & Baretta, 2015; Mitterer & McQueen, 2009; Saed,
Yazdani, & Askary, 2016; Vanderplank, 2013; Wang, 2014). Language gains do not necessarily translate into better comprehension of the video content, as videos are multimodal in nature and viewers are required to integrate both pictorial and linguistic
content for effective comprehension. This view can be supported by previous research
that set out to investigate whether there exists a significant tradeoff between subtitle
processing and image processing (Perego, Del Missier, Porta, & Mosconi, 2010).
Lång (2016) examined the processing of subtitles from the perspective of information acquisition. Fourteen Finnish natives with
no knowledge of Russian and twenty Russian natives with sufficient Finnish language
skills watched a short documentary narrated in Russian with Finnish subtitles. All
viewers were asked to complete a comprehension test that consisted of three types of
questions, including subtitle-related questions (information included in both subtitles and narration). The Russian group,
which could acquire information from both L1 narration and L2 subtitles obtained
significantly higher scores in the subtitle-related questions than the Finnish group, which could only rely on L1 subtitles for verbal information. Lång (2016) therefore concluded that access to verbal information from both channels facilitated information acquisition.
Using similar stimuli (entertainment media content) and testing method (source-
specific comprehension test), the study of Lavaur and Bairstow (2011) took a further step by examining the role of the viewer’s
[Footnote 3: It should be noted that the Russian group may not have acquired information from both channels. As narration was provided in the viewers’ native language and subtitles in a second language, viewers may have mainly acquired information from the audio-verbal channel (narration) and paid little attention to the subtitles, especially if they were not familiar with subtitle reading.]
language proficiency. Ninety French native high school students were divided into
beginner, intermediate and advanced groups based on their English proficiency. They
were asked to watch an English narrated movie extract without subtitles, with English
subtitles and with French subtitles, after which they were required to complete a comprehension test. The researchers found that when watching a movie in a foreign language, subtitles served as a crucial tool for comprehension for viewers with low language proficiency but had a distracting effect for viewers with high proficiency.
An important implication of the study by Lavaur and Bairstow (2011) is that, as is the
case with verbal redundancy effects, the effects of subtitles can be influenced by various factors, such as the viewer’s language proficiency. Other factors that have been found to influence the processing and effects of subtitles include
age (d’Ydewalle & De Bruycker, 2007; Munoz, 2017), translation strategy of subtitles
(Ghia, 2012), text chunking (Rajendran, Duchowski, Orero, Martínez, & Romero-
Fresco, 2013), subtitle position (Fox, 2016), and the relation between the language of
subtitles and the native language of the viewer (Mitterer & McQueen, 2009).
The current study aims at examining whether the linguistic format of subtitles (i.e., monolingual or bilingual) moderates their effects. By extending the investigation to bilingual subtitles, this study hopes to shed some light on the role of redundancy in processing
subtitled materials.
2.2 Subtitles in Different Linguistic Formats
Based on linguistic parameters, subtitles can be categorized into three types, namely intralingual, interlingual and bilingual subtitles. Intralingual subtitles refer to subtitles that are in the same language as the spoken dialogue, which are primarily used by deaf and hard-of-hearing audiences; for foreign language learners, subtitles in the foreign language are known as L2 subtitles. Intralingual subtitles are similar to captions and
these two terms are used interchangeably in some studies. Both intralingual subtitling
and captioning provide a written form of the spoken dialogue in the same language,
except that the former does not provide a transcription or translation of nonverbal information such as sound effects, nor does it identify speakers (Garza, 1991; Specker, 2008).
Interlingual subtitles (or translated subtitles) refer to subtitles that are displayed in a
language different from that of the dialogue, normally in viewers’ native language.
Interlingual subtitles are also known as standard subtitles or L1 subtitles (Raine, 2012).
Different from intralingual and interlingual subtitles which consist of subtitles in only
one language, bilingual subtitles (or dual subtitles) present subtitles simultaneously in
two different languages. This type of subtitle is used in multilingual countries or
regions where two or more languages are spoken, such as Finland, Belgium, Israel,
Singapore, Malaysia and Hong Kong (Corrizzato, 2015; Gottlieb, 2004; Kuo, 2014).
In China, for instance, the national broadcaster China Central Television is stepping up its efforts to present television programs with subtitles in both
English and Chinese in order to attract a wider audience. The increasing usage of
bilingual subtitles in online videos is also attributed to the efforts of amateur subtitlers
who translate online foreign language videos on a voluntary basis (Hsiao, 2014; Zhang,
2013). These amateur subtitlers form their own online translation groups (known as
fansubbing groups), produce different formats of subtitles and upload subtitled videos to their own video sharing websites.
The benefits of intralingual subtitles and interlingual subtitles have been explored extensively. One of the earliest studies on the effects of subtitles was conducted by Price (1983). Nearly 500 participants of 20 native
language backgrounds were randomly assigned to two groups watching four English
excerpts: one group with captions (English language subtitles) and one without captions.
Results showed that viewers in the captioned group achieved significantly better comprehension than those in the non-captioned group. Price’s findings are corroborated by the study of Garza (1991), who used a similar design with different language pairs. The participants included seventy English native speakers who used Russian as a foreign language and
[Footnote 4: Fansubbing groups often create their own websites for video sharing, such as YYets.]
forty students who used English as a foreign language. It was found that all students
performed better in comprehending the linguistic content of the video material when
they were provided with captions. He concluded that the use of captions benefited
advanced foreign language learners by bridging the often sizable gap between their listening and reading comprehension skills. His study is one of the few that explored subtitles in a language that has a significantly different syntactic structure and orthography from that of English, and is therefore of particular value.
The study of Borras and Lafayette (1994) also found that intralingual subtitles were beneficial, helping viewers “associate the aural and written forms of words more easily and quickly” (p. 70).
Desmet (2013) conducted a meta-analysis that synthesized and calculated the effect sizes of previous studies on subtitled video and language learning. While the benefits of L2 (intralingual) subtitles have been demonstrated in a large number of studies (see e.g., Markham & Peter, 2003; Markham, Peter, & Mccarthy, 2001), some researchers hold an opposite view, arguing that L1 (interlingual) subtitles are more
beneficial as they prevent inaccurate inferences of word meaning (Aloqaili, 2014;
Mitterer & McQueen, 2009). Moreover, it is believed that interlingual subtitles produce
more comprehensible input when the written/spoken language is beyond the language
proficiency of viewers and the translation of the spoken dialogue can provide viewers with easier access to the content.
As pointed out by Borras and Lafayette (1994), the extent to which viewers benefit
from intralingual and interlingual subtitles may depend on their language proficiency.
For instance, Bianchi and Ciabattoni (2008) reported that L1 subtitles were more facilitative for viewers with lower proficiency, while L2 subtitles were more beneficial for viewers with higher proficiency.
Vulchanova et al. (2015) carried out a study to investigate the effects of different types of subtitles on adolescent learners. They found that both L1 and L2 subtitles had positive short-term effects on viewers’ plot and content comprehension of
the audiovisual materials compared with the no subtitle group. While the 16 year-old
age group benefited more from L2 subtitles, the 17 year-old age group seemed to
comprehend the video equally well in both L1 and L2 subtitles, a similar result as found
for the intermediate language learners in the study of Lavaur and Bairstow (2011).
Taken together, these findings suggest that viewers benefited from intralingual and interlingual subtitles in different ways. It is difficult to draw a definitive comparison between intralingual and interlingual subtitles, as the benefits of these two types of monolingual subtitles are
subject to certain conditions such as the viewer’s language proficiency and can be
manifested in different ways. Given that intralingual and interlingual subtitles have their respective benefits, a question arises as to whether these benefits can be combined when these two types of subtitles are presented simultaneously in the form of bilingual subtitles. Following this question,
the next section provides a review of studies on the effects of bilingual subtitles.
2.2.2 Bilingual subtitles: dual cognitive benefits or dual cognitive burden?
While the potential educational benefits of intralingual and interlingual subtitles have
been explored extensively and are well documented, the effects of bilingual subtitles
have attracted little research interest. On the one hand, bilingual subtitles (as a combination of intralingual and interlingual subtitles) may offer the benefits of both intralingual and interlingual subtitles (García, 2017; Kovacs &
Miller, 2014). On the other hand, bilingual subtitles may generate more redundancy
because there is not only overlapping information between subtitles and information in
other modalities such as the visual image and the spoken dialogue, but also between
subtitles in two different languages (if the viewer has some level of proficiency in both languages). Processing this redundant information may deplete the cognitive resources needed for effective comprehension (Kalyuga & Sweller, 2005). Although it has been found that intralingual or interlingual subtitles do not cause cognitive
overload (Kruger et al., 2014; Kruger et al., 2013), these findings may not apply to bilingual subtitles. Yekta (2010) assigned participants to four experimental groups, each of which watched a subtitled movie in one of four conditions, including conditions with differently displayed subtitles and with only a transcript of the dialogue. A major shortcoming of Yekta’s study is the very small sample size: with five participants excluded, only 12 participants were investigated, leaving each experimental group with only three participants.
Using a much larger sample size, the study of Wang (2014) sheds light on the potential
benefits of bilingual subtitles in language learning. Eighty Chinese ESL students were
divided into four groups and each group watched four video clips in one of four
treatments: with L1 Chinese subtitles, with L2 English subtitles, with bilingual (L1 and
L2) subtitles, and without subtitles. A vocabulary and a comprehension test were
administered to viewers after watching each video. Bilingual subtitles were found to be the most beneficial in comparison with the monolingual and no subtitles conditions. Wang (2014) contended that
bilingual subtitles could enhance students’ confidence, facilitate the learning of new words and allow them to confirm their comprehension by comparing the original subtitles with their translation equivalents. However, due to the lack of evidence on the extent to which subtitles were actually processed, Wang’s study shares the same limitation as most studies in this field.
With the same focus on Chinese and English language groups, the study of Yang (2013),
however, yielded different results. One hundred and twenty one Chinese students were
divided into four groups, each of which watched an English video in one of four
conditions: without subtitles, with Chinese subtitles, with English subtitles, and with
bilingual subtitles (both Chinese and English). While the groups that saw subtitled videos performed significantly better in the comprehension test than those who saw the video without subtitles, no clear advantage emerged for bilingual subtitles. It should be noted that in Yang’s study (2013), participants were non-English majors while Wang’s study (2014) recruited English majors. The different language proficiency
of participants may account for the divergent results of these two studies.
In addition to performance data, Raine (2012) also made use of attitudinal data to examine language learners’ preferences for subtitle modes based on enjoyment and vocabulary learning. Japanese learners of English were assigned to four different groups: an English subtitles group, a Japanese subtitles group, a bilingual
subtitles group (English and Japanese), and a no subtitle group. Vocabulary test and survey results showed that the majority of viewers preferred bilingual subtitles for vocabulary learning.
Aloqaili (2014) conducted a study similar to that of Raine (2012) in a different language context. Learners with English as a foreign language and Arabic as their native language were divided into four groups: no subtitles, with interlingual subtitles (Arabic),
with intralingual subtitles (English), and with bilingual subtitles (Arabic and English).
The pre- and post-test scores for the vocabulary knowledge scale showed that subtitles facilitated vocabulary learning, and participants reported in the questionnaire survey that they preferred bilingual subtitles. In another study, it was found that viewers who were provided with bilingual subtitles (both Chinese and
English subtitles) outperformed those who received Chinese-only or English-only subtitles. The researchers claimed that in bilingual subtitles, the two different subtitle tracks can complement one another, the one bridging different concepts and meanings, the other compensating for gaps in comprehension.
Li conducted a further study to examine the effects of different subtitle modes (intralingual, interlingual and bilingual), using a comprehension test and a vocabulary test, and explored students’ attitudes towards the usefulness of different subtitle modes. The bilingual group performed best in the delayed tests three weeks later. She explained that the better performance in word recognition and recall tests in the bilingual subtitle condition was probably achieved because bilingual subtitles allowed learners to associate the sound with both written form and meaning, thus establishing a more stable mental
representation of the new words. Although the lack of data on the visual processing of
viewers limits the value of Li’s study, it provides some evidence for the educational benefits of bilingual subtitles. In summary, research on bilingual subtitles is relatively limited. Much of the research in this field has been centered on learning outcomes rather than on the processing of subtitles. Moreover, while findings of these studies are based on analyses of either
attitudinal or performance data, the interpretations of findings are often related to the cognitive processing of subtitles, which was not directly measured. Visual processing data is therefore required to produce more convincing evidence for the effects of bilingual subtitles. Examining the processing of subtitles also allows us to elucidate the mixed results regarding the effects of subtitles on comprehension. Drawing upon Cognitive Load Theory, one of the most influential theoretical frameworks that is used to account for the cognitive processing during learning (Martin, 2014), the next
section illustrates how the concept of cognitive load has been explored in subtitling
research.
2.3 Subtitles and Cognitive Load
Cognitive Load Theory (CLT) is an instructional theory that provides guidelines for instructional
design to deal with the processing limitations of the human cognitive system. CLT is
based on the assumption that the human cognitive architecture is composed of two memory systems: working memory, which is limited in both capacity and duration when dealing with novel information, and long-term memory, which has
unlimited storage capacity. Different information elements are first processed in the
working memory and then transformed into knowledge in the form of schema stored in
the long-term memory (Sweller et al., 2011). Learning will be inhibited if working
memory is overloaded with too many information elements that need to be processed simultaneously; conversely, learning is facilitated when existing schemas can be retrieved from long-term memory to working memory to assist with information organization (Low &
Sweller, 2005).
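The capacity constraint described above is often summarized in the CLT literature as an additive model over the three load types that this section goes on to define (intrinsic, extraneous and germane load). The following formulation is a standard textbook sketch of that assumption, not an equation taken from this thesis:

```latex
% Additive model of cognitive load (standard CLT assumption):
% intrinsic (IL), extraneous (EL) and germane (GL) load share
% one limited working-memory budget.
IL + EL + GL \le WM_{\mathrm{capacity}}
% Hence the capacity available for schema construction (germane load)
% is whatever intrinsic and extraneous load leave free:
GL \le WM_{\mathrm{capacity}} - (IL + EL)
```

On this view, reducing extraneous load (for example, redundant on-screen text) frees capacity for germane processing, which is the logic behind the redundancy effect discussed in section 2.1.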
Cognitive load refers to the load that is imposed on the learner when performing a
particular task (Chandler & Sweller, 1991). Three components of cognitive load have
been identified in the literature, namely intrinsic cognitive load, extraneous cognitive
load and germane cognitive load. Intrinsic cognitive load is created by dealing with the inherent complexity of the learning material (i.e., its level of element interactivity) (Sweller, 2010, p. 124; Van Merriënboer & Sweller, 2005, p. 150). For
learners with the same level of expertise, materials that have high element interactivity
impose a heavy load on working memory because they generate high intrinsic cognitive
load. Intrinsic cognitive load is also determined by the learner’s prior knowledge or
level of expertise. More experienced learners may experience less intrinsic cognitive
load than those with less experience as they have more advantages in schema
automation.
Extraneous cognitive load, in contrast, is imposed by the manner in which information is presented and does not contribute to learning. For instance, extraneous cognitive load is increased when
Sweller, 2005). Learners are also likely to experience high extraneous cognitive load
when they make efforts to process redundant information that is unnecessary for learning.
Different from intrinsic and extraneous cognitive load, germane cognitive load refers to the working memory resources devoted to schema construction and automation (Sweller,
2010; Sweller, Van Merrienboer, & Paas, 1998). It is created when learners are engaged in activities that directly contribute to learning. Given the limited cognitive resources in working memory, the more intrinsic and extraneous
cognitive load is produced, the fewer cognitive resources are left for the learner to form
schema and thus less germane cognitive load is generated (see Figure 1 for a
visualization of three types of cognitive load). As a result, less learning is gained. Given
Paas, Van der Vleuten, Van Gog, & Van Merriënboer, 2013), a vital principle in
this field has been centered on the effects of subtitles on language learning (see e.g.,
Bird & Williams, 2002; Danan, 2004; Saed et al., 2016; Wang, 2014; Yekta, 2010;
Yoshino, Kano, & Akahori, 2000) or factors that may influence the processing of
subtitles (see e.g., d’Ydewalle & De Bruycker, 2007; Ghia, 2012; Hefer, 2013a). Few
attempts have been made to explore the effectiveness of subtitles from a cognitive
perspective.
Perego et al. (2010) carried out a study to investigate whether subtitle reading would interfere with
the processing of the visual image. Forty-one Italian native speakers were recruited to
watch a 15-minute video excerpt narrated in Hungarian with Italian subtitles, after
recognition test and a scene recognition test. Results showed that viewers achieved a
between image processing and subtitle processing. They therefore concluded
that subtitle processing was cognitively effective and individuals had the ability to
process, integrate and remember information from multiple sources. However, they
pointed out that the effectiveness hypothesis for subtitle processing may not apply to
situations in which viewers are required to cope with very complex visual images.
Moreover, it should be noted that in their study, viewers could only rely on subtitles for
verbal information due to their lack of knowledge of the foreign language in the
soundtrack. A different picture could be drawn when viewers are less compelled to read
subtitles in order to understand the video, for instance, when they are able to understand the language of the soundtrack.
As the usage of subtitles is increasing in educational activities (see Kruger & Doherty,
2016), recent years have seen a growing body of research that sets out to explore the impact of subtitles on cognitive load. Using a combination of direct and indirect cognitive load measurements, which included eye tracking,
electroencephalography (EEG), self-report scales and performance data, Kruger et al.
(2013) found that the presence of same language subtitles in a recorded academic
lecture could reduce ESL students’ cognitive load and did not cause cognitive overload.
In a subsequent study, Kruger et al. (2014) explored the impact of subtitle language on cognitive load. Participants were randomly assigned to three groups, each of which watched an English lecture in one of
the three conditions: without subtitles, with English (L2) subtitles or with Sesotho (L1)
subtitles. Results showed that performance was not significantly affected by the presence or the language of subtitles,
and participants in the L1 subtitles group reported lower comprehension effort. They
also found that the language of subtitles had an impact on attention distribution, with
less time being spent on L1 subtitles than on L2 subtitles. They explained that as the
concurrent presence of L2 audio information made the L1 subtitles less necessary and
redundant for comprehension, students might use L1 subtitles only to check information
information from multiple channels such as sound and image, viewers also have to
assign attention to subtitles in two different languages that appear on the screen
simultaneously. However, little research has been conducted to investigate the impact of bilingual subtitles on cognitive load, which limits our understanding of how bilingual subtitles are processed.
2.4 Eye Tracking in Subtitling Research
Eye tracking provides a means of examining viewers' reading behavior, attention allocation and cognitive activities (Hvelplund, 2017; Moreno, 2017).
While the use of eye movement data has been well established in the context of static
reading (see e.g., Rayner, 2009, 2012), the application of eye tracking technology to
subtitling research has a relatively short history. The challenge of using eye tracking to
study subtitle processing lies in the dynamic nature of subtitles, which are presented as transient text synchronized with a moving image.
In the existing literature, eye tracking has been mostly used to investigate the visual and
subtitled videos (see e.g., d'Ydewalle et al., 1991; Kruger, 2016; Kruger et al., 2014;
Kruger et al., 2013; Perego et al., 2010) and to explore how the reading behavior of viewers is influenced by factors such as the language combination between the spoken dialogue and subtitles, translation strategies, text
chunking and line segmentation, etc. (see e.g., d’Ydewalle & De Bruycker, 2007; Ghia,
2012; Hefer, 2013b, 2013a; Lång, Mäkisalo, Gowases, & Pietinen, 2013; Mangiron,
For instance, d'Ydewalle and De Bruycker (2007) investigated the eye movement patterns of children and adults when watching a movie in two conditions: standard subtitling (foreign
language soundtrack and native language subtitles) and reversed subtitling (native
language soundtrack and foreign language subtitles). Twelve adults and eight children
participated in the experiment, all of whom were native Dutch speakers and had no
knowledge of the foreign language in the movie excerpt (Swedish). They found that
viewers showed a more irregular reading pattern in reversed subtitling as there were
more subtitles skipped, fewer fixations and longer latencies. They also found that
viewers spent less time on one-line subtitles than on two-line subtitles in the standard
subtitling condition. They suggested that this was because one-line subtitles contained
less redundant information and thus required less visual attention. If these findings can
be applied to the present study, viewers should spend more time on bilingual subtitles
compared with monolingual subtitles as there are more redundant information sources
in bilingual subtitles.
In an eye tracking study carried out by Perego et al. (2010), they found that viewers
spent more time (67% of the fixation time) examining the subtitled area while fixations
on the visual area were longer. However, due to the lack of a no-subtitles control group in their study, it is still unclear whether the presence of subtitles has a significant impact on attention allocation.
Ross and Kowler (2013) also found that viewers spent a large portion of
time (more than 40%) on the subtitled area, even in the presence of redundant audio
information. They suggested that the way viewers allocated their attention between
subtitles and the visual image was dependent on their inherent habits of being attracted
allocated attention between the subtitles and the image. When watching three videos with L1 subtitles (one
video with professional subtitles and two with non-professional subtitles), viewers with
high language proficiency in English had fewer fixations on the subtitled area than on the
image and the data showed less dispersion than the viewers with low language
proficiency, indicating that viewers who had high language proficiency and were able
to acquire verbal information from the soundtrack effectively would rely less on subtitles.
Winke, Gass and Sydorenko (2013) made the first attempt to investigate how the
difference between L1 and L2 affects L2 learners’ reading of subtitles and the benefits
they gain from intralingual subtitles. They compared the time spent on the subtitle area
and the rest of the screen by English L1 learners of Arabic, Chinese, Russian and
Spanish. It was found that Chinese learners spent less time on subtitles when watching
unfamiliar content than familiar content, indicating that processing difficult visual-
Bisson et al. (2014) investigated sixty-four viewers’ subtitle reading behavior in three
video conditions: standard subtitling (i.e., foreign language soundtrack with native
language subtitles), reversed subtitling (i.e., native language soundtrack with foreign
language subtitles) and intralingual subtitling (i.e., foreign language soundtrack with foreign language subtitles). They found that viewers spent more time on the subtitle
area when the soundtrack was in an unknown foreign language, with subtitles in either language. Fixation duration, the number of fixations in the subtitle area and the number of skipped subtitles did not differ significantly, suggesting that viewers read subtitles in a similar way irrespective of the subtitle language when the
soundtrack was in an unknown foreign language. Their study also revealed that viewers
did not use all the subtitle presentation time to read subtitles but instead spent some
time on the image area, which supports Perego et al. (2010)’s finding that subtitle
Hefer (2013a) compared L1 and L2 subtitle reading. Thirty Sesotho L1 students were randomly assigned to three groups:
two test groups (TSE and TES) and one control group (CSS). The TSE group watched
a French video clip with Sesotho (L1) subtitles in the first half and English (L2) subtitles
in the second half of the video while the TES group watched the same video with L2
subtitles in the first half and L1 subtitles in the second half. The CSS group watched
the same video with only L1 subtitles. Ten English L1 students formed another control
group. All students had no knowledge of the language of the soundtrack and their eye
movements were recorded by an eye tracker during video playing. Analysis results of
fixation time, dwell time and fixation count in the subtitled area revealed that L1 and
L2 subtitles were processed differently, with viewers reading L2 subtitles with greater ease. Hefer (2013a) explained that the reason
why viewers spent more time and had greater difficulty reading L1 subtitles was
because they had lower literacy and reading speed in L1 text, although native language
Interestingly, the study of Kruger et al. (2014) which also investigated attention
distribution of Sesotho L1 students reported different results. Kruger et al. (2014) found
that viewers spent less time reading L1 subtitles than L2 subtitles and the proportion of
time spent on subtitles was smaller than what was found in the study of Hefer (2013a).
A possible explanation for these different results is that viewers may reduce their
reliance on subtitles because of their access to the L2 spoken dialogue in the study of
Kruger et al. (2014). However, this assumption needs further investigation, as d'Ydewalle et al. (1991) found that the knowledge of the soundtrack did not affect the time spent on
subtitles.
Using eye tracking methodology, Munoz (2017) examined the effects of age and language proficiency on subtitle reading. Participants were divided into three age groups: children (mean age 11.1), adolescents (mean age
14.6), and adults (mean age 25.8). Participants were also assigned to three language
groups based on their language proficiency: the beginner group, the intermediate group
and the advanced group. Participants were asked to watch two video clips with an English soundtrack, one with English (L2) subtitles and the other with Spanish (L1) subtitles.
Their eye movements were recorded during video watching. Results showed that children spent more time reading subtitles
consistent with the results of Kruger et al. (2014) that students who were able to obtain
verbal information from the L2 spoken dialogue spent less time reading L1 subtitles. It
was also found that intermediate and advanced groups skipped more L1 subtitles than
L2 subtitles, which is probably due to the large overlap between the age and language
proficiency grouping (the beginner group was mainly composed of children and the
advanced group was mainly formed by adults). Munoz (2017) and Kruger et al. (2014)
hold a similar view that L1 subtitles are skipped more often by viewers of relatively high language proficiency, for whom L1 subtitles are redundant for comprehension as they are able to acquire verbal information from the L2 soundtrack.
2.5 Summary
Based on the literature review, a number of research gaps can be identified. First, most previous studies have been limited to monolingual subtitles. The limited existing research on bilingual subtitles has been centered on their effects
on language learning. Little is known about the effects of bilingual subtitles on the allocation of visual attention and cognitive load, and the processing of bilingual subtitles has not been investigated with the same vigor as the processing of monolingual subtitles.
Second, so far only a very limited number of studies have attempted to investigate the
impact of subtitles from a cognitive load perspective (Kruger, 2013; Kruger et al., 2013; Kruger et al., 2014). More research is needed in order to help us
better understand how viewers may benefit from subtitles and optimize the use of
subtitles. Third, most existing studies have focused on European languages that are in the same language family (Lwo & Lin, 2012); little
research has been conducted to examine Chinese subtitles, which are based on a very different writing system. The need to investigate language pairs with minimal orthographic and phonological similarities has long been
acknowledged by some scholars (see e.g., Bisson et al., 2014; Garza, 1991; Hinkin et
Chapter 3. Methodology
This chapter is composed of six sections. The first section provides information about the sample used in this study (3.1). The second section (3.2) illustrates the selection
criteria of stimuli for the eye tracking experiment, production guidelines of subtitles,
and the questionnaires used in the study. The third section (3.3) provides information on the apparatus used in the eye tracking experiment,
followed by a description of procedures of the whole experiment (3.4). The last two
sections (3.5 and 3.6) describe how data collected from the eye tracking device are
processed and analyzed, providing details on the exclusion of invalid eye movement
data, the defining of Areas of Interest (AOIs) and the selection of eye tracking measures
to address three research questions. It also discusses the criteria and procedure of
scoring the free recall test which is used to evaluate viewers’ overall comprehension of
the video.
3.1 Sample
Twenty Chinese native speakers in Australia who used English as their second language
were recruited as participants (14 females and 6 males). Eighteen participants were
qualification. The average age of participants was 25.7. Ethics approval was gained for
3.2 Materials
3.2.1 Stimulus
Four English video clips, each lasting approximately five minutes, were used as stimuli in the eye tracking experiment. These video clips were taken from four episodes of a BBC
documentary series (Planet Earth, 2006). The whole documentary series consists of
eleven episodes, each of which features a global overview of a habitat on the planet.
The topics of the four video clips were “Mountains” (Episode Two), “Great Plains”
(Episode Seven), “Jungles” (Episode Eight) and “Shallow Seas” (Episode Nine). These
videos were selected because they were comparable in terms of the density and
complexity of visual information (image) and verbal information (narration), as well as the presentation rate of auditory
information. To ensure that all video clips were comparable in terms of the difficulty
of verbal information, a readability test was performed for the transcription of each
Mark, & Li, 2014). The Flesch Reading Ease scores of the four videos were 75.71, 69.65, 71.65, and 71.88, showing that the four videos had a similar level of reading ease.
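The Flesch Reading Ease score is computed from average sentence length and average syllables per word. A minimal sketch of the standard formula follows; the naive vowel-group syllable counter is an approximation for illustration, not the tool used in the study:

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Standard Flesch Reading Ease formula:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores indicate easier text; the four videos scored roughly 70-76."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat. It was warm."))
```

A dedicated tool or library would use a more accurate syllable model, but the structure of the computation is the same.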
All video clips were examined by the researcher to ensure that there were no
Four experimental conditions were developed for each video clip (see Figure 2):
1. English narration without subtitles (NS),
2. English narration with Chinese subtitles (CS),
3. English narration with English subtitles (ES),
4. English narration with bilingual (Chinese and English) subtitles (BS).
Subtitles were produced using Aegisub subtitling software5. Bilingual subtitles were
first produced, after which Chinese subtitles and English subtitles were removed
separately to constitute the CS and ES conditions. The display time of subtitles in the CS, ES and BS conditions was the same in order to minimize the impact of other variables (e.g., display time) and investigate the effects that the different subtitle modes themselves have on viewers.
Figure 2. Screenshots of four video conditions: NS, CS, ES and BS.
5 Software available at http://www.aegisub.org/.
In this study, bilingual subtitles were presented in two lines to avoid excessive pollution
of the image in accordance with conventions (Díaz Cintas & Remael, 2007; Kuo, 2014),
with one line in the same language as the original speech (English) and the other one in
Chinese. In line with the convention of keeping the upper line shorter to limit the obstruction of other visual information (Díaz
Cintas & Remael, 2007), Chinese subtitles were displayed above English subtitles
because they normally occupy less space than English subtitles due to the different
writing systems. While English words are written with alphabetic symbols that represent sounds, Chinese uses a logographic writing system in which ideas or words are represented directly by symbols (Cheng, 2014). The Chinese
writing system has therefore been found to be more efficient and information rich
(Chang & Chen, 2002; Zhao & Baldauf, 2007), and in practical terms, simply occupies
less space. Subtitles were displayed at the bottom center of the screen. Chinese subtitles
and English subtitles in the BS condition were positioned at (x: 988, y: 1004) and (x:
988, y: 954) respectively. In the CS and ES conditions, subtitles were positioned at (x:
988, y: 1004).
which produced a near verbatim transcript of the spoken text. Each English subtitle
contained no more than 55 characters. The standard number of characters per line in
most guidelines is 37 characters (Díaz Cintas & Remael, 2007; Ivarsson & Carroll,
1998). However, since only one line was used per language, and due to the wider format of the screen, a line length 50% longer than the convention was considered to be
functional, particularly since the subtitles were created for use on a computer screen
with the user at a distance of approximately 70cm from the screen (see Figure 3). Line
breaks between subtitles were made to preserve semantic units where possible, in
line with subtitling conventions. Chinese subtitles were produced as literal translations, reproducing the original text as much as possible in both lexical
and syntactic terms7 (Ghia, 2012, p. 167). Each Chinese subtitle contained no more than
6 While 12 CPS has long been regarded as an appropriate speed for subtitle reading (Díaz-Cintas & Remael, 2007), viewers in the information age are likely to have a faster reading speed (Szarkowska, 2016).
7 Here are a few examples drawn from the subtitle extracts produced for this experiment:
Figure 3. Screenshot of English subtitles. The subtitle shown contains 55 characters and the font size is 50.
A biographical questionnaire was used to collect background information about participants, such as age, major, English language proficiency (IELTS scores), etc. (see
Appendix A).
3.2.3 Cognitive load questionnaire
The self-report cognitive load questionnaire used in this study was adapted from the instrument developed by Leppink et al. (2014), which measures different types of cognitive load (see Appendix B). This instrument was selected because it has been validated and is the first one to differentiate between different types of load. As
this study was based on a context of film comprehension, which was different from the
problem-solving context in which the study by Leppink et al. (2014) was situated, some
adjustments were made to the scale items to reflect the variations of cognitive load.
The cognitive load questionnaire was composed of twelve items with a 0-10 rating scale.
Intrinsic cognitive load (IL) was measured with three items that were related to the
complexity of the video (e.g., “The information covered in this video was very
complex”) and one item concerning the effort invested to cope with the complexity (“I
invested a very high mental effort in the complexity of this video”). Extraneous
cognitive load (EL) was evaluated with three items that were related to the presentation
design (e.g., “The presentation of information in this video was very unclear”) and one
item concerning the effort invested to deal with the presentation design (“I invested a
very high mental effort in unclear and ineffective presentation of information in this
video”). Germane cognitive load (GL) was evaluated with three items referring to the
contribution of the video to information acquisition (e.g., “This video really enhanced
my understanding of the information that was presented”) and one item related to the
effort invested in information acquisition (“I invested a very high mental effort during
A Cronbach's alpha coefficient of 0.7 or above is generally regarded to reflect a good level of internal consistency, and the coefficients for IL and EL across the four conditions were high, revealing a high level of reliability of the items used
to measure these two types of cognitive load. However, Cronbach’s alpha coefficients
were low for GL in three conditions. In order to increase the internal consistency, the
last item (item 12) which was related to the mental effort in information acquisition was
removed from the GL measurement.8 The cognitive load that was evaluated by the remaining three items was referred to as GL*. Item 12 was discussed separately as mental effort in information acquisition (ME).
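Cronbach's alpha is computed from the item-score matrix, and dropping a column shows how removing an item (such as item 12) changes internal consistency. A small illustrative sketch; the response data here are invented, not the study's:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for rows = respondents, columns = items:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(scores[0])
    items = [[row[j] for row in scores] for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(var(col) for col in items) / total_var)

# Invented 0-10 ratings from five respondents on four scale items.
ratings = [[7, 8, 7, 2], [5, 5, 6, 9], [8, 7, 8, 1], [4, 5, 4, 8], [6, 6, 7, 5]]
print(cronbach_alpha(ratings))                       # all four items
print(cronbach_alpha([row[:3] for row in ratings]))  # with the last item dropped
```

Comparing the two values is the mechanism by which removing an inconsistent item can raise the coefficient.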
Table 1. Cronbach’s Alpha Coefficients for IL, EL and GL* in Four Conditions
NS CS ES BS
8 Leppink et al. (2014) also reported in their study that adding the last item regarding the mental effort in
understanding the video did not increase the internal consistency of the scales used to measure germane
cognitive load.
3.3 Apparatus
The eye tracking experiment was conducted on an SMI RED eye tracker with a
sampling rate of 250 Hz. The screen resolution of the eye tracker’s monitor was 1920
× 1080 pixels and the stimulus covered the entire 23-inch screen. Data were collected
3.4 Procedure
All participants completed the eye tracking experiment individually. They were asked
to sign a participation consent form prior to the start of the experiment (see Appendix
C), which gave them a brief introduction to the study and what to expect during the
experiment. The purpose of the study was revealed to participants, but they were not informed of the specific research questions.
They were then seated comfortably on a stable chair 700 mm from the stimulus screen
Participants were asked to turn off their mobile phones during the experiment, and there was no interruption during data collection.
This was a within-subjects study, with each participant seeing all four videos, each in a
different condition (NS, CS, ES, and BS). In order to be able to randomize the texts and
the conditions and ensure that no participant would see the same text more than once
or be exposed to any condition more than once, the video clips, their treatments, and
the order in which viewers watched videos were counterbalanced using Latin Squares.
Participants were randomly assigned to one of four groups, each group seeing the four videos in a different order of conditions.
When participants were ready, they were asked to fill in a biographical questionnaire
on the computer, after which they were given written instructions on the screen (see
Figure 4). Each participant’s eyes were calibrated using a five-point calibration and
validated to ensure that their eye movements could be accurately recorded. After that,
a video clip began to play. When the video finished, they were asked to complete a
cognitive load questionnaire on the computer. Then they were asked to recall and write
down in English as much information as they could remember about the video they
watched. There was no time limit for the recall test. When the participants finished the
recall test, they did the calibration and validation again before watching the next video.
Each participant watched the four video clips separately and the order of videos was
2009). The whole experiment for each participant lasted about one and a half hours.
Figure 4. Screenshot of instructions for the eye tracking experiment.
Group 1 NS CS ES BS
Group 2 CS ES BS NS
Group 3 ES BS NS CS
Group 4 BS NS CS ES
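The ordering above is a cyclic Latin square: each row is the condition list rotated by one position, so every condition appears exactly once in each row and each column. A sketch of how such a square can be generated:

```python
def latin_square(items):
    """Cyclic Latin square: row i is the list rotated left by i positions,
    so each item appears once per row and once per column."""
    n = len(items)
    return [items[i:] + items[:i] for i in range(n)]

for group, order in enumerate(latin_square(["NS", "CS", "ES", "BS"]), start=1):
    print(f"Group {group}: {' '.join(order)}")
# Group 1: NS CS ES BS
# Group 2: CS ES BS NS
# Group 3: ES BS NS CS
# Group 4: BS NS CS ES
```

Because columns are also balanced, each condition occupies each serial position (first to fourth video) exactly once across the four groups.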
Twenty participants were coded from P03 to P22 (the first two participants, P01 and P02, took part in the pilot study and were not included in the analysis). Eye tracking data with a tracking ratio lower than 85% were discarded (see Figure 5). Three participants' eye
movement data were therefore excluded. One participant’s data in the BS condition was
also excluded because of a technical problem during data collection. To sum up, seventeen participants in the NS, CS and ES conditions and sixteen in the BS condition were included in the analysis.
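The exclusion rule can be expressed as a simple filter over per-recording quality logs; the participant IDs and tracking ratios below are invented for illustration:

```python
TRACKING_THRESHOLD = 85.0  # minimum acceptable tracking ratio (%)

# Hypothetical (participant, tracking ratio) records -- not the study's data.
recordings = [("P03", 96.2), ("P04", 83.4), ("P05", 91.7), ("P06", 88.0)]

valid = [pid for pid, ratio in recordings if ratio >= TRACKING_THRESHOLD]
excluded = [pid for pid, ratio in recordings if ratio < TRACKING_THRESHOLD]
print(valid)     # ['P03', 'P05', 'P06']
print(excluded)  # ['P04']
```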
Subtitles and the whole screen in the ES, CS and BS conditions were marked as Areas of Interest (AOIs). In the bilingual condition, subtitles in the two languages were marked as two separate AOIs. The AOI of Chinese subtitles in BS was
marked as “CS_B” and English subtitles as “ES_B”. There was no space between these two AOIs.
This study used three eye movement parameters: dwell time, visible time and mean fixation duration. Dwell time was calculated as the sum of all fixations and saccades within an AOI.
Visible time was calculated as the display time of an AOI. The event detection
Data in all AOIs for each participant were extracted from SMI BeGaze 3.0 (SensoMotoric Instruments GmbH), software for eye tracking data analysis. Dwell
time percentage of visible time (DT%) was used as a measure of visual attention distribution, and mean fixation duration as an indicator of cognitive load (Debue & Van De Leemput, 2014). DT% in subtitles was calculated by
dividing the dwell time in subtitles by the visible time of subtitles and multiplying that
by 100 to arrive at a percentage. Visual attention to the rest of the screen (DT% on the
visual image) was calculated by dividing the dwell time on the screen (with the dwell
time of subtitles subtracted) by the visible time of the video and multiplying that by 100
for a percentage.
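The two DT% measures described above can be written out as follows (a sketch; the variable names are illustrative):

```python
def dt_percent_subtitles(subtitle_dwell_ms, subtitle_visible_ms):
    """DT% in subtitles: dwell time in the subtitle AOI divided by the
    time the subtitles were visible, as a percentage."""
    return subtitle_dwell_ms / subtitle_visible_ms * 100

def dt_percent_image(screen_dwell_ms, subtitle_dwell_ms, video_visible_ms):
    """DT% on the visual image: dwell time on the whole screen minus dwell
    time in subtitles, relative to the visible time of the video."""
    return (screen_dwell_ms - subtitle_dwell_ms) / video_visible_ms * 100

print(dt_percent_subtitles(3000, 10000))  # 30.0
```

Note that the two measures use different denominators: subtitle visible time for the subtitle AOI, but the full video duration for the image.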
[Figure 5: tracking ratio (%) of each participant, P03 to P22.]
Figure 7. Screenshot of AOIs of bilingual subtitles.
Each recall test was analyzed into a set of idea units. Each idea unit contained one major
idea. One point was given in the recall test for each corresponding idea unit that was
correctly recalled, regardless of grammatical mistakes and minor misspellings. Half a point (0.5) was given if specific names
were not given correctly in the recall test. Three participants’ recall scores were
discarded because they only noted a few isolated words. One participant’s data in the
ES and one in the BS condition were also discarded because of technical problems in
video playing. In total, 17 recall tests in the NS and CS conditions and 16 in the ES and
BS conditions were scored. Two researchers scored the recall test separately after first
scoring a sample test, discussing discrepancies and reaching agreement on the scoring
criteria. The average of the two researchers' scores was used as an evaluation of each participant's comprehension of the video.
3.6 Statistical Analyses
This section describes the statistical analyses that were carried out using IBM SPSS Statistics (version 22). Video condition (NS, CS, ES and BS) was used as the independent
variable and dependent variables included dwell time percentage of visible time in
subtitles (DT% in subtitles), dwell time percentage of visible time on the visual image
(DT% on the visual image), mean fixation duration in subtitles (MFD), self-reported cognitive load and free recall scores.
Since eye tracking data often violate the normality assumption of inferential statistical tests such as ANOVA or the t-test, data that were not normally distributed were additionally analyzed with non-parametric tests.
(1) Is there any difference in the visual attention allocated to L1 and L2 subtitles
The DT% in subtitles in the three subtitled conditions (ES, CS and BS) were compared. As there were no outliers in the data, as assessed by inspection of a boxplot, and data were normally distributed, as assessed by Shapiro-Wilk's test (p > 0.05), a one-way repeated measures ANOVA was
conducted. To confirm the ANOVA results, paired sample t-tests were also performed
on the data.
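The paired-samples t statistic used in these comparisons is the mean of the pairwise differences divided by their standard error. A standard-library sketch; the study ran these tests in SPSS, and the DT% values below are invented:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic with df = n - 1:
    t = mean(d) / (sd(d) / sqrt(n)), where d are the pairwise differences."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Invented DT% values for the same four viewers in two conditions.
es_alone = [32.0, 35.0, 30.0, 33.0]
es_bilingual = [15.0, 18.0, 14.0, 17.0]
print(paired_t(es_alone, es_bilingual))
```

The resulting t value is then compared against the t distribution with n − 1 degrees of freedom to obtain a p value.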
conditions were compared. There were no outliers in the data as assessed by inspection
of a boxplot and the differences of DT% between two conditions were normally
In order to examine the difference in the visual attention to L2 subtitles between the
inspection of a boxplot and the differences of DT% between two conditions were
compared. Given that there was no outlier in the data, as assessed by inspection of a boxplot, and the differences of DT% were normally distributed, as assessed by Shapiro-Wilk's test (p > 0.05), a paired-samples t-test was conducted.
(2) Is there any difference in the attention allocated to the visual image between
To answer this question, DT% on the visual image in four conditions were compared.
To determine the effect of subtitle mode on intrinsic cognitive load (IL), reported values
of IL in four video conditions were compared. One outlier was detected in the BS condition and kept in the analysis, as an inspection of its value did not reveal it to be extreme. Data were normally distributed in NS, CS
and ES, as assessed by Shapiro-Wilk's test (p > 0.05). There was a violation of normality in BS, as assessed by Shapiro-Wilk's test, but the Kolmogorov-Smirnov test did not reveal any violation of normality (p > 0.05) in BS. A one-way repeated
measures ANOVA was conducted, and to validate the results, a non-parametric test
(Friedman test) was also performed. Both the ANOVA and the Friedman test produced the same results.
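The Friedman test ranks each participant's scores across the conditions and compares the rank sums; the statistic is approximately chi-square distributed with k − 1 degrees of freedom. A sketch of the statistic without tie correction (statistics packages such as SPSS apply a tie correction when needed):

```python
def friedman_statistic(data):
    """Friedman chi-square for n subjects x k conditions (assumes no ties
    within a subject's row):
    Q = 12 / (n*k*(k+1)) * sum(Rj**2) - 3*n*(k+1),
    where Rj is the rank sum of condition j across subjects."""
    n, k = len(data), len(data[0])
    rank_sums = [0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])  # condition indices by score
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Four subjects who all rank three conditions identically -> maximal Q.
print(friedman_statistic([[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # 8.0
```

Because it operates on within-subject ranks rather than raw scores, the test tolerates the non-normal distributions that motivated its use here.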
To determine the effect of subtitle mode on extraneous cognitive load (EL), reported
values of EL in four video conditions were compared. One outlier in ES and one in BS were kept in the analysis, as an inspection of their values did not reveal them to be extreme. Shapiro-Wilk's test
revealed that EL scores were normally distributed in NS and CS (p > 0.05), while the Kolmogorov-Smirnov test did not reveal any violation of normality in the four conditions
(p > 0.05). A one-way repeated measures ANOVA was first performed, which found significant differences between conditions. A non-parametric test (Friedman test) was then conducted to validate the results. The Friedman test only found a significant difference between BS and NS, but no difference between NS and CS. As
the data in the NS and CS conditions were normally distributed, a paired-samples t-test
was performed, which found significant difference between NS and CS (t (19) = 3.359,
p = 0.003). As the one-way ANOVA results were confirmed by the paired-samples t-test, the ANOVA results were retained.
To determine the effect of subtitle mode on germane cognitive load (GL*), reported
values of GL* in four video conditions were compared. There were no outliers in the
data, as assessed by inspection of a boxplot, and data were normally distributed in the four conditions, as assessed by Shapiro-Wilk's test (p > 0.05). A one-way repeated measures ANOVA was therefore conducted.
To examine the effect of subtitle mode on mental effort in information acquisition (ME), reported values of ME in four video conditions were compared. No outlier was found in the data.
Data of mean fixation duration (MFD) in subtitles in four conditions were also
compared to provide more information on cognitive load. Three outliers were kept in
the analysis, as an inspection of their values did not reveal them to be extreme. No violation
of normality was observed, as assessed by Shapiro-Wilk's test (p > 0.05). A one-way repeated measures ANOVA was conducted.
To ascertain the effect of subtitle mode on content comprehension, scores of the free recall test in four conditions were compared. Two outliers (one in ES and one in CS) were kept in the analysis, as an inspection of their values did not reveal them to be extreme. Data were normally distributed in the four conditions, as assessed by Shapiro-Wilk's test (p > 0.05). A one-way repeated measures ANOVA was conducted and paired-samples t-tests were run to validate the results.
Chapter 4. Results
This chapter presents the results of statistical analyses to address three research
questions. To address the first question regarding the impact of subtitle mode on
attention allocation, statistical analysis results of dwell time percentage of visible time
(DT%) in subtitles and on visual image are discussed. To answer the second question
regarding the impact of subtitle mode on cognitive load, statistical analysis results of
three types of cognitive load (i.e., intrinsic cognitive load, extraneous cognitive load,
germane cognitive load) as well as mental effort are discussed. The third question, concerning the impact of subtitle mode on comprehension, is addressed with the scores of the free recall test.
The one-way repeated measures ANOVA showed significant differences in the DT%
in subtitles between different subtitling conditions (F (2, 30) = 3.944, p = 0.030). Post
hoc analysis with a Bonferroni adjustment for multiple comparisons revealed that there
Table 3. Means of DT% in Subtitles in Monolingual and Bilingual Conditions
Condition M (SD) N
CS 21.55 (13.24) 16
ES 32.15 (17.50) 16
BS 33.62 (16.43) 16
CS_B 18.33 (15.91) 16
ES_B 15.29 (15.91) 16
As can be seen from Figure 8, in the monolingual conditions in which subtitles were
presented in only one language, L2 (English) subtitles attracted more visual attention
than L1 (Chinese) subtitles. Figure 9 shows that a majority of viewers (68.75%) had
higher DT% in L2 subtitles than in L1 subtitles. Both DT% in Chinese subtitles and
English subtitles decreased from the monolingual condition (CS and ES) to the
bilingual condition (CS_B and ES_B). Paired samples t-tests were run to determine
whether there were significant differences in the DT% when subtitles were presented alone and when they were presented together. A significant difference was found between ES and ES_B (t (15) = 2.815, p = 0.013), but this was not the case between CS
and CS_B (t (15) = 0.772, p = 0.452). In other words, viewers spent much less time reading L2 subtitles in the bilingual condition than in the monolingual condition,
whereas they spent a similar amount of time reading L1 subtitles in both conditions.
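The reported p values can be recomputed from the t statistics and their degrees of freedom; for example, with scipy (the t values below are those reported above):

```python
from scipy import stats

def two_tailed_p(t, df):
    """Two-tailed p-value for a (paired samples) t statistic."""
    return 2 * stats.t.sf(abs(t), df)

# t values reported for the ES vs ES_B and CS vs CS_B comparisons
p_es = two_tailed_p(2.815, 15)  # close to the reported p = 0.013
p_cs = two_tailed_p(0.772, 15)  # close to the reported p = 0.452
```

Small discrepancies are expected because the reported t statistics are rounded to three decimals.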
Figure 10. Differences of DT% in L1 and L2 subtitles in monolingual conditions.
“English_subtitles” = absolute values of the difference in DT% between English
subtitles and Chinese subtitles in monolingual conditions with higher DT% in English
subtitles. “Chinese_subtitles” = absolute values of the difference in DT% between
English subtitles and Chinese subtitles in monolingual conditions with higher DT% in
Chinese subtitles.
As can be seen from Figure 10 which compares DT% in subtitles between Chinese (L1)
monolingual and English (L2) monolingual conditions, the difference between the time
spent on L1 and L2 subtitles is larger and is also less variable when viewers spent more
time reading L2 subtitles than when they spent more time reading L1 subtitles.
In the bilingual condition where Chinese subtitles and English subtitles coexisted, no
significance in the DT% was identified between CS_B and ES_B (t (15) = 0.539, p =
0.598). Similar DT% in the Chinese subtitles (M = 18.3%, SD = 15.9) and English subtitles (M = 15.3%, SD = 15.9) indicates that viewers spent a comparable amount of time on two different subtitles when watching videos with bilingual subtitles.
However, on closer inspection of the individual data, nearly half of participants had
higher DT% in English subtitles while the other half had higher DT% in Chinese subtitles (see Figure 11).
As can be seen from Figure 12, which compares DT% in subtitles between Chinese
(L1) and English (L2) subtitles in the bilingual condition, the difference between the
time spent on L1 and L2 subtitles is similar. It indicates that when viewers chose one
language as a dominant resource, they spent a similar amount of time reading subtitles
in another language.
It can be observed from Figure 13 that a majority of participants (62.5%) changed their
preferences for the language of subtitles when they shifted from a monolingual to a
bilingual condition, with seven participants changing their reliance from L2 subtitles to L1 subtitles.
The one-way repeated measures ANOVA found that DT% in the visual image was
significantly different between subtitling conditions (F (3, 45) = 8.382, p < 0.0005).
Post hoc analysis with a Bonferroni adjustment for multiple comparisons revealed that DT% on the visual image was significantly higher in the NS condition than in the subtitled conditions, with no significant differences among the three subtitling conditions.
Table 4. Means of DT% on the Visual Image in Monolingual and Bilingual Conditions
Condition  Mean (SD)      N
BS         64.48 (8.75)   16
CS         67.28 (9.61)   16
ES         64.58 (7.07)   16
NS         73.29 (13.45)  16
Figure 14. DT% on the visual image across four video conditions.
Paired samples t-tests found that there were significant differences in DT% between
subtitles and image in three different subtitling conditions. DT% was significantly
higher in image than in subtitles for all subtitled conditions: CS (t (16) = -13.354, p <
0.0005), ES (t (16) = -6.491, p < 0.0005) and BS (t (15) = -7.157, p < 0.005). While
almost all participants spent more time on the image than on the subtitles regardless of the subtitling condition (see Figure 15), the differences in DT% between subtitles and visual image in BS and ES were roughly the same, while a more noticeable gap was observed in the CS condition (see Figure 16).
Figure 15. Difference of DT% between subtitles and visual image for all participants
in different subtitled conditions. Positive results = a higher DT% in subtitles. Negative
values = a higher DT% on the image.
Figure 16. Comparison of DT% in image and subtitles in three subtitled conditions.
4.1.4 Mean Fixation Duration in the subtitled area
The one-way repeated measures ANOVA showed that mean fixation duration (MFD) was not
significantly different between CS, ES, CS_B and ES_B (F (3, 45) = 1.289, p = 0.290).
Condition  Mean (SD), ms    N
CS         159.21 (43.07)   16
ES         143.77 (27.13)   16
CS_B       150.42 (40.55)   16
ES_B       139.84 (51.17)   16
4.2 The Impact of Subtitle Mode on Cognitive Load
The one-way repeated measures ANOVA showed that there were significant
differences in IL between subtitling conditions (F (3, 51) = 5.321, p = 0.003). Post hoc
analysis with a Bonferroni adjustment for multiple comparisons revealed that there was a significant difference in IL between the NS and BS conditions. The one-way repeated measures ANOVA also showed significant differences in EL between different conditions (F (3, 51) = 5.103, p = 0.004). Post hoc analysis with a Bonferroni adjustment for multiple comparisons revealed that there were significant differences in EL between the NS and BS conditions.
The one-way repeated measures ANOVA showed that there were significant differences in GL* between different conditions (F (2.198, 37.373) = 8.424, p = 0.001). Post hoc analysis with a
Bonferroni adjustment for multiple comparisons revealed that there was a significant difference between the NS and CS conditions (p = 0.009). GL* was highest in the CS condition and lowest in the NS condition.
Results of the Friedman test showed that mental effort in the subtitled area was significantly different between conditions (p = 0.049).
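Because the self-reported mental effort ratings are ordinal, a non-parametric Friedman test is the appropriate choice here. A sketch with scipy and hypothetical ratings (illustrative values, not the study's data):

```python
from scipy.stats import friedmanchisquare

# Hypothetical 9-point mental effort ratings from ten viewers under
# three of the video conditions (illustrative values only).
ns = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]
cs = [5, 6, 4, 5, 6, 5, 4, 6, 5, 5]
bs = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]

# The test ranks each viewer's ratings across conditions, so it only
# assumes ordinal data and repeated measures from the same subjects.
stat, p = friedmanchisquare(ns, cs, bs)
```

With this perfectly ordered toy data every viewer ranks the conditions identically, so the test statistic reaches its maximum for ten subjects and three conditions.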
Table 6. Means (SD) of Cognitive Load and Mental Effort in Different Conditions
Condition IL EL GL* ME
Note. N = 18.
Figure 18. Average values of self-reported cognitive load and mental effort in four
video conditions: NS, CS, ES and BS.
4.3 The Impact of Subtitle Mode on the Scores of Free Recall Test
The one-way repeated measures ANOVA revealed that there was no significant
difference in the recall scores between different conditions (F (3, 42) = 1.447, p = 0.243), although the paired samples t-test results found that the difference between NS and BS approached significance.
Condition  Mean (SD)     N
NS         8.45 (3.46)   15
CS         9.88 (5.44)   15
ES         9.82 (4.00)   15
BS         10.83 (5.93)  15
Chapter 5. Discussion
Drawing upon eye movement data and self-reported data, this study investigated
Chinese L1 viewers’ distribution of visual attention and cognitive load when watching
English videos in different conditions: without subtitles, with English subtitles (L2
subtitles), with Chinese subtitles (L1 subtitles), and with bilingual subtitles (L1 + L2
subtitles). This study had two main objectives. The first objective was to compare the
viewers’ visual attention to subtitles and image. The second objective was to determine
whether bilingual subtitles would result in more cognitive gain by combining the benefits of L1 and L2 subtitles without increasing cognitive load or impairing comprehension.
The results showed that subtitle mode had a significant impact on viewers' visual attention to subtitles. Viewers spent more time looking at subtitles in the BS condition than in the CS condition. This might be attributed to the fact that the BS condition contained two lines of subtitles whereas there were only one-line subtitles in the monolingual conditions. However, no significant difference in DT% was found between the BS and ES conditions. This would suggest that it is not the number of subtitle lines but rather the addition of subtitles in a non-native language that results in
more attention to the subtitled area. This also provides some evidence for the statement made by Kruger and Steyn (2014) that the number of lines does not play as big a role in the processing of subtitles as might be expected. Given that L2 subtitles attracted a significantly higher amount of visual attention than L1 subtitles (nearly the same as the attention to the bilingual subtitles), it seems that viewers are more compelled to divert their attentional resources from other visual elements (e.g., image) to subtitles when the subtitles are in a non-native language.
An observation of the differences in the time spent on L1 and L2 subtitles between the
bilingual and monolingual conditions revealed that viewers’ visual attention to subtitles
in different languages was not equally sensitive to competition. Viewers spent less time
on L1 subtitles in the bilingual condition than in the L1 monolingual condition, but the
difference did not reach significance. However, the case was different for L2 subtitles:
viewers spent much less time looking at the L2 subtitles in the bilingual condition than in the L2 monolingual condition, and the difference was significant. In other words, the introduction of bilingual subtitles did not significantly alter the visual attention to L1 subtitles (they received the same amount of attention as in the monolingual condition), but it did result in a significant reduction of visual attention to L2 subtitles.
When provided with both L1 and L2 subtitles in the bilingual condition, viewers did not allocate an equal amount of visual attention to the two different subtitles, nor did they completely ignore the subtitles in one language due to their redundancy. Instead, they chose one
language as a main source of visual-verbal information. It seems that viewers are able
to adjust their viewing pattern and choose the less cognitively demanding way to
understand the video because paying equal attention to two subtitles would mean that
viewers have to shift back and forth between two subtitles, which could consume extra
cognitive resources and hinder information acquisition. It can also be explained by early selection theories of attention, which posit that stimuli will be filtered at an early stage in order not to overload the cognitive system.
Moreover, the fact that viewers spent time reading subtitles in both languages in spite
of their redundancy provides evidence for the automatic subtitle reading behavior reported in previous studies (d'Yewalle et al., 1991). Meanwhile, the unequal amount of visual attention devoted to the L1 and L2 subtitles implies that the two different subtitles in the bilingual subtitling condition function differently: one is used as the major visual-verbal source of information while the other serves as a complementary resource, rather than both being read for a similar purpose. As noted by Lavaur and Bairstow (2011),
viewers may refer to the other language subtitles to compare and confirm the
aural/visual input from time to time. However, such confirmation needs to be conducted
effectively and quickly as a long translation process may cause the viewers to lag
behind and lose track (Saed et al., 2016), which could consequently deprive the viewers of the capacity to follow the unfolding video. Another notable finding concerns the change in viewers' subtitle preferences from the monolingual conditions to the bilingual condition. Nearly half of viewers changed their
preferences from L2 subtitles to L1 subtitles when they shifted from monolingual to bilingual conditions.
There are two possible explanations for this. One is that viewers may have more stable and automatic access to their native language (Heredia & Altarriba, 2001; Heredia, Olivares, & Cies, 2014). In the face of time constraints, viewers are inclined to acquire information in their native language. The other is that L2 subtitles carry a higher degree of redundancy than L1 subtitles when L2 audio information is available, and are therefore less attended to by viewers. This would suggest that viewers may have the ability to filter out more redundant information, even though they are unable to completely avoid it, in order to save cognitive resources for higher order processing and deeper
elaboration of the messages (Liu, Lai, & Chuang, 2011; Reese, 1984).
It was found that viewers’ visual attention to the dynamic image was not significantly different between the three subtitled conditions. In other words, viewers spent an approximately similar amount of time on the image, even though there were more sources of information in the bilingual condition competing for visual attention. This implies that viewers’ reliance on the image appears to be more stable than their reliance on subtitles. It seems that, when faced with increased redundancy, viewers would rather spend more time on the less redundant information (i.e., the image) in order to maximize information acquisition. This again corroborates the view that viewers are able to allocate their attentional resources strategically.
In monolingual conditions where subtitles were presented in either viewers’ native
language (L1 subtitles) or second language (L2 subtitles), viewers spent significantly
more time looking at L2 subtitles, which is in line with the results reported by Kruger
et al. (2014).
As it was found in the studies of Guichon and McLornan (2008) and Tsai and Huang (2009) that the lexical interference between L1 subtitles and the L2 spoken dialogue could hinder comprehension, viewers in the current study may have chosen to skip more L1 subtitles in order to avoid lexical interference caused by the linguistic differences between the L1 subtitles and the L2 spoken dialogue. In this sense, viewers’ greater attention to L2 subtitles may reflect top-down, strategic control of attention. However, viewers’ greater visual attention to L2 subtitles could also be a result of bottom-up processing, as the meaning of spoken language can impinge on the viewer’s allocation of visual attention.
It has been found that viewers are inclined to fixate on the visual objects that are most
semantically relevant to the spoken words (see e.g., Cooper, 1974; Eichert, Peeters, &
Hagoort, 2017; Mishra, Olivers, & Huettig, 2013; Salverda & Altmann, 2011). It is very
likely that viewers spend more time looking at L2 subtitles because there exist stronger
semantic relations between subtitles and the spoken dialogue that share the same
language. If this is the case, viewers are expected to spend more time reading L1
subtitles if the spoken dialogue is in L1. To verify this postulation, further research is needed.
A comparison of the current study and previous relevant studies (see Table 8) revealed that viewers’ time spent on subtitles varied with their knowledge of the soundtrack: viewers who could understand the spoken dialogue spent less time reading subtitles than those who had no knowledge of the spoken language and could only rely on the subtitles. It seems that when information is presented in two different channels, both
aurally and visually, viewers tend to reduce their reliance on one single channel. This
view partially corroborates the findings reported by Sohl (1989) that viewers tried to
follow the speech when watching subtitled videos. An important implication of these findings is that presenting verbal information in both the auditory and the visual channel may benefit viewers, as they are less likely to be cognitively overloaded by processing all information in one single channel. For instance, as demonstrated in the study of Moreno and Mayer (2002b), students learned more effectively when the visual materials were accompanied by spoken narration.
However, given that reading skills are more developed than listening skills (Garza,
1991) and that subtitles are more efficient than auditory information (Hinkin et al.,
2014), it raises an interesting question as to why viewers still spare cognitive resources
for processing the audio information when they are able to acquire sufficient
information from the visual channel. One possible reason is that viewers split attention
between the subtitles and the spoken dialogue (redundancy between different sensory
channels) in order to relieve the stress on the visual processing memory and compensate
for the information loss caused by the splitting of attention between subtitles and the
image (redundancy within the same sensory channel). According to Dual Coding Theory, presenting information in multiple formats, such as written text, pictures and audio information, can produce enhancing effects by making the most use of two independent systems for processing visual and auditory information (Paivio,
1986b, 2007; Reese, 1984; Thompson & Paivio, 1994). Based on the premise that humans possess two independent processing systems, one responsible for processing verbal information and the other for nonverbal information, the Dual Coding Theory posits that information encoded in both systems is better retained than information encoded in only one. It could also be possible that the gap between reading and listening in dynamic contexts is smaller than assumed, so that visual-verbal information is not superior to, or is even less efficient than, auditory information. This may
explain why subtitles are found to be beneficial for low proficiency viewers but
distracting for advanced viewers as found in some studies (see, e.g., Lavaur & Bairstow,
2011).
It has also been suggested that the integration of audio and visual verbal information could occur automatically as
part of the multisensory processing of human cognition system (Ghazanfar & Schroeder,
2006; Quak, London, & Talsma, 2015). Further research is encouraged to determine
whether making use of audio input is an automatic behavior like subtitle reading
(d’Yewalle et al., 1991) and to what extent the presence of subtitles interferes with the processing of auditory information.
Table 8. Comparison of Findings for the Overall Time Spent on Subtitles

Study | Soundtrack and subtitles | Knowledge of the foreign language involved | Number of subtitle lines and overall time spent on subtitles
Hefer (2013a, p. 365) | French soundtrack with Sesotho subtitles (L1) | No (Sesotho L1 viewers without knowledge of French) | One-line: 79%; Two-line: 86%; Overall: 83%
Hefer (2013a, p. 365) | French soundtrack with English subtitles (L2) | No (Sesotho L1 viewers without knowledge of French) | One-line: 63%; Two-line: 76%; Overall: 74%
Kruger et al. (2014, p. 7) | English soundtrack with English subtitles (L2) | Yes (Sesotho L1 viewers using English as a second language) | One-line and two-line: 42.9%
Kruger et al. (2014, p. 7) | English soundtrack with Sesotho subtitles (L1) | Yes (Sesotho L1 viewers using English as a second language) | One-line and two-line: 20.3%
Current study (2017) | English soundtrack with English subtitles (L2) | Yes (Chinese L1 viewers using English as a second language) | One-line: 32.1%
Current study (2017) | English soundtrack with Chinese subtitles (L1) | Yes (Chinese L1 viewers using English as a second language) | One-line: 21.6%
Current study (2017) | English soundtrack with bilingual subtitles (L1 + L2) | Yes (Chinese L1 viewers using English as a second language) | Two-line: 33.6%

Note. Values were calculated as a percentage of dwell time of the visible time of subtitles.
5.2 The Impact of Subtitle Mode on Cognitive Load
Significant differences in three types of cognitive load were found between the NS and BS conditions: viewers in the BS condition reported lower IL and EL and a higher score in GL*, which suggests that adding bilingual subtitles makes the video easier to
understand and allows for more cognitive resources for the learning process than not
providing viewers any written text as linguistic support. It also supports the growing
body of evidence that processing subtitles is cognitively effective and does not cause
cognitive overload (Kruger et al., 2013; Lång, 2016; Perego et al., 2010).
In contrast to Diao and Sweller’s (2007) findings, this study did not find an increase
in extraneous cognitive load in the presence of redundancy between audio and visual
information. It is worth noting, however, that their study compared text only to text
with audio, whereas the present study does not have a text only condition.
As no significant differences were found in cognitive load or mental effort between the bilingual and monolingual conditions, there was insufficient evidence for the argument that bilingual subtitles give viewers a cognitive gain by combining the benefits of both L1 and L2 subtitles.
5.2.2 Eye tracking measures
The mean fixation duration results suggest that reading subtitles in either language was not differentially cognitively demanding regardless of whether the two subtitles were presented separately or together. Even when subtitles in one language were not used as a major channel for visual-verbal information in the bilingual condition, they were still processed to a similar depth.
It is interesting to note that while viewers spent significantly less time looking at L2
subtitles in the bilingual condition than in the monolingual condition (refer to the results
of DT% in subtitles), no difference was observed for the mean fixation duration in L2
subtitles between the two conditions. In other words, the reduction of time viewers
spend looking at subtitles did not affect the depth of processing of subtitles. This further suggests that viewers adjust how long they read subtitles rather than how deeply they process them.
It is also worth noting that mean fixation duration in L1 and L2 subtitles in both
monolingual and bilingual conditions were shorter than that reported by Bisson et al.
(2014) and d’Ydewalle and De Bruycker (2007) (see Table 9). A possible reason is that
viewers in the study of Bisson et al. (2014) had no knowledge of the soundtrack, which
means that they could only rely on subtitles for verbal information in the interlingual
subtitling condition. It indicates that when viewers are able to acquire information from
the spoken dialogue, they may experience less processing difficulties, even though the
subtitles are in a different language from the spoken dialogue. This also partially
accounts for viewers’ inclination to rely less on subtitles when the spoken dialogue is in a language they can understand.
Knowledge of the Soundtrack | L1 subtitles | L2 subtitles
The free recall scores did not differ significantly across the four different conditions,
which implies that viewers comprehend the video equally well regardless of the
presence and linguistic formats of subtitles, although the lowest comprehension rate in
the NS condition suggests that subtitles benefit comprehension, which is consistent with
a number of studies (Chung, 1999; Hayati & Mohmedi, 2011; Hosogoshi, 2016;
Markham et al., 2001; Wang, 2014). However, the lack of significance in the results means that this tendency should be interpreted with caution. Although bilingual subtitles do not seem to produce more cognitive benefits, the lack of any detrimental effect on comprehension dispels the concern that bilingual subtitles, which generate more redundancy, may cause cognitive overload and impair comprehension.
Different from some previous subtitling studies that used videos with an unknown language either in subtitles or in the soundtrack, this study provided viewers with access to comprehensible verbal information in both the auditory (L2) and visual (L1 and/or L2) channels, which means that viewers were exposed to either two or three sources of redundant verbal information. This makes it possible to extend research on the redundancy effect from a L1 context to a L2 context. The four video conditions that were investigated in the current study represent different degrees of redundancy, with the NS condition containing the least redundant information and the bilingual condition encompassing the most redundancy. Although the CS and ES conditions contain the same number of information sources, they could generate different degrees of redundancy because the semantic relations between subtitles and the spoken dialogue differ when the two verbal channels are in the same language and when they are in different languages.
This study therefore provides some interesting insights into the influence of redundancy
on visual processing and cognitive load when watching subtitled videos. First, the absence of a significant difference in comprehension scores between conditions suggests that the presence of subtitles as visual-verbal redundancy does not
give viewers a significant advantage in video comprehension. However, eye movement
data revealed that viewers spent more than 20% of the time reading subtitles in
monolingual conditions and more than 30% in the bilingual condition. Even though the
two different subtitles in the bilingual condition were redundant to each other, viewers
still spent time reading both subtitles. It appears that it is the presence of subtitles, rather than their informational necessity, that attracts visual attention. This view is in line with previous studies which found that subtitle
reading was an automatic behavior (Bisson et al., 2014; d’Yewalle et al., 1991).
The automatic reading behavior could be attributed to the fact that subtitles are a visual
trigger for automatic or bottom-up visual attention. For instance, the dynamic nature of subtitles, which appear on and disappear from the screen, may capture visual attention in the same way as motion (Bisson et al., 2014). In addition, it has been found that people tend to read
any available text as they believe that texts contain richer information (Cerf, Frady, &
Koch, 2009; Ross & Kowler, 2013; Wang & Pomplun, 2012). Subtitles as written text
therefore are likely to attract visual attention even though they are redundant to other sources of information.
If viewers cannot avoid redundant information, how they allocate their attentional and cognitive resources becomes critical given the limited capacity of working memory. Consistent with the findings of the study by Liu et al. (2011), the current
study found that viewers had the ability to filter out information with a higher degree of redundancy. Further research drawing on the literature on multisensory processing and integration would provide much insight in this regard (see,
e.g., Koelewijn, Bronkhorst, & Theeuwes, 2010; Morís Fernández, Visser, Ventura-
Campos, Ávila, & Soto-Faraco, 2015; Quak et al., 2015; Talsma, Senkowski, Soto-
Faraco, & Woldorff, 2010; Van der Burg, Brederoo, Nieuwenstein, Theeuwes, &
Olivers, 2010; Van der Burg, Talsma, Olivers, Hickey, & Theeuwes, 2011). This also
highlights the potential benefits that subtitling research could gain from other disciplines such as cognitive psychology and neuroscience.
Interestingly, findings of the current study do not support previous claims that redundant information increases cognitive load and hinders learning. Viewers in the BS condition, which presumably features more redundancy, reported lower intrinsic and extraneous cognitive load than in the NS condition, which contains the least amount of redundant
information. There could be two reasons for that. First, the redundancy effect is
originally based on native language contexts whereas the current study is based on a second language context. Presenting redundant information in a second language may have a different impact on cognitive load than presenting redundant information in the native language, and further research could compare the two contexts to examine if there exists any difference in subtitle processing. Second, the video used in
the current study is less image intensive than the animation used in other studies that
explored the redundancy effect. As a result, viewers in the current study may have had
more available cognitive resources for the processing of redundant verbal information.
In contrast to the redundancy effect which suggests that presenting the same
information in multiple forms and modalities will result in a decrease in learning, the
current study does not provide evidence for the detrimental effect of redundant verbal information on learning. Instead, the recall scores showed a difference approaching significance in favour of the BS condition, but this would have to be investigated with
a larger sample and possibly longitudinally before any conclusions can be made.
Findings of the current study suggest that the effects of redundancy may be less straightforward than previously assumed. This may explain why previous research which focused on determining the impact of different sources of redundancy has yielded mixed results: the effects appear to depend on the complexity of the task, individual characteristics and presentation modes, among other factors.
Chapter 6. Conclusion
This chapter is composed of two sections. The first section (6.1) summarizes findings
and contributions of the current study while the second section (6.2) discusses some limitations of the study and suggestions for further research.
6.1 Contribution
This study presents an empirical study as part of the growing body of research that
explores the impact of subtitle mode on cognitive processing and video comprehension.
In particular, it investigated the impact of bilingual subtitles on attention allocation and cognitive load, which has not been investigated before, and compared the processing of bilingual subtitles with that of intralingual and interlingual subtitles. Results showed that viewers’ visual attention to L1 subtitles was more stable than to L2 subtitles and was less sensitive to the increased visual competition in the bilingual condition. This study also dismisses the concern that bilingual subtitles may overload viewers because of the increased redundancy.
Results revealed that while viewers were inclined to attend to multiple available sources of redundant information, they appeared to have the ability to filter out some redundancy to save cognitive resources.
Findings of the current study also indicate that the presence of redundant information
does not necessarily result in an increase in cognitive load and less learning as suggested by the redundancy effect. The effects of redundant information are, to some extent, dependent on viewers’ ability to evaluate the momentary value of
different layers of redundancy, and actively select and integrate different sources of
redundancy based on their individual and dynamic needs to achieve their learning goal.
There are a number of limitations that should be taken into account when
replicating the current study in further research. First, given the time constraints of this
project, the sample size in the current study is small, which does not provide sufficient statistical power to detect the effect of subtitle mode on content comprehension. Although the sample size is in line with most other eye tracking studies, a larger sample would be needed to draw firmer conclusions.
Second, as viewers were asked to complete the free recall test in their second language,
their English writing skills may have interfered with their comprehension performance.
Some participants reported that they understood the content but found it difficult to express it in written English. Previous research has found that the test type used to measure the effectiveness of subtitles had a significant impact on the results, and the positive effects of subtitles on comprehension may be reduced when productive tests (e.g., recall protocol) interfere with other language skills, for example, writing skills. Some studies
also found that asking participants to do the free recall test in their native and second languages produced different results regarding the effects of subtitles on listening comprehension (Markham et al., 2001; Tsai & Huang, 2009). A
combination of both receptive and productive tests is therefore advisable for further
studies.
Third, any definite conclusions drawn from the data of the current study should be made with caution because participants were not formally tested for their English language proficiency. Although all participants that produced valid data were enrolled in postgraduate programs at Macquarie University in Australia, they may still possess different English
language proficiency due to their previous educational background and their exposure
to English language skill-related practice. For instance, students from the Translation
and Interpreting studies program could have higher English language proficiency than
students from the accounting program, which involves a less intensive training of English language skills.
A further limitation is that this eye tracking study only made use of the established measures of dwell time and mean fixation duration. Future studies could consider more fine-grained measures to capture viewers’ visual processing of subtitled audiovisual content. It would also provide more insights to examine how subtitles are processed in monolingual and bilingual conditions using the Reading Index for Dynamic Texts (Kruger & Steyn, 2014).
Finally, although this study provides some evidence in support of previous findings that
viewers try to follow the spoken dialogue when processing subtitles and visual image, the interplay between auditory and visual processing in subtitled viewing remains to be explored in future research.
References
Chang, C., & Chen, Y. (2002). The western computer and the Chinese character: Recent
debates on Chinese writing reform. Intercultural Communication Studies, 6(3), 67–84.
Retrieved from http://web.uri.edu/iaics/files/06-Changfu-Chang-Yihai-Chen.pdf
Cheng, Y.-J. (2014). Chinese subtitles of English-Language feature films in Taiwan: A
systematic investigation of solution-types (Doctoral dissertation, Australian National
University). Retrieved from https://openresearch-
repository.anu.edu.au/handle/1885/11789
Chung, J. (1999). The effects of using video texts supported with advanced organizers and
captions on Chinese college students’ listening comprehension: An empirical study.
Foreign Language Annals, 32(3), 295–308. doi: 10.1111/j.1944-9720.1999.tb01342.x
Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language.
Cognitive Psychology, 107(1), 84–107. doi: 10.1016/0010-0285(74)90005-X
Corrizzato, S. (2015). Spike Lee’s Bamboozled: A Contrastive Analysis of Compliments and
Insults from English into Italian. Cambridge Scholars Publishing.
d’Ydewalle, G., & De Bruycker, W. (2007). Eye movements of children and adults while
reading television subtitles. European Psychologist, 12(3), 196–205. doi: 10.1027/1016-
9040.12.3.196
d’Yewalle, G., Praet, C., Verfaillie, K., & Van Rensbergen, J. (1991). Watching subtitled television:
Automatic reading behavior. Communication Research, 18, 650–666. Retrieved from
http://journals.sagepub.com/doi/abs/10.1177/009365091018005005
Danan, M. (2004). Captioning and subtitling: Undervalued language learning strategies. Meta,
49(1), 67–77. doi: 10.7202/009021ar
Debue, N., & Van De Leemput, C. (2014). What does germane load mean? An empirical
contribution to the cognitive load theory. Frontiers in Psychology, 5, 1–12. doi:
10.3389/fpsyg.2014.01099
DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Thousand Oaks,
CA: Sage Publications.
Di Giovanni, E. (2016). The layers of subtitling. Cogent Arts & Humanities, 3(1), 1–15. doi:
10.1080/23311983.2016.1151193
Diao, Y., & Sweller, J. (2007). Redundancy in foreign language reading comprehension
instruction: Concurrent written and spoken presentations. Learning and Instruction, 17,
78–88. doi: 10.1016/j.learninstruc.2006.11.007
Díaz Cintas, J., & Remael, A. (2007). Audiovisual translation: Subtitling. Manchester: St Jerome Publishing.
Eichert, N., Peeters, D., & Hagoort, P. (2017). Language-driven anticipatory eye movements
in virtual reality. Behavior Research Methods. doi: 10.3758/s13428-017-0929-z
Field, A. (2009). Discovering statistics using SPSS. London: SAGE.
Fox, W. (2016). Integrated titles: An improved viewing experience? In S. Hansen-Schirra & S.
Grucza (Eds.), Eyetracking and Applied Linguistics (pp. 5–30). Berlin: Language Science
Press.
García, B. (2017). Bilingual subtitles for second-language acquisition and application to
engineering education as learning pills. Computer Applications in Engineering Education,
1, 1–12. doi: 10.1002/cae.21814
Garza, T. J. (1991). Evaluating the use of captioned video material in advanced foreign
language learning. Foreign Language Annals, 3(3), 239–258. doi: 10.1111/j.1944-
9720.1991.tb00469.x
Gernsbacher, M. A. (2015). Video captions benefit everyone. Policy Insights from the
Behavioral and Brain Sciences, 2(1), 195–202. doi: 10.1177/2372732215602130
Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in
Cognitive Sciences, 10(6), 278–285. doi: 10.1016/j.tics.2006.04.008
Ghia, E. (2012). The impact of translation strategies on subtitle reading. In E. Perego (Ed.),
Eye tracking in audiovisual translation (pp. 157–182). Roma: Aracne.
Gottlieb, H. (2004). Subtitles and international anglification. Nordic Journal of English Studies,
3(1), 219–230. Retrieved from http://ojs.ub.gu.se/ojs/index.php/njes/article/view/244
Gottlieb, H. (2012). Subtitles - Readable dialogue? In E. Perego (Ed.), Eye tracking in
audiovisual translation (pp. 37–81). doi: 10.13140/2.1.4680.5446
Graesser, A. C., McNamara, D. S., Cai, Z., Mark, C., & Li, H. (2014). Coh-metrix measures
text characteristics at multiple levels of language and discourse. The Elementary School
Journal, 115(2), 210–228. doi: 10.1086/678293
Guichon, N., & McLornan, S. (2008). The effects of multimodality on L2 learners: Implications
for CALL resource design. System, 36(1), 85–93. doi: 10.1016/j.system.2007.11.005
Hayati, A., & Mohmedi, F. (2011). The effect of films with and without subtitles on listening
comprehension of EFL learners. British Journal of Educational Technology, 42(1), 181–
192. doi: 10.1111/j.1467-8535.2009.01004.x
Hefer, E. (2013a). Reading first and second language subtitles: Sesotho viewers reading in
Sesotho and English. Southern African Linguistics and Applied Language Studies, 31(3),
359–373. doi: 10.2989/16073614.2013.837610
Hefer, E. (2013b). Reading second language subtitles: A case study of Afrikaans viewers
reading in Afrikaans and English. Perspectives, 21(1), 22–41. doi:
10.1080/0907676X.2012.722652
Heredia, R. R., & Altarriba, J. (2001). Bilingual language mixing: Why do bilinguals code-
switch? Current Directions in Psychological Science, 10(5), 164–168. doi: 10.1111/1467-
8721.00140
Heredia, R. R., Olivares, M., & Cies, A. B. (2014). It’s all in the eyes: How language
dominance, salience, and context affect eye movements during idiomatic language
processing. In M. Pawlak and L. Aronin (Eds.), Essential topics in applied linguistics and
multilingualism, second language learning and teaching (pp. 21-41). doi: 10.1007/978-3-
319-01414-2
Hinkin, M. P., Harris, R. J., & Miranda, A. T. (2014). Verbal redundancy aids memory for
filmed entertainment dialogue. The Journal of Psychology, 148(2), 161–176. doi:
10.1080/00223980.2013.767774
Hosogoshi, K. (2016). Effects of captions and subtitles on the listening process: Insights from
EFL learners’ listening strategies. The JALT CALL Journal, 12(3), 153–178. Retrieved from
https://eric.ed.gov/?id=EJ1125240
Hsiao, C.-H. (2014). The moralities of intellectual property: Subtitle groups as cultural brokers
in China. The Asia Pacific Journal of Anthropology, 15(3), 218–241. doi:
10.1080/14442213.2014.913673
Hvelplund, K. T. (2017). Eye tracking in translation process research. In J. W. Schwieter & A.
Ferreira (Eds.), The handbook of translation and cognition (pp. 248–264). doi:
10.1002/9781119241485.ch14
Ivarsson, J., & Carroll, M. (1998). Subtitling. Simrishamn, Sweden: TransEdit HB.
Kalyuga, S. (2012). Instructional benefits of spoken words: A review of cognitive load factors.
Educational Research Review, 7, 145–159. doi: 10.1016/j.edurev.2011.12.002
Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in
multimedia instruction. Applied Cognitive Psychology, 13, 351–371. doi:
10.1002/acp.1773
Kalyuga, S., & Sweller, J. (2005). The redundancy principle in multimedia learning. In R. E.
Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 247–262). doi:
10.1017/CBO9781139547369.013
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New
York: Guildford.
Koelewijn, T., Bronkhorst, A., & Theeuwes, J. (2010). Attention and the multiple stages of
multisensory integration: A review of audiovisual studies. Acta Psychologica, 134(3),
372–384. doi: 10.1016/j.actpsy.2010.03.010
Kovacs, G., & Miller, R. C. (2014). Smart subtitles for vocabulary learning. Proceedings of the
32nd Annual ACM Conference on Human Factors in Computing Systems, 853–862. doi:
10.1145/2556288.2557256
Kruger, J.-L. (2013). Subtitles in the classroom: Balancing the benefits of dual coding with the
cost of increased cognitive load. Tydskrif Vir Taalonderrig/Journal for Language
Teaching, 47(1), 29–53. doi: 10.4314/jlt.v47i1.2
Kruger, J.-L. (2016). Psycholinguistics and audiovisual translation. Target, 28(2), 276–287.
doi: 10.1075/target.28.2.08kru
Kruger, J.-L., & Doherty, S. (2016). Measuring cognitive load in the presence of educational
video: Towards a multimodal methodology. Australasian Journal of Educational
Technology, 32(6), 19–31. doi: 10.14742/ajet.3084
Kruger, J.-L., Hefer, E., & Matthew, G. (2013). Measuring the impact of subtitles on cognitive
load. Proceedings of the 2013 Conference on Eye Tracking South Africa - ETSA ’13,
1(August), 62–66. doi: 10.1145/2509315.2509331
Kruger, J.-L., Matthew, G., & Hefer, E. (2013). Measuring the impact of subtitles on cognitive
load: Eye tracking and dynamic audiovisual texts. Proceedings of Eye Tracking South
Africa, Cape Town, 1(August), 29–31. doi: 10.1145/2509315.2509331
Kruger, J.-L., Hefer, E., & Matthew, G. (2014). Attention distribution and cognitive load in a
subtitled academic lecture: L1 vs. L2. Journal of Eye Movement Research, 7(5), 1–15. doi:
10.16910/jemr.7.5.4
Kruger, J.-L., & Steyn, F. (2014). Subtitles and eye tracking: Reading and performance.
Reading Research Quarterly, 49(1), 105–120. doi: 10.1002/rrq.59
Kruger, J.-L., Szarkowska, A., & Krejtz, I. (2015). Subtitles on the moving image: An overview
of eye tracking studies. Refractory: A Journal of Entertainment Media, 25, 1–14.
Retrieved from http://refractory.unimelb.edu.au/2015/02/07/kruger-szarkowska-krejtz/
Kuo, S.-Y. (2014). Quality in subtitling: Theory and professional reality (Doctoral dissertation,
Imperial College London). Retrieved from
https://spiral.imperial.ac.uk/bitstream/10044/1/24171/1/Kuo-SzuYu-2014-PhD-
Thesis.pdf
Laerd Statistics. (2015). Statistical tutorials and software guides. Retrieved from
https://statistics.laerd.com/
Lång, J. (2016). Subtitles vs. narration: The acquisition of information from visual-verbal and
audio-verbal channels when watching a television documentary. In S. Hansen-Schirra &
S. Grucza (Eds.), Eye tracking and applied linguistics (pp. 59–82). Berlin: Language
Science Press.
Lång, J., Mäkisalo, J., Gowases, T., & Pietinen, S. (2013). Using eye tracking to study the
effect of badly synchronized subtitles on the gaze paths of television viewers. New Voices
in Translation Studies, 10(1), 72–86. Retrieved from
https://www.researchgate.net/publication/289176943_Using_eye_tracking_to_study_the
_effect_of_badly_synchronized_subtitles_on_the_gaze_paths_of_television_viewers
Lavaur, J.-M., & Bairstow, D. (2011). Languages on the screen: Is film comprehension related
to the viewers’ fluency level and to the language in the subtitles? International Journal of
Psychology, 46(6), 455–462. doi: 10.1080/00207594.2011.565343
Leppink, J., Paas, F., Van der Vleuten, C. P. M., Van Gog, T., & Van Merriënboer, J. J. G.
(2013). Development of an instrument for measuring different types of cognitive load.
Behavior Research Methods, 45(4), 1058–1072. doi: 10.3758/s13428-013-0334-1
Leppink, J., Paas, F., Van Gog, T., Van der Vleuten, C. P. M., & Van Merriënboer, J. J. G.
(2014). Effects of pairs of problems and examples on task performance and different types
of cognitive load. Learning and Instruction, 30, 32–42. doi:
10.1016/j.learninstruc.2013.12.001
Li, M. (2016). An investigation into the differential effects of subtitles (first Language, second
language, and bilingual) on second language vocabulary acquisition (Doctoral
dissertation, The University of Edinburgh). Retrieved from
https://www.era.lib.ed.ac.uk/bitstream/handle/1842/22013/Li2016.pdf?sequence=2&isA
llowed=y
Linebarger, D., Piotrowski, J. T., & Greenwood, C. R. (2010). On-screen print: The role of
captions as a supplemental literacy tool. Journal of Research in Reading, 33(2), 148–167.
doi: 10.1111/j.1467-9817.2009.01407.x
Liu, D. (2014). On the classification of subtitling. Journal of Language Teaching and Research,
5(5), 1103–1109. doi: 10.4304/jltr.5.5.1103-1109
Liu, H. -C., Lai, M. -L., & Chuang, H. -H. (2011). Using eye-tracking technology to investigate
the redundant effect of multimedia web pages on viewers’ cognitive processes. Computers
in Human Behavior, 27(6), 2410–2417. doi: 10.1016/j.chb.2011.06.012
Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In R. E. Mayer
(Ed.), The Cambridge handbook of multimedia learning (pp. 147–158). doi:
10.1017/CBO9781139547369.013
Lwo, L., & Lin, C. -T. M. (2012). The effects of captions in teenagers’ multimedia L2 learning.
ReCALL, 24(2), 188–208. doi: 10.1017/S0958344012000067
Malamed, C. (n.d.). What is cognitive load? Retrieved from
http://theelearningcoach.com/learning/what-is-cognitive-load/
Mangiron, C. (2016). Reception of game subtitles: An empirical study. The Translator, 72–93.
doi: 10.1080/13556509.2015.1110000
Markham, P. L., & Peter, L. A. (2003). The influence of English language and Spanish
language captions on foreign language listening/reading comprehension. Journal of
Educational Technology Systems, 31(3), 331–341. doi: 10.2190/BHUH-420B-FE23-
ALA0
Markham, P. L., Peter, L. A., & Mccarthy, T. J. (2001). The effects of native language vs.
target language captions on foreign language students’ DVD video comprehension.
Foreign Language Annals, 34(5), 439–445. doi: 10.1111/j.1944-9720.2001.tb02083.x
Martin, S. (2014). Measuring cognitive load and cognition: metrics for technology-enhanced
learning. Educational Research and Evaluation, 20(7–8), 592–621. doi:
10.1080/13803611.2014.997140
Matielo, R., D’Ely, R. C. S. F., & Baretta, L. (2015). The effects of interlingual and intralingual
subtitles on second language learning/ acquisition: A state-of-the-art review. Trab. Ling.
Aplic., Campinas, 54(1), 161–182. doi: 10.1590/0103-18134456147091
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning:
When presenting more material results in less understanding. Journal of Educational
Psychology, 93(1), 187–198. doi: 10.1037/0022-0663.93.1.187
Mayer, R. E., Lee, H., & Peebles, A. (2014). Multimedia learning in a second language: A
cognitive load perspective. Applied Cognitive Psychology, 28, 653–660. doi:
10.1002/acp.3050
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning.
Educational Psychologist, 38(1), 43–52. doi: 10.1207/S15326985EP3801_6
Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to
move the eyes: To what extent are language-mediated eye movements automatic?
Progress in Brain Research, 202, 135-149. doi: 10.1016/B978-0-444-62604-2.00008-3
Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles
harm foreign speech perception. PLoS ONE, 4(11), 4–9. doi:
10.1371/journal.pone.0007785
Moreno, A. (2017). Attention and Dual Coding Theory: An interaction model using subtitles
as a paradigm (Doctoral dissertation, Autonomous University of Barcelona). Retrieved
from http://www.tesisenred.net/bitstream/handle/10803/405246/aom1ded1.pdf?
sequence=1&isAllowed=y
Moreno, R., & Mayer, R. E. (2002a). Learning science in virtual reality multimedia
environments: Role of methods and media. Journal of Educational Psychology, 94(3),
598–610. doi: 10.1037/0022-0663.94.3.598
Moreno, R., & Mayer, R. E. (2002b). Verbal redundancy in multimedia learning: When reading
helps listening. Journal of Educational Psychology, 94(1), 156–163. doi: 10.1037/0022-
0663.94.1.156
Morís Fernández, L., Visser, M., Ventura-Campos, N., Ávila, C., & Soto-Faraco, S. (2015).
Top-down attention regulates the neural expression of audiovisual integration.
NeuroImage, 119, 272–285. doi: 10.1016/j.neuroimage.2015.06.052
Munoz, C. (2017). The role of age and proficiency in subtitle reading: An eye-tracking study.
System, 67, 77–86. doi: 10.1016/j.system.2017.04.015
Orrego-Carmona, D. (2014). Where is the audience? Testing the audience reception of non-
professional subtitling. In E. Torres-Simon & D. Orrego-Carmona (Eds.), Translation
research projects 5. Retrieved from
http://isg.urv.es/publicity/isg/publications/trp_5_2014/index.htm
Paivio, A. (1986). Mental representations: A dual-coding approach. Oxford, England: Oxford
University Press.
Paivio, A. (2007). Mind and its evolution: A dual coding theoretical approach. Mahwah, NJ:
Erlbaum.
Pedersen, J. (2011). Subtitling norms for television: An exploration focussing on extralinguistic
cultural references. doi: 10.1075/btl.98
Perego, E., Del Missier, F., Porta, M., & Mosconi, M. (2010). The cognitive effectiveness of
subtitle processing. Media Psychology, 13(3), 243–272. doi:
10.1080/15213269.2010.502873
Perez, M. M., Noortgate, W. Van Den, & Desmet, P. (2013). Captioned video for L2 listening
and vocabulary learning: A meta-analysis. System, 41(3), 720–739. doi:
10.1016/j.system.2013.07.013
Plass, J. L., Chun, D. M., Mayer, R. E., & Leutner, D. (2003). Cognitive load in reading a
foreign language text with multimedia aids and the influence of verbal and spatial abilities.
Computers in Human Behavior, 19, 221–243. doi: 10.1016/S0747-5632(02)00015-8
Price, K. (1983). Closed-captioned TV: An untapped resource. MATSOL Newsletter, 12(2), 1-
8. Retrieved from http://www.matsol.org/assets/documents/Currentsv12no2Fall1983.pdf
Quak, M., London, R. E., & Talsma, D. (2015). A multisensory perspective of working
memory. Frontiers in Human Neuroscience, 9, 1–11. doi: 10.3389/fnhum.2015.00197
Raine, P. (2012). Incidental learning of vocabulary through subtitled authentic videos (MA
thesis, University of Birmingham). Retrieved from
http://www.birmingham.ac.uk/Documents/college-artslaw/cels/essays/matefltesldissertat
ions/RAINE619605DISS.pdf
Rajendran, D. J., Duchowski, A. T., Orero, P., Martínez, J., & Romero-Fresco, P. (2013).
Effects of text chunking on subtitling: A quantitative and qualitative examination.
Perspectives, 21(1), 5–21. doi: 10.1080/0907676X.2012.722651
Rayner, K. (2009). Eye movements and attention in reading, scene perception, and visual
search. The Quarterly Journal of Experimental Psychology, 62(8), 1457–1506. doi:
10.1080/17470210902816461
Rayner, K. (2012). Eye movements and visual cognition: Scene perception and reading.
Springer Science & Business Media.
Reese, S. D. (1984). Visual‐verbal redundancy effects on television news learning. Journal of
Broadcasting, 28(1), 79–87. doi: 10.1080/08838158409386516
Ross, N. M., & Kowler, E. (2013). Eye movements while viewing narrated, captioned, and
silent videos. Journal of Vision, 13, 1. doi: 10.1167/13.4.1
Saed, A., Yazdani, A., & Askary, M. (2016). Film subtitles and listening comprehension ability
of intermediate EFL learners. International Journal of Applied Linguistics and
Translation, 2(3), 29–32. doi: 10.11648/j.ijalt.20160203.12
Salverda, A. P., & Altmann, G. T. M. (2011). Attentional capture of objects referred to by
spoken language. Journal of Experimental Psychology: Human Perception and
Performance, 37(4), 1122–1133. doi: 10.1037/a0023101
Sohl, G. (1989). Het verwerken van de vreemdtalige gesproken tekst in een ondertiteld TV‒
programma [Processing foreign spoken text in a subtitled television program]. University
of Leuven, Belgium, Vanachter.
Specker, E. (2008). L1/L2 Eye movement reading of closed captioning: A multimodal analysis
of multimodal use (Doctoral dissertation, The University of Arizona). Retrieved from
http://hdl.handle.net/10150/194820
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive
Science, 12(1), 257–285. doi: 10.1016/0364-0213(88)90023-7
Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane cognitive load.
Educational Psychology Review, 22, 123–138. doi: 10.1007/s
Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. doi: 10.1007/978-1-4419-
8126-4
Sweller, J., Van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and
instructional design. Educational Psychology Review, 10(3), 251–296. Retrieved from
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.9802&rep=rep1&type=pdf
Szarkowska, A. (2016). Report on the results of an online survey on subtitle presentation times
and line breaks in interlingual subtitling. Part 1: Subtitlers. London. Retrieved from
http://avt.ils.uw.edu.pl/files/2016/10/SURE_Report_Survey1.pdf
Talsma, D., Senkowski, D., Soto-Faraco, S., & Woldorff, M. G. (2010). The multifaceted
interplay between attention and multisensory integration. Trends in Cognitive Sciences,
14(9), 400–410. doi: 10.1016/j.tics.2010.06.008
Thompson, V. A., & Paivio, A. (1994). Memory for pictures and sounds: Independence of
auditory and visual codes. Canadian Journal of Experimental Psychology/Revue
Canadienne de Psychologie Expérimentale,
48(3), 380–398. doi: 10.1037/1196-1961.48.3.380
Treisman, A. M. (1968). Strategies and models of visual attention. Psychological Review,
75(84), 127–190. Retrieved from http://psycnet.apa.org/record/1969-10667-001
Tsai, C., & Huang, S. C. (2009). Target language subtitles for comprehensible film language
input. Retrieved from http://120.107.180.177/1832/9901/099-1-11p.pdf
Van der Burg, E., Brederoo, S. G., Nieuwenstein, M. R., Theeuwes, J., & Olivers, C. N. L.
(2010). Audiovisual semantic interference and attention: Evidence from the attentional
blink paradigm. Acta Psychologica, 134(2), 198–205. doi: 10.1016/j.actpsy.2010.01.010
Van der Burg, E., Talsma, D., Olivers, C. N. L., Hickey, C., & Theeuwes, J. (2011). Early
multisensory interactions affect the competition among multiple visual objects.
NeuroImage, 55(3), 1208–1218. doi: 10.1016/j.neuroimage.2010.12.068
Van der Zee, T., Admiraal, W., Paas, F., Saab, N., & Giesbers, B. (2017). Effects of subtitles,
complexity, and language proficiency on learning from online education videos. Journal
of Media Psychology, 29(1), 18–30. doi: 10.1027/1864-1105/a000208
Van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning:
Recent developments and future directions. Educational Psychology Review, 17(2), 147-
177. doi: 10.1007/s10648-005-3951-0
Vanderplank, R. (2013). “Effects of” and “effects with” captions: How exactly does watching
a TV programme with same-language subtitles make a difference to language learners?
Language Teaching, 49(2), 235–250. doi: 10.1017/S0261444813000207
Vanderplank, R. (2016). Captioned media in foreign language learning and teaching: Subtitles
for the deaf and hard-of-hearing as tools for language learning. London: Palgrave
Macmillan.
Vulchanova, M., Aurstad, L. M., Kvitnes, I. E. N., & Eshuis, H. (2015). As naturalistic as it
gets: Subtitles in the English classroom in Norway. Frontiers in Psychology, 5, 1–10. doi:
10.3389/fpsyg.2014.01510
Wang, H.-C., & Pomplun, M. (2012). The attraction of visual attention to texts in real-world
scenes. Journal of Vision, 12(6), 26–26. doi: 10.1167/12.6.26
Wang, Y. (2014). The effects of L1/L2 subtitled American TV series on Chinese EFL students’
listening comprehension (MA thesis, Michigan State University). Retrieved from
https://d.lib.msu.edu/etd/2882/datastream/OBJ/download/The_effects_of_L1_L2_subtitl
ed_American_TV_series_on_Chinese_ELF_students__listening_comprehension.pdf
Winke, P., Gass, S., & Sydorenko, T. (2013). Factors influencing the use of captions by foreign
language learners: An eye-tracking study. The Modern Language Journal, 97(1), 254–
275. doi: 10.1111/j.1540-4781.2012.01432.x
Yang, P. (2013) The effects of different subtitling modes on Chinese EFL learners’ listening
comprehension (MA thesis, China University of Geosciences). Retrieved from
http://cdmd.cnki.com.cn/Article/CDMD-11415-1014125449.htm
Yekta, R. R. (2010). Digital media within digital modes: The study of the effects of multimodal
input of subtitled video on the learner’s ability to manage split attention and enhance
comprehension. International Journal of Language Studies, 4(2), 79–90. Retrieved from
http://academic.csuohio.edu/kneuendorf/frames/subtitling/Yekta.2010.pdf
Yoshino, S., Kano, N., & Akahori, K. (2000). The effects of English and Japanese captions on
the listening comprehension of Japanese EFL students. Language Laboratory, 37, 111–
130. Retrieved from
http://ci.nii.ac.jp/els/contentscinii_20171005135911.pdf?id=ART0009691396
Zhang, X. (2013). Fansubbing in China. MultiLingual, 24(5), 30–37. Retrieved from
http://connection.ebscohost.com/c/articles/89368549/fansubbing-china
Zhao, S., & Baldauf, R. B. J. (2007). Planning Chinese characters: Reaction, evolution or
revolution? doi: 10.1007/978-0-387-48576-8
Zheng, R., Smith, D., Luptak, M., Hill, R. D., Hill, J., & Rupper, R. (2016). Does visual
redundancy inhibit older persons’ information processing in learning? Educational
Gerontology, 42(9), 635–645. doi: 10.1080/03601277.2016.1205365
Appendix A. Biographical questionnaire
Thank you for participating in this experiment. This questionnaire is used to collect
some background information about participants as part of this study. All information will
4. How often do you watch English films in the following conditions (tick the
box that is applicable):
Without subtitles: 1 2 3 4 5 6 7
With English subtitles only: 1 2 3 4 5 6 7
With Chinese subtitles only: 1 2 3 4 5 6 7
With both English and Chinese subtitles: 1 2 3 4 5 6 7
5. How often do you watch BBC documentaries? (tick the box that is applicable):
1 2 3 4 5 6 7
Appendix B. Cognitive Load Questionnaire
Appendix C. Participant Information and Consent Form
You are invited to participate in a study on the impact of bilingual subtitles on film
comprehension and cognitive load. The purpose of this study is to investigate the
subtitles.
The study is being conducted to meet the requirements for the degree of Master of Research
under the supervision of Associate Professor Jan-Louis Kruger (02 9850 1467 or
If you decide to participate, you will be asked to fill in a biographical questionnaire and
then participate in an eye tracking experiment. You will be seated comfortably in a sound-
proof, well-lit room where you will watch four videos. Only your eye movement data
will be recorded by the eye tracking equipment. There will be no recording of your face or
voice. All videos are in English with English subtitles only, Chinese subtitles only, bilingual
subtitles in both Chinese and English or without subtitles. Each video lasts about 10 to 15
minutes. After watching one video, you will be given five minutes to fill in a self-reported
questionnaire regarding your viewing experience. The whole experiment will take
approximately one and a half hours to complete. Participation is on a voluntary basis and
there will be no cost to you. You will receive 2 hours credit on your practicum unit
Any information or personal details gathered in the course of the study are confidential,
except as required by law. No individual will be identified in any publication of the results.
The principal investigator and the co-investigator will be the only persons with access to
the data, which will be kept secure. A summary of the results of the data can be made
Please note that your current lecturers will not be made aware of who has
entirely voluntary: you are not obliged to participate and even if you decide to
participate, you are free to withdraw at any time without having to give a
(or, where appropriate, have had read to me) and understand the information above and
any questions I have asked have been answered to my satisfaction. I agree to participate
in this research, knowing that I can withdraw from further participation in the research at
any time without consequence. I have been given a copy of this form to keep.
Participant’s Name:
(Block letters)
Investigator’s Name: _____JAN-LOUIS KRUGER
(Block letters)
Date: ________________________
The ethical aspects of this study have been approved by the Macquarie University Human
Research Ethics Committee. If you have any complaints or reservations about any ethical
aspect of your participation in this research, you may contact the Committee through the
Director, Research Ethics & Integrity (telephone (02) 9850 7854; email
PARTICIPANT'S COPY
Appendix D. Research Ethics Approval Letter
Thank you very much for your response. Your response has addressed the
issues raised by the Faculty of Human Sciences Human Research Ethics Sub-
Committee and approval has been granted, effective 11th May 2017. This
email constitutes ethical approval only.
https://www.nhmrc.gov.au/book/national-statement-ethical-conduct-
human-research
NB. If you complete the work earlier than you had planned you must submit
a Final Report as soon as the work is completed. If the project has
been discontinued or not commenced for any reason, you are also required
to submit a Final Report for the project.
Progress reports and Final Reports are available at the following website:
http://www.research.mq.edu.au/current_research_staff/human_research_
ethics/resources
3. If the project has run for more than five (5) years you cannot
renew approval for the project. You will need to complete and submit a
Final Report and submit a new application for the project. (The five year
limit on renewal of approvals allows the Sub-Committee to fully re-
review research in an environment where legislation, guidelines and
requirements are continually changing, for example, new child protection
and privacy laws).
http://www.research.mq.edu.au/current_research_staff/human_research_
ethics/managing_approved_research_projects
http://www.mq.edu.au/policy
http://www.research.mq.edu.au/current_research_staff/human_research_
ethics/managing_approved_research_projects
If you will be applying for or have applied for internal or external funding
for the above project it is your responsibility to provide the Macquarie
University's Research Grants Management Assistant with a copy of this email
as soon as possible. Internal and External funding agencies will not be
informed that you have approval for your project and funds will not be
released until the Research Grants Management Assistant has received
a copy of this email.
Yours sincerely,
Dr Naomi Sweller
Chair
Faculty of Human Sciences
Human Research Ethics Sub-Committee