Journal of Memory and Language 87 (2016) 59–70
Saúl Villameriel, Patricia Dias, Brendan Costello, Manuel Carreiras
Article history:
Received 10 July 2015
Revision received 14 October 2015

Keywords:
Cross-language activation
Cross-modal activation
Bimodal bilingualism
Sign language

Abstract
This study investigates cross-language and cross-modal activation in bimodal bilinguals. Two groups of hearing bimodal bilinguals, natives (Experiment 1) and late learners (Experiment 2), for whom spoken Spanish is their dominant language and Spanish Sign Language (LSE) their non-dominant language, performed a monolingual semantic decision task with word pairs heard in Spanish. Half of the word pairs had phonologically related signed translations in LSE. The results showed that bimodal bilinguals were faster at judging semantically related words when the equivalent signed translations were phonologically related, while they were slower judging semantically unrelated word pairs when the LSE translations were phonologically related. In contrast, monolingual controls with no knowledge of LSE did not show any of these effects. The results indicate cross-language and cross-modal activation of the non-dominant language in hearing bimodal bilinguals, irrespective of the age of acquisition of the signed language.

© 2015 Elsevier Inc. All rights reserved.
Introduction

A central question in bilingualism is whether the processing of one language necessarily involves activating the other, or whether the two languages are accessed independently. Some neuroimaging studies have revealed overlapping activation of the same brain regions for both languages (e.g., Chee, Tan, & Thiel, 1999; Illes et al., 1999; Klein, Milner, Zatorre, Meyer, & Evans, 1995), suggesting that the languages share the same neural circuitry. Furthermore, there is growing evidence for cross-language activation even when bilinguals are using just one of their languages: when reading words (e.g., Schwartz, Kroll, & Diaz, 2007; Thierry & Wu, 2007) or sentences (e.g., Libben & Titone, 2009), when hearing words (e.g., Marian & Spivey, 2003; Spivey & Marian, 1999), or while naming pictures (e.g., Costa, Caramazza, & Sebastian-Galles, 2000), even when the two languages use different written scripts (e.g., Hoshino & Kroll, 2008). In contrast, claims have been made for language independence in monolingual contexts, given the strong inhibition of one language (Rodriguez-Fornells, Rotte, Heinze, Nösselt, & Münte, 2002).

The present study investigates whether cross-language activation is present in hearing bimodal bilinguals by asking whether there is activation of their second language (L2), Spanish Sign Language, LSE (lengua de signos española), when performing a task in their first language (L1), spoken Spanish. Thus, we will be testing cross-language and cross-modal activation, where modality refers to the perceptual channels employed by the language (oral-auditory and gestural-visual). To that end, we adapted the semantic relatedness paradigm used in a within-modality setting by Thierry and Wu (2007; see also Wu & Thierry, 2010), who showed that bilinguals in two spoken languages activated their L1 (Chinese) while dealing with their L2 (English).

⇑ Corresponding author at: BCBL, Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69-2, 20009 Donostia–San Sebastian, Spain. E-mail addresses: [email protected] (S. Villameriel), [email protected] (P. Dias), [email protected] (B. Costello), [email protected] (M. Carreiras).
http://dx.doi.org/10.1016/j.jml.2015.11.005
0749-596X/© 2015 Elsevier Inc. All rights reserved.
S. Villameriel et al. / Journal of Memory and Language 87 (2016) 59–70

A very small number of studies focusing on bimodal bilingualism in deaf individuals have examined whether non-selective access can also be found across languages that do not overlap in modality. The activation of L1 when dealing with L2 occurs in deaf balanced bilinguals (Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011). Deaf native signers of American Sign Language (ASL) read pairs of words in English and judged their semantic relatedness. The results showed cross-language activation, since the presence of a phonological relation between the ASL equivalents of the English words influenced the reaction times for the semantic judgement. Thus, on the one hand, in the context of a semantic relation, an (unseen L1) phonological relation produced a facilitation effect. Conversely, when the items were not semantically related, the (unseen L1) phonological relation gave rise to an inhibitory effect. A similar experiment was run in a subsequent study with two groups of unbalanced ASL/English bilinguals: a group of deaf ASL-dominant signers and a group of hearing English-dominant signers (Morford, Kroll, Piñar, & Wilkinson, 2014). Deaf bilinguals showed the same inhibitory and facilitatory effects reported previously. However, the hearing English-dominant signers, for whom ASL was their L2, showed only the inhibitory effect. Importantly, the deaf bilinguals were performing the task in their L2, while the hearing bilinguals performed the experiment in the written form of their L1. In addition, deaf native signers normally associate English word forms with ASL signs when they learn to read, while the hearing group would have linked the English orthography with the spoken English phonological forms. A very similar experiment using the same procedure was carried out in proficient deaf bilinguals in German Sign Language, DGS (Deutsche Gebärdensprache), and written German (Kubus, Villwock, Morford, & Rathmann, 2015). Kubus et al. (2015) found the inhibitory effect described by Morford et al. (2011) but not the facilitatory effect for the 'hidden' phonological relation (in the context of a semantic relationship). The authors attributed this difference in the results to two factors: differences in the experimental stimuli and in the languages involved.

Therefore, there is evidence for cross-modal print-sign parallel activation when the languages are of a different modality in deaf and hearing bimodal bilinguals. However, there are two important considerations. Firstly, the spoken language was presented in its written form, a secondary code that provides a visual representation of an auditory signal. This involves looking for links between the representation of the static written form of the spoken language and the representation of the dynamic signal of the sign language. Additionally, this means that the explicit and the implicit codes are both visual: the experimental task is performed in the same (visual) modality as the language for which implicit activation is sought. Secondly, most of the participants in these studies were deaf bimodal bilinguals. These individuals are sign-print bilinguals due to their limited or indirect access to the acoustic form of the spoken language. More critically, these two factors interact, since many deaf individuals learn to read by associating the signs of the signed language with the written forms of the spoken language. This raises the issue of the type of representation that deaf individuals have (of the written form) of the spoken language and how that may come to bear on the question of cross-linguistic and cross-modal activation.

In the present study we focus on the modality of the dynamic primary signal for a given language, namely, oral-auditory for spoken language and visual-gestural for signed language. Examining these effects in hearing bimodal bilinguals, who acquire the dynamic primary signal of each language directly, would provide a clearer picture of cross-modal and cross-language activation. To the best of our knowledge, only two studies (Giezen, Blumenfeld, Shook, Marian, & Emmorey, 2015; Shook & Marian, 2012) have looked directly at cross-language activation in hearing bimodal bilinguals with spoken language as stimuli, using a very different procedure: the visual world paradigm. Participants heard a word and had to select one of four images on a screen. In addition to the target, there were two unrelated distractors and a phonological competitor of the target in the 'hidden' language (ASL) that shared three phonological parameters. Bimodal bilinguals looked more often and longer at the competitor than at the distractors. Consequently, co-activation does not seem to depend on the modality of the languages in bimodal bilinguals. Thus, these two studies showed parallel activation of the sign language (the non-explicit language: ASL) during comprehension of spoken English in highly proficient hearing bimodal bilinguals. However, the participants were looking at images while hearing words, so the task offered visual stimuli that could activate the signed language. In addition, the paradigm explicitly prompted, through the picture, a representation that activated the sign language competitor. Both the explicit trigger and the hidden target shared the visual modality. Furthermore, these stimuli were not controlled for iconicity of the signs. The form of iconic signs bears some resemblance to their meaning, and iconicity has been found to play a role in the activation of signs from word stimuli in deaf bilingual children (Ormel, Hermans, Knoors, & Verhoeven, 2012). Thus, some of the picture stimuli from the experiment may have resembled the corresponding sign forms and triggered the activation of signs. A task without any visual cues would be a stronger test of whether the cross-language activation is actually set off by the language input. Finally, none of these studies investigated whether these putative effects are modulated by age of acquisition (AoA); that is, whether the effect is present both in native bimodal bilinguals and in late learners of the signed language.

When the signed language has been learnt impacts how the mental lexicon is organized and how the sublexical parameters are processed (Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008; Corina & Hildebrandt, 2002; Emmorey & Corina, 1990; Emmorey, Corina, & Bellugi, 1995; Mayberry & Eichen, 1991), and it might also affect the link with the spoken language. Hearing native bimodal bilinguals normally have deaf parents who use the signed language. The milestones of language learning in these
babies are the same as those of children that acquire two spoken languages, and very similar to those of children that acquire only one language, whether signed or spoken (Petitto et al., 2001). Hearing late learners of a signed language have the spoken language as their L1. Compared to natives, late learners rely more on iconicity (Campbell, Martin, & White, 1992) and on associations with their spoken language in order to learn the signed language. Furthermore, there is robust neurological evidence of AoA differences in hearing bimodal bilinguals when processing signed and spoken languages (Neville et al., 1997; Newman, Bavelier, Corina, Jezzard, & Neville, 2002; Zachau et al., 2014). Therefore, given these previously reported differences between native signers and late learners, in the present study we investigate whether hearing bimodal bilinguals who have acquired a signed language at different ages activate signs (L2) when hearing words (L1).

For the current experiments, we used the semantic relatedness paradigm from Thierry and Wu (2007) that Morford et al. (2011) adapted to deaf signers. In our case, we modified the implicit priming for hearing bimodal bilinguals in spoken Spanish and LSE, and we ran the experiment in spoken Spanish (the dominant language) to see if LSE (the non-dominant language) was activated. Previous cross-language and cross-modal experiments using this semantic relatedness paradigm with implicit priming have been run with the written form of the spoken language. This study, however, uses the spoken language in its primary manifestation as the input in order to assess whether there is selective or non-selective access from one primary code to another. In this sense, the paradigm in this study is also a cross-modal paradigm: participants hear spoken words and we check whether or not they prime visual signs. This will allow us to make solid interpretations concerning activation when the two languages involved do not share modality. This is a crucial difference with respect to the paradigm of the previous studies that have demonstrated cross-modal activation: that paradigm was unimodal in the sense that the visual form of the language was used to prime a visual signed language.

We studied two groups of hearing bimodal bilinguals: natives (Experiment 1) and late learners (Experiment 2). For hearing bimodal bilinguals the spoken language was the dominant language, according to self-ratings provided by the participants in this study (see Methods section below). Both experiments were also run with hearing participants with no knowledge of LSE as control groups. Given that sign language can be activated when hearing spoken language, we expect the highly proficient native bimodal bilinguals in Experiment 1 to show strong evidence of cross-language, cross-modal activation. Specifically, we expect that: (1) in the presence of a semantic relation, native bimodal bilinguals should show shorter reaction times (RTs) and/or lower error rates when there is an implicit phonological relation; (2) in the absence of a semantic relation, native bimodal bilinguals should show longer RTs and/or higher error rates when there is an implicit phonological relation. In Experiment 2 we expect either similar results or, in line with the results found by Morford et al. (2014), less evidence of cross-language and cross-modal activation, limited to inhibitory effects on the semantically unrelated pairs, since this appears to be a more robust effect.

Experiment 1. Hearing native bimodal bilinguals (CODAs) and hearing monolinguals

Methods

Participants
We recruited 20 children of deaf adults (CODAs) highly proficient in both languages (LSE and Spanish) for the experimental group, and 20 hearing Spanish monolinguals as controls. In order to make sure that the CODAs were using LSE and Spanish on an everyday basis, we selected subjects who were sign language interpreters at the time of the study and who had been working as such for at least the previous two years. The bimodal bilingual group and the control group were matched in age and education level. A small group of bimodal bilinguals had knowledge of other (spoken) languages, such as regional Spanish languages (e.g. Galician, Basque) or English. Participants' characteristics are shown in Table 1. The experiment was run in the different locations where participants were recruited (Bilbao, Burgos, León, Madrid, Palencia, Pamplona, San Sebastián and Valladolid).

Materials
Sixty-four pairs of words in Spanish were selected (Appendix A). Thirty-two of the word pairs were semantically related and the other thirty-two unrelated. Different semantic relations between primes and target words were included, such as antonyms, synonyms, hypernyms and hyponyms, coordinate terms and associative relations. Within each semantic condition, half of the pairs had associated LSE signs that were phonologically related, and the other half had no phonological relation in LSE. An example of each condition appears in Fig. 1. Sixteen additional word pairs were fillers.

Pairs of signs with a phonological relation shared at least two formational parameters: handshape, orientation, movement and location (Appendix B). The focus of this study was covert activation of LSE; although it has been shown that these different parameters are not processed identically (Carreiras et al., 2008), we were not aiming to disentangle the influence of each parameter individually in the present study. Within each semantic condition, the second word (the target word) appeared in pairs with and without a phonological relationship. This way, participants were responding to the same words in the two critical conditions of interest.

Within each semantic condition, for every phonologically related word pair a parallel word pair was created with the same target word and a phonologically unrelated prime in LSE. A group of 20 native Spanish speakers (with no knowledge of LSE, and who did not participate in the main experiment) judged the semantic relatedness of each word pair on a scale from 1 (no semantic relation) to 7 (strong semantic relation). Only pairs with scores below 3 or above 5 were considered.
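The norming criterion just described, keeping only word pairs rated clearly unrelated (below 3) or clearly related (above 5) on the 1-7 scale and discarding ambiguous items, can be sketched as follows. This is an illustrative sketch, not the authors' materials-preparation code; all names are hypothetical.

```python
# Illustrative sketch (not the authors' code): filter norming ratings so that
# only clearly related (mean > 5) or clearly unrelated (mean < 3) pairs are kept.
def mean(xs):
    return sum(xs) / len(xs)

def select_pairs(ratings_by_pair):
    """ratings_by_pair: dict mapping (prime, target) -> list of 1-7 ratings
    from the norming raters. Returns the kept pairs labelled by condition."""
    selected = {}
    for pair, ratings in ratings_by_pair.items():
        m = mean(ratings)
        if m < 3:
            selected[pair] = "semantically unrelated"
        elif m > 5:
            selected[pair] = "semantically related"
        # pairs with intermediate mean ratings (3-5) are discarded as ambiguous
    return selected
```

A pair averaging 6.3 would thus enter the semantically related condition, a pair averaging 1.3 the unrelated condition, and a pair averaging 4.0 would be dropped.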
Table 1
Experiment 1. Participants' characteristics.

Group                      | Number of participants | Mean age | Gender           | Years of experience as sign language interpreters (mean) | Self-rated LSE competence (from 1 to 7, mean)
Native bimodal bilinguals  | 20                     | 39.6     | 15 women, 5 men  | 17.7                                                     | 6.6
Hearing monolinguals       | 20                     | 39.2     | 12 women, 8 men  | –                                                        | –
Fig. 1. Examples of stimuli (panels: phonologic relation vs. no phonologic relation). Participants only heard words in Spanish, the explicit language. Photographs show the translation in LSE, the implicit language, for illustrative purposes. English translations appear within parentheses.
Table 3
Characteristics of semantically unrelated words (means and standard deviations in brackets).

                                                          | Phonologically related primes | Phonologically unrelated primes | p-value of t-test | Targets
Log frequency                                             | 1.33 (0.81)                   | 1.20 (0.70)                     | 0.63              | 1.24 (0.65)
Number of phonemes                                        | 5.56 (1.75)                   | 5.25 (0.77)                     | 0.38              | 5.13 (1.26)
Duration (ms)                                             | 778 (123)                     | 766 (85)                        | 0.76              | 744 (103)
DISCO semantic similarity                                 | 0.04 (0.04)                   | 0.04 (0.05)                     | 0.86              | –
Human rating of semantic relationship (primes to targets) | 1.53 (0.50)                   | 1.41 (0.49)                     | 0.5               | –

The voice onset of each word was set by visualizing the waveforms of each recording and matching them when necessary, in order to make sure that all the onsets were identical. For the analysis, this silent span was subtracted from the RTs of each response, so the RT reflected the latency from the onset of the word, not from the beginning of the audio recording.

Procedure
The experiment was presented using the SR Research Experiment Builder software (V.10.1025) on a Toshiba laptop with an Intel® Pentium® M processor (1.73 GHz), a Realtek AC97 audio device and headphones (Beyerdynamic DT 770 Pro, 250 ohm) for the audio. The headphones provided soundproofing from any environmental noise and delivered words at a volume of 60 dB (checked with a sound level meter). For each word pair, participants listened to the two words in succession and had to decide whether or not their meanings were related. Participants responded as soon as they could after hearing the second word of the pair. Right-handed participants pressed the right key ('L' on a QWERTY keyboard) for semantic relation and the left key ('A') for lack of semantic relation. Left-handed participants pressed 'A' for semantic relation and 'L' for lack of semantic relation. In this way, all participants responded positively with their dominant hand.

The participants read the instructions for the experiment in Spanish, which included various illustrative examples of the types of semantic relations that appeared in the experimental items (i.e. synonyms, antonyms, coordinate terms). The participants were instructed to respond as quickly and as accurately as possible. This was followed by a practice block of eight trials in which the computer provided feedback (correct/incorrect) to help participants understand the task. The presentation of the experimental trials was counterbalanced and pseudorandomized. There were at least five trials (i.e. ten different words) separating the two appearances of the same word. We used two different lists, so that half of the participants were presented with one list and the other half with the other.

Reaction times were measured from the onset of the second word of each pair. Each trial started with a fixation cross in the center of the screen (500 ms); then the first word was presented, followed by 200 ms of silence before the second word was played. After the participant's response, there were 1500 ms of silence before the next trial began (see Fig. 2).

After the experiment, the bimodal bilinguals had to translate into LSE all the words presented during the experiment, to ensure that they associated the target sign (and not some dialectal variant) with that word. As we had bimodal bilinguals from various locations, for each participant, items whose sign translation did not match the expected translation were eliminated (4.99% of responses were removed for this reason). Inaccurate responses were also discarded from the reaction time analysis (2.93% of responses). Reaction times more than 2.5 standard deviations from the mean by subject and condition were considered outliers and discarded (1.73% of responses).

Participants also completed a questionnaire about their language profile (history and use) after finishing the experiment.
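The response-screening steps described in the Procedure, subtracting the leading silence from each raw RT, discarding inaccurate responses, and trimming RTs more than 2.5 standard deviations from the mean per subject and condition, can be sketched as follows. This is an illustrative sketch, not the authors' analysis code; the data layout is assumed.

```python
# Illustrative sketch (not the authors' analysis code) of the RT preprocessing:
# correct RTs for the leading silence, drop inaccurate responses, then discard
# RTs more than 2.5 SD from the mean within each subject x condition cell.
from statistics import mean, stdev

def preprocess(trials, silent_span_ms):
    """trials: list of dicts with keys 'subject', 'condition', 'rt', 'correct'.
    Returns the trials kept for the RT analysis."""
    # RT measured from word onset, not from the start of the audio recording
    for t in trials:
        t["rt"] -= silent_span_ms
    correct = [t for t in trials if t["correct"]]
    kept = []
    cells = {(t["subject"], t["condition"]) for t in correct}
    for cell in cells:
        rts = [t["rt"] for t in correct
               if (t["subject"], t["condition"]) == cell]
        m = mean(rts)
        sd = stdev(rts) if len(rts) > 1 else 0.0
        for t in correct:
            if (t["subject"], t["condition"]) == cell and abs(t["rt"] - m) <= 2.5 * sd:
                kept.append(t)
    return kept
```

Note that the 2.5 SD cutoff is computed within each subject-by-condition cell, as in the exclusion criteria above, rather than over the whole data set.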
Table 4
Experiment 1. Participants. Mean reaction times and mean probability of error (standard deviations in brackets).

Fig. 3. Mean reaction times (in ms) for native bimodal bilinguals (left) and hearing monolinguals with no knowledge of LSE (right). Error bars show the standard error of the mean (from the F1).
who learned LSE from birth. Therefore, the next question was whether this parallel activation would also occur in late bimodal bilinguals, who learned LSE late in life. And if so, would it take place in the same terms as for the native bimodal bilinguals (facilitation and inhibition), or would it be different? Thus, differences or similarities in the results of the two groups of bimodal bilinguals would supply valuable data for inferences concerning the AoA of the signed language.

Methods

Participants
Forty highly proficient bimodal bilinguals who were late learners of LSE and 40 hearing Spanish-speaking monolinguals were recruited. As in Experiment 1, in order to ensure that the late learners were highly competent in both languages, we chose bimodal bilinguals who were sign language interpreters at the time of the study and who had been working as such for at least the previous two years. They started learning sign language after the age of 18. Participants' characteristics are shown in Table 5.

Table 5
Experiment 2. Participants' characteristics.

Group                    | Number of participants | Mean age | Gender            | Years of experience as sign language interpreters (mean) | Self-rated LSE competence (from 1 to 7, mean)
Late bimodal bilinguals  | 40                     | 34.35    | 31 women, 9 men   | 10.02                                                    | 6.01
Hearing monolinguals     | 40                     | 38.15    | 24 women, 16 men  | –                                                        | –

Materials and procedure
The materials and procedure were the same as those used in Experiment 1. As in the previous experiment, responses were removed due to a discrepancy between the subject's translation and the expected target sign (5.96%). In addition, inaccurate responses were removed from the reaction time analysis (2.64% of responses). Reaction times that were not within 2.5 standard deviations of the average by subject and condition were also discarded (1.84% of responses).

Results

A 2 (Group; late learners vs. control) × 2 (List; A vs. B) × 2 (Semantic relation; yes vs. no) × 2 (Phonologic relation; yes vs. no) repeated measures F1 and F2 ANOVA on accuracy did not show any significant interactions or main effects, other than that late bimodal bilinguals committed fewer errors than controls, F1(1, 78) = 4.87, p < .05; F2(1, 60) = 3.30, p = .074. The accuracy data were not normally distributed (verified by the Shapiro–Wilk test) and there is also a ceiling effect.

The ANOVA on reaction times revealed a main effect of semantic relatedness: participants were faster responding to semantically related than unrelated pairs of words, F1(1, 76) = 161.39, p < .001; F2(1, 60) = 53.51, p < .001. The interaction of semantics and phonology was significant in the analysis by subjects, F1(1, 76) = 47.54, p < .001; F2(1, 60) = 2.77, p = .10. Importantly, the triple interaction of semantics, phonology and group was significant, F1(1, 76) = 32.51, p < .001; F2(1, 60) = 18.21, p < .001. Table 6 shows the mean RTs and probability of errors of both groups.

Late bimodal bilinguals
The 2 (Semantic relation) × 2 (Phonologic relation) ANOVA showed a significant main effect of semantic relatedness: late bimodal bilinguals were faster to respond to semantically related than unrelated pairs of words, F1(1, 39) = 80.66, p < .001; F2(1, 60) = 49.95, p < .001. The interaction of semantics and phonology was also significant, F1(1, 39) = 47.45, p < .001; F2(1, 60) = 7.51, p < .01. Follow-up comparisons revealed that late learners were faster to respond to semantically related words with phonologically related LSE translations than to those that had no underlying relation in LSE, t(39) = 4.5, p < .001. Late learners were also slower to respond to semantically unrelated words with phonologically related signed translations than to those with phonologically unrelated translations, t(39) = 5.49, p < .001.

Hearing L1 Spanish controls with no knowledge of LSE
The 2 × 2 ANOVA revealed a main effect of semantic relatedness, F1(1, 39) = 85.42, p < .001; F2(1, 69) = 45.02, p < .001. However, the interaction of semantics and phonology was not significant, F1(1, 39) = 2.29, p = .14; F2(1, 60) = .10, p = .75 (see Fig. 4).

Table 6
Experiment 2. Participants. Mean reaction times and mean probability of error (standard deviations in brackets).

Fig. 4. Mean reaction times (in ms) for late bimodal bilinguals (left) and for hearing participants with no knowledge of LSE (right). Error bars show the standard error of the mean (from the F1).

General discussion

This study investigated whether hearing bimodal bilinguals whose dominant language is spoken Spanish and who have acquired LSE at different ages activate signs when hearing words. The results were similar for both groups of bimodal bilinguals, native and late learners, while controls showed a different outcome.

All groups were significantly faster in answering the semantically related word pairs compared to the unrelated pairs, a well-established effect which confirms that participants were performing the basic experimental task adequately. More importantly, in native bimodal bilinguals and in late learners the (unseen) phonological relation in LSE had a different effect in each of the semantic conditions. On the one hand, in the presence of a semantic relation, native and late learner bimodal bilinguals showed a facilitatory effect when the implicit signs were phonologically related, since they were faster to respond compared to when the signs were phonologically unrelated. On the other hand, in the absence of a semantic relation, these two groups showed an inhibitory effect when the words were phonologically related in their LSE equivalents, since they were slower to respond than when the signs were phonologically unrelated. In contrast, the control group did not show either of these two effects, as their responses were very similar in each semantic condition regardless of the implicit phonological context. As predicted, this outcome in controls shows no effect from the implicit LSE, since they do not know the (hidden) language.

These results make clear that hearing bimodal bilinguals activate signs while processing spoken words. Moreover, there are no differences in this activation whether the bimodal bilinguals are natives or late learners. A possible outcome was that late learners would perform in the same fashion as the hearing late learners in the Morford et al. (2014) print-sign study, who only showed the inhibitory effect. However, both groups of bimodal bilinguals in our study exhibited the facilitatory effect in addition to the inhibitory effect.²

² To check this similar behavior in native and late signers, we performed a 2 (Group; CODAs vs. late learners) × 2 (Semantic relation; yes vs. no) × 2 (Phonologic relation; yes vs. no) repeated measures across subjects (F1) and across items (F2) ANOVA. The three-way interaction was not significant, all Fs < .05, all ps > .9.

Previous research has repeatedly demonstrated non-selective access to the codes in a monolingual context, mainly illustrated by the interference of the implicit language when there is a lack of semantic relation in bimodal bilinguals (Kubus et al., 2015; Morford et al., 2011, 2014). However, the facilitation effect had only been revealed in experiments run on deaf bimodal bilinguals native in ASL (Morford et al., 2011, 2014). Thus, this is the first study that reveals such strong activation in hearing bimodal bilinguals as well. Crucially, this parallel activation is shown when using the dominant language, and it does not seem to depend on the AoA of the sign language.

Parallel activation effects (facilitation and inhibition) in hearing bimodal bilinguals may be enhanced by several differences with respect to previous studies: firstly, the phonological relationship of the implicit signs; secondly, the primary code associated with the task; and thirdly, code-blending experience in hearing bimodal bilinguals. Prior research conducted in deaf sign-print bilinguals showed the facilitatory and the inhibitory effect whether the deaf bilinguals were balanced in both languages (Morford et al., 2011) or ASL-dominant (Morford et al., 2014). However, in an experiment run with deaf balanced bilinguals of another language pair, DGS and written German (Kubus et al., 2015), only the inhibition effect was found. The authors related this dissimilar outcome to differences in the parameters that the signed translations shared. While in the ASL studies the common parameters were mainly movement and location, in the DGS study the overlapping parameters were handshape and location. Kubus et al. (2015) point out that signs that share movement are perceptually more similar than signs with a common location or handshape, and as such the phonological relation in the ASL studies was stronger. In the current LSE study,
the three parameters mentioned frequently overlap in the signed translations in the [+phonology] conditions: location, handshape and movement (Appendix B).
In their print-sign study, Morford et al. (2014) argued that hearing English-dominant bimodal bilinguals did not show the facilitation effect because the written words in English are directly associated with their corresponding sounds and not with their equivalent signs. Deaf bimodal bilinguals, by contrast, link the printed words to signs when they learn to read, so the connection between the written forms and the signs is more direct than in the hearing bimodal bilinguals' case. Consequently, the effect was more salient (facilitation and inhibition) in the deaf bimodal bilinguals than in the hearing bimodal bilinguals (only inhibition). Our experiment addresses this matter, since we used the spoken form of the language as the explicit language. The presence of both effects (inhibitory and facilitatory) for hearing bimodal bilinguals when listening to spoken words suggests that cross-modal activation is more salient from the primary language modality (i.e. auditory). Prior research using the semantic relatedness paradigm in hearing unimodal bilinguals supports this strong connection for the primary language modality, as the parallel activation is of the sounds of the equivalent language translations rather than of the written form (Wu & Thierry, 2010). Additionally, in contrast to previous studies (Giezen et al., 2015; Shook & Marian, 2012), the current experiment did not include visual cues to prompt the activation of signs, so the robustness of the parallel activation can be linked exclusively to the primary dynamic code of the spoken language. Further support for this comes from the third consideration: the connections established through the simultaneous use of both languages.
Preceding research implies that hearing bimodal bilinguals might connect the phonological forms of words with the phonological forms of signs because the two can be articulated at the same time, a phenomenon known as code-blending. Bimodal bilinguals mostly produce (simultaneous) code-blends instead of the (sequential) code-switches typical of unimodal bilinguals. In a study with ASL–English bimodal bilinguals, these code-blends tended to contain semantically equivalent information in the spoken and the signed language (Emmorey, Borinstein, Thompson, & Gollan, 2008). Most of the code-blends occurred with English as the matrix language (i.e. the language that provided the syntactic structure of the utterance) and ASL as the accompanying language. In fact, signs are produced with speech even when bimodal bilinguals know that their interlocutors do not know any sign language (Casey & Emmorey, 2009). This code-mixing situation changes when signing, as the spoken language (being the dominant language) is suppressed and appears less frequently in signed utterances. This suggests that signs are readily available when using the spoken language but not the other way around, as the dominant spoken language is more inhibited. Future work looking at cross-modal activation of the dominant (spoken) language in hearing bimodal bilinguals could confirm or refute this idea.
The association between spoken and sign phonological forms is strongly established in hearing bimodal bilinguals (Emmorey, Petrich, & Gollan, 2012). This solid connection could have given rise to a stronger parallel activation in the current experiment, compared with the weaker activation primed by printed words (Morford et al., 2014). More evidence for this robust bond between the spoken words and the visual signs in hearing bimodal bilinguals comes from the fact that parallel activation occurred in the non-dominant language. This contrasts with most of the previous work, where the task was carried out in the L2 and the implicit language was the dominant L1. Our participants were not only highly proficient hearing bimodal bilinguals, whether native or late learners, but also had a uniformly high level of competence, as they all work as sign language interpreters. Therefore, although our experiment cannot provide evidence concerning proficiency, as there is currently no standard way to assess proficiency in LSE, the results seem to suggest that activation in this case is driven by proficiency rather than by AoA. In their print-sign experiment, Morford et al. (2014) were able to split the hearing bimodal bilinguals into two groups by proficiency, since the sign language experience of their participants was more heterogeneous. The more proficient ASL users showed a larger degree of inhibition than the less proficient, although, in contrast to the findings of the current study, this inhibitory effect did not reach significance. In spite of this, there is a factor that might have influenced the results: linguistic awareness. Although the bimodal bilinguals were recruited through direct contact, that is, there was no experiment announcement asking explicitly for a specific profile, some of them might have been conscious of their linguistic background, as they could relate their recruitment to the fact that they are sign language users. This awareness could have led to greater activation of LSE while carrying out the experimental task. However, the information they had about the study and the task was quite restricted: they only knew that they had to perform a task concerning semantic relations between Spanish words. Spanish Sign Language was never mentioned, so the potential impact of linguistic awareness is likely to be very limited. Additionally, the task formed part of a larger battery of experiments, so it is quite unlikely that participants were aware that LSE was relevant to the task at hand.
A related issue is the fact that all the bimodal bilinguals were sign language interpreters, and the effect that this could have on the results. It is an open question whether our results will also hold true for other populations. Unfortunately, this is a naturally occurring confound in the population of hearing sign language users: highly proficient signers tend to be interpreters, and it is difficult to find signers matched in proficiency who are not interpreters. Nevertheless, future research ought to examine the relative roles of proficiency and interpreting experience by looking at whether native or late bimodal bilinguals who are not working as sign language interpreters, or who do not use the sign language on an everyday basis, would perform similarly to our participants.
Finally, our results sit well with a recent study that provides evidence of cross-language activation in production as well (Giezen & Emmorey, 2015). This study demonstrated that sign production was influenced by the parallel activation of the equivalent sign of spoken distractor words.
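The pattern of effects discussed throughout this section can be summarized in a small sketch: phonological relatedness of the covert LSE translations speeds up "yes" responses to semantically related pairs and slows down "no" responses to semantically unrelated pairs. The mapping below is a schematic illustration of that 2 × 2 prediction, not the study's materials or analysis.

```python
# Predicted effect of covert LSE translation phonology on semantic
# decisions about spoken Spanish word pairs (schematic, per the results
# reported above; condition labels are illustrative).

PREDICTIONS = {
    # (semantically related, LSE translations phonologically related): effect
    (True, True): "facilitation",   # faster 'yes' (related) responses
    (True, False): "baseline",
    (False, True): "inhibition",    # slower 'no' (unrelated) responses
    (False, False): "baseline",
}

def predicted_effect(sem_related: bool, phon_related: bool) -> str:
    """Return the predicted RT effect for one cell of the 2x2 design."""
    return PREDICTIONS[(sem_related, phon_related)]
```

Note that on this account the phonological manipulation only matters when the non-dominant language is co-activated: monolingual controls with no LSE show "baseline" in every cell.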
References
Boersma, P., & Weenink, D. (2014). Praat: Doing phonetics by computer [Computer program]. Version 5.1.71. <http://www.praat.org> Retrieved 09.04.14.
Campbell, R., Martin, P., & White, T. (1992). Forced choice recognition of sign in novice learners of British Sign Language. Applied Linguistics, 13, 185–201.
Carreiras, M., Gutiérrez-Sigut, E., Baquero, S., & Corina, D. (2008). Lexical processing in Spanish sign language (LSE). Journal of Memory and Language, 58, 100–122.
Casey, S., & Emmorey, K. (2009). Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes, 24, 290–312.
Chee, M. W., Tan, E. W., & Thiel, T. (1999). Mandarin and English single word processing studied with functional magnetic resonance imaging. The Journal of Neuroscience, 19, 3050–3056.
Corina, D. P., & Hildebrandt, U. C. (2002). Psycholinguistic investigations of phonological structure in ASL. In R. P. Meier, K. Cormier, & D. Quinto-Pozos (Eds.), Modality and structure in signed and spoken languages (pp. 88–111). Cambridge: Cambridge University Press.
Costa, A., Caramazza, A., & Sebastian-Galles, N. (2000). The cognate facilitation effect: Implications for models of lexical access. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1283.
Dijkstra, T., & Van Heuven, W. J. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition, 5, 175–197.
Duchon, A., Perea, M., Sebastián-Gallés, N., Martí, A., & Carreiras, M. (2013). EsPal: One-stop shopping for Spanish word properties. Behavior Research Methods, 45, 1246–1258.
Emmorey, K., Borinstein, H. B., Thompson, R., & Gollan, T. H. (2008). Bimodal bilingualism. Bilingualism: Language and Cognition, 11, 43.
Emmorey, K., & Corina, D. (1990). Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills, 71, 1227–1252.
Emmorey, K., Corina, D., & Bellugi, U. (1995). Differential processing of topographic and referential functions of space. In K. Emmorey & J. Reilly (Eds.), Language, gesture and space (pp. 43–62). Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, K., Petrich, J. A., & Gollan, T. H. (2012). Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language, 67, 199–210.
Giezen, M. R., Blumenfeld, H. K., Shook, A., Marian, V., & Emmorey, K. (2015). Parallel language activation and inhibitory control in bimodal bilinguals. Cognition, 141, 9–25.
Giezen, M. R., & Emmorey, K. (2015). Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture–word interference. Bilingualism: Language and Cognition, 1–13.
Hoshino, N., & Kroll, J. F. (2008). Cognate effects in picture naming: Does cross-language activation survive a change of script? Cognition, 106, 501–511.
Illes, J., Francis, W. S., Desmond, J. E., Gabrieli, J. D., Glover, G. H., Poldrack, R., ... Wagner, A. D. (1999). Convergent cortical representation of semantic processing in bilinguals. Brain and Language, 70, 347–363.
Klein, D., Milner, B., Zatorre, R. J., Meyer, E., & Evans, A. C. (1995). The neural substrates underlying word generation: A bilingual functional-imaging study. Proceedings of the National Academy of Sciences, 92, 2899–2903.
Kolb, P. (2008). Disco: A multilingual database of distributionally similar words. In Proceedings of KONVENS-2008, Berlin.
Kolb, P. (2009, May). Experiments on the difference between semantic similarity and relatedness. In Proceedings of the 17th Nordic Conference on Computational Linguistics (NODALIDA'09).
Kubus, O., Villwock, A., Morford, J. P., & Rathmann, C. (2015). Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics, 36, 831–854.
Libben, M. R., & Titone, D. A. (2009). Bilingual lexical access in context: Evidence from eye movements during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 381.
Marian, V., & Spivey, M. (2003). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition, 6, 97–115.
Mayberry, R. I., & Eichen, E. B. (1991). The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition. Journal of Memory and Language, 30, 486–512.
Morford, J. P., Kroll, J. F., Piñar, P., & Wilkinson, E. (2014). Bilingual word recognition in deaf and hearing signers: Effects of proficiency and language dominance on cross-language activation. Second Language Research, 30, 251–271.
Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F. (2011). When deaf signers read English: Do written words activate their sign translations? Cognition, 118, 286–292.
Neville, H. J., Coffey, S. A., Lawson, D. S., Fischer, A., Emmorey, K., & Bellugi, U. (1997). Neural systems mediating American Sign Language: Effects of sensory experience and age of acquisition. Brain and Language, 57, 285–308.
Newman, A. J., Bavelier, D., Corina, D., Jezzard, P., & Neville, H. J. (2002). A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience, 5, 76–80.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2012). Cross-language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition, 15, 288–303.
Petitto, L. A., Katerelos, M., Levy, B. G., Gauna, K., Tétreault, K., & Ferraro, V. (2001). Bilingual signed and spoken language acquisition from birth: Implications for the mechanisms underlying early bilingual language acquisition. Journal of Child Language, 28, 453–496.
Rodriguez-Fornells, A., Rotte, M., Heinze, H. J., Nösselt, T., & Münte, T. F. (2002). Brain potential and functional MRI evidence for how to handle two languages with one brain. Nature, 415, 1026–1029.
Schwartz, A. I., Kroll, J. F., & Diaz, M. (2007). Reading words in Spanish and English: Mapping orthography to phonology in two languages. Language and Cognitive Processes, 22, 106–129.
Shook, A., & Marian, V. (2012). Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition, 124, 314–324.
Shook, A., & Marian, V. (2013). The bilingual language interaction network for comprehension of speech. Bilingualism: Language and Cognition, 16, 304–324.
Spivey, M. J., & Marian, V. (1999). Cross talk between native and second languages: Partial activation of an irrelevant lexicon. Psychological Science, 10, 281–284.
Thierry, G., & Wu, Y. J. (2007). Brain potentials reveal unconscious translation during foreign-language comprehension. Proceedings of the National Academy of Sciences, 104, 12530–12535.
Wu, Y. J., & Thierry, G. (2010). Chinese–English bilinguals reading English hear Chinese. The Journal of Neuroscience, 30, 7646–7651.
Zachau, S., Korpilahti, P., Hämäläinen, J. A., Ervast, L., Heinänen, K., Suominen, K., ... Leppänen, P. H. (2014). Electrophysiological correlates of cross-linguistic semantic integration in hearing signers: N400 and LPC. Neuropsychologia, 59, 57–73.