
Journal of Memory and Language 87 (2016) 59–70


Cross-language and cross-modal activation in hearing bimodal bilinguals

Saúl Villameriel a,*, Patricia Dias a, Brendan Costello a, Manuel Carreiras a,b

a BCBL, Basque Center on Cognition, Brain and Language, Donostia, Spain
b Ikerbasque, Basque Foundation for Science, Bilbao, Spain

* Corresponding author at: BCBL, Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69-2, 20009 Donostia–San Sebastián, Spain. E-mail addresses: [email protected] (S. Villameriel), [email protected] (P. Dias), [email protected] (B. Costello), [email protected] (M. Carreiras).

http://dx.doi.org/10.1016/j.jml.2015.11.005
0749-596X/© 2015 Elsevier Inc. All rights reserved.

Article history: Received 10 July 2015; revision received 14 October 2015.

Keywords: Cross-language activation; Cross-modal activation; Bimodal bilingualism; Sign language.

Abstract

This study investigates cross-language and cross-modal activation in bimodal bilinguals. Two groups of hearing bimodal bilinguals, natives (Experiment 1) and late learners (Experiment 2), for whom spoken Spanish is their dominant language and Spanish Sign Language (LSE) their non-dominant language, performed a monolingual semantic decision task with word pairs heard in Spanish. Half of the word pairs had phonologically related signed translations in LSE. The results showed that bimodal bilinguals were faster at judging semantically related words when the equivalent signed translations were phonologically related, while they were slower judging semantically unrelated word pairs when the LSE translations were phonologically related. In contrast, monolingual controls with no knowledge of LSE did not show any of these effects. The results indicate cross-language and cross-modal activation of the non-dominant language in hearing bimodal bilinguals, irrespective of the age of acquisition of the signed language.

© 2015 Elsevier Inc. All rights reserved.

Introduction

A central question in bilingualism is whether the processing of one language necessarily involves activating the other, or whether the two languages are accessed independently. Some neuroimaging studies have revealed overlapping activation of the same brain regions for both languages (e.g., Chee, Tan, & Thiel, 1999; Illes et al., 1999; Klein, Milner, Zatorre, Meyer, & Evans, 1995), suggesting that the languages share the same neural circuitry. Furthermore, there is growing evidence for cross-language activation even when bilinguals are using just one of their languages when reading words (e.g., Schwartz, Kroll, & Diaz, 2007; Thierry & Wu, 2007) or sentences (e.g., Libben & Titone, 2009), when hearing words (e.g., Marian & Spivey, 2003; Spivey & Marian, 1999), or while naming pictures (e.g., Costa, Caramazza, & Sebastian-Galles, 2000), even when both languages use different written scripts (e.g., Hoshino & Kroll, 2008). In contrast, claims have been made for language independence in monolingual contexts given the strong inhibition of one language (Rodriguez-Fornells, Rotte, Heinze, Nösselt, & Münte, 2002).

The present study investigates whether cross-language activation is present in hearing bimodal bilinguals by asking whether there is activation of their second language (L2), Spanish Sign Language, LSE (lengua de signos española), when performing a task in their first language (L1), spoken Spanish. Thus, we will be testing cross-language and cross-modal activation, where modality refers to the perceptual channels employed by the language (oral-auditory and gestural-visual). To that end, we adapted the semantic relatedness paradigm used in a within-modality setting by Thierry and Wu (2007; see also Wu & Thierry, 2010), who showed that bilinguals in two spoken languages activated their L1 (Chinese) while dealing with their L2 (English).
A very small number of studies focusing on bimodal bilingualism in deaf individuals have examined whether non-selective access can also be found across languages that do not overlap in modality. The activation of L1 when dealing with L2 occurs in deaf balanced bilinguals (Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011). Deaf native signers of American Sign Language (ASL) read pairs of words in English and judged their semantic relatedness. The results showed cross-language activation since the presence of a phonological relation between the ASL equivalents of the English words influenced the reaction times for the semantic judgement. Thus, on the one hand, in the context of a semantic relation, an (unseen L1) phonological relation produced a facilitation effect. Conversely, when the items were not semantically related, the (unseen L1) phonological relation gave rise to an inhibitory effect. A similar experiment was run in a subsequent study with two groups of unbalanced ASL/English bilinguals: a group of deaf ASL-dominant/English signers and a group of hearing English-dominant/ASL signers (Morford, Kroll, Piñar, & Wilkinson, 2014). Deaf bilinguals showed the same inhibitory and facilitatory effects reported previously. However, the hearing English-dominant signers, for whom ASL was their L2, showed only the inhibitory effect. Importantly, the deaf bilinguals were performing the task in their L2, while the hearing bilinguals performed the experiment in the written form of their L1. In addition, deaf native signers normally associate English word forms with ASL signs when they learn to read, while the hearing group would have linked the English orthography with the spoken English phonological forms. A very similar experiment using the same procedure was carried out in proficient deaf bilinguals in German Sign Language, DGS (Deutsche Gebärdensprache), and written German (Kubus, Villwock, Morford, & Rathmann, 2015). Kubus et al. (2015) found the inhibitory effect described by Morford et al. (2011) but not the facilitatory effect for the ‘hidden’ phonological relation (in the context of a semantic relationship). The authors linked this difference in the results to two reasons: differences in the experimental stimuli and in the languages involved.

Therefore, there is evidence for cross-modal print-sign parallel activation when the languages are of a different modality in deaf and hearing bimodal bilinguals. However, there are two important considerations. Firstly, the spoken language was presented in its written form, a secondary code that provides a visual representation of an auditory signal. This involves looking for links between the representation of the static written form of the spoken language and the representation of the dynamic signal of the sign language. Additionally, this means that the explicit and the implicit codes are both visual: the experimental task is performed in the same (visual) modality as the language for which implicit activation is sought. Secondly, most of the participants in these studies were deaf bimodal bilinguals. These individuals are sign-print bilinguals due to their limited or indirect access to the acoustic form of the spoken language. More critically, these two factors interact, since many deaf individuals learn to read by associating the signs of the signed language with the written forms of the spoken language. This raises the issue of the type of representation that deaf individuals have (of the written form) of the spoken language and how that may come to bear on the question of cross-linguistic and cross-modal activation.

In the present study we focus on the modality of the dynamic primary signal for a given language, namely, oral-auditory for spoken language and visual-gestural for signed language. Examining these effects in hearing bimodal bilinguals, who acquire the dynamic primary signal of each language directly, would provide a clearer picture of cross-modal and cross-language activation. To the best of our knowledge, there are only two studies (Giezen, Blumenfeld, Shook, Marian, & Emmorey, 2015; Shook & Marian, 2012) that have looked directly at cross-language activation in hearing bimodal bilinguals with spoken language as stimuli, using a very different procedure: the visual world paradigm. Participants heard a word and had to select one of four images on a screen. In addition to the target, there were two unrelated distractors and a phonological competitor of the target in the ‘hidden’ language (ASL) that shared three phonological parameters. Bimodal bilinguals looked more often and longer at the competitor than at the distractors. Consequently, co-activation does not seem to be dependent on the modality of the languages in bimodal bilinguals. Thus, these two studies showed parallel activation of the sign language (the non-explicit language: ASL) during comprehension of spoken English in highly proficient hearing bimodal bilinguals. However, the participants were looking at images while hearing words, so the task offered visual stimuli in order to activate the signed language. In addition, the paradigm prompted explicitly a representation through the picture that activated the sign language competitor. Both the explicit trigger and the hidden target shared the visual modality. Furthermore, these stimuli were not controlled for iconicity of the signs. The form of iconic signs bears some resemblance to the meaning, and iconicity has been found to play a role in the activation of signs from word stimuli in deaf bilingual children (Ormel, Hermans, Knoors, & Verhoeven, 2012). Thus, some of the picture stimuli from the experiment may have resembled the corresponding sign forms and have triggered the activation of signs. A task without any visual cues would be a stronger test to find out whether the cross-language activation is actually set off by the language input. Finally, none of these studies investigated whether these putative effects are modulated by age of acquisition (AoA); that is, whether the effect is present both in native bimodal bilinguals and in late learners of the signed language.
The age at which the signed language is learnt impacts how the mental lexicon is organized and how the sublexical parameters are processed (Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008; Corina & Hildebrandt, 2002; Emmorey & Corina, 1990; Emmorey, Corina, & Bellugi, 1995; Mayberry & Eichen, 1991), and it might also impact on the link with the spoken language. Hearing native bimodal bilinguals normally have deaf parents who use the signed language. The milestones of language learning in these babies are the same as those of children that acquire two spoken languages, and very similar to those of children that acquire only one language, whether signed or spoken (Petitto et al., 2001). Hearing late learners of a signed language have the spoken language as L1. Late learners, compared to natives, rely more on iconicity (Campbell, Martin, & White, 1992) and on associations with their spoken language in order to learn the signed language. Furthermore, there is robust neurological evidence of AoA differences in hearing bimodal bilinguals when processing signed and spoken languages (Neville et al., 1997; Newman, Bavelier, Corina, Jezzard, & Neville, 2002; Zachau et al., 2014). Therefore, given the previous differences between native signers and late learners, in the present study we will investigate whether hearing bimodal bilinguals who have acquired a signed language at different ages activate signs (L2) when hearing words (L1).

For the current experiments, we used the semantic relatedness paradigm from Thierry and Wu (2007) that Morford et al. (2011) adapted to deaf signers. In our case, we modified the implicit priming for hearing bimodal bilinguals in spoken Spanish and LSE and we ran the experiment in spoken Spanish (the dominant language) to see if LSE (the non-dominant language) was activated. Previous cross-language and cross-modal experiments using this semantic relatedness paradigm with implicit priming have been run with the written form of the spoken language. This study, however, uses the spoken language in its primary manifestation as the input in order to assess whether there is selective or non-selective access from one primary code to another. In this sense, the paradigm in this study is also a cross-modal paradigm: participants hear spoken words and we check whether or not they prime visual signs. This will allow us to make solid interpretations concerning activation when the two languages involved do not share modality. This is a crucial difference with respect to the paradigm of the previous studies that have demonstrated cross-modal activation: that paradigm was unimodal in the sense that the visual form of the language was used to prime a visual signed language.

We studied two groups of hearing bimodal bilinguals: native (Experiment 1) and late learners (Experiment 2). For hearing bimodal bilinguals the spoken language was the dominant language according to self-ratings provided by the participants in this study (see Methods section below). Both experiments were also run with hearing participants with no knowledge of LSE as the control groups. Given that sign language can be activated when hearing spoken language, we expect the highly proficient native bimodal bilinguals in Experiment 1 to show strong evidence of cross-language, cross-modal activation. Specifically, we expect that: (1) in the presence of a semantic relation, native bimodal bilinguals should show shorter reaction times (RTs) and/or lower error rates when there is an implicit phonological relation; (2) in the absence of a semantic relation, native bimodal bilinguals should show longer RTs and/or higher error rates when there is an implicit phonological relation. In Experiment 2 we expect either similar results or, in line with the results found by Morford et al. (2014), less evidence of cross-language and cross-modal activation, limited to inhibitory effects on the semantically unrelated pairs since this appears to be a more robust effect.

Experiment 1. Hearing native bimodal bilinguals (CODAs) and hearing monolinguals

Methods

Participants
We recruited 20 children of deaf adults (CODAs) highly proficient in both languages (LSE and Spanish) for the experimental group, and 20 hearing monolinguals in Spanish as controls. In order to make sure that the CODAs were using LSE and Spanish on an everyday basis, we selected subjects who were sign language interpreters at the time of the study and who had been working as such for at least the previous two years. The bimodal bilingual group and the control group were matched in age and education level. A small group of bimodal bilinguals had knowledge of other (spoken) languages, such as regional Spanish languages (e.g. Galician, Basque) or English. Participants’ characteristics are shown in Table 1. The experiment was run in the different locations where participants were recruited (Bilbao, Burgos, León, Madrid, Palencia, Pamplona, San Sebastián and Valladolid).

Materials
Sixty-four pairs of words in Spanish were selected (Appendix A). Thirty-two of the word pairs were semantically related and the other thirty-two unrelated. Different semantic relations between primes and target words were included, such as antonyms, synonyms, hypernyms and hyponyms, coordinate terms and associative relations. Within each semantic condition, half of the pairs had associated LSE signs that were phonologically related, and the other half had no phonological relation in LSE. An example of each condition appears in Fig. 1. Sixteen additional word pairs were fillers.

Pairs of signs with a phonological relation shared at least two formational parameters: handshape, orientation, movement and location (Appendix B). The focus of this study was covert activation of LSE. Although it has been shown that these different parameters are not processed identically (Carreiras et al., 2008), we were not aiming to disentangle the influence of each parameter individually in the present study. Within each semantic condition, the second word (the target word) appeared in pairs with and without a phonological relationship. This way, participants were responding to the same words in the two critical conditions of interest.

Within each semantic condition, for every phonologically related word pair a parallel word pair was created with the same target word and a phonologically unrelated prime in LSE. A group of 20 native Spanish speakers (with no knowledge of LSE, and who did not participate in the main experiment) judged the semantic relatedness of each word pair on a scale from 1 (no semantic relation) to 7 (strong semantic relation). Only pairs with scores below 3 or above 5 were considered.
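The item structure just described pairs each target with a phonologically related and a phonologically unrelated prime within the same semantic condition, and keeps only clearly related or clearly unrelated pairs from the norming study. Below is a minimal sketch of that structure and selection rule; the DataFrame, its column names and the example ratings are illustrative assumptions rather than the authors' actual stimulus files (the word pairs are taken from Fig. 1).

# A minimal sketch (pandas) of the 2 x 2 item structure and the norming cut-off.
# Column names and the example rows are illustrative, not the authors' materials.
import pandas as pd

items = pd.DataFrame([
    # prime, target, semantic relation?, LSE translations phonologically related?, mean rating (1-7)
    {"prime": "suegra", "target": "madre", "sem_related": True,  "lse_phon_related": True,  "rating": 6.5},
    {"prime": "hija",   "target": "madre", "sem_related": True,  "lse_phon_related": False, "rating": 6.4},
    {"prime": "criada", "target": "golfo", "sem_related": False, "lse_phon_related": True,  "rating": 1.4},
    {"prime": "suerte", "target": "golfo", "sem_related": False, "lse_phon_related": False, "rating": 1.5},
])

# Keep only pairs whose mean semantic-relatedness rating is clearly low (< 3) or clearly high (> 5),
# mirroring the selection criterion described above.
selected = items[(items["rating"] < 3) | (items["rating"] > 5)]

# Each target should appear once with a phonologically related prime and once with an
# unrelated prime, within the same semantic condition.
check = selected.groupby(["target", "sem_related"])["lse_phon_related"].nunique()
assert (check == 2).all()
print(selected)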
Table 1
Experiment 1. Participants’ characteristics.

Group                       Number of      Mean age   Gender             Years of experience as        Self-rated LSE competence
                            participants                                 sign language interpreters    (from 1 to 7)
Native bimodal bilinguals   20             39.6       15 women, 5 men    17.7 (mean)                   6.6 (mean)
Hearing monolinguals        20             39.2       12 women, 8 men    –                             –

Fig. 1. Examples of stimuli. Participants only heard words in Spanish, the explicit language. Photographs show the translation in LSE, the implicit language, for illustrative purposes. English translations appear within parentheses.
  Semantic relation, phonologic relation in LSE: suegra (mother-in-law) – madre (mother)
  Semantic relation, no phonologic relation: hija (daughter) – madre (mother)
  No semantic relation, phonologic relation in LSE: criada (maid) – golfo (scoundrel)
  No semantic relation, no phonologic relation: suerte (luck) – golfo (scoundrel)

As a further check on the semantic relatedness of the selected word pairs, we obtained automatic text-based values of first-order and second-order semantic similarity using DISCO (extracting DIstributionally related words using CO-occurrences; Kolb, 2008, 2009) on a large, 232 million (word) token corpus of Spanish texts.¹ Both words in each pair were members of the same grammatical class. We also controlled for log frequency and number of phonemes according to the values from EsPal, the Spanish Lexical Database (Duchon, Perea, Sebastián-Gallés, Martí, & Carreiras, 2013), using the Written and Web Tokens database (2012-11-06) and Castilian Spanish phonology. The properties of the final word lists are shown in Tables 2 (semantically related) and 3 (semantically unrelated).

¹ The corpus consisted of the entire Spanish Wikipedia from July 2008, a collection of Parliamentary debates and works of literature from the 19th and 20th centuries. More details (and the corpus itself) are available at http://www.linguatools.de/disco/disco-languagedatapackets_en.html#esgeneral.

Table 2
Characteristics of semantically related words (means and standard deviations in brackets).

                                    Phonologically      Phonologically      p-value      Targets
                                    related primes      unrelated primes    of t-test
Log frequency                       1.48 (0.57)         1.68 (0.65)         0.37         1.80 (0.51)
Number of phonemes                  5.69 (0.87)         5.81 (1.38)         0.68         5.25 (1.12)
Duration (ms)                       789 (84)            773 (101)           0.63         754 (116)
DISCO semantic similarity(a)        0.37 (0.27)         0.41 (0.27)         0.52
Human rating of semantic
  relationship (primes–targets)     6.34 (0.41)         6.24 (0.42)         0.48

(a) For both semantically related and unrelated word pairs only DISCO values for second-order semantic similarity are reported, as first-order values were also matched with no significant differences. First and second order refer to matrices of different size concerning the amount of words taken into consideration to compute the semantic similarity values. Second-order values show a reasonable correlation with human-based values (Kolb, 2008).

Table 3
Characteristics of semantically unrelated words (means and standard deviations in brackets).

                                    Phonologically      Phonologically      p-value      Targets
                                    related primes      unrelated primes    of t-test
Log frequency                       1.33 (0.81)         1.20 (0.70)         0.63         1.24 (0.65)
Number of phonemes                  5.56 (1.75)         5.25 (0.77)         0.38         5.13 (1.26)
Duration (ms)                       778 (123)           766 (85)            0.76         744 (103)
DISCO semantic similarity           0.04 (0.04)         0.04 (0.05)         0.86
Human rating of semantic
  relationship (primes–targets)     1.53 (0.50)         1.41 (0.49)         0.5
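The matching summarized in Tables 2 and 3 compares the phonologically related and unrelated primes on lexical variables with t-tests. A sketch of that kind of check follows; the `primes` DataFrame and its columns (log_freq, n_phonemes, duration_ms, e.g. taken from EsPal) are assumptions for illustration, not the authors' files or the DISCO tool itself.

# A sketch of the matching checks reported in Tables 2 and 3: t-tests comparing phonologically
# related vs. unrelated primes on lexical variables. The DataFrame and its columns are assumed.
from scipy import stats
import pandas as pd

def matching_report(primes: pd.DataFrame) -> None:
    related = primes[primes["lse_phon_related"]]
    unrelated = primes[~primes["lse_phon_related"]]
    for var in ["log_freq", "n_phonemes", "duration_ms"]:
        t, p = stats.ttest_ind(related[var], unrelated[var])
        print(f"{var}: related M={related[var].mean():.2f}, "
              f"unrelated M={unrelated[var].mean():.2f}, t={t:.2f}, p={p:.2f}")

# usage (assuming `primes` holds one row per prime word):
# matching_report(primes[primes["sem_related"]])    # semantically related pairs (Table 2)
# matching_report(primes[~primes["sem_related"]])   # semantically unrelated pairs (Table 3)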
The words were recorded using Goldwave audio software in an audio recording booth, spoken by a male native Spanish speaker with an unmarked accent. The audio files were edited, de-noised and equalized using Praat (Boersma & Weenink, 2014). A time span of 100 ms of silence before the voice onset of each word was set, visualizing the waveforms of each recording and matching them when necessary in order to make sure that all the onsets were identical. For the analysis, this silent span was subtracted from the RTs of each response, so the RT reflected the latency from the onset of the word, not from the beginning of the audio recording.

Procedure
The experiment was presented using the SR Research Experiment Builder software (V.10.1025) on a Toshiba laptop with an Intel® Pentium® M processor (1.73 GHz), a Realtek AC97 Audio sound device and headphones (Beyerdynamic DT 770 Pro, 250 ohm) for the audio. The headphones provided soundproofing from any environmental noise and delivered words at a volume of 60 dB (checked with a sound level meter). For each word pair, participants listened to the two words in succession and had to decide whether or not their meanings were related. Participants responded as soon as they could after hearing the second word of the pair. Right-handed participants pressed the right key (‘L’ on a qwerty keyboard) for semantic relation and the left key (‘A’ on the keyboard) for lack of semantic relation. Left-handed participants pressed ‘A’ for semantic relation and ‘L’ for lack of semantic relation. In this way, all participants responded positively with their dominant hand.

The participants read the instructions for the experiment in Spanish, which included various illustrative examples of the types of semantic relations that appeared in the experimental items (i.e. synonyms, antonyms, coordinate terms). The participants were instructed to respond as quickly and as accurately as possible. This was followed by a practice task consisting of eight trials in which the computer provided feedback (correct/incorrect) to help in understanding the task. The presentation of the experimental trials was counterbalanced and pseudorandomized. There were at least five trials (i.e. ten different words) separating both appearances of the same word. We used two different lists so that half of the participants were presented with one list and the other half with the other list.

Reaction times were measured after the onset of the second word of each pair. Each trial started with a fixation cross in the center of the screen (500 ms), then the first word was presented, followed by 200 ms of silence before the second word was played. After the participant’s response, before the next trial began, there were 1500 ms of silence (see Fig. 2).

After the experiment, the bimodal bilinguals had to translate into LSE all words presented during the experiment to ensure that they associated the target sign (and not some dialectal variant) with that word. As we had bimodal bilinguals from various locations, for each participant, items whose sign translation did not match the expected translation were eliminated (4.99% of responses were removed for this reason).

Inaccurate responses were also discarded for the reaction time analysis (2.93% of responses). Reaction times more than 2.5 standard deviations from the mean by subject and condition were considered outliers and discarded (1.73% of responses).

Participants also completed a questionnaire about their language profile (history and use) after finishing the experiment.

Fig. 2. Trial sequence.
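Below is a sketch of the trial-level clean-up described above: dropping items whose post-test LSE translation did not match the expected sign, dropping incorrect responses, subtracting the 100 ms of leading silence so that RTs are measured from word onset, and trimming RTs more than 2.5 SD from each subject × condition mean. The trial table and its column names are assumed for illustration; this is not the authors' analysis script.

# A sketch of the reaction-time preprocessing described above. All column names
# (subject, condition, rt, correct, translation_ok) are illustrative assumptions.
import pandas as pd

LEADING_SILENCE_MS = 100  # silence inserted before voice onset in each audio file

def preprocess(trials: pd.DataFrame) -> pd.DataFrame:
    # 1. Drop items whose post-experiment LSE translation did not match the expected sign.
    trials = trials[trials["translation_ok"]]
    # 2. Drop incorrect semantic-relatedness responses.
    trials = trials[trials["correct"]]
    # 3. Measure RT from word onset rather than from the start of the audio file.
    trials = trials.assign(rt=trials["rt"] - LEADING_SILENCE_MS)
    # 4. Discard RTs more than 2.5 SD from the mean of each subject x condition cell.
    grp = trials.groupby(["subject", "condition"])["rt"]
    z = (trials["rt"] - grp.transform("mean")) / grp.transform("std")
    return trials[z.abs() <= 2.5]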


Results

Table 4 shows the mean RTs and probability of errors of both groups. The 2 (Group; CODAs vs. control) × 2 (list; A vs. B) × 2 (Semantic relation; yes vs. no) × 2 (Phonologic relation; yes vs. no) repeated measures across subjects (F1) and across items (F2) ANOVAs on accuracy did not show any interactions, and only a main effect of group was revealed in the analysis by items: bimodal bilinguals were more accurate than controls, F1(1, 38) = 2.98, p = .09, F2(1, 60) = 5.48, p < .05. In any case, due to the high rate of correct responses, the data are not normally distributed (confirmed by the Shapiro–Wilk test) and there is also a ceiling effect, as most of the results for accuracy are nearly 100%.

Table 4
Experiment 1. Mean reaction times and mean probability of error (standard deviations in brackets).

                                                      Semantically related                   Semantically unrelated
                                                      Phon. related      Phon. unrelated     Phon. related      Phon. unrelated
Mean reaction times (ms)  Native bimodal bilinguals   886.67 (181.94)    938.45 (196.83)     1151.05 (213.12)   1041.43 (175.75)
                          Hearing monolinguals        815.67 (188.51)    821.64 (204.88)     996.92 (214.26)    984.93 (217.01)
Mean probability error    Native bimodal bilinguals   .006 (.019)        .003 (.017)         .003 (.017)        .003 (.014)
                          Hearing monolinguals        .012 (.032)        .006 (.027)         .012 (.025)        .012 (.025)

The ANOVA on reaction time showed a main effect of semantic relatedness: participants were faster to respond to semantically related than unrelated pairs of words, F1(1, 36) = 55.55, p < .001, F2(1, 60) = 38.37, p < .001. The interaction of semantics and phonology was significant in the analysis by subjects, F1(1, 36) = 32.03, p < .001, F2(1, 60) = 2.92, p = .093. Importantly, the three-way interaction of semantics, phonology and group was significant, F1(1, 36) = 21.65, p < .001, F2(1, 60) = 8.45, p < .01.

Native bimodal bilinguals
The 2 (Semantic relation) × 2 (Phonologic relation) ANOVA run for the native bimodal bilinguals revealed a significant main effect of semantics, F1(1, 19) = 23.09, p < .001, F2(1, 60) = 26.18, p < .001, and an interaction of semantics and phonology, F1(1, 19) = 33.28, p < .001, F2(1, 60) = 5.63, p < .05. Follow-up comparisons revealed that native bimodal bilinguals were quicker to respond to semantically related words with phonologically related than unrelated LSE translations, t(19) = 3.11, p < .01. In addition, participants were slower to respond to semantically unrelated words with phonologically related signed translations than with phonologically unrelated translations, t(19) = 4.24, p < .001 (see Fig. 3).

Fig. 3. Mean reaction times (in ms) for native bimodal bilinguals (left) and hearing monolinguals with no knowledge of LSE (right). Error bars show the standard error of the mean (from the F1).

Hearing L1 Spanish controls with no knowledge of LSE
The same 2 × 2 ANOVA run for the controls only showed a main effect of semantic relatedness, F1(1, 19) = 35.42, p < .001, F2(1, 69) = 44.81, p < .001. No significant interaction of semantics and phonology was revealed, F1(1, 19) = 1.83, p = .19, F2(1, 60) = .19, p = .67.

Experiment 1 extends the results of previous studies using the same semantic relatedness paradigm (Kubus et al., 2015; Morford et al., 2011, 2014) to hearing native signers whose dominant language is spoken Spanish. Crucially, in the current study, cross-language and cross-modal activation of the non-dominant code was primed by exclusive use of the dominant language in its natural spoken modality (dynamic signal). This parallel activation is highly robust, as shown by the appearance of both effects: facilitation and inhibition.
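Below is a sketch of the by-subjects (F1) follow-up analysis reported above for a single group: a 2 (semantic relation) × 2 (phonological relation in LSE) repeated-measures ANOVA on subject mean RTs, followed by paired t-tests within each semantic condition. It assumes a `clean` trial table like the one in the preprocessing sketch; it is not the authors' analysis script, and the by-items (F2) and omnibus group analyses would be run analogously over item means and with group as an additional factor.

# A sketch of the by-subjects (F1) analysis: 2 (semantic) x 2 (phonological) repeated-measures
# ANOVA on subject-level mean RTs, plus paired comparisons within each semantic condition.
# The `clean` DataFrame and its columns are assumptions carried over from the sketch above.
from statsmodels.stats.anova import AnovaRM
from scipy import stats

def f1_analysis(clean):
    # Subject-level cell means for correct, trimmed RTs.
    cells = (clean.groupby(["subject", "sem_related", "lse_phon_related"], as_index=False)["rt"]
                  .mean())
    anova = AnovaRM(cells, depvar="rt", subject="subject",
                    within=["sem_related", "lse_phon_related"]).fit()
    print(anova.anova_table)

    # Follow-up paired t-tests: phonologically related vs. unrelated, within each semantic condition.
    for sem in (True, False):
        sub = cells[cells["sem_related"] == sem].pivot(index="subject",
                                                       columns="lse_phon_related",
                                                       values="rt")
        t, p = stats.ttest_rel(sub[True], sub[False])
        print(f"semantic relation = {sem}: t = {t:.2f}, p = {p:.3f}")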
Experiment 2. Hearing late bimodal bilinguals and hearing monolinguals

The results of Experiment 1 provide clear evidence of cross-language and cross-modal activation in native hearing bimodal bilinguals. This effect could be linked to the native status of the language for the experimental group, who learned LSE from birth. Therefore, the next question was whether this parallel activation would also occur in late bimodal bilinguals, who learned LSE late in life, and, if so, whether it would take place in the same terms as in the native bimodal bilinguals (facilitation and inhibition) or would be different. Thus, differences or similarities in the results of the two groups of bimodal bilinguals would supply valuable data for inferences concerning the AoA of the signed language.

Methods

Participants
Forty highly proficient bimodal bilinguals who were late learners of LSE and 40 hearing Spanish monolinguals were recruited. As in Experiment 1, in order to ensure that the late learners were highly competent in both languages, we chose bimodal bilinguals who were sign language interpreters at the time of the study and who had been working as such for at least the previous two years. They started learning sign language after the age of 18. Participants’ characteristics are shown in Table 5.

Table 5
Experiment 2. Participants’ characteristics.

Group                     Number of      Mean age   Gender              Years of experience as        Self-rated LSE competence
                          participants                                  sign language interpreters    (from 1 to 7)
Late bimodal bilinguals   40             34.35      31 women, 9 men     10.02 (mean)                  6.01 (mean)
Hearing monolinguals      40             38.15      24 women, 16 men    –                             –

Materials and procedure
The materials and procedure were the same as those used in Experiment 1.

As in the previous experiment, responses were removed due to a discrepancy between the subject’s translation and the expected target sign (5.96%). In addition, inaccurate responses were removed for the reaction time analysis (2.64% of responses). Reaction times that were not within 2.5 standard deviations of the average by subject and condition were also discarded (1.84% of responses).

Results

A 2 (Group; late learners vs. control) × 2 (list; A vs. B) × 2 (Semantic relation; yes vs. no) × 2 (Phonologic relation; yes vs. no) repeated measures F1 and F2 ANOVA on accuracy did not show any significant interactions or main effects, other than that late bimodal bilinguals made fewer errors than controls, F1(1, 78) = 4.87, p < .05, F2(1, 60) = 3.30, p = .074. The accuracy data were not normally distributed (verified by the Shapiro–Wilk test) and there is also a ceiling effect.

The ANOVA on reaction times revealed a main effect of semantic relatedness: participants were faster responding to semantically related than unrelated pairs of words, F1(1, 76) = 161.39, p < .001, F2(1, 60) = 53.51, p < .001. The interaction of semantics and phonology was significant in the analysis by subjects, F1(1, 76) = 47.54, p < .001, F2(1, 60) = 2.77, p = .10. Importantly, the three-way interaction of semantics, phonology and group was significant, F1(1, 76) = 32.51, p < .001, F2(1, 60) = 18.21, p < .001. Table 6 shows the mean RTs and probability of errors of both groups.

Table 6
Experiment 2. Mean reaction times and mean probability of error (standard deviations in brackets).

                                                      Semantically related                   Semantically unrelated
                                                      Phon. related      Phon. unrelated     Phon. related      Phon. unrelated
Mean reaction times (ms)  Late bimodal bilinguals     829.50 (155.78)    890.81 (160.86)     1118.90 (239.59)   1020.45 (173.46)
                          Hearing monolinguals        831.59 (179)       835.60 (175.55)     1000.49 (186.30)   988.31 (197.13)
Mean probability error    Late bimodal bilinguals     .006 (.021)        .005 (.019)         .006 (.02)         .006 (.03)
                          Hearing monolinguals        .009 (.026)        .01 (.031)          .017 (.028)        .017 (.031)

Late bimodal bilinguals
The 2 (Semantic relation) × 2 (Phonologic relation) ANOVA showed a significant main effect of semantic relatedness: late bimodal bilinguals were faster to respond to semantically related than unrelated pairs of words, F1(1, 39) = 80.66, p < .001, F2(1, 60) = 49.95, p < .001. The interaction of semantics and phonology was also significant, F1(1, 39) = 47.45, p < .001, F2(1, 60) = 7.51, p < .01. Follow-up comparisons revealed that late learners were faster to respond to semantically related words with phonologically related LSE translations than to those that had no underlying relation in LSE, t(39) = 4.5, p < .001. Late learners were also slower to respond to semantically unrelated words with phonologically related signed translations than to those with phonologically unrelated translations, t(39) = 5.49, p < .001.

Hearing L1 Spanish controls with no knowledge of LSE
The 2 × 2 ANOVA revealed a main effect of semantic relatedness, F1(1, 39) = 85.42, p < .001, F2(1, 69) = 45.02, p < .001. However, the interaction of semantics and phonology was not significant, F1(1, 39) = 2.29, p = .14, F2(1, 60) = .10, p = .75 (see Fig. 4).

Fig. 4. Mean reaction times (in ms) for late bimodal bilinguals (left) and for hearing participants with no knowledge of LSE (right). Error bars show the standard error of the mean (from the F1).
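Because both bilingual groups show the same facilitation-plus-inhibition pattern, a natural question is whether the size of that pattern differs between natives and late learners (the General discussion below reports a Group × Semantic × Phonology ANOVA on this point). The sketch that follows shows one simple, roughly equivalent check for two-level factors: compute each subject's interaction contrast and compare it between groups with an independent-samples t-test. Data, column names and the helper functions are illustrative assumptions, not the authors' analysis.

# A sketch of one way to compare the native and late-learner groups on the critical RT pattern.
# For two-level factors, a between-group t-test on each subject's interaction contrast mirrors
# the Group x Semantic x Phonology interaction test. All data and column names are assumed.
from scipy import stats

def interaction_contrast(cells):
    # cells: one mean RT per subject x sem_related x lse_phon_related (as in the ANOVA sketch).
    wide = (cells.set_index(["subject", "sem_related", "lse_phon_related"])["rt"]
                 .unstack(["sem_related", "lse_phon_related"]))
    inhibition = wide[(False, True)] - wide[(False, False)]    # slow-down for unrelated pairs
    facilitation = wide[(True, False)] - wide[(True, True)]    # speed-up for related pairs
    return inhibition + facilitation                           # size of the 2 x 2 interaction

def compare_groups(cells_native, cells_late):
    t, p = stats.ttest_ind(interaction_contrast(cells_native),
                           interaction_contrast(cells_late))
    print(f"native vs. late learners, interaction contrast: t = {t:.2f}, p = {p:.3f}")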
General discussion

This study investigated whether hearing bimodal bilinguals whose dominant language is spoken Spanish and who have acquired LSE at different ages activate signs when hearing words. The results were similar for both groups of bimodal bilinguals, native and late learners, while controls showed a different outcome.

All groups were significantly faster in answering the semantically related word pairs compared to the unrelated pairs, a well-established effect that provides support that participants were performing the basic experimental task adequately. More importantly, in native bimodal bilinguals and in late learners the (unseen) phonological relation in LSE had a different effect in each of the semantic conditions. On the one hand, in the presence of a semantic relation, native and late learner bimodal bilinguals showed a facilitatory effect when the implicit signs were phonologically related, since they were faster to respond compared to when the signs were phonologically unrelated. On the other hand, in the absence of a semantic relation, these two groups showed an inhibitory effect when the words were phonologically related in their LSE equivalents, since they were slower to respond than when the signs were phonologically unrelated. In contrast, the control group did not show either of these two effects, as their responses were very similar in each semantic condition regardless of the implicit phonological context. As predicted, this outcome in controls shows no effect from the implicit LSE, since they do not know the (hidden) language.

These results make clear that hearing bimodal bilinguals activate signs while processing spoken words. Moreover, there are no differences in this activation whether the bimodal bilinguals are natives or late learners. A possible outcome was that late learners would perform in the same fashion as the hearing late learners in the Morford et al. (2014) print-sign study, who only showed the inhibitory effect. However, both groups of bimodal bilinguals in our study exhibited the facilitatory effect in addition to the inhibitory effect.²

² To check this similar behavior in native and late signers, we performed a 2 (Group; CODAs vs. late learners) × 2 (Semantic relation; yes vs. no) × 2 (Phonologic relation; yes vs. no) repeated measures across subjects (F1) and across items (F2) ANOVA. The three-way interaction was not significant, all Fs < .05, all ps > .9.

Previous research has repeatedly demonstrated non-selective access to the codes in a monolingual context, mainly illustrated by the interference of the implicit language when there is a lack of semantic relation in bimodal bilinguals (Kubus et al., 2015; Morford et al., 2011, 2014). However, the facilitation effect had only been revealed in experiments run on deaf bimodal bilinguals native in ASL (Morford et al., 2011, 2014). Thus, this is the first study that reveals such strong activation in hearing bimodal bilinguals as well. Crucially, this parallel activation is shown when using the dominant language and it does not seem to depend on the AoA of the sign language.

Parallel activation effects (facilitation and inhibition) in hearing bimodal bilinguals may be enhanced by several differences with respect to previous studies: firstly, the phonological relationship of the implicit signs; secondly, the primary code associated with the task; and thirdly, code-blending experience in hearing bimodal bilinguals. Prior research conducted in deaf sign-print bilinguals showed the facilitatory and the inhibitory effect whether the deaf bilinguals were balanced in both languages (Morford et al., 2011) or ASL-dominant (Morford et al., 2014). However, in an experiment run with deaf balanced bilinguals of another language pair, DGS and written German (Kubus et al., 2015), only the inhibition effect was found. The authors related this dissimilar outcome to differences in the parameters that the signed translations shared. While in the ASL studies the common parameters were mainly movement and location, in the DGS study the overlapping parameters were handshape and location. Kubus et al. (2015) point out that signs that share movement are perceptually more similar than signs with a common location or handshape, and as such the phonological relation in the ASL studies was stronger. In the current LSE study, the three parameters mentioned frequently overlap in the signed translations in the [+phonology] conditions: location, handshape and movement (Appendix B).
In their print-sign study, Morford et al. (2014) argued that hearing English-dominant bimodal bilinguals did not show the facilitation effect because the written words in English are directly associated with their corresponding sounds and not with their equivalent signs. Deaf bimodal bilinguals, on the contrary, link the printed words to signs when they learn to read, so the connection between the written forms and the signs is more direct than in the hearing bimodal bilinguals’ case. Consequently, the effect was more salient (facilitation and inhibition) in the deaf bimodal bilinguals compared to the hearing bimodal bilinguals (only inhibition). Our experiment addresses this matter, since we have used the spoken form of the language as the explicit language. The presence of both effects (inhibitory and facilitatory) for hearing bimodal bilinguals when listening to spoken words suggests that cross-modal activation is more salient from the primary language modality (i.e. auditory). Prior research using the semantic relatedness paradigm in hearing unimodal bilinguals supports this strong connection for the primary language modality, as the parallel activation is of the sounds of the equivalent language translations, rather than of the written form (Wu & Thierry, 2010). Additionally, in contrast to previous studies (Giezen et al., 2015; Shook & Marian, 2012), the current experiment did not include visual cues to prompt the activation of signs, so the robustness of the parallel activation can be linked exclusively to the primary dynamic code of the spoken language. Further support for this comes from the third consideration: the connections established through the simultaneous use of both languages.

Preceding research implies that hearing bimodal bilinguals might connect the phonological forms of words with the phonological forms of signs as they can be articulated at the same time, a phenomenon known as code-blending. Bimodal bilinguals mostly produce (simultaneous) code-blends instead of the typical (sequential) code-switches that unimodal bilinguals perform. In a study with ASL-English bimodal bilinguals, these code-blends tended to contain semantically equivalent information in the spoken and in the signed language (Emmorey, Borinstein, Thompson, & Gollan, 2008). Most of the code-blends occurred with English as the matrix language (i.e. the language that provided the syntactic structure of the utterance) and ASL as the accompanying language. In fact, signs are produced with speech even when bimodal bilinguals know that their interlocutors do not know any sign language (Casey & Emmorey, 2009). This code-mixing situation changes when signing, as the spoken language (being the dominant language) is suppressed and appears less frequently in signed utterances. This suggests that signs are readily available when using the spoken language but not the other way around, as the dominant spoken language is more inhibited. Future work looking at cross-modal activation of the dominant (spoken) language in hearing bimodal bilinguals could confirm or refute this idea.

The association between spoken and sign phonological forms is strongly established in hearing bimodal bilinguals (Emmorey, Petrich, & Gollan, 2012). This solid connection could have given rise to a stronger parallel activation in the current experiment, compared with the weaker activation primed by printed words (Morford et al., 2014). More evidence for this robust bond between the spoken words and the visual signs in hearing bimodal bilinguals comes from the fact that parallel activation has occurred in the non-dominant language. This contrasts with most of the previous work, where the task was carried out in the L2 and the implicit language was the dominant L1. Our participants were not only highly proficient hearing bimodal bilinguals, whether native or late learners, but also had a uniformly high level of competence, as they all work as sign language interpreters. Therefore, although our experiment cannot provide evidence concerning proficiency, as there is currently no standard way to assess proficiency in LSE, the results seem to suggest that activation is driven by proficiency in this case rather than by AoA. In their print-sign experiment, Morford et al. (2014) were able to split the hearing bimodal bilinguals into two groups by proficiency, since the sign language experience of their participants was more heterogeneous. The more proficient ASL users showed a larger degree of inhibition compared to the less proficient, although, in contrast to the findings of the current study, this inhibitory effect did not reach significance. In spite of this, there is a factor that might have had some influence on the results: linguistic awareness. When bimodal bilinguals were recruited, although it was done through direct contact, that is, there was no experiment announcement asking explicitly for a specific profile, some of them might have been conscious of their linguistic background, as they could relate their recruitment to the fact that they are sign language users. This awareness could have led to greater activation of LSE while carrying out the experimental task. However, the information they had about the study and the task was quite restricted, as they only knew that they had to perform a task concerning semantic relations in Spanish words. Spanish Sign Language was never mentioned, so the potential impact of linguistic awareness is likely to be very limited. Additionally, the task formed part of a larger battery of experiments, so it is quite unlikely that participants were aware that LSE was relevant to the task in hand.

A related issue is the fact that all the bimodal bilinguals were sign language interpreters, and the effect that this could have on the results. It is an open question whether our results will also hold true for other populations. Unfortunately, this is a naturally occurring confound in the population of hearing sign language users: highly proficient signers tend to be interpreters and it is difficult to find signers matched in proficiency who are not interpreters. Nevertheless, future research ought to examine the relative roles of proficiency and interpreting experience by looking at whether native or late bimodal bilinguals who are not working as sign language interpreters or who are not using the sign language on an everyday basis would perform similarly to our participants.

Finally, our results sit well with a recent study that provides evidence of cross-language activation also in production (Giezen & Emmorey, 2015). This study demonstrated that sign production was influenced by the parallel activation of the equivalent sign of spoken distractor words. The cross-language and cross-modal activation occurred at the lexical level, and could not occur at the sub-lexical level since there is no formal overlap between the two languages.
Thus, our findings in hearing bimodal bilinguals support that there is cross-language activation at the lexical and/or semantic level, given that the two languages do not share phonological features. This adds new evidence to revise some models concerning bilingual word recognition, such as the Bilingual Interactive Activation (BIA+) model (Dijkstra & Van Heuven, 2002) or the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS) (Shook & Marian, 2013), as they have emphasized the contribution of cross-linguistic phonological overlap, among other factors. These models focus on unimodal bilingualism, but they can benefit from the contributions provided by research in cross-modal bilingualism.

In conclusion, our results confirm that non-selective activation traverses modality, even for hearing bimodal bilinguals who have acquired the sign language at different ages. Spanish and LSE do not share any phonological forms, and yet we provide evidence here that signs are activated when hearing bimodal bilinguals just listen to words, in the absence of visual stimuli. Furthermore, this study confirms that (cross-modal) parallel activation may occur in the L2 when the L1 is being used, and not only in the L1 when the L2 is used. For proficient bilinguals who use their languages on an everyday basis, both codes are cognitively active even when they are dealing explicitly with only one of them, and even when those languages operate in distinct modalities.

Acknowledgments

This project would not have been possible without the help of the following Sign Language Interpreter (SLI) Associations: FILSE (Spanish Federation of SLIs), CILSE-CyL (SLI Center in Castilla y León), ESHIE (SLI Association in the Basque Country) and CILSEM (SLI Association in Madrid). We are also grateful to the Deaf Associations that have provided their staff and premises to run the experiment: APERSORVA (Deaf Association in Valladolid), Euskal Gorrak (Deaf Federation in the Basque Country) and ASORNA (Deaf Association in Navarra). We would also like to thank Eunate in Navarra (Association of families with deaf members). Two universities have also provided useful spaces in which to run the experiment: the University of the Basque Country in Leioa, and the University of Valladolid. Special thanks to the Institute Botikazar in Bilbao for allowing us to use their premises.

Various colleagues have provided invaluable support: Noemí Fariña, Marcel Giezen, Martijn Baart, David Carcedo, Peter Boddy, Aina Casaponsa, Doug Davidson, Eneko Antón, Jon Andoni Duñabeitia, Clara Martin and the students in the Master in Cognitive Neuroscience of Language 2013/14.

Finally, we are truly indebted to all the sign language interpreters and the hearing controls that have participated in the experiment.

This research was partially funded by Grant PSI2012-31448 from the Spanish Ministry of Economy and Competitiveness and by financial assistance as a Severo Ochoa Center of Excellence, SEV-2015-0490.

Appendix A. Spanish words

Semantically related                                  Semantically unrelated
Phon. related   Phon. unrelated   Target              Phon. related   Phon. unrelated   Target
prime           prime                                 prime           prime
venta           carrito           compra              edad            carne             signo
bebida          ducha             agua                boli            diente            champán
damas           juego             ajedrez             criada          suerte            golfo
abuelo          sobrino           nieto               cosa            banco             sexo
pasado          presente          futuro              chocolate       plaza             miedo
primo           padre             tío                 bici            papel             cojo
flojo           fuerte            débil               ciclo           mueble            uvas
suegra          hija              madre               lupa            mito              porro
bajar           escalar           subir               ganas           lápiz             nube
lunes           jueves            martes              teoría          factura           domingo
juntar          separar           unir                campana         bingo             pera
corto           ancho             largo               préstamo        susto             queso
profesor        escuela           alumno              pueblo          tacón             palabra
médico          hospital          enfermo             normal          sucio             soltero
claro           negro             oscuro              hucha           cuadro            gota
pistola         batalla           guerra              noviembre       seta              militar

Appendix B. LSE translations of the Spanish words

Shared formational parameters of the phonologically related sign pairs (hs = handshape, ori = orientation, mov = movement, loc = location).

Semantically related (phonologically related pairs)       Semantically unrelated (phonologically related pairs)
Prime        Target      Shared parameters                 Prime        Target      Shared parameters
VENTA        COMPRA      hs, loc                           EDAD         SIGNO       mov, loc
BEBIDA       AGUA        mov, ori, loc                     BOLI         CHAMPÁN     ori, loc
DAMAS        AJEDREZ     mov, loc                          CRIADA       GOLFO       mov, loc
ABUELO       NIETO       mov, loc                          COSA         SEXO        hs, loc
PASADO       FUTURO      hs, ori                           CHOCOLATE    MIEDO       hs, loc
PRIMO        TÍO         hs, loc                           BICI         COJO        hs, ori, loc
FLOJO        DÉBIL       hs, ori                           CICLO        UVAS        hs, ori, loc
SUEGRA       MADRE       hs, ori, loc                      LUPA         PORRO       mov, hs, ori
BAJAR        SUBIR       hs, loc                           GANAS        NUBE        mov, hs, ori
LUNES        MARTES      mov, loc                          TEORÍA       DOMINGO     hs, ori, loc
JUNTAR       UNIR        mov, loc                          CAMPANA      PERA        mov, ori, loc
CORTO        LARGO       hs, ori, loc                      PRÉSTAMO     QUESO       hs, ori, loc
PROFESOR     ALUMNO      mov, hs, loc                      PUEBLO       PALABRA     mov, hs, ori
MÉDICO       ENFERMO     hs, ori, loc                      NORMAL       SOLTERO     hs, ori
CLARO        OSCURO      ori, loc                          HUCHA        GOTA        mov, ori
PISTOLA      GUERRA      mov, ori, loc                     NOVIEMBRE    MILITAR     mov, ori, loc
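The pairs in Appendix B were classed as phonologically related because their LSE signs share at least two of the four formational parameters. A toy sketch of that criterion follows; the parameter values are made-up placeholders, not a phonological transcription of the actual LSE signs.

# A sketch of the phonological-relatedness criterion behind Appendix B: two LSE signs count as
# related if they share at least two of the four formational parameters. The feature values
# below are illustrative placeholders only.
PARAMETERS = ("handshape", "orientation", "location", "movement")

def shared_parameters(sign_a: dict, sign_b: dict) -> list:
    return [p for p in PARAMETERS if sign_a.get(p) == sign_b.get(p)]

def phonologically_related(sign_a: dict, sign_b: dict) -> bool:
    return len(shared_parameters(sign_a, sign_b)) >= 2

# usage with made-up feature values:
venta  = {"handshape": "flat", "orientation": "down", "location": "neutral", "movement": "arc"}
compra = {"handshape": "flat", "orientation": "up",   "location": "neutral", "movement": "straight"}
print(shared_parameters(venta, compra), phonologically_related(venta, compra))
# -> ['handshape', 'location'] True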

References

Boersma, P., & Weenink, D. (2014). Praat: Doing phonetics by computer [Computer program]. Version 5.1.71. <http://www.praat.org> Retrieved 09.04.14.
Campbell, R., Martin, P., & White, T. (1992). Forced choice recognition of sign in novice learners of British Sign Language. Applied Linguistics, 13, 185–201.
Carreiras, M., Gutiérrez-Sigut, E., Baquero, S., & Corina, D. (2008). Lexical processing in Spanish Sign Language (LSE). Journal of Memory and Language, 58, 100–122.
Casey, S., & Emmorey, K. (2009). Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes, 24, 290–312.
Chee, M. W., Tan, E. W., & Thiel, T. (1999). Mandarin and English single word processing studied with functional magnetic resonance imaging. The Journal of Neuroscience, 19, 3050–3056.
Corina, D. P., & Hildebrandt, U. C. (2002). Psycholinguistic investigations of phonological structure in ASL. In R. P. Meier, K. Cormier, & D. Quinto-Pozos (Eds.), Modality and structure in signed and spoken languages (pp. 88–111). Cambridge: Cambridge University Press.
Costa, A., Caramazza, A., & Sebastian-Galles, N. (2000). The cognate facilitation effect: Implications for models of lexical access. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1283.
Dijkstra, T., & Van Heuven, W. J. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition, 5, 175–197.
Duchon, A., Perea, M., Sebastián-Gallés, N., Martí, A., & Carreiras, M. (2013). EsPal: One-stop shopping for Spanish word properties. Behavior Research Methods, 45, 1246–1258.
Emmorey, K., Borinstein, H. B., Thompson, R., & Gollan, T. H. (2008). Bimodal bilingualism. Bilingualism (Cambridge, England), 11, 43.
Emmorey, K., & Corina, D. (1990). Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills, 71, 1227–1252.
Emmorey, K., Corina, D., & Bellugi, U. (1995). Differential processing of topographic and referential functions of space. In K. Emmorey & J. Reilly (Eds.), Language, gesture and space (pp. 43–62). Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, K., Petrich, J. A., & Gollan, T. H. (2012). Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language, 67, 199–210.
Giezen, M. R., Blumenfeld, H. K., Shook, A., Marian, V., & Emmorey, K. (2015). Parallel language activation and inhibitory control in bimodal bilinguals. Cognition, 141, 9–25.
Giezen, M. R., & Emmorey, K. (2015). Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture–word interference. Bilingualism: Language and Cognition, 1–13.
Hoshino, N., & Kroll, J. F. (2008). Cognate effects in picture naming: Does cross-language activation survive a change of script? Cognition, 106, 501–511.
Illes, J., Francis, W. S., Desmond, J. E., Gabrieli, J. D., Glover, G. H., Poldrack, R., ... Wagner, A. D. (1999). Convergent cortical representation of semantic processing in bilinguals. Brain and Language, 70, 347–363.
Klein, D., Milner, B., Zatorre, R. J., Meyer, E., & Evans, A. C. (1995). The neural substrates underlying word generation: A bilingual functional-imaging study. Proceedings of the National Academy of Sciences, 92, 2899–2903.
Kolb, P. (2008). DISCO: A multilingual database of distributionally similar words. In Proceedings of KONVENS-2008, Berlin.
Kolb, P. (2009, May). Experiments on the difference between semantic similarity and relatedness. In Proceedings of the 17th Nordic conference on computational linguistics – NODALIDA'09.
Kubus, O., Villwock, A., Morford, J. P., & Rathmann, C. (2015). Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics, 36, 831–854.
Libben, M. R., & Titone, D. A. (2009). Bilingual lexical access in context: Evidence from eye movements during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 381.
Marian, V., & Spivey, M. (2003). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition, 6, 97–115.
Mayberry, R. I., & Eichen, E. B. (1991). The long-lasting advantage of learning sign language in childhood: Another look at the critical period for language acquisition. Journal of Memory and Language, 30, 486–512.
Morford, J. P., Kroll, J. F., Piñar, P., & Wilkinson, E. (2014). Bilingual word recognition in deaf and hearing signers: Effects of proficiency and language dominance on cross-language activation. Second Language Research, 30, 251–271.
Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F. (2011). When deaf signers read English: Do written words activate their sign translations? Cognition, 118, 286–292.
Neville, H. J., Coffey, S. A., Lawson, D. S., Fischer, A., Emmorey, K., & Bellugi, U. (1997). Neural systems mediating American Sign Language: Effects of sensory experience and age of acquisition. Brain and Language, 57, 285–308.
Newman, A. J., Bavelier, D., Corina, D., Jezzard, P., & Neville, H. J. (2002). A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience, 5, 76–80.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2012). Cross-language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition, 15, 288–303.
Petitto, L. A., Katerelos, M., Levy, B. G., Gauna, K., Tétreault, K., & Ferraro, V. (2001). Bilingual signed and spoken language acquisition from birth: Implications for the mechanisms underlying early bilingual language acquisition. Journal of Child Language, 28, 453–496.
Rodriguez-Fornells, A., Rotte, M., Heinze, H. J., Nösselt, T., & Münte, T. F. (2002). Brain potential and functional MRI evidence for how to handle two languages with one brain. Nature, 415, 1026–1029.
Schwartz, A. I., Kroll, J. F., & Diaz, M. (2007). Reading words in Spanish and English: Mapping orthography to phonology in two languages. Language and Cognitive Processes, 22, 106–129.
Shook, A., & Marian, V. (2012). Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition, 124, 314–324.
Shook, A., & Marian, V. (2013). The bilingual language interaction network for comprehension of speech. Bilingualism: Language and Cognition, 16, 304–324.
Spivey, M. J., & Marian, V. (1999). Cross talk between native and second languages: Partial activation of an irrelevant lexicon. Psychological Science, 10, 281–284.
Thierry, G., & Wu, Y. J. (2007). Brain potentials reveal unconscious translation during foreign-language comprehension. Proceedings of the National Academy of Sciences, 104, 12530–12535.
Wu, Y. J., & Thierry, G. (2010). Chinese–English bilinguals reading English hear Chinese. The Journal of Neuroscience, 30, 7646–7651.
Zachau, S., Korpilahti, P., Hämäläinen, J. A., Ervast, L., Heinänen, K., Suominen, K., ... Leppänen, P. H. (2014). Electrophysiological correlates of cross-linguistic semantic integration in hearing signers: N400 and LPC. Neuropsychologia, 59, 57–73.
