Author manuscript; available in PMC: 2019 Apr 1.
Published in final edited form as: Sci Eng Ethics. 2017 Mar 20;24(2):361–391. doi: 10.1007/s11948-017-9876-4

The Role of Culture and Acculturation in Researchers’ Perceptions of Rules in Science

Alison L Antes 1, Tammy English 2, Kari A Baldwin 3, James M DuBois 4
PMCID: PMC5607071  NIHMSID: NIHMS869667  PMID: 28321685

Introduction and Background

A number of rules guide conduct in research in the United States (U.S.). These rules include federal research regulations and statutes, norms of science that enjoy a consensus in the research ethics literature, and professional ideals that researchers are encouraged to strive towards. The consequences of violating these different kinds of rules vary widely, from jail time or debarment from federal funding for data fabrication to the displeasure of peers or trainees for failing to be generous with one's time. Thus, these rules and how researchers view them have significant implications for professional behavior in research. However, not everyone performing research in the U.S. may view the rules in the same manner, especially given the cultural diversity of researchers working in the U.S. Culture exerts a strong influence on how individuals interpret situations, select their actions, and view social interactions, as well as what they believe to be acceptable or unacceptable behaviors (Knafo et al. 2011; Taylor et al. 2004; Resick et al. 2011; Markus and Kitayama 1991). Yet, the research community has largely overlooked the potential influence of culture on the perspectives and behavior of researchers.

Understanding the complex rules that guide scientific behavior in the U.S. requires knowing not only the content of the rules but the seriousness of violating different rules. Appreciating the seriousness of violating a rule requires recognizing how it is actually interpreted and applied in context, and this may be a more culturally sophisticated task than simply knowing the content of the rules. If U.S. researchers and their international counterparts perceive rules in science differently, this may put them at a disadvantage when collaborating internationally, or when working outside of their home countries. Our aim in this study was to identify how National Institutes of Health (NIH)-funded researchers working in the U.S. who were born in the U.S. and outside the U.S. perceive the rules in science as they are appraised in the U.S. context. Specifically, we examined how U.S.-born and non-U.S.-born researchers evaluated the seriousness of violations of regulations, norms, and ideals; how much they discriminated between these categories of rules; and how accurately they predicted the average U.S. Research Integrity Officer’s (RIO) evaluations of the seriousness of violations of the rules.

Rules in Scientific Research

Many rules, based on principles such as honesty, objectivity, openness, and respect, govern the work of researchers (Shamoo and Resnik 2015). These rules address such practices as handling data, interacting with research participants, and reporting research results, and they are meant to foster the reliability of the scientific record and trust in the scientific enterprise (DuBois 2004; Kreutzberg 2004; Shamoo and Resnik 2015). Failure to follow rules for responsible science can undermine scientific progress (Korenman et al. 1998) and result in penalties to institutions or investigators (DuBois et al. 2016a). However, there is great variation in the seriousness of violating different rules in science, which is reflected in the variable enforcement of the rules and the different consequences associated with them (DuBois 2004).

The variable nature of rules in science might puzzle a newcomer to research (Pimple 2002). Indeed, the rules address issues from matters of life and death (e.g., rules for the protection of human participants in clinical trials) to professional etiquette (e.g., rules for interacting with colleagues) (Pimple 2002; Steneck 2006). The rules expressed in government statutes and regulations in the U.S. have the force of law, and violations of these rules are often viewed as severe. Among these rules are laws regarding fabrication, falsification, and plagiarism (FFP). FFP constitute the U.S. federal definition of research misconduct (Steneck 2007). U.S. federal regulations also address disclosure of conflicts of interest, the care and use of animal subjects, and the protection of human participants, though institutions rather than researchers often suffer the most severe penalties for failures in these domains (DuBois 2004), and none are labeled as "research misconduct."

In contrast, another set of norms in science serve as professional ideals or aspirations for behavior in research. They include guidelines for workplace civility and collegiality, and for social-mindedness (Pimple 2002). As we have addressed them, they could also include personal aspirations for one’s research career such as publishing extensively in high-impact journals or serving on the committees of professional associations. These guidelines are not formally expressed in regulations and violations are rarely met with formal consequences (Pimple 2002). For example, violations of collegiality could undermine trust and might result in attracting fewer collaborators or building less effective scientific teams (Adams 2014), but violating such rules would not result in formal penalties from institutions or funding agencies.

A third set of rules is based on generally accepted scientific norms, such as pursuing knowledge impartially and sharing scientific findings openly (Anderson et al. 2010). These rules address responsible practices for activities ranging from assigning authorship to reporting research findings and performing peer review (Macrina 2014). The significance assigned to these rules falls between that of research regulations and professional ideals. Such scientific norms may be stated formally in institutional policies or professional codes of ethics (Macrina 2007), but they may also be informal, or even implicit. Behaviors that violate these norms are often known as "questionable research practices" (John et al. 2012). Although scientists view violations of these rules as damaging to the scientific enterprise (De Vries et al. 2006), they report relatively frequent violations of them (John et al. 2012; Martinson et al. 2005). They also report ambiguity about following some of these rules in light of competing pressures in science and behaviors among scientists that contradict scientific norms (Anderson et al. 2007; De Vries et al. 2006). Furthermore, the consequences of violating these norms are often mild or unobservable.

In principle, following the rules in science is straightforward. This is a matter of knowing the rules and applying them in the decisions and actions in one’s work. In practice, this may not be so simple. Researchers are often busy and distracted from compliance; they compete with each other for funding; they may delegate tasks to staff, trainees or peers who may not know or be committed to following the rules; and they may feel pressure to take short cuts to increase productivity (Anderson and Adam 2014; DuBois et al. 2016a; Martinson et al. 2009). When establishing priorities for one’s scarce time, or deciding which shortcuts can be taken without significant harm to one’s career, it may be vital to understand diverse rules and norms as well as the significance attached to them. One might hope that researchers internalize the norms and values of science; nevertheless, oversight, punishment and reward systems also deeply influence professional behavior (Anderson and Adam 2014; DuBois 2004).

Culture and the Global Context of Research

Culture shapes people's patterns of thinking and the standards that guide their behavior (Heine and Ruby 2010; Taras et al. 2010). Among the most widely studied topics in cultural research are the value orientations of individuals from different cultures, in particular individualism versus collectivism (Aycan and Gelfand 2012; Knafo et al. 2011; Taras et al. 2010; Triandis 1995). Individualistic, Western cultures in North America and Western Europe tend to emphasize independence and the pursuit of personal achievements. On the other hand, collectivistic, Eastern cultures found in Asian nations tend to focus on interdependence and the interests of the group (Oyserman et al. 2002; Triandis 2001). In the workplace setting, culture influences people's views, practices, and interactions, such as their approaches to teamwork, strategies for negotiation, and attitudes to leadership (Dickson et al. 2003; Gelfand et al. 2013; Ralston et al. 1997; Ramesh and Gelfand 2010; Aycan and Gelfand 2012). However, only recently have members of the scientific community begun to consider the role of culture in the work of researchers and in research integrity (Heitman 2014; Hwang 2013; InterAcademy Partnership 2016; Steele et al. 2016; Nho 2016; Steneck 2013). Scholars have considered the need for cultural sensitivity with regard to culturally diverse research participants (Benatar 2000; Calamaro 2008; Marshall 2008), but generally have not examined such issues among culturally diverse researchers working together.

This limited attention is notable given that approximately 25% of the academic research faculty in science and engineering in the U.S. was born internationally (National Science Foundation 2014). Estimates suggest that between 50 and 60% of the internationally born scientists and engineers in the U.S. workforce are from Asian nations (Kent 2011; National Science Foundation 2014). Furthermore, global collaborations, co-authorship, and training programs are on the rise, and about half of the post-doctoral fellows in the U.S. are international (Boesz and Lloyd 2008; Garrison et al. 2005; Heitman 2014; National Science Foundation 2014).

Cross-cultural differences in research policy, practices, and professional interactions may shape different practices and views with regard to scientific integrity (Boesz and Lloyd 2008; Heitman 2014; Nho 2016). Work by Heitman and Litewka (2011), for example, highlighted cultural differences concerning the practice of plagiarism. Plagiarism is taken very seriously in the U.S.; indeed, it is part of the federal definition of research misconduct. However, international researchers may observe it commonly practiced in their home countries, be unfamiliar with policies formally and clearly defining it, or have different perspectives about intellectual property and authorship (Heitman and Litewka 2011). An Indian scientist contended that educational practices, such as teaching students to restate answers exactly, might contribute to plagiarism among Indian students and scientists (Chaurasia 2016). He also noted the contrast between the minor response to plagiarism in India versus nations like the U.S., where plagiarism seriously jeopardizes a scientist's career (Chaurasia 2016). Some evidence suggests that international and U.S. doctoral students differ in their acceptance of the norms of academic research in the U.S., which led the researchers to speculate that perhaps such normative orientations are shaped prior to doctoral studies (Anderson and Louis 1994). More broadly, cultures differ in whether they rely primarily on rules (rule-based cultures) or on relationships with authority figures (relationship-based cultures) to regulate behavior within their society (Hooker 2009; Pitta et al. 1999). As a result, behaviors viewed as corrupt in rule-based, Western cultures may be acceptable in relationship-based, non-Western cultures, and vice versa (Hooker 2009).

There is also modest evidence that international researchers are overrepresented in cases of research misconduct in the U.S. The senior author of the present study is the director of a remediation training program for researchers referred by their universities for failures in research compliance or integrity (e.g., violations of human or animal subjects protocols, plagiarism, or data fabrication). A slight majority of researchers referred have been international (DuBois et al. 2016a). This reflects twice as many international researchers attending the program as would be expected based on the proportion of international, faculty-level researchers working in the U.S. The course instructors offered an analysis of the root causes and suggested that at least some of the problems stemmed from differences in views about appropriate interactions in a research lab. For example, failure to provide adequate oversight of a post-doctoral researcher was linked to a lab director's concern that reviewing the post-doc's data and analyses would indicate mistrust (DuBois et al. 2016a). We propose these influences—different research policies, practices, interaction styles, and cultural values—may explain the cultural difference we observed in prior research among NIH-funded researchers working in the U.S. on a measure of professional decision-making (DuBois et al. 2016b; Antes et al. 2016). The measure requires individuals to interpret and apply rules and norms for responsible research in the U.S., and being born outside of the U.S. was associated with lower performance on the professional decision-making measure. More broadly, research suggests that cultural differences in individuals' reasoning styles and perceptual processes influence social judgments and approaches to problems (Nisbett and Miyamoto 2005; Peng and Nisbett 1999; Sachdeva et al. 2011), and therefore might influence approaches to professional decision-making.

The Present Study

In the present study, we examined whether cultural differences exist in researchers' perceptions of rules in science in the U.S. research context. Researchers operationalize culture according to visible characteristics, such as nation of birth, or underlying characteristics, such as values (Ayman and Korabik 2010). In this study, we operationalized culture according to nation of birth, grouping researchers born outside of the U.S., primarily in Asian nations, together. This allows a general comparison of cultures broadly considered Eastern and Western. Although this approach is commonly applied, it is limited in that it simply classifies individuals into groups; if differences are identified, it does not permit deeper analysis of precisely why they arise (Soares et al. 2007).

Our aim was to explore whether U.S.-born and non-U.S.-born researchers differ in their perceptions of rules. First, we examined U.S.-born and non-U.S.-born researchers' evaluations of the seriousness of violations of research regulations, scientific norms, and professional ideals. Next, we examined whether U.S.-born and non-U.S.-born researchers differed in how much they discriminated between the rule categories in their evaluations of seriousness. As described in the introduction, the three rule categories are fairly distinct in the U.S. research setting; however, we expected that discerning between the rule categories is a somewhat culturally bound task. For example, U.S.-born researchers are working in their home nation, and thus the distinctions might be more discernible, potentially due to greater alignment with their personal views or to more readily recognizing how the rules are applied in research—e.g., which rules are followed, which are not, and the nature of consequences associated with violations of the rules. Thus, our first two research questions were:

  • RQ1: Do researchers working in the U.S. who were born in the U.S. versus outside of the U.S. differ in their evaluations of the seriousness of violations of research regulations, scientific norms, and professional ideals?

  • RQ2: Do researchers working in the U.S. who were born in the U.S. versus outside of the U.S. differ in how much they discriminate in their evaluations between research regulations, scientific norms, and professional ideals?

To further explore the proposition that recognizing how research rules are viewed and applied in the U.S. might be a culturally bound task, we also examined U.S.-born and non-U.S.-born researchers' predictions of how the average U.S. Research Integrity Officer (RIO) would evaluate the seriousness of violations of the rules. While having personal views similar to those of RIOs might facilitate this task, in principle, one could perform it accurately to the extent that one recognizes how RIOs—individuals charged with fostering research integrity and responding to research misconduct—might view the seriousness of each category of rules. We anticipated that these judgments might be easier for those born in the U.S., leading to greater accuracy. Again, the mechanisms noted above might explain potential differences: more personal similarity in perceptions, or greater ease in identifying how rules are applied in context. Thus, our third research question was:

  • RQ3: Do researchers working in the U.S. who were born in the U.S. versus outside of the U.S. differ in the accuracy with which they predict RIOs’ evaluations of the seriousness of violations of the categories of rules?

If we observe that perceptions of rules in science differ by nation of origin, then it is pertinent to ask whether the influence of culture persists or changes as researchers become acculturated to the U.S. Although culture is a deeply rooted influence on human psychology and behavior, it is not necessarily static. As individuals are exposed to the culture of a host nation, particularly if they are educated there, acculturation occurs (Taras et al. 2013). Although it is typically a slow process, acculturation facilitates adjustment to the new culture and well-being (Sam and Berry 2010; Taras et al. 2013). The passage of time in a host country is necessary for acculturation, but acculturation involves deeper psychological and behavioral change in which individuals adopt the norms and patterns of behavior of the new culture (Birman and Simon 2014). It involves acquiring language proficiency; engaging in the cuisine, entertainment, sports, and media central to the host culture; and developing a sense of connection and belonging in the new culture (Lu et al. 2016; Mendenhall and Oddou 1985). Thus, our fourth research question was:

  • RQ4: Does acculturation to the U.S. explain differences in researchers’ discrimination between categories of rules or the accuracy of their predictions of RIOs’ evaluations of the rules?

Finally, we explored the potential for two additional variables to play a role in explaining perceptions of rules in science: amount of research experience and education in research ethics. We asked two additional questions:

  • RQ5: Does the level of research experience influence a researcher's discrimination between the categories of rules or the accuracy of their predictions of RIOs' evaluations of the rules?

  • RQ6: Does education in research ethics influence researchers’ discrimination between the categories of rules or the accuracy of their predictions of RIOs’ evaluations of the rules?

The many aspects of responsible research might be puzzling for researchers, especially for newcomers to the scientific enterprise (Pimple 2002). However, experience working in research presumably increases clarity regarding rules in science. With experience in a profession, people gain knowledge, not just technical knowledge about how to perform the work, but practical knowledge concerning professional problems and strategies for addressing them (Mumford et al. 2009a). Indeed, the purpose of socialization into a professional field is to encourage individuals to adopt the norms of the profession (Fisher et al. 2009a). Thus, we anticipated that more experience in research might be associated with greater distinctions in the seriousness of violating rules and more accuracy in predictions of RIOs' evaluations. Moreover, participation in research ethics education might be associated with an enhanced perception of rules. Theoretically, effective research ethics instruction would relate to a more sophisticated understanding of rules in science, reflected in discrimination between rule categories and accuracy in predicting RIOs' evaluations. We examined these research questions in a sample of NIH-funded researchers and trainees born in the U.S. and internationally.

Method

Participants

NIH-Funded Researchers

Two hundred and three NIH-funded researchers participated in this study. Fifty-four percent were male. Fifty percent were born in the U.S. (n = 101) and the remaining were born outside of the U.S. (n = 102). Of the non-U.S.-born participants, 82% were born in Asia. Participants reported their ethnicity as Asian (53%), White (46%), and Black (2%). Sixty percent indicated that English was their native language, while 40% indicated English was a second language. The majority (59%) were in their 30s, and 25% were in their 40s—the remaining were in their 20s (6%), 50s (8%), or 60+ (2%).

Participants reported all of their graduate degrees: 86% indicated a research doctorate (e.g., PhD, ScD), 21% a professional doctorate (e.g., MD, DO), and 18% a master’s degree (e.g., MPH, MS). Most participants earned their graduate degrees in the U.S. (89% of research doctorates, 81% of professional doctorates, and 97% of master’s degrees). Their academic ranks included post-doctoral trainee (35%), instructor (6%), assistant professor (37%), associate professor (9%), and full professor (10%); 3% selected “other”. Participants selected each type of research that was relevant to describe their work: 38% indicated human subjects social/behavioral, 33% human subjects clinical, 40% animal subjects, 21% dry lab, 59% wet lab, and 2% “other”. (Overall, 46% of the sample performed human subjects research.) Participants reported that they had conducted research for an average of 12.24 years (SD = 7.86; range 1 – 45 years). On average, participants had worked in 4 research labs or groups in their career thus far (SD = 1.63; range 1 – 12 labs/groups).

RIOs

One hundred and twelve institutional research officials participated. Nearly all were born in the U.S. (89%) and reported English as their native language (92%). The most common age for the officials was 60 years old or older (39%), followed by 30% in their 50s, 23% in their 40s, and the remaining 8% were 39 years old or younger. A slight majority (56%) were male. We asked the officials to report their professional title by indicating whether they were a "Research Integrity Officer", "Research Compliance Officer", or "Other". (They indicated all that applied; thus their professional titles add to more than 100%.) Research Integrity Officer was the most common title (51%), and 28% indicated that their title was Research Compliance Officer. Additionally, 46% reported "Other" and provided a title such as Chief Research Officer or Director of Research Integrity. Time in the position ranged from 0.5 to 39 years, with an average of 7.5 years (SD = 7.0).

Recruitment and Procedure

We identified principal investigators (PIs) and post-doctoral trainees funded by NIH through the NIH RePORTER database using criterion-based sampling. Specifically, we sought individuals across career stages (post-doctoral trainees, early career junior faculty, and mid-career and senior faculty) who were U.S.-born and non-U.S.-born. We searched one grant mechanism—fellowship (F) grants for postdoctoral trainees, career development (K) grants for junior investigators, and research (R01) grants for mid-career and senior investigators—at a time for active projects within a given year and narrowed the search by award notice date, pulling all individuals into our recruitment database until we reached a sampling frame large enough to meet our enrollment goals.

We obtained individuals' names, institutions, and email addresses from the project information provided by RePORTER. Using online web searches, we identified individuals' phone numbers, gender, and English as a second language (ESL) status. We aimed to constitute a sample that was about 50% U.S.-born and 50% born outside of the U.S. In 2009, the majority of international researchers (about 60%) working in the U.S. were from Asian backgrounds, primarily China and India (Kent 2011). Therefore, in this initial study of cultural differences among researchers, we aimed for the group of non-U.S. researchers to be comprised of individuals from Asian nations in order to reflect the group of international researchers most commonly represented among those working in the U.S. As defined by NIH and the U.S. Office of Management and Budget, Asian individuals come from the Far East, Southeast Asia, or the Indian subcontinent; for example, Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam (National Institutes of Health RePORT 2012; Office of Management and Budget 1997). Although Asian nations are heterogeneous, defining the groups in our sample in this manner allowed us to examine the influence of being from an Asian nation compared to being from the U.S., while identifying a population sufficiently large to enable us to meet enrollment goals. As we constructed our recruitment database, we used Asian surnames to estimate whether investigators or trainees were from an Asian background (Shah et al. 2010; Wong et al. 2010), in addition to biographical information posted online.

Recruitment consisted of emailing an initial invitation to a cohort of 883 researchers asking them to participate in a study that aimed to learn more about how NIH-funded researchers and trainees perceive research rules, norms, and values in science and think about research. The announcement indicated that the study would take one hour or less, and that participants would receive $50 for participating. We sent up to three reminder emails and made follow-up calls to investigators who had not participated. We monitored the demographic information reported by participants, specifically nation of birth and level of research experience, so that in the second round of recruitment we could send invitations to a cohort of potential participants in a targeted fashion. For example, if participation among senior investigators lagged behind junior investigators, we sent more recruitment invitations to senior investigators. The second cohort of recruitment included 179 individuals. In all, we approached 1062 individuals through our recruitment email. This excludes 42 invalid emails that did not reach the intended recipients. The survey platform does not indicate how many emails were definitely opened or how many went to junk mail. Therefore, the response rate based on raw emails sent was 19%. The survey platform does indicate how many of the survey links were accessed. Thus, of the 316 individuals who clicked on the survey link to access the study and view the initial informed consent information, the response rate was 64%. In past research with the same recruitment strategy, the response rate among those we know opened the email was 30% (Antes et al. 2016); thus we anticipate the true response rate is likely about 30%.
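The response-rate arithmetic described above can be reproduced in a few lines (a minimal sketch; all figures are taken directly from the text):

```python
# Figures reported in the recruitment description.
cohort_1 = 883                     # first recruitment cohort
cohort_2 = 179                     # second, targeted cohort
approached = cohort_1 + cohort_2   # 1062 individuals reached by email
clicked_link = 316                 # accessed the survey/consent page
participated = 203                 # NIH-funded researchers who enrolled

rate_raw = participated / approached        # denominator: all emails sent
rate_engaged = participated / clicked_link  # denominator: link clicks

print(f"raw: {rate_raw:.0%}, engaged: {rate_engaged:.0%}")  # raw: 19%, engaged: 64%
```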

We also recruited Research Integrity Officers (RIOs) from the 218 doctoral-granting, research-intensive universities in the U.S. (i.e., those designated as highest or higher research activity by the Carnegie Classification of Institutions of Higher Education) (Center for Postsecondary Research 2016). Recruitment of RIOs focused on identifying the individual with a title most closely resembling "Research Integrity Officer" at each university. We located the names, email addresses, and phone numbers of individuals with these titles using university websites. We sent recruitment invitations via email and up to three follow-up emails. The announcement invited individuals to participate in a 3–5 minute survey that aimed to learn more about how research administrators perceive rules and norms and think about research. We provided no incentive for participation to this group of participants. The response rate among RIOs was 51%.

The recruitment email messages sent to investigators, trainees, and RIOs included a link to the study hosted via the online survey platform Qualtrics. The investigators and trainees completed all of the measures described in the measures section. Additionally, they completed measures of work values, personal values, professional decision-making, and exposure to unprofessional research practices. This report focuses on the findings with regard to rules in science. The RIOs completed only the Evaluating Rules in Science Task, as their evaluations served as a comparison for the evaluations made by researchers.

Measures

Evaluating Rules in Science Task (ERST)

We developed the ERST for this study; it assesses how researchers and RIOs perceive the severity of violations of research regulations, scientific norms, and professional ideals. The ERST consists of 18 items (see Appendix): short statements describing research regulations (e.g., "Give credit when using the words of other people"), scientific norms (e.g., "Read and approve the final text of articles when listed as an author"), or professional ideals (e.g., "Be available to colleagues who need assistance"). The regulations, norms, and ideals scales each consist of six items. The research regulations items covered the topics of plagiarism, data falsification, complying with mandated training requirements, effort reporting, disclosure of conflicts of interest, and sharing data when required by funding agencies. The norms items included reading and approving final manuscripts when listed as an author, being objective in peer review, completely describing analyses in publications, reporting initial hypotheses regardless of results, and backing up data. The ideals were intentionally written to support discrimination among items. A sample item is, "Work hard to produce publications in high quality journals." The items were meant to represent positive aspirations, but of the sort that few would consider to be obligations that apply to all situations or trump regulations or norms of science when they conflict; hence, an isolated violation (e.g., refusing to join a committee) would be viewed as more minor than a violation of a regulation or norm of science.

The ERST presents the statements describing regulations, norms, and ideals in a mixed order. The instructions ask participants to rate how serious they personally consider a violation of each rule or ideal on a 6-point scale: 1 (not at all serious), 2 (somewhat serious), 3 (moderately serious), 4 (serious), 5 (very serious), and 6 (extremely serious). Researchers and RIOs completed this task. The researchers also completed a second task: they rated how serious they thought an average compliance officer working at their institution would consider a violation of each statement, on the same 6-point scale. The Cronbach's alpha coefficients indicated that the scales demonstrated satisfactory reliability. (For ratings of seriousness among investigators, regulations = .68, norms = .68, and ideals = .82. For RIOs' ratings of seriousness, regulations = .80, norms = .80, and ideals = .86. For researchers' predictions of RIOs' ratings, regulations = .78, norms = .84, and ideals = .87.)
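For readers unfamiliar with the reliability statistic reported here, Cronbach's alpha for a six-item scale can be computed from a respondents-by-items rating matrix as follows (a minimal illustration with fabricated ratings, not the study data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (n_items / (n_items - 1)) * (1 - item_vars / total_var)

# Fabricated ratings: 5 respondents x 6 items on the 1-6 seriousness scale.
ratings = np.array([
    [5, 6, 5, 4, 5, 6],
    [4, 5, 4, 4, 5, 5],
    [6, 6, 5, 5, 6, 6],
    [3, 4, 3, 2, 3, 4],
    [5, 5, 4, 4, 5, 5],
])
print(round(cronbach_alpha(ratings), 2))
```

Alpha approaches 1 when the items covary strongly relative to their individual variances, which is why it is read as internal consistency of a scale.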

The item development procedure for the ERST included identifying regulations and norms that pertain to all researchers working on federally-funded research in the U.S., regardless of their scientific field. For example, we did not include regulations that would be specific to research with human subjects, select agents, or animal research. We conducted a literature review using leading textbooks on research integrity and the responsible conduct of research (Burroughs Wellcome Fund, Howard Hughes Medical Institute 2006; Macrina 2014; National Academies of Science 2009; Shamoo and Resnik 2015; Steneck 2007) and reviews of these materials to identify relevant regulations and norms (DuBois et al. 2010; Kon et al. 2011). We expected that violations of most regulations and norms would be rated as somewhat serious; thus including professional ideals allowed us to further assess how much researchers distinguished across the three categories.

After the literature review, two members of the research team (A.L.A. and J.M.D.) drafted items. Next, all four authors met to review and revise the draft items and create new items as necessary. The team met three times for item writing. A research team member (K.A.B.) then conducted interviews with two regulatory experts at Washington University in St. Louis to verify that the regulation items accurately articulated regulatory or legal requirements of researchers. We also asked them whether the items captured all major requirements that pertained to all researchers regardless of their specific scientific discipline. The expert review led to minor revisions of the items.

We conducted cognitive interviews with individuals similar to our target participants to ensure the clarity and face validity of the items (Beatty and Willis 2007). We interviewed seven researchers representing diverse fields and career stages, and we included researchers from outside of the U.S. First, we asked participants to read the items on the ERST and share their general impressions. We asked whether the instructions were clear, and whether they would feel comfortable addressing the items. Next, we asked them to describe what they thought each item was about, and whether they would have difficulty responding to any items. The interviewer recorded notes and compiled the results for the item development team. Generally, the feedback indicated that the items were clear and required only minor revisions. We met twice after the cognitive interviews to revise and finalize the wording of the items. Finally, we computed Lexile scores for the items to approximate their reading level. The average Lexile score was 860; for reference, the interquartile range of Lexile scores for sixth graders in the U.S. is 690–1020 (MetaMetrics, 2016).

From the researchers’ ratings of how serious they personally viewed the ERST items, we computed rule seriousness scores for the three scales (i.e., research regulations, scientific norms, and professional ideals) by averaging the six items from each. In addition, we computed a seriousness score from the two specific items within the 6-item research regulations scale that constitute the federal definition of research misconduct, namely plagiarism and falsification, and called this scale “research misconduct regulations.” These two behaviors are generally punished much more severely than, say, failures to complete training requirements or to share data, and they are the primary focus of the work of RIOs.

To address our research question of whether U.S.-born and non-U.S.-born researchers differ in their discrimination of the seriousness of violating different categories of rules, we computed “rule discrimination” scores by taking the absolute difference between the researchers’ aggregate seriousness scores for each pairwise comparison (e.g., regulations versus ideals). This calculation produced five discrimination scores: research regulations versus professional ideals, research misconduct regulations versus professional ideals, research regulations versus scientific norms, research misconduct regulations versus scientific norms, and scientific norms versus professional ideals. The range of possible scores is 0 to 5; larger discrimination scores represent greater discrimination between the seriousness of violations of the categories of rules.
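The scoring described above is a simple mean-and-difference computation; a minimal sketch, using hypothetical ratings rather than the study's data:

```python
import numpy as np

def seriousness_score(item_ratings):
    """Aggregate seriousness: mean of the six 1-6 ratings within one scale."""
    return float(np.mean(item_ratings))

def rule_discrimination(score_a, score_b):
    """Absolute difference between two aggregate seriousness scores (0-5)."""
    return abs(score_a - score_b)

# Hypothetical ratings from one researcher (not the study's data)
regulations_items = [5, 6, 4, 5, 5, 6]   # mean ~ 5.17
ideals_items = [3, 2, 4, 3, 3, 3]        # mean 3.00

d_reg_vs_ideals = rule_discrimination(
    seriousness_score(regulations_items),
    seriousness_score(ideals_items),
)  # ~ 2.17: this researcher sees regulations as much more serious than ideals
```

Computing the same difference for each of the five scale pairings yields the full set of discrimination scores.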

An additional score generated from the ERST was the accuracy of the researchers’ predictions of RIOs’ ratings of the seriousness of rule violations. We computed these scores first at the item level, taking the absolute difference between the researcher’s prediction for each item and the average rating provided by the RIOs, and then aggregated them by averaging across the items from each of the four scales. For ease of interpretation, we reverse-coded these scores so that higher scores reflect greater accuracy. The range of possible accuracy scores is 0 to 5: a score of five indicates complete concordance between the researcher’s prediction and the average rating provided by RIOs, and a score of zero indicates complete discordance. The alpha reliability estimates for the accuracy scores were: regulations = .60, norms = .74, and ideals = .64. Attenuated reliability estimates are common when the component measures of difference scores are positively correlated. In turn, these scores will yield attenuated relationships with other variables, and we must bear this in mind when interpreting our findings.
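This reverse-coded difference score can be sketched as follows (the predictions and RIO averages below are hypothetical, not the study's data):

```python
import numpy as np

def prediction_accuracy(predictions, rio_mean_ratings):
    """Mean item-level absolute difference between a researcher's predictions
    and the RIOs' average ratings, reverse-coded so that 5 = complete
    concordance and 0 = complete discordance."""
    diffs = np.abs(np.asarray(predictions, dtype=float)
                   - np.asarray(rio_mean_ratings, dtype=float))
    return 5.0 - diffs.mean()

# Hypothetical values for one six-item scale (not the study's data)
researcher_predictions = [5, 6, 4, 5, 5, 6]
rio_average_ratings = [5.2, 5.5, 4.1, 5.0, 4.8, 5.6]

accuracy = prediction_accuracy(researcher_predictions, rio_average_ratings)
```

Because the hypothetical predictions sit close to the RIO averages, the resulting accuracy score is near the top of the 0-to-5 range.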

General Ethnicity Questionnaire

We measured acculturation with the 38-item abridged version of the General Ethnicity Questionnaire with American culture as the reference culture (Tsai et al. 2000). The first 25 items of the measure ask participants to indicate their level of agreement, on a 1 (strongly disagree) to 5 (strongly agree) scale, with statements about behaviors, practices, and preferences in a culture. Example items include: “I celebrate American holidays” and “I am proud of American culture”. The remaining 13 items ask participants to indicate how much, on a 1 (not at all) to 5 (very much) scale, they perform certain behaviors (e.g., “How much do you speak English with friends?”) and how fluently they speak, read, write, and understand English (e.g., “How fluently do you read English?”). We reverse-coded one item (“I am embarrassed/ashamed of American culture”) so that higher values reflect greater orientation to American culture, then averaged across all responses to compute acculturation scores. The Cronbach’s alpha in our sample was .92.
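Scoring this measure reduces to reverse-coding the one negatively worded item and averaging across all 38 responses; a minimal sketch, in which both the responses and the position of the reversed item are assumptions for illustration:

```python
import numpy as np

# Hypothetical responses to the 38 items, each on a 1-5 scale (not real data)
rng = np.random.default_rng(seed=0)
responses = rng.integers(1, 6, size=38).astype(float)

# The index of the negatively worded item ("I am embarrassed/ashamed of
# American culture") is assumed here purely for illustration
REVERSED_ITEM = 24
responses[REVERSED_ITEM] = 6 - responses[REVERSED_ITEM]  # reverse-code 1-5

# Higher scores reflect greater orientation to American culture
acculturation = responses.mean()
```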

Demographic Questionnaire

We used a brief background questionnaire to gather demographic information to characterize our sample and to capture nationality, experience in research, and hours of research ethics education. Participants were asked whether they were born in the United States. We used responses to this question to create comparison groups for our analyses. As noted in the introduction, grouping individuals according to being born in the U.S. versus outside of the U.S. allowed us to classify our sample into groups for comparison, but does not take into account potential differences among distinct cultures represented within these broad groupings. We also asked participants not born in the U.S. to indicate in which region they were born and in what nation they obtained their graduate degrees. We used responses to these questions to characterize our sample, but we did not have adequate sample sizes within specific regions to analyze the data at the level of individual regions.

The demographic questionnaire also asked participants to indicate their academic rank (i.e., postdoctoral trainee, instructor, assistant professor, associate professor, or professor). Using their response, we created a trainee status variable to represent those who were post-doctoral trainees versus faculty to use as a measure of their experience in research. We additionally used their responses to the number of years of experience they had working in research (which we defined as years doing research that led to their own and others’ publications) as another measure of experience.

Finally, the questionnaire asked participants to estimate how many hours of research ethics instruction they had completed. The prompt indicated that they should include instruction on RCR (Responsible Conduct of Research), human subjects protection, HIPAA (Health Insurance Portability and Accountability Act) patient privacy, animal care and use, and other topics commonly identified as “research ethics.” We used this estimate of hours of education (M = 30.85, SD = 26.20) as a continuous variable to explore its relationship with the outcomes. A self-report measure of hours of ethics instruction may be limited, but, lending support to its validity, this estimate (collected using the same question) correlated with conducting clinical human subjects research in a sample of 400 NIH-funded investigators (Antes et al. 2016); this group, indeed, generally has more hours of required training.

Data Analysis

We performed all analyses using both the 6-item research regulations scores and the 2-item research misconduct regulations scores. First, we examined the correlations of the potential predictor variables—nation of origin, acculturation, trainee status, years of experience in research, and hours of ethics instruction—with the primary outcome variables: rule discrimination and prediction accuracy.

Second, we examined our research question regarding potential differences between U.S.-born and non-U.S.-born researchers in discrimination between the seriousness of different categories of rules. To do so, we performed a repeated measures analysis of variance (RM-ANOVA) using the seriousness scores for the research regulations, scientific norms, and professional ideals scales as the outcome variables and nation of origin, rule category, and the nation of origin by rule category interaction as the predictor variables. In a separate analysis, we included the seriousness scores for the research misconduct regulations, scientific norms, and professional ideals scales as the outcome variables. Follow-up t-tests on the rule discrimination scores were conducted to understand the specific nature of any significant differences. We report Cohen’s d effect size estimates for the group differences. Traditional interpretations of Cohen’s d effect sizes include 0.20 = small, 0.50 = medium, and 0.80 = large (Cohen, 1988; Lakens, 2013).
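The reported effect sizes can be reproduced from the group summary statistics. The sketch below assumes the common pooled-standard-deviation formulation of Cohen's d (the exact variant used is not stated here); applied to the professional-ideals statistics in Table 2, it recovers the reported d = .61 to rounding:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return abs(mean1 - mean2) / pooled_sd

# Professional-ideals seriousness statistics reported in Table 2:
# U.S.-born M = 2.82, SD = .77, n = 101; non-U.S.-born M = 3.38, SD = 1.04, n = 102
d = cohens_d(2.82, 0.77, 101, 3.38, 1.04, 102)  # ~ 0.61, a medium-to-large effect
```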

Third, we examined our research question regarding potential differences between U.S.-born and non-U.S.-born researchers in prediction accuracy. Like the analyses for rule discrimination, we performed two RM-ANOVAs. The first analysis included the prediction accuracy scores for the research regulations, scientific norms, and professional ideals as the outcome variables and nation of origin, rule category, and the nation of origin by rule category interaction as the predictor variables. The second analysis included the prediction accuracy scores for the research misconduct regulations, scientific norms, and professional ideals as the outcome variables. We performed follow-up t-tests on the prediction accuracy scores to understand the specific nature of significant differences and calculated Cohen’s d effect size estimates.

Finally, to address whether acculturation explained any observed differences in rule discrimination or prediction accuracy between the U.S.-born and non-U.S.-born groups, we performed follow-up repeated measures analysis of covariance (RM-ANCOVA) tests. These analyses were identical to the RM-ANOVA procedures described above with the exception that acculturation and the acculturation by rule category interaction were added as predictors.

Research Ethics

This study was conducted in accordance with ethical standards for research with human participants. The Institutional Review Board at Washington University in St. Louis reviewed and approved the study (ID#201511060). All participants received an informed consent form before indicating their consent to participate and proceeding to the study procedures.

Results

Correlations

Table 1 presents the correlations of researchers’ rule discrimination and prediction accuracy scores with the predictor variables. Nation of origin correlated positively with rule discrimination. Being from the U.S. was associated with greater discrimination between the seriousness of all categories of rules, with the exception of research regulations versus scientific norms. Likewise, nation of origin correlated positively with prediction accuracy, except accuracy of predictions regarding scientific norms. Individuals from the U.S. were more accurate in their predictions of RIOs’ judgments of the seriousness of the categories of rules. Acculturation also correlated positively with rule discrimination; specifically distinguishing between research regulations and professional ideals, research misconduct regulations and professional ideals, and research misconduct regulations and scientific norms. Acculturation also correlated positively with accuracy of predictions of RIOs’ ratings of violations of research regulations and research misconduct regulations.

Table 1.

Correlations of Hypothesized Predictors with Researchers’ Rule Discrimination and Prediction Accuracy Scores

Nation of Origin Acculturation Trainee Status Years Doing Research Hours of Ethics Instruction
Rule Discrimination
 Research Regulations vs. Ideals .32** .21** .13 −.04 .03
 Research Misconduct Regulations vs. Ideals .40** .26** .17* −.02 .00
 Research Regulations vs. Norms .08 .09 .06 −.05 .06
 Research Misconduct Regulations vs. Norms .27** .24** .23** −.06 .02
 Scientific Norms vs. Professional Ideals .30** .14 .03 .01 −.02
Prediction Accuracy
 Research Regulations .17* .21** .19** −.15* −.03
 Research Misconduct Regulations .17* .25** .20** −.22** −.07
 Scientific Norms .02 .06 .10 −.08 −.07
 Professional Ideals .21** .12 .08 −.05 .01

Notes. N = 203. Variable codes for Nation of Origin: 0 = non-U.S.-born, 1 = U.S.-born; Variable codes for Trainee Status: 0 = not a trainee, 1 = trainee.

* p < .05; ** p < .01.

Hours of instruction in ethics did not correlate with rule discrimination or prediction accuracy. Thus, the answer to RQ6 appears to be no; using a basic measure of ethics education, education does not predict discrimination or accuracy. Trainee status correlated positively with rule discrimination and prediction accuracy, but to a lesser degree than nation of origin and acculturation, with fewer statistically significant correlations and correlations of smaller magnitude. Nonetheless, being a post-doctoral trainee rather than faculty correlated with greater rule discrimination and more accurate predictions. However, we had anticipated that more experience in research would be associated with greater discrimination and accuracy. We suspected that the trainee status variable might not be an adequate measure of low versus high levels of experience in research. Indeed, the trainee group had over 8 years (M = 8.21, SD = 2.71) of research experience, and non-trainees over 14 years (M = 14.40, SD = 8.82). However, the continuous variable “years doing research,” a second measure of experience, also revealed only modest correlations with some outcomes: it correlated negatively with prediction accuracy for research regulations and research misconduct regulations, but did not correlate with rule discrimination. This was, again, in the direction opposite to what we anticipated. Thus, in response to RQ5, the pattern of findings suggested a modest and mixed relationship between experience in research and the outcomes. Given the tenuous relationship of these measures of experience with the outcomes and the stronger pattern among the key predictors of interest, nation of origin and acculturation, we focused the remainder of our report of findings on the research questions regarding nation of origin and acculturation.

Researchers’ Seriousness Scores and Rule Discrimination

In Table 2, we report the means and standard deviations for researchers’ ratings of the seriousness of violations for each category of rules. As anticipated, all researchers rated violations of research regulations as the most serious, followed by scientific norms and professional ideals. Although this basic pattern was consistent among U.S.-born and non-U.S.-born researchers, the degree of seriousness attributed to the categories of rules (with the exception of research regulations) differed between the groups. Specifically, U.S.-born researchers rated the research misconduct regulations higher and the scientific norms and professional ideals lower than the non-U.S.-born researchers. As shown in Table 2, the differences between the groups for research misconduct regulations and professional ideals were statistically significant (p < .01).

Table 2.

Researchers’ Seriousness Scores

All Researchersa  U.S.-Born Researchersb  Non-U.S.-Born Researchersc

M SD M SD M SD t df p Cohen’s d

Research Regulations 4.84 .65 4.84 .55 4.83 .74 0.14 201 .886 .02
Research Misconduct Regulations 5.42 .64 5.56 .51 5.28 .72 3.15 201 .002 .45**
Scientific Norms 4.46 .74 4.39 .71 4.52 .77 1.27 201 .204 .18
Professional Ideals 3.10 .96 2.82 .77 3.38 1.04 4.32 201 .000 .61***

Notes.

a N = 203. b N = 101. c N = 102.

M = Mean. SD = Standard Deviation. Possible range of scores: 1 (not at all serious) to 6 (extremely serious).

** p < .01. *** p < .001.

The RM-ANOVA tests on the seriousness scores indicated that there was a significant main effect of rule category, F(2, 402) = 667.98, p < .001. More central to our research question, there was also a main effect of nation of origin, F(1, 201) = 5.90, p < .05, and a significant nation of origin by rule category interaction, F(2, 402) = 17.54, p < .001. In the parallel analysis using seriousness scores for the narrower research misconduct regulations scale compared to scientific norms and professional ideals, there was a main effect of rule category, F(2, 402) = 882.58, p < .001 and a significant nation of origin by rule category interaction, F(2, 402) = 28.03, p < .001; the main effect of nation of origin was not statistically significant, F(1, 201) = 2.52, p = .114.

Follow-up t-tests (presented in Table 3) revealed that, with the exception of research regulations versus scientific norms, U.S.-born researchers made greater distinctions between the seriousness of violations of the different categories of rules than non-U.S.-born researchers (p < .001). Thus, in response to RQ1, the findings suggest that U.S.-born versus non-U.S.-born researchers differed in their evaluations of the seriousness of violations of the categories of rules. In particular, U.S.-born researchers rated violations of research misconduct regulations as more serious than non-U.S.-born researchers, while non-U.S.-born researchers rated violations of professional ideals as more serious than U.S.-born researchers. In keeping with the findings for seriousness scores, the findings with regard to RQ2 indicate that U.S.-born researchers discriminated more between the seriousness of violating different categories of rules. For example, U.S.-born researchers perceived a much greater difference between the seriousness of violations of research misconduct regulations versus professional ideals than non-U.S.-born researchers.

Table 3.

Researchers’ Rule Discrimination Scores

U.S.-Born Researchersa  Non-U.S.-Born Researchersb

M SD M SD t df p Cohen’s d

Research Regulations vs. Ideals 2.03 .70 1.48 .91 4.82 201 .000 .69***
Research Misconduct Regulations vs. Ideals 2.74 .73 1.93 1.12 6.09 201 .000 .86***
Research Regulations vs. Norms .58 .46 .51 .41 1.15 201 .250 .16
Research Misconduct Regulations vs. Norms 1.18 .63 .84 .60 3.95 201 .000 .55***
Scientific Norms vs. Professional Ideals 1.57 .68 1.15 .71 4.39 201 .000 .62***

Notes.

a N = 101. b N = 102.

M = Mean. SD = Standard Deviation. Possible range of scores: 0 (no discrimination) to 5 (complete discrimination).

*** p < .001.

RIOs’ Seriousness Ratings and Researchers’ Prediction Accuracy

Table 4 presents the means and standard deviations for the RIOs’ evaluations of the seriousness of violations for each category of rules. Additionally, the table provides the means and standard deviations of the researchers’ predictions of the RIOs’ evaluations. As anticipated, the RIOs evaluated research regulations as most serious, followed by scientific norms and professional ideals.

Table 4.

Research RIOs’ Seriousness Scores and Researchers’ Predictions of Their Scores

RIOsa  All Researchersb  U.S.-Born Researchersc  Non-U.S.-Born Researchersd

M SD M SD M SD M SD

Research Regulations 5.28 .63 5.30 .71 5.51 .55 5.10 .79
Research Misconduct Regulations 5.54 .63 4.99 1.09 5.20 .99 4.77 1.15
Scientific Norms 4.38 .84 4.23 1.09 4.12 1.04 4.34 1.13
Professional Ideals 2.69 .95 2.91 1.16 2.57 .88 3.24 1.30

Notes.

a N = 112. b N = 203. c N = 101. d N = 102.

M = Mean. SD = Standard Deviation. Possible range of scores: 1 (not at all serious) to 6 (extremely serious).

In Table 5, we report the researchers’ prediction accuracy means and standard deviations for each rule category. The RM-ANOVA procedures examining accuracy scores indicated that there were significant main effects of rule category, F(2, 402) = 70.52, p < .001, and nation of origin, F(1, 201) = 7.28, p < .01. The nation of origin by rule category interaction approached statistical significance, F(2, 402) = 2.49, p = .084. A similar pattern was found in the parallel analysis focusing on the narrower research misconduct regulations, scientific norms, and professional ideals: there were significant main effects of rule category, F(2, 402) = 25.13, p < .001 and nation of origin, F(1, 201) = 8.16, p < .01, and the nation of origin by rule category interaction approached significance, F(2, 402) = 2.70, p = .069.

Table 5.

Researchers’ Prediction Accuracy

U.S.-Born Researchersa  Non-U.S.-Born Researchersb

M SD M SD t df p Cohen’s d

Research Regulations 4.22 .32 4.08 .49 2.38 201 .018 .34*
Research Misconduct Regulations 4.18 .75 3.90 .92 2.41 201 .017 .33*
Scientific Norms 3.75 .56 3.74 .59 0.22 201 .827 .01
Professional Ideals 3.74 .36 3.52 .63 3.09 201 .002 .43**

Notes.

a N = 101. b N = 102.

M = Mean. SD = Standard Deviation. Possible range of scores: 0 (completely inaccurate) to 5 (completely accurate).

* p < .05. ** p < .01.

Presented in Table 5 are the follow-up t-tests on the accuracy scores to determine the specific differences between the U.S.-born and non-U.S.-born groups. U.S.-born researchers were more accurate than non-U.S.-born researchers in their predictions for research regulations, research misconduct regulations, and professional ideals. There was no difference between the groups in terms of accuracy for scientific norms. Therefore, in response to RQ3, U.S.-born researchers were more accurate in their predictions of RIOs’ views of the seriousness of violating different categories of rules than researchers born outside of the U.S.

Acculturation to the U.S. and Researchers’ Seriousness Ratings

When we included acculturation as a covariate in the analysis of seriousness ratings for research regulations, scientific norms, and professional ideals, the effects we identified in the original analysis remained. Specifically, the main effect for nation of origin, F(1, 200) = 5.41, p < .05, the main effect for rule category, F(2, 400) = 6.12, p < .01, and the interaction of nation of origin and rule category, F(2, 400) = 11.28, p < .001, remained statistically significant. Neither the acculturation main effect, F(1, 200) = 0.26, p = .614, nor the acculturation by rule category interaction, F(2, 400) = 0.80, p = .450, were statistically significant.

Similarly, in the analysis of seriousness ratings that included research misconduct regulations, scientific norms, and professional ideals, the original effects remained, with the exception of the effect of nation of origin. Specifically, rule category, F(2, 400) = 5.90, p < .01, and the nation of origin by rule category interaction, F(2, 400) = 16.13, p < .001, were still statistically significant. However, the main effect of nation of origin dropped to become only marginally significant, F(1, 200) = 3.13, p = .078. Neither the acculturation main effect, F(1, 200) = 0.64, p = .427, nor the acculturation by rule category interaction, F(2, 400) = 1.66, p = .191, were statistically significant. These findings partially respond to RQ4, and suggest that acculturation to U.S. culture does not explain the effect of nation of origin on seriousness ratings.

Acculturation to the U.S. and Researchers’ Prediction Accuracy

Lastly, we examined whether acculturation explained the effects of nation of origin on prediction accuracy. In contrast to the analyses of seriousness ratings, adding acculturation to the analyses of researchers’ prediction accuracy scores changed the effects of nation of origin on prediction accuracy. In the analysis with the research regulations, scientific norms, and professional ideals, when acculturation was added to the model, there was no longer a significant main effect of nation of origin, F(1, 200) = 2.35, p = .127, main effect of rule category, F(2, 400) = 0.06, p = .944, or an interaction between nation of origin and rule category, F(2, 400) = 2.43, p = .089. However, there was also not a significant main effect of acculturation, F(1, 200) = 2.13, p = .146, or interaction of acculturation and rule category, F(2, 400) = 0.70, p = .499. Thus, acculturation explained some of the variation in prediction accuracy that nation of origin and rule category had explained initially.

In contrast, the analysis with the narrower research misconduct regulations, scientific norms, and professional ideals revealed that acculturation was a significant predictor of accuracy, and adding acculturation to the model removed the main effects of nation of origin, F(1, 200) = 1.67, p = .198, and rule category, F(2, 400) = 1.98, p = .140. The interaction between nation of origin and rule category was also not significant, F(2, 400) = 1.44, p = .239. However, there was now a significant main effect of acculturation, F(1, 200) = 4.87, p < .05, as well as an interaction between acculturation and rule category, F(2, 400) = 3.79, p < .05. Thus, more acculturated individuals were generally more accurate in their predictions. Follow-up regression analyses examining the specific nature of the interaction effect revealed that acculturation did not predict accuracy for norms (β = .06, t(201) = 0.80, p = .422), but that greater acculturation was associated with greater accuracy for research misconduct regulations in particular (β = .25, t(201) = 3.65, p < .001) and, to some degree, professional ideals (β = .12, t(201) = 1.78, p = .077). Overall, to complete the response to RQ4, these findings suggest that acculturation partially explains the differences between U.S.-born and non-U.S.-born researchers’ levels of accuracy, particularly for research misconduct regulations.

Discussion

The present study examined differences among researchers by nation of origin—those born in the U.S. versus those born primarily in Asia—in their evaluations of the seriousness of violations of different categories of rules in research and accuracy in predicting the views of RIOs. We found that nationality influenced researchers’ perceptions of rules in the U.S. research context. Researchers working in the U.S. who were born internationally perceived less distinction between the seriousness of violating different categories of rules than researchers born in the U.S. Their lower discrimination scores relative to U.S.-born researchers stemmed from evaluating violations of research misconduct rules as less serious and professional ideals as more serious than U.S. researchers. The difference between U.S. and non-U.S.-researchers’ evaluations of professional ideals was particularly pronounced. Perhaps the social nature of the professional ideals accounted for Asian researchers evaluating violations of these items as fairly serious, as Asian cultures emphasize interdependence and the needs of the group over the individual (Triandis 2001).

Nonetheless, recognizing the relative distinctions between the categories of rules may reflect deeper awareness of how these rules are viewed and applied in the U.S. research context, particularly in terms of their consequences if violated. This awareness may be important for one’s interactions, choices, and behaviors within the research setting. It is important to note that we presume that the reverse is also likely true: a U.S. researcher working internationally may not recognize as readily as locals how rules in research are interpreted and applied in that setting.

We also examined researchers’ predictions of the evaluations of the rules that they anticipated RIOs working in the U.S. would make. In this task, researchers considered not their personal evaluations of the rules, but how they thought research officials evaluate the rules. We found that international researchers were less accurate in their predictions of U.S. RIOs’ ratings of the seriousness of violating different rules, except for scientific norms, where there was no difference. Navigating professional life requires applying rules in one’s work practices. Even if individuals know the content of the rules, an aspect of professional acumen requires understanding the seriousness attributed to the rules in one’s specific professional context. Our findings suggest that this latter task is a more culturally sophisticated one. Without this understanding, an individual may be at greater risk of getting into trouble when working outside of their home country. It is of note that we observed these differences in perceptions despite the fact that most of the researchers born outside of the U.S. completed their scientific training in the U.S. This finding likely reflects the strong influence of culture and its relative persistence even with exposure to a host culture.

To obtain evidence regarding the potential for changes in perceptions as individuals acculturate to a host culture, we assessed acculturation and examined whether it accounted for the relationships observed between nation of origin, discrimination, and accuracy. Acculturation did not exert the same influence on individuals’ ratings of their personal views of the seriousness of violating different sets of rules as it did on their accuracy in predicting how a compliance officer would view violations of the rules. This may reflect the persistence of cultural influences on personal views and judgments even with experience in a new culture, but the ability to adapt one’s understanding of how others in a culture might view matters. It is of note that acculturation research suggests that adapting well to a host culture involves orienting oneself to the new culture, while retaining some connection to one’s home cultural identity (Sam and Berry 2010). Thus, adapting to rules as interpreted and applied in a particular nation does not preclude individuals from maintaining ties to their home culture.

While our findings with regard to acculturation suggest that experience in a culture may play some role in researchers’ perceptions of rules, we found that experience in research was less important than experience in the culture. It is not clear precisely why experience in research was limited in its influence on perceptions of rules, and in particular, why more experience was associated with lower accuracy and being a trainee with greater discrimination and accuracy. However, these findings were not entirely surprising given prior research on the influence of experience in research on other related outcomes; generally, experience was not associated with researchers’ scores on measures of ethical decision-making (Mumford et al. 2009a) or professional decision-making (Antes et al. 2016; DuBois et al. 2016b). Other work examining researchers’ subscription to norms and counternorms in science found that early-career and mid-career scientists generally subscribed to the norms of science equally, but differed in their subscription to counternorms, with mid-career researchers subscribing to most counternorms to a greater extent than early-career researchers (Anderson et al. 2007). Thus, it is generally unclear whether research experience may have a deleterious, beneficial, or neutral influence on researchers’ perceptions of rules in science.

We also observed no relationship between hours of ethics instruction and perceptions of rules. This too was not surprising, as we have previously found no effects of self-reported hours of research ethics instruction on professional decision making (Antes et al. 2016). Overall, the effect of research ethics instruction on various outcomes is notoriously mixed and tends to be moderate at best (Antes et al. 2009; Antes et al. 2010; Kalichman 2014b; Watts et al. 2016).

Implications for Education in the Responsible Conduct of Research (RCR)

Many rules and norms are recommended content for mandated training in RCR in the U.S. (Kalichman and Plemmons 2007; Kalichman 2014b; Steneck 2007). Our findings underscore the need for instructors to consider how they present the diverse rules and norms in science and to make the distinctions between them clear. This is important for newcomers to research generally, but particularly for international researchers. Researchers, especially senior researchers, may unwittingly take for granted the sources of rules in science, along with their interpretations, applications, and consequences. However, RCR education would best serve researchers if it were explicit about both the content and the consequences of rules. For example, it is arguably important for researchers to know that, in the U.S., plagiarism is punished more severely (e.g., through debarment from funding, public shaming on government websites, and possible termination of employment) than violations of authorship criteria. In a study of instructors’ goals for teaching knowledge in RCR courses, some instructors reported not covering research misconduct because they viewed such serious violations as a limited problem or did not see misconduct as the proper focus of RCR education (Plemmons and Kalichman 2007). Yet such an instructional omission may disproportionately disadvantage international researchers.

Heitman (2014) specifically noted the potential value of exploring research policy and regulatory frameworks for science in different countries as an activity in cross-cultural research ethics training. A framework for RCR education that helps researchers identify their own and others’ orientations and assumptions might be particularly essential given this diversity and complexity (DuBois et al. 2016b; Mumford et al. 2008). Cultural generalizations help in understanding common tendencies and patterns of behavior or belief within a group, but they serve well only when one also gathers further information about the specific individual and listens and interacts with that individual with empathetic awareness (Galanti 2000).

Finally, mentors should not assume that everyone shares the same perspective; rather, they should hold explicit conversations about research integrity and practices (Loue and Loff 2013). Trainees should also be empowered to start conversations about rules, standards, and practices (Kalichman 2014a). Many of our recommendations will require cross-cultural competence and sensitivity on the part of instructors, mentors, and trainees.

Limitations and Future Research

Our study employed a criterion-based sample of U.S.-born and non-U.S.-born, NIH-funded researchers across career stages (post-doctoral trainees, junior faculty, and mid- to senior-level investigators). We recruited intentionally to ensure roughly equal representation across these groups, and we stopped collecting data once we reached targeted enrollment. The study was time-consuming for participants (requiring approximately 45–60 minutes), which makes high participation rates difficult to obtain on a voluntary basis. Although our sample size was large enough to detect statistically significant differences between our groups, our ability to generalize is limited. We therefore recommend a replication study.

Although we identified differences by nationality in perceptions of rules, we do not know how these differences influence real-world behavior. It is reasonable to expect that interpretations of the rules influence the application of those rules in one’s work, but we did not establish that link in the present effort. We encourage further research that focuses on behavioral outcomes, even while acknowledging that obtaining such data is extremely difficult for reasons of ethics and practicality.

It is unclear from the present effort how researchers think about the “seriousness” of a violation. For example, is a violation serious because it is likely to be caught or punished, because it damages the scientific record, or because it damages relationships in the scientific work setting? It is unclear how culture might inform these judgments, but we know that different cultures have different bases for rules and view different behaviors as appropriate or inappropriate (Hooker 2009). We would recommend a qualitative approach to explore nuanced thinking about the content and importance of diverse rules and norms in research. For example, individuals might be asked to elucidate their thought processes in evaluating the seriousness of rule violations; this might reveal why Asian researchers evaluated violations of ideals as more serious relative to U.S. researchers.

We also do not know from the present study specifically what explains the differences in perceptions of rules that we identified by nationality, and it is critical in cultural research to discover the mechanisms underlying observed differences (Wang 2016). The following important research questions remain unanswered:

  • Are differences linked to different levels of observed adherence to the rules among colleagues and mentors (Mumford et al. 2009b)?

  • Are differences due simply to the prevalence of different norms for the conduct of research (Boesz and Lloyd 2008; Chaurasia 2016; Steneck 2013)?

  • Are differences due to different reward structures for scientific behaviors (DePellegrin and Johnston 2015; Jufang and Huiyun 2011)?

  • To what extent does the local institutional or lab ethical climate influence judgments about rules in science (Fisher et al. 2009b; Mumford et al. 2007; Wells et al. 2014; Martinson et al. 2016), and how does this interact with nation of origin?

  • Do researchers and trainees from the same nation gravitate together in labs when working abroad, thus reinforcing the values of the nation of origin?

  • Are there international differences in how rules are enforced and violations punished or remediated?

  • What is the influence of different professional and cultural values on learning the values of research (Nho 2016)?

In this study, we operationalized culture using nation of origin, and split our sample into two groups—those from the U.S. and those from outside of the U.S. The group not born in the U.S. primarily consisted of Asian researchers. Our current data do not allow us to distinguish differences that might exist among specific groups of researchers from other international backgrounds. Furthermore, grouping researchers from all Asian nations together allowed a general comparison of those from Eastern and Western cultures, but not an examination of potentially important differences among different Asian cultures. Indeed, the non-U.S.-born group of researchers tended to show greater variance in their average responses than U.S.-born researchers. Further research should explicitly examine different Asian cultures and might also employ alternatives to nation of birth for operationalizing culture, such as cultural values. Although nationality is a common proxy variable for culture, other measures may permit a more specific understanding of the mechanisms underlying the influence of culture. Future research should also be sensitive to individual differences, and to the fact that individuals who emigrate to pursue their careers may differ from the average individual within their home culture.

We also encourage future research to consider alternative approaches to measuring perceptions of rules and related outcomes to extend this work. The difference-scores approach applied in this study yielded attenuated reliabilities and therefore produced conservative findings. Additionally, we employed measures written in English. We consider this appropriate for this sample and the participants’ task: the language of research compliance and research integrity in the U.S. is English, so it is fitting to ask researchers to make these practical, professional judgments in that language. Additionally, the ERST measure was written at approximately a sixth-grade reading level, and the items were evaluated for clarity through cognitive interviews with Asian researchers before initiating the study. Also of note, nearly all the individuals in our sample trained in the U.S. and held NIH funding as principal investigators; their English proficiency would thus likely differ substantially from that of international researchers just joining research labs in the U.S. Nevertheless, cross-cultural research requires measures that are equally valid in all groups compared to ensure meaningful comparisons (Milfont and Fischer 2015).
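The attenuation of difference-score reliability noted above follows from classical test theory. As an illustrative aside (this standard psychometric formula is not drawn from the study itself), the reliability of a difference score D = X − Y can be written as:

```latex
% Reliability of a difference score D = X - Y (classical test theory).
% \rho_{XX'} and \rho_{YY'} are the reliabilities of the two component
% scores, \rho_{XY} is their correlation, and \sigma_X, \sigma_Y are
% their standard deviations.
\rho_{DD'} =
  \frac{\sigma_X^{2}\,\rho_{XX'} + \sigma_Y^{2}\,\rho_{YY'}
        - 2\,\rho_{XY}\,\sigma_X\,\sigma_Y}
       {\sigma_X^{2} + \sigma_Y^{2} - 2\,\rho_{XY}\,\sigma_X\,\sigma_Y}
```

When the two components are substantially correlated, as personal ratings and predicted ratings of the same items tend to be, the numerator shrinks faster than the denominator, so the reliability of the difference falls below that of either component; this is why difference scores yield conservative estimates.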

A related consideration comes from recent work suggesting that moral judgments differ in one’s native versus a second language. However, this work focused on basic moral reasoning scenarios (e.g., the trolley dilemma; consensual incest) (Costa et al. 2014; Geipel et al. 2015; Geipel et al. 2016) as opposed to applied workplace scenarios, where individuals may have learned and routinely encounter the relevant issues in their second language. Nonetheless, it is appropriate to consider how the language of measures within a given cross-cultural study might influence findings.

Finally, our study focused on U.S. norms and used the views of U.S. RIOs to generate a target for accuracy predictions. We assume that similar differences would be observed if our study design were reversed and U.S.-born researchers working, say, in China, were asked about rules and the seriousness of violating rules within Chinese research settings. We intended to focus on fundamental mechanisms such as nationality and acculturation; however, it is necessary to adopt the perspective of specific cultures when studying such matters. Future research should be conducted in diverse home nations.

Conclusion

Our findings suggest that the scientific community should not take for granted that all researchers understand in the same way the different categories of rules that govern scientific work in the U.S. We know that diverse perspectives in the workplace foster creative problem solving and innovation when leveraged effectively, but multicultural collaborations do involve challenges (Chua 2013; Dibble and Gibson 2013; Hwang 2013). Thus, to support high-quality, responsible research in today’s globalized scientific setting (InterAcademy Partnership 2016), we must better understand the role of culture and adapt our approaches to training, mentoring, and collaborating to best serve all researchers and society.

Appendix. Items from the Evaluating Rules in Science Task

Research Regulations and Statutes

  1. Disclose all financial conflicts of interest in research.

  2. Give credit when using the words of other people.

  3. Complete federally mandated training programs.

  4. Only report findings that accurately reflect research data.

  5. Share data when required by funding agencies.

  6. Ensure salary charged to grants reflects actual time spent on the project.

Norms of Science

  7. Read and approve the final text of articles when listed as an author.

  8. Completely describe your analysis approach in research publications.

  9. Be objective when reviewing the work of peers.

  10. Discuss limitations of studies when publishing results.

  11. Report initial hypotheses or research questions regardless of study results.

  12. Ensure all electronic data are backed up to protect them from loss.

Professional Ideals

  13. Be available to colleagues who need assistance.

  14. Volunteer to serve on committees of professional associations.

  15. Work hard to produce publications in high quality journals.

  16. Build relationships with the people working in compliance offices.

  17. Attend events aimed at building a community of scholars.

  18. Share study findings with the public.

Footnotes

Authors’ Note

The authors have no conflicts of interest to disclose. We would like to thank Mobolaji Fowose for assistance with the literature search.

Contributor Information

Alison L. Antes, Division of General Medical Sciences, Washington University School of Medicine, 4523 Clayton Avenue, Campus Box 8005, St. Louis, MO 63110, USA, 314-362-6006 (p), 314-454-5113 (f).

Tammy English, Department of Psychological & Brain Sciences, Washington University in St. Louis, One Brookings Drive, Campus Box 1125, St. Louis, MO 63130, USA, 314-935-3190 (p).

Kari A. Baldwin, Division of General Medical Sciences, Washington University School of Medicine, 4523 Clayton Avenue, Campus Box 8005, St. Louis, MO 63110, USA, 314-747-2703 (p), 314-454-5113 (f).

James M. DuBois, Division of General Medical Sciences, Washington University School of Medicine, 4523 Clayton Avenue, Campus Box 8005, St. Louis, MO 63110, USA, 314-747-2710 (p), 314-454-5113 (f).

References

  1. Adams LG. Putting together a scientific team: Collaborative science. Trends in Microbiology. 2014;22(9):483–485. doi: 10.1016/j.tim.2014.05.001.
  2. Anderson MS, Adam JA. A proposal for considering research integrity from the perspective of behavioral economics. Journal of Microbiology & Biology Education. 2014;15(2):173–176. doi: 10.1128/jmbe.v15i2.868.
  3. Anderson MS, Louis KS. The graduate student experience and subscription to the norms of science. Research in Higher Education. 1994;35(3):273–299. doi: 10.1007/BF02496825.
  4. Anderson MS, Martinson BC, De Vries R. Normative dissonance in science: Results from a national survey of U.S. scientists. Journal of Empirical Research on Human Research Ethics. 2007;2(4):3–14. doi: 10.1525/jer.2007.2.4.3.
  5. Anderson MS, Ronning EA, De Vries R, Martinson B. Extending the Mertonian norms: Scientists’ subscription to norms of research. Journal of Higher Education. 2010;81(3):366–393. doi: 10.1353/jhe.0.0095.
  6. Antes AL, Chibnall JT, Baldwin KA, Tait RC, Vander Wal JS, DuBois JM. Making professional decisions in research: Measurement and key predictors. Accountability in Research. 2016;23(5):288–308. doi: 10.1080/08989621.2016.1171149.
  7. Antes AL, Murphy ST, Waples EP, Mumford MD, Brown RP, Connelly S, et al. A meta-analysis of ethics instruction effectiveness in the sciences. Ethics and Behavior. 2009;19(5):379–402. doi: 10.1080/10508420903035380.
  8. Antes AL, Wang X, Mumford MD, Brown R, Connelly S, Devenport LD. Evaluating the effects that existing instruction on responsible conduct of research has on ethical decision making. Academic Medicine. 2010;85(3):519–526. doi: 10.1097/ACM.0b013e3181cd1cc5.
  9. Aycan Z, Gelfand MJ. Cross-cultural organizational psychology. In: Kozlowski SWJ, editor. The Oxford handbook of organizational psychology. Vol. 2. New York: Oxford University Press; 2012. pp. 1103–1160.
  10. Ayman R, Korabik K. Leadership: Why gender and culture matter. American Psychologist. 2010;65(3):157–170. doi: 10.1037/a0018806.
  11. Beatty PC, Willis G. Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly. 2007;71(2):287–311. doi: 10.1093/poq/nfm006.
  12. Benatar SR. A new look at international research ethics. British Medical Journal. 2000;321(7264):824–826. doi: 10.1136/bmj.321.7264.824.
  13. Birman D, Simon CD. Acculturation research: Challenges, complexities, and possibilities. In: Leong FTL, Comas-Díaz L, Nagayama Hall GC, McLoyd VC, Trimble JE, editors. APA handbook of multicultural psychology, Vol. 1: Theory and research. Washington, DC: American Psychological Association; 2014. pp. 207–230.
  14. Boesz C, Lloyd N. Collaborations: Investigating international misconduct. Nature. 2008;452(7188):686–687. doi: 10.1038/452686a.
  15. Calamaro CJ. Culture competence in research: Research design and subject recruitment. Journal of Pediatric Health Care. 2008;22(5):329–332. doi: 10.1016/j.pedhc.2008.05.007.
  16. Center for Postsecondary Research. Carnegie classification of institutions of higher education. Indiana University School of Education; 2016. Retrieved from http://carnegieclassifications.iu.edu/
  17. Chaurasia A. Stop teaching Indians to copy and paste. Nature. 2016;534:1. doi: 10.1038/534591a.
  18. Chua RYJ. The costs of ambient cultural disharmony: Indirect intercultural conflicts in social environment undermine creativity. Academy of Management Journal. 2013;56(6):1545–1577. doi: 10.5465/amj.2011.0971.
  19. Costa A, Foucart A, Hayakawa S, Aparici M, Apesteguia J, Heafner J, et al. Your morals depend on language. PLOS One. 2014;9(4):e94842. doi: 10.1371/journal.pone.0094842.
  20. Davis MS. The role of culture in research misconduct. Accountability in Research. 2003;10:189–201. doi: 10.1080/08989620390250207.
  21. De Vries R, Anderson MS, Martinson BC. Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics. 2006;1(1):43–50. doi: 10.1525/jer.2006.1.1.43.
  22. DePellegrin TA, Johnston M. An arbitrary line in the sand: Rising scientists confront the impact factor. Genetics. 2015;201(3):811–813. doi: 10.1534/genetics.115.182261.
  23. Dibble R, Gibson C. Collaboration for the common good: An examination of challenges and adjustment processes in multicultural collaborations. Journal of Organizational Behavior. 2013;34(6):764–790. doi: 10.1002/job.1872.
  24. Dickson MW, Den Hartog DN, Mitchelson JK. Research on leadership in a cross-cultural context: Making progress, and raising new questions. Leadership Quarterly. 2003;14:729–768. doi: 10.1016/j.leaqua.2003.09.002.
  25. DuBois JM. Is compliance a professional virtue of researchers? Reflections on promoting the responsible conduct of research. Ethics and Behavior. 2004;14(4):383–395. doi: 10.1207/s15327019eb1404_8.
  26. DuBois JM, Chibnall JT, Tait RC, Vander Wal JS. Lessons from researcher rehab. Nature. 2016a;534:173–175. doi: 10.1038/534173a.
  27. DuBois JM, Chibnall JT, Tait RC, Vander Wal JS, Baldwin KA, Antes AL, et al. Professional Decision-Making in Research (PDR): The validity of a new measure. Science and Engineering Ethics. 2016b;22(2):391–416. doi: 10.1007/s11948-015-9667-8.
  28. DuBois JM, Schilling D, Heitman E, Steneck NH, Kon A. Instruction in the responsible conduct of research: An inventory of programs and materials within CTSAs. Clinical and Translational Science. 2010;3(3):109–111. doi: 10.1111/j.1752-8062.2010.00193.x.
  29. Fisher CB, Fried AL, Feldman LG. Graduate socialization in the responsible conduct of research: A national survey on the research ethics training experiences of psychology doctoral students. Ethics and Behavior. 2009a;19(6):496–518. doi: 10.1080/10508420903275283.
  30. Fisher CB, Fried AL, Goodman SJ, Germano KK. Measures of mentoring, department climate, and graduate student preparedness in the responsible conduct of psychological research. Ethics and Behavior. 2009b;19(3):227–252. doi: 10.1080/10508420902886726.
  31. Galanti G. An introduction to cultural differences. Western Journal of Medicine. 2000;172(5):335–336. doi: 10.1136/ewjm.172.5.33.
  32. Garrison HH, Stith AL, Gerbi SA. Foreign post-docs: The changing face of biomedical science in the US. FASEB Journal. 2005;19(14):1938–1942. doi: 10.1096/fj.05-1203ufm.
  33. Geipel J, Hadjichristidis C, Surian L. How foreign language shapes moral judgment. Journal of Experimental Social Psychology. 2015;59:8–17. doi: 10.1016/j.jesp.2015.02.001.
  34. Geipel J, Hadjichristidis C, Surian L. Foreign language affects the contribution of intentions and outcomes to moral judgment. Cognition. 2016;154:34–39. doi: 10.1016/j.cognition.2016.05.010.
  35. Gelfand MJ, Brett J, Gunia BC, Imai L, Huang TJ, Hsu BF. Toward a culture-by-context perspective on negotiation: Negotiating teams in the United States and Taiwan. Journal of Applied Psychology. 2013;98(3):504–513. doi: 10.1037/a0031908.
  36. Heine SJ, Ruby MB. Cultural psychology. Wiley Interdisciplinary Reviews: Cognitive Science. 2010;1(2):254–266. doi: 10.1002/wcs.7.
  37. Heitman E. Cross-cultural considerations in U.S. research ethics education. Journal of Microbiology & Biology Education. 2014;15(2):130–134. doi: 10.1128/jmbe.v15i2.860.
  38. Heitman E, Litewka S. International perspectives on plagiarism and considerations for teaching international trainees. Urologic Oncology. 2011;29(1):104–108. doi: 10.1016/j.urolonc.2010.09.014.
  39. Hooker J. Corruption from a cross-cultural perspective. Cross Cultural Management. 2009;16(3):251–267. doi: 10.1108/13527600910977346.
  40. Burroughs Wellcome Fund, Howard Hughes Medical Institute. Making the right moves: A practical guide to scientific management for post-docs and new faculty. 2. Research Triangle Park, NC: Burroughs Wellcome Fund; Chevy Chase, MD: Howard Hughes Medical Institute; 2006.
  41. Hwang K. Effects of the language barrier on processes and performance of international scientific collaboration, collaborators’ participation, organizational integrity, and interorganizational relationships. Science Communication. 2013;35(1):3–31. doi: 10.1177/1075547012437442.
  42. InterAcademy Partnership. Doing global science: A guide to responsible conduct in the global research enterprise. Princeton, NJ: Princeton University Press; 2016.
  43. John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science. 2012;23(5):524–532. doi: 10.1177/0956797611430953.
  44. Jufang S, Huiyun S. The outflow of academic papers from China: Why is it happening and can it be stemmed? Learned Publishing. 2011;24(2):95–97. doi: 10.1087/20110203.
  45. Kalichman MW. A modest proposal to move RCR education out of the classroom and into research. Journal of Microbiology & Biology Education. 2014a;15(2):93–95. doi: 10.1128/jmbe.v15i2.866.
  46. Kalichman MW. Rescuing Responsible Conduct of Research (RCR) education. Accountability in Research. 2014b;21(1):68–83. doi: 10.1080/08989621.2013.822271.
  47. Kalichman MW, Plemmons DK. Reported goals for responsible conduct of research courses. Academic Medicine. 2007;82(9):846–852. doi: 10.1097/ACM.0b013e31812f78bf.
  48. Kent MM. More U.S. scientists and engineers are foreign-born. Population Reference Bureau. 2011. Retrieved from http://www.prb.org/Publications/Articles/2011/usforeignbornstem.aspx.
  49. Knafo A, Roccas S, Sagiv L. The value of values in cross-cultural research: A special issue in honor of Shalom Schwartz. Journal of Cross-Cultural Psychology. 2011;42(2):178–185. doi: 10.1177/0022022110396863.
  50. Kon AA, Schilling DA, Heitman E, Steneck NH, DuBois JM. Content analysis of major textbooks and online resources used in responsible conduct of research instruction. AJOB Primary Research. 2011;2(1):42–46. doi: 10.1080/21507716.2011.564263.
  51. Korenman SG, Berk R, Wenger NS, Lew V. Evaluation of the research norms of scientists and administrators responsible for academic research integrity. Journal of the American Medical Association. 1998;279(1):41–47. doi: 10.1001/jama.279.1.41.
  52. Kreutzberg GW. The rules of good science. EMBO Reports. 2004;5(4):330–332. doi: 10.1038/sj.embor.7400136.
  53. Loue S, Loff B. Mentoring international research ethics trainees: Identifying best practices. Journal of Empirical Research on Human Research Ethics. 2013;8(5):52–58. doi: 10.1525/jer.2013.8.5.52.
  54. Lu Y, Samaratunge R, Härtel CEJ. Predictors of acculturation attitudes among professional Chinese immigrants in the Australian workplace. Journal of Management & Organization. 2016;22(1):49–67. doi: 10.1017/jmo.2015.19.
  55. Macrina FL. Scientific societies and promotion of the responsible conduct of research: Codes, policies, and education. Academic Medicine. 2007;82(9):865–869. doi: 10.1097/ACM.0b013e31812f7e58.
  56. Macrina FL. Scientific integrity: Text and cases in responsible conduct of research. 4. Washington, DC: ASM Press; 2014.
  57. Markus HR, Kitayama S. Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review. 1991;98(2):224.
  58. Marshall PA. “Cultural competence” and informed consent in international health research. Cambridge Quarterly of Healthcare Ethics. 2008;17(2):206–215. doi: 10.1017/S0963180108080237.
  59. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–738. doi: 10.1038/435737a.
  60. Martinson BC, Crain AL, Anderson MS, De Vries R. Institutions’ expectations for researchers’ self-funding, federal grant holding, and private industry involvement: Manifold drivers of self-interest and researcher behavior. Academic Medicine. 2009;84:1491–1499. doi: 10.1097/ACM.0b013e3181bb2ca6.
  61. Martinson BC, Nelson D, Hagel-Campbell E, Mohr D, Charns MP, Bangerter A, et al. Initial results from the Survey of Organizational Research Climates (SOuRCe) in the U.S. Department of Veterans Affairs Healthcare System. PLOS One. 2016;11(3):e0151571. doi: 10.1371/journal.pone.0151571.
  62. Mendenhall M, Oddou G. The dimensions of expatriate acculturation: A review. The Academy of Management Review. 1985;10(1):39–47. doi: 10.2307/258210.
  63. Milfont TL, Fischer R. Testing measurement invariance across groups: Applications in cross-cultural research. International Journal of Psychological Research. 2015;3(1):111–130.
  64. MetaMetrics. Lexile-to-grade correspondence. 2016. Retrieved from https://www.lexile.com/about-lexile/grade-equivalent/grade-equivalent-chart/
  65. Mumford MD, Connelly S, Brown RP, Murphy ST, Hill JH, Antes AL, et al. A sensemaking approach to ethics training for scientists: Preliminary evidence of training effectiveness. Ethics and Behavior. 2008;18(4):315–339. doi: 10.1080/10508420802487815.
  66. Mumford MD, Connelly S, Murphy ST, Devenport LD, Antes AL, Brown RP, et al. Field and experience influences on ethical decision making in the sciences. Ethics and Behavior. 2009a;19(4):263–289. doi: 10.1080/10508420903035257.
  67. Mumford MD, Murphy ST, Connelly S, Hill JH, Antes AL, Brown RP, et al. Environmental influences on ethical decision making: Climate and environmental predictors of research integrity. Ethics and Behavior. 2007;17(4):337–366. doi: 10.1080/10508420701519510.
  68. Mumford MD, Waples EP, Antes AL, Murphy ST, Connelly S, Brown RP, et al. Exposure to unethical career events: Effects on decision-making, climate, and socialization. Ethics and Behavior. 2009b;19(5):351–378. doi: 10.1080/10508420903035356.
  69. National Academies of Science. On being a scientist. Washington, DC: National Academies Press; 2009.
  70. National Institutes of Health RePORT. Data on post-docs by citizenship. ACD biomedical workforce working group data. 2012. Retrieved from http://report.nih.gov/investigators_and_trainees/ACD_BWF/data_post-docs_citizenship.aspx.
  71. National Science Foundation. Chapter 5: Academic research and development. Science and engineering indicators 2014. 2014. Retrieved from http://www.nsf.gov/statistics/seind14/index.cfm/chapter-5/c5h.htm#s4.
  72. Nho H. Research ethics education in Korea for overcoming culture and value system differences. Journal of Open Innovation: Technology, Market, and Complexity. 2016;2(1):1–11. doi: 10.1186/s40852-016-0030-3.
  73. Nisbett RE, Miyamoto Y. The influence of culture: Holistic versus analytic perception. Trends in Cognitive Sciences. 2005;9(10):467–473. doi: 10.1016/j.tics.2005.08.004.
  74. Office of Management and Budget. Revisions to the standards for the classification of federal data on race and ethnicity. Federal Register Notice. 1997 Oct 20. Retrieved from https://www.whitehouse.gov/omb/fedreg_1997standards.
  75. Oyserman D, Coon HM, Kemmelmeier M. Rethinking individualism and collectivism: Evaluation of theoretical assumptions and meta-analyses. Psychological Bulletin. 2002;128(1):3–72. doi: 10.1037//0033-2909.128.1.3.
  76. Peng K, Nisbett RE. Culture, dialectics, and reasoning about contradiction. American Psychologist. 1999;54(9):741–754. doi: 10.1037/0003-066X.54.9.741.
  77. Pimple KD. Six domains of research ethics: A heuristic framework for the responsible conduct of research. Science and Engineering Ethics. 2002;8(2):191–205. doi: 10.1007/s11948-002-0018-1.
  78. Pitta DA, Fung HG, Isberg S. Ethical issues across cultures: Managing the differing perspectives of China and the USA. Journal of Consumer Marketing. 1999;16(3):240–256. doi: 10.1108/07363769910271487.
  79. Plemmons DK, Kalichman MW. Reported goals for knowledge to be learned in responsible conduct of research courses. Journal of Empirical Research on Human Research Ethics. 2007;2(2):57–66. doi: 10.1525/jer.2007.2.2.57.
  80. Ralston DA, Holt DH, Terpstra RH, Kai-Cheng Y. The impact of national culture and economic ideology on managerial work values: A study of the United States, Russia, Japan, and China. Journal of International Business Studies. 1997;28(1):177–207. doi: 10.1057/palgrave.jibs.8400330.
  81. Ramesh A, Gelfand MJ. Will they stay or will they go? The role of job embeddedness in predicting turnover in individualistic and collectivistic cultures. Journal of Applied Psychology. 2010;95(5):807–823. doi: 10.1037/a0019464.
  82. Resick C, Martin G, Keating M, Dickson M, Kwan H, Peng C. What ethical leadership means to me: Asian, American, and European perspectives. Journal of Business Ethics. 2011;101(3):435–457. doi: 10.1007/s10551-010-0730-8.
  83. Sachdeva S, Singh P, Medin D. Culture and the quest for universal principles in moral reasoning. International Journal of Psychology. 2011;46(3):161–176. doi: 10.1080/00207594.2011.568486.
  84. Sam DL, Berry JW. Acculturation: When individuals and groups of different cultural backgrounds meet. Perspectives on Psychological Science. 2010;5(4):472–481. doi: 10.1177/1745691610373075.
  85. Shah BR, Chiu M, Amin S, Ramani M, Sadry S, Tu JV. Surname lists to identify South Asian and Chinese ethnicity from secondary data in Ontario, Canada: A validation study. BMC Medical Research Methodology. 2010;10:42. doi: 10.1186/1471-2288-10-42.
  86. Shamoo AE, Resnik DB. Responsible conduct of research. 3. New York: Oxford University Press; 2015.
  87. Soares AM, Farhangmehr M, Shoham A. Hofstede’s dimensions of culture in international marketing studies. Journal of Business Research. 2007;60(3):277–284. doi: 10.1016/j.jbusres.2006.10.018.
  88. Steele LM, Johnson JF, Watts LL, MacDougall AE, Mumford MD, Connelly S, Williams THL. A comparison of the effects of ethics training on international and U.S. students. Science and Engineering Ethics. 2016;22(4):1217–1244. doi: 10.1007/s11948-015-9678-5.
  89. Steneck NH. Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics. 2006;12(1):53–74. doi: 10.1007/PL00022268.
  90. Steneck NH. ORI introduction to the responsible conduct of research. Washington, DC: U.S. Government Printing Office; 2007.
  91. Steneck NH. Global research integrity training. Science. 2013;340(6132):552–553. doi: 10.1126/science.1236373. [DOI] [PubMed] [Google Scholar]
  92. Taras V, Kirkman BL, Steel P. Examining the impact of culture’s consequences: A three-decade, multilevel, meta-analytic review of Hofstede’s cultural value dimensions. Journal of Applied Psychology. 2010;95(3):405–439. doi: 10.1037/a0018938. [DOI] [PubMed] [Google Scholar]
  93. Taras V, Rowney J, Steel P. Work-related acculturation: Change in individual work-related cultural values following immigration. The International Journal of Human Resource Management. 2013;24(1):130–151. doi: 10.1080/09585192.2012.672446. [DOI] [Google Scholar]
  94. Taylor SE, Sherman DK, Kim HS, Jarcho J, Takagi K, Dunagan MS. Culture and social support: Who seeks it and why? Journal of Personality and Social Psychology. 2004;87(3):354. doi: 10.1037/0022-3514.87.3.354. [DOI] [PubMed] [Google Scholar]
  95. Triandis HC. Individualism and collectivism. Boulder, CO: Westview Press; 1995. [Google Scholar]
  96. Triandis HC. Individualism-collectivism and personality. Journal of Personality. 2001;69(6):907–924. doi: 10.1111/1467-6494.696169. [DOI] [PubMed] [Google Scholar]
  97. Tsai JL, Ying Y, Lee PA. The meaning of “being Chinese” and “being American”: Variation among Chinese American young adults. Journal of Cross-cultural Psychology. 2000;31(3):302–332. doi: 10.1177/0022022100031003002. [DOI] [Google Scholar]
  98. Watts LL, Medeiros KE, Mulhearn TJ, Steele LM, Connelly S, Mumford MD. Are ethics training programs improving? A meta-analytic review of past and present ethics instruction in the sciences. Ethics and Behavior. 2016 doi: 10.1080/10508422.2016.1182025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Wells JA, Thrush CR, Martinson BC, May TA, Stickler M, Callahan EC, et al. Survey of organizational research climates in three research intensive, doctoral granting universities. Journal of Empirical Research on Human Research Ethics. 2014;9(5):72–88. doi: 10.1177/1556264614552798. [DOI] [PubMed] [Google Scholar]
  100. Wong EC, Palaniappan LP, Lauderdale DS. Using name lists to infer Asian racial/ethnic subgroups in the healthcare setting. Medical Care. 2010;48(6):540–546. doi: 10.1097/MLR.0b013e3181d559e9. [DOI] [PMC free article] [PubMed] [Google Scholar]
