A Cheating in Online Exams Report
by
Mirella Baker Bemmel
This applied dissertation was submitted by Mirella Baker Bemmel under the direction of
the persons listed below. It was submitted to the Abraham S. Fischler School of
Education and approved in partial fulfillment of the requirements for the degree of
Doctor of Education at Nova Southeastern University.
________________________________ ________________________
Gordon Doctorow, EdD Date
Committee Chair
________________________________ ________________________
Michael Simonson, PhD Date
Committee Member
________________________________ ________________________
Mary Ann Lowe, SLPD Date
Associate Dean
Statement of Original Work
I have read the Code of Student Conduct and Academic Responsibility as described in the
Student Handbook of Nova Southeastern University. This applied dissertation represents
my original work, except where I have acknowledged the ideas, words, or material of
other authors.
Where another author’s ideas have been presented in this applied dissertation, I have
acknowledged the author’s ideas by citing them in the required style.
Where another author’s words have been presented in this applied dissertation, I have
acknowledged the author’s words by using appropriate quotation devices and citations in
the required style.
I have obtained permission from the author or publisher—in accordance with the required
guidelines—to include any copyrighted material (e.g., tables, figures, survey instruments,
large portions of text) in this applied dissertation manuscript.
Acknowledgments
This memorable journey I was blessed with would not have been possible without
the enthusiasm and encouragement from my wonderful husband, Ahmed Baker and our
sons Sharif and Tariq. I will be eternally grateful for their unconditional love and
unwavering support. I would also like to thank my mother and siblings, who inspired me along the way. I am deeply grateful to my committee chair, Dr. Gordon Doctorow. Our first meeting in Orlando three years ago was the beginning of a whirlwind journey, throughout which he offered meticulous feedback, insightful suggestions, and encouragement in the most kind and professional manner imaginable. While reaching the final destination is certainly icing on the cake, the journey provided an experience of personal and intellectual growth, which I will always treasure. I would also like to thank the staff at the institutions at the focus of this study, whose assistance I could always count on.
Finally, I would like to thank my friends, family, and colleagues whose words of
encouragement helped me get through this process. Their loving and supportive inquiries,
prayers and constant words of reassurance were never taken for granted.
I dedicate this dissertation to my father, whose spirit was my guiding light. His
life story inspired me to persevere with an insatiable hunger for intellectual growth and
steadfast determination.
Abstract
This applied dissertation was an inquiry into the phenomenon of cheating among students
who take their classes online. There is a common perception that cheating is rampant in
online classes, and the Southern Association of Colleges and Schools, the accreditation association in the South, has implemented policies that mandate stricter monitoring of students. In turn, colleges have reevaluated or implemented integrity policies, but there is
inconsistent enforcement of said policies.
Online faculty at three Florida community colleges were invited to complete a modified
version of the Academic Integrity Survey, which provided insights into their perceptions of cheating, their awareness and enforcement of institutional policies regarding cheating, and the safeguards they used or desired. The survey was followed up with an eight-member focus
group discussion, and the results were triangulated.
An analysis of the data revealed that faculty are uncertain about the extent of cheating at
their college, but most take action once they discover an instance of cheating. Their
reaction to cheating may not necessarily be in line with the institutional policy, although
they are aware of the required steps. Different safeguards are used to protect the integrity
of their courses, but there is an apparent lack of knowledge about available safeguards
and their use.
Table of Contents
Page
Chapter 1: Introduction ........................................................................................................1
Description of the Problem ......................................................................................1
Background and Justification ...................................................................................1
About the Researcher ...............................................................................................3
Purpose of the Study ................................................................................................3
Definitions of Major Concepts and Terms...............................................................4
Recommendations for Future Studies ..................................................................133
References ........................................................................................................................134
Appendices
A McCabe Academic Integrity Survey 2010: Screen Shot of Faculty
Survey .........................................................................................................143
B Modified Academic Integrity Survey .........................................................152
C Chi Square Test of the First 42 Questions ..................................................166
Tables
1 Area of Primary Teaching Responsibility.....................................................73
2 Number of Years Teaching at the College Level..........................................74
3 Aggregated Survey Responses: Frequency of Cheating, Questions
4a–4c .............................................................................................................75
4 Means and Standard Deviations, Questions 4a–4c .......................................75
5 Aggregated Survey Responses: Frequency of Specific Cheating
Behaviors, Questions 9a1–9d1......................................................................77
6 Means and Standard Deviations, Questions 9a1–9d1 ...................................77
7 Aggregated Survey Responses: Frequency of Specific Cheating
Behaviors, Questions 9e1–9h1......................................................................78
8 Means and Standard Deviations, Questions 9e1–9h1 ...................................78
9 Aggregated Survey Responses: Frequency of Specific Cheating
Behaviors, Questions 9i1–9m1 .....................................................................79
10 Means and Standard Deviations, Questions 9i1–9m1 ..................................79
11 Source of Material Used by Student to Paraphrase or Copy Material ..........80
12 Aggregated Survey Responses: Types of Cheating Observed,
Questions 12a–12d ........................................................................................80
13 Cheating is a Serious Problem at Your Institution........................................81
14 Aggregated Survey Responses: Faculty Attitudes Toward Online
Cheating, Questions 13a–13d .......................................................................85
15 Mean, Median, Mode, and Standard Deviations, Questions 13a–13d ..........85
16 Seriousness of Behavior, Questions 9a2–9d2 ...............................................86
17 Means and Standard Deviations, Questions 9a2–9d2 ...................................87
18 Seriousness of Behavior, Questions 9e2–9h2 ...............................................87
19 Means and Standard Deviations, Questions 9e2–9h2 ...................................88
20 Seriousness of Behavior, Questions 9i2–9m2 ..............................................88
21 Means and Standard Deviations, Questions 9i2–9m2 ..................................89
22 Aggregated Survey Responses: Reactions to Cheating, Questions
6a–6d .............................................................................................................91
23 Aggregated Survey Responses: Reactions to Cheating, Questions
6e–6i ..............................................................................................................91
24 Aggregated Survey Responses: Safeguards to Reduce Cheating .................92
25 Aggregated Survey Responses: Primary Source From Which Faculty
Learned About Academic Integrity Policies .................................................94
26 Aggregated Survey Responses: Frequencies and Reasons for Ignoring
Cheating ........................................................................................................95
27 Degree of Satisfaction by Faculty With Handling Cases of Cheating ..........96
28 Aggregated Survey Responses: Additional Safeguards Faculty Would
Employ ..........................................................................................................98
29 Aggregated Survey Responses: Faculty Ratings of Institutional
Measures to Prevent Online Cheating.........................................................100
30 Aggregated Survey Responses: How and When Faculty Discuss
Institutional Dishonesty Policies, Questions 2a–2d ....................................101
31 Reason Cheating Was Ignored ....................................................................102
32 Pearson Correlations of Institutional Policies, Support, and
Effectiveness ...............................................................................................107
33 Pearson Correlations: Cheating is a Serious Problem Versus Faculty are
Vigilant in Reporting ..................................................................................108
34 Pearson Correlations: Actions Taken for Cheating Versus Years of
Experience...................................................................................................108
35 Aggregated Cross-Tabulation: Responses to Cheating by Gender .............109
36 Aggregated Cross-Tabulation: Reactions to Cheating by Discipline,
Questions 6a–6e ..........................................................................................110
37 Aggregated Cross-Tabulation: Reactions to Cheating by Discipline,
Questions 6f–6i ...........................................................................................110
Chapter 1: Introduction
Description of the Problem
Distance education has become an integral part of higher education, but questions regarding cheating in this environment have become more intense. The ability to take courses at remote locations has opened doors to students globally who may not have thought they would be able to further their education. While online education has been growing (Sloan Consortium, n.d.), questions about the integrity of courses offered online have gotten more intense (Mills, 2010; Parry, 2009; Roach, 2001). Faculty, administrators, and even students continue to question whether the online environment offers enough security to prevent cheating.
The problem addressed by this study was the lack of documentation about the extent of cheating in online courses. Scrutiny of courses that are delivered via computer, according to the Southern Association of Colleges and Schools (SACS, 2010; WCET, n.d.), comes from the widespread belief that many higher education students cheat in the online environment.
Background and Justification
There are indications of widespread concern about cheating among college students,
which has resulted in much research devoted to the topic of academic dishonesty (e.g.,
Black, Greaser, & Dawson, 2008; Eckles, 2010; Grijalva, Nowell, & Kerkvliet, 2010;
Hollinger & Lanza-Kaduce, 2006; Moeck, 2002). The research for this study took place
in Florida, where the problems of dishonesty have also been evident. In 2007, Kaczor
wrote about athletes at a Florida university who were involved in different forms of
cheating in their online classes, a case that received nationwide attention. The Obama
Administration has implemented revised regulations to the Higher Education Act (Higher
Education Opportunity Act, 2008) designed to protect the integrity of online courses. Accreditation was made contingent upon the establishment of a process which ensures that the student who submits assignments in an online class is the same student who is actually enrolled in the course.
The Southern Association of Colleges and Schools (SACS) adopted this revision
in 2010 and offered suggestions for different methods in which this could be
accomplished: “(1) a secure login and pass code, (2) proctored examinations, and (3) new
or other technologies and practices that are effective in verifying student identification”
(WCET, n.d.), where attempts have been made to uphold the standards of online classes
by offering solutions to the growing concern about integrity in the online environment.
The institutions at the focus of this study were three community colleges based in
Florida where online course offerings are available in both fully online and blended
formats. Records at one of the colleges where the researcher is a faculty member showed
that from 2006 to 2010 the number of students enrolled in blended courses at this
institution grew from 3,983 to 21,028, while the number of students enrolled in fully
online courses during that same time grew from 13,369 to 31,669 (R. Adkins, personal communication, 2011).
Instructors who teach online can elect to have their students take proctored exams
at the institution’s online testing center available on site. According to their records, the
online testing center served 11,530 students during the 2010 academic year (J. Davidoff, personal communication, 2011). Since instructors who teach blended courses likely deliver exams in class, there is
a surveillance gap between the 31,669 students enrolled in fully online classes, and the
11,530 presumed fully online students taking proctored exams at the testing center on
site. This apparent gap has led the researcher to ask what measures were being taken by
instructors to ensure that the remaining students do not cheat on their exams.
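The arithmetic behind this gap is straightforward; the sketch below simply restates the figures cited in this section (the script and its variable names are illustrative, not part of the colleges' records):

    # Enrollment and proctoring figures cited in this section.
    fully_online_2006, fully_online_2010 = 13_369, 31_669
    blended_2006, blended_2010 = 3_983, 21_028
    proctored_2010 = 11_530  # students served by the online testing center in 2010

    # Growth from 2006 to 2010.
    online_growth = (fully_online_2010 - fully_online_2006) / fully_online_2006
    blended_growth = (blended_2010 - blended_2006) / blended_2006

    # Apparent surveillance gap, assuming (as the text does) that the
    # testing-center students were all enrolled in fully online classes.
    gap = fully_online_2010 - proctored_2010

    print(f"Fully online growth: {online_growth:.0%}")   # ~137%
    print(f"Blended growth: {blended_growth:.0%}")       # ~428%
    print(f"Students without proctored exams: {gap:,}")  # 20,139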
About the Researcher
The researcher of this study works at a community college where she is the Lead
E-Associate. In this position, she mentors instructional faculty who are in the course
development process, while she guides the developers through the implementation of
quality standards mandated at the researcher’s site. Additionally, she frequently facilitates
E-Learning workshops face-to-face and online. She holds an instructional faculty position
in Sociology and offers her classes face-to-face, fully online, and in blended format. She
has been teaching online for 12 years and has developed several online courses that have met the quality standards mandated at her site.
Purpose of the Study
The purpose of this study was to provide an inquiry into the phenomenon of
cheating in online courses. This study critically evaluated Gallant and Drinan’s (2008)
institutional theory regarding academic dishonesty: factors that lead to cheating, what motivates students, and what measures are taken by instructional faculty to safeguard integrity in their online courses. The information gained may provide insight into the extent of cheating, the cheating culture, and motivations for cheating. Additionally, the research may help the colleges evaluate the implementation of existing or newly proposed safeguards and to what extent the college is enforcing the code of conduct pertaining to academic dishonesty. The researcher offers the following definitions.
Definitions of Major Concepts and Terms
Online course assessment. For purposes of this study, online course assessment
is defined as testing performed by students “with the assistance of the Internet and related technologies” (Watson & Sottile, 2010, Abstract, para. 1). Testing is performed by students whose courses are delivered via the computer through the use of the Internet or an online environment. The examinations are taken online by students and submitted electronically.
Online cheating. For purposes of this study, online cheating refers to acts of “academic dishonesty” (Watson & Sottile, 2010, Abstract, para. 1) including, but not limited to, “cheating and receiving assistance during tests and quizzes” (Watson & Sottile, 2010, Discussion section, para. 4).
Plagiarism. For the purposes of this study, plagiarism is defined as “the intent to claim as one’s own someone else’s words or ideas” (Simonson, Smaldino, Albright, & Zvacek, 2012).
Chapter 2: Literature Review
Introduction
Distance education has opened doors to many who may not have thought that
education was an option for them because of limitations of time or the distance to a
specific location. Online education has been growing exponentially over time, and the
questions about the integrity of courses offered online have intensified (Mills, 2010; Parry, 2009; Roach, 2001). Face-to-face classes have been reported to have fewer incidents of cheating, yet faculty, administrators, and students have continued to question whether the online environment offers enough security to prevent cheating (Mills, 2010; Parry, 2009; Roach, 2001). Some studies have proposed that the distance between the teacher and the student is a factor that increases the risk that a student’s identity will be assumed by someone else (Davis, Drinan, & Gallant, 2009). The purpose of this literature review was to examine what is known about cheating in the online environment.
The question of whether students in online courses are submitting their own work has been raised repeatedly in the literature on the online environment (Black et al., 2008; Guernsey, 2001; Mills, 2010; Prince, Fulton, & Garsombke, 2009). These same studies state that online students are often not monitored
and are free to share answers to exams, which are taken at home, or in any environment
that provides Internet access. Patnaude (2008) concluded that the lack of monitoring may
give faculty the perception that students are more likely to cheat in online courses. The
issue of cheating in the online environment has been addressed before, and some
researchers agree that there is reason to be concerned, but that cheating online is not a
bigger problem than it is in face-to-face classes (e.g., Grijalva et al., 2010; Kwong, Ng, &
Mark, 2010). In fact, several studies have concluded that postsecondary students in online
classes are less likely to cheat compared to students in the traditional face-to-face setting
(e.g., Eckles, 2010; Grijalva et al., 2010; Guernsey, 2001; Hart & Morgan, 2010; Kwong
et al., 2010).
Much of this research relied on surveys that asked undergraduate college students questions about whether they had
cheated or how frequently they thought their peers engaged in dishonest academic
behaviors (Mills, 2010; Stuber-McEwen, Wiseley, & Hoggatt, 2009). King, Guyette, and
Piotrowski (2009) and Kelley and Bonner (2005) proposed that cheating is more common
among postsecondary students from departments where the stakes of passing exams are
high, such as nursing programs. Although there has not been evidence to support these
claims, questions regarding the issue have continued to come up (Kelley & Bonner, 2005;
King et al., 2009). The range of cheating varies, as do the demographics of college
students who cheat. A study conducted among 1,390 postsecondary students revealed
70.2% of those who cheated were between the ages of 18 and 22 (Stearns, 2001).
Stearns described a range of dishonest behaviors among college students, including copying from another student,
giving other students access to the exam, taking the test for another student, getting
answers from someone who previously took the exam, among other behaviors. Cheating
does not only pertain to dishonest activities on exams. Stearns (2001) showed that
students allow others to copy their homework or papers they wrote, and they frequently
engaged in acts of plagiarism. There are individuals and agencies that have made writing
papers for others their way to earn money (Spaulding, 2009; Watson & Sottile, 2010).
Although it does not seem to be one of the leading ways of cheating, it still occurs and
therefore needs to be acknowledged according to Shaw (2004) and Stearns (2001). Shaw
found that postsecondary students are more likely to cheat on exams than they are on
writing assignments completed at home. His study set out to find out the extent of
cheating in online courses among postsecondary students. Of the 581 students in his
study, only 0.7% asked others to take their exam for them. Spaulding stated that self-
reports on cheating are often unreliable, since postsecondary students may not provide
frank answers if they fear that their academic standing may be placed in jeopardy.
Additionally, Spaulding noted that postsecondary students may lie on a survey about
academic dishonesty because they fear that their level of acceptance among their peers
may suffer. Jones (2011) reached a similar conclusion on the unreliability of self-reports. In his study, Jones found that 92% of
students indicated that they had personally cheated or knew of others who had cheated.
He compared those results to results of similar studies where the self-report rate is much
lower. Jones concluded that self-reporting must be unreliable, reasoning that otherwise the various studies would have produced comparable answers. Moten, Fitterer, Brazier, Leonard, and Brown (2013) detailed some methods of online cheating, which included students waiting for their classmates to share the answers.
Moten et al. pointed out that when students take their exams in a nonproctored
environment, they may also use multiple computers to facilitate cheating. On one
computer, they will have the exam open, while the others provide Internet access, which
is used to browse for answers. Moten et al. mentioned that students fraudulently claim
that their computer showed error messages. While the instructor researches the problem,
the student has a chance to look up the answers. At times, students will submit corrupted
files to buy more time to complete a writing assignment (Moten et al., 2013). Students
will ask others to take the exam for them, by providing their user name and password to
third parties.
The concern about cheating among college students resulted in much research
devoted to the topic of academic dishonesty (e.g., Baron & Crooks, 2005; Boehm,
Justice, & Weeks, 2009; Brown, Weible, & Olmosk, 2010; Eckles, 2010; Hollinger &
Lanza-Kaduce, 2006; Moeck, 2002; Thomas & De Bruin, 2012). Newspapers have
reported on different cases of cheating in higher education. Zou (2011) reported that
some students had classmates submit in-class assignments during their absence. The students used hand-held devices,
called clickers, which were registered under the owner’s name, to submit class work
when they were not in class. Zou’s interview with a professor from the University of
Texas revealed that many students exchanged answers, which were then submitted via
the clickers. This resulted in students’ receiving credit for work that was completed by
their friends. The Air Force Academy in Colorado Springs, Colorado, also reported
cheating among 78 cadets whose scores on a calculus final exam were much lower than
the scores on their previous online math exam (Rodgers, 2012). The cadets apparently
used a math program, called Wolfram Alpha, to obtain questions from the same test bank
that was used to create the exam. The extent of cheating could be influenced by field of
study, as found by Sendag, Duran, and Fraser (2012). Their study found that engineering students admitted to more cheating compared to their social science and education peers. Another influence on the extent of
cheating, as indicated by Sendag et al., is the modality in which courses are taken. They
found that students who only took face-to-face classes admitted to more cheating than students who took classes online.
Florida has not been spared from cheating and made national headlines in
2007 when almost two dozen athletes at Florida State University were caught cheating in
their online classes (Kaczor, 2007). The students involved in online academic dishonesty
were all athletes who were either receiving scholarships at the time of the incident, or had
received scholarships in the past. That incident revealed several common forms of online
cheating: having someone else take the exam, receiving the answers from someone who
has already taken the exam, receiving strictly prohibited assistance during the
examination (Kaczor, 2007). Specifically, these Florida athletes had others write their
papers for them and submitted them as their own and had their tutors take their exams for them.
For-profit schools have also come under fire when it comes to issues of integrity. Their
continued growth and cost of tuition are often mentioned as reasons why they may be
lacking in rigor and integrity (Klor de Alva, 2011). The extent of cheating and concerns
about it are not limited to the United States. Thomas and De Bruin (2012) surveyed 917
full-time faculty in Johannesburg, South Africa, to learn about the faculty perceptions of
online cheating and actions they take to prevent it. The data that were gathered showed
that 92.6% of the respondents felt that online cheating compromised the university’s
ideals.
No Evidence of Cheating
Attempts to find out whether the problem of cheating online is more serious than
cheating in face-to-face classrooms have not been successful. Several studies consistently found that cheating online is no more prevalent than cheating in the face-to-face classroom (e.g., Grijalva et al., 2010; Klor de Alva, 2011; Krsak, 2007; Watson & Sottile, 2010). These studies acknowledged that cheating is a serious problem, but the extent of the problem varied (Grijalva et al., 2010; Klor de Alva, 2011; Krsak, 2007; Watson & Sottile, 2010). Some of the shortcomings in the findings
stem from the fact that the research is limited by privacy issues. As such, Watson and
Sottile (2010) could not provide additional information regarding the majors of the
undergraduate college students to show whether students with specific majors were more
likely to cheat. Their study also failed to address the frequency of cheating by
individuals. Witherspoon, Maldonado, and Lacey (2012) showed in their study that
students who cheat are more likely to cheat by using contemporary methods (r = .78, p <
.001), rather than the more traditional forms of cheating (r = .68, p < .001). Contemporary
methods include, but are not limited to, the use of cell phones, text messages, and the
purchase of research papers on the Internet. The researchers considered some examples of traditional cheating to be submitting work completed by someone else, improper citations, and copying someone else’s answers.
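To illustrate the statistic Witherspoon et al. reported, a Pearson correlation of this kind can be computed as follows; the scores below are invented purely for illustration and are not the study's data:

    # Illustrative only: hypothetical 5-point frequency scores, not study data.
    from scipy.stats import pearsonr

    overall      = [1, 3, 2, 5, 4, 2, 5, 1, 3, 4]  # overall cheating frequency
    contemporary = [1, 3, 2, 5, 5, 2, 4, 1, 3, 4]  # e.g., phones, purchased papers
    traditional  = [2, 2, 1, 4, 3, 3, 5, 1, 2, 4]  # e.g., copying answers

    r_c, p_c = pearsonr(overall, contemporary)
    r_t, p_t = pearsonr(overall, traditional)
    print(f"contemporary: r = {r_c:.2f}, p = {p_c:.4f}")
    print(f"traditional:  r = {r_t:.2f}, p = {p_t:.4f}")

A larger r for contemporary methods, as Witherspoon et al. found, would indicate that students who cheat more overall rely more heavily on those methods.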
Online students have repeatedly been found to report less cheating than their face-to-face peers (e.g., Stuber-McEwen et al., 2009). There is only speculation about the reason(s) why online students do not
appear to cheat as much, but researchers indicated it may have to do with the increased
level of motivation by online students. The students’ motivations and self-direction in the
online environment may also be at a higher level, as more independent work is required in online courses. Brown et al. (2010) examined business school deans’ perceptions of online dishonesty. A survey was sent to 555 business school deans, and the responses sent back by 177 deans showed that 78% of them thought that less than 40% of
their students participated in cheating. Only 5.1% indicated that cheating was a very
serious problem, while 48.3% stated the problem was moderate. The perception of deans
who had an honor code at their school was that cheating was not a serious problem, in
contrast to the deans whose schools were lacking an honor code. Those latter deans
perceived the problem of dishonesty to be more serious. Overall, the results show that the
deans underestimated the overall extent of dishonesty. Brown et al. (2010) speculated that
the deans may have lacked awareness of the problem of dishonesty because most of them
did not teach and may have had poor communication with faculty who experienced these issues firsthand.
Federal Regulations
The federal government implemented revised regulations to the Higher Education Act motivated by the rationale of protecting the integrity of online courses. Accreditation was made contingent upon the establishment of “processes through which the institution establishes
that the student who registers in a distance education or correspondence education course
or program is the same student who participates in and completes the program and
receives the academic credit” (Higher Education Opportunity Act, 2008, Pub. L. No. 110-315, 122 Stat. 3325). The 2010 adoption of this revision by the SACSCOC Board of Trustees has, in turn, placed pressure on accredited institutions to verify the identity of their online students.
When cheating in the online environment was compared to cheating in the face-to-face environment, it was found that there was a high
proportion of postsecondary students who cheated in both online classes and face-to-face
classes. But the authors proposed that their reasons for cheating might have been
different. Black et al. compared 1,068 participants’ perceptions about cheating and found
that several factors contributed to the likelihood of the postsecondary students resorting
to dishonest behavior. These factors ranged from the students’ credit load to the level of
interaction with their instructor. Contrary to other studies (Shaw, 2004; Spaulding, 2009;
Watson & Sottile, 2010), postsecondary students in Black et al.’s study perceived that
online students engage in less cheating than those in traditional classrooms. Watson and
Sottile (2010) conducted a study among undergraduate and graduate university students
to expand the limited amount of research that had been done in regard to online cheating.
The participants of their study self-reported on cheating, including ways in which they
cheated. Stuber-McEwen et al. (2009) explained that face-to-face university students are
more likely to cheat because of pressures they feel from instructors who set date and time
deadlines. Such pressures could result in students’ cheating out of panic. According to
this hypothesis, they are less likely to panic in online classes where they have more flexibility. One study surveyed students at a medium-sized university in Appalachia (Watson & Sottile, 2010). The focus
of the study was on cheating behaviors in online and face-to-face classes by examining
whether online or face-to-face classes experienced greater cheating behaviors. Gender and participation in sports were
seen as contributing factors to cheating as males and athletes showed higher instances of
cheating. The conclusion of the study was that students in face-to-face classes were more
likely to cheat, possibly because of their stronger social relationships with their
classmates (Watson & Sottile, 2010). These connections purportedly enabled students to share answers more readily.
Turner Dille (2011) studied 343 students from various institutions throughout the
United States to find whether or not there is a difference between face-to-face students
and online students and their reported cheating behaviors. Findings were that cheating
was prevalent in both modalities, but that students who cheated in face-to-face courses
were 7.3 times more likely than online students to cheat in their online courses as well
(Turner Dille, 2011). Furthermore, Turner Dille’s results showed that 15.5% of students
admitted to cheating in their online courses, compared to 18.4% who admitted to cheating in their face-to-face courses.
Students are more likely to engage in conventional cheating methods than they would in digital forms of cheating, claimed Stephens, Young, and
Calabrese (2007). In their study, they found that cutting and pasting information from the
Internet is far more common now than it was in the past, and students create cheat sheets
on their electronic devices rather than using notes. In general, students’ self-reports
indicated that 19% of the 1,305 students in the study were more likely to resort to
traditional cheating methods, compared to the 7% who reported using handwritten notes.
Stephens et al. (2007) found that students who cheat did not exclusively rely on either
type of cheating, but instead, they used both types of cheating as dictated by the
circumstance. The findings also suggested that students view both types of cheating as
equally serious.
Theoretical Framework
This study has drawn upon Gallant and Drinan’s (2008) institutional theory to explain what
motivates students to cheat and the faculty and administrative role in preventing it.
Gallant and Drinan proposed a four-stage process, which is to guide an institution toward
change. The stages are recognition and commitment, response generation, response
implementation, and institutionalization. During Stage 4, Gallant and Drinan advised that
the institution should focus on academic integrity. They suggested that academic integrity is then fully accepted and implemented. Gallant and Drinan (2008) contended that a new norm would
emerge upon this institutionalization. A case study by Gallant and Drinan illustrated the
progression through the four-stage model, as they examined the lack of faculty response
to academic dishonesty. During the recognition and commitment stage, the institution
would have to recognize that the problem exists and commit to taking the necessary steps
to change it. The institution’s response to the existing problem is said to occur in the
second stage, while moving into Stage 3, the implementation stage. Finally, acceptance in
the institutionalization stage would result from a buy-in by faculty, administrators and
students who would all see the benefits and the long-term effects from the new process.
Hart and Morgan (2010) studied students based on Gallant and Drinan’s (2008) theoretical framework. The 377 students
who took part in their study were composed of traditional face-to-face and online
students. Hart and Morgan found that students reported low levels of cheating in both
groups and high rankings in terms of how they rated academic integrity. Face-to-face
students reportedly had more instances of cheating, which Hart and Morgan (2010)
speculated to be possibly the result of the way integrity information and other academic
policies are disseminated. According to their analysis, online students have to exert more
independence and are expected to seek out written materials and policies on integrity, as opposed to face-to-face students, who are presented this information by their instructor. The information obtained by the online students is more
in depth than the condensed version presented in a traditional classroom (Hart & Morgan,
2010). Hart and Morgan suggested that upholding and supporting the policy of academic
integrity by the online students may be their attempt to protect the reputation of their
degree and reduce the general concerns of academic dishonesty in online courses.
Honor codes. Contrary to the findings of Hart and Morgan (2010), Patnaude
(2008) found that the presence of an institutional honor code does not decrease the perceived level of cheating. In Patnaude’s study, 365 online faculty from five campuses at the University of Houston completed a “Faculty
perceptions of academic honesty online” survey (p. 37). The study compared the
perceptions of faculty who had reported to have taught at a university that had an honor
code to faculty who had reported to have taught at a university that had no honor code.
There was a significant statistical difference (p = .009) between the two groups: faculty
who taught at a university that had an honor code perceived cheating to be higher among their students. No significant difference in perceptions of student cheating existed between faculty who did not know whether an honor code
existed and those who were aware of the honor code. In concurrence with Hart and
Morgan (2010), Miller, Shoptaugh, and Wooldridge (2011) concluded that internalized
integrity standards can be highly effective. They found that honor codes can be effective
in that regard, as they underscore the students’ moral character and instill in the student a sense of responsibility toward the academic community as a whole.
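Patnaude's reported group difference (p = .009) reflects a standard two-sample comparison; the specific test is not named in the excerpt above, so the sketch below uses an independent-samples t test on invented perception ratings purely to illustrate the form of such an analysis:

    # Illustrative only: hypothetical 5-point perception-of-cheating ratings.
    from scipy.stats import ttest_ind

    honor_code    = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]  # faculty at honor-code schools
    no_honor_code = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]  # faculty at schools without one

    t_stat, p_value = ttest_ind(honor_code, no_honor_code)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p suggests a group difference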
In their study, LoSchiavo and Shatz (2011) found that the impact of honor codes
depends on the course delivery method. They implemented three studies in their
Introductory Psychology course. Students in their fully online sections did not show a
significant drop in cheating when they signed the honor code. The students in the blended
courses who signed the honor code were 30% less likely to cheat (N = 165) than the
students who did not sign the honor code (57.6% and 81.8% respectively; LoSchiavo &
Shatz, 2011). LoSchiavo and Shatz attributed the significant difference between cheating
patterns of students in the blended classroom versus those in the fully online classroom to
the academic setting. LoSchiavo and Shatz (2011) hypothesized that when students have
personal interactions with their peers and their teachers, they may feel a sense of moral
obligation to be honest.
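The "30% less likely" figure for the blended sections follows directly from the two rates reported above; a minimal check, using only the percentages LoSchiavo and Shatz published:

    # Cheating rates in blended sections (LoSchiavo & Shatz, 2011).
    signed, unsigned = 0.576, 0.818  # with vs. without a signed honor code

    relative_risk = signed / unsigned  # ~0.70
    reduction = 1 - relative_risk      # ~0.30, i.e., about 30% less likely
    print(f"relative risk {relative_risk:.2f}; reduction {reduction:.0%}")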
Reasons for cheating. Mayhew, Hubbard, Finelli, Harding, and Carpenter (2009)
pointed out that cheating in high school could be an indicator that the student will cheat
in college. Of the 527 college students who participated in their study, they found that
71.3% of the students reported that they never cheated on in-class exams while in college,
compared to 50% who reported that they cheated while they attended high school.
Twenty-seven percent reported that they cheated only a few times while in college,
whereas the remainder reported more instances of cheating. Additionally, 40% reported
that they cheated a few times on their tests while in high school. Mayhew et al. (2009)
claimed that cheating in high school is highly predictive of cheating in college. They
suggested that cheating can be diminished if instructors develop better understanding into
the motivations of cheating and if the students are made aware of their violation of
academic expectations that Mayhew et al. dubbed the “cheating norm” (2009, p. 432).
A study by Chase (2010) at a university in South Florida revealed that student connectedness played a significant role
in their engagement with dishonest behavior. Chase concluded that the more connected
students were, the less likely they were to cheat. Chase also found a positive correlation
between the number of classes a student is enrolled in and their likelihood to cheat.
Findings showed that the more classes the student was enrolled in, the more likely they
were to cheat in their online classes. Students in Chase’s (2010) study reported that they
were less likely to cheat if their instructor showed engagement within the course and care
for the students. Findings by Sendag et al. (2012) did not fully support the notion that integrity policies deter cheating. They surveyed students at a Midwestern university regarding the extent to which they cheated. Humanities and
Education students were least affected by the policies, which led Sendag et al. to question
whether the distribution of such policies varied and if such variations influenced their
effectiveness. Their findings also showed that freshmen were more likely to engage in
cheating, as the values of students from this new millennium have shifted. As a result of
this shift, students look at cheating as a legitimate way of getting through school. Gross (2011) argued that ignoring this value shift will keep institutions stuck in their old views where
they fail to become more tolerant of behaviors that are now widely acceptable and no
longer condemned. In turn, negative, moralistic reactions to cheating only address the
issue on a superficial level. Gross (2011) suggested “the need is to adapt performance
criteria to these new realities rather than act to repress or punish them” (p. 436). Gross recommended searching for new ways to contribute to the student’s personal growth and learning
process. Students’ level of motivation also plays a role in their likelihood to cheat.
Sendag et al. (2012) stated that more than two fifths of the 1,153 students they studied
reported that they felt overwhelmed by their assignments. About 32% did not feel
motivated by their assignments, or they did not feel capable of doing them or felt
pressured to get good grades. Gross pointed out that the current generation of students is
pressured by the values of the work environment they strive to enter, where striving for
credentials and good grades on a transcript lead them to a sense of entitlement. They feel
empowered to challenge their teachers and offer suggestions for grade improvements, so
they have a chance at competing against their peers. A solution to making improvements would be to let students have a say in their individual learning plan (Gross, 2011). According to Gross, this will
likely enhance learning and make the relationship between the student and their teacher
more effective.
In a survey of instructors at 22 colleges across the United States, Correa (2011) found that what they consider
dishonesty in their classrooms varies. While there might be agreement on some forms of
dishonesty, for example, submitting a paper that was purchased online or one written by
another student, the instructors varied in the way they rated the seriousness of cheating.
Some forms of cheating that ranked low on perceived seriousness were asking another
student what was on the test when they took it, enlisting help from tutors or native
speakers, and using online translators. On the question related to who the ultimate victim
of cheating is, participants almost unanimously agreed that the student is the ultimate
victim (98.8%), while the institution ranked second (80.2%; Correa, 2011). Jones (2011)
found that a student’s perception of what qualifies as cheating depended on the scenario.
The students unanimously agreed that turning in someone else’s assignment as their own
is cheating. Most (92%) of the 48 students sampled agreed that improperly citing
information that was directly copied from an Internet source is cheating. Only 75% of the students considered other dishonest scenarios to be cheating, and nearly three quarters (73%) of students did not think that submitting the same paper in multiple classes is cheating.
The results of Correa’s (2011) study further showed that most instructors (70.6%,
n = 75) preferred to handle cheating by giving the student a zero for their work, rather
than following the institutional policy and dealing with the issue outside of the classroom.
Of the instructors who caught their students cheating (n = 70), approximately one third (34%) indicated that they reported some of the students, but not others; one third (31%) reported all of the students; and one third reported none of the students who cheated. In their responses, instructors wrote that they lacked the support
from their institution when it came to enforcing policies on cheating, while others wrote that they preferred to handle incidents themselves.
While academic integrity policies could offer clarity on cheating for full-time
faculty, the buy-in may not be the same among part-time faculty (Hudd, Apgar, Bronson,
& Lee, 2009). An example of how part-time faculty’s understanding of the policies is
slightly different from that of full-time faculty, was provided in a study by Hudd et al. in
which the part-timers indicated that they did not see collaboration on homework and the
use of notes during exams without authorization as serious violations of the dishonesty policy. While fewer part-time faculty classified unauthorized collaboration as a major violation, 41% of full-time faculty classified it as such. Their study also showed
that for the most part, students felt that it was up to the instructor to take steps to prevent
cheating, rather than for the students to take personal responsibility to conduct
themselves honestly. The biggest difference among the responses received from full-time
and part-time faculty was a matter of perception. While 68.5% of full-timers felt that
there was a lot of cheating, only 34.1% of part-timers concurred. This led Hudd et al.
(2009) to conclude that part-time instructors may be less likely to include integrity
policies on their syllabi and discuss issues of cheating in their classes. Hudd et al.
concluded that part-time instructors may lack awareness because of their limited time on
campus and limited involvement in professional development where more emphasis may be placed on academic integrity. Furthermore, instructors who teach at various campuses might have false expectations of the students based on their experiences elsewhere. When Pincus and Schmelkin (2003) examined perceptions of academic dishonesty among faculty, they ranked their findings of the 212 usable surveys. Respondents considered copying answers from another student during an exam and stealing exams extreme forms of cheating behaviors. However, the results showed mixed responses on whether using an old exam for preparation constituted cheating. The overall findings showed that faculty look at cheating on a continuum,
which varies among the more serious to the less severe types of cheating (Pincus &
Schmelkin, 2003). The differences between the student and faculty perceptions were notable; students did not rank some behaviors as a serious offense, whereas faculty ranked them very high. Pincus and Schmelkin (2003)
recommended that institutional policies need to be clear on the different types of cheating
and how to deal with them. They felt that existing policies often exclude what may be
considered minor infractions, which could create confusion among students. Pincus and
Schmelkin (2003) commented that faculty would benefit from having institutional
guidelines on how to deal with cheating that are on a continuum based on its severity.
Sanctions should be determined based on the type of violation. Students might not
recognize their own cheating that they saw as collaboration, according to Williams, Tanner, Beard,
and Hale (2013). At the Midwestern university where their study was conducted,
Williams et al. found no institutional honor code, but they learned that the school had a
student code of conduct in place. Furthermore, students were well informed of the
school’s policy during orientation in their freshman year as well as in classes that they
attended. Despite the exposure to the policies, 67% of respondents admitted on the survey
that they engaged in dishonest behaviors over the past year. Additionally, of the 562
responses that were received, 59% of the students indicated that they engaged in
unauthorized collaboration.
According to Hudd et al. (2009), it is important to ensure that students receive the
same message against dishonesty in classes taught by part-time and full-time faculty.
This is especially important because of the large number of adjuncts that teach various
classes at institutions across the United States (Hudd et al., 2009). Hudd et al. pointed out
that faculty sometimes feel that enforcing rules against dishonesty is not their job and that
students should have learned about integrity policies in high school. The authors claimed
that attitude contributes to the perpetuation of cheating, especially since it makes the
instructors less likely to reduce the perpetrator’s grade or take any other corrective action.
In a survey of college students in the United States, Thakkar (2012) asked open-ended questions about their perceptions of
different aspects of academic cheating. There were six main themes among the questions
asked, which touched on issues related to understanding the policy on academic integrity,
the roles of the students, the roles of the instructors, prevalence, attitudes, and prevention.
The survey revealed that the majority of students acknowledged that they were made
aware of the institutional policy on cheating through student orientation and their course syllabi. Nonetheless, many remained unsure about what constituted cheating, and they were particularly confused about plagiarism. The students
mostly felt that an institutional policy was necessary, and that the burden of enforcing the
policy rested on the instructors. Students in the study reported resentment toward peers who cheated.
Thakkar’s (2012) study revealed that students felt strongly about implementation of preventive measures, which ranged from improved proctoring during exams to more individual instructor involvement with
the students to help them improve. The students agreed that policy enforcement by
faculty, in addition to frequent reminders of the policy, decreases the chances that students will cheat.
Bruner’s theory of learning. Insights into how students learn were provided by
Bruner (1960), who argued that learning occurs when learners are motivated and engaged. When learners are presented with new information, Bruner said that they will grasp this information best when they
spend enough time to absorb it. If students are not excited about the materials they learn,
they will be more likely to forget them quickly. Students are
generally tested on what Bruner felt to be trivial facts which are only remembered
through rote memorization. The intrinsic motivation to learn is thus displaced by the
“teaching machines” (p. 83), which may not adequately present challenging content.
Kohlberg’s theory of moral development. Teachers are responsible not only for teaching their subject matter, but arguably also for teaching morals and values to their students. Students are expected to follow classroom rules, and overlooking those would result in the decline of moral behavior (Kohlberg, 1981b). Kohlberg proposed that an individual’s moral behavior relates to the norms of the group they are part of. This group determines a “moral
atmosphere,” which lays the foundation of how the group members will behave. As a
result, Kohlberg concluded that it is not just the individual that should be addressed when
it comes to moral decisions. Instead, he proposed that attempts should be made to raise the moral atmosphere of the group, thereby influencing why one would elect to make a morally sound decision. Furthermore,
Kohlberg contended that the critical issue in cheating is “recognition of the element of
contract and agreement implicit in the situation” (Kohlberg, 1981b, p. 44). Following
from this reasoning, the likelihood of cheating increases if the situation is such that a test-
taker is not being supervised and the possibility of sanctions is unclear (Kohlberg,
1981b).
The psychology behind cheating was also studied by Staats, Hupp, Wallace, and
Gresley (2009) who described students who do not cheat as heroes with special
characteristics. Staats et al. found that the students who fit the attributes of being brave,
honest and empathetic are most likely to be honest because cheating brings them a feeling
of guilt, which has an overall restraining effect on their possibility of cheating at all.
Based on these findings, Staats et al. suggested that attempts to prevent cheating must be grounded in these characteristics. Based on their list, they created an instrument that consisted of questions that would help
them determine where participants ranked in areas of empathy, honesty and courage. The
Short Index of Bravery, the Morally Debatable Behaviors Scale, The Interpersonal
Reactivity Index, and the Faces Index were existing instruments which laid the
foundation for the modified instrument used by Staats et al. Their study of 383 participants revealed that the students who ranked high on bravery, heroism, and empathy ranked low on past,
current, and future intent of cheating. Staats et al. (2009) found that the characteristics
were weakly correlated with gender. Their theory suggested that combating academic dishonesty requires support for students, who may be tempted to cheat when they are afraid of failing exams, and schools may consider rewarding students who display those characteristics. Academic heroism, claimed Staats et al. (2009), should be fostered and recognized. Staats et al. encouraged institutions to offer more support for faculty who often fail to
follow through with reporting academic misconduct for fear of retaliation or wasted
efforts.
One problem with cheating is that students may rationalize their behavior and not
see any fault in their actions (Brent & Atkisson, 2011). This differs from purposeful
cheating that is done in order to get admitted into an institution, or because of pressure or
convenience (Devlin & Gray, 2007). When 56 Australian students participated in a study
in 2003, they revealed that some of their cheating was done because of external pressures
(Devlin & Gray, 2007). Claims about a lack of choice because of strict application
policies and education cost were similar to the reasons used by students in the United
States who justified cheating on exams (Brent & Atkisson, 2011; Devlin & Gray, 2007).
Brent and Atkisson (2011) warned that these attitudes must be considered when an institution addresses academic dishonesty.
In their study, Brent and Atkisson (2011) surveyed 420 students enrolled at a
Midwestern university. The purpose of the survey was to compare different perspectives
on cheating. The students were asked to answer questions related to the attitudes toward
cheating among fellow students. The students’ responses indicated that the perpetrators
mostly denied their responsibility in cheating, a tactic in line with neutralization by Sykes
and Matza (1957). Brent and Atkisson (2011) designed their survey to include questions based on that framework. In addition to the Sykes and Matza theory, Brent and Atkisson included questions that were in line with Scott and
Lyman’s (1968) theory on reformulation. Brent and Atkisson (2011) found that students
most likely cheat because of personal crises they may be going through. The excuses fall under the broader practice of offering excuses or justifications (Scott & Lyman, 1968). These are referred to by Scott
and Lyman (1968) as “accounts” or explanations offered for behavior that is considered
wrong or unacceptable. The act itself is not denied, but the reason for committing the act
is somehow justified. Brent and Atkisson (2011) claimed that the theory of accounts
offers a partial explanation of students’ cheating behaviors. This helps to explain why
Chapman, Davis, Toy, and Wright (2004) learned in their exploratory interviews with 40
students that the students saw nothing wrong with providing a friend with questions they
could expect on an exam, as it would help the friend get a better grade. The students’
answers led Chapman et al. to develop a questionnaire for a sample of 824 business
students at a western university. Fifty-eight percent of students felt that it was considered
cheating to pass information on a test to another student, after the professor’s specific
request not to do so. Further findings indicated that although students think cheating is
morally wrong, they continue to do it because they perceive that the benefits are higher
than the cost. The students also indicated that they felt that everyone else was doing it anyway (Chapman et al., 2004).
Brent and Atkisson’s (2011) study revealed that 245 of the 401 students who
completed the survey indicated that cheating could never be justified. However, 144
students indicated that under certain conditions cheating could be justified; for example,
if the result could move them further along. This justification supports Sykes and Matza’s
(1957) theoretical perspective on neutralization (Brent & Atkisson, 2011). Sykes and
Matza (1957) studied different types of deviant behavior, ranging from minor offenses to
serious crimes, and concluded that much can be explained by the theory of association,
which states that delinquency arises from the acceptance of new norms and behaviors.
According to Sykes and Matza, delinquent behavior arises for different reasons,
depending on which technique is adopted by the person who violates the norm. The type
that is directly in line with Brent and Atkisson’s (2011) findings is “denial of
responsibility” (Sykes & Matza, 1957, p. 667). When it comes to exams, students were more likely to admit their wrongdoing, but would often justify their response by offering
excuses, such as stating that the material was not covered during class lectures (Brent &
Atkisson, 2011). Students, according to Brent and Atkisson, see cheating on exams much
differently than they see cheating on homework. Students in Brent and Atkisson’s (2011)
study reported that working together was almost an essential part of learning. As such,
Brent and Atkisson stated that instructors must hold up their end of the bargain, and
clearly indicate in their course contract what constitutes cheating and which behaviors are unacceptable. Consequences, including punishments, need to be mentioned on the contract (Brent & Atkisson, 2011;
Chapman et al., 2004). Miller et al. (2011) found in their study, however, that students
who were made aware of the harsh consequences of cheating were more likely to cheat.
They concluded that “punishment has its effect when we make the salience of
punishment high, but is likely to have little effect when the perception is that the [risk of being caught is low]” (Miller et al., 2011).
To protect their accreditation status, many schools have looked for ways to lower instances of cheating and also to
lower the perception that cheating is widespread, especially in online courses (Moeck,
2002; Parry, 2009; Prince et al., 2009; Roach, 2001). During their interviews of 225
upper- and lower-level undergraduate students, Stuber-McEwen et al. (2009) found that
these adults who were also enrolled in traditional postsecondary classrooms all reported
that they had cheated in the past. Their self-reports showed a higher instance of cheating
in the classroom by students whose cheating was prompted by panic during the exam,
rather than by deliberate planning to cheat. Stuber-McEwen et al. (2009) stated that
students in online courses may be better motivated and therefore less inclined to cheat,
and that instructors in online courses may be more vigilant about preventing cheating
because of their perception that more cheating occurs online. Under the SACSCOC (2010) mandate, institutions wishing to retain their accreditation are under pressure to take measures to ensure that faculty strictly enforce their institution’s code of conduct dealing
with dishonesty.
In an effort to minimize the amount of cheating that takes place, Moten et al.
(2013) suggested rapport-building on the part of the instructor. The instructor will get to
know the student through frequent interactions, which will give an idea of the student’s
writing and testing style. Having the students sign a dishonesty statement with each
submission, administering proctored exams, and using multiple versions of exams were among their other suggestions. Instructors can also set cheating traps by creating websites that contain the exam questions with incorrect
answers. The instructor can take on the role of “class mole” by enrolling themselves in
the class under an alias (Moten et al., 2013). This fake student may then inadvertently be invited by classmates to participate in cheating.
According to Harkins and Kubik (2010), one way to prevent cheating could be proctoring written exams, assignments, or other graded class
activities. Students do not always realize their behavior is considered cheating, claimed
Harkins and Kubik, and these students sometimes feel that they are engaging in
collaborative behavior with the resources that are available to them. According to
Harkins and Kubik (2010), students make use of readily available tools online, and these
students do not realize that the availability does not justify their use in the context of a
summative evaluation. Harkins and Kubik mentioned that this form of cheating may be
considered ethical by the students because it is widespread and seems to have become the
norm. Harkins and Kubik dubbed this type of cheating “collaborative ethical cheating”
(2010, p. 139), because it is common among students who, as they claimed, have learned to
cheat defensively. Davis et al. (2009) stated that it is easier to plagiarize when
competing in a global environment where students often feel pressured to get ahead so they
may enter the workplace, which embraces speed and innovation. Workers are expected to
access information quickly, and perhaps it is felt that copying from online resources is not
truly dishonest.
Harkins and Kubik (2010) added that the types of cheating among students have
moved beyond the traditional exchanges of answers or getting answers from the person
who sits close enough to them that they can read their answers. Students now use devices
that are not always easy to detect because they have gotten smaller and more
sophisticated. Harkins and Kubik contended that students have easy access to digital
media, the Internet, and software which can give them unauthorized access. Many cell
phones are now equipped with Internet access, which tempts students to take pictures of
their exams for friends (Harkins & Kubik, 2010). Even teachers expect more
collaborative work, as they encourage their students to tap into the multitude of resources
available online (Davis et al., 2009). This can contribute to students’ misunderstanding of
their limitations when it comes to the use of the information that is obtained. The vast
array of resources is beyond the teachers’ control, and they struggle to prevent cheating
or to enforce the school's honor code (Davis et al., 2009). Patnaude (2008) suggested that
honor codes be customized at the respective institutions and specifically designed for
courses that are delivered online. Enforcement and acceptance of those customized honor
codes may be more successful than enforcement of general honor codes, which were
initially designed for traditional face-to-face instruction.
Preventative measures against cheating may need to start with a look at why
students are inclined to cheat. Kohn (1999) posited that rewards and punishment are
useful for training animals, but he warned that these behaviorist techniques impede
learning. Instead of feeling motivated by good grades or awards, students need intrinsic
motivation, which will help them understand the value of learning (Kohn, 1999). When
students are motivated to learn, he argued, they will perform better as a result, and when
their interest gets triggered, the students' overall achievement improves. Kohn therefore
suggested that educators should design intriguing and engaging tasks to serve as intrinsic
motivation for the students. Kohn (1999) said that when students are given the
opportunity to play an active role in their learning process, they perform much better than
when they are passive recipients of information who must demonstrate their knowledge
by scores on assignments and examinations. Students may perform well because of the
immediate reward they work towards, but their long-term interest in learning is
negatively affected by complying with the status quo. Kohn warned that students may lose
their motivation to learn when the rewards cease to exist. Kohn (1999) challenged the
system by questioning the value of the evaluation process that is currently in place in
academia. The pressures are not only on the students who have to perform to standard,
but also on the teachers who are restricted by measures set by the institutions. These
measures are usually grade or performance related, which in turn pressures the teachers
to get the materials across to the students within an environment of constraints
(Kohn, 1999). Sendag et al. (2012) mentioned that peer pressure contributes to
the instances of cheating in online classes, a factor educators need to consider when
designing countermeasures.
Correa’s (2011) study concluded that many instructors do not take their role in
combating cheating seriously. Correa complained that they do not explain to their
students what cheating is and warned that there cannot be an expectation of integrity if
the students are not given the academic policy on cheating. Correa stressed the
importance of reporting incidents of cheating to the school. Simply giving the student a
zero and handling the matter individually, stated Correa, results in poor record-keeping,
as future instructors would have no way of knowing whether the student committed a
first offense. This point is supported by Thakkar (2012), who suggested that reporting
channels be shared with students, who can become anonymous informants and might be
incentivized by rewards. The role of faculty in the prevention of cheating was highlighted by Thomas
and De Bruin (2012), who stated that barriers against cheating will only be effective
when faculty commit to advising students what cheating entails, explain what the
consequences are of cheating and finally, commit to taking steps to report cheating and
follow through with disciplinary actions. In their research with online faculty in
Johannesburg, South Africa, Thomas and De Bruin (2012) learned that some instructors
do not feel responsible for curtailing cheating by their students. Conversely, of the 60%
of faculty who indicated that they had reported cheating in the past, 80% indicated that
they would much rather provide students with policies regarding academic integrity than
take disciplinary action once cheating occurs. They blamed their inaction or
unwillingness to take action on their workloads and on a lack of evidence that cheating
had in fact occurred, which resulted in psychological discomfort. Faculty also blamed the
institution's lack of support. Williams et al. (2012) proposed a module on academic
integrity that students must take within their first year of enrollment.
The early exposure was expected to elicit open discussion of students with their peers and
their instructors, which would address any questions the students may have. Additionally,
misconceptions could also be cleared up. Williams et al. (2012) suggested that faculty
members should also be educated on the topic to gain a better understanding of dishonest
behavior.
Other suggestions on how to combat cheating range from the instructor checking
the students’ citations, to the use of webcams, increasing the number of required papers
that can be checked for plagiarism, limiting the exam time, incorporating the use of
Skype for oral examinations, using different assignments in the classroom, providing
clear guidelines on rules and expectations, locking Internet sites while the exam is in
progress, and using full screen programs to create the exams, which prevent students
from opening other windows during the exam. Jones (2011) recommended including
the academic integrity policy and the institutional honor code as part of the syllabus. She
suggested that the policies should be clear and the steps that would be taken when such
policies are violated should also be mentioned. According to Jones, online instructors
should make specific mention of what is considered cheating, because the expectations in
the online environment may be different from face-to-face. The policies should be
reviewed during the course orientation, and students should be quizzed on the policy to
ensure their understanding (Jones, 2011). Jones proposed the use of an entertaining
activity to draw students' attention to the policy. The syllabus or the learning activity
related to academic integrity should include links to tutorials on the Internet, which can
help students further understand the expectations (Jones, 2011).
Copyright issues have a bearing on the issue of plagiarism as they help students
understand the problems with cheating. Since students come from diverse backgrounds
and schools, they may not understand what constitutes plagiarism, especially because of
technological changes which almost seem to promote plagiarism. Farnsworth and Bevis
(2006) argued that copyrighted material belongs to its original author and that
permission should be obtained prior to adopting the information. Farnsworth and
Bevis (2006) stated that students over the age of 18 are protected by copyright laws, but
they must understand that information submitted for their classes for the purpose of
assignments, for example, gets added to their institution’s database. Students are often
not allowed to submit the same work for different classes without the permission of the
instructor, said Farnsworth and Bevis. Their views are not widely accepted because the
interpretation of academic dishonesty in terms of submission of one’s work for more than
one class varies from institution to institution (Schmelkin, Gilbert, Spencer, Pincus, &
Silva, 2008). In their study with 560 students, Schmelkin et al. found that students’
perceptions of cheating on papers are different from how they perceive cheating on
exams. The lack of clarity of what constitutes cheating may lead to unintentional
cheating behavior (Schmelkin et al., 2008). To prevent violations of the integrity policies,
students should be asked to provide a written copy with citations for written and oral
presentations, according to Jones (2011). These submissions, Jones pointed out, can be
checked for plagiarism.
In their article, Harkins and Kubik (2010) argued that “copyleft” encourages
cheating, since it is the antithesis of copyright. They claimed that it allows users to find
and modify materials and claim them as their own. Lessig (2008) pointed out that writers'
creativity is stifled when they are unable to build on existing material without the
permission of the original author. While some consider it plagiarism, Lessig called this
form of creative writing remixing, where authors freely use materials from others to
create a different version. He argued that allowing users to edit web-based or
print-based material encourages creativity and should therefore not be held to the same
standard as traditional plagiarism.
Harkins and Kubik (2010) stated that easy access to music and other software invites
remixing of written work as well, and students have free access to papers they can in turn modify and call their own
(Harkins & Kubik, 2010). Simonson et al. (2012) provided descriptions of various ways
in which materials are protected by copyright laws. They stated that an instructor’s notes
are subject to protection. They further explained that since material in online courses is
digitally presented to students, this material is considered “fixed” and may not be
reproduced by the student without permission from the instructor. Simonson et al. (2012)
also discussed different forms of plagiarism and claimed that "online entrepreneurs" are
particularly troublesome because they sell prewritten papers to any interested buyer, who
can make changes as they see fit and submit the work as their own. Simonson et al.
brought up the issue of students' intellectual property rights, as they mentioned that
plagiarism detection services may infringe on those rights. Their concern stems from the
fact that the students' papers get added to the detection service's database.
Witherspoon et al. (2012) and Heckler, Rice, and Hobson Bryan (2013) stated that
plagiarism detection software can serve as a deterrent and prompt students to take charge
of their academic success with honest pursuit.
In their study, Heckler et al. (2013) found that when students knew their work was
going to be submitted through a plagiarism detection program, they were less inclined to
cheat, and the problem of plagiarism was reduced. The researchers used secondary data
from Turnitin to review the scores of seven courses offered in the fall of 2010 and the
spring of 2011. In their courses, the students were provided with a syllabus which
included the academic integrity policy. In the fall of 2010, the students were asked to
submit their papers, without being told by their instructor that it would be submitted
through a plagiarism detection system. In the spring of 2011, the students were required
to submit their paper through the plagiarism detection service (Heckler et al., 2013).
Turnitin results are expressed in percentages, which indicate the amount of overlap
found. The results showed that students who were unaware that their paper was going to
be submitted for plagiarism detection were most likely to plagiarize from other students;
their overlap ranged from 0% to 76% (M = 16.33%, SD = 16.92%). The students who
were aware that their paper was going to be submitted for plagiarism detection showed
significantly less overlap (Heckler et al., 2013). Their findings also showed that males
were more likely to plagiarize than their female counterparts. The researchers concluded
that the use of plagiarism detection software provided a significant prediction of
plagiarism. The conclusion is in line with the recommendations of Moten et al. (2013).
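For illustration only, the kind of descriptive summary reported above can be reproduced
with a few lines of Python. The overlap scores below are invented and are not Heckler et
al.'s data.

from statistics import mean, stdev

# Hypothetical Turnitin-style overlap percentages for a set of papers
overlap = [0.0, 2.5, 8.0, 12.0, 16.5, 21.0, 34.0, 76.0]

print(f"range: {min(overlap)}%-{max(overlap)}%")
print(f"M = {mean(overlap):.2f}%")    # sample mean
print(f"SD = {stdev(overlap):.2f}%")  # sample standard deviation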
Baron and Crooks (2005) mentioned that instructors need to be vigilant about
catching the students who engage in plagiarism. As part of a solution, they offered that
instructors could provide students with in-class writing exercises, which help set a
baseline for instructors who later assign papers that have to be completed outside of the
classroom. Baron and Crooks (2005) proposed that the instructor could compare the
writing style of a student's in-class work to assignments completed at home. They also
noted that it is not uncommon for instructors to notice significant differences in a
student's writing styles. In online classes, instructors
have numerous ways of obtaining writing samples from students, because students are
expected to engage in writing continuously through emails and discussions (Davis et al.,
2009). Farnsworth and Bevis (2006) suggested that teachers can look for the sudden
changes in writing style by looking for sudden changes in the font of printed work, and
stylistic differences in the reference list, which may have been pasted from different
sources.
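The baseline comparison described by Baron and Crooks (2005) and Farnsworth and
Bevis (2006) can be approximated with simple, automated style markers. The sketch
below is a rough illustration, not a validated stylometric method; the file names and the
50% drift threshold are assumptions.

import re

def style_profile(text):
    """Compute two crude style markers for a writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_richness": len(set(words)) / max(len(words), 1),
    }

baseline = style_profile(open("in_class_sample.txt").read())  # hypothetical file
takehome = style_profile(open("take_home_paper.txt").read())  # hypothetical file

for marker, base_value in baseline.items():
    drift = abs(base_value - takehome[marker]) / max(base_value, 1e-9)
    if drift > 0.5:  # arbitrary threshold: a flag for human review, not proof
        print(f"Notable shift in {marker}: {base_value:.2f} -> {takehome[marker]:.2f}")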
Patel, Bakhtiyari, and Taghavi (2011) recommended that teachers should require
students to submit documents that are unlocked. PDF documents often have a locking
feature, which prevents the use of plagiarism detection tools. An instructor who tries to
submit a paper in PDF format to verify originality will receive an error message and will
not receive any results (Patel et al., 2011). There are ways around plagiarism detection
tools, and Patel et al. stated that tricks are being used to make the tools ineffective.
Replacing spaces with dots, called "Dot Replacement," and changing the dot color to
white apparently tricks the detection programs. Rather than reading independent words,
the program will process each sentence as a single long word (Patel et al., 2011). Translator
services on the Internet also offer an opportunity to change sentences, when text is
translated into another language and then translated back. Patel et al. explained that the
initial translation is often not a direct translation, but rather a paraphrased version of the
text. This can be done multiple times with different languages, each one offering its own
interpretation. When converted back, the translated text offers a paraphrased version of
the original text with a different sentence structure, which will not be detected by
plagiarism detection tools (Patel et al., 2011).
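The Dot Replacement trick, and a simple normalization a checker might apply, can be
illustrated in a few lines of Python. The formatting layer (white-colored dots) is not
modeled here; the sketch assumes only the extracted text, where spaces have become dots.

import re

def dot_replace(text):
    """Simulate the evasion: spaces become dots (rendered white in the paper)."""
    return text.replace(" ", ".")

def normalize(text):
    """A counter-measure: treat dots between word characters as spaces."""
    return re.sub(r"(?<=\w)\.(?=\w)", " ", text)

sample = "replacing spaces with dots can defeat naive matchers"
evaded = dot_replace(sample)
print(evaded)             # the detector sees each sentence as one long word
print(normalize(evaded))  # word boundaries restored before comparison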
When students are taught to use online citation tools, stated Jones (2011), they get
in the habit of generating a reference list, which should be submitted with their work.
Jones recommended that instructors familiarize their students with tools such as Easybib.
Another solution offered by Baron and Crooks (2005) is the use of portfolios.
They mentioned that students who keep a portfolio during the semester would have
multiple samples of their work, similar to the writing sample that can serve as a baseline
of students' work. Additionally, Baron and Crooks stated that instructors need to increase
their level of awareness, as students do not always remove the evidence of their cheating.
Students can be careless in several ways: they may leave information in the headers or
footers of copied documents, which instructors can detect.
Baron and Crooks (2005) pointed out that reporting cheating students for
disciplinary action is not consistent among instructors, who may see it as additional work
or not worth the trouble of reporting. In their research, Williams et al. (2012) learned that
of the 74% of faculty who acknowledged knowing that cheating takes place in their
classes, only 18% reported it. Institutions often have policies on academic dishonesty,
and instructors are advised to include those policies in their syllabi and apprise students
of the consequences. Baron and Crooks (2005) speculated that these policies alone do not
deter cheating and that enforcement should therefore be compulsory. If not enforced, Baron
and Crooks argued, students quickly realize that they can get away with dishonest
practices. They pointed out that students' work that is submitted online can be checked
through simple searches with engines such as Google, which usually pick up exact
sentences that were copied into a student's writing assignment. Farnsworth and Bevis
(2006) also recommended the use of Google, which is an easily accessible search engine
that can track plagiarism by typing parts of paragraphs or sentences in the search area to
look for plagiarized information. Williams et al. (2012) found that faculty do not usually
report instances of cheating because they lack evidence, see it as trivial, or believe that the
student will eventually suffer the consequences when caught in future classes.
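A local analogue of the search-engine spot-check described above can be sketched in
Python: split a submission into sentences and look for verbatim matches against a folder
of reference texts. The file and folder names are hypothetical.

import re
from pathlib import Path

def long_sentences(text, min_words=8):
    """Split text into sentences, keeping those long enough to be distinctive."""
    parts = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in parts if len(s.split()) >= min_words]

submission = Path("student_paper.txt").read_text()
corpus = " ".join(p.read_text() for p in Path("reference_texts").glob("*.txt"))

for sentence in long_sentences(submission):
    if sentence in corpus:  # exact match, like quoting the sentence into Google
        print("Verbatim match:", sentence)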
Some researchers have proposed campaigns that would enlighten the students with
factual information regarding the actual extent of cheating, a tactic that might be as
successful as a similar approach used to combat alcohol abuse at universities. This,
however, is not supported by McCabe and Trevino (1997), who reported that awareness
of the academic integrity policy and peer reporting had limited effect on their own.
High teacher and learner interaction. Like other researchers (Prince et al.,
2009), Baron and Crooks (2005) have recommended high levels of interaction between
students and between the student and their instructor. Prince et al. (2009) have listed
other practices that deter online cheating, such as including group projects and requiring
students to develop discussion questions and post them on the class discussion board. The
instructor can then assign each student a set of discussion questions to answer (Farnsworth & Bevis,
2006). Prince et al. (2009) suggested that students should be assessed in multiple ways,
so their final grade in the class is determined by their performance on exams, quizzes,
discussions, papers, and group activities. The use of open-book exercises and
collaborative work can foster students' ability to synthesize information from different
sources.
According to Lieber (2012), students form their own conclusions on cheating and
faculty efforts to reduce it. Lieber observed that they reported lower incidences of
cheating when their teachers used various versions of the test during the examination and
if they only reused tests or portions of tests for 2 years or less. Changing the questions
would lower the students’ chances of obtaining an advanced copy. Random-spaced seat
assignment and different exam versions were indicated as providing additional cheating
barriers. The role of proctors was highlighted by Lieber (2012) as well, particularly the
actions of the proctor who provides close monitoring of the students. Some examples
included staying in the room, keeping a watchful eye, and walking around in the room.
Lieber also considered whether incentives for deterring cheating made a difference; his
findings were that such incentives are rare because of budget constraints and that
instructors generally receive little institutional support for these efforts.
Setup of online exams. Various researchers proposed that to lower the instance
of cheating, instructors can change the order of the questions and change exams
frequently to ensure that exam questions or answers are not shared between students
(Baron & Crooks, 2005; Farnsworth & Bevis, 2006; Moeck, 2002). Open-ended
questions require a deeper level of thinking and involvement, stated Baron and Crooks
(2005), and could be used instead of multiple-choice questions. In turn, they explained
that these essay questions should carry more weight than multiple-choice questions. Other
ways to lower cheating offered by researchers include using a variation of different types
of questions, varying the order of the questions (Moeck, 2002), and limiting the test
availability to only one hour on a specific day to lower the chances of sharing test
information (Farnsworth & Bevis, 2006). Students who are unable to take the test at that
time should be given an alternate test with different questions, stated Farnsworth and
Bevis (2006).
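The question-variation advice above amounts to generating per-version exams from a
larger bank. A minimal Python sketch, assuming a hypothetical 20-question bank:

import random

question_bank = [f"Question {i}" for i in range(1, 21)]  # hypothetical bank

def make_exam(version_seed, n_questions=10):
    """Draw a reproducible subset of questions in a shuffled order."""
    rng = random.Random(version_seed)  # the seed makes each version reproducible
    return rng.sample(question_bank, n_questions)

for version in (1, 2, 3):
    print(f"Version {version}:", make_exam(version))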
Baron and Crooks (2005) claimed that engagement in group projects shifts the
responsibility as well, arguing that this makes the students responsible for their share of
the work. Interaction with others supposedly makes it more difficult to cheat (Baron &
Crooks, 2005). Moeck (2002) suggested that administering tests more frequently also
deters cheating. Furthermore, he stated that conferences with students help establish
rapport, which he claimed to be a deterrent against cheating. Moeck explained that as the
students build a relationship with their instructor, they may feel a sense of guilt or may be
fearful of the instructor’s finding out about their dishonest behavior. Moeck (2002)
pointed out that conferences can be set up via the telephone, the computer or even face-
to-face.
Ullah, Xiao, Lilley, and Barker (2012) designed a “profile based authentication
framework (PBAF)” to authenticate students who take online exams. Along with a user
identification and password, students are required to answer challenge questions that
are used to verify their identity. Ullah et al. stated that unlike the banking experience
where users are less likely to share their user identification and password, students may
be much more willing to share their personal information with others if their intent is to
cheat. The PBAF uses a two-step approach to authenticate the student, namely, the initial
login with their username and password, followed by a series of profile and challenge
questions. Students who fail to answer the questions correctly are denied access and are
reported. In their study, Ullah et al. (2012) tested the PBAF on 34 participants from
universities within the UK and other universities outside of the UK. The authentication
process was done for 7 days spread over a 3-week span. The results of their study showed
that well-designed questions make it difficult for inauthentic users to answer the
questions correctly within a short time. Critical to the validity of the PBAF, said Ullah et
al., is the selection and design of the authentication questions.
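The two-step flow Ullah et al. (2012) describe can be sketched as follows. This is a
simplified illustration, not the authors' implementation: the user store, question wording,
and plain-text password are placeholders (a real system would store salted hashes).

import hmac

USERS = {  # hypothetical store
    "student1": {
        "password": "s3cret",
        "challenges": {"First pet's name?": "rex", "City of birth?": "tampa"},
    }
}

def authenticate(user, password, answers):
    record = USERS.get(user)
    # Step 1: username and password
    if record is None or not hmac.compare_digest(record["password"], password):
        return False
    # Step 2: profile/challenge questions; any failure denies access
    for question, expected in record["challenges"].items():
        given = answers.get(question, "").strip().lower()
        if not hmac.compare_digest(expected, given):
            print(f"Challenge failed for {user}: access denied and reported")
            return False
    return True

print(authenticate("student1", "s3cret",
                   {"First pet's name?": "Rex", "City of birth?": "Tampa"}))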
Testing centers. One common practice to ensure integrity is that of using testing
centers which have proctors who monitor test-takers (Baron & Crooks, 2005; Prince et
al., 2009). Prince et al. (2009) suggested that proctors should require two forms of
identification from the students, to ensure that they are indeed the person they claim to
be. Institutions that do not have an on-campus testing center, or who have students who
reside outside of the region where the institution is located, can seek the assistance of a
nationwide testing center such as the National College Testing Association (NCTA, n.d.,
cited in Prince et al., 2009). Participating schools can join this consortium of 259
participants located throughout the United States as well as in two other countries.
Students who wish to take their proctored exam at any of the NCTA centers need to pay a
fee that varies depending on the location where the exam is administered.
Jung and Yeom (2009) offered an alternative to the use of proctors placed in the
same room with the test-taker. An elaborate system which provides remote monitoring of
students while also securing their identity is called the Security Control system in the
Online Exam (SeCOnE). Each student’s computer would need to be equipped with a web
camera and microphone and the SeCOnE system software would need to be installed.
The software serves as a verification tool, which establishes the identity of the test-taker
and delivers questions and answers through encryption. Additionally, screen shots of the
examinee are taken throughout the test-taking period, which can be reviewed for
suspicious behavior, such as navigation away from the screen. The system also provides a
way to lock any communication tools during the examination, thereby minimizing a
student’s ability to strike up a chat or email conversation with someone else (Jung &
Yeom, 2009). Prince et al. (2009) recommended that nonproctored exams be used
for extra-credit activities and that they not make up a large percentage of the final course grade.
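The periodic-capture idea behind SeCOnE can be shown schematically. The capture
function below is a stand-in; the system's actual verification, encryption, and
lockdown layers are not modeled here.

import time

def capture_screenshot():
    """Placeholder: a real proctoring client would grab the screen here."""
    return b"<screenshot bytes>"

def monitor_exam(duration_s, interval_s=30):
    """Collect screenshots at a fixed interval for later human review."""
    shots = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        shots.append(capture_screenshot())
        time.sleep(interval_s)
    return shots

# e.g., monitor_exam(90 * 60) samples a 90-minute exam every 30 seconds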
Mirza and Staples’s (2010) study on the use of cameras for monitoring purposes
during examinations found that 80% of the 33 students that were monitored reported
feeling uncomfortable during the test. The students felt psychological pressure, which
Mirza and Staples warned could lead to anxiety during the exam. The students did report,
however, that they were less likely to cheat while being monitored by a camera.
Some students fail to see the value of education and seem to worry more about the
grade they will receive at the end of the term than about the quality of education and course
outcomes, claimed Bedford, Gregg, and Clinton (2011). Bedford et al. (2011) observed
that in order to be considered for jobs or universities, students focus on the grade, rather
than their education. In their study, 20 faculty from the University of West Alabama
responded to the call for participation in a pilot program where the Remote Proctor was
going to be evaluated (Bedford, Gregg, & Clinton, 2009; Bedford et al., 2011). These
instructors had their students complete their exams while being proctored remotely. Each
participating student had to install the required software and submit their picture and
fingerprint for identification purposes before they were allowed to take the exam
(Bedford et al., 2009; Bedford et al., 2011). Students were made aware that they were
being watched and that the Remote Proctor would record any suspicious behavior. The 30
students were asked to purposefully engage in suspicious behavior, and the recordings
were given to the faculty for their review. Of the students who were part of the study, 15
responded favorably to the use of Remote Proctor, while 5 did not like it. The remainder
of the 30 students who were part of the study had no opinion (Bedford et al., 2009;
Bedford et al., 2011). Faculty also reported favorably in terms of the use, with 14
answering yes, three saying no, and three not expressing their opinion. Based on their
findings, Bedford et al. (2011) recommended that institutions implement a policy to verify
the students' identification prior to their taking an exam and use live or remote proctors
to help curb the extent of cheating. The recommendations were made despite the
limitations pointed out by the researchers: at the time of the study, the Remote Proctor
was not available for Macintosh computers; it could not be installed on computers of
military students in Iraq and Afghanistan; nor could it accommodate some students with
special needs (Bedford et al., 2009). After the study and upon implementation of the
Remote Proctor at the small southern regional university, there were reports of 600 calls
for IT assistance and of students expressing privacy concerns (Bedford et al., 2009).
Tutors and biometrics. Students who work with tutors, or have a relationship
with teaching assistants, also build connections that deter cheating, claimed Baron and
Crooks (2005). They have to answer to these individuals who closely monitor their
progress. Any suspicious deviation from the norm might raise red flags and prompt closer scrutiny.
Baron and Crooks (2005) argued that the use of biometrics is the best method to
prevent cheating. The student's handwriting can be sampled, and their voice and
fingerprints can be recorded; remote proctoring equipment provides a full camera view
of the students while they are taking their exam (Parry,
2009). Some researchers (Baron & Crooks, 2005; Bedford et al., 2011; Parry, 2009)
argued that the U.S. federal government's regulation of online students' identity verification
(Higher Education Opportunity Act, 2008) is something that would be best handled with
the use of biometrics. However, Baron and Crooks mentioned that biometric verification
is not only costly, but it also raises the issue of privacy, as it is not devoid of security
issues and does not guarantee that students’ records will be kept confidential. In a pilot
study, 20 faculty used the Software Secure Remote Proctor, biometric software that
verifies an individual’s identity, with their college students to determine its effectiveness
(Bedford et al., 2011). Students were encouraged to engage in activities which are usually
forbidden during testing, such as using books and talking. All these activities were
captured by the Remote Proctor and were reported by the monitoring company. Students
were less likely to deny their guilt because their actions were recorded. As a result, the
Remote Proctor was deemed to be a highly effective monitoring system that helps deter cheating.
Chapter Summary
Although concerns about dishonesty in online courses continue, most research has
not provided scientific evidence that academic cheating warrants special focus on the
online environment. Assessments of overall cheating by Baron and Crooks (2005),
Grijalva et al. (2010), Hollinger and Lanza-Kaduce (2006), Shaw (2004), Spaulding
(2009), and Watson and Sottile (2010) have indicated that cheating is more common in
face-to-face courses. Faculty have several available measures they can implement in their
courses to prevent it from happening in the first place. Gallant and Drinan's (2008)
work called for an institutional response to dishonesty, which must be carried out by
faculty and administrators, while Bruner (1960) emphasized engaging students in
learning as a way to foster their honest participation. Kohlberg (1981a) and Kohn (1999),
on the other hand, argued that placing more importance on the intrinsic motivation of
learning rather than credentialing
would make students less likely to cheat. Understanding the motivations for cheating may
offer insights into combative measures (Brent & Atkisson, 2011). A variety of techniques
were reviewed, such as proctoring examinations (Baron & Crooks, 2005; Harkins &
Kubik, 2010; Prince et al., 2009), in-class writing assignments (Baron & Crooks, 2005),
and honor codes (Patnaude, 2008). Researchers also suggested the use of security or
biometric systems (Bedford et al., 2011; Jung & Yeom, 2009; Parry, 2009). This study
explored the current state of instructor and administrative awareness of and involvement
in combating online cheating.
Research Questions
1. To what extent are online faculty aware that students cheat in their online classes?
2. How do online faculty judge the seriousness of online cheating and how well do they think their college deals with issues of academic integrity?
3. Which strategies do online faculty use to minimize the instances of cheating in their online courses?
4. To what extent do online faculty enforce their institution's code of conduct when cheating occurs?
5. What do online faculty need in order to increase their awareness of and ability to prevent online cheating?
Chapter 3: Methodology
The problem addressed by this study was the lack of documentation about the extent of
online cheating and about faculty awareness and enforcement of institutional integrity policies.
Participants
The target population for this study was all instructors who teach fully online
courses at the researcher’s community college site, as well as online instructors from two
other community colleges in Florida. According to Creswell (2005), the target population
should consist of individuals with a common characteristic that the researcher can
identify. The common characteristic among the selected participants is that they all teach
fully online courses. Since approximately 289 instructors at the researcher's institution
teach about 570 fully online courses, all instructors were invited to participate in the
study (personal communication, September 30, 2013). Among 120 institutions recognized
nationwide, the Aspen Institute ranked this institution in the top 10% of community
colleges. It is the largest institution
of higher education in its county, and its top four areas of study for 2010–2011 were
business administration, liberal arts, criminal justice, and nursing. The college offers
Bachelor and Associate degrees, as well as certificates and diplomas. With a student
population of 67,258 in the 2010–2011 academic year, the
college employed 1,182 adjuncts and 420 full-time instructional faculty. There are three
main campuses and six centers spread throughout the county (Broward College, n.d.-a).
The researcher also invited all online instructors from a community college in a
neighboring county to participate in the study. This institution had 48,966 students
enrolled for the 2011–2012 school year. The college offers Bachelor and Associate
degrees, as well as certificates, vocational degrees, and preparatory programs. There are
four campuses in the county and one satellite location (Palm Beach State College, n.d.).
The highest number of graduates were in the areas of nursing, paralegal, emergency
medical services, and business administration (Palm Beach State College, 2013a). In the
2011–2012 academic year, the college offered 802 online courses (Palm Beach State
College, 2013b). In the spring of 2013, the college had 159 fully online instructors
teaching 344 sections (S. Beitler, E-Learning Director, personal communication, January
29, 2013).
The third institution included in this study served over 25,000 students during the
2011–2012 school year. This college has six campuses and several centers spread
throughout the county (Santa Fe College, n.d.-a). Like the other institutions included in
this study, this college offers Associate and Bachelor degrees in disciplines such as
Health, Early Childhood, and Nursing (Santa Fe College, n.d.-a). They offer
approximately 400 online classes during the spring and fall semester, taught by
approximately 200 online instructional faculty (L. Ciardulli, Assistant Vice President, personal communication).
The demographic makeup of the participants spans a wide range of age, race, and
gender categories. Demographic information gathered from the participants at the time of
participation provided exact figures, but specific focus was placed on the instructors'
extent of teaching experience and gender. The procedure followed to gather the sample for
this study was to contact the directors of the instructional technology department at the
selected institutions to either obtain a list of email addresses of all online instructors or
make arrangements to disseminate the survey (Fowler, 2009). The instructors were
contacted via email and an invitation to participate in the study was extended, as
proposed by Sue and Ritter (2007). The instructors were sent a reminder email in an
effort to reach as many participants as possible (Fowler, 2009). Creswell (2005) estimated that 350 individuals
would be a good sample size to partake in a research study, thereby making the combined
populations of fully online instructors at all proposed institutions a suitable size. Sue and
Ritter (2007) posited that the number of participants likely increases if all the members of
the population are invited to participate. They suggested that the number of participants
who will respond increases when they are preliminarily contacted through various
methods, such as email, telephone, and regular mail. An agreement to participate makes
nonresponses less likely to occur. According to Fowler (2009), the importance of sample
size depends on the nature of the study. Fowler stated that while a study which has been
repeated many times may require a large sample size, studies that have not been done as
much can be statistically sound even with a smaller sample. Fowler suggested securing a
sample that is reflective of the entire population by using probability sampling, which
gives each individual an equal chance of being selected. Fowler warned that the
appropriate size of the sample should not just be based on statistical suggestions, but
rather on the individual study and its goal; he cautioned against approaching studies
solely on the basis of such suggestions.
The research method used for this study was mixed-methods. Participants were
asked to answer survey questions for the quantitative portion of the study. The qualitative
portion of the study involved a focus group meeting, which provided the researcher with
information that was used to validate the data gathered from the surveys. According to
Tashakkori and Teddlie (2003), Creswell (2008), and Pinto (2010), mixed-methods
research is a newer approach to research design, which enables the researcher to combine
quantitative and qualitative methods to gain a fuller understanding of their topic. Pinto
mentioned that mixed-methods offer deeper understanding of the data
that are gathered and allows for triangulation between the quantitative and qualitative
data. Triangulation is believed to improve the validity of the research. Though it does not
come without critique, Pinto (2010) believed that triangulation provides a more holistic
view of the phenomenon under study.
In the quantitative portion of this research study, the participants were asked to
complete a questionnaire consisting of 96 items. This questionnaire was securely
delivered online via Google Forms. Sue and Ritter (2007) warned that invited
participants may not respond to the request to partake in a study. There were people who
wished not to be part of this research study, and others who initially agreed to complete
the survey but changed their mind. The participants completed an online survey, which
Sue and Ritter explained to be
a relatively quick and low-cost option to gather data. In an effort to increase the number
of responses, prospective participants were contacted via email to inform them of the study and the importance of
their participation. The survey was easy to navigate and was kept short and concise.
Fowler (2009) warned that there may be those who do not answer every question in the
survey and, more importantly, there may be people who do not submit any response at
all. To reduce this
sample bias due to nonresponse, Fowler (2009) suggested sending an advance letter to
inform the participants of the study. In the advance letter, the participants would learn of
the purpose of the survey and of the overall study. For the qualitative portion of the
research, participants were invited to a focus group meeting to further discuss the survey
questions.
Instrument
The instrument used for this study was a modified version of the Academic
Integrity Survey (AIS, Appendix A), developed by McCabe in 1999 (McCabe, Trevino,
& Butterfield, 1999). Revisions of the survey were made in 2003 (Eckles, 2010). Dr.
Donald McCabe, of Rutgers University in New Jersey, was contacted via email by the researcher to request
permission to use his survey. He gave written permission to the researcher to modify and
use the instrument (D. McCabe, Creator of Academic Integrity Survey, personal
communication, June 7, 2013). The revised survey, consisting of 96 items, was modified
to fit the purpose of the study (Appendix B). According to Creswell (2005), it is
important to establish the validity and reliability of an instrument. For the study to be
considered valid, Creswell stated that the researcher should obtain useful information
from the participants, which can be used to make generalizations about the population.
Reliability, on the other hand, refers to the expectation of the instrument yielding similar
and consistent results with each use (Creswell, 2005). Boehm et al. (2009), Eckles
(2010), and Hart and Morgan (2010) all utilized the AIS, and each established reliability
and validity of the instrument prior to conducting their studies. Eckles stated that validity
of the instrument was based on the survey's being designed by one of the leaders in the
field of academic integrity. Responses were recorded on a 5-point Likert scale ranging
from never to very often, or were answered on a checklist where specific behaviors were
marked on a 5-point Likert scale which ranged
from not cheating to very serious cheating (Boehm et al., 2009; Eckles, 2010; Hart &
Morgan, 2010). The researcher’s study gathered information from all faculty who teach
online, to assess their attitudes and opinions in regard to dishonest behavior among their
students. The AIS is broken down into three main themes, namely, academic
environment, specific behaviors, and demographics (McCabe et al., 1999). The purpose of
the survey was to measure the extent to which instructional faculty are aware of various
methods of cheating in their classrooms and to gather information about measures that are
already used by instructional faculty to enforce the institution's code of conduct (Eckles,
2010; McCabe et al., 1999). In his research, Eckles (2010) evaluated and reviewed the
instrument for validity and reliability and found it to be solid in both areas. Eckles
performed the Cronbach's Alpha statistical analysis, which revealed a score of .911, indicating high internal consistency.
The purpose of the AIS was to find out the perceptions of faculty about students
who cheat, what factors contribute to cheating, the effects of honor codes used in
academia and the likelihood of that lowering the instances of cheating, and the effects of
academic integrity policies at institutions (McCabe et al., 1999). The writer employed a
modified version of the AIS, which places more emphasis on faculty's perceptions of
cheating and on measures taken to prevent cheating before it takes place in the context of online courses (Appendix B).
While there is no specific reason for the researcher to believe that cheating in the online
environment is alarming at any of the three institutions, SACS (2010) has required
institutions to demonstrate that they have taken measures to reduce online academic
cheating. The instrument
contains questions about the participant’s attitude about students who cheat. Nitko and
Brookhart (2011) explained that when attitudes are measured, one looks at
“characteristics of persons that describe their positive and negative feelings toward
particular objects, situations, institutions, persons, or ideas” (p. 433). In this case, the
instrument elicits faculty’s attitudes regarding the types of dishonest behavior their
students commonly exhibit, what measures they took after cheating was detected and
how academic policies affect cheating. Nitko and Brookhart explained that before an
instrument can be considered adequate for use, it is important to determine the validity of
the instrument. According to
Nitko and Brookhart (2011), validity is “the soundness of your interpretations and uses of
students’ assessment results” (p. 35). Nitko and Brookhart pointed out that there are four
principles that are used to determine whether a survey is valid. There must be evidence
that the survey is appropriate, the way the instrument is used must also be appropriate,
the values implied in the results of the survey must be appropriate, and finally, the
consequences of the interpretations must be consistent with the values (Nitko &
Brookhart, 2011). Another factor to consider when determining the validity of a survey is
content validity. This measures whether the survey questions and the scores assigned to
the questions represent all of the possible questions that can be asked about the topic. An
evaluation of the survey has to take a look at the way it was planned and which
procedures were followed, stated Creswell. Eckles (2010) established content validity based on the fact that the
instrument was created by McCabe, whom he described as “a leading expert in the field
of academic integrity issues in higher education” (p. 58). The modifications made to the
AIS were merely to customize the instrument to the participating research sites.
Criterion validity concerns "whether the scores from an instrument are a good predictor
of some outcome (or criterion) they are expected to predict" (Creswell, 2005, p. 165).
Eckles' findings were based on his research, which revealed that the survey had been
examined by experts in the field.
Internal and external validity. External validity was established when Eckles
(2010) carefully identified and selected his population, from which he ultimately drew his
sample at a western U.S. public institution of higher education. Additionally, he did not generalize
his results to groups outside of his population, as that would have created a threat to
external validity.
No data were actually provided in any of the aforementioned categories. When
assessments are given to participants, the scoring of those assessments determines
whether the researcher is able to analyze validity. Eckles (2010) made an inference about
the validity of the instrument based on the designer's credibility in the field.
Reliability. Creswell (2005) claimed that it should be the goal of good research to
have reliable measures or observations. According to Nitko and Brookhart (2011),
reliability is the degree to which an instrument yields consistent results. To assess the
test for reliability, Eckles used Cronbach's Alpha statistical analysis. The score
was .911, which is “of a high internal consistency reliability rating” (Eckles, 2010, p. 58).
Boehm et al. (2009) conducted a pilot study as part of their research, in an effort to
reestablish reliability and validity. The researchers asked experts to rate the survey
questions on how clear and consistent they were. The required score of 3.0 was exceeded
for clarity (3.6) and consistency (3.3). Additionally, the consistency reliability coefficient
of .768 on a Spearman-Brown formula added to the conclusion that the instrument was
reliable.
A Cronbach's Alpha analysis was also performed on the modified survey for this study. Multon and Coleman (2010) explained
that the Cronbach’s Alpha analysis is appropriate to run on scale items that highly
correlate with one another. The only question with such a correlation is question 1 about
the academic environment. The 5-item scale yielded a value of α = .87, indicating high
reliability. Scale means were 3.39 for severity of penalties for cheating (SD = 1.14), 2.78
for average student’s understanding of the college’s policies concerning cheating (SD =
1.01), 2.68 for student support of the policies (SD = 0.96), 3.80 for faculty support of the
policies (SD = 1.04), and 3.09 for effectiveness of the policies (SD = 1.02).
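The alpha statistic used above can be computed directly from a respondents-by-items
score matrix. A minimal sketch with invented Likert scores (not the study's data):

import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = np.array([  # hypothetical 6 respondents x 5 items
    [4, 3, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [5, 4, 4, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 4, 5],
    [1, 2, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")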
Eckles (2010) also offered recommendations regarding the utilization of the Academic
Integrity Survey. He suggested that the survey should be adapted to include a "not
applicable" option for some of the questions, as respondents did not all have experience
in, or exposure to, the questions related to policies at the institution. The survey only
contained a quantitative approach, and Eckles suggested that qualitative follow-up
questions upon receipt of the quantitative portion would expand the depth of the findings.
Measured Domains
For his research, Eckles (2010) measured a variety of domains, including the academic
environment, specific behaviors, and demographics.
Item Selection
To determine how items were selected for the test, the writer evaluated the
original writings by McCabe (McCabe et al., 1999). McCabe explained which factors
were going to drive the research. He listed honor codes (institutional factors) and moral
norms (personal factors). There was a comparison between schools that had honor codes
and schools that did not. The idea behind that was to find out if having an honor code
deters students from being dishonest in the first place (McCabe et al., 1999).
Procedures
The instrument used for this mixed-methods study was a modified version of the
AIS (Appendix B). Creswell (2005) stated that surveys can yield useful information,
which in turn aids in the evaluation of a program. In order to gather data, the researcher
employed the modified version of the AIS (DuPree & Sattler, 2010) and made it available
online through utilization of an electronic questionnaire. At the start of the study, the
researcher submitted required paperwork to the Institutional Review Board (IRB) at the
institution where she is a student, as well as the three institutions that agreed to
participate in the research. The directors of the respective distance education offices were
contacted and each explained that their procedure would be to disseminate the survey
once IRB approval was obtained. The directors all agreed to be the liaisons who would
distribute the survey via email, as it was against the policy of the institutions to provide
the researcher with a list of their online faculty. Upon receipt of the IRB approval, an
email was sent to the director of distance education to request that all online faculty be
contacted. The IRB approval from their respective institutions was attached to the email,
along with an invitation letter from the researcher, which explained the purpose of the
study and requested participation of the recipient. The modified AIS was sent to all
online faculty along with information about the survey, as well as a request for the participants to indicate their
interest in participating in a focus group by responding to the email (Sue & Ritter, 2007).
Signed consent was not required for the online survey, as the surveys were anonymous
and considered nonintrusive. Prospective participants were made aware that the
survey would take 15 to 20 minutes to complete, and the letter provided background
information of the researcher, the purpose of the study, as well as the risks and benefits of
participating in the study. The invitation contained a URL, which took the participant to the online survey.
Addressing nonresponse and bias. There are different reasons why prospective
respondents decide not to participate in a study, or fail to answer all survey questions.
Participants may refuse to respond because they have no interest in participating (Merkle,
2013). The request for participation may not have reached the prospective participant,
wrote Merkle, or they may not have understood the nature of the survey because of
language barriers or physical or mental disabilities. Sue and Ritter (2007) further explained that fear
of a lack of anonymity may affect participation. Even when participants are promised
anonymity, Sue and Ritter argued that some fear that their responses might be traced
back to them, making them reluctant to participate or to answer certain questions.
The problem of nonresponse has been addressed by researchers who have also
offered recommendations on how to reduce it (Merkle, 2013). Merkle pointed out that
nonresponse does not necessarily indicate that there is bias. As Groves et al. (2004)
stated, it almost never happens that all participants who are invited actually participate in
the study. Nonresponse is not automatically an issue when respondents fail to participate
as “response rates alone are not quality indicators” (Groves et al., 2004, p. 59). Instead,
Groves et al. explained that nonresponse bias may be reduced when the response rate is
high, but that there are ways to help reduce the bias and increase the response rate.
Merkle (2013) argued that reducing the correlation between the likelihood of response
and the variable of the survey itself would help to reduce bias. According to Groves et al.,
the quality of the survey statistics may be harmed by nonresponse, but the researcher
would have no way of knowing ahead of time whether nonresponse will have a negative
effect on their study. Nonresponse bias, stated Groves et al., arises “when the causes of
the nonresponse are linked to the survey statistics measured" (2004, p. 178). Based on
the writings of Groves et al., nonresponse is to be expected, and key survey statistics
ought to be examined carefully to ensure that the nonresponse is not linked to those
statistics.
Because the survey for this study pertains to online education, one way of
reducing bias was to deliver the survey online, where faculty have an assumed level of
comfort because of their online course delivery status. Prospective participants were
also assured that their responses would remain anonymous.
Fowler (2009) and Merkle (2013) suggested that the response rate for a survey
likely increases if participants are made aware of the importance of the study. Following
Fowler and Merkle's advice, 10 days after the initial email was sent, participants were
sent a reminder email, which indicated the importance of the survey to the college and
the benefit of the results, which would contain ways to improve the work of all online
faculty. The reminder also asked faculty who had already completed the survey to
encourage their colleagues to do the same. Fowler (2009) mentioned that increasing the
amount of contact increases the likelihood that participants will respond; based on
Fowler's advice, a final reminder email was sent out.
The use of incentives has been suggested (Fowler, 2009; Sue & Ritter, 2007) as a
way to motivate participants to complete a survey. Accordingly, the researcher of
this study offered participants a chance to enter a sweepstakes in which four people had a
chance to win a $25 gift card from Amazon.com. Winners received their prizes after the
survey closed and the random drawing was held. Participants had an opportunity to
complete an online form on Google Docs with their name and email address, through
which the winners were notified. Participants' names were in no way linked to their
survey answers, as they submitted that information through a different program. After
the period to submit the survey had expired, all the names of the sweepstakes entrants
were placed in the random drawing.
Fourteen days after the initial invitation was sent to the directors, the first
reminder letter was sent via email. The directors were asked to craft their own reminder
letter, or to use the reminder letter that was written by the researcher. Each director
elected to personalize the reminder letter that was provided by the researcher. They sent it
along with the required IRB forms. The final request to send a reminder was sent to the
directors after 10 more days. They each customized the letter that was provided by the
researcher and emailed it to the prospective participants. The respondents completed the
survey completely voluntarily and were provided full disclosure of potential harm prior to
participation. An eight-member focus group consisting of instructional faculty met to discuss the most
effective measures to prevent cheating, and perceptions and motivation of cheating at the
institutions. The participants of this focus group were given brief information regarding
the nature of the study, as suggested by Sue and Ritter (2007). Focus group participants
were made aware of the importance of their participation in the study and the potentially
negative effect nonresponse may have (Fowler, 2009). Additionally, they were assured
that their participation would remain confidential and that transcripts of their words
would be coded and password protected, and the recordings would be kept in a secured place
(Sue & Ritter, 2007). As supported by Fowler (2009), the respondents need to feel
comfortable with their participation in the study, thus ensuring their confidentiality is
critical.
In the initial information letter sent to all online teaching faculty, they were asked
to send an email to the researcher if they wished to participate in the focus group. An
electronic record of the email responses was kept of those instructors who indicated their
interest, along with the contact information that was collected. A letter was sent via email to those who indicated their
interest in partaking in the focus group. Morgan (2008) stipulated that the size of the
focus group is to be determined by the researcher, based on the needs pertaining to the
study. Morgan (2006) defined a focus group as having six to eight members selected
from the population of interest, in this case the online instructional faculty. Eight of
those who indicated their interest in the focus group were
selected at random. Three extra names were drawn as alternate participants. An email
was sent to the eight participants to invite them to a face-to-face meeting scheduled for
one month after the initial mail date of the survey. Because some of the eight participants
declined the invitation, instructors from the alternate group were solicited to fill their
spot. After the selection, the members were apprised of the contents of the letter of
permission they were asked to sign. A copy of the signed consent form was given to the
participants and the original signed consent forms were placed in a locked cabinet. These
consent forms included information on how their comments/responses in the focus group
would be recorded. The focus group was facilitated by the researcher. The results of the
open-ended questions from the focus group and the responses from the modified AIS
were triangulated. Creswell (2005) mentioned that the process of triangulation can be
used to examine the accuracy and credibility of the responses. Tashakkori and Teddlie
(2003) concurred with Creswell’s explanation regarding triangulation and added that the
qualitative and quantitative information that is gathered complement one another as they
each reflect their own perspective. The interaction of the focus group provided additional
insights into the phenomenon of online cheating that may not have been obviously
revealed by the survey. Short (2006) acknowledged the controversy regarding the use of
a focus group but noted that this small group can address issues that are not delved into in the survey.
Instructional faculty were asked questions on the modified AIS related to their
reporting of cheating to obtain an indication of whether and to what degree the faculty
were aware that students cheat in their classes. Results indicated whether demographic
information could have influenced the answers (Appendix B, Questions 4, 5, 9, 10, 12, 13).
The second research question asked how online faculty judge the seriousness of
online cheating and how well they think their college deals with it. There were questions
on the modified AIS about the seriousness of cheating that addressed this question.
To find out which strategies instructors use to minimize the instances of cheating
in their online courses, they were asked two questions (Appendix B, Questions 6, 14) on
the modified AIS which determined whether any measures were taken at all. If measures
were in place, the results of the surveys provided an indication of what was put in place.
Faculty were asked to indicate on the survey whether assessments in their courses are
safeguarded, whether software is used to detect plagiarism in written assignments, or
whether no action is taken to ensure course integrity.
Faculty were asked questions (Appendix B, Questions 7, 8) on the modified AIS
related to the institution’s code of conduct. They were also asked what steps are taken
when there is a breach of the code of conduct. Faculty responses were analyzed to
determine the extent to which instructional faculty enforce the code. Faculty were also
able to indicate on the modified AIS what they need in order to increase their awareness
about online cheating. Additionally, they were able to express what support the
institution can provide to help them be successful in their efforts to reduce or prevent
cheating. The qualitative responses were coded into groups to determine the distribution
of scores. A final question gave faculty an opportunity to express whether they feel that
institutional measures to prevent cheating are accepted.
Upon receipt of the completed surveys, the results were entered into PASW
Statistics 18 (formerly known as SPSS), a statistical program that was used to compute
the descriptive statistics for analyzing the results (Boehm et al., 2009; Creswell, 2005;
Eckles, 2010; Hart & Morgan, 2010). Creswell (2005) explained that a grouped
frequency distribution helps summarize the data more easily. To explain the results, data
collected about knowledge of the institution’s code of conduct were converted into
percentages and summarized with descriptive statistics, namely the median and mode.
According to Creswell (2005), descriptive statistics are helpful in summarizing the trends
and tendencies of the data that are gathered. The data analysis also provided the variance
for each set of values, which was relevant in order to make sense of the data. Creswell
(2005) confirmed that the SPSS program provides a good basis for scoring data collected
by the researcher. The information that was obtained is reported in written form and in
tables.
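For readers who wish to see how this kind of descriptive analysis looks outside of
PASW/SPSS, the following minimal sketch uses Python’s pandas library; the column
name and the Likert response codes are hypothetical placeholders, not the study’s data
file.

    import pandas as pd

    # Hypothetical Likert-coded responses (1 = strongly disagree ... 5 = strongly agree)
    responses = pd.DataFrame({
        "q7_code_of_conduct": [3, 4, 3, 5, 2, 3, 4, 3, 1, 3],
    })

    col = responses["q7_code_of_conduct"]
    print(col.value_counts(normalize=True) * 100)  # percentage per response option
    print("median:", col.median())
    print("mode:", col.mode().iloc[0])             # most frequently selected option
    print("variance:", col.var())                  # spread of the Likert scores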
Pearson correlations were performed on pairs of Question 1 items: “The average
student’s understanding of the college’s policies concerning cheating” and “Student
support of these policies”; “Student support of these policies” and “Faculty support of
these policies”; and “Faculty support of these policies” and “The effectiveness of these
policies.” A Pearson correlation was also performed on the number of times a student
was caught cheating (Question 5) and the steps taken as a result, correlated with the
severity of punishment (Question 6). In Question 13, “Cheating is a serious problem at
this institution” was tested for correlation with “Faculty members are vigilant in
discovering and reporting suspected cases of academic dishonesty in their online
classes.” The demographic items of years of teaching, gender, and discipline (Questions
16, 17, and 18) were tested for correlation with the instructors’ reaction (Question 6).
More specifically, Question 16, “How many years have you been teaching at the college
level?” was tested for correlation with the faculty’s reaction to evidence of cheating
(Question 6). The researcher tested whether a correlation exists between the faculty’s
gender (Question 17) and the type of reaction to evidence of cheating (Question 6).
Finally, the faculty’s teaching discipline (Question 18) was tested against the type of
reaction to evidence of cheating (Question 6) to determine whether a correlation exists.
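As an illustration of how one of these pairwise correlations can be computed, the sketch
below uses scipy’s pearsonr in place of PASW/SPSS; the two response vectors are
hypothetical Likert codes, not the study’s responses.

    from scipy.stats import pearsonr

    # Hypothetical paired Likert ratings (1-5) from the same respondents
    student_support = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]
    faculty_support = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]

    r, p = pearsonr(student_support, faculty_support)  # Pearson r and two-tailed p value
    print(f"r = {r:.2f}, p = {p:.3f}")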
The focus group answered the same questions on the modified AIS, except the
questions were open-ended, rather than closed, which allowed participants to elaborate
on their answers. Subsequently, the responses were written down, organized into
common themes, coded, and analyzed. According to Creswell (2005), the use of a focus
group can result in the gathering of extensive data. Members of the focus group for this
study had an opportunity to go into more depth about the extent of cheating by students
and ways to prevent it. The purpose of the focus group was to allow the group members
an opportunity to engage in a guided discussion. The open-ended questions were
particularly useful as they provided deeper insights into the research questions, along
with the possibility of elucidating any hidden variables (Davern, 2008). The group
members all had experience with the online platform, and their efforts in increasing
student success while maintaining the credibility of the institutions added to the value of
the group. The group’s homogeneity encouraged the members to share experiences that
were similar or different and served to further support the quantitative portion of the
study (Davern, 2008).
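Although the qualitative coding in this study was done by hand, tallying coded responses
into themes can be illustrated with a short sketch; the participant labels and theme names
below are hypothetical examples, not the study’s actual codes.

    from collections import Counter

    # Hypothetical (participant, theme) pairs produced by hand-coding a transcript
    coded_responses = [
        ("P1", "lack_of_proof"), ("P2", "hacking_accounts"),
        ("P3", "lack_of_proof"), ("P5", "cost_of_proctoring"),
        ("P7", "lack_of_proof"), ("P8", "cost_of_proctoring"),
    ]

    theme_counts = Counter(theme for _, theme in coded_responses)
    for theme, count in theme_counts.most_common():
        print(theme, count)  # distribution of coded responses across themes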
1. A liaison at each institution was contacted. Since faculty contact information
could not be obtained directly due to institutional policy, the liaison sent correspondence
to all online faculty, which contained an informational form.
2. A Google forms URL for survey access was included in the informational
form. Faculty who wished to join the focus group contacted the researcher via email to
express their interest. Interested participants did not have to complete the survey to be
eligible.
3. Fourteen days after initial contact, a reminder was sent to the population to
complete the survey.
4. Since more than eight participants expressed interest, eight were randomly
selected to become focus group members 5 days after the survey portion of the study
closed. After 14 days, focus group members were invited to a meeting to discuss the
modified AIS questions. The group met in a conference room at the researcher’s
worksite, where group members who were unable to meet in person had an opportunity
to be present via conference call. To ensure the privacy of the participants, the meeting
was held in a closed room, which kept their voices from being heard by others who
might pass by.
5. The focus group members were advised of the general purpose of the group:
to have a discussion about the modified AIS questions in an effort to triangulate their
responses with the ones obtained through the survey. The group members were asked
not to discuss the focus group conversation outside the meeting. Additionally, they were
asked not to disclose the identities of the other participants.
6. The one-hour meeting was recorded on a portable audio recorder for further
analysis. Participants were labeled P1 through P8 and their answers were coded as
follows: Academic Environment questions were coded AE1a, AE1b, AE1c, etc.;
demographics questions were coded D16, D17, and D18.
7. After the meeting, the researcher listened to the recordings wearing
headphones, sorted and recorded the data electronically, and analyzed the results by
comparing them with the survey responses. The researcher listened to and transcribed
the audio recordings in her private home office. The recordings and transcripts were
secured in a locked cabinet at the researcher’s home office.
8. All information collected for the focus group portion of the study will be
destroyed 3 years after the completion of the study by deleting the electronic files and
destroying the recordings and paper records.
Chapter 4: Results
The purpose of this mixed-methods study was to provide an inquiry into the
phenomenon of cheating in online courses. The previous chapter provided details about
the steps taken to implement the study. This chapter will discuss the results of the data
analysis.
Days after the invitations were sent to the participants, the researcher received a
few emails which stated that there was a technical glitch with one of the questions
(Question 9). The question instructed participants to select one answer from the left
column (Part I) and another answer from the right column (Part II). The participants were
only able to select one answer from either column, resulting in 42 answer submissions for
Part I and zero submissions for Part II. As a result, the researcher had to change the
question into two parts: in Part I, the participants selected one answer and in Part II, they
selected the other answer. By the time the correction was made, the researcher had to
evaluate the likely effect of the 42 submissions in which the respondents were limited to
selecting from either the left column or the right but not both. The chi squares (for Part I)
and correlations (for Part II) were completed to determine whether Question 9 responses
differed between the first 42 participants and the rest (see Appendix C). No significant
differences were found (χ² values ranged from .742 to 5.622, p values ranged from .132
to .863, df = 3). These results suggest that modifying the survey did not affect the way
participants responded.
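The comparison of early and later respondents can be illustrated with a brief sketch using
scipy’s chi2_contingency; the counts below are hypothetical placeholders (the study’s
actual contingency tables appear in its Appendix C).

    from scipy.stats import chi2_contingency

    # Hypothetical counts of the four Part I answer options,
    # split by whether the respondent answered before or after the fix
    table = [
        [10, 12, 11, 9],   # first 42 participants (before the correction)
        [22, 25, 23, 19],  # remaining participants (after the correction)
    ]

    chi2, p, df, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}, df = {df}")  # df = 3 for a 2 x 4 table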
The results of the survey and the focus group meetings are included in the
sections that follow.
Demographics
A total of 588 online faculty from the three research institutions were invited to
complete the online survey. Of those who were invited, 22% completed the survey: 51
males (39.2%) and 79 females (60.8%) indicated their gender, and one participant did
not complete the gender question (N = 131). Table 1 shows the breakdown by academic
discipline.
Table 1

Breakdown of Survey Sample and Online Faculty Population by Academic Discipline

Discipline                     Sample n      %     Population n      %
Arts                               1         .8         14         3.3
Business                          17       13.2         74        17.2
Communication/journalism           9        7.0         41         9.5
Engineering                        2        1.6          0         0.0
Humanities                        22       17.1         45        10.5
Math or Science                   31       24.0        101        23.5
Nursing/health professions        23       17.8         69        16.0
Social/behavioral sciences        24       18.3         86        20.0
Missing*                           2
Two faculty did not respond to the question, perhaps because their discipline was
not listed or they chose not to answer for other reasons. It is worth noting that the same
number of participants did not answer the question about years of teaching.
Table 2 displays the number of years participants have taught at the college level.
Table 2

Years of Teaching at the College Level

Years          Frequency      %
0–2                 4         3.1
3–7                39        30.2
8–12               35        27.1
13 or more         51        39.5
Missing*            2
Total             129
The focus group consisted of six males and two females (n = 8). The members
taught in several disciplines, including math or science (n = 5), business (n = 1), and
social and behavioral science (n = 1). All of the focus group members had more than 13
years of college-level teaching experience.
Combined results indicated that the majority of instructors (57.3%) thought that
plagiarism at their institution occurs often or very often (Table 3). When faculty were
asked how frequently they thought students inappropriately shared work in group
assignments, the majority (51.9% combined) indicated that it occurred often to very
often. The frequency of cheating, based on the total of those who responded, is presented
in Table 3. Means and standard deviations for Questions 4a–4c are presented in Table 4.
Table 3

Aggregated Survey Responses: Perceived Frequency of Cheating, Questions 4a–4c*

Missing** 2 2 2
* 4a—How frequently do you think plagiarism on writing assignments occurs in the online courses at your
institution?; 4b—How frequently do you think students inappropriately share work in group assignments
occurs in the online courses at your institution?; 4c—How frequently do you think cheating during tests or
examinations occurs in the online courses at your institution?
** Missing indicates how many participants did not respond.
Table 4

Means and Standard Deviations, Questions 4a–4c

Question      Mean      SD
4a            3.76      .830
4b            3.73      .940
4c            3.24     1.006
Faculty were asked which dishonest behaviors, if any, they witnessed their
students engaging in during the past 3 years. The type of dishonest behavior selected by
participants most often (68.1% in the combined Once and More than once categories)
was paraphrasing or copying a few sentences from a book, magazine, or journal
(not electronic or Web-based) without footnoting them in a paper s/he submitted (see
Table 5). Behaviors that were never observed by the majority of respondents were using
digital technology (such as text messaging) to get unpermitted help from someone during
an online test or assignment (65.8%), helping someone else cheat on an online test
(65.2%), copying from another student during an online test with his or her knowledge
(61.4%) and getting questions or answers on an online test from someone who has
already taken a test (58.5%). More than 25% of participants teach in math, science and
engineering—areas that generally do not require research papers. Therefore, there were
several who selected the “Not Relevant” option. Over 41% of participants indicated that
they caught students using a “paper mill” (a paper written and previously submitted by
another student) and claiming it as his/her own work once or more than once. The results
are in Table 5. The mean values indicate that the respondents deemed each of these
behaviors to occur infrequently.
A combined majority of faculty (89.7%) indicated that their students used the
Internet or other electronic means only (57.0%) or the Internet primarily (32.7%) to
access paraphrased or copied material from a written electronic source (see Table 7).
Respondents were asked if they ever offered an online test or exam at their
institution and 83.7% (n = 108) answered affirmatively. Those who answered yes were
then asked if they ever observed collaboration, use of books on a closed book exam,
students receiving unauthorized help or looking up information on the Internet when not
permitted. For this question, respondents had to check all that applied. The type of
cheating most frequently observed by faculty was students’ looking up information on the
Internet when not permitted (30.5%). The types of cheating observed are shown in Table
6.
Table 5
Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions
9a1–9d1
Table 6
Means and Standard Deviations, Questions 9a1–9d1
Question Mean Standard deviation
While some participants (49.7%) indicated that they agreed or strongly agreed
that cheating is a serious problem at their institution, more than half (50.5%) indicated
that they strongly disagreed, disagreed, or were unsure. The mean score of 3.54 supports
this division, falling between unsure and agree.
Table 7
Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions
9e1–9h1
Table 8
Table 9
Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions
9i1–9m1
*9i1— How serious is using an electronic/digital device as an unauthorized aid during an exam; 9j1—
How serious is turning in a paper copied, at least in part, from another student's paper, whether or not the
student is currently taking the same online course; 9k1— How serious is using a false or forged excuse to
obtain an extension on a due date or delay taking an online exam; 9l1— How serious is turning in work
done by someone else in an online class; 9m1— How serious is cheating on a test in an online class in any
other way.
**Missing indicates how many participants did not respond.
Table 10
Table 11
Method Frequency %
Missing* 24
Total 107
Table 12
*12a—Collaborated with others during an online test or exam when not permitted?; 12b—Used notes or
books on a closed book online test or exam?; 12c—Received unauthorized help from someone on an online
test or exam?; 12d—Looked up information on the Internet when not permitted?
Table 13
Response Frequency %
Strongly disagree 1 .8
Disagree 10 7.8
Agree 46 35.7
Missing* 2
Total 129
Participant 6 stated:
A lot more plagiarism in discussion postings because Turnitin does not work with
the discussion feature. Cheating for proctored–never. Nonproctored I think it
happens, but there is no way you can prove it.
Participant 2 stated:
I have had students hack each other’s accounts with the tests. And it’s quite easy.
Here at XX college, you know everyone’s user name from the mail system and
the default password is your birthday and everyone has their birthday on
Facebook. I always tell my students change your password and they don’t. Once
you’re in the test, it does not take much time–it’s very, very quick.
Participant 4 stated:
Cheating can also be something like looking into Google and translating the
answer to another language and translating it back.
Focus group members were asked how often, if ever, they have seen a student
cheat during an online test or examination at their institution. Three members indicated
that they have seen cheating once to a few times. Some of their comments were as
follows:
Participant 2 stated:
A few times. On more than one occasion I have had students hack each other’s
account. Another circumstance when the students took the test simultaneously.
Participant 3 stated:
Great many times, as I work in learning resources. The problem I’ve had with
mathematics is that students would write down the problem and come to us for
help on solving the problem for them and then they go in and put in the answers.
Participant 7 stated:
Many times. It’s obvious when you’ve been doing it for 16 some-odd years.
Next, focus group members were asked how often, if ever, they have observed or
become aware of a student in their class engaging in different cheating behaviors during
the last 3 years. Two indicated that they observed fabricating or falsifying a bibliography
in an online assignment more than once; one indicated that he witnessed students working
on an online assignment with others although the instructor had asked for individual
work.
Participant 6 stated:
Witnessed it not in my own course, but other staff. I did not do anything when I
witnessed it, because I think it should be up to the faculty to design the course so
this does not happen.
One also indicated that he observed once that students got questions or answers
on an online test from someone who had already taken the test. No one indicated that
they witnessed students help someone cheat on an online test. Two noted that they
became aware of students copying from another student during an online test with his or
her knowledge. One focus group member once observed or became aware of a student
using digital technology to get unpermitted help from someone during an online test or
assignment. Once, two focus group members became aware of or observed a student
paraphrasing or copying a few sentences from a source without footnoting them in a
paper he or she submitted in an online class. Finally, more than once, two focus group
members observed or became aware of a student using a false or forged excuse to obtain
an extension on a due date or to delay taking an online exam.
There were several forms of cheating that were never observed by any of the
focus group members, namely, turning in a paper in an online class from a “paper mill”
and claiming it as one’s own work; using an electronic/digital device as an unauthorized
aid during an exam; turning in a paper copied from another student’s paper; turning in
work done by someone else in an online class; and cheating on a test in an online class
in any other way.
When focus group members were asked how they believed students accessed
material if they paraphrased or copied material from a written electronic source without
citing it, each member stated that students accessed the information from the Internet.
Focus group members were asked whether they agreed or disagreed with the
statement that cheating in online classes is a serious problem at their institution.
Four stated that they were not sure, two agreed and one strongly agreed.
Research Question 2. The second research question was: How do online faculty
judge the seriousness of online cheating and how well do they think their college deals
with it? To answer this question, the first step was to assess whether faculty even think
cheating is a problem. Several survey questions measured the faculty’s attitudes about
the severity of cheating and different measures in response to
online cheating (see Table 14). The median and mode of 3 indicate that faculty were
unsure about cheating being a serious problem at their institution. It is important to
establish the faculty’s uncertainty, as it may influence their perceptions of the factors
that contribute to cheating.
One of the survey questions was about the fairness of the student judicial process
(see Table 14). The median (3) and mode (3) indicated that faculty were not sure how fair
the process is. The mode (4) for the response to whether students should be held
responsible for the academic integrity of other students indicates that the most frequently
reported answer is agree. The median value is 3. The median and mode of 3 for faculty
vigilance showed that participants are unsure whether other faculty members are vigilant
in discovering and reporting suspected cases of academic dishonesty in their online classes
(see Table 14). The lack of commitment is another factor that could contribute to
cheating. Faculty were also unsure about the fairness and impartiality of the college’s
judicial process that handles student cheating, as indicated by a median and mode of 3.
The next step was to determine to what extent faculty interpreted behaviors as cheating
or not. Each of the dishonest
behaviors was seen as cheating to some extent by each participant who answered the
question. Most of the types of dishonest behaviors were identified by more than 80% of
Table 14
Aggregated Survey Responses: Faculty Attitudes Toward Online Cheating, Questions
13a–13d
13a* 13b* 13c* 13d*
Response
Freq. % Freq. % Freq. % Freq. %
*13a – Cheating in online classes is a serious problem at their institution; 13b – Our student judicial
process is fair and impartial; 13c – Students in online classes should be held responsible for monitoring the
academic integrity of other students; 13d – Faculty members are vigilant in discovering and reporting
suspected cases of academic dishonesty in their online classes.
**Missing indicates how many participants did not respond.
Table 15
respondents as serious cheating (see Table 9). Only 36.8% of respondents indicated that
they thought of paraphrasing or copying a few sentences from a book, magazine, or
journal without footnoting them as serious cheating. Similarly, respondents did not see it
as serious cheating when students were working on an online assignment with others
when the instructor asked for individual work (M = 3.32, SD = 0.69). For those
questions, the mean scores are closer to 3, which indicates that the respondents felt that
the dishonest behaviors were moderate rather than serious.
Table 16
Not cheating 0 0 0 0 0 0 0 0
Missing** 41 34 31 30
Table 17
Table 18
*9e2 - Copying from another student during an online test with his or her knowledge; 9f2 - Using digital
technology (such as text messaging) to get unpermitted help from someone during an online test or
assignment; 9g2 - Paraphrasing or copying a few sentences from a book, magazine or journal (not
electronic or Web-based) without footnoting them in a paper s/he submitted in an online class; 9h2 -
Turning in a paper in an online class from a “paper mill” (a paper written and previously submitted by
another student) and claiming it as his/her own work.
**Missing indicates how many participants did not respond.
Table 19
Table 20
Not cheating
Trivial cheating
Moderate cheating
Serious cheating
Missing** 30 28 26 26 28
*9i2 - Using an electronic/digital device as an unauthorized aid during an exam; 9j2 - Turning in a paper
copied, at least in part, from another student’s paper, whether or not the student is currently taking the same
online course; 9k2 - Using a false or forged excuse to obtain an extension on a due date or delay taking an
online exam; 9l2 - Turning in work done by someone else in an online class; 9m2 - Cheating on a test in an
online class in any other way.
**Missing indicates how many participants did not respond.
Table 21
The focus group results showed that six of the eight participants were
between not being sure and agreeing that cheating in online classes is a serious problem
at this institution, much like the survey respondents. Two participants noted that they
would have to guess at their answer, because they “need to look at data.” When asked if
the judicial process is fair and impartial, five agreed, whereas two were not sure. For the
question on whether students in online classes should be held responsible for monitoring
the academic integrity of other students, five varied between disagree and strongly
disagree.
Several types of dishonest behavior were marked as “not cheating,” such as paraphrasing
or copying a few sentences from a book, magazine or journal (not electronic or Web-
based) without footnoting them in a paper s/he submitted in an online class (2.8%) and
using a false or forged excuse to obtain an extension on a due date or delay taking an
online exam.
The focus group members had a much different perception of dishonest behaviors
than the survey respondents. All of the focus group participants stated that the forms of
cheating are all “serious cheating,” with the exception of two who considered using a
false or forged excuse to obtain an extension on a due date or delay taking an online
exam to be trivial or moderate.
Participant 5 stated:
Using a false or forged excuse to get more time–all the time. That seems more
moderate. It’s more like boundary pushing. Not as serious as the last one.
Participant 8 stated:
I believe in my own mind that it’s false or forged, but I consider it trivial.
Research Question 3. The third research question was: What strategies are used
by instructional college faculty to minimize online cheating? A subset of respondents
indicated that they saw a student cheat at least once. Those respondents were then asked
what their likely reaction would be if they were convinced, even after discussion with a
student, that a student had cheated on a major test or assignment in their online course.
They had to check all the reactions that applied to them. One answer was to fail the
student for the test or assignment. Respondents also had the opportunity to write in their
own answer if they had a reaction to cheating that was not listed. Write-in responses
included the following:
3. “Discuss the assignment with the student in an effort to prove he/she couldn't
Table 22
Table 23
Response N % n % n % n % n %
*If you were convinced, even after discussion with the student, that a student had cheated on a major test or
assignment in your online course, what would be your most likely reaction? *6a—Reprimand or warn the
student; 6b—Lower the student’s grade; 6c—Fail the student for the test assignment; 6d—Fail the student
for the course; 6e—Require student to retake test/redo assignment; 6f—Report student to the Dean of
Students; 6g—Report student to your Chair/Director or Dean; 6h—Do nothing about the incident; 6i—
Other. Total of percentages indicate that respondents in some cases selected multiple responses. Total
number of respondents for each response = 60.
In addition to reactions to cheating, there are several safeguards employed by faculty to
aid in the
reduction of cheating. Respondents checked all options that applied to them. The most
widely used are provision of information about cheating (65.6%), Internet or plagiarism
software (59.5%), discussing the importance of honesty (52.7%) and changing exams
regularly (51.1%). A small percentage of participants (1.5%) indicated that they use no
safeguards in their courses. At-home proctor software was selected by only 9.9% of the
respondents (see Table 24).
Table 24
N = 131 N % Yes
Note. 14a. None. I do not use any special safeguards in my courses, 14b. Use the Internet,
or software such as Turnitin.com, to detect or confirm plagiarism, 14c. Provide information
about cheating/plagiarism on course outline or assignment sheet, 14d—Change exams
regularly, 14e—Hand out different versions of an exam, 14f—Discuss my views on the
importance of honesty and academic integrity with my students, 14g—Remind students
periodically about their obligations under the institution’s academic integrity policy,
14h—Closely monitor students taking a(n) test/exam, 14i—On-campus proctored testing
center, 14j—Off-campus proctored testing center, 14k—At-home webcam computer
proctor, 14l—Password protected exams, 14m—Secure exam browser lockdown.
Focus group members were asked what safeguards they employ in their courses,
and indicated that the most widely used safeguards are the Internet or software such as
Turnitin.com to detect or confirm plagiarism, and the on-campus proctored testing
center. While that was the second highest selected safeguard, survey respondents
indicated that providing information regarding cheating or plagiarism is their most likely
action to safeguard their course. Three focus group members indicated that they no
longer give exams or they no longer base the students’ grades on results of high-stakes
exams. Safeguards that were mentioned by other focus group members are providing
information about cheating or plagiarism, handing out different versions of the exam,
and using password-protected exams. Some of the remarks were as follows:
Participant 2 stated:
Refuse to teach a course where all of the tests would be online. I don’t see the
point of that. I would accept offsite as long as it is a reputable place.
Participant 5 stated:
My biggest concern with webcam or off-campus is the cost. If the cost situation
could be resolved where I don’t have to take into consideration that I want to give
five tests in my course and it is $20 to $25 a pop—that all of a sudden becomes a
lot of money. I don’t trust secure lockdown browser. I don’t have confidence with
that type of technology where all of a sudden you’re roped into “I can’t get it
installed or the system froze.”
Research Question 4. The fourth research question was: To what extent do
instructional college faculty follow the institution’s code of conduct in response
to academic dishonesty? Respondents were able to select more than one response. Table
25 shows that, with the exception of 8.4% of respondents, all respondents knew about the
academic integrity policy. The majority (61.8%) learned about it in the faculty handbook,
followed by 41.2% who learned about the policies from the faculty orientation program.
Table 25
Aggregated Survey Responses: Primary Source From Which Faculty Learned About
Academic Integrity Policies
Response n Yes %
Faculty were asked what their reaction to cheating would be if they were
convinced that a student cheated on a major test or assignment. Table 22 shows that
23.3% would report the student to the Dean of Students, 16.6% to their Chair/Director or
Dean; but 3.3% indicated that they would do nothing about the incident.
When asked whether an incident of cheating was ever ignored and why, 38 out of
125 participants (30.4%) who answered the question indicated that they have ignored it.
Those 38 respondents were asked to indicate on a checklist what the reason was they
ignored cheating. Most of them (84.2%) indicated that they ignored it because they
lacked proof that cheating had taken place. Respondents also had the opportunity to
write in their own answer if their reason for ignoring cheating was not provided as an
answer choice. Write-in reasons included the following:
1. “My exams are designed so that students who cheat them fail. Saves me from
having to get into the whole bureaucratic mess of bringing student up on charges.”
3. “How can I prove another person took the exam; perfect score in minimal
time.”
4. “Using books and notes would not help one cheat on an oral French test.”
5. “The student was not passing the course. Did not matter if the student earned
100% the balance of the grades were so poor, it not make a difference”
Table 26
Response Frequency %
Total 38
Faculty were asked if they ever referred a case of cheating to their Chair, Dean or
anyone else and how satisfied they were with the way the case was handled. Of the 58
people who answered, 70.7% indicated they were very satisfied (36.2%) or satisfied
(34.5%). See Table 27 for further details. The most likely reason that 78 respondents did
not answer this question is that they had never referred a case of cheating.
Table 27
Response Frequency %
Total 58
*Missing indicates how many participants did not respond. Total respondents = 58.
Focus group members indicated that they learned about the academic integrity
policies at their institution from the college catalog. For this
question, respondents could select multiple sources if the integrity policy was received in
that manner. Four members indicated that they also received this information from the
faculty orientation program, the faculty handbook, the department chair and from other
faculty. One focus group member indicated that information was obtained from the dean
or other administrator.
Five of the focus group members—those who indicated that they were convinced
that a student cheated on a major test or assignment—stated that they would fail the
student for the test or assignment. In each of the following categories, one focus group
member each indicated that their reaction would be to lower the student’s grade, fail the
student for the course, and do nothing about the incident. Participant 2 explained in
regard to what action would be taken if a student had cheated: “unless I can really
validate then there is no point [to take any action]. Unless I can convince myself, then
there is no way of really convincing anyone else [that the student cheated].” Participant 3
mentioned “If I were to catch someone in the test environment then they would fail that
particular test. And anything else I would ignore. I would have to be sure.” Participant 4
said “My first year, I ignored it because I did not know how to proceed.”
Another focus group member said:
Have I known it happened and decided not to proceed further on the chain of
commands? Absolutely, because, as others have said, my standpoint is obvious:
they’ve cheated. But they already received punishment—they failed the test or
assignment. Why bother?—the penalty is in place.
Participant 8 said:
I usually fail the student on that assignment and tell them not to do it again. With
my multiple-choice quiz I usually do [ignore cheating] because I can’t prove that
it was done. With my experience, they will ultimately fail. I usually teach six
classes and it’s hard. It’s time-consuming.
There were two focus group members who indicated that they have referred a
suspected case of cheating to their Chair or someone else. One was very satisfied with the
way it was handled, while the other (Participant 6) mentioned “I was hoping that the dean
Research Question 5. The fifth research question was: What types of support do
instructional college faculty desire to help lower online cheating? Plagiarism detection
software, like Turnitin.com, is the most widely selected choice of safeguard (50.0%), as
shown in Table 28. Respondents who selected “other” wrote in answers such as the
following:
2. “Time frame for completion thus providing time to cheat once test started”
5. “The structure of the class can reduce cheating greatly. Multiple, smaller
assignments that ask for written explanations can make cheating a lot more difficult”
The details of the survey participants’ answers are reflected in Table 28.
Table 28
Safeguard n %
15a—Plagiarism detection software, like TurnItIn.com 52 50.0
15b—On-campus proctored testing center 33 31.7
15c—Off-campus proctored testing center 18 17.3
15d—At-home webcam computer proctor 33 31.7
15e—Password-protected exams 37 35.6
15f—Secure exam browser lockdown 34 32.7
15g—other 6 5.8
Note. 15a—Plagiarism detection software, like TurnItIn.com; 15b—On-campus proctored testing center;
15c—Off-campus proctored testing center; 15d—At-home webcam computer proctor; 15e—Password-
protected exams; 15f—Secure exam browser lockdown; 15g—other.
When asked which safeguards focus group members would use if they were
available, Participant 2 reiterated that an off-campus proctored testing center would be
acceptable as long as it is a “reputable place.” Participant 4 mentioned “in Moodle, you
have test banks with three
Participant 7 mentioned
Would love to have at home webcam computer proctor. Problem is the cost. To
have them pay $125 a semester, just… I can’t ask that of them. So until the cost
can be mitigated I won’t do it.
Research Question 6. The sixth research question was: To what degree do
instructional college faculty perceive the acceptance of the use of institutional measures
to prevent online cheating? To answer this research question, faculty answered a
Likert-scale question where they had to rate their perception very low (1), low (2),
medium (3), high (4), or very high (5). The most frequently selected answer concerned
faculty support of these policies, which was rated high (Mdn = 4.00, mode = 5, M =
3.80, SD = 1.058).
Given the faculty’s desire for support through such measures (Table 28), it is interesting
to note that the most widely selected answer by faculty on how information regarding
plagiarism is conveyed is via their syllabus (74.4%; see Table 30).
As reported earlier, 38 respondents indicated that they had ignored a suspected incident
of cheating. Those 38 were then asked to check all reasons that applied to them from a
checklist provided. Faculty who ignored a suspected incident of cheating checked off
lack of proof as the primary reason why they did so (84.2%, n = 32; see Table 26). As
far as referring a suspected case of cheating to the Chair, Dean, or anyone else, 44.6%
(n = 58) indicated that they had, and 70.7% were very satisfied (36.2%) or satisfied
(34.5%) with the way the case was handled (see Table 27).
Table 29
Aggregated Survey Responses: Faculty Ratings of Institutional Measures to Prevent
Online Cheating
Response n Median Mode Mean SD
1a. Severity of penalties for cheating in 121 3.00 3 3.26 1.173
online classes at your institution
1b. Student’s understanding of the 125 3.00 3 2.71 1.022
college’s policies concerning cheating
in online classes
1c. Student support of these policies 101 3.00 3 2.69 .935
1d. Faculty support of these policies 120 4.00 5 3.80 1.058
1e. Effectiveness of these policies 117 3.00 3 2.98 1.025
Focus group members were asked to rate the severity of penalties for cheating in
online classes at their institution. Some of their responses follow.
Participant 1 stated:
Really high, because I have seen where it has gone through the ranks—not in my
case, but I have seen where it—it occurred in other cases–where it went from the
Dean to the Associate Dean all the way up to the Dean of Student Affairs. I think
we have the appropriate setup to take care of cheating.
Participant 2 stated:
I agree with the fact that we have a process in place that works. I am not so sure
that I would rate the overall severity being high because it is very much at the
discretion of the instructor as the instructor determines their own syllabus. So I
could have one penalty and another colleague could have another penalty for the
same infraction. So institutionally, I don't think we’re highly effective that way.
But I do agree that once you set your policy the procedure does work, assuming
that policy is then seen through.
Participant 3 said:
Table 30
Aggregated Survey Responses: How and When Faculty Discuss Institutional Dishonesty
Policies, Questions 2a–2d
2a* 2b* 2c* 2d*
Response
n % n % n % n %
*2a—When, if at all, in your online courses do you discuss with students your policies concerning
plagiarism? 2b—When, if at all, in your online courses do you discuss with students your policies
concerning permitted and prohibited group work or collaboration? 2c—When, if at all, in your online
courses do you discuss with students your policies concerning the proper citation or referencing of sources?
2d—When, if at all, in your online courses do you discuss with students your policies concerning
falsifying/fabricating research data?
**Missing indicates how many participants did not respond.
Table 31

Response n %

Participant 4 stated:
I have to say no opinion, because I have not seen the process go through.
Participant 5 stated:
I don’t know if I think that there is a culture of severity for cheating, because I
don’t think it’s something that can be quantified, I guess you would say. Because
for me, it’s like, like your case where 30% is taken at home. Is that really…? And
if your brother does it for you? Well, can I prove that? And the administration is
in a position of “Well, did it really happen?” I don’t really think that … it’s sort of
ubiquitous, it’s not really well defined. I agree that if it is in my syllabus, I can
really say I’m behind that. The administration would do the same. But I think it’s
a difficult situation to prove and a difficult situation to apply a penalty for
something you really can’t define.
Participant 6 stated:
When you look at the syllabus template that the Institutional technology
department provides and their statement on what the penalties are, it very much
follows the policy that is in place by the college, which is very open-ended. I
think that the severity is dependent on the instructor and the department that the
instructor is in as to how much they want to actually enforce it.
Participant 7 stated:
Yes, there is a culture of severity: the penalty is set out and it’s severe. But the
position the administration takes is “Well, but can you prove it?” And that’s a
very difficult thing when you teach solely online the way I do. So I think it is
much more complex.
Participant 8 stated:
Well, I can only echo what everyone else says in terms of “There is a policy in
place,” but it’s extremely subjective from our perspective as professors and from
those who are above us—technically the associate dean and the dean of students.
And it’s subjective also in the sense of “What are the penalties on our end?” If we
pursue punishing the student, there is an atmosphere in the college where they
would rather give the student the benefit of the doubt. Although I have, very early
on in my career, I’ve sat in on grade appeals and that is where we find that we
have a lot of coverage and advocacy, but when it comes to severely punishing
someone for cheating … I don’t know far the school would like to go. And I
wonder as well–just to add on to what I said—with this atmosphere of retention–
well that’s something to consider when retention is based on ... or monies is based
on retention … that’s something else we have to figure out when we see cheating.
To the question of how faculty would rate the average student’s understanding of
the college’s policies concerning cheating in online classes, focus group members
answered as follows:
Participant 1 said:
Participant 2 said:
Participant 3 said:
I would say low as well–I have no reason why, except that from interacting with
students. I would say that they are not aware and that they will see how much they
can get away with and push to the boundaries. Maybe they are aware of it and
they decide to push the boundaries.
Participant 4 said:
when do you cross over into cheating like taking somebody’s notes instead of
your own or turning in someone else’s work as your own.
Participant 5 said:
I think it’s really a two-pronged problem: The first is that I don’t think they
understand. They go on the Internet and think, “Well, this is like research.” They
can put that in their discussion. Now I just take it and put it into Google and, look,
it comes up as this other guy’s article. They don’t really realize that that’s not
theirs–you have to cite that. So I think they don’t really know and they also think
too, if they can push a little bit and try to get to the edge. I think it’s probably a
combination. I think we should probably push for more: Maybe they can have a
module or something to explain what it is–what cheating really is.
Participant 6 said:
I feel that students are given enough opportunity to actually know what it is,
because the orientation has a page with a lot on academic honesty. Like I teach a
course where the orientation assignment that they had to do was to go and find the
academic honesty policy in the syllabus and paste it in, and submit that
assignment. The students did that. And then it comes back to now–OK, I think
they know. In this one class I caught four people cheating, even after submitting
the assignment that said find that academic policy and show that you’ve read it by
submitting it. So I think it is also a question of knowing really what it is because it
is kind of broad–that policy statement. Does that tell the students enough? I have
a suspicion that academic honesty is not really a priority for the K-12 system.
Their mindset is set at that level and when they come to the college they think
they can just continue with that.
Participant 7 said:
I would say that awareness and compliance are two vastly different issues. And to
that point, two years ago, I was required by my college to do a culture project. I
teach Spanish. And I gave them very very specific instructions especially
concerning not stealing photographs that were copyright-protected. They were
given really really really detailed instructions about don’t do this, look for
creative commons images that give you permissions that allow you with
attributions. I would say that out of 90 students between my four classes that
semester, I had to no-credit at least 20 of them for violating that policy.
Participant 8 said:
Again I feel that there is a policy, from my understanding, since I’ve been
teaching online–there is a hyperlink on the syllabus. In my syllabus quiz I have a
question about academic honesty, plus it is adequate in terms of notice. But are
the students reading it? Possibly not. I also feel that many students, especially in
teaching History, they may have had the 1101 class where they are introduced to
the idea of academic honesty. I just think that they try to see what they can get
away with.
They seem genuinely shocked to get caught when they are confronted.
Focus group participants all rated student support of the policies against cheating
either very low or low. They also rated faculty support for the policies mostly low
(n = 3), yet some rated it high (n = 2) or very high (n = 1). Accordingly, the
effectiveness of the policies was also rated low (n = 4) by most, and only one rated it
high. Reasons given for the low effectiveness ratings included “There is uncertain
administrative support. Let’s be real: it is a lot of work” (Participant 2), and Participant 3
said:
There is all of the hoops to jump through once you catch a student, even when it is
red-handed. All of the paperwork, and then the back and the forth and then the
meeting and all of that stuff and how you’re gonna prove it. Even in a face-to-face
class where the student … if you catch a student with a cell phone with pictures
and all that stuff. What do you do at that point? Do you get that cell phone? How
will you prove now what the student had on the cell phone and all that stuff? So
that’s the problem there. So I think from this point it is prevention–from the
faculty standpoint: for example, giving multiple tests, organization. That leads to
the effectiveness of these policies. Of course we want a fair process for the
students, but at the same time, does it become a burden for the faculty?
Focus group members were asked if they had ever ignored a suspected incident of
cheating in one of their courses for any reason. While one stated that they had, the rest
(n = 7) indicated that they took action, such as failing the student for the test. The one
who had ignored it explained that he was new to the college at the time and did not
know how to proceed.
When asked how strongly focus group members agreed or disagreed that faculty
members are vigilant in discovering and reporting suspected cases of academic
dishonesty in their online classes, one was unsure, while two said that they varied
between unsure and agree. The rest (n = 4) agreed. One remark was that there is likely a
difference between part-time and full-time faculty, with part-timers being less likely to
be vigilant. Another member was “not sure especially with regards with the vigilance
just because I hear too often.”
A Pearson correlation was performed on “The average student’s understanding
of the college’s policies concerning cheating” and “Student support of these policies.”
The correlation between the two items is statistically significant, r = 0.41, p < .001 (see
Table 32). These results indicate that the average student’s understanding of the
college’s policies concerning cheating has a moderate positive correlation with student
support of these policies.
The correlation between the “Student support of these policies” and “Faculty
support of these policies,” r = 0.60, p < .001, is statistically significant (see Table 32).
According to these results, there is a moderate positive correlation between the students’
and faculty’s support for the policies concerning cheating in online classes.
The correlation between “Faculty support of the college’s policies concerning
cheating” and “The average student’s understanding of the college’s policies concerning
cheating in online classes” was also statistically significant, r = 0.53, p < .001 (see Table
32). These results indicate that the average faculty’s as well as the average student’s
support of the college’s policies concerning cheating has a moderate positive
relationship with students’ understanding of those policies.
Table 32
Correlations N r p
1b. The average student’s 100 0.41 <.001
understanding of the college’s policies
concerning cheating in online classes
vs. 1c. Student support of these policies
1b. The average student’s 116 0.53 <.001
understanding of the college’s policies
concerning cheating in online classes
vs. 1d. Faculty support of these policies
1c. Student support of these policies vs. 96 0.60 <.001
1d. Faculty support of these policies
Question 13, "Cheating is a serious problem at this institution," was tested for
correlation with "Faculty members are vigilant in discovering and reporting suspected
Table 33
Pearson Correlations: Cheating is a Serious Problem Versus Faculty are Vigilant in
Reporting
Correlation N r p
13a. Cheating in online classes is a serious 129 0.01 <.001
problem at this institution vs. 13d Faculty
members are vigilant in discovering and
reporting suspected cases of academic
dishonesty in their online classes
The researcher tested whether a correlation exists between the faculty’s number of
years of teaching at the college level (Question 16) and the type of reaction to evidence of
cheating (Question 6). The correlation between the faculty’s years of teaching and the
respondent’s type of reaction to the evidence of cheating was weak when all the
reactions were totaled, r = 0.25 (see Table 34).
Table 34
Correlation N r p
16. How many years have you been 68 0.25 <.001
teaching at the college level vs. (q6) Actions
Total
The researcher tested whether a relationship exists between the faculty’s gender
(Question 17) and the type of reaction to evidence of cheating (Question 6). The
relationship between the faculty’s gender and the respondent’s type of reaction to the
evidence of cheating was weak for any type of response (Table 35). Cross-tabulations
showed that female faculty were more likely than male faculty to reprimand the student,
by 10 percentage points, and were twice as likely to lower the student’s grade or fail the
student for the course. The largest difference, 16 percentage points, was in female
faculty’s being more likely than male faculty to fail the student for the test or
assignment. Chi-square analyses were used to determine whether faculty’s gender is
associated with their response to cheating in the areas which showed a noticeable
difference between the male and female responses. No significant associations were
found. Table 35 nonetheless shows a trend of female respondents being somewhat more
punitive in their responses to cheating than males.
Table 35
                              Male          Female       Pearson
Response to cheating          (n = 28)      (n = 41)     chi-square
                              % Yes         % Yes
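The gender-by-reaction analysis can be illustrated with a brief cross-tabulation sketch in
pandas/scipy rather than PASW/SPSS; the miniature data frame below is a hypothetical
placeholder, not the study’s responses.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical respondents: gender and whether they would fail the student for the test
    df = pd.DataFrame({
        "gender":        ["M", "F", "F", "M", "F", "M", "F", "F", "M", "F"],
        "fail_for_test": ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"],
    })

    crosstab = pd.crosstab(df["gender"], df["fail_for_test"])
    print(crosstab)  # counts of each reaction by gender

    chi2, p, dof, expected = chi2_contingency(crosstab)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # tests association between gender and reaction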
The researcher tested whether the faculty’s teaching discipline is associated with the
type of reaction to evidence of cheating; no significant relationships were found (see
Table 36). These results show that faculty’s teaching discipline is not interrelated with
their reaction to evidence of cheating. The respondents from the social and behavioral
sciences had the strongest reaction to cheating; overall, their reaction is higher than in
other disciplines. In Tables 36 and 37, the reactions are broken down by discipline.
Table 36
Note. If you were convinced, even after discussion with the student, that a student had cheated on a major test or
assignment in your online course, what would be your most likely reaction? *6a—Reprimand or warn the student; 6b—
Lower the student’s grade; 6c—Fail the student for the test assignment; 6d—Fail the student for the course; 6e—
Require student to retake test/redo assignment.
Table 37
Total 14 9 2 3
Note. 6f—Report student to the Dean of Students; 6g—Report student to your Chair/Director or Dean; 6h—Do nothing
about the incident; 6i—Other; *Total of percentages exceeds 100% indicating that respondents in some cases selected
multiple responses.
Chapter Summary
The findings of the research were presented in this chapter. The participants’
survey responses were analyzed with descriptive statistics and frequency distributions
and compared to the qualitative responses from the focus group members.
The perceptions of cheating at the respective institutions varied, with the majority of
faculty either unsure about or disagreeing that cheating is a serious problem at their
institution. Faculty mostly indicated that they had not personally witnessed students
obtaining answers to online tests or copying answers from another student, and they
were unsure whether dishonesty is a problem at their institution, but they strongly
believed copying information from the Internet without proper citation (plagiarism) to
be the most common form of dishonesty. Holding students responsible for monitoring
the academic integrity of their peers was identified by faculty as a factor that could
influence cheating, but focus group members expressed concern regarding students in
this role, questioning whether it is a fair burden.
Strategies that faculty used to reduce cheating included preventative strategies, like
providing integrity policy information in the syllabus and using plagiarism detection
software, and reactive strategies, like failing the student for the test or assignment.
Proctored testing on or off campus was also commonly selected. Respondents indicated
that the at-home webcam proctor was not widely used, nor was it selected by many as a
feasible tool, as the cost for students seeking that option was said to be high; faculty
indicated that they would be more likely to utilize it if the cost for each use were
reduced.
Most faculty knew about the institution’s policies on academic integrity from reading
the college handbook, for example, but their reaction to cheating was not always in line
with the institution’s policy, manifested by about 30% confessing that they had ignored
a suspected incident of cheating. Failing the student for the test or assignment or
lowering the student’s grade was the widely practiced reaction, while reporting the
student to the Dean or Chair was less common. Most often, faculty ignored cheating
because they lacked proof that it took place. Desired support to help lower cheating
included on-campus proctored exams and the at-home webcam computer proctor.
The degree to which instructional college faculty perceived the acceptance of the
use of institutional measures to prevent online cheating depended on the level of
support. Students were perceived as having little understanding of the policies, which
resulted in low support of them. Faculty were highly supportive of the policies and
perceived them as effective, but they were mostly unsure about the effectiveness of the
student judicial process, as they had not seen data related to this effectiveness.
Neither gender, discipline, nor the number of years faculty taught at the college
level seemed to have a significant relationship with the punishment in general, or the type
of punishment faculty used to reprimand students for cheating. There was a slight
indication of females in this study being more punitive compared to males. The same
seemed true for faculty from the social and behavioral sciences. Chapter 5 will provide a
discussion of the findings, along with conclusions and recommendations.
Chapter 5: Discussion
The purpose of this study was to provide an inquiry into the phenomenon of
cheating in online courses. This mixed-method study on cheating in online classes at the
college level was conducted as an inquiry into the problem of dishonesty from the
perspective of faculty. The findings of the study were presented in Chapter 4, where the
data of the survey portion of the research, as well as the information obtained from the
focus group meeting, were organized by each of the six research questions that were the
focus of the study.
There are many studies that address the problem of cheating in online classes
(e.g., Bedford et al., 2011; Brent & Atkisson, 2011; Chapman et al., 2004; Correa, 2011;
Devlin & Gray, 2007; Hudd et al., 2009), and pressure by the Federal Government and
accrediting associations to monitor online students has increased. Nevertheless, research
has shown that the perception about cheating is still ambiguous, which results in reduced
effort to implement strategies for reduction (Pincus & Schmelkin, 2003). Moreover,
there is some evidence of a gap between students’ and faculty’s perceptions of what
constitutes cheating (McCabe, Butterfield, & Trevino, 2012). As indicated by Pincus and
Schmelkin (2003), faculty do not always view academic honesty as a simple dichotomy.
Rather,
they found that faculty often view dishonesty on a continuum that ranks forms of
dishonesty on different levels based on their perceived level of severity. The findings of
this study were consistent with the notion of a continuum, as faculty rated paraphrasing
or copying a few sentences from a book without proper footnoting as a much lower case
of dishonesty than copying from another student during an online test with his or her
knowledge.
The findings were organized by the following research questions:
1. To what degree do online faculty believe that cheating occurs in their online
classes?
2. How do online faculty judge the seriousness of online cheating and how well
do they think their college deals with it?
3. What strategies are used by instructional college faculty to protect academic
integrity?
4. To what extent do instructional college faculty follow the institution’s code of
conduct in response to academic dishonesty?
5. What types of support do instructional college faculty desire to help lower
online cheating?
6. To what degree do instructional college faculty perceive the acceptance of the
use of institutional measures to prevent online cheating?
Five hundred and eighty-eight online faculty from three Florida community
colleges were invited to partake in the study. The initial invitation and two reminders
were sent via email by a liaison from the department of Instructional Technology at
each institution. The survey was completed by 131 online faculty (22%). The AIS was
modified with permission from McCabe, who developed the survey. Participants were
also asked to sign up for a one-hour focus group meeting, which addressed the same
questions. Eight volunteers were selected to attend the meeting. The purpose of the
focus group meeting was to obtain an in-depth view from the participants.
Summary of Findings
The sample for the quantitative part of the study consisted of 51 males (39%), 79
females (61%), and one member who did not disclose their gender. Cross-tabulations
showed that there is no significant relationship between gender and the response to
cheating, although female faculty indicated a slightly more punitive attitude. A sample
has “strong external validity” (p. 721) when its make-up is reflective of the population.
To determine if the study’s sample is representative of the target population, the
researcher determined the gender breakdown of online instructors for the Winter
2013–2014 semester to be 374 females (61.5%) and 234 males (38.5%; L. Ciardulli,
Assistant Vice President, personal communication; D. Muirhead, Executive Assistant,
personal communication, April 12, 2014; S. Arsht, eLearning Student Success
Specialist, personal communication, April 25, 2014), a breakdown that closely mirrors
the sample. The population’s academic disciplines were grouped the same way in which
the groups were combined for the statistical analysis of this study, which resulted in 430
online instructors altogether in subject areas that matched the ones for this study. This
breakdown falls in line with the breakdown of this study, with all of the disciplines
being within a 4% difference in terms of representation, with the exception of faculty in
the business department, which had a 6.6% higher representation in the population.
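One way to formalize the gender comparison made informally above is a chi-square
goodness-of-fit test. The sketch below uses the sample counts and population
proportions reported in this study; the test itself is only an illustration and was not part
of the study’s analysis.

    from scipy.stats import chisquare

    observed = [51, 79]                    # sample gender counts: males, females
    population_props = [0.385, 0.615]      # reported population proportions
    n = sum(observed)
    expected = [p * n for p in population_props]

    stat, p = chisquare(observed, f_exp=expected)
    print(f"chi2 = {stat:.3f}, p = {p:.3f}")  # a large p suggests the sample mirrors the population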
There were no definitive trends between faculty’s teaching discipline and their reaction
to any evidence of cheating. The number of years of teaching did not indicate a
significant bearing on
their reaction to cheating, except when it came to having the student retake a major test or
redo an assignment when cheating was discovered. The results showed that the greater
the number of years of teaching experience, the more likely that faculty are to have the
student retake the test or redo the assignment. The results for each research question
will be discussed in turn.
Research Question 1. The first research question asked to what degree online
faculty believe that cheating occurs in their online classes. The majority of faculty
indicated that they believed cheating occurred often in their online classes. Studies done
with students who had to self-report
their instances of cheating support faculty’s inclination to believe that students cheat in
their classes (Harkins & Kubik, 2010; McCabe et al., 2012). The perception of cheating is
based on speculation, except for plagiarism that involves copying lines without citations.
This explains why the highest percentage of faculty (41.9%) expressed uncertainty about
cheating being a serious problem at their institution. This trend could be attributed to
cheating being a less noticeable problem in the online environment, because online
faculty cannot physically observe their students.
Focus group discussion revealed that many of the different types of cheating
cannot physically be witnessed by the instructor, due to the mode of delivery. The
members noted that such cheating is difficult to prove beyond a
reasonable doubt, but that easy access to electronic materials makes it more likely for
students to try. This includes the use of multiple electronic devices while taking exams:
one device has the exam open, while the other device is used to look up answers. Another
method used for cheating that was discussed by focus group members was plagiarism
when submitting discussion posts, as the discussion feature does not have the plagiarism
detection software. Hacking into accounts was also cited as a common way to cheat, as obtaining username and password information from other students seems rather easy. Turning in papers from a "paper mill" was not widely noted as a common way to cheat. Overall, respondents remained uncertain whether cheating is a serious problem at their institution. The uncertainty about the
existence of cheating likely affects the faculty’s reaction to cheating. Focus group
members argued that their answers were based on guesses, as they did not see any data
from their college that provided factual information. A weak relationship exists between
“cheating is a serious problem at this institution” and “faculty members are vigilant in
discovering and reporting suspected cases of academic dishonesty.” This may indicate
that published institutional data regarding cheating would likely encourage faculty to be more vigilant in discovering and reporting it.
Another factor that may influence cheating is the perception of the instructors
about the seriousness of cheating. More than 89% of instructors indicated that turning in
a paper from a paper mill or turning in work done by someone else is considered serious
cheating. A few forms of cheating were seen as trivial to moderate, such as paraphrasing or copying a few sentences without proper footnoting. When faculty's perceptions of seriousness differ, their reactions to each type of cheating may also vary. The focus group discussion addressed this issue, where members
mentioned that students often test the boundaries to see how much they can get away
with. This understanding echoes Correa’s (2011) conclusions that students learn about the
culture of academic integrity at their institution and that, if faculty do not take their role in enforcing that culture seriously, students will continue to push the boundaries.
Participants of the survey study and focus group members differed in their rating
of peer influence. Survey study participants mostly agreed that students in online classes
should be responsible for the integrity of other students, while focus group participants
mostly disagreed because they felt that it should not be the students’ task to police other
students. McCabe and Trevino (1997) argued that peer reporting can be highly effective
since peers are more likely to find out from one another that someone has cheated. In
turn, stated McCabe and Trevino, the threat of its being reported may be enough to keep
students from cheating at all. Their study revealed that students were mostly affected by
the disapproval or potential negative reaction of their peers. McCabe and Trevino (1997)
therefore recommended that institutions that are serious about combating cheating look closely at ways to create a peer culture in which cheating is unacceptable.
The last factor that may influence cheating is the subject discipline of the faculty member. The small pool of respondents in each discipline makes generalizing difficult. However, there were observed differences worth noting: based on the selection of reactions that were offered, social and behavioral science respondents had the strongest reaction to cheating, compared to the other disciplines. Only two respondents indicated that they would do nothing, even when they were convinced that a student had cheated. This shows that most faculty in the study are inclined to take action once they have evidence of cheating, but that factors such as bureaucratic barriers, lack of time, or a limited sense of personal responsibility may deter some from taking any action at all.
Faculty reactions to discovered cheating varied, and the results indicated that almost all faculty (n = 60), with the exception of two, would take action. Failing the student for the test or assignment was the most common reaction. Correa (2011) claimed that enforcement of integrity policy helps to increase the institution's credibility, but, as his study showed, instructors would rather handle issues of dishonesty on their own than follow the policy, which may include referring the student to the chair, director, or dean of students. Focus group members for this research study stated that there may also be a difference in understanding of the policies between part-time and full-time faculty. Hudd et al. (2009) mentioned that part-time faculty's understanding of cheating differs and that their strategies to combat cheating will differ as a result.
The survey inquired about the strategies used by faculty, and most stated that they provide their students with information about dishonesty and change their exams regularly. Other strategies cited to prevent cheating include, but are not limited to, handing out different exam versions and using on-campus proctored testing. There was uncertainty among respondents about the different safeguards that are available. The focus group conversation revealed misunderstanding of how some safeguards work. Additionally, respondents indicated a lack of trust in some of the technology. Some Learning Management Systems, for example, do not offer plagiarism detection software for the discussion feature, while the software is available for assignments. As a result, faculty may not be able to utilize the software even when they are familiar with it. The cost of off-campus proctored testing and webcam-proctored exams was mentioned as a deterrent.
Suggested safeguards. Three focus group members indicated that they no longer give exams or no longer base the students' grades on the results of high-stakes exams. Safeguards that were mentioned by other focus group members included providing information about dishonesty and changing assessments regularly. In the literature, there are different safeguards to protect online course integrity; each is discussed below in light of this study's findings:
1. Faculty should establish rapport with their students so they can recognize patterns of cheating when it occurs (Moten et al., 2013). One of the focus group members no longer gives tests, but gives assignments instead, with the goal of building rapport with the students. Survey respondents indicated a preference for having conversations with their students to discuss honesty, integrity, and the students' obligations.
2. Faculty should use multiple versions of exams (Moten et al., 2013). More than 35% of survey respondents indicated that they already use multiple versions of exams, and focus group members mentioned doing the same. One respondent suggested that each student receive a different version of the exam.
3. Students should be required to sign an academic dishonesty statement (Moten et al., 2013), and the college should add the academic integrity policy to the syllabus (Jones, 2011). Focus group members noted that this feature is currently available at their institution and that their syllabi often include statements about academic integrity. Perhaps requiring the students to sign the dishonesty statement separately would reduce cheating. Since 73.3% of survey respondents indicated that they provide information regarding dishonesty in their syllabus, they could include the signed statement there as well.
4. Faculty should make use of proctored exams (Harkins & Kubik, 2010; Lieber, 2012; Moten et al., 2013). When off-campus exams are administered, faculty should utilize reputable testing centers like the NCTA (Baron & Crooks, 2005). While more than a third of survey respondents utilize on-campus testing centers, only 15.9% indicated that they use off-campus centers.
5. The instructor can be added to the class roster under a fictitious name (Moten et al., 2013). This option was not discussed among focus group members, nor was there a survey question addressing it.
6. Faculty should provide clear guidelines about what constitutes cheating and what collaboration is permitted (Moten et al., 2013; Harkins & Kubik, 2010). The survey results demonstrated that faculty are not in agreement on what constitutes cheating. Clarification of the guidelines should clear up misunderstandings for faculty and students alike.
7. Faculty should develop a clear honor code and enforce it (Patnaude, 2008). The development of an honor code was not addressed in the survey. It was clear that faculty had different ideas on how they should deal with cheating, and enforcement has been inconsistent. Additionally, it was mentioned during the focus group meeting that enforcement varies between departments.
8. Faculty should design assessments that build on students' interest and enthusiasm (Kohn, 1999). A survey respondent offered the related suggestion of using higher-order thinking questions on exams.
9. Faculty should utilize positive peer pressure (McCabe et al., 2012; Sendag et al., 2012). This option was not discussed by the focus group members, nor was there a survey question addressing it.
10. Faculty should commit to combating dishonesty and following through with the institutional guidelines (Correa, 2011; Thakkar, 2012; Thomas & De Bruin, 2012). Survey respondents and focus group members expressed uncertainty about their institutions' follow-through on reported cases.
11. Faculty should penalize cheating consistently (Thomas & De Bruin, 2012). Most survey respondents failed the student for the test or assignment they cheated on, but the responses were very inconsistent, and a few respondents admitted to ignoring cheating altogether.
12. The college should institute a required orientation module that covers academic integrity (Williams et al., 2012). Focus group members noted that such an orientation is already required in their courses. The survey did not address this option.
13. Faculty should use webcams (Cole & Swartz, 2013) or other remote monitoring devices, such as SeCOnE (Jung & Yeom, 2009). Twenty-five percent of survey respondents expressed an interest in the webcam option, while some faculty indicated that they already use it. Others expressed concern about the cost associated with webcam proctoring.
14. Faculty should require an increased number of written assignments (Cole & Swartz, 2013). One focus group member identified written assignments as the preferred method of assessment.
15. Faculty should use the screen-lock option to prevent the student from minimizing the screen from its full-screen mode while taking an exam (Cole & Swartz, 2013).
16. Faculty should use plagiarism detection software, such as Turnitin (Baron & Crooks, 2005; Heckler et al., 2013; Jones, 2011; Moten et al., 2013; Patel et al., 2011; Simonson et al., 2012). Almost 60% of survey respondents indicated that they already use such software, and almost 40% indicated their desire to use it. During their discussion, focus group members shared that the software is very effective, but they expressed concern that in some Learning Management Systems the software is not available for discussions, only for assignments. Survey respondents expressed a desire for access to this software in discussions as well.
17. Faculty should use Google to search for exact sentence copies (Baron & Crooks, 2005; Farnsworth & Bevis, 2006). Although this method was not specifically addressed in the survey, one focus group member spoke about its effectiveness.
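As a rough illustration of this safeguard, the suspect sentence is wrapped in quotation marks so the search engine looks for verbatim copies. The sketch below assumes Python's standard library; the sentence shown is a hypothetical example, not text from the study.

from urllib.parse import quote_plus

def exact_phrase_search_url(sentence: str) -> str:
    # Quoting the sentence requests exact copies rather than loose matches.
    return "https://www.google.com/search?q=" + quote_plus(f'"{sentence.strip()}"')

suspect = "The mitochondrion is the powerhouse of the cell."  # hypothetical sentence
print(exact_phrase_search_url(suspect))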
The research about safeguards offered additional options, which were not part of this
study. Future research in this area could focus on these methods and evaluate their
effectiveness:
2. Faculty should use Skype or other synchronous tools for oral examinations.
3. Faculty should compare the students' writing to other writing they submitted via email or discussions (Davis et al., 2009; Farnsworth & Bevis, 2006), as in the sketch following this list.
5. Faculty should look out for tricks, like transparent dots placed in the text of submitted work.
6. Faculty should use portfolios to establish a writing baseline (Baron & Crooks, 2005).
7. Faculty should increase teacher-student and student-student interaction (Baron & Crooks, 2005; Prince et al., 2009).
8. Faculty should include students in assignment design and topic design for written work.
11. Faculty should require students to use tutors, as their relationship might deter cheating.
12. Faculty should use biometrics to verify students' identities (Baron & Crooks, 2005).
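For item 3, one simple way to approximate a writing comparison is to score the overlap between a new submission and a student's earlier posts or emails. The sketch below is a minimal illustration in Python using cosine similarity over word counts; it is one of many possible approaches and is not drawn from the cited studies.

import math
import re
from collections import Counter

def word_counts(text: str) -> Counter:
    # Lowercase word frequencies serve as a crude stylistic fingerprint.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

earlier_writing = "I think the main argument is weak because the data is old."  # hypothetical
new_submission = "Heretofore, the preponderance of extant evidence militates against it."  # hypothetical
score = cosine_similarity(word_counts(earlier_writing), word_counts(new_submission))
print(f"similarity = {score:.2f}")  # a low score suggests a different authorial voice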
Institutional code of conduct. Faculty in the study were made aware of their institutional integrity policy via different avenues. Each institution's code of conduct highlights the steps faculty must take in case of a breach, which include referral to the dean of students (Broward College, n.d.-b; Palm Beach State College, 2013b). The policy at one of the three institutions requires that faculty members determine the extent of the breach (Santa Fe College, n.d.-b). The sources selected by the majority of respondents with respect to cheating policy were the faculty handbook (61.8%) and the college's orientation program (41.2%). Focus group members mentioned that part-time faculty may not fully understand their role, as they are only on campus briefly to teach their classes. They may not have been given detailed information regarding what cheating is and how they are required to follow up, should cheating be detected. The discussion also revealed that some part-time faculty may work at multiple institutions, each with its own policy. This may lead to further confusion about how incidents of cheating should be dealt with. Hudd et al. (2009) showed that the difference in perception of what cheating entails is an issue that should be addressed. Their study confirmed the perception of focus group members regarding the lack of understanding of policies and enforcement, particularly among part-time faculty.
The main reason given for ignoring suspected cheating, indicated by 84.2% of survey respondents (n = 32), was lack of proof. The focus group members also discussed their reasons for ignoring cheating when it occurred, citing lack of proof as the main reason why they failed to follow up. Thomas and De Bruin (2012) wrote about the lack of proof and heavy workload as reasons why faculty fail to follow up on cheating. The bureaucratic burden was also cited as an issue that hinders the enforcement of the school's policy. Nonetheless, the chi-square analyses of the questions regarding additional safeguards against cheating revealed that faculty either (a) do not have the safeguards available, (b) are unaware that the safeguards are already available through their institution, (c) do not use some of the available safeguards because they are unaware or unsure of how they can deter or detect dishonesty, or (d) lack the commitment or desire to use them.
The survey respondents were asked which additional safeguards they would employ if they were available. The answers in rank order, starting with the most desired safeguard, were (1) plagiarism detection software, like Turnitin.com, the most widely selected choice (50%); (2) password-protected exams (35.6%); (3) secure exam browser lockdown (32.7%); and (4) at-home webcam computer proctoring (31.7%).
Other safeguards mentioned by faculty were (1) a different version of the test for each student; (2) a set time frame for completion, limiting the time available to cheat once the test has started; (3) changing the test or generating random test questions; (4) a large database of questions; (5) structuring the class in ways that reduce the opportunity to cheat; (6) multiple, smaller assignments that ask for written explanations; and (7) exams that require higher-order thinking and application rather than recall of information. A sketch of how several of these ideas can be combined follows below.
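Several of these ideas, such as a different test version for each student, random question generation, and a large database of questions, can be combined mechanically. The sketch below is a minimal illustration in Python with a hypothetical question bank, not a feature of any particular Learning Management System.

import random

def build_exam(question_bank, student_id, num_questions):
    # Seeding the generator with the student ID makes each student's
    # version different but reproducible for later review.
    rng = random.Random(student_id)
    return rng.sample(question_bank, num_questions)

bank = [f"Question {i}" for i in range(1, 201)]  # hypothetical 200-item bank
print(build_exam(bank, student_id="N00412345", num_questions=10))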
Focus group members added that off-site proctored testing and webcam-proctored testing are desirable methods, but the cost of use is deemed too high and deters faculty from using those options. Their desire was to see the cost lowered.
Faculty were asked, "To what degree do instructional college faculty perceive the acceptance of institutional integrity policies?" Respondents rated faculty's support of institutional policies with a mode of 5 (very high) and a median of 4 (high). One indicator that the policy is accepted is that faculty widely publish the integrity policy in their syllabi. Another indicator of acceptance is enforcement: taking action when a student is caught cheating. While the action taken by faculty varies, respondents indicated that their actions included giving the student a failing grade for the exam or assignment. Institutions that have an institutional policy in place are likely to include the steps to follow once cheating is detected. Focus group members were not confident about the handling of cases that were referred to the
dean. Pincus and Schmelkin (2003) stressed the importance of clarity of institutional
policies and steps required by faculty. When faculty feel that they lack support from
administration, they will be less likely to take enforcement seriously (Correa, 2011).
Conclusions
Institutions that offer distance education are required to verify that the students who enroll are the ones who are indeed doing the required work (Higher Education Opportunity Act, 2008). Accrediting bodies have responded with specific language about dishonesty online. The policy statement of the SACS, one of the
accrediting bodies used in Florida, provided guidelines in this regard, which include the
use of proctored environments for examinations and verification of the students’ identity
(SACS, 2010). This research study sought to find out how online faculty perceive the incidence of cheating and to what extent they take action when cheating is detected. The evidence on whether cheating is more common in the online environment than face-to-face is inconclusive (e.g., Grijalva et al., 2010; Klor de Alva, 2011; Krsak, 2007; Watson &
Sottile, 2010). Cheating online is an ongoing problem, however, and institutions often
have integrity policies in place, which provide guidelines on how to proceed once
cheating is detected. Participants in this study indicated that the faculty handbook is
commonly where they find out about such guidelines. The problem is that not everyone is
aware of the guidelines and there are variations between departments on enforcement of
institutional policies. The research study showed that when there is evidence of cheating,
most faculty fail the student for the particular exam or assignment. Cheating is sometimes
ignored because of bureaucratic red tape or the time it takes to follow through with the
institutional procedures.
Plagiarism was identified as the type of cheating that is most commonly detected by respondents. There are many safeguards available to protect course integrity, and plagiarism detection software is built into some Learning Management Systems. The software is not widely used by respondents in this study, which impedes the efforts of the faculty. There appears to be a lack of knowledge by faculty about the safeguards that are available and their functionality. Lastly, part-time faculty may be less familiar with the policies and safeguards than their full-time colleagues.
On-campus proctored testing environments are utilized more frequently than off-campus testing centers or webcam proctoring, although their use is limited. Faculty recognize the additional protection proctoring offers, but they have not shown commitment to its use. Moreover, some have expressed concern about the additional cost the student has to carry. Other faculty no longer base their grades on high-stakes exams or no longer give exams at all.
Implications
This mixed-method study confirmed that online students cheat and that many faculty lack the resources and commitment to actively combat cheating. Based on the results of this study, clarification of the institutional integrity policy may clear up confusion for full-time and part-time faculty. Increased administrative efforts may also help to shift the direction, and these efforts could include professional development opportunities to teach faculty about the use and availability of safeguards.
These united efforts by administration and faculty may help to decrease the level of
dishonesty, thereby avoiding scrutiny from the accrediting bodies. The reputation of the
institutions will likely improve when it becomes widely known that the institution has
high standards and expectations and is serious about the integrity of its courses.
Limitations
1. The study was conducted at community colleges, and the results may differ from what they would be if it were conducted at a university. Faculty at these institutions differ, for
example, in their contractual obligations and their salaries, which may be linked to their
level of commitment. The student population they work with differs not only in size but also in its characteristics.
2. The researcher was limited by the required protocol in regard to reaching out
to the faculty. The participants were contacted by administrators from the online learning departments, which may have influenced the decision to participate.
3. Faculty may not feel supported by administrators due to, for example, tensions among faculty, administrators, unions, and boards. The requests to
participate in the survey were sent out by administrative liaisons who may have elicited
suspicion or apathy.
4. Faculty may have participated in other surveys and may have felt a sense of
survey overload.
5. The survey required a 20- to 30-minute time commitment, which may have deterred some invitees. Making the questions shorter and more concise and eliminating some questions would help reduce the completion time. For example, the question about where paraphrased information was accessed could be eliminated, as it did not provide critical information. The question regarding what constitutes cheating could be presented as a single question, thereby allowing the respondent to read each behavior only once.
6. The invitation letter was lengthy as it followed the required template and
contained required IRB approval forms. This method was not in line with Sue and
Ritter's (2007) suggestion to keep invitation letters short and inviting. Participants were offered an incentive for participation, but the incentive may have gone unnoticed, as it was only mentioned within the lengthy participation letter. Sue and Ritter (2007) suggested the use of a
flashing banner which would focus the readers’ attention immediately and increase
interest.
7. Respondents were solicited on a voluntary basis, and it is uncertain whether the conclusions can be generalized to the rest of the population.
8. The low response rate resulted in a small sample size, which may have
influenced the trends. Donmoyer (2008) asserted that online surveys have unique
challenges, which may result in problems with generalizability and, in turn, problems
with reliability due to low response. In some instances, it was not possible to find trends or draw conclusions because certain questions pertained only to those respondents whose answer led them to a follow-up question, thereby shrinking the pool of respondents.
9. The survey was a modified version of the original AIS, so the reliability data could not be confirmed as being the same for both versions. The researcher might have improved the quality of the data analysis by testing the survey for reliability with a selected group of volunteer college instructors, who would be excluded from the actual study, and then administering it again a month later to measure the degree of consistency.
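Such a test-retest check is typically scored by correlating the two administrations. The sketch below, assuming Python with the SciPy library and hypothetical responses coded 1 to 5, illustrates the computation; a coefficient near 1.0 would indicate consistent answers over the month.

from scipy.stats import pearsonr

# Hypothetical responses from the same ten volunteers, one month apart.
first_round = [4, 5, 3, 4, 2, 5, 3, 4, 4, 1]
second_round = [4, 4, 3, 5, 2, 5, 3, 4, 3, 1]

r, p_value = pearsonr(first_round, second_round)
print(f"test-retest r = {r:.2f} (p = {p_value:.3f})")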
10. As suggested by Fowler (2009), respondents may have been concerned about the level of anonymity due to the nature of some of the survey questions, which may have affected the candor of their answers.
11. The results of a study conducted in Florida may differ from the results of a study conducted in another region.
12. Because the survey questions were delivered via Google Forms, an online survey delivery program, participants did not have an opportunity to ask questions, which may have led to misinterpretation of the items on the survey and perhaps bias in their responses.
13. There may be a potential for bias on the part of the researcher, who is a faculty member at one of the schools that was used for the survey. Fowler (2009) noted that such researcher involvement can influence responses.
14. Due to a technical glitch, some initial responses were not properly recorded (see Appendix C).
Future Research
Future research could focus on the safeguards identified in this study, testing them and collecting longitudinal data on their impact. The implications of cheating in the online environment span different areas, such as the credibility of the institution and the integrity of its courses. Studying the enforcement of institutional policies will help determine whether its impact on cheating is favorable. The following data should be collected and analyzed: the distribution of such policies, the clarity of required steps, and the implication on faculty. Future studies could also examine whether the demographics of faculty and students play a role. Other demographic differences, such as the number of years teaching in higher education, will help determine whether faculty tenure impacts the response to cheating. Additional safeguards were discussed in this study, such as the use of synchronous online class sessions, critical thinking activities and exams, limits on exam times, and comparison of writing samples. A future inquiry into the effectiveness of those safeguards may give faculty a more complete set of tools. Repeating the study with a larger population would increase the size of the sample and boost its representativeness and generalizability. Ultimately, a better understanding of online cheating should help decrease the level of skepticism about the authenticity of those courses.
References
Baron, J., & Crooks, S. M. (2005). Academic integrity in web based distance education. TechTrends, 49(2), 40. doi:10.1007/BF02773970
Bedford, D. W., Gregg, J. R., & Clinton, M. S. (2011). Preventing online cheating with
technology: A pilot study of remote proctor and an update of its use. Journal of
Higher Education Theory and Practice, 11(2), 41–58.
Black, E., Greaser, J., & Dawson, K. (2008). Academic dishonesty in traditional and
online classrooms: Does the “media equation” hold true? Journal of Asynchronous
Learning Networks, 12(3–4), 23–30.
Boehm, P., Justice, M., & Weeks, S. (2009). Promoting academic integrity in higher
education. The Community College Enterprise, 15(1), 45–61.
Brent, E., & Atkisson, C. (2011). Accounting for cheating: An evolving theory and
emergent themes. Research in Higher Education, 52, 640–658.
doi:10.1007/s11162-010-9212-1
Brown, B. S., Weible, R. J., & Olmosk, K.E. (2010). Business school deans on student
academic dishonesty: A survey. College Student Journal, 44(2), 299–308.
Chapman, K. J., Davis, R., Toy, D., & Wright, L. (2004). Academic integrity in the
business school environment: I'll get by with a little help from my friends.
Journal of Marketing Education, 26(3), 236–249.
doi:10.1177/0273475304268779
Cole, M. T., & Swartz, L. B. (2013, February). Understanding academic integrity in the
online learning environment: A survey of graduate and undergraduate business
students. Paper presented at the ASBBS Annual Conference, Las Vegas, NV.
Davis, S. F., Drinan, P., & Gallant, T. B. (2009). Cheating in school: What we know and
what we can do. Malden, MA: Wiley-Blackwell.
Devlin, M., & Gray, K. (2007). In their own words: A qualitative study of the reasons
Australian university students plagiarize. Higher Education Research and
Development, 26(2), 181–198. doi:10.1080/07294360701310805
DuPree, D., & Sattler, S. (2010). McCabeʼs Academic Integrity Survey Report 2010.
Retrieved from Texas Tech University Ethics Center website:
www.depts.ttu.edu/provost/qep/docs/McCabe_Academic_Integrity_Report_Cover
.pdf
Farnsworth, K., & Bevis, T. B. (2006). A fieldbook for community college online
instructors. Washington, DC: Community College Press.
Fowler, F. J. (2009). Survey research methods (4th ed.). Thousand Oaks, CA: SAGE
Publications.
Grijalva, T. C., Nowell, C., & Kerkvliet, J. (2010). Academic honesty and online courses.
College Student Journal, 40(1), 180.
Gross, E. R. (2011). Clashing values: Contemporary views about cheating and plagiarism
compared to traditional beliefs and practices. Education, 132(2), 435–440.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: John Wiley and Sons.
Guernsey, L. (2001, April 26). For those who would click and cheat. New York Times.
Retrieved from www.nytimes.com
Harkins, A. M., & Kubik, G. H. (2010). Ethical cheating in formal education. On The
Horizon, 18(2), 134–146. doi:10.1108/10748121011050487
Hart, L., & Morgan, L. (2010). Academic integrity in an online registered nurse to
baccalaureate in nursing program. Journal of Continuing Education in Nursing,
41(11), 498–505. doi:10.3928/00220124-20100701-03
Heckler, N. C., Rice, M., & Hobson Bryan, C. (2013). Turnitin systems: A deterrent to
plagiarism in college classrooms. Journal of Research on Technology in
Education, 45(3), 229–248.
Hollinger, R. C., & Lanza-Kaduce, L. (2006). Academic dishonesty and the perceived
effectiveness of countermeasures: An empirical survey of cheating at a major
public university. NASPA Journal, 33, 292–306.
Hudd, S. S., Apgar, C., Bronson, E. F., & Lee, R. G. (2009). Creating a campus culture of
integrity: Comparing the perspectives of full- and part-time faculty. Journal of
Higher Education, 80(2), 146–177. doi:10.1353/jhe.0.0039
Jung, I. Y., & Yeom, H. Y. (2009). Enhanced security for online exams. IEEE Transactions on Education, 52(3), 340–349. doi:10.1109/TE.2008.928909
Kaczor, B. (2007, September 26). Nearly 2 dozen Florida State athletes accused of
cheating. USA Today. Retrieved from www.usatoday.com
Kelley, K., & Bonner, K. (2005). Digital text, distance education and academic
dishonesty: Faculty and administrator perceptions and responses. Journal of Asynchronous Learning Networks, 9(1), 43–
52.
King, C. G., Guyette, R. W., & Piotrowski, C. (2009). Online exams and cheating: An
empirical analysis of business students’ views. The Journal of Educators Online,
6(1), 1–11.
Klor de Alva, J. (2011, June 19). For-profit learning is always cheaper; and other myths.
Chronicle of Higher Education. Retrieved from www.chronicle.com
Kohlberg, L. (1981a). The meaning and measurement of moral development (Vol. XIII).
Worcester, MA: Clark University Press.
Kohlberg, L. (1981b). The philosophy of moral development: Moral stages and the idea
of justice (Vol. 1). San Francisco, CA: Harper and Row Publishers.
Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, Aʼs,
praise, and other bribes. New York, NY: Houghton Mifflin Co.
Kwong, T., Ng, H., & Mark, K. (2010). Students’ and faculty’s perception of academic
integrity in Hong Kong. Campus-Wide Information Systems, 27(5), 341–355.
doi:10.1108/10650741011087766
Lessig, L. (2008). Remix: Making art and commerce thrive in the hybrid economy. New
York, NY: Penguin Press.
LoSchiavo, F., & Shatz, M. (2011). The impact of honor code on cheating in online
courses. MERLOT Journal of Online Learning and Teaching, 7(2), 179–184.
Mayhew, M. J., Hubbard, S. M., Finelli, C. J., Harding, T. S., & Carpenter, D. D. (2009).
Using structural equation modeling to validate the theory of planned behavior as a
model for predicting student cheating. Review of Higher Education, 32(4), 441–
468.
McCabe, D., Trevino, L. K., & Butterfield, K. D. (1999). Academic integrity in honor
code and non-honor code environments: A qualitative investigation. Journal of
Higher Education, 70(2), 211–234.
McCabe, D. L., Butterfield, K. D., & Trevino, L. K. (2012). Cheating in college: Why
students do it and what educators can do about it. Baltimore, MD: Johns Hopkins
University Press.
Miller, A., Shoptaugh, C., & Wooldridge, J. (2011). Reasons not to cheat, academic-integrity responsibility, and frequency of cheating. Journal of Experimental
Education, 79(2), 169–184. doi:10.1080/00220970903567830
Mirza, N., & Staples, E. (2010). Webcam as a new invigilation method: Studentsʼ
comfort and potential for cheating. Journal of Nursing Education, 49(2), 116–
119. doi:10.3928/01484834-20090916-06
Morgan, D. (2006). Focus group. In V. Jupp (Ed.), The SAGE dictionary of social research methods (pp. 122–124). London, England: SAGE Publications.
Moten, J., Fitterer, A., Brazier, E., Leonard, J., & Brown, A. (2013). Examining online
college cyber cheating methods and prevention measures. Electronic Journal of e-
Learning, 11(2), 139–146.
Multon, K., & Coleman, J. (2010). Coefficient alpha. In N. J. Salkind (Ed.), Encyclopedia
of research design (pp. 160–164). Thousand Oaks, CA: SAGE Publications, Inc.
Nitko, A. J., & Brookhart, S. M. (2011). Educational assessment (6th ed.). Boston, MA:
Pearson Education, Inc.
Palm Beach State College. (2013a). Institutional research and effectiveness. Retrieved
from www.palmbeachstate.edu/ire/documents/acadmgmt/graduates_latest.pdf
Palm Beach State College. (2013b). Palm Beach State College 2013–2014 student
handbook. Retrieved from www.palmbeachstate.edu/catalog/documents/
studenthandbook2013-14.pdf
Palm Beach State College. (n.d.). Fast facts. Retrieved from www.palmbeachstate.edu/
crm/publications/fast-facts.aspx
Parry, M. (2009). Online educators wonʼt be forced to spy on students, new rules say.
Chronicle of Higher Education, 55(39), A19.
Patel, A., Bakhtiyari, K., & Taghavi, M. (2011). Evaluation of cheating detection
methods in academic writings. Library Hi Tech, 29(4), 623–640.
doi:10.1108/07378831111189732
Patnaude, K. A. (2008). Faculty perceptions regarding the extent to which the online course environment affects academic honesty (Doctoral dissertation, University of Houston). Retrieved from ProQuest Dissertations and Theses database. (3323556)
Rodgers, J. (2012, June 8). AFA discovered cheating by comparing online, final exams.
Gazette. Retrieved from www.gazette.com
Santa Fe College. (n.d.-b). Santa Fe Community College rules manual. Retrieved from
www.dept.sfcollege.edu/rules/studentcodeofconduct.pdf
Schmelkin, L. P., Gilbert, K., Spencer, K. J., Pincus, H. S., & Silva, R. (2008). A multidimensional scaling of college students' perceptions of academic dishonesty. Journal of Higher Education, 79(5), 587–607. doi:10.1353/jhe.0.0021
Scott, M., & Lyman, S. (1968). Accounts. American Sociological Review, 33(1), 46–62.
Sendag, S., Duran, M., & Fraser, M. R. (2012). Surveying the extent of involvement in
online academic dishonesty (e-dishonesty) related practices among university
students and the rationale students provide: One university’s experience.
Computers in Human Behavior, 28, 849–860. doi:10.1016/j.chb.2011.12.004
Shaw, C. (2004). Academic dishonesty in traditional and online courses as self reported
by students in online courses (Doctoral dissertation). Retrieved from ProQuest
Dissertations and Theses database. (3120331)
Short, S. (2006). Focus groups: Focus group interviews. In E. Perecman & S. R. Curran
(Eds.), A handbook for social science field research: Essays and bibliographic
sources on research and design methods (pp. 104–117). Thousand Oaks, CA:
SAGE Publications, Inc. doi:10.4135/9781412973427
Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2012). Teaching and learning
at a distance: Foundations of distance education (5th ed.). Boston, MA: Pearson.
Sloan Consortium. (n.d.). Class differences: Online education in the United States, 2010.
Retrieved from www.sloanconsortium.org/publications/survey/class_differences
Staats, S., Hupp, J. M., Wallace, H., & Gresley, J. (2009). Heroes donʼt cheat: An
examination of academic dishonesty and students' views on why professors donʼt
report cheating. Ethics and Behavior, 19(3), 171–183.
doi:10.1080/10508420802623716
Stephens, J. M., Young, M. F., & Calabrese, T. (2007). Does moral judgment go offline
when students are online? A comparative analysis of undergraduatesʼ beliefs and
behaviors related to conventional and digital cheating. Ethics and Behavior,
17(3), 233–254. doi:10.1080/10508420701519197
Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat:
Frequency and type of academic dishonesty in the virtual classroom. Online
Journal of Distance Learning Administration, 7(3), n.p.
Sue, V. M., & Ritter, L. A. (2007). Conducting online surveys. [Online book]. SAGE
Publications, Inc. doi:10.4135/9781412983754.
Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: SAGE Publications.
Thomas, A., & De Bruin, G. (2012). Student academic dishonesty: What do academics
think and do, and what are the barriers to action? African Journal of Business
Ethics, 6(1), 13–24. doi:10.4103/1817-7417.104698
Ullah, A., Xiao, H., Lilley, M., & Barker, T. (2012). Using challenging questions for
student authentication in online examination. International Journal for
Infonomics, 5(3/4), 631–639.
Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in
online courses? Online Journal of Distance Learning Administration, 8(1), n.p.
Williams, S., Tanner, M., Beard, J., & Hale, G. (2012). Academic integrity on college
campuses. International Journal for Educational Integrity, 8(1), 9–24.
Witherspoon, M., Maldonado, N., & Lacey, C. (2012). Undergraduates and academic
dishonesty. International Journal of Business and Social Science, 3(1), 76–86.
Zou, J. J. (2011, September 4). With cheating only a click away, professors reduce the
incentive. The Chronicle of Higher Education. Retrieved from
www.chronicle.com
Appendix A
From “McCabeʼs Academic Integrity Survey Report 2010,” by D. DuPree and S. Sattler,
2010. Copyright 2003 by Don McCabe, Texas Tech University Ethics Center website:
www.depts.ttu.edu/provost/qep/docs/McCabe_Academic_Integrity_Report_Cover.pdf.
Adapted with permission.
Appendix B
Modified AIS
Academic Environment
Please tell us about the academic environment at your institution. Please note that
all responses will be part of the aggregated data and no individual responses will
be released or identified with any individual.
1. How would you rate the following? (Scale: Very low, Low, Medium, High, Very high, No opinion)
- The severity of penalties for cheating in online classes at your institution
- The average student's understanding of the college's policies concerning cheating in online classes
- Student support of these policies
- Faculty support of these policies
- The effectiveness of these policies
2. When, if at all, in your online courses do you discuss with students your policies concerning the following? (Mark which applies best: In individual assignments; On syllabus or course outline; At start of semester; Do not discuss; Not relevant; Other)
- Plagiarism
- Permitted and prohibited group work or collaboration
- The proper citation or referencing of sources
- Falsifying/fabricating research data
3. Please note the primary sources from which you have learned about the
academic integrity policies at your institution (Check all that apply).
4. How frequently do you think the following occur in the online courses at your institution? (Scale: Never, Very seldom, Seldom/sometimes, Often, Very often, No opinion)
- Plagiarism on writing assignments
- Students inappropriately sharing work in group assignments
- Cheating during tests or examinations
5. How often, if ever, have you seen a student cheat during an online test or
examination at your institution?
o Never
o Once
o A few times
o Several times
o Many times
6. If you answered anything other than Never to Question 5, please answer the
following question. If you were convinced, even after discussion with the student,
that a student had cheated on a major test or assignment in your online course,
what would be your most likely reaction? (Check all that apply)
7. Have you ever ignored a suspected incident of cheating in one of your courses
for any reason?
o Yes
o No
If you answered Yes, did any of the following influence your decision? (Check all
that apply)
o Lacked evidence/proof
o Cheating was trivial/not serious
o Lack of support from administration
o Student is the one who will ultimately suffer
o Didn’t want to deal with it; system is so bureaucratic
o Not enough time
o Other:
8. Have you ever referred a suspected case of cheating to your Chair, Dean, or
anyone else?
o Yes
o No
If you answered Yes, how satisfied were you with the way the case was handled?
o Very satisfied
o Satisfied
o Neutral
o Unsatisfied
o Very unsatisfied
Specific Behaviors
9. Students have different views on what constitutes cheating and what is acceptable behavior. We would like to ask you some questions about specific
behaviors that some students might consider cheating. This is a two-part question.
In part one, please mark how often, if ever, you have observed or become aware
of a student in your class engaging in any of the following behaviors during the
last three years. If a question does not apply to any of your courses, please check
the “Not Relevant” column. For example, if you do not use tests/exams, you
would check “Not Relevant” for questions related to tests/exams. In part 2, you
will be asked the same questions, but this time you will mark how serious you
think each type of behavior is.
Part 1: How often, if ever, have you observed or become aware of a student in your class engaging in any of the following behaviors during the last three years? (Scale: Never, Once, More than once, Not relevant)
- Fabricating or falsifying a bibliography in an online assignment
- Working on an online assignment with others when the instructor asked for individual work
- Getting questions or answers on an online test from someone who has already taken the test
- Helping someone else cheat on an online test
- Copying from another student during an online test with his or her knowledge
- Using digital technology (such as text messaging) to get unpermitted help from someone during an online test or assignment
- Paraphrasing or copying a few sentences from a book, magazine, or journal (not electronic or Web-based) without footnoting them in a paper s/he submitted in an online class
- Turning in a paper in an online class from a “paper mill” (a paper written and previously submitted by another student) and claiming it as his/her own work
- Using an electronic/digital device as an unauthorized aid during an exam
- Turning in a paper copied, at least in part, from another student's paper, whether or not the student is currently taking the same online course
- Using a false or forged excuse to obtain an extension on a due date or delay taking an online exam
- Turning in work done by someone else in an online class
- Cheating on a test in an online class in any other way
Part 2: How serious do you think each type of behavior is? (Scale: Not cheating, Trivial cheating, Moderate cheating, Serious cheating)
(The behaviors rated are the same ones listed in Part 1.)
11. Have you ever offered an online test or exam at your institution?
o Yes
o No
12. If you have answered Yes to Question 11, have you ever observed a student
who: (Check all that apply)
13. How strongly do you agree or disagree with the following statements? (Scale: Disagree strongly, Disagree, Not sure, Agree, Strongly agree)
- Cheating in online classes is a serious problem at this institution
- Our student judicial process is fair and impartial
- Students in online classes should be held responsible for monitoring the academic integrity of other students
- Faculty members are vigilant in discovering and reporting suspected cases of academic dishonesty in their online classes
14. What safeguards do you employ to reduce cheating in your online courses?
(Check all that apply)
15. What additional safeguards would you employ to reduce cheating in your
online courses, if they were available? (Check all that apply)
Demographics
16. How many years have you been teaching at the college level?
o 0-2
o 3-7
o 8-12
o 13 or more
17. Gender?
o Male
o Female
18. In which discipline do you primarily teach?
o Arts
o Business
o Communication/Journalism
o Engineering
o Humanities
o Math or Science
o Nursing/Health professions
o Social and behavioral sciences
o Other:
Focus Group: The researcher will invite 8 focus group members for a one hour
conversation about the survey questions. If you are interested in joining the focus
group, please add your contact information to this link. Your information cannot
be traced back to your survey answers.
https://docs.google.com/forms/d/1Z_zK5e4ryLjktEBUzLCysWXPmcjRnTN1BrU
pglJphQU/viewform
Thank you for your participation! Please click to enter into the sweepstakes for a
chance to win a $25 Amazon giftcard https://docs.google.com/forms/d/1Gxqi-
F2IfpLEk4IFbaHn4SzgULSXVtXYMukHJVW6J7Y/viewform
From McCabeʼs Academic Integrity Survey Report 2010, by D. DuPree and S. Sattler,
2010. Reprinted with permission. Retrieved from Texas Tech University Ethics Center
website: www.depts.ttu.edu/provost/qep/docs/
McCabe_Academic_Integrity_Report_Cover.pdf
Appendix C
The survey was delivered via Google Forms, and there was a technical glitch that prevented the first 42 respondents from selecting multiple answers as indicated in the question. Instead, these respondents could only select one answer for question 9a and one answer for question 9b. Chi-square test results indicate that this glitch did not significantly influence the respondents' answers when compared to subsequent submissions made after the glitch was corrected.
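A comparison of this kind can be run as a chi-square test of independence on a contingency table of answer choices for the two groups of respondents. The sketch below assumes Python with the SciPy library and uses hypothetical placeholder counts; the study's actual counts are not reproduced here.

from scipy.stats import chi2_contingency

# Hypothetical answer counts (choice A, choice B, choice C) for the two groups.
table = [
    [18, 14, 10],  # first 42 respondents (single-answer glitch)
    [35, 30, 24],  # subsequent respondents
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A non-significant result supports the conclusion that the glitch did not
# materially change how respondents answered.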