Cheating in Online Classes: A Preliminary Investigation

by
Mirella Baker Bemmel

An Applied Dissertation Submitted to the


Abraham S. Fischler School of Education
in Partial Fulfillment of the Requirements
for the Degree of Doctor of Education

Nova Southeastern University


2014
Approval Page

This applied dissertation was submitted by Mirella Baker Bemmel under the direction of
the persons listed below. It was submitted to the Abraham S. Fischler School of
Education and approved in partial fulfillment of the requirements for the degree of
Doctor of Education at Nova Southeastern University.

________________________________ ________________________
Gordon Doctorow, EdD Date
Committee Chair

________________________________ ________________________
Michael Simonson, PhD Date
Committee Member

________________________________ ________________________
Mary Ann Lowe, SLPD Date
Associate Dean

Statement of Original Work

I declare the following:

I have read the Code of Student Conduct and Academic Responsibility as described in the
Student Handbook of Nova Southeastern University. This applied dissertation represents
my original work, except where I have acknowledged the ideas, words, or material of
other authors.

Where another author’s ideas have been presented in this applied dissertation, I have
acknowledged the author’s ideas by citing them in the required style.

Where another author’s words have been presented in this applied dissertation, I have
acknowledged the author’s words by using appropriate quotation devices and citations in
the required style.

I have obtained permission from the author or publisher—in accordance with the required
guidelines—to include any copyrighted material (e.g., tables, figures, survey instruments,
large portions of text) in this applied dissertation manuscript.

Mirella Baker Bemmel


Signature

Mirella Baker Bemmel


Name

May 12, 2014


Date

Acknowledgments

This memorable journey I was blessed with would not have been possible without

the enthusiasm and encouragement of my wonderful husband, Ahmed Baker, and our

sons Sharif and Tariq. I will be eternally grateful for their unconditional love and

unwavering support. I would also like to thank my mother and siblings, who inspired me

beyond belief and catapulted me higher than I ever thought possible.

My deepest appreciation goes out to my Dissertation Chair, Dr. Gordon

Doctorow. Our first meeting in Orlando three years ago was the beginning of a whirlwind

of exploration guided by a strong commitment to scholarship, tremendously helpful

meticulous feedback, insightful suggestions, and encouragement in the most kind and

professional manner imaginable. While reaching the final destination is certainly icing on

the cake, the journey provided an experience of personal and intellectual growth, which

has changed the course of my life forever.

A word of thanks to those who helped me conduct my research, especially the staff at

the institutions at the focus of this study, whose assistance I could always count on.

Finally, I would like to thank my friends, family, and colleagues whose words of

encouragement helped me get through this process. Their loving and supportive inquiries,

prayers and constant words of reassurance were never taken for granted.

I dedicate this dissertation to my father, whose spirit was my guiding light. His

life story inspired me to persevere with an insatiable hunger for intellectual growth and

steadfast determination.

Abstract

Cheating in Online Classes: A Preliminary Investigation, Mirella Baker Bemmel, 2014: Applied Dissertation, Nova Southeastern University, Abraham S. Fischler School of Education. ERIC Descriptors: Cheating, Online, Academic Integrity, Community College, Safeguards

This applied dissertation was an inquiry into the phenomenon of cheating among students who take their classes online. There is a common perception that cheating is rampant in online classes, and the Southern Association of Colleges and Schools, the accrediting association in the South, has implemented policies that mandate stricter monitoring of students. In turn, colleges have reevaluated or implemented integrity policies, but enforcement of those policies is inconsistent.

Online faculty at three Florida community colleges were invited to complete a modified version of the Academic Integrity Survey, which provided insights into their perceptions of cheating, their awareness and enforcement of institutional policies regarding cheating, and the safeguards they used or desired. The survey was followed up with an eight-member focus group discussion, and the results were triangulated.

An analysis of the data revealed that faculty are uncertain about the extent of cheating at their colleges, but most take action once they discover an instance of cheating. Their reactions to cheating may not necessarily be in line with institutional policy, although they are aware of the required steps. Faculty use a variety of safeguards to protect the integrity of their courses, but there is an apparent lack of knowledge about the available safeguards and their use.

Table of Contents

Chapter 1: Introduction
    Description of the Problem
    Background and Justification
    About the Researcher
    Purpose of the Study
    Definitions of Major Concepts and Terms

Chapter 2: Literature Review
    Introduction
    Existence of Online Cheating
    Extent of Online Cheating
    No Evidence of Cheating
    Federal Regulations
    Comparison of Online Cheating to Traditional Cheating
    Theoretical Framework
    Catalogue of Different Types of Combative Measures
    Chapter Summary
    Research Questions

Chapter 3: Methodology
    Participants
    Instrument
    Evaluation of Technical Adequacy: Validity and Reliability
    Recommendations for Future Use of the Instrument
    Measured Domains
    Item Selection
    Procedures
    Data Collection and Analysis
    Summary of Procedural Steps

Chapter 4: Results
    Purpose of the Study
    Correcting for a Technical Problem
    Demographics
    Perception of Dishonesty as a Problem in Online Classes
    Chapter Summary

Chapter 5: Discussion
    Overview of the Study
    Summary of Findings
    Conclusions
    Implications
    Limitations
    Recommendations for Future Studies

References

Appendices
    A McCabe Academic Integrity Survey 2010: Screen Shot of Faculty Survey
    B Modified Academic Integrity Survey
    C Chi Square Test of the First 42 Questions

Tables
    1 Area of Primary Teaching Responsibility
    2 Number of Years Teaching at the College Level
    3 Aggregated Survey Responses: Frequency of Cheating, Questions 4a–4c
    4 Means and Standard Deviations, Questions 4a–4c
    5 Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions 9a1–9d1
    6 Means and Standard Deviations, Questions 9a1–9d1
    7 Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions 9e1–9h1
    8 Means and Standard Deviations, Questions 9e1–9h1
    9 Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions 9i1–9m1
    10 Means and Standard Deviations, Questions 9i1–9m1
    11 Source of Material Used by Student to Paraphrase or Copy Material
    12 Aggregated Survey Responses: Types of Cheating Observed, Questions 12a–12d
    13 Cheating is a Serious Problem at Your Institution
    14 Aggregated Survey Responses: Faculty Attitudes Toward Online Cheating, Questions 13a–13d
    15 Mean, Median, Mode, and Standard Deviations, Questions 13a–13d
    16 Seriousness of Behavior, Questions 9a2–9d2
    17 Means and Standard Deviations, Questions 9a2–9d2
    18 Seriousness of Behavior, Questions 9e2–9h2
    19 Means and Standard Deviations, Questions 9e2–9h2
    20 Seriousness of Behavior, Questions 9i2–9m2
    21 Means and Standard Deviations, Questions 9i2–9m2
    22 Aggregated Survey Responses: Reactions to Cheating, Questions 6a–6d
    23 Aggregated Survey Responses: Reactions to Cheating, Questions 6e–6i
    24 Aggregated Survey Responses: Safeguards to Reduce Cheating
    25 Aggregated Survey Responses: Primary Source From Which Faculty Learned About Academic Integrity Policies
    26 Aggregated Survey Responses: Frequencies and Reasons for Ignoring Cheating
    27 Degree of Satisfaction by Faculty With Handling Cases of Cheating
    28 Aggregated Survey Responses: Additional Safeguards Faculty Would Employ
    29 Aggregated Survey Responses: Faculty Ratings of Institutional Measures to Prevent Online Cheating
    30 Aggregated Survey Responses: How and When Faculty Discuss Institutional Dishonesty Policies, Questions 2a–2d
    31 Reason Cheating Was Ignored
    32 Pearson Correlations of Institutional Policies, Support, and Effectiveness
    33 Pearson Correlations: Cheating is a Serious Problem Versus Faculty are Vigilant in Reporting
    34 Pearson Correlations: Actions Taken for Cheating Versus Years of Experience
    35 Aggregated Cross-Tabulation: Responses to Cheating by Gender
    36 Aggregated Cross-Tabulation: Reactions to Cheating by Discipline, Questions 6a–6e
    37 Aggregated Cross-Tabulation: Reactions to Cheating by Discipline, Questions 6f–6i


Chapter 1: Introduction

Online course availability appears to be the answer to increased interest in higher education, but questions regarding cheating in this environment have intensified. The ability to take courses at remote locations has opened doors to students globally who may not have thought they would be able to further their education. While online education has been growing (Sloan Consortium, n.d.), questions about the integrity of courses offered online have become more pressing (Mills, 2010; Parry, 2009; Roach, 2001). Faculty, administrators, and even students continue to question whether the online environment is secure or whether it provides an invitation for academic dishonesty (Mills, 2010; Parry, 2009; Roach, 2001).

Description of the Problem

The problem addressed by this study was the lack of documentation about the

phenomenon of cheating in online instructional environments in terms of the extent,

causes, effects, procedural preparedness, and future planning.

Background and Justification

The federal government's increased scrutiny of integrity in courses delivered via computer comes, according to the Southern Association of Colleges and Schools (SACS, 2010; WCET, n.d.), from the widespread belief that many higher education institutions have not closely monitored student authentication in this environment.

There are indications of widespread concern about cheating among college students,

which has resulted in much research devoted to the topic of academic dishonesty (e.g.,

Black, Greaser, & Dawson, 2008; Eckles, 2010; Grijalva, Nowell, & Kerkvliet, 2010;

Hollinger & Lanza-Kaduce, 2006; Moeck, 2002). The research for this study took place

in Florida, where the problems of dishonesty have also been evident. In 2007, Kaczor

wrote about athletes at a Florida university who were involved in different forms of

cheating in their online classes, a case that received nationwide attention. The Obama

Administration has implemented revised regulations to the Higher Education Act (Higher

Education Opportunity Act, 2008) designed to protect the integrity of online courses.

These changes mandate that accreditation of institutions of higher education will be

contingent upon the establishment of a process which ensures that the student who

submits assignments in an online class is the same student who is actually enrolled in the

program (Higher Education Opportunity Act, 2008).

The Southern Association of Colleges and Schools (SACS) adopted this revision

in 2010 and offered suggestions for different methods by which this could be

accomplished: “(1) a secure login and pass code, (2) proctored examinations, and (3) new

or other technologies and practices that are effective in verifying student identification”

(SACSCOC, 2010, p. 1). Educators and administrators have collaborated through

organizations, such as the Western Cooperative for Educational Telecommunications

(WCET, n.d.), where attempts have been made to uphold the standards of online classes

by offering solutions to the growing concern about integrity in the online environment.
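As an illustration of the first SACSCOC method, a secure login and pass code, the following minimal Python sketch shows one common way such a check can be implemented: the system stores only a salted hash of each student's pass code and compares hashes at login. This sketch shows a standard technique only; it is not a description of SACSCOC's or any college's actual system, and all names and parameters are hypothetical.

    # Illustrative sketch only: verifying a student's login against a stored,
    # salted pass-code hash. All names and parameters are hypothetical.
    import hashlib
    import hmac
    import os

    def hash_passcode(passcode, salt=None):
        # A fresh random salt is generated when a pass code is first enrolled.
        if salt is None:
            salt = os.urandom(16)
        # PBKDF2 with many iterations slows brute-force guessing of pass codes.
        digest = hashlib.pbkdf2_hmac("sha256", passcode.encode("utf-8"), salt, 100_000)
        return salt, digest

    def verify_login(passcode, salt, stored_digest):
        # Recompute the hash with the stored salt; compare in constant time.
        _, digest = hash_passcode(passcode, salt)
        return hmac.compare_digest(digest, stored_digest)

    # Enrollment stores (salt, digest); each exam login calls verify_login.
    salt, stored = hash_passcode("example-pass-code")
    assert verify_login("example-pass-code", salt, stored)
    assert not verify_login("wrong-guess", salt, stored)

Because only salted hashes are stored, a compromised database does not expose students' pass codes, and the constant-time comparison avoids leaking information through response timing.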

The institutions at the focus of this study were three community colleges based in

Florida where online course offerings are available in both fully online and blended

formats. Records at one of the colleges where the researcher is a faculty member showed

that from 2006 to 2010 the number of students enrolled in blended courses at this

institution grew from 3,983 to 21,028, while the number of students enrolled in fully

online courses during that same time grew from 13,369 to 31,669 (R. Adkins, former

Vice President of Instructional Technology, personal communication, September 23,

2011).

Instructors who teach online can elect to have their students take proctored exams

at the institution's on-site online testing center. According to the center's records, the

online testing center served 11,530 students during the 2010 academic year (J. Davidoff,

E-Assessment and Learning Resources Manager, personal communication, August 25,

2011). Since instructors who teach blended courses likely deliver exams in class, there is

a surveillance gap between the 31,669 students enrolled in fully online classes and the

11,530 presumed fully online students taking proctored exams at the testing center on

site. This apparent gap has led the researcher to ask what measures were being taken by

instructors to ensure that the remaining students do not cheat on their exams.
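To make the apparent gap concrete, and assuming (as the passage does) that all 11,530 testing-center students were enrolled in fully online classes, the number of fully online students with no on-site proctoring is

    31,669 - 11,530 = 20,139.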

About the Researcher

The researcher of this study works at a community college where she is the Lead

E-Associate. In this position, she mentors instructional faculty who are in the course

development process, while she guides the developers through the implementation of

quality standards mandated at the researcher’s site. Additionally, she frequently facilitates

E-Learning workshops face-to-face and online. She holds an instructional faculty position

in Sociology and offers her classes face-to-face, fully online, and in blended format. She

has been teaching online for 12 years and has developed several online courses that have

also been delivered by other faculty.

Purpose of the Study

The purpose of this study was to provide an inquiry into the phenomenon of

cheating in online courses. This study critically evaluated Gallant and Drinan’s (2008)

institutional theory regarding academic dishonesty: factors that lead to cheating, what

constitutes cheating, influences on cheating, and measures currently taken by

instructional faculty to safeguard integrity in their online courses. The information gained

from this study is intended to clarify existing perspectives on cheating, including

influences on individual cheating and cheating in collaborative environments, the

cheating culture, and motivations for cheating. Additionally, the research may help the

participating institutions determine to what degree steps must be taken to ensure

implementation of existing or newly proposed safeguards and to what extent the college

is enforcing the code of conduct pertaining to academic dishonesty. The researcher offers

suggestions on tools and professional development opportunities that need to be

implemented to have an effective online program.

Definitions of Major Concepts and Terms

Online course assessment. For purposes of this study, online course assessment

is defined as testing performed by students “with the assistance of the Internet and related

technologies” (Watson & Sottile, 2010, Abstract, para. 1). Testing is performed by students

whose courses are delivered via the computer through the use of the Internet or an online

environment. The examinations are taken online by students and submitted electronically

through the course website (Watson & Sottile, 2010).

Cheating. For the purposes of this study, cheating is defined as “academic dishonesty” (Watson & Sottile, 2010, Abstract, para. 1), including, but not limited to, “cheating and receiving assistance during tests and quizzes” (Watson & Sottile, 2010, Discussion section, para. 4).

Plagiarism. For the purposes of this study, plagiarism is defined as “the intent to

claim as one’s own someone else’s words or ideas” (Simonson, Smaldino, Albright, &

Zvacek, 2012, p. 285).



Chapter 2: Literature Review

Introduction

Distance education has opened doors to many who may not have thought that

education was an option for them because of limitations of time or the distance to a

specific location. Online education has been growing exponentially over time, and the

questions about the integrity of courses offered online have intensified (Mills, 2010; Parry, 2009; Roach, 2001). Face-to-face classes have been reported to have fewer incidents of cheating, yet faculty, administrators, and students have continued to question whether the online environment offers enough security to prevent such incidents (Mills, 2010; Parry, 2009; Roach, 2001). Some studies have

proposed that the distance between the teacher and the student is a factor that increases

the instance of cheating in online courses, inasmuch as a student’s identity can be

assumed by someone else (Davis, Drinan, & Gallant, 2009). The purpose of this literature

review is to address the issue of dishonesty online by providing information on the

theoretical framework of academic dishonesty, the background, the importance of

addressing dishonesty, and ways to help combat cheating in online classes.

Existence of Online Cheating

The question of whether students in online courses are submitting their own work

continues, as does the concern about students taking exams at home in a nonproctored

environment (Black et al., 2008; Guernsey, 2001; Mills, 2010; Prince, Fulton, &

Garsombke, 2009). These same studies state that online students are often not monitored

and are free to share answers to exams, which are taken at home, or in any environment

that provides Internet access. Patnaude (2008) concluded that the lack of monitoring may

give faculty the perception that students are more likely to cheat in online courses. The

issue of cheating in the online environment has been addressed before, and some

researchers agree that there is reason to be concerned, but that cheating online is not a

bigger problem than it is in face-to-face classes (e.g., Grijalva et al., 2010; Kwong, Ng, &

Mark, 2010). In fact, several studies have concluded that postsecondary students in online

classes are less likely to cheat compared to students in the traditional face-to-face setting

(e.g., Eckles, 2010; Grijalva et al., 2010; Guernsey, 2001; Hart & Morgan, 2010; Kwong

et al., 2010).

Studies that have evaluated academic dishonesty have relied on self-reported

surveys that asked undergraduate college students questions about whether they had

cheated or how frequently they thought their peers engaged in dishonest academic

behaviors (Mills, 2010; Stuber-McEwen, Wiseley, & Hoggatt, 2009). King, Guyette, and

Piotrowski (2009) and Kelley and Bonner (2005) proposed that cheating is more common

among postsecondary students from departments where the stakes of passing exams are

high, such as nursing programs. Although there has not been evidence to support these

claims, questions regarding the issue have continued to come up (Kelley & Bonner, 2005;

King et al., 2009). The range of cheating varies, as do the demographics of college

students who cheat. A study conducted among 1,390 postsecondary students revealed that

70.2% of those who cheated were between the ages of 18 and 22 (Stearns, 2001).

Stearns (2001) classified cheating as an overarching term that includes a number

of dishonest behaviors among college students, including copying from another student,

giving other students access to the exam, taking the test for another student, getting

answers from someone who previously took the exam, among other behaviors. Cheating

does not only pertain to dishonest activities on exams. Stearns (2001) showed that

students allow others to copy their homework or papers they wrote, and they frequently

engaged in acts of plagiarism. There are individuals and agencies that have made writing

papers for others their way to earn money (Spaulding, 2009; Watson & Sottile, 2010).

Although it does not seem to be one of the leading ways of cheating, it still occurs and

therefore needs to be acknowledged according to Shaw (2004) and Stearns (2001). Shaw

found that postsecondary students are more likely to cheat on exams than they are on

writing assignments completed at home. His study set out to determine the extent of

cheating in online courses among postsecondary students. Of the 581 students in his

study, only 0.7% asked others to take their exam for them. Spaulding stated that self-

reports on cheating are often unreliable, since postsecondary students may not provide

frank answers if they fear that their academic standing may be placed in jeopardy.

Additionally, Spaulding noted that postsecondary students may lie on a survey about

academic dishonesty because they fear that their level of acceptance among their peers

will be negatively affected. Findings from Jones (2011) supported Spaulding’s

conclusion on the unreliability of self-reports. In his study, Jones found that 92% of

students indicated that they had personally cheated or knew of others who had cheated.

He compared those results to results of similar studies where the self-report rate is much

lower. Jones concluded that self-reporting must be unreliable, reasoning that otherwise

the results would be similar.

In the online environment, cheating extends beyond signaling and exchanging

answers. Moten, Fitterer, Brazier, Leonard, and Brown (2013) detailed several methods of online cheating, including students waiting for their classmates to take the exam and share the answers.

Moten et al. pointed out that when students take their exams in a nonproctored

environment, they may also use multiple computers to facilitate cheating. On one

computer, they will have the exam open, while the others provide Internet access, which

is used to browse for answers. Moten et al. mentioned that students fraudulently claim

that their computer showed error messages. While the instructor researches the problem,

the student has a chance to look up the answers. At times, students will submit corrupted

files to buy more time to complete a writing assignment (Moten et al., 2013). Students

will ask others to take the exam for them, by providing their user name and password to

third parties.

Extent of Online Cheating

The concern about cheating among college students resulted in much research

devoted to the topic of academic dishonesty (e.g., Baron & Crooks, 2005; Boehm,

Justice, & Weeks, 2009; Brown, Weible, & Olmosk, 2010; Eckles, 2010; Hollinger &

Lanza-Kaduce, 2006; Moeck, 2002; Thomas & De Bruin, 2012). Newspapers have

reported on different cases of cheating in higher education. Zou (2011) reported that

students at a Boulder, Colorado institution cheated on assignments by having their peers

submit in-class assignments during their absence. The students used hand-held devices,

called clickers, which were registered under the owner’s name, to submit class work

when they were not in class. Zou’s interview with a professor from the University of

Texas revealed that many students exchanged answers, which were then submitted via

the clickers. This resulted in students’ receiving credit for work that was completed by

their friends. The Air Force Academy in Colorado Springs, Colorado, also reported

cheating among 78 cadets whose scores on a calculus final exam were much lower than

the scores on their previous online math exam (Rodgers, 2012). The cadets apparently

used a math program called Wolfram Alpha to obtain questions from the same test bank

that was used to create the exam. The extent of cheating could be influenced by field of

study, as found by Sendag, Duran, and Fraser (2012). Their study found that engineering

and physical science students showed disproportionately higher numbers of cheating

compared to their social science and education peers. Another influence on the extent of

cheating, as indicated by Sendag et al., is the modality in which courses are taken. They

found that students who only took face-to-face classes admitted to more cheating

practices than students who took at least one class online.

Florida has not been spared from cheating and made national headlines in

2007 when almost two dozen athletes at Florida State University were caught cheating in

their online classes (Kaczor, 2007). The students involved in online academic dishonesty

were all athletes who were either receiving scholarships at the time of the incident, or had

received scholarships in the past. That incident revealed several common forms of online

cheating: having someone else take the exam, receiving the answers from someone who

has already taken the exam, and receiving strictly prohibited assistance during the

examination (Kaczor, 2007). Specifically, these Florida athletes had others write their

papers for them and submitted them as their own and had their tutors take their exams for

them (Kaczor, 2007).

For-profit schools are also under fire when it comes to issues of integrity. Their

continued growth and cost of tuition are often mentioned as reasons why they may be

lacking in rigor and integrity (Klor de Alva, 2011). The extent of cheating and concerns

about it are not limited to the United States. Thomas and De Bruin (2012) surveyed 917

full-time faculty in Johannesburg, South Africa, to learn about the faculty perceptions of

online cheating and actions they take to prevent it. The data that were gathered showed

that 92.6% of the respondents felt that online cheating compromised the university’s

ideals.

No Evidence of Cheating

Attempts to find out whether the problem of cheating online is more serious than

cheating in face-to-face classrooms have not been successful. Several studies consistently

concluded that cheating online is as much of an issue as cheating in the traditional

classroom (e.g., Grijalva et al., 2010; Klor de Alva, 2011; Krsak, 2007; Watson & Sottile,

2010). The studies reported a considerable amount of evidence of cheating being a

serious problem, but the extent of the problem varied (Grijalva et al., 2010; Klor de Alva,

2011; Krsak, 2007; Watson & Sottile, 2010). Some of the shortcomings in the findings

stem from the fact that the research is limited by privacy issues. As such, Watson and

Sottile (2010) could not provide additional information regarding the majors of the

undergraduate college students to show whether students with specific majors were more

likely to cheat. Their study also failed to address the frequency of cheating by

individuals. Witherspoon, Maldonado, and Lacey (2012) showed in their study that

students who cheat are more likely to cheat by using contemporary methods (r = .78, p <

.001), rather than the more traditional forms of cheating (r = .68, p < .001). Contemporary

methods include, but are not limited to, the use of cell phones, text messages, and the

purchase of research papers on the Internet. The researchers considered some examples

of traditional forms of cheating to be whispering during the exam, turning in work

completed by someone else, improper citations, and copying someone else’s answers.
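For reference, the r values reported above are Pearson product-moment correlation coefficients; the definition below is the standard one and is not specific to Witherspoon et al. (2012). For paired observations (x_i, y_i),

    r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}},

so both reported values indicate strong positive linear associations.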

In one study where 225 postsecondary students completed the Academic

Dishonesty Survey, findings showed that students enrolled in face-to-face courses

appeared to be more likely to cheat compared to their online peers (Stuber-McEwen et

al., 2009). There is only speculation about why online students do not appear to cheat as much, but researchers suggested it may have to do with the higher levels of motivation and self-direction required of online students, whose coursework demands more independent work (Stuber-McEwen et al., 2009).

Brown et al. (2010) conducted a study among administrators to gauge their

perceptions of online dishonesty. A survey was sent to 555 business school deans who

held membership in the Association to Advance Collegiate Schools of Business. The

responses sent back by 177 deans showed that 78% of them thought that less than 40% of

their students participated in cheating. Only 5.1% indicated that cheating was a very

serious problem, while 48.3% stated the problem was moderate. The perception of deans

who had an honor code at their school was that cheating was not a serious problem, in

contrast to the deans whose schools were lacking an honor code. Those latter deans

perceived the problem of dishonesty to be more serious. Overall, the results show that the

deans underestimated the overall extent of dishonesty. Brown et al. (2010) speculated that

the deans may have lacked awareness of the problem of dishonesty because most of them

did not teach and may have had poor communication with faculty who experienced these

problems in their classrooms.
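For context, simple arithmetic on the figures reported above gives the survey's response rate:

    177 / 555 ≈ 31.9%.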

Federal Regulations

The Obama Administration implemented revised regulations to the Higher



Education Act motivated by the rationale of protecting the integrity of online courses.

These changes mandate that accreditation of institutions of higher education will be

contingent upon the establishment of “processes through which the institution establishes

that the student who registers in a distance education or correspondence education course

or program is the same student who participates in and completes the program and

receives the academic credit” (Higher Education Opportunity Act, 2008, Pub. L. No. 110-315, para. 115, II, ii, 122 Stat. 3325). The 2010 adoption of this revision by the SACSCOC Board of Trustees has, in

turn, increased the pressure on institutions which seek their accreditation.

Comparison of Online Cheating to Traditional Cheating

In a study by Black et al. (2008) about the prevalence of online cheating as compared to cheating in the face-to-face environment, a high proportion of postsecondary students was found to have cheated in both online classes and face-to-face classes, though the authors proposed that their reasons for cheating might have been

different. Black et al. compared 1,068 participants’ perceptions about cheating and found

that several factors contributed to the likelihood of the postsecondary students resorting

to dishonest behavior. These factors ranged from the students’ credit load to the level of

interaction with their instructor. Contrary to other studies (Shaw, 2004; Spaulding, 2009;

Watson & Sottile, 2010), postsecondary students in Black et al.’s study perceived that

online students engage in less cheating than those in traditional classrooms. Watson and

Sottile (2010) conducted a study among undergraduate and graduate university students

to expand the limited amount of research that had been done in regard to online cheating.

The participants of their study self-reported on cheating, including ways in which they

cheated. Stuber-McEwen et al. (2009) explained that face-to-face university students are

more likely to cheat because of pressures they feel from instructors who set date and time

deadlines. Such pressures could result in students’ cheating out of panic. According to

this hypothesis, they are less likely to panic in online classes where they have more

flexibility with their time (Stuber-McEwen et al., 2009).

A quantitative survey was administered among 635 undergraduate and graduate

students at a medium-sized university in Appalachia (Watson & Sottile, 2010). The study examined cheating behavior in online and face-to-face classes, along with perceptions of whether online or traditional face-to-face classes experienced more cheating. Gender and participation in sports were

seen as contributing factors to cheating as males and athletes showed higher instances of

cheating. The conclusion of the study was that students in face-to-face classes were more

likely to cheat, possibly because of their stronger social relationships with their

classmates (Watson & Sottile, 2010). These connections purportedly enabled students to

find peers willing to share information that led to dishonest behavior.

Turner Dille (2011) studied 343 students from various institutions throughout the

United States to determine whether there is a difference between face-to-face students

and online students and their reported cheating behaviors. Findings were that cheating

was prevalent in both modalities, but that students who cheated in face-to-face courses

were 7.3 times more likely than online students to cheat in their online courses as well

(Turner Dille, 2011). Furthermore, Turner Dille’s results showed that 15.5% of students

admitted to cheating in their online courses, compared to 18.4% who admitted to cheating

in their face-to-face courses.

In general, students are much more likely to engage in face-to-face cheating methods than in digital forms of cheating, claimed Stephens, Young, and

Calabrese (2007). In their study, they found that cutting and pasting information from the

Internet is far more common now than it was in the past, and students create cheat sheets

on their electronic devices rather than using handwritten notes. Overall, self-reports indicated that 19% of the 1,305 students in the study were more likely to resort to traditional cheating methods, compared to the 7% who reported using handwritten notes.

Stephens et al. (2007) found that students who cheat did not exclusively rely on either

type of cheating, but instead, they used both types of cheating as dictated by the

circumstance. The findings also suggested that students view both types of cheating as

equally serious.

Theoretical Framework

To gain better understanding of how academic dishonesty can be prevented, this

study has drawn upon Gallant and Drinan’s (2008) institutional theory to explain what

motivates students to cheat and the faculty and administrative role in preventing it.

Gallant and Drinan proposed a four-stage process to guide an institution toward

change. The stages are recognition and commitment, response generation, response

implementation, and institutionalization. During Stage 4, Gallant and Drinan advised that

the institution should focus on academic integrity. They suggested that academic integrity is

considered institutionalized when policies and procedures related to it become widely

accepted and implemented. Gallant and Drinan (2008) contended that a new norm would

emerge upon this institutionalization. A case study by Gallant and Drinan illustrated the

progression through the four-stage model, as they examined the lack of faculty response

to academic dishonesty. During the recognition and commitment stage, the institution

would have to recognize that the problem exists and commit to taking the necessary steps

to change it. The institution's response to the existing problem is generated in the second stage and implemented in Stage 3, the implementation stage. Finally, acceptance in

the institutionalization stage would result from a buy-in by faculty, administrators and

students who would all see the benefits and the long-term effects from the new process

(Gallant & Drinan, 2008).

Hart and Morgan (2010) reported on a comparative, descriptive study of nursing

students based on Gallant and Drinan’s (2008) theoretical framework. The 377 students

who took part in their study included both traditional face-to-face and online students. Hart and Morgan found that students in both groups reported low levels of cheating and rated academic integrity highly. Face-to-face

students reportedly had more instances of cheating, which Hart and Morgan (2010)

speculated to be possibly the result of the way integrity information and other academic

policies are disseminated. According to their analysis, online students have to exert more

independence and are expected to seek out written materials and policies on integrity, as

compared to face-to-face students who experience the verbal dissemination of this

information by their instructor. The information obtained by the online students is more

in depth than the condensed version presented in a traditional classroom (Hart & Morgan,

2010). Hart and Morgan suggested that upholding and supporting the policy of academic

integrity by the online students may be their attempt to protect the reputation of their

degree and reduce the general concerns of academic dishonesty in online courses.

Honor codes. Contrary to the findings of Hart and Morgan (2010), Patnaude

(2008) found that the presence of an institutional honor code does not decrease the

likelihood of cheating among students. As part of a study conducted by Patnaude (2008),

365 online faculty from five campuses at the University of Houston completed a “Faculty

perceptions of academic honesty online” survey (p. 37). The study compared the

perceptions of faculty who had reported to have taught at a university that had an honor

code to faculty who had reported to have taught at a university that had no honor code.

There was a statistically significant difference (p = .009) between the two groups: faculty

who taught at a university that had an honor code perceived cheating to be higher among

their students compared to their colleagues at universities without an honor code.

Patnaude indicated, however, that no statistically significant difference in perceptions of

student cheating existed between faculty who did not know whether an honor code

existed and those who were aware of the honor code. In concurrence with Hart and

Morgan (2010), Miller, Shoptaugh, and Wooldridge (2011) concluded that internalized

integrity standards can be highly effective. They found that honor codes can be effective

in that regard, as they underscore students' moral character and instill in students a sense of personal responsibility as well as responsibility for their contribution to the academic community as a whole.

In their study, LoSchiavo and Shatz (2011) found that the impact of honor codes

depends on the course delivery method. They implemented three studies in their

Introductory Psychology course. Students in their fully online sections did not show a

significant drop in cheating when they signed the honor code. The students in the blended

courses who signed the honor code were 30% less likely to cheat (N = 165) than the

students who did not sign the honor code (57.6% and 81.8% respectively; LoSchiavo &

Shatz, 2011). LoSchiavo and Shatz attributed the significant difference between cheating

patterns of students in the blended classroom versus those in the fully online classroom to

the academic setting. LoSchiavo and Shatz (2011) hypothesized that when students have

personal interactions with their peers and their teachers, they may feel a sense of moral

obligation to be honest.
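For clarity, the 30% figure appears to be a relative reduction in the cheating rate between the two blended-course groups, which simple arithmetic on the reported percentages reproduces:

    (81.8 - 57.6) / 81.8 ≈ 0.30,

that is, the cheating rate among students who signed the honor code (57.6%) was about 30% lower than the rate among those who did not (81.8%).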

Reasons for cheating. Mayhew, Hubbard, Finelli, Harding, and Carpenter (2009)

pointed out that cheating in high school could be an indicator that the student will cheat

in college. Of the 527 college students who participated in their study, they found that

71.3% of the students reported that they never cheated on in-class exams while in college,

compared to 50% who reported that they cheated while they attended high school.

Twenty-seven percent reported that they cheated only a few times while in college,

whereas the remainder reported more instances of cheating. Additionally, 40% reported

that they cheated a few times on their tests while in high school. Mayhew et al. (2009)

claimed that cheating in high school is highly predictive of cheating in college. They

suggested that cheating can be diminished if instructors develop a better understanding of the motivations for cheating and if students are made aware of their violation of academic expectations, which Mayhew et al. (2009) dubbed the “cheating norm” (p. 432).

Chase’s (2010) study of academic dishonesty with 2,302 students enrolled at a

university in South Florida revealed that student connectedness played a significant role

in their engagement with dishonest behavior. Chase concluded that the more connected

students were, the less likely they were to cheat. Chase also found a positive correlation

between the number of classes a student is enrolled in and their likelihood to cheat.

Findings showed that the more classes the student was enrolled in, the more likely they

were to cheat in their online classes. Students in Chase’s (2010) study reported that they

were less likely to cheat if their instructor showed engagement within the course and care

for the students. Findings by Sendag et al. (2012) did not fully support the notion that

institutional policies served as a deterrent. They surveyed 1,153 students from a

Midwestern university regarding the extent to which they cheated. Humanities and

Education students were least affected by the policies, which led Sendag et al. to question

whether the distribution of such policies varied and if such variations influenced their

effectiveness. Their findings also showed that freshmen were more likely to engage in

cheating compared to older students in their sophomore or senior year.

Gross (2011) challenged institutions and instructors to reevaluate their views on

cheating, as the values of students from this new millennium have shifted. As a result of

this shift, students look at cheating as a legitimate way of getting through school. Gross

argued that ignoring this value shift will keep institutions stuck in their old views where

they fail to become more tolerant of behaviors that are now widely acceptable and no

longer condemned. In turn, negative, moralistic reactions to cheating only address the

issue on a superficial level. Gross (2011) suggested “the need is to adapt performance

criteria to these new realities rather than act to repress or punish them” (p. 436). Gross

concluded that educators should therefore reevaluate expectations of their students by

searching for new ways to contribute to the student’s personal growth and learning

process. Students’ level of motivation also plays a role in their likelihood to cheat.

Sendag et al. (2012) stated that more than two fifths of the 1,153 students they studied

reported that they felt overwhelmed by their assignments. About 32% did not feel

motivated by their assignments, or they did not feel capable of doing them or felt

pressured to get good grades. Gross pointed out that the current generation of students is

pressured by the values of the work environment they strive to enter, where striving for

credentials and good grades on a transcript leads them to a sense of entitlement. They feel

empowered to challenge their teachers and offer suggestions for grade improvements, so

they have a chance at competing against their peers. A solution to making improvements

in academia is to encourage instructors to become more flexible by allowing students to

have a say in their individual learning plan (Gross, 2011). According to Gross, this will

likely enhance learning and make the relationship between the student and their teacher

more effective.

What is considered cheating? In a study with 81 second-language instructors at

22 colleges across the United States, Correa (2011) found that what instructors consider

dishonesty in their classrooms varies. While there might be agreement on some forms of

dishonesty, for example, submitting a paper that was purchased online or one written by

another student, the instructors varied in the way they rated the seriousness of cheating.

Some forms of cheating that ranked low on perceived seriousness were asking another

student what was on the test when they took it, enlisting help from tutors or native

speakers, and using online translators. On the question related to who the ultimate victim

of cheating is, participants almost unanimously agreed that the student is the ultimate

victim (98.8%), while the institution ranked second (80.2%; Correa, 2011). Jones (2011)

found that a student's perception of what qualifies as cheating depended on the scenario.

The students unanimously agreed that turning in someone else’s assignment as their own

is cheating. Most (92%) of the 48 students sampled agreed that improperly citing

information that was directly copied from an Internet source is cheating. Only 75% of

students considered the purchase or download of a paper dishonest. A clear majority



(73%) of students did not think that submitting the same paper in multiple classes is

cheating (Jones, 2011).

The results of Correa’s (2011) study further showed that most instructors (70.6%,

n = 75) preferred to handle cheating by giving the student a zero for their work, rather

than following the institutional policy and dealing with the issue outside of the classroom.

Approximately one third (34%, n = 70) of instructors who caught their students cheating indicated that they reported some of the students, but not others; one third (31%) reported all of the students; and one third reported none of the students who reportedly cheated. In their responses, instructors wrote that they lacked the support

from their institution when it came to enforcing policies on cheating, while others wrote

that dealing with the student directly was enough.
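By subtraction from the figures above, the remaining group comprises roughly 100% - 34% - 31% = 35% of the 70 respondents, consistent with the description of one third reporting none of the students.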

While academic integrity policies could offer clarity on cheating for full-time

faculty, the buy-in may not be the same among part-time faculty (Hudd, Apgar, Bronson,

& Lee, 2009). An example of how part-time faculty’s understanding of the policies is

slightly different from that of full-time faculty was provided in a study by Hudd et al., in

which the part-timers indicated that they did not see collaboration on homework and the

use of notes during exams without authorization as serious violations of the dishonesty

policy. While less than 5% of part-time faculty considered unauthorized collaboration a

major violation, 41% of full-time faculty classified it as such. Their study also showed

that for the most part, students felt that it was up to the instructor to take steps to prevent

cheating, rather than for the students to take personal responsibility to conduct

themselves honestly. The biggest difference among the responses received from full-time

and part-time faculty was a matter of perception. While 68.5% of full-timers felt that

there was a lot of cheating, only 34.1% of part-timers concurred. This led Hudd et al.

(2009) to conclude that part-time instructors may be less likely to include integrity

policies on their syllabi and discuss issues of cheating in their classes. Hudd et al.

concluded that part-time instructors may lack awareness because of their limited time on

campus and limited involvement in professional development where more emphasis may

be placed on institutional policies. Additionally, they thought that part-time instructors

who teach at various campuses might have false expectations of the students based on

their experiences on a campus where cheating rates are low.

When Pincus and Schmelkin (2003) reported on a study on the perceptions of

academic dishonesty among faculty, they ranked the findings from the 212 usable surveys along a continuum. The majority of faculty considered behaviors like getting

answers from another student during an exam and stealing exams extreme forms of

cheating behaviors. However, the results showed mixed responses on whether using a

previous exam to study is considered cheating or to be considered an excellent tool for

preparation. The overall findings showed that faculty look at cheating on a continuum,

ranging from the more serious to the less severe types of cheating (Pincus &

Schmelkin, 2003). The differences between the student and faculty perceptions were

highlighted as noteworthy, as students did not see sabotage of someone’s work as a

serious offense, whereas faculty ranked that very high. Pincus and Schmelkin (2003)

recommended that institutional policies need to be clear on the different types of cheating

and how to deal with them. They felt that existing policies often exclude what may be

considered minor infractions, which could create confusion among students. Pincus and

Schmelkin (2003) commented that faculty would benefit from having institutional

guidelines on how to deal with cheating that are on a continuum based on its severity.

Sanctions should be determined based on the type of violation. Students might not

understand that their behavior is considered cheating; many underreported their own cheating because they saw it as collaboration, according to Williams, Tanner, Beard,

and Hale (2013). At the Midwestern university where their study was conducted,

Williams et al. found no institutional honor code, but they learned that the school had a

student code of conduct in place. Furthermore, students were well informed of the

school’s policy during orientation in their freshman year as well as in classes that they

attended. Despite the exposure to the policies, 67% of respondents admitted on the survey

that they engaged in dishonest behaviors over the past year. Additionally, of the 562

responses that were received, 59% of the students indicated that they engaged in

unauthorized collaboration.

According to Hudd et al. (2009), it is important to ensure that students receive the

same message against dishonesty in classes taught by part-time and full-time faculty.

This is especially important because of the large number of adjuncts who teach various

classes at institutions across the United States (Hudd et al., 2009). Hudd et al. pointed out

that faculty sometimes feel that enforcing rules against dishonesty is not their job and that

students should have learned about integrity policies in high school. The authors claimed

that attitude contributes to the perpetuation of cheating, especially since it makes the

instructors less likely to reduce the perpetrator’s grade or take any other corrective action.

In a study with 250 undergraduate students from a university in the northeast of

the United States, Thakkar (2012) asked open-ended questions about their perceptions of

different aspects of academic cheating. There were six main themes among the questions

asked, which touched on issues related to understanding the policy on academic integrity,

the roles of the students, the roles of the instructors, prevalence, attitudes, and prevention.

The survey revealed that the majority of students acknowledged that they were made

aware of the institutional policy on cheating through student orientation and their

instructor. The students’ responses varied in regard to their interpretation of what

constituted cheating, and they were particularly confused about plagiarism. The students

mostly felt that an institutional policy was necessary, and that the burden of enforcing the

policy rested on the instructors. Students in the study reported resentment towards

instructors who chose to ignore reports of cheating (Thakkar, 2012).

Thakkar’s (2012) study revealed that students felt strongly about implementation

of preventative measures against cheating. The recommendations on prevention ranged

from improved proctoring during exams to more individual instructor involvement with

the students to help them improve. The students agreed that policy enforcement by

faculty, in addition to frequent reminders of the policy, decreases the chances that

students would resort to cheating.

Bruner’s theory of learning. Insights into how students learn were provided by

Bruner (1960), who argued that learning occurs when learners are motivated and

information is presented to them in a structured fashion. When students are presented

with new information, Bruner said that they will grasp this information best when they

spend enough time to absorb it. If students are not excited about the material they learn,

they will be more likely to forget it quickly. Students are

generally tested on what Bruner felt to be trivial facts which are only remembered

through rote memorization. The intrinsic motivation to learn is thus displaced by the

pressures of getting credentialed, which in turn push some students to engage in

dishonest behaviors. Bruner questioned the usefulness of “automizing devices” or

“teaching machines” (p. 83), which may not adequately present challenging content and

relevant exercises or aid in the student’s ability to comprehend information presented

through this medium.

Kohlberg’s theory of motivation for cheating. Educators are responsible for

teaching their subject matter, but arguably also for teaching morals and values to their

students. Kohlberg (1981b) claimed that part of an educator’s duty is to enforce

classroom rules and that overlooking them would result in the decline of moral behavior

among students. According to Kohlberg (1981a), people’s motivation to cheat often

relates to the norms of the group they are part of. This group determines a “moral

atmosphere,” which lays the foundation of how the group members will behave. As a

result, Kohlberg concluded that it is not just the individual that should be addressed when

it comes to moral decisions. Instead, he proposed that attempts should be made to raise

the moral level of the entire group.

Kohlberg’s (1981b) list of motives that determine people’s choice of whether

to behave morally provided insights which might be useful in a classroom setting. He

identified the fear of being punished, the expectation of rewards, anticipated

approval, craving the respect of others, and feelings of self-condemnation as motivators

influencing why one would elect to make a morally sound decision. Furthermore,

Kohlberg contended that the critical issue in cheating is “recognition of the element of

contract and agreement implicit in the situation” (Kohlberg, 1981b, p. 44). Following

from this reasoning, the likelihood of cheating increases if the situation is such that a test-

taker is not being supervised and the possibility of sanctions is unclear (Kohlberg,

1981b).

The psychology behind cheating was also studied by Staats, Hupp, Wallace, and

Gresley (2009), who described students who do not cheat as heroes with special

characteristics. Staats et al. found that students who fit the attributes of being brave,

honest, and empathetic are most likely to remain honest because cheating brings them a

feeling of guilt, which has an overall restraining effect on their likelihood of cheating at all.

Based on these findings, Staats et al. suggested that attempts to prevent cheating must be

influenced by an understanding of the psychology of the so-called heroes. Staats et al.

(2009) compiled a list of traits to determine the characteristics of academic heroism.

Based on their list, they created an instrument that consisted of questions that would help

them determine where participants ranked in areas of empathy, honesty and courage. The

Short Index of Bravery, the Morally Debatable Behaviors Scale, The Interpersonal

Reactivity Index, and the Faces Index were existing instruments which laid the

foundation for the modified instrument used by Staats et al. In their anonymous, self-

report survey of 383 Midwestern undergraduate students, they found that students who

ranked high on bravery, heroism, and empathy ranked low on past, current, and future

intent to cheat. Staats et al. (2009) found that the characteristics

were weakly correlated with gender. Their theory suggested that combating academic

cheating should involve an effort of institutions to increase the students’ levels of

courage, honesty, and empathy. Students should be encouraged to be courageous, even

when they are afraid of failing exams, and schools may consider rewarding students who

display those characteristics. Academic heroism, claimed Staats et al. (2009), should be

celebrated and acknowledged through formal ceremonies, like graduation. Additionally,

Staats et al. encouraged institutions to offer more support for faculty, who often fail to

follow through with reporting academic misconduct for fear of retaliation or wasted

efforts.

One problem with cheating is that students may rationalize their behavior and not

see any fault in their actions (Brent & Atkisson, 2011). This differs from purposeful

cheating that is done in order to get admitted into an institution, or because of pressure or

convenience (Devlin & Gray, 2007). When 56 Australian students participated in a study

in 2003, they revealed that some of their cheating was done because of external pressures

(Devlin & Gray, 2007). Claims about a lack of choice because of strict application

policies and education costs were similar to the reasons given by students in the United

States who justified cheating on exams (Brent & Atkisson, 2011; Devlin & Gray, 2007).

Brent and Atkisson (2011) warned that these attitudes must be considered when an

institution designs policies to prevent cheating.

In their study, Brent and Atkisson (2011) surveyed 420 students enrolled at a

Midwestern university. The purpose of the survey was to compare different perspectives

on cheating. The students were asked to answer questions related to the attitudes toward

cheating among fellow students. The students’ responses indicated that the perpetrators

mostly denied their responsibility in cheating, a tactic in line with the theory of neutralization by Sykes

and Matza (1957). Brent and Atkisson (2011) designed their survey to include questions

related to a blend of theoretical perspectives. In addition to questions related to the Sykes

and Matza theory, Brent and Atkisson included questions that were in line with Scott and

Lyman’s (1968) theory on reformulation. Brent and Atkisson (2011) found that students

most likely cheat because of personal crises they may be going through. The excuses fall

under Scott and Lyman’s theoretical perspective on neutralizing acts or consequences by

offering excuses or justifications (Scott & Lyman, 1968). These are referred to by Scott

and Lyman (1968) as “accounts” or explanations offered for behavior that is considered

wrong or unacceptable. The act itself is not denied, but the reason for committing the act

is somehow justified. Brent and Atkisson (2011) claimed that the theory of accounts

offers a partial explanation of students’ cheating behaviors. This helps to explain why

Chapman, Davis, Toy, and Wright (2004) learned in their exploratory interviews with 40

students that the students saw nothing wrong with providing a friend with questions they

could expect on an exam, as it would help the friend get a better grade. The students’

answers led Chapman et al. to develop a questionnaire for a sample of 824 business

students at a western university. Fifty-eight percent of students felt that it was considered

cheating to pass information on a test to another student after the professor’s specific

request not to do so. Further findings indicated that although students think cheating is

morally wrong, they continue to do it because they perceive that the benefits are higher

than the cost. The students also indicated that they felt that everyone else was doing it

(Chapman et al., 2004).

Brent and Atkisson’s (2011) study revealed that 245 of the 401 students who

completed the survey indicated that cheating could never be justified. However, 144

students indicated that under certain conditions cheating could be justified; for example,

if the result could move them further along. This justification supports Sykes and Matza’s

(1957) theoretical perspective on neutralization (Brent & Atkisson, 2011). Sykes and

Matza (1957) studied different types of deviant behavior, ranging from minor offenses to

serious crimes, and concluded that much can be explained by the theory of association,

which states that delinquency arises from the acceptance of new norms and behaviors.

According to Sykes and Matza, delinquent behavior arises for different reasons,

depending on which technique is adopted by the person who violates the norm. The type

that is directly in line with Brent and Atkisson’s (2011) findings is “denial of

responsibility” (Sykes & Matza, 1957, p. 667). When it comes to exams, students were

more likely to admit their wrongdoing, but would often justify their behavior by offering

excuses, such as stating that the material was not covered during class lectures (Brent &

Atkisson, 2011). Students, according to Brent and Atkisson, see cheating on exams much

differently than they see cheating on homework. Students in Brent and Atkisson’s (2011)

study reported that working together was almost an essential part of learning. As such,

Brent and Atkisson stated that instructors must hold up their end of the bargain, and

clearly indicate in their course contract what constitutes cheating and which behaviors

would be considered unacceptable. Additionally, clear statements of the ramifications,

including punishments, need to be mentioned on the contract (Brent & Atkisson, 2011;

Chapman et al., 2004). Miller et al. (2011) found in their study, however, that students

who were made aware of the harsh consequences of cheating were more likely to cheat.

They concluded that “punishment has its effect when we make the salience of

punishment high, but is likely to have little effect when the perception is that the

probability of being caught is low” (p. 180).

Catalogue of Different Types of Combative Measures

Role of faculty. In an effort to increase credibility and to maintain accreditation

status, many schools have looked for ways to lower instances of cheating and also to

lower the perception that cheating is widespread, especially in online courses (Moeck,

2002; Parry, 2009; Prince et al., 2009; Roach, 2001). During their interviews of 225

upper- and lower-level undergraduate students, Stuber-McEwen et al. (2009) found that

these adults, who were also enrolled in traditional postsecondary classrooms, all reported

that they had cheated in the past. Their self-reports showed a higher instance of cheating

in the classroom by students whose cheating was prompted by panic during the exam,

rather than by deliberate planning to cheat. Stuber-McEwen et al. (2009) stated that

students in online courses may be better motivated and therefore less inclined to cheat,

and that instructors in online courses may be more vigilant about preventing cheating

because of their perception that more cheating occurs online. The SACSCOC (2010)

mandates that institutions wishing to retain their accreditation take

measures to ensure that faculty strictly enforce their institution’s code of conduct dealing

with dishonesty.

In an effort to minimize the amount of cheating that takes place, Moten et al.

(2013) suggested rapport-building on the part of the instructor. The instructor will get to

know the student through frequent interactions, which will give an idea of the student’s

writing and testing style. Having the students sign a dishonesty statement with each

submission, administering proctored exams and using multiple versions of exams were

mentioned as viable options to curtail dishonesty. Other suggestions include setting

cheating traps by creating websites that contain the exam questions with incorrect

answers. The instructor can take on the role of “class mole” by enrolling themselves in

the class under an alias (Moten et al., 2013). This fake student may then inadvertently be

included in conversations that could catch cheating students in the act.



Harkins and Kubik (2010) suggested that in a face-to-face classroom, safeguards

to prevent cheating could include proctoring written exams, assignments, or other graded class

activities. Students do not always realize their behavior is considered cheating, claimed

Harkins and Kubik, and these students sometimes feel that they are engaging in

collaborative behavior with the resources that are available to them. According to

Harkins and Kubik (2010), students make use of readily available tools online, and these

students do not realize that the availability does not justify their use in the context of a

summative evaluation. Harkins and Kubik mentioned that this form of cheating may be

considered ethical by the students because it is widespread and seems to have become the

norm. Harkins and Kubik dubbed this type of cheating “collaborative ethical cheating”

(2010, p. 139), because it is common among students who, as they claimed, have learned to

cheat defensively. Davis et al. (2009) stated that it is easier to plagiarize when

information is so easily available through the Internet. Additionally, students are

competing in a global environment where they often feel pressured to get ahead so they

may enter the workplace, which embraces speed and innovation. Workers are expected to

access information quickly and perhaps it is felt that copying from online resources is not

frowned upon by employers (Harkins & Kubik, 2010).

Harkins and Kubik (2010) added that the types of cheating among students have

moved beyond the traditional exchanges of answers or getting answers from the person

who sits close enough to them that they can read their answers. Students now use devices

that are not always easy to detect because they have gotten smaller and more

sophisticated. Harkins and Kubik contended that students have easy access to digital

media, the Internet, and software which can give them unauthorized access. Many cell

phones are now equipped with Internet access, which tempts students to take pictures of

their exams for friends (Harkins & Kubik, 2010). Even teachers expect more

collaborative work, as they encourage their students to tap into the multitude of resources

available online (Davis et al., 2009). This can contribute to students’ misunderstanding of

their limitations when it comes to the use of the information that is obtained. The vast

array of resources is beyond the teachers’ control, and they struggle to prevent cheating

or to enforce the school’s honor code (Davis et al., 2009). Patnaude (2008) suggested that

honor codes should be developed by instructional technology departments at their

respective institutions and should be specifically designed for courses that are

delivered online. Enforcement and acceptance of those customized honor codes may be

more successful than enforcement of general honor codes which were initially designed

for face-to-face classes (Patnaude, 2008).

Preventative measures against cheating may need to start with a look at why

students are inclined to cheat. Kohn (1999) posited that rewards and punishment are

useful for training animals, but he warned that these behaviorist techniques impede

learning. Instead of feeling motivated by good grades or awards, students need intrinsic

motivation, which will help them understand the value of learning (Kohn, 1999). When

students are motivated to learn, he argued, they will perform better as a result, and when

their interest gets triggered, the students’ overall achievement improves. Kohn therefore

suggested that educators should design intriguing and engaging tasks to serve as intrinsic

motivation for the students. Kohn (1999) said that when students are given the

opportunity to play an active role in their learning process, they perform much better than

when they are passive recipients of information who must demonstrate their knowledge

by scores on assignments and examinations. Students may perform well because of the

immediate reward they work towards, but their long-term interest in learning is

negatively affected by complying with the status quo. Kohn warned that students may lose

their motivation to learn when the rewards cease to exist. Kohn (1999) challenged the

system by questioning the value of the evaluation process that is currently in place in

academia. The pressures are not only on the students who have to perform to standard,

but also on the teachers who are restricted by measures set by the institutions. These

measures are usually grade or performance related, which in turn drives the teachers to

feel pressured to get the materials across to the students within an environment of

constraints (Kohn, 1999). Sendag et al. (2012) mentioned that peer pressure contributes to

the instances of cheating in online classes, and educators need to consider incorporating

lessons on how to utilize positive peer pressure.

Correa’s (2011) study concluded that many instructors do not take their role in

combating cheating seriously. Correa complained that they do not explain to their

students what cheating is and warned that there cannot be an expectation of integrity if

the students are not given the academic policy on cheating. Correa stressed the

importance of following the institutional policy on cheating to ensure the credibility of

the school. Simply giving the student a zero and handling the matter individually, stated

Correa, results in poor record-keeping, as future instructors would have no way of knowing

whether the student committed a first offense. This point is supported by Thakkar (2012)

who stressed the importance of following through after an incidence of cheating is

discovered. Thakkar recommended that the burden of preventing cheating should be

shared with students, who can become anonymous informants who might be incentivized

by rewards. The role of faculty in the prevention of cheating was highlighted by Thomas

and De Bruin (2012), who stated that barriers against cheating will only be effective

when faculty commit to advising students what cheating entails, explain the

consequences of cheating, and, finally, commit to taking steps to report cheating and

follow through with disciplinary actions. In their research with online faculty in

Johannesburg, South Africa, Thomas and De Bruin (2012) learned that some instructors

do not feel responsible for curtailing cheating by their students. Conversely, of the 60%

of faculty who indicated that they had reported cheating in the past, 80% stated that

they would much rather provide students with policies regarding academic integrity than

take disciplinary action once cheating occurs. They blamed their inaction or

unwillingness to take action on their workloads and lack of evidence that cheating in fact

occurred, thus resulting in psychological discomfort. Faculty also blamed the institution’s

lack of consistency in dealing with reported dishonesty.

Williams et al. (2012) proposed that institutions should implement a required

module on academic integrity that students must take within their first year of enrollment.

The early exposure was expected to elicit open discussion among students, their peers, and

their instructors, which would address any questions the students may have. Additionally,

Williams et al. claimed it would create a platform where incorrect information or

misconceptions could also be cleared up. Williams et al. (2012) suggested that faculty

members should also be educated on the topic to gain better understanding of dishonest

behaviors and their responsibility to combat them.

Other suggestions on how to combat cheating range from the instructor checking

the students’ citations, to the use of webcams, increasing the number of required papers

that can be checked for plagiarism, limiting the exam time, incorporating the use of

Skype for oral examinations, using different assignments in the classroom, providing

clear guidelines on rules and expectations, locking Internet sites while the exam is in

progress, and using full screen programs to create the exams, which prevent students

from minimizing the screen (Cole & Swartz, 2013).

Ways to prevent plagiarism. Jones (2011) recommended the incorporation of

the academic integrity policy and the institutional honor code as part of the syllabus. She

suggested that the policies should be clear and the steps that would be taken when such

policies are violated should also be mentioned. According to Jones, online instructors

should make specific mention of what is considered cheating, because the expectations in

the online environment may be different from face-to-face. The policies should be

reviewed during the course orientation, and students should be quizzed on the policy to

ensure their understanding (Jones, 2011). Jones proposed the use of an entertaining

activity to draw students’ attention to the policy. The syllabus or the learning activity

related to academic integrity should include links to tutorials on the Internet, which

provide additional background information.

Copyright issues have a bearing on the issue of plagiarism as they help students

understand the problems with cheating. Since students come from diverse backgrounds

and schools, they may not understand what constitutes plagiarism, especially because of

changes which almost seem to promote plagiarism. Farnsworth and Bevis (2006) argued

that materials of others, such as information or photos, should be assumed to be protected,

and permission should be obtained prior to adopting the information. Farnsworth and

Bevis (2006) stated that students over the age of 18 are protected by copyright laws, but

they must understand that information submitted for their classes for the purpose of

assignments, for example, gets added to their institution’s database. Students are often

not allowed to submit the same work for different classes without the permission of the

instructor, said Farnsworth and Bevis. Their views are not widely accepted because the

interpretation of academic dishonesty in terms of submission of one’s work for more than

one class varies from institution to institution (Schmelkin, Gilbert, Spencer, Pincus, &

Silva, 2008). In their study with 560 students, Schmelkin et al. found that students’

perceptions of cheating on papers are different from how they perceive cheating on

exams. The lack of clarity of what constitutes cheating may lead to unintentional

cheating, misinterpretation, or lack of consistent action from the instructor in response to

cheating behavior (Schmelkin et al., 2008). To prevent violations of the integrity policies,

students should be asked to provide a written copy with citations for written and oral

presentations, according to Jones (2011). These submissions, Jones pointed out, can be

submitted to plagiarism detection programs, such as SafeAssign.

In their article, Harkins and Kubik (2010) argued that “copyleft” encourages

cheating, since it is the antithesis of copyright. They claimed that it allows users to find

and modify materials and claim them as their own. Lessig (2008) pointed out that writers’

creativity is stifled when they are unable to publish modified versions of existing work

without the permission of the original author. While some consider it plagiarism, Lessig

called this form of creative writing remixing, where authors freely use materials from

others to create a different version. He argued that allowing users to edit web-based or

print-based material encourages creativity and should therefore not be held to a standard

of plagiarism rules which stand in the way of the creative process.



Harkins and Kubik (2010) stated that access to music and other software provides

all users an opportunity to creatively make modifications. This applies to writings as

well, and students have free access to papers they can in turn modify and call their own

(Harkins & Kubik, 2010). Simonson et al. (2012) provided descriptions of various ways

in which materials are protected by copyright laws. They stated that an instructor’s notes

are subject to protection. They further explained that since material in online courses is

digitally presented to students, this material is considered “fixed” and may not be

reproduced by the student without permission from the instructor. Simonson et al. (2012)

also discussed different forms of plagiarism, and claimed that “online entrepreneurs” are

particularly troublesome because they sell prewritten papers to any interested buyer, who

can make changes as they see fit, and submit the work as their own. Simonson et al.

brought up the issue of students’ intellectual property rights, as they mentioned that the

services offered by websites such as Turnitin.com or SafeAssign could pose a breach of

those rights. Their concern stems from the fact that the students’ papers get added to the

databases of the aforementioned companies without the students’ permission.

Witherspoon et al. (2012) and Heckler, Rice, and Hobson Bryan (2013) stated that

students’ awareness of technological cheating detection resources may serve as a

deterrent and prompt students to take charge of their academic success with honest

pursuit.

In their study, Heckler et al. (2013) found that when students knew their work was

going to be submitted through a plagiarism detection program, they were less inclined to

cheat, and the problem of plagiarism was reduced. The researchers used secondary data

from Turnitin to review the scores of seven courses offered in the fall of 2010 and the

spring of 2011. In their courses, the students were provided with a syllabus which

included the academic integrity policy. In the fall of 2010, the students were asked to

submit their papers without being told by their instructor that they would be submitted

through a plagiarism detection system. In the spring of 2011, the students were required

to submit their paper through the plagiarism detection service (Heckler et al., 2013).

Turnitin results are expressed in percentages, which indicate the amount of overlap

found. The results showed that students who were unaware that their paper was going to

be submitted for plagiarism detection were most likely to plagiarize from other students.

Their overlap scores ranged from 0% to 76% (M = 16.33%, SD = 16.92%).

The students who were aware that their paper was going to be submitted to detect

plagiarism had a range of 0% to 48.33% (M = 9.34%, SD = 8.8%; Heckler et al.,

2013). Their findings showed that males were more likely to plagiarize than their female

counterparts. The researchers concluded that the use of plagiarism detection software

significantly predicted plagiarism. The conclusion is in line with Moten et al.

(2013), who suggested the use of Turnitin.com, WriteCheck.com, and

Duplichecker.com to detect plagiarism in submitted work.
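
Although Turnitin’s matching algorithm is proprietary, overlap percentages like those reported above can be approximated in spirit with a simple word n-gram comparison. The sketch below is a hypothetical illustration only; the function names and the choice of 5-grams are assumptions, not any vendor’s actual method.

```python
# Minimal sketch: approximate a plagiarism "overlap percentage" with word 5-grams.
# Illustration only; commercial detectors use proprietary, far more robust matching.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_percentage(submission: str, source: str, n: int = 5) -> float:
    """Percent of the submission's n-grams that also appear in the source."""
    sub_grams = ngrams(submission, n)
    if not sub_grams:
        return 0.0
    shared = sub_grams & ngrams(source, n)
    return 100.0 * len(shared) / len(sub_grams)

paper = "Integrity policies must be clearly explained to every student before exams begin."
web_source = "Integrity policies must be clearly explained to every student in the catalog."
print(f"Overlap: {overlap_percentage(paper, web_source):.2f}%")
```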

In their research, Baron and Crooks (2005) mentioned that instructors need to be

vigilant about catching the students who engage in plagiarism. As part of a solution, they

offered that instructors could provide students with in-class writing exercises, which

help to set a baseline for instructors who later assign papers that have to be

completed outside of the classroom. Baron and Crooks (2005) proposed that the

instructor could compare the writing style of a student’s in-class work to assignments

completed at home. They also wrote that it is not uncommon for instructors to notice

significant differences in a student’s writing style. In online classes, instructors

have numerous ways of obtaining writing samples from students, because students are

expected to engage in writing continuously through emails and discussions (Davis et al.,

2009). Farnsworth and Bevis (2006) suggested that teachers can detect sudden

changes in writing style by looking for abrupt changes in the font of printed work and

stylistic differences in the reference list, which may have been pasted from different

sources.
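
Baron and Crooks did not prescribe a specific tool for such comparisons. The following is a minimal sketch of one common stylometric approach, comparing character 3-gram frequency profiles with cosine similarity; the sample texts and the 0.5 review threshold are illustrative assumptions, not validated cutoffs.

```python
import math
from collections import Counter

# Minimal stylometry sketch: compare two writing samples via cosine similarity
# of character 3-gram frequencies. Real baselines would use much longer texts.

def profile(text: str, n: int = 3) -> Counter:
    text = " ".join(text.lower().split())  # normalize whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values()) * sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

in_class_sample = "The student wrote this short passage under supervision in class."
take_home_paper = "Heretofore unexamined epistemologies necessitate reconceptualization."

similarity = cosine(profile(in_class_sample), profile(take_home_paper))
print(f"Style similarity: {similarity:.2f}")
if similarity < 0.5:  # illustrative threshold
    print("Writing styles differ noticeably; the paper may merit a closer look.")
```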

Patel, Bakhtiyari, and Taghavi (2011) recommended that teachers should require

students to submit documents that are unlocked. PDF documents often have a locking

feature, which prevents the use of plagiarism detection tools. An instructor who tries to

submit a paper in PDF format to verify originality will receive an error message and will

not receive any results (Patel et al., 2011). There are ways around plagiarism detection

tools, and Patel et al. stated that tricks are being used to make the tools ineffective.

Replacing spaces with dots, called “Dot Replacement,” and changing the dot color to

white apparently tricks the detection programs. Rather than reading independent words,

the program will process the text as single word sentences (Patel et al., 2011). Translator

services on the Internet also offer an opportunity to change sentences, when text is

translated into another language and then translated back. Patel et al. explained that the

initial translation is often not a direct translation, but rather a paraphrased version of the

text. This can be done multiple times with different languages, each one offering its own

interpretation. When converted back, the translated text offers a paraphrased version of

the original text with a different sentence structure, which will not be detected by

originality programs, such as Turnitin.com, PlagiarismDetect.com, and iThenticate (Patel



et al., 2011).
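
On the detection side, simple obfuscation of this kind can be normalized before a paper is checked. The sketch below is a hypothetical countermeasure that undoes plain dot replacement in already-extracted text; confirming that the dots were colored white would require inspecting the document’s formatting, which this sketch does not attempt.

```python
import re

# Hypothetical countermeasure sketch: undo "dot replacement" in text extracted
# from a submission. A period wedged directly between two letters (with no
# surrounding space) is treated as a disguised word separator.
# Caveat: abbreviations such as "e.g." would also be altered, so the output
# should be reviewed rather than trusted blindly.

def undo_dot_replacement(text: str) -> str:
    return re.sub(r"(?<=[A-Za-z])\.(?=[A-Za-z])", " ", text)

obfuscated = "Students.sometimes.replace.spaces.with.dots.to.evade.detection."
print(undo_dot_replacement(obfuscated))
# -> Students sometimes replace spaces with dots to evade detection.
```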

When students are taught to use online citation tools, stated Jones (2011), they get

in the habit of generating a reference list, which should be submitted with their work.

Jones recommended that instructors familiarize their students with tools such as Easybib

and the Citation Generator.
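
At bottom, such citation tools template bibliographic fields into the required style. As a rough, hypothetical illustration of what they automate (not the output of EasyBib or any real generator, and with invented sample data):

```python
# Hypothetical sketch of what a citation generator automates: templating
# bibliographic fields into an APA-like journal reference. Real tools cover
# many source types, styles, and edge cases; the data below is invented.

def apa_journal_reference(authors: str, year: int, title: str,
                          journal: str, volume: int, pages: str) -> str:
    return f"{authors} ({year}). {title}. {journal}, {volume}, {pages}."

print(apa_journal_reference("Doe, J., & Roe, A.", 2013, "A sample article title",
                            "Journal of Examples", 12, "34-56"))
```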

Another solution offered by Baron and Crooks (2005) is the use of portfolios.

They mentioned that students who keep a portfolio during the semester would have

multiple samples of their work, similar to the writing sample that can serve as a baseline

of students’ work. Additionally, Baron and Crooks stated that instructors need to increase

their level of awareness, as students do not always remove the evidence of their cheating

ways: they may leave information in the headers or footers, which instructors can detect

if they activate those functions while reading the paper.
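
For word-processed submissions, this header and footer check can even be scripted. The sketch below assumes submissions arrive as .docx files and that the third-party python-docx package is installed; the file name is hypothetical.

```python
# Minimal sketch: surface header/footer text left behind in a .docx submission.
# Assumes the python-docx package (pip install python-docx); "submission.docx"
# is a hypothetical file name.
from docx import Document

def header_footer_text(path: str) -> list[str]:
    doc = Document(path)
    found = []
    for section in doc.sections:
        for part in (section.header, section.footer):
            for paragraph in part.paragraphs:
                if paragraph.text.strip():
                    found.append(paragraph.text.strip())
    return found

for line in header_footer_text("submission.docx"):
    print("Found in header/footer:", line)
```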

Baron and Crooks (2005) pointed out that reporting cheating students for

disciplinary action is not consistent among instructors, who may see it as additional work

or not worth the trouble of reporting. In their research, Williams et al. (2012) learned that

of the 74% of faculty who acknowledged knowing that cheating takes place in their

classes, only 18% reported it. Institutions often have policies on academic dishonesty,

and instructors are advised to include those policies in their syllabi and apprise students

of the consequences. Baron and Crooks (2005) speculated that these policies alone do not

deter cheating and that enforcement should therefore be compulsory. If not enforced, Baron

and Crooks argued, students quickly realize that they can get away with dishonest

practices. They pointed out that students’ work that is submitted online can be checked

for plagiarism through available programs, such as Turnitin and Integriguard, or by



simple checks with search engines such as Google, which usually picks up exact

sentences that were copied into a student’s writing assignment. Farnsworth and Bevis

(2006) also recommended the use of Google, which is an easily accessible search engine

that can be used to detect plagiarism by typing parts of paragraphs or sentences into the search area to

look for plagiarized information. Williams et al. (2012) found that faculty don’t usually

report instances of cheating as they lack evidence, see it as trivial, or that the student will

eventually suffer the consequences when they get caught in future classes.

Chapman et al. (2004) suggested a college-wide campaign to combat cheating

that would enlighten the students with factual information regarding the extent of

cheating. Since students overestimated the occurrence of cheating by others, Chapman et

al. proposed that the tactic might be as successful as a similar approach used to combat

alcohol use at universities. This, however, is not supported by McCabe and Trevino

(1997), who reported that awareness of the academic integrity policy and peer reporting

have not proven to make a significant difference.

High teacher and learner interaction. Like other researchers (Prince et al.,

2009), Baron and Crooks (2005) have recommended high levels of interaction between

students and between the student and their instructor. Prince et al. (2009) have listed

other practices that deter online cheating, such as including group projects and requiring

prompt feedback. Students can engage in group interactions by creating multiple

discussion questions and posting them on the class discussion board. The instructor can

then assign each student a set of discussion questions to answer (Farnsworth & Bevis,

2006). Prince et al. (2009) suggested that students should be assessed in multiple ways,

so their final grade in the class is determined by their participation on exams, quizzes,

discussions, papers and group activities. The use of open-book exercises and

collaborative work can foster students’ ability to synthesize information from different

resources, stated Farnsworth and Bevis (2006).

According to Lieber (2012), students form their own conclusions on cheating and

faculty efforts to reduce it. Lieber observed that students reported lower incidences of

cheating when their teachers used various versions of the test during the examination and

if they only reused tests or portions of tests for 2 years or less. Changing the questions

would lower the students’ chances of obtaining an advance copy. Randomly spaced seat

assignments and different exam versions were indicated as providing additional cheating

barriers. The role of proctors was highlighted by Lieber (2012) as well, particularly the

actions of the proctor who provides close monitoring of the students. Some examples

included staying in the room, keeping a watchful eye and walking around in the room on

occasion. Lieber examined whether providing instructors financial incentives for

deterring cheating made a difference. He found that such incentives are rare because of

budget constraints, and that instructors are generally

intrinsically motivated to deter cheating.

Setup of online exams. Various researchers proposed that to lower the instances

of cheating, instructors can change the order of the questions and change exams

frequently to ensure that exam questions or answers are not shared between students

(Baron & Crooks, 2005; Farnsworth & Bevis, 2006; Moeck, 2002). Open-ended

questions require a deeper level of thinking and involvement, stated Baron and Crooks

(2005), and could be used instead of multiple-choice questions. In turn, they explained

that these essay questions should carry more weight than multiple-choice questions. Other

ways to lower cheating offered by researchers include using a variation of different types

of questions, varying the order of the questions (Moeck, 2002), and limiting the test

availability to only one hour on a specific day to lower the chances of sharing test

information (Farnsworth & Bevis, 2006). Students who are unable to take the test at that

time should be given an alternate test with different questions, stated Farnsworth and

Bevis (2006).
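
A minimal sketch of how these recommendations might be automated in a quiz tool follows: each student’s exam is drawn from a larger question bank and shuffled with a per-student seed, so versions differ in both content and order. The question bank and function names are illustrative assumptions.

```python
import random

# Minimal sketch: build per-student exam versions from a question bank, so that
# both the questions and their order vary and answer keys cannot be shared.
QUESTION_BANK = [
    "Define academic integrity.",
    "List two forms of plagiarism.",
    "Explain the purpose of an honor code.",
    "Describe one proctoring safeguard.",
    "Contrast collaboration with collusion.",
    "Why are multiple exam versions used?",
]

def build_exam(student_id: str, num_questions: int = 4) -> list[str]:
    rng = random.Random(student_id)  # per-student, reproducible shuffle
    return rng.sample(QUESTION_BANK, num_questions)

print(build_exam("student-042"))
```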

Baron and Crooks (2005) claimed that engagement in group projects shifts the

responsibility as well, arguing that this makes the students responsible for their share of

the work. Interaction with others supposedly makes it more difficult to cheat (Baron &

Crooks, 2005). Moeck (2002) suggested that administering tests more frequently also

deters cheating. Furthermore, he stated that conferences with students help establish

rapport, which he claimed to be a deterrent against cheating. Moeck explained that as the

students build a relationship with their instructor, they may feel a sense of guilt or may be

fearful of the instructor’s finding out about their dishonest behavior. Moeck (2002)

pointed out that conferences can be set up via the telephone, the computer or even face-

to-face.

Ullah, Xiao, Lilley, and Barker (2012) designed a “profile based authentication

framework (PBAF)” to authenticate students who take online exams. Along with a user

identification and password, students are required to answer challenge questions that

are used to verify their identity. Ullah et al. stated that unlike the banking experience

where users are less likely to share their user identification and password, students may

be much more willing to share their personal information with others if their intent is to

cheat. The PBAF uses a two-step approach to authenticate the student, namely, the initial

login with their username and password, followed by a series of profile and challenge

questions. Students who fail to answer the questions correctly are denied access and are

reported. In their study, Ullah et al. (2012) tested the PBAF on 34 participants from

universities within and outside of the UK. The authentication

process was done for 7 days spread over a 3-week span. The results of their study showed

that well-designed questions make it difficult for inauthentic users to answer the

questions correctly within a short time. Critical in the validity of the PBAF, said Ullah et

al., is the selection and design of authentication questions that will not lead to

misinterpretation or allow multiple ways to answer them.
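
Ullah et al. describe the PBAF at the level of its workflow rather than its code. The sketch below is a hypothetical rendering of that two-step flow (password check, then profile-based challenge questions, with denial and reporting on failure); the data store, the hashing choice, and the report_incident hook are stand-ins, not the authors’ implementation.

```python
import hashlib

# Hypothetical sketch of a PBAF-style two-step login: password verification
# followed by profile-based challenge questions. All data below is invented.
USERS = {
    "jdoe": {
        "password_sha256": hashlib.sha256(b"correct horse").hexdigest(),
        "challenges": {"In which city did you attend high school?": "miami"},
    }
}

def report_incident(username: str) -> None:
    print(f"ALERT: failed authentication for {username} reported for review.")

def authenticate(username: str, password: str, answers: dict[str, str]) -> bool:
    user = USERS.get(username)
    if not user or hashlib.sha256(password.encode()).hexdigest() != user["password_sha256"]:
        report_incident(username)
        return False
    for question, expected in user["challenges"].items():  # step two
        if answers.get(question, "").strip().lower() != expected:
            report_incident(username)
            return False
    return True

ok = authenticate("jdoe", "correct horse",
                  {"In which city did you attend high school?": "Miami"})
print("Access granted" if ok else "Access denied")
```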

Testing centers. One common practice to ensure integrity is that of using testing

centers which have proctors who monitor test-takers (Baron & Crooks, 2005; Prince et

al., 2009). Prince et al. (2009) suggested that proctors should require two forms of

identification from the students, to ensure that they are indeed the person they claim to

be. Institutions that do not have an on-campus testing center, or who have students who

reside outside of the region where the institution is located, can seek the assistance of a

nationwide testing center such as the National College Testing Association (NCTA, n.d.,

cited in Prince et al., 2009). Schools can join this consortium of 259 members

located throughout the United States as well as in two other countries.

Students who wish to take their proctored exam at any of the NCTA centers need to pay a

fee that varies depending on where the exam is administered.

Jung and Yeom (2009) offered an alternative to the use of proctors placed in the

same room with the test-taker. Their elaborate system, which provides remote monitoring of

students while also securing their identity, is called the Security Control system in the

Online Exam (SeCOnE). Each student’s computer would need to be equipped with a web

camera and microphone and the SeCOnE system software would need to be installed.

The software serves as a verification tool, which establishes the identity of the test-taker

and delivers questions and answers through encryption. Additionally, screen shots of the

examinee are taken throughout the test-taking period, which can be reviewed for

suspicious behavior, such as navigation away from the screen. The system also provides a

way to lock any communication tools during the examination, thereby minimizing a

student’s ability to strike up a chat or email conversation with someone else (Jung &

Yeom, 2009). Prince et al. (2009) recommended that nonproctored exams be used

for extra credit type activities, and they should not make up a large percentage of the

student’s final grade in the course.
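
The summary above does not specify SeCOnE’s encryption scheme. As a generic illustration of encrypted question delivery, the sketch below uses symmetric (Fernet) encryption from the third-party cryptography package; this choice is an assumption made for illustration, not the system’s actual design.

```python
# Generic sketch of encrypted exam-question delivery, one component attributed
# to SeCOnE above. Uses Fernet symmetric encryption from the `cryptography`
# package (pip install cryptography); SeCOnE's real scheme is not specified here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, exchanged during identity verification
channel = Fernet(key)

question = b"Q1: Explain two safeguards against online exam cheating."
ciphertext = channel.encrypt(question)  # what would travel over the network
print("Student client decrypts:", channel.decrypt(ciphertext).decode())
```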

Mirza and Staples’s (2010) study on the use of cameras for monitoring purposes

during examinations found that 80% of the 33 students who were monitored reported

feeling uncomfortable during the test. The students felt psychological pressure, which

Mirza and Staples warned could lead to anxiety during the exam. The students did report,

however, that they were more likely to cheat when monitored by a camera

than when a live proctor was in the room during the examination.

Some students fail to see the value of education and seem to worry more about the

grade they will receive at the end of the term than about the quality of education and course

outcomes, claimed Bedford, Gregg, and Clinton (2011). Bedford et al. (2011) observed

that in order to be considered for jobs or universities, students focus on the grade, rather

than on their education. In their study, 20 faculty from the University of West Alabama

responded to the call for participation in a pilot program where the Remote Proctor was

going to be evaluated (Bedford, Gregg, & Clinton, 2009; Bedford et al., 2011). These

instructors had their students complete their exams while being proctored remotely. Each

participating student had to install the required software and submit their picture and

fingerprint for identification purposes before they were allowed to take the exam

(Bedford et al., 2009; Bedford et al., 2011). Students were made aware that they were

being watched and that the Remote Proctor would record any suspicious behavior. The 30

students were asked to purposefully engage in suspicious behavior, and the recordings

were given to the faculty for their review. Of the students who were part of the study, 15

responded favorably to the use of the Remote Proctor, while 5 did not like it. The

remaining 10 students had no opinion (Bedford et al., 2009;

Bedford et al., 2011). Faculty also responded favorably to its use, with 14

answering yes, three saying no, and three expressing no opinion. Based on their

findings, Bedford et al. (2011) recommended that institutions implement a policy to verify

the students’ identification prior to their taking an exam and use live or remote proctors

to help curb the extent of cheating. The recommendations were made despite the

limitations pointed out by the researchers: at the time of the study, the Remote Proctor

was not available for Macintosh computers; it could not be installed on computers of

military students in Iraq and Afghanistan; nor could it accommodate some students with

special needs (Bedford et al., 2009). After the study and upon implementation of the

Remote Proctor at the small southern regional universities, there were reports of 600 calls

for IT assistance and students expressing privacy concerns (Bedford et al., 2009).

Tutors and biometrics. Students who work with tutors, or have a relationship

with teaching assistants, also build connections that deter cheating, claimed Baron and

Crooks (2005). They have to answer to these individuals who closely monitor their

progress. Any suspicious deviation from the norm might raise red flags, and the

possibility of that happening may be enough to keep students on an honest path.

Baron and Crooks (2005) argued that the use of biometrics is the best method to

prevent cheating. The student’s handwriting can be sampled, and their voice and

fingerprints can also be used as forms of identification. One example of a biometric

program is Securexam Remote Proctor, which in addition to scanning fingerprints also

provides a full camera view of the students while they are taking their exam (Parry,

2009). Some researchers (Baron & Crooks, 2005; Bedford et al., 2011; Parry, 2009)

argued that the U.S. federal government’s regulation of online students’ identity verification

(Higher Education Opportunity Act, 2008) is something that would be best handled with

the use of biometrics. However, Baron and Crooks mentioned that biometric verification

is not only costly, but it also raises the issue of privacy, as it is not devoid of security

issues and does not guarantee that students’ records will be kept confidential. In a pilot

study, 20 faculty used the Software Secure Remote Proctor, biometric software that

verifies an individual’s identity, with their college students to determine its effectiveness

(Bedford et al., 2011). Students were encouraged to engage in activities which are usually

forbidden during testing, such as using books and talking. All these activities were

captured by the Remote Proctor and were reported by the monitoring company. Students

were less likely to deny their guilt because their actions were recorded. As a result, the

Remote Proctor was deemed to be a highly effective monitoring system, which helps

increase student integrity (Bedford et al., 2011).



Chapter Summary

Although concerns about dishonesty in online courses continue, most research has

not provided scientific evidence that academic cheating warrants special focus on the

online environment. Assessments of overall cheating by Baron and Crooks (2005),

Grijalva et al. (2010), Hollinger and Lanza-Kaduce (2006), Shaw (2004), Spaulding

(2009), and Watson and Sottile (2010) have indicated that cheating is more common in face-

to-face courses. Faculty have several available measures they can implement in their

courses to prevent it from happening in the first place. Gallant and Drinan’s (2008)

theory pointed to the importance of implementation of institutionalized policies on

dishonesty, which must be carried out by faculty and administrators, while Bruner (1960)

focused on engaging students in thought-provoking materials and lessons to stimulate

their honest participation. Kohlberg (1981a) and Kohn (1999), on the other hand, argued that

placing more importance on the intrinsic motivation of learning rather than credentialing

would make students less likely to cheat. Understanding the motivations for cheating may

offer insights into combative measures (Brent & Atkisson, 2011). A variety of techniques

were reviewed, such as proctoring examinations (Baron & Crooks, 2005; Harkins &

Kubik, 2010; Prince et al., 2009), in-class writing assignments (Baron & Crooks, 2005),

and honor codes (Patnaude, 2008). Researchers also suggested the use of security or

biometric systems (Bedford et al., 2011; Jung & Yeom, 2009; Parry, 2009). This study

explored the current state of instructor and administrative awareness and involvement in

ways to prevent cheating.

Research Questions

The research questions for this study are:



1. To what degree do instructional college faculty perceive dishonesty as a

problem in their online classes?

2. How do online faculty judge the seriousness of online cheating and how well

do they think their college deals with it?

3. What strategies are used by college instructors to safeguard online course

integrity?

4. To what extent do instructional college faculty follow the institution’s code of

conduct in response to academic dishonesty?

5. What types of support do instructional college faculty desire to help lower

online cheating?

6. To what degree do instructional college faculty perceive the acceptance of the

use of institutional measures to prevent online cheating?



Chapter 3: Methodology

The problem addressed by this study was the lack of documentation about the

phenomenon of cheating in online instructional environments in terms of the extent,

causes, effects, procedural preparedness, and future planning.

Participants

The target population for this study was all instructors who teach fully online

courses at the researcher’s community college site, as well as online instructors from two

other community colleges in Florida. According to Creswell (2005), the target population

should consist of individuals with a common characteristic that the researcher can

identify. The common characteristic among the selected participants is that they all teach

fully online courses. Since approximately 289 instructors at the researcher’s institution

teach about 570 fully online courses, all instructors were invited to participate in the

study (E. Muirhead, Executive Assistant of Distance Learning, personal communication,

September 30, 2013). Among 120 institutions nationwide, the Aspen Institute ranked this

institution in the top 10% of community colleges. It is the largest institution

of higher education in its county, and its top four areas of study for 2010–2011 were

business administration, liberal arts, criminal justice, and nursing. The college offers

Bachelor and Associate degrees in addition to certificates and applied technology

diplomas. With a student population of 67,258 in the 2010–2011 academic year, the

college employed 1,182 adjuncts and 420 full-time instructional faculty. There are three

main campuses and six centers spread throughout the county (Broward College, n.d.-a).

The researcher also invited all online instructors from a community college in a

neighboring county to participate in the study. This institution had 48,966 students

enrolled for the 2011–2012 school year. The college offers Bachelor and Associate

degrees, as well as certificates, vocational degrees, and preparatory programs. There are

four campuses in the county and one satellite location (Palm Beach State College, n.d.).

The highest number of graduates were in the areas of nursing, paralegal, emergency

medical services, and business administration (Palm Beach State College, 2013a). In the

2011–2012 academic year, the college offered 802 online courses (Palm Beach State

College, 2013b). In the spring of 2013, the college had 159 fully online instructors

teaching 344 sections (S. Beitler, E-Learning Director, personal communication, January

29, 2013).

The third institution included in this study served over 25,000 students during the

2011–2012 school year. This college has six campuses and several centers spread

throughout the county (Santa Fe College, n.d.-a). Like the other institutions included in

this study, this college offers Associate and Bachelor degrees in disciplines such as

Health, Early Childhood, and Nursing (Santa Fe College, n.d.-a). They offer

approximately 400 online classes during the spring and fall semesters, taught by

approximately 200 online instructional faculty (L. Ciardulli, Assistant Vice President of

Academic Technologies, personal communication, July 24, 2013).

The demographic makeup of the participants spans a wide range of age, race, and

gender categories. Demographic information gathered from the participants at the time of

participation provided exact information, but specific focus was placed on the extent of

experience and gender of the instructors. The procedure followed to gather the sample for

this study was to contact the directors of the instructional technology department at the

selected institutions to either obtain a list of email addresses of all online instructors or

make arrangements to disseminate the survey (Fowler, 2009). The instructors were

contacted via email and an invitation to participate in the study was extended, as

proposed by Sue and Ritter (2007). The instructors were sent a reminder email

approximately 10 days after the initial invitation in an attempt to reach as many

participants as possible (Fowler, 2009). Creswell (2005) estimated that 350 individuals

would be a suitable sample size for a research study, thereby making the combined

populations of fully online instructors at all proposed institutions a suitable size. Sue and

Ritter (2007) posited that the number of participants likely increases if all the members of

the population are invited to participate. They suggested that the number of participants

who will respond increases when they are preliminarily contacted through various

methods, such as email, telephone, and regular mail. An agreement to participate makes

nonresponses less likely to occur. According to Fowler (2009), the importance of sample

size depends on the nature of the study. Fowler stated that while a study which has been

repeated many times may require a large sample size, studies that have been replicated

less often can be statistically sound even with a smaller sample. Fowler suggested securing a

sample that is reflective of the population by ensuring each individual has an equal

chance of being selected, using probability sampling, and designing the study such

that the sample reflects the entire population. Fowler warned that the appropriate size of

the sample should not just be based on statistical suggestions, but rather on the individual

study and its goal. He also cautioned that studies should not be approached solely based

on predicted margins of error.

The research method used for this study was mixed-methods. Participants were

asked to answer survey questions for the quantitative portion of the study. The qualitative

portion of the study involved a focus group meeting, which provided the researcher with

information that was used to validate the data gathered from the surveys. According to

Tashakkori and Teddlie (2003), Creswell (2008), and Pinto (2010), mixed-methods

research is a newer approach to research design, which enables the researcher to mix

quantitative with qualitative data collection procedures to obtain deeper understanding of

their topic. Pinto mentioned that mixed-methods offer deeper understanding of the data

that are gathered and allow for triangulation between the quantitative and qualitative

data. Triangulation is believed to improve the validity of the research. Though it does not

come without critique, Pinto (2010) believed that triangulation provides a more holistic

view than single method studies.

In the quantitative portion of this research study, the participants were asked to

answer a questionnaire of 18 multiple-choice questions, each consisting of multiple

items. This questionnaire was securely delivered online via Google Forms. Sue and Ritter

(2007) warned that some invited participants may not respond to the request to partake in a

study. There were people who wished not to be part of this

research study, and others who initially agreed to complete the survey but changed their

mind. The participants completed an online survey, which Sue and Ritter explained to be

a relatively quick and low-cost option to gather data. In an effort to increase the number

of survey responses, Fowler’s (2009) recommendations were followed. The potential

participants were contacted via email to inform them of the study and the importance of

their participation. The survey was easy to navigate and was kept short and concise.

Participants were incentivized by an opportunity to win a prize. Fowler explained that

there may be those who do not answer every question in the survey and more

importantly, there may be people who do not submit any response at all. To reduce sample bias due to nonresponse, Fowler (2009) suggested sending an advance letter informing participants of the purposes of the survey and the study. For the qualitative portion of the research, participants were invited to a focus group meeting to further discuss the survey questions.

Instrument

The instrument used for this study was a modified version of the Academic

Integrity Survey (AIS, Appendix A), developed by McCabe in 1999 (McCabe, Trevino,

& Butterfield, 1999). Revisions of the survey were made in 2003 (Eckles, 2010). Dr.

McCabe, who is currently a professor of Management and Global Business at Rutgers

University in New Jersey, was contacted via email by the researcher to request

permission to use his survey. He gave written permission to the researcher to modify and

use the instrument (D. McCabe, Creator of Academic Integrity Survey, personal

communication, June 7, 2013). The revised survey, consisting of 96 items, was modified

to fit the purpose of the study (Appendix B). According to Creswell (2005), it is

important to establish the validity and reliability of an instrument. For the study to be

considered valid, Creswell stated that the researcher should obtain useful information

from the participants, which can be used to make generalizations about the population.

Reliability, on the other hand, refers to the expectation of the instrument yielding similar

and consistent results with each use (Creswell, 2005). Boehm et al. (2009), Eckles

(2010), and Hart and Morgan (2010) all utilized the AIS, and each established reliability

and validity of the instrument prior to conducting their studies. Eckles stated that validity

of the instrument was based on the survey’s being designed by one of the leaders in the

field of academic integrity, Donald McCabe. Survey questions were answered on a 5-

point Likert scale ranging from never to very often, or responses were answered on a

checklist where specific behaviors were marked on a 5-point Likert scale which ranged

from not cheating to very serious cheating (Boehm et al., 2009; Eckles, 2010; Hart &

Morgan, 2010). The researcher’s study gathered information from all faculty who teach

online, to assess their attitudes and opinions in regard to dishonest behavior among their

students. The AIS is broken down into three main themes, namely, academic

environment, specific behaviors, and demographics (McCabe et al., 1999). The purpose of the survey was to measure the extent to which instructional faculty are aware of various methods of cheating in their classrooms and to gather information about measures already used by instructional faculty to enforce the institution's code of conduct (Eckles,

2010; McCabe et al., 1999). In his research, Eckles (2010) evaluated and reviewed the instrument for validity and reliability and found it to be solid in both areas. Eckles performed a Cronbach's alpha analysis, which yielded a coefficient of .911, indicating high internal consistency.

The purpose of the AIS was to find out the perceptions of faculty about students

who cheat, what factors contribute to cheating, the effects of honor codes used in

academia and the likelihood of that lowering the instances of cheating, and the effects of

academic integrity policies at institutions (McCabe et al., 1999). The writer employed a

modified version of the AIS, which places more emphasis on faculty’s perception

regarding students’ likelihood of cheating and measures taken by the institutions to

prevent cheating before it takes place in the context of online courses (Appendix B).

While there is no specific reason for the researcher to believe that cheating in the online environment is an alarming problem at any of the three institutions, the Southern Association of Colleges and Schools (SACS, 2010) has stated that accreditation of institutions of higher education will be determined in part by their ability to show that they have taken measures to reduce online academic cheating. The instrument

contains questions about the participant’s attitude about students who cheat. Nitko and

Brookhart (2011) explained that when attitudes are measured, one looks at

“characteristics of persons that describe their positive and negative feelings toward

particular objects, situations, institutions, persons, or ideas” (p. 433). In this case, the

instrument elicits faculty’s attitudes regarding the types of dishonest behavior their

students commonly exhibit, what measures they took after cheating was detected and

how academic policies affect cheating. Nitko and Brookhart explained that part of a

structured personality inventory known as the “self-report characteristic” (p. 434) requires respondents to examine their own feelings about something specific.

Evaluation of Technical Adequacy: Validity and Reliability

Content validity. In order to determine whether an instrument is considered

adequate for use, it is important to determine the validity of the instrument. According to

Nitko and Brookhart (2011), validity is “the soundness of your interpretations and uses of

students’ assessment results” (p. 35). Nitko and Brookhart pointed out that there are four

principles that are used to determine whether a survey is valid. There must be evidence

that the survey is appropriate, the way the instrument is used must also be appropriate,

the values implied in the results of the survey must be appropriate, and finally, the

consequences of the interpretations must be consistent with the values (Nitko &

Brookhart, 2011). Another factor to consider when determining the validity of a survey is

content validity. This measures whether the survey questions and the scores assigned to

the questions represent all of the possible questions that can be asked given the

circumstance (Creswell, 2005). In establishing content validity, Creswell stated, the reviewer of the survey must examine the way it was planned and the procedures that were followed. Eckles (2010) established content validity based on the fact that the

instrument was created by McCabe, whom he described as “a leading expert in the field

of academic integrity issues in higher education” (p. 58). The modifications made to the

AIS were merely to customize the instrument to the participating research sites.

Criterion-related validity. In addition to content validity, Eckles (2010)

established criterion-related validity. Creswell (2005) explained that “it determines

whether the scores from an instrument are a good predictor of some outcome (or

criterion) they are expected to predict” (p. 165). Eckles’ findings were based on his

research which revealed that the survey was examined by experts in the field.

Internal and external validity. External validity was established when Eckles

(2010) carefully identified and selected his population from which he ultimately drew his

participants. The population consisted of faculty and administrators employed at a

western U.S. public institution of higher education. Additionally, he did not generalize

his results to groups outside of his population, as that would have created a threat to

external validity.

Validity analysis and validity coefficients. It is important to note that no validity data were actually provided in any of the aforementioned categories. When assessments are administered to participants, the scoring of those assessments determines whether validity can be analyzed. Eckles (2010) made an

inference about the validity of the instrument based on the designer’s credibility in the

field.

Reliability. Another evaluation that determines adequacy is reliability. Creswell

(2005) claimed that it should be the goal of good research to have reliable measures or

observations. According to Nitko and Brookhart (2011), reliability is the degree to which

students’ results remain consistent over replications of an assessment procedure. To

assess the instrument for reliability, Eckles used a Cronbach's alpha analysis. The score

was .911, which is “of a high internal consistency reliability rating” (Eckles, 2010, p. 58).

Boehm et al. (2009) conducted a pilot study as part of their research, in an effort to

reestablish reliability and validity. The researchers asked experts to rate the survey

questions on how clear and consistent they were. The required score of 3.0 was exceeded

for clarity (3.6) and consistency (3.3). Additionally, a split-half reliability coefficient of .768, computed with the Spearman-Brown formula, added to the conclusion that the instrument was reliable.
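For context (Boehm et al. reported only the coefficient, not the formula), the Spearman-Brown adjustment estimates full-test reliability from the correlation r between the two halves of an instrument:

\[ \rho_{xx'} = \frac{2r}{1 + r} \]

Read this way, the reported coefficient of .768 would correspond to a half-test correlation of roughly r = .768 / (2 - .768) ≈ .62.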

To measure internal consistency, a Cronbach's alpha analysis was performed on the modified survey for this study. Multon and Coleman (2010) explained that Cronbach's alpha is appropriate for scale items that correlate highly with one another. The only question with such a correlation was Question 1, about the academic environment. The 5-item scale yielded a value of α = .87, indicating high

reliability. Scale means were 3.39 for severity of penalties for cheating (SD = 1.14), 2.78

for average student’s understanding of the college’s policies concerning cheating (SD =

1.01), 2.68 for student support of the policies (SD = 0.96), 3.80 for faculty support of the

policies (SD = 1.04), and 3.09 for effectiveness of the policies (SD = 1.02).

Recommendations for Future Use of the Instrument

Eckles (2010) made several recommendations in regard to future research and the utilization of the Academic Integrity Survey. He suggested that the survey should be

adapted to include a “not applicable” option for some of the questions, as respondents did

not all have experience in, or exposure to, the questions related to policies at the

institution. The survey contained only a quantitative approach, and Eckles suggested that qualitative follow-up questions, asked upon receipt of the quantitative portion, would expand the study further and make it a mixed-methods approach.

Measured Domains

For his research, Eckles (2010) measured a variety of domains: the academic

environment; faculty responsibility in conveying institutional policies to their students;

primary sources of policies regarding academic integrity; perception of the frequency of

cheating; faculty awareness and responsiveness to cheating; and safeguards implemented

to reduce or prevent cheating. Each of the aforementioned categories contained a set of

questions that needed to be answered by the respondents.

Item Selection

To determine how items were selected for the test, the writer evaluated the

original writings by McCabe (McCabe et al., 1999). McCabe explained which factors

were going to drive the research. He listed honor codes (institutional factors) and moral

norms (personal factors). There was a comparison between schools that had honor codes

and schools that did not. The idea behind that was to find out if having an honor code

deters students from being dishonest in the first place (McCabe et al., 1999).

Procedures

The instrument used for this mixed-methods study was a modified version of the

AIS (Appendix B). Creswell (2005) stated that surveys can yield useful information, which in turn aids in the evaluation of a program. In order to gather data, the researcher

employed the modified version of the AIS (DuPree & Sattler, 2010) and made it available

online through utilization of an electronic questionnaire. At the start of the study, the

researcher submitted required paperwork to the Institutional Review Board (IRB) at the

institution where she is a student, as well as the three institutions that agreed to

participate in the research. The directors of the respective distance education offices were

contacted and each explained that their procedure would be to disseminate the survey

once IRB approval was obtained. The directors all agreed to be the liaisons who would

distribute the survey via email, as it was against the policy of the institutions to provide

the researcher with a list of their online faculty. Upon receipt of the IRB approvals, an email was sent to the director of distance education at each institution to request that all online faculty be contacted. The IRB approval from the respective institution was attached to the email,

along with an invitation letter from the researcher, which explained the purpose of the

study and requested participation of the recipient. The modified AIS was sent to all

online instructional faculty. An informational letter of protocol included basic

information about the survey, as well as a request for the participants to indicate their

interest in participating in a focus group by responding to the email (Sue & Ritter, 2007).

Signed consent was not required for the online survey because the surveys were anonymous and were considered nonintrusive. Prospective participants were made aware that the

survey would take 15 to 20 minutes to complete, and the letter provided background

information of the researcher, the purpose of the study, as well as the risks and benefits of

participating in the study. The invitation contained a URL, which took the participant to

the 18-question online survey, created in Google forms.

Addressing nonresponse and bias. There are different reasons why prospective

respondents decide not to participate in a study or fail to answer all survey questions. Participants may refuse to respond because they have no interest in participating (Merkle, 2013). The request for participation may not have reached the prospective participant, wrote Merkle, or the person may not have understood the nature of the survey because of language barriers or physical or mental disabilities. Sue and Ritter (2007) further explained that fear of a lack of anonymity may affect participation. Even when participants are promised anonymity, Sue and Ritter argued, some fear that their responses might be traced back to them, making them skeptical about participating or answering certain questions.

The problem of nonresponse has been addressed by researchers who have also

offered recommendations on how to reduce it (Merkle, 2013). Merkle pointed out that

nonresponse does not necessarily indicate that there is bias. As Groves et al. (2004)

stated, it almost never happens that all participants who are invited actually participate in

the study. Nonresponse is not automatically an issue when respondents fail to participate

as “response rates alone are not quality indicators” (Groves et al., 2004, p. 59). Instead,

Groves et al. explained that nonresponse bias may be reduced when the response rate is

high, but that there are ways to help reduce the bias and increase the response rate.

Merkle (2013) argued that reducing the correlation between the likelihood of response and the variables measured by the survey itself would help to reduce bias. According to Groves et al.,

the quality of the survey statistics may be harmed by nonresponse, but the researcher

would have no way of knowing ahead of time whether nonresponse would have a negative effect on the study. Nonresponse bias, stated Groves et al., arises “when the causes of

the nonresponse are linked to the survey statistics measured” (2004, p. 178). Based on

writings by Groves et al., nonresponse is to be expected, and key survey statistics ought

to be carefully looked at to ensure that nonresponse was not a result of these key

statistics.

Because the survey for this study pertained to online education, one way of reducing bias was to deliver the survey online, a medium in which faculty could be assumed to be comfortable because they already teach online. Prospective participants were asked to complete the survey within 14 days of receipt of the email.

Fowler (2009) and Merkle (2013) suggested that the response rate for a survey likely increases if participants are made aware of the importance of the study. Following Fowler and Merkle's advice, 10 days after the initial email was sent, participants were sent a reminder email, which stressed the importance of the survey to the college and noted that the results could suggest ways to improve the work of all online instructional faculty. The second reminder included an appeal to instructional

faculty who had already completed the survey to encourage their colleagues to do the

same. Fowler (2009) mentioned that increasing the amount of contact increases the likelihood that participants will respond. Based on Fowler's advice, an email was sent out

to the prospective participants one final time after an additional 10 days.

The use of incentives has been suggested (Fowler, 2009; Sue & Ritter, 2007) as a way to motivate participants to complete the survey. Accordingly, the researcher of this study offered participants a chance to enter a sweepstakes in which four people could win a $25 gift card from Amazon.com. The winners received their prizes after the survey closed and the random drawing was held. Participants had an opportunity to complete an online form in Google Docs with their name and email

address through which they were notified. Participants’ names were in no way linked to

their survey answers, as they submitted that information through a different program.

After the period to submit the survey had expired, all the names of the sweepstakes

participants were entered in www.randompicker.com and four winners were selected.
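Such a drawing is straightforward to reproduce in code (the entrant list below is hypothetical; the actual drawing was run on www.randompicker.com):

```python
import random

# Hypothetical entrant emails collected through the separate Google form.
entrants = [f"faculty{i:03d}@example.edu" for i in range(1, 90)]

winners = random.sample(entrants, k=4)  # four distinct winners, no repeats
print(winners)
```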

Fourteen days after the initial invitation was sent to the directors, the first

reminder letter was sent via email. The directors were asked to craft their own reminder

letter, or to use the reminder letter that was written by the researcher. Each director

elected to personalize the reminder letter that was provided by the researcher. They sent it

along with the required IRB forms. The final request to send a reminder was sent to the

directors after 10 more days. They each customized the letter that was provided by the

researcher and emailed it to the prospective participants. The respondents completed the survey voluntarily and were provided full disclosure of potential risks prior to entering the survey.

Focus group to provide triangulation. In addition to the use of a survey, an

eight-member focus group consisting of instructional faculty met to discuss the most

effective measures to prevent cheating, and perceptions and motivation of cheating at the

institutions. The participants of this focus group were given brief information regarding

the nature of the study, as suggested by Sue and Ritter (2007). Focus group participants

were made aware of the importance of their participation in the study and the potentially

negative effect nonresponse may have (Fowler, 2009). Additionally, they were assured

that their participation was anonymous, that transcripts of their words would be coded or password-protected, and that the recordings would be kept in a secure place (Sue & Ritter, 2007). As Fowler (2009) noted, respondents need to feel comfortable with their participation in a study, so ensuring their confidentiality is critical.

In the initial information letter sent to all online teaching faculty, they were asked

to send an email to the researcher if they wished to participate in the focus group. An

electronic record of the email responses was kept of those instructors who indicated their

interest in participating in a focus group, which provided the qualitative portion of

information that was collected. A letter was sent via email to those who indicated their

interest in partaking in the focus group. Morgan (2008) stipulated that the size of the focus group is to be determined by the researcher, based on the needs of the study. Morgan (2006) defined a focus group as six to eight members, selected from the population, who are interviewed by a moderator. In accordance with this

recommendation, the eight-participant focus group for this study consisted of

instructional faculty. Eight of those who indicated their interest in the focus group were

selected at random. Three extra names were drawn as alternate participants. An email

was sent to the eight participants to invite them to a face-to-face meeting scheduled for

one month after the initial mail date of the survey. Because some of the eight participants

declined the invitation, instructors from the alternate group were solicited to fill their spots. After the selection, the members were apprised of the contents of the letter of

permission they were asked to sign. A copy of the signed consent form was given to the

participants and the original signed consent forms were placed in a locked cabinet. These

consent forms included information on how their comments/responses in the focus group

would be recorded. The focus group was facilitated by the researcher. The results of the

open-ended questions from the focus group and the responses from the modified AIS

were triangulated. Creswell (2005) mentioned that the process of triangulation can be

used to examine the accuracy and credibility of the responses. Tashakkori and Teddlie

(2003) concurred with Creswell’s explanation regarding triangulation and added that the

qualitative and quantitative information that is gathered complement one another as they

each reflect their own perspective. The interaction of the focus group provided additional insights into the phenomenon of online cheating that might not have been revealed by the survey alone. Short (2006) acknowledged the controversy regarding the advantages of focus groups but illustrated, with an example of an eight-member group, how such a small group can address issues that a survey does not delve into.

Data Collection and Analysis

Research Question 1. To what degree do instructional college faculty perceive

cheating as a problem in their online classes?

Instructional faculty were asked questions on the modified AIS related to their

perception of dishonest behavior in their classrooms. The questions relied on self-

reporting to obtain an indication of whether and to what degree the faculty were aware

that students cheat in their classes. Results indicated whether demographic information

could have influenced the answers (Appendix B, Questions 4, 5, 9, 10, 12, 13).

Research Question 2. How do online faculty judge the seriousness of online

cheating and how well do they think their college deals with it?

There were questions on the modified AIS about the seriousness of cheating,

faculty's perception of different types of academic dishonesty, and the existence of institutional integrity policies (Appendix B, Questions 9 and 13).

Research Question 3. What strategies are used by college instructors to

safeguard online course integrity?

To find out which strategies instructors use to minimize the instances of cheating

in their online courses, they were asked two questions (Appendix B, Questions 6, 14) on

the modified AIS which determined whether any measures were taken at all. If measures

were in place, the results of the surveys provided an indication of what was put in place.

Faculty were asked to indicate on the survey whether assessments in their courses are

taken in a proctored environment, whether online resources, such as Turnitin.com, are

used to detect plagiarism for written assignments, or if no action is taken to ensure course

integrity.

Research Question 4. To what extent do instructional college faculty follow the

institution’s code of conduct in response to academic cheating?

Faculty were asked to answer a series of questions (Appendix B, Questions 3, 6,

7, 8) on the modified AIS related to the institution’s code of conduct. They were also

asked what steps are taken when there is a breach of the code of conduct. Faculty

responses were analyzed to determine the extent to which instructional faculty enforce

the institution’s policies.

Research Question 5. What types of support do instructional faculty desire to

help lower online cheating?

Faculty had an opportunity to answer a question (Appendix B, Question 15) on

the modified AIS to indicate what they need in order to increase their awareness about

online cheating. Additionally, they were able to express what support the institution can

provide to help them be successful in their efforts to reduce or prevent cheating. The

qualitative responses were coded into groups to determine the distribution of scores.

Research Question 6. To what degree do instructional faculty perceive the

acceptance of the use of institutional measures to prevent online cheating?

Questions (Appendix B, Questions 1, 2, 7, 8, 11, 13) related to this research

question gave faculty an opportunity to express whether they feel that institutional

measures to prevent cheating are successful.

Upon receipt of the completed surveys, the results were entered into PASW Statistics 18 (formerly known as SPSS), a statistical program that was used to generate descriptive statistics for analyzing the results (Boehm et al., 2009; Creswell, 2005; Eckles, 2010; Hart & Morgan, 2010). Creswell (2005) explained that a grouped frequency distribution helps summarize the data more easily. To explain the results, data collected about knowledge of the institution's code of conduct were converted into percentages and summarized with descriptive statistics, namely the median and mode. According to Creswell (2005), descriptive statistics are helpful in summarizing the trends and tendencies of the data that are gathered. The data analysis also provided the variance for each set of values, which was relevant for making sense of the data. Creswell (2005) confirmed that the SPSS program provides a good basis for scoring data collected by the researcher. The information obtained was reported in written form and in tables.
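A minimal sketch of such a descriptive summary (hypothetical responses and column name; the study itself ran these statistics in PASW), showing percentages, median, mode, and variance for one Likert item:

```python
import pandas as pd

# Hypothetical 5-point Likert responses to a code-of-conduct question.
df = pd.DataFrame({"q3_code_of_conduct": [3, 4, 4, 2, 5, 3, 4, 3, 3, 5]})

col = df["q3_code_of_conduct"]
summary = {
    "percentages": (col.value_counts(normalize=True) * 100).round(1).to_dict(),
    "median": col.median(),
    "mode": col.mode().tolist(),  # mode() returns all tied values
    "variance": col.var(),        # sample variance (ddof=1 by default)
}
print(summary)
```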

A Pearson correlation was performed to determine possible patterns between

variables (Creswell, 2005). An analysis helped determine whether there is a correlation

between “The average student’s understanding of the college’s policies concerning



cheating” and “Student support of these policies”; “Student support of these policies” and “Faculty support of these policies”; and “Faculty support of these policies” and “The effectiveness of these policies.” A Pearson correlation was also performed on the number

of times a student was caught cheating (Question 5) and the steps taken as a result

(Question 6) and to help determine whether frequency of cheating (Question 5) is

correlated with the severity of punishment (Question 6). In Question 13, “Cheating is a

serious problem at this institution” was tested for correlation with “Faculty members are

vigilant in discovering and reporting suspected cases of academic dishonesty.”

Faculty demographics including gender, years of experience, and teaching

discipline (Questions 16, 17, and 18) were tested for correlation with the instructors’

reaction (Question 6). More specifically, Question 16, “How many years have you been

teaching at the college level?” was tested for correlation with the faculty’s reaction to

evidence of cheating (Question 6). The researcher tested whether a correlation exists

between the faculty’s gender (Question 17) and the type of reaction to evidence of

cheating (Question 6). The faculty’s teaching discipline (Question 18) was tested against

their reaction to evidence of cheating (Question 6) to see if a statistically significant

correlation exists.
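A minimal sketch of one such correlation test, using hypothetical paired responses for Questions 5 and 6 (the study's analyses were run in PASW):

```python
from scipy.stats import pearsonr

# Hypothetical paired responses from the same respondents:
# Question 5 = how often a student was caught cheating,
# Question 6 = severity of the instructor's reaction.
q5_frequency = [1, 3, 2, 4, 2, 5, 3, 1, 4, 2]
q6_severity = [2, 3, 2, 5, 1, 4, 3, 2, 4, 3]

r, p = pearsonr(q5_frequency, q6_severity)
print(f"r = {r:.3f}, p = {p:.3f}")  # a pattern is suggested when p < .05
```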

The focus group answered the same questions on the modified AIS, except that the questions were open-ended rather than closed. The open-ended questions provided the researcher with qualitative responses, which were audio-recorded. Subsequently, the responses were transcribed, organized into common themes, coded, and

analyzed. According to Creswell (2005), the use of a focus group can result in the

gathering of extensive data. Members of the focus group for this study had an opportunity

to go into more depth about the extent of cheating by students and ways to prevent it. The

purpose of the focus group was to allow the group members an opportunity to engage in a

conversation regarding academic cheating in the online environment. Their perspectives

were particularly useful as they provided deeper insights into the research questions,

along with the possibility of elucidating any hidden variables (Davern, 2008). The group members all had experience with the online platform, and their efforts in increasing student success while maintaining the credibility of the institutions added to the value of the group. The group's homogeneity encouraged members to share similar and differing experiences and served to further support the quantitative portion of the study (Davern, 2008).

Summary of Procedural Steps

Survey group steps:

1. The Director of Instructional Technology of each participating institution was contacted. Since a list of online faculty could not be obtained due to institutional policy, each director acted as a liaison and sent correspondence to all online faculty, which contained an informational form for participation in the research study.

2. A Google forms URL for survey access was included in the informational

letter. Participants needed to click on the URL for secure access.

3. The participants spent 15–20 minutes to complete the survey.

4. Fourteen days after initial contact, a reminder was sent to the population to

complete the survey.

5. Ten days after that, a final reminder email was sent.

Focus group steps:



1. The informational form for participation instructed participants to contact the

researcher via email to express their interest. Interested participants did not have to

complete the survey to be part of the focus group.

2. An electronic list was kept of participants who expressed their interest.

3. Since more than eight participants expressed interest, eight were randomly

selected to become focus group members 5 days after the survey portion of the study

closed. After 14 days, focus group members were invited to a meeting to discuss the

Modified AIS questions. The group met in a conference room at the researcher’s

worksite, where group members who were unable to meet in person had an opportunity to

be present via conference call. To ensure the privacy of the participants, the meeting was held in a closed room, which prevented their voices from being heard by others who may have been in the building.

4. The focus group members were advised of the general purpose of the group:

to have a discussion about the Modified AIS questions in an effort to triangulate their

responses with the ones obtained through the survey. The group members were asked not

to discuss the focus group conversation outside. Additionally, they were asked not to

identify students, but to speak in generalities.

5. The one-hour meeting was recorded on a portable audio recorder for further

analysis. No names of participants were recorded. The participants were coded as P1

through P8 and their answers were coded as follows: Academic Environment questions

were coded AE1a, AE1b, AE1c, etc. Demographics questions were coded: D16, D17,

D18.

6. After the meeting, the researcher listened to the recordings through headphones, sorted and recorded the data electronically, and analyzed the results by comparing the answers to the electronically submitted surveys for purposes of triangulation. The researcher listened to and transcribed the audio recordings in her private home office. The

recordings and transcripts were secured in a locked cabinet at the researcher’s home

office.

7. All information collected for the focus group portion of the study will be destroyed 3 years after the completion of the study by deleting the electronic files and the audio recordings and shredding any hard copies that exist.



Chapter 4: Results

Purpose of the Study

The purpose of this mixed-methods study was to provide an inquiry into the

phenomenon of cheating in online courses. The previous chapter provided details about

the steps taken to implement the study. This chapter will discuss the results of the data

analysis.

Correcting for a Technical Problem

Days after the invitations were sent to the participants, the researcher received a

few emails which stated that there was a technical glitch with one of the questions

(Question 9). The question instructed participants to select one answer from the left

column (Part I) and another answer from the right column (Part II). The participants were

only able to select one answer from either column, resulting in 42 answer submissions for

Part I and zero submissions for Part II. As a result, the researcher had to split the question into two separately answered parts: in Part I, the participants selected one answer, and in Part II, they selected the other. Because 42 submissions had already been received by the time the correction was made, the researcher had to evaluate the likely effect of those submissions, in which the respondents were limited to selecting from either the left column or the right but not both. Chi-square tests (for Part I)

and correlations (for Part II) were completed to determine whether Question 9 responses

differed between the first 42 participants and the rest (see Appendix C). No significant

differences were found (χ² ranged from .742 to 5.622, p ranged from .132 to .863, df =

3). These results suggest that modifying the survey did not affect the way participants

responded to Question 9, Part I (see Appendix C).
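A minimal sketch of the comparison for Part I (the counts below are hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 4 contingency table: rows split respondents into the
# first 42 submissions (glitched form) vs. later ones (corrected form);
# columns are the four Question 9, Part I answer categories.
observed = np.array([
    [14, 11, 10, 7],
    [30, 24, 20, 15],
])

chi2, p, df, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, df = {df}")  # df = (2-1)*(4-1) = 3
```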

The results of the survey and the focus group meetings are included in the

remainder of the chapter.

Demographics

A total of 588 online faculty from the three research institutions were invited to complete the online survey. Of those invited, 22% completed the survey (N = 131): 51 (39.2%) males and 79 (60.8%) females indicated their gender, and one participant did not complete the gender question. Table 1 shows the breakdown by academic

discipline.

Table 1

Area of Primary Teaching Responsibility

                                     Data for this study    Institutional data, winter 2014
Area                                 Frequency      %        Frequency      %

Arts                                      1        .8            14        3.3
Business                                 17      13.2            74       17.2
Communication/journalism                  9       7.0            41        9.5
Engineering                               2       1.6             0        0.0
Humanities                               22      17.1            45       10.5
Math or Science                          31      24.0           101       23.5
Nursing/health professions               23      17.8            69       16.0
Social/behavioral sciences               24      18.3            86       20.0
Missing*                                  2
Total                                   129                     430

* Missing indicates how many participants did not submit a response.

Two faculty did not respond to the question, perhaps because their discipline was

not listed or they chose not to answer for other reasons. It is worth noting that the same

participants failed to complete any of the demographic questions.



Table 2 displays the number of years participants have taught at the college level.

The majority of participants taught at the college level at least 8 years.

Table 2

Number of Years Teaching at the College Level

Years Frequency %

0–2 4 3.1
3–7 39 30.2
8–12 35 27.1
13 or more 51 39.5
Missing* 2

Total 129

* Missing indicates how many participants did not submit a response.

The focus group consisted of six males and two females (n = 8). The members

came from different academic disciplines, namely communication (n = 1); math or

science (n = 5); business (n = 1); social and behavioral science (n = 1). All of the focus

group members had more than 13 years of college level teaching experience.

Perception of Dishonesty as a Problem in Online Classes

Research Question 1. The first research question was: To what degree do

instructional college faculty perceive dishonesty as a problem in their online classes?

Combined results indicated that the majority of instructors (57.3%) thought that

plagiarism at their institution occurs often or very often (Table 3). When faculty were

asked how frequently they thought students inappropriately shared work in group

assignments, the majority (51.9% combined) indicated that it occurred often to very

often. The frequency of cheating, based on the total of those who responded, is presented

in Table 3. Means and standard deviations for Question 4a–4c are presented in Table 4.

Table 3

Aggregated Survey Responses: Frequency of Cheating, Questions 4a–4c

                         4a*             4b*             4c*
Response              n      %        n      %        n      %

Never                 1     .8        2    1.6        5    3.9
Very seldom           4    3.1        6    4.7       19   14.7
Seldom/sometimes     41   31.8       40   31.0       53   41.1
Often                51   39.5       40   31.0       27   20.9
Very often           23   17.8       27   20.9       15   11.6
No opinion            9    7.0       14   10.9       10    7.8
Missing**             2               2               2
Total               129             129             129

* 4a—How frequently do you think plagiarism on writing assignments occurs in the online courses at your institution?; 4b—How frequently do you think inappropriate sharing of work on group assignments occurs in the online courses at your institution?; 4c—How frequently do you think cheating during tests or examinations occurs in the online courses at your institution?
** Missing indicates how many participants did not respond.

Table 4

Means and Standard Deviations, Questions 4a–4c

Question Means Standard deviation

4a 3.76 .830
4b 3.73 .940
4c 3.24 1.006

Faculty were asked which dishonest behaviors they witnessed their students

engaging in during the past 3 years. When asked how often, if ever, they had observed specific dishonest behaviors, the behavior selected most often (68.1% in the combined Once and More than once categories) was paraphrasing or copying a few sentences from a book, magazine, or journal (not electronic or Web-based) without footnoting them in a paper the student submitted (see Tables 5 and 7). Behaviors that were never observed by the majority of respondents were using

digital technology (such as text messaging) to get unpermitted help from someone during

an online test or assignment (65.8%), helping someone else cheat on an online test

(65.2%), copying from another student during an online test with his or her knowledge

(61.4%) and getting questions or answers on an online test from someone who has

already taken a test (58.5%). More than 25% of participants teach in math, science, and engineering, areas that generally do not require research papers; therefore, several selected the “Not Relevant” option. Over 41% of participants indicated that, once or more than once, they caught students turning in a paper from a “paper mill” (a paper written and previously submitted by another student) and claiming it as their own work. The results are in Tables 5, 7, and 9. The mean values indicate that the respondents deemed every behavior to be between moderate and serious cheating (see Tables 6, 8, and 10).

A combined majority of faculty (89.7%) indicated that their students used the Internet or other electronic means only (57.0%) or primarily (32.7%) to access paraphrased or copied material from a written electronic source (see Table 11).

Respondents were asked if they ever offered an online test or exam at their

institution and 83.7% (n = 108) answered affirmatively. Those who answered yes were

then asked if they ever observed collaboration, use of books on a closed book exam,

students receiving unauthorized help or looking up information on the Internet when not

permitted. For this question, respondents had to check all that applied. The type of

cheating most frequently observed by faculty was students' looking up information on the Internet when not permitted (30.5%). The types of cheating observed are shown in Table 12.

Table 5

Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions 9a1–9d1

                        9a1*            9b1*            9c1*            9d1*
Response              n      %        n      %        n      %        n      %

Never                48   39.7       54   44.6       69   58.5       75   65.2
Once                  9    7.4       11    9.1        2    1.7        2    1.7
More than once       28   23.1       34   28.1       31   26.3       25   21.7
Not relevant         36   29.8       22   18.2       16   13.6       13   11.3
Missing**            10              10              13              16
Total               121             121             118             115

* 9a1—Fabricating or falsifying a bibliography in an online assignment; 9b1—Working on an online assignment with others when the instructor asked for individual work; 9c1—Getting questions or answers on an online test from someone who has already taken a test; 9d1—Helping someone else cheat on an online test.
** Missing indicates how many participants did not respond.

Table 6

Means and Standard Deviations, Questions 9a1–9d1

Question     Mean     Standard deviation

9a1          3.42           0.67
9b1          3.32           0.69
9c1          3.80           0.53
9d1          3.77           0.58

While some participants (49.7%) agreed or strongly agreed that cheating is a serious problem at their institution, more than half (50.5%) strongly disagreed, disagreed, or were unsure. The mean score of 3.54 reflects this division (see Tables 13 and 15).

Table 7

Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions 9e1–9h1

                        9e1*            9f1*            9g1*            9h1*
Response              n      %        n      %        n      %        n      %

Never                70   61.4       75   65.8       17   15.0       48   41.4
Once                  8    7.0        9    7.9       12   10.6       19   16.4
More than once       18   15.8       16   14.0       65   57.5       29   25.0
Not relevant         18   15.8       14   12.3       19   16.8       20   17.2
Missing**            17              17              18              15
Total               114             114             113             116

* 9e1—Copying from another student during an online test with his or her knowledge; 9f1—Using digital technology (such as text messaging) to get unpermitted help from someone during an online test or assignment; 9g1—Paraphrasing or copying a few sentences from a book, magazine or journal (not electronic or Web-based) without footnoting them in a paper s/he submitted in an online class; 9h1—Turning in a paper in an online class from a “paper mill” (a paper written and previously submitted by another student) and claiming it as his/her own work.
** Missing indicates how many participants did not respond.

Table 8

Means and Standard Deviations, Questions 9e1–9h1

Question     Mean     Standard deviation

9e1          3.83           0.53
9f1          3.79           0.59
9g1          3.17           0.77
9h1          3.83           0.55

Table 9

Aggregated Survey Responses: Frequency of Specific Cheating Behaviors, Questions 9i1–9m1

                       9i1*           9j1*           9k1*           9l1*           9m1*
Response             n      %       n      %       n      %       n      %       n      %

Never               63   53.8      49   42.6      45   40.9      53   47.7      60   53.1
Once                17   14.5      20   17.4      17   15.5      16   14.4      10    8.8
More than once      10    8.5      28   24.3      38   34.5      31   27.9      25   22.1
Not relevant        27   23.1      18   15.6      10    9.1      11    9.9      18   15.9
Missing**           14   12.0      16   13.9      21   19.1      20   18.0      18   15.9
Total              117            115            110            111            113

* 9i1—Using an electronic/digital device as an unauthorized aid during an exam; 9j1—Turning in a paper copied, at least in part, from another student's paper, whether or not the student is currently taking the same online course; 9k1—Using a false or forged excuse to obtain an extension on a due date or delay taking an online exam; 9l1—Turning in work done by someone else in an online class; 9m1—Cheating on a test in an online class in any other way.
** Missing indicates how many participants did not respond.

Table 10

Means and Standard Deviations, Questions 9i1–9m1

Question     Mean     Standard deviation

9i1          3.71           0.62
9j1          3.81           0.51
9k1          3.34           0.82
9l1          3.84           0.52
9m1          3.77           0.58

Table 11

Source of Material Used by Student to Paraphrase or Copy Material

Method                                          Frequency       %

Internet or other electronic means only             61         57.0
Primarily Internet or other electronic means        35         32.7
Primarily hard (paper) copies of sources             1           .9
Have observed/suspected both methods equally        10          9.3
Missing*                                            24
Total                                              107

* Missing indicates how many participants did not respond.

Table 12

Aggregated Survey Responses: Types of Cheating Observed, Questions 12a–12d

               12a*            12b*            12c*            12d*
Response     n      %        n      %        n      %        n      %

Yes         25   23.1       23   21.2       13   12.0       33   30.5
Total      108             108             108             108

* 12a—Collaborated with others during an online test or exam when not permitted?; 12b—Used notes or books on a closed book online test or exam?; 12c—Received unauthorized help from someone on an online test or exam?; 12d—Looked up information on the Internet when not permitted?

Table 13

Cheating is a Serious Problem at Your Institution

Response Frequency %
Strongly disagree 1 .8

Disagree 10 7.8

Not sure 54 41.9

Agree 46 35.7

Strongly agree 18 14.0

Missing* 2

Total 129

*Missing indicates how many participants did not respond.

Focus group members unanimously agreed that plagiarism on writing assignments

happens often at their institution.

Participant 6 stated:

A lot more plagiarism in discussion postings because Turnitin does not work with
the discussion feature. Cheating for proctored–never. Nonproctored I think it
happens, but there is no way you can prove it.
Participant 2 stated:

I have had students hack each other’s accounts with the tests. And it’s quite easy.
Here at XX college, you know everyone’s user name from the mail system and
the default password is your birthday and everyone has their birthday on
Facebook. I always tell my students change your password and they don’t. Once
you’re in the test, it does not take much time–it’s very, very quick.

Participant 4 stated:

Cheating can also be something like looking into Google and translating the
answer to another language and translating it back.
Focus group members were asked how often, if ever, they have seen a student

cheat during an online test or examination at their institution. Three members indicated

that they have seen cheating once to a few times. Some of their comments were as

follows:

Participant 2 stated:

A few times. On more than one occasion I have had students hack each other’s
account. Another circumstance when the students took the test simultaneously.

Participant 3 stated:

Great many times, as I work in learning resources. The problem I’ve had with
mathematics is that students would write down the problem and come to us for
help on solving the problem for them and then they go in and put in the answers.

Participant 7 stated:
Many times. It’s obvious when you’ve been doing it for 16 some-odd years.

Next, focus group members were asked how often, if ever, they have observed or

become aware of a student in their class engaging in different cheating behaviors during

the last 3 years. Two indicated that they observed fabricating or falsifying a bibliography

in an online assignment more than once, one indicated that he witnessed students working

on an online assignment with others although the instructor had asked for individual

work.

Participant 3 noted regarding students collaborating with others during an online

test or exam when not permitted:

Witnessed it not in my own course, but other staff. I did not do anything when I
witnessed it, because I think it should be up to the faculty to design the course so
this does not happen.
Participant 6 stated:

I’ve suspected, but was not able to prove it.

One also indicated that he observed once that students got questions or answers

on an online test from someone who had already taken the test. No one indicated that

they witnessed students help someone cheat on an online test. Two noted that they

became aware of students copying from another student during an online test with his or

her knowledge. One focus group member once observed or became aware of a student

using digital technology to get unpermitted help from someone during an online test or

assignment. Two focus group members once became aware of or observed a student paraphrasing or copying a few sentences from a book, magazine, or journal without footnoting them in a paper he or she submitted in an online class. Finally, more than once,

two focus group members observed or became aware of a student using a false or forged

excuse to obtain an extension on a due date or delay taking an online exam.

There were several forms of cheating that were never observed by any of the

focus group members, namely, turning in a paper in an online class from a “paper mill”

and claiming it as his/her own work; using an electronic/digital device as an unauthorized aid during an exam; turning in a paper copied from another student's paper;

turning in work done by someone else in an online class; cheating on a test in an online

class in any other way.

When focus group members were asked how they believed students accessed material if they paraphrased or copied material from a written electronic source without citing it, each member stated that students accessed the information from the Internet.

Focus group members were asked whether they agreed or disagreed with the

following statement: Cheating in online classes is a serious problem at this institution.

Four stated that they were not sure, two agreed and one strongly agreed.

Research Question 2. The second research question was as follows: How do



online faculty judge the seriousness of online cheating and how well do they think their

college deals with it?

To answer this question, the first step was to assess whether faculty think cheating is a problem at their institution at all. Descriptive statistics were used to assess faculty's attitudes about the severity of cheating and the different measures taken in response to online cheating (see Table 14). The median and mode of 3 indicate that faculty were unsure about cheating being a serious problem at their institution. It is important to establish the faculty's uncertainty, as it may influence their perceptions of the factors that influence online cheating.

One of the survey questions was about the fairness of the student judicial process

(see Table 14). The median (3) and mode (3) indicated that faculty were not sure how fair

the process is. The mode (4) for the response to whether students should be held

responsible for the academic integrity of other students indicates that the most frequently

reported answer is agree. The median value is 3. The median and mode of 3 for faculty

vigilance showed that participants were unsure whether other faculty members are vigilant in discovering and reporting suspected cases of academic dishonesty in their online classes (see Table 14). This perceived lack of vigilance is another factor that could contribute to cheating.

Next, the perception of different types of cheating was measured to determine to

what extent faculty interpreted behaviors as cheating or not. Nearly every dishonest behavior was seen as cheating to some extent by the participants who answered the question. Most of the types of dishonest behaviors were identified by more than 80% of

Table 14

Aggregated Survey Responses: Faculty Attitudes Toward Online Cheating, Questions 13a–13d

                       13a*            13b*            13c*            13d*
Response            Freq.    %       Freq.    %       Freq.    %       Freq.    %

Strongly disagree      1     .8         5    3.8        16   12.2         4    3.1
Disagree              10    7.6         5    3.8        37   28.2        17   13.0
Not sure              54   41.2        58   44.3        17   13.0        64   48.9
Agree                 46   35.1        52   44.3        41   31.3        36   27.5
Strongly agree        18   13.7         8    6.1        18   13.7         8    6.1
Missing**              2    1.5         3    2.3         2    1.5         2    1.5
Total                129   98.5       128   97.7       129   98.5       129   98.5

* 13a—Cheating in online classes is a serious problem at this institution; 13b—Our student judicial process is fair and impartial; 13c—Students in online classes should be held responsible for monitoring the academic integrity of other students; 13d—Faculty members are vigilant in discovering and reporting suspected cases of academic dishonesty in their online classes.
** Missing indicates how many participants did not respond.

Table 15

Mean, Median, Mode, and Standard Deviations, Questions 13a–13d

Question     Mean     Median     Mode     Standard deviation

13a          3.54      3.00       3             0.86
13b          3.41      3.00       3             0.83
13c          3.06      3.00       4             1.29
13d          3.21      3.00       3             0.86

respondents as serious cheating (see Tables 16, 18, and 20). Only 36.8% of respondents regarded as serious cheating the paraphrasing or copying of a few sentences from a book, magazine, or journal (not electronic or Web-based) without footnoting them in a paper submitted in an online class (M = 3.17, SD = 0.77), and only 44.3% considered working on an online assignment with others, when the instructor asked for individual work, to be serious cheating (M = 3.32, SD = 0.69). For those questions, the mean scores are closer to 3, indicating that the respondents considered the dishonest behavior more moderate. Tables 16 and 18 provide more details.

Table 16

Seriousness of Behavior, Questions 9a2–9d2

                         9a2*            9b2*            9c2*            9d2*
Response               n      %        n      %        n      %        n      %

Not cheating           0      0        0      0        0      0        0      0
Trivial cheating       9   10.0       12   12.4        6    6.0        8    7.9
Moderate cheating     34   37.8       42   43.3        8    8.0        7    6.9
Serious cheating      47   52.2       43   44.3       86   86.0       86   85.1
Missing**             41              34              31              30
Total                 90              97             100             101

* 9a2—Fabricating or falsifying a bibliography in an online assignment; 9b2—Working on an online assignment with others when the instructor asked for individual work; 9c2—Getting questions or answers on an online test from someone who has already taken a test; 9d2—Helping someone else cheat on an online test.
** Missing indicates how many participants did not respond.

Table 17

Means and Standard Deviations, Questions 9a2–9d2

Question     Mean     Standard deviation

9a2          3.42           0.67
9b2          3.32           0.69
9c2          3.80           0.53
9d2          3.77           0.58

Table 18

Seriousness of Behavior, Questions 9e2–9h2

                         9e2*            9f2*            9g2*            9h2*
Response               n      %        n      %        n      %        n      %

Not cheating           0      0        1    1.0        3    2.8        1    1.0
Trivial cheating       7    6.9        6    5.8       15   14.2        5    4.9
Moderate cheating      3    3.0        7    6.8       49   46.2        5    4.9
Serious cheating      91   90.1       89   86.4       39   36.8       92   89.3
Missing**             30              28              25              28
Total                101             103             106             103

* 9e2—Copying from another student during an online test with his or her knowledge; 9f2—Using digital technology (such as text messaging) to get unpermitted help from someone during an online test or assignment; 9g2—Paraphrasing or copying a few sentences from a book, magazine or journal (not electronic or Web-based) without footnoting them in a paper s/he submitted in an online class; 9h2—Turning in a paper in an online class from a “paper mill” (a paper written and previously submitted by another student) and claiming it as his/her own work.
** Missing indicates how many participants did not respond.

Table 19

Means and Standard Deviations, Questions 9e2–9h2

Question     Mean     Standard deviation

9e2          3.83           0.53
9f2          3.79           0.59
9g2          3.17           0.77
9h2          3.83           0.55

Table 20

Seriousness of Behavior, Questions 9i2–9m2

                        9i2*           9j2*           9k2*           9l2*           9m2*
Response              n      %       n      %       n      %       n      %       n      %

Not cheating          1    1.0       0      0       4    3.8       1    1.0       1    1.0
Trivial cheating      6    5.9       5    4.9      11   10.5       4    3.8       5    4.8
Moderate cheating    14   13.9      10    9.7      35   33.3       6    5.7      11   10.5
Serious cheating     80   79.2      88   85.4      55   52.4      94   89.5      86   81.9
Missing**            30             28             26             26             28
Total               101            103            105            105            103

* 9i2—Using an electronic/digital device as an unauthorized aid during an exam; 9j2—Turning in a paper copied, at least in part, from another student's paper, whether or not the student is currently taking the same online course; 9k2—Using a false or forged excuse to obtain an extension on a due date or delay taking an online exam; 9l2—Turning in work done by someone else in an online class; 9m2—Cheating on a test in an online class in any other way.
** Missing indicates how many participants did not respond.

Table 21

Means and Standard Deviations, Questions 9i2–9m2

Question     Mean     Standard deviation

9i2          3.71           0.62
9j2          3.81           0.51
9k2          3.34           0.82
9l2          3.84           0.52
9m2          3.77           0.58

The focus group results showed that six of the eight participants were between not being sure and agreeing that cheating in online classes is a serious problem at their institution, much like the survey respondents. Two participants noted that they would have to guess at their answer because they “need to look at data.” When asked if the judicial process is fair and impartial, five agreed, whereas two were not sure. For the question on whether students in online classes should be held responsible for monitoring the academic integrity of other students, five answers varied between disagree and strongly disagree.

Survey respondents were asked to indicate the seriousness of dishonest behaviors.

Several types of dishonest behavior were marked as “not cheating,” such as paraphrasing

or copying a few sentences from a book, magazine or journal (not electronic or Web-

based) without footnoting them in a paper s/he submitted in an online class (2.8%) and

using a false or forged excuse to obtain an extension on a due date or delay taking an

online exam (3.8%).

The focus group members had a much different perception of dishonest behaviors than the survey respondents. All of the focus group participants stated that the forms of cheating are all “serious cheating,” with the exception of two who considered using a false or forged excuse to obtain an extension on a due date or delay taking an online exam to be trivial cheating.

Participant 5 stated:

Using a false or forged excuse to get more time–all the time. That seems more
moderate. It’s more like boundary pushing, Not as serious as the last one.
Participant 8 stated:

I believe in my own mind that it’s false or forged, but I consider it trivial.

Research Question 3. The third research question was: What strategies are used

by college instructors to safeguard online course integrity? First, 60 respondents indicated

that they had seen a student cheat at least once. Those respondents were then asked what

their likely reaction would be if they were convinced, even after discussion with a

student, that the student had cheated on a major test or assignment in their online course.

They were instructed to check all the reactions that applied to them. One answer—fail the

student for the test or assignment—received a majority of responses (61.6%).

Respondents had an opportunity to write in their own answer if they had a reaction to

cheating that was not provided as an answer option. The responses were as follows:

1. “There is a procedure for reporting students that is used in our institution”

2. “Closer scrutiny of the student's future exams”

3. “Discuss the assignment with the student in an effort to prove he/she couldn't

verbally support the writing”

4. “Retest with new test”

5. “Zero for the assignment”



6. “Failed student for that question”

Details are shown in Table 22.

Table 22

Aggregated Survey Responses: Reactions to Cheating, Questions 6a–6d

                      6a*          6b*          6c*          6d*
Response            n      %     n      %     n      %     n      %

Yes                23   38.3    15   25.0    37   61.6    14   23.3

Table 23

Aggregated Survey Responses: Reactions to Cheating, Questions 6e–6i

                      6e*          6f*          6g*          6h*          6i*
Response            n      %     n      %     n      %     n      %     n      %

Yes                11   18.3    14   23.3    10   16.6     2    3.3     3    5.0

*If you were convinced, even after discussion with the student, that a student had cheated on a major test or
assignment in your online course, what would be your most likely reaction? 6a—Reprimand or warn the
student; 6b—Lower the student’s grade; 6c—Fail the student for the test or assignment; 6d—Fail the student
for the course; 6e—Require student to retake test/redo assignment; 6f—Report student to the Dean of
Students; 6g—Report student to your Chair/Director or Dean; 6h—Do nothing about the incident; 6i—
Other. Percentages total more than 100% because respondents in some cases selected multiple responses.
Total number of respondents for each response = 60.

In addition to the reactions outlined above, several safeguards are employed by

faculty to aid in the reduction of cheating. Respondents checked all options that applied to

them. The most widely used are providing information about cheating (65.6%), Internet

or plagiarism-detection software (59.5%), discussing the importance of honesty (52.7%),

and changing exams regularly (51.1%). A small percentage of participants (1.5%)

indicated that they use no safeguards in their courses. At-home proctoring software was

selected by only 9.9% of the respondents. See Table 24 for an exact breakdown.

Table 24

Aggregated Survey Responses: Safeguards to Reduce Cheating

Safeguard (N = 131)                                  n    % Yes

14a. None                                            2      1.5
14b. Internet or plagiarism software                78     59.5
14c. Provide information about cheating             86     65.6
14d. Change exams regularly                         67     51.1
14e. Different versions of exams                    46     35.1
14f. Discuss importance of honesty                  69     52.7
14g. Remind students about policy                   58     44.3
14h. Closely monitor students taking exam           33     25.2
14i. On-campus proctored testing center             47     35.9
14j. Off-campus proctored testing center            20     15.3
14k. At-home webcam computer proctor                13      9.9
14l. Password protected exams                       49     37.4
14m. Secure exam browser lockdown                   23     17.6

Note. 14a. None. I do not use any special safeguards in my courses, 14b. Use the Internet,
or software such as Turnitin.com, to detect or confirm plagiarism, 14c. Provide information
about cheating/plagiarism on course outline or assignment sheet, 14d—Change exams
regularly, 14e—Hand out different versions of an exam, 14f—Discuss my views on the
importance of honesty and academic integrity with my students, 14g—Remind students
periodically about their obligations under the institution’s academic integrity policy,
14h—Closely monitor students taking a test/exam, 14i—On-campus proctored testing
center, 14j—Off-campus proctored testing center, 14k—At-home webcam computer
proctor, 14l—Password protected exams, 14m—Secure exam browser lockdown.

Focus group members were asked what safeguards they employ in their courses.

They indicated that their most widely used safeguards are the Internet or software such as

Turnitin.com to detect or confirm plagiarism and on-campus proctored testing centers.

Among survey respondents, by contrast, providing information regarding cheating or

plagiarism was the most widely selected safeguard. Three focus group members indicated

that they no longer give exams or no longer base students’ grades on the results of high-

stakes exams. Safeguards mentioned by other focus group members are providing

information about cheating/plagiarism on the course outline or assignment sheet, handing

out different versions of the exam, and using password-protected exams. Some of the

remarks regarding safeguards were as follows:

Participant 2 stated:

Refuse to teach a course where all of the tests would be online. I don’t see the
point of that. I would accept offsite as long as it is a reputable place.

Participant 5 stated:

My biggest concern with webcam or off-campus is the cost. If the cost situation
could be resolved where I don’t have to take into consideration that I want to give
five tests in my course and it is $20 to $25 a pop—that all of a sudden becomes a
lot of money. I don’t trust secure lockdown browser. I don’t have confidence with
that type of technology where all of a sudden you’re roped into “I can’t get it
installed or the system froze.”

Research Question 4. The fourth research question was as follows: To what

extent do instructional college faculty follow the institution’s code of conduct in response

to academic dishonesty? Respondents were able to select more than one response. With

the exception of 8.4% of respondents, all respondents knew about the academic integrity

policy. The majority (61.8%) learned about it from the faculty handbook, followed by

41.2% who learned about the policies from the faculty orientation program. See Table 25

for further details.

Table 25

Aggregated Survey Responses: Primary Source From Which Faculty Learned About
Academic Integrity Policies

Response                                                n    % Yes

Faculty Orientation Program                            54     41.2
Faculty Handbook                                       81     61.8
Department Chair                                       44     33.6
Other Faculty                                          43     32.8
Students                                                2      1.5
Dean or other Administrators                           20     15.3
Publicized Results of Judicial Hearings                 3      2.3
College Catalog                                        36     27.5
I have never really been informed about campus         11      8.4
  policies concerning student cheating
Other                                                  12      9.2

Note. Total number of participants: 131.

Faculty were asked what their reaction to cheating would be if they were

convinced that a student had cheated on a major test or assignment. Table 23 shows that

23.3% would report the student to the Dean of Students and 16.6% would report to their

Chair/Director or Dean, but 3.3% indicated that they would do nothing about the incident.

When asked whether an incident of cheating was ever ignored and why, 38 of the

125 participants (30.4%) who answered the question indicated that they had ignored it.

Those 38 respondents were asked to indicate on a checklist the reason they ignored

cheating. Most of them (84.2%) indicated that they ignored it because they lacked

evidence or proof of cheating. Survey respondents had an opportunity to write in their

own answer if their reason for ignoring cheating was not provided as an answer option.

The responses were as follows:

1. “My exams are designed so that students who cheat them fail. Saves me from

having to get into the whole bureaucratic mess of bringing student up on charges.”

2. “Academic integrity is important; however, a draconian response to a glance

at a classmate's paper would be inappropriate, IMHO.”

3. “How can I prove another person took the exam; perfect score in minimal

time.”

4. “Using books and notes would not help one cheat on an oral French test.”

5. “The student was not passing the course. Did not matter if the student earned

100% the balance of the grades were so poor, it not make a difference”

Table 26

Aggregated Survey Responses: Frequencies and Reasons for Ignoring Cheating

Response                                                  n      %

7a—Lacked evidence/proof                                 32   84.2
7b—Cheating was trivial/not serious                       7   18.4
7c—Lack of support from administration                    4   10.5
7d—Student is the one who will ultimately suffer          9   23.7
7e—Didn’t want to deal with it; system is so              5   13.2
  bureaucratic
7f—Not enough time                                        1    2.6
7g—Other                                                  5   13.2

Total                                                    38

Faculty were asked if they had ever referred a case of cheating to their Chair,

Dean, or anyone else and how satisfied they were with the way the case was handled. Of

the 58 people who answered, 70.7% indicated they were very satisfied (36.2%) or

satisfied (34.5%). See Table 27 for further details. The most likely reason the remaining

73 respondents did not submit an answer is that they never referred a case.

Table 27

Degree of Satisfaction by Faculty With Handling Cases of Cheating

Response              Frequency       %

Very satisfied            21        36.2
Satisfied                 20        34.5
Unsatisfied                4         6.9
Very unsatisfied           6        10.3
Neutral                    7        12.1
Missing*                  73

Total                     58

*Missing indicates how many participants did not respond. Total respondents = 58.

Five focus group members indicated that they received information about the

academic integrity policies at their institution from the college catalog. For this question,

participants could select multiple sources. Four members indicated that they also received

this information from the faculty orientation program, the faculty handbook, the

department chair, and other faculty. One focus group member indicated that the

information was obtained from the dean or another administrator.

Five of the focus group members—those who indicated that they were convinced

that a student cheated on a major test or assignment—stated that they would fail the

student for the test or assignment. One focus group member each indicated that their

reaction would be to lower the student’s grade, to fail the student for the course, or to do

nothing about the incident. Participant 2 explained, in regard to what action would be

taken if a student had cheated, “unless I can really validate then there is no point [to take

any action]. Unless I can convince myself, then there is no way of really convincing

anyone else [that the student cheated].” Participant 3 mentioned, “If I were to catch

someone in the test environment then they would fail that particular test. And anything

else I would ignore. I would have to be sure.” Participant 4 said, “My first year, I ignored

it because I did not know how to proceed.”

Participant 7 mentioned in respect to notifying the administration of cheating:

Have I known it happened and decided not to proceed further on the chain of
commands? Absolutely, because, as others have said, my standpoint is obvious:
they’ve cheated. But they already received punishment—they failed the test or
assignment. Why bother?—the penalty is in place.

Participant 8 said:

I usually fail the student on that assignment and tell them not to do it again. With
my multiple-choice quiz I usually do [ignore cheating] because I can’t prove that
it was done. With my experience, they will ultimately fail. I usually teach six
classes and it’s hard. It’s time-consuming.

Two focus group members indicated that they had referred a suspected case of

cheating to their Chair or someone else. One was very satisfied with the way it was

handled, while the other (Participant 6) mentioned, “I was hoping that the dean was going

to give me more direction. It was left to me to decide.”

Research Question 5. The fifth research question was: What types of support do

instructional college faculty desire to help lower online cheating? Plagiarism detection

software, like Turnitin.com, is the most widely selected choice of safeguard (50.0%), as

shown in Table 28. Other safeguards were written in by participants:

1. “Different version of the test for each student”

2. “Time frame for completion thus providing time to cheat once test started”

3. “Change the test or generate random test questions”

4. “Large data base of questions”

5. “The structure of the class can reduce cheating greatly. Multiple, smaller

assignments that ask for written explanations can make cheating a lot more difficult”

6. “Higher-order thinking and application exams versus recall of information”

The details of the survey participants’ answers are reflected in Table 28.

Table 28

Aggregated Survey Responses: Additional Safeguards Faculty Would Employ

Safeguard                                                 n      %

15a—Plagiarism detection software, like TurnItIn.com    52   50.0
15b—On-campus proctored testing center                  33   31.7
15c—Off-campus proctored testing center                 18   17.3
15d—At-home webcam computer proctor                     33   31.7
15e—Password-protected exams                            37   35.6
15f—Secure exam browser lockdown                        34   32.7
15g—Other                                                6    5.8

Note. 15a—Plagiarism detection software, like TurnItIn.com; 15b—On-campus proctored testing center;
15c—Off-campus proctored testing center; 15d—At-home webcam computer proctor; 15e—Password-
protected exams; 15f—Secure exam browser lockdown; 15g—Other.

When asked which safeguards focus group members would use if they were

available, Participant 1 answered “Turnitin for discussions if it was available and

password protected exams.” Participant 2 stated, “I would accept offsite as long as it is a



reputable place.” Participant 4 mentioned “in Moodle, you have test banks with three

different versions of the same question.”

Participant 7 mentioned

Would love to have at home webcam computer proctor. Problem is the cost. To
have them pay $125 a semester, just… I can’t ask that of them. So until the cost
can be mitigated I won’t do it.

Research Question 6. The sixth research question was as follows: To what

degree do instructional college faculty perceive the acceptance of the use of institutional

measures to prevent online cheating? To answer this research question, faculty answered

a Likert-scale question in which they rated their perception as very low (1), low (2),

medium (3), high (4), or very high (5). The highest-rated item was faculty support of the

policies, indicated in Table 29 by a mode of 5 and a median of 4 (M = 3.80, SD = 1.058).
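
As a rough illustration of how such Likert summary statistics can be computed, the
following is a minimal sketch using Python's standard library; the ratings list is
hypothetical and not the study's data:

    import statistics

    # Hypothetical 1-5 ratings (1 = very low ... 5 = very high), not the study's data.
    ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]

    print("mode:", statistics.mode(ratings))        # most frequent rating
    print("median:", statistics.median(ratings))    # middle rating
    print("mean:", round(statistics.mean(ratings), 2))
    print("sd:", round(statistics.stdev(ratings), 2))  # sample standard deviation

With this illustrative list, the mode (5) exceeds the median (4), the same pattern reported
for faculty support of the policies.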

With faculty being highly or very highly supportive of institutional integrity

policies (Table 29), it is interesting to note that the most widely selected answer by

faculty on how information regarding plagiarism is conveyed is via their syllabus (74.4%;

Table 30). See Tables 29 and 30 for more information.

Table 29

Aggregated Survey Responses: Faculty Ratings of Institutional Measures to Prevent
Online Cheating

Response                                         n    Median   Mode   Mean      SD

1a. Severity of penalties for cheating in       121    3.00      3    3.26    1.173
    online classes at your institution
1b. Student’s understanding of the college’s    125    3.00      3    2.71    1.022
    policies concerning cheating in online
    classes
1c. Student support of these policies           101    3.00      3    2.69     .935
1d. Faculty support of these policies           120    4.00      5    3.80    1.058
1e. Effectiveness of these policies             117    3.00      3    2.98    1.025

Table 30

Aggregated Survey Responses: How and When Faculty Discuss Institutional Dishonesty
Policies, Questions 2a–2d

                            2a*          2b*          2c*          2d*
Response                  n      %     n      %     n      %     n      %

Do not discuss            3    2.3    10    7.8     4    3.1    19   15.0
On individual             0    0.0    19   14.8    42   32.8    18   14.2
  assignments
In syllabus or course    96   74.4    54   42.2    39   30.5    39   30.7
  outline
At start of semester     21   16.3    26   20.3    17   13.3    19   15.0
Other                     1     .8     1     .8     6    4.7     2    1.6
Not relevant              8    6.2    18   14.1    20   15.6    30   23.6
Missing**                 2            3            3            4

Total                   129          128          128          127

*2a—When, if at all, in your online courses do you discuss with students your policies concerning
plagiarism? 2b—When, if at all, in your online courses do you discuss with students your policies
concerning permitted and prohibited group work or collaboration? 2c—When, if at all, in your online
courses do you discuss with students your policies concerning the proper citation or referencing of sources?
2d—When, if at all, in your online courses do you discuss with students your policies concerning
falsifying/fabricating research data?
**Missing indicates how many participants did not respond.

Thirty-eight respondents indicated that they had ignored a suspected case of

cheating. Those 38 were then asked to check all reasons that applied to them from a

checklist provided. Faculty who ignored a suspected incident of cheating checked off

lack of evidence/proof as the primary reason for doing so (84.2%, n = 32; see Table 31).

Table 31

Reason Cheating Was Ignored

Response                                                  n      %

7a—Lacked evidence/proof                                 32   84.2
7b—Cheating was trivial/not serious                       7   18.4
7c—Lack of support from administration                    4   10.5
7d—Student is the one who will ultimately suffer          9   23.7
7e—Didn’t want to deal with it; system is so              5   13.2
  bureaucratic
7f—Not enough time                                        1    2.6
7g—Other                                                  5   13.2

As for referring a suspected case of cheating to the Chair, Dean, or anyone else, 44.6%

(n = 58) indicated that they had, and of those, 70.7% were very satisfied (36.2%) or

satisfied (34.5%) (see Table 27).

Focus group members were asked to rate the severity of penalties for cheating in

online classes at their institution. Their answers were as follows:

Participant 1 stated:

Really high, because I have seen where it has gone through the ranks—not in my
case, but I have seen where it—it occurred in other cases–where it went from the
Dean to the Associate Dean all the way up to the Dean of Student Affairs. I think
we have the appropriate setup to take care of cheating.

Participant 2 stated:

I agree with the fact that we have a process in place that works. I am not so sure
that I would rate the overall severity being high because it is very much at the
discretion of the instructor as the instructor determines their own syllabus. So I
could have one penalty and another colleague could have another penalty for the
same infraction. So institutionally, I don't think we’re highly effective that way.

But I do agree that once you set your policy the procedure does work, assuming
that policy is then seen through.

Participant 3 said:

With respect to specifically online classes in my discipline, mathematics, I would
rate it medium. The reason being is that the way my online courses are structured.
For example for the course that I teach, the students tend to do 30% of the work at
home and there is no way we can monitor what they do and how they do it. Thirty
percent of their grade comes from what they do away from the college. However,
70% of the grades comprises proctored tests. So we have proctored tests then for
70% of the grade. So from that particular point of view, you know, when we
proctor the tests here and if the student tries to cheat, then the penalties are quite
high.

Participant 4 stated:

I have to say no opinion, because I have not seen the process go through.

Participant 5 stated:

I don’t know if I think that there is a culture of severity for cheating, because I
don’t think it’s something that can be quantified, I guess you would say. Because
for me, it’s like, like your case where 30% is taken at home. Is that really…? And
if your brother does it for you? Well, can I prove that? And the administration is
in a position of ”Well, did it really happen?” I don’t really think that … it’s sort of
ubiquitous, it’s not really well defined. I agree that if it is in my syllabus, I can
really say I’m behind that. The administration would do the same. But I think it’s
a difficult situation to prove and a difficult situation to apply a penalty for
something you really can’t define.

Participant 6 stated:

When you look at the syllabus template that the Institutional technology
department provides and their statement on what the penalties are, it very much
follows the policy that is in place by the college, which is very open-ended. I
think that the severity is dependent on the instructor and the department that the
instructor is in as to how much they want to actually enforce it.

Participant 7 stated:

Yes, there is a culture of severity: the penalty is set out and it’s severe. But the

position the administration takes is ”Well, but can you prove it?” And that’s a
very difficult thing when you teach solely online the way I do. So I think it is
much more complex.

Participant 8 stated:

Well, I can only echo what everyone else says in terms of “There is a policy in
place,” but it’s extremely subjective from our perspective as professors and from
those who are above us—technically the associate dean and the dean of students.
And it’s subjective also in the sense of “What are the penalties on our end?” If we
pursue punishing the student, there is an atmosphere in the college where they
would rather give the student the benefit of the doubt. Although I have, very early
on in my career, I’ve sat in on grade appeals and that is where we find that we
have a lot of coverage and advocacy, but when it comes to severely punishing
someone for cheating … I don’t know how far the school would like to go. And I
wonder as well–just to add on to what I said—with this atmosphere of retention–
well that’s something to consider when retention is based on ... or monies is based
on retention … that’s something else we have to figure out when we see cheating.

To the question of how faculty would rate the average student’s understanding of

the college’s policies concerning cheating in online classes, focus group members

answered as follows:

Participant 1 said:

I would go low on that one–they know it but they still do it.

Participant 2 said:

From my experience it’s very low–students seem to be very unaware of what
constitutes cheating, even when it is specified in the syllabus.

Participant 3 said:

I would say low as well–I have no reason why, except that from interacting with
students. I would say that they are not aware and that they will see how much they
can get away with and push to the boundaries. Maybe they are aware of it and
they decide to push the boundaries.

Participant 4 said:

I think it’s less an understanding of the policies concerning cheating as much as
what they know they can get away with. As opposed to, well, where is the
boundary between slacking off a bit and just turning in good-enough work to get
by, or when do you cross over into cheating, like taking somebody’s notes instead
of your own or turning in someone else’s work as your own.

Participant 5 said:

I think it’s really a two-pronged problem: The first is that I don’t think they
understand. They go on the Internet and think, ”Well, this is like research.” They
can put that in their discussion. Now I just take it and put it into Google and, look,
it comes up as this other guy’s article. They don’t really realize that that’s not
theirs–you have to cite that. So I think they don’t really know and they also think
too, if they can push a little bit and try to get to the edge. I think it’s probably a
combination. I think we should probably push for more: Maybe they can have a
module or something to explain what it is–what cheating really is.

Participant 6 said:

I feel that students are given enough opportunity to actually know what it is,
because the orientation has a page with a lot on academic honesty. Like I teach a
course where the orientation assignment that they had to do was to go and find the
academic honesty policy in the syllabus and paste it in, and submit that
assignment. The students did that. And then it comes back to now–OK, I think
they know. In this one class I caught four people cheating, even after submitting
the assignment that said find that academic policy and show that you’ve read it by
submitting it. So I think it is also a question of knowing really what it is because it
is kind of broad–that policy statement. Does that tell the students enough? I have
a suspicion that academic honesty is not really a priority for the K-12 system.
Their mindset is set at that level and when they come to the college they think
they can just continue with that.

Participant 7 said:

I would say that awareness and compliance are two vastly different issues. And to
that point, two years ago, I was required by my college to do a culture project. I
teach Spanish. And I gave them very very specific instructions especially
concerning not stealing photographs that were copyright-protected. They were
given really really really detailed instructions about don’t do this, look for
creative commons images that give you permissions that allow you with
attributions. I would say that out of 90 students between my four classes that
semester, I had to no-credit at least 20 of them for violating that policy.

Participant 8 said:

Again I feel that there is a policy, from my understanding, since I’ve been
teaching online–there is a hyperlink on the syllabus. In my syllabus quiz I have a
question about academic honesty, plus it is adequate in terms of notice. But are
the students reading it? Possibly not. I also feel that many students, especially in
teaching History, they may have had the 1101 class where they are introduced to
the idea of academic honesty. I just think that they try to see what they can get
away with.

They seem genuinely shocked to get caught when they are confronted.

Focus group participants all rated student support of the policies against cheating

either very low or low. They rated faculty support for the policies mostly low (n = 3), yet

some rated it high (n = 2) or very high (n = 1). Accordingly, the effectiveness of the

policies was also rated low (n = 4) by most, and only one rated it high. Reasons given for

rating effectiveness low included “There is uncertain administrative support. Let’s be

real: it is a lot of work” (Participant 2), and Participant 3 said:

There is all of the hoops to jump through once you catch a student, even when it is
red-handed. All of the paperwork, and then the back and the forth and then the
meeting and all of that stuff and how you’re gonna prove it. Even in a face-to-face
class where the student … if you catch a student with a cell phone with pictures
and all that stuff. What do you do at that point? Do you get that cell phone? How
will you prove now what the student had on the cell phone and all that stuff? So
that’s the problem there. So I think from this point it is prevention–from the
faculty standpoint: for example, giving multiple tests, organization. That leads to
the effectiveness of these policies. Of course we want a fair process for the
students, but at the same time, does it become a burden for the faculty?

Focus group members were asked if they had ever ignored a suspected incident of

cheating in one of their courses for any reason. One stated that he had, while the rest

(n = 7) indicated that they took action, such as failing the student for the test. The one

who had ignored it explained that he was new to the college at the time and did not know

the procedure that needed to be followed.



When asked how strongly focus group members agreed or disagreed that faculty

members are vigilant in discovering and reporting suspected cases of academic

dishonesty in their online classes, one was unsure, while two said that they varied

between unsure and agree. The rest (n = 4) agreed. One remark was that there is likely a

difference between part-time and full-time faculty, with part-timers being less likely to be

as vigilant as full-time faculty (Participant 3). An additional comment by Participant 7

was “not sure especially with regards with the vigilance just because I hear too often

from my students that other online instructors don’t pay attention.”

Correlations and associations. The study examined whether there is a

correlation between “the average student’s understanding of the college’s policies

concerning cheating” and “student support of these policies.” The correlation between

these two items is statistically significant, r = 0.41, p < .001 (see Table 32). These results

indicate that the average student’s understanding of the college’s policies concerning

cheating has a moderate positive correlation with student support of these policies.

The correlation between “student support of these policies” and “faculty support

of these policies,” r = 0.60, p < .001, is also statistically significant (see Table 32).

According to these results, there is a moderate positive correlation between students’ and

faculty’s support for the policies concerning cheating in online classes.

The correlation between “faculty support of the college’s policies concerning

cheating” and “the effectiveness of these policies” is statistically significant, r = 0.67,

p < .001 (see Table 32). These results indicate that faculty support of the policies has a

moderately strong positive correlation with the perceived effectiveness of the policies;

student support shows a similar, somewhat weaker relationship (r = 0.60).
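
For reference, the following is a minimal sketch of this type of computation, assuming
SciPy is available; the paired rating arrays are illustrative, not the study's data:

    from scipy.stats import pearsonr

    # Hypothetical paired Likert ratings for two survey items.
    faculty_support = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
    effectiveness   = [4, 4, 3, 3, 5, 2, 3, 5, 3, 4]

    r, p = pearsonr(faculty_support, effectiveness)  # r and two-sided p value
    print(f"r = {r:.2f}, p = {p:.3f}")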

Table 32

Pearson Correlations of Institutional Policies, Support, and Effectiveness

Correlation                                                 N      r       p

1b. The average student’s understanding of the college’s  100   0.41   <.001
    policies concerning cheating in online classes vs.
    1c. Student support of these policies
1b. The average student’s understanding of the college’s  116   0.53   <.001
    policies concerning cheating in online classes vs.
    1d. Faculty support of these policies
1c. Student support of these policies vs. 1d. Faculty      96   0.60   <.001
    support of these policies
1e. The effectiveness of these policies vs. 1b. The       115   0.53   <.001
    average student’s understanding of the college’s
    policies concerning cheating in online classes
1e. The effectiveness of these policies vs. 1c. Student    93   0.60   <.001
    support of these policies
1e. The effectiveness of these policies vs. 1d. Faculty   112   0.67   <.001
    support of these policies

Question 13, "Cheating is a serious problem at this institution," was tested for

correlation with "Faculty members are vigilant in discovering and reporting suspected

cases of academic dishonesty." There was no evidence of a correlation, r = 0.01 (see

Table 33).



Table 33

Pearson Correlation: Cheating Is a Serious Problem Versus Faculty Are Vigilant in
Reporting

Correlation                                                      N       r

13a. Cheating in online classes is a serious problem at this   129    0.01
     institution vs. 13d. Faculty members are vigilant in
     discovering and reporting suspected cases of academic
     dishonesty in their online classes

The researcher tested whether a correlation exists between the faculty’s number of

years of teaching at the college level (Question 16) and the type of reaction to evidence of

cheating (Question 6). The correlation between the faculty’s years of teaching and the

respondent’s type of reaction to the evidence of cheating was weak when all the responses

were combined, r = 0.25 (see Table 34).

Table 34

Pearson Correlation: Actions Taken for Cheating Versus Years of Experience

Correlation                                                      N       r       p

16. How many years have you been teaching at the college        68    0.25   <.001
    level vs. (q6) Actions Total

The researcher tested whether a relationship exists between the faculty’s gender

(Question 17) and the type of reaction to evidence of cheating (Question 6). The

relationship was weak for every type of response (Table 35). Cross-tabulations showed

that female faculty would be more likely than male faculty to reprimand the student, by

10 percentage points, and would be about twice as likely to lower the student’s grade or

fail the student for the course. The largest difference, 16 percentage points, was female

faculty’s greater likelihood of failing the student for the test or assignment. Chi-square

analyses were used to determine whether faculty’s gender is associated with their

response to cheating in the areas that showed the largest differences between male and

female responses. No significant associations were found, although Table 35 shows a

trend of female respondents being more punitive in their responses to cheating than

males.
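
A chi-square test of association of this kind can be sketched as follows, assuming SciPy;
the 2 x 2 yes/no counts are hypothetical stand-ins, not the study's data:

    from scipy.stats import chi2_contingency

    # Hypothetical gender-by-reaction counts:   yes  no
    table = [[13, 15],   # male   (n = 28)
             [24, 17]]   # female (n = 41)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")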

Table 35

Aggregated Cross-Tabulation: Responses to Cheating by Gender

                                                    Male      Female     Pearson
Response to cheating                              (n = 28)   (n = 41)   chi-square
                                                    % Yes      % Yes

Reprimand or warn the student                       11.6       21.7        .24
Lower the student’s grade                            7.2       14.5        .75
Fail the student for the test/assignment            18.8       34.8       1.17
Fail the student for the course                      7.2       13.0        .216
Require student to retake test/redo assignment       7.2        8.7
Report student to the Dean of Students               8.7       11.6
Report student to your Chair/Director or Dean        7.2        7.2
Do nothing about the incident                        1.4        1.4
Other                                                1.4        2.9

Cross-tabulation was used to examine whether there is a relationship between

faculty’s teaching discipline and their reactions to evidence of cheating. No significant

relationships were found (see Table 36); faculty’s teaching discipline is not interrelated

with their reaction to evidence of cheating. The respondents from the social/behavioral

sciences nonetheless stand out: overall, their reaction rates are higher than in the other

disciplines. Table 36 reflects the reactions to cheating by discipline.

Table 36

Aggregated Cross-Tabulation: Reactions to Cheating by Discipline, Questions 6a–6e

                      6a*          6b*          6c*          6d*          6e*
Area of teaching    n   %Yes     n   %Yes     n   %Yes     n   %Yes     n   %Yes

Humanities          7   30.4     3   23.1    11   31.4     5   35.7     5   45.5
Math or Science     4   17.4     4   30.8    11   31.4     2   14.3     0    0.0
Nursing/Health      4   17.4     2   15.4     5   14.3     1    7.1     2   18.2
Social/Behavioral   8   34.8     4   30.8     8   22.9     6   42.9     4   36.4
  Science

Total              23           13           35           14           11

Note. If you were convinced, even after discussion with the student, that a student had cheated on a major test or
assignment in your online course, what would be your most likely reaction? *6a—Reprimand or warn the student; 6b—
Lower the student’s grade; 6c—Fail the student for the test or assignment; 6d—Fail the student for the course; 6e—
Require student to retake test/redo assignment.

Table 37

Aggregated Cross-Tabulation: Reactions to Cheating by Discipline, Questions 6f–6i

                      6f*          6g*          6h*          6i*
Area of teaching    n   %Yes     n   %Yes     n   %Yes     n   %Yes

Humanities          4   28.6     5   55.6     0     .0     1   33.3
Math or Science     3   21.4     1   11.1     0     .0     2   66.7
Nursing/Health      2   14.3     0     .0     1   50.0     0     .0
Social/Behavioral   5   35.7     3   33.3     1   50.0     0     .0
  Science

Total              14            9            2            3

Note. 6f—Report student to the Dean of Students; 6g—Report student to your Chair/Director or Dean; 6h—Do nothing
about the incident; 6i—Other. *Totals of percentages may exceed 100%, indicating that respondents in some cases
selected multiple responses.

Chapter Summary

The findings of the research were presented in this chapter. The participants’

survey responses were analyzed with descriptive statistics and sampling distributions and

compared to the qualitative responses from the focus group members. Perceptions of

cheating at the respective institutions varied, with the majority of faculty being unsure of,

or disagreeing with, the claim that cheating is a serious problem at their institution.

Faculty mostly indicated that they had not personally witnessed students engaging in

obtaining answers to online tests or copying answers from another student and were

unsure whether dishonesty is a problem at their institution, but they strongly believed

copying information from the Internet without proper citation (plagiarism) to be the

primary type of dishonesty. Students’ monitoring one another to ensure academic

integrity was identified by faculty as the factor that most influences cheating, but focus

group members expressed concern regarding students in this role, questioning whether it

is a fair burden.

To safeguard online course integrity, college instructors identified the use of

preventative strategies like providing integrity policy information in the syllabus and

using plagiarism detection software, or reactive strategies, like failing the student for the

test or assignment. Additionally, the use of proctored testing environments on campus or

off campus was also commonly selected. The at-home webcam proctor was neither

widely used nor selected by many as a feasible tool, because its cost to students was said

to be high; faculty indicated that they would be more likely to utilize it if the per-use cost

were reduced.

Respondents indicated that they knew their institutional policy on academic

integrity from reading the college handbook, for example, but their reaction to cheating

was not always in line with the institution’s policy, as shown by about 30% admitting to

having ignored cheating at various times. Regardless of the faculty’s academic discipline,

failing the student for the test or assignment was the most widely practiced reaction,

while reporting the incident to the department Chair or Dean proved to be an unpopular

response. Some faculty ignored cheating because they lacked proof that it took place.

Desired support to help lower cheating included on-campus proctored exams and at-home

webcam computer proctoring.

The degree to which instructional college faculty perceived the acceptance of the

use of institutional measures to prevent online cheating depended on the level of support.

Respondents perceived students to have a low level of understanding of the policies,

which resulted in low support for them. Faculty were highly supportive of the policies

and perceived them as moderately effective, but they were mostly unsure about the

effectiveness of the student judicial process, as they had not seen data related to this

effectiveness.

Neither gender, discipline, nor the number of years faculty had taught at the

college level showed a significant relationship with punishment in general or with the

type of punishment faculty used in response to cheating. There was a slight indication

that the females in this study were more punitive than the males; the same seemed true

for faculty from the social and behavioral sciences. Chapter 5 provides a discussion of the

summary of findings, along with limitations, implications, conclusions, and

recommendations.

Chapter 5: Discussion

The purpose of this study was to provide an inquiry into the phenomenon of

cheating in online courses. This mixed-method study on cheating in online classes at the

college level was conducted as an inquiry into the problem of dishonesty from the

perspective of faculty. The findings of the study were presented in Chapter 4, where the

data of the survey portion of the research, as well as the information obtained from the

focus group meeting, were organized by each of the six research questions that were the

foundation for the study.

Overview of the Study

There are many studies that address the problem of cheating in online classes

(e.g., Bedford et al., 2011; Brent & Atkisson, 2011; Chapman et al., 2004; Correa, 2011;

Devlin & Gray, 2007; Hudd et al., 2009), and increased pressure from the federal

government (Higher Education Opportunity Act, 2008) has resulted in the

implementation of processes to help prevent dishonesty. Despite these efforts, research

has shown that the perception about cheating is still ambiguous, which results in reduced

effort to implement strategies for reduction (Pincus & Schmelkin, 2003). Moreover, there

is some evidence that the gap between students and faculty perception of what constitutes

cheating is widening, which makes implementation of strategies more difficult (McCabe,

Butterfield, & Trevino, 2012). As indicated by Pincus and Schmelkin (2003), faculty do

not always view academic honesty in two dichotomous categories. Rather, they found

that faculty often view dishonesty on a continuum that ranks forms of dishonesty at

different levels based on their perceived severity. The findings of

this study were consistent with the notion of a continuum, as faculty rated paraphrasing

or copying a few sentences from a book without proper footnoting as a much lower case

of dishonesty than copying from another student during an online test with his or her

knowledge.

The research questions for this study were:

1. To what degree do instructional college faculty perceive dishonesty as a

problem in their online classes?

2. How do online faculty judge the seriousness of online cheating and how well

do they think their college deals with it?

3. What strategies are used by college instructors to safeguard online course

integrity?

4. To what extent do instructional college faculty follow the institution’s code of

conduct in response to academic dishonesty?

5. What types of support do instructional college faculty desire to help lower

online cheating?

6. To what degree do instructional college faculty perceive the acceptance of the

use of institutional measures to prevent online cheating?

Five hundred and eighty-eight online faculty from three Florida community

colleges were invited to partake in the study. The initial invitation, followed by two

reminders, was sent via email by a liaison from the department of Instructional

Technology at each of the three participating colleges. The study’s 18-question survey

was completed by 131 online faculty (a 22% response rate). The AIS was modified with

the permission of its developer, D. McCabe (personal communication, June 7, 2013).

Participants were asked to sign up for a one-hour focus group

meeting which addressed the same questions. Eight volunteers were selected to attend the

meeting. The purpose of the focus group meeting was to obtain an in-depth view from the

faculty and to triangulate the answers obtained from the survey.

Summary of Findings

The sample for the quantitative part of the study consisted of 51 males (39%), 79

females (61%), and two participants who did not disclose their gender. Cross-

tabulations showed that there is no significant relationship between gender and the

response to cheating, although female faculty indicated a slightly more punitive attitude

than male faculty.

Representative sample. Davern (2008) stated that a sample is considered to have

“strong external validity” (p. 721), when its make-up is reflective of the population. He

further explained that this representation then makes generalization possible. To

determine if the study’s sample is representative of the target population, the researcher

obtained comparative demographic data from the participating institutions and

determined the gender breakdown of online instructors for the Winter 2013–2014

semester to be 374 females (61.5%) and 234 males (38.5%; L. Ciardulli, Assistant Vice

President of Academic Technologies, personal communication, April 10, 2014, E.

Muirhead, Executive Assistant, personal communication, April 12, 2014, and S. Arsht,

eLearning Student Success Specialist, personal communication, April 25, 2014), and this

was comparable to what was obtained in the current study’s sample.
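
One way to make this representativeness check concrete is a chi-square goodness-of-fit
test comparing the sample’s disclosed-gender counts to the population split. The sketch
below uses the counts reported above, but the test itself is my illustration rather than part
of the study’s analysis:

    from scipy.stats import chisquare

    observed = [51, 79]                 # sample: males, females (130 disclosed a gender)
    pop_props = [0.385, 0.615]          # population split, Winter 2013-2014
    expected = [p * sum(observed) for p in pop_props]

    stat, p = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi2 = {stat:.3f}, p = {p:.3f}")  # a large p is consistent with a representative split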

The researcher obtained information from the participating institutions regarding

the breakdown of instructors by discipline in the Winter 2013–2014 semester. Disciplines

were grouped the same way in which the groups were combined for the statistical

analysis of this study, which resulted in 430 online instructors altogether in subject areas

that matched the ones for this study. This breakdown falls in line with that of this

study, with all of the disciplines being within 4% difference in terms of representation,

with the exception of faculty in the business department, which had a 6.6% higher

representation in the survey.

Demographic influence on cheating. A cross-tabulation did not indicate any

definitive trends between faculty’s teaching discipline and their reaction to any evidence

of cheating. The number of years of teaching had no significant bearing on the reaction to

cheating, except when it came to having the student retake a major test or redo an

assignment when cheating was discovered: the greater the number of years of teaching

experience, the more likely faculty were to have the student retake the test or redo the

assignment. The results for each research question will be discussed in detail in the next

section.

Perception of dishonesty as a problem. Research Question 1 was, “To what

degree do instructional college faculty perceive dishonesty as a problem in their online

classes?”

Fifty-one (57.3%) respondents indicated that they believed that plagiarism

occurred often in their online classes. Studies done with students who had to self-report

their instances of cheating support faculty’s inclination to believe that students cheat in

their classes (Harkins & Kubik, 2010; McCabe et al., 2012). The perception of cheating is

based on speculation, except for plagiarism that involves copying lines without citations.

This explains why the highest percentage of faculty (41.9%) expressed uncertainty about

cheating being a serious problem at their institution. This trend could be attributed to

cheating being a less noticeable problem in the online environment, because online

faculty are not well positioned to witness it directly.

Focus group discussion revealed that many of the different types of cheating

cannot physically be witnessed by the instructor, due to the mode of delivery. The

participants further explained that suspected cheating is difficult to prove beyond a

reasonable doubt, but that easy access to electronic materials makes it more likely for

students to try. This includes the use of multiple electronic devices while taking exams:

one device has the exam open, while the other device is used to look up answers. Another

method used for cheating that was discussed by focus group members was plagiarism

when submitting discussion posts, as the discussion feature does not have the plagiarism

detection software. Hacking into accounts was also cited to be a common way to cheat, as

obtaining username and password information from other students seems rather easy.

Turning in papers from a “paper mill” is not widely noted as a common way to cheat.

Seriousness of cheating and colleges’ responses. Respondents were unsure

whether cheating is a serious problem at their institution. The uncertainty about the

existence of cheating likely affects the faculty’s reaction to cheating. Focus group

members argued that their answers were based on guesses, as they did not see any data

from their college that provided factual information. A weak relationship exists between

“cheating is a serious problem at this institution” and “faculty members are vigilant in

discovering and reporting suspected cases of academic dishonesty.” This may indicate

that published institutional data regarding cheating will likely encourage faculty to

become more vigilant and to enforce the institutional integrity policy.

Another factor that may influence cheating is the perception of the instructors

about the seriousness of cheating. More than 89% of instructors indicated that turning in

a paper from a paper mill or turning in work done by someone else is considered serious

cheating. There were a few forms of cheating that were seen as trivial to moderate, such

as paraphrasing or copying a few sentences from a book without proper footnoting or

students submitting false or forged excuses to get an extension on exams or assignments.

When faculty’s perception and reaction are inconsistent, their reaction to the type of

cheating may also vary. The focus group discussion addressed this issue, where members

mentioned that students often test the boundaries to see how much they can get away

with. This understanding echoes Correa’s (2011) conclusions that students learn about the

culture of academic integrity at their institution and that if faculty do not take their role in

combating cheating seriously, it will continue to exist.

Participants of the survey study and focus group members differed in their rating

of peer influence. Survey study participants mostly agreed that students in online classes

should be responsible for the integrity of other students, while focus group participants

mostly disagreed because they felt that it should not be the students’ task to police other

students. McCabe and Trevino (1997) argued that peer reporting can be highly effective

since peers are more likely to find out from one another that someone has cheated. In

turn, stated McCabe and Trevino, the threat of its being reported may be enough to keep

students from cheating at all. Their study revealed that students were mostly affected by

the disapproval or potential negative reaction of their peers. McCabe and Trevino (1997)

therefore recommended that institutions that are serious about combating cheating must

look closely at ways to create a culture of cheating being unacceptable among peers.

The last factor that may influence cheating is the subject discipline of the faculty

member. The small pool of respondents in any of the disciplines makes generalizing

difficult. However, there were observed differences worth noting: based on the selection

of reactions that were offered, social and behavioral science respondents had the

strongest reaction to cheating, compared to the other disciplines. There were two

respondents who indicated that they would do nothing, even when they were convinced

that a student cheated. This shows that most faculty in the study are inclined to take

action once they have evidence of cheating, but factors such as bureaucratic barriers, lack

of time, or an unclear sense of personal responsibility may deter some from taking any

action at all.

Strategies to safeguard integrity. The response by faculty to different types of

cheating varied, and the results indicated that all but two of the 60 faculty who had

witnessed cheating would take action. Failing the student for the test or assignment was

the most likely reaction, as indicated by 61.6% of respondents. Correa

(2011) claimed that enforcement of integrity policy helps to increase the institution’s

credibility, but as his study showed, instructors would rather handle issues of dishonesty

on their own than follow the policy, which may include referring the student to the chair,

director or dean of students. Focus group members for this research study stated that

there may also be a difference in understanding of the policies between part-time and

full-time faculty. Hudd et al. (2009) mentioned that part-time faculty’s understanding of

cheating differs and their strategies to combat cheating will differ as a result.

Plagiarism detection software, like TurnItIn.com was indicated as being widely

used by faculty, and most stated that they provide their students with information about

dishonesty and change their exams regularly. Other strategies cited to prevent cheating

include, but are not limited to, handing out different exam versions and using on-campus

proctored testing centers. There appears to be a lack of awareness among faculty

respondents about different safeguards that are available. In the focus group conversation

it was revealed that there was misunderstanding of how some safeguards work.

Additionally, respondents indicated that there is a lack of trust in some of the technology

used as safeguards: Some Learning Management Systems do not include plagiarism

software for their discussion feature, while the software is available in assignments. As a

result, faculty may not be able to utilize the software even when they are familiar with it.

The cost of off-campus proctored testing and webcam-proctored exams was mentioned as

a deterrent.

Suggested safeguards. Three focus group members indicated that they no longer

give exams or no longer base students’ grades on the results of high-stakes exams.

Safeguards mentioned by other focus group members were providing information about

cheating/plagiarism on the course outline or assignment sheet, handing out different

versions of the exam, and using password-protected exams.

In the literature, there are different safeguards to protect online course integrity,

which have reportedly been used successfully:

1. Faculty should establish rapport with their students so they can recognize

patterns of cheating when it occurs (Moten et al., 2013). One of the focus group members

no longer gives tests, but gives assignments instead, with the goal of building rapport

with the students. Survey respondents indicated a preference to have conversations with

their students to discuss honesty and integrity, as well as the student’s obligations

regarding integrity. This may aid in building rapport.

2. Faculty should use multiple versions of exams (Moten et al., 2013). More than

35% of survey respondents indicated that they already use multiple versions of exams

and focus group members mentioned doing the same. One respondent suggested that each

student should have a different version of the test.

3. Faculty should require signed dishonesty statements from students (Moten et

al., 2013) and the college should add academic integrity policy to the syllabus (Jones,

2011). Focus group members noted that this feature is currently available at their

institution and that their syllabi often include statements about academic integrity.

Perhaps requiring the students to sign the dishonesty statement separately would reduce

cheating. Since 74.4% of survey respondents indicated that they provide information

regarding dishonesty in their syllabus, they could include the dishonesty statement

recommended by Moten et al. (2013).

4. Faculty should make use of proctored exams (Harkins & Kubik, 2010; Lieber,

2012; Moten et al., 2013). When off-campus exams are administered, faculty should

utilize reputable testing centers like the NCTA (Baron & Crooks, 2005). While more than

a third of survey respondents utilize on-campus testing centers, only 15.3% indicated that

they use off-campus testing centers.

5. The instructor can be added to the class roster under a fictitious name (Moten

et al., 2013). This option was not discussed among focus group members, nor was there a

question on the survey that addressed it.

6. Faculty should provide clear guidelines on cheating. They should explain

different forms of cheating to students to clear up misunderstandings (Cole & Swartz,



2013; Harkins & Kubik, 2010). The survey results demonstrated how faculty are not in

agreement about the classification of cheating of different types of dishonesty.

Clarification of the guidelines should clear up misunderstandings for faculty and students

alike.

7. Faculty should develop a clear honor code and enforce it (Patnaude, 2008).

The development of an honor code was not addressed in the survey. It was clear that

faculty had different ideas on how they should deal with cheating, but enforcement has

been inconsistent. Additionally, it was mentioned during the focus group meeting that

following up is time-consuming, which makes buy-in difficult.

8. Faculty should make assignments challenging and intriguing to spark the

students’ interest and enthusiasm (Kohn, 1999). A survey respondent offered the

suggestion of incorporating more higher-order thinking questions and application type

questions on exams.

9. Faculty should utilize positive peer pressure (McCabe et al., 2012; Sendag et

al., 2012). This option was not discussed by the focus group members, nor was there a

question on the survey that addressed it.

10. Faculty should commit to combating dishonesty and following through with

the institutional guidelines (Correa, 2011; Thakkar, 2012; Thomas & De Bruin, 2012).

Survey respondents and focus group members expressed uncertainty about their

colleagues’ commitment to the institutional guidelines.

11. There should be college-wide consistency in handling dishonesty (Thomas &

De Bruin, 2012). Most survey respondents failed the student for the test or assignment

they cheated on, but the responses were very inconsistent and a few respondents admitted

doing nothing at all.

12. The college should institute a required orientation module that covers

academic integrity (Williams et al., 2012). Focus group members discussed that such

orientation is already required in their courses. It was not addressed in the survey.

13. Faculty should use webcams (Cole & Swartz, 2013) or other remote

monitoring devices such as SeCOnE (Jung & Yeom, 2009). Twenty-five percent of

survey respondents expressed an interest in the webcam option, while some faculty

indicated that they already use it. Others expressed their concern about the cost associated

with its use.

14. Faculty should require an increased number of written assignments (Cole &

Swartz, 2013). One focus group member identified written assignments as the preferred

method of assessing students. A survey respondent mentioned that written assignments

are being used.

15. Faculty should use the screen-lock option to prevent students from minimizing the screen from full-screen mode while taking an exam (Cole & Swartz, 2013). No respondents addressed this issue.

16. Faculty should use plagiarism detection software such as SafeAssign, WriteCheck.com, Duplichecker.com, Turnitin, iThenticate, or Integriguard (Baron &

Crooks, 2005; Heckler et al., 2013; Jones, 2011; Moten et al., 2013; Patel et al., 2011;

Simonson et al., 2012). Almost 60% of survey respondents indicated that they already use

such software and almost 40% indicated their desire to use it. During their discussion,

focus group members shared that the software is very effective, but they expressed

concern that in some Learning Management Systems, the software is not available for

discussions, only for assignments. Survey respondents expressed a desire for access to this

safeguard in their courses.

17. Faculty should use Google to search for exact sentence copies (Baron &

Crooks, 2005; Farnsworth & Bevis, 2006). Although this method was not specifically

addressed in the survey, one focus group member spoke about the effectiveness of this

method and felt that it is as effective as plagiarism detection software.
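As a rough illustration of this last safeguard, the following sketch (hypothetical code, not part of the study) flags sentences from a student submission that appear verbatim in a suspected source text, which is essentially what quoting a sentence in a Google search automates at web scale. The function names and sample texts are invented for illustration.

# Hypothetical sketch: flag sentences from a submission that appear verbatim
# in a suspected source, mimicking a quoted-phrase Google search done locally.
import re

def split_sentences(text):
    # Rough sentence splitter; collapses internal whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text)
    return [" ".join(p.split()) for p in parts if p.strip()]

def exact_copies(submission, source, min_words=6):
    # Skip very short sentences (min_words) to avoid trivial matches.
    normalized_source = " ".join(source.split()).lower()
    return [s for s in split_sentences(submission)
            if len(s.split()) >= min_words and s.lower() in normalized_source]

if __name__ == "__main__":
    source = ("Academic integrity policies vary widely between institutions. "
              "Enforcement is often left to individual instructors.")
    paper = ("My essay argues several points. "
             "Academic integrity policies vary widely between institutions.")
    for hit in exact_copies(paper, source):
        print("Possible verbatim copy:", hit)

A web search adds coverage that a local comparison cannot, but the matching logic is the same: long, exact string matches are strong signals.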

The research about safeguards offered additional options, which were not part of this

study. Future research in this area could focus on these methods and evaluate their

effectiveness:

1. Faculty should limit time on exams (Cole & Swartz, 2013).

2. Faculty should use Skype or other synchronous tools for oral examinations

(Cole & Swartz, 2013).

3. Faculty should compare the students’ writing to other writing they submitted

via email or discussions (Davis et al., 2009; Farnsworth & Bevis, 2006).

4. Faculty should require unlocked documents for submission so documents can be scanned by a plagiarism detection program (Patel et al., 2011).

5. Faculty should look out for tricks, like transparent dots that are placed

between words (Patel et al., 2011).

6. Faculty should use portfolios to establish a writing baseline (Baron & Crooks,

2005).

7. Faculty should implement projects and assignments which require high

teacher-student and student-student interaction (Baron & Crooks, 2005; Prince et al.,

2009).

8. Faculty should include students in assignment design and topic design for

discussions (Prince et al., 2009).

9. Faculty should limit multiple-choice questions on exams and replace them

with critical thinking essay questions (Baron & Crooks, 2005).

10. Faculty should implement regular student conferencing (Moeck, 2002).

11. Faculty should require students to use tutors, as their relationship might deter

cheating (Baron & Crooks, 2005).

12. Faculty should use biometrics to verify students’ identities (Baron & Crooks,

2005).

Institutional code of conduct. Faculty in the study were made aware of their

institutional integrity policy via different avenues. Each institution's code of conduct highlights the steps faculty must take in case of a breach, including referral to the Dean of Students (Broward College, n.d.-b; Palm Beach State College, 2013b). The

policy at one of the three institutions requires that faculty members determine the extent

of cheating and implement the appropriate punishment accordingly (Santa Fe College,

n.d.-b). The sources selected by the majority of respondents with respect to the cheating policy

were the faculty handbook (61.8%) and the college’s orientation program (41.2%). Focus

group members mentioned that part-time faculty may not fully understand their role as

they are only on campus briefly to teach their classes. They may not have been given

detailed information regarding what cheating is and how they are required to follow up,

should cheating be detected. The discussion also revealed that some part-time faculty

may work at multiple institutions, each with its own policy. This may lead to further confusion. Additionally, there seem to be departmental differences in how dishonesty is dealt with. Hudd et al. (2009) showed that the difference in perception of what cheating entails is an issue that should be addressed. Their study confirmed the perception of focus group members that part-time faculty, because of the short time they spend on campus, lack understanding of policies and their enforcement.

The main reason for ignoring cheating, indicated by 84.2% of survey respondents (n = 32), was lack of proof. The focus group members also discussed their reasons for ignoring cheating when it occurred, citing lack of proof as the main reason why they failed to follow up. Thomas and De Bruin (2012) wrote about lack of proof and heavy workload as reasons why faculty fail to follow up on cheating. The departmental differences were also highlighted by Thomas and De Bruin as a genuine issue that hinders the enforcement of the school's policy. Nonetheless, the chi-square analysis showed no significant difference between respondents from different departments in their reactions to cheating.
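For readers unfamiliar with the procedure, the sketch below shows how such a chi-square test of independence is computed; the departments, reaction categories, and counts are made up for illustration and are not the study's data.

# Illustrative chi-square test of independence (made-up counts, not study data).
# Rows: hypothetical departments; columns: reaction when cheating is detected.
from scipy.stats import chi2_contingency

observed = [
    # [failed the test/assignment, warned the student, did nothing]
    [12, 5, 3],  # Business (illustrative)
    [10, 7, 4],  # Humanities (illustrative)
    [9, 6, 2],   # Math/Science (illustrative)
]

chi2, p, df, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.3f}")
# A p-value above .05, as in the study, would indicate no significant
# association between department and reaction to cheating.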

Desired support to lower cheating. The selections made by respondents for

additional safeguards against cheating revealed that faculty (a) do not have the safeguards available, (b) are unaware that the safeguards are already available through their institution, (c) do not use some of the available safeguards because they are unaware or unsure of how those safeguards can deter or detect dishonesty, or (d) lack the commitment or desire to safeguard their courses.

The survey respondents were asked which additional safeguards they would

employ if they were available. The answers, in rank order starting with the most desired safeguard, were (1) plagiarism detection software, like Turnitin.com (50%), (2) password-protected exams (35.6%), (3) secure exam browser lockdown (32.7%), (4) at-home webcam computer proctoring (31.7%), and (5) off-campus proctored testing centers (17.3%).

Other safeguards mentioned by faculty were (1) a different version of the test for each student, (2) a limited time frame for completion, reducing the time available to cheat once the test has started, (3) changing the test or generating random test questions, (4) a large database of questions, (5) structuring the class in ways that greatly reduce cheating, (6) multiple, smaller assignments that ask for written explanations, and (7) exams that require higher-order thinking and application rather than recall of information.

Focus group members added that off-site proctored testing and webcam-proctored testing are desirable methods, but the cost of use is deemed too high and deters faculty from using those options. They would like to see the cost lowered.

Perceptions of acceptance of institutional measures to prevent cheating.

Faculty were asked “To what degree do instructional college faculty perceive the

acceptance of the use of institutional measures to prevent online cheating?” Survey

respondents rated faculty’s support of institutional policies with a mode of 5 (very high)

and a median of 4 (high). One indicator that the policy is accepted is that faculty widely publish the integrity policy in their syllabi. Another indicator of acceptance is enforcement of the policies by taking action when a student is caught cheating. While

the action by the faculty varies, they indicated that their action included giving the

student a failing grade for the exam or assignment. Institutions that have an institutional

policy in place are likely to include the steps to follow once cheating is detected. Focus

group members were not confident about the handling of cases that were referred to the

dean. Pincus and Schmelkin (2003) stressed the importance of clarity of institutional

policies and steps required by faculty. When faculty feel that they lack support from

administration, they will be less likely to take enforcement seriously (Correa, 2011).

Conclusions

Speculation regarding cheating in online classes prompted the Obama Administration to pressure institutions to increase their efforts to authenticate that students are indeed doing the required work (Higher Education Opportunity Act, 2008).

Accreditation within higher education depends on adherence to policies, which include

specific language about dishonesty online. The policy statement of the SACS, one of the

accrediting bodies used in Florida, provided guidelines in this regard, which include the

use of proctored environments for examinations and verification of the students’ identity

(SACS, 2010). This research study sought to find out how online faculty perceive the incidence of cheating and to what extent they take action when cheating is detected. The

evidence on whether cheating is more common in the online environment than face-to-face is

inconclusive (e.g., Grijalva et al., 2010; Klor de Alva, 2011; Krsak, 2007; Watson &

Sottile, 2010). Cheating online is an ongoing problem, however, and institutions often

have integrity policies in place, which provide guidelines on how to proceed once

cheating is detected. Participants in this study indicated that the faculty handbook is

commonly where they find out about such guidelines. The problem is that not everyone is

aware of the guidelines, and there are variations between departments in the enforcement of

institutional policies. The research study showed that when there is evidence of cheating,

most faculty fail the student for the particular exam or assignment. Cheating is sometimes

ignored because of bureaucratic red tape or the time it takes to follow through with the

institutional procedures.

Plagiarism was identified as the type of cheating that is most commonly detected

by respondents. There are many safeguards available to protect course integrity, and plagiarism detection software, like Turnitin, is already available in some Learning Management Systems. The software is not widely used by respondents in this study because of lack of familiarity, mistrust of the technology, or sparse availability of the tools, which impedes the efforts of the faculty. There appears to be a lack of knowledge by

faculty about safeguards that are available and their functionality. Lastly, part-time

instructors may not be aware of their responsibility to take action.

On-campus proctored testing environments are utilized more frequently than off-

campus testing centers or webcam proctoring, although the use is limited. Faculty

recognize the additional protection proctoring offers, but they have not shown

commitment to its use. Moreover, some have expressed concern about the additional cost

the student has to carry. Other faculty no longer base their grades on high-stakes exams, or they are unaware of dishonest practices and the variations of cheating.

Implications

This mixed-method study confirmed that online students cheat and that many

faculty lack resources and commitment to actively combat cheating. Based on the results

of the study, it can be concluded that uniform college-wide enforcement of the

institutional integrity policy may clear up confusion for full-time and part-time faculty.

Increased administrative efforts may also help to shift the direction; these efforts should include explaining the importance of enforcement and providing professional development opportunities that teach faculty about the availability and use of safeguards.

These united efforts by administration and faculty may help to decrease the level of

dishonesty, thereby avoiding scrutiny from the accrediting bodies. The reputation of the

institutions will likely improve when it becomes widely known that the institution has

high standards and expectations and is serious about the integrity of its courses.

Limitations

The limitations of this study are as follows:

1. The study was conducted at community colleges, where the results may differ from those that would be obtained at a university. Faculty at these institutions differ, for

example, in their contractual obligations and their salaries, which may be linked to their

level of commitment. The student population they work with is different not only in size,

but perhaps also in their level of preparedness.

2. The researcher was limited by the required protocol in regard to reaching out

to the faculty. The participants were contacted by the administrators from the online

department at their respective colleges. Fowler (2009) recommended phone follow-up if

participation was low after the email invitations were sent.

3. Possible contention between administration and faculty could have influenced

the decision to participate. Faculty may not feel supported by administrators due to, for

example, tensions between faculty, administrators, unions and boards. The requests to

participate in the survey were sent out by administrative liaisons who may have elicited

suspicion or apathy.

4. Faculty may have participated in other surveys and may have felt a sense of

survey overload.

5. The survey required a 20- to 30-minute time commitment, which may have deterred some invitees. Making the questions shorter and more concise and eliminating some questions would help reduce the completion time. For example, the question about where paraphrased information was accessed could be eliminated, as it did not provide critical information. The question regarding what constitutes cheating should be presented as one question, thereby allowing the respondent to read each item only once and select multiple answers.

6. The invitation letter was lengthy as it followed the required template and

contained required IRB approval forms. This method was not in line with Sue and

Ritter's (2007) suggestion to keep invitation letters short and inviting. Participants were offered an incentive for participation, but the incentive may have gone unnoticed because it was mentioned within the lengthy participation letter. Sue and Ritter (2007) suggested the use of a flashing banner, which would focus the reader's attention immediately and increase interest.

7. The population was not randomly selected, making generalization

questionable. According to Fowler (2009), the sample should be randomly selected so

conclusions can be generalized to the rest of the population. Respondents were solicited

through the department of instructional technology at their respective institutions.

8. The low response rate resulted in a small sample size, which may have

influenced the trends. Donmoyer (2008) asserted that online surveys have unique

challenges, which may result in problems with generalizability and, in turn, problems

with reliability due to low response. In some instances, it was not possible to find trends or draw conclusions because certain questions pertained only to those respondents whose answers routed them to a follow-up question, thereby shrinking the pool of

respondents even further.

9. The survey was a modified version of the original AIS, and so the reliability data could not be confirmed as being the same for both versions. The researcher might have improved the quality of the data analysis by testing the survey for reliability with a selected group of volunteer college instructors who would be excluded from the actual study and then administering it again a month later to measure the degree of consistency (see the sketch following this list).

10. As suggested by Fowler (2009), respondents may have been concerned about

the level of anonymity due to the nature of some of the survey questions. Fowler called

this an interference, which potentially caused errors in the results.

11. The results of a study conducted in Florida may be different than results of a

similar study in a different state.

12. Because the survey questions were delivered via Google Forms, an online

survey delivery program, participants did not have an opportunity to ask questions, which

may have led to misinterpretation of the items on the survey and perhaps inherent bias

and distortions in self-reported data.

13. There may be a potential for bias on the part of the researcher, who is a

faculty member at one of the schools that was used for the survey. Fowler (2009)

mentioned that the execution of a survey can lead to bias.

14. Due to a technical glitch, some initial responses were not properly recorded.

15. Finally, a limited number of safeguards was discussed in the research.
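As noted in limitation 9, one simple way to quantify test-retest reliability is to correlate each pilot volunteer's scores across the two administrations. The sketch below uses invented pilot scores purely to illustrate the computation; none of these values come from the study.

# Illustrative test-retest reliability check (invented pilot scores).
# Each position holds one volunteer's total survey score at each administration.
from scipy.stats import pearsonr

first_run  = [42, 37, 55, 48, 51, 39, 44, 60]  # initial administration
second_run = [40, 39, 53, 50, 49, 41, 45, 58]  # same volunteers, a month later

r, p = pearsonr(first_run, second_run)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
# An r close to 1.0 suggests the modified survey yields consistent scores;
# a low r would signal that the modifications compromised reliability.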

Recommendations for Future Studies

Future studies need to address the effectiveness of the different safeguards by

testing them and collecting longitudinal data on their impact. The implications of

cheating in the online environment span across different areas, such as credibility of the

institution and jeopardized accreditation. It is important to continue the research on the

extent of cheating and the efforts to combat it.

Studying the enforcement of institutional policies will help determine whether their impact on cheating is favorable. The following data should be collected and analyzed: the distribution of such policies, the clarity of the required steps, and the implications for faculty who do not adhere to the policies.

A comparative study between disciplines can help clarify whether attitudinal differences among faculty and students play a role. Examining other demographic differences, such as the number of years teaching in higher education, will help determine whether faculty tenure impacts the rigor with which steps are taken to reduce cheating.

Several safeguards that were recommended by other researchers were not

discussed in this study, such as the use of synchronous online class sessions, critical

thinking activities and exams, limits on exam times and comparison of writing samples.

A future inquiry into the effectiveness of those safeguards may give faculty a more focused approach in their efforts to combat cheating.

This study should be replicated with a larger number of colleges in order to

increase the size of the sampled population and boost the representativeness and

generalizability.

Increased efforts to research the areas of deficiency that compromise online course integrity, combined with the implementation of uniform measures against cheating, should help decrease skepticism about the authenticity of those courses.

References

Baron, J., & Crooks, S. M. (2005). Academic integrity in web based distance education. TechTrends, 49(2), 40. doi:10.1007/BF02773970

Bedford, D. W., Gregg, J. R., & Clinton, M. S. (2009). Implementing technology to prevent online cheating: A case study at a small southern regional university (SSRU). Journal of Online Learning and Teaching, 5(2), 230–238.

Bedford, D. W., Gregg, J. R., & Clinton, M. S. (2011). Preventing online cheating with
technology: A pilot study of remote proctor and an update of its use. Journal of
Higher Education Theory and Practice, 11(2), 41–58.

Black, E., Greaser, J., & Dawson, K. (2008). Academic dishonesty in traditional and online classrooms: Does the “media equation” hold true? Journal of Asynchronous Learning Networks, 12(3–4), 23–30.

Boehm, P., Justice, M., & Weeks, S. (2009). Promoting academic integrity in higher
education. The Community College Enterprise, 15(1), 45–61.

Brent, E., & Atkisson, C. (2011). Accounting for cheating: An evolving theory and
emergent themes. Research in Higher Education, 52, 640–658.
doi:10.1007/s11162-010-9212-1

Broward College. (n.d.-a). Quick view: Broward College. Retrieved from www.broward.edu/discover/Documents/Quick%20View%20Guide.pdf

Broward College. (n.d.-b). Student support services. Retrieved from www.broward.edu/catalog/20122013%20BC%20Catalog/student%20rights%20and%20responsibilities.pdf

Brown, B. S., Weible, R. J., & Olmosk, K.E. (2010). Business school deans on student
academic dishonesty: A survey. College Student Journal, 44(2), 299–308.

Bruner, J. S. (1960). The process of education. Cambridge, MA: Harvard University Press.

Chapman, K. J., Davis, R., Toy, D., & Wright, L. (2004). Academic integrity in the
business school environment: I'll get by with a little help from my friends.
Journal of Marketing Education, 26(3), 236–249.
doi:10.1177/0273475304268779

Chase, A. E. (2010). Academic dishonesty in online courses: The influence of students' characteristics, perception of connectedness, and deterrents (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (3405540)

Cole, M. T., & Swartz, L. B. (2013, February). Understanding academic integrity in the
online learning environment: A survey of graduate and undergraduate business
students. Paper presented at the ASBBS Annual Conference, Las Vegas, NV.

Correa, M. (2011). Academic dishonesty in the second language classroom: Instructorsʼ


perspectives. Modern Journal of Language Teaching Methods, 1(1), 65–79.

Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson Education.

Creswell, J. W. (2008). Mixed methods research. In L. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 527–530). Thousand Oaks, CA: SAGE Publications, Inc.

Davern, M. E. (2008). Representative sample. In P. J. Lavrakas (Ed.), Encyclopedia of survey research methods (pp. 721–723). Thousand Oaks, CA: SAGE Publications, Inc.

Davis, S. F., Drinan, P., & Gallant, T. B. (2009). Cheating in school: What we know and
what we can do. Malden, MA: Wiley-Blackwell.

Devlin, M., & Gray, K. (2007). In their own words: A qualitative study of the reasons
Australian university students plagiarize. Higher Education Research and
Development, 26(2), 181–198. doi:10.1080/07294360701310805

Donmoyer, R. (2008). Generalizability. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 372–373). Thousand Oaks, CA: SAGE Publications.

DuPree, D., & Sattler, S. (2010). McCabeʼs Academic Integrity Survey Report 2010.
Retrieved from Texas Tech University Ethics Center website:
www.depts.ttu.edu/provost/qep/docs/McCabe_Academic_Integrity_Report_Cover
.pdf

Eckles, B. T. (2010). A study of faculty and academic administratorsʼ perceptions of academic dishonesty in higher education in relation to the learning organization for which they work (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (3455094)

Farnsworth, K., & Bevis, T. B. (2006). A fieldbook for community college online
instructors. Washington, DC: Community College Press.

Fowler, F. J. (2009). Survey research methods (4th ed.). Thousand Oaks, CA: SAGE
Publications.

Gallant, T. B., & Drinan, P. (2008). Toward a model of academic integrity institutionalization: Informing practice in postsecondary education. Canadian Journal of Higher Education, 38(2), 25–43.

Grijalva, T. C., Nowell, C., & Kerkvliet, J. (2010). Academic honesty and online courses.
College Student Journal, 40(1), 180.

Gross, E. R. (2011). Clashing values: Contemporary views about cheating and plagiarism
compared to traditional beliefs and practices. Education, 132(2), 435–440.

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey methodology. Hoboken, NJ: John Wiley and Sons Inc.

Guernsey, L. (2001, April 26). For those who would click and cheat. New York Times.
Retrieved from www.nytimes.com

Harkins, A. M., & Kubik, G. H. (2010). Ethical cheating in formal education. On The
Horizon, 18(2), 134–146. doi:10.1108/10748121011050487

Hart, L., & Morgan, L. (2010). Academic integrity in an online registered nurse to
baccalaureate in nursing program. Journal of Continuing Education in Nursing,
41(11), 498–505. doi:10.3928/00220124-20100701-03

Heckler, N. C., Rice, M., & Hobson Bryan, C. (2013). Turnitin systems: A deterrent to
plagiarism in college classrooms. Journal of Research on Technology in
Education, 45(3), 229–248.

Higher Education Opportunity Act, Pub. L. No. 110-315, § 495 (2008).

Hollinger, R. C., & Lanza-Kaduce, L. (2006). Academic dishonesty and the perceived
effectiveness of countermeasures: An empirical survey of cheating at a major
public university. NASPA Journal, 33, 292–306.

Hudd, S. S., Apgar, C., Bronson, E. F., & Lee, R. G. (2009). Creating a campus culture of
integrity: Comparing the perspectives of full- and part-time faculty. Journal of
Higher Education, 80(2), 146–177. doi:10.1353/jhe.0.0039

Jones, D. L. R. (2011). Academic dishonesty: Are more students cheating? Business Communication Quarterly, 74(2), 141–150. doi:10.1177/1080569911404059

Jung, I. Y., & Yeom, H. Y. (2009). Enhanced security for online exams. IEEE Transactions on Education, 52(3), 340–349. doi:10.1109/TE.2008.928909

Kaczor, B. (2007, September 26). Nearly 2 dozen Florida State athletes accused of
cheating. USA Today. Retrieved from www.usatoday.com

Kelley, K., & Bonner, K. (2005). Digital text, distance education and academic
dishonesty: Faculty and administrator perceptions and responses. JALN, 9(1), 43–
52.

King, C. G., Guyette, R. W., & Piotrowski, C. (2009). Online exams and cheating: An
empirical analysis of business students’ views. The Journal of Educators Online,
6(1), 1–11.

Klor de Alva, J. (2011, June 19). For-profit learning is always cheaper; and other myths.
Chronicle of Higher Education. Retrieved from www.chronicle.com

Kohlberg, L. (1981a). The meaning and measurement of moral development (Vol. XIII).
Worcester, MA: Clark University Press.

Kohlberg, L. (1981b). The philosophy of moral development: Moral stages and the idea
of justice (Vol. 1). San Francisco, CA: Harper and Row Publishers.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, Aʼs, praise, and other bribes. New York, NY: Houghton Mifflin Co.

Krsak, A. (2007). Curbing academic dishonesty in online courses. In Proceedings of TCC-Teaching Colleges and Community Worldwide Online Conference 2007 (pp. 159–170). Honolulu, HI.

Kwong, T., Ng, H., & Mark, K. (2010). Students’ and faculty’s perception of academic
integrity in Hong Kong. Campus-Wide Information Systems, 27(5), 341–355.
doi:10.1108/10650741011087766

Lessig, L. (2008). Remix: Making art and commerce thrive in the hybrid economy. New
York, NY: Penguin Press.

Lieber, R. (2012). Student perceptions of faculty use of cheating deterrents. Journal of Academic Ethics, 10, 327–333. doi:10.1007/s10805-012-9170-7

LoSchiavo, F., & Shatz, M. (2011). The impact of honor code on cheating in online
courses. MERLOT, 7(2), 179–184.

Mayhew, M. J., Hubbard, S. M., Finelli, C. J., Harding, T. S., & Carpenter, D. D. (2009).
Using structural equation modeling to validate the theory of planned behavior as a
model for predicting student cheating. Review of Higher Education, 32(4), 441–
468.

McCabe, D., Trevino, L. K., & Butterfield, K. D. (1999). Academic integrity in honor
code and non-honor code environments: A qualitative investigation. Journal of
Higher Education, 70(2), 211–234.

McCabe, D. L., Butterfield, K. D., & Trevino, L. K. (2012). Cheating in college: Why
students do it and what educators can do about it. Baltimore, MD: Johns Hopkins
University Press.

McCabe, D. L., & Trevino, L. K. (1997). Individual and contextual influences on academic dishonesty: A multicampus investigation. Research in Higher Education, 38(3), 379–396.

Merkle, D. M. (2013). Nonresponse bias. In P. J. Lavrakas (Ed.), Encyclopedia of survey research methods (pp. 532–534). Thousand Oaks, CA: SAGE Publications, Inc. doi:10.4135/9781412963947.n340

Miller, A., Shoptaugh, C., & Wooldridge, J. (2011). Reasons not to cheat, academic-integrity responsibility, and frequency of cheating. Journal of Experimental Education, 79(2), 169–184. doi:10.1080/00220970903567830

Mills, W. A. (2010). Academic dishonesty in online education (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (3437631)

Mirza, N., & Staples, E. (2010). Webcam as a new invigilation method: Studentsʼ
comfort and potential for cheating. Journal of Nursing Education, 49(2), 116–
119. doi:10.3928/01484834-20090916-06

Moeck, P. G. (2002). Academic dishonesty: Cheating among community college students. Community College Journal of Research and Practice, 26(6), 479–491. doi:10.1080/02776770290041846

Morgan, D. (2006). Focus group. In V. Jupp (Ed.), The SAGE dictionary of social research methods (pp. 122–124). SAGE Publications, Inc.

Moten, J., Fitterer, A., Brazier, E., Leonard, J., & Brown, A. (2013). Examining online
college cyber cheating methods and prevention measures. Electronic Journal of e-
Learning, 11(2), 139–146.

Multon, K., & Coleman, J. (2010). Coefficient alpha. In N. J. Salkind (Ed.), Encyclopedia
of research design (pp. 160–164). Thousand Oaks, CA: SAGE Publications, Inc.

Nitko, A. J., & Brookhart, S. M. (2011). Educational assessment (6th ed.). Boston, MA:
Pearson Education, Inc.

Palm Beach State College. (2013a). Institutional research and effectiveness. Retrieved
from www.palmbeachstate.edu/ire/documents/acadmgmt/graduates_latest.pdf

Palm Beach State College. (2013b). Palm Beach State College 2013–2014 student
handbook. Retrieved from www.palmbeachstate.edu/catalog/documents/
studenthandbook2013-14.pdf

Palm Beach State College. (n.d.). Fast facts. Retrieved from www.palmbeachstate.edu/
crm/publications/fast-facts.aspx

Parry, M. (2009). Online educators wonʼt be forced to spy on students, new rules say.
Chronicle of Higher Education, 55(39), A19.

Patel, A., Bakhtiyari, K., & Taghavi, M. (2011). Evaluation of cheating detection
methods in academic writings. Library Hi Tech, 29(4), 623–640.
doi:10.1108/07378831111189732

Patnaude, K. A. (2008). Faculty perceptions regarding the extent to which the online
course environment affects academic honesty (Doctoral dissertation). University
of Houston, Houston, TX. ProQuest Dissertations and Theses database. (3323556)

Pincus, H. S., & Schmelkin, L. P. (2003). Faculty perceptions of academic dishonesty: A multidimensional scaling analysis. Journal of Higher Education, 74(2), 196–209. doi:10.1353/jhe.2003.0017

Pinto, R. (2010). Mixed methods design. In N. Salkind (Ed.), Encyclopedia of research design (pp. 813–819). Thousand Oaks, CA: SAGE Publications Inc.

Prince, D. J., Fulton, R. A., & Garsombke, T. W. (2009). Comparisons of proctored versus non-proctored testing strategies in graduate distance education curriculum. Journal of College Teaching and Learning, 6(7), 51.

Roach, R. (2001). Safeguarding against online cheating. Black Issues in Higher Education, 18(8), 92.

Rodgers, J. (2012, June 8). AFA discovered cheating by comparing online, final exams.
Gazette. Retrieved from www.gazette.com

Santa Fe College. (n.d.-a). Information. Retrieved from www.sfcollege.edu/about/

Santa Fe College. (n.d.-b). Santa Fe Community College rules manual. Retrieved from www.dept.sfcollege.edu/rules/studentcodeofconduct.pdf

Schmelkin, L. P., Gilbert, K., Spencer, K. J., Pincus, H. S., & Silva, R. (2008). A multidimensional scaling of college students' perceptions of academic dishonesty. Journal of Higher Education, 79(5), 587–607. doi:10.1353/jhe.0.0021

Scott, M., & Lyman, S. (1968). Accounts. American Sociological Review, 33(1), 46–62.

Sendag, S., Duran, M., & Fraser, M. R. (2012). Surveying the extent of involvement in
online academic dishonesty (e-dishonesty) related practices among university
students and the rationale students provide: One university’s experience.
Computers in Human Behavior, 28, 849–860. doi:10.1016/j.chb.2011.12.004

Shaw, C. (2004). Academic dishonesty in traditional and online courses as self reported
by students in online courses (Doctoral dissertation). Retrieved from ProQuest
Dissertations and Theses database. (3120331)

Short, S. (2006). Focus groups: Focus group interviews. In E. Perecman & S. R. Curran
(Eds.), A handbook for social science field research: Essays and bibliographic
sources on research and design methods (pp. 104–117). Thousand Oaks, CA:
SAGE Publications, Inc. doi:10.4135/9781412973427

Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2012). Teaching and learning
at a distance: Foundations of distance education (5th ed.). Boston, MA: Pearson.

Sloan Consortium. (n.d.). Class differences: Online education in the United States, 2010.
Retrieved from www.sloanconsortium.org/publications/survey/class_differences

Southern Association of Colleges and Schools. (2010). Distance and correspondence education. Retrieved from www.sacscoc.org/pdf/Distance%20and%20correspondence%20policy%20final.pdf

Spaulding, M. (2009). Perceptions of academic honesty in online vs. face-to-face classrooms. Journal of Interactive Online Learning, 8(3), 183–198.

Staats, S., Hupp, J. M., Wallace, H., & Gresley, J. (2009). Heroes donʼt cheat: An
examination of academic dishonesty and students' views on why professors donʼt
report cheating. Ethics and Behavior, 19(3), 171–183.
doi:10.1080/10508420802623716

Stearns, S. A. (2001). The student-instructor relationship's effect on academic integrity. Ethics and Behavior, 11(3), 275–285. doi:10.1207/S15327019EB1103_6

Stephens, J. M., Young, M. F., & Calabrese, T. (2007). Does moral judgment go offline
when students are online? A comparative analysis of undergraduatesʼ beliefs and
behaviors related to conventional and digital cheating. Ethics and Behavior,
17(3), 233–254. doi:10.1080/10508420701519197

Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat:
Frequency and type of academic dishonesty in the virtual classroom. Online
Journal of Distance Learning Administration, 7(3), n.p.

Sue, V. M., & Ritter, L. A. (2007). Conducting online surveys. [Online book]. SAGE
Publications, Inc. doi:10.4135/9781412983754.

Sykes, G., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664–670.

Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage.

Thakkar, M. (2012). A qualitative analysis of college students' perceptions of academic integrity on campus. Academy of Educational Leadership Journal, 16, 81–88.

Thomas, A., & De Bruin, G. (2012). Student academic dishonesty: What do academics
think and do, and what are the barriers to action? African Journal of Business
Ethics, 6(1), 13–24. doi:10.4103/1817-7417.104698

Turner Dille, E. (2011). A multi-institutional investigation into cheating on tests in college online courses (Doctoral dissertation). University of South Carolina, South Carolina. (3488362)

Ullah, A., Xiao, H., Lilley, M., & Barker, T. (2012). Using challenging questions for
student authentication in online examination. International Journal for
Infonomics, 5(3/4), 631–639.

Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in
online courses? Online Journal of Distance Learning Administration, 8(1), n.p.

Western Cooperative for Educational Telecommunications. (n.d.). WCET learn. Retrieved from www.wcet.wiche.edu/learn/student-authentication

Williams, S., Tanner, M., Beard, J., & Hale, G. (2012). Academic integrity on college
campuses. International Journal for Educational Integrity, 8(1), 9–24.

Witherspoon, M., Maldonado, N., & Lacey, C. (2012). Undergraduates and academic
dishonesty. International Journal of Business and Social Science, 3(1), 76–86.

Zou, J. J. (2011, September 4). With cheating only a click away, professors reduce the
incentive. The Chronicle of Higher Education. Retrieved from
www.chronicle.com

Appendix A

McCabe Academic Integrity Survey 2010: Screen Shot of Faculty Survey



From “McCabeʼs Academic Integrity Survey Report 2010,” by D. DuPree and S. Sattler,
2010. Copyright 2003 by Don McCabe, Texas Tech University Ethics Center website:
www.depts.ttu.edu/provost/qep/docs/McCabe_Academic_Integrity_Report_Cover.pdf.
Adapted with permission.

Appendix B

Modified Academic Integrity Survey




Academic Environment
Please tell us about the academic environment at your institution. Please note that
all responses will be part of the aggregated data and no individual responses will
be released or identified with any individual.
1. How would you rate the following? (Very low / Low / Medium / High / Very high / No opinion)

o The severity of penalties for cheating in online classes at your institution
o The average student's understanding of the college's policies concerning cheating in online classes
o Student support of these policies
o Faculty support of these policies
o The effectiveness of these policies

2. When, if at all, in your online courses do you discuss with students your policies concerning the following? (Which applies best: On syllabus or course outline / In individual assignments / At start of semester / Do not discuss / Not relevant / Other)

o Plagiarism
o Permitted and prohibited group work or collaboration
o The proper citation or referencing of sources
o Falsifying/fabricating research data
3. Please note the primary sources from which you have learned about the
academic integrity policies at your institution (Check all that apply).

o Faculty orientation program


o Faculty handbook
o Department chair
o Other faculty
o Students
o Deans or other administrators
o Publicized results of judicial hearings
o College catalog
o I have never really been informed about campus policies concerning
student cheating
o Other:

4. How frequently do you think the following occur in the online courses at your institution? (Never / Very seldom / Seldom/sometimes / Often / Very often / No opinion)

o Plagiarism on writing assignments
o Students inappropriately sharing work in group assignments
o Cheating during tests or examinations

5. How often, if ever, have you seen a student cheat during an online test or
examination at your institution?

o Never
o Once
o A few times
o Several times
o Many times

6. If you answered anything other than Never to Question 5, please answer the
following question. If you were convinced, even after discussion with the student,
that a student had cheated on a major test or assignment in your online course,
what would be your most likely reaction? (Check all that apply)

o Reprimand or warn the student


o Lower the student’s grade
o Fail the student for the test or assignment
o Fail the student for the course
o Require student to retake test/redo assignment
o Report student to the Dean of Students
o Report student to your Chair/Director or Dean
o Do nothing about the incident
o Other:

7. Have you ever ignored a suspected incident of cheating in one of your courses
for any reason?

o Yes
o No

If you answered Yes, did any of the following influence your decision? (Check all
that apply)

o Lacked evidence/proof
o Cheating was trivial/not serious
o Lack of support from administration
o Student is the one who will ultimately suffer
o Didn’t want to deal with it; system is so bureaucratic
o Not enough time
o Other:

8. Have you ever referred a suspected case of cheating to your Chair, Dean, or
anyone else?

o Yes
o No

If you answered Yes, how satisfied were you with the way the case was handled?

o Very satisfied
o Satisfied
o Neutral
o Unsatisfied
o Very unsatisfied

Specific Behaviors
9. Students have different views on what constitutes cheating and what is acceptable behavior. We would like to ask you some questions about specific
behaviors that some students might consider cheating. This is a two-part question.
In part one, please mark how often, if ever, you have observed or become aware
of a student in your class engaging in any of the following behaviors during the
last three years. If a question does not apply to any of your courses, please check

the “Not Relevant” column. For example, if you do not use tests/exams, you
would check “Not Relevant” for questions related to tests/exams. In part 2, you
will be asked the same questions, but this time you will mark how serious you
think each type of behavior is.
Part 1: How often, if ever, have you observed or become aware of a student in your class engaging in any of the following behaviors during the last three years? (Never / Once / More than once / Not relevant)

o Fabricating or falsifying a bibliography in an online assignment
o Working on an online assignment with others when the instructor asked for individual work
o Getting questions or answers on an online test from someone who has already taken the test
o Helping someone else cheat on an online test
o Copying from another student during an online test with his or her knowledge
o Using digital technology (such as text messaging) to get unpermitted help from someone during an online test or assignment
o Paraphrasing or copying a few sentences from a book, magazine, or journal (not electronic or Web-based) without footnoting them in a paper s/he submitted in an online class
o Turning in a paper in an online class from a “paper mill” (a paper written and previously submitted by another student) and claiming it as his/her own work
o Using an electronic/digital device as an unauthorized aid during an exam
o Turning in a paper copied, at least in part, from another student’s paper, whether or not the student is currently taking the same online course
o Using a false or forged excuse to obtain an extension on a due date or delay taking an online exam
o Turning in work done by someone else in an online class
o Cheating on a test in an online class in any other way
Part 2: How serious do you think each type of behavior is? (Not cheating / Trivial cheating / Moderate cheating / Serious cheating)

The same 13 behaviors listed in Part 1 are rated on this scale.

10. If you indicated in Question 9 that students have paraphrased or copied material from a written electronic source without citing it in one or more of your courses, please tell us how you believe they accessed this material:

o Internet or other electronic means only


o Hard (paper) copies or sources only
o Primarily Internet or other electronic means
o Primarily hard (paper) copies of sources
o Have observed/suspected both methods pretty equally

11. Have you ever offered an online test or exam at your institution?

o Yes
o No

12. If you have answered Yes to Question 11, have you ever observed a student
who: (Check all that apply)

o Collaborated with others during an online test or exam when not


permitted?
o Used notes or books on a closed book online test or exam?
o Received unauthorized help from someone on an online test or exam?
o Looked up information on the Internet when not permitted?

13. How strongly do you agree or disagree with the following statements? (Disagree strongly / Disagree / Not sure / Agree / Strongly agree)

o Cheating in online classes is a serious problem at this institution
o Our student judicial process is fair and impartial
o Students in online classes should be held responsible for monitoring the academic integrity of other students
o Faculty members are vigilant in discovering and reporting suspected cases of academic dishonesty in their online classes
14. What safeguards do you employ to reduce cheating in your online courses?
(Check all that apply)

o None. I do not use any special safeguards in my courses


o Use the Internet, or software such as Turnitin.com, to detect or
confirm plagiarism
o Provide information about cheating/plagiarism on course outline or
assignment sheet
o Change exams regularly
o Hand out different versions of an exam
o Discuss my views on the importance of honesty and academic
integrity with my students
o Remind students periodically about their obligations under the
institution’s academic integrity policy
o Closely monitor students taking a test/exam
o On-campus proctored testing center
o Off-campus proctored testing center
o At-home webcam computer proctor
o Password protected exams
o Secure exam browser lockdown
o Other:

15. What additional safeguards would you employ to reduce cheating in your
online courses, if they were available? (Check all that apply)

o Plagiarism detection software, like TurnItIn.com


o On-campus proctored testing center
o Off-campus proctored testing center
o At-home webcam computer proctor
o Password protected exams

o Secure exam browser lockdown


o Other:

Demographics

16. How many years have you been teaching at the college level?

o 0-2
o 3-7
o 8-12
o 13 or more

17. Gender?

o Male
o Female

18. In which of the following areas is your primary teaching responsibility?

o Arts
o Business
o Communication/Journalism
o Engineering
o Humanities
o Math or Science
o Nursing/Health professions
o Social and behavioral sciences
o Other:

Focus Group: The researcher will invite 8 focus group members for a one hour
conversation about the survey questions. If you are interested in joining the focus
group, please add your contact information to this link. Your information cannot
be traced back to your survey answers.
https://docs.google.com/forms/d/1Z_zK5e4ryLjktEBUzLCysWXPmcjRnTN1BrU
pglJphQU/viewform

Thank you for your participation! Please click to enter into the sweepstakes for a
chance to win a $25 Amazon giftcard https://docs.google.com/forms/d/1Gxqi-
F2IfpLEk4IFbaHn4SzgULSXVtXYMukHJVW6J7Y/viewform


From McCabeʼs Academic Integrity Survey Report 2010, by D. DuPree and S. Sattler,
2010. Reprinted with permission. Retrieved from Texas Tech University Ethics Center
website: www.depts.ttu.edu/provost/qep/docs/
McCabe_Academic_Integrity_Report_Cover.pdf

Appendix C

Chi Square Test of the First 42 Questions



The survey was delivered via Google Forms, and a technical glitch prevented the first 42 respondents from selecting multiple answers, as the question instructed. Instead, those respondents could select only one answer for Question 9a and one answer for Question 9b. Chi-square test results indicate that this glitch did not significantly influence the respondents' answers when compared to submissions made after the error was corrected.

Chi-Square Results: Question 9 Comparisons, First 42 Respondents vs. Remainder

q9a1 - How often a student in my class fabricated or falsified a bibliography in an online assignment: chi-square = 5.62, df = 3, N = 121, p = 0.13
q9b1 - How often a student in my class worked on an online assignment with others when the instructor asked for individual work: chi-square = 5.24, df = 3, N = 121, p = 0.16
q9c1 - How often a student in my class got questions or answers on an online test from someone who had already taken a test: chi-square = 2.62, df = 3, N = 118, p = 0.45
q9d1 - How often a student in my class helped someone else cheat on an online test: chi-square = 1.27, df = 3, N = 115, p = 0.74
q9e1 - How often a student in my class copied from another student during an online test with his or her knowledge: chi-square = 4.06, df = 3, N = 114, p = 0.26
q9f1 - How often a student in my class used digital technology (such as text messaging) to get unpermitted help from someone during an online test or assignment: chi-square = 4.19, df = 3, N = 114, p = 0.24
q9g1 - How often a student in my class paraphrased or copied a few sentences from a book, magazine or journal (not electronic or Web-based) without footnoting them in a paper s/he submitted in an online class: chi-square = 2.34, df = 3, N = 113, p = 0.50
q9h1 - How often a student in my class turned in a paper from a “paper mill” (a paper written and previously submitted by another student) and claimed it as his/her own work: chi-square = 3.32, df = 3, N = 116, p = 0.35
q9i1 - How often a student in my class used an electronic/digital device as an unauthorized aid during an exam: chi-square = 2.28, df = 3, N = 117, p = 0.52
q9j1 - How often a student in my class turned in a paper copied, at least in part, from another student’s paper, whether or not the student is currently taking the same online course: chi-square = 3.41, df = 3, N = 115, p = 0.33
q9k1 - How often a student in my class used a false or forged excuse to obtain an extension on a due date or delay taking an online exam: chi-square = 1.25, df = 3, N = 110, p = 0.74
q9l1 - How often a student in my class turned in work done by someone else in an online course: chi-square = 0.74, df = 3, N = 111, p = 0.86
q9m1 - How often a student in my class cheated in any other way: chi-square = 2.23, df = 3, N = 113, p = 0.53
