Abstract
Predictors of persistence previously found useful in distinguishing successful from unsuccessful distance learners were assembled
in a 60-item survey. The survey was completed by 259 learners enrolled in associate's, bachelor's, or master's level distance learning
courses in accounting, business administration, information services, criminal justice, nursing, management, and education. The
survey measured variables related to academics, environment, motivation, and hope as predictors of persistence, where persistence
was defined as continuing beyond the first three classes in one of the three degree-granting programs. Persisters (N = 209) tended to
score higher on environmental measures of Emotional Support, Self-efficacy, and Time and Study Management than non-persisters
(N = 50). Surprisingly, high scores on a measure of Learner Autonomy (independent learning) were associated with non-persistence in
the online programs. The findings were interpreted in the context of the cohort model used in the online programs attended by the
students surveyed in the study.
The explosive growth of distance education during the 1990s profoundly altered and reshaped postsecondary
institutions. The footsteps down the hallowed halls of academia are rapidly being replaced with keystrokes zipping
through cyberspace. Online learning experienced expansive growth during the 1990s that has carried over into the 21st
century, and the World Wide Web has quickly expanded the ways postsecondary institutions provide a
quality education. The growth rate of online learning has been phenomenal during the last few years, with online
enrollment growing faster than traditional student enrollment (Oblinger & Hawkins, 2005) and gaining recognition as
the postsecondary wave of the future (Klett, 2004). The turn-of-the-century perception of online learning as a potential
watershed for colleges and universities has quickly been realized (Kiernan, 2003; Leonard & Guha, 2001; Meyer, 2002).
This considerable growth has led institutions to experience a demographic change as significantly older students are
entering and re-entering postsecondary education. Such changes have brought a greater emphasis on the particular
needs, characteristics, lifestyles, motivations, enrollment patterns, and unique roles and responsibilities of these adult
learners (Kilgore & Rice, 2003). This growing population of adult learners views online learning as a flexible and
valuable option now available to them as they balance demanding work, family, and other responsibilities.
As a result of these changing patterns, institutions that have traditionally catered to 18–25-year-old students have
responded by trying to eliminate or minimize barriers that affect electronic learners. With dropout rates of distance
education students estimated to be 10–20% higher than those for in-person learning (Carr, 2000; Diaz, 2002), greater
effort has been expended to retain the online learners, since this represents a significant financial and professional
concern to these institutions. Berge and Huang (2004) note, “Retention of students at the course, program, or degree
level has been a timeless concern of educators. The lack of retention, or dropout, has historically challenged
educational systems and seems to be especially acute in distance learning” (p. 2). As Rovai (2002) stated, “Persistence,
that is, the behavior of continuing action despite the presence of obstacles, is an important measure of higher education
program effectiveness” (p. 1).
1. Theoretical framework
Institutions and researchers have shown an increased concern for the readiness potential of online learners (Kerr,
Rynearson, & Kerr, 2006). A review of the research indicates that successful and persistent electronic learners need
specialized learning skills that are not as essential for the traditional brick-and-mortar student. The concern
regarding the retention of online students is clearly stated by Bocchi, Eastman, and Swift (2004):
There are a variety of reasons for this higher attrition rate [as compared to traditional brick-and-mortar rates],
including students' feelings of isolation, difficulty adjusting to a self-directed approach, and their finding that
such courses are more rigorous than anticipated and that faculty members and students lack experience with
online learning. (p. 246).
In the specialized learning environment of web-based instruction, the ability to work independently, sustain one's
focus on personal and academic goals, maintain motivation in spite of conflicting commitments, and demonstrate
computer proficiency are among some of the qualities and life approaches that increase successful completion. A
review of research related to a number of these factors is necessary to understand the purpose of this study and future
directions of persistence studies of online learners.
Much research of online learning targets the preparedness construct, including various aspects of preparation that
may be associated with, and predictive of, retention and persistence among online learners. It is recognized that
successful students prone to persist are prepared students, and that preparation comes in different forms. Previous
academic records have been the traditional standard, often supplemented by national standardized examinations,
application essays and personal interviews. More recent research suggests that students thriving in an all-Internet
learning environment appear to possess qualities and approaches to life in addition to being academically prepared,
including attributes such as student engagement, self-directedness, computer self-efficacy, self-discipline, time
management skills, motivation and commitment, goal- and relevancy-orientations, and individualistic learning styles
(Diaz, 2000; Howell, Williams, & Lindsay, 2003; Leasure, Davis, & Thievon, 2000; Meyer, 2002; Moore, Sener, &
Fetzner, 2006; Morris, Wu, & Finnegan, 2005; Parker, 2003).
Loubeau and Heil (2000) note that much of the research during the 1990s focused on the technical and computer
preparation of students, barriers that impact successful completion, and the development of screening tools to help
identify levels of preparedness for online learning. These areas were given attention because the online learning method
of instruction necessitates mature self-discipline and demonstrated competencies in learning and study strategies
(Romano, Wallace, Helmick, Carey, & Adkins, 2005; Sankaran, Sankaran, & Bui, 2000; Sizoo, Malhotra, & Bearson,
2003). Additionally, researchers have indicated that student characteristics, such as the level of student engagement,
independent learning style, self-motivation, and effective time management, influence student grades and retention in
online learners (Cauffman, 2000; Diaz, 2000; Morris, Finnegan, & Wu, 2005; Thiele, 2003).
Sizoo et al. (2003) believe students in a distance-learning format will need to be self-disciplined and have effective
learning skills. These researchers examined the learning strategies of in-class and DE learners and suggested sources
and methods of acquiring necessary skills that may be lacking. Overall, they believed perceived readiness (or the lack
thereof) has a bearing on persistence and retention issues of online learners. Along this line, Northrup, Russell, and Burgess
(2002) assessed 52 graduate students in an online master's program. They noted the essential importance of these
students' self-monitoring their progress in order to survive in the online courses.
Loomis (2000) investigated the relationship between individual study and learning styles with academic
performance. He gave the Learning and Study Strategies Inventory (LASSI) to 28 students in a Research Methods class
(Communication and Journalism department). By or shortly after the mid-term exam, five of the students had dropped
the course (82% retention rate). He identified several factors as contributors to academic performance in this online
course: time management, study skills (with particular emphasis on attitude and ability to identify main ideas), and the
ability to focus attention on assignments. Academically successful students effectively used study aids, but time
management skills correlated the strongest with the final grade.
Brenner (1997) analyzed students' cognitive styles in distance education courses at a community college. The
students were assessed for field dependence and field independence. The grades in the course did not reveal a statistical
difference between these two cognitive styles. However, Brenner did conclude that effective distance learners must
possess self-discipline, self-planning, and self-organization.
Osborn (2001) tested the reliability and validity of a brief assessment instrument to identify at-risk, Web-based, distance
education students. Among the variables that discriminated between completers and non-completers were study envi-
ronment, recognized as allocating a place and time to accomplish tasks related to the course, and computer confidence,
recognized as a level of comfort in approaching tasks that involve the use of a computer.
In Diaz and Cartnal's (1999) study comparing learning styles of online students (N = 68) with an equivalent on-campus
class (N = 40), the Grasha–Riechmann Student Learning Style Scales (GRSLSS) was utilized. The students enrolled in the
distance education class were significantly more independent learners than those comprising the on-campus class (p < .01).
However, in Aragon, Johnson, and Shaik's (2002) exploratory empirical study examining the relationship of learning style
and learning outcomes of students in online and face-to-face environments, no statistical difference was found between the
two groups. Among other instruments included in the research, the Independent Learner subscale of the GRSLSS was also
used. However, the sample size was much smaller (N = 19) in each group. Further investigation of the independent learning
preference as it relates to retention in E-learning is needed.
Environmental considerations are also deemed important factors in satisfaction and persistence of online students
(Lim, 2001). Substantial preparation and support are seen as important influences on student achievement outcomes
(House, 1999). Technical preparation, personal and familial support, financial and work concerns, and freedom to
engage and dedicate the necessary time and energy into succeeding in an online learning format all appear to be
environmental considerations influential to retention in web-based, distance education.
Non-traditional students differ from the traditional college-age students in some significant ways (Howell et al.,
2003). In writing about 32 trends affecting distance education, the authors believe students in distance education tend to
be practical problem solvers. They also believe students' “life experiences make them autonomous, self-directed, and
goal- and relevancy-oriented—they need to know the rationale for what they are learning. They are motivated by
professional advancement, external expectations, the need to better serve others, social relationships, escape or
stimulation, and pure interest in the subject” (Student/Enrollment Trends, Number 3). However, Howell et al. also note
that these students face many demands on various fronts, including time and scheduling, money, and long-term
commitment constraints. They also express that these students experience insecurity concerning their ability to succeed
in distance learning.
The practical and significant need for adequate technical preparation on the part of the learner is well documented
(Osika & Sharp, 2003). Osika and Sharp highlight the increasing demands placed on higher education students to not
only master course material, but also demonstrate effective use of technology. After surveying faculty, an inventory of
the minimum technical competencies expected of students in Web-based instruction was constructed. Summarizing the
technical needs of students, Lorenzetti (2003) states, “Online students' needs are different — they need different kinds
and intensities of support” (p. 1).
Loubeau and Heil (2000) focused on the readiness of health administration students for distance learning education,
specifically technology readiness. A 25-item survey was utilized as a measure of students' familiarity with Internet basics
and web browsing, use of e-mail, and discussion groups. The availability of computer access, on and off campus, was also
measured. Of the 98 students from whom the data were collected, 83% perceived some computer knowledge deficiencies,
with 39% seeing themselves as needing extensive computer training. The presenters stated three learning objectives. One
was the need to recognize the limitations of student preparedness; another was the need to identify specific computer skill
deficiencies in students; and the third was the need for remediation before beginning distance education.
Lorenzetti (2003) presents the concern for online students to have technological and computer support. The author
stresses, “Administrators [related to DE] need to fully consider the anxiety levels of students who are already nervous
about studying remotely” (p. 1). In her mind, if technological and computer support needs remain unmet, “students may
go away underserved — perhaps to another university, perhaps out of continuing education entirely” (p. 1).
Lim (2001) had a total of 235 adults enrolled in Web-based distance education at five institutions complete
questionnaires. She concluded that computer proficiency and self-efficacy had a positive effect on satisfaction and
seemed to give learners increased confidence in the ability to handle the academic demands of DE. The level of
computer proficiency was found to be a significant factor affecting satisfaction and future participation in this distance-
learning format. Learners with high computer self-efficacy tended to indicate greater satisfaction. The author believes
her study suggested that adult learners with higher levels of computer efficacy also tended to have a higher academic
self-concept.
Song, Singleton, Hill, and Koh (2004) believed their study of 76 graduate students' online learning experiences
indicated that most learners agreed that course design, learner motivation, time management, and
comfortableness with online technologies impact the success of an online learning experience. Participants
indicated that technical problems, a perceived lack of sense of community, time constraints, and the difficulty in
understanding the objectives of the online courses as challenges (p. 59).
In essence, support considerations and technological preparation are seen as vital.
As stated earlier, Osborn (2001) found computer confidence (defined as a level of comfort in approaching tasks that
involve the use of a computer), in addition to allocating a place and time to accomplish tasks related to the course, to be
a positive means of differentiating completers from non-completers in DE courses. In addition, Mollison (2000)
indicated finances and time are major concerns of adult students. He continued that candidates for online or
distance education courses should have the qualities necessary for completion: discipline and self-motivation.
The persistent online learner appears to need not only certain academic qualities and environmental support, but also
a high motivation level (Morris, Wu et al., 2005). Aviv (2004) presents various reasons distance learners appreciate the
online learning environment. The most frequent reasons specified included studies, career, weekday, family/work,
interaction, and online. For the most part, the learners appreciate the online learning environment for factors that are
associated with their own life situations and personal motivation.
In their article regarding academic persistence, Bird and Morgan (2003) particularly evaluated issues, themes, and
concerns of prospective adult distance education learners. Among others, motivation was listed as a key theme. The
motivational factors influential in persistence were clearly defined goals, belief in one's ability to achieve such goals,
and the anticipation of significant meaning when the goal is accomplished. In total, their survey of adult learners and
their enrollment decisions in DE uncovered six themes: fears, motivation, family support, academic preparedness,
suitability of programs, and identity change.
Academics and motivation are tied together in Visser, Plomp, Amirault, and Kuiper (2002). They studied the impact
of motivational intervention on 81 international students from 5 different continents and 22 countries. Finding that
motivational messages effectively increased the proportion of students who completed the courses as compared to
previous years, they implored instructional designers and instructors to understand motivational principles and use that
knowledge to help students overcome frequent motivational issues surrounding the distance learning community.
For his doctoral dissertation, Jamison (2003) focused on motivation-related variables when he surveyed 333 adult
community college students with a 54-item Web-based instrument. While focusing exclusively on motivation-related
variables such as goal activation, goal salience, goal alignment, emotional activation, and responsive environment, he
was successful in correctly classifying course completers and non-completers. The predictive model developed using
discriminant analysis was able to correctly classify student completion in more than 90% of the cases.
Leasure et al. (2000) conducted a comparison study of students in the nursing field. Student outcomes were
compared between students in traditional courses and those using Web-based learning technology. No significant
difference in examination scores or course grade was found between the two groups. They did find that the learners
who reported themselves as self-directed and able to avoid procrastination and maintain their own pace were the most
appreciative of the Web-based courses.
In addition to the above factors, Snyder's conception of hope was alleged to have promise as an additional variable
in the prediction of persistence (Snyder et al., 1991). Snyder and his colleagues have devised an adult dispositional
Hope scale that is derived from his hope model. Snyder (1995) defined hope as the “process of thinking about one's
goals, along with the motivation to move toward (agency) and the ways to achieve (pathways) those goals” (p. 355).
Hope as a construct was developed in the context of goal-setting behavior (Snyder et al., 1991). The concept of hope as
“individuals' perceptions regarding their capacities to (1) clearly conceptualize goals, (2) develop the specific strategies
to reach those goals (pathways thinking), and (3) initiate and sustain the motivation for using those strategies (agency
thinking)” (Snyder, Lopez, Shorey, Rand, & Feldman, 2003, pp. 122–123) is a strengths-based concept within the
emerging positive psychology field and might have promise as a predictor of retention.
Embodying some of the definition of hope, Kemp (2002) outlined a correlation study in which she correctly
classified 66% of the 121 students in the sample as either persisters or non-persisters. Three predictors of persistence
were used as independent variables: resilience, life events, and external commitments. The dependent variable was
persistence and defined as successful course completion. Successful course completers tended to score higher on three
skills relating to persistence: confidence to make the most of bad situations, ability to make things better, and
persistence at working through difficulties. Kemp concluded with the following thought, “Knowledge of students'
resiliency skills (and by extension their ‘risk quotient’) allows distance educators to target interventions to those most in
need. Understanding the conditions under which resiliency skills will generalize to differing activities in academic
contexts offers valuable possibilities for intervention and instructional strategies that may help students build both
competence and the necessary accompanying self-perceptions of competence” (p. 78).
3. Research question

The study asked whether students persisting in their respective online programs would have higher mean scores than non-persisters on the scales for Hope, Academics, Environment, and Motivation.
4. Method
4.1. Participants
The participant pool consisted of all students beginning enrollment in the Adult and Professional Studies online,
degree-completion programs of a growing university in the Midwest. These students were pursuing a degree at the
associate's, bachelor's, or master's level in a broad range of areas, such as accounting, business administration,
information services, criminal justice, nursing, management, and education. The web-based questionnaire was made
available to 407 students. Halfway through the first course in each program, students were given the opportunity
to participate in the study. Most received a small incentive of five bonus points for participating. The ultimate sample of
259 participants appeared to closely match the population of the university's online students. Of the complete sample
utilized, 209 were classified as persisters, and 50 were classified as non-persisters.
4.2. Instrumentation
The online survey designed for this study was a compilation of 60 items, the majority of which came from previously
published studies; most of the items listed in Table 1 were chosen from previously validated instruments.
Table 1
Cronbach alphas and corrected item–total correlations
Source  Response item  Alphas  Corrected item–total correlations
Hope scale .79
Pathways — 4 items .66
A I can think of ways to get out of a jam. .33
A There are lots of ways around any problems. .49
A I can think of many ways to get the things in life that are most important to me. .48
A Even when others get discouraged, I know I can find a way to solve the problem. .48
Agency — 4 items .70
A I energetically pursue my goals. .45
A My past experiences have prepared me well for my future. .40
A I've been pretty successful in life. .57
A I meet the goals that I set for myself. .54
Academics scale .74
Time and study management — 6 items .75
B I usually study where I can concentrate on my course work. .46
B I make good use of my study time in my courses. .60
B I find it hard to stick to a study schedule. (R) .46
B I have a regular place set aside for studying. .47
B I make sure I keep up with the weekly readings and assignments for my courses. .52
B I often find that I don't spend very much time on my courses because of other activities. (R) .55
Meta-cognitive self-regulation — 3 items .39
B When I become confused about something I'm reading in my course work, I go back and try to figure it out. .25
B I try to change the way I study in order to fit the course requirements and the instructor's expectations. .24
B When I study for my courses, I set goals for myself in order to direct my activities in each study period. .26
Learner autonomy — 5 items .50
C I prefer to work by myself on assignments in my courses. .27
C I learn a lot of content in my classes on my own. .33
C I feel very confident about my ability to learn on my own. .32
C I like classes where I can work at my own pace. .25
C When I don't understand something, I first try to figure it out for myself. .24
Environment scale .75
Computer/Internet self-confidence — 3 items .67
D I have real concerns about communicating electronically. (R) .52
E I sometimes wonder if I have sufficient computer keyboarding skills for doing online work. (R) .58
E I feel comfortable composing text on a computer in an online learning environment. .43
Fiscal support — 4 items .59
G I sometimes wonder if finances might delay or interfere with completing my degree. (R) .36
G I believe the current financial burden of my education will be worth it. .40
G I sometimes wonder if my education is really worth all the investment that I'll put into it. (R) .44
G I am satisfied that I will be able to meet my financial needs while I pursue my degree. .36
Emotional support — 3 items .43
G Overall, my support group of family and friends encourages me to complete my program of study. .20
G I sometimes wonder if I'll need more support than I am getting right now. (R) .34
G I have concerns that online learning will be a lonely experience. (R) .28
Motivation scale .73
Intrinsic goal orientation — 3 items .59
B I prefer course material that really challenges me so I can learn new things. .42
B I prefer course material that arouses my curiosity, even if it is difficult to learn. .50
B The most satisfying thing in my program of study is trying to understand the content as thoroughly as possible. .30
End goal orientation — 3 items .60
G I have comfortably arranged my life and activities around my desire to meet my educational goals. .40
F Learning helps me achieve challenging personal goals. .47
F I use learning as a vital resource in accomplishing my professional or personal goals. .46
Self-efficacy for learning and performance — 4 items .58
B I am certain I can understand the most difficult material presented in the readings for this degree program. .45
B I question if I can do an excellent job on the assignments and tests in this program. (R) .39
B I expect to do well in this program. .46
B Considering the difficulty of this program, the teachers, and my skills, I think I will do well in completing this degree. .42
Compliant learner — 6 items .75
F The instructor is the best person to monitor, evaluate, and determine how well I learn. .51
F The instructor helps me stay on task and meet course objectives. .44
F I do well on a course if I rely on the instructor. .34
F I rely on the instructor to assess my learning achievement. .49
F I know that the instructor can show me the best way to evaluate achievement of my learning goals. .64
F The instructor can plan my best learning approach for accomplishing training objectives. .52
(A) Snyder (1995); (B) Duncan and McKeachie (2005); (C) Riechmann and Grasha (1974); (D) McVay (2001); (E) Bernard et al. (2004);
(F) Martinez (2005); (G) Created for this project. (R) indicates reverse coding of the item response for scoring.
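Items marked (R) are reverse coded before scoring. The following is a minimal sketch of that step, assuming the eight-point response format described in the reliability analyses and assuming subscale scores are simple item sums (consistent with the scale means reported in Table 3); the column names and responses are hypothetical, not the study's data:

```python
import pandas as pd

# Hypothetical raw responses on an 8-point format (1 = strongly disagree ... 8 = strongly agree).
responses = pd.DataFrame({
    "emotional_1": [7, 6, 8],   # "Overall, my support group ... encourages me to complete my program of study."
    "emotional_2": [2, 5, 3],   # "I sometimes wonder if I'll need more support ..." (R)
    "emotional_3": [1, 4, 2],   # "I have concerns that online learning will be a lonely experience." (R)
})

reverse_coded = ["emotional_2", "emotional_3"]        # items marked (R) in Table 1
scored = responses.copy()
scored[reverse_coded] = 9 - scored[reverse_coded]     # on a 1-8 scale, reverse coding maps x to 9 - x

emotional_support = scored.sum(axis=1)                # subscale score as the sum of recoded item responses
print(emotional_support.tolist())
```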
The original Hope scale (Snyder, 1995) was composed of four items each for the agency and
pathways subscales. Babyak, Snyder, and Yoshinobu (1993) conducted confirmatory factor analysis to test the
psychometric properties of the Hope scale and concluded that a two-factor solution best represented the item responses.
Snyder reported acceptable reliability with Cronbach Alphas of .74 and .84, and test–retest reliabilities (3 to 10 weeks)
over .80. These two, four-item Hope subscales appeared to show appropriate internal consistency and temporal stability
for the purposes of the study.
Pintrich and McKeachie originally developed the Motivated Strategies for Learning Questionnaire (MSLQ) with 81
items, 6 motivation subscales, and 9 learning strategy scales (Duncan & McKeachie, 2005). It has been utilized
throughout the world and translated into multiple languages. Duncan and McKeachie state, “The MSLQ has proven to
be a reliable and useful tool that can be adapted for a number of different purposes for researchers, instructors, and
students” (p. 117). Duncan and McKeachie indicate that the 15 scales can be used together or singly. Internal
consistency estimates of reliability are within acceptable ranges with 12 of the scales having Cronbach Alphas of .68 or
above (items chosen to be included in this research were selected from scales with Alphas from .74 to .93).
The Grasha–Riechmann Student Learning Style Scales (GRSLSS) assesses six student-learning styles: Independent,
Dependent, Avoidant, Participant, Collaborative, and Competitive (Riechmann & Grasha, 1974). A 5-point Likert scale
format is used in the original GRSLSS, with 15 items for each scale and test–retest reliabilities ranging from .76 to .83
(N = 269).
Bernard, Brauer, Abrami, and Surkes (2004) developed a 38-item questionnaire to assess the achievement outcomes
of online learning success, with 13 of the items derived from McVay (2001). Factor analysis produced four scales in a
sample of 167 students prior to their beginning an online course, labeled Confidence in Prerequisite Skills, General
Beliefs about Online Learning, Self-management of Learning, and Desire for Interaction with Others. Six of the eight
items for the Confidence in Prerequisite Skills scale were initially used in this study. This scale had a Cronbach Alpha
of .79 and corrected item-total correlations from .35 to .63.
The Learning Orientation Questionnaire (LOQ) is a 25-item, online survey to measure how adults generally approach
learning and performance (Martinez, 2005). Scores are used to identify gaps between the potential learning and
performance proficiency and actual learning and performance proficiency. Several universities and corporations have used
it, and it has been field tested by over 15,000 subjects. In a sample of 1277 subjects, the internal-consistency reliability
coefficient for the utilized subscale was .79. A correlation of .85 was obtained during a test–retest reliability analysis.
5. Reliability analyses
Four scales were scored from the survey — Academics, Environment, Motivation and Hope. A fifth scale, Compliant
Learner, was added into the survey and analyzed with the subscales. All survey items used the eight-point response scale
provided for the Hope scale items. While Table 1 groups the items according to their scales, the survey
used in the research presented the items in a different order to minimize response bias due to similar content in adjacent items.
Item-total correlations were used to maximize the internal consistency reliability of the original scales comprised of
items selected from the literature to measure each of the constructs. The original scales produced Cronbach Alphas of
.79 for the eight-item Hope scale, .72 for the 18-item Academics scale, .75 for the 14-item Environment scale, .68 for
the 12-item Motivation scale, and .69 for the eight-item Compliant Learner scale. While these Alphas suggest moderate
internal consistency reliabilities for the original scales, inspection of the correlations between the scale items and the
total scale scores revealed several items with near-zero item-total correlations. These items were removed, and the
Alphas were recalculated for the remaining items in each scale. If the Alpha improved or remained the same, scores for
the shortened scales replaced the original scores and were used to test the study's hypothesis.
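As a sketch of the item-screening procedure just described (Cronbach's alpha plus corrected item-total correlations, dropping near-zero items and recomputing alpha), a minimal Python version follows; the simulated responses, the .20 cutoff, and the item names are illustrative assumptions, not values taken from the study:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for respondents (rows) by items (columns)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    return pd.Series({c: items[c].corr(items.drop(columns=c).sum(axis=1)) for c in items.columns})

# Simulated responses: 259 respondents, 14 items driven by one common factor,
# plus two pure-noise items standing in for weak items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(259, 1))
scale = pd.DataFrame(latent + rng.normal(size=(259, 14)),
                     columns=[f"item{i:02d}" for i in range(1, 15)])
scale["item15"] = rng.normal(size=259)
scale["item16"] = rng.normal(size=259)

alpha_original = cronbach_alpha(scale)
r_it = corrected_item_total(scale)

# Drop items with near-zero corrected item-total correlations, recompute alpha,
# and keep the shortened scale only if alpha improves or stays the same.
shortened = scale[r_it[r_it >= .20].index]
alpha_shortened = cronbach_alpha(shortened)
if alpha_shortened >= alpha_original:
    scale = shortened
print(f"alpha before = {alpha_original:.2f}, after = {alpha_shortened:.2f}")
```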
The eight-item Hope scale was retained in its entirety with Alpha = .79. The Academics scale was reduced from 18 to
14 items with Alpha increasing from .72 to .74. The Environment scale was reduced from 14 to 10 items, and Alpha
remained the same at .75. The Motivation scale was reduced from 12 to 10 items, and Alpha increased from .68 to .73.
The Compliant Learner subscale was reduced from 8 to 6 items, and the Alpha increased from .69 to .75. These changes
reduced the original 60-item survey to the 48 items shown in Table 1. Table 1 shows the items comprising these
subscales, the Cronbach Alphas for each scale and subscale, and the corrected item-total correlations.
In addition to the main scales shown in Table 1, subscales were scored for the constructs of hope, academics,
environment, and motivation. The Hope subscales were Pathways and Agency; Academics subscales were Time and
Study management, Meta-Cognitive Self-regulation, and Learner Autonomy; Environment subscales were Computer/
Internet confidence, Fiscal Support, and Emotional Support; and Motivation subscales were Intrinsic Goal Orientation,
End Goal Orientation, and Self-efficacy for Learning and Performance.
The highest Cronbach Alpha for the Academics subscales was .75 for Time and Study Management, followed by .50
for Learner Autonomy, and .39 for Meta-cognitive Self-regulation. Cronbach Alphas for the Environment subscales of
Computer/Internet, Fiscal Support, and Emotional Support were .67, .59, and .43, respectively. The Motivation
subscales ranged from a high of .60 for the End Goal subscale to a low of .58 for Self-efficacy, with the Intrinsic Goal
Orientation subscale of .59 in the middle. The Hope scale Cronbach Alphas were .66 for the Pathways subscale and .70
for the Agency subscale.
6. Procedures
The university uses a cohort model of grouping students together into classes and offers various options for completing
an associate's, bachelor's, or master's degree in the areas of accounting, business administration, information services,
criminal justice, nursing, management, and education. The university provides both on-site and online options for all but
one of the programs included in the data collection. Instructors use the face-to-face delivery mode for the on-site option,
with the Internet as the delivery mode for the online option. Course descriptions, learning objectives, and numerous
assignments are the same in either option. Yet, the online programs incorporate asynchronous discussions that are carried
out over the period of a week at a time; thus, the focus and activities for the on-site programs are quite often different from
those of the online programs. Students choose the program that best fits their needs and – provided an on-site course is
available nearby – may opt to take a course in a different delivery format or change programs altogether.
During the data collection period, once a student was enrolled in a program of study and consented to participate in the
research project, he or she was asked to complete the survey in an online, web-based format that was accessible using
standard Web browsers. Responses were captured into an output file and converted to SPSS. The students learned of the
opportunity to participate in the study via a workshop document that outlined the required activities expected of the
students, as well as the optional activity of participating in this research project. The URL for the web-based questionnaire
was indicated in the respective workshop documents. This document appeared approximately midway through the first
course in each student's respective program. While just a few of the programs refrained from offering bonus points for
completing the survey, five bonus points were offered to the majority of the students, and references to this effect were
included in both the workshop document and the Informed Consent document (i.e., the first Web page of the questionnaire).
7. Results
7.1. Demographics
Table 2 presents the demographic characteristics of the sample. Females outnumbered males two to one, and the
majority of the students were between 30 and 39 years of age. More than 80% were Caucasian with Black/Non-
Hispanic making up the next largest group. Nearly one quarter of the sample was taking classes at the master's level,
slightly under a third at the associate's level, and just over 45% was pursuing courses leading to a bachelor's degree.
Nearly two-thirds of the students had no prior experience with online classes, and almost 70% reported being employed
full-time while pursuing their degree.
Table 2
Demographics
Numbers of learners (N = 259) Percent
Gender
Male 87 33.6
Female 172 66.4
Age
20–29 65 25.1
30–39 119 45.9
40–49 60 23.6
50 and over 13 5.0
Unknown 1 .4
Race/Ethnicity
American Indian 3 1.2
Asian or Pacific Islander 5 1.9
Black, non-Hispanic 32 12.4
Caucasian 214 82.6
Hispanic 4 1.5
Unknown 1 .4
Academic pursuit
Associates 80 30.9
Bachelors 117 45.2
Masters 62 23.9
Status
No employment/full-time learner 7 2.7
Full-time employee/full-time learner 181 69.9
Full-time employee/part-time learner 54 20.8
Part-time employee/part-time learner 5 1.9
Part-time employee/full-time learner 10 3.9
Part-time learner 3 1.2
Online experience
One or more course online 98 37.8
First time online learner 161 62.2
Out of the total sample of 259 participants, 30.9% were pursuing their associate's degree, and 23.9% were pursuing
their master's degree. The largest number of participants was pursuing a bachelor's degree (45.2%). The university's
continuation rate was 80.0%, 84.5%, and 72.6% for the online associate's, bachelor's, and master's programs,
respectively. While the combined continuation rate for the three programs was 80.7%, those pursuing their bachelor's
degree achieved a noticeably higher persistence rate than did those pursuing a master's degree. Table 3 shows the
means and standard deviations of the four scales in the persister and non-persister groups.
7.2. Analysis
An independent-samples t test was conducted to evaluate the hypothesis that students persisting in their respective
programs would have higher mean scores than non-persisters on the scales for Hope, Academics, Environment, and
Motivation. Using a one-tailed test, only the Environment means showed a statistically significant difference between the
persistence group and the non-persistence group, t(257) = 1.86, p = .03. Non-significant differences were obtained for the
Academics means (t(257) = .64, p = .26), Motivation means (t(257) = 1.14, p = .13), and Hope means (t(257) = −1.13, p = .13).
Table 3
Statistics of students by persistence and total scale scores
Persisters (N = 209) Non-persisters (N = 50)
Hope
M 53.82 54.76
SD 5.83 5.15
Academics
M 97.82 96.89
SD 9.86 10.52
Environment
M 61.87 58.84
SD 10.30 10.39
Motivation
M 61.67 61.02
SD 5.31 6.84
To evaluate whether the 12 subscales would have higher mean scores for persisting students, independent-samples
t tests were conducted, and the one-tailed results are presented in Table 4.
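A minimal sketch of the one-tailed independent-samples t test reported above, using SciPy, follows; the group scores are simulated with roughly the Environment means and standard deviations from Table 3, so the output only illustrates the procedure rather than reproducing the study's statistics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated Environment scale scores for the two groups (illustrative only).
persisters = rng.normal(loc=61.9, scale=10.3, size=209)
non_persisters = rng.normal(loc=58.8, scale=10.4, size=50)

# SciPy's default p value is two-tailed; for the directional hypothesis
# "persisters score higher," halve it when t is positive
# (recent SciPy versions also accept alternative="greater").
t, p_two_tailed = stats.ttest_ind(persisters, non_persisters)
p_one_tailed = p_two_tailed / 2 if t > 0 else 1 - p_two_tailed / 2
df = len(persisters) + len(non_persisters) - 2
print(f"t({df}) = {t:.2f}, one-tailed p = {p_one_tailed:.3f}")
```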
To complete the statistical analysis of persistence, a discriminant analysis was performed to assess prediction of group
membership. Discriminant analysis can be used to predict membership on the basis of quantitative predictor variables. The
12 subscales were entered together as the independent variables or predictors, and the dependent variable (also known as
the criterion variable) was group membership as either persisters or non-persisters. The overall Wilks' Lambda indicates
significant differences in means on the predictors between the two groups, Λ = .91, χ²(12, N = 259) = 23.39, p = .025. This
indicates that there are differences between the persisters and non-persisters across the 12 predictor variables in the given
population.
Table 4
T-test for equality of means
Subscale t df Significance
Hope scale
Pathway − 1.20 257 p = .12
Agency − .64 257 p = .26
Academics
Time and study management 1.78 257 p = .04⁎
Meta-cognitive self-regulation .49 257 p = .31
Learner autonomy − 2.00 257 p = .03
Environment
Computer/Internet self-confidence 1.16 257 p = .13
Fiscal support 1.08 257 p = .14
Emotional support 2.30 257 p = .01⁎⁎
Motivation
Intrinsic goal orientation .05 257 p = .48
End goal orientation .46 257 p = .32
Self-efficacy 2.00 257 p = .02⁎
Compliant learner − .52 257 p = .33 a
⁎p < .05, one-tailed. ⁎⁎p < .01, one-tailed.
a Levene's test for equality of variances was significant, p = .04.
A significant lambda means “one can reject the null hypothesis that the two groups have the same mean
discriminant function scores and conclude the model is discriminating” (Garson, n.d., Wilk's lambda, ¶ 1). Table 5
provides a summary of the predictor variables ordered by absolute size of their discriminant loadings. Also known as
structure coefficients or structure correlations, Garson notes that discriminant loadings are the correlations between a given
subscale and the discriminant scores that are associated with the discriminant function. These coefficients in Table 5
indicate the relative ability of each of these subscales to discriminate between persisting and non-persisting students. Based
on these coefficients, the subscales Emotional Support, Self-efficacy, and Time and Study Management were the most
influential in the discrimination, as was Learner Autonomy with a negative relationship. The resulting canonical
correlation of .30 indicates 9% of the variance in the criterion is accounted for by the 12 predictor variables.
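As a check on these figures, for a two-group analysis with a single discriminant function, Wilks' lambda and the canonical correlation are linked by the standard relationship below; the reported values are mutually consistent:

\[ \Lambda = 1 - R_c^{2} = 1 - (.30)^{2} = 1 - .09 = .91 \]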
The means on the discriminant function demonstrate a difference between the two groups. The persisters (M = .15) had a
significantly larger mean than did the non-persisters (M = −.64). The discriminant function correctly classified 81.5% of
the students in terms of successfully continuing in their chosen academic program past their first three courses, a percentage
essentially equal to the base-rate prediction (209/259 = 80.7%). As a means to address the cross-validation issue and take
into account chance agreement, a kappa coefficient was computed. Kappa is an alternative statistic that “evaluates the
percent of cases correctly classified except that it corrects for chance agreement” (Green & Salkind, 2003, p. 283). The
kappa coefficient can range from −1 to +1, with values greater than 0 indicating better-than-chance prediction and values
less than 0 indicating poorer-than-chance prediction. A value of 0 indicates chance-level prediction, and a value of 1
indicates perfect prediction. The kappa obtained for this sample was .11, a positive but rather
small value. Finally, to assess how well the classification procedure might predict in a cross-validation sample, an estimate
of the percent of persisters/non-persisters correctly classified was conducted by using the leave-one-out technique. The
leave-one-out classification technique is a form of cross-validating the given classification table. Using this method, each
case is classified as a persister or non-persister using the discriminant function based on all cases except the given case.
Garson (n.d.) indicates that this gives a better estimate of what classification results would be in the population. Utilizing
this leave-one-out technique, a correct classification was made in 79.2% of the cases. Because these prediction rates were
essentially equal to the base rate of 80.7%, the findings suggest that the discriminant function, despite the four significant
predictors, was not a strong predictor of persistence.
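The discriminant-analysis steps reported above (fitting the function, obtaining structure coefficients, the classification table, Cohen's kappa, and leave-one-out cross-validation) can be sketched with scikit-learn. The predictor matrix and group labels below are random placeholders for the 12 subscale scores and the persister/non-persister coding, so the printed values will not reproduce the study's results:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(259, 12))                      # 12 subscale scores per student (placeholder)
y = rng.choice([0, 1], size=259, p=[0.19, 0.81])    # 1 = persister, 0 = non-persister (placeholder)

lda = LinearDiscriminantAnalysis().fit(X, y)

# Classification on the full sample, plus kappa, which corrects accuracy for chance agreement.
predicted = lda.predict(X)
accuracy = (predicted == y).mean()
kappa = cohen_kappa_score(y, predicted)

# Structure coefficients (discriminant loadings): correlation of each predictor with
# the scores on the single discriminant function.
discriminant_scores = lda.transform(X).ravel()
loadings = [np.corrcoef(X[:, j], discriminant_scores)[0, 1] for j in range(X.shape[1])]

# Leave-one-out cross-validation: each case is classified by a function fit to all other cases.
loo_predicted = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
loo_accuracy = (loo_predicted == y).mean()

print(f"accuracy = {accuracy:.3f}, kappa = {kappa:.2f}, leave-one-out accuracy = {loo_accuracy:.3f}")
```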
8. Conclusions
There appeared to be three major criteria differentiating retention in this sample. Successful students prone to persist
tended to score higher in Emotional Support, Self-Efficacy, and Time and Study Management. The level of perceived
emotional support accounted for a large portion of the environmental scale difference between persisters and non-
persisters. Having a supportive group of friends and family, and the comfort of knowing they were not alone in this
learning process, was significantly related to students' persistence. Those who also had high
self-efficacy for learning and performance had higher expectations to do well in their program and a strong sense of
their own personal ability to succeed in their new learning environment. An additional key factor of successful students
related to their time and study management. The persisters practiced good study habits, kept up with the weekly reading
and assignments, and managed their time and activities to a greater degree than did the non-persisters.

Table 5
Discriminant function structure matrix
Subscale Correlation
Emotional support 0.46
Self-efficacy 0.40
Learner autonomy −0.40
Time and study management 0.36
Pathway (hope) −0.24
Computer/Internet confidence 0.23
Fiscal support 0.22
Agency (hope) −0.13
Compliant learner −0.10
Academic self-regulation 0.10
End goal orientation 0.09
Intrinsic goal orientation 0.01
It is important to note there were no significant differences between persisters and non-persisters when considering the
Academics or Motivation scales as a whole. The findings in this research project did not specifically support previous
research that had found a significant relationship (Diaz & Cartnal, 1999; Jamison, 2003; Spitzer, 2000). However, other
researchers had not found significant differences as a result of these factors (Aragon et al., 2002; Brenner, 1997; DeTure,
2004; Loomis, 2000).
Three considerations are presented that could potentially guide and/or enhance an understanding of how findings in this
research should be interpreted, since some of the results were inconsistent with previous research. One such consideration
might be the timing of the survey in relation to a student's returning to academic studies. Tyler-Smith (2006) presents the
rationale that the multi-dimensional learning tasks of the first-time online learner and the potential for cognitive overload
can be factors in early dropout. Such an effect could presumably influence responses to questionnaire items as well. The
timing of the survey, occurring approximately halfway through the first course, coupled with the wording of certain questionnaire
items, is presumed to have some influence on the survey results.
Second, completing some of the items in the survey may have been difficult, since many took the survey only
three weeks into the program. For example, the three items comprising the Meta-cognitive Self-regulation subscale all
had wording involving practice or experience in the current class. This could reasonably account for a large degree of
variability in the responses and the lowest internal reliability of the subscales (Cronbach Alpha of .39, see Table 1). In
fact, the three subscales with the lowest Cronbach Alphas had all items, or a large percentage of the items, addressing
current experience or preparation for learning in this particular course, something the students would have minimally
experienced in only the first three weeks of the course. Therefore, it is uncertain whether the items themselves are
unreliable, or possibly irrelevant in light of the timing of the survey, or less-than-ideal items not worthy of inclusion.
Third, differential effects in the Compliant Learner and Learner Autonomy subscales could also have been more
pronounced. It seems reasonable to surmise that the Compliant Learner subscale could have been inflated due to the timing
of the survey. There is presumably much greater dependency on the individual instructor for direction, feedback, and
monitoring during the first few weeks of a course than after completing three courses and feeling more comfortable with
the instructional method and technology. A potentially stronger explanation may lie in the large percentage of
students who were re-entering academia a significant number of years after high school (Note: the mean age of
the sample of 259 students was 35.4 years with a range from 21 to 63 years of age). Due to this potential and the timing of
the survey, limited responses in the Learner Autonomy subscale and possibly inflated responses in the Compliant Learner
subscale should be considered.
The finding that Learner Autonomy did help to distinguish persisters from non-persisters in the discriminant analysis
(but in the direction opposite to that expected) is worth further consideration. There was, in fact, a negative trend noted, with an
overall tendency for students with lower learner autonomy scores to be in the persister group. In other words, students who
liked to think for themselves and were confident in their learning abilities with the preference to work alone on course
projects were not more likely to persist in their academic program but were, in fact, more likely to drop out. Rovai (2002)
gives an historical perspective on this issue, “Learner autonomy, that is, the concept of independence and self-direction, has
been a hallmark of adult education and an assumed characteristic of the non-traditional students enrolled in distance
education programs” (p. 12). While Diaz and Cartnal (1999) found that students who were enrolled in two distance
education classes were significantly more independent learners than those comprising the equivalent on-campus class,
Aragon et al. (2002) did not find a statistical difference between their two groups in terms of learner autonomy.
Surprisingly, the opposite effect was found for this sample.
A tentative explanation as to why students with higher Learner Autonomy scores did not persist as frequently is worth
considering. To believe that students with high Learner Autonomy (and, quite possibly academic prowess) would not
function as well in an online setting seems questionable in light of the historical view of successful completion of
distance education. Yet, Bean and Metzner (1985) somewhat address this potential when stating
Environmental variables are presumed to be more important for nontraditional students than academic variables,
which leads to the following results. When academic and environmental variables are both good (e.g., favorable
for persistence), students should remain in school, and when both are poor, students should leave school. When
academic variables are good but environmental variables are poor, students should leave school, and the positive
effects of the academic variables on retention will not be seen. When environmental support is good and academic
support is poor, students would be expected to remain enrolled—the environmental support compensates for low
scores on the academic variables…. Thus, for nontraditional students, environmental support compensates for
weak academic support, but academic support will not compensate for weak environmental support. (p. 491–492).
A possible explanation for the finding that environmental support was a meaningful determinant in persistence over
and above academics and motivation may also be a reflection of the influence of the cohort model as chosen by the
institution. Students assigned to cohorts begin a program and proceed through it together. The classes are intentionally
kept small, and a feeling of camaraderie is purposely fostered (A. Beekman, personal communication, November 5,
2006). Of the 28 cohorts represented, the average class size was 12.6 with a high of 18 and a low of 9. The cohort model
in an online learning design may be a particular match for these students in these particular programs at this particular
time due to the unique student and faculty characteristics, learning environment, program attributes, and external
attributes (Lorenzetti, 2003). In fact, Martinez (2003) recommends institutions explore the potential of mismatches
between learning orientation and online learning design in their retention efforts. The cohort model and emotional
support seemingly provided through this approach may, in part, account for the higher retention rate in this sample
when compared to retention rates of other institutions. Also, students with a greater need in this area would likely have
benefited more than those with stronger independent learner inclinations.
In consideration of the above, a final assertion is offered regarding the disparity in scores on the Learner Autonomy
subscale between persisters and non-persisters. The Learner Autonomy scale appeared to adequately measure what it
was intended to measure when one considers that the scale correlated highest with the Self-efficacy scale (r = .42,
p < .01) and lowest with the Compliant Learner (r = .07, p = .26), an expected finding. Dividing the Learner Autonomy
scale scores into quartiles and utilizing a crosstabulation, a comparison of the students scoring the lowest in learner
autonomy (N = 63) with the students scoring the highest in learner autonomy (N = 66) revealed an interesting
comparison. The proportion of non-persisters among students scoring low in learner autonomy was .14; the proportion
of non-persisters among students scoring high in learner autonomy was .27. Thus, a student was 1.93 times (.27/.14)
more likely to be a non-persister if he or she scored high rather than low in learner
autonomy. This does give some support to the explanation that learners high in
learner autonomy might become frustrated using this learning model. Consider the following: a new online student,
high in learner autonomy, is placed in a cohort of other people. Possibly entering the online learning format thinking it
will cater to freedom and independence, this student could be disappointed if not able to be on his or her own as had
been expected. Placed into an inter-dependent group, the student might be more prone to consider dropping out than
one lower in learner autonomy.
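The quartile comparison described above amounts to a crosstabulation and a ratio of non-persister proportions (.27/.14 ≈ 1.93). A minimal sketch follows, using simulated scores and persistence flags, so the exact proportions will differ from the study's and only the arithmetic is illustrated:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "learner_autonomy": rng.normal(loc=28.0, scale=4.0, size=259),   # placeholder subscale scores
    "persister": rng.choice([0, 1], size=259, p=[0.19, 0.81]),       # 1 = persister, 0 = non-persister
})

# Split learner autonomy into quartiles and crosstabulate against persistence.
df["quartile"] = pd.qcut(df["learner_autonomy"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
proportions = pd.crosstab(df["quartile"], df["persister"], normalize="index")

p_nonpersist_low = proportions.loc["Q1", 0]    # proportion of non-persisters, lowest-autonomy quartile
p_nonpersist_high = proportions.loc["Q4", 0]   # proportion of non-persisters, highest-autonomy quartile
print(f"risk ratio (high vs. low autonomy): {p_nonpersist_high / p_nonpersist_low:.2f}")
```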
9. Recommendations
In light of the diversity of findings – both in this study and in prior studies of online student outcomes – the
results should not be used to exclude or discourage potential students from becoming effective online learners. Rather,
administrators, academic counselors, and instructors should make efforts to develop a broad-based approach toward
identifying at-risk students and provide them with appropriate services, such as training opportunities, support, and
guidance. There still remains “the need for assessment to detect adult learners who are potentially at risk of failing in
their distance learning” (Lim, 2001, p. 43).
The present study appears to justify one central truth echoed by various other distance education retention researchers.
“The attrition process is undoubtedly a complex one. A theory that could fully explain every aspect of the attrition process
would contain so many constructs that it would become unwieldy if not unmanageable” (Kember, 1989, p. 279). Rather
than support the idea that only a few factors affect student persistence, it can be seen from the literature review and this
study that there are often many viable characteristics, reasons, and circumstances that are brought to bear when analyzing
persistence decisions. As Berge and Huang (2004) acknowledge, “Reviewing the research and theoretical literature has
shown the complexity and multi-dimensional nature of the retention phenomenon” (p. 11), and efforts would be well spent
further quantifying the extent and influence of these variables.
The analysis used 12 variables to account for 9% of the variance in explaining or predicting retention in online
learners (similar results were found by Dupin-Bryant, 2004). The set of factors that predicted student persistence among this
sample may not necessarily apply to other populations of distance learning students in other institutions. In light of the
complex nature of the adult learners, continued effort must be expended to explain more of the variance, both from the
standpoint of the characteristics and circumstances of the student, as well as the technology methods and institutional
factors that have bearing on student retention. Rovai (2002) states the reason for the continued and daunting task that
remains, “There is no simple formula that ensures student persistence. Adult persistence in an online program is a
complicated response to multiple issues. It is not credible to attribute student attrition to any single student, course, or
school characteristic. There are numerous internal and external factors that come into play, as well as interactions
between factors” (pp. 12–13).
The finding that learners high in independent thinking and autonomy were more prone to drop out of online learning
rather than persist was both surprising and unique. Past researchers of distance education and online learning had found
either a significant advantage in success for independent learners or no difference at all. That independent learners
were slightly more prone to drop out of online learning appears to bring into question the accepted axiom, “Successful DE
learners are independent learners.” Quite possibly, the method of distance learning can have a significant bearing. The
hypothesis that the cohort model could be a sustainer of compliant learners, while at the same time a contester of
independent learners, bears further study. Further research of institutions using the cohort model of online instruction is
warranted.
While this study sought to address weaknesses found in other multivariate studies, such as a limited range of measures
and the use of single items to measure broad concepts (see Berge & Huang, 2004), only moderate benefits were obtained.
Improving a number of items within various subscales could enhance the effectiveness of the questionnaire. Cronbach's
alpha coefficients for the main scales were acceptable, with a high of .79 for the Hope scale and a low of .74 for the
Academics scale. However, a few of the subscales showed markedly lower values: internal consistency for the
subscales ranged from a low of .39 to a high of .75. Adding items to the weaker subscales could prove beneficial, and
revised wording that takes the timing of the survey into account would presumably further the standardization of such measures.
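By way of reference (these are standard psychometric formulas rather than results reported above), the internal consistency of a k-item subscale and the projected reliability of a lengthened version of that subscale are given by Cronbach's alpha and the Spearman–Brown formula, respectively:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right), \qquad \alpha_{m} = \frac{m\,\alpha}{1 + (m-1)\,\alpha},

where \sigma_i^{2} are the item variances, \sigma_X^{2} is the variance of the subscale total, and m is the factor by which the number of comparable items is increased. Doubling a subscale with \alpha = .39, for example, would be projected to raise its reliability only to about .56, which suggests that item revision as well as item addition may be needed for the weakest subscales.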
In the short history of the online programs at the institution from which the sample was drawn, the graduation rate is
approximately 60%. Since the retention rate for this representative sample after three courses in the respective programs
was just over 80%, presumably another 20% or so of students drop out before completing the degree. For this institution,
then, Bauman's (2002) conclusion that most students drop out before finishing the first three courses does not entirely hold
(see also Willging & Johnson, 2004). It can reasonably be presumed that roughly half of the non-persisters drop out within
the first three courses and the other half drop out afterward. Based on this sample, the institution and others using
similar models of instruction may want to consider specialized efforts focused on retaining this “other half,” a significant
population of non-persisters, since such students have already demonstrated a reasonable degree of success in the program.
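To make the rough arithmetic explicit (using the approximate figures reported above, applied to a nominal cohort of 100 entering students):

100 \times (1 - .80) \approx 20 \ \text{leave within the first three courses}, \qquad 100 \times (.80 - .60) \approx 20 \ \text{leave afterward},

so the roughly 40 non-persisters per 100 entrants are split about evenly between early and later attrition.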
In fact, an institution's attrition management plan should provide different approaches and services for students who drop
out early in the program and for those who drop out later in the program sequence. A follow-up study of this sample could
provide further insights, and future researchers should consider the potential benefits of longitudinal research designs and
of incorporating multiple institutions.
References
Aragon, S. R., Johnson, S. D., & Shaik, N. (2002). The influence of learning style preferences on student success in online vs. face-to-face
environments [electronic version]. American Journal of Distance Education, 16(4), 227−243 (Retrieved March 12, 2005, from http://www.ed.uiuc.edu/hre/online/rp.htm).
Aviv, R. (2004). Why do students take online courses [electronic version]? Sloan-C View: Perspectives in Quality Online Education, 3(10), 5
(Retrieved November 16, 2004, from http://www.sloan-c.org/publications/view/v3n10/pdf/v3n10.pdf).
Babyak, M., Snyder, C. R., & Yoshinobu, L. (1993). Psychometric properties of the Hope Scale: A confirmatory factor analysis [electronic version].
Journal of Research in Personality, 27(2), 154−169 (Retrieved June 14, 2005, from EBSCOhost database).
Bauman, P. (2002). Student Retention: What you can control, and how [electronic version]. Distance Education Report, 6(16), 8 (Retrieved February
5, 2005 from FirstSearch database).
Bean, J. P., & Metzner, B. S. (1985, Winter). A conceptual model of nontraditional undergraduate student attrition. Review of Educational Research,
55(4), 485−540.
Berge, Z. L., & Huang, Y. P. (2004, May). A model for sustainable student retention: A holistic perspective on the student dropout problem with special
attention to E-learning. DEOSNEWS, 13(5) (Retrieved March 9, 2006 from http://www.ed.psu.edu/acsde/deos/deosnews/deosnews13_5.pdf).
Bernard, R. M., Brauer, A., Abrami, P. C., & Surkes, M. (2004). The development of a questionnaire for predicting online learning achievement
[electronic version]. Distance Education, 25(1), 31−47 (Retrieved June 16, 2005, from EBSCOhost database).
Bird, J., & Morgan, C. (2003, April). Adults contemplating university study at a distance: Issues, themes and concerns. International Review of
Research in Open and Distance Learning, 4(1) (Retrieved April 6, 2004, from http://www.irrodl.org/content/v4.1/bird_morgan.html).
Bocchi, J., Eastman, J. K., & Swift, C. O. (2004). Retaining the online learner: Profile of students in an online MBA program and implications for
teaching them [electronic version]. Journal of Education for Business, 79(4), 245−253 (Retrieved February 5, 2005 from FirstSearch
database).
Brenner, J. (1997). An analysis of students' cognitive styles in asynchronous distance education courses at a community college. Richlands, VA:
Southwest Virginia Community College. Retrieved March 15, 2004, from ERIC database.
Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students [electronic version]. The Chronicle of Higher
Education, 46(23) (Retrieved March 23, 2004, from http://chronicle.com/free/v46/i23/23a00101.htm).
Cauffman, J. (2000, July 11). Learning online some're [sic] off to college without leaving home [final edition]. The Patriot-News, D01 (Retrieved
February 19, 2004 from ProQuest database).
DeTure, M. (2004). Cognitive style and self-efficacy: Predicting student success in online distance education. American Journal of Distance
Education, 18(1), 21−38 (Retrieved February 12, 2005 from http://www.leaonline.com).
Diaz, D. P. (2000). Comparison of student characteristics, and evaluation of student success, in an online health education course. Unpublished
doctoral dissertation, Nova Southeastern University. Retrieved March 15, 2004, from http://www.alnresearch.org
Diaz, D. P. (2002, May/June). Online Drop Rates Revisited. Retrieved January 6, 2005, from http://technologysource.org/article/online_drop_rates_revisted/
Diaz, D. P., & Cartnal, R. B. (1999). Comparing student learning styles in an online distance learning class and an equivalent on-campus class [electronic
version]. College Teaching, 47(4), 130−135 (Retrieved October 1, 2005, from http://home.earthlink.net/~davidpdiaz/LTS/html_docs/grslss.htm).
Duncan, T. G., & McKeachie, W. J. (2005). The making of the motivated strategies for learning questionnaire. Educational Psychologist, 40(2),
117−128.
Dupin-Bryant, P. A. (2004). Pre-entry variables related to retention in online distance education [electronic version]. American Journal of Distance
Education, 18(4), 199−206 (Retrieved June 16, 2005 from EBSCOhost database).
Garson, G. D. (n.d.). Discriminant function analysis. Retrieved October 23, 2006, from http://www2.chass.ncsu.edu/garson/pa765/discrim.htm
Green, S. B., & Salkind, N. J. (2003). Using SPSS for Windows and Macintosh: Analyzing and understanding data (3rd ed.). Upper Saddle River, NJ:
Prentice Hall.
House, D. J. (1999). The effects of entering characteristics and instructional experiences on student satisfaction and degree completion: An
application of the input-environment-outcome assessment model. International Journal of Instructional Media, 26(4), 423−434.
Howell, S. L., Williams, P. B., & Lindsay, N. K. (2003). Thirty-two trends affecting distance education: An informed foundation for strategic planning
[electronic version]. Online Journal of Distance Learning Administration, 7(3) (Retrieved August 26, 2005, from http://www.westga.edu/~distance/
ojdla/fall63/howell63.html).
Jamison, T. M. (2003). Ebb from the Web: Using motivational systems theory to predict student completion of asynchronous Web-based distance
education courses [abstract]. Dissertation Abstracts International. A. The Humanities and Social Sciences, 64(02) (UMI No. AAT 3081396).
Kember, D. (1989). A longitudinal-process model of drop-out from distance education [electronic version]. The Journal of Higher Education, 60(3),
278−301.
Kemp, W. C. (2002). Persistence of adult learners in distance education. American Journal of Distance Education, 16(2), 65−81.
Kerr, M. S., Rynearson, K., & Kerr, M. C. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9(2), 91−105.
Kiernan, V. (2003, August 8). A survey documents growth in distance education in late 1990s [electronic version]. The Chronicle of Higher
Education, 49(48), A.28 (Retrieved March 3, 2004, from ProQuest database).
Kilgore, D., & Rice, P. (Eds.). (2003). Meeting the special needs of adult students. New directions for student services, Vol. 102.
Klett, L. (2004, March 9). Eduventures names online education the postsecondary wave of the future; forecasts U.S. student enrollment will top one
million in 2005. Retrieved March 14, 2004, from http://www.eduventures.com/about/press_room/03_09_04.cfm
Leasure, A. R., Davis, L., & Thievon, S. L. (2000, April). Comparison of student outcomes and preferences in a traditional vs. World Wide Web-
based baccalaureate nursing research course. Journal of Nursing Education, 39(4), 149−154.
Leonard, J., & Guha, S. (2001). Education at the crossroads: Online teaching and students' perspectives on distance learning [abstract]. Journal of
Research on Technology in Education, 34(1), 51−57.
Lim, C. K. (2001). Computer self-efficacy, academic self-concept, and other predictors of satisfaction and future participation of adult distance
learners. American Journal of Distance Education, 15(2), 41−51.
Loomis, K. D. (2000). Learning styles and asynchronous learning: Comparing the LASSI model to class performance. Journal of Asynchronous
Learning Networks, 4(1), 23−32.
Lorenzetti, J. P. (2003). Close the gaps before your students fall through. Distance Education Report, 7(19), 1−3.
Loubeau, P. R., & Heil, J. K. (2000). An evaluation of the readiness of health administration students for distance learning education. The 128th Annual
Meeting of APHA, Abstract #9011. Retrieved February 21, 2004, from http://alpha.confex.com/apha/128am/techprogram/paper_9011.htm
Martinez, M. (2003). High attrition rates in e-learning: Challenges, predictors and solutions. Retrieved October 15, 2006, from http://www.elearningguild.com/articles/abstracts
Martinez, M. (2005). Learning orientation questionnaire-interpretation manual. Retrieved October 6, 2005, from http://www.trainingplace.com/
source/research/LOQPKG-Manual2005.pdf
McVay, M. (2001). How to be a successful distance education student: Learning on the Internet. New York, NY: Prentice Hall.
Meyer, K. A. (2002). Quality in distance education [electronic version]. ASHE-ERIC Higher Education Reports, 29(4), 1−122.
Mollison, A. (2000, August 21). Colleges adjust as more older people seek knowledge [final edition]. Palm Beach Post (p. 1A). Retrieved February
19, 2004, from ProQuest database.
Moore, J. C., Sener, J., & Fetzner, M. (2006, July). Getting better: ALN and student success. Journal of Asynchronous Learning Networks, 10(3)
(Retrieved August 9, 2007 from http://www.aln.org/publications/jaln/v10n3/v10n3_6moore.asp).
Morris, L. V., Finnegan, C., & Wu, S. (2005). Tracking student behavior, persistence, and achievement in online courses. The Internet and Higher
Education, 8(3), 221−231.
Morris, L. V., Wu, S., & Finnegan, C. (2005). Predicting retention in online general education courses. American Journal of Distance Education, 19(1),
23−36.
Northrup, P., Russell, L., & Burgess, V. (2002). Learner perceptions of online interaction. Proceedings of the 2002 World Conference on Educational
Multimedia, Hypermedia & Telecommunications ERIC Document Reproduction Service No. ED477075.
Oblinger, D. G., & Hawkins, B. L. (2005). The myth about E-learning [electronic version]. EDUCAUSE Review, 40(4), 14−15 (Retrieved October
30, 2006, from http://www.educause.edu/apps/er/erm05/erm05411.asp).
Osborn, V. (2001). Identifying at-risk students in videoconferencing and Web-based distance education. American Journal of Distance Education, 15(1),
41−54.
Osika, E. R., & Sharp, D. P. (2003). Minimum technical competencies for distance learning students. Journal of Research on Technology in
Education, 34(3), 318−325.
Parker, A. (2003). Identifying predictors of academic persistence in distance education. United States Distance Learning Association (USDLA)
Journal, 17(1), 55−62. Retrieved December 29, 2005, from http://www.usdla.org/html/journal/JAN03_Issue/article06.html
Riechmann, S. W., & Grasha, A. F. (1974). A rational approach to developing and assessing the construct validity of a student learning style scales
instrument. The Journal of Psychology, 87, 213−223.
Romano, J., Wallace, T. L., Helmick, I. J., Carey, L. M., & Adkins, L. (2005). Study procrastination, achievement, and academic motivation in web-
based and blended distance learning. The Internet and Higher Education, 8(4), 299−305.
Rovai, A. P. (2002). In search of higher persistence rates in distance education online programs [electronic version]. The Internet and Higher Education,
6, 1−16.
Sankaran, S. R., Sankaran, D., & Bui, T. X. (2000, March). Effect of student attitude to course format on learning performance: An empirical study in
Web vs. lecture instruction. Journal of Instructional Psychology, 27(1) (Retrieved February 20, 2004, from FirstSearch database).
Sizoo, S., Malhotra, N. K., & Bearson, J. M. (2003). Preparing students for a distance learning environment: A comparison of learning strategies on
in-class and distance learners [abstract]. Journal of Educational Technology Systems, 31(3), 261−273.
Snyder, C. R. (1995, January). Conceptualizing, measuring, and nurturing hope. Journal of Counseling and Development, 73(3), 355−360.
Snyder, C. R., Harris, C., Anderson, J. R., Holleran, S. A., Irving, L. M., Sigmon, S. T., et al. (1991). The will and the ways: Development and
validation of an individual differences measure of hope [electronic version]. Journal of Personality and Social Psychology, 60, 570−585.
Snyder, C. R., Lopez, S. J., Shorey, H. S., Rand, K., & Feldman, L. (2003). Hope theory, measurements, and applications to school psychology
[electronic version]. School Psychology Quarterly, 18(2), 122−139.
Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics
[electronic version]. Internet and Higher Education, 7, 59−70.
Spitzer, T. M. (2000). Predictors of college success: A comparison of traditional and nontraditional age students. NASPA Journal, 38(1), 82−98
(Retrieved October 5, 2005, from http://publications.naspa.org/naspajournal/vol38/iss1/art11/).
Thiele, J. E. (2003, August). Learning patterns of online students. Journal of Nursing Education, 42(8), 364−366.
Tyler-Smith, K. (2006, June). Early attrition among first-time eLearners: A review of factors that contribute to drop-out, withdrawal and non-
completion rates of adult learners undertaking eLearning programmes. MERLOT Journal of Online Learning and Teaching, 2(2), 73−85
(Retrieved October 30, 2006, from http://jolt.merlot.org/documents/Vol2_No2_TylerSmith_000.pdf).
Visser, L., Plomp, T., Amirault, R., & Kuiper, W. (2002). Motivating students at a distance: The case of an international audience. Educational
Technology Research and Development, 50(2), 94−110.
Willging, P. A., & Johnson, S. D. (2004, December). Factors that influence students' decision to dropout of online courses. Journal of Asynchronous
Learning Networks, 8(4) (Retrieved August 9, 2007, from http://www.aln.org/publications/jaln/v8n4/v8n4_willging.asp).