Student Engagement, Self-Regulation, Satisfaction, and Success in Online Learning Environments
ScholarWorks
2020
This Dissertation is brought to you for free and open access by the Walden Dissertations and Doctoral Studies
Collection at ScholarWorks. It has been accepted for inclusion in Walden Dissertations and Doctoral Studies by an
authorized administrator of ScholarWorks. For more information, please contact [email protected].
Walden University
College of Education
Review Committee
Dr. Jennifer Courduff, Committee Chairperson, Education Faculty
Dr. Danielle Hedegard, Committee Member, Education Faculty
Dr. Jennifer Lapin, University Reviewer, Education Faculty
Walden University
2020
Abstract
Student Engagement, Self-Regulation, Satisfaction, and Success in Online Learning Environments
by
Doctor of Philosophy
Education
Walden University
March 2020
Abstract
success. Little research has been conducted on the 3 constructs and perceptions of student success. This study explored the relationship of the constructs and student success using a newly designed instrument to measure the self-reported responses of learners and faculty. This instrument
was determined to be valid by content experts and reliable using statistical methods.
Using the convenience sampling strategy, 385 students and 61 faculty from a regional
Caribbean institution were selected. Data were analyzed using descriptive statistics, Pearson correlation between pairs of the constructs, and multiple linear regression between the constructs and perceptions of student success. The findings
showed that the construct pairs correlated significantly with each other. The findings also showed that the three constructs significantly predicted perceptions of student success. The potential findings could lead to positive social change in how universities approach the process of learning and instruction in online learning environments.
Dedication
To those whose desire for knowledge and passion for excellence inspired me to strive for success.
Acknowledgments
I wish to thank the following people for their support and guidance during this doctoral journey:
To my committee chair and mentor, Dr. Jennifer Courduff, for her guidance and support.
To the content experts, Dr. Danielle Hedegard and Mrs. Michelle Wooding-
Andrade, for their invaluable comments and suggestions for improvement of the survey
question items and for endorsing the newly designed instrument for its intended purpose.
To the institutions that granted permission for the surveys to be conducted.
To those individuals who took the time to participate in the pilot and actual surveys. To my family, for their unwavering support, understanding, and patience throughout the dissertation
process.
Table of Contents
Introduction ....................................................................................................................1
Background ....................................................................................................................2
Definition of Terms........................................................................................................8
Limitations ...................................................................................................................11
Significance..................................................................................................................12
Summary ......................................................................................................................12
Introduction ..................................................................................................................14
Sense of Community and the Interaction Framework .......................................... 19
Constructivism ...................................................................................................... 24
Satisfaction .......................................................................................................36
Student Engagement, Self-Regulation, and Student Success ............................... 41
Summary ......................................................................................................................43
Introduction ..................................................................................................................45
Methodology ................................................................................................................47
Population ............................................................................................................. 47
Sample Size........................................................................................................... 52
Instrumentation ............................................................................................................56
Summary ......................................................................................................................73
Introduction ..................................................................................................................75
Introduction ........................................................................................................... 83
Summary ....................................................................................................................144
Introduction ................................................................................................................147
Recommendations ......................................................................................................156
Implications................................................................................................................158
Conclusion .................................................................................................................160
References ........................................................................................................................161
List of Tables
Table 13. Summary of Inferential Statistical Tests per Research Question ..................... 68
Table 17. Question Coding for Perceptions of Student Success Items ............................. 81
Table 18. Reliability Statistics for the Construct/Factor Scales of the Student Survey
Table 19. Increase in Internal Reliability of the Student Engagement Scale for the Student
Table 20. Reliability Statistics for the Construct/Factor Scales of the Student Survey
Table 21. Reliability Statistics for the Construct/Factor Scales for the Faculty Survey
Table 22. Reliability Statistics for the Construct/Factor Scales of the Student Survey
Table 23. Reliability Statistics for the Construct/Factor Scales of the Faculty Survey
Table 24. Comparison of Number of Participant Group Responses Before and After
Gender ....................................................................................................................... 94
Table 29. Program and Course Characteristics of the Student Participation Group ......... 99
Table 30. Teaching Characteristics of the Faculty Participant Group ............................ 100
Table 31. Program and Course Characteristics of the Faculty Participation Group ....... 101
Table 32. Descriptive Statistics for Student Engagement and Self-Regulation Practices of
Table 33. Pearson Correlation for Student Engagement and Self-Regulation Practices of
Table 34. Descriptive Statistics for Student Engagement and Student Satisfaction of the
Table 35. Pearson Correlation for Student Engagement and Student Satisfaction of the
Table 36. Descriptive Statistics for Self-Regulation Practices and Student Satisfaction of
Table 37. Pearson Correlation for Self-Regulation Practices and Student Satisfaction of
Table 38. Durbin-Watson Statistic of the Independent Variables Against the Dependent
Table 39. Pearson Correlation for the Predictor and Outcome Variables for the Student
Table 40. Pearson Correlation for the Predictor and Outcome Variables for the Faculty
Table 41. Pearson Correlation for the Predictor and Outcome Variables for the Combined
Table 42. Descriptive Statistics of the Three Constructs and Perceptions of Student
Table 43. Model Summary for Perceptions of Student Success (Dependent Variable)
Table 44. Model Summary for Perceptions of Student Success (Dependent Variable)
With Independent Variables and Covariates of the Student Participant Group...... 133
Table 45. Model Summary for Perceptions of Student Success (Dependent Variable)
Table 46. Model Summary for Perceptions of Student Success (Dependent Variable)
With Independent Variables and Covariates of the Faculty Participant Group ...... 135
Table 47. Model Summary for Perceptions of Student Success (Dependent Variable)
Table 48. Model Summary for Perceptions of Student Success (Dependent Variable)
Table 49. Coefficients for the Student Participant Group Without Covariates ............. 138
Table 50. Coefficients for the Student Participant Group With Covariates .................. 139
Table 51. Coefficients for the Faculty Participant Group Without Covariates ............. 140
Table 52. Coefficients for the Faculty Participant Group With Covariates .................. 141
Table 53. Coefficients for the Combined Participant Group Without Covariates ......... 142
Table 54. Coefficients for the Combined Participant Group With Covariates .............. 143
List of Figures
Figure 10. Scatterplot of student engagement and student satisfaction for the combined
Figure 11. Scatterplot of self-regulation practices and student satisfaction for student
Figure 12. Scatterplot of self-regulation practices and student satisfaction for faculty
Figure 13. Scatterplot of self-regulation practices and student satisfaction for the
Figure 14. Q-Q plot of student engagement for the student participant group ............... 117
Figure 15. Q-Q plot of student engagement for the faculty participant group ............... 118
Figure 16. Q-Q plot of student engagement for the combined participant group ........... 118
Figure 17. Q-Q plot of self-regulation practices for the student participant group ........ 119
Figure 18. Q-Q plot of self-regulation practices for the faculty participant group ......... 119
Figure 19. Q-Q plot of self-regulation practices for the combined participant group .... 120
Figure 20. Q-Q plot of student satisfaction for the student participant group ................ 120
Figure 21. Q-Q plot of student satisfaction for the faculty participant group................. 121
Figure 22. Q-Q plot of student satisfaction for the combined participant group ............ 121
Figure 23. Q-Q plot of perceptions of student success for the student participant group 122
Figure 24. Q-Q plot of perceptions of student success for the faculty participant group 122
Figure 25. Q-Q plot of perceptions of student success for the combined participant
Figure 26. Matrix scatterplot of the three constructs and perceptions of student success
Figure 27. Matrix scatterplot of the three constructs and perceptions of student success
Figure 28. Matrix scatterplot of the three constructs and perceptions of student success
Figure 29. Scatterplot of the residual values for the three independent variables against
the dependent variable for the student participant group ........................................ 126
Figure 30. Scatterplot of the residual values for the three independent variables against
the dependent variable for the faculty participant group ........................................ 127
Figure 31. Scatterplot of the residual values for the three independent variables against
Chapter 1: Introduction to the Study
Introduction
Online learning has become a popular means of greater access to higher education
among students (Fonolahi, Khan, & Jokhan, 2014; Garrison & Vaughan, 2013;
O’Connor, 2014; Stack, 2015). Despite the increased access, the retention rate of students
using this learning modality remains a concern for higher education administrators.
University and college administrators are constantly exploring ways to improve student retention, which depends on the ability of institutions to actively engage students in the learning process and increase students' use of self-regulation skills (Cho & Shen, 2013; Mello, 2016). While there have been studies on student engagement, self-regulation, and student satisfaction globally, such studies are scarce in the English-speaking Caribbean. In this
quantitative study, I focused on the relationship among the three constructs of student
engagement, self-regulation practices, and student satisfaction and the impact of the relationship on perceptions of student success in Caribbean higher education institutions. The research may address a gap in the literature on the study of the three constructs together with perceptions of student success in a single study. The examination of the
three constructs together is innovative not only in the Caribbean but also globally. The
closing of the gap in knowledge has the potential to promote positive social change in the teaching and learning approaches adopted by universities to increase student satisfaction levels in online learning. In Chapter 1, I present the background to the research study and describe the problem statement and purpose of the study. This chapter includes the
research questions, which align with null and alternate hypotheses, theoretical
framework, and the nature of the study. In the remainder of Chapter 1, I describe the definitions of terms, assumptions, scope, limitations, and significance of the study.
Background
Studies in the Caribbean have centered primarily on the transitioning process to the online learning platform (Beaubrun,
2012; Rhoden, 2013). Studies on the three constructs of student engagement, self-
regulation practices, and student satisfaction have been conducted outside the Caribbean (Jackson, 2015; Johnson, Graham, & Rucker, 2016; Kuh, Kinzie, Cruce, Shoup, & Gonyea, 2007; Larose, 2010;
Mello, 2016; Pellas, 2014; Puzziferro, 2008; Wang, Shannon, & Ross, 2013; Zhang et al.,
2015). There have been no studies on any one of the three constructs in Caribbean
institutions and no studies on the relationship between these constructs and student
success from the perspectives of students and faculty. The unique and innovative
combination of the three constructs as one research project was a gap in the literature.
Problem Statement
There has been a significant increase in the number of students enrolled in programs of study using online learning platforms (Fonolahi et al., 2014; Garrison & Vaughan, 2013;
O’Connor, 2014; Stack, 2015). The shift to online learning requires institutions to ensure
student satisfaction through student engagement and self-regulation practices (Cho &
Shen, 2013; Mello, 2016). The constructs of student engagement (Pera, 2013), self-regulation activities (Chapman, 2015; Cho & Shen, 2013; Greer, Pokorney, Clay, Brown,
& Steele, 2010; Wang et al., 2013; Zimmerman, 1989), and student satisfaction
(Kauffman, 2015; Saeler, 2015) have been studied in both traditional and online learning
environments. In these studies, researchers explored the impact of each of the constructs on students' learning experiences and the relationship between the constructs within each pair. Previous researchers have focused on
the relationships between the pairs of constructs of (a) student engagement and student
satisfaction (Jackson, 2015; Johnson et al., 2016; Kuh et al., 2007; Larose, 2010); (b)
student engagement and self-regulation (Boekaerts, 2016; Mello, 2016; Pellas, 2014;
Zhang et al., 2015); and (c) self-regulation and student satisfaction (Puzziferro, 2008;
Wang et al., 2013) in either traditional classrooms or online settings. These studies have
shown a positive correlation between each of the constructs for the construct pairs of
student engagement and self-regulation and self-regulation and student satisfaction. The
relationship between the constructs of the construct pair for student engagement and
student satisfaction has produced mixed results. According to Jackson (2015) and Kuh et
al. (2007), a positive correlation exists between the constructs of student engagement and student satisfaction. Other researchers, however, have argued that the lack of attention paid to fostering student engagement in the online environment as opposed to the traditional classroom is the reason for poor student satisfaction rates at
community colleges. Although studies have been conducted on two constructs at a time,
there is a gap in the literature as it relates to examining three constructs at a time in both traditional and online learning environments. Consequently, there is a gap in the literature on the correlation between the pairs of constructs in a single study. In this study, I explored this innovative relationship among the three constructs at the same time in a Caribbean higher education institution. The closing of the gap in knowledge has the potential to promote positive
social change in the innovative teaching and learning approaches adopted by universities
to increase student satisfaction levels in online learning. The research would be of interest to university administrators and faculty involved in the design of innovative programs that are more responsive to the needs of the student population.
Purpose of the Study
The purpose of this quantitative research study was to understand the relationship among student engagement, self-regulation practices, and student satisfaction and how this relationship impacts perceptions of student success in online learning environments. The study addressed the gap associated with the relationship among the three constructs and student success in a single study. I sought to establish patterns of correlation between pairs of the three constructs and patterns of regression between the three constructs and perceptions of student success, and to examine the self-reported experiences of students and faculty concerning the constructs.
Research Questions
The research questions in this study were used to determine whether there was a relationship among the three constructs and between the constructs and perceptions of student success.
The three subquestions and null and alternate hypotheses correspond to the correlation model:
in online courses.
The second question and null and alternate hypotheses correspond to the regression
model:
courses?
Theoretical Framework
The theoretical framework for this study encompassed Knowles’ adult learning
theory (Phillips, 2005), self-regulated learning theory (Zimmerman, 1989), and the
constructivist model of Vygotsky that promotes student-centered learning (Ahn & Class,
2011). First, Knowles’ adult learning theory indicates that adult learners exhibit self-
directed learning characteristics during the engagement phase of the learning process
(Phillips, 2005). Furthermore, the theory has been used to illustrate the characteristics of adult learners, including their self-motivation to learn (Allen & Zhang, 2016). The alignment of these characteristics to
the process of learning can provide insight into the relationship between student engagement and the other constructs. Second, self-regulated learning theory addresses the behavioral, motivational, and metacognitive abilities of students (Zimmerman, 1989). All three abilities play a role in how students regulate their learning. Third, the constructivist model promotes student-centered learning environments that foster student engagement and self-regulation practices (Ahn & Class, 2011; An & Reigeluth,
2011; Chapman, 2015). These three theories support the constructs of student engagement, self-regulation practices, and student satisfaction.
Nature of the Study
The nature of this study was quantitative, using the cross-sectional research
design. I sought to establish both a pattern of correlation between pairs of the three constructs and a pattern of regression for the relationship between the three constructs and perceived student success. Using the cross-
sectional design, I selected student and faculty participants using the nonprobability convenience sampling strategy. Although instruments existed for the three constructs and student success, no one instrument incorporated all the constructs in a single survey. I therefore designed a new instrument for this study using the Moore (1989) interaction model as the common operational definition for all three constructs. The interactions comprised learner-content, learner-instructor, and learner-learner, with a fourth interaction added for the online learning environment in the form of the learner-online environment. The cross-sectional design aligns with the collection of data on the level of experiences of faculty and students in real-life situations and allows for comparative studies in the online learning environment using the same instrument.
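The correlation and regression analyses just described can be sketched in code. The following is a minimal illustration only: the scale scores are randomly simulated and the variable names and relationships among them are assumptions for demonstration, not the study's actual data or results.

```python
# Sketch of the study's analytic approach: Pearson correlation between
# pairs of constructs, then multiple linear regression of the three
# constructs on perceptions of student success.
# All scores below are simulated (hypothetical), not the study's data.
import numpy as np

rng = np.random.default_rng(7)
n = 385  # student sample size reported in the study

# Hypothetical mean scale scores (roughly a 1-5 Likert range)
engagement = rng.uniform(1.0, 5.0, n)
self_regulation = 0.6 * engagement + rng.uniform(0.2, 2.2, n)
satisfaction = 0.3 * engagement + 0.3 * self_regulation + rng.uniform(0.2, 1.8, n)
success = (0.4 * engagement + 0.3 * self_regulation
           + 0.2 * satisfaction + rng.normal(0.0, 0.3, n))

# Pearson correlation for each construct pair
pairs = {
    "engagement / self-regulation": (engagement, self_regulation),
    "engagement / satisfaction": (engagement, satisfaction),
    "self-regulation / satisfaction": (self_regulation, satisfaction),
}
for label, (a, b) in pairs.items():
    print(f"{label}: r = {np.corrcoef(a, b)[0, 1]:.2f}")

# Multiple linear regression: the three constructs predicting success
X = np.column_stack([np.ones(n), engagement, self_regulation, satisfaction])
coef, *_ = np.linalg.lstsq(X, success, rcond=None)
predicted = X @ coef
ss_res = np.sum((success - predicted) ** 2)
ss_tot = np.sum((success - success.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```

In practice the study reports these analyses through statistical software output (see the tables of Pearson correlations, model summaries, and coefficients listed earlier); the sketch only shows the shape of the computation.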
Definition of Terms
the learning process. The model is linked to Knowles’ adult learning theory, which
9
emphasizes the need for learners to self-direct their learning, become engaged in the
learning process, and be intrinsically motivated (Allen & Zhang, 2016; Schultz, 2012).
collaborative learning environment (Ahn & Class, 2011; An & Reigeluth, 2011).
Online learning: The use of a technologically enhanced platform for the delivery of instruction in a way that fosters relationships and community among learners and instructors (Cox & Cox, 2008; Yuan & Kim, 2014).
learner-learner, and (d) learner-online platform (Moore, 1989; Zimmerman, 1986, 1989).
the active learning process at the four collaborative interfaces or interactions of (a)
include (a) academic achievement, (b) performance, (c) perceptions of the learning
environment, (d) success, (e) persistence, and (f) quality of the instructional design,
content, and delivery (Artino, 2007, 2008; Bolliger & Martindale, 2004; Kuo, Walker,
Belland, & Schroder, 2013; Kuo, Walker, Schroder, & Belland, 2014; Moore, 1989; Puzziferro, 2008; Reinhart & Schneider, 2001; Thurmond & Wambach, 2004;
Student success: Success attributed to satisfaction with the overall educational experience and the efforts of
students to engage in the learning process and self-direct their learning. Dimensions of
student success include academic achievement, such as pass rates, retention, persistence,
and advancement (Ashby, Sadera, & McNary, 2011; Cuseo, 2007; Subotzky & Prinsloo,
2011).
Assumptions
For this research study, I made several assumptions. First, I assumed that survey
participants’ responses would represent the actual experiences of the learners and faculty.
Second, I assumed that the institution’s representative would send the survey instrument
to all eligible participants. The remaining assumptions related to the correlation and
regression models. Correlation assumed that the variables associated with the constructs were linearly related and normally distributed.
Scope and Delimitations
The scope of the study was limited to a regional institution in the English-
speaking Caribbean. The population comprised both students and faculty members from
programs that offer at least one course online. Programs using the traditional face-to-face
learning model were excluded from the project. Before conducting the data analysis for the study, I ascertained the validity and reliability of the instrument. For this study, I achieved content validity by engaging two experts with experience in teaching online courses. The experts evaluated the relevance of each question and the representativeness of the entire instrument based on its purpose. I
revised the questionnaire by clarifying question items and expanding the student success
indicators. The current instrument comprises a 13-item scale for student engagement, a 13-item scale for self-regulation practices, a 5-item scale for student satisfaction, and a scale for perceptions of student success.
I determined the reliability of the new instrument through field testing, which I
conducted at different higher education institutions. I did not use the data collected from
the field test in the actual data analysis; I used the data to verify reliability through the
calculation of the Cronbach’s alpha statistic. I also used the data to establish construct validity based on preliminary factor analysis of the correlation between the three constructs.
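As an illustration of the reliability statistic named above, the following sketch computes Cronbach's alpha for a single multi-item scale. The respondent count and the simulated Likert responses are hypothetical assumptions for demonstration, not the study's field-test data.

```python
# Illustrative computation of Cronbach's alpha for one multi-item scale.
# The responses below are simulated (hypothetical), not field-test data.
import numpy as np

rng = np.random.default_rng(3)
n_respondents, n_items = 60, 13  # e.g., a 13-item scale

# Simulated Likert responses (1-5) driven by a shared latent trait,
# so the items are positively intercorrelated.
trait = rng.normal(3.0, 0.8, (n_respondents, 1))
items = np.clip(np.rint(trait + rng.normal(0.0, 0.6, (n_respondents, n_items))), 1, 5)

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

By convention, alpha values of about .70 or higher are generally treated as acceptable internal consistency for a scale, which is the kind of threshold the field-test reliability check would apply.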
Limitations
The limitations of this study related to threats to internal and external validity. The threat to internal validity was reduced because the
intent of the study was to generalize within the target population and not to determine
cause and effect relationships between the predictor and outcome variables. Similarly, the
use of a cross-sectional study minimized the threat to external validity as I sought only to generalize the findings to the target population.
Significance
I examined the three constructs of student engagement, self-regulation practices, and student satisfaction at one time as experienced by faculty and students in the online learning environment in this quantitative study. This innovative
approach to the unique combination of the three constructs helps fill a gap in the
literature. The results of the research study might provide a model that can be used by institutions to improve student engagement, self-regulation practices, and levels of satisfaction. The research would be particularly useful to faculty
and administrators designing instructional approaches that foster the alignment between
the three constructs to meet student needs. Additionally, the potential findings could lead
to positive social change in the way that universities approach the process of learning and instruction in online learning environments.
Summary
In this chapter, I presented the background for the project, its purpose and significance, and social change implications. The chapter
also included the problem statement; research questions and hypotheses; research design
model, including development of a new instrument; the rationale for the target population; and definitions of key terms. I described the roles of Knowles’ adult learning theory, the self-regulated learning theory, and the model for a constructivist learning
environment in setting the foundational principles of the study. The alignment of the three constructs with these theoretical frameworks is essential to achieving the purpose of the study. The gap in the literature related to the exploration of the correlation of pairs of the three constructs and the innovative relationship of the three constructs and perceived student success in a single study was also detailed.
Chapter 2: Literature Review
Introduction
The purpose of this quantitative research study was to understand the relationship among student engagement, self-regulation practices, and student satisfaction and the alignment of this relationship to student success in online learning environments. Studies, conducted mostly outside of the Caribbean, have examined the constructs of student engagement
(Pera, 2013), self-regulation activities (Chapman, 2015; Cho & Shen, 2013; Greer et al.,
2010; Wang et al., 2013; Zimmerman, 1989), and student satisfaction (Kauffman, 2015;
Saeler, 2015) in both the traditional and online learning environments. Researchers have
explored the impact between each of the constructs and students’ learning experiences.
Despite these studies, there is a gap in the literature concerning the study of the three
constructs at the same time in the online environment. Additionally, there is a gap in the
literature on the innovative relationship between the three constructs and perceptions of student success. Higher education institutions continue to transition to online learning (Milman, Posey, Pintz, Wright, & Zhou, 2015). The design of these
systems is required to satisfy the needs of learners while ensuring that students remain
engaged in the learning process and apply self-regulation skills to succeed (Cho & Shen, 2013). Studies in the Caribbean have centered primarily on the process of transitioning to the online learning environment. An understanding of the relationships between pairs of the three constructs would add value to the transition process for these institutions and support programs that are more responsive to the needs of online learners. The following four points substantiate this
innovation. First, there is little empirical data on the impact of any of the three constructs
in the Caribbean as it pertains to higher education learners in both traditional and online
learning environments. Second, there is no information on the study of any two of the three constructs together in the Caribbean. Third, there is no information on the innovative study of the three constructs together globally in
the traditional and online learning environments in higher education. Fourth, there is no
study examining the innovative relationship of the three constructs and student success
together globally in traditional and online learning. Based on the gaps identified, this
research study can create a positive social impact on the teaching and learning process in higher education. Additionally, positive social change at the policy level may emerge because of the exploration of the relationship among the three constructs and student success in a single study when institutions are transitioning to the online
learning modality.
In this chapter, I review the literature related to Knowles’ adult learning theory, self-regulated learning theory, and the constructivist model that supports student-centered learning. My review includes the tenets of the constructs of student engagement, self-
regulation, and student satisfaction and the alignment of the constructs to perceptions of
student success. I conclude with a review of the online learning environment and the
relationship between this environment and factors associated with the three constructs.
Literature Search Strategy
For this literature review, I used databases in the Walden University library, including SocINDEX and ProQuest Dissertations and Theses Global, as well as Google Scholar, to search for the following keywords and terms: transitioning to online learning, online learning environment, student engagement, self-regulation, student satisfaction, student success, and interaction model. Keywords for the three constructs were searched first separately and then
combined to ensure there was a gap in the literature on the study of the constructs at the
same time.
I began with a literature review of the online learning environment to set the context for the study. The
review focused on the transition to online learning platforms and the need for innovative
change in the delivery of learning content. The review further linked the transition to the growth in student enrollment. There has been a significant increase in the number of students enrolled in programs of study using online learning platforms (Fonolahi et al., 2014; Gallagher &
LaBrie, 2012; Garrison & Vaughan, 2013; O’Connor, 2014; Stack, 2015). There also has
been increasing demand by students for institutions to use multiple formats in the delivery of course content. Online learning has become an attractive and flexible option for both undergraduate and graduate students. The use of this modality is evident in the significant growth in the number of students enrolled in online courses.
The shift to online learning requires that institutions review their pedagogical
approaches and strategies so that curricular designs accommodate learner diversity and
learner needs in the online environment (Judge & Murray, 2017; Stocker, 2018; Sun &
Chen, 2016). Cox and Cox (2008) and Yuan and Kim (2014) posited that providing a supportive online learning environment is important because it fosters relationships and a sense of community among instructors and learners. Yuan and Kim further asserted that this sense of community positively impacts student satisfaction.
The transition to using online learning platforms also depends on the abilities of
instructors to adjust their teaching and learning norms for instructional delivery and time
management skills (Martins & Nunes, 2016). Students expect instructors to establish or
promote a sense of community and maintain an online presence so they feel supported
throughout the entire learning experience (Loh, Wong, Quazi, & Kingshott, 2016; Northcote, Gosselin, Reynaud, Kilgour, & Anderson, 2015). Student expectations were
consistent with the main predictors for instructors’ online teaching self-efficacy. The transition further depends on the acceptance and use of technology by students and their perceived level of technical self-efficacy (Castillo-Merino & Serradell-López, 2014; Cheung & Vogel, 2013). Several
models have been proposed to measure the level of acceptance of technology use by students, including the technology acceptance model (TAM; Davis, 1989) and the unified theory of acceptance and use of technology (UTAUT; Venkatesh, Morris, Davis, & Davis, 2003), both of which rely on the
principle of an intent to use the technology and the associated behavioral patterns of
actual use.
TAM and UTAUT have been applied to studies in higher education institutions
using traditional, blended, and online learning modalities (Attuquayefio & Addo, 2014;
Awwad & Al-Majali, 2015; Dečman, 2015; McKeown & Anderson, 2016; Nagy, 2018;
Padhi, 2018; Sattari, Abdekhoda, & Gavgani, 2017). In all the studies, the importance of the intent to use technology was confirmed. A comparison between undergraduate and graduate student cohorts suggested that it was important to adjust approaches to content delivery of course materials between the two groups in the online learning environment (McKeown & Anderson, 2016). In this study, graduate
students were more likely to readily accept and adopt the use of technology in the online learning environment than undergraduate students were. In the Caribbean, researchers have focused primarily on mobile learning and UTAUT (Thomas, Singh, &
Gaffar, 2013; Thomas et al., 2014). Findings suggested that the context in which studies
using UTAUT were conducted played a significant role in determining the effects of
technology adoption among students at the higher education level. Thomas et al.’s (2013) research showed that similar uncharacteristic outcomes could occur when such models are applied in contexts outside the United States, such as the English-speaking Caribbean, which justifies the need for further studies in the region.
Sense of Community and the Interaction Framework
Researchers have agreed that social presence, cognitive presence, and teaching presence were fundamental constituents for the successful delivery of online courses. Researchers have
explored the relationship between the three constituents and the relative importance of
each constituent on the online teaching and learning process (Garrison, Anderson, &
Archer, 2009; Ke, 2010; Kehrwald, 2008; Tallent-Runnels et al., 2006; Wallace, 2003).
Although there was general agreement that the three constituents were integrated and interdependent, researchers differed on the relative importance of each constituent. Garrison et al. (2009) positioned cognitive presence as the pivotal constituent; Kehrwald
(2008) concentrated on social presence; and Ke (2010), Tallent-Runnels et al. (2006), and
Wallace (2003) placed teaching presence as the key constituent. Given that the three
constituents involved both student and instructor interactions, they were reviewed as
being synonymous with social interaction, cognitive interaction, and teaching interaction. These interactions foster relationships and a community of learners in asynchronous and synchronous online settings (Sun & Chen, 2016).
Sher (2009) recognized three types of interactions that occur in the online learning environment. These interactions refer to student and instructor, student and student, and student and educational content. Researchers have likened student engagement and teaching presence (interaction) to the activities between the student and
content or self-regulation skills. Ke (2010), on the other hand, likened cognitive presence
interplay between the learner and the online learning space. This newly constructed
model comprised four types of interaction: (a) learner-instructor, (b) learner-content,
(c) learner-learner, and (d) learner-online platform, and was named the four-phased
interaction model.
Theoretical Framework
I used three theoretical frameworks as the basis for exploring the association
between student engagement, self-regulation practices, and student satisfaction and the
relationship between the three constructs and student success.
success. These theoretical frameworks are Knowles’ adult learning theory (Knowles,
1975; Phillips, 2005), self-regulated learning theory (Zimmerman, 1989), and the
constructivist model of Vygotsky that promotes student-centered learning (Ahn & Class,
2011). Each framework is presented separately below and shows the alignment between
I conducted the study at a higher education institution and used Knowles’ adult
theory is aligned with the andragogical model that incorporates and values the life
experiences that adults bring to the learning process (Schultz, 2012). Owing to the
traditional learning model due to financial, family, and work responsibilities when
compared to the choices of recent graduates (Aragon & Johnson, 2008; Hachey, Conway,
& Wladis, 2013; Ke, 2010; Rotar, 2017; Xu & Jaggars, 2011). The online learning
environment requires students to be actively engaged in activities that foster the six tenets
of Knowles’ adult learning theory (Allen & Zhang, 2016; Schultz, 2012). These tenets
underscore the learner’s ability to (a) develop a need or longing for acquiring knowledge,
(b) establish a learning baseline or foundation, (c) self-conceptualize, (d) position oneself
in a cognitive state of readiness to achieve goals, (e) develop problem-centered skills, and
(f) become a self-motivator. Chief among the characteristics of the adult learner is the
correlation that exists between self-directed learning and learner engagement, where
engagement is assumed to be intrinsically motivated.
Contrary to this belief, Phillips (2005) proposed that self-directed learning must
be externally stimulated first for learners to become engaged. Phillips further posited
that once extrinsically motivated engagement occurs, continuous engagement of the adult
learner can follow. The notion that self-directed learning is not always intrinsically
motivated was supported by
Cox (2015), who cited an actual example of a conversation held with an adult learner.
Like Phillips’ proposition, this learner had to be externally driven to complete the
program of study.
Studies grounded in Knowles' adult learning theory have shown that both the
and implementation of an online GIS certificate program. The proposed model used for
distance learning environment” (p. 51). In this model, there was alignment between the
characteristics of the adult learner and the facilitator’s best practice approaches to content
delivery. Given that Knowles’ adult learning theory is predicated on students taking
charge of their learning, it follows that the theory is correlated with the constructs of
The theory of self-regulated learning has evolved over the years, particularly in the
social learning context (Zimmerman, 1989; Zimmerman & Martinez-Pons, 1988).
Zimmerman (1986) proposed a definition for self-regulation in the academic setting that
aligns learner metacognitive, motivational, and behavioral processes with the learning
process. Zimmerman further postulated that the theory of self-regulated learning requires
learners to apply these skills in pursuit of academic targets. In this study, I used the
three components of self-regulation to explore
the learner’s ability to control and optimize the learning process. Metacognition involves
behavior leads to the social interactions that take place within the learning environment.
The theory of self-regulation has led to studies related to the implementation of strategies
environment (Tabak & Nguyen, 2013; Wandler & Imbriale, 2017). These studies
supported the use of the Zimmerman self-regulation model in online learning platforms,
particularly due to the online social interactions necessary for student success (Kitsantas
& Dabbagh, 2011; Lai, 2011; Lear, Linda, & Prentice, 2016).
Constructivism
Vygotsky on the social context of cognitive development (Brown, 2014; Guo, 2018;
Johnson, 2017; Wang, 2014). The Vygotsky approach to the constructivist theory fosters
constructivist model supports self-regulated learning, student engagement, and social and
critical requirement for online learning platforms as it allows students to take an active
Further, it is argued that the online learning environment is better aligned to the
constructivist model than the traditional mode of learning as the online platform allows
learners to create meaning from their interactions with the learning content (Guo, 2018;
Johnson, 2017). A collaborative learning environment also is integral to the application of
the andragogical learning model (Ahn & Class, 2011; An & Reigeluth, 2011). Studies
illustrating the use of the constructivist model confirmed that student competencies to
vanOostveen, Barber, DiGiuseppe, & Childs, 2017; Chitanana, 2012; Cortés & Barbera,
The interaction and interplay between Knowles’ adult learning theory, self-
regulated learning theory, and the constructivist model are evident from the literature
review conducted, and the three theories present a comprehensive, integrated framework
within which the study can be conducted. Furthermore, the theories are linked to the
successful delivery of the learning content through the creation and alignment of the
components of a social presence, cognitive presence, and teaching presence in the online
environment. These three components are synonymous with Moore's (1989) interaction
theory that identifies three types of interactions occurring in the learning process, that is,
learner-content, learner-instructor, and learner-learner interactions.
Student Engagement
and online learning environments (Centner, 2014; Czerkawski & Lyman, 2016; Dixson,
2015; Handelsman, Briggs, Sullivan, & Towler, 2005; Kuh, 2003; Mello, 2016; Pellas &
Kazanidis, 2015; Robinson & Hullinger, 2008; Schreiber & Yu, 2016). These studies
linked student engagement to the key factors impacting the learning process inclusive of
success. The role of the instructor in fostering student engagement also has been explored
and found to be an additional key factor, particularly considering the transition process to
online platforms (Cho & Cho, 2014; Ma, Han, Yang, & Cheng, 2015). More recently,
there has been a focus on the relationship between engagement and the collaborative
learning environment and associated tools due to the rapid advances in technology and
digital resources (Donaldson et al., 2017; Hew, 2016). Technology has been found to
enhance the collaborative environment through the creation of active learning sites.
finalize and accurately describe the term is based on how scholars conceptualize the
construct in the field (Azevedo, 2015; Dixson, 2015; Sinatra, Heddy, & Lombardi, 2015).
The definition proposed by Kuh (2003) that engagement correlates to the amount of
effort expended by the student in the learning environment was used in this study. This
that is recognized as valid and reliable throughout the educational sector (Dixson, 2015).
have emerged (Dixson, 2010, 2015; Handelsman et al., 2005; Ouimet & Smallwood,
2005; Roblyer & Wiencke, 2004). A study of student engagement in the traditional
learning environment revealed that four basic elements were in alignment with the Kuh
(2003) definition (Handelsman et al., 2005). The engagement elements comprised skills,
emotional engagement, participation/interaction (with course content, peers, instructors),
and performance (achieving desired goals). While
these engagement elements were fundamental in the development of the Online Student
Engagement Scale (OSE; Dixson, 2010, 2015), Schreiber and Yu (2016) applied the
themes of the South African Survey of Student Engagement (SASSE) to the study of
challenges, learning with peers, experience with staff, and campus environment. I applied
the engagement elements used by Dixson (2010, 2015) and the SASSE themes, which
address the active learning components theorized by Vygotsky (Ahn & Class, 2011; An
& Reigeluth, 2011), to the four-phased interaction model and used the elements as the
foundation for the development of the instrument for this study. The combined
process. Despite this recognition by researchers and scholars, Khan, Egbue, Palkie, and
Madden (2017) reported that faculty faced challenges in fostering student engagement in
online learning spaces. Most of these challenges related to the transfer of traditional
instructional methods to the online environment without modifying these methods for
suitability in online settings. Khan et al. explored the mechanisms that could be employed
in increasing the level of participation and engagement among learners and concluded
that incorporating active learning strategies in the design and delivery of instructional
content was a critical requirement. The quantitative instrument for this study included an
Kahn, Everington, Kelm, Reid, and Watkins (2017) examined the need for students to
actions to take about the demands of online learning. In the decision-making process,
students first aligned the cause and effect of proposed actions to their social contexts
prior to exercising the appropriate actions. The Kahn et al. research study demonstrated
that the process of reflexivity could either encourage or discourage student engagement
actions where the latter could lead to frustration and eventually program withdrawal. By
practices.
metacognitive, motivational, and behavioral skills to the learning process to achieve the
desired outcomes (Zimmerman, 1986, 1989). The three components of self-regulation are
aligned to student performance and academic success (Artino, 2008; Broadbent & Poon,
2015; Cho & Shen, 2013). Studies have shown further that the metacognitive component
(Kuo et al., 2013, 2014; Lee, Kim, & Grabowski, 2010; Puzziferro, 2008).
traditional and online modalities (Cho & Cho, 2017; Pintrich, Smith, Garcia, &
McKeachie, 1993; Schraw & Dennison, 1994). The most recent instrument correlated
self-regulation in the learner-instructor, learner-content, and learner-learner interactions
with students' online learning experiences (Cho & Cho, 2017). Measuring
the learning experiences as self-efficacy and course satisfaction, Cho and Cho (2017)
found that there was a positive relationship between self-regulation in learner-content
interaction and learning experiences. There was no relationship found between self-
developed by Cho and Cho (2017) was a significant addition to the measurement tools
available for assessing self-regulation practices and the interaction theory, it did not align
the interactions with the components of metacognitive, motivational, and behavioral
self-regulation. Table 2 shows the integration of the practice of self-regulation with the
four-phased interaction model.
Student Satisfaction
Student satisfaction has been linked to predictive factors of the instructional design,
content, and delivery (Artino, 2007, 2008; Bolliger &
Martindale, 2004; Kuo et al., 2013, 2014; Puzziferro, 2008; Reinhart & Schneider, 2001;
Thurmond & Wambach, 2004; Yukselturk & Yildirim, 2008). These predictive
combined, are viewed as the hallmark of the teaching and learning process (Yukselturk &
Yildirim, 2008). For instance, a study of the relationship between the construct of student
satisfaction and academic achievement in traditional versus online learning settings by
Saeler (2015) showed that there was no statistically significant relationship between academic
achievement and satisfaction in both learning environments. As with the construct for
self-regulated practices, Moore’s (1989) interaction model has been used as a framework
to study the relationship between student satisfaction and interactions occurring at the (a)
have shown that there is a positive correlation between interaction and student
satisfaction in both distance and online learning (Ali & Ahmad, 2011; Bolliger &
Martindale, 2004; Bray, Aoki, & Dlugosh, 2008; Dennen, Darabi, & Smith, 2007; Kuo et
Kuo, Walker, Belland, and Schroder (2013) highlighted that more studies have
content interactions. These studies revealed that the first two types of interactions were
more suitably aligned to student satisfaction in online learning environments (Bolliger &
Martindale, 2004; Jung, Choi, Lim, & Leem, 2002; Sher, 2009). Conversely, Kuo et al.
(2013) found that the learner-instructor and learner-content interactions were better
studies using the interaction model showed that the learner-instructor interaction was the
key predictor of student satisfaction (Battalio, 2007), the Kuo et al. study placed the
inferred that the course design for online learning and ease of navigating the course
material were critical to the interactive ability of learners and the content.
In reviewing the dimensions of student satisfaction, it was clear that these
configured interaction model. For this study, the four-phased interaction model was
Table 3
Previous research studies have explored the relationships between two constructs
at the same time. There were no studies found that examined the three constructs together
in a single study in higher education institutions. The construct pairs that have been
investigated together, which represent all possible pairs of the three constructs, are
presented as follows: (a) student engagement and student satisfaction (Jackson, 2015;
Johnson et al., 2016; Kuh et al., 2007; Larose, 2010); (b) student engagement and self-
regulation (Boekaerts, 2016; Mello, 2016; Pellas, 2014; Zhang et al., 2015); and (c) self-
regulation and student satisfaction (Puzziferro, 2008; Wang et al., 2013). These studies
satisfaction has produced mixed results. According to Jackson (2015) and Kuh et al.
(2007), a positive correlation existed between the constructs of student engagement and
that the lack of attention paid to fostering student engagement in the online environment
as opposed to the traditional classroom is the reason for poor student satisfaction rates at
community colleges.
Studies that used the National Survey of Student Engagement (NSSE) have been conducted at
higher education institutions (Jackson, 2015; Johnson et al., 2016). The positive
correlation that resulted between engagement and satisfaction was obtained at the
undergraduate level, regardless of the ethnicity of the students studied. While there was
alignment between the factors associated with student engagement in the two studies,
both the adult African American and adult Hispanic American student groups.
Additionally, the student engagement indicators of the NSSE were found to be positively
correlated to each other, thereby validating the alignment of the engagement factors
The studies of the relationship between the constructs of student engagement and
self-regulated learning showed a general positive correlation trend between the constructs
(Mello, 2016; Pellas, 2014; Zhang et al., 2015). In addition to engagement and self-
regulation, some of the studies included other constructs or factors such as self-efficacy
and self-esteem (Pellas, 2014) and academic burnout (Zhang et al., 2015). Both
engagement and self-regulation were further segregated into their respective components.
Engagement was categorized into cognitive, emotional, and behavioral (Pellas, 2014),
while self-regulation was split into the locomotion and assessment forms.
from the general positive correlation trend. Pellas (2014) reported that there was a
positive correlation between self-regulation and cognitive and emotional engagement but
Zhang et al. (2015) found that the locomotion form of self-regulation and student
and student engagement. These results suggest that while the two constructs, overall, can
exhibit a positive relationship with each other, mixed results can be obtained when the
correlation between each of the constructs (Puzziferro, 2008; Wang et al., 2013). Both
studies examined the relationship between the two constructs in online learning
environments. These results were the same for community college students (Puzziferro,
2008) and undergraduate and graduate students (Wang et al., 2013). The Motivated
Strategies for Learning Questionnaire (MSLQ) instrument was used in either the whole or
altered forms to monitor motivational levels of students. Students who had prior
motivation was directly linked to self-regulated learning strategies, and in all cases,
increased levels of motivation were associated with higher course satisfaction levels.
Satisfaction
Asmamaw, Mariam, & Mack, 2018). Although the research was not conducted in a
higher education institution, the study reported on the development and testing of a model
adolescents and youths. A review of the study showed that the psychological need
satisfaction construct was primarily related to sporting activities and not to the quality of
the learning experiences. This research study in youth sports academies justified the need
The gap in the literature for my study concerns the innovative relationship of the
single research project in higher education institutions. With the addition of student
the three constructs and student success was explored. I developed an instrument for this
study as there was no existing instrument that measured all three constructs and student
Student success has been aligned with the components of active learning (Ahn &
Class, 2011; An & Reigeluth, 2011). The factors contributing to student success are
similar to those presented for student satisfaction. These factors include academic
achievement, such as pass rates, retention, persistence, and advancement (Ashby et al.,
2011; Cuseo, 2007; Subotzky & Prinsloo, 2011). Student success is not only an important
alike. Chief among these factors is student retention and hence degree completion. The
retention rates tended to be lower for students taking online courses when compared to
students in face-to-face classes in one community college setting (Gregory & Lampley,
2016). Nonetheless, studies related to learners at community colleges have shown that the
differences in student success in the online learning environment and the traditional
learning environment are linked to the category of learners (Aragon & Johnson, 2008;
Hachey et al., 2013; Xu & Jaggars, 2011). Traditional learners or recent high school
learners or non-traditional learners tend to pursue online learning programs. In the online
environment, adult learners had more successful outcomes as these learners deliberately
chose this mode of learning as opposed to the younger learners who often were guided by
A paradox arises with this argument regarding the differences in choices made by the recent
high school graduates and older adult learners. Researchers reported that online learners
tended to be more successful if they were proficient in using the technology associated
with the learning environment (Dupin-Bryant, 2004; Hachey et al., 2013; Harrell &
Bower, 2011; Kerr, Rynearson, & Kerr, 2006). This finding suggested that learners with
characteristics associated with self-directed learning and time management skills were
more likely to succeed in the online environment (Johnson & Berge, 2012; Kenner &
Weinerman, 2011; Kerr et al., 2006; Kiely, Sandmann, & Truluck, 2004; Neuhauser,
For this study, student success factors comprised learners’ self-report on course
satisfaction with the online environment and overall institutional support. Furthermore,
this research study explored student and faculty perceptions of student success using the
same measurement scale in the newly constructed instrument. Table 4 presents the
There were no studies found that explored the three constructs of student
associated with student success in a single study. Instead, studies examined either the
relationship between each construct and student success or the relationship between two
of the constructs and student success. For the combination of pairs of constructs and
student success, I present the following studies: (a) student engagement, student
satisfaction, and student success (Burrow & McIver, 2012; Korobova & Starobin, 2015;
Webber, Krylow, & Qin, 2013); (b) self-regulation, student satisfaction, and student
success (Inan, Yukselturk, Kurucay, & Flores, 2017; Nicol, 2009); and (c) student
engagement, self-regulation, and student success (Fong et al., 2017; Rahal & Zainuba,
2016).
Studies of student engagement, student satisfaction, and student success showed a
positive correlation between each of the constructs and student success (Burrow
& McIver, 2012; Korobova & Starobin, 2015; Webber et al., 2013). The studies were
conducted in the traditional setting and used undergraduate student grades to measure
academic success. Two of the studies examined data from the 2008 NSSE survey to
assess the alignment of the engagement factors to student satisfaction and student success
(Korobova & Starobin, 2015; Webber et al., 2013) while the correlation to student
performance (Burrow & McIver, 2012). The NSSE benchmarks incorporated categories
and support. Findings from the Korobova and Starobin (2015) and Webber, Krylow, and
Qin (2013) studies confirmed that interactions between faculty and students and staff and
students were linked to the quality of the learning experience and desirable student
outcomes. Studies of self-regulation, student satisfaction, and student success showed
a positive correlation between each of the constructs and student success in online and
blended learning modalities (Inan et al., 2017; Nicol, 2009). In one study, self-regulation
was sub-divided into the four components (Inan et al., 2017), while self-regulation was
treated as a developmental process based on students’ interaction with the online segment
of the course design (Nicol, 2009). The use of the online learning environment allowed
students to monitor their progress through assessment feedback provided by the course
facilitator and build their confidence in controlling their own learning. Owing to the self-
regulation developmental exercise, students reported that they were more engaged in the
learning process and chose to expend greater effort in learning the course material to
achieve better grades. These comments were consistent with the tenets of the self-
regulation instrument of the first study, which assessed the extent to which students
planned, requested assistance, managed their time, and evaluated their learning (Inan et
al., 2017). The outcome of the self-regulated activities in both studies contributed to
higher satisfaction and academic success rates. Furthermore, a comparison of the studies
showed that the online environment promoted greater self-regulated activities and
provided more flexible opportunities for learning over the traditional learning
environment.
Studies of student engagement, self-regulation, and student success generally showed a
positive trend between each of the constructs and achievement as one of the factors
aligned to student success (Fong et al., 2017; Rahal & Zainuba, 2016). The
activities, and the studies focused more on the relationship between self-regulation and
student success. While both studies agreed that self-regulation practices led to higher
student performance and factors of student success, Rahal and Zainuba (2016) showed
that this finding was not the case for at-risk students. Using the principles associated with
motivation and innovation, self-regulation skills were not always used for achieving
academic success (Rahal & Zainuba, 2016). Students who were most likely to perform
well always self-regulated their abilities to engage in the learning process. On the
contrary, students who were low achievers had the highest number of repeat chances and
persistence (Fong et al., 2017). This finding suggested that self-regulation was not
positively associated with all the factors for student success, such as student retention.
and metacognition, this definition was different from the one I used for the dissertation
research study. Fong et al. (2017), in their meta-analytical study, identified self-
regulation as one of the psychosocial components required to measure the student success
factors of achievement and persistence. The researchers posited that self-regulation on its
own and, by extension, student engagement, would not be high predictors for all factors
of student success.
These results are particularly useful for this dissertation study, where I sought to
establish a relationship between the three constructs and perceived student success from
the viewpoint of both students and faculty. Additionally, the factors selected for student
success are wider than those examined in the Fong et al. (2017) study. The use of the
Summary
The shift to online learning has driven institutions to ensure student satisfaction
through student engagement and self-regulation practices (Cho & Shen, 2013; Mello,
2016). These three constructs are even more critical to student persistence and student
success in online learning environments. While studies have shown that there is
equivalence in learning in both traditional and online learning settings (Fonolahi et al.,
2014), educators have sought to enhance the online modality to drive and improve
learning environment and aligned these components to the three constructs. The
theoretical framework for the study examined the tenets of Knowles' adult learning
theory, the self-regulated learning theory, and the model for a constructivist learning
environment. The three constructs were presented as single concepts and construct pairs.
Similarities and differences between each of the constructs were highlighted. The gap in
the literature pertained to the absence of documented scholarly work in higher education
institutions on the three constructs of student engagement, self-regulation practices, and
student satisfaction, and the impact of these constructs on perceptions of
student success.
Chapter 3: Research Method
Introduction
The purpose of this study was to explore the relationship between student engagement,
self-regulation practices, and student satisfaction and how this relationship was aligned
to perceptions of student success in higher education institutions. This relationship was
measured from the perspectives of both
students and faculty. Studying the three constructs of student engagement, self-regulation
practices, and student satisfaction at the same time in a single research study in higher
education has been a gap in the literature. Additionally, the exploration of the innovative
relationship of the factors associated with the three constructs and student success in the
same study has been a gap in the literature. This cross-sectional research study required
the use of two instruments to capture data separately for students and faculty, and each
instrument incorporated the variables associated with the three constructs and student
success as one questionnaire. The overarching goals of the study comprised both a
correlational study of the constructs and a regression study of the constructs and
In Chapter 3, I present the research design and rationale for the development of a new
instrument measuring student engagement, self-regulation practices, student
satisfaction, and student success. I describe the process for scale development,
establishing validity and reliability of the instrument, field testing the instrument, and
administering the final iteration of the instrument to sample participants (Worthington &
Whittaker, 2006). The chapter also includes the data analysis plan and ethical procedures.
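Instrument reliability of the kind described in this chapter is commonly assessed with Cronbach's alpha over the Likert items of each construct scale. The passage does not name the statistic, so the sketch below is an assumption about method, not the author's documented procedure; the `scores` matrix is fabricated for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a score matrix of shape (respondents, items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of each respondent's summed score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Perfectly consistent items (every item identical) yield alpha = 1.0
scores = np.tile(np.array([1.0, 2.0, 3.0, 4.0, 5.0]).reshape(-1, 1), (1, 4))
alpha = cronbach_alpha(scores)
```

A conventional rule of thumb treats alpha of 0.70 or higher as acceptable internal consistency for a new scale.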
I used a quantitative cross-sectional design (Campbell & Stanley, 1963) given that
data were collected at a single point in time. As a descriptive study, the intention was not
to establish causal relationships. An advantage of cross-sectional studies is that several
variables can be explored at the same time. Cross-sectional studies also allow for the
collection of self-reported data over a short period
(Field, 2016). Due to the limited resources for this study, a short time frame to conduct
the research was consistent with the design choice. Many of the predictive studies
validate new or modified instruments or to add to the body of knowledge and scholarly
This study explored (a) the correlational relationship between all possible pairs of the
constructs of student engagement, self-regulation practices, and student satisfaction
and (b) the multiple regression relationship of the constructs and perceived
student success. For the multiple regression model, the three constructs of student
engagement, self-regulation practices, and student satisfaction, together with perceptions
of student success, were the main variables. In specifying the regression model, it was
important to identify any other variables that could influence the
relationship between the independent and dependent variables. For this study, data in the
form of gender and age groups of student and faculty participants, and years of
experience of faculty participants were collected and managed as control variables given
that these variables remained unchanged during the period of the research project.
Consequently, the use of a cross-sectional design approach supported the exploration of
relationships among several variables and the continuation of research in the field
(Campbell & Stanley, 1963).
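The two-stage analysis described above — pairwise correlations among the three construct scores, followed by a multiple linear regression predicting perceived student success — can be sketched as follows. The arrays are simulated purely for illustration; the variable names, coefficients, and scores are hypothetical and are not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 385  # student sample size reported for the study

# Simulated composite construct scores (hypothetical illustration only)
engagement = rng.normal(4.0, 0.6, n)
self_regulation = 0.5 * engagement + rng.normal(2.0, 0.5, n)
satisfaction = 0.4 * engagement + 0.3 * self_regulation + rng.normal(1.5, 0.4, n)
success = (0.3 * engagement + 0.3 * self_regulation
           + 0.2 * satisfaction + rng.normal(1.0, 0.3, n))

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# (a) correlations for all possible pairs of the three constructs
pairs = {
    ("engagement", "self_regulation"): pearson_r(engagement, self_regulation),
    ("engagement", "satisfaction"): pearson_r(engagement, satisfaction),
    ("self_regulation", "satisfaction"): pearson_r(self_regulation, satisfaction),
}

# (b) multiple linear regression: the constructs predicting perceived success
X = np.column_stack([np.ones(n), engagement, self_regulation, satisfaction])
beta, *_ = np.linalg.lstsq(X, success, rcond=None)
residuals = success - X @ beta
r_squared = 1 - residuals.var() / success.var()
```

With real data, the simulated arrays would be replaced by composite scores computed from the survey items, and control variables such as gender and age group would enter as additional columns of `X`.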
Methodology
The methodology section describes the population for the study and the sampling
procedures used to determine the appropriate sample size. A separate population was
used for field testing of the instrument prior to its use as the final survey. Additionally,
the methodology section includes descriptions of how the data were collected and the
process for obtaining approval from the Walden University Institutional Review Board
(IRB).
Population
This research study was conducted at one of the campuses of a Caribbean regional
university, and the target population consisted of students and faculty at both the
undergraduate and graduate levels. Based on information from the institution’s 2017–
2018 annual report, the CCC campus (pseudonym) serves 15 countries across the
Caribbean and offers face-to-face, blended, and online courses and programs at the
professional, undergraduate, and graduate levels. The campus has an average annual
enrollment of approximately 6,000 students, 540 adjunct faculty members, and 467 full-
time staff members. The main program disciplines incorporate studies in the humanities
and social sciences. At the undergraduate and graduate levels, 62 programs of study
master’s degrees, and doctoral degrees. Only a subset of these seven program groupings
Although the CCC campus website specified that the institution offers a total of
78 online courses and programs, the exact number of online programs within each of the
program groupings could not be determined, as the delivery mode was not stated for all
program offerings. The number of online programs was approximated, as shown in Table
5. These programs were delivered in two main formats: (a) online only or (b) online or
blended.
Table 5
The total student enrollment for the academic year 2017–2018 was 6,325, with 5,351
women and 974 men. Of the 6,325
students, 6,049 students were enrolled in the seven program groupings as shown in Table
6. Given that most programs offered were determined to use the online delivery modality,
the size of the target population was estimated as 540 faculty and 6,000 students.
Table 6
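The estimated target population of roughly 6,000 students is consistent with the 385-student sample reported in the abstract under Cochran's sample-size formula at a 95% confidence level and a 5% margin of error. The dissertation chunk does not state this formula, so the reconstruction below is an inference, not a figure quoted from the author.

```python
import math

def cochran_n0(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Cochran's base sample size for estimating a proportion p
    with margin of error e at the confidence level given by z."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

def finite_population(n0: float, population: int) -> float:
    """Finite-population correction of the base sample size."""
    return n0 / (1 + (n0 - 1) / population)

n_students = math.ceil(cochran_n0())                            # 385
n_corrected = math.ceil(finite_population(cochran_n0(), 6000))  # 362
```

Applying the finite-population correction for N = 6,000 lowers the minimum to 362, so a sample of 385 exceeds either requirement.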
The intent of the sampling strategy was to select students and faculty who
represented the seven undergraduate and graduate program groupings offered at the CCC
Nachmias, Nachmias, & DeWaard, 2015). Probability sampling is employed when the
probability sampling allows for the comparisons of findings from different samples taken
from the same target population because the sampling parameters used are identical or
for this research study. First, most of the English-speaking Caribbean higher education
face programs. The transition to offering online programs in these institutions has been
slow, resulting in a limited number of online programs being offered. The only institution
that offers a broad selection of online programs is the CCC campus. Second, an
advantage of the CCC campus is that it was developed as an institution to offer both face-
to-face and online programs and courses at the undergraduate and graduate levels.
Furthermore, the programs offered at this campus cover typical types of programs found
in Caribbean institutions, from certificate programs to doctoral degrees. Third, while not
all disciplines are embraced at the CCC campus, the programs are accessible to a wide
faculty and students was not available from any of the institutions to obtain a
the CCC campus due to the range and number of online programs and courses offered
and the student and faculty populations, which would have representation across the
either the stratified sampling design or cluster sampling design so that I would employ
the program groupings as the strata or clusters. This technique would have resulted in
multi-stage selection points as the goal of the research questions was to identify faculty
and students as two separate sampling units, both of which are aligned to online
sampling for the study, where participants would be classified as volunteers. In this case,
entire classes that used the online delivery modality within the programs would comprise the
the process of data collection can be extended until the desired sample size is achieved, or
the availability of volunteer participants has been exhausted. Convenience sampling was
also chosen to minimize the time needed to construct sampling selection points to arrive
at the target population numbers for the study. While non-probability sampling may or
may not limit my ability to generalize the findings to the target population at the regional
campus, the convenience sampling method supports the testing of a new instrument on
the sample population of students and faculty were selected. I obtained a letter of
cooperation from the director of the regional campus to allow access to the students and
faculty in the programs selected (see Appendix D). Faculty and student participants were
invited to participate in the study and asked to complete an online survey instrument. The
survey instrument was sent to entire classes of students and faculty in online programs by
Sampling frame. In this study, the sampling units were the faculty who teach in
the online programs and their students. The online courses within these online programs
constituted the sampling frame. The eligibility criteria for the study were detailed as
inclusion and exclusion criteria. The inclusion criteria consisted of those programs that were categorized by the institution as using the online delivery approach to offer the
programs to students. Given that the CCC campus was asked to distribute the survey
instrument, I did not require a list of the online programs and their attendant online
courses as a selection point. The exclusion criteria for this study were the programs and
Sample Size
The minimum sample sizes for the study were determined collectively for the
undergraduate and graduate student and faculty participants in the online programs using
the G*Power statistical tool available online. The G*Power tool is a statistical analysis
program used in survey research methods (Faul, Erdfelder, Lang, & Buchner, 2007). The
statistical tool’s computational range includes a variety of statistical tests, such as t-tests,
F-tests, correlation, and regression analyses (Faul, Erdfelder, Buchner, & Lang, 2009). I
In computing the required sample size prior to data collection, an a priori power
analysis was conducted based on the determination of several factors (Faul et al., 2007,
2009). These factors comprise (a) type of statistical tests, (b) statistical significance level
or the alpha- (α-) value, (c) effect size of the statistical analysis, and (d) power of the
statistical test. The α-value is normally set at 0.05 or 0.01. An α-value of 0.05 generally
means that there is a 5% probability that the results obtained will be due to chance. The
effect size represents the statistical differences obtained when comparing the results of
the variables and has three classification levels: small, medium, and large. When the three
classification levels are compared, a larger effect size results in a larger sample size.
Conventionally, the medium effect size is chosen and differs for correlation and
regression analyses. The power of the statistical test concerns the probability that the differences between the variables, if found, are statistically significant. A power of 0.8 or greater is normally used in data analyses and signifies that there is at least an 80% chance that a true effect, if present, will be detected.
Several statistical models were considered in the determination of the total sample
size for conducting the correlation and regression analyses. For each model, the α-value
and power level chosen were 0.05 and 0.95, respectively. The effect size classification
levels are given as 0.10 (small), 0.30 (medium) and 0.50 (large) for correlation analyses,
and 0.02 (small), 0.15 (medium), and 0.35 (large) for regression analyses (Faul et al.,
2007, 2009). The correlational models considered were the bivariate normal random
model (continuous variables) and the point biserial model (continuous and binary
variables). The total sample sizes for each of the student and faculty participant groups
were calculated as (a) 138 for the bivariate normal correlation model, and (b) 134 for the
point biserial correlation model with both models using a medium effect size of 0.30.
Given that the regression analysis involved two or more predictor variables, the multiple
linear regression model was identified as being appropriate for the research study. Both
the random and fixed multiple linear regression models were considered in the
calculation of the population sample size. The total sample sizes for each of the student
and faculty participant groups were calculated as (a) 68 for the random model, and (b)
119 for a fixed model by setting the number of tested predictors to three and using a
medium effect size of 0.15 for both models. Consequently, the total sample size for each
student and faculty participant group was 138, which was the largest sample size
calculated for the conduct of the statistical analyses in this study. I expected a minimum
response rate of 25%, and considering I intended to administer the survey instrument to
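The correlation sample sizes above can be approximated without G*Power by using the Fisher z transformation of r; the sketch below is an approximation, not G*Power's exact computation, which reported 138 for the bivariate normal model.

```python
# Approximate a priori sample size for detecting a correlation of r = 0.30
# (medium effect) at alpha = .05 (two-tailed) with power = .95, via the
# Fisher z approximation: n = ((z_alpha + z_beta) / atanh(r))^2 + 3.
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.95):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for a two-tailed test
    z_beta = z(power)            # value corresponding to the desired power
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.30))  # 139, within one of G*Power's exact 138
```

The approximation lands within one participant of the exact G*Power result, which is why the study's target of 138 per participant group is reasonable.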
Data Collection
The research project explored the relationship of the three constructs of student
success in the online programs at the CCC campus from the viewpoint of both student
and faculty participant groups. The project used a single questionnaire for each
participant group that I developed to represent the three constructs and factors aligned to
perceived student success. The online programs differed in the number of progression
levels required for completion of the respective programs. The recruiting procedure
involved the collection of data from participants within the various levels for the 2018-
2019 academic year. The cross-sectional sampling of the academic levels of any one
Additionally, demographic data collected from participants included gender, age group
regional campus for (a) seeking permission to access the students and faculty as
participants and (b) sending the survey questionnaire. Students and instructors from the
regional campus in the selected programs were invited to participate through the
institution’s email system. Given that a list of students and faculty was not provided, I
contact at the institution. The invitation had an introduction to the survey, explained the
intent of the research project, provided instructions on how to proceed, and presented the
informed consent details. Participants were given information as to how they could exit
the survey at any time. My contact in turn uploaded the link to the questionnaire and
Data were collected using SurveyMonkey as the online survey platform. The
standard version of the software was purchased for the research study as it was more
versatile than the basic version for conducting the survey for this study. One of the
disadvantages of the basic version of SurveyMonkey is that it only allows the collection
of data from a maximum of 100 participants. Considering the sample size was a
minimum of 138 student participants and 138 faculty participants, the standard version
was more than adequate. The standard version of the software accommodated the import
of the results into Microsoft Excel. From Microsoft Excel, I imported the data into SPSS.
Participants had 4 weeks to complete the survey. During this period, I sent three follow-up emails to students and faculty, encouraging them to participate in the study. By completing the questionnaire, students and faculty voluntarily consented to be part of the research study. The instructions also gave students and faculty the opportunity to exit the survey at any time.
Prior to collecting data for the study, the instrument was field tested under the
same conditions described for the data collection process using a sample of students and
faculty from two other higher education institutions inclusive of obtaining informed
consent. The purpose of testing the instrument was to identify any problems associated
with the question items and the methodology to be employed in the actual study. The
results of the field test were used only to determine the instrument’s reliability; the results were not used in the data analysis for the actual research study.
Instrumentation
(1989) interaction theory, and the factors related to perceived student success. The Moore
for each of the constructs and perceptions of student success (see Tables 1-4). Tables 7-10 below compile the survey items for the constructs of student engagement, self-regulation practices, and student satisfaction and the factors of perceived student success.
I present the operational definitions used to conduct this study for the three
items. The operational definition for student engagement in the online learning
environment corresponds to the amount of effort that students expend in the active
Table 7
Student engagement
• Applies critical thinking skills
• Integrates own views with that of others
• Prepares study notes
• Applies learning to real-life situations
• Interacts with instructor
• Discusses academic performance and other matters related to academic goal achievements
• Obtains meaningful feedback on assignments
• Understands better difficult concepts and content after interaction
• Collaborates as one-to-one or as a group
• Interacts with peers on mastering course material
• Respects peer differences
• Values peer differences
• Uses learning space to participate in course activities
Number of survey items: 13
platform.
Table 8
Self-regulation practices
• Allows time for review of content
• Develops plan to achieve learning goals
• Implements plan to achieve learning goals
• Completes course activities
• Checks online learning space for course material updates
• Initiates communication with instructor
• Uses more than one way to communicate
• Develops plan to assist peers
• Implements plan to assist peers
• Monitors interactions with peers
• Reflects on interactions with peers
• Responds to contributions by peers
• Uses online activities to self-direct learning
Number of survey items: 13
learner-online platform.
Table 9
Student satisfaction
• Quality of learning experiences
• Course activities aligned to expectations
• Interactions with instructors
• Interactions with peers
• Orientation program to online learning
Number of survey items: 5
The operational definition for perceived student success in the online learning
Table 10
The student and faculty questionnaires were developed using the survey items
given in Tables 7-10 (see Appendices A and B). The questionnaires were a combination of
four scales for the variables: student engagement, self-regulation practices, student
satisfaction, and perceptions of student success. The same survey items were used for
both the student questionnaire and the faculty questionnaire, allowing for the comparison
of student responses and faculty responses for each scale item for the four variables.
demographic information and Part 2 related to four variables of student engagement, self-
regulation practices, student satisfaction, and student success. The demographic items
included gender, age group, name and level of program, and full-time or part-time status.
Table 11 defines the demographic variables and justifies the measurement used for each
variable.
Part II of the questionnaires used the five-point Likert scale: strongly agree (5),
agree (4), neither agree nor disagree (3), disagree (2), strongly disagree (1). Participants
rated the extent to which each scale item for each variable applied to either students’
while the student participants ascribed a personal value to the scale items for each
variable for the online program in which they are enrolled, faculty participants ascribed a
The variable scores were calculated using the numerical values assigned to the
Likert scale. The total possible scores for each variable are shown in Table 12. For
example, one of the student engagement question items required students to respond to “I
apply critical thinking skills to the course activities.” There were five possible responses
to this question based on the five-point Likert scale. If the response chosen was disagree,
the score calculated for this item was the numerical value 2.
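The scoring scheme above can be sketched as a simple mapping from response labels to numerical values; the item wording here is illustrative rather than the exact survey text.

```python
# Map each five-point Likert label to its numerical value and sum the
# items of a scale into a variable score, as described in the text.
LIKERT = {"strongly agree": 5, "agree": 4, "neither agree nor disagree": 3,
          "disagree": 2, "strongly disagree": 1}

def score_items(responses):
    """responses: list of Likert labels for one variable's scale items."""
    return sum(LIKERT[r] for r in responses)

print(LIKERT["disagree"])                                    # 2, as in the example
print(score_items(["agree", "disagree", "strongly agree"]))  # 4 + 2 + 5 = 11
```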
Table 11
before being administered for data collection (Worthington & Whittaker, 2006). For this
study, content validity was achieved by engaging two experts in online learning to
evaluate the question item constructions. An invitation was first sent to the content
experts asking them to participate in a content review of the instrument (see Appendix
C). Once agreement was given, the experts were sent the questionnaire and asked to
provide feedback on (a) the comprehensiveness and relevance of each question item, and
(b) the representativeness of the entire instrument to its purpose. I received comments on
six question items in both the student engagement and self-regulation practices
categories. These comments related to the clarity and complexity of the question item
construction. Additionally, comments were received on the purpose of the question items
in the perceived student success category. I revised the question items and reconstructed
the questions for the perceived student success category of the instrument. The
adjustments increased the number of items for student engagement by 1 to a 13-item scale
and self-regulation practices by 3 to a 13-item scale. I refined the questions in the
through the application of the Cronbach’s alpha statistic and verification of significant
correlations between the question items. Additionally, field testing was to be used to
determine content validity and to conduct a preliminary factor analysis of the correlation
The field test of the instrument was conducted at two other higher education
institutions, one in the Caribbean and the other in the United States, prior to the
administration of the study. The field test participants were both students and faculty who
were not associated with the study site. Results from the field test were not used in the
Data were analyzed using the Statistical Package for Social Sciences (SPSS) Version 23 software. Each participant’s data were screened first for completeness (Warner,
2013). Data screening is critical to the data analysis process as it allows the researcher to
maintain the integrity of the dataset. Data screening identifies incomplete responses such
as missing scores for the demographic information and the scale items for each variable,
incorrect scoring of items, coding errors, and extreme data. Prior to confirming the
findings that there were incomplete or extreme responses, I examined the data to
determine if the data could be modified or replaced. If the incomplete data influenced the
remaining dataset negatively in terms of the quality of the data collected and the results
of the analytical process, the data were cleaned by removing the data from the datasets
such that there were no missing scores for the demographic information and the scale
items for each variable. The data values were removed by defining the parameters and
rules within which data were acceptable. The SPSS tool selected the values that did not
comply with the predefined rules and removed the unwanted data.
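The screening rules described above can be sketched as follows; the field names are hypothetical placeholders, not the study’s actual variable names, and the rules stand in for the parameters defined in SPSS.

```python
# Flag and drop responses with missing demographic fields or scale scores
# outside the valid 1-5 Likert range (i.e., miscoded or extreme values).
def screen(records, demo_keys=("gender", "age_group"), scale_key="items"):
    clean = []
    for rec in records:
        if any(rec.get(k) in (None, "") for k in demo_keys):
            continue                                # missing demographic data
        if any(not (1 <= s <= 5) for s in rec[scale_key]):
            continue                                # out-of-range (miscoded) score
        clean.append(rec)
    return clean

data = [{"gender": "F", "age_group": "25-34", "items": [4, 5, 3]},
        {"gender": None, "age_group": "18-24", "items": [2, 3, 4]},  # dropped
        {"gender": "M", "age_group": "35-44", "items": [6, 2, 1]}]   # dropped
print(len(screen(data)))  # 1
```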
Research Questions
and student satisfaction, and regression analysis of the three constructs as predictor
variables and perceived student success as the outcome variable. The intent of the study
concerned the self-reported opinions of both students and faculty as it related to their
experiences in the online learning environment. Figure 1 and Figure 2 represent the
[Figures 1 and 2: models relating the constructs of self-regulation practices, student engagement, and student satisfaction.]
was statistical significance in the correlation model for the pairs of constructs among
The three sub questions and null and alternate hypotheses for the correlation
model:
in online courses.
The second question and null and alternate hypotheses correspond to the regression
model:
courses?
Data analysis commenced with the data screening and cleaning procedures that
first identified abnormal or uncharacteristic datasets and second cleaned the data by
modifying, correcting, or removing these anomalies. Data analysis of the cleaned datasets
included descriptive and inferential statistics for both students and faculty. The
descriptive statistical analysis summarized the data for each demographic variable using
frequency data tables depicting mean scores, standard deviation, and the number of
respondents for the online programs represented in the data collected. Inferential statistics
were used to examine (a) the correlation model of pairs of the three construct variables of
student engagement, self-regulation practices, and student satisfaction, and (b) the
regression model of the three constructs and perceived student success as outlined in the
research questions. Table 13 summarizes the inferential statistics tests that were used for
Table 13
Correlation Analysis
continuous variables. Although the Likert scale is classified as ordinal, the ordinal values can be converted to numerical values to allow for the correlation model to
calculated to examine the extent of the association (Field, 2016). The closer the Pearson
coefficient is to 1, the stronger the association between the two variables. Pearson
respectively. The two assumptions that must be satisfied for correlation analyses are (a)
the variables are normally distributed, and (b) the scores for the variables are independent
of each other. The fulfillment of the first assumption was determined by creating a
scatterplot of the data and checking to see if there was a linear relationship between the
two variables. The completion of the questionnaire by separate participants satisfied the
second assumption that the scores on the two variables were independent. If the criteria
of the two assumptions are met, r can be computed using SPSS. The correlation between
the variables will be significant if the probability value is p < .05 or p < .01. An example
of the correlation statement is represented by r(df) = correlation value, p < .05 where df =
degrees of freedom.
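A worked sketch of the computation of r and its associated significance statistic follows; the data are illustrative, not study data, and the p-value would be read from the t distribution with df = n − 2 (as SPSS does).

```python
# Pearson's r from deviation scores, plus t = r * sqrt(df / (1 - r^2)),
# which is compared against the t distribution with df = n - 2.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
t = r * sqrt((len(x) - 2) / (1 - r ** 2))  # significance statistic, df = 3
print(round(r, 3), round(t, 3))  # 0.775 2.121
```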
Multiple linear regression examines the relationship between multiple predictor variables (two or more variables) and outcome variables
(Warner, 2013). For the research project, the multiple linear regression assessed the
strength of the relationship between the three constructs and perceptions of student
success. The output of the multiple linear regression analysis includes the multiple correlation coefficient, R, and its squared value, the coefficient of determination (R²); the F-test to measure the predictive value; beta coefficients; t-tests; and the intervals associated with 95% confidence levels. The assumptions for multiple linear regression are that (a) the variables are normally distributed (normality); (b) a linear relationship exists between the
predictor and outcome variables (linearity); (c) the variance of error terms is similar
across the predictor variables (homoscedasticity), and (d) the absence of multicollinearity
is satisfied. These assumptions are tested while the multiple regression analysis is being
conducted. The first three assumptions are ascertained from the patterns of relationship
homoscedasticity are met when the confidence levels and statistical significance tests are
validated (Field, 2016). One of the statistics determined in the multiple linear regression analysis is the Durbin-Watson test. If the value of this test is close to 2 (conventionally, between 1 and 3), the residual values of the variables are assumed to be independent. In the coefficients table, a tolerance statistic well above 0.1 indicates that the predictor variables are not highly correlated with each other. This assumption is further tested by reviewing the variance inflation factor (VIF) statistic; VIF values below 10 indicate that the predictor variables are not highly correlated with each other.
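The multicollinearity check can be illustrated for a pair of predictors, for which VIF = 1 / (1 − r²) with r the correlation between the two predictors; a minimal sketch with made-up data:

```python
# Pairwise VIF: values near 1 indicate little collinearity; values above
# roughly 10 are the conventional cause for concern.
from math import sqrt

def pairwise_vif(x1, x2):
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    den = sqrt(sum((a - m1) ** 2 for a in x1) *
               sum((b - m2) ** 2 for b in x2))
    r = num / den                       # correlation between the predictors
    return 1 / (1 - r ** 2)

print(round(pairwise_vif([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]), 3))  # 2.778
```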
The general equation for the multiple linear regression model shows the relationship between the outcome variable and the three predictor variables as follows:

Y = B0 + B1X1 + B2X2 + B3X3 + e

where Y is the outcome variable, B1 to B3 are the slopes of the predictor variables X1 to X3, B0 is the constant (intercept), and e is the error term. For statistical significance, the null hypothesis should be rejected.
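Under illustrative assumptions (hypothetical data generated from an exact linear relationship), the three-predictor model above can be fitted by solving the normal equations; this is a pure-Python sketch, not the study’s SPSS procedure.

```python
# Fit Y = B0 + B1*X1 + B2*X2 + B3*X3 by solving (X'X)b = X'y with
# Gaussian elimination and partial pivoting.
def fit_ols(rows, y):
    """rows: list of (x1, x2, x3) tuples; returns [B0, B1, B2, B3]."""
    X = [[1.0, *r] for r in rows]          # prepend intercept column
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
         for a in range(k)]                # X'X
    v = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]  # X'y
    for col in range(k):                   # forward elimination
        p = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

rows = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (2, 1, 0), (1, 2, 3)]
y = [2 + x1 + 0.5 * x2 - 1 * x3 for x1, x2, x3 in rows]  # exact relationship
print(fit_ols(rows, y))  # ≈ [2.0, 1.0, 0.5, -1.0]
```

Because the hypothetical outcome is an exact linear function of the predictors, the recovered slopes match the generating coefficients up to floating-point error.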
The research design accounted for confounding variables and covariates that could interfere with a priori assumptions made during the data analysis
phase. Confounding variables are hidden variables that influence the results ascribed to
the relationship between predictor and outcome variables (Field, 2016; Warner, 2013).
Conversely, covariates are variables that could have a predictive influence on the
control variables. In this study, the control variables identified were gender, age group,
The reason for classifying these variables as control variables was due to the possibility
of influence on the predictor variables. Hence, there was a need to control the effects that
these control variables would have across the levels of the predictor variables.
Threats to Validity
the instrument to measure the intended variables and the precision to which the findings
as both external and internal. External threats to validity relate to the extent to which the
sample is representative of other populations, over time, and in other settings. Threats to
internal validity relate to extrinsic factors associated with selection biases and intrinsic
factors associated with changes within the sampling units over time, the stability of the
the target population. Furthermore, the study was conducted in the participants’ natural
setting, and data were collected only at a single point in time, not through a pre-test and
post-test design structure. The results emerging from the singular collection period had
the potential to threaten the ability of the researcher to generalize to outside populations.
To address this threat, the cross-sectional research design was only intended to study the
Many of the threats that occur to internal validity correlate to pre-test and post-
test research designs. Given that the intent of this research study was to generalize the
relationship between the predictor and outcome variables and not to determine cause and
effect relationships, the threat to internal validity was reduced. Additionally, other
potential threats to internal validity were reduced through the exploratory nature of this
study and the short timeframe for the study, thereby reducing the focus on the cause and
effect interpretation of the results regarding the relationship between the variables.
existed in this study. Aligning the development of the survey instrument to the theoretical
framework of Moore’s interaction theory reduced the impact of this threat. The threat to
statistical conclusion validity was minimized by comparing the results of the relationship
Ethical Procedures
Students and faculty participated in the research study on a volunteer basis, and I
maintained confidentiality and anonymity throughout the research process. Given that I
did not send the questionnaire to the participants directly, the names and email addresses
of the students and faculty were not captured. To alleviate any ethical concerns about
the recruitment of participants to the study, I followed up with my assigned contact at the
institution to ensure that the invitation and questionnaire link to participate had been
posted on the general website for students and faculty. Additionally, my institutional
liaison was independent of the participation pool of faculty members to eliminate any
conflict of interest that could arise. The administration and staff at the institution did not
have access to the data collected. The use of SurveyMonkey as the software survey tool
allowed for data collection to be accessed directly and only by the researcher, thereby
eliminating any ethical concerns regarding anonymity and confidentiality. The online
survey tool also allowed respondents’ information and survey data, including data from participants who withdrew from the study, to be kept confidential for the period specified by the researcher. For the study, I will keep the data on SurveyMonkey
for the period of the dissertation study, after which I will delete the data from my
SurveyMonkey account.
The consent form prepared for the expert content evaluation and the field testing
of the instrument was submitted to the Walden University IRB for evaluation. Data from
the field testing did not form part of the statistical analyses of the survey findings. An
application for the conduct of the actual study was submitted to IRB for approval. The
Summary
practices, and student satisfaction and how this relationship was aligned to student
education institutions. This innovative relationship was measured from the perspectives
of both students and faculty. The cross-sectional study involved the design of a new
instrument to examine the four factors, and the operational definition for the constructs
used the Moore (1989) interaction model as the basis for the design. The two
questionnaires developed separately for students and faculty used the same thematic
approach to each of the four factors, and the wording of questions generally was the
same. The strategies for content validity and reliability were presented and included the
use of subject experts and pilot testing of the instruments. In Chapter 4, I discuss the
results obtained for validity and reliability and statistical analyses conducted to explore
Introduction
The purpose of this study was to explore the innovative relationship between the
how this relationship aligns to perceptions of student success in online learning settings at
measured from the perspectives of both students and faculty who self-reported their
experiences in the online environment. The overarching goals of the study comprised
both a correlational study of the constructs and a regression study of the constructs and
Two main research questions guided this study. The first question related to the
correlation model:
The three sub questions and null and alternate hypotheses for the correlation
model:
in online courses.
The second question and null and alternate hypotheses correspond to the regression
model:
courses?
relationships in a single study required that the instrument be pilot tested at different
institutions other than the institution selected for the actual study site. As a new
instrument, the validity and reliability of the instrument needed to be confirmed to verify
the accuracy of variables being measured and internal consistency of the scales. Validity
was determined as content validity in this study and described in Chapter 3. Details of the
Chapter 4 provides the results of the pilot study and the actual research study.
Chapter 4 is organized into five sections: (a) the results of the pilot study and how the
results impacted the reliability of the instrument, (b) the results of the actual study and the
reliability outcome of the instrument for the research study compared with the outcome
obtained for the pilot study, (c) the preliminary factor analysis of the instrument, (d) the
correlation analyses between pairs of the three constructs, and (e) the multiple linear
regression analysis between the three constructs as predictor variables and perceptions of
The research instrument was designed to ask the same questions of both student
using a four-phased interaction model to obtain the most relevant items for each of the
three constructs. This four-phased model allowed for the establishment of commonality
between the question items in relation to the interactions of (a) learner-content, (b)
model that comprised the first three interactions. This modification was necessary
interactions in the online learning space. Question items for perceptions of student
success were not linked directly to the four-phase interaction model but instead to the
Validity, as content validity, was established using one set of question items, as the items in the instrument’s two questionnaires were the same for both students and faculty. In
Chapter 3, I described the stages to establish content validity for the question items. Two
content experts determined the validity of the instruments prior to the distribution of the
surveys to participants in the pilot study and the actual research study. While the research
questions focused primarily on each construct overall, I recognized that the internal
reliability using the Cronbach’s alpha test would not distinguish between the four
The pilot study was conducted at two institutions offering online programs and
courses using two surveys, one for students and one for faculty. The response rate was
very low, and this response rate may have been affected by the timing of distribution of
the surveys at both institutions. During the time that the surveys were open, participants were completing projects for the close of the semester/quarter. For the student survey, a total of
10 responses were received. Of the 10 responses, two responses were incomplete. For the
faculty survey, a total of three responses were received, and one of those was incomplete.
The question items were coded for all constructs to represent the data more clearly. The
Table 14
Table 16
The reliability analyses for the three constructs and perceptions of student success were conducted by calculating the value of Cronbach’s alpha (α). According to
Kline (1999), a Cronbach statistic between 0.7 and 0.8 demonstrates acceptable reliability
of scales, and a Cronbach statistic above 0.8 exhibits good reliability. Table 18 illustrates
the Cronbach statistic for each of the construct and factor scales of the student survey,
Reliability Statistics for the Construct/Factor Scales of the Student Survey Instrument for
All Responses
Student engagement scale. For the original 13-item student engagement scale, an
initial value of Cronbach’s alpha was found to be 0.426, as shown in Table 18. This value
was below the acceptable statistic for the internal consistency of a scale. When three
questions were removed, the internal reliability of the scale increased to an acceptable
value of 0.777. Table 19 gives further details of the gradual increase in internal
Table 19
Increase in Internal Reliability of the Student Engagement Scale for the Student Survey
Instrument
Self-regulation practices scale. For the original 13-item self-regulation practices scale, the value of Cronbach’s alpha was found to show good internal reliability at 0.824,
as shown in Table 18. The initial Cronbach statistic result indicated that no questions
were to be removed.
Student satisfaction scale. For the original 5-item student satisfaction scale, the
value of Cronbach’s alpha was found to be acceptable at 0.751, as shown in Table 18.
The initial Cronbach statistic result indicated that no questions were to be removed.
Student success scale. For the original 14-item student success scale, the
value of Cronbach’s alpha was found to be acceptable at 0.749, as shown in Table 18.
The initial Cronbach statistic result indicated that no questions were to be removed.
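The Cronbach’s alpha values reported above follow the standard formula α = k/(k − 1) · (1 − Σ item variances / total-score variance); a minimal sketch with hypothetical item responses (not the pilot data):

```python
# Cronbach's alpha from a respondents-by-items score matrix, using
# sample variances as statistical packages such as SPSS do.
from statistics import variance

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])
    items = list(zip(*rows))                   # transpose to per-item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

rows = [[3, 4, 3], [4, 5, 4], [2, 3, 3], [5, 5, 4]]
print(round(cronbach_alpha(rows), 3))  # 0.923
```

Dropping an item that correlates poorly with the rest of the scale lowers the item-variance sum relative to the total variance, which is how removing three items raised the student engagement scale’s alpha from 0.426 to 0.777.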
Due to the low participation numbers for the pilot study, the 13-item
questionnaire for student engagement was not adjusted to 10 items for the actual study as
I wanted to retest the internal reliability of the instrument using a larger sample size.
Although eight of the 10 participants answered all questions in the pilot study, the
Cronbach statistic was calculated on the 10 student responses and not the eight complete
responses. A preliminary result for the eight responses showed that the internal reliability
was lower than the internal consistency of the 10 responses. This result supported the
decision to retest the reliability of the instrument using a larger sample.
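The reliability procedure described above, computing Cronbach's alpha for a scale and then recomputing it with candidate items removed, can be sketched in a few lines. This is an illustrative sketch, not the statistical software used in the study, and the score matrix shown is hypothetical.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def alpha_if_item_deleted(scores):
    """Alpha recomputed with each item dropped in turn, to flag weak items."""
    scores = np.asarray(scores, dtype=float)
    return [cronbach_alpha(np.delete(scores, j, axis=1))
            for j in range(scores.shape[1])]
```

For two perfectly parallel items, for example `cronbach_alpha([[1, 1], [2, 2], [3, 3]])`, the function returns 1.0. Dropping items whose removal raises alpha mirrors the adjustment that moved the student engagement scale from 0.426 to 0.777.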
Introduction
The actual study was conducted at a Caribbean regional institution (CCC campus)
with a population of students and faculty spanning several Caribbean islands. The two
questionnaires, one each for student and faculty participants, were not adjusted following
the pilot study as the sample size was small. The recruitment exercise was carried out as
described in the methodology for the duration of the planned data collection period. The
internal reliability of the larger sample from the study site was calculated using
Cronbach’s alpha. Data analyses comprised both a descriptive study and an inferential
study.
Data Collection
The actual study was conducted over four weeks. Students enrolled in and faculty
teaching online courses were invited to participate in the study using the convenience
sampling method. Letters of cooperation and IRB approval from the institution (see
Appendix D) allowed me to collect data from participants. The data collection
involved issuing invitation letters and three follow-up letters to participants through my
administrative contacts at the institution. Data collection was carried out as planned
except in one instance when the institution’s liaison for student participants indicated that
the second follow-up letter was not distributed as scheduled. To compensate for this
oversight, the survey remained open for an additional five days after the third and final
follow-up letter was distributed.
The internal reliability of the instrument used for the actual research study was
determined in two ways. First, Cronbach’s alpha was calculated using all the responses
from each participant group, and second, Cronbach’s alpha was calculated using only
those responses that had no missing demographic data. Participant groups were separated
for the internal reliability determination. Tables 20, 21, 22, and 23 give the results of the
reliability analyses.
The internal reliability for the scales for each participant group was calculated
using all the responses collected. Tables 20 and 21 show the Cronbach statistic for each
of the construct and factor scales. This statistic varied between 0.838 and 0.917 for the
student questionnaire and between 0.806 and 0.907 for the faculty questionnaire.
Table 20
Reliability Statistics for the Construct/Factor Scales of the Student Survey Instrument for
All Responses
Table 21
Reliability Statistics for the Construct/Factor Scales for the Faculty Survey Instrument
for All Responses
The internal reliability for the scales for each participant group was calculated
using responses that had no missing demographic data. Tables 22 and 23 show the
Cronbach statistic for each of the construct and factor scales. This statistic varied between
0.833 and 0.921 for the student questionnaire and between 0.794 and 0.906 for the faculty
questionnaire.
Table 22
Reliability Statistics for the Construct/Factor Scales of the Student Survey Instrument for
Responses With No Missing Demographic Data
Table 23
Reliability Statistics for the Construct/Factor Scales of the Faculty Survey Instrument for
Responses With No Missing Demographic Data
A comparison of the Cronbach's alpha values for the scales for all responses with the
same scales for responses with missing demographic data removed revealed little
difference between the two sets of statistics. This high degree of similarity confirmed
that the reliability of the instrument designed for students and faculty was consistent.
Consequently, there was no need to remove any of the items from the two questionnaires,
as the Cronbach's alpha results showed that the measurements for each of the constructs
of student engagement, self-regulation practices, and student satisfaction, and for
perceptions of student success, were reliable.
The preliminary factor analysis was conducted to support the content validity
established in Chapter 3 for the question items of the newly developed instrument. Given
that the question items were the same for both student and faculty groups, the factor
analysis was conducted on the student group, which had a larger number of respondents.
A separate factor analysis was conducted for the three constructs and perceptions of
student success, as each of the four dimensions had a different operational definition. The
questionnaire was constructed by identifying four interaction factors for each of the three
constructs: (a) learner-content; (b) learner-instructor; (c) learner-learner; and (d) learner-
online platform. The group of questions for perceptions of student success did not pre-
identify factors. The factor analysis used the principal components extraction method.
Student engagement. The factor analysis examined the 13 question items for student
engagement to determine how these items were grouped. All items were correlated with
at least one other item above the correlation value set at 0.3 (see Appendix E, Table E1).
The Kaiser-Meyer-Olkin measure of sampling adequacy (see Appendix E,
Table E2) was greater than the recommended value of 0.6 (Field, 2016). The same
table confirmed the significance of the factor analysis based on Bartlett's test of
sphericity.
Four factors emerged from the principal component analysis for student
engagement (see Appendix E, Table E3). These four factors explained 71% of the
variance in the question items for eigenvalues over 1. The pattern matrix of these factors
revealed how the question items were loaded to represent the construct of student
engagement (see Appendix E, Table E4). An examination of the factor loading showed
that all the question items for the learner-content interface were categorized in the same
factor. The items for the learner-instructor interface also were placed in the same factor.
The items for the learner-learner interface were sorted into two factors, and the learner-
online platform question was placed in one of the two factors identified for the learner-
learner interface. The preliminary factor analysis showed that the factors for learner-
learner and learner-online platform could be improved by adding more questions that
would create a further distinction between the items.
Self-regulation practices. The factor analysis examined the 13 question items for
self-regulation practices to determine how these items were grouped. All items were
correlated with at least one other item above the correlation value set at 0.3 (see
Appendix E, Table E5). The Kaiser-Meyer-Olkin measure of sampling adequacy was
0.81 (see Appendix E, Table E6), and was greater than the recommended value of 0.6
(Field, 2016). The same table confirmed the significance of the factor analysis based on
Bartlett's test of sphericity.
Four factors emerged from the principal component analysis for self-regulation
practices (see Appendix E, Table E7). These four factors explained 70% of the variance
in the question items for eigenvalues over 1. The pattern matrix of these factors revealed
how the question items were loaded to represent the construct of self-regulation practices
(see Appendix E, Table E8). An examination of the factor loading showed that all the
question items for the learner-instructor interface were categorized in the same factor.
The items for the learner-content interface shared three factors. The items for the learner-
learner interface were sorted into three factors, and the learner-online platform question
was placed in one of the same factors with the learner-content interface. The preliminary
factor analysis showed that the question items for learner-content and learner-learner
interfaces could be improved by adding more questions that would create a further
distinction between the items.
Student satisfaction. The factor analysis examined the five-question items for
student satisfaction to determine how these items were grouped. All items were
correlated with at least one other item above the correlation value set at 0.3 (Appendix E,
Table E9). The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.82 (see
Appendix E, Table E10), and was greater than the recommended value of 0.6 (Field,
2016). The same table confirmed the significance of the factor analysis based on
Bartlett's test of sphericity.
Only one factor emerged from the principal component analysis for student
satisfaction. This factor explained 65% of the variance in the question items for
eigenvalues over 1 (see Appendix E, Table E11). There was no pattern matrix of these
factors given that one factor for the question items could not be rotated. An examination
of the preliminary factor analysis showed that the question items depicting the learner
interaction factors could be improved by adding more questions that would create a
further distinction between the items.
Perceptions of student success. The factor analysis examined the 14 question items
for perceptions of student success to determine how these items were grouped. All items
were correlated with at least one other item above the correlation value set at 0.3 (see
Appendix E, Table E12). The Kaiser-Meyer-Olkin measure of sampling adequacy was
0.90 (see Appendix E, Table E13), and was greater than the recommended value of 0.6
(Field, 2016). The same table confirmed the significance of the factor analysis based on
Bartlett's test of sphericity.
Three factors emerged from the principal component analysis for perceptions of
student success (see Appendix E, Table E14). These three factors explained 68% of the
variance in the question items for eigenvalues over 1. The pattern matrix of these factors
revealed how the question items were loaded to represent perceptions of student success
(see Appendix E, Table E15). The preliminary factor analysis showed that the question
items for perceptions of student success could be refined to create distinctive categories.
Overall factor analysis results. The preliminary factor analysis was conducted to
establish the construct validity of the question items within the three constructs and
perceptions of student success. The analysis revealed the number of factors associated
with the question items created for this study. For the constructs, student engagement,
and self-regulation practices, the results confirmed the four-factor design of the question
items. Despite the confirmation, some question items did not fit exclusively in the factors
as originally intended. For the construct, student satisfaction, the results contradicted the
four-factor design and established that a one-factor design was a better fit for the original
question items. For perceptions of student success, although the original question item
design did not identify the number of factors to be represented, a three-factor design
emerged. The preliminary factor analysis results were discussed further in Chapter 5.
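The extraction logic used throughout this section, principal components on the item correlation matrix, factors retained where the eigenvalue exceeds 1 (the Kaiser criterion), and the proportion of variance those factors explain, can be sketched as follows. This is a minimal numpy illustration on simulated responses, not the study's factor-analysis software, and it omits the rotation step.

```python
import numpy as np

def kaiser_extraction(responses):
    """Count components with eigenvalue > 1 (Kaiser criterion) and the
    proportion of total variance those components explain."""
    R = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
    retained = int(np.sum(eigenvalues > 1))
    explained = eigenvalues[:retained].sum() / eigenvalues.sum()
    return retained, explained

# Simulated data with two underlying dimensions and four observed items.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
items = np.column_stack([f1 + 0.1 * rng.normal(size=500),
                         f1 + 0.1 * rng.normal(size=500),
                         f2 + 0.1 * rng.normal(size=500),
                         f2 + 0.1 * rng.normal(size=500)])
retained, explained = kaiser_extraction(items)
```

With this construction the two planted dimensions are recovered (`retained == 2`) and they account for nearly all of the variance, which is the same criterion behind the 71% and 70% figures reported for the real scales.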
The treatment of missing data was considered in this study. I found missing data
in participants’ responses to the demographic questions and the questions associated with
the three constructs and perceptions of student success. An initial total of 385 students
and 61 faculty members responded. The number of student responses was above the
minimum sample size of 138 calculated for this study, while the number of faculty
responses was below the projected sample size. The student and faculty responses were
screened for missing demographic data, and the removal of incomplete responses
resulted in a lower number of responses for the participant groups, as shown in Table 24.
Table 24
The survey instruments treated the scales for student engagement, self-regulation
practices, student satisfaction, and perceptions of student success as distinct from each
other. Although I found missing data for responses related to these scales, I noted that
missing data for one scale did not mean missing data for the other scales. As a result, I
kept the missing data for the scales and reported on them when I conducted the
statistical analyses.
The research sample comprised both student and faculty participant groups. The
baseline demographic characteristics presented relate to the gender, age group, and
country of residence of the sample. Gender and age group were identified as covariate
variables in Chapter 3 and were included in the baseline model to describe the
characteristics of the sample.
Table 25 shows the gender ratio for the student and faculty respondents. In both
participant groups, the percentage of female respondents was higher than male
respondents. Female respondents were 87.8% for the student participant group and
75.5% for the faculty participant group.
The age ranges for the two participant groups are displayed in Table 26. There
were five age group ranges, from 30 years and under to over 60 years. For the
student participant group, the highest number of respondents was found in the 31-40 age
group range (39.2%), and the lowest number of respondents was in the over 60 age group
(0.3%). For the faculty participant group, the highest number of respondents was found in
the 51-60 age group range with 35.8%, and the lowest number of respondents was in the
over 60 age group with 9.4%. There were no faculty members in the age group 30 years
and under.
Table 26
The age group ranges of each participant group were further categorized by
gender, as seen in Table 27. For the student participant group, the highest number of male
respondents was found in the 41-50 age group, and the highest number of female
respondents was found in the 31-40 age group range. The lowest number of male
respondents and the lowest number of female respondents were found in the same age
group range of over 60 years. For the faculty participant group, the highest number of
male respondents and the highest number of female respondents were found in the 51-60
age group range. The age groups with the lowest number of male and female respondents
differed for the faculty participant group. The lowest number of male respondents was
found in the 41-50 age group, whereas the lowest number of female respondents was
Table 27
The CCC regional institution serves several islands in the Caribbean. Figure 3 and
Figure 4 show the distribution of respondents by country of residence. For the student
participant group, 21 countries were represented, inclusive of two countries outside of the
Caribbean region. The largest number of student respondents was from Trinidad and
Tobago, followed by Jamaica. For the faculty participant group, 12 countries were
represented, inclusive of four countries outside of the Caribbean region. The largest
number of faculty respondents was from Trinidad and Tobago followed by Jamaica.
The research study used a non-probability sampling technique to explore the self-
reported opinions of students and faculty. The intent of the study was not to generalize
and determine cause and effect. The study was an exploratory one, particularly because
data were collected during the summer period and did not represent the total student and
faculty population of the institution. Consequently, the timing of the study impacted the
representativeness of the sample.
The response rate for the student participant group could not be determined as
information on the size of the summer student population enrolled in online courses was
not provided. While the summer population size for students could not be ascertained, the
average annual student population, or target population, was approximately 6,000 students.
Based on this annual projection of the total student population, the response rate for the
student participant group was 6.4% for all responses and 5.9% after removing the missing
demographic data.
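The response-rate arithmetic in this section can be reproduced directly. The counts and population figures below (an annual target population of roughly 6,000 students and 273 invited faculty) are taken from the surrounding text.

```python
# Response counts and population figures as reported in the text.
students_all, students_complete, student_population = 385, 352, 6000
faculty_all, faculty_complete, faculty_invited = 61, 53, 273

def response_rate(responses, population):
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * responses / population, 1)

print(response_rate(students_all, student_population))       # 6.4
print(response_rate(students_complete, student_population))  # 5.9
print(response_rate(faculty_all, faculty_invited))           # 22.3
print(response_rate(faculty_complete, faculty_invited))      # 19.4
```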
The invitation to participate, follow-up reminders, and the link to the survey were
sent to 273 adjunct faculty members by the institution. The total response rate for the
faculty participant group was 22.3%. After the data cleanup exercise, the response rate
for faculty was 19.4%. This response rate was less than the 25% predicted in Chapter 3.
Furthermore, the response rate was less than the anticipated minimum sample size of 138
and would have required 50.0% of the faculty to respond with no missing data. There are
approximately 540 adjunct faculty members in total, and the final response rate of faculty
was lower still when measured against this larger population.
The results of the research study comprise descriptive and inferential statistics.
The descriptive statistical analysis of the responses summarized the data for the
demographic variables of the student cohort and faculty teaching statuses, program level
and year, number of programs, number of teaching years, number of courses, and number
of hours spent online per week. The number of teaching years was identified as a covariate variable in
Chapter 3 and was included in the statistical analysis to describe the characteristics of the
faculty participant group. These variables were summarized using frequency tables, mean,
standard deviation, and the number of respondents for the online programs. The
inferential statistical analyses presented the results of the correlation and multiple linear
regression analyses.
Student cohort characteristics. Table 28 presents the characteristics of the student
participant group. These characteristics comprised student respondents' full-time and
part-time statuses, program level, and program year in which they were
enrolled. The frequency and percentage descriptions are provided. The largest student
respondent group was part-time (80.7%). Most of the student respondents were enrolled
in the Bachelor’s programs (61.4%), followed by the Master’s programs (16.5%). The
smallest student respondent group was enrolled in the Graduate Certificate (1.1%) and
Graduate Diploma programs (1.1%). Most of the student respondents were enrolled in
year 1 of their programs of study (41.8%). The numbers of student respondents enrolled
in year 2 (24.7%) and year 3 (25.6%) were almost the same. The smallest numbers of
student respondents were enrolled in years 5 through 8 of their programs.
Table 28
Demographics Frequency %
Cohort information
Full-time 68 19.3
Part-time 284 80.7
Total 352 100.0
Program level enrolled
Undergraduate certificate 8 2.3
Undergraduate diploma 6 1.7
Associate degree 21 6.0
Bachelor’s degree 216 61.4
Graduate certificate 4 1.1
Graduate diploma 4 1.1
Master’s degree 58 16.5
Doctoral degree 35 9.9
Total 352 100.0
Program year enrolled
1 147 41.8
2 87 24.7
3 90 25.6
4 22 6.3
5 3 0.9
6 1 0.3
7 1 0.3
8 1 0.3
Total 352 100.0
Student program characteristics. Table 29 displays the program characteristics of
the student participant group, which comprised the expected graduation year, the number
of courses in which the student
respondents were enrolled, and the number of hours spent online per week. The central
tendency, standard deviation, and range of responses are given. The largest number of
student respondents selected 2021 as the expected graduation year. Most student
respondents indicated that they were pursuing two courses, while most respondents spent
Table 29
Faculty teaching characteristics. Table 30 displays the teaching characteristics
of the faculty participant group, which comprised faculty respondents' full-time and part-
time teaching statuses and program teaching levels. The frequency and percentage
descriptors are provided. The largest faculty respondent group was part-time (84.9%).
Most of the faculty respondents taught in the bachelor’s programs (60.4%) followed by
the master’s programs (22.6%). The associate degree level had the smallest number of
faculty respondents (3.8%). There were no faculty respondents who taught at the
certificate and diploma levels.
Table 30
Demographics Frequency %
Status information
Full-time 8 15.1
Part-time 45 84.9
Total 53 100.0
Program teaching level
Associate degree 2 3.8
Bachelor’s degree 32 60.4
Master’s degree 12 22.6
Doctoral degree 7 13.2
Total 53 100.0
Faculty program and course characteristics. Table 31 displays the program and
course characteristics of the faculty participant group, which comprised the number of
years teaching, number of programs teaching, number of courses teaching
in a select program, and number of hours spent online per week. The central tendency,
standard deviation, and range of responses are given. The largest number of faculty
respondents had been teaching for at least four years. Most faculty respondents indicated
they taught one online program of study. In their selected programs, most faculty
respondents taught one course while the majority of respondents spent 10 hours per week
teaching online.
Table 31
The inferential statistical analyses of the sample population were based on the
research questions for this study. The two main research questions involved a correlation
analysis of each pair of the three constructs and a multiple linear regression analysis of
the constructs and perceptions of student success. Prior to the conduct of the analyses, I
created subscales for each of the three constructs and perceptions of student success by
adding the respondent numerical values ascribed to the Likert scale of each survey item.
As indicated in Chapter 3, the maximum numerical values for the constructs and
perceptions of student success were as follows:
Student Engagement 65
Self-Regulation Practices 65
Student Satisfaction 25
Perceptions of Student Success 70
The findings are presented for each participant group. Due to the low response rate from the faculty participant
group, the student group and the faculty group were combined to create a third participant
group. The findings are also presented for the combined participant group.
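The subscale construction described above, summing the numeric Likert value of each survey item for a respondent, can be sketched as follows. The item labels and the sample response are hypothetical, and the 5-point anchor wording is an assumption consistent with the stated scale maximums (13 items at 5 points each gives 65; 5 items gives 25).

```python
# Hypothetical 5-point Likert coding; anchor labels are illustrative.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

def subscale_score(response, item_ids):
    """Sum a respondent's numeric Likert values over one construct's items."""
    return sum(LIKERT[response[item]] for item in item_ids)

# Hypothetical engagement items and one respondent's answers.
engagement_items = ["SE1", "SE2", "SE3"]
respondent = {"SE1": "Agree", "SE2": "Strongly agree", "SE3": "Neutral"}
print(subscale_score(respondent, engagement_items))  # 12
```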
Correlation analysis. The correlation between each possible pair of the three
constructs was determined to address the first research question, which concerned the
relationships among the pairs of constructs, and the attendant three subquestions.
The subquestions and the associated null and alternate hypotheses are presented
under the applicable sections for the construct pairs. The relationship among the pairs of
constructs was calculated as the Pearson correlation coefficient, r (Field, 2016). The
results are reported in the form r(df) = correlation value, p < .05 or p < .01, where df =
degrees of freedom.
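The coefficient and its degrees of freedom can be sketched numerically. This is an illustrative numpy computation (the study used statistical software), with the Cohen cut-off values of 0.10, 0.30, and 0.50 that this section reports.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient and its degrees of freedom (n - 2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1], len(x) - 2

def cohen_strength(r):
    """Cohen's effect-size label for |r|: weak, moderate, or strong."""
    magnitude = abs(r)
    if magnitude >= 0.50:
        return "strong"
    if magnitude >= 0.30:
        return "moderate"
    return "weak"

r, df = pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
```

For this toy data, r is about .77 with df = 3, which `cohen_strength` labels strong, the same classification applied to the r(50) = .77 result reported for the faculty group.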
This correlation value gives the strength of the association between the variables
being measured and ranges from -1 (perfect negative correlation) to +1 (perfect positive
correlation). The strength of the correlation was interpreted using
the Cohen effect size classification levels for correlation analyses (Faul et al., 2007,
2009). The absolute values of these levels are given as 0.10-0.29 (small or weak), 0.30-
0.49 (medium or moderate), and 0.50-1.00 (large or strong). The Cohen correlation
classification was used to discuss the strength of the relationship between the variable
pairs. The assumptions to be satisfied for the calculation
of r were (a) the variables are normally distributed, and (b) the scores for the variables
are independent of each other. The first assumption was examined using a scatterplot to
confirm that there was a linear relationship between variables of each pair of constructs.
The second assumption was determined as being satisfied given that separate participants
responded to each survey item.
Student engagement and self-regulation practices. Figure 5 and Figure 6 show
the relationship between the variables of the constructs of student engagement and self-
regulation practices for student and faculty participant groups. Figure 7 shows the
relationship between the variables for the combined participant group. An examination of
the scatterplots showed that there was a linear relationship between the variables.
Consequently, the first assumption for the correlation model for students, faculty, and the
combined group was satisfied.
Table 32
Descriptive Statistics for Student Engagement and Self-Regulation Practices of the
Sample Population
Table 32 presents the descriptive statistics for student engagement and self-
regulation practices in terms of mean, standard deviation, and the number of responses.
The ranges of the mean values for the participant groups varied by 0.92 for the student
engagement construct and by 3.93 for the self-regulation practices construct. The
differences between the mean values for the two constructs showed that more
respondents indicated a higher level of agreement with the question items for student
engagement.
The first correlation research subquestion and related null and alternate
hypotheses were tested.
Table 33 displays the Pearson correlation coefficient between the variables for
each participant group. The correlation between student engagement and self-regulation
practices was found to be significant, r(295) = .58, p < .001, for the student participant
group, and the strength of the relationship was
strong (r > .5). For the faculty participant group, the correlation between the two
variables was significant, r(50) = .77, p < .001, and the strength of the relationship
was strong (r > .5). For the combined participant group, the correlation between the two
variables was significant, r(347) = .61, p < .001, and the strength of the
relationship was strong (r > .5). The correlation between student engagement and self-
regulation practices was highest for the faculty participant group and lowest for the
student participant group. The findings showed that there is a strong significant
relationship between student engagement and self-regulation practices given that r > .5
and p < .01. Consequently, I rejected the null hypothesis and supported the alternate
hypothesis.
Table 33
Pearson Correlation for Student Engagement and Self-Regulation Practices of the
Sample Population
Student engagement and student satisfaction. Figure 8 and Figure 9 show the
relationship between the variables of the constructs of student engagement and student
satisfaction for student and faculty participant groups. Figure 10 shows the relationship
between the variables for the combined participant group. An examination of the
scatterplots showed that there was a linear relationship between the variables.
Consequently, the first assumption for the correlation model for students, faculty, and the
combined group was satisfied.
Figure 10. Scatterplot of student engagement and student satisfaction for the combined
participant group.
Table 34 presents the descriptive statistics for student engagement and student
satisfaction in terms of mean, standard deviation, and the number of responses. The
ranges of the mean values for the participant groups varied by 0.92 for the student
engagement construct and by 1.16 for the student satisfaction construct. The differences
between the mean values for the two constructs showed that more respondents indicated a
higher level of agreement with the question items for student engagement.
Table 34
Descriptive Statistics for Student Engagement and Student Satisfaction of the Sample
Population
The second correlation research subquestion concerned the relationship between
student engagement and student satisfaction in online courses.
Table 35 displays the Pearson correlation coefficient between the variables for
each participant group. The correlation between student engagement and student
satisfaction was found to be significant, r(294) = .56, p < .001, for the student
participant group, and the strength of the relationship was strong
(r > .5). For the faculty participant group, the correlation between the two variables was
significant, r(51) = .69, p < .001, and the strength of the relationship was strong
(r > .5). For the combined participant group, the correlation between the two variables
was significant, r(347) = .58, p < .001, and the strength of the relationship was
strong (r > .5). The correlation between student engagement and student satisfaction was
highest for the faculty participant group and lowest for the student participant group.
The findings showed that there is a strong significant relationship between student
engagement and student satisfaction given that r > .5 and p < .01. Consequently, I
rejected the null hypothesis and supported the alternate hypothesis.
Table 35
Pearson Correlation for Student Engagement and Student Satisfaction of the Sample
Population
Self-regulation practices and student satisfaction. Figure 11 and Figure 12 show
the relationship between the variables of self-regulation practices and student satisfaction
for student and faculty participant groups. Figure 13 shows the relationship between the
variables for the combined participant group. An examination of the scatterplots showed
that there was a linear relationship between the variables. Consequently, the first
assumption for the correlation model for students, faculty, and the combined group was
satisfied.
Figure 11. Scatterplot of self-regulation practices and student satisfaction for student
respondents.
Figure 12. Scatterplot of self-regulation practices and student satisfaction for faculty
respondents.
Figure 13. Scatterplot of self-regulation practices and student satisfaction for the
combined participant group.
Table 36 presents the descriptive statistics for self-regulation practices and student
satisfaction in terms of mean, standard deviation, and the number of responses. The
ranges of the mean values for the participant groups varied by 3.93 for the self-
regulation practices construct and by 1.16 for the student satisfaction construct. The
differences between the mean values for the two constructs showed that more
respondents indicated a higher level of agreement with the question items for student
satisfaction.
Table 36
Descriptive Statistics for Self-Regulation Practices and Student Satisfaction of the
Sample Population
The third correlation research subquestion and related null and alternate
hypotheses were tested.
Table 37 displays the Pearson correlation coefficient between the variables for
each participant group. The correlation between self-regulation practices and student
satisfaction was found to be significant, r(294) = .45, p < .001, for the student
participant group, and the strength of the relationship was moderate
(.3 < r < .5). For the faculty participant group, the correlation between the two variables
was significant, r(50) = .89, p < .001, and the strength of the relationship was
strong (r > .5). For the combined participant group, the correlation between the two
variables was significant, r(346) = .52, p < .001, and the strength of the
relationship was strong (r > .5). The correlation between self-regulation practices and
student satisfaction was the strongest for the faculty participant group and lowest for
the student participant group. The findings showed that while the relationship between
self-regulation practices and student satisfaction was significant (p < .01), the relationship
was moderate for the student group (.3 < r < .5) and strong for the faculty and combined
groups (r > .5). Consequently, I rejected the null hypothesis and supported the alternate
hypothesis.
Table 37
Pearson Correlation for Self-Regulation Practices and Student Satisfaction of the Sample
Population
Correlation results of all paired constructs. The correlation results of all paired
constructs showed that the relationships were significant but differed in association
strength for the
participant groups. All construct pairs showed a strong relationship for all participant
groups except for the self-regulation practices and student satisfaction pair, which
showed a moderate relationship for the student group. Additionally, the relationship
between self-regulation practices and student satisfaction for the faculty group was the
highest of all the paired associations. In contrast, I found that the association between
self-regulation practices and student satisfaction for the student participant group was the
lowest of all the paired associations.
Multiple linear regression analysis. A multiple linear regression analysis was
conducted between the three constructs as the predictor
variables and perceptions of student success as the outcome variable. The regression
analysis was conducted to assess the strength of the relationship between the constructs
and perceptions of student success, as given in the second research question. This
research question and the associated null and alternate hypotheses concerned the extent
to which the three constructs predicted perceptions of student success in online courses.
The assumptions of multiple linear regression to be satisfied were (a) the variables
are normally distributed (normality), (b) a linear relationship exists between the predictor
and outcome variables (linearity), and (c) the variance of error terms is similar across the
values of the predictor variables (homoscedasticity). To examine the normality of the
independent and dependent variables for each participant group, Q-Q plots are used
(Field, 2016). Figure 14, Figure 15, and Figure 16 showed the patterns of relationship for
student engagement.
Figure 14. Q-Q plot of student engagement for the student participant group.
Figure 15. Q-Q plot of student engagement for the faculty participant group.
Figure 16. Q-Q plot of student engagement for the combined participant group.
Figure 17, Figure 18, and Figure 19 showed the patterns of relationship for self-
regulation practices.
Figure 17. Q-Q plot of self-regulation practices for the student participant group.
Figure 18. Q-Q plot of self-regulation practices for the faculty participant group.
Figure 19. Q-Q plot of self-regulation practices for the combined participant group.
Figure 20, Figure 21, and Figure 22 showed the patterns of relationship for
student satisfaction.
Figure 20. Q-Q plot of student satisfaction for the student participant group.
Figure 21. Q-Q plot of student satisfaction for the faculty participant group.
Figure 22. Q-Q plot of student satisfaction for the combined participant group.
Figure 23, Figure 24, and Figure 25 showed the patterns of relationship for
perceptions of student success.
Figure 23. Q-Q plot of perceptions of student success for the student participant group.
Figure 24. Q-Q plot of perceptions of student success for the faculty participant group.
Figure 25. Q-Q plot of perceptions of student success for the combined participant group.
The Q-Q plots for student engagement, self-regulation practices, student
satisfaction, and perceptions of student success satisfied the assumption for normality
and demonstrated that the variables were approximately normally distributed. Normal
distribution was confirmed by the output of the plots, which showed that the data were
close to the diagonal lines for all graphs. While normality was observed in the graphs,
the largest deviations occurred with student satisfaction and perceptions of student
success.
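A numeric companion to this visual inspection (illustrative only; the study relied on the plots themselves) is the correlation between the sorted sample and the theoretical normal quantiles: values near 1 correspond to points lying close to the Q-Q diagonal. This sketch uses only the Python standard library.

```python
from statistics import NormalDist, mean

def qq_correlation(sample):
    """Correlate sorted sample values with standard-normal quantiles;
    a result near 1 suggests the data sit near the Q-Q diagonal."""
    n = len(sample)
    ordered = sorted(sample)
    # Theoretical quantiles at plotting positions (i + 0.5) / n.
    quantiles = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    mo, mq = mean(ordered), mean(quantiles)
    cov = sum((a - mo) * (b - mq) for a, b in zip(ordered, quantiles))
    so = sum((a - mo) ** 2 for a in ordered) ** 0.5
    sq = sum((b - mq) ** 2 for b in quantiles) ** 0.5
    return cov / (so * sq)
```

A sample shaped exactly like the normal quantiles scores 1.0, while a strongly skewed sample scores noticeably lower, mirroring the deviations visible in the plots.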
The second assumption for multiple linear regression analysis was examined
using a matrix scatterplot diagram to determine the linear relationship of the predictor
and outcome variables. The matrix scatterplots for the student, faculty, and combined
participant groups are shown in Figure 26, Figure 27, and Figure 28 respectively. Each
matrix scatterplot of the four variables is divided into a 4 × 4 grid of 16 panels and shows the individual relationship of each variable with every other variable. On examination of the matrix scatterplots, I found that the graphs demonstrated a linear relationship between pairs of all four variables. This linear relationship between the variable pairs confirmed that the assumption of linearity was met.
Figure 26. Matrix scatterplot of the three constructs and perceptions of student success
for student respondents.
Figure 27. Matrix scatterplot of the three constructs and perceptions of student success
for faculty respondents.
Figure 28. Matrix scatterplot of the three constructs and perceptions of student success
for the combined participant group.
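A matrix scatterplot like those in Figures 26 through 28 can be produced with pandas. The data frame below is simulated and the column names are illustrative; a four-variable matrix yields the 4 × 4 grid of 16 panels described above.

```python
# Hedged sketch: a pairwise scatterplot matrix over four simulated variables.
import matplotlib
matplotlib.use("Agg")  # render off-screen; swap for an interactive backend as needed
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(286, 4)),
                  columns=["engagement", "self_regulation", "satisfaction", "success"])

# One panel per variable pair; histograms of each variable fill the diagonal.
axes = scatter_matrix(df, figsize=(8, 8), diagonal="hist")
print(axes.shape)
```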
The third assumption of homoscedasticity was determined in two ways. The first method used a scatterplot diagram of the residual values against the predicted values to show that the spread of the residuals was constant across the range of predicted values (Field, 2016).
The second method was the computation of the Durbin-Watson statistic, which tests whether the residuals are independent (i.e., not autocorrelated). A Durbin-Watson value above the cut-off point of 1 indicates that the residuals are not strongly positively autocorrelated, and Field (2016) suggested that the closer the statistic is to the value of 2, the stronger the evidence for independence of the errors. Figure
29, Figure 30, and Figure 31 show the scatterplots of the residual values for each
participant group.
Figure 29. Scatterplot of the residual values for the three independent variables against
the dependent variable for the student participant group.
Figure 30. Scatterplot of the residual values for the three independent variables against
the dependent variable for the faculty participant group.
Figure 31. Scatterplot of the residual values for the three independent variables against
the dependent variable for the combined participant group.
The scatterplots for the student and combined participant groups showed heteroscedasticity, as the data points were not evenly distributed across the values of the three independent variables. The presence of heteroscedasticity indicated that the assumption of homoscedasticity was not met for these two participant groups. Conversely, the scatterplot for the faculty group showed that the assumption of homoscedasticity was met, as the data points were evenly distributed. Table 38 shows the
Durbin-Watson statistic for the three participant groups. The Durbin-Watson test gave a picture that differed from the scatterplot inspection: the test revealed that the residual values were independent for all participant groups, as the statistic was greater than 1 in each case. Additionally, given that each Durbin-Watson statistic was close to the value of 2, the test suggested that the assumption of independent (uncorrelated) errors had been met for all participant groups.
Table 38
Durbin-Watson Statistic for the Three Participant Groups
The fourth assumption, the absence of multicollinearity, was confirmed for the multiple linear regression analysis by an examination of the correlation coefficients and the variance inflation factor (VIF) statistic for the independent variables. The absence of
multicollinearity signifies that the predictor variables are not strongly correlated with one another. Field (2016) stated that the coefficient values of the predictor
variables should not be greater than 0.8 or 0.9. Table 39, Table 40, and Table 41 show the
Pearson correlation coefficient for the four variables for the three participant groups. The
coefficient values for all three predictor variable pairs for the student and combined participant groups were less than 0.8. The coefficient value for the faculty participant group was greater than 0.8 for the self-regulation practices and student satisfaction predictor pair and less than 0.8 for the remaining predictor pairs. The high correlation for this pair in the faculty group indicated possible multicollinearity.
Table 39
Pearson Correlation for the Predictor and Outcome Variables for the Student Group
Table 40
Pearson Correlation for the Predictor and Outcome Variables for the Faculty Group
Table 41
Pearson Correlation for the Predictor and Outcome Variables for the Combined Group
Descriptive statistics for the four variables. Table 42 displays the descriptive
statistics for the independent and outcome variables by participant group. The results
obtained for the mean and standard deviation for the three constructs were comparable to
the results obtained for these statistics in the correlation model for both participant
groups. For the comparison of the four variables in the multiple linear regression model,
the number of student participants was reduced from 297 to 286. In like manner, the
number of faculty respondents was reduced from 52 to 51. The statistics for the standard
deviation of the three constructs were similar in value for the two participant groups, and
the largest difference was found to be 1.096 for the variable self-regulation practices. In
contrast, the values of the standard deviation for perceptions of student success differed between the two participant groups.
Table 42
Descriptive Statistics of the Three Constructs and Perceptions of Student Success for the Sample Population
Covariates. The possible covariates for this study were identified in Chapter 3 as
gender, age group, and years of experience of faculty participants in using an online
learning environment. Covariates are those variables that could influence the outcome
variable (Field, 2016; Warner, 2013). Controlling for these covariates would give a better
predictive value of the independent variables on the dependent variable. The multiple
linear regression analyses for the participant groups compared the results for (a) the four
variables only, and (b) the four variables and covariates collectively.
Multiple linear regression analyses. The results of the regression analyses were
presented in two segments: (a) predictive nature of the model on the dependent variable,
and overall model fit; and (b) extent of the effect of each independent variable on the
dependent variable. Two multiple linear regression analyses were computed for each set
of results for each participant group: student, faculty, and combined. The first regression
analysis incorporated the data for the three independent and dependent variables only.
The second regression analysis incorporated the three independent variables, the covariates entered as additional predictors, and the dependent variable. The effect size classification levels
depict the strength of the variability and relationship between the independent and
dependent variables in the results for regression analyses (Faul et al., 2007, 2009). The
absolute values for the classification levels are 0.02-0.14 (small), 0.15-0.34 (medium), and 0.35 and above (large). The statistical significance of each independent variable is given in the correlation tables for each participant group. The correlation
tables without covariates show whether the independent variables are significant
predictors of the dependent variable. The covariate correlation tables also show if the
covariates are significant predictors of the dependent variable for the participant groups.
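The effect-size classification above follows Cohen's f², which can be recovered directly from a model's R². A small sketch of the conversion (the thresholds are Cohen's conventions, as used by Faul et al. in G*Power):

```python
# Cohen's f^2 effect size from R^2; ~0.02 small, ~0.15 medium, ~0.35 large.
def cohens_f2(r_squared: float) -> float:
    return r_squared / (1.0 - r_squared)

# For example, a model R^2 of .25 corresponds to f^2 of about .33, a medium effect.
print(round(cohens_f2(0.25), 2))
```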
Multiple linear regression model fit. The multiple linear regression tables present
the extent to which the regression model predicts the dependent variable and overall
model fit for each participant group. The square of the correlation, R2, explains the
amount and strength of the variance contributed by the independent variables on the
dependent variable (Field, 2016). The F ratio indicates the extent to which the model
predicts the dependent variable and fits the overall participant data. The probability, p, indicates whether the model result is statistically significant.
Table 43 and Table 44 present the data for the student participant group. For the
student group without covariates, R2 = .25 and F(3, 282) = 30.89, p < .001. The R2 result indicated that the predictor variables of student engagement, self-
regulation practices, and student satisfaction explained 25% of the variability of the
outcome variable, perceptions of student success. The variability for this group was
moderate and significant (p < .05). The F ratio result indicated that the regression model
was a good fit for the student participant data (F > 1 and p < .05). For the student group
with gender and age group as covariates, R2 = .26 and F(5, 280) = 19.51, p < .001. The R2 result indicated that there was a slight increase in variability
(26%) of the predictor variables on the outcome variable. The variability for this group
also was moderate and significant (p < .05). Although there was a decrease in the F ratio
result, the regression model was a good fit for the student participant data while
controlling for gender and age group (F > 1 and p < .05).
Table 43
Table 44
Table 45 and Table 46 present the data for the faculty participant group. For the faculty group without covariates, R2 = .14 and F(3, 47) = 2.59, p = .064. The R2 result indicated that the predictor variables of student engagement, self-
regulation practices, and student satisfaction explained 14% of the variability of the
outcome variable, perceptions of student success. Although the variability for this group
was moderate, it also was insignificant (p > .05). The F ratio result indicated that the
regression model was not a good fit for the faculty participant data given that the ratio
was insignificant at p > .05. For the faculty group with gender, age group, and years of teaching experience as covariates, R2 = .16 and F(6, 44) = 1.43, p = .223. The R2 result
indicated that although there was a slight increase in variability (16%) of the predictor
variables on the outcome variable, the result was insignificant (p > .05). The F ratio result
showed that the regression model was not a good fit for the faculty participant data while controlling for the covariates (p > .05).
Table 45
Table 46
Table 47 and Table 48 present the data for the combined participant group. For
the combined group without covariates, R2 = .24 and F(3, 333) = 34.78, p < .001. The R2 result indicated that the predictor variables of student engagement, self-regulation practices, and student satisfaction explained 24% of the variability of the outcome variable, perceptions of student success. The variability for this
group was moderate and significant (p < .05). The F ratio result indicated that the
regression model was a good fit for the combined participant data given that F > 1 and
p < .05. For the combined group with gender and age group as covariates, R2 = .26 and F(5, 331) = 19.57, p < .001. The R2 result indicated that there was
a slight increase in variability (26%) of the predictor variables on the outcome variable.
The variability for this group also was moderate and significant (p < .05). Although there
was a decrease in the F ratio result, the regression model was a good fit for the combined
participant data while controlling for gender and age group given that F > 1 and p < .05.
Table 47
Table 48
The tables for the student and combined participant groups showed a high degree of similarity between the results of the regression model for the independent and dependent data set without the inclusion of the covariates. The tables for these two participant groups also showed a high degree of similarity in the results of the regression model for the independent and dependent data set while controlling for the covariates of gender and age group. Additionally, the overall regression model was found to fit the independent and
dependent data set for the two groups. For the faculty participant group, the three
independent variables did not statistically predict the dependent variable. The contrast in results between the student and combined groups and the faculty group may have been due to the small number of faculty respondents.
Statistical significance of the three constructs. Table 49 and Table 50 present the
correlation data for the independent variables separately with and without covariates for
the student participant group. For the student group without covariates, the results of the unstandardized coefficient, B, showed that perceptions of student success varied positively and significantly by 0.21 (moderate) for student engagement (B = .21, t(282) = 2.62, p < .01),
0.30 (moderate) for self-regulation practices (B = .30, t(282) = 3.87, p < .001) and 0.32
(moderate) for student satisfaction (B = .32, t(282) = 2.49, p < .05). Consequently, the outcome variable, perceptions of student success, was moderately and significantly predicted by student engagement, self-regulation practices, and student satisfaction. Based on the findings, student satisfaction was the best predictor of perceptions of student success.
For the student group with covariates, the results of the unstandardized coefficient, B, showed that each construct significantly predicted perceptions of student success. Covariate results for B showed that gender did not predict the outcome variable, but age group predicted perceptions of student success. Perceptions of student success
varied positively and significantly by 0.20 (moderate) for student engagement (B = .20,
t(280) = 2.42, p< .05), 0.29 (moderate) for self-regulation practices (B = .29, t(280) =
3.80, p < .001) and 0.32 (moderate) for student satisfaction (B = .32, t(280) = 2.49,
p < .05). For the covariates, perceptions of student success varied positively and
significantly by 0.97 (high) for age group (B = .97, t(280) = 2.10, p < .05), but not
significantly for gender (B = .36, t(280) = .30, p > .05). Hence, the outcome variable, perceptions of student success, was moderately and significantly predicted by student engagement, self-regulation practices, and student satisfaction while controlling for the covariates.
Table 50
Table 51 and Table 52 present the correlation data for the independent variables
separately with and without covariates for the faculty participant group. For the faculty
group without covariates, the results of the unstandardized coefficient, B, showed that
each independent variable did not predict perceptions of student success. Perceptions of
student success varied positively and insignificantly by 0.10 (low) for student engagement (B = .10, t(47) = .57, p > .05), 0.26 (moderate) for self-regulation practices (B = .26, t(47) = 1.00, p > .05) and negatively by 0.12 (low) for student satisfaction (B = -.12, t(47) = -.25, p > .05). Consequently, the outcome variable, perceptions of student success, was not significantly predicted by the three constructs.
For the faculty group with covariates, the results of the unstandardized
coefficient, B, showed that each independent variable and covariate did not predict perceptions of student success. Perceptions of student success varied positively and insignificantly by 0.11 (low) for student engagement (B = .11, t(44) = .55, p > .05), 0.21 (moderate) for self-regulation practices (B = .21, t(44) = .79, p > .05) and negatively by 0.06 (low) for student satisfaction (B = -.06, t(44) = -.12, p > .05). For the covariates, perceptions of student success varied positively and insignificantly by 1.71 (high) for gender (B = 1.71, t(44) = .90, p > .05), 0.60 (high) for age group (B = .60, t(44) = .59, p > .05), and negatively by 0.06 (low) for years teaching (B = -.06, t(44) = -.25, p > .05).
Hence, the outcome variable, perceptions of student success, was not significantly predicted by the three constructs while controlling for the covariates (p > .05). The insignificance of the constructs while controlling
for the covariates for the faculty group may have been due to the small number of
respondents.
Table 52
Table 53 and Table 54 present the correlation data for the independent variables
separately with and without covariates for the combined participant group. For the combined group without covariates, the results of the unstandardized coefficient, B, showed that each independent variable significantly predicted perceptions of student success. Perceptions of student success varied positively and significantly by 0.19 (moderate) for
student engagement (B = .19, t(333) = 2.55, p < .05), 0.29 (moderate) for self-regulation
practices (B = .29, t(333) = 4.12, p < .001) and 0.29 (moderate) for student satisfaction
(B = .29, t(333) = 2.40, p < .05). Consequently, the outcome variable, perceptions of
student success, was moderately and significantly predicted by student engagement, self-regulation practices, and student satisfaction.
Table 53
For the combined group with covariates, the results of the unstandardized coefficient, B, showed that each independent variable significantly predicted perceptions of student success. Covariate results for B showed that gender and age group did not predict the outcome variable. Perceptions of student success varied positively and significantly by 0.18 (moderate) for student engagement (B = .18, t(331) = 2.40, p < .05), 0.29 (moderate)
for self-regulation practices (B = .29, t(331) = 4.12, p < .001) and 0.29 (moderate) for student satisfaction (B = .29, t(331) = 2.14, p < .05). For the covariates, perceptions of
student success varied positively and insignificantly by 0.94 (high) for gender (B = .94,
t(331) = .89, p > .05), and 0.38 (high) for age group (B = .38, t(331) = 1.02, p > .05).
Hence, the outcome variable, perceptions of student success, was moderately and
significantly predicted by student engagement, self-regulation practices, and student satisfaction while controlling for the covariates of gender and age group.
Table 54
Multiple linear regression results of all participant groups. The multiple linear
regression results of the three participant groups showed a high degree of similarity
between the student and combined groups. These two groups demonstrated that the three constructs significantly predicted perceptions of student success. In contrast, the results for the faculty group showed that the three independent variables did not predict
the outcome variable. Although the faculty regression result was insignificant, the faculty
and student groups combined revealed that the independent variables do relate to the
outcome variable. The difference in results suggested that the respondent numbers for the
faculty group were too small for the multiple linear regression analysis. The statistical
significance of the analyses was therefore based on the results for the student and
combined groups only. Consequently, the null hypothesis for question 2 is rejected, and the alternate hypothesis is supported.
Summary
A new instrument for each of the participant groups was developed to explore the correlation relationship between pairs of the constructs of student engagement, self-regulation practices, and student satisfaction, and the regression relationship between the
constructs and perceptions of student success. A pilot study was conducted to determine
the reliability of the instrument at two institutions, which were different from the actual
study site. Only 10 students and three faculty responded to the pilot study. As a result, the
reliability of the instrument was calculated using the data set from the research site.
The instrument for the student and faculty participant groups was found to be
reliable using the data sets from the actual study. The Cronbach statistic was used to
determine internal reliability and measured between 0.794 and 0.906. Additionally,
preliminary factor analysis was conducted to determine the construct validity of the
instrument. The factor analysis confirmed the four-factor question item design for student engagement and self-regulation practices but did not confirm the intended four-factor question item design for student satisfaction. Additionally, the factor
analysis identified a three-factor design for perceptions of student success. The question
items for perceptions of student success had not been categorized into factors for the
study.
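The internal-reliability figures above follow the standard Cronbach's alpha formula, which can be computed from scratch. The sketch below uses simulated Likert-style item responses, not the study's data; the item count and noise level are assumptions.

```python
# Minimal from-scratch Cronbach's alpha on a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(6)
trait = rng.normal(size=(300, 1))                     # one latent construct
items = trait + rng.normal(scale=0.7, size=(300, 8))  # 8 items tapping that construct
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```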
The number of responses for the student and faculty participant groups varied.
Initially, 385 students and 61 faculty responded. After the demographic data were
cleaned, a total participant sample of 352 and 53 resulted. The minimum sample size of
138 for each participant group was achieved only for the student group. As a result of the
small respondent numbers for the faculty group, a combined participant group was created.
Descriptive statistical analysis of the responses summarized the data for each participant group. The demographic variables included residence, name and level of program, full-time and part-time status, number of online
courses, and number of hours spent online. The variables were measured using frequency
tables, mean, standard deviation, and the number of respondents for the online programs.
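The descriptive summaries described here (frequency tables plus mean and standard deviation) map onto a few pandas calls. The tiny data frame below is hypothetical; the records and column names are illustrative only.

```python
# Illustrative sketch of frequency-table and mean/SD summaries with pandas.
import pandas as pd

df = pd.DataFrame({
    "status": ["full-time", "part-time", "full-time", "part-time", "full-time"],
    "online_courses": [2, 5, 3, 1, 4],
})
freq = df["status"].value_counts()   # frequency table for a categorical variable
mean = df["online_courses"].mean()   # mean of a numeric variable
sd = df["online_courses"].std()      # sample standard deviation
print(freq.to_dict(), round(mean, 2), round(sd, 2))
```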
The two main research questions related to correlation and regression models. For
the correlation analysis research question, the relationships between pairs of the three
constructs were determined separately. The results showed that the construct pairs
correlated significantly with each other, but the strength of the relationships between the
pairs varied. The correlation strength was strong for all pairs of all participant groups
except for the self-regulation practices and student satisfaction pair, which showed a
moderate correlation for the student group. The null hypotheses for each of the pairs were
rejected given that p < .01. Consequently, the alternate hypotheses were supported as follows: there was a significant relationship between (a) student engagement and self-regulation practices, (b) student engagement and student satisfaction, and (c) self-regulation practices and student satisfaction in online courses.
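Each pairwise result summarized above rests on a Pearson correlation test of the form sketched below. The simulated pair is constructed to correlate strongly; the names and distributions are assumptions, not the study's data.

```python
# Sketch of a pairwise Pearson correlation test with SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
engagement = rng.normal(4.0, 0.6, 297)
satisfaction = 0.8 * engagement + rng.normal(0.8, 0.4, 297)  # built to correlate

r, p = stats.pearsonr(engagement, satisfaction)
print(round(r, 2), "significant at .01:", p < 0.01)
```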
The multiple linear regression analysis showed that there was a distinct difference
in results for the student, faculty, and combined groups. For the student and combined
participant groups, the null hypothesis was rejected as the three constructs as independent variables significantly predicted the dependent variable. For the faculty participant group, all the predictor variables were
statistically insignificant. Consequently, only the results for the student and combined
participant groups were used to measure the relationship between the constructs and
perceptions of student success. The results supported the alternate hypothesis that the three constructs significantly predicted perceptions of student success.
Introduction
The purpose of this study was twofold. The first part of the research was a correlation study to explore the relationships between the three constructs of student engagement, self-regulation practices, and student satisfaction. The second part of the research was a regression study to examine
the relationship between the three constructs and perceptions of student success. The
correlation and regression relationships were measured separately from the perspectives
of both students and faculty who self-reported their experiences in the online
exploration of the relationships of the three constructs and student success in a single study.
A newly constructed instrument was used to capture data separately for students
and faculty using two questionnaires. Content validity was determined using content
experts. The reliability of the instrument was calculated using the Cronbach alpha
statistic. The results of the reliability test showed strong internal reliability of the question
items for the four variables of student engagement, self-regulation practices, student satisfaction, and perceptions of student success. The preliminary factor analysis revealed that the question items were based on a four-factor design for student
engagement and self-regulation practices as desired. The four-factor design was not
evident for the question items for student satisfaction. The number of question item factors for perceptions of student success was not predetermined, and a three-factor design emerged. Prior to conducting the research study, the instrument was field tested at two institutions.
The results of the correlation study between pairs of the three constructs showed
statistical significance for the student, faculty, and combined participant groups. The
strength of the association between the pairs varied. The results of the regression study of
the relationship between the three constructs and perceptions of student success showed
statistical significance. These results were obtained for the student and combined
participant groups. The regression results for the faculty participant group were
insignificant, perhaps because of the small sample size, which was below the required
minimum sample size of 138. Consequently, the faculty data were not used in the final reporting of the regression analysis.
The findings are interpreted for the internal reliability and construct validity of the
instrument and the correlation and the multiple linear regression analyses of the study.
For the correlation analysis, the results are discussed for all participant groups: student,
faculty, and student and faculty combined. For the multiple linear regression analysis, the
results are discussed for the student and combined participant groups. The regression
analysis results of the faculty participant group were not consistent with those obtained
for the student and combined groups, and it was concluded that the difference was a consequence of the small faculty sample size.
The internal reliability and construct validity of the new instrument were
calculated using the data provided by the actual study participant groups, as the pilot
study response rate was low. Given that content validity was already established in
Chapter 3, the strength of the internal reliability of the scales for the three constructs and
perceptions of student success confirmed the instrument’s acceptability for use in the
study. Content validity of the scale items verified the intent of the measures for each
construct and perceptions of student success. The results of construct validity showed that the interaction factors of learner-content, learner-instructor, learner-learner, and learner-online platform were not consistent for the three constructs.
The results of the correlation between the pairs of the three constructs of student engagement, self-regulation practices, and student satisfaction were consistent with the findings of previous studies (Boekaerts, 2016; Mello, 2016; Pellas, 2014; Zhang et al., 2015). Studies conducted on
the relationship between student engagement and student satisfaction gave mixed results
(Jackson, 2015; Johnson et al., 2016; Kuh et al., 2007; Larose, 2010). Studies conducted on the relationship between self-regulation practices and student satisfaction showed a positive correlation (Puzziferro, 2008; Wang et al., 2013). While the correlation results of previous studies showed a positive relationship between student engagement and self-regulation practices (Mello, 2016; Pellas, 2014; Zhang et al., 2015),
mixed results were obtained when the constructs were subdivided into smaller
components. My findings were consistent with the overall correlation results of previous
research. The mixed results in previous studies confirmed that the operational definition
was a key factor in determining the relationship between this pair of constructs. I did not explore the relationships between the subcomponents of the constructs, as this part of the analysis was beyond the scope of my study.
The mixed results for student engagement and student satisfaction in previous studies appeared to be related to the learning environment. Jackson (2015) and
Kuh et al. (2007) reported a positive correlation in the traditional environment. Larose
(2010) reported a negative correlation between the two constructs at the community
college level in the online environment. Furthermore, studies using the NSSE survey
instrument were conducted at the undergraduate level in the traditional setting, and there
was no partiality observed for ethnicity (Jackson, 2015; Johnson et al., 2016). Webber et
al. (2013) reported a positive relationship between dimensions of student engagement and
student satisfaction. Although I explored the relationship collectively for seven higher education programs, my correlation results were consistent with these studies.
Studies on the relationship between self-regulation practices and student satisfaction showed a positive correlation (Puzziferro, 2008; Wang et al., 2013). These studies were conducted
in the online learning environment using community college students (Puzziferro, 2008)
and undergraduate and graduate students (Wang et al., 2013). Researchers reported that students with stronger self-regulation practices also reported higher satisfaction rates (Inan et al., 2017; Nicol, 2009). My correlation results were consistent
with the previous studies for this pair of constructs. Of all the studies on the correlation
relationship between the three construct pairs, the correlation findings between self-
regulation practices and student satisfaction were more comparable to my study in terms of the learning environment and student population.
Previous studies on the pairs of constructs were not conducted in the same
research study at the higher education level. One study explored the three constructs at
the same time, but this study was conducted in youth sports academies (Tadesse et al.,
2018). Furthermore, the study was conducted to validate the factors of the scale items for the constructs with a population of children and youths. Although this study used a three-factor model for student engagement, a
four-factor model of self-regulation, and a three-factor model for student satisfaction, the
researcher did not explore the association between the constructs. Consequently, my
study closed the gap in the exploration of the correlation relationship between pairs of the
Multiple Linear Regression Analysis Between the Three Constructs and Student
Success
The results of the multiple regression analysis verified that there was a positive
and significant relationship between each of the three constructs and perceptions of
student success. The findings showed that the constructs predicted perceptions of student
success while controlling for the covariates of age and gender. The strength of predictive
relationships was either low or moderate. There were no previous studies found on the
use of a regression model to explore the relationship between the three constructs and
perceptions of student success at the same time. Nonetheless, previous studies reported
on the relationship between pairs of constructs and student success, as discussed below. Previous studies examined the relationship between student engagement and student success (Burrow & McIver, 2012; Korobova & Starobin, 2015; Webber et al., 2013) in a
single study. The study by Webber et al. (2013) reported that the dimensions of the
student engagement associated with academic activities predicted higher levels of student
success (cumulative GPA). The other studies examined predictors within each construct
and not the relationship between constructs and student success (Burrow & McIver,
2012; Korobova & Starobin, 2015). The results of the Webber et al. (2013) study were consistent with the findings of the current study.
Previous studies reported a positive relationship between self-regulation and student success (Fong et al., 2017; Rahal & Zainuba, 2016). Self-
regulation led to higher levels of student performance and predicted student success. This
finding was more noticeable in high achievers as opposed to low achievers (Rahal &
Zainuba, 2016). Furthermore, the researchers noted that self-regulation was not a high
predictor for all dimensions of student success. Other studies showed a positive relationship between self-regulation and student success in the online and blended learning environments (Inan et al., 2017; Nicol, 2009). These studies also confirmed self-regulation as a predictor of higher student success rates.
The findings of this study add to the current literature as it relates to the
exploration of the three constructs and perceptions of student success within the
Caribbean context. Further, the study adds to the body of knowledge in the examination
of (a) the three constructs together, and (b) the three constructs and perceptions of student
success in a single study. First, the findings show the correlation relationship between
pairs of the three constructs in a Caribbean institution and contribute to the understanding
of the association of each construct with the other. Second, the findings illustrate the
predictive relationship between the three constructs and perceptions of student success
and the importance of the constructs to student persistence. Third, the correlation and regression findings can inform practitioners in designing online learning spaces that are responsive to the needs of students.
The study measured the self-reported scores of both students and faculty. While previous studies measured students' self-reported responses, there were no studies that measured faculty responses. Consequently, the study
adds to the current literature on faculty’s self-reported views on the extent to which
students are engaged in the online learning environment, apply self-regulation practices, and are satisfied with that environment. To measure the constructs, previous studies have used separate questionnaires. There have been no studies that combined the constructs and perceptions of student success in a single questionnaire and single study. My instrument captured the three constructs and perceptions of student success at the same time. As a result, my
study has added to the current literature in the use of a single questionnaire that can
produce comparable results to the use of separate questionnaires for the same constructs. The operational definitions of student engagement, self-regulation practices, and student satisfaction used Moore's (1989) three interaction model
of (a) learner-content, (b) learner-instructor, and (c) learner-learner as part of the common
operational definition. The interaction model was enhanced by adding a fourth interaction
to represent the web-based technology presence. This new interaction was designated as
the learner-online platform interaction and was included in the operational definition. The
enhanced interaction model has added to the body of literature on interactions likely to be present in online learning environments.
The study had several limitations. These limitations are presented as follows:
The study was conducted at one higher education institution in the English-speaking Caribbean. The selection of this institution was due to the limited availability of comparable institutions in the region.
The study used a convenience sampling strategy. This sampling strategy did not facilitate the determination of a cause and effect
relationship between the predictor and outcome variables. Although the sampling method
supported the testing of a new instrument, the method did not allow the results to be generalized to the wider population.
The response rate for the pilot study was very low. Although the pilot study
involved two institutions, a total of 10 students and three faculty members responded.
The low response made it difficult to conduct internal reliability and factor analyses on
the data obtained. Instead, these analyses were performed on the actual research study’s
data.
Data collection was dependent on the support of the institution’s liaison. I had no
control over the distribution of the invitation to participate, follow-up letters, and the link
to the questionnaires. In one instance, I was informed that one of the follow-up letters
was not distributed as intended, and I extended the length of the survey to accommodate
this oversight.
The minimum sample size of 138 was not achieved for the faculty respondents.
Initially, a total of 61 faculty members responded. After the data clean-up exercise, 53
faculty responses remained. Additionally, due to the online learning modality of the
program offerings, the survey was distributed only to adjunct faculty members.
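Minimum sample sizes such as the 138 cited above are typically obtained from an a priori power analysis for the planned regression (the reference list includes Faul et al.'s G*Power). A minimal sketch of that computation for the overall F test of a multiple regression, using SciPy's noncentral F distribution and purely illustrative parameters (Cohen's f² = 0.15, α = .05, power = .80, three predictors — assumptions for illustration, not the study's actual inputs):

```python
from scipy import stats

def required_sample_size(f2, n_predictors, alpha=0.05, power=0.80):
    """Smallest N at which the overall F test of a multiple regression
    reaches the target power, given Cohen's effect size f2."""
    n = n_predictors + 2  # smallest N with a positive error df
    while True:
        df1, df2_err = n_predictors, n - n_predictors - 1
        crit = stats.f.ppf(1 - alpha, df1, df2_err)
        # Power under the noncentral F with noncentrality f2 * N
        achieved = 1 - stats.ncf.cdf(crit, df1, df2_err, f2 * n)
        if achieved >= power:
            return n
        n += 1

print(required_sample_size(0.15, 3))  # medium effect, three predictors
```

Larger effect sizes or fewer predictors reduce the required N, which is why a study's minimum target depends heavily on the assumed effect size.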
The timing of the survey distribution exercise affected the possible number of
available responses. The questionnaires were distributed during the summer term, which
had fewer program offerings when compared to the September and January terms. In
order not to wait out an entire term following the pilot testing exercise, the research study
The preliminary factor analysis revealed that the factor loading of the question
items for the three constructs did not all correspond to the pre-determined interaction
factors. Although the interaction factors of (a) learner-content, (b) learner-instructor, (c)
learner-learner, and (d) learner-online platform were not explored as separate dimensions,
the factors formed the basis of the operational definitions for the constructs.
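Factor loadings of the kind referenced above are typically obtained from an exploratory factor analysis of the item responses. A minimal sketch using scikit-learn on synthetic data (the two-factor structure, eight-item count, and all numeric values are illustrative assumptions, not the study's instrument):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Two latent factors driving eight hypothetical questionnaire items
factor_scores = rng.normal(size=(200, 2))
true_loadings = rng.normal(size=(2, 8))
items = factor_scores @ true_loadings + 0.3 * rng.normal(size=(200, 8))

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
loadings = fa.components_.T  # items x factors: the usual loading-table layout
```

Inspecting which items load most heavily on which factor is what would reveal, as in the limitation above, that item loadings do not always correspond to the predetermined interaction factors.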
on their relationships with more than one instructor and more than one student. Faculty
The multiple linear regression analysis for the faculty participant group yielded
nonsignificant results (p > .05). These results appeared to be due to the small number of
responses obtained, given that the combined participant group produced significant results.
The findings of the faculty group were omitted from the final reporting of the regression
analysis results.
Recommendations
There are several recommendations for further research in the study of student
success in online learning environments. First, recommendations are presented for the
practices should be re-organized using the factor loading suggestions given in the
The question items for student satisfaction should be expanded and improved so
that they represent more accurately the four-factor design of the (a) learner-content, (b)
The question items for the learner-online platform interaction should be expanded
The question items for perceptions of student success should be categorized based
success once the adjustments to the question items have been completed.
The initial study was distributed to only adjunct faculty facilitating online
programs of study. A future study of full-time and adjunct faculty would allow for a
better representation of the faculty participation group. This study would confirm or
Future research involving a comparison of the interaction factors would allow for
a greater understanding of how these factors relate within the constructs and between the
constructs.
Comparisons of the responses to individual question items for the student and
faculty groups would determine if there are any statistical differences between the two
groups. The comparisons would also determine if there are any deviations from the
overall findings obtained for the constructs and perceptions of student success. For the
multiple linear regression model, the comparison would determine which question items
Implications
The implications for this research study are presented for positive social change
and practical considerations that could arise from the results and findings. The positive
social change is presented in relation to the transition from traditional to online learning
take into account the collection of data for more than one construct in a single study and
how this approach may help in reducing questionnaire fatigue in students and faculty.
The combination of question items for the three constructs and perceptions of
fatigue among survey respondents can be regarded as a positive response to this problem.
The findings of the correlation and regression analyses compared favorably with
the results from similar studies. The instrument could be used as a preliminary model in
predictors of student success based on their engagement and self-regulation activities and
levels of student satisfaction. Furthermore, the findings from the single instrument study
design of instructional approaches that foster the alignment between student engagement,
useful in determining how these constructs predict student success. The potential findings
could lead to positive social change in the way that universities approach the process of
Practical Implications
The use of a single questionnaire can give institutions a quick overview of the
correlation among the pairs of the three constructs and the predictive nature of the
accommodate the conduct of several analyses and reporting of findings at one time or the
reporting of the findings at different times. The instrument could be used by the
ministries of education in the Caribbean territories to develop a baseline for the three
constructs and perceptions of student success and evaluate responses over time.
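The pairwise correlations among the constructs referred to above can be computed directly with SciPy. A minimal sketch on synthetic data (the construct names and the built-in positive association are illustrative assumptions):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
engagement = rng.normal(3.5, 0.6, 300)
# Satisfaction constructed with a deliberate positive link to engagement
satisfaction = 0.6 * engagement + rng.normal(0, 0.4, 300)

r, p = pearsonr(engagement, satisfaction)
```

A significant positive r for each construct pair is the pattern the study reports; repeating the computation over time on baseline data would support the trend evaluation suggested above.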
Conclusion
The findings from the study presented the results for the internal reliability and
construct validity of the instrument. The internal reliability coefficients of the
instrument scales were above 0.7, the commonly accepted threshold for the internal
consistency of a scale. The
construct validity confirmed the original four-factor design of the instrument for student
between the three constructs and perceptions of student success were consistent with
findings from previous research studies. Pairs of the three constructs were positively and
predicted perceptions of student success. The same predictive result was obtained while
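The internal consistency statistic cited above (0.7) refers to Cronbach's alpha. A minimal sketch of the coefficient's standard formula, independent of the study's actual item data:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

For perfectly consistent (identical) items the formula returns exactly 1; values above 0.7 are conventionally taken as acceptable for a scale, which is the criterion the study applied.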
The positive social impact of my study is aligned with the innovative approach to
studying self-reported responses of students and faculty for the three constructs and
perceptions of student success. The new instrument could be used as a preliminary model
in higher education institutions in the Caribbean to learn about the predictors of student
success. A useful practical implication pertained to the establishment of baseline data for
the three constructs and perceptions of student success and evaluating trends over time.
References
http://www.isetl.org/ijtlhe/
Ali, A., & Ahmad, I. (2011). Key factors for determining students’ satisfaction in
doi:10.30935/cedtech/6047
Allen, I. E., & Seaman, J. (2014). Grade change: Tracking online education in the United
http://www.onlinelearningsurvey.com/reports/gradechange.pdf
Allen, T. O., & Zhang, Y. (2016). Dedicated to their degrees. Community College
doi:10.1080/21532974.2011.10784681
education.html
Ashby, J., Sadera, W. A., & McNary, S. W. (2011). Comparing student success between
https://www.ncolr.org/
Attuquayefio, S. N., & Addo, H. (2014). Using the UTAUT model to analyze students’
http://ijedict.dec.uwi.edu/
Awwad, M. S., & Al-Majali, S. M. (2015). Electronic library services acceptance and
https://www.infoagepub.com/quarterly-review-of-distance-education.html
Beaubrun, E. (2012). Distance learner ecologies of the University of the West Indies open
Blayone, T. J. B., vanOostveen, R., Barber, W., DiGiuseppe, M., & Childs, E. (2017).
Bolliger, D. U., & Martindale, T. (2004). Key factors for determining student satisfaction
from https://www.aace.org/pubs/ijel/
Bray, E., Aoki, K., & Dlugosh, L. (2008). Predictors of learning satisfaction in Japanese
Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies and academic
Brown, L. (2014). Constructivist learning environments and defining the online learning
doi:10.26634/jsch.9.4.2704
Burrow, M., & McIver, R. P. (2012). The impact of changes to finance-major assessment
http://www.isetl.org/ijtlhe/
484. doi:10.1016/j.chb.2013.06.020
sp-1148215168
Retrieved from ProQuest Dissertations & Theses Full Text database. (UMI No.
3737728)
doi:10.1016/j.compedu.2012.12.003
iji.net/
Cho, M., & Cho, Y. (2014). Instructor scaffolding for interaction and students’ academic
doi:10.1016/j.iheduc.2013.10.008
Cho, M., & Cho, Y. (2017). Self-regulation in three types of online interaction: A scale
doi:10.1080/01587919.2017.1299563
Cho, M., & Shen, D. (2013). Self-regulation in online learning. Distance Education,
Cortés, A., & Barbera, E. (2013). Cultural differences in students’ perceptions towards
E-Learning, 555–564.
Cox, B., & Cox, B. (2008). Developing interpersonal and group dynamics through
https://www.projectinnovation.com/education.html
Cox, T. D. (2015). Adult education philosophy: The case of self-directed learning
Cuseo, J. (2007). Defining student success: First critical first step in promoting it. E-
https://sc.edu/about/offices_and_divisions/national_resource_center/publications/
e-source/
Czerkawski, B., & Lyman, E. (2016). An instructional design framework for fostering
016-0110-z
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
10.1080/01587910701305319
Dixson, M. D. (2015). Measuring student engagement in the online course: The online
10.24059/olj.v19i4.561
Donaldson, L., Matthews, A., Walsh, A., Brugha, R., Manda-Taylor, L., Mwapasa, V., &
learning Master’s programme. AISHE-J: The All Ireland Journal of Teaching &
http://journals.sfu.ca/aishe/index.php/
Duesbery, L., Brandon, R. R., Liu, K., & Braun-Monegan, J. (2015). Transitioning to
from http://www.infoagepub.com/
doi:10.1207/s15389286ajde1804_2
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses
using G*Power 3.1: Tests for correlation and regression analyses. Behavior
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible
statistical power analysis program for the social, behavioral, and biomedical
Field, A. (2016). Discovering statistics using IBM SPSS statistics (4th ed.). Thousand
Oaks, CA: SAGE Publications.
Fong, C. J., Davis, C. W., Kim, Y., Kim, Y. W., Marriott, L., & Kim, S. (2017).
Fonolahi, A. V., Khan, M., & Jokhan, A. (2014). Are students studying in the online
Frankfort-Nachmias, C., Nachmias, D., & DeWaard, J. (2015). Research methods in the
Gallagher, S., & LaBrie, J. (2012). Online learning 2.0: Strategies for a mature market.
https://projects.iq.harvard.edu/cher/76
Garrison, D. R., Anderson, T., & Archer, W. (1999). Critical inquiry in a text-based
https://www.journals.elsevier.com/the-internet-and-higher-education
Garrison, D. R., Anderson, T., & Archer, W. (2009). Critical thinking, cognitive
Garrison, D. R., & Vaughan, N. D. (2013). Institutional change and leadership associated
with blended learning innovation: Two case studies. Internet & Higher Education,
1824–28. doi:10.1016/j.iheduc.2012.09.001
Greer, A. G., Pokorney, M., Clay, M. C., Brown, S., & Steele, L. L. (2010). Learner
Gregory, C. B., & Lampley, J. H. (2016). Community college student success in online
Hachey, A. C., Conway, K. M., & Wladis, C. W. (2013). Community colleges and
Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of
191. doi:10.3200/joer.98.3.184-192
Harrell, I. L., II, & Bower, B. L. (2011). Student characteristics that predict persistence in
doi:10.1080/08923649409526853
Horvitz, B. S., Beach, A. L., Anderson, M. L., & Xia, J. (2015). Examination of faculty
316. doi:10.1007/s10755-014-9316-1
Inan, F. F., Yukselturk, E. E., Kurucay, M. K., & Flores, R. R. (2017). The impact of
http://www.aace.org
doi:10.18546/LRE.15.3.08
Johnson, D. M., Edgar, L. D., Shoulders, C. W., Graham, D. L., & Rucker, K. J. (2016).
land grant university. College Student Journal, 50(3), 335–346. Retrieved from
http://www.projectinnovation.com/college-student-journal.html
Johnson, S. G., & Berge, Z. (2012). Online education in the community college.
doi:10.1080/10668920903323948
Judge, D. S., & Murray, B. (2017). Student and faculty transition to a new online learning
doi:10.1016/j.teln.2017.06.010
Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on
doi:10.1080/14703290252934603
Kahn, P., Everington, L., Kelm, K., Reid, I., & Watkins, F. (2017). Understanding
doi:10.1007/s11423-016-9484-z
doi:10.3402/rlt.v23.26507
Ke, F. (2010). Examining online teaching, cognitive, and social presence for adult
doi:10.1016/j.compedu.2010.03.013
doi:10.1080/01587910802004860
Kenner, C., & Weinerman, J. (2011). Adult learning theory: Applications to non-
traditional college students. Journal of College Reading and Learning, 41(2), 87–
96. doi:10.1080/10790195.2011.10850344
Kerr, M. S., Rynearson, K., & Kerr, M. C. (2006). Student characteristics for online
iheduc.2006.03.002
Khan, A., Egbue, O., Palkie, B., & Madden, J. (2017). Active learning: Engaging
Kiely, R., Sandmann, L. R., & Truluck, J. (2004). Adult learning theory and the pursuit
of adult degrees. New Directions for Adult and Continuing Education, (103), 17–
30. doi:10.1002/ace.145
Kitsantas, A., & Dabbagh, N. (2011). The role of Web 2.0 technologies in self-regulated
learning. New Directions for Teaching and Learning, 2011(126), 99–106.
doi:10.1002/tl.448
Kingdom: Routledge.
http://jistudents.org/back-issues
Kuh, G. D. (2003, March/April). What we’re learning about student engagement from
Kuh, G. D., Kinzie, J., Cruce, T., Shoup, R., & Gonyea, R. M. (2007). Connecting the
Results from the NSSE, and the Institutional Practices and Conditions That Foster
Student Success (Revised Final Report prepared for Lumina Foundation for
http://nsse.indiana.edu/pdf/Connecting_the_Dots_Report.pdf
Kuo, Y., Walker, A. E., Belland, B. R., & Schroder, K. E. (2013). A predictive study of
doi:10.19173/irrodl.v14i1.1338
Kuo, Y. C., Walker, A. E., Schroder, K. E. E., & Belland, B. R. (2014). Interaction,
satisfaction in online education courses. The Internet and Higher Education, 20,
35–50. doi:10.1016/j.iheduc.2013.10.001
Lai, K. W. (2011). Digital technology and the culture of teaching and learning in higher
with technology is key to America’s future. Web Study, Inc. (White Paper).
Lear, E., Linda, L., & Prentice, S. (2016). Developing academic literacy through self-
Lee, H. W., Kim, K. Y., & Grabowski, B. L. (2010). Improving self-regulation, learning
010-9153-6
Lee, J. (2012). Patterns of interaction and participation in a large online course: Strategies
Loh, C., Wong, D. H., Quazi, A., & Kingshott, R. P. (2016). Re-examining students’
Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for
approach: The role of the instructor. Internet & Higher Education, 24, 26–34.
doi:10.1016/j.iheduc.2014.09.005
doi:10.1108/ET-07-2015-0058
Milman, N. B., Posey, L., Pintz, C., Wright, K., & Zhou, P. (2015). Online master’s
doi:10.1207/S15389286AJDE1602_4
Northcote, M., Gosselin, K. P., Reynaud, D., Kilgour, P., & Anderson, M. (2015).
Navigating learning journeys of online teachers: Threshold concepts and self-
http://www.iier.org.au/iier.html
Ouimet, J. A., & Smallwood, B. (2005). CLASSE – The class level survey of student
http://www.wiley.com.ezp.waldenulibrary.org/
doi:10.5944/openpraxis.10.1.623
from the virtual world of second life. Computers in Human Behavior, 35, 157–
170. doi:10.1016/j.chb.2014.02.048
Pellas, N., & Kazanidis, I. (2015). On the value of second life for students’ engagement
in blended and online courses: A comparative study from the higher education in
doi:10.1007/s10639-013-9294-4
Pera, A. (2013). The relationship between faculty practices and student engagement and
proquest-com.ezp.waldenulibrary.org/publication/136104
Phillips, N. (2005). Forced learning theory. Training, 42(6), 46. Retrieved from
http://www.trainingmag.com
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and
doi:10.1177/0013164493053003024
doi:10.1016/j.ijme.2015.11.003
Reinhart, J., & Schneider, P. (2001). Student satisfaction, self-efficacy, and the
Robinson, C. C., & Hullinger, H. (2008). New benchmarks in higher education: Student
engagement in online learning [Electronic version]. Journal of Education for
Roblyer, M. D., & Wiencke, W. R. (2004). Exploring the interaction equation: Validating
https://pdfs.semanticscholar.org/5fa8/bc54928a65ee6d7e69f4a1a40b6973f8a2ac.p
df
Rotar, O. (2017). Rethinking the support system for adult students in online learning
ProQuest Dissertations & Theses Full Text database. (UMI No. 3714866)
Sattari, A., Abdekhoda, M., & Gavgani, V. Z. (2017). Determinant factors affecting the
doi:10.3991/ijet.v12.i10.7258
doi:10.1006/ceps.1994.1033
Schreiber, B., & Yu, D. (2016). Exploring student engagement practices at a South
doi:10.20853/30-5-593
Seo, K. K., & Engelhard, C. (2014). Using the constructivist tridimensional design model
for online continuing education for health care clinical faculty. American Journal
Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and
doi:10.1080/00461520.2014.1002924
Journal for the Scholarship of Teaching & Learning, 9(1), 1–18. Retrieved from
http://www.georgiasouthern.edu/ijsotl/index.htm
Subotzky, G., & Prinsloo, P. (2011). Turning the tide: A socio-critical model and
doi:10.1080/01587919.2011.584846
Sun, A., & Chen, X. (2016). Online education and its effective practice: A research
doi:10.28945/3502
Tabak, F., & Nguyen, N. T. (2013). Technology acceptance and performance in online
Tadesse, T., Asmamaw, A., Mariam, S. H., & Mack, D. (2018). Proposing and testing
http://www.thesportjournal.org/
Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S.
M., & Liu, X. (2006). Teaching courses online: A review of the research. Review
Thomas, T. D., Singh, L., & Gaffar, K. (2013). The utility of the UTAUT model in
http://ijedict.dec.uwi.edu//index.php
http://www.itdl.org/
Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User acceptance of
doi:10.24059/olj.v21i2.881
Wang, C., Shannon, D. M., & Ross, M. E. (2013). Students’ characteristics, self-
doi:10.1080/01587919.2013.835779
Wang, Y. D. (2014). Applying constructivist instructional strategies to e-learning: a case
Webber, K. L., Krylow, R. B., & Qin, Z. (2013). Does involvement really matter?
Wojciechowski, A., & Palmer, L. B. (2005). Individual student characteristics: Can any
Xu, D., & Jaggars, S. S. (2011). The effectiveness of distance education across Virginia’s
from https://journals.sagepub.com/home/epa
Yuan, J., & Kim, C. (2014). Guidelines for facilitating the development of learning
Yukselturk, E., & Yildirim, Z. (2008). Investigation of interaction, online support, course
structure and flexibility as the contributing factors to students’ satisfaction in an
Zhang, S., Shi, R., Yun, L., Li, X., Wang, Y., He, H., & Miao, D. (2015). Self-regulation
doi:10.1016/0361-476x(86)90027-5
0663.81.3.329
284–29. doi:10.1037/0022-0663.80.3.284
Appendix A: Draft Student Questionnaire
programs and is divided into two parts. Part I pertains to demographic information
Name of Program:________________________________________________________
student satisfaction, and perceptions of student success. Please read each statement and
rate your experience using Strongly Agree (5), Agree (4), Neither Agree nor Disagree
(3), Disagree (2), Strongly Disagree (1). There are no right or wrong answers.
Student Engagement – Rate the extent to which the following statements apply to you. I
…
1. Apply critical thinking skills to the course activities
2. Integrate my own views with those of others when learning the course material
3. Prepare study notes to understand the course material
4. Apply my learning of the course material to real-life situations
5. Interact with my instructors at least once a week about the course material
6. Discuss academic performance and other matters related to the achievement of
academic goals with my instructors
7. Obtain meaningful feedback on assignments from instructors
8. Understand difficult concepts and content better after interacting with instructors
9. Collaborate with my peers in a one-to-one or group relationship
10. Interact with peers on mastering the course material at least once a week
11. Respect peer differences
12. Value peer differences
13. Use the online learning space to participate in the course activities
Self-Regulation Practices – Rate the extent to which the following statements apply to
you. I …
1. Give myself enough time to review the course material
2. Develop plans to achieve my learning goals
3. Implement plans to achieve my learning goals
4. Complete course activities assigned by the given deadline
5. Check the online learning space for course material updates at least twice weekly
6. Initiate communication with my instructors
7. Use more than one way to communicate with my instructors
8. Develop a plan to assist peers in understanding the course material
9. Implement a plan to assist peers in understanding the course material
10. Monitor interactions with peers about the course material
11. Reflect on interactions with peers about the course material
12. Take the initiative to respond to contributions by my peers in the online learning
space
13. Use the online course activities to guide my own learning of the course material
Student Satisfaction – Rate the extent to which you are satisfied with the following
statements. I am satisfied with the …
1. Quality of my learning experiences
2. Alignment of course activities to my expectations of the course
3. Interactions with instructors
4. Interactions with peers
5. Orientation program provided for online learning
Perceptions of Student Success – Rate the extent to which you agree with the following
statements. Academic success in an online course is influenced by
1. Obtaining better grades
2. Engaging in course activities
3. Participating in programs that assist in improving my understanding of the course
material
4. Self-directed learning
5. Interacting with instructors
6. Interacting with peers
7. Feeling of a sense of belonging to the online learning community
8. Meeting of course expectations
9. Being motivated intellectually
10. Feeling of a personal sense of accomplishment
11. Relevancy of course goals to professional goals
12. Relevancy of course goals to personal goals
13. Being satisfied with the delivery of the course content
14. Being satisfied with the support given to achieving academic goals.
Appendix B: Draft Faculty Questionnaire
online programs and is divided into two parts. Part I pertains to demographic
Name of Program:______________________________________________________
student satisfaction, and perceptions of student success. Please read each statement and
rate your experience with students using Strongly Agree (5), Agree (4), Neither Agree
nor Disagree (3), Disagree (2), Strongly Disagree (1). There are no right or wrong
answers.
Student Engagement – Rate the extent to which the following statements apply to your
students. Students …
1. Apply critical thinking skills to the course activities
2. Integrate their own views with those of others when learning the course material
3. Indicate that they prepare study notes to understand the course material
4. Apply their learning of the course material to real-life situations
5. Interact with me as instructor at least once a week about the course material
6. Discuss academic performance and other matters related to the achievement of
academic goals with me as instructor
7. Indicate that they obtain meaningful feedback on assignments from me as
instructor
8. Understand difficult concepts and content better after interacting with me as
instructor
9. Collaborate with their peers in a one-to-one or group relationship
10. Interact with their peers on mastering the course material at least once a week
11. Respect peer differences
12. Value peer differences
13. Utilize the online learning space to participate in the course activities
Self-Regulation Practices – Rate the extent to which the following statements apply to
your students. Students …
1. Allow enough time to review the course material
2. Develop plans to achieve their learning goals
3. Implement plans to achieve their learning goals
4. Complete course activities assigned by the given deadline
5. Check the online learning space for course material updates at least twice weekly
6. Initiate communication with me as instructor
7. Use more than one way to communicate with me as instructor
8. Indicate that they develop a plan to assist their peers in understanding the course
material
9. Indicate that they implement a plan to assist their peers in understanding the
course material
10. Monitor interactions with their peers about the course material
11. Reflect on interactions with their peers about the course material
12. Take the initiative to respond to contributions made by their peers in the online
learning space
13. Use the online course activities to guide their own learning of the course material
Student Satisfaction – Rate the extent to which your students are satisfied with the
following statements. Students report/indicate that they are satisfied with the …
1. Quality of learning experiences
2. Alignment of course activities to their expectations of the course
3. Interactions with instructors
4. Interactions with peers
5. Orientation program provided for online learning
Perceptions of Student Success – Rate the extent to which you agree with the following
statements about your students. Students’ academic success is influenced by
1. Obtaining better grades
2. Engaging in course activities
3. Participating in programs that assist in improving their understanding of the
course material
4. Self-directed learning
5. Interacting with instructors
6. Interacting with peers
7. Feeling of a sense of belonging to the online learning community
8. Meeting of course expectations
9. Being motivated intellectually
10. Feeling of a personal sense of accomplishment
11. Relevancy of course goals to professional goals
12. Relevancy of course goals to personal goals
13. Being satisfied with the delivery of the course content
14. Being satisfied with the support given to achieving academic goals
Appendix C: Content Review Invitation
Dear Colleague:
If you agree to conduct this evaluation, you will be sent the Content Expert Review
document with the question items and asked to comment on the comprehension and
relevance of each item and provide suggestions for improvement (if necessary). You will
also be asked to comment on any of the sections that are inadequately represented for the
intended purpose of the overall questionnaire. To perform the role of Content Expert, you
should have been teaching online courses/programmes at the higher education level for at
least five years where you would have been exposed to students’ levels of engagement,
satisfaction, and success, and students’ abilities to self-regulate (self-direct) their
learning.
Please indicate your agreement to act as Content Expert of the survey instrument for this
research project by replying to this email with the words “I agree to perform the role of
Content Expert for this questionnaire”. Kindly note that your participation is voluntary
and you may discontinue your involvement in the study at any time. If you have any
questions about the research project, you may contact me at
[email protected] or at (868) 298-7509.
Should you be in agreement, I look forward to receiving your evaluation within one week
of sending you the Content Expert Review document.
Yours sincerely,
Marcia Commissiong
Appendix D: Letter for Approval to Conduct Research
Dear XXX,
It was a pleasure speaking with you briefly this morning. This email is to (1) request
initial permission to conduct research at your institution, and (2) if you are in agreement,
to obtain information about your institution’s research approval process.
I am requesting initial approval to conduct the research study at YYY Campus. This
project will require support from your institution in the form of displaying/distributing
the study invitation and survey instrument online on my behalf. Since the survey
instrument is new, I will field test the instrument at another institution in order to
determine its validity and reliability prior to conducting the actual study at YYY Campus.
If you are generally in agreement with the study being conducted at your institution, I
will send an official request for your formal approval. In the interim, I am required to
start the application process for the IRB approval from Walden University. For the
Walden University IRB application, I am required to gather information about the IRB or
research approval process of your institution where I will conduct my study. The
information required will be the answers to the following questions (Yes/No):
• Does your institution have its own IRB (or other formal research approval
system)?
• Does your institution’s IRB process indicate that the Walden University IRB
should serve as the “IRB of Record” for my project?
• Does your institution’s IRB indicate that collecting data from your students and
faculty is exempt from your institution’s IRB review process?
• Does your institution’s IRB process indicate that it wishes to serve as the “IRB of
Record” for my project?
If your institution requires that I apply for approval from your institution’s IRB, that
is, your institution will serve as the “IRB of Record”, please respond to the following
with Yes/No:
• Does your institution’s IRB wish to conduct its IRB review before the Walden
IRB approval of my project?
• Does your institution’s IRB wish for the Walden IRB approval of my project to
occur first?
I will need to provide Walden University’s IRB with supporting documentation of your
institution’s position as it relates to the six questions above. This documentation can be in
the form of an email, memo, or copy of your university’s policy. Your response by email
to the questions above will be sufficient.
I thank you for your kind attention and look forward to a favorable response at your
earliest convenience.
Sincerely,
Marcia Commissiong
PhD student at Walden University and Principal Researcher
Dear Ms Commissiong
My apologies for the delay. As discussed, I am supportive of this research. Copied on this
email is AAA who is Chair of our Ethics Committee. I am directing your email to her for
follow up directly with you. Please make contact with her.
I am hopeful that the process will be quick and smooth to enable your research.
Kind regards
XXX
Re: Research Ethics Approval for Conduct of Research Proposal- “Student Engagement,
Self-Regulation Practices, Student Satisfaction and Student Success in Online Learning
Environments”
Thank you again for submitting the above-named proposal for review by CCC Research
Ethics Committee and for addressing comments sent to you on March 8, 2019.
We are satisfied that you have addressed all matters raised. We therefore convey approval
for you to proceed with the conduct of your study as detailed in the documents submitted
to the CCC Research Ethics Committee on April 7, 2019.
Dear Ms Commissiong
Further to your previous correspondence and our telephone conversation before I went on
my break, the CCC campus will be happy to assist in the distribution of this survey
questionnaire to our online degree students. I will advise of the date on which it is sent
out in due course.
Kind regards,
Appendix E: Preliminary Factor Analysis Tables
Table E1
Table E2
Table E4
Table E6
Table E8
Table E10
Table E11
Table E13
Table E15