


Journal of Software Engineering and Applications, 2015, 8, 557-574
Published Online October 2015 in SciRes. http://www.scirp.org/journal/jsea
http://dx.doi.org/10.4236/jsea.2015.810053

The Acceptance and Use of Computer Based Assessment in Higher Education
Mahmoud Maqableh, Ra’ed Moh’d Taisir Masa’deh, Ashraf Bany Mohammed
Management Information Systems, Faculty of Business, The University of Jordan, Amman, Jordan
Email: [email protected]

Received 5 April 2015; accepted 10 September 2015; published 26 October 2015

Copyright © 2015 by authors and Scientific Research Publishing Inc.


This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract
Computer Based Assessment (CBA) has become a very popular method for evaluating students' performance at the university level. This research aims to examine the constructs that affect students' intention to use CBA. The proposed model is based on previous technology acceptance models such as the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT). The proposed CBA model comprises nine variables: Goal Expectancy, Social Influence, Facilitating Conditions, Computer Self Efficacy, Content, Perceived Usefulness, Perceived Ease of Use, Perceived Playfulness, and Behavioral Intention. Data were collected using a survey questionnaire from 546 participants who had used the computer based exam system at the University of Jordan. Results indicate that Perceived Playfulness has a direct effect on behavioral intention to use CBA, while Perceived Ease of Use, Perceived Usefulness, Computer Self Efficacy, Social Influence, Facilitating Conditions, Content and Goal Expectancy have only indirect effects. The study concludes that a system is more likely to be used by students if it is playful, and a CBA is more likely to be playful when it is easy to use and useful. Finally, the studied acceptance model for computer based assessment explains only approximately 10% of the variance in behavioral intention to use CBA.

Keywords
Computer Based Assessment, Social Influence, Facilitating Condition, Computer Self Efficacy,
Behavioral Intention to Use CBA

1. Introduction
Student assessment is an essential element of any learning model. Instructors evaluate students and learning outcomes in order to direct and motivate them based on their achievement [1] [2]. There are two main types of student assessment: summative and formative. Summative assessment aims to sum up teaching and learning, whereas formative assessment aims to provide feedback on the progress of students and instructors [3]. Moreover, there are two main types of assessment systems: Paper Based Systems (PBS) and Computer Based Systems (CBS). PBS is gradually being displaced from learning practice by the continuous dissemination of Information and Communications Technology (ICT) [2], and at the same time CBS is replacing PBS owing to the popularity of ICT. Students prefer CBS to PBS because they believe it is more exciting, interactive, secure, precise, smooth and credible [4].
Communications and computer technologies have developed very quickly and are now widespread and used for many purposes [5] [6]. Information and Communications Technology (ICT) is used intensively in higher education in several areas, such as student evaluation and electronic learning [7] [8]. Computer Based Assessment (CBA) systems are implemented using ICT tools and applications [4]. CBA is considered an important tool for evaluating students at a specific point in time and for helping learners identify the gap between the required standard and their actual level [7]. Currently, CBA is being adopted by many institutions, replacing traditional paper-and-pen assessment of students [9], and secondary and higher education institutions evaluate students' performance and achievement using CBA systems intensively. CBA has several competitive advantages, such as security, cost, and accuracy; moreover, it reduces the effort and time required for exam generation, scheduling, marking, and the recording and analysis of results [2] [10]. CBA systems are offered by several international vendors from all over the world and have been implemented to support various technologies, educational environments, and cultures.
CBA has become a main part of electronic learning and assessment systems in higher education institutions. It is therefore essential to investigate the factors that affect students' attitude toward using CBA in order to implement CBA systems successfully. This research aims to examine the factors that influence students' attitude toward using a CBA system in Jordan. Recent studies have shown that Perceived Usefulness, Perceived Ease of Use, Perceived Playfulness, and Perceived Importance each play a significant role in Behavioral Intention to use CBA [2] [4] [7] [11]-[15].
The paper is organized as follows. Section 2 presents a review of the theoretical background of CBA. Section 3 discusses the hypotheses development. Section 4 explains the research methodology in detail. Section 5 presents the research results. Section 6 discusses the results based on the proposed model. Finally, conclusions are drawn in Section 7.

2. Theoretical Background
Computer based assessment and the factors that influence students' behavioral intention have been studied extensively in the literature. Many researchers have focused on the effect of influencing factors such as Perceived Usefulness, Perceived Ease of Use, and Perceived Playfulness [4] [7] [10] [14] [16]-[32]. Thelwall (2000) surveyed the reasons for using computer assessment, focusing on randomly generated open-access tests [16]. Students were allowed to practice in their own free time before taking the same test under real conditions. The research concluded that random-based tests have major advantages over fixed ones, and it demonstrated the flexibility of CBA as a learning tool.
In 2002, Jantz et al. measured and examined the effectiveness of Interactive Multimedia (IMM) using a quasi-experimental pretest/post-test design [17]. Results showed significant increases in knowledge, attitude, and total scores between pre- and post-tests for the intervention participants, whose gains were greater than those of the control group. This study supports the use of IMM in nutrition education and is considered a basis for the continued development of computer-based assessments. Mayer (2002) studied computer-based assessment of problem solving by referring to Bloom's taxonomy for learning, teaching and assessing [33]. The study examined the cognitive consequences of participating in an after-school computer club. He demonstrated the possibility of producing computer-based assessments of problem-solving transfer in different ways, such as assessment of computer literacy (near transfer) and assessment of problem-solving strategies for new games (far transfer). The study highlighted the usefulness of the taxonomy in creating assessments that cover the range of problem-solving transfer when the goal is to measure problem-solving transfer.
Later on, a Web-based Educational Assessment System (WEAS) based on Bloom's theory was introduced and tested on science courses [18]. The system facilitates Human-Computer Interaction (HCI) techniques between students and teachers. Gikandi et al. reviewed 18 key empirical studies on online assessment in higher education published between 2004 and 2011 [34]. The review focused on the application of formative assessment within blended and online contexts. The main finding extracted from the literature was that effective online formative assessment enhances learner engagement and provides a rich and valuable learning experience.
Terzis and Economides (2011) built a model to investigate students' intention to use Computer Based Assessment, called the Computer Based Assessment Acceptance Model (CBAAM) [4]. The model was built upon previous acceptance models: the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT), with two additional variables (Content and Goal Expectancy) added to the existing measurement variables. A survey questionnaire was administered to a sample of 173 participants enrolled in an introductory informatics course. Findings showed that Perceived Ease of Use and Perceived Playfulness directly affected behavioral intention to use CBA, while the other variables had indirect effects. Terzis et al. (2011) extended CBAAM by considering gender in the measurements [35]. The results showed that both genders are motivated to use a CBA when it is playful and has clear content relevant to the course.
Alquraan (2012) investigated the learning assessment methods used in higher education. A sample of 736 undergraduate students from four well-known universities in Jordan took part in the investigation [21]. The results showed that the most commonly used assessment method is the paper-pencil test; some scientific and medical colleges used other assessments but still relied on paper-pencil tests. The study suggests adopting modern assessment tools and methods to reduce traditionalism in higher education assessment. Another research group conducted a study on undergraduate chemistry students at the University of Ilorin, Nigeria [22]. A sample of 48 chemistry students was evaluated using a Computer Based Test (CBT), and a questionnaire was administered for the investigation. Findings showed that 95.8% of the students were satisfied with using CBT, while 75% complained about computer-related anxiety and about 29.2% did not fully accept the testing mode. The analysis also showed clear satisfaction with the immediate scoring, speed and transparency in marking.
In 2012, Terzis et al. conducted a study to identify how personality affects technology acceptance, combining CBAAM with the Big Five Inventory (BFI) in order to analyze the effect of the five personality factors on CBA acceptance [14]. A survey questionnaire with BFI questions was administered to 117 participants. Results indicated a negative effect of Neuroticism on Perceived Usefulness and Goal Expectancy. In addition, Social Influence and Perceived Ease of Use were determined by Agreeableness, and Perceived Importance was explained by Extraversion and Openness.
A dynamic CBA system for a fluid mechanics course was developed, and assessment data were collected before and after applying the system [36]. Performance improvements were measured by the proportion of correctly answered questions in the Fundamentals of Engineering (FE) Exam relative to the national average. Results showed that, for the same sample, students rose from below the national level (mean 94%, standard deviation 6%) to above it (mean 100%, standard deviation 2%). The improvement in fluid mechanics was much greater than in other subjects, and student performance exceeded that of the top-tier programs in the USA. The system produced a notable improvement in student achievement and also reduced instructor time. The authors suggested refining the pre- and post-tests to relate them to metacognitive learning. The study showed the advantages of applying a CBA system and introduced a new measure of problem-solving skills, the FE exam.
Another study compared traditional assessment and learning with educational software [37]. The study was conducted at a state primary school in north Cyprus. Two groups were tested: the first consisted of 26 students taught using traditional lecture-based instruction, and the second consisted of 29 students taught using educational software called Frizbi Mathematics 4. Achievement scores were recorded three times: at the start of the study, after the intervention, and after 4 months. ANOVA analyses comparing the results across different variables gave evidence that Frizbi Mathematics 4, computer-based educational software that includes self-automated assessments, is an effective tool for both assessment and learning. Terzis et al. (2013) investigated continuance acceptance in the CBA context by examining users' expectations before and after interaction with the system [15]. The results confirmed Ease of Use and Playfulness as the direct determinants of continued CBA use; all other indirect determinants of CBA were also confirmed and discussed in detail.
Quellmalz (2014) contributed a section to an education encyclopedia chapter on assessment under the next generation of science standards, where science phenomena require more flexible, dynamic and complex representations [24]. Furthermore, students need a way to check the effectiveness of the HCI. The migration of CBA from computers to other mobile devices could provide effective tools for collecting evidence of learning, and modern technology will enhance both assessment of learning and assessment for learning.

3. Hypotheses Development
3.1. CBAAM Model
Based on previous technology acceptance models such as TAM, TPB and UTAUT, a new model called the Computer Based Assessment Acceptance Model (CBAAM) was proposed [4]. The model used multiple constructs from the existing models but added two new variables: Content and Goal Expectancy. Figure 1 demonstrates the research's conceptual framework and the hypothesized relationships between the adopted constructs.
This model combines the following hypotheses to study the acceptance of a CBA:
H1: Perceived Playfulness will have a positive effect on the Behavioral Intention to use CBA.
H2: Perceived Usefulness will have a positive effect on the Behavioral Intention to use CBA.
H3: Perceived Usefulness will have a positive effect on Perceived Playfulness.
H4: Perceived Ease of Use will have a positive effect on the Behavioral Intention to use CBA.
H5: Perceived Ease of Use will have a positive effect on Perceived Usefulness.
H6: Perceived Ease of Use will have a positive effect on Perceived Playfulness.
H7: Computer Self Efficacy will have a positive effect on Perceived Ease of Use.
H8: Social Influence will have a positive effect on Perceived Usefulness.
H9: Facilitating Conditions will have a positive effect on Perceived Ease of Use.
H10: Goal Expectancy will have a positive effect on Perceived Usefulness.
H11: Goal Expectancy will have a positive effect on Perceived Playfulness.
H12: Content will have a positive effect on Perceived Usefulness.
H13: Content will have a positive effect on Perceived Playfulness.
H14: Content will have a positive effect on Goal Expectancy.
H15: Content will have a positive effect on the Behavioral Intention to Use CBA.
The following sections describe the research model constructs.

3.1.1. Perceived Playfulness


Moon and Kim (2001) extended TAM by adding the construct Perceived Playfulness [38]. This construct is de-
fined by three dimensions:

[Figure 1. Research model: Content, Goal Expectancy, Social Influence, Facilitating Conditions, Computer Self Efficacy, Perceived Usefulness, Perceived Ease of Use and Perceived Playfulness connected to Behavioral Intention to Use through the hypothesized paths H1-H15.]

• Concentration: Determines whether the user is concentrating on the activity.
• Curiosity: Determines whether the system arouses the user's cognitive curiosity [39].
• Enjoyment: Determines whether the user enjoys the interaction with the system.
Although these three dimensions are interdependent and linked, none of them alone reflects the user's total interaction with the system. A successful CBA implementation is able to hold users' concentration, curiosity and enjoyment. Therefore, CBAAM assumes that Behavioral Intention is positively affected by Perceived Playfulness, as in the following hypothesis:
H1: Perceived Playfulness will have a positive effect on the Behavioral Intention.

3.1.2. Perceived Usefulness


As mentioned before, Perceived Usefulness measures the extent to which a person believes that his/her job performance will improve when using a particular computer system. Researchers have provided much evidence of the effect of Perceived Usefulness on users' Behavioral Intention to use a learning system [40]-[42]. CBAAM also assumes that a learner's concentration, curiosity and enjoyment will increase as a result of using a useful system, which leads to the following hypotheses:
H2: Perceived Usefulness will have a positive effect on the Behavioral Intention to use CBA.
H3: Perceived Usefulness will have a positive effect on Perceived Playfulness.

3.1.3. Perceived Ease of Use


As also discussed above, Perceived Ease of Use measures a person's belief that using a computer system is free of effort. Previous research showed that Perceived Ease of Use has a direct effect on Perceived Usefulness and Behavioral Intention [12] [43]. CBAAM assumes that Perceived Ease of Use positively influences Perceived Playfulness, because a system that can be used without much effort lets users interact smoothly and without disturbance. Given these effects of Perceived Ease of Use, the following hypotheses were made:
H4: Perceived Ease of Use will have a positive effect on the Behavioral Intention to use CBA.
H5: Perceived Ease of Use will have a positive effect on Perceived Usefulness.
H6: Perceived Ease of Use will have a positive effect on Perceived Playfulness.

3.1.4. Computer Self Efficacy


Research results show a link between Computer Self Efficacy (CSE) and Perceived Ease of Use [12] [44] [45]. Therefore, CSE has an impact on Perceived Ease of Use and, through it, an indirect impact on Behavioral Intention. The following hypothesis was made:
H7: Computer Self Efficacy will have a positive effect on Perceived Ease of Use.

3.1.5. Social Influence


Social Influence can be defined as the effect of other people's opinions, in particular the influence of superiors and peers. Three elements define Social Influence: Subjective Norm (SN), Image and Voluntariness [46]. To measure Social Influence, previous models used the constructs Social Factors (MPCU), Image (IDT), and Subjective Norm (TRA, TPB, C-TAM-TPB, and TAM2) [47]. According to TAM2, Subjective Norm and Image influence how useful users perceive a system to be, while Subjective Norm has no impact on Behavioral Intention when users use a system voluntarily. UTAUT considers Social Influence one of the four core constructs that have a direct effect on Behavioral Intention.
In CBAAM, Social Influence was assumed to have a direct impact on Perceived Usefulness. This was based on the observation that students usually feel insecure using a CBA and are affected by the opinions of their friends, colleagues and seniors, and that students discuss Perceived Usefulness and its added value as the main topic regarding a CBA. Since the CBA in CBAAM is voluntary and, as proposed by TAM2, Social Influence then has no impact on Behavioral Intention, that effect was not studied in CBAAM. The only hypothesis regarding Social Influence is:
H8: Social Influence will have a positive effect on Perceived Usefulness.

3.1.6. Facilitating Conditions


Facilitating Conditions (FC) are defined as the set of factors that affect a person's belief in his/her ability to perform a procedure. FC has many aspects; one is technical support such as helpdesks or online support services [4], and others are resources such as time and money [48].
In CBAAM, FC was defined as the support provided during a CBA. If users face difficulties while using a CBA, support must be given to help them overcome these difficulties; when the CBA is used in a university, this includes having an expert available to answer students' questions and queries. For these reasons, the following hypothesis was made:
H9: Facilitating Conditions will have a positive effect on Perceived Ease of Use.

3.1.7. Goal Expectancy


In distance learning, the need for self-direction and goal orientation has been highlighted by many studies [49] [50]. Self-management of learning was proposed by [49] as the degree to which a person feels able to engage in autonomous learning and is self-disciplined. In terms of technology acceptance, learning goal orientation was proposed by [50] as a construct that affects learning acceptance. Also, Personal Outcome Expectations was introduced by [51] as an antecedent of intention to use. This was based on the work of [52], which proposed that a person's motivation to perform an act increases with increased outcome expectancy. Finally, [53] reinforced this theory by showing that a person's actions are strongly influenced by his/her expectations regarding the consequences of those actions.
In CBAAM, a new construct called Goal Expectancy (GE) was introduced, motivated by the aforementioned studies. This construct captures a person's belief that he/she is well prepared to use a CBA. GE has two dimensions based on the two types of assessment (summative and formative). In summative assessment (the type examined in their study), the first dimension measures a student's satisfaction with his/her preparation: students have to study and prepare themselves in order to be able to answer the questions in the assessment. The second dimension measures the student's desired success level: before the assessment, each student predicts his/her performance based on his/her preparation and sets a percentage of correct answers as a goal that would represent satisfying performance.
It is assumed that GE strongly influences Perceived Usefulness; however, this influence depends on the type of assessment. In summative assessment, GE has an impact on usefulness because well-prepared students can understand the questions and answer them. This does not hold for formative assessment, where the added value comes from the feedback the CBA provides to help students understand their learning material; there, GE has a negative impact on Perceived Usefulness, as students use the CBA to learn more than to test their knowledge.
Moreover, the model assumes that Perceived Playfulness is positively impacted by GE. In order to meet their expectations of good performance, students will concentrate more on the CBA, and well-prepared students will be able to answer the questions correctly and will enjoy the interaction with the system more. The following hypotheses are assumed:
H10: Goal Expectancy will have a positive effect on Perceived Usefulness.
H11: Goal Expectancy will have a positive effect on Perceived Playfulness.

3.1.8. Content
The last construct in this model is Content. Ong et al. (2004) introduced content as an important construct in learners' satisfaction [54]. This construct examines whether the content is up to date, sufficient, useful and satisfies users' needs. In CBAAM, two dimensions of content are studied: the course content and the questions' content. Course content is believed to strongly affect the perceived usefulness and playfulness of the CBA system, since it determines whether the course is useful or not, interesting or not, and difficult or not. The questions' content is examined to determine whether the questions are clear, easy to understand and related to the content of the course.
These dimensions of content are proposed only in this model; previous models examined content for different purposes. The model therefore assumes that Content affects Perceived Usefulness, Perceived Playfulness, Goal Expectancy and Behavioral Intention, as in the following hypotheses:
H12: Content will have a positive effect on Perceived Usefulness.
H13: Content will have a positive effect on Perceived Playfulness.
H14: Content will have a positive effect on Goal Expectancy.
H15: Content will have a positive effect on the Behavioral Intention to Use CBA.

Weinerth et al. (2014) examined usability in the application of CBA [55]. They discussed the impact of usability on CBA, noting that research on this issue is insufficient; their review shows that studies on the interaction between usability and test-use training are currently few, if not neglected altogether.

4. Research Methodology
The study involved 546 students, of whom 340 were females (62.3%) and 206 were males (37.7%). Most of the students were between 17 and 23 years old. The students took a CBA exam consisting of 45 multiple-choice questions, each with four possible answers. The questions displayed to students were randomly generated, and the assessment duration was 45 minutes, after which every student answered a survey of 34 questions.

Table 1. Constructs and measurement items.

Construct Measurement items

PU1: Using the Computer Based Assessment (CBA) will improve my work.
Perceived usefulness
PU2: Using the Computer Based Assessment (CBA) will enhance my effectiveness.
(PU)
PU3: Using the Computer Based Assessment (CBA) will increase my productivity.

PE1: My interaction with the system is clear and understandable.


Perceived ease of use
PE2: It is easy for me to become skillful at using the system.
(PE)
PE3: I find the system easy to use.

CS1: I could complete a job or task using the computer.


CS2: I could complete a job or task using the computer if someone showed me how to do it first.
Computer self efficacy
(CS) CS3: I can navigate easily through the Web to find any information I need.
CS4: I was fully able to use the computer and Internet before I began using the Computer Based
Assessment (CBA).

SI1: People who influence my behavior think that I should use CBA.
Social influence SI2: People who are important to me think that I should use CBA.
(SI) SI3: The seniors in my university have been helpful in the use of CBA.
SI4: In general, my university has supported the use of CBA.

Facilitating conditions FC1: When I need help to use the CBA, someone is there to help me.
(FC) FC2: When I need help to learn to use the CBA, system’s help support is there to teach me.

CT1: CBA’s questions were clear and understandable.


Content CT2: CBA’s questions were easy to answer.
(CT) CT3: CBA’s questions were relative with the course’s syllabus.
CT4: CBA’s questions were useful for my course.

GY1: Courses’ preparation was sufficient for the CBA.


Goal expectancy
GY2: My personal preparation for the CBA.
(GY)
GY3: My performance expectations for the CBA.

PP1: Using CBA keeps me happy for my task.


Perceived playfulness PP2: Using CBA gives me enjoyment for my learning.
(PP) PP3: Using CBA, my curiosity stimulates.
PP4: Using CBA will lead to my exploration.

BI1: I intend to use CBA in the future.


Behavioral intention to use
BI2: I predict I would use CBA in the future.
(BI)
BI3: I plan to use CBA in the future.


The current research uses a Structural Equation Modeling (SEM) approach based on AMOS 20.0 to study the
causal relationships and to test the hypotheses between the observed and latent constructs in the proposed re-
search model. SEM can be divided into two sub-models: a measurement model and a structural model. While
the measurement model defines relationships between the observed and unobserved variables, the structural
model identifies relationships among the unobserved/latent variables by specifying which latent variables di-
rectly or indirectly influence changes in other latent variables in the model [56] [57]. Furthermore, the structural equation modeling process consists of two components: validating the measurement model and fitting the structural model. The former is accomplished through confirmatory factor analysis, the latter by path analysis with latent variables [58]. This two-step approach ensures that only the constructs retained from the survey with good measurement properties (validity and reliability) are used in the structural model [57].
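To make the two-step procedure concrete, the sketch below shows how such a model could be specified and fitted with the open-source semopy package in Python. This is an illustrative re-implementation, not the published analysis (which used AMOS 20.0); the abbreviated model description, file name and column names are assumptions.

```python
# Minimal sketch of the two-step SEM procedure (assumes semopy is installed).
# Indicator names mirror Table 1; "cba_survey.csv" is a hypothetical file.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
# Step 1, measurement model (CFA): latent =~ indicators (abbreviated)
PU =~ PU1 + PU2 + PU3
PE =~ PE1 + PE2 + PE3
PP =~ PP1 + PP2 + PP3
BI =~ BI1 + BI2 + BI3
# Step 2, structural model: a subset of the hypothesized paths (H1-H6)
PU ~ PE
PP ~ PU + PE
BI ~ PP + PU + PE
"""

data = pd.read_csv("cba_survey.csv")
model = Model(MODEL_DESC)
model.fit(data)

print(model.inspect())       # factor loadings and path estimates with p-values
print(calc_stats(model).T)   # fit indices, e.g. chi-square/df, CFI, TLI, RMSEA
```

In practice the measurement part is validated first, dropping weak indicators (as done below with SI4, GY1 and PP4), before the structural regressions are interpreted.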
The basis for data collection and analysis is a field study in which respondents answered all items on five-point Likert scales ranging from 1 (strongly disagree) to 5 (strongly agree). Furthermore, the elements used to measure each of the constructs were primarily obtained from prior research; they provided a valuable source for data gathering and measurement, as their reliability and validity had been verified through previous research and peer review. The Behavioral Intention (BI) to Use CBA model constructs and their corresponding items (i.e. Perceived Usefulness (PU), Perceived Ease of Use (PE), Computer Self Efficacy (CS), Social Influence (SI), Facilitating Conditions (FC), Content (CT), Goal Expectancy (GY), and Perceived Playfulness (PP)) were adapted from [4]. Table 1 shows the measured constructs and the items measuring each construct.

Sample and Procedure


Empirical data for this study were collected through a paper-based survey in Jordan. Specifically, a survey questionnaire was used to gather data for hypothesis testing from students at the University of Jordan. Before implementing the survey, the instrument was reviewed by three lecturers specialized in the Management Information Systems (MIS) discipline in order to identify problems with wording, content, and question ambiguity. After some changes were made based on their suggestions, the modified questionnaire was piloted on ten students studying at the university. Based on the feedback of this pilot study, minor edits were introduced to the survey questions, and the questionnaires were distributed to the participants. As per ethics policies, all potential participants were briefed about the nature of the work and were requested to provide explicit approval. The population of this study consists of all students who studied the Introduction to Electronic Commerce course as an elective during the first semester of 2013-2014 at the University of Jordan, which counts more than 570 such students according to the university's registration unit. The sample size was determined based on rules of thumb for using SEM within AMOS 20.0 in order to obtain reliable and valid results. Kline (2010) suggested that a sample of 200 or larger is suitable for a complicated path model [59]. Taking into account the complexity of the model (the number of constructs and variables within it), and after eliminating incomplete surveys, our sample size (546) meets the recommended guidelines of [59]-[61]. The demographic data of the respondents are reported in Table 2.
As shown in Table 2, the demographic profile of the respondents revealed that the sample consisted mostly of females; most respondents were between 17 and less than 23 years old, in their second and third academic years, and most used different types of IT for more than 3 hours daily.

5. Research Results
5.1. Descriptive Statistics
All 30 items were tested for their means, standard deviations, skewness, and kurtosis. The descriptive statistics presented in Table 3 indicate a positive disposition towards the items. The standard deviation (SD) values ranged from 0.75222 to 1.21275, indicating a narrow spread around the mean. Also, the mean values of all items were greater than the midpoint (2.5) and ranged from 2.8553 (GY1) to 4.4377 (CS3). After careful assessment using skewness and kurtosis, the data were found to be normally distributed, since the values were inside the adequate ranges for normality (i.e. −1.0 to +1.0 for skewness, and less than 10 for kurtosis) [59]. Furthermore, the ordering of the items by their mean values, and their ranks based on three ranges (1 - 2.33 low; 2.34 - 3.67 medium; 3.68 - 5 high), are provided.
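This screening is straightforward to reproduce; a minimal sketch, assuming the item responses sit in a CSV with one column per item (file and column names hypothetical):

```python
# Sketch: per-item descriptives plus the rank and normality screens used above.
import pandas as pd

items = pd.read_csv("cba_survey.csv")  # 1-5 Likert responses, one column per item

desc = pd.DataFrame({
    "mean": items.mean(),
    "sd": items.std(),
    "skewness": items.skew(),       # sample skewness
    "kurtosis": items.kurtosis(),   # excess kurtosis
})

# Rank each item's mean into the three ranges used in the paper.
desc["rank"] = pd.cut(desc["mean"], bins=[1.0, 2.33, 3.67, 5.0],
                      labels=["low", "medium", "high"])

# Flag items outside the stated normality ranges.
desc["normal"] = (desc["skewness"].abs() <= 1.0) & (desc["kurtosis"] < 10)
print(desc.round(3))
```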


Table 2. Demographic data for respondents.

Category Frequency Percentage (%)


Gender
Male 206 37.7
Female 340 62.3
Total 546 100
Age
17 years - less than 20 183 33.5
20 years - less than 23 315 57.7
23 years - less than 26 31 5.7
26 years - less than 30 9 1.6
30 years and above 8 1.5
Total 546 100
Academic level
Year 1 57 10.4
Year 2 219 40.1
Year 3 172 31.5
Year 4 72 13.2
Year 5 26 4.8
Total 546 100
Number of daily hours using different types of information technology
Less than half an hour 14 2.6
Half an hour - 1 hour 95 17.4
1 hour - less than 3 hours 200 36.6
3 hours and above 237 43.4
Total 546 100

Table 3. Mean, standard deviation of scale items.

Construct/items Mean S.D Order Rank Skewness Kurtosis


Perceived usefulness
PU1: 3.6520 1.00081 1 Medium −0.792 0.451
PU2: 3.6227 1.01933 2 Medium −0.747 0.188
PU3: 3.4945 1.04792 3 Medium −0.470 −0.422
Perceived ease of use
PE1: 3.6630 1.16209 3 Medium −0.783 −0.146
PE2: 3.9103 1.02501 2 High −1.020 0.632
PE3: 3.9139 1.04658 1 High −1.061 0.749
Computer self efficacy
CS1: 4.1190 0.85592 4 High −1.217 2.024
CS2: 4.1813 0.83617 3 High −1.334 2.473
CS3: 4.4377 0.75222 1 High −1.669 3.692
CS4: 4.3187 0.80200 2 High −1.302 1.897
Social influence
SI1: 3.4780 1.03004 4 Medium −0.512 −0.276
SI2: 3.5110 1.04261 3 Medium −0.478 −0.362
SI3: 3.6099 1.03153 2 Medium −0.705 0.099
SI4: 3.9469 0.87310 1 High −1.125 1.177
Facilitating conditions
FC1: 3.4123 1.03895 1 Medium −0.454 −0.505
FC2: 3.4121 1.04374 2 Medium −0.480 −0.538


Continued
Content
CT1: 3.4139 1.16718 3 Medium −0.531 −0.545
CT2: 3.0971 1.13320 2 Medium −0.131 −0.738
CT3: 3.3956 1.03032 1 Medium −0.587 −0.115
CT4: 3.4945 1.02490 4 Medium −0.693 0.069

Goal expectancy
GY1: 2.8553 1.21275 3 Medium −0.024 −1.028
GY2: 3.3498 1.08212 1 Medium −0.346 −0.601
GY3: 3.1026 1.15755 2 Medium −0.229 −0.684

Perceived playfulness
PP1: 3.3736 1.16034 4 Medium −0.480 −0.535
PP2: 3.3938 1.14165 3 Medium −0.502 −0.457
PP3: 3.4194 1.12107 2 Medium −0.503 −0.349
PP4: 3.4377 1.10244 1 Medium −0.072 −0.990

Behavioral intention to use


BI1: 3.9414 1.09051 2 High −1.110 0.852
BI2: 4.0513 1.01959 1 High −1.031 0.686
BI3: 3.9249 1.09538 3 High −0.910 0.148

Table 4. Measurement model fit indices.

Model x2 df p x2/df IFI TLI CFI RMSEA


Initial model 970.242 369 0.000 2.629 0.93 0.91 0.93 0.055
Final model 572.977 288 0.000 1.990 0.96 0.95 0.96 0.043

Table 4 shows different types of goodness-of-fit indices used in assessing this study's initially specified model. It demonstrates that the research constructs fit the data according to the absolute, incremental, and parsimonious model fit measures, comprising the chi-square per degree of freedom ratio (x2/df), Incremental Fit Index (IFI), Tucker-Lewis Index (TLI), Comparative Fit Index (CFI), and Root Mean Square Error of Approximation (RMSEA). The researchers examined the standardized regression weights of the research's indicators and found that the indicators loaded highly on their latent variables. Since these items met the minimum recommended factor loading of 0.50, with RMSEA less than 0.10 [57] [59] [62], they were all retained for further analysis, except SI4, GY1, and PP4, which had loadings of 0.405, 0.376, and 0.163 respectively and were therefore excluded. After exclusion, the measurement model showed a better fit to the data (as shown in Table 4): x2/df was 1.990, IFI = 0.96, TLI = 0.95, CFI = 0.96, and RMSEA = 0.043, indicating a better fit considering all loading items.

5.2. Measurement Model


Confirmatory factor analysis (CFA) was conducted to check the properties of the instrument items. Prior to analyzing the structural model, a CFA based on AMOS 20.0 was conducted to first assess the measurement model fit and then the reliability, convergent validity and discriminant validity of the constructs [63]. The outcomes of the measurement model are presented in Table 5, which encapsulates the standardized factor loadings and the reliability and validity measures for the final measurement model.

5.2.1. Unidimensionality
Unidimensionality is the extent to which a set of indicators reflects a single underlying latent variable. An examination of the unidimensionality of the research constructs is an important prerequisite for construct reliability and validity analysis [64]. In line with [56], this research assessed unidimensionality using the factor loadings of items on their respective constructs. Table 5 shows solid evidence for the unidimensionality of all the constructs specified in the measurement model: all loadings were above 0.50, the criterion value recommended by [62], except SI4, GY1, and PP4. These loadings confirm that 27 items loaded satisfactorily on their constructs.
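A minimal sketch of this screening step, using the standardized loadings reported in this paper for three of the constructs:

```python
# Sketch: apply the 0.50 loading criterion to the reported standardized
# loadings (values taken from Table 5 and the text above).
loadings = {
    "SI1": 0.826, "SI2": 0.830, "SI3": 0.566, "SI4": 0.405,
    "GY1": 0.376, "GY2": 0.502, "GY3": 0.862,
    "PP1": 0.900, "PP2": 0.895, "PP3": 0.833, "PP4": 0.163,
}

CUTOFF = 0.50
dropped = sorted(item for item, l in loadings.items() if l < CUTOFF)
print("dropped:", dropped)  # -> dropped: ['GY1', 'PP4', 'SI4']
```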


Table 5. Properties of the final measurement model.


Constructs and indicators | Std. loading | Std. error | Squared multiple correlation | Error variance | Cronbach alpha | Composite reliability | AVE
Perceived usefulness 0.866 0.86 0.68
***
PU1 0.810 0.656 0.344
PU2 0.866 0.051 0.749 0.260
PU3 0.808 0.052 0.653 0.380
Perceived ease of use 0.846 0.82 0.61
***
PE1 0.805 0.648 0.474
PE2 0.802 0.044 0.644 0.374
PE3 0.812 0.045 0.659 0.373
Computer self efficacy 0.777 0.84 0.58
***
CS1 0.674 0.455 0.399
CS2 0.567 0.074 0.322 0.473
CS3 0.780 0.072 0.608 0.221
CS4 0.738 0.075 0.544 0.293
Social influence 0.746 0.78 0.54
***
SI1 0.826 0.682 0.336
SI2 0.830 0.053 0.688 0.338
SI3 0.566 0.053 0.320 0.722
Facilitating conditions 0.772 0.76 0.61
***
FC1 0.778 0.606 0.425
FC2 0.808 0.118 0.653 0.378
Content 0.884 0.82 0.53
***
CT1 0.782 0.612 0.528
CT2 0.750 0.052 0.562 0.561
CT3 0.749 0.047 0.561 0.465
CT4 0.756 0.047 0.571 0.450
Goal expectancy 0.617 0.66 0.50
***
GY2 0.502 0.196 0.640
GY3 0.862 0.220 0.744 0.343
Perceived playfulness 0.772 0.88 0.72
***
PP1 0.900 0.811 0.255
PP2 0.895 0.032 0.801 0.259
PP3 0.833 0.034 0.694 0.384
Behavioral intention to use 0.854 0.84 0.64
***
BI1 0.901 0.811 0.224
BI2 0.762 0.042 0.581 0.435
BI3 0.781 0.045 0.609 0.468

5.2.2. Reliability
Reliability analysis assesses the degree of consistency between multiple measurements of a variable and can be measured by the Cronbach alpha coefficient and composite reliability [57]. Some scholars (e.g. [65]) suggested that the values of all indicators or dimensional scales should be above the recommended value of 0.60. Table 5 indicates that all Cronbach alpha values for the nine variables exceeded the recommended value of 0.60 [65], demonstrating that the instrument is reliable. Furthermore, as shown in Table 5, composite reliability values ranged from 0.66 to 0.88, all greater than the recommended value of 0.60, or 0.70 as suggested by Holmes-Smith (2001) [66]. Consequently, according to these two tests, all the research constructs in this study are considered reliable.
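For reference, both statistics can be computed directly from the data and the standardized loadings; the sketch below implements the standard formulas (values may differ slightly from Table 5 depending on estimation details):

```python
# Sketch: Cronbach's alpha (from raw item scores) and composite reliability
# (from standardized loadings), using their standard formulas.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x k matrix of item scores for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def composite_reliability(loadings) -> float:
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

# Perceived Playfulness loadings from Table 5; this formula gives ~0.91,
# while Table 5 reports 0.88 (such gaps can arise from estimation details).
print(round(composite_reliability([0.900, 0.895, 0.833]), 2))
```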
Since the measurement model has a good fit, convergent validity and discriminant validity can now be assessed in order to evaluate whether the psychometric properties of the measurement model are adequate.

5.2.3. Content, Convergent, and Discriminant Validity


Although reliability is a necessary condition for the goodness of a measure, it is not sufficient [67]-[69]; validity is a further condition for assessing the goodness of a measure. Validity refers to the extent to which an instrument measures what it is expected, or what the researcher wishes, to measure [70]. The items selected to measure the nine variables were validated and reused from previous research; the researchers therefore enhanced the validity of the scale by relying on pre-used scales developed by other researchers. In addition, the questionnaire items were reviewed by four instructors of the Business Faculty at the University of Jordan, whose feedback contributed to the content validity of the instrument. Moreover, to further enhance content validity, seven academics were asked to give their feedback on the questionnaire, confirming that the knowledge presented in each question was relevant to the studied topic.
Furthermore, while a convergent validity test is necessary in the measurement model to determine whether the indicators in a scale load together on a single construct, a discriminant validity test verifies that items developed to measure different constructs are actually evaluating those distinct constructs [71]. As shown in Table 5, all items were significant and had loadings above 0.50 on their underlying constructs. Moreover, the standard errors for the items ranged from 0.032 to 0.220, and all item loadings were more than twice their standard error. Discriminant validity was assessed using several tests. First, it can be examined in the measurement model by investigating the Average Variance Extracted (AVE) shared by the latent constructs: the correlations among the research constructs can be inspected for extremely large values, which would imply a discriminant validity problem. If the AVE for each construct exceeds the squared correlation between that construct and any other construct, discriminant validity holds [72]. As shown in Table 5, the AVEs of all constructs were above the suggested level of 0.50, ranging from 0.50 to 0.72 and thus accounting for at least 50 percent of the variance in their respective measurement items, meeting the recommendation that AVE should be at least 0.50 for each construct [65] [66]. Furthermore, as shown in Table 6, discriminant validity was confirmed, as the AVE values exceeded the squared correlations for each pair of constructs. Thus, the measures significantly discriminate between the constructs.
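A compact sketch of this Fornell-Larcker comparison, on a matrix laid out like Table 6 (AVE on the diagonal, squared correlations off-diagonal), here for a three-construct subset of Table 6:

```python
# Sketch: Fornell-Larcker discriminant validity check on a Table 6-style
# matrix (diagonal = AVE, off-diagonal = squared correlations between constructs).
import numpy as np

names = ["PU", "PE", "BI"]  # subset of the nine constructs, values from Table 6
m = np.array([
    [0.68, 0.55, 0.31],
    [0.55, 0.61, 0.36],
    [0.31, 0.36, 0.44],
])

ave = np.diag(m)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        ok = m[i, j] < min(ave[i], ave[j])
        print(f"{names[i]} vs {names[j]}: r^2={m[i, j]:.2f}, "
              f"AVEs={ave[i]:.2f}/{ave[j]:.2f} -> {'ok' if ok else 'check'}")
```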

5.3. Structural Model and Hypotheses Testing


In order to examine the structural model, it is essential to investigate the statistical significance of the standardized regression weights (i.e. t-values) of the research hypotheses (i.e. the path estimates) at the 0.05 level (see Table 7), as well as the coefficient of determination (R2) for the endogenous research variables.

Table 6. AVE and squared correlations between constructs.


Constructs PU PE CS SI FC CT GY PP BI
PU 0.68
PE 0.55 0.61
CS 0.29 0.51 0.58
SI 0.59 0.58 0.35 0.54
FC 0.27 0.34 0.17 0.33 0.61
CT 0.54 0.66 0.34 0.47 0.40 0.53
GY 0.45 0.64 0.27 0.46 0.36 0.52 0.50
PP 0.53 0.70 0.34 0.42 0.35 0.54 0.47 0.72
BI 0.31 0.36 0.30 0.31 0.30 0.37 0.34 0.38 0.44
Note: Diagonal elements are the average variance extracted for each of the nine constructs. Off-diagonal elements are the squared correlations be-
tween constructs.


Table 7. Summary of proposed results for the theoretical model.


Research proposed paths | Coefficient value | t-value | p-value | Empirical evidence
H1: Perceived playfulness → behavioral intention to use 0.104 1.975 0.049 Supported
H2: Perceived usefulness → behavioral intention to use 0.008 0.164 0.870 Not supported
H3: Perceived usefulness → perceived playfulness 0.156 4.290 0.000 Supported
H4: Perceived ease of use → behavioral intention to use 0.024 0.522 0.602 Not supported
H5: Perceived ease of use → perceived usefulness 0.175 5.131 0.000 Supported
H6: Perceived ease of use → perceived playfulness 0.264 8.285 0.000 Supported
H7: Computer self efficacy → perceived ease of use 0.618 11.012 0.000 Supported
H8: Social influence → perceived usefulness 0.343 9.231 0.000 Supported
H9: Facilitating conditions → perceived ease of use 0.209 5.570 0.000 Supported
H10: Goal expectancy → perceived usefulness 0.118 2.490 0.000 Supported
H11: Goal expectancy → perceived playfulness 0.376 9.797 0.000 Supported
H12: Content → perceived usefulness 0.156 3.594 0.000 Supported
H13: Content → perceived playfulness 0.283 7.050 0.000 Supported
H14: Content → goal expectancy 0.605 16.859 0.000 Supported
H15: Content → behavioral intention to use 0.044 0.833 0.405 Not supported

The coefficients of determination for Goal Expectancy, Perceived Usefulness, Perceived Playfulness, Perceived Ease of Use, and Behavioral Intention to Use were 0.34, 0.20, 0.47, 0.22, and 0.10 respectively, indicating that the model accounts for a substantial share of the variance in Perceived Playfulness but only a small share of the variance in Behavioral Intention to Use.

6. Discussion
Nowadays, students' learning performance and outcomes are increasingly evaluated using CBA rather than Paper Based Assessment (PBA). The purpose of our research is to explore and identify the influential factors that affect students' attitude toward using CBA in higher education; researchers work in this area to help institutions implement CBA successfully. In the literature, Perceived Usefulness, Perceived Ease of Use, Perceived Playfulness, and Perceived Importance are considered main elements in Behavioral Intention to use CBA [2] [4] [7] [11]-[15].
The study shows that Perceived Playfulness has a direct impact on Behavioral Intention, while Perceived Usefulness, Perceived Ease of Use, Content, Computer Self Efficacy, Facilitating Conditions, Social Influence and Goal Expectancy have indirect impacts (see Table 8). The Content construct, used in this manner for the first time in this model, did not have a direct impact on Behavioral Intention as this study's hypothesis suggested. However, the other hypotheses regarding Content were confirmed: Content has a direct effect on Perceived Usefulness, Perceived Playfulness and Goal Expectancy, which indicates an indirect influence on Behavioral Intention.
Regarding Goal Expectancy, it was shown that students find a CBA useful and playful when they have good expectations of the system. Moreover, the positive effect of Social Influence on Perceived Usefulness proposed by TAM2 was also supported by this model. Additionally, the study shows that Perceived Ease of Use is positively impacted by Computer Self Efficacy and Facilitating Conditions, and that it has a direct impact on Perceived Usefulness and Perceived Playfulness. While previous studies show that Perceived Usefulness and Perceived Ease of Use have a direct impact on Behavioral Intention, this study shows that they have only an indirect impact, through Perceived Playfulness.
Therefore, the results of this study confirm the results of the prior study by [4] regarding the roles of Perceived Playfulness, Perceived Usefulness, Content, Computer Self Efficacy, Facilitating Conditions, Social Influence and Goal Expectancy in students' Behavioral Intention to Use CBA, and contradict its results regarding the role of Perceived Ease of Use. Table 9 summarizes the results of this study and of Terzis and Economides (2011), listing the 15 hypotheses and whether each was supported. The study concludes that a system is more likely to be used by students if it is playful, confirming previous studies, and that a CBA is more likely to be playful when it is easy to use and useful.


Table 8. R2 and direct, indirect and total effects.

Dependent variables R2 Independent variables Direct effects Indirect effect Total effect
Perceived playfulness 0.104 0.000 0.104
Perceived usefulness 0.008 0.017 0.025
Perceived ease of use 0.024 0.032 0.056
Computer self efficacy 0.000 0.034 0.034
Behavioral intention to use 0.10
Social influence 0.000 0.008 0.008
Facilitating conditions 0.000 0.012 0.012
Goal expectancy 0.000 0.039 0.039
Content 0.044 0.057 0.101
Perceived usefulness 0.156 0.000 0.156
Perceived ease of use 0.263 0.028 0.291
Computer self efficacy 0.000 0.179 0.179
Perceived playfulness 0.47 Social influence 0.000 0.054 0.054
Facilitating conditions 0.000 0.061 0.061
Goal expectancy 0.376 0.003 0.379
Content 0.283 0.254 0.537
Perceived ease of use 0.175 0.000 0.175
Computer self efficacy 0.000 0.108 0.108
Social influence 0.343 0.000 0.343
Perceived usefulness 0.20
Facilitating conditions 0.000 0.037 0.037
Goal expectancy 0.018 0.000 0.018
Content 0.156 0.011 0.167
Computer self efficacy 0.618 0.000 0.618
Perceived ease of use 0.22
Facilitating conditions 0.209 0.000 0.209
Goal expectancy 0.34 Content 0.605 0.000 0.605
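As a worked illustration of how the indirect effects in Table 8 arise, the short computation below composes the standardized path coefficients from Table 7 along the three routes from Perceived Ease of Use to Behavioral Intention; the result matches the 0.032 indirect effect reported above.

```python
# Worked example: indirect effect of Perceived Ease of Use (PE) on
# Behavioral Intention (BI), composed from Table 7's path coefficients.
pe_to_pp, pe_to_pu = 0.264, 0.175   # H6, H5
pu_to_pp = 0.156                    # H3
pp_to_bi, pu_to_bi = 0.104, 0.008   # H1, H2

indirect = (pe_to_pp * pp_to_bi                 # PE -> PP -> BI
            + pe_to_pu * pu_to_bi               # PE -> PU -> BI
            + pe_to_pu * pu_to_pp * pp_to_bi)   # PE -> PU -> PP -> BI
print(round(indirect, 3))  # 0.032, as in Table 8
```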

Table 9. Summary of our research results and the results of Terzis and Economides (2011) [4].

Hypothesis Path Terzis et al. results This research result


H1 PP → BI Supported Supported
H2 PU → BI Not supported Not supported
H3 PU → PP Supported Supported
H4 PEOU → BI Supported Not supported
H5 PEOU → PU Supported Supported
H6 PEOU → PP Supported Supported
H7 CSE → PEOU Supported Supported
H8 SI → PU Supported Supported
H9 FC → PEOU Supported Supported
H10 GE → PU Supported Supported
H11 GE → PP Supported Supported
H12 C → PU Supported Supported
H13 C → PP Supported Supported
H14 C → GE Supported Supported
H15 C → BI Not supported Not supported


7. Conclusions
This study investigated the factors that influence students' behavioral intention to use computer based assessment in higher education. The tested model and measurement were supported by the collected data. Our results demonstrate that Perceived Playfulness has a direct effect on Behavioral Intention to Use CBA, which aligns with [4] [48] [54] [29] [30]. Perceived Usefulness has no direct effect on Behavioral Intention to Use CBA, which aligns with [4] and contradicts [29] [31] [32] [54]. On the other hand, Perceived Ease of Use has no direct effect on Behavioral Intention to Use CBA, which contradicts [4]. Furthermore, Content has no direct effect on Behavioral Intention to Use CBA, while it has a direct effect on Goal Expectancy, Perceived Usefulness, and Perceived Playfulness, which aligns with [4]. Also, Perceived Ease of Use has a direct effect on Perceived Usefulness and Perceived Playfulness, and is itself positively impacted by Computer Self Efficacy and Facilitating Conditions. Moreover, Perceived Usefulness is positively impacted by Goal Expectancy and Social Influence. Finally, Perceived Playfulness is positively impacted by Perceived Usefulness and Goal Expectancy.
The study shows that Perceived Playfulness has a direct effect on CBA use, whereas Perceived Ease of Use, Perceived Usefulness, Computer Self Efficacy, Social Influence, Facilitating Conditions, Content and Goal Expectancy have only indirect effects. Consequently, educators and developers should aim to achieve students' playfulness in using CBA. The study concludes that a system is more likely to be used by students if it is playful, and a CBA is more likely to be playful when it is easy to use and useful. Finally, the studied acceptance model for computer based assessment explains only approximately 10% of the variance in Behavioral Intention to Use CBA; researchers therefore need to investigate other variables that affect Behavioral Intention.

References
[1] Joosten-ten Brinke, D., van Bruggen, J., Hermans, H., Burgers, J., Giesbers, B., Koper, R. and Latour, I. (2007) Mod-
eling Assessment for Re-Use of Traditional and New Types of Assessment. Computers in Human Behavior, 23, 2721-
2741. http://dx.doi.org/10.1016/j.chb.2006.08.009
[2] Siozos, P., Palaigeorgiou, G., Triantafyllakos, G. and Despotakis, T. (2009) Computer Based Testing Using “Digital
Ink”: Participatory Design of a Tablet PC Based Assessment Application for Secondary Education. Computers &
Education, 52, 811-819. http://dx.doi.org/10.1016/j.compedu.2008.12.006
[3] Moridis, C.N. and Economides, A.A. (2009) Mood Recognition during Online Self-Assessment Test. IEEE Transac-
tions on Learning Technologies, 2, 50-61. http://dx.doi.org/10.1109/TLT.2009.12
[4] Terzis, V. and Economides, A.A. (2011) The Acceptance and Use of Computer Based Assessment. Computers & Edu-
cation, 56, 1032-1044. http://dx.doi.org/10.1016/j.compedu.2010.11.017
[5] Maqableh, M. (2012) Analysis and Design Security Primitives Based on Chaotic Systems for eCommerce. Durham
University.
[6] Karajeh, H., Maqableh, M. and Masa’deh, R. (2014) A Review on Stereoscopic 3D: Home Entertainment for the
Twenty First Century. 3D Research-Springer, 5, 1-9. http://dx.doi.org/10.1007/s13319-014-0026-3
[7] Deutsch, T., Herrmann, K., Frese, T. and Sandholzer, H. (2012) Implementing Computer-Based Assessment—A Web-
Based Mock Examination Changes Attitudes. Computers and Education, 58, 1068-1075.
http://dx.doi.org/10.1016/j.compedu.2011.11.013
[8] Masa’deh, R. (2013) A Structural Equation Modeling Approach for Determining Antecedents and Outcomes of Stu-
dents’ Attitude toward Mobile Commerce Adoption. Life Science Journal, 10, 2321-2333.
[9] Sieber, V. and Young, D. (2008) Factors Associated with the Successful Introduction of On-Line Diagnostic, Forma-
tive and Summative Assessment in the Medical Sciences Division University of Oxford, 267-278.
http://caaconference.co.uk/pastConferences/2008/proceedings/Seiber_V_Young_D_final_formatted_e1.pdf
[10] Ko, C.C. and Cheng, C.D. (2008) Flexible and Secure Computer-Based Assessment Using a Single Zip Disk. Comput-
ers and Education, 50, 915-926. http://dx.doi.org/10.1016/j.compedu.2006.09.010
[11] Davis, F.D. (1989) Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS
Quarterly, 13, 319-340. http://dx.doi.org/10.2307/249008
[12] Venkatesh, V. and Davis, F.D. (1996) A Model of the Antecedents of Perceived Ease of Use: Development and Test.
Decision Science, 27, 451-481. http://dx.doi.org/10.1111/j.1540-5915.1996.tb01822.x
[13] Kreiter, C.D., Ferguson, K. and Gruppen, L.D. (1999) Evaluating the Usefulness of Computerized Adaptive Testing for
Medical In-Course Assessment. Academic Medicine: Journal of the Association of American Medical Colleges, 74, 1125-1128. http://dx.doi.org/10.1097/00001888-199910000-00016
[14] Terzis, V., Moridis, C.N. and Economides, A.A. (2012) How Student’s Personality Traits Affect Computer Based As-
sessment Acceptance: Integrating BFI with CBAAM. Computers in Human Behavior, 28, 1985-1996.
http://dx.doi.org/10.1016/j.chb.2012.05.019
[15] Terzis, V., Moridis, C.N. and Economides, A.A. (2013) Continuance Acceptance of Computer Based Assessment
through the Integration of User’s Expectations and Perceptions. Computers and Education, 62, 50-61.
http://dx.doi.org/10.1016/j.compedu.2012.10.018
[16] Thelwall, M. (2000) Computer-Based Assessment: A Versatile Educational Tool. Computers & Education, 34, 37-49.
http://dx.doi.org/10.1016/S0360-1315(99)00037-8
[17] Jantz, C., Anderson, J. and Gould, S.M. (2002) Using Computer-Based Assessments to Evaluate Interactive Multime-
dia Nutrition Education among Low-Income Predominantly Hispanic Participants. Journal of Nutrition Education and
Behavior, 34, 252-260. http://dx.doi.org/10.1016/S1499-4046(06)60103-6
[18] He, L. and Brandt, P. (2007) WEAS: A Web-Based Educational Assessment System. Proceedings of the 45th Annual
Southeast Regional Conference, ACM, New York, 126-131. http://dx.doi.org/10.1145/1233341.1233365
[19] Jimoh, R.G., Yussuff, M.A., Akanmu, M.A., Enikuomehin, A.O. and Salman, I.R. (2011) Acceptability of Computer
Based Testing (CBT) Mode for Undergraduate Courses in Computer Science. Journal of Science, Technology, Mathe-
matics and Education (JOSTMED), 7, 11-20.
[20] Saleem, H., Beaudry, A. and Croteau, A.M. (2011) Antecedents of Computer Self-Efficacy: A Study of the Role of
Personality Traits and Gender. Computers in Human Behavior, 27, 1922-1936.
http://dx.doi.org/10.1016/j.chb.2011.04.017
[21] Alquraan, M.F. (2012) Methods of Assessing Students’ Learning in Higher Education: An Analysis of Jordanian Col-
lege and Grading System. Education, Business and Society: Contemporary Middle Eastern Issues, 5, 124-133.
http://dx.doi.org/10.1108/17537981211251160
[22] Jimoh, R.G., Shittu, A.K. and Kawu, Y.K. (2012) Students’ Perception of Computer Based Test (CBT) for Examining
Undergraduate Chemistry Courses. Journal of Emerging Trends in Computing and Information Sciences, 3, 125-134.
[23] Van Der Kleij, F.M., Eggen, T.J.H.M., Timmers, C.F. and Veldkamp, B.P. (2012) Effects of Feedback in a Computer-
Based Assessment for Learning. Computers and Education, 58, 263-272.
http://dx.doi.org/10.1016/j.compedu.2011.07.020
[24] Quellmalz, E. (2014) Computer-Based Assessment. In: Gunstone, R., Ed., Encyclopedia of Science Education,
Springer, Dordrecht, 1-6. http://dx.doi.org/10.1007/978-94-007-6165-0_44-2
[25] Terzis, V., Moridis, C.N., Economides, A.A. and Mendez, G.R. (2013) Computer Based Assessment Acceptance: A
Cross-Cultural Study in Greece and Mexico. Educational Technology and Society, 16, 411-424.
[26] Abduh, H.Y., Hussin, R., Bin, C. and Dahlan, H.M. (2014) Technology Acceptance for CBT in Secondary Schools of
Saudi Arabia, 3-6.
[27] Huff, K.C. (2015) The Comparison of Mobile Devices to Computers for Web-Based Assessments. Computers in Hu-
man Behavior, 49, 208-212. http://dx.doi.org/10.1016/j.chb.2015.03.008
[28] Timmers, C.F., Walraven, A. and Veldkamp, B.P. (2015) The Effect of Regulation Feedback in a Computer-Based
Formative Assessment on Information Problem Solving. Computers & Education, 87, 1-9.
http://dx.doi.org/10.1016/j.compedu.2015.03.012
[29] Landry, B.J.L., Griffeth, R. and Hartman, S. (2006) Measuring Student Perceptions of Blackboard Using the Technol-
ogy Acceptance Model. Decision Sciences Journal of Innovative Education, 4, 87-99.
http://dx.doi.org/10.1111/j.1540-4609.2006.00103.x
[30] Terzis, V., Moridis, C.N. and Economides, A.A. (2011) The Extension of the Computer Based Assessment Acceptance
Model with Perceived Importance. Proceedings of the 4th International Conference on Interactive Computer-Aided
Blended Learning, Antigua Guatemala, 2-4 November 2011.
[31] Liao, H.L. and Lu, H.P. (2008) The Role of Experience and Innovation Characteristics in the Adoption and Continued
Use of E-Learning Websites. Computers and Education, 51, 1405-1416.
http://dx.doi.org/10.1016/j.compedu.2007.11.006
[32] Teo, T. (2009) Modeling Technology Acceptance in Education: A Study of Pre-Service Teachers. Computers & Edu-
cation, 52, 302-312. http://dx.doi.org/10.1016/j.compedu.2008.08.006
[33] Mayer, R.E. (2002) A Taxonomy for Computer-Based Assessment of Problem Solving. Computers in Human Behavior,
18, 623-632. http://dx.doi.org/10.1016/S0747-5632(02)00020-1
[34] Gikandi, J.W., Morrow, D. and Davis, N.E. (2011) Online Formative Assessment in Higher Education: A Review of
the Literature. Computers and Education, 57, 2333-2351. http://dx.doi.org/10.1016/j.compedu.2011.06.004
[35] Terzis, V. and Economides, A.A. (2011) Computer Based Assessment: Gender Differences in Perceptions and Accep-
tance. Computers in Human Behavior, 27, 2108-2122. http://dx.doi.org/10.1016/j.chb.2011.06.005
[36] Nirmalakhandan, N. (2013) Improving Problem-Solving Skills of Undergraduates through Computerized Dynamic
Assessment. Procedia—Social and Behavioral Sciences, 83, 615-621. http://dx.doi.org/10.1016/j.sbspro.2013.06.117
[37] Pilli, O. and Aksu, M. (2013) The Effects of Computer-Assisted Instruction on the Achievement, Attitudes and Reten-
tion of Fourth Grade Mathematics Students in North Cyprus. Computers and Education, 62, 62-71.
http://dx.doi.org/10.1016/j.compedu.2012.10.010
[38] Moon, J.W. and Kim, Y.G. (2001) Extending the TAM for a World-Wide-Web Context. Information and Management,
38, 217-230. http://dx.doi.org/10.1016/S0378-7206(00)00061-6
[39] Malone, T.W. (1981) Toward a Theory of Intrinsically Motivating Instruction. Cognitive Science: A Multidisciplinary
Journal, 5, 333-369. http://dx.doi.org/10.1207/s15516709cog0504_2
[40] Lee, Y.C. (2008) The Role of Perceived Resources in Online Learning Adoption. Computers and Education, 50, 1423-
1438. http://dx.doi.org/10.1016/j.compedu.2007.01.001
[41] Ong, C.S. and Lai, J.Y. (2006) Gender Differences in Perceptions and Relationships among Dominants of E-Learning
Acceptance. Computers in Human Behavior, 22, 816-829. http://dx.doi.org/10.1016/j.chb.2004.03.006
[42] Van Raaij, E.M. and Schepers, J.J.L. (2008) The Acceptance and Use of a Virtual Learning Environment in China.
Computers & Education, 50, 838-852. http://dx.doi.org/10.1016/j.compedu.2006.09.001
[43] Agarwal, R. and Prasad, J. (1999) Are Individual Differences Germane to the Acceptance of New Information Tech-
nologies? Decision Sciences, 30, 361-391. http://dx.doi.org/10.1111/j.1540-5915.1999.tb01614.x
[44] Agarwal, R., Sambamurthy, V. and Stair, R.M. (2000) Research Report: The Evolving Relationship between General
and Specific Computer Self-Efficacy: An Empirical Assessment. Information Systems Research, 11, 418-430.
http://dx.doi.org/10.1287/isre.11.4.418.11876
[45] Garrido-Moreno, A., Padilla-Meléndez, A. and Del Aguila-Obra, A.R. (2008) Factors Affecting E-Collaboration Technol-
ogy Use among Management Students. Computers & Education, 51, 609-623.
[46] Karahanna, E. and Straub, D.W. (1999) The Psychological Origins of Perceived Usefulness and Ease-of-Use. Informa-
tion & Management, 35, 237-250. http://dx.doi.org/10.1016/S0378-7206(98)00096-2
[47] Venkatesh, V., Morris, M.G., Davis, G.B. and Davis, F.D. (2003) User Acceptance of Information Technology: To-
ward a Unified View. MIS Quarterly, 27, 425-478.
[48] Wang, Y.-S., Cheng, M. and Wang, H.-Y. (2008) Investigating the Determinants and Age and Gender Differences in
the Acceptance of Mobile Learning. British Journal of Educational Technology, 40, 92-118.
[49] Smith, P.J., Murphy, K.L. and Mahoney, S.E. (2003) Towards Identifying
Factors Underlying Readiness for Online Learning: An Exploratory Study. Distance Education, 24, 57-67.
http://dx.doi.org/10.1080/01587910303043
[50] Yi, M.Y. and Hwang, Y. (2003) Predicting the Use of Web-Based Information Systems: Self-Efficacy, Enjoyment,
Learning Goal Orientation, and the Technology Acceptance Model. International Journal of Human Computer Studies,
59, 431-449. http://dx.doi.org/10.1016/S1071-5819(03)00114-9
[51] Shih, H.P. (2008) Using a Cognition-Motivation-Control View to Assess the Adoption Intention for Web-Based
Learning. Computers and Education, 50, 327-337. http://dx.doi.org/10.1016/j.compedu.2006.06.001
[52] Vroom, V.H. (1964) Work and Motivation. 14th Edition, Wiley, New York.
[53] Cahill, S.E. and Bandura, A. (1987) Social Foundations of Thought and Action: A Social Cognitive Theory. Contem-
porary Sociology, 16, 12. http://dx.doi.org/10.2307/2071177
[54] Ong, C.S., Lai, J.Y. and Wang, Y.S. (2004) Factors Affecting Engineers’ Acceptance of Asynchronous E-Learning
Systems in High-Tech Companies. Information and Management, 41, 795-804.
http://dx.doi.org/10.1016/j.im.2003.08.012
[55] Weinerth, K., Koenig, V., Brunner, M. and Martin, R. (2014) Concept Maps: A Useful and Usable Tool for Computer-
Based Knowledge Assessment? A Literature Review with a Focus on Usability. Computers and Education, 78, 201-
209. http://dx.doi.org/10.1016/j.compedu.2014.06.002
[56] Byrne, B.M. (2001) Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming.
Lawrence Erlbaum Associates, Mahwah.
[57] Hair, J., Black, W., Babin, B., Anderson, R. and Tatham, R. (2010) Multivariate Data Analysis. 7th Edition,
Prentice-Hall International Inc., Upper Saddle River.
[58] Kline, R.B. (2005) Principles and Practice of Structural Equation Modeling. 2nd Edition, The Guilford Press, New
York.
[59] Kline, R.B. (2010) Principles and Practice of Structural Equation Modeling. The Guilford Press, New York.
[60] Krejcie, R.V. and Morgan, D.W. (1970) Determining Sample Size for Research Activities. Educational and Psycho-
logical Measurement, 30, 607-610.
[61] Pallant, J. (2005) SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows. Open
University Press, Chicago.
[62] Newkirk, H.E. and Lederer, A.L. (2006) The Effectiveness of Strategic Information
Systems Planning under Environmental Uncertainty. Information & Management, 43, 481-501.
http://dx.doi.org/10.1016/j.im.2005.12.001
[63] Arbuckle, J.L. (2009) Amos 18 User’s Guide, 635.
[64] Chou, T.C., Chang, P.L., Cheng, Y.P. and Tsai, C.T. (2007) A Path Model Linking Organizational Knowledge
Attributes, Information Processing Capabilities, and Perceived Usability. Information and Management, 44, 408-417.
http://dx.doi.org/10.1016/j.im.2007.03.003
[65] Bagozzi, R. and Yi, Y. (1988) On the Evaluation of Structural Equation Models. Journal of the Academy of Marketing
Science, 16, 74-94. http://dx.doi.org/10.1007/BF02723327
[66] Holmes-Smith, P. (2001) Introduction to Structural Equation Modeling Using LISREL. ACSPRI Winter Training Pro-
gram, Perth.
[67] Creswell, J.W. (2014) Research Design: Qualitative, Quantitative and Mixed Methods Approaches. Sage, Los Angeles.
[68] Sekaran, U. (2003) Research Methods for Business: A Skill-Building Approach. 4th Edition, John Wiley and Sons,
New York.
[69] Sekaran, U. and Roger, B. (2013) Research Methods for Business: A Skill-Building Approach. 6th Edition, John Wiley
& Sons, West Sussex.
[70] Blumberg, B., Cooper, D.R. and Schindler, P.S. (2005) Business Research Methods. McGraw Hill, Berkshire, 770.
[71] Gefen, D., Straub, D.W. and Boudreau, M.C. (2000) Structural Equation Modeling and Regression: Guidelines for Re-
search Practice. Communications of the Association for Information Systems, 4, 1-76.
http://www.cis.gsu.edu/~dstraub/Papers/Resume/Gefenetal2000.pdf
[72] Fornell, C. and Larcker, D.F. (1981) Evaluating Structural Equation Models with Unobservable Variables and Mea-
surement Error. Journal of Marketing Research (JMR), 18, 39-50.