
ISSN 1392-0340 / E-ISSN 2029-0551
Pedagogika / Pedagogy, 2016, Vol. 124, No. 4, pp. 232–248
DOI: http://dx.doi.org/10.15823/p.2016.65

Assessing Assessment Literacy and Practices among Lecturers

Seyed Ali Rezvani Kalajahi¹, Ain Nadzimah Abdullah²

¹ Universiti Putra Malaysia, Faculty of Modern Languages and Communication, Department of English, 43400 Serdang, Selangor, Malaysia, [email protected]
² Universiti Putra Malaysia, Faculty of Modern Languages and Communication, Department of English, 43400 Serdang, Selangor, Malaysia, [email protected]

Abstract. Accountability systems are important for higher education and are often linked to the credibility of lecturers' assessment literacy. Lecturers are responsible for 'report cards' that act as benchmarks of student learning processes and outcomes. Assessment literacy among lecturers is therefore of primary importance, as institutions rely on lecturers to assess students' content knowledge and skills. The question that arises is whether lecturers have been provided with sufficient and appropriate knowledge of assessment methods or whether assessment has been left largely to the idiosyncrasies of individual lecturers. This study seeks to establish the level of assessment literacy among lecturers and to investigate common assessment practices. The methodology involves a survey questionnaire administered to 65 lecturers from different disciplines at a Malaysian public university. Findings show that the state of assessment literacy among lecturers is not at a satisfactory level and that lecturers may not have undergone sufficient assessment training to discharge an important part of their professional responsibility in the context of teaching and learning.

Keywords: higher education, assessment practice, assessment literacy, lecturers, types of assessments, learning outcomes.

Introduction

Assessment has gained increasing attention in education in recent decades. Educational researchers now recognize that teachers' classroom assessment beliefs and knowledge of assessment practice act as instruments from which both students and teachers can benefit enormously. In addition, assessment at school and tertiary levels has generated vigorous discussion in many countries, and Malaysia is no exception.
Chan (2008, p. 37) puts forth that “assessment refers to any method, strategy or tool
a teacher may use to collect evidence about students’ progress toward achievement of
established goals”. It is a process of collecting information and gathering evidence about
what students have learned. In particular, Heaton (1990) and Popham (1995) point out that the goals and functions of assessment are to (1) understand the strengths and weaknesses of students' learning ability, (2) assist teachers in monitoring student learning progress, (3) evaluate students' learning, and (4) place students in learning groups based on given institutional standards. Assessment is a systematic process that provides an opportunity for teachers to meaningfully reflect on how learning is best delivered, collect relevant evidence, and then use that information to improve their teaching.
Furthermore, assessment can help instructors obtain useful and immediate feedback
on what, how much, and how well their students are learning (Taras, 2005; Stiggins,
1992 cited in Buyukkarci, 2014).
Assessment can be classified into two main categories. The first is summative assessment (or 'assessment of learning', as put forth by Stiggins, 2002, and Derrich and Ecclestone, 2006). The objective of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark. Taras (2005) notes that summative assessment is a judgment which summarizes all the evidence up to a given point, and that point is seen as a finality at the moment of judgment. This type of assessment can serve various functions, such as shaping how teachers organize their courses or what courses schools can offer their students, functions which do not directly affect the learning process.
The second category, on the other hand, is formative assessment (or assessment for
learning, ongoing assessment, or dynamic assessment as stated by Stiggins, 2002; Derrich
and Ecclestone, 2006). The aim of formative assessment is to monitor student learning to
provide ongoing feedback that can be used by instructors to improve their teaching, and
by the students to improve their learning. In Threlfall’s (2005) terms “formative assessment
may be defined as the use of assessment judgments about capacities or competences to
promote the further learning of the person who has been assessed" (p. 54). This type of assessment helps students identify their strengths and weaknesses and target areas that need work, and it also helps faculty recognize where students are struggling so that problems can be addressed immediately.
It is worth mentioning that teachers may also take an interest in assessing the strengths and weaknesses of individuals through diagnostic assessment. Brown (2005) describes that, like formative assessment, diagnostic assessment is meant to improve the learner's experience and level of achievement. However, it looks backwards rather than forwards, assessing what the learner already knows and/or the nature of difficulties that the learner might have. This type of assessment is often used before teaching or when a problem arises; if such difficulties are left undiagnosed, they might limit learners' engagement in new learning. In contrast, placement assessment, which is almost identical to summative assessment, is more specifically relevant to a given program, particularly in terms of the relatively narrow range of abilities assessed and the content of the curriculum, so that it efficiently separates students into level groupings within that program.
Based on this discussion of the modes of assessment, it can be said that formative assessment, practised continuously in class, is likely to be more beneficial for students' learning than summative assessment alone. Yet practice varies from institution to institution, and different teachers may hold different views about it. Chew and Lee (2013, p. 1) state that "through formative assessment,
students can be assessed for their process of learning rather than just for their product
of learning. Coupled with the usual summative assessment that students are subjected
to, acquisition and application of knowledge acquired can be effectively determined.”
The role of lecturers in assessment is not without importance. The motivation behind this study is to determine whether lecturers at universities are equipped with the necessary skills and knowledge in the fundamentals of language assessment to be able to evaluate students fairly and effectively. In addition, this study intended to explore the assessment practices and beliefs of the lecturers. Linn and Miller (2005) emphasized that educational reform has called for the use of multiple sources of assessment information in the classroom instead of reliance on a single type of assessment. The Ministry of Education in Malaysia has taken this assessment reform seriously and introduced a new national assessment system for all schools. The goal was to reduce dependence on the highly centralized assessment system and shift to a system that integrates assessment practices and beliefs. In anticipation of the reform of the assessment system, the current assessment practices of Malaysian lecturers need to be known so that appropriate action can be taken to improve lecturers' assessment skills.
Previous attempts in the Malaysian context to identify assessment beliefs and practices have relied on data from primary, secondary or in-service school teachers (Mohd Sallehhudin, 2011; Suah & Ong, 2012; Ch'ng & Rethinasamy, 2013). Inadequacies in the data often result in an unclear picture of the assessment practices and beliefs of lecturers. As the assessment practices of Malaysian lecturers are not well explored, this study was carried out to identify the current assessment practices and beliefs of lecturers at the tertiary level.
It is firmly believed that the results of this study will shed light on both the strong and
problematic areas of teachers’ ideas and actual practices about assessment for learning
and assessment of learning, which may lead to a better understanding of the importance
of assessment in students’ language learning processes.


Purpose of the Study

Harris, Irving and Peterson (2008) put forward that assessment is considered a critical component of the teaching and learning process, as it enables educators to evaluate student learning and to use that information to improve learning and instruction. Accordingly, Brookhart (1999) highlights the great importance of teachers using assessments that are valid, reliable, meaningful and accurate to guide instruction. Hence,
this study seeks to investigate assessment beliefs and practices of a group of lecturers
from Malaysian tertiary institutions. In particular, this study sought to answer the
following questions:
1. What is the state of assessment beliefs among lecturers in institutions of higher
learning?
2. How do lecturers carry out their assessment practices?
3. Is there any relationship between assessment beliefs and practices of Malaysian
lecturers?
4. What kinds of assessment methods and tools do lecturers use to evaluate their
students?
5. What are the problems that lecturers encounter while assessing their students?

Research Background and Literature Review

Educators today are expected to make important professional decisions based on the
results of educational assessments. Yet, in many instances, the educators making those
assessment-dependent decisions are doing so without a genuine understanding of educational assessment (Popham, 2006). According to Wiggins (1998), the term 'educative assessment', used to describe assessment literacy, encompasses the techniques and issues that educators should know about when they design and use assessments. He states that the nature of assessment influences what is learned and the degree of meaningful engagement by students in the learning process. It is also believed that effective teaching is characterized by assessments that motivate and engage students in ways that are consistent with philosophies of teaching and learning and with theories of learning and motivation.
Effective teaching and learning rest on meaningful assessment, and professional judgment is the foundation of assessment. Educators need to clearly understand and use all aspects of assessment. Whether that professional judgment is exercised in constructing test questions, scoring essays, creating rubrics, grading participation, combining scores, or interpreting standardized test scores, the essence of the process is professional judgment and interpretation. The degree of competence in such judgment and interpretation is determined by the level of assessment literacy. Assessment literacy is based on professional assumptions and values and is cultivated in the context of institutional needs and goals. Thus, assessment literacy carries a high premium in describing the success of educational provision.
Surprisingly, the term assessment literacy is not listed in the Dictionary of Language
Testing (1999). Neither does it appear in the ALTE Multilingual Glossary of Language
Testing Terms (1998) or Mousavi’s Encyclopedic Dictionary of Language Testing (2002). In
each of these works, the issue of how people actually become competent at assessing was
not mentioned. However, it is assumed that when lecturers begin their careers, they are expected to be equipped with assessment literacy. If they do not already have these skills, they are expected to develop them in tandem with all the other skills that will advance their careers, such as intensive publication and research alongside teaching. The issue in focus is how lecturers learn about the concepts involved in assessment.
Assessment literacy does not just cover knowledge of test formats such as multiple-choice tests, essay tests, cloze tests, etc. It covers knowing aspects of assessment that include principles of good test construction, factors external to the classroom such as mandated large-scale testing, and different assessment strategies (such as using selected-response tests and providing practice in objective test-taking) (McMillan & Nash, 2000). Assessment literacy also means being able to decide what type of test to use and for what purpose. Assessment can come in many forms depending on what needs to be assessed and how. Some of these distinctions are:
• Learning vs. auditing
• Formative (informal and ongoing) vs. summative (formal and at the end)
• Criterion-referenced vs. norm-referenced
• Value-added vs. absolute standards
• Traditional vs. alternative
• Authentic vs. contrived
• Speeded tests vs. power tests
• Standardized tests vs. classroom tests

Many researchers have argued that there are a number of "essential" assessment concepts, principles, techniques and procedures that educators need to know about (Calfee & Masuda, 1997; McMillan, 2001; Sanders & Vogel, 1993; Stiggins & Conklin, 1992). Yet there continues to be relatively little emphasis on assessment preparation or on professional development of assessment literacy for educators (Stiggins, 2000). Competing purposes, uses and pressures create tension for educators as they make assessment-related decisions. These prevalent tensions suggest that assessment decisions are best made with a complete understanding of how different factors influence the nature of the assessment. Once these factors are understood, priorities need to be set, and trade-offs may be inevitable. By understanding the tensions better, educators will hopefully make better informed and better justified assessment decisions.


Assessment is inherently a process of professional judgment which requires specialized training. When hiring new lecturers, assessment literacy is not often a consideration for employment, even though assessment forms a major part of a lecturer's responsibility. The skills, it seems, are expected to be 'caught' rather than 'taught'. As a result, for many novice lecturers assessment literacy is an issue, as they have to contend with learning about assessment simultaneously with learning the many other duties of a lecturer. The gaining of assessment literacy should not be left to chance. There is definitely a place for every institution to harness and develop assessment literacy in a concerted manner, to ensure that every lecturer has the knowledge and assessment skills needed to be an effective educator.
At the tertiary level, lecturers need to be secure and firm in their assessment literacy. Lecturers must be able to put knowledge into practice in order to construct viable tests that are valid. Their knowledge would need to cover fundamental concepts common to all assessments, as well as specifics essential for particular learning situations (e.g. in the laboratory or classroom), the different types of instruments used, measurement concepts, and the construction and evaluation of scoring rubrics.
Generally, personal experiences and interactions in daily life play a critical role in shaping the beliefs and interpretations of the events that individuals have engaged in (Al-Sharafi, 1998; Hsieh, 2002). Bauch (1984) holds that these beliefs are transformed into attitudes, which in turn influence intentions, with intentions becoming the bases for decisions that result in action.
In educational contexts, this belief system governs teaching behaviors, with individual pedagogies reflecting a teacher's beliefs about language teaching (Bauch, 1984; Graves, 2000; Huang, 1997). The influence of beliefs on teachers' thought processes and instructional decisions is substantial (Borg, 1999). These beliefs chiefly govern teachers' choices and practices, such as setting teaching objectives, designing lessons, selecting tasks and activities, and assessing student performance (Rios, 1996). Therefore, teachers not only impart knowledge to their students but also, consciously or unconsciously, pass on or impose their beliefs about learning onto their students in the classroom (Horwitz, 1988).

Previous studies on assessment practices and beliefs

Studies focusing on assessment practices have shown that teachers' practices are affected by subject area and years of teaching experience (Bol et al., 1998; Mertler, 1998; Trepanier-Street, McNair, & Donegan, 2001; Zhang & Burry-Stock, 2003). These studies indicated that teachers relied more on formative assessment approaches than on summative ones. Several studies compared primary school teachers with secondary school teachers and found that, all in all, formative assessment was preferred, while secondary teachers opted for more conventional types of assessment approaches.

Mohd Sallehhudin (2011) analyzed the assessment practices of Malaysian teachers; the results showed that the language teachers adopted a range of practices. Suah and Ong (2012) investigated the assessment practices of 406 Malaysian in-service teachers and concluded that in-service teachers often used traditional types of assessment. The assessment practices differed between language teachers and science and mathematics teachers, between primary and secondary school teachers, and between experienced and inexperienced teachers.
Cheng (1997) found that teachers’ beliefs about foreign language learning had a critical
effect on students’ anxiety about foreign language learning. The findings revealed that
Chinese teachers are likely to emphasize the importance of excellent pronunciation,
immediate error correction, vocabulary memorization and grammar rules. Rahim,
Venville and Chapman (2009) uncovered that teachers’ beliefs influenced their class-
room decision-making regarding the teaching and learning experiences for students and
assessment for making judgment about students’ learning. For example, they reported
that studies conducted on Mathematics teachers’ beliefs indicated a positive relationship
between Mathematics teachers’ beliefs and their assessment practices. Chew and Lee
(2013) also conducted a study in Singapore and their findings showed no significant gaps
between participants' assessment beliefs and practices. Likewise, Thomas's study (2012) in Pakistan revealed no significant difference in the assessment beliefs of trained and untrained teachers. In another study, Susuwele-Banda (2005) found that teachers' perceptions of classroom assessment influenced their classroom assessment practices: teachers perceived assessment as testing, and classroom assessment practices were not clearly embedded in their teaching. Finally, Chan's (2008) study of Taiwanese teachers discovered that teachers held strong beliefs about assessment and preferred multiple assessments to conventional assessment. His findings revealed that, in general, teachers applied multiple or alternative assessments most of the time. Furthermore, the correlation coefficient between teachers' beliefs and practices was positive and significant at the .01 level. In fact, the study disclosed that the stronger the teachers' beliefs about assessment, the more frequently they used multiple assessments in their teaching practices.

Method

A quantitative design approach was used in an effort to describe the current assess-
ment beliefs and assessment practices and also to determine to what degree relationships
exist among the variables. According to Gay and Airasian (2000) quantitative research is
“based on the collection and analysis of numerical data” (p. 8) and is used to “describe
current conditions, investigate relationships, and study cause-effect phenomena" (p. 11). The respondents were randomly selected and gave their consent to take part in this study. Participants comprised 35 male (53.8%) and 30 female (46.2%) Malaysian lecturers from various disciplines at Universiti Putra Malaysia (UPM).

The instrument

The instrument is a four-point Likert-scale questionnaire developed by the researchers to meet the objectives of the study. It consists of three parts. Part A contains seven demographic questions that collect background information about the participants. Part B gauges assessment beliefs and is composed of five constructs: 1) general assessment beliefs (20 items), 2) assessment and the teacher (9 items), 3) assessment and student learning (5 items), 4) assessment and teaching improvement (4 items) and 5) irrelevance of assessment (7 items). Part C deals with assessment practices and also has five constructs: 1) assessment to measure students' capability (7 items), 2) assessment practices (12 items), 3) frequency of assessment (5 items), 4) assessment approaches (13 items) and 5) assessment problems teachers face (16 items).
The survey was developed on the basis of related literature and previous studies on assessment beliefs and practices. Two main constructs (assessment beliefs and assessment practices) were developed to make the study feasible, and sub-constructs emerged from the process of establishing construct and content validity. To ensure the validity of the questionnaire, three PhD holders were requested to comment on it. They were experienced researchers in the fields of language education and Applied Linguistics, with a collective 17 years of experience in teaching and research. The independent experts left their comments on the instrument, and the researchers revised it accordingly. Taking the comments into consideration, some irrelevant items under each sub-construct were removed, some biased items were changed, and some were shortened. The language of some items was improved where it had been reported to be unclear or vague.
As for reliability, the Cronbach's alpha for the items examining assessment beliefs was 0.87, and the Cronbach's alpha for the assessment practices items was likewise 0.87. Overall, the questionnaire had a high level of internal consistency, with a Cronbach's alpha of 0.904.
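For readers who want to reproduce this kind of reliability check outside SPSS, the sketch below shows one common way to compute Cronbach's alpha for a block of Likert items. It is a minimal illustration, not the authors' actual analysis: the function name and the toy response matrix are hypothetical, and the study's own computation was done in SPSS.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: six lecturers answering four 4-point Likert items.
responses = np.array([
    [3, 4, 3, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 3],
    [3, 3, 3, 3],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```

An alpha in the neighbourhood of the reported 0.87–0.90 would indicate that the items within a construct vary together consistently across respondents.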

Results and Discussion

Data management and analysis were performed using SPSS software (version 17). The
analysis attempted to address the research questions in detail. Table 1 shows the result
for assessment beliefs and assessment practices among lecturers.


Table 1. Assessment Beliefs vs. Assessment Practices

Construct                                     Mean     Std. Deviation
Assessment beliefs (general)                  3.1077   .56245
Assessment and the teacher                    2.5692   .55816
Assessment and student learning               2.9846   .67297
Assessment and teaching improvement           3.2615   .50858
Irrelevance of assessment                     2.2308   .67937
Assessment beliefs (total)                    2.9077   .45836
Assessment to measure students' capability    3.3846   .60447
Assessment practices                          3.8615   .34807
Frequency of assessment                       2.6462   .51329
Assessment practices (total)                  3.0615   .39039

As can be seen in Table 1, assessment practices had the highest mean (3.8615). The mean and standard deviation of assessment beliefs are 3.1077 and 0.56245 respectively, while the total mean and standard deviation for assessment practices are 3.0615 and 0.39039.
Table 2 presents the findings for the lecturers' overall assessment beliefs and its five sub-categories, while Table 3 illustrates the results for assessment practices in general and its three sub-categories.

Table 2. Assessment Beliefs Sub-categories


Sub-category                          Response             Frequency   Percent
Assessment beliefs (general)          Strongly disagree    0           0
                                      Disagree             7           10.8
                                      Agree                44          67.7
                                      Strongly agree       14          21.5
Assessment and the teacher            Strongly disagree    1           1.5
                                      Disagree             27          41.5
                                      Agree                36          55.4
                                      Strongly agree       1           1.5
Assessment and student learning       Strongly disagree    0           0
                                      Disagree             15          23.1
                                      Agree                36          55.4
                                      Strongly agree       14          21.5
Assessment and teaching improvement   Strongly disagree    0           0
                                      Disagree             2           3.1
                                      Agree                44          67.7
                                      Strongly agree       19          29.2
Irrelevance of assessment             Strongly disagree    8           12.3
                                      Disagree             35          53.8
                                      Agree                21          32.3
                                      Strongly agree       1           1.5
Assessment beliefs (total)            Strongly disagree    0           0
                                      Disagree             10          15.4
                                      Agree                51          78.5
                                      Strongly agree       4           6.2


The frequencies and percentages of the respondents' answers regarding the assessment belief sub-categories are presented in Table 2. In total, it can be stated that the assessment beliefs of the participants are fairly high, suggesting that lecturers hold positive views towards assessment.

Table 3. Assessment Practices Sub-categories


Sub-category                                 Response             Frequency   Percent
Assessment to measure students' capability   Never                0           0
                                             Not often            4           6.2
                                             Often                32          49.2
                                             Very often           29          44.6
Assessment practices                         Strongly disagree    0           0
                                             Disagree             0           0
                                             Agree                9           13.8
                                             Strongly agree       56          86.2
Frequency of assessment                      Strongly disagree    0           0
                                             Disagree             24          36.9
                                             Agree                40          61.5
                                             Strongly agree       1           1.5
Assessment practices (total)                 Strongly disagree    0           0
                                             Disagree             3           4.6
                                             Agree                55          84.6
                                             Strongly agree       7           10.8

Table 3 presents the frequencies and percentages of the respondents' answers regarding the assessment practices sub-categories. The findings indicate that the majority of the lecturers measure students' capability frequently. Likewise, they also believe that assessment practices are very useful. In total, 84.6% of the respondents agree that assessment practices are undertaken on a regular basis.
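The paper does not spell out how the item-level responses were aggregated into the construct-level figures reported in Tables 1–3, so the following is only a plausible sketch: it assumes each respondent's construct score is the mean of that construct's items, and that the categorical breakdown rounds this score to the nearest scale point. The response matrix is hypothetical.

```python
import numpy as np

# Hypothetical 4-point Likert responses (1 = strongly disagree ... 4 = strongly agree):
# rows are respondents, columns are the items belonging to one construct.
construct_items = np.array([
    [3, 4, 3],
    [2, 3, 3],
    [4, 4, 4],
    [3, 3, 2],
    [2, 2, 3],
])

# Construct-level score per respondent, then descriptives as in Table 1.
scores = construct_items.mean(axis=1)
print(f"Mean = {scores.mean():.4f}, SD = {scores.std(ddof=1):.4f}")

# Frequency/percentage breakdown in the style of Tables 2 and 3, assuming each
# construct score is rounded to the nearest point on the 4-point scale.
labels = {1: "Strongly disagree", 2: "Disagree", 3: "Agree", 4: "Strongly agree"}
rounded = np.clip(np.rint(scores).astype(int), 1, 4)
for level in (1, 2, 3, 4):
    count = int((rounded == level).sum())
    print(f"{labels[level]:<18} {count:>3}   {100 * count / len(scores):.1f}%")
```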
Pearson Correlation was used to explore the relationship between assessment beliefs
and practices. Table 4 displays the correlation between assessment beliefs and assessment
practices among participants.

Table 4. The correlation between assessment beliefs and assessment practices


                                          Assessment practices
Assessment beliefs   Pearson Correlation  .294*
                     Sig. (2-tailed)      .017

* Correlation is significant at the 0.05 level (2-tailed).

As is shown in Table 4, there is a positive, significant correlation between assessment beliefs and assessment practices (significant at the 0.05 level). In other words, the higher the level of assessment beliefs, the higher the level of assessment practices. This result is in line with the findings of Rahim, Venville and Chapman (2009) and Chan (2008), who also found a positive correlation between assessment beliefs and practices. This implies that a greater understanding of assessment beliefs and of the importance of practices can inform relevant professional development aimed at improving teachers' assessment pedagogies and practices, which in turn can contribute to greater educational success. However, it remains unclear whether assessment practices are influenced by teachers' beliefs and their contexts, or vice versa.
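As an illustration of the analysis behind Table 4, the snippet below computes a Pearson product-moment correlation with a two-tailed significance test. It is a minimal sketch rather than the study's SPSS procedure, and it assumes that each lecturer's composite belief and practice scores are available as two numeric arrays; the scores shown are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical composite scores (mean of Likert items) for eight lecturers.
belief_scores = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2])
practice_scores = np.array([3.0, 2.9, 3.5, 2.8, 3.1, 3.2, 2.9, 3.3])

# Pearson r and its two-tailed p-value, analogous to the r = .294, p = .017
# relationship between beliefs and practices reported in Table 4.
r, p_value = stats.pearsonr(belief_scores, practice_scores)
print(f"r = {r:.3f}, p (2-tailed) = {p_value:.3f}")
```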
Table 5 indicates the correlation among sub-categories of assessment beliefs and as-
sessment practices as the study was also interested in examining the correlation within
the sub-categories of two variables.

Table 5. Correlations among sub-categories of assessment beliefs and assessment practices

Abbreviations: AB = assessment beliefs (general); AT = assessment and the teacher; ASL = assessment and student learning; ATI = assessment and teaching improvement; IA = irrelevance of assessment; AMC = assessment to measure students' capability; AP = assessment practices; FA = frequency of assessment.

                                AB       AT       ASL      ATI      IA       AMC      AP       FA
AB    Pearson Correlation       –        .200     .459**   .337**   -.107    .290*    .077     .134
      Sig. (2-tailed)                    .110     .000     .006     .396     .019     .540     .287
AT    Pearson Correlation       .200     –        .107     .128     .431**   .175     .010     .169
      Sig. (2-tailed)           .110              .397     .310     .000     .164     .938     .179
ASL   Pearson Correlation       .459**   .107     –        .468**   -.197    .360**   .057     .074
      Sig. (2-tailed)           .000     .397              .000     .115     .003     .649     .556
ATI   Pearson Correlation       .337**   .128     .468**   –        -.042    .379**   .031     .121
      Sig. (2-tailed)           .006     .310     .000              .741     .002     .805     .338
IA    Pearson Correlation       -.107    .431**   -.197    -.042    –        .161     -.127    .283*
      Sig. (2-tailed)           .396     .000     .115     .741              .200     .313     .023

* Correlation is significant at the 0.05 level (2-tailed); ** correlation is significant at the 0.01 level (2-tailed).

As can be seen in Table 5, there are positive, significant correlations between several sub-categories of assessment beliefs and assessment practices. These include: assessment beliefs with assessment and student learning, with assessment and teaching improvement, and with assessment to measure students' capability; assessment and student learning with assessment and teaching improvement and with assessment to measure students' capability; and assessment and teaching improvement with assessment to measure students' capability. These correlations are significant at the 0.05 level or better, but are of moderate strength. It should also be highlighted that some pairs of variables show no significant correlation (e.g. assessment and the teacher with assessment practices), while several show a negative correlation (e.g. assessment and teaching improvement with irrelevance of assessment).
Table 6 shows the range of approaches the lecturers use to assess their students. The results are sorted from the least frequently to the most frequently used approaches.

Table 6. Employment of various approaches for conducting assessment of the students


Approach                              Mean     SD
Dramatization                         1.8923   .97023
Role play                             2.0615   .82683
Tests (weekly and monthly)            2.5077   .83147
Projects and experiments              2.5692   .82858
Quiz/competition                      2.6615   .88877
Written exercises in class            2.6923   .82771
Demonstration of practical skills     2.7231   .89281
Scenarios/case studies                2.7385   .85288
Class presentation                    2.9385   .93336
Homework/assignment                   3.0154   .78047
Group discussion                      3.0308   .76993
Oral questions                        3.0462   .77924
Tests (midterm and terminal)          3.2615   .61940

Table 6 presents the mean and standard deviation for the various approaches employed to assess students. Group discussion, oral questions and midterm/terminal tests, with respective mean scores of 3.03, 3.05 and 3.26, are the methods and tools lecturers use most to assess their students. Dramatization, role play and weekly/monthly tests are the least frequently used approaches.
Table 7 displays the assessment problems which lecturers encounter. The findings are sorted by mean score to show the least and the most important problems that they faced.


Table 7. Assessment problems


Problem                                                      Mean     SD
Student absent on examination day because of fear            2.0923   .97984
Poor lighting                                                2.1231   .87514
Insufficient air conditioning                                2.2000   .86963
Insufficient number of people to help invigilate the test    2.2308   .94818
Inexperienced expertise to vet                               2.3846   .94691
Seating arrangement in classroom                             2.4000   .89791
Audio-visual equipment                                       2.4308   .90085
Large class size                                             2.4462   .93593
Pupils needing more time to take the test                    2.5077   .79300
Classroom layout                                             2.5385   .96949
Insufficient time to cover all the material                  2.5538   .91908
Lack of discipline among students                            2.5538   .86658
Students copying other students' work                        2.6308   .80174
Time-consuming to write the test and administer it           2.8000   .86963
Administrative duties involved with assessment               2.8923   .86824
Time-consuming to mark all the papers                        2.9077   .82392

Table 7 presents the mean and standard deviation for the assessment problems. Issues such as the time consumed in writing and administering tests, the administrative duties involved with assessment, and the time consumed in marking all the papers showed mean scores of 2.80, 2.89 and 2.90 respectively. These items had the highest means among the assessment problems, suggesting that they are the problems lecturers encounter most when assessing their students. Taken together, the findings of this study raise some significant issues related to the quality of teaching and assessment at the tertiary level.

Conclusion

The findings of this research can be summarized as follows. Firstly, Malaysian lecturers have strong beliefs about assessment, and these are often reflected in the way they conduct assessment practices. Secondly, the data indicate a significant but weak correlation between assessment beliefs and assessment practices. Lecturers tended to use mid-term tests and final examinations as the most favored approach for assessing students, while dramatization was the least favored. Finally, lecturers stated that factors such as the time needed to mark all the papers and the administrative duties involved with assessment are the most challenging assessment issues they encounter.


By and large, the results suggest that identifying the state of assessment literacy and practices among lecturers should always be given high priority so that effective teaching and learning can take place in the classroom. Feedback of this nature can depict lecturers' actual and perceived beliefs about assessment practices, and the role assessment literacy plays in the teaching and learning process would then not be undermined. The findings are beneficial in that they reveal strengths and weaknesses of assessment systems and lecturers' preferences in terms of assessment methods and techniques, as well as providing necessary input for further meaningful feedback and action. To complement awareness of lecturers' assessment literacy, the study also draws attention to the perceptions of another important stakeholder, i.e. the students themselves: they are the final beneficiaries of teaching and learning, a captive audience, and the subjects of assessment practices. Teachers' beliefs about language teaching are also influential and have a huge impact on assessment literacy and practice. In line with this, future research should consider different variables of assessment literacy and investigate lecturers' beliefs at a deeper level. Further research should also investigate students' beliefs, to establish the relationship between teachers' and students' beliefs and the ways in which each could impact and impinge on the other.

References

Al-Sharafi, A. (1998). An investigation of the beliefs and practice of foreign language teachers: A
case study of five American high school foreign language teachers in Leon County: Unpublished
doctoral dissertation. College of Education of Florida State University.
ALTE. (1998). Multilingual glossary of language testing terms. Studies in Language Testing 6.
Cambridge: Cambridge University Press.
Bauch, P. (1984). The impact of teachers’ instructional beliefs on their teaching: Implications for
research and practice. ERIC Digest. ED252954.
Bol, L., Stephenson, P. L., & O’Connell, A. A. (1998). Influence of experience, grade level and
subject area on teachers’ assessment practices. Journal of Educational Research, 91(6), 323–330.
doi: 10.1080/00220679809597562
Borg, S. (1999). Teachers’ theories in grammar teaching. ELT Journal, 53(3), 157–167. doi: 10.1093/
elt/53.3.157
Brookhart, S. (1999). Teaching about Communicating Assessment Results and Grading.
Educational Measurement: Issues and Practice, 18(1), 5–13. doi:  10.1111/j.1745-3992.1999.
tb00002.x
Brown, J. D. (2005). Testing in Language Programs: A Comprehensive Guide to English Language Assessment. McGraw-Hill College.
Buyukkarci, K. (2014). Assessment beliefs and practices of language teachers in primary
education. International Journal of Instruction, 7(1).

Calfee, R. C., & Masuda, W. V. (1997). Classroom assessment as inquiry. In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement. New York: Academic Press.
Chan, Y. C. (2008). Elementary school EFL teachers’ beliefs and practices of multiple assessments.
Reflections on English Language Teaching, 7(1), 37–62.
Cheng, M. (1997). The impacts of teachers’ beliefs on students’ anxiety about foreign language
learning. The proceedings of the Sixth International Symposium on English Teaching (pp.
113–129). Taipei: Crane.
Chew, A. Y. L., & Lee, I. C. H. (2013). Teachers’ beliefs and practices of classroom assessment in
Republic Polytechnic. Singapore.
Ch’ng, L. C., & Rethinasamy, S. (2013). English language assessment in Malaysia: teachers’
practices in test preparation. Issues in Language Studies, 24.
Davies, A. (1999). Dictionary of Language Testing. Cambridge: Cambridge University Press.
Derrich, J., & Ecclestone, K. (2006). Formative assessment in adult literacy, language and numeracy
programmes: a literature review for the OECD. Draft.
Gay, L. R., & Airasian, P. (2000). Educational Research: Competencies for Analysis and Application
(6th ed.). USA: Merrill/Prentice Hall.
Graves, K. (2000). Designing language courses: A guide for teachers. Boston: Heinle & Heinle.
Harris, L., Irving, S., & Peterson, E. (2008, December). Secondary teachers' conceptions of the
purpose of assessment and feedback. Paper presented at the annual conference of the Australian
Association for Research in Education, Brisbane, Australia.
Heaton, J. (1990). Writing English language tests. New York: Longman Inc.
Horwitz, E. (1988). The beliefs about language learning of beginning university foreign language
students. The Modern Language Journal, 72, 283–294. doi: 10.1111/j.1540-4781.1988.tb04190.x
Hsieh, H. (2002). Teachers’ beliefs about English learning: A case study of elementary school
English teachers in Taipei County. Unpublished master thesis. Taipei: National Taipei
Teachers’ College.
Huang, S. L. (1997). The domestical situations and prospection of the study of teachers’ beliefs.
Journal of Humanity and Society of National Chung-Hsing University, 6, 135–152.
Linn, R. L., & Miller, M. D. (2005). Measurement and assessment in teaching (9th ed.). New Jersey:
Pearson Education.
McMillan, J. H. (2001). Essential assessment concepts for teachers and administrators. Thousand
Oaks, CA: Corwin Publishing Company.
Mertler, C. A. (1998). Classroom assessment: Practices of Ohio teachers. Paper presented at the
Annual Meeting of the Mid-Western Educational Research Association, Chicago.
Mohd Sallehhudin, A. A. (2011). Assessment practices of high school teachers in Malaysia. English
Language Assessment, 6(5), 55–74.
Mousavi, S. A. (2002). Encyclopedic dictionary of language testing (3rd Edition). Taipei: Tung Hua
Book Company.

Popham, W. J. (2006). All About Accountability: A Dose of Assessment Literacy. Improving Professional Practice, 63(6), 84–85.
Rahim, S. S. A., Venville, G., & Chapman, A. (2009). Classroom assessment: Juxtaposing teachers’
beliefs with classroom practices. 2009 Australian Association for Research in Education:
International Education Research Conference. Retrieved May 19, 2012 from http://aare.edu.
au/09pap/abd091051.pdf.
Rios, F. (1996). Teacher thinking in cultural contexts. New York: State University of New York Press.
Sanders, J. R., & Vogel, S. R. (1993). The development of standards for teacher competence in
educational assessment of students. In S. L. Wise (Ed.), Teacher training in measurement and
assessment skills, Lincoln, NB: Burros Institute of Mental Measurements.
Stiggins, R. J. (2002). Assessment crisis: the absence of assessment for learning. Online article.
Kappan Professional Journal. Retrieved from http://electronicportfolios.org/afl/Stiggins-
AssessmentCrisis.pdf.
Stiggins, R. J. (2000). Classroom assessment: A history of neglect, a future of immense potential.
Paper presented at the Annual Meeting of the American Educational Research Association.
Stiggins, R. J., & Conklin, N. F. (1992). In teachers’ hands: Investigating the practices of classroom
assessment. Albany, NY: State University of New York Press, Albany.
Suah, S. L., & Ong, S. L. (2012). Investigating assessment practices of in-service teachers. International
Online Journal of Educational Sciences, 4(1), 91–106.
Susuwele-Banda, W. J. (2005). Classroom Assessment in Malawi: Teachers' Perceptions and Practices in Mathematics.
Thomas, M. (2012). Teachers’ beliefs about classroom assessment and their selection of classroom
assessment strategies. Journal of Research and Reflections in Education, 6(2), 103–112.
Threlfall, J. (2005). The formative use of assessment information in planning – the notion of
contingent planning. British Journal of Educational Studies, 53(1), 54–65. doi: 10.1111/j.1467-
8527.2005.00283.x
Trepanier-Street, M. L., McNair, S., & Donegan, M. M. (2001). The views of teachers on assessment:
A comparison of lower and upper elementary teachers. Journal of Research in Childhood
Education, 15(2), 234–241. doi: 10.1080/02568540109594963
Wiggins, G. (1998). Educative Assessment: Designing Assessments to Inform and Improve Student
Performance. San Francisco, California. Jossey-Bass.
Zhang, Z., & Burry-Stock, J. (2003). Classroom assessment practices and teachers’ self-perceived
assessment skills. Applied Measurement in Education, 16(4), 323–342. doi:  10.1207/
S15324818AME1604_4


Assessing the Assessment Literacy of Lecturers

Seyed Ali Rezvani Kalajahi¹, Ain Nadzimah Abdullah²

¹ Universiti Putra Malaysia, Faculty of Modern Languages and Communication, Department of English, 43400 Serdang, Selangor, Malaysia, [email protected]
² Universiti Putra Malaysia, Faculty of Modern Languages and Communication, Department of English, 43400 Serdang, Selangor, Malaysia, [email protected]

Summary

In higher education, accountability systems are important and are usually linked to trust in lecturers' assessment literacy. Lecturers are responsible for 'report cards', which are regarded as the principal indicator of students' progress and learning outcomes. Lecturers' assessment literacy is particularly significant in the study process, because institutions rely on lecturers' assessment of students' subject knowledge and skills. This raises the questions of whether lecturers are provided with sufficient and appropriate knowledge of assessment methods, or whether the development of assessment literacy is left to the lecturer's own responsibility. The study sought to establish the level of lecturers' assessment literacy and to examine the most commonly used assessment methods. A questionnaire survey was used; 65 lecturers of different disciplines working at a Malaysian public university were surveyed. The results revealed that lecturers' assessment literacy is below a satisfactory level, indicating that lecturers have not received specific training developing their assessment literacy as an important part of their professional responsibility in the context of teaching and learning.

Keywords: higher education, assessment practice, assessment literacy, lecturers, assessment methods, learning outcomes.

Įteikta / Received 2016-08-03


Priimta / Accepted 2016-11-14
