L2-Intro To Assessment


ASSESSMENT AND CLASSROOM DECISIONS

Think for a few minutes:

How many ways have you been assessed in your life?


When did your assessment experiences begin?

DISTINCTIONS AMONG ASSESSMENTS, TESTS,
MEASUREMENTS AND EVALUATION.
Assessments are used to gather information about students. They include tests and nontests, which are systematic procedures for describing certain characteristics of students using either classification schemes or numerical scales.

• Classification schemes use psychological theories to assign qualitative labels to students.
• Numerical scales use a process called measurement to assign scores to students.

One or more of these may be combined with a teacher's experience to judge the worth of a student's achievement, using a process called evaluation.
ASSESSMENT
Assessment is a broad term defined as a process for obtaining
information that is used for making decisions about students; curricula,
programs, and schools; and educational policy.
“We are assessing a student’s competence” = “We are
collecting information to help us decide the degree to
which the student has achieved the learning target.”

Assessment information may come from:
• Formal and informal observations
• Paper-and-pencil tests
• Students’ performance on homework
• Lab work
• Research papers
• Projects
• Oral questioning
• Analysis of students’ records
ASSESSMENT
Assessment is a process for obtaining information for making a particular
educational decision. Here is a set of five guiding principles that you
should follow to select and use educational assessments meaningfully.
a) Be clear about the learning target you want to assess.
b) Be sure that the assessment technique you select matches the learning target.
c) Be sure that the selected assessment technique serves the needs of the
learners.
d) Whenever possible, be sure to use multiple indicators of performance
for each learning target.
e) Be sure that when you interpret, or help students interpret, the results of
assessments, you take the limitations of such results into account.

TESTS
A test is defined as an instrument or systematic procedure for observing
and describing one or more characteristics of a student using either a
numerical scale or a classification scheme.

A test is a narrower concept than an assessment. In schools, we usually
think of a test as a paper-and-pencil instrument with a series of questions
that students must answer.

• A test may be called a tool, a set of questions, or an examination used
to measure a particular characteristic of an individual.
• It is a measuring tool used to assess the status of one’s skills,
knowledge, attitudes, and fitness.
MEASUREMENT
Measurement is defined as a procedure for assigning numbers (usually
called scores) to a specified attribute or characteristic of a person in
such a way that the numbers describe the degree to which the person
possesses the attribute.
• A process of collecting data on an attribute of interest.
• It is the collection of information/data in numeric form.
• It is the record of performance that provides the information required
to make a judgment.
• A process that involves the assignment of numerical values to whatever
is being tested.

Example: If you are a better speller than we are, a test that measures our spelling
abilities should result in your score (your measurement) being higher than ours.
EVALUATION
Evaluation is defined as the process of making a value judgment about
the worth of a student’s product or performance.

• Evaluations are the bases for decisions about what course of action to
follow.
• Evaluation may be based on counting things, using checklists, or using
rating scales.

For example: You may judge a student’s writing as exceptionally good for his grade
placement. This evaluation may lead you to encourage the student to enter a national
essay competition.
VALIDITY OF ASSESSMENT RESULTS
Validity is the soundness of your interpretations and uses of students’
assessment results.

To validate your interpretations and uses of students’ assessment results,
you must combine evidence from a variety of sources demonstrating that
these interpretations and uses are appropriate.

“Are these assessment results valid?”

For example: Suppose your school administers the ABC Reading Test and
wishes to use the scores for one or more of the following purposes:
• To describe students’ growth in reading comprehension.
• To place students into high, middle, and low reading groups.
• To evaluate the school’s reading program.
VALIDITY OF ASSESSMENT RESULTS
Validity also refers to the accuracy of inferences drawn from an
assessment.

It is the degree to which the assessment measures what it is intended to
measure.

For example:
If your true weight was 180 pounds and the scale reported that you
weighed 180 pounds, that would be a valid measure of your weight.

In the world of educational assessment, if you wanted to measure a
student’s ability to do long division, and the assessment actually
measured the student’s ability to do long division (and not their ability
to read word problems or speak English), that would be a valid measure.
VALIDITY OF ASSESSMENT RESULTS
VALIDITY OF TEACHER-MADE CLASSROOM ASSESSMENT RESULTS
• Content representativeness and relevance
• Thinking processes and skills represented
• Consistency with other classroom assessments
• Reliability and objectivity
• Fairness to different types of students

REMEMBER!

An assessment that is highly reliable is not necessarily valid. However,
for an assessment to be valid, it must also be reliable.

RELIABILITY OF ASSESSMENT RESULTS
Reliability is:
• the degree to which students’ results remain consistent over replications of an
assessment procedure.

• the confidence that can be placed in an instrument to yield the same score for
the same student if the test were administered more than once, and the degree
to which a skill or trait is measured consistently across the items of a test.

• a necessary but not sufficient condition for validity.

For example: Your interpretations and decisions are less valid when your students’
assessment results are inconsistent; an assessment result’s degree of reliability
limits its degree of validity.
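To make “consistency over replications” concrete, here is a minimal sketch (not from the slides, using made-up scores) that treats two administrations of the same test as replications and reports the Pearson correlation between them as one common consistency index.

```python
# Hypothetical illustration: if the same students take the same test twice,
# consistent results should show up as a strong positive correlation
# between the two sets of scores.
from statistics import correlation  # available in Python 3.10+

first_administration  = [72, 85, 90, 64, 78, 88]   # made-up scores, attempt 1
second_administration = [70, 87, 92, 61, 80, 85]   # made-up scores, attempt 2

r = correlation(first_administration, second_administration)
print(f"Test-retest consistency (Pearson r): {r:.2f}")  # near 1.0 = consistent
```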

RELIABILITY OF ASSESSMENT RESULTS
For classroom teachers, the key to reliability is understanding how to decide
what sort of consistency is important for different assessment purposes
(Parkes & Giron, 2006).

RELIABILITY OF ASSESSMENT RESULTS

1) Objective tests
For objective tests, students’ performance should be consistent from
item to item. Tests should have enough items for that consistency to
show itself.

For example:
Suppose you have only one question on a particular mathematics topic and
the student gets it right. How confident would you be in generalizing that
he has “100% mastery” of this area? Would you be more confident if
he got two items in this area correct?
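As an aside (not part of the slides), item-to-item consistency can be quantified. The sketch below uses hypothetical right/wrong responses and computes KR-20, a standard internal-consistency coefficient for dichotomously scored objective tests.

```python
# Hypothetical illustration: 1 = item answered correctly, 0 = incorrectly.
# Rows are students, columns are items on an objective test.
from statistics import pvariance

responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 0, 0, 1],
]

k = len(responses[0])                                       # number of items
totals = [sum(student) for student in responses]            # each student's total score
p = [sum(col) / len(responses) for col in zip(*responses)]  # proportion correct per item
pq_sum = sum(pi * (1 - pi) for pi in p)

# KR-20: (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)
kr20 = (k / (k - 1)) * (1 - pq_sum / pvariance(totals))
print(f"KR-20 internal consistency: {kr20:.2f}")
```

Values closer to 1 indicate that students who do well on some items tend to do well on the others.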

RELIABILITY OF ASSESSMENT RESULTS

2) Essays, papers, and projects scored with rubrics

For tasks with multiple-point items, or tasks scored with rubrics or grades,
the major reliability concern is accuracy of judgment. It is good practice to
compare your grade assignments with those of another rater.

There are several ways to help make your judgment as accurate as
possible. Ensure that your directions are clear enough that all students are
likely to produce work you are able to score. For example:
• Use systematic rubrics or scoring guides.
• Score work without looking at students’ names.
• Score one question/essay/assignment for all students before moving to the next
(focus on one scoring scheme at a time).
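One hypothetical way to check accuracy of judgment (an assumption, not a procedure named in the slides) is to have a second rater score the same essays with the same rubric and count how often the two sets of scores agree exactly or within one point. The sketch below assumes made-up rubric scores on a 0-4 scale.

```python
# Hypothetical rubric scores (0-4) given by two raters to the same eight essays.
rater_a = [3, 4, 2, 3, 1, 4, 2, 3]
rater_b = [3, 4, 3, 3, 1, 3, 2, 3]

exact    = sum(a == b for a, b in zip(rater_a, rater_b))          # identical scores
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) # within one point

n = len(rater_a)
print(f"Exact agreement:  {exact}/{n} = {exact / n:.0%}")
print(f"Within one point: {adjacent}/{n} = {adjacent / n:.0%}")
```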

RELIABILITY OF ASSESSMENT RESULTS

3) Make-up work
For make-up work, the reliability concern is consistency across occasions
and sometimes across forms. If a student is making up work missed because
of absence, use a procedure that ensures equivalence.

For example:
Make sure students know not to tell their absent friends what questions are
on a test, or use another form of the same test or assignment.

RELIABILITY OF ASSESSMENT RESULTS

4) Oral questioning and observations

Reliability concerns for oral questioning and for observations of students
include the dependability of your interpretations and the accuracy of
your judgment. Use a sufficient number of questions and observations.

For example:
If you ask a student only one question about a chapter she read, how
sure are you that you can interpret a correct answer to mean that she
read and understood the whole chapter?
• Allow enough time for students to answer oral questions, or to do whatever you are
observing.

RELIABILITY OF ASSESSMENT RESULTS

5) Peer editing, group collaboration ratings, and other peer evaluation
techniques
For peer evaluation, the most important reliability concern is accuracy of
judgment. The clearer the directions or rubrics are, the more likely students
are to use them in the same way.

For example:
You may engage group members in rating one another’s contributions to
small-group work. In a group of four, you would have three ratings of each
student, which should more or less agree if they were all rating the same work.
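Continuing the slide’s group-of-four example with hypothetical numbers, the sketch below summarizes each student’s three peer ratings; a small spread suggests the raters are judging the same work consistently, while a large spread flags a reliability problem.

```python
from statistics import mean, stdev

# Hypothetical peer ratings (1-5) of each member's contribution,
# given by the other three members of a four-person group.
peer_ratings = {
    "Student A": [4, 5, 4],
    "Student B": [3, 3, 2],
    "Student C": [5, 5, 5],
    "Student D": [2, 4, 5],  # large spread: the raters disagree here
}

for student, ratings in peer_ratings.items():
    print(f"{student}: mean = {mean(ratings):.1f}, spread (SD) = {stdev(ratings):.2f}")
```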

RELIABILITY OF ASSESSMENT RESULTS

6) Self-evaluation
For self-evaluation, the most important reliability concern is accuracy of
self-judgment. Give students plenty of opportunity to practice.

You also need to create a classroom environment where it is safe for a
student to describe his or her needs for improvement, and where mistakes
are interpreted as opportunities to learn rather than causes for penalty.

THANK YOU

