Educ 103, 104, 106 - Assessment in Learning 1


ASSESSMENT IN

LEARNING 1
CLYDE C. BELMES, LPT
CLASSROOM RULES
He/she is marked FAILED in the subject if he/she is absent for 7 days.

Three (3) instances of tardiness are equivalent to one (1) absence.

There is a 15-minute grace period (LATE); if he/she arrives after the prescribed time, he/she will be marked ABSENT.

Strictly adhere to the GIVEN submission date; NO late submissions unless otherwise stated.
GRADING SYSTEM
40% - MAJOR EXAMINATION
30% - QUIZZES
20% - SEATWORK, ASSIGNMENT,
PROJECT
10% - ACTIVE PARTICIPATION,
GRADED RECITATION
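
To see how these weights combine into a single grade, here is a minimal Python sketch; the component scores and the helper name are hypothetical illustrations, not part of the syllabus.

# Minimal sketch of the weighted grading system above.
# The weights come from the grading system; the sample scores are hypothetical.
WEIGHTS = {
    "major_examination": 0.40,
    "quizzes": 0.30,
    "seatwork_assignment_project": 0.20,
    "participation_recitation": 0.10,
}

def final_grade(scores):
    """Compute the weighted final grade from component averages on a 0-100 scale."""
    return sum(WEIGHTS[component] * score for component, score in scores.items())

sample_scores = {
    "major_examination": 85,
    "quizzes": 90,
    "seatwork_assignment_project": 88,
    "participation_recitation": 95,
}
print(final_grade(sample_scores))  # 0.40*85 + 0.30*90 + 0.20*88 + 0.10*95 = 88.1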
WHAT IS ASSESSMENT IN
LEARNING?
ASSESSMENT is rooted in the Latin word
assidere, which means "to sit beside
another."

Assessment is generally defined as the process of gathering quantitative and/or qualitative data for the purpose of making decisions.
ASSESSMENT IN LEARNING can be defined as the systematic and purpose-oriented collection, analysis, and interpretation of evidence of student learning in order to make informed decisions relevant to the learners.

Assessment in learning can be characterized as
(a) a process,
MEASUREMENT can be defined as the process of quantifying the attributes of an object.

EVALUATION may refer to the process of making value judgments on the information collected from measurement based on specified criteria.
In the context of assessment in learning,

MEASUREMENT refers to the actual collection of information on student learning through the use of various strategies and tools.

EVALUATION refers to the actual process of making a decision or judgment on student learning based on the information collected from measurement.
ASSESSMENT AND
TESTING
The most common form of assessment is TESTING.

Testing refers to the use of a test or battery of tests to collect information on student learning over a specified period of time.
TEST CATEGORY

SELECTED RESPONSE (e.g. Matching-type of test)


CONSTRUCTED RESPONSE (e.g. Essay test, Short answer test)

OBJECTIVE FORMAT (e.g. Multiple choice, Enumeration)


SUBJECTIVE FORMAT (e.g. Essay)

A TABLE OF SPECIFICATIONS (TOS) - a table that


maps out the essential aspects of a test (e.g. test
objectives, contents, topics covered by the test,
item distribution) - is used in the design and
development of a test.
ASSESSMENT AND GRADING
GRADING is a process of assigning value to the performance or achievement of a learner based on specified criteria or standards.
Aside from tests, other classroom tasks can serve as bases for grading learners. These may include a learner's performance in recitation, seatwork, homework, and projects. The final grade of a learner in a subject or course is the summation of information from multiple sources (i.e., several assessment tasks or requirements).
ASSESSMENT AND
GRADING
Grading is a form of evaluation which provides
information on whether a learner passed or failed a
subject or a particular assessment task. Teachers are
expected to be competent in providing performance
feedback and communicating the results of
assessment tasks or activities to relevant
stakeholders.
Descriptive Statistics are typically used to
describe and interpret the results of tests.

A test should be VALID and RELIABLE, have an acceptable level of difficulty, and be able to discriminate between learners with higher and lower ability.
There are two main types:
1. Measures of Central Tendency - the "average" or most typical value in the data.
⚬ Mean: the average value.
⚬ Median: the middle value when the data is arranged in order.
⚬ Mode: the most frequent value.
2. Measures of Spread - how spread out the data is.
⚬ Range: the difference between the highest and lowest values.
⚬ Variance and Standard Deviation: these show how much the values vary from the mean.
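
As a quick illustration, the statistics named above can be computed with Python's standard library; the test scores below are hypothetical.

# Sketch: descriptive statistics for a hypothetical set of test scores.
import statistics

scores = [12, 15, 15, 18, 20, 22, 25]

print("Mean:", statistics.mean(scores))          # average value
print("Median:", statistics.median(scores))      # middle value when ordered
print("Mode:", statistics.mode(scores))          # most frequent value
print("Range:", max(scores) - min(scores))       # highest minus lowest
print("Variance:", statistics.variance(scores))  # average squared deviation from the mean (sample)
print("Std dev:", statistics.stdev(scores))      # square root of the variance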
DIFFERENT MEASUREMENT
FRAMEWORKS USED IN
ASSESSMENT
Psychometric Theories

• CLASSICAL TEST THEORY (CTT)
• ITEM RESPONSE THEORY (IRT)
• The Classical Test Theory (CTT), also known as the true score theory, explains that variations in the performance of examinees on a given measure are due to variations in their abilities.

• CTT assumes that all measures are imperfect, and the scores obtained from a measure could differ from the true score (i.e., true ability) of an examinee.

• CTT focuses on overall test scores and is simpler but less detailed.
• The Item Response Theory (IRT), on the other hand, analyzes test items by estimating the probability that an examinee answers an item correctly or incorrectly.

• It considers that different questions might have different levels of difficulty and that each person's probability of answering correctly depends on their ability level.

• IRT analyzes individual questions and gives a more nuanced understanding of both the test and the test-taker's abilities.
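
To make the CTT/IRT contrast concrete, the sketch below estimates the probability of a correct response under a one-parameter (Rasch-type) IRT model; the ability and difficulty values are hypothetical, and the slides do not prescribe this particular model.

# Sketch: probability of a correct answer under a one-parameter (Rasch) IRT model.
# P(correct) = 1 / (1 + exp(-(ability - difficulty))); values below are hypothetical.
import math

def p_correct(ability, difficulty):
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

print(p_correct(ability=0.0, difficulty=0.0))  # 0.50: ability matches item difficulty
print(p_correct(ability=1.0, difficulty=0.0))  # ~0.73: higher ability raises the probability
print(p_correct(ability=0.0, difficulty=1.5))  # ~0.18: a harder item lowers the probability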
DIFFERENT TYPES OF
ASSESSMENT IN LEARNING

1. FORMATIVE ASSESSMENT
2. SUMMATIVE ASSESSMENT
3. DIAGNOSTIC ASSESSMENT
4. PLACEMENT ASSESSMENT
5. TRADITIONAL ASSESSMENT
6. AUTHENTIC ASSESSMENT
SUMMATIVE ASSESSMENT

• aims to determine learners' mastery of content or attainment of learning outcomes. These assessments are summative, as they are supposed to provide information on the quantity or quality of what students have learned or achieved at the end of instruction.
• To evaluate student learning at the end of an
instructional period by comparing it against a
standard or benchmark.
FORMATIVE ASSESSMENT

• refers to assessment activities that provide information to both teachers and learners on how they can improve the teaching-learning process.
• it is used at the beginning of and during instruction for teachers to assess learners' understanding.
• To monitor student learning and provide ongoing
feedback that can be used to improve teaching
and learning.
DIAGNOSTIC ASSESSMENT

• detects the learning problems or difficulties of the learners so that corrective measures or interventions can be done to ensure learning.
• is usually done right after seeing signs of learning problems in the course of teaching.
• It can also be done at the beginning of the school year for a spirally-designed curriculum so that corrective actions are applied if pre-requisite knowledge and skills for the targets of instruction have not been mastered yet.
• assess students' prior knowledge and skills to identify
strengths, weaknesses, and learning gaps before instruction
PLACEMENT ASSESSMENT

• usually done at the beginning of the school year to determine what the learners already know or what their needs are, which could inform the design of instruction.
• Grouping of learners based on the results of placement
assessment is usually done before instruction to make it
relevant to address the needs or accommodate the entry
performance of the learners.
• The entrance examination given in schools is an example of
a placement assessment.
• determine the appropriate level or course for a student
based on their current abilities and knowledge.
TRADITIONAL ASSESSMENT

• use of conventional strategies or tools to provide information about the learning of students. Typically, objective (e.g., multiple choice) and subjective (e.g., essay) paper-and-pencil tests are used.
• To measure student knowledge through
conventional methods, often using
standardized tools like tests or quizzes.
AUTHENTIC ASSESSMENT

• refers to the use of assessment strategies or tools that allow learners to perform or create products that are meaningful to the learners, as they are based on real-world contexts.
• The authenticity of assessment tasks is best described in
terms of degree rather than the presence or absence of
authenticity. Hence, an assessment can be more authentic
or less authentic compared with other assessments.
• The most authentic assessments are those that allow
performances that most closely resemble real-world tasks or
applications in real-world settings or environments.
PRINCIPLES IN
ASSESSING LEARNING
1. Assessment should have a clear purpose. This
assessment principle is congruent with the Outcome-
Based Education (OBE) principles of clarity of focus
and design down.

2. Assessment is not an end in itself. Assessment


serves as a means to enhance student learning. It is
not a simple recording or documentation of what
learners know and do not know.
PRINCIPLES IN ASSESSING
LEARNING

3. Assessment is an ongoing, continuous, and formative process. Assessment consists of a series of tasks and activities conducted over time. It is not a one-shot activity and should be cumulative.

4. Assessment is learner-centered. Assessment is not about what the teacher does but what the learner can do.
PRINCIPLES IN ASSESSING
LEARNING

5. Assessment is both process- and product-oriented. Assessment gives equal importance to learner performance or product and the process they engage in to perform or produce a product.

6. Assessment must be comprehensive and holistic. Assessment should be performed using a variety of strategies and tools designed to assess student learning in a holistic way.
PRINCIPLES IN ASSESSING
LEARNING

7. Assessment requires the use of appropriate measures. Appropriate measures also mean that learners must be provided with challenging but age- and context-appropriate assessment tasks.

8. Assessment should be as authentic as possible. Assessment tasks or activities should closely, if not fully, approximate real-life situations or experiences.
LESSON II: ASSESSMENT
PURPOSES, LEARNING
TARGETS, AND
APPROPRIATE METHODS
CLASSIFICATION OF THE PURPOSE OF CLASSROOM ASSESSMENT

1.Assessment OF Learning
2.Assessment FOR Learning
3.Assessment AS Learning
ASSESSMENT OF LEARNING
• refers to the use of assessment to determine learners' acquired knowledge and skills from instruction and whether they were able to achieve the curriculum outcomes.

• Measures what students have learned.

• This is summative assessment, which typically happens at the end of a learning period (like a unit, semester, or course). It evaluates what students have learned and how well they met the learning outcomes.
ASSESSMENT FOR LEARNING
• refers to the use of assessment to identify the needs of learners in order to modify instruction or learning activities in the classroom. It is formative in nature and is meant to identify gaps in the learning experiences of learners so that they can be assisted in achieving the curriculum outcomes.

• This is formative assessment, which occurs during the learning process. It helps teachers and students identify areas of strength and areas needing improvement.
ASSESSMENT AS LEARNING
• refers to the use of assessment to help learners become self-regulated. It is formative in nature and meant to use assessment tasks, results, and feedback to help learners practice self-regulation and make adjustments to achieve the curriculum outcomes.

• This involves students actively participating in the assessment process. They reflect on their own learning, identify their strengths and weaknesses, and set personal learning goals.

• encourages students to take responsibility for their own learning.


ROLES OF CLASSROOM ASSESSMENT IN THE TEACHING-LEARNING PROCESS
• Formative
• Diagnostic
• Evaluative
• Facilitative
• Motivational
• Formative. Teachers conduct
assessment because they want to
acquire information on the current
status and level of learners' knowledge
and skills or competencies.

• Diagnostic. Teachers can use assessment to identify specific learners' weaknesses or difficulties that may affect their achievement of the intended learning outcomes.
• Evaluative. Teachers conduct
assessment to measure learners'
performance or achievement for the
purposes of making judgment or grading
in particular. The learners' placement or
promotion to the next educational level
is informed by the assessment results.
• Facilitative. Classroom assessment may
affect student learning. On the part of
teachers, assessment for learning provides
information on students' learning and
achievement that teachers can use to
improve instruction and the learning
experiences of learners. On the part of
learners, assessment as learning allows
them to monitor, evaluate, and improve
their own learning strategies. In both cases,
student learning is facilitated.
• Motivational. Classroom assessment can
serve as a mechanism for learners to be
motivated and engaged in learning and
achievement in the classroom.

Grades, for instance, can motivate and


demotivate learners. Focusing on progress, providing effective feedback, innovating assessment tasks, and using scaffolding
during assessment activities provide
opportunities for assessment to be motivating
rather than demotivating.
What are learning targets?

1. Goals. Goals are general statements about desired learner outcomes in a given year or during the duration of a program (e.g., senior high school).

2. Standards. Standards are specific statements about what learners should know and are capable of doing at a particular grade level, subject, or course. McMillan (2014, p. 31) described four different types of educational standards:
(1) content (desired outcomes in a content area),
(2) performance (what students do to demonstrate competence),
(3) developmental (sequence of growth and change over time), and
(4) grade-level (outcomes for a specific grade).
3. Educational Objectives. Educational objectives are specific statements of learner performance at the end of an instructional unit. These are sometimes referred to as behavioral objectives and are typically stated with the use of verbs. The most popular taxonomy of educational objectives is Bloom's Taxonomy of Educational Objectives.
The Bloom's Taxonomy of
Educational Objectives
Bloom's Taxonomy consists of three domains: cognitive, affective, and psychomotor. These three domains correspond to the three types of goals that teachers want to assess:

knowledge-based goals (cognitive),
skills-based goals (psychomotor), and
affective goals (affective).
The most popular among the three
taxonomies is the Bloom's Taxonomy of
Educational Objectives in the Cognitive
Domain, also known as Bloom's Taxonomy
of Educational Objectives for Knowledge-
Based Goals. The taxonomy describes six
levels of expertise: knowledge,
comprehension, application, analysis,
synthesis, and evaluation.
Bloom's Taxonomy of Educational
Objectives in the Cognitive
Domain
The Revised Bloom's Taxonomy
of Educational Objectives
Anderson and Krathwohl proposed a
revision of the Bloom's Taxonomy in the
cognitive domain by introducing a two-
dimensional model for writing learning
objectives (Anderson and Krathwohl, 2001).
The first dimension, the knowledge dimension,
includes four types: factual, conceptual,
procedural, and metacognitive.
The second dimension, cognitive process
dimension, consists of six types: remember,
understand, apply, analyze, evaluate, and
create. An educational or learning objective
formulated from this two-dimensional
model contains a noun (type of knowledge) and a verb (type of cognitive process). The
Revised Bloom's Taxonomy provides
teachers with a more structured and more
precise approach in designing and
assessing learning objectives.
Cognitive Process Dimensions in the Revised Bloom's
Taxonomy of Educational Objectives
When revising Bloom’s Taxonomy in 2001, Anderson and
Krathwohl also added the knowledge dimension to the
taxonomy. The knowledge dimension consists of four
dimensions, which are:
• Factual knowledge (basic elements to learn or solve
problems in the discipline) who, where, what, when
• Conceptual knowledge (interrelationships between
basic elements within a larger context) what
• Procedural knowledge (methods in the discipline) how
• Metacognitive knowledge (awareness of how learning works in relation to one's self) why, how
Learning Targets

A statement of student performance for a relatively restricted type of learning outcome that will be achieved in a single lesson or a few days.

Learning targets should be congruent with the standards prescribed by the program or level and aligned with the instructional or learning objectives of a subject or course.
McMillan (2014, p. 53) proposed five criteria for selecting learning targets:

1. Establish the right number of learning targets,
2. Establish comprehensive learning targets,
3. Establish learning targets that reflect school goals and 21st century skills,
4. Establish learning targets that are challenging yet feasible, and
5. Establish learning targets that are consistent with current principles of learning and motivation.
Learning target types include knowledge, reasoning, skill, product, and disposition targets. Knowing the appropriate learning target type will help the educator to effectively plan for instruction with aligned learning outcomes and assessment methods.
• Knowledge Targets
Refers to the factual, conceptual, and
procedural information that learners must
learn in a subject or content area.

• Reasoning Targets
Knowledge-based thought processes that
learners must learn. It involves application
of knowledge in problem-solving, decision-
making, and other tasks that require mental
skills.
• Skills Targets
Use of knowledge and/or reasoning to
perform or demonstrate physical skills.

• Product Targets
Use of knowledge, reasoning, and skills
in creating a concrete or tangible
product.
APPROPRIATE
METHODS OF
ASSESSMENT
LESSON III: DIFFERENT CLASSIFICATIONS OF ASSESSMENT
WHEN DO WE USE EDUCATIONAL
AND PSYCHOLOGICAL
ASSESSMENT?
1. Educational assessments

• are used in the school setting for the purpose of tracking the growth of learners and grading their performance.

• They come in the form of formative and summative assessment.
Formative Assessment

• is a continuous process of gathering information about student learning at the beginning, during, and after instruction so that teachers can decide how to improve their instruction until learners are able to meet the learning targets.
• it is used to track and monitor student learning and their progress toward the real learning target.
• it comes in the form of paper-and-pencil tests and performance-based tests. Before instruction begins, formative assessment serves as a diagnostic tool to determine whether learners already know about the learning target.
• Formative assessments given at the start of the lesson determine the following:
1. What learners know and do not know, so that instruction can supplement what learners do not know
2. Misconceptions of learners, so that they can be corrected
3. Confusions of learners, so that they can be clarified
4. What learners can and cannot do, so that enough practice can be given to perform the task
Summative Assessment

• is used to determine and record what the learners have learned.

• This comes in the form of a periodic test, weekly test, unit or chapter test, and the like.
2. Psychological Assessments

• Are measures that determine the learner's cognitive and non-cognitive characteristics.
• Examples of cognitive tests are those that measure ability, aptitude, intelligence, and critical thinking.
• Affective measures are for personality, motivation, attitude, interest, and disposition.
• The results of these assessments are used by the school's guidance counselor to perform interventions on the learner's academic, career, and social and emotional development.
Paper-and-pencil

It requires a single correct answer in the form of:

• binary (true or false)
• short answer (identification)
• matching type
• multiple choice

• it usually pertains to a specific cognitive skill such as recalling, understanding, applying, analyzing, evaluating, and creating.
Performance-based
it requires learners to perform tasks such as :

• demonstrations
• show strategies
• present information
• creating a word problem
• dance and song performance
• playing a musical instrument
• writing essay
• reporting in front of the class
• reciting a poem
• demonstrating how a problem was solved
• reporting the results of the experiment
• painting and drawing
How do we distinguish teacher-made from standardized tests?

Standardized Tests
• Have fixed directions for administering and scoring.
• Can be purchased with test manuals, booklets, and answer sheets.
• When the tests are developed, the items are sampled on a large number of target groups called norms.
• The norm group's performance is used to compare the results of those who took the test.
Non-standardized or teacher-made tests

• Usually intended for classroom


assessment
• They are used for classroom purposes,
such as determining whether learners
have reached the learning targets
• Intended to measure behavior (such as
learning) in line with the objectives of
the course
Examples: quizzes, long tests, and exams.
What information is sought from achievement and aptitude tests?

Achievement Test
• Measures what learners have learned after instruction or after going through a specific curricular program.
• It provides information on what learners can do and have acquired after training and instruction.
• It is a measure of accomplished skills and indicates what a person can do at present (Atkinson, 1995).
• Achievement can be reflected in the final grades of learners within a quarter.
• A quarterly test is composed of several learning targets and is a good way of determining the achievement of learners.
Aptitude Tests
• According to Logman (2005), aptitudes are the characteristics that influence a person's behavior that aid goal attainment in a particular situation.
• Aptitude refers to the degree of readiness to learn and perform well in a particular situation or domain (Corno et al., 2002).

Assessment of aptitude can go beyond cognitive abilities, such as:

Cognitive Abilities Measurement:
• measures working memory capacity,
• ability to store old information and process new information,
• speed of an individual in retrieving and processing new information, etc.
How do we differentiate speed from power tests?

Speed Test
A speed test consists of easy items that need to be completed within a time limit.
Example: a typing test in which examinees are required to correctly type as many words as possible within a given, limited amount of time.

Power Test
A power test consists of items with an increasing level of difficulty, but time is sufficient to complete the whole test.
Example: a test that determines the ability of the examinees to utilize data to reason and become creative, formulate, solve, and reflect critically on problems.
Criterion-referenced Tests

A criterion-referenced test has a set of standards, and the scores are compared to a given criterion.

Example: in a 50-item test, 40-50 is very high, 30-39 is high, 20-29 is average, 10-19 is low, and 0-9 is very low.

One approach in criterion-referenced interpretation is that the score is compared to a specific cutoff. An example is the grading in schools where the range of grades 96-100 is highly proficient, 90-95 is proficient, 80-89 is nearly proficient, and below 80 is beginning.
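
A minimal sketch of criterion-referenced interpretation using the school grading cutoffs mentioned above; the function name and sample grades are illustrative.

# Sketch: mapping a grade to a proficiency level using the cutoffs cited above
# (96-100 highly proficient, 90-95 proficient, 80-89 nearly proficient, below 80 beginning).
def proficiency_level(grade):
    if grade >= 96:
        return "Highly Proficient"
    elif grade >= 90:
        return "Proficient"
    elif grade >= 80:
        return "Nearly Proficient"
    else:
        return "Beginning"

for grade in (98, 92, 84, 75):  # hypothetical grades
    print(grade, proficiency_level(grade))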
Norm-referenced test
• Norm-referenced test interprets results using
the distribution of scores of a sample group.
• The mean and standard deviations are
computed for the group.
• Standardized tests usually interpret scores
using a norm set from a large sample.
• Having an established norm for a test means obtaining the normal or average performance in the distribution of scores. A normal distribution is obtained by increasing the sample size.
• A norm is a standard and is based on a very large group of samples. Norms are reported in the manuals of standardized tests.
• A normal distribution takes the shape of the bell curve, where it shows the number of people within a range of scores.
• It also reports the percentage of people with particular scores.
• The norm is used to convert a raw score into standard scores for interpretability.
• What is the use of a norm?
1. A norm is the basis of interpreting a test score.
2. A norm can be used to interpret a particular score relative to the norm group.
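
One common way a norm is used to convert a raw score into a standard score is the z-score (raw score minus the norm-group mean, divided by the standard deviation); the sketch below uses hypothetical norm values, since the slides do not specify a particular norm table.

# Sketch: converting a raw score into standard scores using hypothetical norm values.
# z = (raw - mean) / SD; the T-score (50 + 10z) is one common derived scale.
def z_score(raw, norm_mean, norm_sd):
    return (raw - norm_mean) / norm_sd

def t_score(raw, norm_mean, norm_sd):
    return 50 + 10 * z_score(raw, norm_mean, norm_sd)

# Hypothetical norm: mean = 40, SD = 8 on a standardized test
print(z_score(48, 40, 8))  # 1.0 -> one standard deviation above the norm-group mean
print(t_score(48, 40, 8))  # 60.0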
PLANNING A
WRITTEN
TEST
LEARNING OUTCOMES:

• Set appropriate instructional


objectives for a written test
and,

• Prepare a table of specifications


for a written test.
Why do you need to define the test
objectives or learning outcomes targeted
for assessment?
• In defining a well-planned written test, first and foremost, you should be able to identify the intended learning outcomes in a course, where a written test is an appropriate method to use.

• They provide teachers the focus and direction on how the course is to be handled, particularly in terms of course content, instruction, and assessment.

• On the other hand, they provide the students with the reasons and motivation to learn.
The TABLE OF SPECIFICATIONS (TOS), sometimes called a test blueprint, is a tool used by teachers to design a test. It covers the:

• Test objectives
• Contents/Topics
• Level of cognitive behavior
• Distribution of test items (number and placement)
• Weights of test items
• Test format

Teachers need to create a TOS for every test that they intend to develop.
The test TOS is important because it
does the following:

•Ensures that the instructional objectives


and what the test captures match,

•Ensures that the test developer will not


overlook details that are considered
essential to a good test,
•Makes developing a test easier and more
efficient

•Ensures that the test will sample all-


important content areas and processes,

•Is useful in planning and organizing,

•Offers an opportunity for teachers and students to clarify achievement expectations.
What are the general steps
in developing a table of
specifications?
1. Determine the objectives of the
test.

Cognitive objectives are designed to increase an individual's knowledge, understanding, and awareness. On the other hand, affective objectives aim to change an individual's attitude into something desirable, while psychomotor objectives aim to develop an individual's motor or physical skills.
2. Determine the coverage of the
test.

The next step in creating the TOS is to


determine the contents of the test. Only
topics or contents that have been
discussed in class and are relevant
should be included in the test
3. Calculate the weight for each
topic.

Once the test coverage is determined, the weight


of each topic covered in the test is determined.
The weight assigned per topic in the test is based
on the relevance and the time spent to cover
each topic during instruction. The percentage of
time for a topic in a test is determined by dividing
the time spent on that topic during instruction by
the total amount of time spent on all topics
TOPIC | NO. OF SESSIONS | TIME SPENT | PERCENT OF TIME (WEIGHT)
Theories and Concepts | .5 class session | 30 min | 10.0
Psychoanalytic Theories | 1.5 class sessions | 90 min | 30.0
Trait Theories | 1 class session | 60 min | 20.0
Humanistic Theories | .5 class session | 30 min | 10.0
Cognitive Theories | .5 class session | 30 min | 10.0
Behavioral Theories | .5 class session | 30 min | 10.0
Social Learning Theories | .5 class session | 30 min | 10.0
TOTAL | 5 class sessions | 300 min or 5 hours | 100
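
The weights in the table can be reproduced with the division described in Step 3 (time spent on a topic divided by the total time); a short sketch using the session times above:

# Sketch: each topic's weight = (time spent on topic) / (total time), per Step 3.
# Times in minutes are taken from the sample table above.
time_spent = {
    "Theories and Concepts": 30,
    "Psychoanalytic Theories": 90,
    "Trait Theories": 60,
    "Humanistic Theories": 30,
    "Cognitive Theories": 30,
    "Behavioral Theories": 30,
    "Social Learning Theories": 30,
}

total = sum(time_spent.values())  # 300 minutes
for topic, minutes in time_spent.items():
    print(topic, round(minutes / total * 100, 1), "%")  # e.g., Psychoanalytic Theories 30.0 %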
4. Determine the number of items
for the whole test.
• To determine the number of items to be included in the test, the amount of time needed to answer the items is considered.

• Students are given 30-60 seconds for each item in test formats with choices.
5. Determine the number of items
per topic.

To determine the number of items to be


included in the test, the weights per
topic are considered.
TOPIC | PERCENT OF TIME (WEIGHT) | NO. OF ITEMS
Theories and Concepts | 10 | 5
Psychoanalytic Theories | 30 | 15
Trait Theories | 20 | 10
Humanistic Theories | 10 | 5
Cognitive Theories | 10 | 5
Behavioral Theories | 10 | 5
Social Learning Theories | 10 | 5
TOTAL | 100 | 50 ITEMS
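
Steps 4 and 5 can be combined in a short sketch: items are distributed according to each topic's weight, and the implied testing time is checked against the 30-60 seconds-per-item guideline. The 45-second midpoint used below is an assumption for illustration.

# Sketch: distributing 50 items by topic weight (Step 5) and checking the testing
# time implied by Step 4's 30-60 seconds-per-item guideline.
weights = {  # percent of time, from the table above
    "Theories and Concepts": 10,
    "Psychoanalytic Theories": 30,
    "Trait Theories": 20,
    "Humanistic Theories": 10,
    "Cognitive Theories": 10,
    "Behavioral Theories": 10,
    "Social Learning Theories": 10,
}

total_items = 50
for topic, weight in weights.items():
    print(topic, round(total_items * weight / 100), "items")  # e.g., Psychoanalytic Theories 15 items

print(total_items * 45 / 60, "minutes")  # 50 items x 45 s each = 37.5 minutes of testing time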
DIFFERENT
FORMATS OF A
TEST TABLE OF
SPECIFICATIONS
1. One-Way TOS

A one-way TOS maps out the content or topic, test objectives, number of hours spent, and the format, number, and placement of items. This type of TOS is easy to develop and use because it just works around the objectives without considering the different levels of cognitive behavior. However, a one-way TOS cannot ensure that all levels of cognitive behavior that should have been developed by the course are assessed.
TOPIC | TEST OBJECTIVES | NO. OF HOURS SPENT | FORMAT AND PLACEMENT OF ITEMS | NO. AND PERCENT OF ITEMS
Theories and Concepts | Recognize important concepts in personality theories | .5 | Multiple Choice, Item #s 1-5 | 5 (10.0%)
Psychoanalytic Theories | Identify the different theories of personality under the psychoanalytic model | 1.5 | Multiple Choice, Item #s 6-20 | 15 (30.0%)
2. Two-Way TOS
A two-way TOS reflects not only the content, time
spent, and number of items but also the levels of
cognitive behavior targeted per test content based on
the theory behind cognitive testing. For example, the
common framework for testing at present in the
DepEd Classroom Assessment Policy is the Revised
Bloom's Taxonomy (DepEd, 2015). One advantage of
this format is that it allows one to see the levels of
cognitive skills and dimensions of knowledge that are
emphasized by the test. It also shows the framework
of assessment used in the development of the test.
[Sample two-way TOS table. Columns: CONTENT; TIME SPENT; NO. & PERCENT OF ITEMS; KD* (knowledge dimension); LEVEL OF COGNITIVE BEHAVIOR (R, U, AP, AN, E, C); ITEM FORMAT, NO. AND PLACEMENT OF ITEMS. Sample rows: Theories and Concepts - 0.5 hours, 5 items (10.0%), placed at items #1-3 (F) and #4-5 (C); Psychoanalytic Theories - 1.5 hours, 15 items (30.0%), with items distributed across the cognitive levels.]
3. Three-Way TOS

This type of TOS reflects the features of


one-way and two- way TOS. One advantage
of this format is that it challenges the test
writer to classify objectives based on the
theory behind the assessment. It also
shows the variability of thinking skills
targeted by the test. However, it takes
much longer to develop this type of TOS.
[Sample three-way TOS table. Columns: CONTENT; LEARNING OBJECTIVE; TIME SPENT; NO. OF ITEMS & PERCENT; LEVEL OF COGNITIVE BEHAVIOR (R, U, AP, AN, E, C) with item format, number, and placement. Sample rows: Theories and Concepts - Recognize important concepts in personality theories, 0.5 hours, 5 items (10.0%), placed at items #1-3 (F) and #4-5 (C); Psychoanalytic Theories - Identify the different theories of personality under the psychoanalytic model, 1.5 hours, 15 items (30.0%), with items distributed across #6-15 and #41-42.]
Construction
of Written
Test
STEPS TO HELP YOU CONSTRUCT AN EFFECTIVE
WRITTEN TEST:

1. Define the Purpose:


Determine the purpose of the test. What specific
knowledge or skills do you want to assess?
Understanding your objectives is essential for creating
relevant questions.

2. Identify the Format:


Decide on the format of the test. Will it be multiple-choice, true/false, short answer, essay, or a combination of these? Different formats assess different skills, so choose the format that matches your objectives.
3. Develop Clear Instructions:
Provide clear and concise instructions at the
beginning of the test. Ensure that participants
understand what is expected of them regarding
answering each type of question.

4. Create a Test Outline:


Plan the structure of the test. Decide on the
number of sections and questions in each section.
Allocate appropriate time for each section if the
test is timed
5. Write Clear and Concise Questions:
Ensure that questions are clear, concise, and
free of ambiguity. Avoid double negatives or
complex sentence structures that could confuse
participants.

6. Avoid Bias and Tricky Questions:


Be mindful of language that might be biased or culturally insensitive. Avoid trick questions that can confuse participants and lead to inaccurate results.
7. Include a Mix of Difficulty Levels:
Include questions of varying difficulty levels.
Have easy questions to assess basic knowledge
and harder ones to challenge the participants who
have a deeper understanding of the subject
matter.

8. Review and Revise:


Review the test for accuracy and fairness.
Check for spelling and grammatical errors. If
possible, have someone else review the test as
well to get a fresh perspective.
9. Pilot Test:
If time allows, conduct a pilot test with a
small group of participants. This helps identify
any unclear questions or problems with the
instructions before the actual test is
administered.

10. Scoring Guidelines:


Establish clear and consistent scoring guidelines, especially for subjective questions. If using automated systems, ensure they are scoring accurately.
11. Feedback and Improvement:
After the test is administered, analyze the results.
Identify the areas where participants performed well and
where they struggled. Use this feedback to improve future
tests.

12. Ethical Considerations:


Respect participant confidentiality and privacy. Ensure
that the test does not discriminate against any group and
complies with ethical standards.

13. Accessibility:
Design the test in a way that is accessible to all
participants, including those with disabilities.
ESTABLISHING TEST
VALIDITY AND
RELIABILITY
WHAT IS TEST
RELIABILITY?
Reliability is the consistency of the responses to a measure under three conditions:

1. When retested on the same person;
2. When retested on the same measure;
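
One common way to quantify consistency when the same test is given twice to the same group (test-retest reliability) is the correlation between the two sets of scores; the sketch below uses Pearson's correlation with hypothetical scores, since the slides name the conditions but not a specific coefficient.

# Sketch: test-retest reliability estimated as the Pearson correlation between two
# administrations of the same test to the same examinees (hypothetical scores).
# statistics.correlation requires Python 3.10 or later.
import statistics

first_admin = [12, 18, 25, 30, 34, 40]
second_admin = [14, 17, 27, 29, 35, 41]

r = statistics.correlation(first_admin, second_admin)
print(round(r, 3))  # values close to 1.0 indicate consistent (reliable) scores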
FACTORS AFFECTING THE
RELIABILITY OF A TEST

1. The number of items in a test.


2. Individual differences of
participants.
3. External environment.
