
CHAPTER 1

Different Terminologies: Assessment, Testing, Measurement and Evaluation
Assessment
the systematic basis for making inferences about the learning and
development of students. It is the process of defining, selecting,
designing, collecting, analyzing, interpreting, and using information
to increase students' learning and development.

Test / Testing
A test is a formal and systematic instrument, usually paper-and-pencil. Testing refers to the administration, scoring, and interpretation of procedures designed to get information about the extent of the performance of the students.
EXAMPLES: Oral questioning, observations, projects, performances, portfolio

Measurement
is the process of quantifying or assigning numbers to an individual's intelligence, personality, attitudes and values, and achievement. It expresses the assessment data in terms of numerical values and answers the question, "how much?"
EXAMPLE: numerical values are used to represent the performance of the students in different subjects.

Evaluation
the process of judging the quality of what is good and what is desirable. It is the comparison of data to a set of standards or learning criteria for the purpose of judging worth or quality.

Role of Assessment in Classroom Instruction
"Teaching and learning are reciprocal processes that depend on and affect one another."
Four roles of assessment used in the instructional process:

1. Beginning of Instruction
- Placement assessment is concerned with the students' entry performance and typically focuses on questions about the knowledge and skills needed to begin the planned instruction.

2. During Instruction
- monitor the learning progress of the students. If the students achieve the planned learning outcomes, the teacher provides feedback to reinforce learning.
Formative Assessment
a type of assessment used to monitor the learning progress of the students during instruction.

Diagnostic Assessment
a type of assessment given at the beginning of or during instruction. It aims to identify the strengths and weaknesses of the students regarding the topics to be discussed.

3. End of Instruction
- Summative assessment is a type of assessment usually given at the end of a course or unit.

Methods of Interpreting the Results

1. Norm-referenced Interpretation
- It is used to describe student performance
according to relative position in some known group.
2. Criterion-referenced Interpretation
- used to describe students’ performance
according to a specified domain of clearly defined
learning tasks.

OTHER TYPES OF TEST


Non-standardized Test vs. Standardized Test
A non-standardized test is a type of test developed by classroom teachers.
A standardized test is a type of test developed by test specialists. It is administered, scored and interpreted under standard conditions.

Objective Test versus Subjective Test
An objective test is a type of test in which two or more evaluators give an examinee the same score.
A subjective test is a type of test in which the scores are influenced by the judgment of the evaluators, meaning there is no one correct answer.

Objective items include multiple-choice, true-false, matching and completion, while subjective items include short-answer essay, extended-response essay, problem solving and performance test items.

Individual Test versus Group Test
An individual test is a type of test administered to a student on a one-on-one basis, usually through oral questioning.
A group test is a type of test administered to a group of individuals or a group of students.

Mastery Test versus Survey Test
A mastery test is a type of achievement test that measures the degree of mastery of a limited set of learning outcomes, using criterion-referenced interpretation of the results.
A survey test is a type of test that measures students' general achievement over a broad range of learning outcomes, using norm-referenced interpretation of the results.

Speed Test versus Power Test
A speed test is designed to measure the number of items an individual can complete over a certain period of time.
A power test is designed to measure the level of performance rather than speed of response. It contains test items arranged in increasing degree of difficulty.
Chapter 2
Goals
broad statements of very general educational outcomes that do not include a specific level of performance.

General Educational Program Objectives
more narrowly defined statements of educational outcomes that apply to a specific educational program; formulated on an annual basis; developed by program coordinators, principals, and other school administrators.

Instructional Objectives
specific statements of learner behaviors or outcomes that are expected to be exhibited by the students after completing a unit of instruction.
Types of Educational Objectives

1. Specific or Behavioral Objectives
Precise statements of the behaviors to be exhibited by the students; the criterion by which mastery of the objectives will be judged; and the conditions under which the behavior must be demonstrated.

2. General or Expressive Objectives
The behaviors are not usually specified and the criterion of the performance level is not stated. They only describe the experience or educational activity to be done.
"When a teacher develops instructional objectives, he must include an action verb that specifies learning outcomes."

TYPES OF LEARNING OUTCOMES
Learning outcomes are stated as either (a) measurable and observable behaviors or (b) non-measurable behaviors.

TAXONOMY OF EDUCATIONAL OBJECTIVES
The three domains of educational activities are:

Cognitive Domain
outcomes of mental activity such as memorizing, reading, problem solving, analyzing, synthesizing and drawing conclusions.

Affective Domain
describes learning objectives that emphasize a feeling tone, an emotion, or a degree of acceptance or rejection.

Psychomotor Domain
characterized by progressive levels of behaviors from observation to mastery of physical skills. This includes physical movement, coordination, and use of the motor-skill areas.
CHAPTER 3
DEVELOPMENT OF CLASSROOM ASSESSMENT TOOLS

PRINCIPLES OF HIGH QUALITY ASSESSMENT
Assessing the performance of every student is a very critical task for the classroom teacher. It is very important that a classroom teacher prepare the assessment tool appropriately.

Clarity of the Learning Target
Learning targets should be clearly stated and must be focused on student learning objectives rather than teacher activity. The learning outcomes must be Specific, Measurable, Attainable, Realistic and Time-bound (SMART).
1. Objective Test
a type of test that requires students to select the correct response from several alternatives, or to supply a word or short phrase to answer a question or complete a statement; there is only one correct answer (true-false, matching type, and multiple-choice questions).
2. Subjective Test
includes either short-answer questions or long general questions. This type of test has no single specific answer.
3. Performance Assessment
students are asked to perform real-world tasks that
demonstrate meaningful application of essential
knowledge and skills.

4. Portfolio Assessment
based on the systematic, longitudinal collection of
student work created in response to specific known
instructional objectives and evaluated in relation to
the same criteria.

5. Oral Questioning
collects assessment data by asking oral questions. This is also a form of formative assessment.

6. Observation Technique
Another method of collecting assessment data is through observation.

7. Self-report
the responses of the students may be used to evaluate both performance and attitude. Assessment tools could include sentence completion, Likert scales, checklists, or holistic scales.
Different Qualities of Assessment Tools
Validity
the appropriateness of score-based inferences or decisions made based on the students' test results.

Reliability
the consistency of measurement; that is, how consistent test results or other assessment results are from one measurement to another.

Fairness
the test items should not have any biases. They should not be offensive to any examinee subgroup. A test can only be good if it is fair to all the examinees.

Administrability
the test should be administered uniformly to all students so that the scores obtained will not vary due to factors other than differences in the students' knowledge and skills.

Practicality and Efficiency
the teacher's familiarity with the method used, the time required for the assessment, the complexity of administration, ease of scoring, ease of interpretation of the test results, and the cost of the materials used.
Kinds of Objective Type Test Items
a. Multiple-choice Test
is used to measure knowledge outcomes and other types of learning outcomes such as comprehension and application.
b. Matching Type Test
provides a way for learners to connect a word, sentence or phrase in one column to a corresponding word, sentence or phrase in another column.
c. True or False Type
In this type of test, the examinees determine whether the statement presented is true or false.

Kinds of Subjective Type Test Items
a. Completion Type or Short Answer Test
the examinee needs to supply or create the appropriate word(s), symbol(s) or number(s) to answer the question or complete a statement, rather than selecting the answer from given options.
b. Essay Items
appropriate when assessing students' ability to organize and present their original ideas. It consists of a small number of questions.
CHAPTER 4
Administering, Analyzing, and
Improving Test
PACKAGING AND REPRODUCING TEST ITEMS

1. Put the items with the same format together.


2. Arrange the test items from easy to difficult.
3. Give proper spacing for each item for easy reading.
4. Keep questions and options in the same page.
5. Place the illustrations near the options.
6. Check the key answer.
7. Check the directions of the test.
8. Provide space for name, date and score.
9. Proofread the test.
10. Reproduce the test.

Item Analysis
Item analysis is a process of examining the students' responses to individual items in the test. Through it we can identify which of the given test items are good and which are defective. Good items are to be retained, and defective items are to be improved, revised or rejected.
Types of Quantitative Item Analysis
1. Difficulty Index
the proportion of the number of students in the upper and lower groups who answered an item correctly.
2. Discrimination Index
the power of the item to discriminate between students who scored high and those who scored low in the overall test.
Three (3) kinds of Discrimination Index:
(a) Positive Discrimination
happens when more students in the upper group got the item correct than students in the lower group.
(b) Negative Discrimination
occurs when more students in the lower group got the item correct than students in the upper group.
(c) Zero Discrimination
happens when the numbers of students in the upper and lower groups who answered the item correctly are equal.
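The two indices above can be sketched in a few lines of Python. The group counts are hypothetical, and the formulas follow the common upper-group/lower-group definitions described in the text:

```python
def difficulty_index(upper_correct, lower_correct, group_size):
    """Proportion of students in the upper and lower groups (combined)
    who answered the item correctly."""
    return (upper_correct + lower_correct) / (2 * group_size)

def discrimination_index(upper_correct, lower_correct, group_size):
    """Difference between the proportions of the upper and lower groups
    who answered the item correctly; may be positive, negative, or zero."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical item: 8 of 10 upper-group and 4 of 10 lower-group students correct
print(difficulty_index(8, 4, 10))      # 0.6
print(discrimination_index(8, 4, 10))  # 0.4 (positive discrimination)
```

When the two group counts are equal, the discrimination index is zero, matching case (c) above.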

Qualitative Item Analysis


a process in which the teacher or expert carefully
proofreads the test before it is administered, to check if
there are typographical errors, to avoid grammatical
clues that may lead to giving away the correct answer,
and to ensure that the level of reading materials is
appropriate.
CHAPTER 5
Utilization of Assessment Data
DEFINITION OF STATISTICS
"Statistics is a branch of science which deals with the collection, presentation, analysis and interpretation of quantitative data."

FREQUENCY DISTRIBUTION
a tabular arrangement of data into appropriate categories showing the number of observations in each category or group.
Parts of a Frequency Table:
1. Class limits - the groupings or categories defined by the lower and upper limits.
2. Class size (c.i.) - the width of each class interval.
3. Class boundaries - the numbers used to separate each category in the frequency distribution, without the gaps created by the class limits.
4. Class marks - the midpoints of the lower and upper class limits.
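The four quantities above can be illustrated for a single class interval; the interval 10-14 below is hypothetical, not taken from the text:

```python
# One hypothetical class interval with limits 10 (lower) and 14 (upper)
lower_limit, upper_limit = 10, 14

class_size = upper_limit - lower_limit + 1                  # width of the interval: 5
class_mark = (lower_limit + upper_limit) / 2                # midpoint: 12.0
class_boundaries = (lower_limit - 0.5, upper_limit + 0.5)   # (9.5, 14.5) - no gaps between classes

print(class_size, class_mark, class_boundaries)
```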
Measures of Central Tendency
MEAN - the most commonly used measure of the center of data; also referred to as the "arithmetic average."

MEDIAN - the middle value in a set of data.

MODE - the value that has the highest frequency in a given set of values; the value that appears the most number of times.

Quantiles - values that divide a score distribution into different equal parts.
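Python's standard library computes all four of these directly; the score set below is hypothetical:

```python
import statistics

# Hypothetical set of five student scores (not from the text)
scores = [10, 12, 12, 15, 18]

print(statistics.mean(scores))    # 13.4 - arithmetic average
print(statistics.median(scores))  # 12   - middle value of the sorted data
print(statistics.mode(scores))    # 12   - most frequent value

# Quantiles: the cut points dividing the distribution into four equal parts (quartiles)
print(statistics.quantiles(scores, n=4, method="inclusive"))  # [12.0, 12.0, 15.0]
```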

Measures of Variation
Absolute Measures of Variation:
Range
Inter-quartile Range (IQR) or Quartile Deviation
Mean Deviation
Variance and Standard Deviation
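The absolute measures of variation listed above can be computed for the same kind of hypothetical score set, again using the standard library (the population formulas are assumed here):

```python
import statistics

# Hypothetical score set (not from the text)
scores = [10, 12, 12, 15, 18]

value_range = max(scores) - min(scores)                   # range: 8
q1, q2, q3 = statistics.quantiles(scores, n=4, method="inclusive")
iqr = q3 - q1                                             # inter-quartile range: 3.0
quartile_deviation = iqr / 2                              # half the IQR: 1.5
mean = statistics.mean(scores)
mean_deviation = sum(abs(x - mean) for x in scores) / len(scores)  # average absolute deviation
variance = statistics.pvariance(scores)                   # population variance: 7.84
std_dev = statistics.pstdev(scores)                       # population standard deviation: 2.8
```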
DESCRIBING RELATIONSHIPS
Correlation
Types of correlation:
(a) Positive correlation
(b) Negative correlation
(c) Zero correlation
Scattergrams of correlation:
(a) Scattergram of positive correlation
(b) Scattergram of negative correlation
Spearman rho Coefficient
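For untied ranks, the Spearman rho coefficient can be computed from the differences between paired ranks using the standard formula rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)); the ranks below are hypothetical:

```python
def spearman_rho(ranks_x, ranks_y):
    """Spearman rank-order correlation for untied ranks:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the difference between each pair of ranks."""
    n = len(ranks_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical ranks of five students on two tests
print(spearman_rho([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 1.0  - perfect positive correlation
print(spearman_rho([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))  # -1.0 - perfect negative correlation
```

A value near zero indicates zero correlation between the two sets of ranks.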
CHAPTER 6
Establishing Validity and Reliability of a Test
Validity of a Test
CONTENT VALIDITY
refers to the relationship between the test and the instructional objectives; it establishes content so that the test measures what it is supposed to measure.
CRITERION-RELATED VALIDITY
refers to the extent to which scores from a test relate to theoretically similar measures.
CONSTRUCT VALIDITY
refers to the extent to which a test measures theoretical and unobservable qualities.
Validity Coefficient
"the computed value of rxy. In theory, the validity coefficient has values, like the correlation, that range from 0 to 1."
Reliability of a Test
"refers to the consistency with which a test yields the same rank for individuals who take it more than once."
Test-Retest Method
a type of reliability determined by administering the same test twice to the same group of students, with a time interval between the tests.
Equivalent Form
a type of reliability determined by administering two different but equivalent forms of the test (also called parallel or alternate forms).
Split-Half Method
administer the test once and score two equivalent halves of the test; the usual procedure is to score the even-numbered and the odd-numbered test items separately.
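A half-test correlation understates the reliability of the full-length test. The usual adjustment for the split-half method (not named in the text, but the standard one) is the Spearman-Brown formula:

```python
def spearman_brown(half_test_correlation):
    """Estimate full-test reliability from the correlation between the
    odd-numbered and even-numbered half-test scores (Spearman-Brown formula)."""
    return (2 * half_test_correlation) / (1 + half_test_correlation)

# Hypothetical half-test correlation of 0.60 yields roughly 0.75 for the full test
print(spearman_brown(0.60))
```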

Reliability Coefficient
a measure of the amount of error associated with the test scores.

Scoring Rubrics for Performance and Portfolio Assessment
Scoring Rubrics
"are descriptive scoring schemes that are developed by teachers or other evaluators to guide the analysis of the products or processes of students' efforts."
Types of Rubrics:
(a) Holistic rubric - a type of rubric that requires the teacher to score the overall process or product as a whole.
(b) Analytic rubric - a type of rubric that provides information regarding performance in each component part of a task, making it useful for diagnosing the specific strengths and weaknesses of the learners.
Performance-Based Assessment
a direct and systematic observation of the actual performances of the students based on predetermined performance criteria ("Implementing Performance Assessment in the Classroom").

Paper-and-Pencil Test vs. Performance-Based Assessment
A paper-and-pencil test measures learning indirectly. When measuring factual knowledge or solving well-structured mathematical problems, it is better to use a paper-and-pencil test.

Portfolio Assessment
is the systematic, longitudinal collection of student work created in response to specific, known instructional objectives and evaluated in relation to the same criteria; a purposeful collection of student work that exhibits the student's effort, progress and achievements in one or more areas.

Traditional Assessment vs. Portfolio Assessment:

Traditional Assessment:
- Measures the student's ability at one time.
- Done by the teacher alone; students are not aware of the criteria.
- Conducted outside instruction.
- Assigns the student a grade.
- Does not capture the student's language ability.
- Does not give the student responsibility.

Portfolio Assessment:
- Measures the student's ability over time.
- Done by the teacher and the students; the students are aware of the criteria.
- Embedded in instruction.
- Involves the student in his or her own assessment.
- Allows expression of the teacher's knowledge of the student as a learner.
- The student learns how to take responsibility.