Test / Testing
A test is a formal and systematic instrument, usually paper-and-pencil; testing
refers to the administration, scoring, and interpretation of procedures
designed to obtain information about the extent of the students' performance.
EXAMPLES: oral questioning, observations, projects, performances, portfolios
Measurement
is a process of quantifying or assigning numbers to an individual's
intelligence, personality, attitudes and values, and achievement. It
expresses the assessment data in terms of numerical values and answers
the question, "How much?"
EXAMPLE: numerical values are used to represent the performance
of the students in different subjects.
Evaluation
the process of judging the quality of what is good and what is desirable.
It is the comparison of data to a set of standards or learning criteria
for the purpose of judging worth or quality.
1. Beginning of Instruction
- Placement assessment is concerned with the students' entry performance
and typically focuses on questions about their readiness for the planned
instruction.
2. During Instruction
- monitors the learning progress of the students. If the students achieve
the planned learning outcomes, the teachers provide feedback to reinforce
learning.
Formative Assessment
a type of assessment used to monitor the learning progress of the students
during instruction.
Diagnostic Assessment
a type of assessment given at the beginning of instruction or during
instruction. It aims to identify the strengths and weaknesses of the
students regarding the topics to be discussed.
3. End of Instruction
- Summative Assessment is a type of assessment usually given at the end
of a course or unit.
1. Norm-referenced Interpretation
- It is used to describe student performance
according to relative position in some known group.
2. Criterion-referenced Interpretation
- used to describe students’ performance
according to a specified domain of clearly defined
learning tasks.
General Educational Program Objectives
more narrowly defined statements of educational outcomes that apply to
specific educational programs; formulated on an annual basis; developed
by program coordinators, principals, and other school administrators.
Instructional Objectives
specific statements of learner behaviors or outcomes that are expected to
be exhibited by the students after completing a unit of instruction.
Types of Educational Objectives
TAXONOMY OF EDUCATIONAL OBJECTIVES
The three domains of educational activities are the cognitive, affective,
and psychomotor domains.
4. Portfolio Assessment
based on the systematic, longitudinal collection of
student work created in response to specific known
instructional objectives and evaluated in relation to
the same criteria.
5. Oral Questioning
collects assessment data by asking oral questions. This is also a form of
formative assessment.
6. Observation Technique
Another method of collecting assessment data is
through observation
7. Self-report
the responses of the students may be used to evaluate both performance
and attitude. Assessment tools could include sentence completion, Likert
scales, checklists, or holistic scales.
Different Qualities of Assessment
Tools
Validity
the appropriateness of score-based inferences; or
decisions made based on the students’ test results
Reliability
the consistency of measurement; that is, how consistent test results or
other assessment results are from one measurement to another.
Fairness
the test item should not have any biases. It should not
be offensive to any examinee subgroup. A test can only
be good if it is fair to all the examinees.
Administrability
the test should be administered uniformly to all students so that the
scores obtained will not vary due to factors other than differences in
the students' knowledge and skills.
Item Analysis
Item analysis is a process of examining the students' responses to
individual items in the test. We can identify which of the given test
items are good and which are defective. Good items are retained, while
defective items are improved, revised, or rejected.
Types of Quantitative Item Analysis
1. Difficulty Index
the proportion of the number of students in the upper and lower groups
who answered an item correctly (a worked sketch follows this list).
2. Discrimination Index
the power of the item to discriminate between students who scored high
and those who scored low on the overall test.
Three (3) kinds of Discrimination Index:
(a) Positive Discrimination
happens when more students in the upper group answered the item correctly
than students in the lower group.
(b) Negative Discrimination
occurs when more students in the lower group answered the item correctly
than students in the upper group.
(c) Zero Discrimination
occurs when an equal number of students in the upper and lower groups
answered the item correctly.
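As a rough illustration only, the short Python sketch below computes a
difficulty index and a discrimination index from hypothetical counts for the
upper and lower groups; the group sizes, the counts, and the exact formulas
(textbooks vary in the conventions they use) are assumptions made for this
example.

# Hypothetical item data: 20 students in the upper group and 20 in the lower group.
upper_correct = 18   # upper-group students who answered the item correctly
lower_correct = 6    # lower-group students who answered the item correctly
group_size = 20      # number of students in each group

# Difficulty index: proportion of students in both groups who answered correctly.
difficulty = (upper_correct + lower_correct) / (2 * group_size)

# Discrimination index: difference between the two groups' proportions correct.
discrimination = (upper_correct - lower_correct) / group_size

print(f"Difficulty index:     {difficulty:.2f}")      # 0.60
print(f"Discrimination index: {discrimination:.2f}")  # 0.60

Here the positive value of the discrimination index shows positive
discrimination, since more upper-group than lower-group students answered
the item correctly.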
FREQUENCY DISTRIBUTION
a tabular arrangement of data into appropriate categories showing the
number of observations in each category or group.
Parts of a Frequency Table:
1. Class Limit - the groupings or categories defined by the lower and
upper limits (an illustrative sketch follows).
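A minimal Python sketch, using hypothetical scores and an arbitrary class
width of 5, shows how raw scores can be tallied into categories defined by
lower and upper class limits; the scores and the choice of class width are
assumptions for illustration only.

from collections import Counter

scores = [12, 15, 17, 18, 20, 21, 21, 23, 24, 26, 28, 29, 31, 33, 34]  # hypothetical scores
class_width = 5                                      # arbitrary class width
start = min(scores) - (min(scores) % class_width)    # lower limit of the first class

# Tally each score into the class whose lower and upper limits contain it.
tally = Counter((score - start) // class_width for score in scores)

for index in sorted(tally):
    lower = start + index * class_width              # lower class limit
    upper = lower + class_width - 1                  # upper class limit
    print(f"{lower}-{upper}: {tally[index]}")        # e.g. "20-24: 5"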
Measures of Variation
Absolute Measures of Variation (a computational sketch follows this list):
Range
Inter-quartile Range (IQR) or Quartile Deviation
Mean Deviation
Variance and Standard Deviation
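The Python sketch below computes each of the absolute measures of variation
listed above for a small set of hypothetical scores; it assumes the
population (divide-by-n) forms of the variance and standard deviation and
the default quartile interpolation of the statistics module, since textbooks
differ on these conventions.

import statistics

scores = [10, 12, 15, 15, 18, 20, 22, 25]       # hypothetical test scores

range_value = max(scores) - min(scores)         # Range: highest minus lowest score

q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartiles (default interpolation)
iqr = q3 - q1                                   # Inter-quartile range
quartile_deviation = iqr / 2                    # Quartile deviation (semi-IQR)

mean = statistics.mean(scores)
mean_deviation = sum(abs(x - mean) for x in scores) / len(scores)  # Mean deviation

variance = statistics.pvariance(scores)         # Population variance
std_dev = statistics.pstdev(scores)             # Population standard deviation

print(range_value, iqr, quartile_deviation,
      round(mean_deviation, 2), round(variance, 2), round(std_dev, 2))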
DESCRIBING RELATIONSHIPS
Correlation
types of correlation: (a) Positive Correlation
(b) Negative Correlation
(c) Zero Correlation
Scattergrams of Correlation: (a) positive correlation, (b) negative correlation.
Spearman rho Coefficient
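Only the name of the Spearman rho coefficient is given above; as a hedged
sketch, the usual rank-difference formula is rho = 1 - (6 * sum of d^2) /
(n * (n^2 - 1)), where d is the difference between a student's two ranks.
The Python lines below apply it to hypothetical ranks and assume there are
no tied ranks.

# Hypothetical ranks of 5 students on two measures, assuming no tied ranks.
rank_x = [1, 2, 3, 4, 5]
rank_y = [2, 1, 4, 3, 5]

n = len(rank_x)
sum_d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))

# Rank-difference formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
rho = 1 - (6 * sum_d_squared) / (n * (n ** 2 - 1))
print(f"Spearman rho = {rho:.2f}")   # 0.80 for these ranks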
CHAPTER 6
Establishing Validity and Reliability of a Test
Validity of a Test
CONTENT VALIDITY
refers to the relationship between a test and the instructional objectives;
it establishes content so that the test measures what it is supposed to
measure.
CRITERION - RELATED VALIDITY
refers to the extent to which scores from a
test relate to theoretically similar measures.
CONSTRUCT VALIDITY
refers to the extent to which a test measures theoretical and unobservable
qualities.
Validity Coefficient
“the computed value of r_xy. In theory, the validity coefficient, like the
correlation coefficient, has values that range from 0 to 1.”
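As an illustration of the validity coefficient r_xy, the sketch below
correlates hypothetical test scores with scores on a criterion measure using
the Pearson correlation; the data, and the use of Pearson r as the validity
coefficient, are assumptions made only for this example.

import statistics   # statistics.correlation requires Python 3.10 or later

test_scores = [78, 85, 90, 65, 72, 88, 95, 70]        # hypothetical test scores (x)
criterion_scores = [75, 88, 85, 60, 70, 90, 92, 74]   # hypothetical criterion measure (y)

# Pearson correlation between the test and the criterion measure (r_xy).
r_xy = statistics.correlation(test_scores, criterion_scores)
print(f"Validity coefficient r_xy = {r_xy:.2f}")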
Reliability of a Test
“ refers to the consistency with which it
yields the same rank for
individuals who take the test more than
once.”
Test - Retest Method
a type of reliability determined by administering the same test twice to
the same group of students, with any time interval between the tests.
Equivalent Form
a type of reliability determined by administering two different but
equivalent forms of the test (also called parallel or alternate forms).
Split-Half Method
administer the test once and score two equivalent halves of the test; the
usual procedure is to score the even-numbered and the odd-numbered test
items separately (see the sketch below).
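A minimal sketch of the split-half procedure, using hypothetical odd-half
and even-half scores: the two halves are correlated, and the Spearman-Brown
formula, r_whole = 2r / (1 + r), is then commonly used to estimate the
reliability of the full-length test. The scores below are assumptions for
illustration only.

import statistics   # statistics.correlation requires Python 3.10 or later

# Hypothetical scores of 6 students on the odd-numbered and even-numbered halves.
odd_half = [10, 12, 8, 15, 11, 9]
even_half = [11, 13, 7, 14, 12, 10]

r_half = statistics.correlation(odd_half, even_half)   # correlation between the halves

# Spearman-Brown formula estimating the reliability of the whole test.
r_whole = (2 * r_half) / (1 + r_half)

print(f"Half-test correlation:  {r_half:.2f}")
print(f"Whole-test reliability: {r_whole:.2f}")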
Reliability Coefficient
a measure of the amount of error
associated with the test scores.
Scoring Rubrics for Performance and Portfolio Assessment
Scoring Rubrics
“are descriptive scoring schemes that are
developed by teachers or other evaluators
to guide the analysis of the products or
processes of students’ efforts.”
Types of Rubrics: holistic rubrics and analytic rubrics.
Portfolio Assessment
is the systematic, longitudinal collection of student
work created in response to specific, known
instructional objectives and evaluated in relation to
the same criteria.
a purposeful collection of student work that exhibits
the student’s effort, progress and achievements in
one or more areas.
Traditional Assessment vs. Portfolio Assessment (comparison)