Unit 1 - Authentic Assessment in The Classroom (GROUP 1A)
High-Quality Assessment = provides results that demonstrate and improve targeted
student learning
1. PURPOSES OF ASSESSMENT
a. Assessment FOR learning
The preposition “for” in assessment for learning implies that assessment is done
to improve and ensure learning. This is referred to as FORmative assessment,
assessment that is given while the teacher is in the process of student formation. It
ensures that learning is going on while the teacher is in the process of teaching.
b. Assessment OF learning
It is usually given at the end of a unit, grading period or a term like a semester. It
is meant to assess learning for grading purposes, thus the term assessment of learning.
c. Assessment AS learning
It is associated with self-assessment. As the term implies, assessment by itself is
already a form of learning for the students.
As students assess their own work (e.g. a paragraph), on their own and/or with their
peers, with the use of scoring rubrics, they learn on their own what a good paragraph is.
At the same time, as they are engaged in self-assessment, they learn about themselves
as learners and become aware of how they learn. In short, in assessment AS learning,
students set their targets and actively monitor and evaluate their own learning in relation
to those targets. As a consequence, they become self-directed or independent learners.
By assessing their own learning, they are learning at the same time.
[Diagram: the three purposes under the umbrella of ASSESSMENT: Assessment FOR
learning (formative assessment), Assessment OF learning, and Assessment AS learning
(self-assessment)]
Level 2. Comprehension
- refers to the same concept as "understanding".
- it is a step higher than mere acquisition of facts and involves cognition or
awareness of the interrelationships of facts and concepts.
- re-state data or information in one’s own words, interpret, and translate.
- explaining or interpreting the meaning of the given scenario or statement.
Level 3. Application
- refers to the transfer of knowledge from one field of study to another or from
one concept to another concept in the same discipline.
- using or applying knowledge and putting theory into practice.
- demonstrating and solving problems
Level 4. Analysis
- refers to the breaking down of a concept or idea into its components and
explaining the concept as a composition of these components.
- interpreting elements, organizing, and structuring.
Level 5. Synthesis
- refers to the opposite of analysis and entails putting together the components
in order to summarize the concept.
- developing new unique structures, models, systems, approaches, or ideas.
- build, create, design, establish, assemble, formulate.
Level 6. Evaluating and reasoning
- refers to valuing and judgment or putting worth to a concept or principle.
- judgment relating to external criteria.
- assess the effectiveness of the whole concept in relation to values, outputs,
efficacy, and others.
1. Written-Response Instruments
- written-response instruments include objective tests (multiple choice, true or
false, matching, or short answer), essays, examinations, and checklists.
● Objective tests are appropriate for assessing the various levels of the
hierarchy of educational objectives.
● Multiple-choice tests in particular can be constructed in such a way as
to test higher-order thinking skills.
● Essays can test the student's grasp of the higher-level cognitive skills,
particularly in the areas of application, analysis, synthesis, and judgment.
Example:
(POOR) Write an essay about the First EDSA Revolution.
(BETTER) Write an essay about the key figures of the First EDSA Revolution
and their respective roles.
● A checklist is a list of several characteristics or activities presented to the
subjects of a study, who analyze it and place a mark opposite the
characteristics that apply.
4. Oral Questioning
- the ancient Greeks used oral questioning extensively as an assessment
method. Socrates himself, considered the epitome of a teacher, was said to
have handled his classes solely through questioning and oral interactions.
- oral questioning is an appropriate assessment method when the objectives are:
(a) To assess the student's stock knowledge.
(b) To determine the student's ability to communicate ideas in coherent verbal
sentences.
4. SAMPLING
b. Systematic sampling
- is easier to do than random sampling.
- the list of elements is "counted off". That is, every kth element is taken. This is
similar to lining everyone up and numbering off "1,2,3,4; 1,2,3,4; etc". When
done numbering, all people numbered 4 would be used.
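The "every kth element" idea can be sketched in a few lines of Python (the student list and the value of k below are invented for illustration):

```python
def systematic_sample(population, k, start=0):
    """Take every k-th element of the list, beginning at index `start`."""
    return population[start::k]

students = list(range(1, 21))                    # students numbered 1..20
sample = systematic_sample(students, k=4, start=3)
print(sample)                                    # every 4th student: [4, 8, 12, 16, 20]
```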
c. Stratified sampling
- also divides the population into groups called strata. However, this time it is by
some characteristic, not geographically.
- for instance, the population might be separated into males and females. A
sample is taken from each of these strata using either random, systematic, or
convenience sampling.
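The two steps above (divide by characteristic, then sample within each stratum) can be sketched as follows; the names and the male/female grouping are invented for illustration, and random sampling is used within each stratum:

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=0):
    """Group the population by a characteristic, then draw a random
    sample of fixed size from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    return {k: rng.sample(v, n_per_stratum) for k, v in strata.items()}

people = [("Ana", "F"), ("Ben", "M"), ("Cara", "F"),
          ("Dan", "M"), ("Ella", "F"), ("Fred", "M")]
sample = stratified_sample(people, strata_key=lambda p: p[1], n_per_stratum=2)
print(sample)   # two people drawn from each of the two strata
```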
d. Cluster sampling
- is accomplished by dividing the population into groups, usually geographically.
- these groups are called clusters or blocks.
- the clusters are randomly selected, and each element in the selected clusters
is used.
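The contrast with stratified sampling shows up clearly in code: here whole clusters are chosen at random and every element inside a chosen cluster is kept. The cluster names and members below are invented for illustration:

```python
import random

def cluster_sample(clusters, n_clusters, seed=0):
    """Randomly pick whole clusters and keep every element inside them."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

# hypothetical geographic clusters and their members
clusters = {"North": ["A1", "A2"], "South": ["B1", "B2", "B3"],
            "East": ["C1"], "West": ["D1", "D2"]}
print(cluster_sample(clusters, n_clusters=2))
```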
5. ACCURACY
a. Validity
- is the extent to which a test measures what it is supposed to measure or as
referring to the appropriateness, correctness, meaningfulness and usefulness
of the specific decisions a teacher makes based on the test results.
- the first definition refers to the test itself while the second refers to the decisions
made by the teacher based on the test.
- a test is valid when it is aligned with the learning outcome.
- a teacher who conducts test validation might want to gather different kinds of
evidence. There are essentially three (3) main types of evidence that may be
collected:
a. Content-related evidence of validity refers to the content and format of the
instrument. How appropriate is the content? How comprehensive? Does it
logically get at the intended variable? How adequately does the sample of items
or questions represent the content to be assessed?
b. Criterion-related evidence of validity refers to the relationship between
scores obtained using the instrument and scores obtained using one or more
other tests (often called criterion). How strong is this relationship? How well do
such scores estimate present or predict future performance of a certain type?
c. Construct-related evidence of validity refers to the nature of the
psychological construct or characteristic being measured by the test. How well
does a measure of the construct explain differences in the behaviour of
individuals or their performance on a certain task?
b. Reliability
- refers to the consistency of the scores obtained – how consistent they are for
each individual from one administration of an instrument to another and from
one set of items to another.
- reliability and validity are related concepts. If an instrument is unreliable, it
cannot yield valid outcomes. As reliability improves, validity may also improve
(or not); however, if an instrument is shown scientifically to be valid, then it is
almost certain that it is also reliable.
- something reliable is something that works well and that you can trust
- a reliable test is a consistent measure of what it is supposed to measure.
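One common way to quantify this consistency is a test-retest reliability coefficient: the correlation between scores from two administrations of the same instrument. A minimal sketch, with invented scores for five students:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: covariance of the two score sets
    divided by the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical scores of five students on two administrations of one test
first_admin  = [78, 85, 62, 90, 70]
second_admin = [80, 83, 65, 92, 68]
print(round(pearson(first_admin, second_admin), 2))  # close to 1.0 -> consistent scores
```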
a. Evaluation process
- Authentic assessment is an evaluation process that involves multiple forms of
performance measurement reflecting the student's learning, achievement,
motivation, and attitudes on instructionally-relevant activities.
b. Real-world task
- a ‘real-world’ assessment is meant to focus on the impact of one’s work in real
or realistic contexts.
- it requires students to deal with the messiness of real or simulated settings,
purposes, and audiences (as opposed to a simplified and 'clean' academic task
with no audience but the teacher-evaluator).
d. Student’s performance
- an authentic assessment usually includes a task for students to perform and a
rubric by which their performance on the task will be evaluated.
2. CHARACTERISTICS OF AUTHENTIC ASSESSMENT
a. Performance Assessment
- an approach to educational assessment that requires students to directly
demonstrate what they know and are able to do through open-ended tasks
such as constructing an answer, producing a project, or performing an activity.
b. Alternative Assessment
- is a method of evaluation that measures a student's level of proficiency in a
subject as opposed to the student's level of knowledge.
- the overall goal of alternative assessment is to allow students to demonstrate
their knowledge and execute tasks.
c. Direct Assessment
- refers to any method of collecting data that requires students to demonstrate a
knowledge, skill, or behavior.
C. WHY USE AUTHENTIC ASSESSMENT?
Examples
a. Traditional Assessment
b. Authentic Assessment:
- demonstrations
- hands-on experiments
- computer simulations
- portfolios
- projects
- multi-media presentations
- role plays
- recitals
- stage plays
- exhibits
D. DEVELOPING AUTHENTIC CLASSROOM ASSESSMENTS
Given that the assessment should be, well, authentic, start by looking at what
professionals in your field do on a daily basis and how those tasks might relate to your
selected learning objective.
It is important for these performance criteria to align with the nature of your task. To
return to our business example from earlier, you’d want to make sure that the way you
measure students’ performance is reflective of or similar to the expectations they would
encounter in a business scenario.
d. Develop a Rubric
Rubrics are a powerful tool for many assessment types, and they are an essential
component of authentic assessment. After all, authentic assessments are fairly
subjective, and rubrics help ensure instructors are grading fairly and consistently from
assessment to assessment and student to student.
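One way to see why a rubric supports consistent grading is to treat it as a mapping from criteria to weights: once the rubric is fixed, the overall score follows mechanically from the per-criterion ratings. The criteria names, weights, and 1-4 rating scale below are invented for illustration:

```python
# hypothetical rubric: criterion -> weight; each criterion rated 1-4 by the grader
rubric = {"content": 0.4, "organization": 0.3, "mechanics": 0.3}

def score(ratings, rubric):
    """Weighted average of the 1-4 ratings assigned per criterion."""
    return sum(rubric[c] * r for c, r in ratings.items())

student = {"content": 4, "organization": 3, "mechanics": 2}
print(round(score(student, rubric), 2))  # 4*0.4 + 3*0.3 + 2*0.3 = 3.1
```

Because the weights and levels are written down in advance, two graders (or one grader across many papers) applying the same rubric arrive at the same score for the same ratings.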
Conclusion
References: