Study This! Assessment of Learning


CHAPTER I - Nature and Purposes of Assessment

OBJECTIVES
At the end of the chapter, the learners are expected to:
 Discuss the importance of assessment in classroom instruction;
 Differentiate test, measurement, and assessment from evaluation;
 Enumerate the sound principles of assessment in education; and
 Cite examples of norm and criterion-referenced evaluation.

 Based on the NCBTS, planning, assessing, and reporting refer to the alignment of assessment and planning activities. In
particular, the domain focuses on:
(1) The use of assessment data to plan and revise teaching-learning plans,
(2) The integration of assessment procedures in the plan and implementation of teaching-learning activities, and
(3) The reporting on the learner’s actual achievement and behavior.

 The competencies expected of a would-be teacher are:


(1) To develop and use a variety of appropriate assessment strategies to monitor and evaluate learning
(2) To regularly monitor and provide feedback on the learner’s understanding of the content.

Definition of Terms

In the field of testing, the terms test, measurement, assessment, and evaluation are often used interchangeably. Identifying the
nuances in their meanings can help test developers and test users design and construct effective tests and use the results appropriately.

COMPARATIVE MATRIX ON THE FOUR MOST COMMONLY USED TERMS IN TESTING

Test
o A set of items or questions measuring a sample behavior or task from a specific domain of knowledge or skill; designed to be presented to one or more examinees under specific conditions, with definite boundaries and limits (UP Open University).
o Consists of questions, exercises, or other devices to measure the outcomes of learning.

Measurement
o Establishes the characteristics of individuals or groups of individuals through the assignment of numerals according to rules that give these numerals quantitative meaning (ASEAN Seminar-Workshop on Test Item Writing/Construction and Development, 1998).
o A process of obtaining a numerical description of the degree to which an individual possesses a particular characteristic (Hopkins & Stanley, 1981).

Assessment
o Any of a variety of procedures used to obtain information about student performance (Linn & Gronlund, 2020).
o A systematic, continuous process of monitoring the various pieces of learning to evaluate student achievement and instructional effectiveness (Hewitt-Gervais & Baylen, 1998).
o Answers the questions “how much of a given skill does a student possess before, during, and after instruction” and “how much change has occurred.”
o A holistic way of looking at the effectiveness of the learning process by considering both the learner and the learning product and applying quantitative judgments.

Evaluation
o Involves a broader process that includes examining several components of a whole and making instructional decisions (Gredler, 1996).
o The process of delineating, obtaining, and providing useful information for judging decision alternatives (Popham, 1993).
o The process of summing up the results of measurements or tests and giving them some meaning based on value judgments.

A. Reflect on the overview you just read by answering the questions below. Write down your reflections on a separate sheet.
Questions:
1. How is this subject (Assessment of Student Learning 1) different from other Professional Education subjects? List at
least three differences.
 _______________________________________________________________________.
 _______________________________________________________________________.
 _______________________________________________________________________.

2. How will my personal characteristics and circumstances affect (positively or negatively) my learning in this subject
(Assessment of Student Learning 1)?

Classroom Assessment Defined

Classroom assessment can be defined as the collection, interpretation, and use of information to help teachers make better
decisions. Thus, assessment is more than testing and measurement (McMillan, 1997).

Four Essential Components of Implementing Classroom Assessment


1. Purpose
2. Measurement
3. Evaluation
4. Use

Purpose and Functions of Assessment

Four purposes of assessment according to Wyatt (1998)


 To inform the teacher about a student’s progress;
 To inform the students about their progress;
 To inform others about the students’ progress (parents and future teachers); and
 To provide information for the public.

These purposes can be summed up into three: assessment for learning, assessment of learning, and assessment as learning (Earl,
2005).

Assessment for learning


 Teachers use the students’ prior knowledge as a starting point of instruction. The results of assessment are communicated
clearly and immediately to the students to determine effective ways to teach and learn.

Assessment of Learning or summative assessment


 Is done after instruction. It is used to identify what students know and can do and the level of their proficiency or
competency. Its results reveal whether or not instruction has successfully achieved the desired curriculum outcomes. The
information from assessment of learning is usually expressed as grades and is made known to the students, parents, and other
stakeholders for better decision making.

Assessment as Learning
 Is done for teachers to understand and perform their roles of assessing for and of learning well. It requires teachers to undergo
training on how to assess learning and to be equipped with the competencies needed in performing their work as assessors. To
assess for and of learning, teachers should have the needed skills in assessment. This can be made possible through
different forms of capacity building.

The purpose of assessment leads to development and improvement, and accountability and confidence. In this context of assessment
and accountability, Eisner (1993) listed five functions of assessment.

 Temperature-taking function: describes the educational health of the country rather than that of individual students or systems
 Gate-keeping function: directs students along certain paths of learning based on the view that the school has a social selection
function
 Feedback-to-teachers function: provides information to teachers about the quality of their work
 Objective-achievement function: determines whether the course objectives have been achieved
 Appraisal-of-program function: provides an indication of the quality of the program

Importance of Assessment

Assessment serves specific purposes. The results of assessment are generally used to:
1. Provide essential guide for planning, implementing, and improving instructional programs and techniques;
2. Monitor student progress;
3. Promote learning by providing positive information like knowledge of results, knowledge of tasks well done, good grades
and praise;
4. Measure the outcomes of instruction; and
5. Provide the parents with information on how well their children are doing in school.

Scope of Assessment

The chief purpose of assessment is the improvement of the student. Specifically, it assesses the learning outcomes of instruction
which are:
1. Cognitive behaviors (knowledge and information gained, intellectual abilities);
2. Affective behaviors (attitudes, interests, appreciation, and values); and
3. Psychomotor behaviors (perceptual and motor skills and abilities in performing tasks).

Principles of Assessment

Assessment is an integrated process for determining the nature and extent of student learning and development. This process will
be most effective when the following principles are taken into consideration (Gronlund, 1995):

1. Specifying clearly what is to be assessed is the priority in an assessment. The effectiveness of an assessment depends as much
on a careful description of what needs to be assessed as it does on the technical qualities of the assessment procedure
used. Thus, the specification of the characteristics to be measured should precede the selection or development of assessment
procedures.
2. An assessment procedure should be selected because of its relevance to the characteristics or performance to be measured.
Assessment procedures are frequently selected on the basis of their objectivity, accuracy or convenience. These criteria are
important; however, they are only secondary to the major question asked before the assessment, which is whether the chosen
procedure is the most effective method of measuring the learning or development that needs to be assessed.
3. A comprehensive assessment of student achievement and development requires a variety of procedures. No single type of
instrument or procedure can assess the vast arrays of learning and development outcomes emphasized in a school program.
4. Proper use of assessment procedures requires an awareness of their limitations so that they can be used more effectively. No
test or assessment can include every question or problem needed for comprehensive coverage of the
knowledge, skills, and understanding relevant to the objectives of a course. On the other hand, the limitations of assessment
procedures do not negate the value of tests and other types of assessments.
5. Assessment is a means to an end, not an end in itself. The use of assessment procedures implies that some useful purpose is
being served and that the user is clearly aware of this purpose. Assessment is best viewed as a process of obtaining
information on which to base educational decisions.

Reganit, Reyes, and Marques (2004) listed the following principles for effective classroom assessment:
1. Assessment must be based on a previously accepted set of objectives. Assessment takes place only in relation to the
objectives that have been previously set up.
2. Assessment should be a continuous, cumulative process and must be operative throughout the entire teaching and learning
process.
3. Assessment must recognize that the total individual personality is involved in learning.
4. The assessment process should encourage and give opportunity to the student to become increasingly independent in self-
appraisal and self-direction.
5. Assessment must be done cooperatively.
6. Assessment is positive in nature and promotes action. It includes planning for improvement and overcoming weaknesses.
7. Assessment is governed by true democratic principles.
8. Assessment should include all significant evidence from every possible source.
9. A comprehensive record of the evidence gathered in the process of assessment is necessary to assure an intelligent
interpretation of the data.
10. Assessment should take into consideration the nature of the opportunities and limitations of the educational experiences
provided by the school.

Table 1.2
TRENDS OF CLASSROOM ASSESSMENT
From → To
o Sole emphasis on outcomes → Assessment of processes
o Isolated skills → Integrated skills
o Isolated facts → Application of knowledge
o Paper-and-pencil tasks → Authentic tasks
o Decontextualized tasks → Contextualized tasks
o A single correct answer → Several correct answers
o Secret standards → Public standards
o Secret criteria → Public criteria
o Individuals → Groups
o After instruction → During instruction
o Little feedback → Considerate feedback
o Objective tests → Performance-based tests
o Standardized tests → Informal tests
o External evaluation → Students’ self-evaluation
o Single assessment → Multiple assessments
o Sporadic → Continual
o Conclusive → Recursive

Non-Testing

 Is an alternative form of assessment in the sense that it departs from the paper-and-pen test.

Two Major Non-Testing Techniques:

 Performance-based assessment - is a method to measure skill and product learning targets, as well as knowledge and
reasoning targets. In contrast to paper-and-pen tests, a performance-based assessment requires students to construct an original
response to a task, which is scored through teacher judgment. Students provide explanations, so there is no single correct answer.

 Authentic assessment - involves a performance-based task that approximates what students are likely to do in a real-world
setting. It integrates instruction with an evaluation of student achievement and is based on constructivist learning theory.
Like performance-based assessment, it is most frequently used with reasoning, skill, and product learning targets. The
scoring criteria are the basis for evaluating student performances.

Portfolio

A portfolio is a purposeful collection of student work that exhibits the student’s efforts, progress, and achievements in one or
more areas of the curriculum. The collection must include the following:

1. Student participation in the selection of contents;


2. Criteria for selection;
3. Criteria for judging merits; and
4. Evidence of a student’s self-reflection.

Purpose of Using Portfolio

Portfolios can enhance the assessment process by revealing a range of skills and understanding of students; supporting
instructional goals; reflecting change and growth over a period of time; encouraging student, teacher, and parent reflection; and
providing for continuity in education from one year to the next. Instructors can use portfolios for specific purposes, including:

1. Encouraging self-directed learning;


2. Giving a comprehensive view of what has been learned;
3. Fostering learning about learning;
4. Demonstrating progress toward identified outcomes;
5. Creating an intersection for instruction and assessment;
6. Providing a way for students to value themselves as learners; and
7. Offering opportunities for peer-supported growth.

What are the Different Types of Portfolio?

There are many different types of portfolios, each of which can serve one or more specific purposes as part of an overall school or
classroom assessment program. The following is a list of the types of portfolios most often cited in literature:

1. Documentation Portfolio
Also known as the working portfolio, this type involves a collection of work done over time showing growth and
improvement in the student’s learning of identified outcomes. The documentation portfolio can include everything, from
brainstorming activities to drafts to finished products. The collection becomes meaningful when specific items are chosen to
focus on particular educational experiences or goals. It can include both the best and the weakest of the student’s works.
2. Process Portfolio
This type documents all facets of the learning process. The process portfolio is particularly useful in documenting the
student’s overall learning process. It can show how students integrate specific knowledge or skills and progress toward both
basic and advanced mastery. In addition, the process portfolio emphasizes the students’ reflection on their learning process,
including the use of reflective journals, think logs, and related forms of metacognitive processing.
3. Showcase Portfolio
This type of portfolio is best used for a summative evaluation of the student’s mastery of key curriculum outcomes. It
should contain the student’s best works, determined through a combination of student and teacher selection. Only completed
work should be included. It should also contain the student’s written analyses and reflections on the decision-making
process(es) used to determine which works should be included. The showcase portfolio is especially compatible with audio-
visual artifact development, such as photographs, videotapes, and electronic records of the student’s completed work.

Roles of Assessment in Making Instructional Decisions

Tests and other evaluative procedures can be classified in terms of their functional roles in classroom instruction. One such
classification system follows the sequence in which assessment procedures are likely to be used in the classroom.
These categories classify the assessment of students’ performance in the following manner:

1. Placement Assessment
This is used to determine the students’ entry behavior and performance at the beginning of instruction. The goal of
placement evaluation is to determine each student’s position in the instructional sequence and the mode of instruction that is most
beneficial for him or her.

2. Formative Assessment
This category determines the learning progress of the students. It involves gathering data while a program is
being developed in order to guide the development process. It is likewise used to monitor learning progress during
instruction and to provide continuous feedback to both the students and the teacher concerning learning successes and
failures.
3. Diagnostic Assessment
This is used to diagnose students’ learning difficulties during instruction. It is concerned with recurring learning
difficulties that are left unresolved by the standard corrective prescriptions of formative evaluation.

4. Summative Assessment
This category is used to determine mastery and achievement at the end of the course. It is the process of making an
overall assessment or decision about the program. It is designed to determine the extent to which the instructional objectives
have been achieved and is used primarily for assigning course grades or certifying student mastery of the intended learning
outcome.

Table 1.3
COMPARISON OF FORMATIVE AND SUMMATIVE ASSESSMENT

FORMATIVE ASSESSMENT
o Students are given the opportunity to improve their performance on the same task.
o Students expect feedback on their performance, enabling them to improve their performance on the same task.
o Although diagnostic in nature, the students’ knowledge of their strengths and weaknesses is categorized as formative.

SUMMATIVE ASSESSMENT
o The outcome of the task can neither be repeated nor improved.
o The final grade is released on the assessment task.
o Assessment is done at the end of systematic and incremental learning activities that have formative assessment tasks.

Norm- and Criterion-Referenced Interpretation

Two approaches useful for instructional purposes:


(1) Norm-referenced
(2) Criterion-referenced evaluations

Norm-referenced assessment
– is a type of assessment designed to provide a measure of performance that is interpretable in terms of an individual’s standing
in some known group. It is the comparison of an individual’s progress with the performance of a specified group. If the score of a
student is interpreted by comparing it to those of other individuals (a norm group), this is norm-referencing. The
standards used for comparison are rankings and percentages derived from the performance of the class as a whole. Hence, an individual is
judged as below average, average, above average, third from the top, or the best in class.

The norm-referenced evaluation is used in the following cases:


1. For subject matter that is not cumulative and in which students do not need to reach a specified level of competency;
2. For selection purposes when the institution has limited enrollment slots; and
3. For predicting degrees of success.

Criterion-referenced assessment
– is a type of assessment designed to provide a measure of performance that is interpretable in terms of a clearly defined and
delimited domain of learning tasks. It is the comparison of an individual’s performance with a particular standard, usually a
mastery or competency point. This is used in subject areas that demand mastery and are cumulative and progressively complex. It is
also used in subjects that are included in licensure examinations.

Table 1.4
DIFFERENCES OF NORM- AND CRITERION-REFERENCED ASSESSMENT

Norm-Referenced Assessment
o Covers a large domain of learning tasks
o Emphasizes discrimination among individuals in terms of relative levels of learning
o Favors items of average difficulty and usually omits easy items
o Used primarily, but not exclusively, for survey testing
o Interpretation requires a clearly defined group

Criterion-Referenced Assessment
o Focuses on a delimited domain of learning tasks with a relatively large number of items measuring each task
o Emphasizes description of what learning tasks individuals can and cannot perform
o Matches item difficulty to the learning task without altering item difficulty or omitting easy items
o Used primarily, but not exclusively, for mastery testing
o Interpretation requires a clearly defined and delimited achievement domain

Nature of Measurement

Often, when teachers are given a set of scores of their students, they have difficulty determining the meaning of those scores. If
educators are going to use data successfully in decision making, they must have some knowledge of describing and synthesizing
them. Data differ in terms of which properties of the real number series (order, distance, or origin) are attributed to the scores. The most
common, though not the most refined, classification of scores is nominal, ordinal, interval, and ratio.
The nominal scale is the simplest scale of measurement. It involves the assignment of different numerals to categories that are
qualitatively different. The ordinal scale has the order property of a real number series and gives an indication of rank order. The interval
scale, on the other hand, allows interpretation of the distance between scores. Finally, with the ratio scale, the ratio of two scores has meaning
because there is a meaningful zero point.

B. Once you’re done reading, answer the following questions. Write your responses and any other ideas and reflections on a
separate sheet.
Questions:
1. How does devoting quality time in reading the module affect or influence your self-learning?

2. What are the challenges to successful learning of Assessment of Student Learning 1 in the modular learning delivery
modality? Write down the top three challenges.

3. What steps can I take to overcome any barriers or challenges to finishing this Assessment of Student Learning 1 subject?
List at least three actions.
C. Begin by articulating what you know and what you want to learn about the key concepts in this lesson. Write down
your thoughts on a separate sheet.

CHAPTER 2 - Principles of High-Quality Assessment

OBJECTIVES
At the end of the chapter, the learners are expected to:
 Identify what constitutes high-quality assessment;
 List down the productive and unproductive uses of test; and
 Classify the various types of tests.

Characteristics of High-Quality Assessment


High-quality assessments provide results that demonstrate and improve targeted student learning. They also inform instructional
decision making. To ensure the quality of any test, the following criteria must be considered:
1. Clear and Appropriate Learning Targets
When designing good assessment, start by asking if the learning targets are on the right level of difficulty to be able to
motivate students and if there is an adequate balance among the different types of learning targets.
A learning target is a clear description of what students know and are able to do. Learning targets are categorized by
Stiggins and Conklin (1992) into five.

a. Knowledge learning target is the ability of the student to master a substantive subject matter.
b. Reasoning learning target is the ability to use knowledge and solve problems.
c. Skill learning target is the ability to demonstrate achievement-related skills like conducting experiments, playing
basketball, and operating computers.
d. Product learning target is the ability to create achievement-related products such as written reports, oral presentations, and
art products.
e. Affective learning target is the attainment of affective traits such as attitudes, values, interest, and self-efficacy.

2. Appropriateness of Assessment Methods


Once the learning targets have been identified, match them with their corresponding methods by considering the
strengths of various methods in measuring different targets.

Table 2.1
MATCHING LEARNING TARGETS WITH ASSESSMENT METHODS
ASSESSMENT METHODS
Targets      Objective   Essay   Performance-Based   Oral Question   Observation   Self-Report
Knowledge        5         4             3                 4              3             2
Reasoning        2         5             4                 4              2             2
Skills           1         3             5                 2              5             3
Products         1         1             5                 2              4             4
Affect           1         2             4                 4              4             5

3. Validity
This refers to the extent to which the test serves its purpose or the efficiency with which it intends to measure. Validity is
a characteristic that pertains to the appropriateness of the inferences, uses, and results of the test or any other method utilized
to gather data.
There are factors that influence the validity of the test; namely, appropriateness of test items, directions, reading
vocabulary and sentence structures, pattern of answers, and arrangement of items.

a. How Validity is Determined


Validity is always determined by professional judgment. However, there are different types of evidence to use in
determining validity. The following major sources of information can be used to establish validity:

i. Content-related validity determines the extent to which the assessment is representative of the domain of
interest. Once the content domain is specified, review the test items to ensure that there is a match between
the intended inferences and what is on the test. A test blueprint or table of specifications will help further
delineate which targets should be assessed and what is important in the content domain.

ii. Criterion-related validity determines the relationship between an assessment and another measure of the same
trait. It provides validity by relating an assessment to some valued measure (criterion) that can either provide
an estimate of current performance (concurrent criterion-related evidence) or predict future performance
(predictive criterion-related evidence).

iii. Construct-related validity determines whether an assessment is a meaningful measure of an unobservable trait or
characteristic like intelligence, reading comprehension, honesty, motivation, attitude, learning style, or
anxiety.

iv. Face validity is determined on the basis of the appearance of an assessment: whether, based on a superficial
examination of the test, it seems to be a reasonable measure of the objectives of the domain.

v. Instructional-related validity determines to what extent the domain of content in the test is taught in class.

b. Test Validity Enhancers
The following are suggestions for enhancing the validity of classroom assessment:
i. Prepare a table of specifications (TOS).
ii. Construct appropriate test items.
iii. Formulate directions that are brief, clear, and concise.
iv. Consider the reading vocabulary of the examinees; the test should not be made up of jargon.
v. Make the sentence structure of your test items simple.
vi. Never have an identifiable pattern of answers.
vii. Arrange the test items from easy to difficult.
viii. Provide adequate time for students to complete the assessment.
ix. Use different methods to assess the same thing.
x. Use the test only for intended purposes.
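Enhancer (i) suggests preparing a table of specifications. A TOS maps content areas against cognitive levels so that item counts mirror instructional emphasis. A minimal sketch of how one might be represented and checked; the content areas, level names, and counts are all invented for illustration:

```python
# A table of specifications: content areas x cognitive levels -> item counts.
# All names and counts below are illustrative, not a prescribed blueprint.
tos = {
    "Nature of assessment": {"knowledge": 4, "comprehension": 3, "application": 1},
    "Principles":           {"knowledge": 3, "comprehension": 4, "application": 3},
    "Portfolio":            {"knowledge": 2, "comprehension": 2, "application": 3},
}

# The grand total should equal the planned length of the test.
total_items = sum(sum(levels.values()) for levels in tos.values())
print(total_items)  # 25

# Check balance: tally items per cognitive level across all content areas.
by_level = {}
for levels in tos.values():
    for level, n in levels.items():
        by_level[level] = by_level.get(level, 0) + n
print(by_level)  # {'knowledge': 9, 'comprehension': 9, 'application': 7}
```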

4. Reliability
This refers to the consistency with which a student may be expected to perform on a given test. It means the extent to
which a test is dependable, self-consistent, and stable.

Factors that affect test reliability


(1) The scorer’s inconsistency because of his/her subjectivity
(2) Limited sampling because of the incidental inclusion and accidental exclusion of some materials in the test
(3) Changes in the individual examinee himself/herself, such as instability during the examination
(4) The testing environment

a. How Reliability is Determined


Test reliability is affected by factors such as the length of the test, the difficulty of the test, and the
objectivity of the scorer. There are four methods of estimating the reliability of a good measuring instrument.

i. Test-Retest Method or Test of Stability


The same measuring instrument is administered twice to the same group of subjects. The scores from the first
and second administrations of the test are correlated to obtain a correlation coefficient.
ii. Parallel-Form Method or Test of Equivalence
Parallel or equivalent forms of a test are administered to the same group of subjects, and the paired
observations are correlated.
iii. Split-Half Method
The test in this method may only be administered once, but the test items are divided into two halves. The
common procedure is to divide a test into odd and even items.
iv. Internal-Consistency Method
The test is administered once, and reliability is estimated from the consistency of responses across all
items, commonly through the Kuder-Richardson formulas or Cronbach’s alpha.

b. The Concept of Error in Assessment


The concept of error in assessment is critical to the understanding of reliability. Conceptually, whenever something is
assessed, an observed score or result is produced. This observed score is the sum of the true score, the examinee’s real ability or
skill, and some degree of error.
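This relation is often written as X = T + E (observed score = true score + error). A small simulation, with an invented true score and error distribution, shows how random errors cancel out across repeated measurements, which is why a single observed score is only an estimate of ability:

```python
import random

random.seed(1)  # reproducible illustration

TRUE_SCORE = 40  # the examinee's hypothetical real ability on a test

def observe():
    # observed score = true score + random error from extraneous factors
    return TRUE_SCORE + random.gauss(0, 3)  # errors center on zero

# A single observation can miss the true score by several points,
# but across many repetitions the positive and negative errors cancel.
observations = [observe() for _ in range(1000)]
avg = sum(observations) / len(observations)
print(round(avg, 1))  # close to the true score of 40
```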

c. The Reliability Enhancers


The following should be considered in enhancing the reliability of classroom assessments:

i. Use a sufficient number of items or tasks. A longer test is more reliable.


ii. Use independent raters or observers who can provide similar scores to the same performances.
iii. Make sure the assessment procedures and scoring are objective.
iv. Continue the assessment until the results are consistent.
v. Eliminate or reduce the influence of extraneous events or factors.
vi. Assess the difficulty level of the test.
vii. Use shorter assessments more frequently rather than a few long assessments.
5. Fairness
This pertains to the intent that each question should be made as clear as possible to the examinees and that the test is free
of any bias. An example of bias in an intelligence test is an item about a person or object that has not been part of the
cultural and educational context of the test taker.
6. Positive Consequences
These enhance the overall quality of assessment, particularly the effect of assessments on the students’ motivation and
study habits.
7. Practicality and Efficiency
To determine an assessment’s practicality and efficiency, consider the teacher’s familiarity with the method, the time
required, the complexity of administration, the ease of scoring and interpretation, and the cost.

Productive Uses of Test

Learning Analysis. Tests are used to identify the reasons or causes why students do not learn and the solutions to help them
learn. Ideally, a test should be designed to determine what students do not know so that teachers can take appropriate action.

Improvement of Curriculum. Poor performance in a test may indicate that the teacher is not explaining the material effectively,
the textbook is not clear, the students are not properly taught, or the students do not see the meaningfulness of the materials.

Improvement of Teacher. In a reliable grading system, the class average is the grade the teacher has earned.

Improvement of Instructional Materials. Tests measure how effective instructional materials are in bringing about intended
changes.

Individualization. Effective tests always indicate differences in students’ learning. These can serve as bases for individual help.

Selection. When enrolment opportunity or any other opportunity is limited, a test can be used to screen those who are more
qualified.

Placement. Tests can be used to determine to which category a student belongs.

Guidance and Counselling. Results from appropriate tests, particularly standardized tests, can help teachers and counselors guide
students in assessing future academic and career possibilities.

Research. Tests can be feedback tools used to find effective methods of teaching and to learn more about students, their interests,
goals, and achievements.

Selling and Interpreting the School to the Community. Effective tests help the community understand what the students are
learning, since test items are representative of the content of instruction.

Identification of Exceptional Children. Tests can reveal exceptional students inside the classroom. More often than not, these
students are overlooked and left unattended.

Evaluation of Learning Program. Ideally, tests should evaluate the effectiveness of each element in a learning program, not just
give blanket information on the total learning environment.

Unproductive Uses of Test


Grading. Tests should not be used as the only determinants in grading a student.
Labelling. It is often a serious disservice to label a student, even if the label is positive.
Threatening. Tests lose their validity when used as disciplinary measures.
Unannounced Testing. Surprise tests are generally not recommended.
Ridiculing. This means using tests to deride students.
Tracking. Students are grouped according to deficiencies revealed by tests, without continuous re-evaluation, thus
locking them into categories.
Allocating Funds. Some schools exploit tests to solicit funding.

Table 2.3
COMPARISON BETWEEN TEACHER-MADE TESTS AND STANDARDIZED TESTS

Directions for Administration and Scoring
Teacher-Made Test: Usually, no uniform directions are specified.
Standardized Test: Specific instructions standardize the administration and scoring procedures.

Sampling of Content
Teacher-Made Test: Both content and sampling are determined by the classroom teacher.
Standardized Test: Content is determined by curriculum and subject matter experts. It involves intensive investigation of existing syllabi, textbooks, and programs. Sampling of content is done systematically.

Construction
Teacher-Made Test: May be hurriedly done because of time constraints; often with no test blueprints, item tryouts, item analysis, or revision; the quality of the test may be quite poor.
Standardized Test: Uses meticulous construction procedures that include formulating objectives and test blueprints, and employing item tryouts, item analysis, and item revisions.

Norms
Teacher-Made Test: Only local classroom norms are available.
Standardized Test: In addition to local norms, standardized tests typically make available national, school district, and school building norms.

Purpose and Use
Teacher-Made Test: Best suited for measuring particular objectives set by the teacher and for intraclass comparisons.
Standardized Test: Best suited for measuring broader curriculum objectives and for interclass, school, and national comparisons.

CHAPTER 3 - Social, Legal, and Ethical Implications of Testing

OBJECTIVES
At the end of the chapter, the learners are expected to:
 Evaluate the soundness of the criticisms of testing; and
 Cite the testing principles a teacher must observe.

Criticisms of Testing
In spite of the advantages of testing, some quarters still hurl serious allegations against its use. Below are some criticisms:

Invasion of Privacy. Whether tests represent an invasion of privacy or not depends in part on how they are used. Certainly, there is no
invasion of privacy if the subjects were told how the test results would be used and if they volunteered to take the test.

Anastasi and Urbina (1997) offer some factors to consider in observing the right to privacy:
1. Purpose of testing
2. Relevance of information
3. Informed consent
4. Confidentiality

Chase (1976) derived the following implications for teaching from the 1974 Family Educational Rights and Privacy Act (Buckley
Amendment):
1. Teachers cannot post the grades of the students.
2. Teachers cannot display the work of their students as examples of poor or excellent work.
3. Teachers are not allowed to let students grade or correct other students' papers.
4. Teachers cannot ask students to raise their hands to show whether they answered any item correctly or incorrectly.
5. Teachers cannot distribute test papers in a manner that will permit other students to observe the scores of others.
6. Teachers cannot assume the letters of recommendations requested by students will be kept confidential.

Anxiety in testing can be demonstrated through nail biting, pencil tapping, or squirming. The following suggestions may help motivate
students to prepare for and take examinations without creating unnecessary anxiety:

1. Emphasize tests as tools for diagnosis and mastery rather than as means of punishing students who fail to live up to the expectations of
teachers or parents.
2. Avoid a “sudden death” examination. Keep in mind that passing or failing should not be a function of one test only.
3. Write personal notes on each examination paper encouraging students to keep up the good work or exert more effort.
4. Be sure each item has “face validity.” Items should measure some important aspect of life as perceived by the students.
5. Avoid unannounced examinations.
6. Schedule personal conferences with students as often as needed to reduce anxiety and redirect learning.
7. Emphasize strengths, not deficiencies.
8. Do not emphasize competitive examinations when some students are unable to compete.
9. Treat students’ grades confidentially.
10. Allow students to choose among activities of equal instructional value.

Ethical Testing Principles

Tests presume an ethical, responsible attitude on the part of the examiner and a desire to cooperate on the part of the students.
As in all social interactions, mutual trust and respect must be developed. Relevant to testing are the following ethical principles:
1. Confidentiality. This regulates or controls legal or lawful access. Who shall have access to the test results? Several
considerations, such as the security of test content, the hazards of misunderstanding results, and the need of various persons to
know the results, influence the answer in particular situations.
However, confidentiality may be breached in the following instances:
a. When there is clear, immediate danger to the student and the teacher informs other professionals or authorities.
b. If the students will benefit by talking to other professionals concerned with a case.
c. If the student gives permission for confidential communications to be reported to others.

11
2. Test Security. Tests are professional instruments and, as such, their dissemination is restricted to those with the technical
competence to use them properly. No standardized test should be left unsecured.
3. Test Scores and Interpretations. These should be available only to individuals who are qualified to use them. Test results
should be interpreted to parents in a manner that guards against misuse and misinterpretation.
4. Test Publication. Standardized tests should provide a manual or technical handbook describing how the test can be used most
effectively and who may use it.

Ethical Testing Practices

1. It is both ethical and advantageous to inform students in advance that they are about to take a test and to tell them something
about the nature of the test. They should also be told of the advantages of taking the test and how the results will be used.
2. Teachers should explain the mechanics of taking a test and give students practice in filling out an answer sheet (i.e.,
making heavy marks and erasing stray marks completely). It is, however, essential that the teacher does not make the actual questions
available.
3. It is perfectly proper to try to motivate students to do as well as they can, as long as they are not threatened or made anxious
about their performance.
4. It is essential that all testing materials and results be kept secured before, during, and after testing.
5. It is ethical to combine classes for testing as long as there are adequate proctors to safeguard the test and make sure that the
students are following instructions. The ideal ratio is one proctor to a maximum of 30 students.
6. Once an examination has been administered and scored, it is permissible for teachers to examine the results and determine the
areas of students’ weaknesses.

Unethical Testing Practices

1. To tutor students on the specific subject matter of an expected examination. This destroys the standardized procedures of test
administration and distorts the meaning provided by the scores.
2. To use or give a test item from any part of the test in which only a word or phrase has been changed.
3. To construct or use any practice form that is so similar to the actual test that it reflects the situations, options, or conditions of
the original questions.
4. To copy or distribute the test before the scheduled date of the test.
5. For teachers to use standardized tests or mandated testing programs as their own examinations. Similarly, it is unethical to use
standardized tests as instructional materials.
6. To exclude some students from participating in tests even though the teachers expect them to do poorly. Neither is it ethical to
exclude a whole class on the assumption that they are low achievers.
7. To allow students to use false records, identification papers, unauthorized identification cards, or computer access to official
school documents.
8. To neglect the instruction of one student just to increase the test scores of other pupils. The goal of education is to maximize
the achievement of each pupil, not the attainment of high test scores.
9. To alter the directions, time limits, and scoring procedure.
10. To try to improve student performance by developing items parallel to those on standardized tests.
11. To create anxiety and rivalry about standardized tests among students and between classes and schools. Examinations are not
contests and should not be treated as such.
12. To accept gratuities, gifts or favors that might impair or appear to influence professional decisions or actions regarding
student testing and scores.
13. To disclose information about students obtained in the course of testing, unless disclosure serves a compelling professional
purpose or is required by the school.

D. Reflect on the following questions. Write your answers on a separate sheet.


Questions:
1. Justify whether the exemption of students from taking examinations is an ethical or unethical practice.
2. Suggest other approaches on how students can be motivated to take examinations.
3. Make a list of action steps to ensure that Module 1 of Assessment of Student Learning 1 is learned. List at least five
actions.
1._______________________________________________________________________.
2._______________________________________________________________________.
3._______________________________________________________________________.
4._______________________________________________________________________.
5._______________________________________________________________________.

Prepared by:

RONALDO B. BUELLA
Instructor

