Jocelyn Quinto-Midterm Requirements
Jocelyn D Quinto
Quiz #01
Task #01: For each principle of high-quality assessment (PHQA), write about personal experiences in which your former or
present teachers complied with or violated these PHQA. Use three (3) to five (5) sentences for each PHQA.
Some of my teachers seem not to have undergone seminars on ethical decision making. Why? Because
they used to embarrass students when they were not satisfied with what their students had submitted.
Sometimes we students really do make mistakes; even when we want to do our best to give the main
point of the lesson that our teacher wants to hear from us, we fail, because we do not yet know and
understand everything in each lesson. Teachers must be aware that embarrassing students has never been
a way to make students dig deeper.
14. Ethics (complied)
Results of any test or assessment should be communicated to each student in a way that keeps others from
having access to that information. Most of my college professors distributed our grades privately; this is the best
way to make students feel calm regardless of the result of their academic performance.
ASL 1-Midterm Requirements
Jocelyn D Quinto
Task #02: Create your own 'meme' about PHQA. A meme is used to express ideas or real scenarios through
humorous and sarcastic images and pieces of text.
Quiz #02
Task #01: Read 5 to 10 research papers or articles about the importance of establishing validity
and reliability of assessment tools of teachers.
In order for assessments to be sound, they must be free of bias and distortion. Reliability
and validity are two concepts that are important for defining and measuring bias and
distortion. Validity and reliability are two quality indicators for classroom tests. Validity refers
to the degree to which a test is measuring what it is supposed to measure, while reliability is
an indication of the consistency between two measures of the same test. A test may be highly
reliable but not necessarily valid, whereas a highly valid test is usually reliable.
A basic knowledge of test score reliability and validity is important for making
instructional and evaluation decisions about students. The purpose of testing is to obtain a
score for an examinee that accurately reflects the examinee’s level of attainment of a skill or
knowledge as measured by the test. Since instructors assign grades based on assessment
information gathered about their students, the information must have a high degree of validity
in order to be of value. Assessment data collected will be influenced by the type and number
of students being tested. This variance in student groups from semester to semester will affect
how difficult or easy test items and tests will appear to be. This variance in scores from group
to group makes reliability and validity an important consideration when developing and
administering assessments and evaluating student learning.
Reliability of the result of an assessment, which may be in the form of a test score or
summary grade or mark, is the extent to which it can be said to be accurate and not influenced
by, for instance, the particular occasion or who does the marking or grading. Thus reliability is
'often identified as, and measured by, the extent to which, if the assessment were to be
repeated, the second result would agree with the first' (Harlen, 2000, p. 111). When it is not
possible to give the same test twice to the same pupils, or to repeat observations of a
particular event assessed by teachers, other procedures are adopted. In the case of tests,
these include using parallel forms of the test, or splitting the test randomly into two halves and
comparing the scores or correlating the items with the total score and averaging the result. In
the case of observation of tasks, the equivalent procedures are to compare the rating of the
same event by two independent raters.
Validity refers to what is assessed and how well this corresponds with the behaviour or construct
that it is intended to test or assess. The distinction between reliability and validity is clear in, for
example, the case of a multiple-choice test of knowledge about materials that conduct electricity.
This would not be a valid assessment of understanding of an electric circuit, although it would
give a score of quite high reliability (Harlen, 2000). Validity, however, is not a simple concept and
various forms of it are identified according to the basis of the judgement of validity. Evidence
relating to the content validity of an assessment would result from comparing the content
assessed with the content of a curriculum it was intended to assess.
It is well recognized that the concepts of reliability and validity are not independent of each
other in practice. The relationship is usually expressed in a way that makes reliability the prior
requirement. The argument is that an assessment that does not have high reliability cannot
have high validity; if there is uncertainty about the accuracy of the assessment and it is
influenced by a number of different factors, then the extent to which it measures what it is
intended to measure must also be uncertain.
In order to improve the quality of selected-response tests that will be used again, poorly
functioning items need to be identified so they can be fixed, eliminated, or replaced.
Since reliability and validity are not independent of each other - and increasing one tends
to decrease the other - it is useful in some contexts to refer to dependability as a combination
of the two. The approach to summative assessment by teachers giving the most dependable
result would protect construct validity, while optimising reliability.
Classroom tests are routinely designed and administered by teachers. To be of real value
they must be valid and reliable. Test validity and reliability may be achieved from the steps
taken throughout the design and administration stages. Two of the most effective methods
that could be employed to enhance reliability and validity are constructing a table of
specifications and carrying out a pilot study on the newly designed test. For increased
efficiency, teachers may decide to work in teams to design and develop classroom tests. Lastly,
although following the recommended measures previously discussed does not provide a
guarantee for a perfect and valid test, it can certainly help teachers from getting it totally
wrong.
References:
- https://teachingcommons.unt.edu/teaching-essentials/assessment/why-reliability-and-validity-are-important-learning-assessment
- https://www.researchgate.net/publication/242477615_Assessment_of_learning_outcomes_validity_and_reliability_of_classroom_tests
- https://www.thegraidenetwork.com/blog-all/2018/8/1/the-two-keys-to-quality-testing-reliability-and-validity
Task #02: Use (Microsoft) Excel to solve the reliability coefficient of the given problems. Complete the
table provided in each problem.
1. Teacher Mark administered a test to his 10 students in his Assessment of Learning class twice,
with a two-day interval. The test given after two days is exactly the same test given the first
time. The scores below were gathered in the first test (FT) and second test (ST). Using the
test-retest method, is the test reliable?
Student FT ST
1 35 34
2 21 21
3 40 40
4 27 27
5 23 25
6 28 26
7 34 35
8 35 34
9 18 19
10 36 38
2. Teacher Magdalena administered a test to her 10 students in Linear Algebra class two times,
with a one-week interval. The test given after one week is a parallel form of the test given
the first time. The scores below were gathered in the first test (FT) and the second, parallel
test (PT). Using the equivalent or parallel-form method, is the test reliable?
Student FT PT
1 5 35
2 50 2
3 4 40
4 7 45
5 3 48
6 28 6
7 4 35
8 35 4
9 8 19
10 36 8
4. Prof. Bella administered a 50-item multiple-choice test in Social Science to her senior high
school students. Below are the scores of 20 students; find the reliability using the Kuder-
Richardson formula.
Student Score
6 33
7 32
8 34
9 41
10 15
11 23
12 25
13 27
14 44
15 12
16 26
17 27
18 32
19 33
20 27
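KR-20 needs item-level right/wrong data, which the table does not provide; with only total scores, the KR-21 approximation can be used instead, since it needs just the number of items, the mean, and the variance of the scores. A sketch in Python, using the scores as listed above (the table as printed shows rows for students 6 through 20 only):

```python
import statistics

K = 50  # number of items on the test

# Total scores as listed in the table (students 6-20 as printed)
scores = [33, 32, 34, 41, 15, 23, 25, 27, 44, 12, 26, 27, 32, 33, 27]

mean = statistics.mean(scores)
var = statistics.pvariance(scores)  # variance of the total scores

# KR-21: reliability estimate from item count, mean, and variance
kr21 = (K / (K - 1)) * (1 - (mean * (K - mean)) / (K * var))
print(round(kr21, 2))
```

For these scores KR-21 comes out around 0.83, which is conventionally read as good reliability for a classroom test. Note that KR-21 assumes all items are of roughly equal difficulty and tends to underestimate KR-20 when they are not.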
Quiz #03
Task #01: Construct a graphic organizer to summarize the process of conducting item analysis.
Process of Conducting Item Analysis
Count off the N test results from the top of the arranged scores; these constitute the upper group.
Count off the N test results from the bottom of the arranged scores; these constitute the lower group.
Set aside the middle results.
Note: 30% of 60 is 18, so consider only the top 18 and bottom 18 (the students with the highest and lowest scores).
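The grouping steps above, followed by the standard difficulty and discrimination indices, can be sketched in code. The data below are hypothetical (not from the assignment), and the 30% cut follows the note:

```python
def item_analysis(totals, item_correct, group_frac=0.30):
    """Item analysis for a single test item.

    totals        -- each examinee's total test score
    item_correct  -- 1 if that examinee answered this item right, else 0
    group_frac    -- fraction used for the upper/lower groups (30% per the note)
    """
    n = max(1, round(len(totals) * group_frac))
    # Arrange examinees by total score, highest first
    ranked = sorted(zip(totals, item_correct), reverse=True)
    upper = [c for _, c in ranked[:n]]   # top group's answers to this item
    lower = [c for _, c in ranked[-n:]]  # bottom group's answers to this item
    difficulty = (sum(upper) + sum(lower)) / (2 * n)   # proportion correct
    discrimination = (sum(upper) - sum(lower)) / n     # upper-lower index
    return difficulty, discrimination

# Hypothetical: 10 examinees' total scores and their answers to one item
totals = [48, 45, 44, 40, 38, 35, 30, 28, 25, 20]
item = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
p, d = item_analysis(totals, item)
print(p, d)
```

A high discrimination index means the item separates the upper group from the lower group well; an index near zero (or negative) flags an item that should be fixed, eliminated, or replaced.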