Asm Long


9. Explain measurement scales with suitable examples.

SCALES OF MEASUREMENT
In Statistics, variables or numbers are defined and categorized using different scales of measurement. Each
level of measurement has specific properties that determine which statistical analyses are appropriate. In this
answer, we will look at the four types of scales: nominal, ordinal, interval and ratio.
WHAT IS A SCALE?
A scale is a device or an object used to measure or quantify an event or another object.

LEVELS OF MEASUREMENTS
There are four different scales of measurement, and any data can be classified as belonging to one of these four
scales. The four types of scales are:
1. Nominal Scale
2. Ordinal Scale
3. Interval Scale
4. Ratio Scale

I. NOMINAL SCALE
A nominal scale is the 1st level of measurement scale in which the numbers serve as “tags” or “labels” to
classify or identify the objects. A nominal scale usually deals with the non-numeric variables or the numbers that
do not have any value.

CHARACTERISTICS OF NOMINAL SCALE
 A nominal scale variable is classified into two or more categories. In this measurement mechanism, the
answer should fall into either of the classes.
 It is qualitative. The numbers are used here only to identify the objects.
 The numbers do not define the object's characteristics. The only permissible operation on numbers in the
nominal scale is “counting.”

Example:
An example of a nominal scale measurement is given below:
What is your gender?

M- Male

F- Female

Here, the variables are used as tags, and the answer to this question should be either M or F.

II. ORDINAL SCALE


The ordinal scale is the 2nd level of measurement that reports the ordering and ranking of data without
establishing the degree of variation between them. Ordinal represents the “order.” Ordinal data is known
as qualitative data or categorical data. It can be grouped, named and also ranked.
CHARACTERISTICS OF THE ORDINAL SCALE
 The ordinal scale shows the relative ranking of the variables
 It identifies and describes the magnitude of a variable
 Along with the information provided by the nominal scale, ordinal scales give the rankings of
those variables
 The interval properties are not known
 The surveyors can quickly analyze the degree of agreement concerning the identified order of
variables
EXAMPLE:
 Ranking of school students – 1st, 2nd, 3rd, etc.
 Ratings in restaurants
 Evaluating the frequency of occurrences
i. Very often
ii. Often
iii. Not often
iv. Not at all
 Assessing the degree of agreement
1. Totally agree
2. Agree
3. Neutral
4. Disagree
5. Totally disagree
III. INTERVAL SCALE
The interval scale is the 3rd level of measurement scale. It is defined as a quantitative measurement scale
in which the difference between two values is meaningful. In other words, the values are measured along a
scale with equal intervals; the zero point, however, is arbitrary and does not indicate a true absence of the quantity.
CHARACTERISTICS OF INTERVAL SCALE:
 The interval scale is quantitative as it can quantify the difference between the values
 It allows calculating the mean and median of the variables
 To understand the difference between two variables, you can subtract one value from the other
 The interval scale is a preferred scale in Statistics as it allows numerical values to be assigned to
arbitrary assessments such as feelings, calendar dates, etc.
EXAMPLE:
I. Likert Scale
II. Net Promoter Score (NPS)
III. Bipolar Matrix Table
IV. RATIO SCALE
The ratio scale is the 4th level of measurement scale, which is quantitative. It is a type of variable
measurement scale that allows researchers to compare differences or intervals. The unique feature of the
ratio scale is that it possesses a true zero point, i.e., an absolute origin.
CHARACTERISTICS OF RATIO SCALE:
 Ratio scale has a feature of absolute zero
 It doesn’t have negative numbers, because of its zero-point feature
 It affords unique opportunities for statistical analysis. The values can be meaningfully added,
subtracted, multiplied, and divided. Mean, median, and mode can be calculated using the ratio
scale.
 Ratio scale has unique and useful properties. One such feature is that it allows unit conversions,
such as kilograms to grams or calories to kilocalories.

EXAMPLE:
An example of a ratio scale is: What is your weight in kgs?

 Less than 55 kgs
 55 – 75 kgs
 76 – 85 kgs
 86 – 95 kgs
 More than 95 kgs
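
The arithmetic that each scale supports can be made concrete with a short illustration. The following is a minimal sketch in Python, using invented sample values, showing that nominal data can only be counted, ordinal data can be ranked, interval data has meaningful differences but not meaningful ratios (zero is arbitrary), and ratio data supports the full range of operations.

```python
from collections import Counter
from statistics import mean, median

# Nominal: labels can only be counted.
genders = ["M", "F", "F", "M", "F"]
print(Counter(genders))                      # frequency per category

# Ordinal: values can be ranked, but the gaps between ranks are unknown.
responses = ["Agree", "Totally agree", "Neutral"]
order = ["Totally disagree", "Disagree", "Neutral", "Agree", "Totally agree"]
print(sorted(responses, key=order.index))    # ranking is meaningful

# Interval: differences are meaningful, ratios are not (zero is arbitrary).
temps_celsius = [10.0, 20.0, 30.0]
print(temps_celsius[2] - temps_celsius[0])   # a 20 degree difference is meaningful
# 30 C is NOT "three times as hot" as 10 C, because 0 C is an arbitrary zero.

# Ratio: true zero, so ratios (and all arithmetic) are meaningful.
weights_kg = [55.0, 76.0, 95.0]
print(mean(weights_kg), median(weights_kg))
print(weights_kg[2] / weights_kg[0])         # "about 1.7 times heavier" is meaningful
```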

10. What is assessment? Discuss the general principles of assessment.

MEANING OF ASSESSMENT

In education, the term assessment refers to the wide variety of methods that educators use to evaluate,
measure, and document the academic readiness, learning progress, and skill acquisition of students from
preschool through college and adulthood. It is the process of systematically gathering information as
part of an evaluation. Assessment is carried out to see what children and young people know,
understand and are able to do. Assessment is very important for tracking progress, planning next steps,
reporting, and involving parents, children and young people in learning.

GENERAL PRINCIPLES OF ASSESSMENT

1. Validity: Assessment tools and methods should measure what they are intended to measure. Validity
ensures that the assessment accurately reflects the knowledge, skills, or attributes it is designed to evaluate.

2. Reliability: Reliable assessments yield consistent results over time and across different evaluators. A
reliable assessment tool should produce similar results under consistent conditions.

3. Fairness: Assessments should be fair and free from bias. They should not disadvantage any particular group
of individuals based on factors such as race, gender, socioeconomic status, or cultural background.

4. Transparency: The assessment process, criteria, and expectations should be clear and transparent to both
the assessors and those being assessed. This helps in understanding how judgments are made.

5. Authenticity: Assessments should reflect real-world tasks and situations whenever possible. Authentic
assessments provide a more accurate representation of an individual's abilities and skills.
6. Practicality: Assessments should be practical in terms of time, resources, and ease of administration. They
should not place an undue burden on assessors or those being assessed.

7. Multiple Methods: A variety of assessment methods should be used to gather a comprehensive picture of
an individual's abilities. This might include written tests, practical demonstrations, interviews, portfolios, and
observations.

8. Feedback: Providing constructive feedback is an essential component of the assessment process. It helps
individuals understand their strengths and weaknesses, facilitating improvement.

11. What is meant by Validity? Explain in detail about its types.

Validity
Validity is the quality of a data-gathering instrument which enables it to measure what it is supposed to
measure. Validity refers to the degree to which the test actually measures what it claims to measure.
It is also the extent to which inferences, conclusions and decisions made on the basis of test
scores are appropriate and meaningful. A test with high validity has items closely linked to the test's
intended focus. A test with poor validity does not measure the content and competencies it ought to.
Validity encompasses the entire experimental concept and establishes whether the results obtained meet
all of the requirements of the scientific research method. It is a quality of a measurement indicating the
degree to which the measure reflects the underlying construct, that is, whether it measures what it
purports to measure.
Different Methods of Validity
Sometimes validity is also thought of as utility. Basic to the validity of a tool is measuring the right thing or
asking the right questions. The items of a questionnaire or inventory must appropriately sample a significant
aspect of the purpose of the investigation. Validity is not an absolute characteristic; it depends on the purpose
and the method used. The six categories of validity are content validity, construct validity, criterion-related
validity, concurrent validity, predictive validity and face validity.
Content Validity
Content validity refers to the connection between the test items and the subject-related tasks. It is
judged by the degree of relationship between the diagnostic techniques and achievement in the curriculum.
The content validity of an academic achievement test in a subject is examined by checking the test items
against the complete course of study. The test should evaluate only the outline of the content related to
the field of study in a manner that is sufficiently representative, relevant, and comprehensible. The test
items are based on an outline of the content indicating the kinds of knowledge and abilities the students
are expected to demonstrate. The overall judgment is based on the extent of agreement between the test
and the instructional plan.
Construct Validity
Construct validity is the relationship between the results of a technique of measurement and other
indicators of the characteristics that are measured. It implies using the construct (concepts, ideas,
and notions) in accordance with the state of the art in the field. Construct validity seeks agreement between
updated subject-matter theories and the specific measuring components of the test. This type of
validation is often used for measures of a psychological characteristic that is assumed to exist by
empirical or theoretical deduction. For example, general mental ability comprises independent factors such as
verbal ability, number ability, perceptual ability, spatial ability, reasoning ability and memory ability.
In order to establish the construct validity of a test, it may be necessary to correlate its results with those
of other tests.
Criterion-Related Validity
Also referred to as instrumental validity, it is used to demonstrate the accuracy of a measure or procedure
by comparing it with another process or method which has been demonstrated to be valid. For example,
imagine a hands-on driving test has been proved to be an accurate test of driving skills. A written test
can then be validated by using a criterion-related strategy in which the hands-on driving test is compared to it.
Concurrent Validity
Concurrent validity refers to the usefulness of a test in closely relating to measures or scores on another
test of known validity. Tests are validated by comparing their results with a test of known validity.
Concurrent validity indicates the relationship between a measure and more or less immediate behavior or
performance of identifiable groups. Concurrent validity is considered when a test is used for the purpose
of distinguishing between two or more groups of individuals whose status at the time of testing is
different. Concurrent validity relies on statistical methods of correlation with other measures. Once the
tests have been scored, the relationship between the examinees' status and their performance (i.e., pass or
fail) is estimated based on the test.
Predictive Validity
Predictive validity refers to the usefulness of a test in predicting some future performance. Predictive
validity is measured by the degree of relationship between a measure and a subsequent criterion measure
or judgment. This type of validity is used in tests of intelligence, tests of aptitude, vocational interest
inventories and projective techniques. It is especially useful for test purposes such as
selection or admissions.
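
Concurrent and predictive validity are usually reported as a validity coefficient: the correlation between scores on the test and scores on the criterion measure (an established test for concurrent validity, or a later performance measure for predictive validity). The following is a minimal sketch in Python with invented scores, shown only to illustrate the computation.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = sqrt(sum((a - mean_x) ** 2 for a in x))
    sy = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores on a new test and on a criterion measure of known validity.
new_test  = [45, 60, 72, 55, 80, 66, 50, 75]
criterion = [48, 58, 75, 50, 85, 70, 47, 78]

r = pearson_r(new_test, criterion)
print(f"validity coefficient r = {r:.2f}")   # values closer to 1.0 indicate stronger evidence
```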
Face Validity
Face validity is the extent to which a test appears, on the surface, to measure what it is actually intended to
measure. It is determined by a review of the items and not through the use of statistical analyses. Unlike
content validity, face validity is not investigated through formal procedures. Instead, anyone who looks
over the test, including examinees, may develop an informal opinion as to whether or not the test is
measuring what it is supposed to measure. Face validity is, however, not a suitable measure of validity on
its own; it can sometimes be misleading.

12. What is moderation? Explain the different types of standardization.

Moderation
Moderation is the process of assuring that assessment criteria have been applied consistently and that assessment
outcomes are fair and reliable. Modules must be moderated in accordance with the procedures set out in the
University Assessment Policy. There are two kinds of moderation:

Internal moderation
What is internal moderation?
Internal moderation is undertaken by UW staff to demonstrate that the grades awarded are reliable and
consistent. The purpose of internal moderation is to ensure that academic standards are appropriate and consistent
across course/subject teams and that feedback reflects agreed assessment policies and assessment criteria, so
that the assessment outcomes for students are fair and reliable.

How do I do internal moderation?


A UW assessor marks the set of student assignments, providing a grade and comments to justify the grade, and a
second UW assessor (the moderator) then reviews a sample of marked assignments (normally through blind or
non-blind double marking) from across the grade profile. The moderator's role is to confirm (or not) the grades
awarded by the first marker, and the quality of the feedback, in the light of course/University protocols and
expectations. A moderation report must be compiled for each module run. The moderation report must be sent to
the External Examiner.

Moderation is normally undertaken using double marking of a sample of assessments in accordance with the
University Assessment Policy; other methods of moderation are detailed in the policy.

Allocation of moderators can be undertaken in a variety of ways:


 Allocation in pairs, where markers moderate each other’s work
 Allocation where each marker has an identified moderator
 Random allocation where a moderator is assigned or self-assigned

Moderation can occur electronically between identified pairs or at a single event where team members meet and
moderate together.

Where a module is run as part of collaborative provision, moderation should include representation from all
relevant partners.

When do I do internal moderation?


Internal moderation is normally carried out on a sample basis, in order to corroborate the accuracy of the
marking standards and quality of feedback applied by the first marker. It is the most usual form of moderation
activity, and should be used for all assessments where other forms of moderation do not apply. Internal
moderation should be completed within the 20 working days assessment feedback period and before provisional
grades are made available to the students. All summative assessments should be subject to internal and external
moderation.

What do I do if the first and second grades are different?


Differences between markers should be resolved by discussion and agreement in the first instance. Where
agreement cannot be reached, the assessment will be third marked, usually by the module leader.

In this case, after the module leader has third-marked, they will consider whether the result of third marking
indicates that the marking practices of the first or second marker require further investigation and/or action.
This requires discussion with both markers and consideration of a range of further measures including:

 No further action
 Additional third marking of a further sample of assessments marked by either or both markers.
 Additional use of blind or non-blind moderation for assessments marked by a specified marker
 Remarking of assessments marked by either marker or both.

Before releasing provisional grades to students, the module leader will review the moderation process as entered
on the form, assessing the levels of disagreement between markers and the agreed grades. Where levels of
disagreement are consistently above one whole grade band, they will consider a range of further measures
including:

 No further action
 Review of a further sample of assessment marked by either or both markers.
 Additional use of blind or non-blind moderation for assessments marked by a specified marker
 Remarking of assessments marked by either marker or both

Justification for decisions will be reported on the moderation report.

External moderation
What is external moderation?
External moderation is undertaken by experienced academic peers (External Examiners), independent of the
University, to ensure that the level of achievement of students reflects appropriate academic standards.

How do I do external moderation?


A minimum sample of 15% of the work for each item of assessment for individual modules must be made
available to the External Examiner(s), as described in section 12.18 of the Assessment Policy. External
Examiners are not expected to arbitrate in the event of disagreement between first and second markers, and are
not expected to change grades for individual items of student work.
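
As a rough illustration of how such a sample might be assembled, the following minimal Python sketch (with hypothetical grade records; the 15% minimum is the figure quoted above) draws at least 15% of the marked work while spreading the sample across the grade profile, in the same spirit as internal moderation sampling.

```python
import math
import random
from collections import defaultdict

# Hypothetical marked assignments: (student id, grade band awarded by the first marker).
marked = [("s01", "A"), ("s02", "B"), ("s03", "B"), ("s04", "C"), ("s05", "C"),
          ("s06", "C"), ("s07", "D"), ("s08", "A"), ("s09", "B"), ("s10", "F"),
          ("s11", "C"), ("s12", "D"), ("s13", "B"), ("s14", "A"), ("s15", "C")]

def moderation_sample(marked, minimum_fraction=0.15, rng=random):
    """Select at least `minimum_fraction` of the work, taking at least one
    assignment from every grade band so the sample spans the grade profile."""
    by_band = defaultdict(list)
    for record in marked:
        by_band[record[1]].append(record)
    target = max(math.ceil(len(marked) * minimum_fraction), len(by_band))
    sample = [rng.choice(group) for group in by_band.values()]   # one item per band
    remaining = [r for r in marked if r not in sample]
    sample += rng.sample(remaining, max(0, target - len(sample)))
    return sample

print(moderation_sample(marked))
```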

When do I do external moderation?


External moderation can take place after the 20 working days assessment feedback period and after provisional
grades are made available to the students. Assessment relating to level 4 modules in three-year degree courses is
not normally subject to external moderation after the first year of delivery.

Moderation checklist
 A formal published statement of standardisation and moderation procedures should be included as an
annexe to the Student Course Handbook. The statement must specify how differences between markers
are to be resolved

 Where a course is taught across different sites or through different partnerships, the course management
team must specify in the formal statement the moderation arrangements across the sites or partnerships.

 Minimum requirements apply to the internal moderation of all summative student assessments, as
described in section 12.12 of the Assessment Policy

 Where a course or module is delivered at more than one site, the External Examiner should be provided
with the provisional statistical profile of grades for each site of delivery, so that they are able to comment
on the marking and student achievement standards for each delivery site.

 The moderation report must be completed and sent to the External Examiner
13. What is item analysis? How does it improve classroom teaching?

Item analysis is a statistical technique which is used for selecting and rejecting the items of a test on
the basis of their difficulty value and discriminating power.
 Item analysis is a process which examines student responses to individual test items (questions)
in order to assess the quality of those items and of the test as a whole.
 Item analysis is especially valuable in improving items which will be used again in later tests, but
it can also be used to eliminate ambiguous or misleading items in a single test administration.
 In addition, item analysis is valuable for increasing instructors’ skills in test construction, and it is
an important tool to uphold test effectiveness and fairness.
 Item analysis is likely something educators do both consciously and unconsciously on a regular
basis.
 In fact, grading literally involves studying student responses and the pattern of student errors,
whether to a particular question or particular types of questions.
 But when the process is formalized, item analysis becomes a scientific method through which
tests can be improved, and academic integrity upheld.
Need of the Item Analysis:

The following are the needs of a test:


 To select the candidates
 To classify the candidates
 To provide the ranking to the candidates
 To promote the candidates
 To frame the statements about the future behavior of the candidates
 To establish the individual differences among the candidates
To achieve the above objectives, a test needs to have appropriate items so that it can differentiate
individuals into different categories such as superior, average and inferior. Item analysis is therefore
carried out for selecting the appropriate items for the final form of the test.
The following may be the objectives of item analysis:
 To select appropriate items for the final draft
 To obtain information about the difficulty value (D.V.) of all the items
 To determine the discriminating power of items, i.e., how well they differentiate between capable
and less capable examinees
 To identify modifications to be made in some of the items
 To prepare the final draft properly (arranging items from easy to difficult)
Therefore, we can say that the following may be the functions of item analysis:
 Item analysis can increase the efficacy of the exams by testing knowledge accurately.
 Item analysis not only can drive exam design, but it can also inform course content and
curriculum.
 When it comes to item difficulty, it’s important to note whether errors indicate a
misunderstanding of the question or of the concept the item addresses.
 When a large number of students answer an item incorrectly, it’s notable. It may be a matter of
fine-tuning a question for clarity; is the wording of the question confusing? Are the answers
clear?
 It could be that the material may have to be reviewed in class, possibly with a different learning
approach.
Following three functions are the main functions of Item Analysis:
1. Selecting the appropriate items for the test
2. Rejecting inappropriate items, and
3. Modification in the structure of the items
Characteristics of an item:
There are two main characteristics of an item
I. Difficulty Value or Pass Percentage
“The difficulty value of an item is defined as the proportion or percentage of the examinees who answer
the item correctly.”
II. Discriminating Power
Discriminating power can further be divided into two parts:
Item Reliability – “Item reliability may be defined as the degree to which an item differentiates between high
and low groups on the basis of total scores on the same test.”
Item Validity – “Item validity may be defined as the degree to which an item differentiates between high
and low groups on the basis of some criterion test score.”
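
Both characteristics can be computed directly from scored responses. The following is a minimal sketch in Python with invented data: the difficulty value is the proportion of examinees answering the item correctly, and the discrimination index compares correct answers in the upper and lower groups (here the top and bottom 27% of examinees by total test score, a commonly used fraction).

```python
def item_difficulty(item_scores):
    """Difficulty value (pass percentage): the proportion of examinees who
    answered the item correctly. `item_scores` holds 1 (correct) or 0 (incorrect)."""
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores, group_fraction=0.27):
    """Discrimination index D = (correct in upper group - correct in lower group) / group size,
    where the groups are the top and bottom fractions ranked by total test score."""
    n = max(1, round(len(total_scores) * group_fraction))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = ranked[:n], ranked[-n:]
    return (sum(item_scores[i] for i in upper) - sum(item_scores[i] for i in lower)) / n

# Hypothetical data: ten examinees' 0/1 scores on one item and their total test scores.
item   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
totals = [78, 34, 65, 80, 40, 72, 30, 85, 60, 45]

print("difficulty value:", item_difficulty(item))              # 0.6 -> moderately difficult item
print("discrimination D:", item_discrimination(item, totals))  # positive -> favours high scorers
```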
14. What are the guidelines for preparing multiple-choice questions as objective-based test items? Explain.

Constructing objective test items, particularly multiple-choice questions (MCQs), is a common and effective
method of assessing knowledge and understanding. Here are some guidelines for creating multiple-choice
items:

1. Clear Stem:

 Ensure that the stem (the main part of the question) is clear and concise.

 State the problem or question directly.

2. Avoiding Negative Phrasing: Avoid using negatives in the stem unless you specifically want to test the
understanding of negative concepts.

3. Plausible Distractors:

 Include distractors (incorrect options) that are plausible and could be chosen by someone who
misunderstands the material.

 Ensure that all distractors are relevant to the content being assessed.

4. Similar Length and Format: Make sure all answer choices are of similar length and format to avoid
giving away the correct answer.

5. Avoiding Tricky Wording:

 Avoid tricky or confusing wording that may mislead the student.

 Ensure that the language used is appropriate for the target audience.

6. Single Best Answer:

 Each question should have only one correct answer.

 Avoid creating questions with multiple correct answers unless specifically intended (e.g., multiple
true/false).

7. Avoiding Absolute Terms: Be cautious with absolute terms like "always" or "never." These can make the
options easier to eliminate.
8. Randomizing Answer Order: If possible, randomize the order of the answer choices to reduce the chance
of position-based guessing (see the sketch after this list).

9. Complete Options: Ensure that each answer choice is a complete and grammatically correct statement
when read with the stem.
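
Guideline 8 (randomizing answer order) is straightforward to automate. The following is a minimal, hypothetical sketch in Python; the question record and its field names are assumptions made for the example, not part of any particular testing system.

```python
import random

# Hypothetical MCQ record: a stem, its options, and the correct answer.
question = {
    "stem": "Which scale of measurement has a true zero point?",
    "options": ["Nominal", "Ordinal", "Interval", "Ratio"],
    "answer": "Ratio",
}

def randomized(question, rng=random):
    """Return a copy of the question with the answer choices shuffled.
    The key is tracked by value, so shuffling never loses the correct answer."""
    options = question["options"][:]
    rng.shuffle(options)
    return {**question, "options": options, "answer_index": options.index(question["answer"])}

print(randomized(question))
```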

15. Discuss the strengths and weaknesses of the portfolio.

Let us now look at the merits of a portfolio. A portfolio can:

 Demonstrate knowledge, skills and attitudes of the learner while applying the principles of
instructional design;
 Showcase the learners' strengths and interests and their understanding of instructional design;
 Present a record of a student's work over a period of time. The working portfolio may show
how the design has been conceptualized and progressed from an abstract idea to a concrete
model;
 Enable measurement of student achievement over a period of time;
 Act as an evaluation tool for performance appraisal, personal assessment and goal setting;
 Provide diverse information about the learner under one cover;
 Act as an effective tool for reflection by students as well as peer sharing;
 Help the learner in self-assessment;
 Provide the learner with opportunities to select a unit or block for preparing a portfolio according
to his/her choice.

Let us also identify some demerits of a portfolio:

 It is time consuming to collect and select the resources for developing portfolios.


 It needs continuous updating to stay relevant and current.
 A paper portfolio may be bulky and not easily portable.
 ePortfolios require application of technology and skills by the learner.
Portfolios are purposeful, organized, systematic collections of student work that tell
the story of a student's efforts, progress, and achievement in specific areas. The
student participates in the selection of portfolio content, the development of
guidelines for selection, and the definition of criteria for judging merit. Portfolio
assessment is a joint process for instructor and student.
Portfolio assessment emphasizes evaluation of students' progress, processes, and
performance over time. There are two basic types of portfolios:
A process portfolio serves the purpose of classroom-level assessment on the part of
both the instructor and the student. It most often reflects formative assessment,
although it may be assigned a grade at the end of the semester or academic year. It
may also include summative types of assignments that were awarded grades.
A product portfolio is more summative in nature. It is intended for a major
evaluation of some sort and is often accompanied by an oral presentation of its
contents. For example, it may be used as an evaluation tool for graduation from a
program or for the purpose of seeking employment.
In both types of portfolios, emphasis is placed on including a variety of tasks that
elicit spontaneous as well as planned language performance for a variety of purposes
and audiences, using rubrics to assess performance, and demonstrating reflection
about learning, including goal setting and self and peer assessment.
PORTFOLIO CHARACTERISTICS:
 Represent an emphasis on language use and cultural understanding
 Represent a collaborative approach to assessment
 Represent a student's range of performance in reading, writing, speaking, and
listening as well as cultural understanding
 Emphasize what students can do rather than what they cannot do
 Represent a student's progress over time
 Engage students in establishing on-going learning goals and assessing
their progress towards those goals
 Measure each student's achievement while allowing for individual
differences between students in a class
 Address improvement, effort, and achievement
 Allow for assessment of process and product
 Link teaching and assessment to learning.
16. Discuss the types of reporting methods in educational institutions.
Same as question 7 in the short questions.
