STM - Abm.hms126 Unit 1 Lesson 1
CONTEXT
LEARNING OUTCOMES
At the end of this lesson, the learner:
a. constructs an instrument and establishes its validity and reliability;
b. describes characteristics, processes, and ethics of research; and
c. presents and interprets data in tabular and graphical form.
VALUES INTEGRATION
a. Competence
Students will be able to demonstrate competence in understanding and organizing
research findings. They will be able to explain the concepts of validity and reliability,
adhere to ethical principles when gathering data, and apply appropriate methods to
analyze the data they collect.
b. Character
Students will develop character traits such as thoroughness, integrity, and attention to
detail when organizing research findings. They will display a commitment
to conducting research with high standards, ensuring the validity and reliability of their
measurement instruments, and upholding ethical principles in their data collection and
analysis.
BIG IDEA
“By mastering the art of constructing valid instruments, presenting data effectively, and
upholding ethical principles, learners unlock the foundation for conducting impactful
research with integrity and precision.”
EXPERIENCE
PRELECTION: EMOJI SURVEY
Instructions: Accomplish this activity in pairs. Below is a set of questions that you should
answer individually, but only through EMOJIS. You may use one or more emojis to answer
each question. Once you are finished, ask your partner to interpret your answers and compare
their interpretation to your own. Did your partner interpret your answers correctly?
Questions:
1. How do you feel about Mondays?
2. What’s your favorite genre of music?
Partner’s Interpretation:
______________________________________________________________________________
______________________________________________________________________________
Your Interpretation:
______________________________________________________________________________
______________________________________________________________________________
In this activity, you might have noticed that different students chose different emojis to
represent their responses to the same question. This variation in responses is similar to what
researchers encounter when designing research instruments: some participants might interpret
questions differently or provide inconsistent answers, which affects the reliability of the data
collected.
CONCEPT NOTES
Considering the importance of a research instrument to be used in data gathering, it is imperative
that the validity and reliability of the instrument be established.
The validity of an instrument is the extent to which it measures what it is intended to
measure. The reliability of an instrument is the degree of consistency with which it measures
what it is intended to measure.
VALIDITY TESTING
Validity refers to how accurately an instrument measures what the researchers intend to measure.
It shows how well the results among the participants represent the true findings among similar
individuals outside of the study.
Example:
A researcher made a qualitative questionnaire guide and presented it to a group of
college students and professors to get feedback on whether the items appear to elicit the
experiences that the researchers intend to capture.
There are several ways to assess the validity of an instrument. However, the following are often
used:
1. Content Validity. This is where researchers ask experts in an area to review the instrument
and assess whether it adequately and completely covers the characteristics being measured.
Example:
Researcher A has crafted an Algebra test to measure the mathematical competency of
senior high school students. It is important that the test covers every form of algebra that was
taught in class to ensure an accurate indication of students' understanding of the subject. If
certain types of algebra are left out, the results may not be an accurate measure of students'
comprehension. Similarly, if questions unrelated to algebra are included, the validity of the test
as a measure of algebra knowledge may be compromised. Therefore, Researcher A sought the
assistance of a mathematics teacher to review the instrument, knowing that the teacher has the
necessary knowledge and experience.
2. Criterion Validity. This is where researchers ask experts in an area to review the instrument
and assess whether it is helpful in predicting the characteristics being measured, such as a
person’s behavior.
Example:
Researcher B crafted an English test to measure the writing ability of senior high
school students in English. They asked an English professor to assess how well the test really
measures students’ writing ability. They also found an existing test that is considered a valid
measurement of English writing ability and compared the results when the same group of
students took both tests. The outcomes were very similar; therefore, the new test has high
criterion validity.
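For illustration only, the comparison described above can be quantified as a correlation between
the two sets of scores. The short Python sketch below uses hypothetical scores for the same eight
students on the new test and on the established test:

import numpy as np

# Hypothetical scores of the same eight students on the new writing test and on an
# established test that is already accepted as a valid measure (both out of 100).
new_test = [78, 85, 62, 90, 70, 88, 55, 80]
existing_test = [75, 88, 60, 93, 68, 85, 58, 82]

# Pearson correlation between the two sets of scores; a value close to 1 means the new
# test ranks students much like the established measure, which supports criterion validity.
r = np.corrcoef(new_test, existing_test)[0, 1]
print(round(r, 2))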
RELIABILITY TESTING
Reliability refers to how consistently an instrument measures what it is intended to measure. A
reliable instrument yields similar results each time it is used under the same conditions.
Example:
Researcher Cee is asking different people to guess someone's weight using a survey
instrument. If he repeats this process several times, it's likely that each person's guess will be
different each time. This inconsistency shows that the method of measurement is unreliable.
On the other hand, if he uses a weight scale in the survey to measure someone's weight, he
can expect to obtain a similar value each time the person steps on the scale, if their actual
weight hasn't changed between measurements. This makes the weight scale a more reliable
way of measuring weight in the survey.
There are several ways to measure the reliability of an instrument. The table below shows the
two most relevant tests that you may use.

RELIABILITY TEST | PURPOSE | TYPE OF SURVEY
Cronbach's Alpha | Measures the internal consistency of a scale or instrument with multiple items | Likert scale, multiple-choice, or any survey with multiple items
Kuder-Richardson Formula 20 (KR-20) | Measures the internal consistency of a test made up of dichotomous (binary) items | True/false, yes/no, or any survey with binary-response items
Cronbach's alpha is a statistical measure that assesses the internal consistency or reliability of a
scale or questionnaire. It is commonly used in research to evaluate how closely related a set of
items or questions are in measuring a particular construct or trait.
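In standard notation, Cronbach's alpha for an instrument with k items is computed from the item
variances and the variance of the total score:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_T^{2}}\right)
\]

where the numerator of the fraction sums the variance of each individual item and the denominator
is the variance of the respondents' total scores.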
Example:
Suppose a student creates a survey with several questions intended to measure how happy
students are at school. This is where Cronbach's alpha comes in. Cronbach's alpha is like a
quality check for the survey. It helps the student determine if the questions are working well
together and if they are reliable in measuring happiness at school. By calculating Cronbach's
alpha, the student
can see how closely related the questions are to each other. If the alpha value is high, it means
that the questions are strongly correlated and that they are consistently measuring happiness
at school. This is good news because it shows that the survey is reliable, and the questions are
doing their job effectively.
On the other hand, if the alpha value is low, it suggests that the questions might not be
working well together or may not be consistently measuring happiness at school. In this case,
the student may need to revise or remove some questions to improve the reliability of the
survey.
A higher Cronbach's alpha suggests that the items in the scale are highly correlated and reliably
measure the construct of interest. On the other hand, a lower alpha value indicates that the items
are less consistent or may not be measuring the same construct effectively. As a common rule of
thumb, an alpha value of 0.70 or higher is considered acceptable, with values above 0.80
indicating good reliability and values above 0.90 indicating excellent reliability.
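As a rough illustration of how this is computed, the Python sketch below applies the formula given
earlier to a small set of hypothetical Likert-scale responses (five respondents, four items); in
practice, statistical software or a spreadsheet is normally used:

import numpy as np

def cronbach_alpha(scores):
    # scores: one row per respondent, one column per item
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)        # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 5 respondents answering 4 Likert-scale items (1-5)
pilot_responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(pilot_responses), 2))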
If the survey contains dichotomous (binary) questions, an alternative to Cronbach’s alpha is the
Kuder-Richardson Formula 20 (KR-20).
KR-20, like Cronbach's alpha, evaluates how well a set of items or questions in a scale or test
measures the same underlying construct. The same rule of thumb given above may also be used
to interpret the reliability of the survey.
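For dichotomous items coded as 1 (yes/correct) and 0 (no/incorrect), a similar Python sketch of the
KR-20 computation could look like this; the responses are again hypothetical:

import numpy as np

def kr20(answers):
    # answers: one row per respondent, one column per item, coded 1 or 0
    answers = np.asarray(answers, dtype=float)
    k = answers.shape[1]
    p = answers.mean(axis=0)                          # proportion answering each item with 1
    q = 1 - p                                         # proportion answering each item with 0
    total_variance = answers.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

# Hypothetical pilot data: 6 respondents, 5 dichotomous items
pilot_answers = [
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
]
print(round(kr20(pilot_answers), 2))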
A pilot test, also known as a pilot study or pilot phase, refers to a small-scale trial or preliminary
test conducted before the full implementation of a research study or project. The purpose of a pilot
test is to evaluate and refine the research design, methodology, data collection instruments, or
procedures before undertaking the main study.
During a pilot test, researchers typically select a smaller sample size (often a subset of the target
population) and carry out a trial run of the study procedures. This allows them to identify any
potential issues or challenges that may arise during the full-scale implementation and make
necessary adjustments to enhance the validity, reliability, or feasibility of the study.
Guidelines in Conducting the Pilot Testing:
To make sure that pilot testing is done effectively, follow these guidelines:
1. Select a representative sample: Choose a smaller group of participants that accurately
represents the target population. For QUALITATIVE research, interview at least 2
participants, and for QUANTITATIVE research, select at least 15 participants.
2. Pre-test the instruments: Administer the data collection instruments (e.g., surveys,
interview guides) to participants, ensuring clarity and addressing any confusion.
3. Evaluate instrument clarity: Assess if the instruments are easy to understand, avoiding
ambiguity or confusion.
4. Assess item relevance: Determine if the items capture the intended information
accurately, avoiding redundancy or missing key aspects.
5. Analyze pilot data: Analyze the collected data to identify patterns or issues, utilizing
measures like Cronbach's Alpha or KR-20.
GUIDED PRACTICE
Instructions: Given a research question and instrument, determine how you will test the validity
by using content or criterion validity, and test reliability by calculating internal consistency using
Cronbach's alpha or KR-20.
Research Question:
How does caffeine consumption affect sleep quality among teenagers?
INDEPENDENT PRACTICE
Instructions: Based on YOUR RESEARCH OBJECTIVES, determine how you will test the
validity by using content or criterion validity, and test reliability by calculating internal consistency
using Cronbach's alpha or KR-20.
How will you test for VALIDITY?
• ____________________________________________________________________
____________________________________________________________________
• ____________________________________________________________________
____________________________________________________________________
• ____________________________________________________________________
____________________________________________________________________
• ____________________________________________________________________
____________________________________________________________________
• ____________________________________________________________________
____________________________________________________________________
REFLECTION-ACTION
Directions: Look back at the lesson proper and the activity you have done. Please answer the
following processing questions within the provided lines.
1. How did your understanding of instrument validity and reliability improve throughout this
learning experience?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
2. How will you continue to uphold high standards and ethical principles in your future
research endeavors, particularly in relation to instrument validity and reliability?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
3. How can the skills developed in this lesson be used outside of research, such as in your
everyday life?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
EVALUATION
Written Work:
PILOT TESTING: Conduct a pilot test of your research instrument. For QUALITATIVE
research, interview at least 2 participants, and for QUANTITATIVE research, select at least 15
participants. Administer the instrument, collect feedback, analyze the data, reflect on the findings,
make necessary improvements, and consider conducting a second pilot test if significant changes
were made. The pilot test is crucial for enhancing the quality and effectiveness of your research
instrument before the main data collection phase. Seek guidance from your instructor if needed.
• Number of participants:
• Demographic characteristics:
• Did participants find any items unclear or confusing?
Yes, specify:
No
• Reliability analysis:
• Suggestions for improvement:
Example:
• Number of participants: 30
• Demographic characteristics: Students from Grade 11, mixed gender and varied
academic backgrounds.
• Did participants find any items unclear or confusing?
Yes, specify: Some participants found certain questions confusing due to complex
wording.
• Reliability analysis: Cronbach's Alpha was calculated for internal consistency and
yielded a value of 0.82.
• Suggestions for improvement: Revise certain items for better clarity and include
additional questions to capture students' prior exposure to the teaching method.
The pilot test highlighted the need for improvements in questionnaire clarity and item design. The
Cronbach's Alpha value indicated good internal consistency of the questionnaire. Based on the
feedback and findings, adjustments will be made to enhance the questionnaire for the main data
collection phase.
For EXPERIMENTAL STUDIES only:
Instructions: Using the Experiment Procedure Form, list down all the materials and equipment
that will be used for the experiment. Obtain the signature of an expert as proof of the
instrument's validity and reliability.
EXPERIMENT PROCEDURE FORM
Title:
Purpose/Problem:
Hypothesis:
Materials/Supplies:
Controlled Variables:
Procedure:
Name and Signature of Evaluator:
EXAMPLE
GROUP NO.: 1
SECTION: ANDLAUER
NAME/S: CRUZ, DIANE T.