PR2 Week 6
Practical Research 2
Quarter I – Module 4:
Understanding Data and Ways to
Systematically Collect Data
(Week 6)
Subject: Practical Research 2
Grade & Section: Grade 12-ABM
Module No. 4
Week No. 6
Instructor: Ms. Camille N. Cornelio
Objectives:
Lesson 1: Quantitative Research Design
QUANTITATIVE RESEARCH
If the researcher views quantitative design as a continuum, one end of the range
represents designs in which the variables are not controlled at all, only observed;
connections among variables are merely described. At the other end of the spectrum
are designs that involve very close control of variables, where relationships
among those variables are clearly established. In the middle, as experimental design
shifts from one type to the other, lies a range that blends the two extremes.
A. Non-Experimental Research Design
Non-experimental research means there is a predictor variable or a group of
subjects that cannot be manipulated by the experimenter. Typically, this means that
other routes must be used to draw conclusions, such as correlation, survey, or case
study. (Kowalczyk 2015)
1. Survey Research
Remember!
It is very important when conducting survey research that you work with
reputable statisticians and field service agents. Since there is a high level of
personal interaction in survey scenarios, as well as a greater chance for unexpected
circumstances to occur, it is possible for the data to be affected. This can heavily
influence the outcome of the survey.
There are several ways to conduct survey research. Surveys can be done in person,
over the phone, or through mail or email. In the last instance they can be
self-administered. When conducted on a single group, survey research is its own
category.
2. Correlational Research
Correlational research tests for relationships between two variables. It is
performed to establish what effect one variable might have on another and how
that shapes their relationship.
Remember!
Correlational research is conducted in order to explain a noticed occurrence. In
correlational research, the survey is conducted on a minimum of two groups. In most
correlational research there is a level of manipulation involved with the specific
variables being researched. Once the information is compiled, it is analyzed
mathematically to draw conclusions about the effect one variable has on the other.
Correlation does not always mean causation: just because two data points move in
sync doesn't mean there is a direct cause-and-effect relationship. Typically, you
should not draw conclusions from correlational research alone.
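To illustrate the "analyzed mathematically" step, here is a minimal Python sketch that computes the Pearson correlation coefficient, a common measure of the strength of the relationship between two variables. The hours-studied and test-score data are invented for illustration only:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours studied vs. test scores for one group
hours = [1, 2, 3, 4, 5]
scores = [55, 60, 70, 75, 90]
print(round(pearson_r(hours, scores), 2))  # 0.98
```

A value near +1 or -1 indicates a strong relationship; note that even a correlation of 0.98 says nothing by itself about which variable (if either) causes the other.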
3. Descriptive
As stated by Good and Scates, as cited by Sevilla (1998), the descriptive method is
oftentimes referred to as a survey or a normative approach to studying prevailing
conditions.
Remember!
The descriptive method involves the description, recognition, analysis, and
interpretation of conditions that currently exist. Moreover, according to Gay (2007),
descriptive research design involves the collection of data in order to test
hypotheses or to answer questions concerning the current status of the subject of
the study. It determines and reports the way things are.
4. Comparative
Comparative researchers examine patterns of similarities and differences across a
moderate number of cases. The typical comparative study has anywhere from a
handful to fifty or more cases. The number of cases is limited because one of the
concerns of comparative research is to establish familiarity with each case included
in a study. (Ragin 2015)
Like qualitative researchers, comparative researchers consider how the different
parts of each case - those aspects that are relevant to the investigation - fit together;
they try to make sense of each case. Thus, knowledge of cases is considered an
important goal of comparative research, independent of any other goal.
5. Ex Post Facto
According to Devin Kowalczyk (2015), ex post facto design is a quasi-experimental
study examining how an independent variable, present prior to the study, affects a
dependent variable.
Remember!
Both a true experiment and an ex post facto study attempt to show that an
independent variable is causing changes in a dependent variable. This is the basis of
any experiment: one variable is hypothesized to be influencing another, and this is
tested by having an experimental group and a control group. So if you're testing a
new type of medication, the experimental group gets the new medication while the
control group gets the old medication. This allows you to test the efficacy of the
new medication. (Kowalczyk 2015)
B. Experimental Research
Though questions may be posed in the other forms of research, experimental research
is guided specifically by a hypothesis, and sometimes by several hypotheses. A
hypothesis is a statement to be proven or disproved. Once the statement is made,
experiments begin to find out whether it is true or not. This type of research is
the bedrock of most sciences, in particular the natural sciences.
Quantitative research can be exciting and highly informative. It can be used to help
explain all sorts of phenomena. The best quantitative research gathers precise empirical
data and can be applied to gain a better understanding of several fields of study.
(Williams 2015)
Types of Experimental Research
1. Quasi-experimental Research
Quasi-experimental design involves selecting groups upon which a variable is tested,
without any random pre-selection process. For example, to perform an educational
experiment, a class might be arbitrarily divided by alphabetical selection or by
seating arrangement. The division is often convenient, especially in educational
situations, because it causes as little disruption as possible.
2. True Experimental Design
According to Yolanda Williams (2015), a true experiment is a type of experimental
design and is thought to be the most accurate type of experimental research. This
is because a true experiment supports or refutes a hypothesis using statistical
analysis. A true experiment is also thought to be the only experimental design that
can establish cause-and-effect relationships. So, what makes a true experiment?
There are three criteria that must be met in a true experiment:
1. Control group and experimental group
2. Researcher-manipulated variable
3. Random assignment
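As an illustration, the third criterion (random assignment) can be sketched in a few lines of Python; the participant IDs below are hypothetical:

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into a control group and an experimental group."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs
people = [f"P{i:02d}" for i in range(1, 21)]
control, experimental = random_assignment(people, seed=42)
print(len(control), len(experimental))  # 10 10
```

Because chance alone decides who lands in each group, pre-existing differences between participants tend to even out, which is what lets a true experiment attribute changes in the dependent variable to the manipulated variable.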
Lesson 2: Instrument Development
Developing a Research Instrument
Before collecting any data from the respondents, the young researchers will need
to design or devise new research instruments, or adapt instruments from other
studies (the tools they will use to collect the data).
If the researchers are planning to carry out interviews or focus groups, they will
need to plan an interview schedule or topic guide. This is a list of questions or
topic areas that all the interviewers will use. Asking everyone the same questions
means that the data you collect will be much more focused and easier to analyze.
If the group wants to carry out a survey, the young researchers will need to
design a questionnaire. This could be on paper or online (using free software such as
Survey Monkey). Both approaches have advantages and disadvantages.
If the group is collecting data from more than one “type” of person (such as young
people and teachers, for example), it may well need to design more than one interview
schedule or questionnaire. This should not be too difficult, as the young researchers
can adapt additional schedules or questionnaires from the original.
REMEMBER!
Questionnaires should ask people for relevant information about themselves, such as
their gender or age. Don't ask for so much detail that it would be possible to
identify individuals, though, if you have said that the survey will be anonymous.
THE INSTRUMENT
Instrument is the generic term that researchers use for a measurement device (survey,
test, questionnaire, etc.). To help distinguish between instrument and instrumentation,
consider that the instrument is the device and instrumentation is the course of action
(the process of developing, testing, and using the device).
Instruments fall into two broad categories, researcher-completed and
subject-completed, distinguished by whether researchers administer the instrument
or participants complete it themselves. Researchers choose which type of instrument,
or instruments, to use based on the research question.
USABILITY
Usability refers to the ease with which an instrument can be administered, interpreted
by the participant, and scored/interpreted by the researcher. Example usability
problems include:
Students are asked to rate a lesson immediately after class, but there are only a few
minutes before the next class begins (problem with administration).
Students are asked to keep self-checklists of their after school activities, but the
directions are complicated and the item descriptions confusing (problem with
interpretation).
Teachers are asked about their attitudes regarding school policy, but some questions are
worded poorly which results in low completion rates (problem with
scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues.
For now, we can identify five usability considerations:
How long will it take to administer?
Are the directions clear?
How easy is it to score?
Do equivalent forms exist?
Have any problems been reported by others who used it?
VALIDITY
Validity is the extent to which an instrument measures what it is supposed to
measure and performs as it is designed to perform. It is rare, if not impossible,
for an instrument to be 100% valid, so validity is generally measured in degrees. As a
process, validation involves collecting and analyzing data to assess the accuracy of
an instrument. There are numerous statistical tests and measures to assess the
validity of quantitative instruments, which generally involves pilot testing. The
remainder of this discussion focuses on external validity and content validity.
External validity is the extent to which the results of a study can be generalized
from a sample to a population. Establishing external validity for an instrument,
then, follows directly from sampling. Recall that a sample should be an accurate
representation of a population, because the total population may not be available.
An instrument that is externally valid helps obtain population generalizability, or
the degree to which a sample represents the population.
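As a quick illustrative sketch of the sampling idea behind external validity, the Python snippet below draws a simple random sample from an invented population of exam scores; with random selection, the sample mean tends to track the population mean:

```python
import random
from statistics import mean

def simple_random_sample(population, n, seed=None):
    """Draw a simple random sample of size n, without replacement."""
    rng = random.Random(seed)
    return rng.sample(population, n)

# Hypothetical population: exam scores of 1,000 students (mean ~75, sd ~10)
rng = random.Random(0)
population = [rng.gauss(75, 10) for _ in range(1000)]

sample = simple_random_sample(population, 100, seed=1)
print(round(mean(sample), 1), round(mean(population), 1))  # the two means are close
```

The closer the sampling procedure is to giving every member of the population an equal chance of selection, the stronger the claim to population generalizability.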
RELIABILITY
Reliability can be thought of as consistency. Does the instrument consistently
measure what it is intended to measure? It is not possible to calculate reliability
exactly; however, there are four general estimators that you may encounter in reading
research: inter-rater (or inter-observer) reliability, test-retest reliability,
parallel-forms reliability, and internal consistency reliability.
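For instance, internal consistency is often estimated with Cronbach's alpha, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch, with invented 5-point Likert responses:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: one list of respondent scores per questionnaire item.
    """
    k = len(item_scores)
    sum_item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical Likert responses: 3 items, 4 respondents
items = [
    [4, 3, 5, 2],
    [4, 2, 5, 3],
    [5, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```

Higher alpha (closer to 1) means the items move together and so appear to measure the same underlying construct; a common rule of thumb treats alpha of about 0.7 or above as acceptable.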
Lesson 3: Guidelines in Writing Research Methodology
While writing this section, be direct and precise, and write it in the past tense.
Include enough information so that others could repeat the experiment and evaluate
whether the results are reproducible, and so that the audience can judge whether the
results and conclusions are valid.
The explanation of the collection and the analysis of your data is very important
because:
Readers need to know the reasons why you chose a particular method or procedure
instead of others.
Readers need to know that the collection or the generation of the data is valid in
the field of study.
Discuss the anticipated problems in the process of the data collection and the steps
you took to prevent them.
Present the rationale for why you chose specific experimental procedures.
Provide sufficient information of the whole process so that others could replicate
your study.
You can do this by giving a completely accurate description of the data collection
equipment and techniques, and by explaining how you collected and analyzed the data.
Specifically:
Present the basic demographic profile of the sample population like age, gender,
and the racial composition of the sample. When animals are the subjects of a study,
you list their species, weight, strain, sex, and age.
Explain how you gathered the samples/subjects by answering these questions: Did
you use any randomization techniques? How did you prepare the samples?
Explain how you made the measurements by answering this question: What
calculations did you make?
Describe the materials and equipment that you used in the research.
Describe the statistical techniques that you used on the data.
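As a small sketch of the first step above, reporting a basic demographic profile, the Python snippet below tabulates simple descriptive statistics; the respondent records are invented:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical respondent records: (age, gender)
respondents = [(17, "F"), (18, "M"), (17, "F"), (19, "F"), (18, "M")]

ages = [age for age, _ in respondents]
genders = Counter(g for _, g in respondents)

print(f"n = {len(respondents)}")
print(f"mean age = {mean(ages):.1f}, sd = {stdev(ages):.2f}")
print(f"gender breakdown = {dict(genders)}")
```

Numbers like these (sample size, mean and standard deviation of age, gender counts) are exactly what the demographic paragraph of a methodology section typically reports.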
DIRECTIONS: Write a reflection relating reliability and validity, at least 250 words.
(25 points)
Rubrics:
Depth of Reflection: 25%
Required Components: 25%
Structure: 25%
Evidence and Practice: 25%
References:
http://people.uwec.edu/piercech/researchmethods/data%20collection%20methods/
data%20collection%20methods.htm
http://www.socialresearchmethods.net/kb/sampprob.php
http://www.stat.ncsu.edu/info/srms/survpamphlet.html
http://www.statcan.ca/english/edu/power/ch2/methods/methods.htm
http://www.statisticssolutions.com/quantitative-research-approach/
http://study.com/academy/lesson/true-experiment-definition-examples.html
http://study.com/academy/lesson/non-experimental-and-experimental-research-differences-advantages-disadvantages.html