Summary For Taofikat
Research may be very broadly defined as the systematic gathering of data and information, and its analysis, for the advancement of knowledge in any subject. Research attempts to find answers to questions through systematic inquiry; it has been defined as "investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws".
Some people consider research as a movement, a movement from the known to the unknown. It
is actually a voyage of discovery. We all possess the vital instinct of inquisitiveness for, when
the unknown confronts us, we wonder and our inquisitiveness makes us probe and attain full and
fuller understanding of the unknown. This inquisitiveness is the mother of all knowledge and the
method, which man employs for obtaining the knowledge of whatever the unknown, can be
termed as research. Research is an academic activity and as such the term should be used in a
technical sense. According to Clifford Woody, research comprises defining and redefining problems; formulating hypotheses or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the formulated hypothesis. D. Slesinger and M. Stephenson, in the Encyclopaedia of Social Sciences, define research as "the manipulation of things, concepts or
symbols for the purpose of generalizing to extend, correct or verify knowledge, whether that
knowledge aids in construction of theory or in the practice of an art.” Research is, thus, an
original contribution to the existing stock of knowledge making for its advancement. It is the
pursuit of truth with the help of study, observation, comparison and experiment. In short, the
search for knowledge through objective and systematic method of finding solution to a problem
is research. The systematic approach concerning generalization and the formulation of a theory is
also research. As such the term ‘research’ refers to the systematic method consisting of
enunciating the problem, formulating a hypothesis, collecting the facts or data, analyzing the
facts and reaching certain conclusions, either in the form of solution(s) to the concerned problem or in certain generalizations for some theoretical formulation.
RESEARCH PROCESS
Finding an issue or formulating a research question is the first step. A well-defined research
problem will guide the researcher through all stages of the research process, from setting
objectives to choosing a technique. There are a number of approaches to get insight into a topic, such as:
A preliminary survey
Case studies
Observational survey
Survey of literature
A thorough examination of the relevant studies is essential to the research process. It enables the
researcher to identify the precise aspects of the problem. Once a problem has been found, the researcher reviews earlier studies to learn what has already been done, how those studies were conducted, and what they concluded. The researcher can build consistency between his
work and others through a literature review. Such a review exposes the researcher to a more
significant body of knowledge and helps him follow the research process efficiently.
Selection of hypothesis
Formulating an original hypothesis is the next logical step after narrowing down the research
topic and defining it. A hypothesis states a tentative logical relationship between variables. While formulating a hypothesis, researchers must keep in mind that it must be based on the research topic. A well-framed hypothesis enables researchers to concentrate their efforts and stay committed to their objectives.
Collection of information
Data collection is important in obtaining the knowledge or information required to answer the research issue. Every research study collects data, either from the literature or from the people being studied. These data come from two broad categories of sources, primary and secondary, and may be collected through:
Experiment
Questionnaire
Observation
Interview
Literature survey
RESEARCH PROBLEMS
Research Problem/Topic and Research Questions
A research question is the identified and carefully framed question, based on the literature and on identified existing problems, which the researcher wants to answer through systematic study/investigation. Research questions are thus the specific questions asked by the researcher with the intention of finding answers to them. A
research topic/problem means the identified problem that warrants conducting the study.
Although the two (research questions and topic/problem) are inter-twined and or related, they are
different. The research problem has to be clearly identified, formulated for a proper literature
review, relevant and robust data collection as well as the successful conduct of a study. The
research question also has to be worthy of investigation; otherwise it makes the research insignificant. The research problem is key to the conduct of every study and is a major determinant of which method and/or design should appropriately be used in the study: qualitative, quantitative, mixed, case study, etc. Any research problem to be addressed must take into consideration the relevance and significance of the problem to the audience, the availability of and accessibility to data, the contribution/change it will make to existing conditions, and whether the research can be successfully concluded within the stipulated/available time, energy and resources. Identifying the problem also calls for an appropriate review of literature. This review enables the researcher to measure up and understand
what have so far been done on the problem/topic, how such were done (methods/designs), who
did what, what strengths and weaknesses are there in what have been done and what gaps are
there that need to be filled by future studies. Overall, the literature review provides the researcher with a broad overview of existing knowledge on the problem/topic.
RESEARCH DESIGN
This is an outline, main plan and/or basic scheme of conducting a particular research using
some specific means of collection and measurement of primary, secondary or both data, and
procedural analysis of the same data in order to address/answer some research questions and or
test some hypotheses. In research, several designs are used, and each is important, unique and appropriate in its context of usage. Research designs, seen as 'a general strategy for conducting a research study', are numerous and vary from one researcher or scholar to another. Generally, however, several research designs have been identified, some of which are discussed below.
In Descriptive Research Design, the scholar explains/describes the situation or case in depth in
their research materials. This type of research design is purely on a theoretical basis where the
individual collects data, analyses, prepares and then presents it in an understandable manner. It is
the most generalised form of research design. To explore one or more variables, a descriptive
design might employ a wide range of research approaches. Unlike in experimental research, the
researcher does not control or change any of the variables in a descriptive research design;
instead, he or she just observes and measures them. In other words, while qualitative research may also be utilised for descriptive reasons, a descriptive research design is typically quantitative in nature.
Experimental research is a type of research design in which the study is carried out utilising a
scientific approach and two sets of variables. The first set serves as a constant against which the
variations in the second set are measured. Experimentation is used in quantitative research
methodologies, for example. If you lack sufficient evidence to back your conclusions, you must first establish the facts. Experimental research collects data to assist you in making better decisions. The researcher must be able to demonstrate that a variable change is due only to modification of the constant variable. The study should
identify a noticeable cause and effect. The traditional definition of experimental design is “the
strategies employed to collect data in experimental investigations.” There are three types of
experimental designs:
Pre-experimental research design
True experimental research design
Quasi-experimental research design
Diagnostic research design is a type of research design that tries to investigate the underlying
cause of a certain condition or phenomenon. It can assist you in learning more about the
elements that contribute to certain difficulties or challenges that your clients may be
experiencing. This design typically consists of three research stages: inception of the issue, diagnosis of the issue, and solution for the issue.
Explanatory research is a method established to explore phenomena that have not before been
researched or adequately explained. Its primary goal is to notify us about where we may get a
modest bit of information. With this strategy, the researcher obtains a broad notion and uses
research as a tool to direct them more quickly to concerns that may be addressed in the future. Its
purpose is to discover the why and what of a subject under investigation. In short, it is a type of
research design that is responsible for finding the why of the events through the establishment of
cause-effect relationships.
Variables, Measurement, and Scaling Technique
VARIABLES
In this universe, there are two types of entities: one varies over different situations, and the other remains fixed. An entity which varies over different situations (e.g., time, place, or individual) is called a variable, while an entity which remains fixed is called a constant.
Discrete Variable
A discrete variable is one which takes only an integer value within a given range. For example,
the number of grains per panicle of a particular variety of paddy varies between 40 and 60 grains.
Dependent Variable
A dependent variable is a type of variable whose values are dependent on the values taken by the
other variables and their relationship. Generally in relational studies, a variable is influenced/
affected by other related variables. In a production function analysis, for example, there exists a functional relationship in which output (the dependent variable) depends on the levels of the inputs used.
Explanatory Variables
Independent variables are sometimes known as explanatory variables. Any variable which explains the variation in the dependent variable is an explanatory variable. In a simple regression analysis, there are only one predictor and one response variable.
Stimulus Variable
The idea of stimulus and response variables is familiar in agriculture, socioeconomic, and
clinical studies. A stimulus is a type of treatment applied to the respondents to record their
response.
In clinical studies generally, the doses, concentrations, different chemicals, etc., form a
stimulus, whereas the response may be in the form of quantal response or quantitative response.
MEASUREMENT
Measurement is the assignment of numbers to objects or events according to scientific rules. It is the process of observing and recording the observations that are collected as part of a research effort. Measurement means the description of data in terms of numbers, resting on accuracy, objectivity and communication; the combined form of these three is the actual measurement. Accuracy: the accuracy is as great as the care and the instruments of the observer will permit. Objectivity: objectivity means interpersonal agreement; where many persons reach agreement as to observations and conclusions, the descriptions of nature are more objective. Communication: the results of measurement must be communicable to others.
Levels Of Measurement
Measurement is important in the process of data collection. Researchers need to measure several characteristics of variables such as human subjects, animals, objects and events. Level of measurement refers to the relationship among the values that are assigned to the attributes of a variable. It is important because knowing the level of measurement helps you decide how to interpret the data from that variable. When you know that a measure is nominal, then you know that the numerical values are just short codes for the longer names. The most widely used classification of measurement scales comprises: (a) nominal scale, (b) ordinal scale, (c) interval scale, and (d) ratio scale.
(a) Nominal Scale-
The nominal scale (also called dummy coding) simply places people, events, perceptions, etc.
into categories based on some common trait. Some data are naturally suited to the nominal scale
such as males vs. females, white vs. black vs. blue, and American vs. Asian. The nominal scale
forms the basis for such analyses as Analysis of Variance (ANOVA) because those analyses
require that some category is compared to at least one other category. Coding of nominal scale
data can be accomplished using numbers, letters, labels, or any symbol that represents a category
into which an object can either belong or not belong. In research activities a Yes/No scale is
nominal.
(b) Ordinal Scale-An ordinal level of measurement uses symbols to classify observations into
categories that are not only mutually exclusive and exhaustive; in addition, the categories have
some explicit relationship among them. Most of the commonly used questions which ask about
job satisfaction use the ordinal level of measurement. For example, asking whether one is very
satisfied, satisfied, neutral, dissatisfied, or very dissatisfied with one’s job is using an ordinal
scale of measurement.
(c) Interval Scale-In the case of interval scale, the intervals are adjusted in terms of some rule
that has been established as a basis for making the units equal. The units are equal only in so far
as one accepts the assumptions on which the rule is based. Interval scales can have an arbitrary
zero, but it is not possible to determine for them what may be called an absolute zero or the
unique origin. An interval level of measurement classifies observations into categories that are
not only mutually exclusive and exhaustive, and have some explicit relationship among them,
but the relationship between the categories is known and exact. This is the first quantitative
application of numbers. In the interval level, a common and constant unit of measurement has
been established between the categories. For example, the commonly used measures of
temperature are interval level scales. We know that a temperature of 75 degrees is one degree
warmer than a temperature of 41 degrees. Numbers may be assigned to the observations because
the relationship between the categories is assumed to be the same as the relationship between
numbers in the number system. For example, 74 + 1 = 75 and 41 + 1 = 42. The intervals between the categories are equal.
(d) Ratio Scale-The ratio level of measurement is the same as the interval level, with the
addition of a meaningful zero point. There is a meaningful and non-arbitrary zero point from
which the equal intervals between categories originate. For example, weight, area, speed, and
velocity are measured on a ratio level scale. In public policy and administration, budgets and the
number of program participants are measured on ratio scales. In many cases, interval and ratio
scales are treated alike in terms of the statistical tests that are applied. A ratio scale is the highest level of measurement.
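The interval/ratio distinction can be made concrete with the temperature example above. The Python sketch below (values chosen arbitrarily) shows that differences behave the same on an interval scale (Celsius) and a ratio scale (Kelvin), while ratios are only physically meaningful on the latter:

```python
# A minimal sketch of the interval-vs-ratio distinction, using temperature.
# Celsius is an interval scale (arbitrary zero); Kelvin is a ratio scale
# (absolute zero), so only Kelvin supports meaningful ratios.

def to_kelvin(celsius):
    """Convert Celsius (interval scale) to Kelvin (ratio scale)."""
    return celsius + 273.15

# Differences are meaningful on both scales, and they agree:
diff_c = 75 - 41                          # 34 degrees Celsius apart
diff_k = to_kelvin(75) - to_kelvin(41)    # also 34 kelvin apart

# Ratios are only meaningful on the ratio scale:
ratio_c = 75 / 41                         # NOT "almost twice as hot"
ratio_k = to_kelvin(75) / to_kelvin(41)   # a true physical ratio (~1.11)
```

Because Celsius has an arbitrary zero point, 75 degrees is not "nearly twice as hot" as 41 degrees; converting to Kelvin, which has an absolute zero, yields the physically meaningful ratio.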
SCALING OF MEASUREMENT
Scaling is the branch of measurement that involves the construction of an instrument that
associates qualitative constructs with a quantitative metric. Stevens stated the simplest and most straightforward definition of scaling: "the assignment of objects to numbers according to a rule". In most scaling, the objects are text statements, usually statements of attitude or belief. People often confuse the idea of a scale with that of a response scale.
Psychologist Robert Thurstone was one of the first and most productive scaling theorists. He
actually invented three different methods for developing a unidimensional scale: the method of
equal-appearing intervals; the method of successive intervals; and the method of paired
comparisons. The three methods differed in how the scale values for items were constructed, but
in all three cases, the resulting scale was rated the same way by respondents. Thurstone is best known for the method of equal-appearing intervals. This technique for developing an attitude scale compensates for the limitation of the Likert scale in that the strength of the individual items is taken into account in computing the attitude score. It also can accommodate neutral attitudes. The procedure is as follows:
1. Collect statements on the topic from people holding a wide range of attitudes, from extremely
favorable to extremely unfavorable. For this example, attitude toward the use of Yaba. Example
statements are - It has its place. Its use by an individual could be the beginning of a sad situation.
3. Originally, judges were asked to sort the statements into eleven (11) stacks
representing the entire range of attitudes from extremely unfavourable (1) to extremely
favourable (11). The middle stack is for statements which are neither favourable nor
unfavourable (6). Only the end points (extremely favourable and extremely unfavourable) and
the midpoint are labelled. The assumption is the intervening stacks will represent equal steps
along the underlying attitude dimension. With a large number of judges, for example, using a
class or some other group to do the preliminary ratings, it is easier to create a paper-and-pencil
version.
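As a rough illustration of how equal-appearing intervals yields scale values, the Python sketch below uses made-up judge placements (both the statements and the ratings are hypothetical). A statement's scale value is the median of the stack numbers assigned by the judges, and highly ambiguous statements (large interquartile range) would be discarded:

```python
# A hypothetical sketch of scoring in Thurstone's method of equal-appearing
# intervals: each judge places a statement in one of 11 stacks
# (1 = extremely unfavourable, 11 = extremely favourable).
from statistics import median, quantiles

# Made-up judge placements for two illustrative statements.
judge_ratings = {
    "It has its place.": [6, 7, 6, 5, 7, 6, 6],
    "Its use could be the beginning of a sad situation.": [2, 3, 2, 1, 3, 2, 2],
}

def scale_value(ratings):
    """Scale value = median stack number assigned by the judges."""
    return median(ratings)

def ambiguity(ratings):
    """Interquartile range; highly ambiguous statements are discarded."""
    q = quantiles(ratings, n=4)   # [Q1, Q2, Q3]
    return q[2] - q[0]

for statement, ratings in judge_ratings.items():
    print(statement, scale_value(ratings), ambiguity(ratings))
```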
SAMPLING DESIGN
Sampling is the process of systematic selection of elements from a population of interest so that
by studying the sample a researcher can fairly generalize the results about the population. Size of
population ranges from few individuals, for example, nuclear scientists in the country, to a very
large number, for example, school going children in the country. In the first example, it is fairly
less difficult for a researcher to identify the population for the study, as the number of scientists specialised in nuclear science in the country is small. Given the resources and time, sometimes the researcher might collect data from the entire population. Operational, technical and material
constraints of research may demand collection of data from a set of elements drawn from
population instead. If data are collected from all the elements of population, it is referred to as
census data. If data are collected from few select respondents, it is referred to as sample data.
The important issue here is how the researcher arrives at generalizations or explanations about the population by studying only a sample drawn from it.
Interval Sampling
This kind of sampling may be characterised by its systematic nature of uncertainty. Interval sampling is random in the sense that there is no basis for deciding the units to be chosen, yet it follows a systematic format of choosing the uncertain units. The prerequisite of interval sampling is to have a list of all units in the universe. The researcher randomly chooses one of the units, which may or may not be the first one in the list. Thereafter, the units following after an interval of a certain number 'n' will be chosen. That is to say, every 'n-th' unit will be chosen for the sample. This number 'n' may be any number of the researcher's choice. Interval sampling is not purely Probability Sampling, as all the units do not stand an equal chance of being represented in the sample. Once the researcher decides the gap, the units falling in between the intervals straightaway lose their chance of being in the sample. This is the reason Interval Sampling cannot be called purely Probability Sampling; yet there is no basis for the researcher to choose the units, except that the researcher chooses the interval number after the random starting unit.
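The interval-sampling procedure described above can be sketched in a few lines of Python (the population list and the interval are hypothetical):

```python
# A minimal sketch of interval (systematic) sampling: pick a random start
# anywhere in the list of units, then take every n-th unit after it.
import random

def interval_sample(units, n, seed=None):
    """Return the randomly chosen starting unit and every n-th unit after it."""
    rng = random.Random(seed)
    start = rng.randrange(len(units))   # may or may not be the first unit
    return units[start::n]              # units in between lose any chance

population = list(range(1, 101))        # a hypothetical list of 100 units
sample = interval_sample(population, n=10, seed=42)
```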
Stratified Sampling
A population is often formed in such a way that it can be divided into different strata of homogeneous population. Stratified Sampling is helpful for drawing samples out of such a
population. First the population is divided into different strata or layers and then samples are
drawn out of each stratum. The units from each sample from the various strata form the final
sample for carrying out the research. Strata can be purposely formed by the researcher, by
putting together the units having common characteristics. Thus each stratum will be a mini-
universe composed of homogeneous population. Any technique may be used to draw out sample
from the strata. Since the population in each stratum is homogeneous, simple random sampling or Interval Sampling is the most preferred choice. Stratified Sampling is a form of 'Mixed Sampling', as it is neither purely Probability Sampling nor purely Non-Probability Sampling. Samples from
each stratum may be selected by the researcher proportionate to the strata or randomly. That is
entirely the choice of the researcher. However, if samples are selected proportionately, the
representation of each stratum in the final sample is more authentic. For example, suppose that in a study, the population of 1,000 persons consists of persons belonging to four different religions in this manner: 400 people in Religion A, 300 people in Religion B, 200 people in Religion C and 100 people in Religion D. The researcher decides to create a sample of 200 people, that is, 20% of the
population. Now for the final sample to proportionately represent each stratum, the researcher
must draw out 20% of sample from each stratum as well. Thus, there will be 80 persons from
Religion A, 60 persons from Religion B, 40 persons from Religion C and 20 persons from
Religion D.
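The proportionate allocation in the religion example can be computed directly. The sketch below reproduces the 80/60/40/20 split and then draws a simple random sample of the allocated size from each stratum (the stratum membership lists are hypothetical):

```python
# A sketch of proportionate stratified sampling for the religion example:
# 20% of each stratum is drawn, so every stratum keeps its share of the sample.
import random

strata = {"Religion A": 400, "Religion B": 300,
          "Religion C": 200, "Religion D": 100}

def proportional_allocation(strata, fraction):
    """Number of units to draw from each stratum at the given fraction."""
    return {name: round(size * fraction) for name, size in strata.items()}

allocation = proportional_allocation(strata, 0.20)
# allocation: 80 from A, 60 from B, 40 from C, 20 from D

def draw_stratified(strata, fraction, seed=None):
    """Draw a simple random sample of the allocated size from each stratum."""
    rng = random.Random(seed)
    units = {name: list(range(size)) for name, size in strata.items()}
    return {name: rng.sample(units[name], k)   # SRS within each stratum
            for name, k in proportional_allocation(strata, fraction).items()}
```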
Purposive Sampling
Purposive sampling is also known as ‘Judgment Sampling’, as it relies entirely on the wish and
judgment of the researcher. This is the purest form of Nonprobability Sampling. No unit in the
universe stands any chance of being included in the sample except the ones that the researcher
himself/herself chooses. That is to say all the units in the universe do not have an equal chance of
being included in the sample. In purposive sampling the researcher purposely selects units to
include in the sample. The basis for selection of the units is entirely the wish and judgment of the
researcher.
Cluster Sampling
Cluster sampling involves drawing samples from smaller clusters that the population is divided
into. It should not be confused with stratified sampling. In cluster sampling, the population is
either studied in multi-phase method, in different clusters, or samples are drawn from each
cluster. This type of sampling is useful only where the population can be looked at, in a cluster.
Unlike stratified sampling, cluster sampling does not require the population to be divided into
homogeneous groups; that is to say, the clusters may be heterogeneous. For example, a university consists of students, teachers, visiting faculty, office staff, etc., and it cannot be divided into strata because it is best seen in its functional mode. But the university has various departments, each of which can be considered a cluster. The clusters may be studied one by one in a multi-phase method, or else samples may be formed out of each of the clusters and studied together, just like we saw in Stratified Sampling.
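A minimal Python sketch of cluster sampling, using hypothetical university departments as clusters: a few clusters are chosen at random, and then units are drawn from each chosen cluster.

```python
# A hedged sketch of cluster sampling: hypothetical departments serve as
# (possibly heterogeneous) clusters; some clusters are chosen at random,
# then units are drawn from within each chosen cluster.
import random

departments = {   # hypothetical clusters with their member counts
    "Physics": 120, "History": 80, "Law": 150, "Botany": 60, "Economics": 100,
}

def cluster_sample(clusters, n_clusters, per_cluster, seed=None):
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)        # pick clusters
    return {name: rng.sample(range(clusters[name]), per_cluster)
            for name in chosen}                              # sample within each

sample = cluster_sample(departments, n_clusters=2, per_cluster=10, seed=1)
```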
Quota Sampling
Quota sampling is a very useful method of sampling where a large body of persons is to be
studied. In quota sampling the population is divided into different categories on the basis of some
characteristics, and selection of units in the sampling is done according to the proportion that
group represents in the entire population. For quota sampling the researcher must first define the
characteristics on the basis of which the population shall be divided into groups. The researcher
must have knowledge about the proportion that each characteristic group possesses in the
population. The sample drawn from the universe would then proportionately represent each characteristic group in the population.
COLLECTION OF DATA
Data represents information collected in the form of numbers and text. Data collection is done
after an experiment or an observation. Data collection is useful in planning, estimation and it also
saves lot of time and resources. Data collection is either qualitative or quantitative. Data
collection methods are used in businesses and sales organizations to analyze the outcome of a
problem, arrive at a solution, draw conclusions about the performance of a business and so on.
Primary data collection is the original form of data that is collected directly from the source. For
example, data collected through surveys, opinion polls from people, conducting experiments,
Primary data can be classified into the following two types: qualitative and quantitative.
Qualitative data collection methods do not involve any mathematical calculation to collect
data. It is mainly used for analyzing the quality, or understanding the reason behind something.
Some of the common methods used for qualitative data collection are discussed below.
Interview method
As the name suggests, data collection is made through verbal conversation: interviewing people in person, over the telephone, or using any computer-aided model. A short note on each of these methods follows.
Personal or Face to Face Interview: This is done by an interviewer with a person from whom
data is collected. The interview may be structured or unstructured. Data to be collected is obtained directly from the person who is interviewed, through straightforward questions or probes.
Telephonic Interview: This is done by asking questions over a telephone call. There are many online calling software tools readily available to carry out this data collection method. Data is recorded from the respondent's answers during the call.
Computer Assisted Interview: This type of interview is the same as a personal interview, except that the interviewer and the person being interviewed carry it out on a desktop or a laptop. Also, the data collected is directly updated in a database, with the aim of making the process quicker and easier; it also eliminates a lot of the paperwork otherwise needed in updating the collection of data.
The Questionnaire method is nothing but conducting surveys with a set of questions targeting quantitative research. These survey questions are easily made using online survey-creation software, which also helps ensure that the trust of the people attending the surveys is maintained.
Web Based Questionnaire: Web based questionnaire is a method in which a survey link is sent
to the targeted audience. They click on the link which takes them to the survey questions. This is
a very cost-efficient and quick method, which people can complete at their own convenient time. The survey also has the flexibility of being done on any device, so it is reliable and flexible.
Mail Based Questionnaire: In this type of questionnaire mails are sent to the target audience
which contains sheets of paper containing survey questions. Basically it contains the purpose of
conducting the survey and the type of research that is being made. Some incentives are also
given to complete this survey, which is a main attraction. The benefit of this method is that the respondents' names remain undisclosed to the researchers, and they are free to take any amount of time to complete the survey.
Observation method
As the word 'observation' suggests, in this method data is being collected directly by observing.
This can be achieved by counting the number of people or the number of events that take place in a particular time frame. The main skill needed here is to observe carefully and arrive at the numbers correctly.
The information/data collected/collated either from primary or secondary sources at the initial
stage is known as raw data. Raw data is nothing but the observation recorded from individual
units. Raw data, particularly the primary data, can hardly say anything unless and until arranged in order or processed. Data are required to be processed and analyzed as per the
requirement of a research problem outlined. Working with data starts with the scrutiny of data;
Raw data set is put under careful examination to find out the existence of any abnormal/doubtful
observation, to detect errors and omissions, if any, and to rectify these. Editing/scrutiny of data
ensures the accuracy, uniformity, and consistency of data. If the observations are few in number,
during scrutiny, one can have an overall idea about the information collected or collated. If the
number of observations is large, then one may go for arrangement of the observations in order, that is, in ascending or descending order.
Coding of Data
Coding is the process of assigning numerals or other symbols to the responses so that these could be
categorized. Coding should be made in such a way that these are nonoverlapping and all the
observations are categorized in one of the categories framed for the purpose. That means the
coding should be made in such a way that categories are exclusive and exhaustive in nature.
Generally, numerical information does not require coding. Coding helps researchers in categorizing and tabulating the responses for analysis.
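A small sketch of coding in practice, with hypothetical survey responses: the codebook keeps the categories mutually exclusive (each response maps to exactly one code), and a catch-all code keeps them exhaustive.

```python
# A sketch of coding survey responses. Categories must be mutually exclusive
# and exhaustive, so every response maps to exactly one code, with a
# catch-all code for anything outside the frame.
CODEBOOK = {"yes": 1, "no": 2, "undecided": 3}
OTHER = 9   # exhaustive: unforeseen responses still receive a code

def code_response(response):
    """Map a free-text response to its numeric code."""
    return CODEBOOK.get(response.strip().lower(), OTHER)

coded = [code_response(r) for r in ["Yes", "no", "Undecided", "maybe"]]
# coded == [1, 2, 3, 9]
```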
Analysis of Data
Once after the processing and data presentation, it is now imperative for a researcher to explain
and describe the nature of the research in a deeper sense. In the first attempt, the researcher tries
to explain/describe the nature of the information through measures of central tendency, measures of dispersion, etc., under univariate analysis. In his/her next endeavor, he/she tries to find out the association among the variables—
which are coming under either bivariate (taking two variables at a time) or multivariate analysis
(taking more than two variables at a time). Once after completion of the description (through univariate/bivariate/multivariate analysis), the researcher tries to infer or draw conclusions about the population from which the sample was drawn.
Correlation Analysis
While conducting research works, a researcher needs to deal with a number of factors/variables
at a time, instead of a single variable/factor. And all these variables may not be independent of
each other; rather they tend to vary side by side. Most of the growth/social/economic and other
variables are found to follow the above characteristics. For example, while dealing with yield
component analysis of any crop, it is found that yield is an ultimate variable contributed/
influenced/affected by a number of other factors. If we consider the yield of paddy, then one can
find that the factors like the number of hills per square meter, number of tillers per hill, number
of effective/panicle-bearing tillers per hill, length of the panicle, number of grains per panicle,
and test (1,000 grain) weight of grains are influencing the yield. Variation in one or more of the
abovementioned factors results in variations of the yield. Thus, yield may vary because of
variation in the number of hills per square meter or variations in the number of tillers per hill or
so on. Again, yield may vary because of variation in the number of hill per square meter and/or
in the number of tillers per hill and other factors. When we consider variations in one variable in relation to variations in another variable, correlation analysis comes into play.
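A minimal sketch of correlation analysis with made-up paddy data, computing Pearson's correlation coefficient between tillers per hill and yield:

```python
# Pearson's product-moment correlation coefficient, computed from scratch
# on hypothetical paddy data (tillers per hill vs yield in t/ha).
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of SDs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

tillers = [8, 10, 12, 14, 16, 18]         # hypothetical tillers per hill
yields_ = [3.1, 3.4, 3.9, 4.2, 4.8, 5.0]  # hypothetical yield (t/ha)
r = pearson_r(tillers, yields_)           # close to +1: strong positive association
```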
Analysis of Variance
Analysis of variance (ANOVA) is a powerful statistical tool widely used in the field of agriculture, social science, business, education, medicine, and several other fields. Using this tool the researcher can draw inferences about the samples, whether these have been
drawn from the same population or they belong to different populations. Using this technique
the researcher can establish whether a number of varieties differ significantly among themselves with respect to different characteristics like yield, susceptibility to diseases and pests, nutrient acceptability, and stress tolerance. For example, one can compare the efficiency of different salesmen, different plant protection chemicals, different groups of people with respect to their innovativeness, or different drugs against a particular disease, etc.
Data Transformation
Among the different types of transformation generally used to make the data suitable for analysis of variance are the logarithmic, square root, and angular (arcsine) transformations.
Logarithmic Transformation
The number of plant insects, number of egg mass per unit area, number of larvae per unit
area, etc., are typical examples wherein variance is proportional to the mean and logarithmic
transformation can be used effectively. The procedure is simply to take the logarithm of each and
every observation and carry out the analysis of variance following the usual procedure with the
transformed data.
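The transformation itself is a one-liner per observation. The sketch below applies log10(x + 1) to hypothetical larval counts (the +1 is a common guard against zero counts) before the usual analysis of variance would be run on the transformed values:

```python
# A sketch of logarithmic transformation for count data (e.g., larvae per
# unit area) whose variance is proportional to the mean: replace every
# observation with its logarithm, then run the usual ANOVA on the result.
import math

larvae_counts = [[12, 18, 9], [45, 60, 52], [110, 95, 130]]  # made-up plots

def log_transform(groups):
    """Apply log10(x + 1) to every observation; the +1 guards against zeros."""
    return [[math.log10(x + 1) for x in group] for group in groups]

transformed = log_transform(larvae_counts)
```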
Among all experimental designs, completely randomized design (CRD) is the simplest one
where only two principles of design of experiments, that is, replication and randomization,
have been used. The principle of local control is not used in this design. CRD is analyzed as per the model of one-way ANOVA. The basic characteristic of this design is that the whole experimental area (1) should be homogeneous in nature and (2) should be divided into as many experimental units as the sum of the numbers of replications of all the treatments. Let there be, say, five treatments replicated four times each; then according to this design, we require the whole experimental area to be divided into 20 experimental units of equal size. Under laboratory conditions, completely randomized designs are the most widely used.
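The one-way ANOVA analysis of a CRD can be sketched as follows, with hypothetical yields for five treatments replicated four times (20 experimental units). The resulting F statistic would be compared with the tabulated F value for (4, 15) degrees of freedom:

```python
# A sketch of analysing a CRD as a one-way ANOVA: the total variation is
# partitioned into between-treatment and within-treatment sums of squares,
# and their mean squares give the F statistic.
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a completely randomized design."""
    k = len(groups)                               # number of treatments
    n = sum(len(g) for g in groups)               # total experimental units
    grand = sum(sum(g) for g in groups) / n       # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Made-up yields for 5 treatments, 4 replications each (20 units):
plot_yields = [[4.1, 4.3, 4.0, 4.2], [5.0, 5.2, 4.9, 5.1],
               [3.8, 3.9, 4.0, 3.7], [4.6, 4.5, 4.7, 4.4],
               [5.5, 5.4, 5.6, 5.3]]
f_stat, df_b, df_w = one_way_anova(plot_yields)  # compare with tabulated F(4, 15)
```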
The Latin square design (LSD) is a design in which known sources of variation in two perpendicular directions, that
is, north to south and east to west or south to north and west to east, could be taken into
consideration. In this type of field, we require framing of blocks into perpendicular directions
which take care of the heterogeneity in both directions. A Latin square is an arrangement of
treatments in such a way that each treatment occurs once and only once in each row and each
column. If t is the number of treatments, then the total number of experimental units needed for this design is t × t.
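A t × t Latin square can be generated by cyclically shifting the treatment list, as in this sketch (in practice the rows, columns, and treatment labels would also be randomized):

```python
# A sketch of a Latin square layout: with t treatments, a t x t arrangement
# in which each treatment occurs exactly once in every row and every column.
# A cyclic shift of the treatment list gives one valid (unrandomized) square.
def latin_square(treatments):
    t = len(treatments)
    return [[treatments[(row + col) % t] for col in range(t)]
            for row in range(t)]

square = latin_square(["A", "B", "C", "D"])
# each of A, B, C, D appears exactly once in every row and every column
```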
The analysis of covariance can be taken up for one-way and two-way layouts and other specific
types of experimental design. In the analysis of covariance, there is one dependent variable (y)
and one or more concomitant variables. The basic difference between the analysis of variance
and the analysis of covariance models is that in the former, each response (y) is partitioned into two components: one due to its true value and the other due to error.
There are several examples where the analysis of covariance can be used effectively in
augmenting the precision of the experimental results. For example, in yield component analysis
of paddy, the yield components, namely, the number of hills per unit area, the number of
effective tillers per hill, and the number of grains per panicle can be used as covariates. Similarly, in studies on school-going children, initial body weight, height, age, physical agility, etc., can be taken as
concomitant variables during the analysis of covariance. In the analysis of covariance, there are
two types of variables: the characteristic of the main interest and the information on the concomitant (auxiliary) variables.
In the analysis of covariance, the expected (true) value of the response is the resultant of two
components, one because of the linear combination of the values of the concomitant variables
which are functionally related with the response and another one already obtained in the analysis
of variance. During regression analysis, generally one dependent variable is taken at a time to
find out its relationship with independent variables. But in many situations, the researchers need to deal with more than one dependent variable at a time. The researchers become interested in getting the relationship between a group of dependent variables and a group of independent variables, and canonical correlation analysis is the technique for getting the interrelationship between these two groups of variables. Canonical correlation is a
powerful multivariate technique which provides the information of higher quality and in a more
interpretable manner. Canonical analysis suggests the number of ways in which two groups
(independent and dependent) of variables are related, their strength of linear relationship, and the
nature of the relationship, which otherwise might be unmanageable by judging a huge number of simple correlation coefficients.