MBA - Business Research Methodology - Assignment 1
(B) Explain Observation Method as a tool of collecting Primary Data in detail along with
its merits and demerits.
Observation is the systematic viewing or watching of a specific phenomenon, or the investigator's own direct observation of relevant people, actions and situations, without questioning the respondents, in order to gather primary data for a particular study. Example: watching the life of street children provides a detailed description of their social life.
Primary data are data collected by researchers directly from the original sources through interviews, surveys, experiments and similar methods. Because they come straight from the source where the data originate, they are regarded as the best kind of data in research.
The sources of primary data are usually chosen and tailored specifically to meet the requirements of a particular research study. Before choosing a data collection source, matters such as the aim of the research and the target population need to be identified.
For example, when conducting a market survey, the goal of the survey and the sample population must be identified first, because these determine which data collection source will be most suitable: an offline survey is more suitable than an online survey for a population living in remote areas without an internet connection.
The process of observation involves three elements: sensation, attention and perception.

Advantages and Limitations
Advantages of observation method
1. Easiest method: Observation is the simplest method of data collection. Very little technical knowledge is required, and although scientifically controlled observation does call for some technical skill, it remains more accessible and straightforward than other methods. Observation comes naturally, since everyone observes things around them every day, and with a little training a person can become proficient at observing their surroundings.
2. Natural surroundings: The observation method describes the observed phenomenon exactly as it occurs in its natural setting and does not introduce the artificiality of other methods. It is also far less restrictive than an experiment.
3. High accuracy: In interview and questionnaire methods, researchers have to work with whatever information the respondents provide. These are indirect methods, and there is no way to verify their accuracy. In the observation method, however, the accuracy of the information can be cross-checked, so the data collected through observation are much more reliable.
4. Appropriate tool: Some subjects cannot provide information verbally about their behavior, activities or feelings. For such subjects, observation is the best method. It is essential, for example, in studies of infants, who cannot understand the purpose of the research or express themselves clearly.
5. Less cooperation from the respondent is needed: The observation method does not depend on people's willingness to provide information about themselves. There are many instances where respondents refuse to speak about themselves or their personal lives to an outsider, and some lack the communication skills or the time to provide information to researchers. Although observation cannot always overcome such problems, it requires far less cooperation from the respondent than other methods.
Limitations
1. Everything cannot be observed: There are many personal behaviors and secrets that the researcher cannot observe. Many respondents refuse to let researchers observe their activities, and for this reason not everything can be observed. It is also difficult to gather information about an individual's personal opinions and preferences.
2. Past life remains unknown: The observation method offers no technique for studying the subject's past. If the subject is not cooperative, it is very hard to gather information about past life, and since no other option is available, researchers have to rely on documents, which are not always accurate.
3. Time-consuming: Observation is a prolonged and time-consuming method. If the observation is to be precise and accurate, enough time must be given to it; as P.V. Young remarked, observation cannot be hurried. It is very difficult to complete an investigation within a limited period through observation, and because the process takes so long, there is a risk that both the observer and the observed lose interest and refuse to continue.
4. Expensive: Observation is a costly affair. It demands plenty of time, painstaking and detailed work, and considerable expense: travelling to various places, staying where the phenomenon occurs, and buying sophisticated, high-quality research tools. For these reasons, observation is regarded as one of the most expensive methods of data collection.
5. Personal bias: The personal bias of researchers affects their observations in many ways and makes valid generalization difficult. Observers may have their own notions of right and wrong regarding specific events, as well as preconceptions about a particular event, and these jeopardize the objectivity of social research.
C - DATA ANALYSIS
Analysis of data means a critical examination of the data for studying the characteristics of the object under study and for determining the patterns of relationship among the variables relating to it, using both quantitative and qualitative methods.
Data can be analysed either manually or with the help of a computer:
• Manual data analysis: This can be done if the number of respondents is reasonably small and there are not many variables to analyse.
• Data analysis using a computer: To analyse data using a computer, you should be familiar with the appropriate program; knowledge of both computing and statistics plays an important role here. The most common software package is SPSS for Windows.
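
As a rough illustration of what computer-based analysis involves, the short sketch below uses Python with the pandas library rather than SPSS; the survey variables and figures in it are invented purely for the example.

# A minimal sketch of computer-based data analysis (assumed example).
# The figures are hypothetical survey responses, not real data.
import pandas as pd

responses = pd.DataFrame({
    "age":          [24, 31, 45, 29, 52, 38],
    "satisfaction": [4, 3, 5, 4, 2, 5],   # rating on a 1-5 scale
})

# Descriptive statistics: count, mean, standard deviation, quartiles.
print(responses.describe())

# A simple measure of the relationship between two variables.
print(responses["age"].corr(responses["satisfaction"]))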
Name-
Enrolment No
Subject - Business Research Methodology
Course-MBA
Semester -2
INTERNAL ASSESSMENT – 2
Q1-C What is Primary Data? Distinguish between Primary Data and Secondary Data.
Data: Data are facts, figures and other relevant materials, past and present, that serve as a basis for study and analysis.
Primary data are data collected by researchers directly from the original sources through interviews, surveys, experiments and similar methods. Because they come straight from the source where the data originate, they are regarded as the best kind of data in research.
The sources of primary data are usually chosen and tailored specifically to meet the requirements of a particular research study. Before choosing a data collection source, matters such as the aim of the research and the target population need to be identified.
For example, when conducting a market survey, the goal of the survey and the sample population must be identified first, because these determine which data collection source will be most suitable: an offline survey is more suitable than an online survey for a population living in remote areas without an internet connection. Primary data are also called first-hand information.
Difference between Primary data and Secondary data
Primary sources are firsthand, contemporary accounts of events created by individuals during that
period of time or several years later (such as correspondence, diaries, memoirs and personal histories).
These original records can be found in several media such as print, artwork, and audio and visual
recording. Examples of primary sources include manuscripts, newspapers, speeches, cartoons,
photographs, video, and artifacts. Primary sources can be described as those sources that are closest to
the origin of the information. They contain raw information and thus, must be interpreted by
researchers.
In-depth interviews present the opportunity to gather detailed insights from leading industry
participants about their business, competitors and the greater industry. When you approach a company
contact from a position of knowledge — thanks to all that secondary data you’ve already collected —
you can have a free-flowing conversation about the topics of interest. You can guide the conversation
toward your research objectives, but also allow yourself to be led down unexpected paths by
interviewees — some of the most valuable insights are the ones you didn’t know you should be
looking for.
Surveys are an excellent way to collect a large amount of information from a given population.
Surveys can be used to describe a population in terms of who they are, what they do, what they like
and if they’re happy. You can then forecast the population’s future behavior in light of these identified
characteristics, behavior, preferences and satisfaction. Surveys yield the most meaningful data when
they ask the right questions of the right people in the right way, so care should be taken both to
develop survey questions respondents will find relevant and interesting, and to determine which
method of conducting the survey (online, telephone or in-person) is most appropriate.
Focus groups are useful for getting consumers' thoughts on a new product or service idea in the early stages of the development process. A focus group brings together a small group of people who fit your target demographic to discuss what they like, dislike, are confused by, or would do differently. The group's leader encourages honest, open discussion among participants, collecting opinions that can further direct your development efforts.
Social media monitoring can help you keep tabs on candid conversations about your industry, your company and your competitors. How much are people talking about your brand compared to competing brands? Is what they're saying positive or negative? Is the public clamoring for something the industry currently doesn't provide? How are your competitors portraying themselves via social media, and what does that say about their strategy? Social media monitoring shows that you don't always need to participate in the conversation to learn from it.
Secondary sources are closely related to primary sources and often interpret them. These sources are
documents that relate to information that originated elsewhere. Secondary sources often use
generalizations, analysis, interpretation, and synthesis of primary sources. Examples of secondary
sources include textbooks, articles, and reference books.
Government statistics are widely available and easily accessed online, and can provide insights
related to product shipments, trade activity, business formation, patents, pricing and economic trends,
among other topics. However, data is often not presented explicitly for the subject you are interested
in, so it can take some manipulation and cross-checking of the data to get it as narrowly focused as
you’d like.
Industry associations typically have websites full of useful information — an overview of the
industry and its history, a list of participating companies, press releases about product and company
news, technical resources, and reports about industry trends. Some information may be accessible to
members only (such as member directories or market research), but industry associations are a great
place to look when starting to learn about a new industry or when looking for information an industry
insider would have.
Trade publications, such as periodicals and news articles, most of which make their content
available online, are an excellent source of in-depth product, industry and competitor data related to
specific industries. Oftentimes, news articles include insights obtained directly from executives at
leading companies about new technologies, industry trends and future plans.
Company websites can be virtual goldmines of information. Public companies have investor relations sections full of annual reports, regulatory filings and investor presentations that can provide insights into both the individual company's performance and that of the industry at large. Public and private companies' websites typically provide detail about product offerings, industries served, geographic presence, organizational structure, sales methods (distribution or direct), customer relationships and innovations.
Q2 Short Notes
A. Sample size determination
A larger sample gives a better depiction of the target group. Sample size estimates, however, rest on assumptions that are not always correct, so once the sample is complete the figures collected need to be tested statistically by comparing sample variables. To make sure the sample size is right for the research, it is advisable to seek professional statistical advice, so that the sample is large enough to give accurate results without placing an excess burden on the collection and management of data.
Of the factors that influence the size of the sample, the main ones are the purpose of the research, the size of the total population and the study tools used. Current statistical rules about sample size and selection can be complicated, and expert guidance is often needed in such cases.
Although the purpose of a sample is to represent the population from which it is drawn, the size must be large enough to make the estimates stable. Another factor that must be considered here is the acceptable percentage of sampling error.
In some studies, thousands of responses are collected to acquire the required data; political telephone surveys are an example of this approach. At the other extreme, there may be just one individual sample, such as a case study of an organisation or a sports team. Usually a sample lies between these two extremes, with between 30 and 400 respondents taking part in the study.
Around 30 responses can be sufficient for a minor study, although this is more applicable to exploratory research or a pilot study; small, first research projects generally do not go beyond this size. For larger research jobs, a survey sample can range from 30 to 400 for a population of 30 lakhs to 1 million, although samples can be bigger than this. The determination of a sample size ultimately depends on the study parameters and on the confidence with which the results need to be obtained.
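
One common way of arriving at such a figure is Cochran's formula with a finite-population correction. The sketch below, in Python, is purely illustrative; the 95% confidence level, 5% margin of error and population of one million are assumed values, not figures from any particular study.

# Illustrative sample-size calculation using Cochran's formula with a
# finite-population correction. All inputs are assumptions.
import math

z = 1.96        # z-value for a 95% confidence level
p = 0.5         # assumed population proportion (the most conservative choice)
e = 0.05        # desired margin of error (5%)
N = 1_000_000   # assumed population size

n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # sample size for a very large population
n = n0 / (1 + (n0 - 1) / N)              # corrected for the finite population

print(math.ceil(n0), math.ceil(n))       # about 385 respondents in both cases here

With a population this large the correction changes almost nothing, which is why required sample sizes level off at a few hundred respondents even for very large populations.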
B- HYPOTHESIS
“Hypothesis may be defined as a proposition or a set of propositions set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide some investigation or accepted as highly probable in the light of established facts” (Kothari, 1988). A research hypothesis is quite often a predictive statement that can be tested using scientific methods involving an independent variable and one or more dependent variables. For instance, the following statements may be considered:
i. “Students who take tuitions perform better than those who do not receive tuitions.”
ii. “The female students perform as well as the male students.”
These two statements are hypotheses that can be objectively verified and tested. Thus, they indicate that a hypothesis states what one is looking for. Moreover, it is a proposition that can be put to the test in order to examine its validity.
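
Hypothesis (i) above could, for example, be put to a statistical test. The sketch below is only illustrative: it uses an independent two-sample t-test from the SciPy library in Python (a reasonably recent SciPy version is assumed for the one-sided option), and the marks for the two groups are invented for the example.

# Illustrative test of hypothesis (i): students who take tuitions perform
# better than those who do not. The marks below are made up.
from scipy import stats

tuition_marks    = [72, 68, 75, 80, 66, 74, 79, 70]
no_tuition_marks = [65, 60, 70, 68, 62, 66, 71, 64]

# One-sided independent two-sample t-test.
t_stat, p_value = stats.ttest_ind(tuition_marks, no_tuition_marks,
                                  alternative="greater")

# Reject the null hypothesis of "no difference" if p < 0.05.
print(t_stat, p_value, p_value < 0.05)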
C- ADVANTAGES/DISADVANTAGES OF SECONDARY DATA
Secondary data are data that have already been collected and used by some other person. They are usually in the form of finished products and are therefore also called secondary information.
• Advantages of Secondary Data
Less cost: The information can be collected at very little cost.
Less time-consuming: The time required to obtain the information is very short.
Large quantity of information: Most secondary data are published by large institutions, so they contain a large quantity of information.
• Disadvantages of Secondary Data
Since secondary data are the result of some other person's work, they may not be suitable for the researcher who makes use of them.
They may be inaccurate and unreliable.
They may contain certain errors.
E Types of sampling
Sampling is that part of statistical practice concerned with the selection of an unbiased or
random subset of individual observations within a population of individuals intended to yield
some knowledge about the population of concern, especially for the purposes of making
predictions based on statistical inference. Sampling is an important aspect of data collection.
There are two basic approaches to sampling: probabilistic and non-probabilistic sampling.
A probability sampling scheme is one in which every unit in the population has a chance
(greater than zero) of being selected in the sample, and this probability can be accurately
determined. The combination of these traits makes it possible to produce unbiased estimates
of population totals, by weighting sampled units according to their probability of selection.
Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household.) We then interview the selected person and find their income. People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (In effect, the person who is selected from that household is taken as representing the person who isn't selected.)

In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
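
The weighting logic of this example can also be written out explicitly. The short Python sketch below is a rough illustration with invented household incomes: one adult is drawn at random from each household, and each selected income is weighted by the inverse of that person's known probability of selection.

# Sketch of the street-income example: draw one adult per household at
# random and weight the selected income by 1 / probability of selection.
# The incomes are invented for illustration.
import random

households = [
    [30_000],                      # one adult: selected with probability 1
    [42_000, 18_000],              # two adults: each selected with probability 1/2
    [25_000, 27_000, 40_000],      # three adults: each selected with probability 1/3
]

estimate = 0
for adults in households:
    selected = random.choice(adults)   # equal chance within the household
    prob = 1 / len(adults)             # known probability of selection
    estimate += selected / prob        # inverse-probability weighting

print(estimate)   # an unbiased estimate of the street's total adult income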
Types of Probability Sampling
1. Simple random sampling
2. Systematic sampling
3. Stratified sampling
4. Cluster sampling
Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage' or 'undercovered'), or where the probability of selection cannot be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which form the criteria for selection. Because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors. These conditions limit how much information a sample can provide about the population: information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.

Example: We visit every household in a given street and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (for example, an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it is not practical to calculate these probabilities. In addition, non-response effects may turn any probability design into a non-probability design if the characteristics of non-response are not well understood, since non-response effectively modifies each element's probability of being sampled. (A small simulation illustrating this kind of bias is sketched after the list of non-probability sampling types below.)
Types of Non-probability Sampling
1. Convenience sampling
2. Quota sampling
3. Judgment sampling
4. Snowball sampling
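
To make the contrast with probability sampling concrete, the simulation sketch below (in Python, with an entirely invented population in which people who stay at home earn less on average) compares a simple random sample with a convenience sample of the door-answering kind described above; the convenience sample systematically understates the average income because the people most likely to be at home are over-represented.

# Sketch contrasting a probability sample with a convenience sample.
# The population and the "stays at home" flag are invented for illustration.
import random

random.seed(1)

# Each person is (income, stays_at_home); home-stayers earn less on average.
population = ([(random.gauss(20_000, 3_000), True) for _ in range(300)] +
              [(random.gauss(40_000, 5_000), False) for _ in range(700)])

def mean_income(people):
    return sum(income for income, _ in people) / len(people)

# Probability sample: every person has an equal, known chance of selection.
random_sample = random.sample(population, 100)

# Convenience sample: only people who are at home can ever be selected.
at_home = [person for person in population if person[1]]
convenience_sample = random.sample(at_home, 100)

print(mean_income(population))          # true average income
print(mean_income(random_sample))       # close to the true average
print(mean_income(convenience_sample))  # systematically too low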