Broota K D - Experimental Design in Behavioural Research - 3e


NEW AGE THIRD EDITION

EXPERIMENTAL DESIGN
IN
BEHAVIOURAL RESEARCH

K D BROOTA

NEW AGE INTERNATIONAL PUBLISHERS


EXPERIMENTAL DESIGN
IN
BEHAVIOURAL RESEARCH
THIRD EDITION

K D BROOTA
M.A. (Psy.), Ph.D., FNA Psy.
Former
Professor and Head
Department of Psychology
University of Delhi, Delhi
India



Copyright © 2023, 2020, 1989, New Age International (P) Ltd., Publishers
Published by New Age International (P) Ltd., Publishers
First Edition: 1989
Third Edition: 2023

All rights reserved.


No part of this book may be reproduced in any form, by photostat, microfilm, xerography, or any other means, or
incorporated into any information retrieval system, electronic or mechanical, without the written permission of the
publisher.
GLOBAL OFFICES
• New Delhi NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS
7/30 A, Daryaganj, New Delhi-110002, (INDIA), Tel.: (011) 23253472, 23253771, Mob.: 9315905300
E-mail: [email protected] • Visit us at www.newagepublishers.com

• London NEW AGE INTERNATIONAL (UK) LTD.


27 Old Gloucester Street, London, WC1N 3AX, UK
E-mail: [email protected] • Visit us at www.newacademicscience.co.uk
BRANCHES
• Bangalore 37/10, 8th Cross (Near Hanuman Temple), Azad Nagar, Chamarajpet, Bangalore- 560 018
Tel.: (080) 26756823, Mob.: 9315905288, E-mail: [email protected]

• Chennai 26, Damodaran Street, T. Nagar, Chennai-600 017, Tel.: (044) 24353401, Mob.: 9315905309
E-mail: [email protected]

• Cochin CC-39/1016, Carrier Station Road, Ernakulam South, Cochin-682 016


Tel.: (0484) 4051304, Mob.: 9315905289, E-mail: [email protected]

• Guwahati Hemsen Complex, Mohd. Shah Road, Paltan Bazar, Near Starline Hotel
Guwahati-781 008, Tel.: (0361) 2513881, Mob.: 9315905296, E-mail: [email protected]

• Hyderabad 105, 1st Floor, Madhiray Kaveri Tower, 3-2-19, Azam Jahi Road, Near Kumar Theater
Nimboliadda Kachiguda, Hyderabad-500 027, Tel.: (040) 24652456, Mob.: 9315905326
E-mail: [email protected]

• Kolkata RDB Chambers (Formerly Lotus Cinema) 106A, 1st Floor, S N Banerjee Road
Kolkata-700 014, Tel.: (033) 22273773, Mob.: 9315905319, E-mail: [email protected]

• Mumbai 142C, Victor House, Ground Floor, N.M. Joshi Marg, Lower Parel, Mumbai-400 013
Tel.: (022) 24927869, 24915415, Mob.: 9315905282, E-mail: [email protected]

• New Delhi 22, Golden House, Daryaganj, New Delhi-110 002, Tel.: (011) 23262368, 23262370
Mob.: 9315905300, E-mail: [email protected]

ISBN: 978-93-93159-96-0

Printed in India at R.K. Printers, Delhi.


Typeset at Goswami Associates, Delhi.

NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS


7/30 A, Daryaganj, New Delhi - 110002
Visit us at www.newagepublishers.com
(CIN: U74899DL1966PTC004618)



Preface to the Third Edition

This edition of the book is an outcome of the good response to the previous editions and of the suggestions and critical comments received from faculty and students.
In this edition, three appendices, on Binomial Coefficients, Binomial Probabilities, and Values for n Factorial, have been included at the end of the book. These will help students study and understand the subject better. Updates in the text have been made wherever necessary.
The main objective of writing this book was to enable my students and researchers to undertake research projects independently, understand the intricacies of research designs, and handle the analysis of data with confidence. With my background in mathematics, it was possible for me to present the designs in a more comprehensive and easy-to-understand format and language.
I am happy to note, from comments and personal communications with my students and colleagues as well as from readers in other countries, that the book has been well received by the academic community and that I have been successful in achieving my objective in writing it. It took me some years to complete the manuscript, teaching from it in the classroom, guiding my research students, and gathering very valuable and useful feedback and comments. I also took the opportunity of giving lectures and seminars at some prominent Indian universities and institutions and received extremely fruitful comments and feedback. All this helped me improve the content and quality of the manuscript. The most fulfilling and gratifying comments after the publication and distribution of the book, in India and abroad, concerned the crispness and simplicity of its language, and that even students without a background in mathematics could easily understand the complicated designs. My students commented that, while reading the book, they had the perception that I was lecturing to them in class. On the whole, I am happy and gratified by the feedback about the book, and I think I have fulfilled my objective in writing it.
The book was first published in 1989, and many reprints have appeared since. During this period, I had the opportunity to go through the book thoroughly, receive comments from students, researchers, and colleagues from different disciplines, carry out corrections to make it error free, change the overall format of the chapters, and, wherever necessary, redraw figures for better comprehension. Suggestions for further improvement from students, researchers, and teachers shall be appreciated.

K.D. Broota



CONTENTS
Foreword (vii)
Preface to the Third Edition (xi)
Preface to the First Edition (xiii)
Acknowledgements (xv)
Glossary of Symbols (xvii)

1. Experimental Design: An Introduction 1


What is Experimental Design? 2
Experimental Design as Variance Control 3
Types of Experimental Designs 8
Basic Terminology in Experimental Design 10
Basic Terminology of Statistical Analysis 13

2. Analysis of Variance: The Foundation of Experimental Design 17


Analysis of Variance and t Test 19
The Concept of Variance 20
One-way Analysis of Variance 26
Two-way Analysis of Variance 32
Assumptions Underlying Analysis of Variance 40
Transformations 43
Analysis of Variance by Ranks 47

3. Single Factor Experiments 53


Fixed Effect and Random Effect Models 54
Equal Sample Sizes 55
Tests for Trends 60
Unequal Sample Sizes 64
Multivariate Analysis of Variance 66

4. Comparison Among Treatment Means 75


A Priori and Post Hoc Comparisons 76
Newman-Keuls Test 77
Duncan Multiple Range Test 81
Tukey Test 83
Protected t-test 85

Comparing Different Procedures 87


Comparing Means with a Control 88

5. Randomized Complete Block Designs 91


Blocking 93
Randomized Complete Block Design (Single Subject Each Cell) 93
Randomized Complete Block Design (n Subjects Each Cell) 99

6. Single Factor Experiments: Repeated Measures 107


Comparison of Designs with and without Repeated Measures 108

7. Factorial Experiments: Two Factors (p × q) 123


Factor 126
Assumptions 127
Equal Cell Frequencies (p × q) 127
Homogeneity of Error Variance 142
Unequal Cell Frequencies (p × q) 145
Missing Values 154
Replicated Experiments 155
Two Factor Nested Design 163

8. Factorial Experiments: Three Factors (p × q × r) 171


Complete Factorial Experiment 173
Three Factor Nested Design: Factor B Nested Under Factor A 196
Three Factor Nested Design: Factor B Nested Under Factor A
and Factor C Nested Under Both A and B 204

9. Latin Square Designs 207


Latin Square 209
Latin Square Design with One Observation Each Cell 212
Latin Square Design with n Observations Each Cell 217
Latin Square Design with Repeated Measures (Replications with the Same Square) 227
Latin Square Design with Repeated Measures
(Replication with Independently Randomized Squares) 240

10. Cross-Over and Greco-Latin Square Designs 253


I. Cross-over Design 254
Replications with the Same Square 254
Replications with Independently Randomized Squares 264
II. Greco-Latin Square Design 270
Single Observation Each Cell 272
n Observations Each Cell 277

11. Two Factor Experiments (p × q): Repeated Measures 279


Two Factor Experiment (p × q) with Repeated Measures on one Factor 280
Two Factor Experiment (p × q) with Repeated Measures on both the Factors 296

12. Three Factor Experiments (p × q × r): Repeated Measures 313


Repeated Measures on One Factor 314
Repeated Measures on Two Factors 331
Repeated Measures on all Three Factors 350

13. Higher-dimensional Designs 373


I. p × q × r × s Complete Factorial Design 375
II. p × q × r × s Factorial Experiment with Repeated Measures 379
Partitioning of Total Variation and df (General Form) 380

14. Analysis of Covariance: Single Factor 383


References 399
Appendix 401
Table A: Critical Values of t 403
Table B: Critical Values of F 403
Table C: Critical Values of Chi-Square 412
Table D: Distribution of the Studentized Range Statistic 413
Table E: Significant Studentized Ranges for Duncan’s New Multiple Range Test
(α = .05) 416
Table F: Distribution of t Statistic for One-sided Comparisons between k
Treatment Means and a control (α = .05) 420
Table G: Distribution of Fmax Statistic 424
Table H: Coefficients of Orthogonal Polynomials 425
Table I: Arcsin transformation, angle = arcsin √ percentage 426
Table J: Binomial Coefficients 429
Table K: Table of Binomial Probabilities 430
Table L: Values for n Factorial 438
Index 439
1. Experimental Design: An Introduction

WHAT IS EXPERIMENTAL DESIGN? 2


EXPERIMENTAL DESIGN AS VARIANCE CONTROL 3
Systematic Variance 3
Extraneous Variance 3
Randomization 4
Elimination 5
Matching 5
Additional Independent Variable 5
Statistical Control 6
Error Variance 6
Validity 7
TYPES OF EXPERIMENTAL DESIGNS 8
Single Case Experimental Design 8
Quasi-Experimental Design 8
Experimental Design 9
BASIC TERMINOLOGY IN EXPERIMENTAL DESIGN 10
Factor 10
Levels 10
Dimensions 11
Treatment Combinations 11
Replication 11
Main Effects 12
Simple Effects 12
Interaction Effects 12
BASIC TERMINOLOGY OF STATISTICAL ANALYSIS 13
Null Hypothesis (H0) 14
Level of Significance 14
Type I and Type II Error 15
Power of the Test 15
Region of Rejection 15

This book is about the basic principles of experimental design and analysis, and is addressed to students and researchers in the behavioural sciences. More particularly, it is an attempt to set forth the principles of designing experiments and the methods of data collection, analysis, and interpretation of results. Before we focus on these principles and methods, it would put the matter in proper perspective to first define experimental design.
The term experimental design has been used differently by different authors. A look at the literature on the subject reveals that the term has been used to convey mainly two different, though interrelated, meanings. In the first category are those who have used the term in a general sense to include the whole range of basic activities for carrying out experiments, that is, everything from formulation of hypotheses to drawing of conclusions. The second definition of the term is comparatively restricted. Here the term is used in the “Fisher tradition”, that is, to state the statistical principles underlying experimental designs and their analysis, whereby an experimenter can schedule treatments and measurements for optimal statistical efficiency. It covers activities such as the selection of factors and their levels for manipulation, identification of extraneous variables that need to be controlled, procedures for handling experimental units, selection of the criterion measure, selection of a specific design (e.g., factorial design, Latin square design), and analysis of data.
In this book, we shall be dealing with designs that conform primarily to the latter definition of the term, although the other aspects of designing, contained in the former definition, cannot be ignored entirely. The reader will appreciate that research is an integrated activity, where one step out of a sequence of steps cannot be effectively isolated from the rest. Thus, knowledge of the basic principles of experimental design covered by both definitions is a prerequisite for achieving the objectives of research. However, in this book, we shall concentrate primarily on the second definition of the term, that is, the principles of experimental design and analysis in the “Fisher tradition”.

WHAT IS EXPERIMENTAL DESIGN?

Winer (1971) has compared the design of an experiment to an architect’s plan for the structure of a building. The designer of experiments performs a role similar to that of the architect. The prospective owner of a building gives his basic requirements to the architect, who, exercising his ingenuity, then prepares a plan or blueprint outlining the final shape of the structure. Similarly, the designer of an experiment has to plan the experiment so that, on completion, it fulfils the objectives of the research. According to Myers (1980), the design is the general structure of the experiment, not its specific content.

Though there are different objectives in designing an experiment, it would not be an overstatement to say that the most important function of experimental design is to control variance. According to Lindquist (1956), “Research design is the plan, structure, and strategy of investigation conceived so as to obtain answer to research question and to control variance”. The first part of the statement emphasizes the objective of research, that is, to obtain an answer to the research question; the most important function of the design, however, is the strategy to control variance. This point will be elaborated in the discussion that follows.

EXPERIMENTAL DESIGN AS VARIANCE CONTROL


Variance control, as we shall notice throughout this book, is the central theme of experimental design.
Variance is a measure of the dispersion or spread of a set of scores. It describes the extent to which the
scores differ from each other. Variance and variation, though used synonymously, are not identical
terms. Variation is a more general term which includes variance as one of the statistical methods of
representing variation. A lot more is discussed about variance in chapter 2. Here we shall confine the
discussion and only emphasize its importance and methods of its control.
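As a concrete illustration of the distinction just drawn, variance is one specific statistical index of the broader notion of variation: the mean squared deviation of scores from their mean. The following sketch (in Python, with a hypothetical set of scores; the book itself uses no code) computes the sample variance, which divides by n − 1:

```python
import statistics

# Hypothetical scores of six subjects on some dependent measure.
scores = [12, 15, 9, 14, 10, 12]

mean = statistics.mean(scores)
# Sample variance: sum of squared deviations from the mean, divided by n - 1.
variance = statistics.variance(scores)

print(mean, variance)  # 12 5.2
```

The further apart the scores lie from their mean, the larger this index becomes; two sets of scores can show the same mean yet very different variances.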
The problem of variance control has three aspects that deserve full attention: systematic variance, extraneous variance, and error variance. The main functions of experimental design are to maximize the effect of systematic variance, control extraneous sources of variance1, and minimize error variance. The major function of experimental design is the second of these, that is, control of extraneous sources of variance, and we shall consider this aspect in comparatively greater detail here. It will be seen later that various designs are available for controlling extraneous sources of variance in different situations, and that with the help of these designs an experimenter can draw valid inferences.

SYSTEMATIC VARIANCE
Systematic variance is the variability in the dependent measure due to the manipulation of the experimental
variable by the experimenter. An important task of the experimenter is to maximize this variance. This
objective is achieved by making the levels of the experimental variable/s as unlike as possible. Suppose,
an experimenter is interested in studying the effect of intensity of light on visual acuity. The experimenter
decides to study the effect by manipulating three levels of light intensity, i.e., 10 mL, 15 mL, and
20 mL. As the difference between any two levels of the experimental variable is not substantial, there is
little chance of separating its effect from the total variance. Thus, in order to maximize systematic
variance, it is desirable to make the experimental conditions (levels) as different as possible. In this
experiment, it would be appropriate, then, to modify the levels of light intensity to 10 mL, 20 mL, and
30 mL, so that the difference between any two levels is substantial.
EXTRANEOUS VARIANCE
In addition to the independent variable and the dependent variable, which are main concerns in any
experiment, extraneous variables are encountered in all experimental situations that can influence the
dependent variable.
1Extraneous source of variance is contributed by all the variables other than the independent variable whose effect is
being studied in the experiment. These variables have often been called extraneous variables, irrelevant variables,
secondary variables, nuisance variables etc. In this book, all variables in the experimental situation other than the
independent variable have been termed as extraneous variables or secondary variables.

There are five basic procedures for controlling the extraneous source of variance. These procedures
are:
(i) Randomization
(ii) Elimination
(iii) Matching
(iv) Additional Independent Variable
(v) Statistical Control

Randomization
An important method of controlling extraneous variable/s is randomization. It is considered to be the
most effective way to control the variability due to all possible extraneous sources. If thorough
randomization has been achieved, then the treatment groups in the experiment could be considered
statistically equal in all possible ways. Randomization is a powerful method of controlling secondary
variables. In other words, it is a procedure for equating groups with respect to secondary variables.
According to Cochran and Cox (1957), “Randomization is somewhat analogous to insurance in that it is
a precaution against disturbances that may or may not occur and that may or may not be serious if they
do occur”.
Randomization in the experiment could mean random selection of the experimental units from the
larger population of interest to the experimenter, and/or random assignment of the experimental units or
subjects to the treatment conditions. Random assignment means that every experimental unit has an
equal chance of being placed in any of the treatment conditions or groups. However, in making groups
equal in the experiment, we may have random assignment with constraints. The assignment is random,
except for our limitations on number of subjects per group or equal number of males and females, and
so on. Random selection and random assignment are different procedures. It is possible to select a
random sample from a population, but then assignment of experimental units to groups may get biased.
Random assignment of subjects is critical to internal validity. If subjects are not assigned randomly,
confounding2 may occur.
An experimental design that employs randomization as a method of controlling extraneous variable
is called randomized group design. For example, in the randomized group design (chapter 3), extraneous
source of variance due to individual differences is controlled by assigning subjects randomly to, say, k
treatment conditions in the experiment. According to McCall (1923), “Just as representativeness can be
secured by the method of chance, … so equivalence may be secured by chance, provided the number
of subjects to be used is sufficiently numerous”. This refers to achieving comparable groups through the
principle of chance. It may, however, be noted that randomization is employed even when subjects are
matched. In repeated measures design (within subject design), where each subject undergoes all the
treatment conditions, the order in which treatments are administered to the subjects is randomized
independently for each subject (see chapters 6, 11 and 12).
Fisher’s most fundamental contribution has been the concept of achieving pre-experimental equation
of groups through randomization. Equating of the effects through random assignment of subjects to
groups in the experiment is considered to be the overall best tool for controlling various sources of
2Term is used to describe an operation of variables in an experiment that confuses the interpretation of data. If the
independent variable is confounded with a secondary variable, the experimenter cannot separate the effects of the two
variables on the dependent measure.
extraneous variation at the same time. Perhaps, the most important discriminating feature of the
experimental design, as compared to the quasi-experimental design3, is the principle of randomization.
Elimination
Another procedure for controlling the unwanted extraneous variance is elimination of the variable by so
choosing the experimental units that they become homogeneous, as far as possible, on the variable to be
controlled. Suppose, the sex of a subject, an unwanted secondary variable, is found to influence the
dependent measure in an experiment. Therefore, the variable of sex (secondary source of variance) has
to be controlled. The experimenter may decide to take either all males or all females in the experiment, and thus control, through elimination, the variability due to the sex variable. The procedure explained in this particular example is also referred to as the method of constancy. Let us take another example to illustrate
the control of unwanted extraneous variance by elimination. Suppose, intelligence of the subjects in the
group is found to influence the scores of the subjects on achievement test. Its potential effect on the
dependent variable can be controlled by selecting subjects of nearly uniform intelligence. Thus, we can
control the extraneous variable by eliminating the variable itself. However, with this procedure we lose
the power of generalization of results. If we select subjects from a restricted range, then we can discuss
the outcome of experiment within this restricted range, and not outside it. Elimination procedure for
controlling the extraneous source of variance is primarily a non-experimental design control procedure.
Elimination as a procedure has the effect of accentuating the between group variance through decrease
in the within group or error variance.
Matching
Another procedure, which is also a non-experimental design procedure, is control of extraneous source
of variance through matching. The procedure is to match subjects on that variable which is substantially
related to the dependent variable. That is, if the investigator finds that the variable of intelligence is
highly correlated with the dependent variable, it is better to control the variance through matching on
the variable of intelligence. Suppose, an investigator is interested in studying the efficacy of method of
instruction on the achievement scores of the 10th grade children. The methods to be evaluated are:
lecture, seminar, and discussion. Here the method of instruction is the experimental variable of interest
to the investigator. The investigator discovers that the achievement scores (DV) are positively correlated
with the intelligence of the subjects, that is, subjects with high intelligence tend to score high on the
achievement test and those who are low on intelligence score are low on the achievement test. Thus, the
variable of intelligence (not of direct interest to the investigator) needs to be controlled because it is a
source of variance that will influence the achievement scores. In this experiment, the extraneous variable
(intelligence) can be controlled by matching the subjects in the three groups on intelligence (concomitant
variable).
However, matching as a method of control limits the availability of subjects for the experiment. If
the experimenter decides to match subjects on two or three variables, he may not find enough subjects
for the experiment. Besides this, the method of matching biases the principle of randomization. Further,
matching the subjects on one variable may result in their mismatching on other variables.
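Matched-group formation of the kind described above can be sketched as follows (an illustrative Python fragment; the helper name and the blockwise scheme, sorting subjects on the concomitant variable and randomly scattering each successive set of k subjects across the k groups, are assumptions of this sketch):

```python
import random

def matched_groups(subjects, scores, k, seed=None):
    """Form k groups matched on a concomitant variable (e.g. intelligence).

    Subjects are ranked on the concomitant variable; each successive block
    of k adjacent subjects is then randomly distributed, one per group, so
    the groups end up comparable on that variable.
    """
    rng = random.Random(seed)
    ranked = [s for _, s in sorted(zip(scores, subjects))]
    groups = [[] for _ in range(k)]
    for i in range(0, len(ranked), k):
        block = ranked[i:i + k]
        rng.shuffle(block)          # randomization within each matched block
        for g, s in zip(groups, block):
            g.append(s)
    return groups

# Nine subjects, hypothetical intelligence scores, three instruction methods.
subjects = list("ABCDEFGHI")
iq = [100, 95, 110, 88, 102, 97, 120, 91, 105]
method_groups = matched_groups(subjects, iq, k=3, seed=11)
```

Because randomization is retained within each matched block, this procedure illustrates the earlier remark that randomization is employed even when subjects are matched.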
Additional Independent Variable
Sometimes the experimenter may consider elimination inexpedient or impractical. He may choose not to eliminate the extraneous variable (not of direct interest to the experimenter) from the experiment and, instead, build
3In this book, as stated earlier, the subject of experimental design is treated in the “Fisher tradition”. The reader is
advised to go through the other aspect of designing, referred to in the introductory paragraph with the first meaning of
the term, experimental design. Campbell and Stanley (1963) have presented an excellent treatment of the subject
experimental and quasi-experimental designs in the non-statistical tradition.

it right into the design as a second independent variable. Suppose, an experimenter is interested in
studying the efficacy of methods of instruction on achievement scores. He does not want to eliminate
the variable of intelligence. He introduces intelligence as an attribute4 variable. He creates three groups
on the basis of intelligence scores of the subjects. The three groups consist of subjects of superior
intelligence, average intelligence and low intelligence as levels of the second variable (intelligence).
With the help of analysis of variance, the experimenter can take out the variance due to intelligence
(main effect of intelligence) from the total variance. The experimenter may decide to study the influence
of intelligence on achievement, and also the interaction between intelligence and method of instruction.
Thus, the secondary source of variance is controlled by introducing the secondary variable as an
independent variable in the experiment, and the experimenter gets the advantage of isolating the effect
of intelligence on achievement and the interaction effect as additional information.
The outcome of such a control procedure is a factorial design. In the above example, it will be a 3 × 3 factorial design, where the first variable or factor is intelligence (having three levels) and the second is the method of instruction (three levels). The first factor or independent variable is a classification variable or control variable, and the second is the experimental variable, which is directly manipulated by the experimenter.
Statistical Control
In this approach, no attempt is made to restrain the influence of secondary variables. In this technique,
one or more concomitant secondary variables (covariates) are measured, and the dependent variable is
statistically adjusted to remove the effects of the uncontrolled sources of variation. Analysis of covariance
is one such technique. It is used to remove statistically the possible amount of variation in the dependent
variable due to the variation in the concomitant secondary variable. The method has been presented in
chapter 14.
The extraneous source of variance can also be controlled with the help of various experimental
designs. For example, we can make the extraneous variable constant by “blocking” the experimental
units as in the randomized complete block design (chapter 5). In this design, the subjects pretested on
the concomitant secondary variable are grouped in blocks on the basis of their scores on the concomitant
variable so that the subjects within blocks are relatively homogeneous. The purpose is to create between
block differences. Later on, the variance between the blocks is taken out from the total variance. Thus,
the variability due to the extraneous variable is statistically held constant.
Let us take up an example to illustrate this point. Suppose, an investigator finds that anxiety level of
the subjects, an extraneous variable of no direct consequence to the purposes of the experiment, influences
the dependent variable in the experiment. The experimenter can control this secondary source of variation
through elimination, that is, by selecting subjects of low anxiety level only. However, this procedure
will limit the generality of the results. So the experimenter may decide to apply statistical technique to
control the extraneous variable (anxiety level). He can administer an anxiety test (to measure concomitant
variable) to all the subjects (selected randomly for the experiment), and then create blocks on the basis
of their anxiety scores such that within the blocks the subjects are as homogeneous as possible, and the
differences between the blocks are high. In such a design, the variability due to the block differences is
taken out from the total variation. Thus, the statistical control technique can be utilized by the experimenter
to control the variance contributed by an extraneous variable.
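Forming blocks on a concomitant variable, as in the anxiety example above, can be sketched as follows (an illustrative Python fragment with hypothetical anxiety scores; the function name is this sketch’s own):

```python
def make_blocks(scores, block_size):
    """Group subject indices into blocks homogeneous on a concomitant
    variable (here, pretest anxiety scores).

    Sorting on the pretest score makes within-block differences small and
    between-block differences large; the between-block variability can then
    be removed from the total variation in the analysis.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return [order[i:i + block_size] for i in range(0, len(order), block_size)]

# Hypothetical anxiety-test scores for eight subjects (indices 0-7).
anxiety = [34, 12, 27, 41, 18, 22, 39, 15]
blocks = make_blocks(anxiety, block_size=2)
```

Each resulting block pairs subjects with adjacent anxiety scores, so subjects within a block are relatively homogeneous while the blocks themselves differ from one another.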
ERROR VARIANCE
The results of experiments are affected by extraneous variables which tend to mask the effect of the experimental variable. The term experimental error or error variance is used to refer to all such
4A characteristic that can be identified and measured.
uncontrolled sources of variation in experiments. Error variance results from random fluctuations in the
experiment. Experimental errors can be controlled either through experimental procedures or some
statistical procedure. If we are not able to effectively control the extraneous source of variation, then it
will form the part of error variance. By controlling secondary source of variation, one can reduce the
experimental error.
Two main sources of error variance may be distinguished. The first is inherent variability in the experimental units to which treatments are applied. The second is lack of uniformity in the physical conduct of the experiment, in other words, lack of a standardized experimental technique; this refers to errors of measurement.
Individuals vary a lot with respect to intelligence, aptitude, interests, anxiety, etc. All these person-related variables tend to inflate the experimental error. The other source of error variance is associated with errors of measurement and could be due to an unreliable measuring instrument, fatigue on the part of experimental units, transient emotional states of the subject, inattention by subjects at some point of time, and so on.
Statistical controls can be applied to minimize such error variance. For example, repeated measures
design can be used to minimize the experimental error. By this technique the variability due to individual
differences is taken out from the total variability, and thus, the error variance is reduced. Analysis of
covariance is also a technique to reduce the error variance. Further, error variance can be controlled by
increasing the reliability of measurements by giving clear and unambiguous instructions, and by using
a reliable measuring instrument, etc.
It has been pointed out earlier that an important function of experimental design is to maximize the systematic variance, control extraneous sources of variance, and minimize error variance. The systematic variance, or variance due to the experimental variable, is tested against the error variance (the F test is discussed at length in chapter 2); therefore, the error variance should be minimized to give the systematic variance a chance to show significance. In the next chapter, we shall learn that for the variability due to the experimental variable (between-groups variance) to be accurately evaluated for significant departure from chance expectations, the denominator, that is, the error variance, should be an accurate measure of the error.

VALIDITY
Validity is an important concept in measurement, be it in a testing situation or an experimental situation. In an experimental situation, validity is related to the control of secondary variables. The more secondary variation that slips into an investigation, the greater the possibility that the independent variable was not wholly responsible for changes in the dependent variable. Secondary or extraneous variation may influence the dependent variable to an extent where the conclusions drawn become invalid.
In experimental situations, the validity problem is divided into two parts—internal and external validity. Internal validity is the basic minimum without which the outcome of any experiment is uninterpretable. That is, it is concerned with making certain that the independent variable manipulated in the experiment was responsible for the variation in the dependent variable. External validity, on the other hand, is concerned with generalizability: to what populations, settings, treatment variables,
etc., can the effect obtained in an experiment be generalized?
external validity, the reader may refer to Campbell and Stanley (1963).
8 EXPERIMENTAL DESIGN IN BEHAVIOURAL RESEARCH

TYPES OF EXPERIMENTAL DESIGNS


In the behavioural sciences, especially in education and social research, it is not always possible to exercise full control over the experimental situation. For example, the experimenter may not have the liberty of assigning subjects randomly to the treatment groups, or may not be in a position to apply the independent variable whenever or to whomever he wishes. Collectively, such experimental situations form part of quasi-experimental designs.
In another research situation, the objective may be to study a particular individual intensively rather than a group of individuals. Here the researcher may be interested in answering questions about a certain person or about a person's specific behaviour. For example, the behaviour of a particular individual may be observed over a period of time to study the effect of a behaviour modification technique. All such designs, in which observations or measurements are made on an individual subject, are categorized as single case experimental designs, in contrast to designs in which groups of subjects are observed and the experimenter has full control over the experimental situation (as in experimental designs).
Experimental situations in which the experimenter can manipulate the independent variable(s), has the liberty to assign subjects randomly to the treatment groups, and can control the extraneous variables are designated as true experiments. The designs belonging to this category are called experimental designs, and in this book we are concerned with such designs only.
Understanding the nature of experimental design will be easier if we fully comprehend the nature
and meaning of quasi-experimental design and single case experimental design. Let us consider the
three types of designs—single case experimental design, quasi-experimental design, and experimental
design, in some detail.

Single Case Experimental Design


Single case experimental designs are an outgrowth of applied clinical research, especially in the area of behaviour modification. In this type of design, repeated measurements are taken across time on one particular individual to note subtle changes in behaviour. Single subject or single case experimental designs are an extension of the before-after design.
Single case experimental designs do not lend themselves to clear statistical analysis, and hypothesis testing has not been formalized for them as it has for experimental designs; the experimenter relies on the convincingness of the data. In these designs, the experimenter cannot control order effects. Moreover, the designs do not provide a good basis for generalization. However, single case experimental designs provide information about human behaviour that is not always obtainable in group designs. They are especially useful in clinical research, where the individual's behaviour is of paramount importance. We shall not deal with the subject of single case experimental designs in detail here; for a detailed treatment, the reader may refer to Hersen and Barlow (1976).

Quasi-Experimental Design
All experimental situations in which the experimenter does not have full control over the random assignment of experimental units to the treatment conditions, or in which the treatment cannot be manipulated, are collectively called quasi-experimental designs. For example, in an ex-post-facto study the independent variable has already occurred, and hence the experimenter studies its effect after the occurrence of the variable. In another situation, three intact groups are available for the experiment but the experimenter
EXPERIMENTAL DESIGN: AN INTRODUCTION 9
cannot assign the subjects to the treatment conditions; only treatments can be applied randomly to the
three intact groups. There are various such situations in which the experimenter does not have full
control over the situation. The plan of such experiments constitutes the quasi-experimental design.
Let us take an example from research to distinguish quasi-experimental design from experimental design. First, we give an example of an experimental design. Suppose an investigator is interested in evaluating the efficacy of three methods of instruction (lecture, seminar and discussion) on the achievement scores of students of the 10th grade. The experimenter draws a random sample of kn subjects from a large population of 10th grade students. Then n subjects are assigned randomly to each of the k (here k = 3) treatment conditions. Each of the n subjects in each of the k treatment groups is instructed by one method for one month. Thereafter, a common achievement test is administered to all the subjects. The outcome of the experiment is evaluated statistically in accordance with the design of the experiment (randomized group design or single factor experiment).
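The random assignment step in this design can be sketched in a few lines (the subject IDs and group size here are made up for illustration):

```python
import random

random.seed(1)
methods = ["lecture", "seminar", "discussion"]   # the k = 3 treatment conditions
n = 5                                            # n subjects per treatment group
subjects = list(range(1, len(methods) * n + 1))  # the kn sampled subject IDs

random.shuffle(subjects)                         # random assignment
groups = {m: sorted(subjects[i * n:(i + 1) * n]) for i, m in enumerate(methods)}
for m in methods:
    print(m, groups[m])
```

Shuffling once and slicing guarantees that every subject lands in exactly one group and each group receives exactly n subjects.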
Now consider an example of a quasi-experimental design. Suppose, for the aforesaid problem, the experimenter cannot draw a random sample of 10th grade students, as the schools will not permit him to regroup the classes to provide instruction by the methods he is interested in. Ideal conditions being unavailable, the experimenter finds three schools following the same curriculum, each providing instruction by one of the three methods. He administers an achievement test to the subjects from the three schools and compares the outcomes to evaluate the effect of each method of instruction (ex-post-facto) on achievement scores.
It is observed from this example that the experimenter in the second situation had control neither over the selection of subjects nor over the assignment of subjects to the treatments. Further, the experimenter could not manipulate the independent variable (providing instruction by the three methods), as the independent variable had already occurred. Such an experiment constitutes what we call a quasi-experiment.
Notice that the objective of the experiment was the same in both designs. However, random assignment of subjects to the treatment groups was not possible in the quasi-experiment, which was, therefore, a handicap in controlling secondary variables. Such investigations can be soundly executed, but they are less powerful for drawing causal relationships between independent and dependent variables. The statistical tests applied to data obtained from quasi-experimental designs are the same as those applied to data from experimental designs; it is even possible to perform analysis of covariance on data from such studies. However, the conclusions cannot be drawn with as much confidence as from studies employing experimental designs, because some of the assumptions (e.g., randomization) underlying the statistical tests are violated in quasi-experiments. Besides this, the experimenter does not have full control over the secondary variables.
Though quasi-experimental investigations have limitations, they are nevertheless advantageous in certain respects. They make it possible to seek answers to several kinds of problems about past situations, and about situations which cannot be handled by employing an experimental design.

Experimental Design
Included in this category are all those designs in which a large number of experimental units or subjects are studied, the subjects are assigned randomly to the treatment groups, the independent variable is manipulated by the experimenter, and the experimenter has complete control over the scheduling of the independent variable(s). Fisher's statistical innovations had a tremendous influence on the growth of this subject; his special contribution was to the problem of induction, or inference. After Fisher's invention of the technique of analysis of variance, it became possible to compare groups and to study simultaneously the influence of more than one variable.
There are three types of experimental designs—between subjects design, within subjects design
and mixed design. In the between subjects design, each subject is observed only under one of the
several treatment conditions. In the within subjects design or repeated measures design, each subject is
observed under all the treatment conditions involved in the experiment. Finally, in the mixed design,
some factors are between subjects and some within subjects.

BASIC TERMINOLOGY IN EXPERIMENTAL DESIGN


As in any area of study, experimental designs also have some terminology which we shall be using in
the chapters that follow. It is essential to get acquainted with the terminology for clear understanding of
the designs and analysis given in the following chapters.
Factor
A factor is a variable that the experimenter defines and controls so that its effect can be evaluated in the experiment. The term factor is used interchangeably with the terms treatment and experimental variable; a factor is also referred to as an independent variable. A factor may be an experimental variable that is manipulated directly by the experimenter. For example, the experimenter may manipulate the intensity of illumination to study its effect on visual acuity; here, illumination is an experimental variable and is referred to as a treatment factor. Then there are subject-related variables which cannot be manipulated directly by the experimenter but can be manipulated through selection. For example, if the experimenter is interested in studying the effect of age on RT (response time), he may manipulate age by selecting subjects of different age levels. When a variable is manipulated through selection, it is generally referred to as a classification factor. Variables of this category allow the researcher to assess the extent of differences between the subjects.
The independent variable or factor that is directly manipulated by the experimenter is also known as an E-type factor, and one that is manipulated through selection is known as an S-type factor. S-type factors are generally included to classify the subjects for purposes of control, though at times the experimenter may be interested in evaluating the effect of S-type factors themselves. For example, an experimenter may classify subjects into low, medium and high economic groups to assess the extent of differences between the subjects in the three groups. Most of the time, however, the classification factor is built into the design not because of intrinsic interest in its effects, but because the results are likely to be difficult to interpret if these factors are not included. Factors are defined by their function in the design and may be either classification or treatment factors.
Factors are denoted by the capital letters A, B, C, D and so on. For example, in an experiment
having two factors, Factor A refers to the variable of intensity of light and Factor B to the variable of
size.

Levels
Each specific variation in a factor is called a level of that factor. For example, the factor of light intensity may consist of three levels: 10 mL, 20 mL and 30 mL. The experimenter decides how many levels of a factor to include. The number of potential levels of a factor is generally very large, and the choice of levels to be included in a design, and the manner of their selection from among the many available, is a major decision on the part of the experimenter. Some factors have an infinite number of potential levels (e.g., light intensity) and others have only a few (e.g., sex of the subject). If the experimenter selects p levels from the potential P levels on the basis of some systematic, non-random procedure, the factor is considered a fixed factor. In contrast, when the experimenter includes p levels from the potential P levels through a random procedure, the factor is considered a random factor. A detailed discussion of the manner of selection of levels of factors, and of the statistical models involved, is presented in chapter 3.
The potential levels of a factor are designated by the corresponding lower case (small) letters of the
factor symbol with a subscript. For example, the potential levels of factor A will be designated by the
symbols a1, a2, a3, ..., ap. Similarly, the potential levels of factor B will be designated by the symbols
b1, b2, b3, …, bq .

Dimensions
The dimensions of a factorial experiment are indicated by the number of factors and the number of levels of each factor. For example, a three-factor experiment in which the first factor has p levels, the second q levels and the third r levels will be designated a p × q × r factorial experiment. This is the general form, and the dimensions in a specific case may assume any values for p, q, and r. A factorial experiment, for example, in which there are three factors, the first having 2 levels, the second 4 levels and the third 4 levels, is called a 2 × 4 × 4 (read as two by four by four) factorial experiment. The dimension of this experiment is 2 × 4 × 4.
Treatment Combinations
A treatment is an independent variable in the experiment. In this text, the term treatment will be used to refer to a particular set of experimental conditions, and the terms treatment and treatment combination will be used interchangeably. For example, in a 2 × 4 factorial experiment, the subjects are assigned to 8 treatments. In a single factor experiment, the levels of the factor constitute the treatments. Suppose an investigator is interested in studying the effect of levels of illumination on visual acuity and decides to have three levels of illumination. There will then be three treatments, and in a randomized group design, n subjects will be assigned randomly to each of the three treatments. Let us take another example to present a case of treatment combinations. In a 2 × 3 × 4 factorial experiment, there will be a total of 24 treatment combinations, and each subject will be assigned randomly to one of the 24 treatment combinations.
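The 24 treatment combinations of a 2 × 3 × 4 factorial experiment can be enumerated directly; the level labels below follow the a, b, c notation used in this chapter:

```python
from itertools import product

a = ["a1", "a2"]              # factor A: 2 levels
b = ["b1", "b2", "b3"]        # factor B: 3 levels
c = ["c1", "c2", "c3", "c4"]  # factor C: 4 levels

# Every treatment combination is one cell of the 2 x 3 x 4 design.
treatments = list(product(a, b, c))
print(len(treatments))        # 24 treatment combinations
print(treatments[0])          # ('a1', 'b1', 'c1')
```

The count is simply the product of the numbers of levels, which is exactly what the dimension notation 2 × 3 × 4 expresses.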
Replication
The term replication refers to an independent repetition of the experiment under as nearly identical conditions as possible, the experimental units in the repetitions being independent samples from the population under study. It may be pointed out that an experiment with n observations per cell is to be distinguished from an experiment with n replications and one observation per cell. The total number of observations per treatment in the two experiments is the same, but the manner in which the two experiments are conducted differs. For example, a 2 × 2 × 2 factorial experiment having 8 treatment combinations with 5 observations per treatment is different from an experiment with 5 replications and one observation per cell, although the total number of observations per treatment is again five. The purpose of a replicated experiment is to maintain more uniform conditions within each cell of the experiment and so to eliminate possible extraneous sources of variation between cells. The partitioning of total variation and df (degrees of freedom) in replicated and non-replicated experiments differs (see chapter 7). It is important that the number of observations per cell in any single replication be the maximum that still ensures uniform conditions within all cells of the experiment.
Main Effects
The difference in performance from one level to another of a particular factor, averaged over the other factors, is called a main effect. In a factorial experiment, the mean squares (MS) for the levels of the factors are used to test the main effects of the factors. Consider a 2 × 3 × 4 factorial experiment in which factor A has two levels, factor B three and factor C four. The A sum of squares corresponds to a comparison between levels a1 and a2, the B sum of squares to a comparison among levels b1, b2, and b3, and the C sum of squares to a comparison among levels c1, c2, c3, and c4. The difference in performance between levels a1 and a2, averaged over the levels of factors B and C, is called the main effect of A. Similarly, the difference in performance among levels b1, b2, and b3, averaged over the levels of factors A and C, is called the main effect of B, and so on.
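A main effect is computed from marginal means: each level of a factor is averaged over all levels of the other factors. The sketch below does this with made-up cell means for a 2 × 3 × 4 experiment, indexed [a][b][c]:

```python
# Hypothetical cell means for a 2 x 3 x 4 factorial experiment.
cells = [[[10, 12, 11, 13], [9, 11, 10, 12], [8, 10, 9, 11]],
         [[14, 16, 15, 17], [13, 15, 14, 16], [12, 14, 13, 15]]]

p, q, r = 2, 3, 4

def mean(xs):
    return sum(xs) / len(xs)

# Main-effect means: average each level of one factor over the other two.
main_a = [mean([cells[i][j][k] for j in range(q) for k in range(r)]) for i in range(p)]
main_b = [mean([cells[i][j][k] for i in range(p) for k in range(r)]) for j in range(q)]
main_c = [mean([cells[i][j][k] for i in range(p) for j in range(q)]) for k in range(r)]
print(main_a)  # [10.5, 14.5]
print(main_b)  # [13.5, 12.5, 11.5]
print(main_c)  # [11.0, 13.0, 12.0, 14.0]
```

The spread among the entries of main_a (here 14.5 − 10.5 = 4) is the quantity the A sum of squares measures.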
Graphically, a main effect is the curve joining the points representing the mean performance at the levels of a particular factor, averaged over the other factors in the experiment. A significant main effect will show a marked slope; in other words, the curve will not be parallel to the X-axis.

Simple Effects
In a factorial experiment, the effect of one factor at a given level of another factor is called a simple effect. Consider a 2 × 2 factorial experiment in which factors A and B have two levels each. The effect of the two levels of A under each of the two levels of B is called a simple effect of A. Similarly, the effect of the two levels of B under each of the two levels of A is called a simple effect of B.
Graphically, simple effects are presented in the same manner as a two-factor interaction. For example, the simple effects of A appear in the AB interaction profile in which the levels of factor A are marked on the X-axis; the two curves, representing levels b1 and b2, give the simple effect of A at each of the two levels of B. Similarly, the simple effects of B appear in the AB interaction profile in which the levels of factor B are marked on the X-axis; the two curves, representing a1 and a2, give the simple effect of B at each level of A.

Interaction Effect
Factorial designs are important because they allow the investigator to study the effects of more than one variable at a time. Apart from the advantage of efficiency over the single factor experiment, a factorial design permits the investigator to evaluate the interaction among the independent variables present. Interaction is an important concept in research; it can be evaluated in any experiment having two or more independent variables.
Interaction between two variables is said to occur when a change in the values of one variable alters the effect of the other. It may be noted that the presence of interaction, from a statistician's point of view, destroys the additivity of the main effects; that is, what is added by one factor at the first level of the other is different from what is added at another level. Absence of interaction, on the other hand, means that the additive property applies to the main effects, that is, they are independent.
Let us explain the concept of interaction with the help of an example. An experimenter is interested in evaluating the effect of two durations of study (i.e., 4 hrs and 8 hrs) on the achievement scores of 10th grade students. In order to control the influence of the secondary variable of intelligence, the experimenter includes intelligence as a second independent variable in the experiment. Two groups of subjects are included in the experiment, one of high intelligence and the other of low intelligence. The students are assigned randomly to the two levels of the experimental variable (study hours). It is, thus, a 2² or 2 × 2 factorial experiment. The mean scores of the four groups are summarized in the 2 × 2 table below:
                          Study hours
                        4 hrs    8 hrs
  Intelligence    Hi     7.1      9.8
  level           Lo     5.0      6.5

Difference for the high intelligence group: 9.8 – 7.1 = 2.7
Difference for the low intelligence group: 6.5 – 5.0 = 1.5
Interaction (difference of differences): 2.7 – 1.5 = 1.2
Alternatively,
Difference for the 8 hours group: 9.8 – 6.5 = 3.3
Difference for the 4 hours group: 7.1 – 5.0 = 2.1
Interaction (difference of differences): 3.3 – 2.1 = 1.2
Interaction is indicated by the failure of these differences to be equal; if the differences are equal, there is no interaction. Interaction is measured by the difference between the two differences. It can be observed from the table that the increase in the mean scores of the high intelligence group under 8 hours is much greater than the corresponding increase in the mean scores of the low intelligence group. That is, what is added by the factor of intelligence at the level of 4 hours is different from what is added at the level of 8 hours. Clearly, the two factors have a combined effect which is different from their effects when applied separately.
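The difference-of-differences computation above takes only a few lines to verify, and both routes through the table give the same interaction value:

```python
# Cell means from the 2 x 2 example above.
means = {("hi", "4h"): 7.1, ("hi", "8h"): 9.8,
         ("lo", "4h"): 5.0, ("lo", "8h"): 6.5}

d_hi = means[("hi", "8h")] - means[("hi", "4h")]  # 2.7
d_lo = means[("lo", "8h")] - means[("lo", "4h")]  # 1.5
d_8h = means[("hi", "8h")] - means[("lo", "8h")]  # 3.3
d_4h = means[("hi", "4h")] - means[("lo", "4h")]  # 2.1

# Interaction: the difference between the differences, computed either way.
print(round(d_hi - d_lo, 2))  # 1.2
print(round(d_8h - d_4h, 2))  # 1.2
```

That the two routes agree is not an accident: both reduce algebraically to (9.8 − 7.1) − (6.5 − 5.0) after rearranging terms.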
It may be noted that in the presence of a significant interaction effect, the main effects should be interpreted with caution. It is observed quite often that, despite their questionable meaning, F values for the main effects are reported when interaction is present. This practice is open to criticism but is almost unavoidable; one must, however, be careful to interpret the outcome in relation to the interaction present.

BASIC TERMINOLOGY OF STATISTICAL ANALYSIS


We obtain a sample, conduct an experiment in accordance with the design of the experiment, and finally test the hypotheses, i.e., draw inferences beyond the data. Once the data have been obtained, the next important problem for the researcher is how to evaluate objectively the evidence provided by the set of observations. In the tradition of experimental design presented in this book, the method of collecting data, the layout for the set of observations to be made, and the statistical analysis are all decided in advance of the actual conduct of the experiment. Once a particular design is selected, all aspects of the experiment, from the initial to the final stage, are taken care of.

The statistical tests are useful tools to draw conclusions from evidence provided by samples. It is
expected that the student or researcher using this book will have knowledge of elementary statistics.
However, for completeness of the volume, some of the statistical concepts occurring in the following
chapters are briefly recapitulated here.

Null Hypothesis (H0)


The first important step in the decision-making procedure is to state the null hypothesis (H0). The null hypothesis is a hypothesis of no difference. It is a statistical hypothesis usually formulated for the express purpose of being rejected. If H0 is rejected, we may accept the alternate hypothesis (H1). Suppose a researcher is interested in investigating the effect of certain treatments on two groups of subjects. On the basis of some theory, he predicts that the two groups will differ in their mean performance. This prediction is the research hypothesis, and its confirmation will lend support to the theory from which it was derived. To test the research hypothesis, we state it in operational form as the alternate hypothesis (H1), e.g., μ1 ≠ μ2, that is, the mean of the first group is not equal to the mean of the second group (two-tailed). The null hypothesis (H0) would be μ1 = μ2, i.e., the means of the two groups are equal. In other words, the null hypothesis states that the performance of the treatment groups is so similar that the groups must belong to the same population, or, equivalently, that the experimental manipulation had no effect on the groups.
After formulating the null hypothesis, a suitable statistical test is applied. If the test yields a value whose associated probability of occurrence under H0 is equal to or less than α (the level of significance set in advance of the collection of data), we decide to reject H0 in favour of H1. On the other hand, if the test yields a value whose associated probability of occurrence under H0 is greater than α, we do not reject the null hypothesis, which would mean that the two groups may be regarded as random samples from the same population.
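This decision rule can be illustrated with a hand-rolled independent-samples t test. The scores are invented for illustration, and 2.101 is the tabled two-tailed critical t for α = .05 with df = 18:

```python
import math
import statistics as st

# Two hypothetical treatment groups of n = 10 subjects each.
group1 = [52, 48, 55, 50, 47, 53, 49, 51, 54, 46]
group2 = [60, 57, 63, 58, 55, 61, 56, 59, 62, 54]

n1, n2 = len(group1), len(group2)
# Pooled variance for the independent-samples t test.
sp2 = ((n1 - 1) * st.variance(group1) +
       (n2 - 1) * st.variance(group2)) / (n1 + n2 - 2)
t = (st.mean(group1) - st.mean(group2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t_crit = 2.101                 # two-tailed critical value, alpha = .05, df = 18
reject_h0 = abs(t) >= t_crit
print(round(t, 2), reject_h0)  # -5.91 True
```

Since |t| far exceeds the critical value, the associated probability under H0 is well below α, and H0 is rejected in favour of H1.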
It may be noted that, on the basis of experimental evidence, a statistical test can lead to the rejection of the null hypothesis but not to its acceptance: the null hypothesis can never be proved by any finite amount of experimentation (Fisher, 1949). In an experiment, given a null hypothesis with no specific alternate hypothesis (H1), the experimenter either rejects or does not reject the null hypothesis; "does not reject" does not mean that the null hypothesis is accepted. It is, therefore, important that the null hypothesis and alternate hypothesis be formulated precisely for each experiment.

Level of Significance
The level of significance formalizes our decision-making procedure. In advance of data collection, for the requirement of objectivity, we specify the probability of falsely rejecting the null hypothesis that we are willing to tolerate; this is called the significance level of the test and is denoted by α. Conventionally, α = .05 and .01 have been chosen as the levels of significance. We reject a null hypothesis whenever the outcome of the experiment has a probability equal to or less than .05. The frequent use of the .05 and .01 levels of significance is a matter of convention having little scientific basis.
In contemporary statistical decision theory, this convention of adhering rigidly to an arbitrary .05 level has been rejected. It is not uncommon to report the probability value even when the probability associated with the outcome is greater than the conventional .05 level; the reader can then apply his own judgement in making a decision on the basis of the reported probability. In fact, the choice of

level of significance should be determined by the nature of the problem for which we seek an answer and by the consequences of the findings. For example, in medical research where the efficacy of a particular medicine is being evaluated, the .05 level may be considered a lenient standard; a more stringent level of significance, say .001, is perhaps more appropriate in this situation. However, if we select a very small value of α, we decrease the probability of rejecting the null hypothesis when it is in fact false. The choice of the level of significance is thus related to the two types of errors in arriving at a decision about H0.

Type I and Type II Error


In making tests of significance, we may err in drawing an inference concerning the hypothesis being tested. There are two types of errors which may be made while arriving at a decision about the null hypothesis. The first, the Type I error, is to reject H0 when it is in fact true. The second, the Type II error, is to accept H0 when it is in fact false.
The probability of committing a Type I error is the level of significance, α. The larger the α, the greater the likelihood of H0 being rejected falsely. In other words, if the level of significance for rejecting H0 is high, we are more likely to commit a Type I error.
The Type II error is usually represented by β. When H0 is false and we decide, on the basis of a test of significance, not to reject H0, we commit a Type II error.
p of Type I error = α
p of Type II error = β
The probability of making a Type I error is controlled by the level of significance (α), which is at the discretion of the experimenter. For the requirement of objectivity, the specific value of α should be specified before data collection begins.
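That α really is the probability of a Type I error can be checked by simulation: draw both groups from the same population, so that H0 is true by construction, and count how often the test rejects. The equal-n pooled t statistic and the df = 18 critical value are reused from the earlier sketch; all numbers are illustrative:

```python
import math
import random
import statistics as st

random.seed(0)

def t_stat(g1, g2):
    # equal-n pooled t statistic for two independent groups
    n = len(g1)
    sp2 = (st.variance(g1) + st.variance(g2)) / 2
    return (st.mean(g1) - st.mean(g2)) / math.sqrt(sp2 * 2 / n)

t_crit = 2.101            # two-tailed critical value, alpha = .05, df = 18
trials = 2000
false_rejections = 0
for _ in range(trials):
    g1 = [random.gauss(100, 15) for _ in range(10)]
    g2 = [random.gauss(100, 15) for _ in range(10)]  # same population: H0 true
    if abs(t_stat(g1, g2)) >= t_crit:
        false_rejections += 1

print(false_rejections / trials)  # close to .05
```

Every rejection here is, by construction, a Type I error, so the observed rejection rate hovers around α = .05.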

Power of the Test


We have just seen that there is an inverse relation between the likelihoods of making the two types of errors: a decrease in α will increase β for a given sample of N elements. If we wish to reduce both Type I and Type II errors, we must increase N.
Various statistical tests offer the possibility of different balances between the two types of errors. For achieving this balance, the notion of the power function of a statistical test is relevant.
The power of a test is defined as the probability of rejecting the null hypothesis (H0) when it is in fact false and, thus, should be rejected. That is,
Power = 1 – probability of Type II error = 1 – β
It may be noted that the power of a test increases with an increase in the size of the sample (N).
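For a simple case the relation between power and N can be computed exactly. The sketch below assumes a one-sample z test with known σ (an assumption made purely for tractability; the true mean shift delta, σ, and the .05 two-sided critical value 1.96 are illustrative), and shows power growing steadily with sample size:

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_z(n, delta=5.0, sigma=15.0, z_crit=1.96):
    # approximate power of a two-sided z test (the far lower tail is ignored)
    shift = delta * math.sqrt(n) / sigma  # true mean shift in SE units
    return 1 - normal_cdf(z_crit - shift)

for n in (10, 25, 50, 100):
    print(n, round(power_z(n), 3))
```

With the assumed effect size, power climbs from roughly .18 at N = 10 to above .90 at N = 100, which is the sense in which increasing N reduces β while α is held fixed.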

Region of Rejection
The region of rejection of H0 is defined with reference to the sampling distribution. The decision rules specify that H0 be rejected if the observed statistic has any value in the region of rejection. The probability associated with any value in the region of rejection is equal to or less than α.

Fig. 1.1 (a) One-tailed test region of rejection, α = .05.
(b) Two-tailed test region of rejection, α = .05.

The location of the region of rejection is affected by the nature of the experimental hypothesis (H1). If H1 predicts the direction of the difference, a one-tailed test is applied; if the direction of the difference is not indicated by H1, a two-tailed test is applied. It may be noted that one-tailed and two-tailed tests differ only in the location of the region of rejection; the size of the region is not affected. The one-tailed and two-tailed regions are presented in Figs. 1.1a and 1.1b respectively.
It can be seen in Fig. 1.1a that the region of rejection is entirely at one end, or tail, of the sampling distribution, comprising 5 per cent of the area under the curve. In a two-tailed test (Fig. 1.1b), the region of rejection is located at both ends of the sampling distribution, with 2.5 per cent of the total area at each end of the distribution.
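For the standard normal sampling distribution, these regions correspond to the familiar critical values, which can be recovered numerically from the CDF (bisection is used here simply to stay within the standard library):

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_critical(upper_tail_p):
    # find z with P(Z > z) = upper_tail_p by bisection
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 1 - normal_cdf(mid) > upper_tail_p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha = 0.05
print(round(z_critical(alpha), 3))      # 1.645 (one-tailed: all of alpha in one tail)
print(round(z_critical(alpha / 2), 3))  # 1.96  (two-tailed: alpha/2 in each tail)
```

The same α thus yields a smaller cutoff in a one-tailed test than in a two-tailed test, because the whole rejection region sits in a single tail.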
