ANOVA - Edit (Jay & Dya)


ANALYSING QUANTITATIVE DATA

ANOVA
• One way Anova
• Two Way Anova
• Repeated Measures Anova
INTRODUCTION
• When a researcher wants to find out whether there are significant differences between the means of more than two groups, they use Analysis of Variance (ANOVA).
• It is also used when more than one independent variable is investigated (Fraenkel, Wallen and Hyun, 2019).
• The analysis of variance is an effective way to determine whether the means of more than two samples are too different to attribute to sampling error.

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). New York, NY: McGraw-Hill Education.
INTRODUCTION

• An ANOVA test is a way to find out whether survey or experiment results are significant.
• In other words, it helps us decide whether to reject the null hypothesis or fail to reject it.
• Basically, we are testing groups to see if there is any difference between them.
• For example:
 A group of psychiatric patients try three different therapies: counselling, medication and biofeedback. You want to see if one therapy is better than the others.
 A manufacturer has two different processes to make light bulbs and wants to know if one process is better than the other.
 Students from different colleges take the same exam. You want to see if any college outperforms the others.

ANOVA Test (n.d.). Retrieved from https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/hypothesis-testing/anova/


ANOVA vs T-test

• A t-test will tell you whether there is a significant difference between two group means.
• Both tests compare means; ANOVA does so by comparing between-group variance with within-group variance. You could technically perform a series of t-tests on your data.
• However, as the number of groups grows, so does the number of pairwise comparisons you need to run.
– For instance, if we calculated 10 t-tests (for comparing five means) and accepted .05 as our significance level, we would have up to 10 times .05, or .50, as the probability that we would reject at least one null hypothesis when it is really true (Type I error).
• ANOVA gives you a single number (the F statistic) and a single p-value to help you decide whether or not to reject the null hypothesis.
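The Type I error arithmetic above can be checked in a few lines of Python. The 10 × .05 = .50 figure is the simple upper bound; under the (hypothetical) assumption that the ten tests are independent, the exact familywise error comes out a little lower:

```python
# Familywise Type I error when running many pairwise t-tests.
# With k = 5 group means there are k*(k-1)/2 = 10 pairwise comparisons.
alpha = 0.05
k = 5
m = k * (k - 1) // 2            # number of pairwise t-tests

bonferroni_bound = m * alpha    # the 10 x .05 = .50 figure (an upper bound)
exact_if_independent = 1 - (1 - alpha) ** m  # exact value if tests were independent

print(m, bonferroni_bound, round(exact_if_independent, 3))  # 10 0.5 0.401
```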



Grand Mean
• The mean is a simple arithmetic average of a range of values.
• There are two kinds of means used in ANOVA calculations:
1. Separate sample means – μ1, μ2, μ3
2. The grand mean – μ
• The grand mean is the mean of all the observations combined, irrespective of sample; with equal sample sizes it is also the mean of the sample means.
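As a quick illustration with made-up values, the grand mean can be computed by pooling all observations; it equals the mean of the sample means here only because the samples are the same size:

```python
# Grand mean: the mean of all observations pooled across samples.
samples = [
    [4, 5, 6],      # sample 1 (mean 5)
    [7, 8, 9],      # sample 2 (mean 8)
    [10, 11, 12],   # sample 3 (mean 11)
]

sample_means = [sum(s) / len(s) for s in samples]
all_values = [x for s in samples for x in s]
grand_mean = sum(all_values) / len(all_values)

print(sample_means, grand_mean)   # [5.0, 8.0, 11.0] 8.0
```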

A Simple Introduction to ANOVA (with applications in Excel) (n.d.). Retrieved March 27, 2020, from https://www.analyticsvidhya.com/blog/2018/01/anova-analysis-of-variance/
Hypothesis
• The null hypothesis in ANOVA holds when all the sample means are equal, or do not differ significantly.
• In that case, the samples can be considered as drawn from the same larger population.
• On the other hand, the alternative hypothesis holds when at least one of the sample means differs from the rest.

H0 : μ1 = μ2 = · · · = μk
Null hypothesis
H1 : at least one μi differs from the others
Alternative hypothesis

Partitioning Variation
Total variation is partitioned into two components:
• Between-groups variability – may be caused by the variation in your independent variable, by individual differences among the subjects in your groups, by experimental error, or by a combination of these (Gravetter & Wallnau, 2017).
• Within-groups variability – may be attributed to error, which can arise from either or both of two sources: individual differences between subjects treated alike within a group, and experimental error (Gravetter & Wallnau, 2017).
Partitioning Variation
[Figure: the total variation across three groups A, B and C is split into between-groups variability (differences among the group means) and within-groups variability (spread inside each group).]
Between Group Variability

• Consider the distributions of two samples.
• As these samples overlap, their individual means won’t differ by a great margin.
• Hence the difference between their individual means and the grand mean won’t be large.

Between Group Variability

• Now consider these two sample distributions.
• As the samples differ from each other by a big margin, their individual means also differ.
• The difference between the individual means and the grand mean would therefore be large.

 Such variability between distributions is called between-group variability. It refers to variation between the distributions of the individual groups (or levels), as the values within each group differ.
 If the distributions overlap or are close, the grand mean will be similar to the individual means, whereas if the distributions are far apart, the difference between the individual means and the grand mean will be large.

Calculating Between Group Variability
• We calculate between-group variability much as we calculate a standard deviation.
• Given the sample means and the grand mean, we take the squared deviation of each sample mean from the grand mean.

Calculating Between Group Variability
• We also want to weight each squared deviation by the size of its sample.
• In other words, a deviation is given greater weight if it comes from a larger sample.
• Hence, we multiply each squared deviation by its sample size and add them up.
This is called the sum of squares for between-group variability (SS_between).

Calculating Between Group Variability

• There’s one more thing we must do to derive a good measure of between-group variability.
• Again, recall how we calculate the sample standard deviation: we find the sum of the squared deviations and divide it by the degrees of freedom.
• For between-group variability, we find each squared deviation, weight it by its sample size, sum them up, and divide by the degrees of freedom, which in this case is the number of sample means (k) minus 1.
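The steps above can be sketched in Python with three made-up samples (values chosen purely for illustration):

```python
# SS_between: weight each sample mean's squared deviation from the
# grand mean by its sample size, sum, then divide by df = k - 1.
samples = [
    [4, 5, 6],      # mean 5
    [7, 8, 9],      # mean 8
    [13, 14, 15],   # mean 14
]

all_values = [x for s in samples for x in s]
grand_mean = sum(all_values) / len(all_values)   # 9.0

ss_between = sum(
    len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples
)
k = len(samples)
ms_between = ss_between / (k - 1)

print(ss_between, ms_between)   # 126.0 63.0
```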

Within Group Variability
• Consider the given distributions of three samples. As the spread (variability) of each sample increases, their distributions overlap and they begin to look like part of one larger population.

Within Group Variability
• Now consider another distribution of the same three samples, but with less variability. Although the sample means are similar to those in the previous example, the samples now seem to belong to different populations.

Within Group Variability
• Such variation within each sample is called within-group variability.
• It refers to variation caused by differences within individual groups (or levels), as not all the values within each group are the same.
• Each sample is looked at on its own, and variability between the individual points in the sample is calculated.
• In other words, no interactions between samples are considered.

Calculating Within Group Variability
• We can measure Within-group variability by looking at how much each value in each
sample differs from its respective sample mean.
• We’ll take the squared deviation of each value from its respective sample mean and
add them up.
• This is the sum of squares for within-group variability.

Calculating Within Group Variability

• As with between-group variability, we then divide the sum of squared deviations by the degrees of freedom to find a less-biased estimate of the average squared deviation.
• This time, the degrees of freedom are the total number of observations (N) minus the number of samples (k).
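The within-group side can be sketched the same way, again with three made-up samples:

```python
# SS_within: each value's squared deviation from its own sample mean,
# summed over all groups, then divided by df = N - k.
samples = [
    [4, 5, 6],
    [7, 8, 9],
    [13, 14, 15],
]

ss_within = 0.0
for s in samples:
    m = sum(s) / len(s)
    ss_within += sum((x - m) ** 2 for x in s)

N = sum(len(s) for s in samples)   # total number of observations
k = len(samples)
ms_within = ss_within / (N - k)

print(ss_within, ms_within)   # 6.0 1.0
```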

F-Statistic
• The statistic that measures whether the means of different samples are significantly different is called the F-ratio.
• The lower the F-ratio, the more similar the sample means are. In that case, we cannot reject the null hypothesis.


F-Statistic
• F = between-group variability / within-group variability
• As between-group variability increases relative to within-group variability, the sample means grow further apart, and the samples are more likely to come from entirely different populations.

F-Statistic

• If the calculated F-statistic is greater than the F-critical value (for a specific α/significance level), we reject the null hypothesis and can say that the treatment had a significant effect.
• Unlike the z- and t-distributions, the F-distribution has no negative values, because between-group and within-group variability are always positive due to squaring each deviation.
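Putting the two pieces together gives the F-ratio. A sketch with three made-up samples; the resulting F would then be compared with the F-critical value for (k − 1, N − k) degrees of freedom from an F table or software:

```python
# F = MS_between / MS_within, computed from scratch on made-up data.
samples = [
    [4, 5, 6],
    [7, 8, 9],
    [13, 14, 15],
]

all_values = [x for s in samples for x in s]
grand_mean = sum(all_values) / len(all_values)
k, N = len(samples), len(all_values)

ss_between = sum(
    len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples
)
ss_within = sum(
    sum((x - sum(s) / len(s)) ** 2 for x in s) for s in samples
)

f_stat = (ss_between / (k - 1)) / (ss_within / (N - k))
print(f_stat)   # 63.0
```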
One Way
Introduction
• The “one-way” or “two-way” in the name of the test refers to the number of independent variables in the test: a one-way ANOVA has one independent variable with two or more levels.
• For example, the independent variable in a study investigating the effects of technology in a class could be the website application, with levels Kahoot, Padlet and Quizziz.
• One-Way ANOVA compares the means of two or more independent groups in order to determine whether there is statistical evidence that the associated population means are significantly different.
• However, one limitation is that it won’t tell us which groups differ, so we may need to run a post-hoc test.
Purpose of the Test
• One-Way ANOVA compares the means of two or more independent groups in order
to determine whether there is statistical evidence that the associated population
means are significantly different.
• Used for:
– Field studies
– Experiments / quasi-experiments
• Both the One-Way ANOVA and the Independent Samples t Test can compare the
means for two groups. However, only the One-Way ANOVA can compare the means
across three or more groups.
Conditions and Assumptions for Test
Assumption #1 :
 Dependent variable that is continuous (i.e., interval or ratio level)

Assumption #2 :
 Independent variable that is categorical (i.e., two or more groups).

Assumption #3:
 Independent samples/groups – no relationship between the subjects in the sample,
meaning:
– subjects in the first group cannot also be in the second group.
– no subject in either group can influence subjects in the other group.
– no group can influence the other group.
One-way ANOVA in SPSS Statistics (n.d.). Retrieved March 25, 2020, from https://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php
Conditions and Assumptions for Test
Assumption #4:
 Normal distribution (approximately) of the dependent variable for each group
(i.e., for each level of the factor).

Assumption #5:
Homogeneity of variances (i.e., variances approximately equal across groups):
– When this assumption is violated and the sample sizes differ among groups, the p value for the overall F test is not trustworthy. These conditions warrant using alternative statistics that do not assume equal variances among populations, such as the Brown-Forsythe or Welch statistics.
– When this assumption is violated, regardless of whether the group sample sizes are fairly equal, the results may not be trustworthy for post hoc tests. When variances are unequal, post hoc tests that do not assume equal variances should be used (e.g., Dunnett’s C).
Conditions and Assumptions for Test
Assumption #6:
 There should be no significant outliers.
• Outliers are simply single data points within your data that do not follow the usual
pattern.
• For example, in a study of 100 students’ IQ scores where the mean is 108 with
small variation among students, one student had a score of 189.

Example of Study
A manager wants to raise the productivity at his company by increasing the speed at
which his employees can use a particular spreadsheet program. As he does not have the
skills in-house, he employs an external agency which provides training in this
spreadsheet program.
They offer 3 courses: a beginner, intermediate and advanced course. He is unsure which
course is needed for the type of work they do at his company, so he sends 10 employees
on the beginner course, 10 on the intermediate and 10 on the advanced course.
When they all return from the training, he gives them a problem to solve using the
spreadsheet program, and times how long it takes them to complete the problem. He
then compares the three courses (beginner, intermediate, advanced) to see if there are
any differences in the average time it took to complete the problem.
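The degrees of freedom for this design follow directly from the group counts; these are the two numbers reported alongside an F statistic, written F(df_between, df_within):

```python
# k = 3 courses (beginner, intermediate, advanced), 10 employees each.
k = 3
n_per_group = 10
N = k * n_per_group      # 30 employees in total

df_between = k - 1       # 2
df_within = N - k        # 27

print(df_between, df_within)   # 2 27
```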

Analysing and Explaining the Results
Descriptives Table

• The Descriptives table provides some very useful descriptive statistics, including the mean, standard deviation and 95% confidence intervals for the dependent variable (Time) for each separate group (Beginners, Intermediate and Advanced), as well as for all groups combined (Total).
Analysing and Explaining the Results
ANOVA Table
• This table shows the output of the ANOVA analysis and whether there is a statistically significant difference between our group means. We can see that the significance value is 0.021 (i.e., p = .021), which is below 0.05; therefore, there is a statistically significant difference in the mean length of time to complete the spreadsheet problem between the different courses taken.

Analysing and Explaining the Results

Multiple Comparisons Table


• The overall ANOVA tells us only that there are statistically significant differences between the groups as a whole (the limitation noted earlier).
• Therefore, we can use the Multiple Comparisons table to show which groups differed from each other.
• The Tukey post hoc test is generally the preferred test for conducting post hoc tests on a one-way ANOVA, but there are many others.

Analysing and Explaining the Results
• We can see from the Multiple Comparisons table that there is a statistically significant difference in time to complete the problem between the group that took the beginner course and the intermediate course (p = .046), as well as between the beginner and advanced courses (p = .034). However, there was no difference between the groups that took the intermediate and advanced courses (p = .989).

Conclusion and Writing the Results
• Based on the results above, you could report the results of the study as follows (this does not include the results from your assumptions tests or effect size calculations):

There was a statistically significant difference between groups as determined by one-way ANOVA (F(2,27) = 4.467, p = .021). A Tukey post hoc test revealed that the time to complete the problem was statistically significantly lower after taking the intermediate (23.6 ± 3.3 min, p = .046) and advanced (23.4 ± 3.2 min, p = .034) courses compared to the beginners course (27.2 ± 3.0 min). There was no statistically significant difference between the intermediate and advanced groups (p = .989).

Two Way
Introduction
• The “one-way” or “two-way” in the name of the test refers to the number of independent variables in the test: a two-way ANOVA has two independent variables (each of which can have multiple levels).
• The interaction term in a two-way ANOVA tells you whether the effect of one of your independent variables on the dependent variable is the same for all values of your other independent variable (and vice versa).
• Both one-way ANOVA and two-way ANOVA share the same limitation: neither tells you which groups differ without a post-hoc test.

Two-way ANOVA in SPSS Statistics (n.d.). Retrieved June 1, 2020, from https://statistics.laerd.com/spss-tutorials/two-way-anova-using-spss-statistics.php
Purpose of the Test

• The primary purpose of a two-way ANOVA is to understand whether there is an interaction between the two independent variables in their effect on the dependent variable.
• For example, you could use a two-way ANOVA to understand whether there is an interaction between gender and educational level on test anxiety amongst university students, where gender (males/females) and education level (undergraduate/postgraduate) are your independent variables, and test anxiety is your dependent variable.

Purpose of the Test

• A two-way ANOVA tells us about the main effects and the interaction effect.
• A main effect is similar to a one-way ANOVA: the effect of each independent variable is measured separately.
• The interaction effect, by contrast, considers both independent variables at the same time.
• That is why a two-way ANOVA can have up to three hypotheses, as follows:

Purpose of the Test
Two null hypotheses will be tested if we have placed only one observation in each cell. For this example, those hypotheses will be:
H1: All the groups of independent variable one have equal mean scores.
H2: All the groups of independent variable two have equal mean scores.

For multiple observations in cells, we would also be testing a third hypothesis:
H3: The factors are independent, i.e. the interaction effect does not exist.
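For a balanced design with replication, these hypotheses map onto separate sums of squares: one per main effect and one for the interaction. A minimal sketch with a made-up 2 × 2 design, two observations per cell; subtracting both main-effect sums of squares from the between-cells sum of squares isolates the interaction:

```python
# Sum-of-squares decomposition for a balanced two-way design.
cells = {
    ("a0", "b0"): [1, 3],
    ("a0", "b1"): [3, 5],
    ("a1", "b0"): [5, 7],
    ("a1", "b1"): [9, 11],
}

all_values = [x for obs in cells.values() for x in obs]
grand = sum(all_values) / len(all_values)

def ss_main(idx, levels):
    """Main-effect SS: squared deviation of each level mean from the
    grand mean, weighted by the number of observations at that level."""
    total = 0.0
    for lev in levels:
        vals = [x for key, obs in cells.items() if key[idx] == lev
                for x in obs]
        total += len(vals) * (sum(vals) / len(vals) - grand) ** 2
    return total

ss_a = ss_main(0, ["a0", "a1"])       # main effect of factor A
ss_b = ss_main(1, ["b0", "b1"])       # main effect of factor B

ss_cells = sum(
    len(obs) * (sum(obs) / len(obs) - grand) ** 2 for obs in cells.values()
)
ss_ab = ss_cells - ss_a - ss_b        # interaction sum of squares

ss_within = sum(
    sum((x - sum(obs) / len(obs)) ** 2 for x in obs)
    for obs in cells.values()
)

print(ss_a, ss_b, ss_ab, ss_within)   # 50.0 18.0 2.0 8.0
```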

Conditions and Assumptions for Test
Assumption #1:
Dependent variable that is continuous (i.e., interval or ratio level).
Assumption #2:
TWO independent variables that are categorical (i.e., two or more groups).
Independent samples/groups – no relationship between the subjects in the sample, meaning:
– subjects in the first group cannot also be in the second group.
– no subject in either group can influence subjects in the other group.
– no group can influence the other group.
Assumption #3:
Random sample of data from the population.

Conditions and Assumptions for Test
Assumption #4:
Normal distribution (approximately) of the dependent variable for each group
(i.e., for each level of the factor).
Assumption #5:
Homogeneity of variances (i.e., variances approximately equal across groups):
– When this assumption is violated and the sample sizes differ among groups, the p value for the overall F test is not trustworthy. These conditions warrant using alternative statistics that do not assume equal variances among populations, such as the Brown-Forsythe or Welch statistics.
– When this assumption is violated, regardless of whether the group sample sizes are fairly equal, the results may not be trustworthy for post hoc tests. When variances are unequal, post hoc tests that do not assume equal variances should be used.

Conditions and Assumptions for Test

Assumption #6:
There should be no significant outliers.
• Outliers are simply single data points within your data that do not follow the usual
pattern. For example, in a study of 100 students’ IQ scores where the mean is 108
with small variation among students, one student had a score of 189.

Example of Study
A researcher was interested in whether an individual's interest in politics
was influenced by their level of education and gender.
They recruited a random sample of participants to their study and asked
them about their interest in politics, which they scored from 0 to 100, with
higher scores indicating a greater interest in politics.
The researcher then divided the participants by gender (Male/Female) and
then again by level of education (School/College/University). Therefore,
the dependent variable was "interest in politics", and the two independent
variables were "gender" and "education".

Analysing and Explaining the Results
Descriptives Table
• This table is very useful because it
provides the mean and standard
deviation for each combination of the
groups of the independent variables
(what is sometimes referred to as each
"cell" of the design).
• In addition, the table provides "Total"
rows, which allows means and standard
deviations for groups only split by one
independent variable, or none at all, to
be known. This might be more useful if
you do not have a statistically significant
interaction.
Analysing and Explaining the Results
Plot of the results
• The mean "interest in politics" score for each combination of the "Gender" and "Edu_level" groups is plotted in a line graph.
• An interaction effect can usually be seen as a set of non-parallel lines.
• You can see from this graph that the lines do not appear to be parallel (the lines actually cross).

Analysing and Explaining the Results
Statistical significance of the two-way ANOVA
• The actual result of the two-way ANOVA – namely, whether either of the two independent variables or their interaction is statistically significant – is shown in the Tests of Between-Subjects Effects table.

Analysing and Explaining the Results
Statistical significance of the two-way ANOVA
• These rows inform us whether our independent variables (the "Gender" and "Edu_Level"
rows) and their interaction (the "Gender*Edu_Level" row) have a statistically significant
effect on the dependent variable, "interest in politics".
• You can see from the "Sig." column that we have a statistically significant interaction at the p
= .014 level.
• We can see from the table above that there was no statistically significant difference in
mean interest in politics between males and females (p = .207), but there were statistically
significant differences between educational levels (p < .0005).

Analysing and Explaining the Results
Multiple Comparisons Table
• If you do not have a statistically significant interaction, you might interpret the Tukey post hoc test results for the different levels of education, which can be found in the Multiple Comparisons table.

Analysing and Explaining the Results
Multiple Comparisons Table
• You can see from the table above that there is some repetition of the results, but
regardless of which row we choose to read from, we are interested in the
differences between (1) School and College, (2) School and University, and (3)
College and University.
• From the results, we can see that there is a statistically significant difference
between all three different educational levels (p < .0005).

Conclusion and Writing the Results

• You should emphasize the results of the interaction first, before the main effects. If the interaction is statistically significant, you can follow it up with simple main effects. For example, you might report:

Simple main effects analysis showed that males were significantly more interested in politics than females when educated to university level (p = .002), but there were no differences between genders when educated to school (p = .465) or college level (p = .793).

Conclusion and Writing the Results

• If you had a statistically significant interaction term and carried out the procedure for simple main effects in SPSS Statistics, you would also report the interaction itself. Briefly, you might report it as:

A two-way ANOVA was conducted that examined the effect of gender and education level on interest in politics. There was a statistically significant interaction between the effects of gender and education level on interest in politics, F(2, 54) = 4.643, p = .014.

Repeated Measures
Introduction
• A two-way repeated measures ANOVA is also known as a two-factor repeated measures ANOVA, a two-factor or two-way ANOVA with repeated measures, or a within-within-subjects ANOVA.
• The repeated measures ANOVA compares means across one or more variables that are based on repeated observations.
• The repeated measures ANOVA is similar to the dependent samples t-test (paired t-test), because it also compares the mean scores of the same group across different observations.
• Unlike one-way and two-way ANOVA with independent groups, in a repeated measures ANOVA the same subjects undergo every condition.
Purpose of the Test
• A two-way repeated measures ANOVA compares the mean differences between
groups that have been split on two within-subjects factors (also known as
independent variables).
• A two-way repeated measures ANOVA is often used in studies where you have
measured a dependent variable over two or more time points, or when subjects
have undergone two or more conditions (i.e., the two factors are "time" and
"conditions").
• The primary purpose of a two-way repeated measures ANOVA is to understand whether there is an interaction between these two factors in their effect on the dependent variable.
We can analyse data using a repeated measures ANOVA for two types of study design: studies that investigate either
(1) changes in mean scores over three or more time points, or
(2) differences in mean scores under three or more different conditions.

For example, for (1), you might be investigating the effect of a 6-month exercise training
programme on blood pressure and want to measure blood pressure at 3 separate time points
(pre-, midway and post-exercise intervention), which would allow you to develop a time-course
for any exercise effect.

For (2), you might get the same subjects to eat different types of cake (chocolate, caramel and
lemon) and rate each one for taste, rather than having different people taste each different
cake. The important point with these two study designs is that the same people are being
measured more than once on the same dependent variable (i.e., why it is called repeated
measures).
Repeated Measures ANOVA in SPSS Statistics (n.d.). Retrieved June 1, 2020, from https://statistics.laerd.com/statistical-guides/repeated-measures-anova-statistical-guide.php
Hypothesis
The repeated measures ANOVA tests for whether there are any differences between
related population means. The null hypothesis (H0) states that the means are equal:

H0: µ1 = µ2 = µ3 = … = µk

where µ = population mean and k = number of related groups.


The alternative hypothesis (HA) states that the related population means are not all
equal (at least one mean differs from another):
HA: at least two means are significantly different
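To make the hypothesis test concrete, here is a minimal sketch of how the F statistic for a one-way repeated measures ANOVA is computed, by partitioning variability into conditions, subjects and error. It uses only the Python standard library, and the scores are made up for illustration (not data from any study cited here):

```python
# One-way repeated measures ANOVA computed by hand (illustrative sketch).
# data: rows = subjects, columns = related conditions / time points.

def rm_anova(data):
    n = len(data)        # number of subjects
    k = len(data[0])     # number of related conditions (k in H0)
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)   # between conditions
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # between subjects
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_cond - ss_subj                   # residual

    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    return f, df_cond, df_error

# Hypothetical scores: 4 subjects measured under 3 related conditions.
scores = [[1, 2, 4],
          [2, 3, 3],
          [3, 5, 5],
          [4, 4, 6]]
f, df1, df2 = rm_anova(scores)
print(f, df1, df2)   # F(2, 6) = 9.0 for this toy data
```

Because the same subjects appear in every condition, subject-to-subject variability is removed from the error term, which is what makes the repeated measures design more sensitive than an independent-groups ANOVA.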
Conditions and Assumptions for Test
Assumption #1:
 Dependent variable that is continuous (i.e., interval or ratio level).
Assumption #2:
 TWO independent variables (two within-subjects factors), each consisting of at least
two categorical levels ("related groups" or "matched pairs").
Assumption #3:
 Random sample of data from the population.
Conditions and Assumptions for Test
Assumption #4:
Normal distribution (approximately) of the dependent variable for each group (i.e., for
each level of the factor).
Assumption #5:
Known as sphericity, the variances of the differences between all combinations of
related groups must be equal.
Assumption #6:
There should be no significant outliers. Outliers are single data points that do not
follow the general pattern of your data. For example, in a study of 100 students' IQ
scores where the mean is 108 and variation among students is small, a single score of
189 would be an outlier.
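The sphericity assumption (#5) can be made concrete with a small numeric sketch. For every pair of related conditions we form per-subject difference scores and compare their variances; sphericity holds when these variances are roughly equal. The scores below are hypothetical, and in practice a formal check uses Mauchly's test, as reported by SPSS:

```python
# Illustrating what sphericity refers to: equal variances of the pairwise
# difference scores between related conditions (hypothetical data).
from itertools import combinations
from statistics import variance

# rows = subjects, columns = three related conditions
scores = [[45, 50, 55],
          [42, 42, 45],
          [36, 41, 43],
          [39, 35, 40]]

k = len(scores[0])
diff_vars = {}
for a, b in combinations(range(k), 2):
    diffs = [row[a] - row[b] for row in scores]
    diff_vars[(a, b)] = variance(diffs)   # sample variance of differences
    print(f"conditions {a + 1} vs {b + 1}: variance = {diff_vars[(a, b)]:.2f}")
```

In this toy data the three variances (19.00, 16.25 and 2.25) are quite unequal, so sphericity would be questionable; with real data you would rely on Mauchly's test rather than eyeballing the numbers.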
Example of Study
A researcher was interested in discovering whether a short-term (2-week) high-
intensity exercise-training programme can elicit reductions in a marker of heart disease
called C-Reactive Protein (CRP).
To answer this question, the researcher recruited 12 subjects and had them perform
two trials/treatments – a control trial and an intervention trial – which were
counterbalanced and with sufficient time between trials to allow for residual effects to
dissipate.
In the control trial, subjects continued their normal activities, whilst in the intervention
trial, they exercised intensely for 45 minutes each day. CRP concentration was
measured three times: at the beginning, midway (one week) and at the end of the
trials. In this design, the two within-subjects factors are time (beginning, midway, end)
and treatment (control or intervention), and the dependent variable is CRP concentration.
In variable terms, the researcher wishes to know if there is an interaction between
time and treatment on CRP.
Analysing and Explaining the Results
Descriptives Table
• This table is very useful because it
provides the mean and standard
deviation for each combination of the
groups of the independent variables.
Analysing and Explaining the Results
Plot of the results
• The mean CRP score for each
combination of the "treatment"
and "time" groups is plotted in
a line graph, as shown:
• An interaction effect can usually
be seen as a set of non-parallel
lines.
• You can see from this graph that
the lines do not appear to be
parallel (with the lines actually
crossing).
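One way to see numerically what "non-parallel lines" means: an interaction is a difference of differences. The cell means below are hypothetical (not the study's actual values) and use the simplest two-by-two case for clarity:

```python
# Interaction as a difference of differences (hypothetical cell means).
# If the change over time is the same for both treatments, the lines are
# parallel and the interaction term is zero.
means = {("control", "pre"): 3.2, ("control", "post"): 3.1,
         ("exercise", "pre"): 3.3, ("exercise", "post"): 2.1}

change_control = means[("control", "post")] - means[("control", "pre")]
change_exercise = means[("exercise", "post")] - means[("exercise", "pre")]
interaction = change_exercise - change_control  # 0 would mean parallel lines

print(change_control, change_exercise, interaction)
```

Here the exercise group drops far more than the control group, so the lines cross rather than run parallel, which is exactly the pattern the ANOVA interaction term tests.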
Analysing and Explaining the Results
Mauchly’s Test of Sphericity
As a rule of thumb, sphericity is assumed if Sig. > .05.
For our data, Sig. = .54, so the assumption of sphericity has not been violated.
Analysing and Explaining the Results
Tests of Within-Subjects Effects
Our significance level is Sig. = .000, i.e., p < .0005. If the means were perfectly equal
in the population, there would be less than a 0.05% chance of observing differences
between sample means as large as those in our sample. We therefore reject the null
hypothesis of equal means.
References
• Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). New York, NY: McGraw-Hill
Education International Edition.
• Bordens, K. S., & Abbott, B. B. (2019). Research design and methods: A process approach (10th ed.). New York, NY: McGraw-Hill Education
International Edition.
• SPSS Repeated Measures ANOVA Tutorial. (n.d.). Retrieved March 20, 2020, from
https://www.spss-tutorials.com/spss-repeated-measures-anova/.
• SPSS Repeated Measures ANOVA Tutorial II. (n.d.). Retrieved March 20, 2020, from
https://www.spss-tutorials.com/spss-repeated-measures-anova-example-2/ .
• One-way ANOVA in SPSS Statistics (n.d.). Retrieved March 25, 2020, from
https://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php.
• Two-way ANOVA in SPSS Statistics (n.d). Retrieved June 1, 2020 from https://statistics.laerd.com/spss-tutorials
/two-way-anova-using-spss-statistics.php
• Repeated Measures ANOVA in SPSS Statistics (n.d). Retrieved June 1, 2020 from https://statistics.laerd.com
/statistical-guides/repeated-measures-anova-statistical-guide.php
• A Simple Introduction to ANOVA (with applications in Excel) (n.d.). Retrieved March 25, 2020, from
https://www.analyticsvidhya.com/blog/2018/01/anova-analysis-of-variance/.
• ANOVA Test: Definition, Types, Examples(n.d.). Retrieved March 25, 2020, from
https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/hypothesis-testing/anova/