ANOVA - Edit (Jay & Dya)
ANOVA
• One-Way ANOVA
• Two-Way ANOVA
• Repeated Measures ANOVA
INTRODUCTION
• When a researcher wants to find out whether there are significant differences between the means of more than two groups, they use Analysis of Variance (ANOVA).
• It is also used when more than one
independent variable is investigated
(Fraenkel, Wallen and Hyun, 2019).
• The analysis of variance is an effective way to
determine whether the means of more than
two samples are too different to attribute to
sampling error.
Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). New York, NY: McGraw-Hill Education.
Hypothesis
• The null hypothesis in ANOVA holds when all the sample means are equal, or do not differ significantly.
• In that case, the samples can be considered to come from the same larger population.
• The alternative hypothesis, on the other hand, holds when at least one of the sample means differs from the rest.
H0 : μ1 = μ2 = · · · = μk (Null hypothesis)
H1 : at least one μi differs from the others (Alternative hypothesis)
A Simple Introduction to ANOVA (with applications in Excel) (n.d.). Retrieved March 27,2020 from https://
www.analyticsvidhya.com/blog/2018/01/anova-analysis-of-variance/.
Partitioning Variation
Total variation is partitioned into two components:
• Between-groups variability: may be caused by the variation in your independent variable, by individual differences among the different subjects in your groups, by experimental error, or by a combination of these (Gravetter & Wallnau, 2017).
• Within-groups variability: may be attributed to error. This error can arise from either or both of two sources: individual differences between subjects treated alike within a group, and experimental error (Gravetter & Wallnau, 2017).
[Figure: total variation in three groups A, B and C partitioned into between-groups variability and within-groups variability]
Between Group Variability
[Figures: sample distributions and the grand mean, illustrating between-group variability. Source: Psychstat, Missouri State]
Such variability between the distributions is called between-group variability. It refers to variation between the distributions of the individual groups (or levels), since the values within each group differ.
If the distributions overlap or are close, the grand mean will be similar to the individual group means, whereas if the distributions are far apart, the differences between the group means and the grand mean will be large.
Calculating Between Group Variability
• We calculate between-group variability in much the same way as we calculate a standard deviation.
• Given the sample means and the grand mean, we take the squared deviation of each sample mean from the grand mean.
Calculating Between Group Variability
• We also want to weight each squared deviation by the size of the sample.
• In other words, a deviation is given greater weight if it comes from a larger sample.
• Hence, we multiply each squared deviation by its sample size and add them up.
This is called the sum of squares for between-group variability (SSbetween).
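A sketch of the formula the description above implies, in standard notation (k samples; n_j and \bar{x}_j the size and mean of sample j; \bar{x}_G the grand mean):

SS_{between} = \sum_{j=1}^{k} n_j (\bar{x}_j - \bar{x}_G)^2, \qquad MS_{between} = \frac{SS_{between}}{k - 1}

Dividing SSbetween by its degrees of freedom (k - 1, the number of samples minus one) gives the between-group mean square used later in the F-ratio.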
Within Group Variability
• Consider the given distributions of three samples. As the spread (variability) of each sample increases, their distributions overlap and they appear to come from one larger population.
Within Group Variability
• Now consider another distribution of the same three samples, but with less variability. Although the sample means are similar to those in the previous example, the samples now seem to belong to different populations.
Within Group Variability
• Such variation within a sample is referred to as within-group variability.
• It refers to variation caused by differences within individual groups (or levels), as not all the values within each group are the same.
• Each sample is looked at on its own and variability between the individual points in
the sample is calculated.
• In other words, no interactions between samples are considered.
Calculating Within Group Variability
• We can measure Within-group variability by looking at how much each value in each
sample differs from its respective sample mean.
• We’ll take the squared deviation of each value from its respective sample mean and
add them up.
• This is the sum of squares for within-group variability.
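In the same notation as above, a sketch of the within-group sum of squares (x_{ij} denotes the i-th value in sample j):

SS_{within} = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{x}_j)^2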
Calculating Within Group Variability
• Like between-group variability, we then divide the sum of squared deviations by the degrees of freedom to find a less-biased estimate of the average squared deviation.
• This time, the degrees of freedom are the sum of the sample sizes (N) minus the number of samples (k).
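As a sketch, the resulting mean square is:

MS_{within} = \frac{SS_{within}}{N - k}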
F-Statistic
• The statistic that measures whether the means of different samples are significantly different is called the F-ratio.
• The lower the F-ratio, the more similar the sample means are; in that case, we cannot reject the null hypothesis.
F-Statistic
• F = between-group variability / within-group variability
• As between-group variability increases relative to within-group variability, the sample means grow further apart from each other. In other words, the samples are more likely to belong to entirely different populations.
• The lower the F-ratio, the more similar the sample means are; in that case, we cannot reject the null hypothesis.
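To make the decomposition concrete, here is a minimal Python sketch (the three samples are hypothetical, invented for illustration) that computes SSbetween, SSwithin and the F-ratio by hand and checks the result against SciPy's built-in one-way ANOVA:

import numpy as np
from scipy import stats

# Three hypothetical samples (groups A, B and C)
groups = [np.array([23.0, 25.0, 21.0, 22.0]),
          np.array([30.0, 28.0, 29.0, 31.0]),
          np.array([26.0, 24.0, 27.0, 25.0])]

grand_mean = np.concatenate(groups).mean()
k = len(groups)                      # number of samples
N = sum(len(g) for g in groups)      # total number of observations

# SS_between: squared deviation of each sample mean from the grand mean,
# weighted by that sample's size
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SS_within: squared deviation of each value from its own sample mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

f_ratio = (ss_between / (k - 1)) / (ss_within / (N - k))

# The same F and its p value from SciPy's built-in one-way ANOVA
f_scipy, p_value = stats.f_oneway(*groups)
print(f_ratio, f_scipy, p_value)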
Conditions and Assumptions for Test
Assumption #1:
Dependent variable that is continuous (i.e., interval or ratio level).
Assumption #2:
Independent variable that is categorical (i.e., two or more groups).
Assumption #3:
Independent samples/groups – no relationship between the subjects in the sample,
meaning:
– subjects in the first group cannot also be in the second group.
– no subject in either group can influence subjects in the other group.
– no group can influence the other group.
One-way ANOVA in SPSS Statistics (n.d.). Retrieved March 25, 2020, from https://statistics.laerd.com/spss-tutorials
/one-way-anova-using-spss-statistics.php.
Conditions and Assumptions for Test
Assumption #4:
Normal distribution (approximately) of the dependent variable for each group
(i.e., for each level of the factor).
Assumption #5:
Homogeneity of variances (i.e., variances approximately equal across groups):
– When this assumption is violated and the sample sizes differ among groups, the p value for the overall F test is not trustworthy. These conditions warrant using alternative statistics that do not assume equal variances among populations, such as the Brown-Forsythe or Welch statistics.
– When this assumption is violated, regardless of whether the group sample sizes are fairly equal, the results may not be trustworthy for post hoc tests. When variances are unequal, post hoc tests that do not assume equal variances should be used (e.g., Dunnett's C).
Conditions and Assumptions for Test
Assumption #6:
There should be no significant outliers.
• Outliers are simply single data points within your data that do not follow the usual
pattern.
• For example, in a study of 100 students’ IQ scores where the mean is 108 with
small variation among students, one student had a score of 189.
Example of Study
A manager wants to raise the productivity at his company by increasing the speed at
which his employees can use a particular spreadsheet program. As he does not have the
skills in-house, he employs an external agency which provides training in this
spreadsheet program.
They offer 3 courses: a beginner, intermediate and advanced course. He is unsure which
course is needed for the type of work they do at his company, so he sends 10 employees
on the beginner course, 10 on the intermediate and 10 on the advanced course.
When they all return from the training, he gives them a problem to solve using the
spreadsheet program, and times how long it takes them to complete the problem. He
then compares the three courses (beginner, intermediate, advanced) to see if there are
any differences in the average time it took to complete the problem.
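As a sketch of how this comparison could be run in Python rather than SPSS (the completion times below are hypothetical, not the data from the example):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical time (minutes) to solve the problem for the 10 employees in each course
beginner = rng.normal(27, 3, 10)
intermediate = rng.normal(23, 3, 10)
advanced = rng.normal(22, 3, 10)

# One-way ANOVA: does mean completion time differ between the three courses?
f_value, p_value = stats.f_oneway(beginner, intermediate, advanced)
print(f_value, p_value)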
Analysing and Explaining the Results
Descriptives Table
• The descriptive table (see above) provides some very useful descriptive statistics, including
the mean, standard deviation and 95% confidence intervals for the dependent variable
(Time) for each separate group (Beginners, Intermediate and Advanced), as well as when all
groups are combined (Total).
Analysing and Explaining the Results
ANOVA Table
• This is the table that shows the output of the ANOVA analysis and whether there is a statistically significant difference between our group means. We can see that the significance value is 0.021 (i.e., p = .021), which is below 0.05; therefore, there is a statistically significant difference in the mean length of time to complete the spreadsheet problem between the different courses taken.
Analysing and Explaining the Results
• We can see from the multiple comparisons (post hoc) output that there is a statistically significant difference in time to complete the problem between the group that took the beginner course and the intermediate course (p = 0.046), as well as between the beginner course and the advanced course (p = 0.034). However, there was no difference between the groups that took the intermediate and advanced courses (p = 0.989).
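A comparable Tukey post hoc test can be sketched in Python with statsmodels (again with hypothetical data; the group labels are assumptions for illustration):

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
course = np.repeat(["beginner", "intermediate", "advanced"], 10)
time = np.concatenate([rng.normal(27, 3, 10),   # beginner
                       rng.normal(23, 3, 10),   # intermediate
                       rng.normal(22, 3, 10)])  # advanced

# Pairwise comparisons of mean completion time between every pair of courses,
# with p values adjusted for multiple comparisons
print(pairwise_tukeyhsd(endog=time, groups=course, alpha=0.05))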
Conclusion and Writing the Results
• Based on the results above, you could write up the study by reporting the F statistic with its degrees of freedom, the p value, and the post hoc comparisons (this would not include the results from your assumption tests or effect size calculations).
Two-Way ANOVA
Introduction
• The "one-way" or "two-way" in the name of the test refers to the number of independent variables in the analysis: a two-way ANOVA has two independent variables (each of which can have multiple levels).
• The interaction term in a two-way ANOVA informs you whether the effect of one of
your independent variables on the dependent variable is the same for all values of
your other independent variable (and vice versa).
• One-way and two-way ANOVA share the same limitation: the overall test indicates that the group means differ, but not which specific groups differ, so post hoc tests are still needed.
Two-way ANOVA in SPSS Statistics (n.d). Retrieved June 1, 2020 from https://statistics.laerd.com/spss-tutorials/two-way-anova-using-spss-statistics.php
Purpose of the Test
• Two-way ANOVA tells us about the main effect and the interaction effect.
• The main effect is similar to a one-way ANOVA where the effect of independent
variable one and two would be measured separately.
• The interaction effect, on the other hand, is the effect of the two independent variables considered at the same time.
• That’s why a two-way ANOVA can have up to three hypotheses, which are as
follows:
Purpose of the Test
If only one observation has been placed in each cell, two null hypotheses will be tested. For this example, those hypotheses will be:
H01: All the independent variable one groups have equal mean scores.
H02: All the independent variable two groups have equal mean scores.
With more than one observation per cell, a third null hypothesis can also be tested:
H03: There is no interaction between the two independent variables.
Conditions and Assumptions for Test
Assumption #1:
Dependent variable that is continuous (i.e., interval or ratio level).
Assumption #2:
TWO independent variables that are categorical (i.e., two or more groups).
Independent samples/groups – no relationship between the subjects in the sample, meaning:-
– subjects in the first group cannot also be in the second group.
– no subject in either group can influence subjects in the other group.
– no group can influence the other group.
Assumption #3:
Random sample of data from the population.
Conditions and Assumptions for Test
Assumption #4:
Normal distribution (approximately) of the dependent variable for each group
(i.e., for each level of the factor).
Assumption #5:
Homogeneity of variances (i.e., variances approximately equal across groups):-
– When this assumption is violated and the sample sizes differ among groups, the p value for the overall F test is not trustworthy. These conditions warrant using alternative statistics that do not assume equal variances among populations, such as the Brown-Forsythe or Welch statistics.
– When this assumption is violated, regardless of whether the group sample sizes are fairly equal,
the results may not be trustworthy for post hoc tests. When variances are unequal, post hoc
tests that do not assume equal variances should be used.
Conditions and Assumptions for Test
Assumption #6:
There should be no significant outliers.
• Outliers are simply single data points within your data that do not follow the usual
pattern. For example, in a study of 100 students’ IQ scores where the mean is 108
with small variation among students, one student had a score of 189.
Example of Study
A researcher was interested in whether an individual's interest in politics
was influenced by their level of education and gender.
They recruited a random sample of participants to their study and asked
them about their interest in politics, which they scored from 0 to 100, with
higher scores indicating a greater interest in politics.
The researcher then divided the participants by gender (Male/Female) and
then again by level of education (School/College/University). Therefore,
the dependent variable was "interest in politics", and the two independent
variables were "gender" and "education".
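A minimal Python sketch of this two-way design using statsmodels (the data and variable names below are hypothetical stand-ins for the SPSS file used in the example):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
gender = np.repeat(["Male", "Female"], 30)
edu_level = np.tile(np.repeat(["School", "College", "University"], 10), 2)
# Hypothetical "interest in politics" scores
interest = rng.normal(50, 10, size=60) + np.where(edu_level == "University", 15, 0)

df = pd.DataFrame({"interest": interest, "gender": gender, "edu_level": edu_level})

# Factorial model: main effect of gender, main effect of education, and their interaction
model = smf.ols("interest ~ C(gender) * C(edu_level)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for gender, edu_level and their interaction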
Analysing and Explaining the Results
Descriptives Table
• This table is very useful because it
provides the mean and standard
deviation for each combination of the
groups of the independent variables
(what is sometimes referred to as each
"cell" of the design).
• In addition, the table provides "Total"
rows, which allows means and standard
deviations for groups only split by one
independent variable, or none at all, to
be known. This might be more useful if
you do not have a statistically significant
interaction.
Analysing and Explaining the Results
Plot of the results
• The plot of the mean "interest in
politics" score for each combination of
groups of "Gender" and "Edu_level"
are plotted in a line graph, as shown:
• An interaction effect can usually be
seen as a set of non-parallel lines.
• You can see from this graph that the
lines do not appear to be parallel (with
the lines actually crossing).
Analysing and Explaining the Results
Statistical significance of the two-way ANOVA
• The actual result of the two-way ANOVA – namely, whether either of the two independent
variables or their interaction are statistically significant – is shown in the Tests of Between-
Subjects Effects table, as shown below:-
Analysing and Explaining the Results
Statistical significance of the two-way ANOVA
• These rows inform us whether our independent variables (the "Gender" and "Edu_Level"
rows) and their interaction (the "Gender*Edu_Level" row) have a statistically significant
effect on the dependent variable, "interest in politics".
• You can see from the "Sig." column that we have a statistically significant interaction at the p
= .014 level.
• We can see from the table above that there was no statistically significant difference in
mean interest in politics between males and females (p = .207), but there were statistically
significant differences between educational levels (p < .0005).
Analysing and Explaining the Results
Multiple Comparisons Table
• If you do not have a statistically significant interaction, you might interpret the Tukey post
hoc test results for the different levels of education, which can be found in the Multiple
Comparisons table, as shown below:
Analysing and Explaining the Results
Multiple Comparisons Table
• You can see from the table above that there is some repetition of the results, but
regardless of which row we choose to read from, we are interested in the
differences between (1) School and College, (2) School and University, and (3)
College and University.
• From the results, we can see that there is a statistically significant difference
between all three different educational levels (p < .0005).
Conclusion and Writing the Results
• You should emphasize the results from the interaction first, before you mention the main effects. For example, you might report the result as:
A two-way ANOVA was conducted that examined the effect of gender and education level on interest in politics. There was a statistically significant interaction between the effects of gender and education level on interest in politics, F (2, 54) = 4.643, p = .014.
• If you had a statistically significant interaction term and carried out the procedure for simple main effects in SPSS Statistics, you would also report these results. Briefly, you might report these as:
Simple main effects analysis showed that males were significantly more interested in politics than females when educated to university level (p = .002), but there were no differences between genders when educated to school (p = .465) or college level (p = .793).
Repeated Measures ANOVA
Introduction
• A two-way repeated measures ANOVA is also known as a two-factor repeated measures ANOVA, a two-factor or two-way ANOVA with repeated measures, or a within-within-subjects ANOVA.
• The repeated measures ANOVA compares means across one or more variables that are based on repeated observations.
• The repeated measures ANOVA is similar to the dependent samples t-test (paired t-test), because it also compares the mean scores of the same group across different observations.
• It differs from both one-way and two-way ANOVA in that the same subjects undergo all of the conditions (levels of the independent variables), rather than different subjects being assigned to different groups.
Purpose of the Test
• A two-way repeated measures ANOVA compares the mean differences between
groups that have been split on two within-subjects factors (also known as
independent variables).
• A two-way repeated measures ANOVA is often used in studies where you have
measured a dependent variable over two or more time points, or when subjects
have undergone two or more conditions (i.e., the two factors are "time" and
"conditions").
• The primary purpose of a two-way repeated measures ANOVA is to understand whether there is an interaction between these two factors in their effect on the dependent variable.
We can analyse data using a repeated measures ANOVA for two types of study design: studies that investigate either
(1) changes in mean scores over three or more time points, or
(2) differences in mean scores under three or more different conditions.
For example, for (1), you might be investigating the effect of a 6-month exercise training
programme on blood pressure and want to measure blood pressure at 3 separate time points
(pre-, midway and post-exercise intervention), which would allow you to develop a time-course
for any exercise effect.
For (2), you might get the same subjects to eat different types of cake (chocolate, caramel and
lemon) and rate each one for taste, rather than having different people taste each different
cake. The important point with these two study designs is that the same people are being
measured more than once on the same dependent variable (i.e., why it is called repeated
measures).
Repeated Measures ANOVA in SPSS Statistics (n.d). Retrieved June 1, 2020 from https://statistics.laerd.com
/statistical-guides/repeated-measures-anova-statistical-guide.php
Hypothesis
The repeated measures ANOVA tests for whether there are any differences between
related population means. The null hypothesis (H0) states that the means are equal:
H0: µ1 = µ2 = µ3 = … = µk
Conditions and Assumptions for Test
Assumption #1:
Dependent variable that is continuous (i.e., interval or ratio level).
Assumption #2:
TWO independent variables (two within-subjects factors), each consisting of at least two categorical "related groups" or "matched pairs".
Assumption #3:
Random sample of data from the population.
Conditions and Assumptions for Test
Assumption #4:
Normal distribution (approximately) of the dependent variable for each group (i.e., for
each level of the factor).
Assumption #5:
Sphericity: the variances of the differences between all combinations of related groups must be equal.
Assumption #6:
There should be no significant outliers. Outliers are simply single data points within
your data that do not follow the usual pattern. For example, in a study of 100 students’
IQ scores where the mean is 108 with small variation among students, one student had
a score of 189.
Example of Study
A researcher was interested in discovering whether a short-term (2-week) high-
intensity exercise-training programme can elicit reductions in a marker of heart disease
called C-Reactive Protein (CRP).
To answer this question, the researcher recruited 12 subjects and had them perform
two trials/treatments – a control trial and an intervention trial – which were
counterbalanced and with sufficient time between trials to allow for residual effects to
dissipate.
In the control trial, subjects continued their normal activities, whilst in the intervention trial, they exercised intensely for 45 minutes each day. CRP concentration was measured three times: at the beginning, midway (one week) and at the end of the trials. In this design, the two within-subjects factors are time and treatment (i.e., control or intervention), and the dependent variable is CRP.
In variable terms, the researcher wishes to know if there is an interaction between
time and treatment on CRP.
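A minimal Python sketch of this two-way repeated measures design using statsmodels' AnovaRM (the CRP values and column names below are hypothetical, invented for illustration):

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
subject = np.repeat(np.arange(1, 13), 6)                   # 12 subjects, 6 cells each
time = np.tile(np.repeat(["pre", "mid", "post"], 2), 12)   # 3 time points
treatment = np.tile(["control", "intervention"], 36)       # 2 treatments
# Hypothetical CRP values, with a drop for the intervention trial at the final time point
crp = rng.normal(3.0, 0.5, size=72) - np.where(
    (treatment == "intervention") & (time == "post"), 0.8, 0.0)

df = pd.DataFrame({"subject": subject, "time": time,
                   "treatment": treatment, "crp": crp})

# Both factors are within-subjects: every subject is measured in every time x treatment cell
res = AnovaRM(df, depvar="crp", subject="subject", within=["time", "treatment"]).fit()
print(res)   # F and p for time, treatment, and the time x treatment interaction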
Analysing and Explaining the Results
Descriptives Table
• This table is very useful because it
provides the mean and standard
deviation for each combination of the
groups of the independent variables.
Analysing and Explaining the Results
Plot of the results
• The plot of the mean CRP concentration for each combination of groups of "treatment" and "time" is plotted in a line graph, as shown:
• An interaction effect can usually
be seen as a set of non-parallel
lines.
• You can see from this graph that
the lines do not appear to be
parallel (with the lines actually
crossing).
Analysing and Explaining the Results
Mauchly’s Test of Sphericity
As a rule of thumb, sphericity is assumed if Sig. > 0.05.
For our data, Sig. = 0.54 so sphericity is no issue here.
Analysing and Explaining the Results
Tests of Within-Subjects Effects
Our significance level is Sig. = .000 (i.e., p < .0005). If the means were perfectly equal in the population, there would be almost no chance of finding the differences between the means that we observe in the sample. We therefore reject the null hypothesis of equal means.
References
• Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). New York, NY: McGraw-Hill Education International Edition.
• Bordens, K. S., & Abbott, B. B. (2019). Research design and methods: A process approach (10th ed.). New York, NY: McGraw-Hill Education International Edition.
• SPSS Repeated Measures ANOVA Tutorial. (n.d.). Retrieved March 20, 2020, from
https://www.spss-tutorials.com/spss-repeated-measures-anova/.
• SPSS Repeated Measures ANOVA Tutorial II. (n.d.). Retrieved March 20, 2020, from
https://www.spss-tutorials.com/spss-repeated-measures-anova-example-2/ .
• One-way ANOVA in SPSS Statistics (n.d.). Retrieved March 25, 2020, from
https://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php.
• Two-way ANOVA in SPSS Statistics (n.d). Retrieved June 1, 2020 from https://statistics.laerd.com/spss-tutorials
/two-way-anova-using-spss-statistics.php
• Repeated Measures ANOVA in SPSS Statistics (n.d). Retrieved June 1, 2020 from https://statistics.laerd.com
/statistical-guides/repeated-measures-anova-statistical-guide.php
• A Simple Introduction to ANOVA (with applications in Excel) (n.d.). Retrieved March 25, 2020, from
https://www.analyticsvidhya.com/blog/2018/01/anova-analysis-of-variance/.
• ANOVA Test: Definition, Types, Examples (n.d.). Retrieved March 25, 2020, from https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/hypothesis-testing/anova/