Chapter 9 does no more than introduce the repeated-measures ANOVA, the MANOVA, the
ANCOVA, and discriminant analysis. But you can likely envision how complicated it can
be to obtain calculated values for these tests. Calculations for any of these tests may cause
anxiety for those uncomfortable with math. So, the possibility of needing to combine these
operations for tests such as a repeated-measures ANCOVA or a multiple discriminant
analysis may seem utterly overwhelming! Luckily, SPSS provides an option for those who
wish to avoid the time-consuming and labor-intensive calculations. Each of the following
sections provides instructions for using SPSS to perform its respective test as well as for
interpreting the test's output.
Example 9.15 - SPSS Data View Screen for Repeated Measures ANOVA
A partial display of the imaginary data used to create the tables in Example 9.10 shows three
separate columns, each pertaining to one of the conditions in which subjects complete crossword
puzzles.
Reading across each row, SPSS knows that the values pertain to the same subjects or to
subjects who have some connection with each other. The program looks for this association
when you instruct it to perform a repeated measures ANOVA. SPSS regards this association
as an additional factor in the analysis. Even a one-way repeated-measures ANOVA requires
attention to this additional factor. As a result, you must use SPSS's General Linear Model
function to perform the test. This function, also used for the multi-way ANOVA described
in Chapter 7, suits situations involving a comparison of at least three groups that have
relationships among themselves or with other variables. Its wide applicability makes it
appropriate for some of the other tests described in Chapter 9 as well.
SPSS, however, requires more input for the repeated-measures ANOVA than for Chapter 7's
multi-way ANOVA. The necessary steps for a one-way repeated-measures ANOVA are as
follows; a syntax equivalent appears after the numbered steps.
1. Choose the General Linear Model option in SPSS's Analyze pull-down menu.
2. Choose Repeated Measures from the prompts given. A window entitled Repeated
Measures Define Factor(s) should appear.
3. You have two main tasks in the Repeated Measures Define Factor(s) window.
a. SPSS asks you to create a name that describes the overall comparison factor. This
name, commonly a term such as condition or time, distinguishes among the
sets of data that you wish to compare. You should type this term into the box
marked Within-Subject Factor Name at the top of the window.
b. The number that you type into the Number of Levels box tells SPSS how many data
sets you wish to compare. Your analysis can include all of your data sets or only
some of them. In the next window, you can specify which data sets to include in the
analysis.
4. Click Define. A window entitled Repeated Measures appears.
5. A box on the left side of the window contains the names of all variables for which you
have entered data. For each variable that you would like to include in the analysis, click
on its name and, then, on the arrow pointing to the Within-Subjects Variables box.
Doing so should move the variable name to that box.
6. To include descriptive statistics for the groups in the output, click on the window's
Options button.
a. Move the name of the analysis's within-subjects factor to the Display Means for box.
b. Mark Descriptive Statistics in the Display box.
c. Click Continue to return to the Repeated Measures box.
7. Click OK.
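If you prefer the syntax window to the menus, pasting the choices above produces a GLM command with a WSFACTOR subcommand. The sketch below assumes three hypothetical variable names, newspaper, magazine, and book, holding each subject's completion times, and uses condition as the within-subject factor name; substitute your own variable names.

* Sketch only; the variable names newspaper, magazine, and book are assumed.
GLM newspaper magazine book
  /WSFACTOR=condition 3 Polynomial
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(condition)
  /PRINT=DESCRIPTIVE
  /WSDESIGN=condition.

Here the WSFACTOR subcommand does the work of the Repeated Measures Define Factor(s) window, while PRINT=DESCRIPTIVE and EMMEANS=TABLES(condition) correspond to the choices made under Options.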
These steps create many output tables. Not all of these tables provide new information or
values that help you to determine whether category means differ significantly. The one of
primary interest to you should be the Tests of Within-Subjects Effects Table, which
contains significance values for the repeated-measures ANOVA itself. In the upper portion
of this table, labeled with the independent variable's name, four F and four p values appear.
These values are usually the same or almost the same. However, for a standard repeated-
measures ANOVA the row labeled Sphericity Assumed provides the F and p values that
you need.
If you requested that SPSS provide you with descriptive statistics (Step #6 in the process),
the output also includes a table entitled Descriptive Statistics. Values in this table become
especially useful when you reject the null hypothesis. In this situation, you must refer to the
descriptive statistics to determine which category's or categories' means differ from the
other(s) and the direction of the difference. This information can also help you determine
how to begin post-hoc tests.
Example 9.16 - Selected SPSS Output for One-Way Repeated Measures ANOVA
A one-way analysis performed using an expansion of the mock data set shown in Table 9.9
produces the following descriptive statistics and within-subject effects values.
[Table 9.10, Descriptive Statistics, and Table 9.11, Tests of Within-Subjects Effects (Measure: MEASURE_1), appear here. Table 9.11's columns are Source, Type III Sum of Squares, df, Mean Square, F, and Sig.; the individual values are not reproduced.]
TABLE 9.10 AND TABLE 9.11 SELECTED SPSS OUTPUT FOR ONEWAY REPEATED-MEASURES ANOVA
According to the means listed in the Descriptive Statistics Table (Table 9.10), subjects spent almost the same
amount of time completing crossword puzzles from newspapers and from magazines. Puzzles from
crossword puzzle books, however, took more time than did those from either of the other two publications.
The Sphericity Assumed Row in the section of Table 9.11, entitled Tests of Within-Subjects Effects, contains
the F and p values that indicate whether these means differ significantly.
The researcher must strongly contemplate the decision about accepting or rejecting the
null hypothesis for this analysis. The p value of .088 exceeds the standard α of .05,
suggesting that no significant differences exist between the means listed in Table 9.10.
Raising the α level to .10, however, would allow the researcher to reject the null hypothesis
of equality between means. He or she should weigh the importance of finding significant
differences against the increased chance of making a Type I error when deciding whether
to change the α value.
As with almost all SPSS output, the first table shown simply identifies the categories and
the number of subjects in each one. Of more interest than this information, however, is
likely the Descriptive Statistics output table, which appears only if you included Step #4
in the process of requesting the MANOVA. This table contains group means and standard
deviations for each individual dependent variable.
To assess the significance of differences between the mean values, you must evaluate
values in the Multivariate Tests table and, in some cases, the Tests of Between-Subjects
Effects table. The first of these tables contains F and p values for the MANOVA analysis
comparing groups' canonical variate means. The Tests of Between-Subjects Effects table
provides data for ANOVAs performed using each individual dependent variable.
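For reference, the same one-way MANOVA can also be requested from a syntax window. The sketch below assumes the variables carry the names used in Example 9.17, setting, characters, and plot as dependent variables and genre as the independent variable; adjust the names to match your data file.

* Sketch only; variable names follow Example 9.17 and are assumed.
GLM setting characters plot BY genre
  /METHOD=SSTYPE(3)
  /PRINT=DESCRIPTIVE
  /DESIGN=genre.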
[Table 9.12, Descriptive Statistics; Table 9.13, Multivariate Tests; and Table 9.14, Tests of Between-Subjects Effects, appear here.]
TABLE 9.12, TABLE 9.13, AND TABLE 9.14 SELECTED SPSS OUTPUT FOR ONEWAY MANOVA
Category means and standard deviations for the canonical variate appear in Table 9.12, entitled Descriptive
Statistics. Values in the Multivariate Tests table (Table 9.13) indicate whether these means differ
significantly. In this table, the row labeled Wilks' Lambda contains the values pertaining to the MANOVA
procedure described in Chapter 9. To further understand the p value included in this table, the researcher
might find values in Table 9.14, Tests of Between-Subjects Effects, useful. This table provides p values for
oneway ANOVAs comparing category means for each of the dependent variables that compose the canonical
variate.
Values in the lower portion of the Multivariate Tests table, labeled genre, indicate whether
canonical variate means differ significantly for those who experienced the story by reading
it, watching it as a film, and watching it as a Broadway musical. In this table, SPSS presents
the results from four possible techniques of obtaining F for the MANOVA. For an analysis
using Section 9.3.2's method involving Wilks' Λ, values in the Wilks' Lambda row of the table
should be examined. The F of 6.743 and the p of .000 indicate a significant difference
between the mean canonical variate values for each genre.
The presence of a significant difference in canonical variate means, however, does not
imply significant differences in the means for each dependent variable. The results of
ANOVAs that compare the mean setting, characters, and plot scores for each category
appear in Table 9.14. According to the values in the genre row of this table, and based
upon the standard α of .05, subjects in the three independent-variable categories do not
have significantly different recall of characters (F=1.715, p=.182). They do, however, have
significantly different recall of the story's setting (F=14.932, p=.000) and plot (F=7.355,
p=.001). The differences in these dependent-variable scores provide a mathematical
explanation for the differences in canonical variate scores.
Although not an issue for this analysis, values from Table 9.14 can also provide a
behind-the-scenes look when you have nonsignificant results. One cannot assume that
accepting the MANOVA's null hypothesis implies that the independent-variable groups have
equal scores on each dependent variable. Scores for one or more dependent variables may
differ significantly among groups. But, a majority of dependent variables with similar
scores may mute these differences in the canonical variate. The ANOVA results presented
in the Tests of Between Subjects Effects table identify any individual dependent variables
with significantly different group means.
Had results from Example 9.17's analysis led to an accepted null hypothesis, you could end
your analysis by stating that no significant differences between mean canonical variate
values exist. However, with a rejected null hypothesis, you must continue the analysis with
post-hoc comparisons to find at least one reason for the significant difference.
The same technique for performing post-hoc analyses for the ANOVA applies to the
MANOVA. However, rather than comparing category means for individual dependent
variables, the MANOVA's post-hoc analyses compare category means for canonical variates.
So, you should begin by identifying a category or categories with combinations of
dependent-variable scores that you believe differ from the others. The total values in the
Descriptive Statistics table can help you to determine which category or categories you
should contrast from the others.
You can obtain very specific information about the source of significant omnibus MANOVA
results by determining whether you can associate these differences with particular
dependent variables. To do so, you need to compare the means for a particular dependent
variable across categories or combinations of categories that your original post-hoc tests
identified as different. This investigation uses values in the Tests of Between-Subjects
Effects table. The rows labeled with independent variable names contain results from
ANOVAs that compare dependent variable means. (Note that, in Example 9.17, these values
and those that appear in the Corrected Model row are the same. The two rows contain
identical values only for a oneway test.) The values that appear to the right of each
dependent variable name indicate whether category means for that dependent variable,
alone, differ significantly.
If a dependent variable's scores don't differ significantly among groups (p > α), then that
dependent variable doesn't contribute to the difference in canonical variate values. But you
may wish to give some attention to dependent variables with scores that do differ
significantly (p < α). Post-hoc comparisons of these dependent variables' means amount to
nothing more than the t-tests used for post-hoc analyses of ANOVA results, described in
Chapter 7. When results of these tests indicate significant differences between means, you
know that scores for this component of the canonical variate help to account for the
significantly different canonical variate means.
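As a sketch of such a comparison, the independent-samples t-test below contrasts two of the genre categories on the dependent variables that differed significantly. The category codes 1 and 2 are assumptions; use whichever codes identify the categories singled out by your post-hoc plan, and consider adjusting α for the number of comparisons you make.

* Sketch only; the category codes 1 and 2 are assumed.
T-TEST GROUPS=genre(1 2)
  /VARIABLES=setting plot
  /CRITERIA=CI(.95).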
ANCOVA and MANCOVA with SPSS
If you know how to use SPSS's Univariate window to perform a multi-way ANOVA, then
you simply need to add a step to the process for an ANCOVA. Similarly, performing a
MANCOVA requires just one more step than performing a MANOVA using SPSS's
Multivariate window. In both cases, this step involves the identification of covariates. Both
the Univariate and the Multivariate windows contain a box labeled Covariate(s).
The entire process for performing an ANCOVA in SPSS, then, requires the following steps; a syntax equivalent appears after the list.
1. Choose General Linear Model from the Analyze pull-down menu. A new menu should
appear to the right of the pull-down menu.
2. Select Univariate from the new menu. A Univariate window should appear on the screen.
3. Highlight the name of the dependent variable from the list appearing in the upper left
corner of the window. Click on the arrow to the left of the Dependent Variable box. The
name of the variable should move to this box.
4. Highlight the name of one independent variable from the list appearing in the upper left
corner of the window. Click on the arrow to the left of the Fixed Factor(s) box. The
name of the variable should move to this box. Continue this process with each
independent variable name until they all appear as fixed factors.
5. Highlight the name of one covariate from the list appearing in the upper left corner of
the window. Click on the arrow to the left of the Covariate(s) box. The name of the
variable should move to this box. Continue this process with each covariate name until
all of the covariates appear in this box.
6. If you would like your output to include descriptive statistics, select the Options
button, located on the right side of the window. A new window, entitled Univariate:
Options should appear. Select Descriptive Statistics from the Display portion of this
window. Then, click Continue to return to the Univariate window. Skipping this step
still produces valid ANCOVA results; the output simply omits the descriptive statistics.
7. Click OK.
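In syntax form, the ANCOVA amounts to a GLM command that lists the covariate after the keyword WITH. The sketch below borrows the variable names that appear in Tables 9.15 and 9.16, change, technique, and health; substitute your own names as needed.

* Sketch only; variable names follow Tables 9.15 and 9.16.
GLM change BY technique WITH health
  /METHOD=SSTYPE(3)
  /PRINT=DESCRIPTIVE
  /DESIGN=health technique.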
Assuming you performed Step #6, above, the SPSS output for an ANCOVA begins with
descriptive statistics for each independent-variable category. The results of the significance
test appear in the table entitled Tests of Between-Subjects Effects. The Corrected Model
values in this table provide the ANCOVA's adjusted sum of squares and the resulting F and
significance (p) values.
[Table 9.15, Descriptive Statistics, and Table 9.16, Tests of Between-Subjects Effects, appear here; both list change as the dependent variable.]
TABLE 9.15 AND TABLE 9.16 SELECTED SPSS OUTPUT FOR ONEWAY ANCOVA
Descriptive statistics for each category of the independent variable appear in Table 9.15, labeled Descriptive
Statistics. The Tests of Between-Subjects Effects table (Table 9.16) lists both the independent variable, in
this case, technique, and the covariate, in this case health, as predictors of the dependent variable. The values
used to determine whether changes in heart rate differ significantly with respect to the independent variable
and considering the possible effects of the covariate appear in the top row of this table.
According to the results of this analysis, those exposed to each of the three relaxation techniques
did not experience significantly different changes in heart rate. The p value of .183 lies
above the standard α of .05 as well as above an elevated α of .10, indicating that one would
accept the null hypothesis of equality at these levels of significance. The analysis
considered differences in the overall health of patients in the three independent-variable
conditions when calculating these results, hence the designation of a Type III Sum of
Squares value in the Tests of Between-Subjects Effects table.
The process used to request and analyze SPSS results for an ANCOVA translates easily to a
MANCOVA. Performing a MANCOVA in SPSS requires the same steps, only you would need
to use SPSS's Multivariate, rather than Univariate, window. In the Multivariate window, you
can identify as many dependent variables as needed for the analysis. SPSS assembles the
values for the dependent variables into canonical variate scores. By inputting names of
covariates into the Covariate(s) box, you tell SPSS to consider the roles of these covariates
upon the relationship between the independent variables and the canonical variate.
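A MANCOVA sketch looks much the same in syntax form: several dependent variables appear before BY, and the covariate(s) follow WITH. The dependent and independent variable names below reuse those from Example 9.17, and the covariate age is purely hypothetical.

* Sketch only; the covariate age is hypothetical.
GLM setting characters plot BY genre WITH age
  /METHOD=SSTYPE(3)
  /PRINT=DESCRIPTIVE
  /DESIGN=age genre.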
The MANCOVA output that results contains a Multivariate Tests table. This table
resembles the Multivariate Tests table produced for a MANOVA; however, it also
includes the names of covariates. Assuming you wish to consider results based upon the
Wilks' Lambda procedure for obtaining F, you should focus upon values in this row of the
table. A p value that falls below α indicates significant differences between mean canonical
variate values for the independent-variable categories after adjusting for the covariate(s).
Discriminant Analysis with SPSS
Rather than working with pre-existing classifications of subjects, as the other tests in
Chapter 9 do, a discriminant analysis attempts to create classifications. To conduct a
discriminant analysis in SPSS, therefore, you cannot use the General Linear Model
function. The following process, whose syntax equivalent appears after the numbered steps,
allows you to use continuous values to predict subjects' group placements.
1. Choose the Classify option in SPSS's Analyze pull-down menu.
2. Identify your desired type of classification by choosing Discriminant from the prompts
given. A window entitled Discriminant Analysis should appear.
3. In this window, you can define the variables involved in the analysis as follows
a. Move the name of the categorical dependent variable from the box on the left to the
Grouping Variable box. You must also click on the Define Range button below
this box and type the values for the lowest and highest dummy-variable values used
to identify groups.
b. Identify the continuous measure(s) used to predict subjects' categories by moving
the names of the predictor(s) to the Independents box.
4. Click OK.
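The syntax counterpart of these steps is the DISCRIMINANT command, sketched below for the acreage and fencing-style example. The variable names fencing and acreage, and the assumption that fencing styles are coded 1 and 2, are illustrative; the values in GROUPS must match the range you would otherwise enter under Define Range.

* Sketch only; the names fencing and acreage and the codes 1 and 2 are assumed.
DISCRIMINANT
  /GROUPS=fencing(1 2)
  /VARIABLES=acreage
  /ANALYSIS ALL
  /PRIORS EQUAL
  /STATISTICS=MEAN STDDEV
  /CLASSIFY=NONMISSING POOLED.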
The Independents box in the Discriminant Analysis window allows you to identify more than
one predictor of subjects' categories. Inputting more than one independent variable leads
to a multiple discriminant analysis. The analysis presented in Chapter 9's examples, though,
uses a single independent variable.
Example 9.19 - SPSS Output for Discriminant Analysis
Tables 9.18 through 9.21 show some of the output from applying these steps to
imaginary data for the acreage and fencing style example first presented in Example 9.9. As
with the output for most tests of significance, SPSS first presents descriptive statistics and
then follows with values that indicate predictability. Among these values is a measure of
significance based upon the conversion of Wilks' Lambda into F, as described in Section 9.5.
Group Statistics
[Table 9.18 lists the Valid N (listwise) for each fencing-style category; the individual counts are not reproduced here.]

Eigenvalues
Function   Eigenvalue   % of Variance   Cumulative %   Canonical Correlation
1          .094         100.0           100.0          .294

Wilks' Lambda
Test of Function(s)   Wilks' Lambda   Chi-square   df   Sig.
[values appear in Table 9.20 of the original output]

Standardized Canonical Discriminant Function Coefficients
            Function 1
Acreage     1

TABLE 9.18, TABLE 9.19, TABLE 9.20, AND TABLE 9.21 SPSS OUTPUT FOR DISCRIMINANT ANALYSIS
The number of subjects in each grouping-variable category appears under Group Statistics in Table 9.18. The
remainder of the tables provide information regarding the predictability of these groups from continuous
predictor variable values. The Eigenvalues table (Table 9.19) contains a correlation coefficient (See Chapter
8) representing the linear relationship between the predictor variable and the grouping variable. With the
significance value in the Wilks' Lambda table (Table 9.20) and the coefficient in the Standardized Canonical
Discriminant Function Coefficients table (Table 9.21), the user can determine the strength of the relationship
between variables.
Of course, given the fact that this analysis involves only one independent variable, the
output is relatively simple compared to the output for a multiple discriminant analysis.
The canonical correlation shown in Table 9.19 amounts to the pairwise correlation
between the two variables. For a multiple discriminant analysis, it would describe the
linear relationship between the canonical variate (a combination of independent variables)
and the grouping variable. Also, the coefficient of 1, shown in Table 9.21, implies a
discriminating function of G=x. This equation suggests that all of the responsibility for
predicting fencing style lies with acreage. Still, you can easily see, based upon the
significance value in Table 9.20, that acreage sufficiently predicts the type of fencing used
to enclose property. The p value of .004 indicates a significant relationship between
acreage and fencing type at both α = .05 and α = .01.
For evaluations that involve more predictors than the one in Example 9.20, you can use
output values in a variety of ways. In particular, researchers often use values in the
Standardized Canonical Discriminant Function Coefficients table for more than just
identifying the discriminating function. These values can signify the importance of each
predictor variable in the relationship with the grouping variable. Because predictor
variables with very small coefficients have weak linear relationships with the grouping
variable, they likely add little to the predictability of the model. You may wish to perform
another discriminant analysis, omitting the predictor variables with low coefficients, to
determine whether you really need them to help classify subjects. If results of this analysis
also indicate significance, then you know that the omitted variables' presence makes little
difference in the ability to classify subjects. So, you do not have to regard them as
contributors to the overall canonical predictor. This process allows you to limit your
predictor variables to only those that truly help to predict subjects' categories.
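As a sketch of this trimming process, suppose a multiple discriminant analysis had used three hypothetical predictors, acreage, lot_value, and tree_cover, and that tree_cover received a very small standardized coefficient. A follow-up run that omits it might look like the following; if the reduced model remains significant, the omitted predictor evidently added little to the classification.

* Sketch only; the predictor names and group codes are hypothetical.
DISCRIMINANT
  /GROUPS=fencing(1 2)
  /VARIABLES=acreage lot_value
  /ANALYSIS ALL
  /PRIORS EQUAL
  /STATISTICS=MEAN STDDEV
  /CLASSIFY=NONMISSING POOLED.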