Chapter Seven: Multiple regression



Section 1: An introduction to multiple regression


WHAT IS MULTIPLE REGRESSION?

Multiple regression is a statistical technique that allows us to predict someone's score on one variable on the basis of their scores on several other variables. An example might help. Suppose we were interested in predicting how much an individual enjoys their job. Variables such as salary, extent of academic qualifications, age, sex, type of occupation, number of years in full-time employment and socioeconomic status might all contribute towards job satisfaction. If we collected data on all of these variables, perhaps by surveying a few hundred members of the public, we would be able to see how many and which of these variables gave rise to the most accurate prediction of job satisfaction. We might find that job satisfaction is most accurately predicted by type of occupation, salary and years in full-time employment, with the other variables not helping us to predict job satisfaction.

When using multiple regression in psychology, many researchers use the term "independent variables" to identify those variables that they think will influence some other "dependent variable". We prefer to use the term "predictor variables" for those variables that may be useful in predicting the scores on another variable that we call the "criterion variable". Thus, in our example above, type of occupation, salary and years in full-time employment would emerge as significant predictor variables, which allow us to estimate the criterion variable - how satisfied someone is likely to be with their job. As we have pointed out before, human behaviour is inherently noisy, so it is not possible to produce totally accurate predictions, but multiple regression allows us to identify a set of predictor variables which together provide a useful estimate of a participant's likely score on a criterion variable.
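In equation form, the model that multiple regression fits is the standard textbook regression equation (given here for reference; it is not specific to this example):

$$\hat{Y} = b_0 + b_1 X_1 + b_2 X_2 + \dots + b_k X_k$$

where Y-hat is the predicted score on the criterion variable, X1 to Xk are the k predictor variables, b0 is a constant (the intercept) and b1 to bk are the regression coefficients estimated from the data.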

HOW DOES MULTIPLE REGRESSION RELATE TO CORRELATION AND ANALYSIS OF VARIANCE?

In a previous section (Chapter 4, Section 2), we introduced you to correlation and the regression line. If two variables are correlated, then knowing the score on one variable will allow you to predict the score on the other. The stronger the correlation, the closer the scores will fall to the regression line and therefore the more accurate the prediction. Multiple regression is simply an extension of this principle, where we predict one variable on the basis of several others. Having more than one predictor variable is useful when predicting human behaviour, as our actions, thoughts and emotions are all likely to be influenced by some combination of several factors.

Using multiple regression we can test theories (or models) about precisely which set of variables is influencing our behaviour. As we discussed in Chapter 6, Section 1, on Analysis of Variance, human behaviour is rather variable and therefore difficult to predict. What we are doing in both ANOVA and multiple regression is seeking to account for the variance in the scores we observe. Thus, in the example above, people might vary greatly in their levels of job satisfaction. Some of this variance will be accounted for by the variables we have identified. For example, we might be able to say that salary accounts for a fairly large percentage of the variance in job satisfaction, and hence it is very useful to know someone's salary when trying to predict their job satisfaction.

You might now be able to see that the ideas here are rather similar to those underlying ANOVA. In ANOVA we try to determine how much of the variance is accounted for by our manipulation of the independent variables (relative to the percentage of the variance we cannot account for). In multiple regression we do not directly manipulate the IVs; instead we measure the naturally occurring levels of the variables and see whether this helps us predict the score on the dependent (or criterion) variable. Thus, ANOVA is actually a rather specific and restricted example of the general approach adopted in multiple regression. To put this another way, in ANOVA we directly manipulate the factors and measure the resulting change in the dependent variable, whereas in multiple regression we simply measure the naturally occurring scores on a number of predictor variables and try to establish which set of the observed variables gives rise to the best prediction of the criterion variable.

A current trend in statistics is to emphasise the similarity between multiple regression and ANOVA, and between correlation and the t-test. All of these statistical techniques are basically seeking to do the same thing: explain the variance in the level of one variable on the basis of the level of one or more other variables. These other variables might be manipulated directly, in the case of controlled experiments, or observed, in the case of surveys or observational studies, but the underlying principle is the same. Thus, although we have given separate chapters to each of these procedures, they are fundamentally all the same procedure. This underlying single approach is called the General Linear Model - a term you first encountered when we were undertaking ANOVA in Chapter 6, Section 1.

WHEN SHOULD I USE MULTIPLE REGRESSION?

1. You can use this statistical technique when exploring linear relationships between the predictor and criterion variables - that is, when the relationship follows a straight line. (To examine non-linear relationships, special techniques can be used.)

2. The criterion variable that you are seeking to predict should be measured on a continuous scale (such as an interval or ratio scale). There is a separate regression method, called logistic regression, that can be used for dichotomous dependent variables (not covered here).

3. The predictor variables that you select should be measured on a ratio, interval or ordinal scale. A nominal predictor variable is legitimate, but only if it is dichotomous, i.e. there are no more than two categories. For example, sex is acceptable (where male is coded as 1 and female as 0), but gender identity (masculine, feminine and androgynous) could not be coded as a single variable. Instead, you would create three different variables, each with two categories (masculine/not masculine; feminine/not feminine; androgynous/not androgynous). The term "dummy variable" is used to describe this type of dichotomous variable (a syntax sketch follows this list).

4. Multiple regression requires a large number of observations. The number of cases (participants) must substantially exceed the number of predictor variables you are using in your regression. The absolute minimum is that you have five times as many participants as predictor variables. A more acceptable ratio is 10:1, but some people argue that this should be as high as 40:1 for some statistical selection methods (see the Selection methods section below).
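For point 3, the sketch below shows one way to build such dummy variables in SPSS syntax. The variable name genderid and its coding (1 = masculine, 2 = feminine, 3 = androgynous) are our own hypothetical example, not variables from the study described later in this chapter:

```
* Hypothetical example: turn a three-category variable, genderid,
* into three dichotomous (0/1) dummy variables.
RECODE genderid (1=1) (ELSE=0) INTO masc.
RECODE genderid (2=1) (ELSE=0) INTO fem.
RECODE genderid (3=1) (ELSE=0) INTO andro.
EXECUTE.
```

Note that if all three dummies were entered into the same regression they would be perfectly collinear (they always sum to 1), so in practice researchers usually enter only two of them, with the omitted category acting as the reference.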

TERMINOLOGY

There are certain terms we need to clarify to allow you to understand the results of this statistical technique.

Beta (standardised regression coefficients)

The beta value is a measure of how strongly each predictor variable influences the criterion variable. Beta is measured in units of standard deviation. For example, a beta value of 2.5 indicates that a change of one standard deviation in the predictor variable will result in a change of 2.5 standard deviations in the criterion variable. Thus, the higher the beta value, the greater the impact of the predictor variable on the criterion variable. When you have only one predictor variable in your model, beta is equivalent to the correlation coefficient between the predictor and the criterion variable.


This equivalence makes sense, as in that situation the model is simply the correlation between the two variables. When you have more than one predictor variable, you cannot compare the contribution of each predictor by simply comparing their correlation coefficients with the criterion. The beta regression coefficients are computed to allow you to make such comparisons and to assess the strength of the relationship between each predictor variable and the criterion variable.

R, R Square, Adjusted R Square

R is a measure of the correlation between the observed value and the predicted value of the criterion variable. In our example, this would be the correlation between the levels of job satisfaction reported by our participants and the levels predicted for them by our predictor variables. R Square (R²) is the square of this measure of correlation and indicates the proportion of the variance in the criterion variable which is accounted for by our model - in our example, the proportion of the variance in the job satisfaction scores accounted for by our set of predictor variables (salary, etc.). In essence, this is a measure of how good a prediction of the criterion variable we can make by knowing the predictor variables. However, R Square tends to somewhat over-estimate the success of the model when applied to the real world, so an Adjusted R Square value is calculated which takes into account the number of predictor variables in the model and the number of observations (participants) the model is based on. This Adjusted R Square value gives the most useful measure of the success of our model. If, for example, we have an Adjusted R Square value of 0.75, we can say that our model has accounted for 75% of the variance in the criterion variable.
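Two standard formulas lie behind this section; they are general statistics results rather than anything specific to SPSS. The standardised beta for predictor j rescales the unstandardised coefficient b_j by the standard deviations of the predictor and the criterion, and Adjusted R Square corrects R Square for the number of predictors k and the sample size n:

$$\beta_j = b_j \, \frac{s_{X_j}}{s_Y}, \qquad R^2_{adj} = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1}$$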

DESIGN CONSIDERATIONS

Multicollinearity

When choosing a predictor variable, you should select one that might be correlated with the criterion variable, but that is not strongly correlated with the other predictor variables. However, correlations amongst the predictor variables are not unusual. The term multicollinearity (or collinearity) is used to describe the situation where a high correlation is detected between two or more predictor variables. Such high correlations cause problems when trying to draw inferences about the relative contribution of each predictor variable to the success of the model. SPSS provides you with a means of checking for this, and we describe this below.
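A simple first screen for multicollinearity is to inspect the correlations among the candidate predictors before running the regression. A sketch, using hypothetical variable names from the job satisfaction example above:

```
* Screen the predictors for high inter-correlations.
* Variable names (salary, qualif, age, yearsemp) are hypothetical.
CORRELATIONS
  /VARIABLES=salary qualif age yearsemp
  /PRINT=TWOTAIL NOSIG.
```

Very high correlations (say, above .8) between two predictors would be a warning sign.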


Selection methods

There are different ways that the relative contribution of each predictor variable can be assessed. In the simultaneous method (which SPSS calls the Enter method), the researcher specifies the set of predictor variables that make up the model. The success of this model in predicting the criterion variable is then assessed.

In contrast, hierarchical methods enter the variables into the model in a specified order. The order specified should reflect some theoretical consideration or previous findings. If you have no reason to believe that one variable is likely to be more important than another, you should not use this method. As each variable is entered into the model, its contribution is assessed. If adding the variable does not significantly increase the predictive power of the model, the variable is dropped.

In statistical methods, the order in which the predictor variables are entered into (or taken out of) the model is determined according to the strength of their correlation with the criterion variable. There are several versions of this method: forward selection, backward selection and stepwise selection.

In Forward selection, SPSS enters the variables into the model one at a time, in an order determined by the strength of their correlation with the criterion variable. The effect of adding each variable is assessed as it is entered, and variables that do not significantly add to the success of the model are excluded.

In Backward selection, SPSS enters all the predictor variables into the model. The weakest predictor variable is then removed and the regression re-calculated. If this significantly weakens the model, the predictor variable is re-entered; otherwise it is deleted. This procedure is then repeated until only useful predictor variables remain in the model.

Stepwise is the most sophisticated of these statistical methods. Each variable is entered in sequence and its value assessed. If adding the variable contributes to the model it is retained, but all other variables in the model are then re-tested to see whether they still contribute to the success of the model. If they no longer contribute significantly, they are removed. Thus, this method should ensure that you end up with the smallest possible set of predictor variables included in your model.

In addition to the Enter, Stepwise, Forward and Backward methods, SPSS also offers the Remove method, in which variables are removed from the model in a block; its use is not described here.

How to choose the appropriate method?


If you have no theoretical model in mind, and/or you have relatively low numbers of cases, then it is probably safest to use Enter, the simultaneous method. Statistical selection procedures should be used with caution, and only when you have a large number of cases. This is because minor variations in the data due to sampling error can have a large effect on the order in which variables are entered, and therefore on the likelihood of their being retained. However, one advantage of the Stepwise method is that it should always result in the most parsimonious model. This could be important if you wanted to know the minimum number of variables you would need to measure in order to predict the criterion variable. If, for this or some other reason, you decide to select a statistical method, then you should attempt to validate your results with a second independent set of data. This can be done either by conducting a second study, or by randomly splitting your data set into two halves (see Chapter 5, Section 3). Only results that are common to both analyses should be reported.
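One way to carry out the random split in syntax is sketched below; the variable name half and the seed value are arbitrary choices of ours, and Chapter 5, Section 3 describes the menu-based procedure:

```
* Randomly assign each case to one of two halves.
* Fixing the seed makes the split reproducible.
SET SEED = 12345.
COMPUTE half = (UNIFORM(1) > 0.5).
EXECUTE.
* Analyse each half separately via SPLIT FILE.
SORT CASES BY half.
SPLIT FILE BY half.
```

The regression can then be run once, and SPSS will produce separate output for each half of the data.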


Section 2: Performing a multiple regression on SPSS


EXAMPLE STUDY

In an investigation of children's spelling, a colleague of ours, Corriene Reed, decided to look at the importance of several psycholinguistic variables on spelling performance. Previous research has shown that age of acquisition has an effect on children's reading and also on object naming.

A total of 64 children, aged between 7 and 9 years, completed standardised reading and spelling tests and were then asked to spell 48 words that varied systematically according to certain features, such as age of acquisition, word frequency, word length and imageability. Word length and age of acquisition emerged as significant predictors of whether a word was likely to be spelt correctly.

Further analysis was conducted on the data to determine whether spelling performance on this list of 48 words accurately reflected the children's spelling ability as estimated by a standardised spelling test. Children's chronological age, their reading age, their standardised reading score and their standardised spelling score were chosen as the predictor variables. The criterion variable was the percentage correct spelling score attained by each child on the list of 48 words.

For the purposes of this book, we have created a data file that will reproduce some of the findings from this second analysis. As you will see, the standardised spelling score derived from a validated test emerged as a strong predictor of the spelling score achieved on the word list. The data file contains only a subset of the data collected and is used here to demonstrate multiple regression. (These data are available in the Appendix.)

HOW TO PERFORM THE TEST

For SPSS Versions 9 and 10, click on Analyze > Regression > Linear. For SPSS Version 8, click on Statistics > Regression > Linear.

You will then be presented with the Linear Regression dialogue box shown below. You now need to select the criterion (dependent) variable and the predictor (independent) variables. We have chosen to use the percentage correct spelling score (spelperc) as our criterion variable.


As our predictor variables, we have used chronological age (age), reading age (readage), standardised reading score (standsc) and standardised spelling score (spellsc). As we have a relatively small number of cases and do not have any strong theoretical predictions, we recommend you select Enter (the simultaneous method); this is usually the safest method to adopt.
[Linear Regression dialogue box: select the criterion (dependent) variable and move it into the Dependent box; select the predictor (independent) variables and move them into the Independent(s) box; choose the Method you wish to employ - if in doubt, use the Enter method.]

Now click on the Statistics button. This will bring up the Linear Regression: Statistics dialogue box shown below.

[Linear Regression: Statistics dialogue box: select Estimates, Model fit and Descriptives. You may also select Collinearity diagnostics. If you are not using the Enter method, you should also select R squared change.]

The Collinearity diagnostics option gives some useful additional output that allows you to assess whether you have a problem with collinearity in your data. The R squared change option is useful if you have selected a statistical method such as Stepwise, as it makes clear how the power of the model changes with the addition or removal of a predictor variable.


When you have selected the statistics options you require, click on the Continue button. This will return you to the Linear Regression dialogue box. Now click on the OK button to run the analysis. The output that will be produced is illustrated on the following pages.
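For reference, the menu choices described above can also be run from syntax. The sketch below uses standard REGRESSION subcommands corresponding to the options we selected (Descriptives, Estimates, Model fit, Collinearity diagnostics and R squared change), with the variable names given earlier:

```
* Multiple regression with the Enter method; missing values
* are excluded listwise (the SPSS default for this procedure).
REGRESSION
  /MISSING LISTWISE
  /DESCRIPTIVES MEAN STDDEV CORR SIG N
  /STATISTICS COEFF OUTS R ANOVA CHANGE COLLIN TOL
  /DEPENDENT spelperc
  /METHOD=ENTER age readage standsc spellsc.
```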

Tip: The SPSS multiple regression option was set to Exclude cases listwise. Hence, although the researcher collected data from 52 participants, SPSS analysed the data from only the 47 participants who had no missing values.


SPSS OUTPUT FOR MULTIPLE REGRESSION USING THE ENTER METHOD

Obtained using menu items: Regression > Linear (Method = Enter)


This first table is produced by the Descriptives option.

Descriptive Statistics
                                  Mean       Std. Deviation    N
percentage correct spelling        59.7660       23.9331       47
chronological age                  93.4043        7.4910       47
reading age                        89.0213       21.3648       47
standardised reading score         95.5745       17.7834       47
standardised spelling score       107.0851       14.9882       47

This second table gives details of the correlation between each pair of variables. We want the predictor variables to correlate with the criterion variable, but not too strongly with one another (see Multicollinearity above). The values here are acceptable.

Correlations (Pearson r; N = 47 throughout)
                               spelling %   chron. age   reading age   std. reading   std. spelling
percentage correct spelling      1.000         -.074        .623           .778            .847
chronological age                -.074         1.000        .124          -.344           -.416
reading age                       .623          .124       1.000           .683            .570
standardised reading score        .778         -.344        .683          1.000            .793
standardised spelling score       .847         -.416        .570           .793           1.000

SPSS also reports the one-tailed significance of each correlation; all of the predictors except chronological age (p = .311) correlate significantly with percentage correct spelling.


This third table tells us about the predictor variables and the method used. Here we can see that all of our predictor variables were entered simultaneously (because we selected the Enter method).

Variables Entered/Removed(b)
Model   Variables Entered                                                                            Variables Removed   Method
1       standardised spelling score, chronological age, reading age, standardised reading score(a)  .                   Enter
a. All requested variables entered.
b. Dependent Variable: percentage correct spelling

This table is important. The Adjusted R Square value tells us that our model accounts for 83.8% of the variance in the spelling scores - a very good model!

Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .923a   .852       .838                9.6377
a. Predictors: (Constant), standardised spelling score, chronological age, reading age, standardised reading score
This table reports an ANOVA, which assesses the overall significance of our model. As p < 0.05, our model is significant.

ANOVA(b)
Model 1       Sum of Squares    df    Mean Square    F        Sig.
Regression        22447.277      4       5611.819    60.417   .000a
Residual           3901.149     42         92.884
Total             26348.426     46
a. Predictors: (Constant), standardised spelling score, chronological age, reading age, standardised reading score
b. Dependent Variable: percentage correct spelling
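As a quick check (ours, not part of the SPSS output), the values in this table and the Model Summary hang together arithmetically:

$$F = \frac{MS_{reg}}{MS_{res}} = \frac{5611.819}{92.884} \approx 60.42, \qquad R^2 = \frac{SS_{reg}}{SS_{total}} = \frac{22447.277}{26348.426} \approx .852$$

and, applying the Adjusted R Square formula from Section 1 with n = 47 and k = 4, 1 - (1 - .852)(46/42) = .838, as reported.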

Coefficients(a)
                                 Unstandardized Coefficients    Standardized Coefficients
                                 B            Std. Error        Beta                        t        Sig.
(Constant)                       -232.079     30.500                                        -7.609   .000
chronological age                   1.298       .252             .406                        5.159   .000
reading age                         -.162       .110            -.144                       -1.469   .149
standardised reading score           .530       .156             .394                        3.393   .002
standardised spelling score         1.254       .165             .786                        7.584   .000
a. Dependent Variable: percentage correct spelling

The Standardized Beta Coefficients give a measure of the contribution of each variable to the model. A large value indicates that a unit change in this predictor variable has a large effect on the criterion variable. The t and Sig (p) values give a rough indication of the impact of each predictor variable - a big absolute t value and small p value suggest that a predictor variable is having a large impact on the criterion variable. If you requested Collinearity diagnostics, these will also be included in this table - see below.

Collinearity diagnostics

If you requested the optional Collinearity diagnostics, these will be shown as two additional columns of the Coefficients table (the last table shown above) and a further table (titled Collinearity Diagnostics) that is not shown here. Ignore this extra table and simply look at the two new columns.
Coefficients(a)
                                 B           Std. Error   Beta    t        Sig.   Tolerance   VIF
(Constant)                       -232.079    30.500               -7.609   .000
chronological age                   1.298      .252       .406     5.159   .000   .568        1.759
reading age                         -.162      .110      -.144    -1.469   .149   .365        2.737
standardised reading score           .530      .156       .394     3.393   .002   .262        3.820
standardised spelling score         1.254      .165       .786     7.584   .000   .329        3.044
a. Dependent Variable: percentage correct spelling
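The two collinearity statistics are directly related; for predictor j, tolerance is 1 minus the squared multiple correlation obtained when that predictor is itself regressed on all the other predictors, and VIF is its reciprocal (a standard definition, not specific to SPSS):

$$Tolerance_j = 1 - R_j^2, \qquad VIF_j = \frac{1}{Tolerance_j}$$

For example, for chronological age above, 1/.568 ≈ 1.76, matching the VIF reported.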

The tolerance values are a measure of the correlation between the predictor variables and can vary between 0 and 1. The closer to zero the tolerance value is for a variable, the stronger the relationship between this and the other predictor variables. You should worry about variables that have a very low tolerance. SPSS will not include a predictor variable in a model if it has a tolerance of less than 0.0001. However, you may want to set your own criterion rather higher - perhaps excluding any variable that has a tolerance of less than 0.01. VIF is an alternative measure of collinearity (the reciprocal of tolerance, as shown above) in which a large value indicates a strong relationship between predictor variables.

Reporting the results

When reporting the results of a multiple regression analysis, you want to inform the reader about the proportion of the variance accounted for by your model, the significance of your model and the significance of the predictor variables. In the results section, we would write:

Using the Enter method, a significant model emerged (F(4,42) = 60.417, p < 0.0005; Adjusted R Square = .838). Significant variables are shown below:

Predictor Variable               Beta    p
Chronological age                .406    p < 0.0005
Standardised reading score       .394    p = 0.002
Standardised spelling score      .786    p < 0.0005

(Reading age was not a significant predictor in this model.)


OUTPUT FROM MULTIPLE REGRESSION USING STEPWISE METHOD

Obtained using menu items: Regression > Linear (Method = Stepwise)

Reproduced below are the key parts of the output produced when the Stepwise method is selected. When using this method, you should also select the R squared change option in the Linear Regression: Statistics dialogue box (see above).
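A syntax sketch corresponding to these menu choices (the PIN and POUT criteria shown are the SPSS defaults, which also appear in the output below):

```
* Multiple regression using the Stepwise selection method.
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA CHANGE COLLIN TOL
  /CRITERIA=PIN(.05) POUT(.10)
  /DEPENDENT spelperc
  /METHOD=STEPWISE age readage standsc spellsc.
```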
This table shows us the order in which the variables were entered into and removed from our model. We can see that in this case three variables were added and none were removed.

Variables Entered/Removed(a)
Model   Variables Entered              Variables Removed   Method
1       standardised spelling score    .                   Stepwise (Criteria: Probability-of-F-to-enter <= .050, Probability-of-F-to-remove >= .100)
2       chronological age              .                   Stepwise (criteria as above)
3       standardised reading score     .                   Stepwise (criteria as above)
a. Dependent Variable: percentage correct spelling

Here we can see that model 1, which included only standardised spelling score, accounted for 71% of the variance (Adjusted R Square = .711). The inclusion of chronological age in model 2 resulted in an additional 9% of the variance being explained (R Square Change = .094). The final model 3 also included standardised reading score, and this model accounted for 83% of the variance (Adjusted R Square = .833).
Model Summary
                                                               Change Statistics
Model   R       R Square   Adjusted    Std. Error of    R Square   F Change   df1   df2   Sig. F
                           R Square    the Estimate     Change                            Change
1       .847a   .717       .711        12.8708          .717       114.055    1     45    .000
2       .900b   .811       .802        10.6481          .094        21.747    1     44    .000
3       .919c   .844       .833         9.7665          .034         9.302    1     43    .004
a. Predictors: (Constant), standardised spelling score
b. Predictors: (Constant), standardised spelling score, chronological age
c. Predictors: (Constant), standardised spelling score, chronological age, standardised reading score
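As a quick arithmetic check (ours, not part of the SPSS output), the R Square Change values accumulate to the R Square of the final model, within rounding:

$$.717 + .094 + .034 = .845 \approx .844$$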


This table reports the ANOVA results for the three models.

ANOVA(d)
Model                Sum of Squares    df    Mean Square    F         Sig.
1   Regression           18893.882      1      18893.882    114.055   .000a
    Residual              7454.543     45        165.657
    Total                26348.426     46
2   Regression           21359.610      2      10679.805     94.193   .000b
    Residual              4988.815     44        113.382
    Total                26348.426     46
3   Regression           22246.870      3       7415.623     77.744   .000c
    Residual              4101.556     43         95.385
    Total                26348.426     46
a. Predictors: (Constant), standardised spelling score
b. Predictors: (Constant), standardised spelling score, chronological age
c. Predictors: (Constant), standardised spelling score, chronological age, standardised reading score
d. Dependent Variable: percentage correct spelling
Here SPSS reports the B, Beta, t and Sig (p) values for each of the models, together with the collinearity statistics. These were explained in the output from the Enter method.

Coefficients(a)
Model                                B          Std. Error   Beta    t         Sig.   Tolerance   VIF
1   (Constant)                       -85.032    13.688               -6.212    .000
    standardised spelling score        1.352      .127       .847    10.680    .000   1.000       1.000
2   (Constant)                      -209.328    28.959               -7.228    .000
    standardised spelling score        1.576      .115       .987    13.679    .000    .827       1.209
    chronological age                  1.075      .230       .336     4.663    .000    .827       1.209
3   (Constant)                      -209.171    26.562               -7.875    .000
    standardised spelling score        1.197      .163       .750     7.349    .000    .348       2.875
    chronological age                  1.092      .211       .342     5.162    .000    .827       1.210
    standardised reading score          .406      .133       .301     3.050    .004    .371       2.698
a. Dependent Variable: percentage correct spelling

This table gives statistics for the variables that were excluded from each model.

Excluded Variables(d)
Model                               Beta In   t        Sig.   Partial       Tolerance   VIF     Minimum
                                                              Correlation                       Tolerance
1   chronological age               .336a      4.663   .000    .575         .827        1.209   .827
    reading age                     .208a      2.249   .030    .321         .675        1.481   .675
    standardised reading score      .288a      2.317   .025    .330         .371        2.696   .371
2   reading age                     .036b       .395   .695    .060         .517        1.933   .435
    standardised reading score      .301b      3.050   .004    .422         .371        2.698   .348
3   reading age                    -.144c     -1.469   .149   -.221         .365        2.737   .262
a. Predictors in the Model: (Constant), standardised spelling score
b. Predictors in the Model: (Constant), standardised spelling score, chronological age
c. Predictors in the Model: (Constant), standardised spelling score, chronological age, standardised reading score
d. Dependent Variable: percentage correct spelling


Thus, the final model to emerge from the Stepwise analysis contains only three predictor variables. The predictor variable reading age, which was not significant in the Enter analysis, was not included in the Stepwise model either, as it did not significantly strengthen the model.

REPORTING THE RESULTS

In your results section, you would report the significance of the model by citing the F and the associated p value, along with the Adjusted R Square, which indicates the strength of the model. So, for the final model reported above, we would write:

Adjusted R Square = .833; F(3,43) = 77.7, p < 0.0005 (using the Stepwise method). Significant variables are shown below:

Predictor Variable               Beta    p
Standardised spelling score      .750    p < 0.0005
Chronological age                .342    p < 0.0005
Standardised reading score       .301    p = 0.004

(Reading age was not a significant predictor in this model.)

