Regression
(ver. 6.0)
Oscar Torres-Reyna
Data Consultant
[email protected]
http://dss.princeton.edu/training/
Regression: a practical approach (overview)

We use regression to estimate the unknown effect of changing one variable on another (Stock and Watson, 2003, ch. 4). When running a regression we are making two assumptions: 1) there is a linear relationship between the two variables (i.e. X and Y), and 2) this relationship is additive (i.e. Y = x1 + x2 + ... + xN). Technically, linear regression estimates how much Y changes when X changes by one unit.

In Stata use the command regress:

regress [dependent variable] [independent variable(s)]
regress y x

In a multivariate setting we type:

regress y x1 x2 x3

Before running a regression it is recommended to have a clear idea of what you are trying to estimate (i.e. which are your outcome and predictor variables). A regression makes sense only if there is a sound theory behind it.
Outcome (Y) variable: SAT scores, variable csat in the dataset.

Predictor (X) variables:
- Per pupil expenditures, primary & secondary (expense)
- % HS graduates taking SAT (percent)
- Median household income (income)
- % adults with HS diploma (high)
- % adults with college degree (college)
- Region (region)
Use the file states.dta (educational data for the U.S.).

*Source: Data and examples come from the book Statistics with Stata (updated for version 9) by Lawrence C. Hamilton (chapter 6). The data can be downloaded from http://dss.princeton.edu/training/states.dta or found at http://www.duxbury.com/highered/.
Regression: variables
It is recommended first to examine the variables in the model to check for possible errors. Type:

use http://dss.princeton.edu/training/states.dta
describe csat expense percent income high college region
summarize csat expense percent income high college region
. describe csat expense percent income high college region

              storage  display     value
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
csat            int    %9.0g                  Mean composite SAT score
expense         int    %9.0g                  Per pupil expenditures prim&sec
percent         byte   %9.0g                  % HS graduates taking SAT
income          double %10.0g                 Median household income, $1,000
high            float  %9.0g                  % adults HS diploma
college         float  %9.0g                  % adults college degree
region          byte   %9.0g       region     Geographical region

. summarize csat expense percent income high college region

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
        csat |        51     944.098    66.93497        832       1093
     expense |        51    5235.961    1401.155       2960       9259
     percent |        51    35.76471    26.19281          4         81
      income |        51    33.95657    6.423134     23.465     48.618
        high |        51    76.26078    5.588741       64.3       86.6
     college |        51    20.02157     4.16578       12.3       33.3
      region |        50        2.54    1.128662          1          4
. regress csat expense, robust

Linear regression                               Number of obs =      51
                                                F(  1,    49) =   36.80
                                                Prob > F      =  0.0000
                                                R-squared     =  0.2174
                                                Root MSE      =  59.814

-----------------------------------------------------------
             |               Robust
        csat |      Coef.   Std. Err.      t
-------------+---------------------------------------------
     expense |    -0.0223   .0036719    -6.07
       _cons |     1060.7   24.35468    43.55
-----------------------------------------------------------

Prob > F is the p-value of the model. It tests whether R-squared is different from 0. Usually we need a p-value lower than 0.05 to show a statistically significant relationship between X and Y.

R-squared shows the amount of variance of Y explained by X. In this case expense explains 22% of the variance in SAT scores. Adj R-squared (not shown here) is the same as R-squared but adjusted by the number of cases and number of variables. When the number of variables is small and the number of cases is very large, Adj R-squared is closer to R-squared; this provides a more honest measure of the association between X and Y.

Root MSE (root mean squared error) is the standard deviation of the regression; the closer to zero, the better the fit.

The estimated equation is csat = 1061 - 0.022*expense: for each one-unit increase in expense, SAT scores decrease by 0.022 points.

The t-values test the hypothesis that each coefficient is different from 0. To reject this, you need a t-value greater than 1.96 in absolute value (for 95% confidence). You can get the t-values by dividing the coefficient by its standard error. The t-values also indicate the relative importance of a variable in the model.

Two-tail p-values test the hypothesis that each coefficient is different from 0. To reject this, the p-value has to be lower than 0.05 (you could also choose an alpha of 0.10). In this case, expense is statistically significant in explaining SAT.
. regress csat expense percent income high college, robust

Linear regression                               Number of obs =      51
                                                F(  5,    45) =   50.90
                                                Prob > F      =  0.0000
                                                R-squared     =  0.8243
                                                Root MSE      =  29.571

------------------------------------------------------------------------------
             |               Robust
        csat |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     expense |   .0033528    .004781     0.70   0.487    -.0062766    .0129823
     percent |  -2.618177   .2288594   -11.44   0.000    -3.079123     -2.15723
      income |   .1055853   1.207246     0.09   0.931    -2.325933     2.537104
        high |   1.630841    .943318     1.73   0.091    -.2690989     3.530781
     college |   2.030894   2.113792     0.96   0.342    -2.226502      6.28829
       _cons |   851.5649   57.28743    14.86   0.000     736.1821     966.9477
------------------------------------------------------------------------------

Prob > F is the p-value of the model. It indicates the reliability of X to predict Y. Usually we need a p-value lower than 0.05 to show a statistically significant relationship between X and Y.

R-squared shows the amount of variance of Y explained by X. In this case the model explains 82.43% of the variance in SAT scores. Adj R-squared (not shown here) is the same as R-squared but adjusted by the number of cases and number of variables. When the number of variables is small and the number of cases is very large, Adj R-squared is closer to R-squared; this provides a more honest measure of the association between X and Y.

Root MSE (root mean squared error) is the standard deviation of the regression; the closer to zero, the better the fit.

The t-values test the hypothesis that each coefficient is different from 0. To reject this, you need a t-value greater than 1.96 in absolute value (at the 0.05 level). You can get the t-values by dividing the coefficient by its standard error. The t-values also indicate the relative importance of a variable in the model; in this case, percent is the most important.

Two-tail p-values test the hypothesis that each coefficient is different from 0. To reject this, the p-value has to be lower than 0.05 (you could also choose an alpha of 0.10). In this case, expense, income, and college are not statistically significant in explaining SAT; high is almost significant at 0.10. Percent is the only variable that has a significant impact on SAT (its coefficient is significantly different from 0).
. xi: regress csat expense percent income high college i.region, robust

Linear regression

------------------------------------------------------------------------------
        csat |      Coef.    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     expense |   -.002021    0.576    -.0092676    .0052256
     percent |  -3.007647    0.000    -3.483864     -2.53143
      income |  -.1674421    0.889    -2.583638     2.248754
        high |   1.814731    0.085    -.2592168     3.888679
     college |   4.670564    0.006     1.439705     7.901422
  _Iregion_2 |   69.45333    0.000     33.10295     105.8037
  _Iregion_3 |   25.39701    0.049      .101086     50.69293
  _Iregion_4 |   34.57704    0.001      15.4926     53.66149
       _cons |   808.0206    0.000     670.9661     945.0751
------------------------------------------------------------------------------
NOTE: By default xi excludes the first category. To select a different base category, before running the regression type:

char region[omit] 4
xi: regress csat expense percent income high college i.region, robust

This will select Midwest (4) as the reference category for the dummy variables.

NOTE: Another way to create dummy variables is to type:

tab region, gen(region)

This will create four new variables (or as many as there are categories in the variable), one for each region in this case.
F = [MSS/(k-1)] / [RSS/(n-k)] = (200269.84/9) / (12691.5396/40) = 22252.2045 / 317.28849 = 70.13

R2 = MSS/TSS = 1 - (sum of e_i^2) / (sum of (y_i - ybar)^2) = 200269.84 / 212961.38 = 0.9404

Adj R2 = 1 - (1 - R2)(n - 1)/(n - k) = 0.927

Root MSE = sqrt[RSS/(n - k)] = sqrt(317.28849) = 17.81
MSS = Model Sum of Squares: the closer to TSS, the better the fit.
RSS = Residual Sum of Squares.
TSS = Total Sum of Squares.
MSS/(k-1) = average model sum of squares, where k = number of parameters in the model (including the intercept).
RSS/(n-k) = average residual sum of squares, where n = number of observations.
TSS/(n-1) = average total sum of squares.

R2 shows the amount of observed variance explained by the model, in this case 94%. The F-statistic, F(9, 40), tests whether R2 is different from zero. Root MSE shows the average distance of the observed values from the regression line, in this case about 18 points in estimating SAT scores.

Source: Kohler, Ulrich, Frauke Kreuter, Data Analysis Using Stata, 2009.
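As a minimal sketch (commands assumed, not part of the original slide), the same quantities can be recovered from the results that regress stores in e():

* Re-run the model quietly; regress stores MSS, RSS and the degrees of freedom in e()
quietly xi: regress csat expense percent percent2 income high college i.region
display "F         = " (e(mss)/e(df_m)) / (e(rss)/e(df_r))
display "R-squared = " e(mss) / (e(mss) + e(rss))
display "Root MSE  = " sqrt(e(rss)/e(df_r))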
Regression: eststo/esttab
To show the models side by side you can use the commands eststo and esttab:

regress csat expense, robust
eststo model1
regress csat expense percent income high college, robust
eststo model2
xi: regress csat expense percent income high college i.region, robust
eststo model3
esttab, r2 ar2 se scalar(rmse)

. esttab, r2 ar2 se scalar(rmse)
--------------------------------------------------------------------
                      (1)             (2)             (3)
                     csat            csat            csat
--------------------------------------------------------------------
expense           -0.0223***      0.00335        -0.00202
                (0.00367)       (0.00478)       (0.00359)
percent                           -2.618***       -3.008***
                                  (0.229)         (0.236)
income                              0.106          -0.167
                                  (1.207)         (1.196)
high                                1.631           1.815
                                  (0.943)         (1.027)
college                             2.031           4.671**
                                  (2.114)         (1.600)
_Iregion_2                                          69.45***
                                                  (18.00)
_Iregion_3                                          25.40*
                                                  (12.53)
_Iregion_4                                          34.58***
                                                  (9.450)
_cons              1060.7***       851.6***        808.0***
                  (24.35)         (57.29)         (67.86)
--------------------------------------------------------------------
N                      51              51              50
R-sq                0.217           0.824           0.911
adj. R-sq           0.201           0.805           0.894
rmse                59.81           29.57           21.49
--------------------------------------------------------------------
. pwcorr csat expense percent income high college, star(0.05) sig

             |     csat  expense  percent   income     high  college
-------------+--------------------------------------------------------
        csat |   1.0000
             |
     expense |  -0.4663*  1.0000
             |   0.0006
     percent |  -0.8758*  0.6509*  1.0000
             |   0.0000   0.0000
      income |  -0.4713*  0.6784*  0.6733*  1.0000
             |   0.0005   0.0000   0.0000
        high |   0.0858   0.3133*  0.1413   0.5099*  1.0000
             |   0.5495   0.0252   0.3226   0.0001
     college |  -0.3729*  0.6400*  0.6091*  0.7234*  0.5319*  1.0000
             |   0.0070   0.0000   0.0000   0.0000   0.0001
[Scatterplots of the outcome (Y), Mean composite SAT score, against the predictor variables]
[Scatterplots of csat against percent (% HS graduates taking SAT) and against high (% adults HS diploma)]
There seems to be a curvilinear relationship between csat and percent, and a slightly linear one between csat and high. To deal with U-shaped curves we need to add a squared version of the variable, in this case percent squared:

generate percent2 = percent^2
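A minimal sketch of how such scatterplots can be drawn (the twoway commands are assumed; only the generate line appears on the original slide):

* Scatterplots of csat against percent and against high, each with a fitted line
twoway (scatter csat percent) (lfit csat percent)
twoway (scatter csat high) (lfit csat high)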
[Augmented component-plus-residual plots (acprplot) with a lowess smoother for percent and for high (% adults HS diploma)]
The option lowess (locally weighted scatterplot smoothing) draws the observed pattern in the data to help identify nonlinearities. Percent shows a quadratic relation, so it makes sense to add a squared version of it. High shows a polynomial pattern as well, but it moves around the regression line (except on the right), so we can keep it as is for now. The model is:

xi: regress csat expense percent percent2 income high college i.region, robust
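A minimal sketch (assumed commands, not shown on the slide) of the augmented component-plus-residual plots described above, drawn after running the regression without the squared term:

* acprplot is a regress postestimation plot; the lowess option adds a locally weighted smoother
xi: regress csat expense percent income high college i.region, robust
acprplot percent, lowess
acprplot high, lowess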
For more details see http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter2/statareg2.htm, and/or type help acprplot and help lowess.
Regression: models
xi: regress csat expense percent percent2 income high college i.region, robust
eststo model4
esttab, r2 ar2 se scalar(rmse)
. esttab, r2 ar2 se scalar(rmse)

------------------------------------------------------------------------------------
                      (1)             (2)             (3)             (4)
                     csat            csat            csat            csat
------------------------------------------------------------------------------------
expense           -0.0223***      0.00335        -0.00202         0.00141
                (0.00367)       (0.00478)       (0.00359)       (0.00372)
percent                           -2.618***       -3.008***       -5.945***
                                  (0.229)         (0.236)         (0.641)
income                              0.106          -0.167          -0.914
                                  (1.207)         (1.196)         (0.973)
high                                1.631           1.815           1.869
                                  (0.943)         (1.027)         (0.931)
college                             2.031           4.671**         3.418**
                                  (2.114)         (1.600)         (1.145)
_Iregion_2                                          69.45***        5.077
                                                  (18.00)         (20.75)
_Iregion_3                                          25.40*          5.209
                                                  (12.53)         (10.42)
_Iregion_4                                          34.58***       19.25*
                                                  (9.450)         (8.110)
percent2                                                           0.0460***
                                                                 (0.0102)
_cons              1060.7***       851.6***        808.0***        874.0***
                  (24.35)         (57.29)         (67.86)         (58.13)
------------------------------------------------------------------------------------
N                      51              51              50              50
R-sq                0.217           0.824           0.911           0.940
adj. R-sq           0.201           0.805           0.894           0.927
rmse                59.81           29.57           21.49           17.81
------------------------------------------------------------------------------------
How good the model is will depend on how well it predicts Y, the linearity of the model and the behavior of the residuals. There are two ways to generate the predicted values of Y (usually called Yhat) given the model: Option A, using generate after running the regression:
xi: regress csat expense percent percent2 income high college i.region, robust

generate csat_predict = _b[_cons] + _b[expense]*expense + _b[percent]*percent + ///
    _b[percent2]*percent2 + _b[income]*income + _b[high]*high + _b[college]*college + ///
    _b[_Iregion_2]*_Iregion_2 + _b[_Iregion_3]*_Iregion_3 + _b[_Iregion_4]*_Iregion_4
Option B, using predict immediately after running the regression:

xi: regress csat expense percent percent2 income high college i.region, robust
predict csat_predict
label variable csat_predict "csat predicted"

. predict csat_predict
(option xb assumed; fitted values)
(1 missing value generated)

. label variable csat_predict "csat predicted"
[Scatterplot of observed csat against predicted csat (csat_predict)]
We should expect a 45-degree pattern in the data. The y-axis is the observed data and the x-axis the predicted data (Yhat). In this case the model seems to be doing a good job in predicting csat.
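A minimal sketch (assumed command) of the observed-versus-predicted plot:

* Observed csat against the predicted values; a rough 45-degree pattern indicates a good fit
scatter csat csat_predict, ytitle("Mean composite SAT score") xtitle("csat predicted")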
When plotting residuals vs. predicted values (Yhat) we should not observe any pattern at all. In Stata we do this using rvfplot right after running the regression; it will automatically draw a scatterplot between residuals and predicted values.
rvfplot, yline(0)
[Residuals vs. fitted values plot]
The graphical inspection and the Breusch-Pagan test suggest the possible presence of heteroskedasticity in our model. The problem with this is that we may have the wrong estimates of the standard errors for the coefficients, and therefore of their t-values.

There are two ways to deal with this problem: one is using heteroskedasticity-robust standard errors, the other is using weighted least squares (see Stock and Watson, 2003, chapter 15). WLS requires knowledge of the conditional variance on which the weights are based; if this is known (rarely the case) then use WLS. In practice it is recommended to use heteroskedasticity-robust standard errors.

By default Stata assumes homoskedastic standard errors, so we need to adjust our model to account for heteroskedasticity. To do this we use the option robust in the regress command:

xi: regress csat expense percent percent2 income high college i.region, robust

Following Stock and Watson, as a rule of thumb, you should always assume heteroskedasticity in your model (see Stock and Watson, 2003, chapter 4).
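A minimal sketch (assumed commands; the Breusch-Pagan output itself is not reproduced here) of running the test in Stata:

* Fit the model and run the Breusch-Pagan test, then re-estimate with robust standard errors
xi: regress csat expense percent percent2 income high college i.region
estat hettest
xi: regress csat expense percent percent2 income high college i.region, robust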
The null hypothesis is that the model does not have omitted-variables bias. The p-value is higher than the usual threshold of 0.05 (95% confidence level), so we fail to reject the null and conclude that we do not need more variables.
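A minimal sketch (assumed command) of the omitted-variables test referred to above, Stata's Ramsey RESET test run right after the regression:

* Ramsey RESET test using powers of the fitted values
xi: regress csat expense percent percent2 income high college i.region, robust
estat ovtest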
. linktest

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  2,    47) =  370.90
       Model |  200272.359     2   100136.18           Prob > F      =  0.0000
    Residual |  12689.0209    47  269.979169           R-squared     =  0.9404
-------------+------------------------------           Adj R-squared =  0.9379
       Total |   212961.38    49  4346.15061           Root MSE      =  16.431

------------------------------------------------------------------------------
        csat |      Coef.        t    P>|t|
-------------+----------------------------------------------------------------
        _hat |   1.144949     0.76    0.450
      _hatsq |  -.0000761    -0.10    0.923
       _cons |  -68.69417    -0.10    0.924
------------------------------------------------------------------------------

The linktest checks for specification error: if the model is specified correctly, the squared prediction _hatsq should have no explanatory power. Here _hatsq is not significant (p = 0.923), so there is no evidence of a specification problem.
Regression: multicollinearity
An important assumption of the multiple regression model is that the independent variables are not perfectly multicollinear: one regressor should not be a linear function of another. When multicollinearity is present, standard errors may be inflated, and Stata will drop one of the variables to avoid a division by zero in the OLS procedure (see Stock and Watson, 2003, chapter 5). The Stata command to check for multicollinearity is vif (variance inflation factor). Right after running the regression type:
. vif

    Variable |      VIF       1/VIF
-------------+----------------------
    percent2 |    70.80    0.014124
     percent |    49.52    0.020193
  _Iregion_2 |     8.47    0.118063
      income |     4.97    0.201326
  _Iregion_3 |     4.89    0.204445
        high |     4.71    0.212134
     college |     4.52    0.221348
     expense |     3.33    0.300111
  _Iregion_4 |     2.14    0.467506
-------------+----------------------
    Mean VIF |    17.04
A VIF > 10 or a 1/VIF < 0.10 indicates trouble. We know that percent and percent2 are related since one is the square of the other. They are OK here since percent has a quadratic relationship with Y, but otherwise this would be an example of multicollinearity. The rest of the variables look fine.
For more details see http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter2/statareg2.htm, and/or type help vif.
Regression: outliers
To check for outliers we use the avplot/avplots commands (added-variable plots). Outliers are data points with extreme values that could have a negative effect on our estimates. After running the regression type:
avplot percent
avplot expense

[Added-variable plots: e(csat | X) against e(percent | X) and e(csat | X) against e(expense | X)]
These plots show the relationship between the outcome and each predictor after partialling out all the other predictors; notice the coefficient reported on each plot. All data points seem to be in range; no outliers are observed.
For more details and tests on influential and leverage observations please check http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter2/statareg2.htm. Also type help diagplots in the Stata command window.
Regression: outliers
avplots
[Added-variable plots (avplots) of e(csat | X) against e(expense | X), e(percent | X), e(percent2 | X), e(income | X), e(high | X), e(college | X), e(_Iregion_2 | X), e(_Iregion_3 | X) and e(_Iregion_4 | X)]
DfBeta
Measures the influence of each observation on each individual coefficient. High influence if |DfBeta| > 2/SQRT(N), where N is the sample size. In Stata, after running the regression, type:

dfbeta

DfFit
Measures how much an observation influences the regression model as a whole: how much the predicted values change as a result of including or excluding a particular observation. High influence if |DfFit| > 2*SQRT(k/N), where k is the number of parameters (including the intercept) and N is the sample size. In Stata, after running the regression, type:

predict dfits if e(sample), dfits

To generate a flag for the cutoff, type:

gen cutoffdfit = abs(dfits) > 2*sqrt((e(df_m)+1)/e(N)) & e(sample)

Covariance ratio
Measures the impact of an observation on the precision of the coefficient estimates. High impact if |COVRATIO - 1| >= 3*k/N, where k is the number of parameters (including the intercept) and N is the sample size. In Stata, after running the regression, type:

predict covratio if e(sample), covratio
Cook's distance
A summary measure of leverage and high residuals. A D > 1 indicates a big outlier problem.

Leverage
A rule of thumb: leverage goes from 0 to 1; a value closer to 1 or over 0.5 may indicate problems. Higher values indicate a larger distance from the average values of the predictors (k is the number of parameters, including the intercept, and N is the sample size).

Mahalanobis distance
The M-distance follows a Chi-square distribution with k-1 degrees of freedom and alpha = 0.001 (where k is the number of independent variables); any value over this Chi-square value may indicate problems. Not available as a regression diagnostic in Stata.
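The Stata commands for these rows were not recovered; as a minimal sketch (assumed commands), Cook's distance and leverage are available as predict options after regress:

predict cooksd_val if e(sample), cooksd
predict lev if e(sample), leverage
* Flag potentially problematic observations using the rule-of-thumb cutoffs above
list csat cooksd_val if cooksd_val > 1 & cooksd_val < .
list csat lev if lev > 0.5 & lev < .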
Sources for the summary tables of influence indicators and distance measures:
- Statnotes: http://faculty.chass.ncsu.edu/garson/PA765/regress.htm#outlier2
- An Introduction to Modern Econometrics Using Stata / Christopher F. Baum, Stata Press, 2006
- Statistics with Stata (updated for version 9) / Lawrence Hamilton, Thomson Books/Cole, 2006
- UCLA: http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter2/statareg2.htm
A kernel density plot produces a kind of histogram for the residuals; the option normal overlays a normal distribution for comparison. Here the residuals seem to follow a normal distribution. Below is an example using histogram.
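A minimal sketch (assumed commands; the residuals are saved here as e, matching the later slides):

* Save the residuals, then draw a kernel density plot and a histogram with a normal overlay
predict e if e(sample), resid
kdensity e, normal
histogram e, kdensity normal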
[Kernel density plot of the residuals with a normal distribution overlaid, and a histogram of the residuals]
If residuals do not follow a normal pattern then you should check for omitted variables, model specification, linearity, functional forms. In sum, you may need to reassess your model/theory. In practice normality does not represent much of a problem when dealing with really big samples.
Quantile-normal plots (qnorm) check for non-normality in the extremes of the data (tails). They plot the quantiles of the residuals against the quantiles of a normal distribution. Here the tails are a bit off the normal.

qnorm e
[Quantile-normal plot of the residuals against the inverse normal]
A non-graphical test is the Shapiro-Wilk test for normality; the null hypothesis is that the distribution of the residuals is normal. Type:

swilk e
. swilk e

                   Shapiro-Wilk W test for normal data

    Variable |    Obs        W          V          z      Prob>z
-------------+----------------------------------------------------
           e |     50    0.95566      2.085      1.567    0.05855
The null hypothesis is that the distribution of the residuals is normal; here the p-value is 0.06, so we fail to reject the null at the 95% level. We conclude that the residuals are normally distributed, with the caveat that we would reject normality at the 90% level.
To test the null hypothesis that both coefficients do not have any effect on csat (high = 0 and college = 0), type:

test high college
. test high college

 ( 1)  high = 0
 ( 2)  college = 0

       F(  2,    40) =   17.12
            Prob > F =    0.0000
The p-value is 0.0000, so we reject the null and conclude that both variables have indeed a significant effect on SAT.
Some other possible tests are (see Hamilton, 2006, p.176):

test income = 1
test high = college
test income = (high + college)/100
Note: Not to be confused with ttest. Type help test and help ttest for more details
    Variable |       Obs        Mean    Std. Dev.        Min         Max
-------------+-----------------------------------------------------------
   percent_b |        51   -5.945267           0   -5.945267   -5.945267
  percent_se |        51    .6405529           0    .6405529    .6405529
  constant_b |        51    873.9537           0    873.9537    873.9537
 constant_se |        51    58.12895           0    58.12895    58.12895
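The output above summarizes variables holding the coefficient and standard error of percent and of the constant from the last model; a minimal sketch (assumed commands, not recovered from the original slide) of how they can be created:

* _b[] and _se[] hold the coefficients and standard errors of the most recent estimation
gen percent_b   = _b[percent]
gen percent_se  = _se[percent]
gen constant_b  = _b[_cons]
gen constant_se = _se[_cons]
summarize percent_b percent_se constant_b constant_se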
Regression: general guidelines

The following are general guidelines for building a regression model*:

1. Make sure all relevant predictors are included. These are based on your research question, theory and knowledge of the topic.
2. Combine those predictors that tend to measure the same thing (i.e. as an index).
3. Consider the possibility of adding interactions (mainly for those variables with large effects).
4. Strategy to keep or drop variables:
   1. Predictor not significant and has the expected sign -> Keep it
   2. Predictor not significant and does not have the expected sign -> Drop it
   3. Predictor is significant and has the expected sign -> Keep it
   4. Predictor is significant but does not have the expected sign -> Review: you may need more variables, it may be interacting with another variable in the model, or there may be an error in the data.
*Gelman, Andrew, Jennifer Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models, 2007, p. 69
[Screenshots: regression output in Excel and in Word]
*See the following document for some additional info/tips: http://www.fiu.edu/~tardanic/brianne.pdf
[Screenshots: outreg2 output opened in Excel and in Word]
NOTE: If you run a logit/probit regression with odds ratios, you need to add the option eform to export the odds ratios.
Type help outreg2 for more details. If you do not see outreg2, you may have to install it by typing ssc install outreg2. If this does not work, type findit outreg2, select it from the list and click install.

Note: If you get an error message when you use the option append or replace, it usually means that you need to close the Excel/Word window.
[Screenshot callouts for the outreg2 command: output format (for Excel or for Word), the name of the file for the output, click on seeout to browse the results, and the number of decimals for the coefficients]
Include some additional statistic, in this case the adjusted R-squared. You can select any statistic on the return lists (e-class, r-class or s-class). After running the regression, type ereturn list for a list of available statistics.
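A minimal sketch (assumed syntax; the file name csat_models is hypothetical) of exporting the estimates with outreg2:

* Word and Excel versions of the table, 2 decimals, adding the adjusted R-squared from e()
outreg2 using csat_models, word replace dec(2) addstat(Adj. R-squared, e(r2_a))
outreg2 using csat_models, excel append dec(2) addstat(Adj. R-squared, e(r2_a))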
Binary variable hi_el: equal to 0 if the percent of English learners (el_pct) is lower than 10%, and 1 if it is equal to 10% or higher.
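A minimal sketch (assumed commands; the hi_str cutoff of 20 and the variable names follow Stock and Watson's caschool data) of creating the dummies and the interaction used below:

* hi_el: 10% or more English learners; hi_str: student-teacher ratio of 20 or more (assumed cutoff)
gen hi_el  = (el_pct >= 10) if el_pct < .
gen hi_str = (str >= 20) if str < .
gen str_el = hi_str*hi_el
regress testscr hi_el hi_str str_el, robust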
The equation is: testscr_hat = 664.1 - 18.1*hi_el - 1.9*hi_str - 3.5*str_el

The effect of hi_str on the test scores is -1.9, but given the interaction term (and assuming all coefficients are significant), the net effect is -1.9 - 3.5*hi_el. If hi_el is 0 then the effect is -1.9 (which is the hi_str coefficient), but if hi_el is 1 then the effect is -1.9 - 3.5 = -5.4. In this case, the effect of the student-teacher ratio is more negative in districts where the percent of English learners is higher. See the next slide for more detailed computations.

*The data used in this section is the California Test Score data set (caschool.dta) from chapter 6 of the book Introduction to Econometrics by Stock and Watson, 2003. Data can be downloaded from http://wps.aw.com/aw_stock_ie_2/50/13016/3332253.cw/index.html. For a detailed discussion please refer to the respective section in the book.
We then obtain the average of the estimated test scores for each of the four scenarios (notice that the values are the same for all cases within each scenario).
Here we estimate the net effect of a low/high student-teacher ratio holding constant the percent of English learners. When hi_el is 0, going from a low to a high student-teacher ratio takes the score from 664.2 to 662.2, a difference of 1.9 points. From a policy perspective you could argue that moving from a high str to a low str improves test scores by 1.9 points in low-English-learner districts.

When hi_el is 1, going from a low to a high student-teacher ratio takes the score from 645.9 down to 640.5, a decline of 5.4 points (1.9 + 3.5). From a policy perspective you could say that reducing the str in districts with a high percentage of English learners could improve test scores by 5.4 points.
The equation is: testscr_hat = 682.2 - 0.97*str + 5.6*hi_el - 1.28*str_el2

The effect of str on testscr is mediated by hi_el.

If hi_el is 0 (low), then the effect of str is given by 682.2 - 0.97*str.
If hi_el is 1 (high), then the effect of str is given by 682.2 - 0.97*str + 5.6 - 1.28*str = 687.8 - 2.25*str.
Notice how hi_el changes both the intercept and the slope of str. Reducing str by one unit in low-EL districts will increase test scores by 0.97 points, but it will have a higher impact (2.25 points) in high-EL districts. The difference between these two effects is 1.28, which is the coefficient of the interaction (Stock and Watson, 2003, p.223).
The equation is: testscr_hat = 686.3 - 1.12*str - 0.67*el_pct + 0.0012*str_el3

The effect of the interaction term is very small. Following Stock and Watson (2003, p.229), algebraically the slope of str is -1.12 + 0.0012*el_pct (remember that str_el3 is equal to str*el_pct). So:
If el_pct = 10, the slope of str is -1.108 If el_pct = 20, the slope of str is -1.096. A difference in effect of 0.012 points.
In the continuous case there is an effect, but it is very small (and not significant). See Stock and Watson, 2003, for further details.
Creating dummies
You can create dummy variables by either using recode or using a combination of the tab/gen commands:

tab major, generate(major_dum)
Check the variables window; at the end you will see three new variables. Using tab1 (for multiple frequencies) you can check that they take only 0 and 1 values.
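A minimal sketch of the recode route mentioned above (the variable major and its category codes are hypothetical):

* Create a 0/1 dummy for the first category of major using recode
recode major (1 = 1) (2/3 = 0), generate(major1_dum)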
Commonly used commands (source: http://www.ats.ucla.edu/stat/stata/notes2/commands.htm)

Exploring data: describe, codebook, inspect, list, browse, count, assert, summarize, table (tab), tabulate

Data manipulation: generate, replace, egen, recode, rename, drop, keep, sort, encode, decode, order, by, reshape

Formatting: format, label

Keeping track of your work: log, notes

Convenience: display
Books

- Introduction to Econometrics / James H. Stock, Mark W. Watson. 2nd ed., Boston: Pearson Addison Wesley, 2007.
- Data Analysis Using Regression and Multilevel/Hierarchical Models / Andrew Gelman, Jennifer Hill. Cambridge; New York: Cambridge University Press, 2007.
- Econometric Analysis / William H. Greene. 6th ed., Upper Saddle River, N.J.: Prentice Hall, 2008.
- Designing Social Inquiry: Scientific Inference in Qualitative Research / Gary King, Robert O. Keohane, Sidney Verba, Princeton University Press, 1994.
- Unifying Political Methodology: The Likelihood Theory of Statistical Inference / Gary King, Cambridge University Press, 1989.
- Statistical Analysis: An Interdisciplinary Introduction to Univariate & Multivariate Methods / Sam Kachigan, New York: Radius Press, c1986.
- Statistics with Stata (updated for version 9) / Lawrence Hamilton, Thomson Books/Cole, 2006.