
Chapter 11

Specification Error Analysis

The specification of a linear regression model consists of a formulation of the regression relationship and of statements or assumptions concerning the explanatory variables and disturbances. If any of these is violated, e.g., through an incorrect functional form or an incorrect introduction of the disturbance term into the model, then a specification error occurs. In a narrower sense, specification error refers to the choice of explanatory variables.

The complete regression analysis depends on the explanatory variables present in the model. It is assumed in regression analysis that only the correct and important explanatory variables appear in the model. In practice, after ensuring the correct functional form of the model, the analyst usually has a pool of explanatory variables which possibly influence the process or experiment. Generally, not all such candidate variables are used in the regression modelling; rather, a subset of explanatory variables is chosen from this pool.

While choosing a subset of explanatory variables, there are two possible options:
1. In order to make the model as realistic as possible, the analyst may include as many explanatory variables as possible.
2. In order to make the model as simple as possible, one may include only a few explanatory variables.

In such selections, there can be two types of incorrect model specification:

1. Omission/exclusion of relevant variables.
2. Inclusion of irrelevant variables.

Now we discuss the statistical consequences arising from both situations.

1. Exclusion of relevant variables:

In order to keep the model simple, the analyst may delete some of the explanatory variables which may be important from the point of view of theoretical considerations. There can be several reasons behind such decisions; e.g., it may be hard to quantify variables like taste, intelligence, etc., and sometimes it may be difficult to obtain correct observations on variables like income.

Econometrics | Chapter 11 | Specification Error Analysis | Shalabh, IIT Kanpur


Let there be k candidate explanatory variables, out of which suppose r variables are included and (k − r) variables are to be deleted from the model. So partition X and β as

X = [X_1  X_2],   β = (β_1', β_2')',

where X is n × k, X_1 is n × r, X_2 is n × (k − r), β_1 is r × 1 and β_2 is (k − r) × 1.

The model y = Xβ + ε, E(ε) = 0, V(ε) = σ²I can be expressed as

y = X_1 β_1 + X_2 β_2 + ε,

which is called the full model or true model.

After dropping the (k − r) explanatory variables from the model, the new model is

y = X_1 β_1 + δ,

which is called the misspecified model or false model.

Applying OLS to the false model, the OLSE of β_1 is

b_{1F} = (X_1'X_1)^{-1} X_1' y.


The estimation error is obtained as follows:

b_{1F} = (X_1'X_1)^{-1} X_1' (X_1 β_1 + X_2 β_2 + ε)
       = β_1 + (X_1'X_1)^{-1} X_1'X_2 β_2 + (X_1'X_1)^{-1} X_1' ε
b_{1F} − β_1 = θ + (X_1'X_1)^{-1} X_1' ε,

where θ = (X_1'X_1)^{-1} X_1'X_2 β_2.


Thus

E(b_{1F} − β_1) = θ + (X_1'X_1)^{-1} X_1' E(ε) = θ,

which is a linear function of β_2, i.e., of the coefficients of the excluded variables. So b_{1F} is biased, in general. The bias vanishes if X_1'X_2 = 0, i.e., if X_1 and X_2 are orthogonal.
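This bias can be checked numerically. The following sketch (a simulation with illustrative dimensions, coefficients, and a deliberately correlated omitted regressor, all chosen for this example) averages the misspecified OLSE over repeated draws of the disturbance and compares the empirical bias with θ = (X_1'X_1)^{-1} X_1'X_2 β_2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 1.0
beta1, beta2 = np.array([2.0, -1.0]), np.array([1.5])

# X1 (included) and X2 (omitted) are correlated by construction.
X1 = rng.normal(size=(n, 2))
X2 = X1 @ np.array([[0.8], [0.3]]) + rng.normal(size=(n, 1))

# theta = (X1'X1)^{-1} X1'X2 beta2, the theoretical bias of b1F.
theta = np.linalg.solve(X1.T @ X1, X1.T @ X2 @ beta2)

# Average the misspecified OLSE b1F over many draws of the disturbance.
reps = 2000
b1F = np.zeros(2)
for _ in range(reps):
    y = X1 @ beta1 + X2 @ beta2 + sigma * rng.normal(size=n)
    b1F += np.linalg.solve(X1.T @ X1, X1.T @ y)
b1F /= reps

print("empirical bias:", b1F - beta1)   # close to theta
print("theoretical theta:", theta)
```

With the correlation switched off (generate X2 independently of X1), the empirical bias collapses toward zero, matching the orthogonality condition above.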

The mean squared error matrix of b_{1F} is

MSE(b_{1F}) = E[(b_{1F} − β_1)(b_{1F} − β_1)']
            = E[θθ' + θ ε'X_1 (X_1'X_1)^{-1} + (X_1'X_1)^{-1} X_1' ε θ' + (X_1'X_1)^{-1} X_1' εε' X_1 (X_1'X_1)^{-1}]
            = θθ' + 0 + 0 + σ² (X_1'X_1)^{-1} X_1' I X_1 (X_1'X_1)^{-1}
            = θθ' + σ² (X_1'X_1)^{-1}.

So the efficiency generally declines. Note that the second term, σ²(X_1'X_1)^{-1}, is the conventional form of the MSE.

The residual sum of squares is

s² = SS_res/(n − r) = e'e/(n − r),

where

e = y − X_1 b_{1F} = H_1 y,   H_1 = I − X_1 (X_1'X_1)^{-1} X_1'.

Thus, since H_1 X_1 = 0,

H_1 y = H_1 (X_1 β_1 + X_2 β_2 + ε) = H_1 (X_2 β_2 + ε).

y'H_1 y = (X_2 β_2 + ε)' H_1 (X_2 β_2 + ε)
        = β_2' X_2' H_1 X_2 β_2 + β_2' X_2' H_1 ε + ε' H_1 X_2 β_2 + ε' H_1 ε.

Hence

E(s²) = (1/(n − r)) [E(β_2' X_2' H_1 X_2 β_2) + 0 + 0 + E(ε' H_1 ε)]
      = (1/(n − r)) [β_2' X_2' H_1 X_2 β_2 + (n − r) σ²]
      = σ² + (1/(n − r)) β_2' X_2' H_1 X_2 β_2.
Thus s² is a biased estimator of σ², and it provides an overestimate of σ². Note that even if X_1'X_2 = 0, s² still gives an overestimate of σ². So the statistical inferences based on it will be faulty; the t-test and confidence regions will be invalid in this case.
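The overestimation of σ² can be illustrated by simulation. In this sketch (illustrative sizes and coefficients; X_1 and X_2 are generated independently so that X_1'X_2 ≈ 0), the average of s² over many samples is compared with the theoretical value σ² + β_2'X_2'H_1X_2β_2/(n − r) derived above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 100, 1.0
beta1, beta2 = np.array([1.0]), np.array([2.0])

X1 = rng.normal(size=(n, 1))
X2 = rng.normal(size=(n, 1))           # independent of X1, so X1'X2 is near 0

# H1 = I - X1 (X1'X1)^{-1} X1', the residual-maker of the misspecified model.
H1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

# Theoretical E(s^2) from the derivation above (r = 1 included variable).
expected_s2 = sigma2 + (beta2 @ X2.T @ H1 @ X2 @ beta2) / (n - 1)

reps = 3000
s2 = 0.0
for _ in range(reps):
    y = X1 @ beta1 + X2 @ beta2 + np.sqrt(sigma2) * rng.normal(size=n)
    e = H1 @ y                          # residuals of the misspecified fit
    s2 += (e @ e) / (n - 1)
s2 /= reps

print(s2, expected_s2)                  # both clearly exceed sigma2 = 1
```

Even though X_1 and X_2 are orthogonal in expectation here, s² stays well above σ², in line with the remark above.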

If the response is to be predicted at x' = (x_1', x_2'), then using the full model, the predicted value is

ŷ = x'b = x'(X'X)^{-1} X' y

with

E(ŷ) = x'β,
Var(ŷ) = σ² [1 + x'(X'X)^{-1} x].

When the subset model is used, the predictor is

ŷ_1 = x_1' b_{1F}

and then

E(ŷ_1) = x_1' (X_1'X_1)^{-1} X_1' E(y)
       = x_1' (X_1'X_1)^{-1} X_1' E(X_1 β_1 + X_2 β_2 + ε)
       = x_1' (X_1'X_1)^{-1} X_1' (X_1 β_1 + X_2 β_2)
       = x_1' β_1 + x_1' (X_1'X_1)^{-1} X_1' X_2 β_2
       = x_1' β_1 + x_1' θ.

Thus ŷ_1 is a biased predictor of y. It is unbiased when X_1'X_2 = 0. The MSE of the predictor is

MSE(ŷ_1) = σ² [1 + x_1' (X_1'X_1)^{-1} x_1] + (x_1'θ − x_2'β_2)².

Also,

Var(ŷ) ≥ MSE(ŷ_1)

provided V(β̂_2) − β_2 β_2' is positive semidefinite.
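The bias x_1'θ of the subset-model predictor can also be checked by simulation. The sketch below (illustrative numbers; the prediction point x_1 is arbitrary) averages ŷ_1 over repeated samples and compares it with x_1'β_1 + x_1'θ:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
beta1, beta2 = np.array([1.0]), np.array([2.0])
X1 = rng.normal(size=(n, 1))
X2 = 0.7 * X1 + rng.normal(size=(n, 1))       # omitted, correlated with X1

# theta = (X1'X1)^{-1} X1'X2 beta2, as in the derivation above.
theta = np.linalg.solve(X1.T @ X1, X1.T @ X2 @ beta2)
x1 = np.array([1.0])                          # prediction point (illustrative)

reps = 3000
yhat1 = 0.0
for _ in range(reps):
    y = X1 @ beta1 + X2 @ beta2 + rng.normal(size=n)
    b1F = np.linalg.solve(X1.T @ X1, X1.T @ y)
    yhat1 += x1 @ b1F                         # subset-model prediction
yhat1 /= reps

# E(yhat1) = x1'beta1 + x1'theta, not x1'beta1: the predictor is biased.
print(yhat1, x1 @ beta1 + x1 @ theta)
```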

2. Inclusion of irrelevant variables

Sometimes, out of enthusiasm and in order to make the model more realistic, the analyst may include some explanatory variables that are not relevant to the model. Such variables may contribute very little to the explanatory power of the model. This reduces the degrees of freedom (n − k), and consequently the validity of the inferences drawn may be questionable. For example, the value of the coefficient of determination will increase, indicating that the model is getting better, which may not really be true.

Let the true model be

y = Xβ + ε,   E(ε) = 0,   V(ε) = σ² I,

which comprises k explanatory variables. Suppose now that r additional explanatory variables are added to the model, and the resulting model becomes

y = Xβ + Zγ + δ,

where Z is an n × r matrix of n observations on each of the r additional explanatory variables, γ is the r × 1 vector of regression coefficients associated with Z, and δ is the disturbance term. This model is termed the false model.

Applying OLS to the false model, we get

[bF]   [X'X  X'Z]^{-1} [X'y]
[cF] = [Z'X  Z'Z]      [Z'y]

or equivalently

[X'X  X'Z] [bF]   [X'y]
[Z'X  Z'Z] [cF] = [Z'y]

⇒ X'X bF + X'Z cF = X'y    (1)
  Z'X bF + Z'Z cF = Z'y    (2)

where bF and cF are the OLSEs of β and γ, respectively.

Premultiplying equation (2) by X'Z(Z'Z)^{-1}, we get

X'Z(Z'Z)^{-1} Z'X bF + X'Z(Z'Z)^{-1} Z'Z cF = X'Z(Z'Z)^{-1} Z'y.    (3)

Subtracting equation (3) from equation (1), we get

[X'X − X'Z(Z'Z)^{-1} Z'X] bF = X'y − X'Z(Z'Z)^{-1} Z'y
X'[I − Z(Z'Z)^{-1} Z'] X bF = X'[I − Z(Z'Z)^{-1} Z'] y
⇒ bF = (X'H_Z X)^{-1} X'H_Z y,

where H_Z = I − Z(Z'Z)^{-1} Z'.

The estimation error of bF is

bF − β = (X'H_Z X)^{-1} X'H_Z y − β
       = (X'H_Z X)^{-1} X'H_Z (Xβ + ε) − β
       = (X'H_Z X)^{-1} X'H_Z ε.

Thus

E(bF − β) = (X'H_Z X)^{-1} X'H_Z E(ε) = 0,

so bF is unbiased even when some irrelevant variables are added to the model.
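A quick numerical check of this unbiasedness (with illustrative dimensions and a true γ = 0): the sketch below computes bF through the expression (X'H_Z X)^{-1} X'H_Z y, confirms it matches the corresponding subvector of joint OLS on [X Z], and shows that its average over repeated samples is close to β:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, r = 150, 2, 1
beta = np.array([1.0, -0.5])

X = rng.normal(size=(n, k))
Z = rng.normal(size=(n, r))            # irrelevant regressors (true gamma = 0)
# H_Z = I - Z (Z'Z)^{-1} Z'
HZ = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

reps = 2000
bF_mean = np.zeros(k)
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    # bF via the partialled-out formula derived above
    bF_mean += np.linalg.solve(X.T @ HZ @ X, X.T @ HZ @ y)
bF_mean /= reps

# One draw: the formula agrees with joint OLS on [X Z].
y = X @ beta + rng.normal(size=n)
W = np.hstack([X, Z])
joint = np.linalg.lstsq(W, y, rcond=None)[0][:k]
bF = np.linalg.solve(X.T @ HZ @ X, X.T @ HZ @ y)

print("mean of bF:", bF_mean)          # close to beta: unbiased
print("formula vs joint OLS:", bF, joint)
```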

The covariance matrix is

V(bF) = E[(bF − β)(bF − β)']
      = E[(X'H_Z X)^{-1} X'H_Z εε' H_Z X (X'H_Z X)^{-1}]
      = σ² (X'H_Z X)^{-1} X'H_Z I H_Z X (X'H_Z X)^{-1}
      = σ² (X'H_Z X)^{-1}.

If OLS is applied to the true model, then

bT = (X'X)^{-1} X' y

with

E(bT) = β,
V(bT) = σ² (X'X)^{-1}.

To compare bF and bT, we use the following result.

Result: If A and B are two positive definite matrices, then A − B is at least positive semidefinite if B^{-1} − A^{-1} is at least positive semidefinite.

Let A = (X'H_Z X)^{-1} and B = (X'X)^{-1}. Then

B^{-1} − A^{-1} = X'X − X'H_Z X
               = X'X − X'X + X'Z(Z'Z)^{-1} Z'X
               = X'Z(Z'Z)^{-1} Z'X,

which is at least a positive semidefinite matrix. This implies that the efficiency declines unless X'Z = 0. If X'Z = 0, i.e., X and Z are orthogonal, then both estimators are equally efficient.
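The decline in efficiency can be seen directly by comparing the two covariance matrices. In this sketch (illustrative data; Z is deliberately correlated with X), the eigenvalues of V(bF) − V(bT) are computed and turn out nonnegative, i.e., the difference is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma2 = 100, 1.0
X = rng.normal(size=(n, 2))
Z = 0.6 * X[:, [0]] + rng.normal(size=(n, 1))   # Z correlated with X

# H_Z = I - Z (Z'Z)^{-1} Z'
HZ = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)
V_bT = sigma2 * np.linalg.inv(X.T @ X)          # true-model covariance
V_bF = sigma2 * np.linalg.inv(X.T @ HZ @ X)     # false-model covariance

eigs = np.linalg.eigvalsh(V_bF - V_bT)
print(eigs)   # all >= 0 (up to rounding): variance inflates under inclusion
```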
The residual sum of squares under the false model is

SS_res = eF' eF,

where

eF = y − X bF − Z cF,
bF = (X'H_Z X)^{-1} X'H_Z y,
cF = (Z'Z)^{-1} Z'y − (Z'Z)^{-1} Z'X bF
   = (Z'Z)^{-1} Z'(y − X bF)
   = (Z'Z)^{-1} Z'[I − X(X'H_Z X)^{-1} X'H_Z] y
   = (Z'Z)^{-1} Z' H_{ZX} y,

with

H_Z = I − Z(Z'Z)^{-1} Z',
H_{ZX} = I − X(X'H_Z X)^{-1} X'H_Z,
H_{ZX}² = H_{ZX} (idempotent).

So

eF = y − X(X'H_Z X)^{-1} X'H_Z y − Z(Z'Z)^{-1} Z' H_{ZX} y
   = [I − X(X'H_Z X)^{-1} X'H_Z − Z(Z'Z)^{-1} Z' H_{ZX}] y
   = [H_{ZX} − (I − H_Z) H_{ZX}] y
   = H_Z H_{ZX} y
   = H*_{ZX} y,   where H*_{ZX} = H_Z H_{ZX}.

Note that H*_{ZX} is symmetric and idempotent. Thus

SS_res = eF' eF
       = y' H*_{ZX}' H*_{ZX} y
       = y' H*_{ZX} y,

E(SS_res) = σ² tr(H*_{ZX}) = σ² (n − k − r),

E[SS_res/(n − k − r)] = σ².

So SS_res/(n − k − r) is an unbiased estimator of σ².
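This unbiasedness is easy to confirm by simulation. The sketch below (illustrative n, k, r, and σ²) fits the false model containing r irrelevant columns Z and averages SS_res/(n − k − r) over repeated samples:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, r, sigma2 = 80, 3, 2, 2.0
beta = np.array([1.0, 0.5, -1.0])
X = rng.normal(size=(n, k))
Z = rng.normal(size=(n, r))
W = np.hstack([X, Z])                  # false model includes irrelevant Z

reps = 4000
est = 0.0
for _ in range(reps):
    y = X @ beta + np.sqrt(sigma2) * rng.normal(size=n)
    resid = y - W @ np.linalg.lstsq(W, y, rcond=None)[0]
    est += (resid @ resid) / (n - k - r)   # SS_res / (n - k - r)
est /= reps
print(est)   # close to sigma2 = 2
```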

A comparison of exclusion and inclusion of variables is as follows:

                                        Exclusion type                  Inclusion type
Estimation of coefficients              Biased                          Unbiased
Efficiency                              Generally declines              Declines
Estimation of disturbance term          Over-estimate                   Unbiased
Conventional tests of hypothesis        Invalid and faulty inferences   Valid though erroneous
and confidence regions
