BLUE Properties of OLS Estimators and the Gauss-Markov Theorem


2.9. Properties of Least Squares Estimators

The least squares estimates are called BLUE (best, linear, unbiased estimates) provided that the random term $u_i$ satisfies some general assumptions, namely that $u_i$ has zero mean and constant variance. This proposition, together with the set of conditions under which it is true, is known as the Gauss-Markov Least-Squares Theorem.

Let us suppose that

$$Y_i = \alpha + \beta X_i + u_i \qquad (i = 1, 2, \ldots, n).$$

Theorem. The OLS estimators possess three properties: they are linear, unbiased and have the smallest variance (compared to other linear unbiased estimators). Thus the OLS estimators are BLUE.
1. The property of linearity

The least-squares estimates $\hat\alpha$ and $\hat\beta$ are linear functions of the observed sample values $Y_i$. Writing $x_i = X_i - \bar X$,

$$\hat\beta = \frac{\sum_{i=1}^{n}(X_i - \bar X)(Y_i - \bar Y)}{\sum_{i=1}^{n}(X_i - \bar X)^2} = \frac{\sum_{i=1}^{n} x_i Y_i - \bar Y \sum_{i=1}^{n} x_i}{\sum_{i=1}^{n} x_i^2}.$$

Since $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n}(X_i - \bar X) = 0$, we have

$$\hat\beta = \sum_{i=1}^{n} k_i Y_i, \quad \text{where } k_i = \frac{x_i}{\sum_{j=1}^{n} x_j^2}.$$

This shows that $\hat\beta$ is a linear function of the $Y_i$.

Similarly,

$$\hat\alpha = \bar Y - \hat\beta \bar X = \frac{1}{n}\sum_{i=1}^{n} Y_i - \bar X \sum_{i=1}^{n} k_i Y_i = \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right) Y_i.$$

This shows that $\hat\alpha$ is a linear function of the $Y_i$. Thus both $\hat\alpha$ and $\hat\beta$ are expressed as linear functions of the $Y_i$'s.
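As a quick numerical check of this property (an illustrative Python sketch, not part of the text: the sample, the seed, and names such as `beta_linear` are our own choices), we can build the weights $k_i$ explicitly and confirm that $\sum k_i Y_i$ and $\sum(1/n - \bar X k_i)Y_i$ reproduce the usual formulas for $\hat\beta$ and $\hat\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = rng.uniform(0, 10, n)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, n)   # any sample works; the property is algebraic

x = X - X.mean()                  # deviations x_i = X_i - X-bar
k = x / np.sum(x**2)              # OLS weights k_i = x_i / sum_j x_j^2

beta_ratio = np.sum(x * (Y - Y.mean())) / np.sum(x**2)   # textbook ratio formula
beta_linear = np.sum(k * Y)                              # sum_i k_i Y_i
alpha_linear = np.sum((1.0 / n - X.mean() * k) * Y)      # sum_i (1/n - X-bar k_i) Y_i

print(beta_ratio, beta_linear)                           # equal up to rounding error
print(Y.mean() - beta_ratio * X.mean(), alpha_linear)    # likewise for alpha-hat
```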
2. The property of unbiasedness

The means of $\hat\alpha$ and $\hat\beta$ can be obtained as follows. From property 1,

$$\hat\beta = \sum_{i=1}^{n} k_i Y_i, \quad \text{where } k_i = \frac{x_i}{\sum_{j=1}^{n} x_j^2} \text{ and } x_i = X_i - \bar X \text{ for } i = 1, 2, \ldots, n.$$

Since $\sum x_i = 0$, the weights satisfy $\sum_{i=1}^{n} k_i = 0$; and since $\sum k_i X_i = \sum k_i(x_i + \bar X) = \frac{\sum x_i^2}{\sum x_i^2} + \bar X \sum k_i$, they also satisfy $\sum_{i=1}^{n} k_i X_i = 1$.

We now put $Y_i = \alpha + \beta X_i + u_i$ in the expression of $\hat\beta$ and get

$$\hat\beta = \sum_{i=1}^{n} k_i(\alpha + \beta X_i + u_i) = \alpha \sum_{i=1}^{n} k_i + \beta \sum_{i=1}^{n} k_i X_i + \sum_{i=1}^{n} k_i u_i = \beta + \sum_{i=1}^{n} k_i u_i.$$

Now the mean of $\hat\beta$ is

$$E(\hat\beta) = \beta + \sum_{i=1}^{n} k_i E(u_i) = \beta, \quad \text{since } E(u_i) = 0.$$

Thus we have $E(\hat\beta) = \beta$, i.e., the mean of $\hat\beta$ is $\beta$.

Similarly, substituting $Y_i = \alpha + \beta X_i + u_i$ in $\hat\alpha = \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right) Y_i$, we get

$$\hat\alpha = \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right)(\alpha + \beta X_i + u_i) = \alpha\left(1 - \bar X \sum_{i=1}^{n} k_i\right) + \beta\left(\bar X - \bar X \sum_{i=1}^{n} k_i X_i\right) + \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right) u_i.$$

Using $\sum k_i = 0$ and $\sum k_i X_i = 1$, we have

$$\hat\alpha = \alpha + \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right) u_i,$$

so that

$$E(\hat\alpha) = \alpha + \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right) E(u_i) = \alpha, \quad \text{as } E(u_i) = 0.$$

This shows that the mean of $\hat\alpha$ is $\alpha$. Thus it is proved that $\hat\alpha$ and $\hat\beta$ are unbiased estimators of $\alpha$ and $\beta$.
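Unbiasedness is a statement about averages across repeated samples, which a small Monte Carlo experiment can illustrate (a sketch with arbitrary true values $\alpha = 2$, $\beta = 0.5$; nothing below comes from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, sigma = 2.0, 0.5, 1.0     # arbitrary "true" values
n, reps = 30, 20000
X = rng.uniform(0, 10, n)              # regressors held fixed across samples

a_hats = np.empty(reps)
b_hats = np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, sigma, n)      # E(u) = 0, constant variance
    Y = alpha + beta * X + u
    x = X - X.mean()
    b_hats[r] = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    a_hats[r] = Y.mean() - b_hats[r] * X.mean()

print(b_hats.mean())   # close to beta = 0.5
print(a_hats.mean())   # close to alpha = 2.0
```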
3. The minimum variance property

In this property we shall prove the Gauss-Markov Theorem, which states that the least squares estimates are best (have the smallest variance) as compared with any other linear unbiased estimator obtained from other econometric methods. First we have to find $\mathrm{Var}(\hat\beta)$ and $\mathrm{Var}(\hat\alpha)$, and then we have to prove the minimum variance property.


Variance of $\hat\beta$: Since $\hat\beta = \beta + \sum_{i=1}^{n} k_i u_i$ (see property 2), we have $\hat\beta - \beta = \sum_{i=1}^{n} k_i u_i$, and therefore

$$\mathrm{Var}(\hat\beta) = E(\hat\beta - \beta)^2 = E\left(\sum_{i=1}^{n} k_i u_i\right)^2 = \sum_{i=1}^{n} k_i^2 E(u_i^2) + 2\sum_{i<j} k_i k_j E(u_i u_j).$$

Since $E(u_i^2) = \sigma_u^2$ and $E(u_i u_j) = 0$ for $i \ne j$,

$$\mathrm{Var}(\hat\beta) = \sigma_u^2 \sum_{i=1}^{n} k_i^2 = \frac{\sigma_u^2}{\sum_{i=1}^{n} x_i^2}, \quad \text{since } \sum k_i^2 = \frac{\sum x_i^2}{\left(\sum x_i^2\right)^2} = \frac{1}{\sum x_i^2}.$$

Variance of $\hat\alpha$: Since $\hat\alpha = \bar Y - \hat\beta \bar X$ (see property 1), substituting $\hat\beta = \sum k_i Y_i$ we obtain $\hat\alpha = \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right) Y_i$. Note that $E(Y_i) = \alpha + \beta X_i$, as $E(u_i) = 0$, so that

$$\mathrm{Var}(Y_i) = E[\alpha + \beta X_i + u_i - \alpha - \beta X_i]^2 = E(u_i^2) = \sigma_u^2.$$

Since the $Y_i$ are independent,

$$\mathrm{Var}(\hat\alpha) = \sum_{i=1}^{n}\left(\frac{1}{n} - \bar X k_i\right)^2 \mathrm{Var}(Y_i) = \sigma_u^2 \sum_{i=1}^{n}\left(\frac{1}{n^2} - \frac{2\bar X k_i}{n} + \bar X^2 k_i^2\right) = \sigma_u^2\left(\frac{1}{n} + \frac{\bar X^2}{\sum x_i^2}\right),$$

using $\sum k_i = 0$ and $\sum k_i^2 = 1/\sum x_i^2$. Since $\sum X_i^2 = \sum x_i^2 + n\bar X^2$, this can also be written

$$\mathrm{Var}(\hat\alpha) = \sigma_u^2 \, \frac{\sum_{i=1}^{n} X_i^2}{n \sum_{i=1}^{n} x_i^2}.$$
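The two variance formulas can likewise be checked by simulation (again an illustrative sketch; the parameter values are arbitrary and the regressors are held fixed across replications, as the derivation assumes):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, sigma = 2.0, 0.5, 1.0
n, reps = 30, 50000
X = rng.uniform(0, 10, n)
x = X - X.mean()

b_hats = np.empty(reps)
a_hats = np.empty(reps)
for r in range(reps):
    Y = alpha + beta * X + rng.normal(0.0, sigma, n)
    b_hats[r] = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    a_hats[r] = Y.mean() - b_hats[r] * X.mean()

# Empirical sampling variances vs. the formulas derived above
print(b_hats.var(), sigma**2 / np.sum(x**2))
print(a_hats.var(), sigma**2 * np.sum(X**2) / (n * np.sum(x**2)))
```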

Case (a): $\hat\beta$ has the least variance.

Now we want to prove that any other linear unbiased estimator of the true parameter, for example $\beta^*$, obtained from any other econometric method, has a bigger variance than the least squares estimator $\hat\beta$. Thus we have to prove that $\mathrm{Var}(\hat\beta) < \mathrm{Var}(\beta^*)$.

Proof: The new estimator $\beta^*$ is by assumption a linear combination of the $Y$'s, a weighted sum of the sample values $Y_i$, the weights being different from the weights of the least squares estimates. For example, let us assume

$$\beta^* = \sum_{i=1}^{n} c_i Y_i, \quad \text{where } c_i = k_i + d_i$$

and the $d_i$ are an arbitrary set of weights, so that the $c_i$ are similar (but not identical) to the $k_i$'s in the expression of $\hat\beta$.

Let us put $Y_i = \alpha + \beta X_i + u_i$ and obtain

$$\beta^* = \sum_{i=1}^{n} c_i(\alpha + \beta X_i + u_i) = \alpha \sum_{i=1}^{n} c_i + \beta \sum_{i=1}^{n} c_i X_i + \sum_{i=1}^{n} c_i u_i.$$

It is assumed that, like $\hat\beta$, $\beta^*$ is also an unbiased estimator of $\beta$, i.e., $E(\beta^*) = \beta$. Now $E(\beta^*) = \alpha \sum c_i + \beta \sum c_i X_i$, since $E(u_i) = 0$. Hence $E(\beta^*) = \beta$ if and only if

$$\sum_{i=1}^{n} c_i = 0 \quad \text{and} \quad \sum_{i=1}^{n} c_i X_i = 1.$$

But $\sum c_i = 0$ implies $\sum d_i = 0$, because $\sum c_i = \sum(k_i + d_i) = \sum k_i + \sum d_i = \sum d_i$ (as $\sum k_i = 0$). Similarly, $\sum c_i X_i = 1$ requires $\sum d_i X_i = 0$, since $\sum c_i X_i = \sum k_i X_i + \sum d_i X_i = 1 + \sum d_i X_i$. These conditions also give

$$\sum_{i=1}^{n} k_i d_i = \frac{\sum d_i x_i}{\sum x_i^2} = \frac{\sum d_i X_i - \bar X \sum d_i}{\sum x_i^2} = 0 \quad (\text{as } \sum d_i X_i = 0 \text{ and } \sum d_i = 0).$$

Now the variance: since $\mathrm{Var}(Y_i) = \sigma_u^2$ and the $Y_i$ are independent,

$$\mathrm{Var}(\beta^*) = \mathrm{Var}\left(\sum_{i=1}^{n} c_i Y_i\right) = \sum_{i=1}^{n} c_i^2 \, \mathrm{Var}(Y_i) = \sigma_u^2 \sum_{i=1}^{n} c_i^2.$$

Substituting $c_i = k_i + d_i$, we find

$$\sum c_i^2 = \sum (k_i + d_i)^2 = \sum k_i^2 + \sum d_i^2 + 2\sum k_i d_i = \sum k_i^2 + \sum d_i^2,$$

and therefore

$$\mathrm{Var}(\beta^*) = \sigma_u^2 \sum k_i^2 + \sigma_u^2 \sum d_i^2 = \mathrm{Var}(\hat\beta) + \sigma_u^2 \sum_{i=1}^{n} d_i^2.$$

Since $\sigma_u^2 \sum d_i^2 > 0$ (because all the $d_i$'s are not zero; if every $d_i = 0$, then $\beta^*$ coincides with $\hat\beta$), it follows that

$$\mathrm{Var}(\beta^*) > \mathrm{Var}(\hat\beta), \quad \text{or} \quad \mathrm{Var}(\hat\beta) < \mathrm{Var}(\beta^*).$$

Thus it is proved that $\hat\beta$ is the BLUE of $\beta$.
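The following sketch illustrates this result numerically: it constructs rival weights $c_i = k_i + d_i$ with $\sum d_i = 0$ and $\sum d_i X_i = 0$ (so the rival estimator is still linear and unbiased) and shows that its sampling variance exceeds that of $\hat\beta$. The construction of the $d_i$ by projection and the scale factor 0.05 are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, sigma = 2.0, 0.5, 1.0
n, reps = 30, 50000
X = rng.uniform(0, 10, n)
x = X - X.mean()
k = x / np.sum(x**2)                    # OLS weights

# Build d with sum(d) = 0 and sum(d * X) = 0 by projecting a random
# vector onto the orthogonal complement of span{1, X}.
Z = np.column_stack([np.ones(n), X])
v = rng.normal(size=n)
d = v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]
c = k + 0.05 * d                        # rival weights: linear and still unbiased

b_ols = np.empty(reps)
b_alt = np.empty(reps)
for r in range(reps):
    Y = alpha + beta * X + rng.normal(0.0, sigma, n)
    b_ols[r] = k @ Y
    b_alt[r] = c @ Y

print(b_ols.mean(), b_alt.mean())   # both close to beta = 0.5 (unbiased)
print(b_ols.var(), b_alt.var())     # Var(beta-hat) is strictly smaller
```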
Case (b): In the same way it can be proved that the least squares intercept $\hat\alpha$ possesses minimum variance. We take a new estimator

$$\alpha^* = \sum_{i=1}^{n} c_i^* Y_i, \quad \text{where } c_i^* = K_i + d_i \text{ and } K_i = \frac{1}{n} - \bar X k_i,$$

which we assume to be a linear function of the $Y$'s, with weights $c_i^*$ different from the least squares weights $K_i$. Since $\hat\alpha = \sum K_i Y_i$ (see property 1), this shows that, like $\hat\alpha$, $\alpha^*$ is also a linear function of the $Y_i$'s.

Now $\alpha^*$ is to be regarded as an unbiased estimator of $\alpha$ if $E(\alpha^*) = \alpha$. We substitute $Y_i = \alpha + \beta X_i + u_i$ in $\alpha^*$ and get

$$E(\alpha^*) = \alpha \sum_{i=1}^{n} c_i^* + \beta \sum_{i=1}^{n} c_i^* X_i.$$

Now $E(\alpha^*) = \alpha$ if and only if $\sum c_i^* = 1$ and $\sum c_i^* X_i = 0$. Since $\sum K_i = 1$ and $\sum K_i X_i = 0$, these conditions imply $\sum d_i = 0$ and $\sum d_i X_i = 0$, and hence, as in case (a), $\sum K_i d_i = \frac{1}{n}\sum d_i - \bar X \sum k_i d_i = 0$.

The variance of $\alpha^*$ is given by

$$\mathrm{Var}(\alpha^*) = \sigma_u^2 \sum_{i=1}^{n} (K_i + d_i)^2 = \sigma_u^2 \sum K_i^2 + \sigma_u^2 \sum d_i^2 = \mathrm{Var}(\hat\alpha) + \sigma_u^2 \sum_{i=1}^{n} d_i^2.$$

Since $\sigma_u^2 \sum d_i^2 > 0$, because all the $d_i$'s are not zero, we have

$$\mathrm{Var}(\alpha^*) > \mathrm{Var}(\hat\alpha), \quad \text{or} \quad \mathrm{Var}(\hat\alpha) < \mathrm{Var}(\alpha^*).$$

Hence it is proved that $\hat\alpha$ is the BLUE of $\alpha$.
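A parallel numerical check for the intercept (same caveats as the sketch above; the weights $1/n - \bar X k_i$ are the $K_i$ of case (b)):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, beta, sigma = 2.0, 0.5, 1.0
n, reps = 30, 50000
X = rng.uniform(0, 10, n)
x = X - X.mean()
k = x / np.sum(x**2)
w = 1.0 / n - X.mean() * k       # OLS intercept weights: alpha-hat = w @ Y

Z = np.column_stack([np.ones(n), X])
v = rng.normal(size=n)
d = v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]   # sum(d)=0, sum(d*X)=0
c = w + 0.05 * d                 # rival intercept weights, still unbiased

a_ols = np.empty(reps)
a_alt = np.empty(reps)
for r in range(reps):
    Y = alpha + beta * X + rng.normal(0.0, sigma, n)
    a_ols[r] = w @ Y
    a_alt[r] = c @ Y

print(a_ols.mean(), a_alt.mean())   # both close to alpha = 2.0
print(a_ols.var(), a_alt.var())     # Var(alpha-hat) is strictly smaller
```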
2.10. The Variance of the Random Variable u

The formulae of the variances of $\hat\alpha$ and $\hat\beta$ involve the variance of the random term $u_i$, $\sigma_u^2$. However, the true variance of $u_i$ cannot be computed since the values of $u_i$ are not observable. But we may obtain an unbiased estimate of $\sigma_u^2$ from the expression

$$\hat\sigma_u^2 = \frac{\sum_{i=1}^{n} e_i^2}{n-2},$$

where $e_i = Y_i - \hat Y_i = Y_i - \hat\alpha - \hat\beta X_i$; $Y_i$ is the observed value and $\hat Y_i$ is the estimated value, i.e., $Y_i = \hat\alpha + \hat\beta X_i + e_i$ and $\hat Y_i = \hat\alpha + \hat\beta X_i$ for $i = 1, 2, \ldots, n$.

Proof: One property of the regression line $\hat Y = \hat\alpha + \hat\beta X$ is that it passes through the point $(\bar X, \bar Y)$. So $\bar Y = \hat\alpha + \hat\beta \bar X$. Again, from the observed relationship $Y_i = \alpha + \beta X_i + u_i$, summing over $i$ and dividing by $n$, we get $\bar Y = \alpha + \beta \bar X + \bar u$, where $\bar u = \frac{1}{n}\sum_{i=1}^{n} u_i$.

Since $e_i = Y_i - \hat Y_i = (Y_i - \bar Y) - (\hat Y_i - \bar Y)$, with $Y_i - \bar Y = \beta x_i + (u_i - \bar u)$ and $\hat Y_i - \bar Y = \hat\beta x_i$ (where $x_i = X_i - \bar X$), we have

$$e_i = -(\hat\beta - \beta)x_i + (u_i - \bar u)$$

or, $e_i^2 = (\hat\beta - \beta)^2 x_i^2 + (u_i - \bar u)^2 - 2(\hat\beta - \beta)x_i(u_i - \bar u)$.

Summing over $i$ and using $\sum x_i \bar u = \bar u \sum x_i = 0$,

$$\sum_{i=1}^{n} e_i^2 = (\hat\beta - \beta)^2 \sum x_i^2 + \sum (u_i - \bar u)^2 - 2(\hat\beta - \beta)\sum x_i u_i.$$

Since $\hat\beta - \beta = \sum k_i u_i = \frac{\sum x_i u_i}{\sum x_i^2}$, we have $\sum x_i u_i = (\hat\beta - \beta)\sum x_i^2$, so that

$$\sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (u_i - \bar u)^2 - (\hat\beta - \beta)^2 \sum_{i=1}^{n} x_i^2.$$

Taking expectations, with $E(u_i^2) = \sigma_u^2$ and $E(u_i u_j) = 0$ for $i \ne j$:

$$E\left[\sum (u_i - \bar u)^2\right] = E\left[\sum u_i^2 - n\bar u^2\right] = n\sigma_u^2 - \sigma_u^2 = (n-1)\sigma_u^2,$$

since $E(n\bar u^2) = \frac{1}{n}E\left(\sum u_i\right)^2 = \frac{1}{n}\left[\sum E(u_i^2) + 2\sum_{i<j} E(u_i u_j)\right] = \sigma_u^2$; and

$$E\left[(\hat\beta - \beta)^2 \sum x_i^2\right] = \sum x_i^2 \, \mathrm{Var}(\hat\beta) = \sigma_u^2.$$

Therefore

$$E\left(\sum_{i=1}^{n} e_i^2\right) = (n-1)\sigma_u^2 - \sigma_u^2 = (n-2)\sigma_u^2, \quad \text{i.e., } E\left(\frac{\sum e_i^2}{n-2}\right) = \sigma_u^2.$$

So $\frac{\sum e_i^2}{n-2}$ is an unbiased estimator of $\sigma_u^2$. If we denote $\hat\sigma_u^2 = \frac{1}{n-2}\sum_{i=1}^{n} e_i^2$, then $\hat\sigma_u^2$ is an unbiased estimator of $\sigma_u^2$.
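A short simulation can illustrate why the divisor must be $n - 2$ rather than $n$ (an illustrative sketch with arbitrary parameter values; dividing by $n$ is shown only for contrast):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, sigma = 2.0, 0.5, 1.0
n, reps = 15, 50000
X = rng.uniform(0, 10, n)
x = X - X.mean()

s2_n2 = np.empty(reps)   # sum(e^2) / (n - 2)
s2_n = np.empty(reps)    # sum(e^2) / n, for contrast
for r in range(reps):
    Y = alpha + beta * X + rng.normal(0.0, sigma, n)
    b = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    a = Y.mean() - b * X.mean()
    e = Y - a - b * X
    s2_n2[r] = np.sum(e**2) / (n - 2)
    s2_n[r] = np.sum(e**2) / n

print(s2_n2.mean())  # close to sigma^2 = 1.0 (unbiased)
print(s2_n.mean())   # close to (n-2)/n = 0.867, biased downward
```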
2.11. Maximum Likelihood Estimators (MLE's) of $\alpha$, $\beta$ and $\sigma_u^2$

If each $u_i$ $[Y_i = \alpha + \beta X_i + u_i]$ is normally distributed with mean 0 and variance $\sigma_u^2$, i.e., $u_i \sim N(0, \sigma_u^2)$, and $u_1, u_2, \ldots, u_n$ are independent, then the MLE's of $\alpha$ and $\beta$ are equivalent to the OLS estimators of $\alpha$ and $\beta$ (i.e., $\hat\alpha$ and $\hat\beta$).

Proof: Since $u_i \sim N(0, \sigma_u^2)$, the p.d.f. of $u_i$ is given by

$$f(u_i) = \frac{1}{\sigma_u \sqrt{2\pi}} \exp\left(-\frac{u_i^2}{2\sigma_u^2}\right).$$

The joint probability distribution function of $u_1, u_2, \ldots, u_n$ is given by $f(u_1, u_2, \ldots, u_n)$; given the set of sample observations, it is looked upon as a function of the parameters and is called the likelihood function of the parameters. Since $u_1, u_2, \ldots, u_n$ are independent, we can write

$$f(u_1, u_2, \ldots, u_n) = f(u_1) f(u_2) \cdots f(u_n),$$

or,

$$L(\alpha, \beta, \sigma_u^2) = \prod_{i=1}^{n} f(u_i) = \frac{1}{(2\pi\sigma_u^2)^{n/2}} \exp\left(-\frac{1}{2\sigma_u^2}\sum_{i=1}^{n} u_i^2\right).$$

Taking logs on both sides and substituting $u_i = Y_i - \alpha - \beta X_i$, we get

$$\log L = -\frac{n}{2}\log 2\pi - n \log \sigma_u - \frac{1}{2\sigma_u^2}\sum_{i=1}^{n}(Y_i - \alpha - \beta X_i)^2.$$

For any given $\sigma_u$, maximizing $\log L$ with respect to $\alpha$ and $\beta$ therefore amounts to minimizing $\sum_{i=1}^{n}(Y_i - \alpha - \beta X_i)^2$, which is exactly the least squares criterion; hence the MLE's of $\alpha$ and $\beta$ coincide with $\hat\alpha$ and $\hat\beta$.