2.10. BLUE Properties of OLS Estimators and the Gauss-Markov Theorem
Since
$$\hat{\beta} = \sum_{i=1}^{n} k_i Y_i, \qquad \text{where } k_i = \frac{x_i}{\sum_{i=1}^{n} x_i^2}, \quad x_i = (X_i - \bar{X}), \quad i = 1, 2, \ldots, n,$$
and $k_i \neq 0$, $\hat{\beta}$ is a linear function of the $Y_i$'s. The weights $k_i$ satisfy
$$\sum_{i=1}^{n} k_i = 0 \;\left(\text{as } \sum_{i=1}^{n} x_i = 0\right) \qquad \text{and} \qquad \sum_{i=1}^{n} k_i X_i = 1.$$
Similarly, $\hat{\alpha} = \bar{Y} - \hat{\beta}\bar{X} = \sum_{i=1}^{n}\left(\dfrac{1}{n} - \bar{X}k_i\right)Y_i$ is also a linear function of the $Y_i$'s.

Substituting $Y_i = \alpha + \beta X_i + u_i$ in the expression of $\hat{\alpha}$ we have,
$$\hat{\alpha} = \alpha + \bar{u} - \bar{X}\sum_{i=1}^{n} k_i u_i$$
Taking expectations,
$$E(\hat{\alpha}) = \alpha + E(\bar{u}) - \bar{X}\sum_{i=1}^{n} k_i E(u_i)$$
$$\therefore \quad E(\hat{\alpha}) = \alpha, \text{ as } E(u_i) = 0.$$
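As an illustrative aside (not part of the original derivation), the linearity and unbiasedness of $\hat{\alpha}$ and $\hat{\beta}$ can be checked by a small simulation. The sketch below assumes arbitrary parameter values ($\alpha = 2$, $\beta = 0.5$, $\sigma_u = 1$) and a fixed set of $X$'s; all variable names are hypothetical.

```python
# A minimal Monte Carlo sketch: alpha-hat and beta-hat, written as linear
# functions of the Y_i with weights k_i = x_i / sum(x_i^2), average out to
# the true parameters. Parameter values are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma_u, n = 2.0, 0.5, 1.0, 30
X = np.linspace(1, 10, n)
x = X - X.mean()                 # deviations x_i = X_i - X-bar
k = x / np.sum(x**2)             # OLS weights k_i

# Check the weight identities used in the proof.
assert abs(k.sum()) < 1e-12             # sum k_i = 0
assert abs((k * X).sum() - 1) < 1e-12   # sum k_i X_i = 1

beta_hats, alpha_hats = [], []
for _ in range(20000):
    u = rng.normal(0, sigma_u, n)
    Y = alpha + beta * X + u
    b = np.sum(k * Y)                    # beta-hat = sum k_i Y_i
    a = Y.mean() - b * X.mean()          # alpha-hat = Y-bar - beta-hat * X-bar
    beta_hats.append(b); alpha_hats.append(a)

print(np.mean(beta_hats))   # ~ 0.5, i.e. E(beta-hat) = beta
print(np.mean(alpha_hats))  # ~ 2.0, i.e. E(alpha-hat) = alpha
```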
Similarly, substituting $Y_i = \alpha + \beta X_i + u_i$ in $\hat{\beta} = \sum_{i=1}^{n} k_i Y_i$ and using $\sum k_i = 0$ and $\sum k_i X_i = 1$, we obtain
$$\hat{\beta} = \beta + \sum_{i=1}^{n} k_i u_i$$
or, $(\hat{\beta} - \beta) = \sum_{i=1}^{n} k_i u_i$.
Taking expectations,
$$E(\hat{\beta}) = \beta + \sum_{i=1}^{n} k_i E(u_i) = \beta, \text{ as } E(u_i) = 0.$$
Thus $\hat{\alpha}$ and $\hat{\beta}$ are unbiased estimators of $\alpha$ and $\beta$.

For the variance,
$$\mathrm{Var}(\hat{\beta}) = E(\hat{\beta} - \beta)^2 = E\left(\sum_{i=1}^{n} k_i u_i\right)^2 = \sum_{i=1}^{n} k_i^2 E(u_i^2) + 2\sum_{i<j} k_i k_j E(u_i u_j)$$
Since $\mathrm{Var}(Y_i) = E(u_i^2) = \sigma_u^2$ and $E(u_i u_j) = 0$ for $i \neq j$,
$$\mathrm{Var}(\hat{\beta}) = \sigma_u^2\sum_{i=1}^{n} k_i^2 = \frac{\sigma_u^2}{\sum_{i=1}^{n} x_i^2}.$$
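Continuing the simulation sketch above, the sampling variance of $\hat{\beta}$ can be compared with this formula:

```python
# The simulated variance of beta-hat should match sigma_u^2 / sum(x_i^2).
print(np.var(beta_hats))            # simulated Var(beta-hat)
print(sigma_u**2 / np.sum(x**2))    # theoretical value
```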
Case (a): $\hat{\beta}$ has the least variance.

Now we want to prove that any other linear unbiased estimate of the true parameter, for example $\beta^*$, obtained from any other econometric method, has a bigger variance than the least squares estimate $\hat{\beta}$. Let
$$\beta^* = \sum_{i=1}^{n} c_i Y_i,$$
the weights $c_i$ being different from the weights $k_i$ of the least squares estimates. It is assumed that, like $\hat{\beta}$, $\beta^*$ is an unbiased estimator of $\beta$, i.e., $E(\beta^*) = \beta$.

Substituting $Y_i = \alpha + \beta X_i + u_i$,
$$\beta^* = \alpha\sum_{i=1}^{n} c_i + \beta\sum_{i=1}^{n} c_i X_i + \sum_{i=1}^{n} c_i u_i$$
Now $E(\beta^*) = \beta$ if, and only if,
$$\sum_{i=1}^{n} c_i = 0 \qquad \text{and} \qquad \sum_{i=1}^{n} c_i X_i = 1,$$
since $E(u_i) = 0$.
Now, write $c_i = k_i + d_i$, where the $d_i$'s measure the difference between the two sets of weights. Then
$$\sum_{i=1}^{n} c_i = \sum_{i=1}^{n} k_i + \sum_{i=1}^{n} d_i \qquad \text{and} \qquad \sum_{i=1}^{n} c_i X_i = \sum_{i=1}^{n} k_i X_i + \sum_{i=1}^{n} d_i X_i.$$
Given that $\sum k_i = 0$ and $\sum k_i X_i = 1$, the unbiasedness conditions $\sum c_i = 0$ and $\sum c_i X_i = 1$ imply $\sum d_i = 0$ and $\sum d_i X_i = 0$, and hence
$$\sum_{i=1}^{n} k_i d_i = \frac{\sum d_i x_i}{\sum x_i^2} = \frac{\sum d_i X_i - \bar{X}\sum d_i}{\sum x_i^2} = 0.$$
Thus $\beta^*$ will be a linear unbiased estimate of $\beta$ (with weights $c_i = k_i + d_i$ different from the least squares weights).

Now
$$\mathrm{Var}(\beta^*) = \mathrm{Var}\left(\sum_{i=1}^{n} c_i Y_i\right) = \sum_{i=1}^{n} c_i^2\,\mathrm{Var}(Y_i) = \sigma_u^2\sum_{i=1}^{n} c_i^2,$$
because $\mathrm{Var}(Y_i) = E[Y_i - E(Y_i)]^2 = E(u_i^2) = \sigma_u^2$, where $Y_i = \alpha + \beta X_i + u_i$ and $E(Y_i) = E(\alpha + \beta X_i) + E(u_i) = \alpha + \beta X_i$, as $E(u_i) = 0$.

Therefore,
$$\mathrm{Var}(\beta^*) = \sigma_u^2\sum_{i=1}^{n}(k_i + d_i)^2 = \sigma_u^2\sum_{i=1}^{n} k_i^2 + \sigma_u^2\sum_{i=1}^{n} d_i^2 + 2\sigma_u^2\sum_{i=1}^{n} k_i d_i$$
Since $\sum k_i d_i = 0$,
$$\mathrm{Var}(\beta^*) = \sigma_u^2\sum_{i=1}^{n} k_i^2 + \sigma_u^2\sum_{i=1}^{n} d_i^2 = \mathrm{Var}(\hat{\beta}) + \sigma_u^2\sum_{i=1}^{n} d_i^2.$$
Since $\sigma_u^2\sum d_i^2 > 0$ (unless $d_i = 0$ for every $i$, in which case $\beta^* = \hat{\beta}$), it proves that $\mathrm{Var}(\beta^*) > \mathrm{Var}(\hat{\beta})$: the least squares estimate $\hat{\beta}$ has the least variance in the class of linear unbiased estimators.
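Continuing the simulation sketch above (the perturbation $d$ and the projection step are illustrative constructions, not from the text), one can build alternative weights $c_i = k_i + d_i$ satisfying the unbiasedness conditions and verify that the resulting variance exceeds that of OLS:

```python
# Perturb the OLS weights by d_i chosen so that sum(d_i) = 0 and
# sum(d_i X_i) = 0, so c_i = k_i + d_i still gives a linear unbiased
# estimator beta*; then Var(beta*) = Var(beta-hat) + sigma_u^2 * sum(d_i^2).
d = rng.normal(0, 0.01, n)
# Project d onto {d : sum d_i = 0, sum d_i X_i = 0}, the orthogonal
# complement of span{1, X}.
Z = np.column_stack([np.ones(n), X])
d = d - Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
c = k + d

assert abs(c.sum()) < 1e-10 and abs((c * X).sum() - 1) < 1e-10

var_ols  = sigma_u**2 * np.sum(k**2)   # = sigma_u^2 / sum x_i^2
var_star = sigma_u**2 * np.sum(c**2)   # = var_ols + sigma_u^2 * sum d_i^2
print(var_ols, var_star)               # var_star > var_ols
```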
Case (b): $\hat{\alpha}$ has the least variance.

Similarly, let $\alpha^* = \sum_{i=1}^{n}\left(\dfrac{1}{n} - \bar{X}c_i\right)Y_i$. This shows that, like $\hat{\alpha}$, $\alpha^*$ is also a linear function of the $Y_i$'s. Now $\alpha^*$ is to be regarded as an unbiased estimator of $\alpha$ if $E(\alpha^*) = \alpha$. We substitute $Y_i = \alpha + \beta X_i + u_i$ in $\alpha^*$ and we get $E(\alpha^*) = \alpha$ if, and only if, $\sum c_i = 0$ and $\sum c_i X_i = 1$, the same conditions as before. Since $\sum c_i = 0$,
$$\mathrm{Var}(\alpha^*) = \sigma_u^2\sum_{i=1}^{n}\left(\frac{1}{n} - \bar{X}c_i\right)^2 = \sigma_u^2\left[\frac{1}{n} + \bar{X}^2\sum_{i=1}^{n} c_i^2\right].$$
Writing $c_i = k_i + d_i$ as before, $\sum c_i^2 = \sum k_i^2 + \sum d_i^2$, so that
$$\mathrm{Var}(\alpha^*) = \sigma_u^2\left[\frac{1}{n} + \bar{X}^2\sum k_i^2\right] + \sigma_u^2\bar{X}^2\sum d_i^2 = \mathrm{Var}(\hat{\alpha}) + \sigma_u^2\bar{X}^2\sum_{i=1}^{n} d_i^2 \geq \mathrm{Var}(\hat{\alpha}),$$
where
$$\mathrm{Var}(\hat{\alpha}) = \sigma_u^2\left[\frac{1}{n} + \frac{\bar{X}^2}{\sum x_i^2}\right] = \sigma_u^2\,\frac{\sum X_i^2}{n\sum x_i^2}, \qquad \text{since } \sum_{i=1}^{n} k_i^2 = \frac{1}{\sum x_i^2} \text{ and } \sum X_i^2 = \sum x_i^2 + n\bar{X}^2.$$
Hence $\hat{\alpha}$ also has the least variance in the class of linear unbiased estimators.
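The $\hat{\alpha}$ variance formula can be checked against the same simulation (again an illustrative aside, reusing the hypothetical names defined earlier):

```python
# The simulated variance of alpha-hat should match
# sigma_u^2 * sum(X_i^2) / (n * sum(x_i^2)), the formula derived above.
print(np.var(alpha_hats))
print(sigma_u**2 * np.sum(X**2) / (n * np.sum(x**2)))
```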
An unbiased estimator of the error variance $\sigma_u^2$ is
$$\hat{\sigma}_u^2 = \frac{\sum_{i=1}^{n} e_i^2}{n-2}, \qquad \text{where } e_i = Y_i - \hat{Y}_i = Y_i - \hat{\alpha} - \hat{\beta}X_i,$$
and where $Y_i = \alpha + \beta X_i + u_i$. Summing the model over the $n$ observations, $\sum Y_i = n\alpha + \beta\sum X_i + \sum u_i$,
or, $\bar{Y} = \alpha + \beta\bar{X} + \bar{u}$, as $\bar{u} = \dfrac{1}{n}\sum_{i=1}^{n} u_i$.
Since $e_i = Y_i - \hat{Y}_i$, we can write
$$e_i = (Y_i - \bar{Y}) - (\hat{Y}_i - \bar{Y})$$
Expanding $\sum e_i^2$ in terms of the $u_i$'s and taking expectations term by term gives
$$E\left(\sum_{i=1}^{n} e_i^2\right) = n\sigma_u^2 - 2\sigma_u^2 = \sigma_u^2(n-2),$$
so that
$$E(\hat{\sigma}_u^2) = \frac{E\left(\sum e_i^2\right)}{n-2} = \sigma_u^2;$$
then $\hat{\sigma}_u^2$ is an unbiased estimator of $\sigma_u^2$.
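As another illustrative check (reusing the simulated setup from the earlier sketches), the division by $n-2$ rather than $n$ is what makes the estimator unbiased:

```python
# The residual-based estimator sum(e_i^2)/(n - 2) should average to
# sigma_u^2, while dividing by n understates it by the factor (n - 2)/n.
s2_unbiased, s2_biased = [], []
for _ in range(20000):
    u = rng.normal(0, sigma_u, n)
    Y = alpha + beta * X + u
    b = np.sum(k * Y)
    a = Y.mean() - b * X.mean()
    e = Y - a - b * X                  # residuals e_i = Y_i - alpha-hat - beta-hat X_i
    s2_unbiased.append(np.sum(e**2) / (n - 2))
    s2_biased.append(np.sum(e**2) / n)

print(np.mean(s2_unbiased))   # ~ 1.0 = sigma_u^2
print(np.mean(s2_biased))     # ~ (n - 2)/n = 0.933...
```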
2.11. Maximum Likelihood Estimators (MLE's) of α, β and σ_u²

If each $u_i$ [in $Y_i = \alpha + \beta X_i + u_i$] is normally distributed with mean 0 and variance $\sigma_u^2$, i.e. $u_i \sim N(0, \sigma_u^2)$, and $u_1, u_2, \ldots, u_n$ are independent, then the MLE's of $\alpha$ and $\beta$ are equivalent to the OLS estimators of $\alpha$ and $\beta$ (i.e., $\hat{\alpha}$ and $\hat{\beta}$).

Proof: Since $u_i \sim N(0, \sigma_u^2)$, the p.d.f. of $u_i$ is given by
$$f(u_i) = \frac{1}{\sigma_u\sqrt{2\pi}}\, e^{-u_i^2/2\sigma_u^2}$$

The joint probability distribution function of $u_1, u_2, \ldots, u_n$ is given by $f(u_1, u_2, \ldots, u_n)$; given the set of sample observations, it is looked upon as a function of the parameters and is called the likelihood function of the parameters. Since $u_1, u_2, \ldots, u_n$ are independent, we can write
$$L = f(u_1)\,f(u_2)\cdots f(u_n) = \frac{1}{\left(\sigma_u\sqrt{2\pi}\right)^n}\, e^{-\frac{1}{2\sigma_u^2}\sum u_i^2}$$

Taking logs on both sides we get,
$$\log L = -\frac{n}{2}\log 2\pi - n\log\sigma_u - \frac{1}{2\sigma_u^2}\sum_{i=1}^{n}\left(Y_i - \alpha - \beta X_i\right)^2$$
Maximizing $\log L$ with respect to $\alpha$ and $\beta$ amounts to minimizing $\sum(Y_i - \alpha - \beta X_i)^2$, which is exactly the least squares criterion; hence the MLE's of $\alpha$ and $\beta$ coincide with $\hat{\alpha}$ and $\hat{\beta}$.
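This equivalence can also be seen numerically. The following self-contained sketch (hypothetical data, parameterizing $\log\sigma_u$ to keep $\sigma_u > 0$) maximizes the log-likelihood above and compares the result with the closed-form OLS estimates:

```python
# Numerically maximizing the Gaussian log-likelihood recovers the OLS
# estimates of alpha and beta.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 50
X = rng.uniform(0, 10, n)
Y = 2.0 + 0.5 * X + rng.normal(0, 1.0, n)

def neg_log_lik(theta):
    a, b, log_s = theta
    s = np.exp(log_s)                       # enforce sigma_u > 0
    resid = Y - a - b * X
    return (n / 2) * np.log(2 * np.pi) + n * np.log(s) \
           + np.sum(resid**2) / (2 * s**2)

mle = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0]).x

x = X - X.mean()
b_ols = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
a_ols = Y.mean() - b_ols * X.mean()
print(mle[:2])        # MLE of (alpha, beta)
print(a_ols, b_ols)   # OLS estimates: essentially identical
```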