Probability and Statistics 2
STATISTICS II
August 6, 2014
COURSE OUTLINE
P(X = xᵢ) = µᵢ/∞ = 0

where µᵢ is the number of occurrences of the value xᵢ. But the fact that this probability is zero does not mean that the occurrence is impossible. This suggests that we need another way of expressing probability. This is done by the use of the CDF as follows:

F(µ) = ∫_{−∞}^{µ} f(x) dx

where f(x) is a function which is continuous and differentiable.
This function f(x) is called the probability density function (pdf) of the continuous random variable X.

For f(x) to be a pdf it must satisfy the following conditions:

1. f(x) ≥ 0
2. ∫_{−∞}^{∞} f(x) dx = 1

With the above definition of the distribution function, we can derive the probabilities for a continuous random variable.
Suppose f(x) = 3x² , 0 ≤ x ≤ 1
            = 0 , elsewhere
1. p(X ≤ a) = p(X ≥ a)

∫₀^a f(x) dx = ∫_a^1 f(x) dx
∫₀^a 3x² dx = ∫_a^1 3x² dx
x³ |₀^a = x³ |_a^1
a³ − 0 = 1 − a³
2a³ = 1
a = 0.5^(1/3) ≈ 0.794
2. p(X > b) = 0.05

∫_b^1 f(x) dx = 0.05
∫_b^1 3x² dx = 0.05
x³ |_b^1 = 0.05
1 − b³ = 0.05
b³ = 0.95
b = 0.95^(1/3)
  = 0.983
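Both answers above can be spot-checked numerically. This is a minimal sketch using a midpoint Riemann sum in place of the exact integrals; the helper name `integrate` is ours, not from the notes.

```python
# Numeric check of the two results for f(x) = 3x^2 on [0, 1].

def integrate(f, lo, hi, n=100_000):
    """Midpoint Riemann sum of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 3 * x ** 2

a = 0.5 ** (1 / 3)            # splits the probability into equal halves
b = 0.95 ** (1 / 3)           # leaves 5% of the probability above it

print(integrate(f, 0, a))     # ~0.5
print(integrate(f, b, 1))     # ~0.05
```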
i. By definition

F(µ) = ∫_{−∞}^{µ} f(x) dx

so the pdf is recovered by differentiation:

f(x) = d/dx F(x)
     = d/dx [1 − e^(−2x)]
     = 2e^(−2x)

⇒ f(x) = 2e^(−2x) , x > 0
        = 0 , elsewhere
ii. Now f(x) ≥ 0 (since e ≈ 2.71, e^(−2x) > 0) and

∫₀^∞ 2e^(−2x) dx = −e^(−2x) |₀^∞
                 = −[e^(−∞) − 1]
                 = 1

so f(x) is a valid pdf.
F(µ) = ∫_{−∞}^{µ} f(x) dx
     = 0.5 ∫₀^µ x dx
     = (1/4) x² |₀^µ
     = µ²/4

⇒ F(µ) = µ²/4 , 0 < µ < 2

i.e. F(x) = x²/4 , 0 < x < 2
          = 0 , x ≤ 0
          = 1 , x ≥ 2

µ:    <0   0   1    2   >2
F(µ):  0   0   1/4  1   1
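The CDF just derived can be written as a small function and checked at the tabulated points. A minimal sketch; the function name `F` is ours.

```python
def F(u):
    """CDF of the pdf f(x) = x/2 on (0, 2): F(u) = u^2 / 4 there."""
    if u < 0:
        return 0.0
    if u > 2:
        return 1.0
    return u * u / 4

print(F(-1), F(0), F(1), F(2), F(3))  # 0.0 0.0 0.25 1.0 1.0
```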
solution

2a = 1
⇒ a = 1/2

∫₀^1 (1/2)x dx + ∫₁^{1.5} (1/2) dx
= (1/2)(x²/2) |₀^1 + (1/2) x |₁^{1.5}
= 1/4 + (1/2)(3/2 − 1)
= 1/4 + 1/4 = 0.5
Exercise

Which of the following functions can represent a pdf?

1. f(x) = 2 − x , 1 ≤ x ≤ 2
        = 0 , elsewhere

2. f(x) = 2 − x , 0 ≤ x ≤ 3
        = 0 , elsewhere

3. f(x) = x² , 1 ≤ x ≤ 2
        = (1/3)(2 − x) , 1 ≤ x ≤ 2
        = x − 2 , 1 ≤ x ≤ 2
        = 0 , elsewhere
MEASURES OF CENTRAL TENDENCY

MEDIAN

1. Find the median and the 25th percentile of the following pdf:

f(x) = 3(1 − x)² , 0 < x < 1
     = 0 , elsewhere

Solution

∫₀^m f(x) dx = ∫_m^1 f(x) dx = 1/2

3 ∫₀^m (1 − x)² dx = 0.5
let (1 − x) = z
−dx = dz ⇒ dx = −dz

−3 ∫₁^{1−m} z² dz = 0.5
−z³ |₁^{1−m} = 0.5
1 − (1 − m)³ = 0.5
(1 − m)³ = 0.5
1 − m = 0.5^(1/3)
⇒ m = 1 − 0.5^(1/3) ≈ 0.206
For the 25th percentile p:

∫₀^p f(x) dx = 0.25
3 ∫₀^p (1 − x)² dx = 0.25

let (1 − x) = z, −dx = dz ⇒ dx = −dz

−3 ∫₁^{1−p} z² dz = 0.25
−z³ |₁^{1−p} = 0.25
1 − (1 − p)³ = 0.25
(1 − p)³ = 0.75
1 − p = 0.75^(1/3)
⇒ p = 1 − 0.75^(1/3) ≈ 0.091
∫₀^m f(x) dx = ∫_m^1 f(x) dx = 1/2

F_X(x) = 0.25 , 0 ≤ x ≤ 1
       = 0.4375 , 1 ≤ x ≤ 2
       = 0.5781 , 2 ≤ x ≤ 3
MODE

Solution

(a) f(x) = 0.5x²e^(−x)

By the product rule, d(uv)/dx = u dv/dx + v du/dx. At a maximum f′(x) = 0:

0.5[2xe^(−x) − x²e^(−x)] = 0
2xe^(−x) = x²e^(−x)
x = 2

Checking: f″(2) = 0.5(−2e^(−2)) < 0, so x = 2 is the mode.
f(x) = cx , 0 ≤ x ≤ 1
     = c(2 − x) , 1 ≤ x ≤ 2
     = 0 , elsewhere

Determine the value of c. Find the cdf, the median and the mode of f(x).

Solution

∫_{−∞}^{∞} f(x) dx = 1
∫₀^1 cx dx + ∫₁^2 c(2 − x) dx = 1
(cx²/2) |₀^1 + [2cx − cx²/2]₁^2 = 1
c/2 + 4c − 2c − 2c + c/2 = 1
c = 1

Therefore f(x) = x , 0 ≤ x ≤ 1
              = 2 − x , 1 ≤ x ≤ 2
              = 0 , elsewhere
The cdf

F(x) = 0 for x < 0, so F(0) = 0.

For 0 ≤ x ≤ 1:

F(x) = ∫ x dx = x²/2 + c

Since F(0) = 0 ⇒ (1/2)(0) + c = 0 ⇒ c = 0
⇒ F(x) = (1/2)x² , 0 ≤ x ≤ 1
⇒ F(1) = 1/2
4. The pdf of a random variable is

f(x) = (3/64) x²(4 − x) , 0 ≤ x < 4
     = 0 , elsewhere

Determine the mode.

Solution

At a maximum f′(x) = 0:

⇒ (3/64)[2x(4 − x) − x²] = 0
⇒ (3/64)[8x − 2x² − x²] = 0
⇒ (3/64)[8x − 3x²] = 0
⇒ (3/64) x[8 − 3x] = 0
x = 0 or x = 8/3
Next

f″(x) = (3/64)(−3)x + (3/64)(8 − 3x)
      = −9x/64 + (3/64)(8 − 3x)

At x = 0: f″(0) = 24/64 > 0
⇒ x = 0 gives a minimum.

At x = 8/3:

f″(8/3) = −(9/64)(8/3) + (3/64)(8 − 8) = −3/8 < 0

Hence x = 8/3 gives the mode.
F(x) = ∫ x² dx = x³/3 + c₁

Since F(0) = 0 ⇒ (1/3)(0³) + c₁ = 0 ⇒ c₁ = 0
⇒ F(x) = x³/3
Now F(1) = 1/3
Solve

(2/3)x − (1/6)x² − 1/6 = 1/2
⇒ 4x − x² − 1 = 3
x² − 4x + 4 = 0
x² − 2x − 2x + 4 = 0
x(x − 2) − 2(x − 2) = 0
(x − 2)² = 0
⇒ x = 2

The median is 2.
Expectation gives the average value of a random variable and hence is regarded as the mean of X.

DEFINITION

Let X be a random variable. The expectation (mean) of X, denoted by E(X), is given by

E(X) = Σ_{all x} x P(X = x) (discrete) or E(X) = ∫_{−∞}^{∞} x f(x) dx (continuous).

Examples
1. Let the random variable X = number of points on a die. We need the probability distribution of X, i.e.

xᵢ:          1    2    3    4    5    6
p(X = xᵢ):   1/6  1/6  1/6  1/6  1/6  1/6

By definition

E(X) = Σ_{all x} x p(X = x)
     = (1/6)(1) + (1/6)(2) + (1/6)(3) + (1/6)(4) + (1/6)(5) + (1/6)(6)
     = 7/2

Alternatively, using the arithmetic-series sum sₙ = n(a + l)/2 with first term a = 1, last term l = 6 and n = 6:

E(X) = (1/6) · 6(1 + 6)/2 = 7/2
2. If instead p(X = x) = x/21 for x = 1, 2, . . . , 6, then

E(X) = 1(1/21) + 2(2/21) + 3(3/21) + · · · + 6(6/21)
     = 91/21 = 13/3
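The die expectations above can be checked with exact rational arithmetic. A small sketch using the standard library's `Fraction`:

```python
from fractions import Fraction

# Fair die: p(x) = 1/6 for x = 1..6.
fair = {x: Fraction(1, 6) for x in range(1, 7)}
E_fair = sum(x * p for x, p in fair.items())

# Loaded die with p(x) = x/21.
loaded = {x: Fraction(x, 21) for x in range(1, 7)}
E_loaded = sum(x * p for x, p in loaded.items())

print(E_fair, E_loaded)  # 7/2 13/3
```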
E(X) = ∫_{−∞}^{∞} x f(x) dx
     = ∫₀^3 x · (1/9)(3 − x)(1 + x) dx
     = (1/9)[∫₀^3 3x dx + ∫₀^3 3x² dx − ∫₀^3 x² dx − ∫₀^3 x³ dx]
     = (1/9)[(3/2)x² |₀^3 + x³ |₀^3 − (x³/3) |₀^3 − (x⁴/4) |₀^3]
     = (1/9)(27/2 + 27 − 9 − 81/4)
     = 3/2 + 3 − 1 − 9/4
     = 5/4
4. A bowl contains 10 chips, of which 8 are marked USD 2 each and 2 are marked USD 5 each. A person chooses, at random and without replacement, 3 chips from this bowl. If the person is to receive the sum of the resulting amounts, find his expectation.

SOLUTION

Let E be the event of picking a chip marked USD 2 and let T be the event of picking a chip marked USD 5. Let X = the random variable representing the sum.

Probability distribution:

x:        6      9      12     15
P(X=x):   0.467  0.467  0.067  0

E(X) = Σ_{all x} x p(X = x) = 7.8
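The chip example can be verified by brute-force enumeration of all equally likely draws, which reproduces both the distribution's support and E(X) = 7.8:

```python
from itertools import combinations
from fractions import Fraction

# 10 chips: 8 marked $2 and 2 marked $5; draw 3 without replacement.
chips = [2] * 8 + [5] * 2
draws = list(combinations(chips, 3))     # all C(10,3) = 120 equally likely draws

E = Fraction(sum(sum(d) for d in draws), len(draws))
print(float(E))  # 7.8
```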
EXERCISE

Find E(X). (SOLUTION: E(X) = 2)

PROPERTIES OF EXPECTATION

2. Let g(x) and h(x) be two functions of X. Then for any constants a and b,

E[ag(x) ± bh(x)] = aE[g(x)] ± bE[h(x)]

Proof

By definition

E[ag(x) ± bh(x)] = ∫_{−∞}^{∞} (ag(x) ± bh(x)) f(x) dx
= ∫_{−∞}^{∞} ag(x)f(x) dx ± ∫_{−∞}^{∞} bh(x)f(x) dx
= a ∫_{−∞}^{∞} g(x)f(x) dx ± b ∫_{−∞}^{∞} h(x)f(x) dx
= aE[g(x)] ± bE[h(x)]

EXAMPLE
1. The random variable X has pmf

P(X = x) = x/10 , x = 1, 2, 3, 4
         = 0 , elsewhere

Compute E(5X³ − 2X²).

SOLUTION

5E(X³) = 5[1(1/10) + 8(2/10) + 27(3/10) + 64(4/10)]
       = 5[0.1 + 1.6 + 8.1 + 25.6]
       = 5(35.4)
       = 177

Next

2E[X²] = 2[1(1/10) + 4(2/10) + 9(3/10) + 16(4/10)]
       = 2[0.1 + 0.8 + 2.7 + 6.4]
       = 2(10) = 20

∴ 5E[X³] − 2E[X²] = 177 − 20 = 157
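The same computation can be done directly from the pmf in a few lines, confirming the value 157:

```python
from fractions import Fraction

pmf = {x: Fraction(x, 10) for x in range(1, 5)}   # P(X = x) = x/10, x = 1..4

E_X3 = sum(x ** 3 * p for x, p in pmf.items())    # E(X^3)
E_X2 = sum(x ** 2 * p for x, p in pmf.items())    # E(X^2)
result = 5 * E_X3 - 2 * E_X2

print(result)  # 157
```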
I.e. the expectation of the square of the deviations of the values of X from its mean.

More precisely, if X is a discrete random variable then

var(X) = Σ_{all x} [x − E(X)]² · P(X = x)

NB

var(X) = E[X²] − (E[X])²

Remarks
var(X) = σ² = E(X²) − [E(X)]²
            = E(X²) − µ²

but (X − µ)² = X² − 2µX + µ², so

σ² = E(X² − 2µX + µ²)
   = E(X²) − 2µE(X) + µ²
   = E(X²) − 2µ² + µ²
   = E(X²) − µ²

Remarks
var(g(x)) = E[g(x) − E(g(x))]²

Proof (for g(x) = ax):

var(ax) = E[(ax)²] − [aE(x)]²
        = a²E(x²) − a²[E(x)]²
        = a² var(x)

Similarly,

var(ax + b) = E[(ax + b)²] − [E(ax + b)]²
            = E[a²x² + 2abx + b²] − [aE(x) + b]²
            = a²E(x²) − a²[E(x)]²
            = a² var(x)
EXAMPLES

1. Find the variance of X where f(x) = (x + 3)/18, −3 < x < 3.

E[X] = ∫_{−∞}^{∞} x f(x) dx
⇒ E[X] = ∫_{−3}^{3} x (x + 3)/18 dx
= (1/18) ∫_{−3}^{3} (x² + 3x) dx
= (1/18)[x³/3 |_{−3}^{3} + (3x²/2) |_{−3}^{3}]
= (1/18)[9 + 9 + 27/2 − 27/2]
= 1

Next

E(X²) = ∫_{−∞}^{∞} x² f(x) dx
= ∫_{−3}^{3} x² (x + 3)/18 dx
= (1/18) ∫_{−3}^{3} (x³ + 3x²) dx
= (1/18)[x⁴/4 |_{−3}^{3} + x³ |_{−3}^{3}]
= (1/18)[81/4 − 81/4 + 27 + 27]
= (1/18)(27 + 27)
= 54/18 = 3

∴ var(X) = E(X²) − (E[X])² = 3 − 1 = 2
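Both moments can be approximated numerically as a cross-check; a midpoint Riemann sum is enough here (the helper `integrate` is ours):

```python
def integrate(g, lo, hi, n=200_000):
    """Midpoint Riemann sum of g over [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: (x + 3) / 18            # pdf on (-3, 3)

mean = integrate(lambda x: x * f(x), -3, 3)
second = integrate(lambda x: x * x * f(x), -3, 3)
var = second - mean ** 2

print(round(mean, 4), round(second, 4), round(var, 4))  # 1.0 3.0 2.0
```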
2. Two fair dice are thrown; the sample is (X₁, X₂). Let Y = |X₁ − X₂|. The values of Y over the 36 outcomes are:

           X₁: 1  2  3  4  5  6
X₂ = 1:        0  1  2  3  4  5
X₂ = 2:        1  0  1  2  3  4
X₂ = 3:        2  1  0  1  2  3
X₂ = 4:        3  2  1  0  1  2
X₂ = 5:        4  3  2  1  0  1
X₂ = 6:        5  4  3  2  1  0

The possible values of Y are 0, 1, 2, 3, 4, 5.

Probability distribution:

Y:        0     1      2     3     4     5
P(Y=y):   6/36  10/36  8/36  6/36  4/36  2/36

Now var(Y) = E(Y²) − [E(Y)]². But

E(Y) = Σ_{all y} y P(Y = y)
     = 0(6/36) + 1(10/36) + 2(8/36) + 3(6/36) + 4(4/36) + 5(2/36)
     = (1/36)(70)
     = 1.94

Next

E(Y²) = Σ_{all y} y² P(Y = y)
      = 0(6/36) + 1(10/36) + 4(8/36) + 9(6/36) + 16(4/36) + 25(2/36)
      = 210/36
      = 35/6

So

var(Y) = 35/6 − (1.94)²
       = 2.05
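Enumerating the 36 outcomes reproduces the whole table and the variance exactly:

```python
from fractions import Fraction
from itertools import product

# Y = |X1 - X2| for two independent fair dice; enumerate the 36 outcomes.
outcomes = [abs(a - b) for a, b in product(range(1, 7), repeat=2)]

E_Y  = Fraction(sum(outcomes), 36)
E_Y2 = Fraction(sum(y * y for y in outcomes), 36)
var  = E_Y2 - E_Y ** 2

print(E_Y, E_Y2, float(var))  # 35/18 35/6 ~2.05
```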
MOMENTS

These higher order moments are computed about the zero origin, i.e. E[X^r] = E[(X − 0)^r]. Therefore they are referred to as raw moments about the origin, or uncorrected moments.

Moments can also be computed about another value, e.g. about the mean of the probability distribution. In this case they are referred to as central moments.

DEFINITION

If X is a random variable, the rth central moment about the value a is defined as µ_r = E[(X − a)^r]; taking a = µ (the mean) gives the corrected (central) moments.

Remarks

The first central moment is always zero:

µ₁ = E(X − µ) = E(X) − µ = µ − µ = 0

The second central moment is the variance, i.e.

µ₂ = E[(x − µ)²]
   = E(x²) − 2µE(x) + µ²
   = E(x²) − 2µ² + µ²
   = E(x²) − µ²
   = E(x²) − [E(x)]²
   = var(x)
(a) The rth raw moment (moment about the origin) is defined as

µ′_r = E(x^r) = Σ_{all x} x^r P(X = x)

(b) The rth central moment (moment about the mean) is defined as

µ_r = Σ_{all x} (x − µ)^r P(X = x)

Let X be a continuous random variable with pdf f(x). Then

µ′_r = E(x^r) = ∫_{−∞}^{∞} x^r f(x) dx

µ_r = ∫_{−∞}^{∞} (x − µ)^r f(x) dx
By definition

µ₂ = E[(x − µ)²] = E[(x − µ′₁)²]
µ₂ = E[x² − 2µ′₁x + (µ′₁)²]
   = µ′₂ − (µ′₁)²

µ₃ = E[(x − µ)³] = E[(x − µ′₁)³]

EXERCISE

Express µ₃ and µ₄ = E[(x − µ′₁)⁴] in terms of the raw moments.
FACTORIAL MOMENTS

DEFINITION

If X is a random variable, the rth factorial moment of X is defined as

µ_F = E[X(X − 1)(X − 2) . . . (X − r + 1)]

For discrete random variables, the factorial moments are easier to compute than the raw moments. Note that the 1st factorial moment is always the mean E(X).

EXAMPLE

1. A die is thrown once. The possible outcomes are 1, 2, 3, 4, 5, 6, which are at unit intervals. Find the 1st, 2nd and 3rd factorial moments.

SOLUTION

Now

x:               1    2    3    4    5    6
P(X = x):        1/6  1/6  1/6  1/6  1/6  1/6
X − 1:           0    1    2    3    4    5
X(X − 1):        0    2    6    12   20   30
X − 2:           −1   0    1    2    3    4
X(X − 1)(X − 2): 0    0    6    24   60   120

Therefore

(a) 1st factorial moment:

E(X) = Σ_{all x} x P(X = x)
= 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6)
= 7/2

(b) 2nd factorial moment:

E[X(X − 1)] = Σ_{all x} x(x − 1) P(X = x)
= 0(1/6) + 2(1/6) + 6(1/6) + 12(1/6) + 20(1/6) + 30(1/6)
= 70/6

(c) 3rd factorial moment:

E[X(X − 1)(X − 2)] = Σ_{all x} x(x − 1)(x − 2) P(X = x)
= 0(1/6) + 0(1/6) + 6(1/6) + 24(1/6) + 60(1/6) + 120(1/6)
= 210/6
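The three factorial moments of the die can be computed in one pass over the pmf:

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}

m1 = sum(x * p for x, p in pmf.items())                       # E[X]
m2 = sum(x * (x - 1) * p for x, p in pmf.items())             # E[X(X-1)]
m3 = sum(x * (x - 1) * (x - 2) * p for x, p in pmf.items())   # E[X(X-1)(X-2)]

print(m1, m2, m3)  # 7/2 35/3 35
```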
MOMENT GENERATING FUNCTIONS

If X is a discrete random variable with pmf P(X = x), then

m_x(t) = E(e^{tx}) = Σ_{all x} e^{tx} P(X = x)    (1)

If X is a continuous random variable with pdf f(x),

m_x(t) = E(e^{tx}) = ∫_{−∞}^{∞} e^{tx} f(x) dx    (2)

Note that the mgf in (1) and (2) exists if the sum and the integral, respectively, are finite.

Now, in the discrete case,

e^{tx} = 1 + tx + t²x²/2! + t³x³/3! + . . .

⇒ m_x(t) = Σ_{all x} e^{tx} P(X = x)
= Σ_{all x} [1 + tx + t²x²/2! + t³x³/3! + . . . ] P(X = x)
= Σ 1·P(X = x) + Σ tx P(X = x) + Σ (t²x²/2!) P(X = x) + Σ (t³x³/3!) P(X = x) + . . .

Differentiating term by term, every term that still carries a factor of t vanishes at t = 0:

m′_x(t) = Σ x P(X = x) + Σ tx² P(X = x) + Σ (t²x³/2!) P(X = x) + . . .
m′_x(0) = Σ_{all x} x P(X = x) = E(x) = µ′₁

m″_x(t) = Σ x² P(X = x) + Σ tx³ P(X = x) + . . .
m″_x(0) = E(X²) = µ′₂

In general

µ′_r = E(X^r) = m_x^{(r)}(t)|_{t=0}

The rth raw moment is obtained by differentiating the mgf r times with respect to t and setting t = 0. Note that the results can also be derived for continuous random variables.
In the continuous case,

e^{tx} = 1 + tx + t²x²/2! + t³x³/3! + . . .

⇒ m_x(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx
= ∫_{−∞}^{∞} [1 + tx + t²x²/2! + t³x³/3! + . . . ] f(x) dx
= ∫ f(x) dx + t ∫ x f(x) dx + (t²/2!) ∫ x² f(x) dx + (t³/3!) ∫ x³ f(x) dx + . . .
= 1 + tµ′₁ + (t²/2!)µ′₂ + (t³/3!)µ′₃ + (t⁴/4!)µ′₄ + . . .

m′_x(t) = ∫ x f(x) dx + t ∫ x² f(x) dx + . . .    (1)
m′_x(0) = ∫_{−∞}^{∞} x f(x) dx = E(X) = µ′₁

m″_x(t) = ∫ x² f(x) dx + t ∫ x³ f(x) dx + . . .
m″_x(0) = ∫_{−∞}^{∞} x² f(x) dx = E(X²) = µ′₂
(a) Derive the expected value of X and the variance of X from the mgf.

(b) Verify the results by computing the above quantities directly from the definition.

SOLUTION

(a) By definition, for f(x) = λe^{−λx}, x > 0:

m_x(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx
       = ∫₀^∞ e^{tx} λe^{−λx} dx
       = ∫₀^∞ λe^{(t−λ)x} dx
       = ∫₀^∞ λe^{−(λ−t)x} dx
       = [λ/(−(λ − t))] e^{−(λ−t)x} |₀^∞
       = [λ/(−(λ − t))] (e^{−∞} − e⁰)
       = [λ/(−(λ − t))] (0 − 1)
       = λ/(λ − t) , t < λ
       = λ(λ − t)^{−1}

Then

m′_x(t) = λ(λ − t)^{−2}
E(X) = m′_x(0) = λ/λ² = 1/λ
For the variance,

var(x) = E(x²) − (E(x))² = m″_x(t)|₀ − (m′_x(t)|₀)²

m″_x(t) = (d/dt) λ(λ − t)^{−2} = 2λ(λ − t)^{−3}
E(x²) = m″_x(0) = 2λ/λ³ = 2/λ²
var(x) = 2/λ² − (1/λ)² = 1/λ²

(b) We need to verify that E(x) = ∫₀^∞ xλe^{−λx} dx = 1/λ and
var(x) = ∫₀^∞ x²λe^{−λx} dx − (∫₀^∞ xλe^{−λx} dx)² = 1/λ².

E(x) = λ ∫₀^∞ x e^{−λx} dx

Using integration by parts, ∫ u dv = uv − ∫ v du. Let u = x and dv = e^{−λx} dx,
so du = dx and v = −e^{−λx}/λ:

E(x) = λ [−xe^{−λx}/λ |₀^∞ + ∫₀^∞ (e^{−λx}/λ) dx]
     = λ [−xe^{−λx}/λ − e^{−λx}/λ²]₀^∞
     = [−xe^{−λx}]₀^∞ − [e^{−λx}/λ]₀^∞
     = 0 − (1/λ)(0 − 1)
     = 1/λ
Next

var(x) = E(x²) − (1/λ)²

but E(X²) = ∫₀^∞ x² f(x) dx = ∫₀^∞ x² λe^{−λx} dx = λ ∫₀^∞ x² e^{−λx} dx

Integrating by parts again:

E(x²) = λ [−x²e^{−λx}/λ + (2/λ) ∫ x e^{−λx} dx]₀^∞
      = [−x²e^{−λx}]₀^∞ + 2[−xe^{−λx}/λ − e^{−λx}/λ²]₀^∞
      = 0 + 0 + (2/λ²)[−e^{−λx}]₀^∞
      = 0 + 0 + (2/λ²)(0 + 1)
      = 2/λ²

∴ var(x) = E(x²) − (E(x))²
         = 2/λ² − (1/λ)²
         = 1/λ²
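Both exponential moments can be cross-checked numerically for a concrete rate. A sketch, with λ = 2 as a hypothetical value and the upper limit of integration truncated (the tail is negligible):

```python
import math

lam = 2.0                                    # hypothetical rate parameter

def integrate(g, lo, hi, n=200_000):
    """Midpoint Riemann sum of g over [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: lam * math.exp(-lam * x)       # exponential pdf on (0, inf)

mean = integrate(lambda x: x * f(x), 0, 40 / lam)
second = integrate(lambda x: x * x * f(x), 0, 40 / lam)
var = second - mean ** 2

print(round(mean, 4), round(var, 4))  # ~1/lam and ~1/lam^2
```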
THEORETICAL DISTRIBUTIONS

These are theoretical distributions which are not obtained by actual observations or experiments. They are derived mathematically on the basis of certain assumptions. These distributions are broadly classified into two categories: discrete and continuous.
BERNOULLI DISTRIBUTION
46
This simple distribution is completely dened and is characterized by a
singe parameter P where P=p(success)
=⇒ pr(X = 1) = p
P (X = 0) = (1 − p) = q
µ0x = E[X r ]
X1
= xr p(X = x)
x=0
= 0 + 1r p1 (1 − p)1−1
= p
47
MOMENT GENERATING FUNCTION

m_x(t) = E[e^{tx}] = Σ_{x=0}^{1} e^{tx} p(X = x)
       = Σ_{x=0}^{1} e^{tx} p^x (1 − p)^{1−x}
       = e⁰(1 − p) + e^t p
       = (1 − p) + pe^t

BINOMIAL DISTRIBUTION

Let a trial result in success with a constant probability p and in failure with probability (1 − p) = q. Then the probability of x successes in n independent trials is given by

p(X = x) = C(n, x) p^x q^{n−x} , x = 0, 1, 2, . . . , n
         = 0 , elsewhere

which is a Binomial distribution with parameters n and p, i.e. X ∼ Bin(n, p).
EXAMPLE

(a) p(X = 2) = [10!/(8!2!)] (0.1)² (0.9)⁸
            = [(10 × 9)/2] (0.1)² (0.9)⁸
            = 0.1937

(b) p(X ≥ 2) = p(X = 2) + p(X = 3) + · · · + p(X = 10)
            = 1 − p(X < 2)
            = 1 − [C(10, 0)(0.1)⁰(0.9)^{10} + C(10, 1)(0.1)¹(0.9)⁹]
            = 1 − [0.3487 + 0.3874]
            = 0.264
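Both binomial probabilities above (n = 10, p = 0.1) can be reproduced directly from the pmf:

```python
from math import comb

n, p = 10, 0.1

def binom_pmf(x):
    """Binomial pmf: C(n, x) p^x q^(n-x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

p2 = binom_pmf(2)                         # exactly 2 successes
p_ge2 = 1 - binom_pmf(0) - binom_pmf(1)   # at least 2 successes

print(round(p2, 4), round(p_ge2, 4))  # 0.1937 0.2639
```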
EXERCISE

m_x(t) = E(e^{tx}) = Σ_{x=0}^{n} e^{tx} C(n, x) p^x q^{n−x}
       = Σ_{x=0}^{n} C(n, x) (pe^t)^x q^{n−x}
       = (pe^t + q)^n

Now,

m′_x(t) = n(pe^t + q)^{n−1} pe^t
        = npe^t (pe^t + q)^{n−1}

E(X) = m′_x(0) = np(p + q)^{n−1} = np · 1^{n−1} = np

var(X) = E(X²) − [E(X)]² = E(X²) − n²p²

m″_x(t) = npe^t (pe^t + q)^{n−1} + n(n − 1)(pe^t + q)^{n−2} (pe^t)²

E(X²) = m″_x(0) = np + n(n − 1)p² = np + n²p² − np²

∴ var(X) = E(X²) − n²p²
         = np + n²p² − np² − n²p²
         = np − np²
         = np(1 − p)
         = npq
NEGATIVE BINOMIAL DISTRIBUTION

Let X be the number of failures in a sample of size n. In this case sampling stops after exactly k successes, and the last draw must be a success. The number of ways this can happen is C(n−1, k−1).

One of these ways could be pp . . . p (k − 1 successes) and (1 − p)(1 − p) . . . (1 − p) (n − k failures), followed by a final success.

Therefore the probability of obtaining k − 1 successes in n − 1 trials, then a success, is

f(k) = C(n−1, k−1) p^{k−1} (1 − p)^{n−k} · p
     = C(n−1, k−1) p^k (1 − p)^{n−k} , n = k, k + 1, k + 2, . . .
     = 0 , elsewhere

The mgf of the number of failures is

m_x(t) = E(e^{tx}) = Σ_{n=k}^{∞} C(n−1, k−1) p^k q^{n−k} e^{t(n−k)}    where q = 1 − p

The failures preceding each of the k successes form k independent geometric counts, each with mgf p/(1 − qe^t), so

m_x(t) = p^k [1 + qe^t + (qe^t)² + . . . ]^k
       = p^k / (1 − qe^t)^k
       = [p/(1 − qe^t)]^k , qe^t < 1
MEAN(X)

m′_x(t) = p^k (−k)(1 − qe^t)^{−k−1}(−qe^t)
        = kp^k qe^t (1 − qe^t)^{−k−1}

E(X) = m′_x(0) = kp^k q p^{−k−1} = kq/p

VAR(X)

var(X) = E(X²) − [E(X)]²
       = E(X²) − k²q²/p²

E(X²) = m″_x(0), where

m″_x(t) = kp^k qe^t (1 − qe^t)^{−k−1} + kp^k qe^t (−k − 1)(1 − qe^t)^{−k−2}(−qe^t)

E(X²) = kq/p + k²q²/p² + kq²/p²

var(X) = E(X²) − k²q²/p²
       = kq/p + k²q²/p² + kq²/p² − k²q²/p²
       = kq/p + kq²/p²
       = kq(p + q)/p²
       = kq/p²
Setting k = 1 gives the geometric distribution. The mgf of this distribution is

m_x(t) = p/(1 − qe^t)

m′_x(t) = (−1)p(1 − qe^t)^{−2}(−qe^t) = pqe^t(1 − qe^t)^{−2}

E[X] = m′_x(0) = pq(p)^{−2} = q/p

E(X²) = m″_x(0), where

m″_x(t) = pqe^t(1 − qe^t)^{−2} + (−2)pqe^t(1 − qe^t)^{−3}(−qe^t)
        = pqe^t(1 − qe^t)^{−2} + 2pq²e^{2t}(1 − qe^t)^{−3}

E(X²) = q/p + 2q²/p²

Now

VAR(X) = E(X²) − [E(X)]²
       = q/p + 2q²/p² − (q/p)²
       = q/p + 2q²/p² − q²/p²
       = (qp + 2q² − q²)/p²
       = (qp + q²)/p²
       = q(p + q)/p²
       = q/p²

Note that p + q = 1.
POISSON DISTRIBUTION

Let m = average number of successes per given time span. Then, for a binomial random variable X with p = m/n,

p(X = x) = [n!/((n−x)! x!)] (m/n)^x (1 − m/n)^{n−x}

lim_{n→∞} p(X = x) = lim_{n→∞} [n!/((n−x)! x!)] (m/n)^x (1 − m/n)^{n−x}
= lim_{n→∞} [n!/((n−x)! x!)] (m^x/n^x) (1 − m/n)^n (1 − m/n)^{−x}
= (m^x/x!) lim_{n→∞} [n!/((n−x)! n^x)] × lim_{n→∞} (1 − m/n)^n × lim_{n→∞} (1 − m/n)^{−x}

But

lim_{n→∞} n!/((n−x)! n^x) = lim_{n→∞} [n(n−1)(n−2) . . . (n−x+1)]/(n · n · · · n)
= lim_{n→∞} (1 − 1/n)(1 − 2/n) . . . (1 − (x−1)/n)
= 1 × 1 × · · · × 1 = 1

lim_{n→∞} (1 − m/n)^n = e^{−m} ,
since lim_{n→∞} (1 + 1/n)^n = e and lim_{n→∞} (1 + a/n)^n = e^a,

and lim_{n→∞} (1 − m/n)^{−x} = 1^{−x} = 1

⇒ lim_{n→∞} p(X = x) = (m^x/x!) e^{−m}

Writing λ for m,

p(X = x) = λ^x e^{−λ}/x! , x = 0, 1, 2, . . .
         = 0 , elsewhere
THE MGF OF A POISSON DISTRIBUTION

m_x(t) = E(e^{tx}) = Σ_{x=0}^{∞} e^{tx} λ^x e^{−λ}/x!
= e^{−λ} Σ_{x=0}^{∞} e^{tx} λ^x/x!
= e^{−λ} Σ_{x=0}^{∞} (λe^t)^x/x!
= e^{−λ} [1 + λe^t + (λe^t)²/2! + (λe^t)³/3! + . . . ]
= e^{−λ} e^{λe^t}
= e^{λ(e^t − 1)}

MEAN(X)

m′_x(t) = λe^t e^{λ(e^t − 1)}
E(X) = m′_x(0) = λ

VAR(X)

E(X²) = m″_x(0), where

m″_x(t) = λe^t e^{λ(e^t − 1)} + λe^t · λe^t e^{λ(e^t − 1)}

E(X²) = m″_x(0) = λ + λ²

var(X) = E(X²) − [E(X)]²
= λ + λ² − λ²
var(X) = λ
EXAMPLE

SOLUTION

X ∼ P₀(0.6)

p(X = x) = e^{−0.6}(0.6)^x/x! , x = 0, 1, 2, . . .

p(X = 0) = e^{−0.6}(0.6)⁰/0! = 0.55

Next, X₁ ∼ P₀(4 × 0.6), i.e. X₁ ∼ P₀(2.4):

p(X₁ = x) = e^{−2.4}(2.4)^x/x! , x = 0, 1, 2, . . .

p(X₁ ≥ 3) = 1 − [e^{−2.4}(2.4)⁰/0! + e^{−2.4}(2.4)¹/1! + e^{−2.4}(2.4)²/2!]
= 1 − e^{−2.4} × 6.28
= 1 − 0.5697
= 0.430
2. A certain hospital usually admits 50 patients per day. On average, 3 patients in 100 require rooms provided with special facilities on a certain day. It is found that there are 3 such rooms available. Assuming that 50 patients will be admitted, find the probability that more than 3 patients will require such special rooms.

SOLUTION

p = 3/100 = 0.03
n = 50
λ = np = 0.03 × 50 = 1.5

Let the random variable X be the number of patients that require rooms with special facilities:

X ∼ P₀(1.5)

p(X = x) = e^{−1.5}(1.5)^x/x! , x = 0, 1, 2, . . .

p(X > 3) = 1 − [p(X = 0) + p(X = 1) + p(X = 2) + p(X = 3)]
= 1 − e^{−1.5}[1 + 1.5 + (1.5)²/2! + (1.5)³/3!]
= 1 − (0.2231 × 4.1875)
= 1 − 0.934
= 0.066

3. An insurance company attends 50 clients per day. On average, 3 in 100 require special services. On a certain day it is found that there are 3 special service providers available. Assuming that 50 clients will be attended to, find the probability that more than 3 clients require special services.

λ = np
  = 500(0.002)
  = 1

p(X = 2) = e^{−1}(1)²/2!
         = 0.184
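The hospital computation (λ = 1.5, more than 3 patients) can be checked by summing the Poisson pmf:

```python
import math

lam = 1.5   # expected number needing a special room (np = 50 * 0.03)

def pois_pmf(x):
    """Poisson pmf: e^-lam * lam^x / x!."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

p_gt3 = 1 - sum(pois_pmf(x) for x in range(4))
print(round(p_gt3, 3))  # 0.066
```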
10. SOLUTION

X ∼ P₀(4)

p(X = x) = e^{−4}(4)^x/x! , x = 0, 1, 2, . . .

p(X = 0) = e^{−4}(4)⁰/0! = e^{−4} ≈ 0.018

Next, X₁ ∼ P₀(3 × 4) = P₀(12):

p(X₁ = x) = e^{−12}(12)^x/x! , x = 0, 1, 2, . . .

p(X₁ < 2) = p(X₁ = 0) + p(X₁ = 1)
= e^{−12}(1 + 12)
= 0.000079

Then X₂ ∼ P₀((1/2) × 4) = P₀(2):

p(X₂ = x) = e^{−2}(2)^x/x! , x = 0, 1, 2, . . .

p(X₂ ≥ 3) = 1 − [e^{−2}(2)⁰/0! + e^{−2}(2)¹/1! + e^{−2}(2)²/2!]
= 1 − e^{−2}(1 + 2 + 2)
= 0.3233
HYPERGEOMETRIC DISTRIBUTION

If a population of N items contains k items of one type and N − k of another, and n items are drawn without replacement, then

p(X = x) = C(k, x) C(N−k, n−x) / C(N, n)

MEAN(X)

E[X] = Σ_{all x} x P(X = x)
= Σ_{x=0}^{n} x C(k, x)C(N−k, n−x)/C(N, n)
= (nk/N) Σ_{x=1}^{n} C(k−1, x−1)C(N−k, n−x)/C(N−1, n−1)

let x − 1 = y
⇒ n − x = n − 1 − y
⇒ x = y + 1

E[X] = (nk/N) Σ_{y=0}^{n−1} C(k−1, y)C(N−1−(k−1), n−1−y)/C(N−1, n−1)

The sum is a hypergeometric pmf summed over all its values, hence equals 1, so

E[X] = nk/N

VAR(X)

var(X) = E(X²) − [E(X)]²

but E[X(X − 1)] = Σ_{x=0}^{n} x(x − 1) C(k, x)C(N−k, n−x)/C(N, n)

let x − 2 = y
⇒ n − x = n − 2 − y
⇒ x = y + 2

E[X(X − 1)] = [n(n−1)k(k−1)/(N(N−1))] Σ_{y=0}^{n−2} C(k−2, y)C(N−2−(k−2), n−2−y)/C(N−2, n−2)
= n(n−1)k(k−1)/(N(N−1))

var(X) = E[X(X − 1)] + E[X] − [E(X)]²
= n(n−1)k(k−1)/(N(N−1)) + nk/N − n²k²/N²
= (nk/N)[(n−1)(k−1)/(N−1) + 1 − nk/N]
= nk(N − k)(N − n)/(N²(N − 1))

NB: The mgf of the hypergeometric distribution is not useful.
EXAMPLES

1. A box of 20 spare parts for a certain type of machine contains 15 good items and 5 defective items. If 4 parts are selected by chance from the box, what is the probability that exactly 3 of them will be good?

SOLUTION

Let the random variable X = number of good items. Using the hypergeometric distribution

p(X = x) = C(k, x)C(N−k, n−x)/C(N, n)

N = 20, k = 15, N − k = 5, n = 4, x = 3, n − x = 1

p(X = 3) = C(15, 3) C(5, 1) / C(20, 4)
= [15!/(12!3!)] × [5!/(4!1!)] / [20!/(16!4!)]
= [(15)(14)(13)/((3)(2)(1))] × 5 × [(4)(3)(2)(1)/((20)(19)(18)(17))]
= (5)(7)(13)/((19)(3)(17))
= 455/969 = 0.4696
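The spare-parts example is a one-liner with `math.comb`:

```python
from math import comb

N, k, n = 20, 15, 4            # 20 parts, 15 good, draw 4

def hyper_pmf(x):
    """Hypergeometric pmf: C(k,x) C(N-k,n-x) / C(N,n)."""
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

p3 = hyper_pmf(3)              # exactly 3 good parts
print(round(p3, 4))  # 0.4696
```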
2. Let the random variable X be the number of individuals with high blood pressure.

p(X = x) = C(k, x)C(N−k, n−x)/C(N, n)

N = 100, k = 10, N − k = 90, n = 8

p(X ≤ 2) = Σ_{x=0}^{2} C(10, x)C(90, 8−x)/C(100, 8)
= C(10,0)C(90,8)/C(100,8) + C(10,1)C(90,7)/C(100,8) + C(10,2)C(90,6)/C(100,8)
= 0.97
3.

p(X = x) = C(k, x)C(N−k, n−x)/C(N, n)

N = 16, k = 10, N − k = 6, n = 10, x = 6, n − x = 4

p(X = 6) = C(10, 6) C(6, 4) / C(16, 10)
= [(10)(9)(8)(7)/((4)(3)(2)(1))] × [(6 × 5)/(2 × 1)] / C(16, 10)
= 0.393
4.

p(X = x) = C(k, x)C(N−k, n−x)/C(N, n)

N = 20, k = 6, N − k = 14, n = 10

p(X ≥ 3) = 1 − Σ_{x=0}^{2} C(6, x)C(14, 10−x)/C(20, 10)
= 0.686
UNIFORM DISTRIBUTION

If the random variable X is uniformly distributed over [a, b], i.e. f(x) = 1/(b − a) on [a, b], then

E[X] = ∫_{−∞}^{∞} x f(x) dx
= ∫_a^b x · [1/(b − a)] dx
= [1/(b − a)] ∫_a^b x dx
= [1/(b − a)] (x²/2) |_a^b
= [1/(b − a)] (b² − a²)/2
= (a + b)/2
VAR(X)

var(X) = E(X²) − (E[X])²
       = E(X²) − [(a + b)/2]²

E(X²) = ∫_a^b x² [1/(b − a)] dx
= [1/(b − a)] (x³/3) |_a^b
= [1/(b − a)] (b³ − a³)/3
= (b³ − a³)/(3(b − a))

∴ VAR(X) = (b³ − a³)/(3(b − a)) − [(a + b)/2]²
= (b³ − a³)/(3(b − a)) − (a + b)²/4
= [4(b³ − a³) − 3(a + b)²(b − a)] / (12(b − a))
= [4(b² + a² + ab)(b − a) − 3(a + b)²(b − a)] / (12(b − a))
= [4(b² + a² + ab) − 3(a + b)²] / 12
= [4b² + 4a² + 4ab − 3(a² + b² + 2ab)] / 12
= (a² + b² − 2ab)/12

∴ VAR(X) = (b − a)²/12
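The closed forms (a + b)/2 and (b − a)²/12 can be sanity-checked by simulation for a concrete interval. The endpoints a = 2, b = 5 are hypothetical values for illustration:

```python
import random

a, b = 2.0, 5.0                          # hypothetical interval endpoints

mean = (a + b) / 2                       # closed-form mean
var = (b - a) ** 2 / 12                  # closed-form variance

# Monte Carlo cross-check of the closed forms.
random.seed(0)
xs = [random.uniform(a, b) for _ in range(200_000)]
mc_mean = sum(xs) / len(xs)
mc_var = sum((x - mc_mean) ** 2 for x in xs) / len(xs)

print(round(mean, 3), round(var, 3))  # 3.5 0.75
```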
MGF OF X

m_x(t) = E(e^{tx})
= ∫_a^b e^{tx} [1/(b − a)] dx
= [1/(b − a)] ∫_a^b e^{tx} dx
= [1/(b − a)] (e^{tx}/t) |_a^b
= [1/(b − a)] (e^{bt} − e^{at})/t
= (e^{bt} − e^{at})/(t(b − a))
If α > 1, an integration by parts shows that

Γα = (α − 1) ∫₀^∞ y^{α−2} e^{−y} dy
   = (α − 1) Γ(α − 1)

Note that B(α, β) = B(β, α):

B(α, β) = ∫₀^1 y^{α−1}(1 − y)^{β−1} dy

let 1 − y = u
du = −dy
y = 1 − u

B(α, β) = −∫₁^0 (1 − u)^{α−1} u^{β−1} du
        = ∫₀^1 u^{β−1}(1 − u)^{α−1} du = B(β, α)

The beta function is related to the gamma function according to the formula

B(α, β) = ΓαΓβ / Γ(α + β)
m_x(t) = E(e^{tx})
= ∫₀^∞ e^{tx} [1/(Γα β^α)] x^{α−1} e^{−x/β} dx
= ∫₀^∞ [1/(Γα β^α)] x^{α−1} e^{tx} e^{−x/β} dx
= ∫₀^∞ [1/(Γα β^α)] x^{α−1} e^{−x(1−βt)/β} dx

let y = x(1 − βt)/β , t < 1/β
⇒ dy = [(1 − βt)/β] dx
⇒ dx = β dy/(1 − βt) , and x = βy/(1 − βt)

∴ m_x(t) = ∫₀^∞ [1/(Γα β^α)] [βy/(1 − βt)]^{α−1} e^{−y} [β/(1 − βt)] dy
= [1/(1 − βt)]^α ∫₀^∞ (1/Γα) y^{α−1} e^{−y} dy

but ∫₀^∞ (1/Γα) y^{α−1} e^{−y} dy = 1 (a pdf)

= [1/(1 − βt)]^α , t < 1/β
= (1 − βt)^{−α}

MEAN(X):

m′_x(t) = αβ(1 − βt)^{−α−1}
E(X) = m′_x(0) = αβ
VAR(X)

var(X) = E(X²) − (E[X])²
       = E(X²) − α²β²

E(X²) = m″_x(0) = α(α + 1)β²

∴ VAR(X) = α²β² + αβ² − α²β²
         = αβ²
Exponential distribution is also a special case of the gamma distribution, with α = 1 and β = θ = 1/λ:

f(x) = (1/θ) e^{−x/θ} , 0 < x < ∞
     = 0 , elsewhere

or

f(x) = λe^{−λx} , 0 < x < ∞
     = 0 , elsewhere

The mgf of an exponential distribution is

1/(1 − θt) or λ/(λ − t)

E(X) = 1/λ or θ
VAR(X) = 1/λ² or θ²
BETA DISTRIBUTION

f(x) = [Γ(α + β)/(ΓαΓβ)] x^{α−1}(1 − x)^{β−1} , 0 ≤ x ≤ 1, α > 0, β > 0
     = 0 , elsewhere

The mgf of a beta distribution is not useful, but we can as well get the mean and variance of the distribution.

E(X) = ∫₀^1 x f(x) dx
= ∫₀^1 x · x^{α−1}(1 − x)^{β−1}/B(α, β) dx
= ∫₀^1 x^α (1 − x)^{β−1}/B(α, β) dx

but B(α, β) = ∫₀^1 x^{α−1}(1 − x)^{β−1} dx, so

∫₀^1 x^α (1 − x)^{β−1} dx = B(α + 1, β)

⇒ E(X) = B(α + 1, β)/B(α, β)
= [Γ(α + 1)Γβ / Γ(α + β + 1)] × [Γ(α + β)/(ΓαΓβ)]
= [αΓαΓβ / ((α + β)Γ(α + β))] × [Γ(α + β)/(ΓαΓβ)]
= α/(α + β)
Next VAR(X):

E(X²) = ∫₀^1 x² x^{α−1}(1 − x)^{β−1}/B(α, β) dx
= ∫₀^1 x^{α+2−1}(1 − x)^{β−1}/B(α, β) dx
= B(α + 2, β)/B(α, β)
= [Γ(α + 2)Γβ / Γ(α + β + 2)] × [Γ(α + β)/(ΓαΓβ)]
= [(α + 1)Γ(α + 1)Γβ × Γ(α + β)] / [(α + β + 1)Γ(α + β + 1)ΓαΓβ]
= [(α + 1)αΓαΓβ × Γ(α + β)] / [(α + β + 1)(α + β)Γ(α + β)ΓαΓβ]
= (α + 1)α / [(α + β + 1)(α + β)]

E[X²] − (E[X])² = (α + 1)α/[(α + β + 1)(α + β)] − [α/(α + β)]²
= [(α² + α)(α + β) − α²(α + β + 1)] / [(α + β + 1)(α + β)²]
= [α³ + α²β + α² + αβ − α³ − α²β − α²] / [(α + β + 1)(α + β)²]
= αβ / [(α + β + 1)(α + β)²]
0.4 NORMAL DISTRIBUTION

It is not easy to evaluate the above integral, and so we make use of the standard normal variable; i.e. if µ = 0 and σ = 1 the density function for X becomes

f(x) = (1/√(2π)) e^{−x²/2} , −∞ < x < ∞
     = 0 , elsewhere

In this situation the random variable is referred to as a standard normal variable and its distribution is referred to as the standard normal distribution, i.e. X ∼ N(0, 1).

To simplify the computation of probabilities involving the normal variable we first of all standardize the random variable, i.e. we make it possess the standard normal density. This is done by the following transformation:

z = (X − µ)/σ

In this case z is a standardized variable. Note that if X ∼ N(µ, σ²) then the standardized variable z = (X − µ)/σ ∼ N(0, 1).

E(Z) = E[(X − µ)/σ] = [E(X) − E(µ)]/σ = µ/σ − µ/σ = 0

VAR(Z) = var[(X − µ)/σ] = var(X/σ) − var(µ/σ) = var(X)/σ² = σ²/σ² = 1

Note var(µ/σ) = variance of a constant = 0.

If z ∼ N(0, 1) then

p(Z < z) = ∫_{−∞}^{z} (1/√(2π)) e^{−z²/2} dz
p(x ≤ b) can be evaluated by

p(x ≤ b) = p((x − µ)/σ ≤ (b − µ)/σ)
⇒ p(x ≤ b) = p(Z < a*)    (1)

where a* = (b − µ)/σ.

This probability on the RHS of (1) can be read from tabulated values of the CDF of the standard normal distribution.
SOLUTION

(a) If we sketch the curve and indicate the region specified by the probability: because of symmetry, p(z < 0) = p(z > 0) = 0.5.

(b) p(−1 < z < 1)
= p(z < 1) − p(z < −1)
= 0.8413 − (1 − 0.8413)
= 0.6826

(c) p(z > 2.54)
= 1 − 0.9945
= 0.0055

(d)
= 0.7486 − 0.0918
= 0.6568
3. If X ∼ N(10, σ²) and p(X > 12) = 0.1587, find p(9 < X < 11).

SOLUTION

p(X > 12) ≡ p(Z > (12 − 10)/σ) = 0.1587

From the tables, the value of Z which corresponds to probability 0.1587 is 1 (i.e. 1 − 0.1587 = 0.8413, which corresponds to z = 1 from the tables). Hence (12 − 10)/σ = 1, so σ = 2.

Next p(9 < X < 11) = p(−0.5 < z < 0.5) = 0.6915 − 0.3085 = 0.383
(c) The job will take between 45 and 60 min.

SOLUTION

= 1 − 0.9772
= 0.0228

p(45 ≤ X ≤ 60) ≡ p((45 − 55)/10 ≤ z ≤ (60 − 55)/10)
= p(−1 ≤ z ≤ 0.5)
= 0.6915 − 0.1587
= 0.5328

EXERCISE

2y − 1 = 0.95
y = 0.9750
The value of z that corresponds to the probability 0.9750 is 1.96.
∴ if p(z < b) = 0.975, then b = 1.96.
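All of the tabulated values used in these examples can be reproduced in code: the standard normal CDF Φ is expressible through the error function, which the standard library provides.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(phi(1) - phi(-1), 4))   # ~0.6827, i.e. p(-1 < z < 1)
print(round(1 - phi(2.54), 4))      # ~0.0055, i.e. p(z > 2.54)
print(round(phi(1.96), 4))          # ~0.975, matching b = 1.96
```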
PROPERTIES OF NORMAL DISTRIBUTION

1. The normal density curve is bell-shaped and symmetric about the value x = µ, since f(x) satisfies f(µ + a) = f(µ − a)
⇒ p(X < µ) = p(X > µ) = 0.5
µ is the median of the density.

Note that the normal distribution is widely used in statistics despite the fact that populations hardly follow the exact normal distribution. This is because:

1. If a variable does not follow a normal distribution, it can be made to follow one after making suitable transformations.
THE MGF OF THE NORMAL DISTRIBUTION

m_x(t) = ∫_{−∞}^{∞} e^{tx} [1/(σ√(2π))] e^{−(x−µ)²/(2σ²)} dx
= ∫_{−∞}^{∞} [1/(σ√(2π))] e^{−(x−µ)²/(2σ²) + tx} dx    (1)

but

−(x − µ)²/(2σ²) + tx = −[x² + µ² − 2µx − 2σ²tx]/(2σ²)
= −[x² − 2x(µ + σ²t) + µ²]/(2σ²)
= −[x² − 2x(µ + σ²t)]/(2σ²) − µ²/(2σ²)    (2)

We complete the square of x² − 2x(µ + σ²t):

x² − 2x(µ + σ²t) = [x − (µ + σ²t)]² − (µ + σ²t)²

From (2):

−(1/(2σ²))[x − (µ + σ²t)]² + (µ + σ²t)²/(2σ²) − µ²/(2σ²)
= −(1/(2σ²))[x − (µ + σ²t)]² + [µ² + 2µσ²t + σ⁴t² − µ²]/(2σ²)
= −(1/(2σ²))[x − (µ + σ²t)]² + µt + σ²t²/2

From (1):

m_x(t) = ∫_{−∞}^{∞} [1/(σ√(2π))] e^{−(1/(2σ²))[x − (µ + σ²t)]² + µt + σ²t²/2} dx
= e^{µt + σ²t²/2} ∫_{−∞}^{∞} [1/(σ√(2π))] e^{−(1/(2σ²))[x − (µ + σ²t)]²} dx

but ∫_{−∞}^{∞} [1/(σ√(2π))] e^{−(1/(2σ²))[x − (µ + σ²t)]²} dx = 1 ,

since the integrand is a normal density with mean µ + σ²t and variance σ².

∴ m_x(t) = e^{µt + σ²t²/2}

Note that if we replace µ with 0 and σ² with 1 we get the mgf of the standard normal distribution.
i.e.

m_z(t) = e^{0 + t²/2} = e^{t²/2}

E(X) = m′_x(0) = µ

E[X²] = m″_x(0), where

m″_x(t) = σ² e^{µt + σ²t²/2} + (µ + σ²t)(µ + σ²t) e^{µt + σ²t²/2}
m″_x(0) = σ² + µ²

var(X) = σ² + µ² − µ² = σ²
i.e.

m_Z(t) = E(e^{tz})
= ∫_{−∞}^{∞} e^{tz} (1/√(2π)) e^{−z²/2} dz
= ∫_{−∞}^{∞} (1/√(2π)) e^{−(z² − 2tz)/2} dz
= ∫_{−∞}^{∞} (1/√(2π)) e^{−(z − t)²/2 + t²/2} dz
= e^{t²/2} ∫_{−∞}^{∞} (1/√(2π)) e^{−(z − t)²/2} dz
= e^{t²/2}

Since z ∼ N(0, 1) we can also replace µ and σ with 0 and 1 respectively in (1):

⇒ m_z(t) = e^{0 + t²/2} = e^{t²/2}

MEAN(Z)

E(z) = m′_z(0)
m′_z(t) = te^{t²/2}
m′_z(0) = 0

VAR(Z)

var(z) = E(z²) − (E[z])²
E(z²) = m″_z(0)

but m″_z(t) = e^{t²/2} + t²e^{t²/2}, so m″_z(0) = 1

∴ var(z) = 1 − 0² = 1
HYPOTHESIS TESTING

Tests of hypotheses play an important role in industry.

A hypothesis is an assertion or conjecture about the parameters of population distributions.

We have two hypotheses: the null hypothesis H₀ and the alternative hypothesis H_A.

LEVEL OF SIGNIFICANCE: the quantity of risk of the type I error which we are ready to tolerate in making a decision about H₀. It is conventionally chosen as 5% or 1%:

5% is moderate precision
1% is high precision
STUDENT'S T DISTRIBUTION

We have discussed the normal distribution; we need µ and σ to define it.

z = (X̄ − µ)/(σ/√n) is a normal variate with mean 0 and variance 1, i.e. z ∼ N(0, 1).

In practice σ is not known, and in such a case the only option is to use the sample estimate S of the standard deviation.

√n(X̄ − µ)/s is approximately normal if n is large.

If n is not large then √n(X̄ − µ)/s is distributed as t.
It tends to the normal as k increases.

T-TEST

µ₀ is an assumed value considered to fit µ.

t_{n−1} = (X̄ − µ₀)/(s/√n)
        = √n (X̄ − µ₀)/s
        = (X̄ − µ₀) √[n(n − 1)/Σ(X − X̄)²]
1. The life expectancy of people in the year 1970 in Brazil was expected to be 50 years. A survey was conducted in 11 regions of Brazil and the following data obtained. Do the data confirm the expected view?

Life expectancy (yrs): 54.2, 50.4, 44.2, 49.7, 55.4, 57.0, 58.2, 56.6, 61.9, 57.5, 53.4

SOLUTION

We wish to test

H₀ : µ = 50 vs H_A : µ ≠ 50

The test statistic:

t_{n−1} = √n (X̄ − µ₀)/s

X̄ = (54.2 + 50.4 + . . . + 53.4)/11
  = 598.5/11 = 54.41

s² = [1/(n − 1)] Σ(X − X̄)²
   = (1/10)(32799.91 − 11 × 54.41²)
   = 23.607

s = √23.607 = 4.859

t = √11 (54.41 − 50)/4.859
  = 3.01
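The Brazil t statistic can be recomputed from the raw data in a few lines, confirming X̄ = 54.41, s² = 23.607 and t = 3.01:

```python
import math

data = [54.2, 50.4, 44.2, 49.7, 55.4, 57.0, 58.2, 56.6, 61.9, 57.5, 53.4]
mu0 = 50.0
n = len(data)

xbar = sum(data) / n
s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)   # sample variance
t = math.sqrt(n) * (xbar - mu0) / math.sqrt(s2)     # one-sample t statistic

print(round(xbar, 2), round(s2, 3), round(t, 2))  # 54.41 23.607 3.01
```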
2.

36.3 37.0 36.6 37.5 37.5 37.9 37.9 36.9 36.7
38.5 37.9 38.8 37.5 37.1 37.0 36.8 36.7 35.7

Check the researcher's claim. Use the 1/100 level of significance.

SOLUTION

We wish to test

H₀ : µ = 40 vs H_A : µ < 40

The test statistic:

t_{n−1} = √n (X̄ − µ₀)/s

X̄ = (36.3 + 37.0 + 36.6 + . . . + 36.7 + 35.7)/18
  = 669.7/18 = 37.206

s² = [1/(n − 1)][ΣXᵢ² − (ΣXᵢ)²/n]
   = [1/17] Σ(X − X̄)²
   = 0.633

s = √0.633 = 0.796

t = √18 (37.206 − 40)/0.796
  = −14.89