Chapter 5


Section 5.1 Continuous Random Variables: Introduction


Not all random variables are discrete. For example:
1. Waiting times for anything (train, arrival of customer, production of mRNA molecule from
gene, etc).
2. Distance a ball is thrown.
3. Size of an antenna on a bug.
The general idea is that the sample space is now uncountable, so probability mass functions and summation no longer work.
DEFINITION: We say that X is a continuous random variable if the sample space is uncountable and there exists a nonnegative function f, defined for all $x \in (-\infty, \infty)$, having the property that for any set $B \subseteq \mathbb{R}$,
\[ P\{X \in B\} = \int_B f(x)\,dx \]
The function f is called the probability density function of the random variable X, and is (sort of) the analogue of the probability mass function in the discrete case.
So probabilities are now found by integration, rather than summation.
REMARK: We must have that
\[ 1 = P\{-\infty < X < \infty\} = \int_{-\infty}^{\infty} f(x)\,dx \]
Also, taking B = [a, b] for a < b we have
\[ P\{a \le X \le b\} = \int_a^b f(x)\,dx \]
Note that taking a = b yields the moderately counter-intuitive
\[ P\{X = a\} = \int_a^a f(x)\,dx = 0 \]
Recall: For discrete random variables, the probability mass function can be reconstructed from the distribution function and vice versa, and jumps in the distribution function showed where the probability mass was located.
We now have that
\[ F(t) = P\{X \le t\} = \int_{-\infty}^{t} f(x)\,dx \]
and so
\[ F'(t) = f(t), \]
agreeing with the previous interpretation. Also,
\[ P(X \in (a, b)) = P(X \in [a, b]) = \text{etc.} = \int_a^b f(t)\,dt = F(b) - F(a) \]
The density function does not itself represent a probability. However, its integral gives the probability of X lying in a given region of $\mathbb{R}$. Also, f(a) gives a measure of the likelihood that X is near a. That is,
\[ P\{a - \varepsilon/2 < X < a + \varepsilon/2\} = \int_{a-\varepsilon/2}^{a+\varepsilon/2} f(t)\,dt \approx \varepsilon f(a) \]
when $\varepsilon$ is small and f is continuous at a. Thus, the probability that X will be in an interval around a of size $\varepsilon$ is approximately $\varepsilon f(a)$. So, if f(a) < f(b), then
\[ P\{a - \varepsilon/2 < X < a + \varepsilon/2\} \approx \varepsilon f(a) < \varepsilon f(b) \approx P\{b - \varepsilon/2 < X < b + \varepsilon/2\} \]
In other words, f(a) is a measure of how likely it is that the random variable takes a value near a.
EXAMPLES:
1. The amount of time you must wait, in minutes, for the appearance of an mRNA molecule is
a continuous random variable with density
\[ f(t) = \begin{cases} \lambda e^{-3t}, & t \ge 0 \\ 0, & t < 0 \end{cases} \]
What is the probability that
1. You will have to wait between 1 and 2 minutes?
2. You will have to wait longer than 1/2 minutes?
Solution: First, we haven't been given $\lambda$. We need
\[ 1 = \int_{-\infty}^{\infty} f(t)\,dt = \int_0^{\infty} \lambda e^{-3t}\,dt = -\frac{\lambda}{3} e^{-3t}\Big|_{t=0}^{\infty} = \frac{\lambda}{3} \]
Therefore, $\lambda = 3$, and $f(t) = 3e^{-3t}$ for $t \ge 0$. Thus,
\[ P\{1 < X < 2\} = \int_1^2 3e^{-3t}\,dt = e^{-3\cdot 1} - e^{-3\cdot 2} \approx 0.0473 \]
\[ P\{X > 1/2\} = \int_{1/2}^{\infty} 3e^{-3t}\,dt = e^{-3/2} \approx 0.2231 \]
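As a quick numerical sanity check (a sketch added here, not part of the original notes, assuming NumPy and SciPy are available), the normalizing constant and both probabilities can be reproduced by numerical integration:

```python
# Sketch: verify the normalization and the two probabilities for f(t) = 3 e^{-3t}, t >= 0.
import numpy as np
from scipy.integrate import quad

f = lambda t: 3.0 * np.exp(-3.0 * t)

total, _ = quad(f, 0, np.inf)        # should be ~1.0
p_1_2, _ = quad(f, 1, 2)             # P{1 < X < 2} ~ 0.0473
p_gt_half, _ = quad(f, 0.5, np.inf)  # P{X > 1/2}   ~ 0.2231

print(total, p_1_2, p_gt_half)
```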
2. If X is a continuous random variable with distribution function $F_X$ and density $f_X$, find the density function of Y = kX.
Solution: We'll do this two ways (as in the book), assuming k > 0. We have
\[ F_Y(t) = P\{Y \le t\} = P\{kX \le t\} = P\{X \le t/k\} = F_X(t/k) \]
Differentiating with respect to t yields
\[ f_Y(t) = \frac{d}{dt} F_X(t/k) = \frac{1}{k} f_X(t/k) \]
Another derivation is the following. We have
\[ \varepsilon f_Y(a) \approx P\{a - \varepsilon/2 \le Y \le a + \varepsilon/2\} = P\{a - \varepsilon/2 \le kX \le a + \varepsilon/2\} = P\{a/k - \varepsilon/(2k) \le X \le a/k + \varepsilon/(2k)\} \approx \frac{\varepsilon}{k} f_X(a/k) \]
Dividing by $\varepsilon$ yields the same result as before.
Returning to our previous example with $f_X(t) = 3e^{-3t}$: if Y = 3X, then
\[ f_Y(t) = \frac{1}{3} f_X(t/3) = e^{-t}, \qquad t \ge 0. \]
What if Y = kX + b? We have
\[ F_Y(t) = P\{Y \le t\} = P\{kX + b \le t\} = P\{X \le (t-b)/k\} = F_X\!\left( \frac{t-b}{k} \right) \]
Differentiating with respect to t yields
\[ f_Y(t) = \frac{d}{dt} F_X\!\left( \frac{t-b}{k} \right) = \frac{1}{k} f_X\!\left( \frac{t-b}{k} \right) \]
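The change-of-variables formula can be checked empirically. The following sketch (my addition, assuming k > 0 and using the Exp(3) density from the example above) compares a crude density estimate of Y = kX + b against $(1/k) f_X((t-b)/k)$:

```python
# Sketch: Monte Carlo check of f_Y(t) = (1/k) f_X((t - b)/k) for Y = kX + b, k > 0,
# with f_X(t) = 3 exp(-3t), i.e. X ~ Exp(3).
import numpy as np

rng = np.random.default_rng(0)
k, b = 2.0, 1.0
x = rng.exponential(scale=1/3, size=200_000)   # samples with density 3 e^{-3t}
y = k * x + b

f_X = lambda s: 3 * np.exp(-3 * s)
t = np.linspace(b + 0.2, b + 2.0, 5)
formula = (1 / k) * f_X((t - b) / k)

h = 0.05  # small window for a crude local density estimate
empirical = np.array([np.mean((y > ti - h/2) & (y < ti + h/2)) / h for ti in t])
print(np.round(formula, 3))
print(np.round(empirical, 3))   # should be close to the formula values
```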
Section 5.2 Expectation and Variance of Continuous
Random Variables
DEFINITION: If X is a random variable with density function f, then the expected value is
\[ E[X] = \int_{-\infty}^{\infty} x f(x)\,dx \]
This is analogous to the discrete case:
1. Discretize X into small ranges $(x_{i-1}, x_i]$ where $x_i - x_{i-1} = h$ is small.
2. Now think of X as discrete with the values $x_i$.
3. Then,
\[ E[X] \approx \sum_i x_i\, p(x_i) = \sum_i x_i\, P(x_{i-1} < X \le x_i) \approx \sum_i x_i f(x_i)\, h \approx \int_{-\infty}^{\infty} x f(x)\,dx \]
EXAMPLES:
1. Suppose that X has the density function
\[ f(x) = \frac{1}{2}x, \qquad 0 \le x \le 2 \]
Find E[X].
Solution: We have
\[ E[X] = \int_0^2 x \cdot \frac{1}{2}x\,dx = \frac{1}{6}x^3 \Big|_0^2 = \frac{8}{6} = \frac{4}{3} \]
2. Suppose that $f_X(x) = 1$ for $x \in (0, 1)$. Find $E[e^X]$.
Solution: Let $Y = e^X$. We need to find the density of Y; then we can use the definition of expected value.
Because the range of X is (0, 1), the range of Y is (1, e). For $1 \le x \le e$ we have
\[ F_Y(x) = P\{Y \le x\} = P\{e^X \le x\} = P\{X \le \ln x\} = F_X(\ln x) = \int_0^{\ln x} f_X(t)\,dt = \ln x \]
Differentiating both sides, we get $f_Y(x) = 1/x$ for $1 \le x \le e$ and zero otherwise. Thus,
\[ E[e^X] = E[Y] = \int_{-\infty}^{\infty} x f_Y(x)\,dx = \int_1^e dx = e - 1 \]
As in the discrete case, there is a theorem making these computations much easier.
THEOREM: Let X be a continuous random variable with density function f. Then for any $g : \mathbb{R} \to \mathbb{R}$ we have
\[ E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx \]
Note how easy this makes the previous example:
\[ E[e^X] = \int_0^1 e^x\,dx = e - 1 \]
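A small numerical illustration (added here, not in the original notes): the direct integral from the theorem, the integral against the density of Y, and a Monte Carlo average all agree with $e - 1$:

```python
# Sketch: compute E[e^X] for X ~ Uniform(0,1) three ways and compare with e - 1.
import numpy as np
from scipy.integrate import quad

lotus, _ = quad(lambda x: np.exp(x) * 1.0, 0, 1)            # E[g(X)] = int g(x) f(x) dx, f = 1 on (0,1)
via_density_of_Y, _ = quad(lambda y: y * (1 / y), 1, np.e)  # int y f_Y(y) dy with f_Y(y) = 1/y
mc = np.exp(np.random.default_rng(1).uniform(0, 1, 100_000)).mean()

print(lotus, via_density_of_Y, mc, np.e - 1)   # all ~1.71828
```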
To prove the theorem in the special case that $g(x) \ge 0$ we need the following:
LEMMA: For a nonnegative random variable Y,
\[ E[Y] = \int_0^{\infty} P\{Y > y\}\,dy \]
Proof: We have
\[ \int_0^{\infty} P\{Y > y\}\,dy = \int_0^{\infty} \int_y^{\infty} f_Y(x)\,dx\,dy = \int_0^{\infty} \int_0^x f_Y(x)\,dy\,dx = \int_0^{\infty} f_Y(x) \int_0^x dy\,dx = \int_0^{\infty} x f_Y(x)\,dx = E[Y] \]
Proof of Theorem: For any function $g : \mathbb{R} \to \mathbb{R}_{\ge 0}$ (the general case is similar) we have
\[ E[g(X)] = \int_0^{\infty} P\{g(X) > y\}\,dy = \int_0^{\infty} \left( \int_{x:\, g(x) > y} f(x)\,dx \right) dy = \iint_{\{(x,y):\, g(x) > y \ge 0\}} f(x)\,dx\,dy \]
\[ = \int_{x:\, g(x) > 0} f(x) \int_0^{g(x)} dy\,dx = \int_{x:\, g(x) > 0} f(x) g(x)\,dx = \int_{-\infty}^{\infty} f(x) g(x)\,dx \]
Of course, it immediately follows from the above theorem that for any constants a and b we have
\[ E[aX + b] = aE[X] + b \]
In fact, putting g(x) = ax + b in the previous theorem, we get
\[ E[aX + b] = \int_{-\infty}^{\infty} (ax + b) f_X(x)\,dx = a \int_{-\infty}^{\infty} x f_X(x)\,dx + b \int_{-\infty}^{\infty} f_X(x)\,dx = aE[X] + b \]
Thus, expected values inherit their linearity from the linearity of the integral.
DEFINITION: If X is a random variable with mean $\mu$, then the variance and standard deviation are given by
\[ \mathrm{Var}(X) = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx, \qquad \sigma_X = \sqrt{\mathrm{Var}(X)} \]
We also still have
\[ \mathrm{Var}(X) = E[X^2] - (E[X])^2, \qquad \mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X), \qquad \sigma_{aX+b} = |a|\,\sigma_X \]
The proofs are exactly the same as in the discrete case.
EXAMPLE: Consider again X with the density function
\[ f(x) = \frac{1}{2}x, \qquad 0 \le x \le 2 \]
Find Var(X).
Solution: Recall that we have $E[X] = 4/3$. We now find the second moment:
\[ E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\,dx = \int_0^2 x^2 \cdot \frac{1}{2}x\,dx = \frac{1}{8}x^4 \Big|_0^2 = \frac{16}{8} = 2 \]
Therefore,
\[ \mathrm{Var}(X) = E[X^2] - (E[X])^2 = 2 - (4/3)^2 = 18/9 - 16/9 = 2/9 \]
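A quick numerical check of these moments (a sketch added here; note that $18/9 - 16/9 = 2/9$, not the $1/3$ that appears in some printings of these notes):

```python
# Sketch: verify E[X] = 4/3, E[X^2] = 2 and Var(X) = 2/9 for f(x) = x/2 on [0, 2].
from scipy.integrate import quad

f = lambda x: 0.5 * x
m1, _ = quad(lambda x: x * f(x), 0, 2)       # first moment  ~1.3333
m2, _ = quad(lambda x: x**2 * f(x), 0, 2)    # second moment ~2.0
print(m1, m2, m2 - m1**2)                    # variance ~0.2222 (= 2/9)
```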
Section 5.3 The Uniform Random Variable
We first consider the interval (0, 1) and say a random variable is uniformly distributed over (0, 1) if its density function is
\[ f(x) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{else} \end{cases} \]
Note that
\[ \int_{-\infty}^{\infty} f(x)\,dx = \int_0^1 dx = 1 \]
Also, as f(x) = 0 outside of (0, 1), X only takes values in (0, 1).
As f is constant, X is equally likely to be near each point of (0, 1). That is, each small subinterval of length $\varepsilon$ is equally likely to contain X (and has probability $\varepsilon$ of doing so).
Note that for any $0 \le a \le b \le 1$ we have
\[ P\{a \le X \le b\} = \int_a^b f(x)\,dx = b - a \]
General Case: Now consider an interval (a, b). We say that X is uniformly distributed on (a, b) if
\[ f(t) = \begin{cases} \dfrac{1}{b-a}, & a < t < b \\ 0, & \text{else} \end{cases} \qquad \text{and} \qquad F(t) = \begin{cases} 0, & t < a \\ \dfrac{t-a}{b-a}, & a \le t < b \\ 1, & t \ge b \end{cases} \]
We can compute the expected value and variance straightaway:
\[ E[X] = \int_a^b x \cdot \frac{1}{b-a}\,dx = \frac{1}{b-a} \cdot \frac{1}{2}(b^2 - a^2) = \frac{b+a}{2} \]
Similarly,
\[ E[X^2] = \int_a^b x^2 \cdot \frac{1}{b-a}\,dx = \frac{b^3 - a^3}{3(b-a)} = \frac{b^2 + ab + a^2}{3} \]
Therefore,
\[ \mathrm{Var}(X) = E[X^2] - (E[X])^2 = \frac{b^2 + ab + a^2}{3} - \left( \frac{b+a}{2} \right)^2 = \frac{(b-a)^2}{12} \]
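These formulas are easy to spot-check by simulation; a minimal sketch (my addition) with a = -1 and b = 7, the interval used in the next example:

```python
# Sketch: compare sample mean/variance of Uniform(a, b) draws with (a+b)/2 and (b-a)^2/12.
import numpy as np

a, b = -1.0, 7.0
u = np.random.default_rng(2).uniform(a, b, 500_000)
print(u.mean(), (a + b) / 2)        # ~3.0   vs 3.0
print(u.var(), (b - a) ** 2 / 12)   # ~5.33  vs 5.333...
```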
EXAMPLE: Consider a uniform random variable with density
\[ f(x) = \begin{cases} \dfrac{1}{8}, & x \in (-1, 7) \\ 0, & \text{else} \end{cases} \]
Find P{X < 2}.
Solution: The range of X is $(-1, 7)$. Thus, we have
\[ P\{X < 2\} = \int_{-\infty}^{2} f(x)\,dx = \int_{-1}^{2} \frac{1}{8}\,dx = \frac{3}{8} \]
Section 5.4 Normal Random Variables
We say that X is a normal random variable, or simply that X is normal, with parameters $\mu$ and $\sigma^2$ if the density is given by
\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)} \]
This density is a bell-shaped curve and is symmetric about $\mu$.
It should NOT be apparent that this is a density. We need that it integrates to one. Making the substitution $y = (x - \mu)/\sigma$, we have
\[ \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-y^2/2}\,dy \]
We will show that the integral is equal to $\sqrt{2\pi}$. To do so, we actually compute the square of the integral. Define
\[ I = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy \]
Then,
\[ I^2 = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy \]
Now we switch to polar coordinates. That is,
\[ x = r\cos\theta, \qquad y = r\sin\theta, \qquad dx\,dy = r\,d\theta\,dr \]
Thus,
\[ I^2 = \int_0^{\infty}\!\!\int_0^{2\pi} e^{-r^2/2}\, r\,d\theta\,dr = \int_0^{\infty} e^{-r^2/2}\, r \int_0^{2\pi} d\theta\,dr = \int_0^{\infty} 2\pi e^{-r^2/2}\, r\,dr = -2\pi e^{-r^2/2}\Big|_{r=0}^{\infty} = 2\pi \]
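So $I = \sqrt{2\pi} \approx 2.5066$, which can be confirmed numerically (a sketch added here, not in the original notes):

```python
# Sketch: numerically confirm that the integral of e^{-y^2/2} over the real line is sqrt(2*pi).
import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda y: np.exp(-y**2 / 2), -np.inf, np.inf)
print(I, np.sqrt(2 * np.pi))   # both ~2.50663
```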
Shifting and scaling a normal random variable gives you another normal random variable.
THEOREM: Let X be Normal($\mu, \sigma^2$). Then Y = aX + b is a normal random variable with parameters $a\mu + b$ and $a^2\sigma^2$.
Proof: We let a > 0 (the proof for a < 0 is the same). We first consider the distribution function of Y. We have
\[ F_Y(t) = P\{Y \le t\} = P\{aX + b \le t\} = P\{X \le (t-b)/a\} = F_X\!\left( \frac{t-b}{a} \right) \]
Differentiating both sides, we get
\[ f_Y(t) = \frac{1}{a} f_X\!\left( \frac{t-b}{a} \right) = \frac{1}{a\sigma\sqrt{2\pi}} \exp\left\{ -\frac{\left( \frac{t-b}{a} - \mu \right)^2}{2\sigma^2} \right\} = \frac{1}{a\sigma\sqrt{2\pi}} \exp\left\{ -\frac{(t - b - a\mu)^2}{2(a\sigma)^2} \right\} \]
which is the density of a normal random variable with parameters $a\mu + b$ and $a^2\sigma^2$.
From the above theorem it follows that if X is Normal($\mu, \sigma^2$), then
\[ Z = \frac{X - \mu}{\sigma} \]
is normally distributed with parameters 0 and 1. Such a random variable is called a standard normal random variable. Note that its density is simply
\[ f_Z(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \]
By the above scaling, we see that we can compute the mean and variance of all normal random variables once we can compute them for the standard normal. In fact, we have
\[ E[Z] = \int_{-\infty}^{\infty} x f_Z(x)\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x e^{-x^2/2}\,dx = -\frac{1}{\sqrt{2\pi}} e^{-x^2/2}\Big|_{-\infty}^{\infty} = 0 \]
and
\[ \mathrm{Var}(Z) = E[Z^2] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x^2 e^{-x^2/2}\,dx \]
Integration by parts with $u = x$ and $dv = x e^{-x^2/2}\,dx$ yields
\[ \mathrm{Var}(Z) = 1 \]
Therefore, we see that if $X = \mu + \sigma Z$, then X is a normal random variable with parameters $\mu, \sigma^2$ and
\[ E[X] = \mu, \qquad \mathrm{Var}(X) = \sigma^2\,\mathrm{Var}(Z) = \sigma^2 \]
NOTATION: Typically, we denote $\Phi(x) = P\{Z \le x\}$ for a standard normal Z. That is,
\[ \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-y^2/2}\,dy \]
Note that by the symmetry of the density we have
\[ \Phi(-x) = 1 - \Phi(x) \]
Therefore, it is customary to provide tables with the values of $\Phi(x)$ for $x \in (0, 3.5)$ in increments of 0.01. (Page 201 in the book.)
Note that if X ~ Normal($\mu, \sigma^2$), then $\Phi$ can still be used:
\[ F_X(a) = P\{X \le a\} = P\{(X - \mu)/\sigma \le (a - \mu)/\sigma\} = \Phi\!\left( \frac{a - \mu}{\sigma} \right) \]
EXAMPLE: Let X ~ Normal(3, 9), i.e. $\mu = 3$ and $\sigma^2 = 9$. Find P{2 < X < 5}.
Solution: We have, where Z is a standard normal,
\[ P\{2 < X < 5\} = P\!\left\{ \frac{2-3}{3} < Z < \frac{5-3}{3} \right\} = P\{-1/3 < Z < 2/3\} = \Phi(2/3) - \Phi(-1/3) \]
\[ = \Phi(2/3) - (1 - \Phi(1/3)) \approx \Phi(0.66) - 1 + \Phi(0.33) = 0.7454 - 1 + 0.6293 = 0.3747 \]
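For comparison (an added sketch, not in the original notes), the same probability computed with the exact standard normal CDF rather than the two-decimal table is about 0.378; the small discrepancy comes from rounding 2/3 and 1/3 to 0.66 and 0.33 in the table lookup:

```python
# Sketch: the same probability via scipy's standard normal CDF.
from scipy.stats import norm

mu, sigma = 3.0, 3.0
p = norm.cdf((5 - mu) / sigma) - norm.cdf((2 - mu) / sigma)
print(p)                                                 # ~0.378
print(norm.cdf(5, mu, sigma) - norm.cdf(2, mu, sigma))   # same, using loc/scale directly
```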
We can use the normal random variable to approximate a binomial. This is a special case of the Central Limit Theorem.
THEOREM (The DeMoivre-Laplace limit theorem): Let $S_n$ denote the number of successes that occur when n independent trials, each with success probability p, are performed. Then for any a < b we have, as $n \to \infty$,
\[ P\left\{ a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right\} \to \Phi(b) - \Phi(a) \]
A rule of thumb is that this is a good approximation when $np(1-p) \ge 10$. Note that there is no assumption that p is small, unlike in the Poisson approximation.
EXAMPLE: Suppose that a biased coin is flipped 70 times and that the probability of heads is 0.4. Approximate the probability that there are exactly 30 heads. Compare with the exact answer. What can you say about the probability that there are between 20 and 40 heads?
Solution: Let X be the number of times the coin lands on heads. We first note that a normal random variable is continuous, whereas the binomial is discrete. Thus (and this is called the continuity correction), we do the following:
\[ P\{X = 30\} = P\{29.5 < X < 30.5\} = P\left\{ \frac{29.5 - 28}{\sqrt{70 \cdot 0.4 \cdot 0.6}} < \frac{X - 28}{\sqrt{70 \cdot 0.4 \cdot 0.6}} < \frac{30.5 - 28}{\sqrt{70 \cdot 0.4 \cdot 0.6}} \right\} \]
\[ \approx P\{0.366 < Z < 0.61\} \approx \Phi(0.61) - \Phi(0.37) \approx 0.7291 - 0.6443 = 0.0848 \]
Whereas the exact value is
\[ P\{X = 30\} = \binom{70}{30} 0.4^{30}\, 0.6^{40} \approx 0.0853 \]
We now evaluate the probability that there are between 20 and 40 heads (inclusive):
\[ P\{20 \le X \le 40\} = P\{19.5 < X < 40.5\} = P\left\{ \frac{19.5 - 28}{\sqrt{70 \cdot 0.4 \cdot 0.6}} < \frac{X - 28}{\sqrt{70 \cdot 0.4 \cdot 0.6}} < \frac{40.5 - 28}{\sqrt{70 \cdot 0.4 \cdot 0.6}} \right\} \]
\[ \approx P\{-2.07 < Z < 3.05\} \approx \Phi(3.05) - \Phi(-2.07) = \Phi(3.05) - (1 - \Phi(2.07)) \approx 0.9989 - (1 - 0.9808) = 0.9797 \]
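Both approximations can be compared against the exact binomial probabilities (an added sketch, assuming SciPy):

```python
# Sketch: compare exact Binomial(70, 0.4) probabilities with the normal approximation
# (with continuity correction) used above.
from scipy.stats import binom, norm

n, p = 70, 0.4
mu, sd = n * p, (n * p * (1 - p)) ** 0.5

exact_30 = binom.pmf(30, n, p)
approx_30 = norm.cdf(30.5, mu, sd) - norm.cdf(29.5, mu, sd)
print(exact_30, approx_30)        # ~0.0853 (exact) vs ~0.086 (normal approximation)

exact_range = binom.cdf(40, n, p) - binom.cdf(19, n, p)   # P{20 <= X <= 40}
approx_range = norm.cdf(40.5, mu, sd) - norm.cdf(19.5, mu, sd)
print(exact_range, approx_range)  # both ~0.98
```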
Section 5.5 Exponential Random Variables
Exponential random variables arise as the distribution of the amount of time until some specific event occurs.
A continuous random variable with the density function
\[ f(x) = \lambda e^{-\lambda x} \quad \text{for } x \ge 0 \]
and zero otherwise, for some $\lambda > 0$, is called an exponential random variable with parameter $\lambda$. The distribution function for $t \ge 0$ is
\[ F(t) = P\{X \le t\} = \int_0^t \lambda e^{-\lambda x}\,dx = -e^{-\lambda x}\Big|_{x=0}^{t} = 1 - e^{-\lambda t} \]
Computing the moments of an exponential is easy with integration by parts. For $n > 0$:
\[ E[X^n] = \int_0^{\infty} x^n \lambda e^{-\lambda x}\,dx = -x^n e^{-\lambda x}\Big|_{x=0}^{\infty} + \int_0^{\infty} n x^{n-1} e^{-\lambda x}\,dx = \frac{n}{\lambda} \int_0^{\infty} x^{n-1} \lambda e^{-\lambda x}\,dx = \frac{n}{\lambda} E[X^{n-1}] \]
Therefore, for n = 1 and n = 2 we see that
\[ E[X] = \frac{1}{\lambda}, \qquad E[X^2] = \frac{2}{\lambda} E[X] = \frac{2}{\lambda^2}, \quad \ldots, \quad E[X^n] = \frac{n!}{\lambda^n} \]
and
\[ \mathrm{Var}(X) = \frac{2}{\lambda^2} - \left( \frac{1}{\lambda} \right)^2 = \frac{1}{\lambda^2} \]
EXAMPLE: Suppose the length of a phone call in minutes is an exponential random variable with parameter $\lambda = 1/10$ (so the average call is 10 minutes). What is the probability that a random call will take
(a) more than 8 minutes?
(b) between 8 and 22 minutes?
Solution: Let X be the length of the call. We have
\[ P\{X > 8\} = 1 - F(8) = 1 - \left( 1 - e^{-(1/10)\cdot 8} \right) = e^{-0.8} \approx 0.4493 \]
\[ P\{8 < X < 22\} = F(22) - F(8) = e^{-0.8} - e^{-2.2} \approx 0.3385 \]
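The same two probabilities can be reproduced directly from SciPy's exponential distribution (an added sketch; note that scipy.stats parameterizes by scale $= 1/\lambda$):

```python
# Sketch: P{X > 8} and P{8 < X < 22} for X ~ Exp(lambda = 1/10), i.e. scale = 10 minutes.
from scipy.stats import expon

X = expon(scale=10)
print(X.sf(8))               # P{X > 8}       ~0.4493
print(X.cdf(22) - X.cdf(8))  # P{8 < X < 22}  ~0.3385
```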
Memoryless property: We say that a nonnegative random variable X is memoryless if
\[ P\{X > s + t \mid X > t\} = P\{X > s\} \quad \text{for all } s, t \ge 0 \]
Note that this property is analogous to the one for the geometric random variable.
Let us show that an exponential random variable satisfies the memoryless property. In fact, we have
\[ P\{X > s + t \mid X > t\} = \frac{P\{X > s + t\}}{P\{X > t\}} = e^{-\lambda(s+t)} / e^{-\lambda t} = e^{-\lambda s} = P\{X > s\} \]
One can show that the exponential is the only continuous distribution with this property.
EXAMPLE: Three people are in a post office. Persons 1 and 2 are at the counter and will each leave after an Exp($\lambda$) amount of time. Once one of them leaves, person 3 will go to the counter and be served after an Exp($\lambda$) amount of time. What is the probability that person 3 is the last to leave the post office?
Solution: After one of the first two leaves, person 3 takes that spot. By the memoryless property, the remaining waiting time of the person still being served is again Exp($\lambda$), just like person 3's. By symmetry, the probability is then 1/2.
Hazard Rate Functions. Let X be a positive, continuous random variable that we think of as the lifetime of something. Let it have distribution function F and density f. Then the hazard or failure rate function $\lambda(t)$ of the random variable is defined via
\[ \lambda(t) = \frac{f(t)}{\bar F(t)}, \qquad \text{where } \bar F = 1 - F \]
What is this about? Note that
\[ P\{X \in (t, t + dt) \mid X > t\} = \frac{P\{X \in (t, t + dt),\ X > t\}}{P\{X > t\}} = \frac{P\{X \in (t, t + dt)\}}{P\{X > t\}} \approx \frac{f(t)}{\bar F(t)}\,dt \]
Therefore, $\lambda(t)$ represents the conditional probability intensity that a t-unit-old item will fail.
Note that for the exponential distribution we have
\[ \lambda(t) = \frac{f(t)}{\bar F(t)} = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda \]
The parameter $\lambda$ is usually referred to as the rate of the distribution.
It also turns out that the hazard function $\lambda(t)$ uniquely determines the distribution of a random variable. In fact, we have
\[ \lambda(t) = \frac{\frac{d}{dt} F(t)}{1 - F(t)} \]
Integrating both sides, we get
\[ \ln(1 - F(t)) = -\int_0^t \lambda(s)\,ds + k \]
Thus,
\[ 1 - F(t) = e^{k} \exp\left\{ -\int_0^t \lambda(s)\,ds \right\} \]
Setting t = 0 (note that F(0) = 0) shows that k = 0. Therefore
\[ F(t) = 1 - \exp\left\{ -\int_0^t \lambda(s)\,ds \right\} \]
EXAMPLE: Suppose that the life distribution of an item has the hazard rate function
\[ \lambda(t) = t^3, \qquad t > 0 \]
What is the probability that
(a) the item survives to age 2?
(b) the item's lifetime is between 0.4 and 1.4?
(c) the item survives to age 1?
(d) an item that has survived to age 1 will survive to age 2?
Solution: We are given $\lambda(t)$. We know that
\[ F(t) = 1 - \exp\left\{ -\int_0^t \lambda(s)\,ds \right\} = 1 - \exp\left\{ -\int_0^t s^3\,ds \right\} = 1 - \exp\left\{ -t^4/4 \right\} \]
Therefore, letting X be the lifetime of an item, we get
(a) $P\{X > 2\} = 1 - F(2) = \exp\{-2^4/4\} = e^{-4} \approx 0.01832$
(b) $P\{0.4 < X < 1.4\} = F(1.4) - F(0.4) = \exp\{-(0.4)^4/4\} - \exp\{-(1.4)^4/4\} \approx 0.6109$
(c) $P\{X > 1\} = 1 - F(1) = \exp\{-1/4\} \approx 0.7788$
(d) $P\{X > 2 \mid X > 1\} = \dfrac{P\{X > 2\}}{P\{X > 1\}} = \dfrac{1 - F(2)}{1 - F(1)} = \dfrac{e^{-4}}{e^{-1/4}} \approx 0.0235$
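The recipe "recover F by integrating the hazard rate" translates directly into code; the following sketch (my addition) reproduces parts (a)-(d) without using the closed-form antiderivative:

```python
# Sketch: recover F from the hazard rate lambda(t) = t^3 by numerical integration,
# using F(t) = 1 - exp(-int_0^t lambda(s) ds), and reproduce (a)-(d).
import numpy as np
from scipy.integrate import quad

haz = lambda t: t**3

def F(t):
    integral, _ = quad(haz, 0, t)
    return 1 - np.exp(-integral)

print(1 - F(2))                 # (a) ~0.0183
print(F(1.4) - F(0.4))          # (b) ~0.6109
print(1 - F(1))                 # (c) ~0.7788
print((1 - F(2)) / (1 - F(1)))  # (d) ~0.0235
```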
EXAMPLE: A smoker is said to have a rate of death twice that of a non-smoker at every age. What does this mean?
Solution: Let $\lambda_s(t)$ denote the hazard rate of a smoker and $\lambda_n(t)$ the hazard rate of a non-smoker. Then
\[ \lambda_s(t) = 2\lambda_n(t) \]
Let $X_n$ and $X_s$ denote the lifetime of a non-smoker and a smoker, respectively. The probability that a non-smoker of age A will reach age B > A is
\[ P\{X_n > B \mid X_n > A\} = \frac{1 - F_n(B)}{1 - F_n(A)} = \frac{\exp\left\{ -\int_0^B \lambda_n(t)\,dt \right\}}{\exp\left\{ -\int_0^A \lambda_n(t)\,dt \right\}} = \exp\left\{ -\int_A^B \lambda_n(t)\,dt \right\} \]
Whereas the corresponding probability for a smoker is
\[ P\{X_s > B \mid X_s > A\} = \exp\left\{ -\int_A^B \lambda_s(t)\,dt \right\} = \exp\left\{ -2\int_A^B \lambda_n(t)\,dt \right\} = \left[ \exp\left\{ -\int_A^B \lambda_n(t)\,dt \right\} \right]^2 = \left[ P\{X_n > B \mid X_n > A\} \right]^2 \]
Thus, if the probability that a 50 year old non-smoker reaches 60 is 0.7165, the probability for a smoker is $0.7165^2 = 0.5134$. If the probability that a 50 year old non-smoker reaches 80 is 0.2, the probability for a smoker is $0.2^2 = 0.04$.
Section 5.6 Other Continuous Distributions
The Gamma distribution: A random variable X is said to have a gamma distribution with parameters $(\alpha, \lambda)$, for $\alpha > 0$ and $\lambda > 0$, if its density function is
\[ f(x) = \begin{cases} \dfrac{\lambda e^{-\lambda x} (\lambda x)^{\alpha - 1}}{\Gamma(\alpha)}, & x \ge 0 \\ 0, & x < 0 \end{cases} \]
where $\Gamma(x)$ is called the gamma function and is defined by
\[ \Gamma(x) = \int_0^{\infty} e^{-y} y^{x-1}\,dy \]
Note that $\Gamma(1) = 1$. For $x > 1$ we use integration by parts with $u = y^{x-1}$ and $dv = e^{-y}\,dy$ to show that
\[ \Gamma(x) = -e^{-y} y^{x-1}\Big|_{y=0}^{\infty} + (x-1) \int_0^{\infty} e^{-y} y^{x-2}\,dy = (x-1)\Gamma(x-1) \]
Therefore, for integer values of $n \ge 1$ we see that
\[ \Gamma(n) = (n-1)\Gamma(n-1) = (n-1)(n-2)\Gamma(n-2) = \cdots = (n-1)(n-2)\cdots 3 \cdot 2 \cdot \Gamma(1) = (n-1)! \]
So the gamma function is a generalization of the factorial.
REMARK: One can show that
\[ \Gamma\!\left( \tfrac{1}{2} \right) = \sqrt{\pi}, \quad \Gamma\!\left( \tfrac{3}{2} \right) = \tfrac{1}{2}\sqrt{\pi}, \quad \Gamma\!\left( \tfrac{5}{2} \right) = \tfrac{3}{4}\sqrt{\pi}, \quad \ldots, \quad \Gamma\!\left( \tfrac{1}{2} + n \right) = \frac{(2n)!}{4^n\, n!}\sqrt{\pi}, \quad n \in \mathbb{Z}_{\ge 0} \]
The gamma distribution comes up a lot as the sum of independent exponential random variables of parameter $\lambda$. That is, if $X_1, \ldots, X_n$ are independent Exp($\lambda$), then $T_n = X_1 + \cdots + X_n$ is gamma($n, \lambda$).
Recall that the time of the nth jump in a Poisson process, N(t), is the sum of n exponential random variables of parameter $\lambda$. Thus, the time of the nth event follows a gamma($n, \lambda$) distribution. To prove all this we denote the time of the nth jump by $T_n$ and see that
\[ P\{T_n \le t\} = P\{N(t) \ge n\} = \sum_{j=n}^{\infty} P\{N(t) = j\} = \sum_{j=n}^{\infty} e^{-\lambda t} \frac{(\lambda t)^j}{j!} \]
Differentiating both sides, we get
\[ f_{T_n}(t) = \sum_{j=n}^{\infty} e^{-\lambda t} \frac{j\lambda(\lambda t)^{j-1}}{j!} - \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^j}{j!} = \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^{j-1}}{(j-1)!} - \sum_{j=n}^{\infty} \lambda e^{-\lambda t} \frac{(\lambda t)^j}{j!} = \lambda e^{-\lambda t} \frac{(\lambda t)^{n-1}}{(n-1)!} \]
which is the density of a gamma($n, \lambda$) random variable.
REMARK: Note that, as expected, gamma($1, \lambda$) is Exp($\lambda$).
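The claim that a sum of independent exponentials is gamma is easy to test by simulation; a minimal sketch (my addition, using scipy.stats.gamma, which takes the shape $\alpha$ as `a` and scale $= 1/\lambda$):

```python
# Sketch: simulate T_n = X_1 + ... + X_n for independent Exp(lambda) random variables
# and compare P{T_n <= t} with the gamma(n, lambda) distribution function.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
n, lam = 5, 2.0
T = rng.exponential(scale=1/lam, size=(200_000, n)).sum(axis=1)

for t in (1.0, 2.5, 4.0):
    print(np.mean(T <= t), gamma.cdf(t, a=n, scale=1/lam))   # the two should agree closely
```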
We have
\[ E[X] = \frac{1}{\Gamma(\alpha)} \int_0^{\infty} \lambda x e^{-\lambda x} (\lambda x)^{\alpha - 1}\,dx = \frac{1}{\lambda\Gamma(\alpha)} \int_0^{\infty} \lambda e^{-\lambda x} (\lambda x)^{\alpha}\,dx = \frac{\Gamma(\alpha + 1)}{\lambda\Gamma(\alpha)} \]
so
\[ E[X] = \frac{\alpha}{\lambda} \]
Recall that the expected value of Exp($\lambda$) is $1/\lambda$, and this ties in nicely with the sum of exponentials being a gamma. It can also be shown that
\[ \mathrm{Var}(X) = \frac{\alpha}{\lambda^2} \]
DEFINITION: The gamma distribution with $\lambda = 1/2$ and $\alpha = n/2$, where n is a positive integer, is called the $\chi^2_n$ distribution with n degrees of freedom.
Section 5.7 The Distribution of a Function of a Random
Variable
We formalize what we've already done.
EXAMPLE: Let X be uniformly distributed over (0, 1). Let $Y = X^n$. For $0 \le y \le 1$ we have
\[ F_Y(y) = P\{Y \le y\} = P\{X^n \le y\} = P\{X \le y^{1/n}\} = F_X(y^{1/n}) = y^{1/n} \]
Therefore, for $y \in (0, 1)$ we have
\[ f_Y(y) = \frac{1}{n} y^{1/n - 1} \]
and zero otherwise.
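This distribution function is simple to verify by simulation (an added sketch, not part of the original notes):

```python
# Sketch: for Y = X^n with X ~ Uniform(0,1), check F_Y(y) = y^(1/n) empirically.
import numpy as np

rng = np.random.default_rng(4)
n = 3
y_samples = rng.uniform(0, 1, 200_000) ** n

for y in (0.1, 0.5, 0.9):
    print(np.mean(y_samples <= y), y ** (1 / n))   # empirical F_Y(y) vs y^{1/n}; should match
```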
THEOREM: Let X be a continuous random variable with density $f_X$. Suppose g(x) is strictly monotone and differentiable. Let Y = g(X). Then
\[ f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \left| \dfrac{d}{dy} g^{-1}(y) \right|, & \text{if } y = g(x) \text{ for some } x \\ 0, & \text{if } y \ne g(x) \text{ for all } x \end{cases} \]
Proof: Suppose $g'(x) > 0$ (the proof in the other case is similar). Suppose $y = g(x)$ for some x (i.e. y is in the range of Y); then
\[ F_Y(y) = P\{g(X) \le y\} = P\{X \le g^{-1}(y)\} = F_X(g^{-1}(y)) \]
Differentiating both sides, we get
\[ f_Y(y) = f_X(g^{-1}(y))\, \frac{d}{dy} g^{-1}(y) \]