
ST. JOSEPH'S COLLEGE OF ENGINEERING, CHENNAI-119
ST. JOSEPH'S INSTITUTE OF TECHNOLOGY, CHENNAI-119
MA6453 – PROBABILITY AND QUEUEING THEORY
UNIT I RANDOM VARIABLES
FORMULAE SHEET

1. Definition

Discrete Random Variable:
Let X be a discrete random variable with values x1, x2, x3, ... . The function P(X = xi) is said to be a probability mass function if
i) P(X = xi) ≥ 0, ∀ i
ii) Σ_{i=1}^{∞} P(X = xi) = 1

Continuous Random Variable:
Let X be a continuous random variable. A function f(x) is said to be the probability density function of X if
i) f(x) ≥ 0, ∀ x
ii) ∫_{−∞}^{∞} f(x) dx = 1

2. Distribution Function (or) Cumulative Distribution Function (C.D.F.)

Discrete: F(x) = P(X ≤ x) = Σ_{xi ≤ x} P(X = xi)
Continuous: F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt

Properties of C.D.F.:
i) F(−∞) = lim_{x→−∞} F(x) = 0 and F(∞) = lim_{x→∞} F(x) = 1
ii) P(a < X < b) = P(a ≤ X ≤ b) = P(a < X ≤ b) = P(a ≤ X < b) = F(b) − F(a)
iii) P(X ≤ a) = F(a)
iv) P(X > a) = 1 − P(X ≤ a) = 1 − F(a)
v) P(X = a) = 0 (for a continuous random variable)
vi) f(x) = d/dx [F(x)]
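The C.D.F. properties above can be checked numerically. The sketch below uses the exponential distribution as an assumed worked example (the parameter value lam = 2 is illustrative, not part of the sheet):

```python
import math

# Sketch: checking C.D.F. properties i) and vi) for an assumed example,
# the exponential distribution f(x) = lam * exp(-lam * x), x >= 0.
lam = 2.0

def f(x):
    """Probability density function."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def F(x):
    """C.D.F.: F(x) = P(X <= x) = 1 - exp(-lam * x) for x >= 0."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

# Property i): F(-inf) = 0 and F(inf) = 1, checked at large |x|.
assert F(-100.0) == 0.0
assert abs(F(100.0) - 1.0) < 1e-12

# Property ii): P(a < X <= b) = F(b) - F(a).
a, b = 0.5, 1.5
prob = F(b) - F(a)
assert 0.0 < prob < 1.0

# Property vi): f(x) = d/dx F(x), checked by a central finite difference.
x0, h = 1.0, 1e-6
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)
assert abs(deriv - f(x0)) < 1e-5
```

The same pattern works for any distribution on the sheet whose F(x) has a closed form.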
3. Mean (or) Expectation of X: E(X) = X̄ = μ1′
Discrete: E(X) = Σ_{xi} xi P(X = xi)
Continuous: E(X) = ∫_{−∞}^{∞} x f(x) dx
4. Variance of X:
Var(X) = E(X²) − [E(X)]²  (or)  Var(X) = μ2′ − (μ1′)²
where, in the discrete case, E(X²) = μ2′ = Σ_{xi} xi² P(X = xi)
and, in the continuous case, E(X²) = μ2′ = ∫_{−∞}^{∞} x² f(x) dx
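The mean and variance formulas can be applied directly to a small p.m.f. The sketch below uses a fair six-sided die as an assumed example:

```python
# Sketch: applying E(X) = sum xi * P(X = xi) and
# Var(X) = E(X^2) - [E(X)]^2 to an assumed example, a fair die.
values = [1, 2, 3, 4, 5, 6]
pmf = {x: 1 / 6 for x in values}

mean = sum(x * p for x, p in pmf.items())            # E(X) = mu_1'
second_raw = sum(x**2 * p for x, p in pmf.items())   # E(X^2) = mu_2'
variance = second_raw - mean**2                      # mu_2' - (mu_1')^2

assert abs(mean - 3.5) < 1e-12        # E(X) = 3.5
assert abs(variance - 35 / 12) < 1e-12  # Var(X) = 35/12
```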
5. r-th Order Raw Moment:
Discrete: μr′ = E(X^r) = Σ_{xi} xi^r P(X = xi)
Continuous: μr′ = E(X^r) = ∫_{−∞}^{∞} x^r f(x) dx
6. r-th Central Moment:
Discrete: μr = E[(X − X̄)^r] = Σ_{xi} (xi − X̄)^r P(X = xi)
Continuous: μr = E[(X − X̄)^r] = ∫_{−∞}^{∞} (x − X̄)^r f(x) dx

M X (t ) = E (etX ) = ∑ etx P( X = xi )
7. Moment generating function: Moment generating function:

∫e

tX tx
xi M X (t ) = E (e ) = f ( x) dx
−∞
8. First four central moments [relationship between raw moments and central moments]:
μ1 = 0 (always)
μ2 = μ2′ − (μ1′)²
μ3 = μ3′ − 3μ2′μ1′ + 2(μ1′)³
μ4 = μ4′ − 4μ3′μ1′ + 6μ2′(μ1′)² − 3(μ1′)⁴
In general,
μn = μn′ − nC1 μ(n−1)′ μ1′ + nC2 μ(n−2)′ (μ1′)² − nC3 μ(n−3)′ (μ1′)³ + ... + (−1)^{n+1} (n − 1)(μ1′)^n

PROPERTIES OF MEAN & VARIANCE
1) E(c) = c, where c is any constant.
2) E(aX ± b) = aE(X) ± b
3) Var(c) = 0, where c is any constant.
4) Var(aX ± b) = a² Var(X)
PROPERTIES OF MOMENT GENERATING FUNCTION
1) E(X^r) = μr′ = coefficient of t^r / r! in the expansion of M_X(t)
2) E(X^r) = μr′ = [d^r/dt^r (M_X(t))] evaluated at t = 0
3) M_{cX}(t) = M_X(ct), where c is a constant.
4) M_{X1+X2}(t) = M_{X1}(t) M_{X2}(t), if X1 and X2 are independent.
5) M_{aX+b}(t) = e^{bt} M_X(at)
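Property 2), that E(X^r) is the r-th derivative of M_X(t) at t = 0, can be checked with finite differences. The sketch below uses the Poisson MGF from the table that follows; the parameter value lam = 2 is an assumed example:

```python
import math

# Sketch: E(X^r) equals the r-th derivative of M_X(t) at t = 0,
# checked numerically for the Poisson MGF e^{lam*(e^t - 1)}.
lam = 2.0

def M(t):
    """Poisson moment generating function."""
    return math.exp(lam * (math.exp(t) - 1.0))

h = 1e-5
first = (M(h) - M(-h)) / (2 * h)              # ~ E(X)   = lam
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2   # ~ E(X^2) = lam + lam^2

assert abs(first - lam) < 1e-6
assert abs(second - (lam + lam**2)) < 1e-3
```

For the Poisson distribution this recovers the mean λ and second raw moment λ + λ², consistent with Var(X) = λ.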
STANDARD DISTRIBUTIONS:

DISCRETE DISTRIBUTIONS:

Binomial Distribution, X ~ B(n, p)
P.M.F.: P(X = x) = nCx p^x q^{n−x}, x = 0, 1, 2, 3, ..., n
MGF M_X(t): (q + pe^t)^n
Mean E(X): np
Variance Var(X): npq
Conditions for applying: i) the n trials are independent; ii) n is small (n < 30).
Notation: n = number of trials, p = probability of success, q = probability of failure, X = number of successes (out of n trials), and p + q = 1.

Poisson Distribution, X ~ P(λ)
P.M.F.: P(X = x) = e^{−λ} λ^x / x!, x = 0, 1, 2, 3, ...
MGF M_X(t): e^{λ(e^t − 1)}
Mean E(X): λ
Variance Var(X): λ
Conditions for applying: i) n is infinitely large (i.e. n → ∞); ii) p is very small (i.e. p → 0); iii) λ = np.
Notation: n = number of trials, p = probability of success, X = number of successes (out of n trials).

Geometric Distribution, X ~ G(p)
Form I (X = number of trials required to get the first success):
P.M.F.: P(X = x) = q^{x−1} p, x = 1, 2, 3, ...
MGF M_X(t): pe^t / (1 − qe^t)
Mean E(X): 1/p
Variance Var(X): q/p²
Form II (X = number of failures before the first success):
P.M.F.: P(X = x) = q^x p, x = 0, 1, 2, 3, ...
MGF M_X(t): p / (1 − qe^t)
Mean E(X): q/p
Variance Var(X): q/p²
Memoryless property: if X is a geometric random variable, then P(X > s + t | X > s) = P(X > t), ∀ s, t > 0, and X is said to have the memoryless property.
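The Poisson conditions above (n large, p small, λ = np) say the Poisson p.m.f. approximates the binomial p.m.f. A numerical sketch, with assumed example values n = 1000 and p = 0.002:

```python
import math

# Sketch: Poisson approximation to the binomial when n is large,
# p is small, and lam = n*p (example values, not from the sheet).
n, p = 1000, 0.002
lam = n * p  # lam = 2

def binom_pmf(x):
    """Binomial P(X = x) = nCx p^x q^(n-x)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x):
    """Poisson P(X = x) = e^(-lam) lam^x / x!."""
    return math.exp(-lam) * lam**x / math.factorial(x)

# Both p.m.f.s sum to 1 (Poisson sum truncated at x = 50).
assert abs(sum(binom_pmf(x) for x in range(n + 1)) - 1.0) < 1e-9
assert abs(sum(poisson_pmf(x) for x in range(51)) - 1.0) < 1e-9

# Pointwise agreement at small x.
for x in range(10):
    assert abs(binom_pmf(x) - poisson_pmf(x)) < 1e-3
```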
CONTINUOUS DISTRIBUTIONS:

Uniform Distribution, X ~ U(a, b)
P.D.F.: f(x) = 1/(b − a) for a < x < b; 0 otherwise
MGF M_X(t): (e^{bt} − e^{at}) / ((b − a)t), t ≠ 0
Mean E(X): (a + b)/2
Variance Var(X): (b − a)²/12

Exponential Distribution, X ~ e(λ)
P.D.F.: f(x) = λe^{−λx} for x ≥ 0; 0 otherwise
MGF M_X(t): λ/(λ − t), t < λ
Mean E(X): 1/λ
Variance Var(X): 1/λ²
Memoryless property: if X is a random variable with exponential distribution, then X lacks memory, in the sense that P(X > s + t | X > s) = P(X > t), ∀ s, t > 0.

Gamma Distribution, X ~ G(λ, α)
P.D.F.: f(x) = λe^{−λx}(λx)^{α−1}/Γ(α) for x ≥ 0; 0 otherwise
MGF M_X(t): (λ/(λ − t))^α, λ > t
Mean E(X): α/λ
Variance Var(X): α/λ²
Remarks: the parameters λ and α are positive; if α = 1, the Gamma distribution becomes the exponential distribution.

Normal Distribution, X ~ N(μ, σ²)
P.D.F.: f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}, −∞ < x < ∞
MGF M_X(t): e^{μt + σ²t²/2}
Mean E(X): μ
Variance Var(X): σ²