St. Joseph's College of Engineering, Chennai-119 / St. Joseph's Institute of Technology, Chennai-119
MA6453 - Probability and Queueing Theory
Unit I - Random Variables - Formulae Sheet
ii) ∑_{i=1}^{∞} P(X = xᵢ) = 1 (discrete case)        ii) ∫_{−∞}^{∞} f(x) dx = 1 (continuous case)
C.D.F: F(x) = P(X ≤ x) = ∑_{xᵢ ≤ x} P(X = xᵢ) (discrete case)
       F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(x) dx (continuous case)
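The normalisation condition and the C.D.F. definition above can be checked numerically. A minimal Python sketch (the loaded-die P.M.F. below is a made-up example, not from the sheet):

```python
# A discrete random variable given by its P.M.F. (hypothetical loaded die).
pmf = {1: 0.1, 2: 0.1, 3: 0.2, 4: 0.2, 5: 0.2, 6: 0.2}

# Conditions: every probability is non-negative and they sum to 1.
assert all(p >= 0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

def F(x):
    """C.D.F.: F(x) = P(X <= x) = sum of P(X = x_i) over x_i <= x."""
    return sum(p for xi, p in pmf.items() if xi <= x)

print(F(3))   # P(X <= 3) = 0.1 + 0.1 + 0.2
```

The C.D.F. is obtained simply by accumulating the P.M.F., mirroring the discrete-case formula.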
Properties of C.D.F:
i) F(−∞) = lim_{x→−∞} F(x) = 0 and F(∞) = lim_{x→∞} F(x) = 1
ii) P(a < X < b) = P(a ≤ X ≤ b) = P(a < X ≤ b) = P(a ≤ X < b) = F(b) − F(a) (for a continuous random variable)
iii) P(X ≤ a) = F(a)
iv) P(X > a) = 1 − P(X ≤ a) = 1 − F(a)
v) P(X = a) = 0 (for a continuous random variable)
vi) f(x) = (d/dx)[F(x)]
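The C.D.F. properties can be verified numerically on a concrete example. A small Python sketch using the exponential C.D.F. F(x) = 1 − e^{−λx} (λ = 2 is an arbitrary choice, not from the sheet):

```python
import math

lam = 2.0  # arbitrary rate parameter
F = lambda x: 1 - math.exp(-lam * x) if x >= 0 else 0.0  # C.D.F.
f = lambda x: lam * math.exp(-lam * x)                   # p.d.f.

# i) F(-inf) = 0 and F(inf) = 1 (checked at large |x|)
assert F(-1e6) == 0.0 and abs(F(1e6) - 1.0) < 1e-12

# ii) P(a < X <= b) = F(b) - F(a) is a valid (non-negative) probability
a, b = 0.5, 1.5
assert 0 <= F(b) - F(a) <= 1

# vi) f(x) = d/dx F(x), checked with a central difference at x = 1
x, h = 1.0, 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)
assert abs(deriv - f(x)) < 1e-6
```

Property vi) is the one that links the two descriptions of a continuous random variable: differentiating the C.D.F. recovers the density.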
3. Mean (or) Expectation of X: E(X) = X̄ = µ₁′
E(X) = ∑_{xᵢ} xᵢ P(X = xᵢ) (discrete case)
E(X) = ∫_{−∞}^{∞} x f(x) dx (continuous case)
4. Variance of X:
Var(X) = E(X²) − [E(X)]² (or) Var(X) = µ₂′ − (µ₁′)²
where E(X²) = µ₂′ = ∑_{xᵢ} xᵢ² P(X = xᵢ) (discrete case)
      E(X²) = µ₂′ = ∫_{−∞}^{∞} x² f(x) dx (continuous case)
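The mean and variance formulas above can be exercised on a fair die (a standard worked example, not from the sheet):

```python
# Fair die: x_i = 1..6, each with probability 1/6.
xs = [1, 2, 3, 4, 5, 6]
p = 1 / 6

mean = sum(x * p for x in xs)          # mu_1' = E(X) = 3.5
second = sum(x * x * p for x in xs)    # mu_2' = E(X^2) = 91/6
var = second - mean ** 2               # Var(X) = 35/12 ≈ 2.9167

print(mean, var)
```

Note that the variance is computed exactly as the sheet states: the second raw moment minus the square of the first.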
5. r-th order raw moment:
µᵣ′ = E(Xʳ) = ∑_{xᵢ} xᵢʳ P(X = xᵢ) (discrete case)
µᵣ′ = E(Xʳ) = ∫_{−∞}^{∞} xʳ f(x) dx (continuous case)
6. r th Central Moment : r th Central Moment :
µr = E ( X − X )r = ∑ ( xi − X ) P( X = xi )
µr = E X − X = ∫ (x − X )
r ∞
r r
xi f ( x)dx
−∞
7. Moment generating function:
M_X(t) = E(e^{tX}) = ∑_{xᵢ} e^{t xᵢ} P(X = xᵢ) (discrete case)
M_X(t) = E(e^{tX}) = ∫_{−∞}^{∞} e^{tx} f(x) dx (continuous case)
8. First four central moments [ Relationship between raw moment and central moment]
µ1 = 0 (always)
µ 2 = µ 2′ − ( µ1′) 2
µ 3 = µ 3′ − 3µ 2′ µ1′ + 2(µ1′ ) 3
µ 4 = µ 4′ − 4µ 3′ µ1′ + 6µ 2′ ( µ1′ ) 2 − 3( µ1′ ) 4
In general
µₙ = µₙ′ − nC₁ µₙ₋₁′ µ₁′ + nC₂ µₙ₋₂′ (µ₁′)² − nC₃ µₙ₋₃′ (µ₁′)³ + ... + (−1)ⁿ⁺¹ (n − 1)(µ₁′)ⁿ
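The raw-to-central conversions above can be verified numerically. A Python sketch on a fair die (an arbitrary example distribution):

```python
# Fair die: x_i = 1..6, each with probability 1/6.
xs = [1, 2, 3, 4, 5, 6]
p = 1 / 6

raw = lambda r: sum(x ** r * p for x in xs)             # mu_r'
m1, m2, m3 = raw(1), raw(2), raw(3)
central = lambda r: sum((x - m1) ** r * p for x in xs)  # mu_r

# mu_2 = mu_2' - (mu_1')^2
assert abs(central(2) - (m2 - m1 ** 2)) < 1e-9
# mu_3 = mu_3' - 3 mu_2' mu_1' + 2 (mu_1')^3  (zero here, by symmetry)
assert abs(central(3) - (m3 - 3 * m2 * m1 + 2 * m1 ** 3)) < 1e-9
```

For this symmetric distribution µ₃ comes out as 0, as expected, and both identities hold to rounding error.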
9. Properties of M.G.F:
1) M_X(t) generates the raw moments: µᵣ′ is the coefficient of tʳ/r! in the expansion of M_X(t).
2) µᵣ′ = [ dʳ/dtʳ M_X(t) ]_{t=0}
3) M CX (t ) = M X (Ct ), where C being a constant.
4) M X1 + X 2 (t ) = M X1 (t ) M X 2 (t ) if X 1 & X 2 are independent.
5) M aX +b (t ) = ebt M X (at )
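Properties 4) and 5) can be checked exactly on a small discrete example. A Python sketch using independent Bernoulli(p) variables, whose sum is B(2, p) (the values p = 0.3, a = 2, b = 1 are arbitrary choices):

```python
import math

# M.G.F. of Bernoulli(p): q + p e^t;  of B(2, p): (q + p e^t)^2.
p, q = 0.3, 0.7
M_bern = lambda t: q + p * math.exp(t)
M_bin2 = lambda t: (q + p * math.exp(t)) ** 2

# Property 4): X1 + X2 ~ B(2, p) for independent Bernoulli X1, X2,
# so M_{X1+X2}(t) = M_{X1}(t) * M_{X2}(t).
for t in (-1.0, 0.0, 0.5, 2.0):
    assert abs(M_bin2(t) - M_bern(t) * M_bern(t)) < 1e-12

# Property 5): M_{aX+b}(t) = e^{bt} M_X(at), computed directly from E(e^{t(aX+b)}).
a, b = 2.0, 1.0
M_aXb = lambda t: sum(pr * math.exp(t * (a * x + b)) for x, pr in [(0, q), (1, p)])
for t in (-1.0, 0.5, 1.0):
    assert abs(M_aXb(t) - math.exp(b * t) * M_bern(a * t)) < 1e-12
```

Both identities hold exactly here, since M_{aX+b}(t) = q e^{bt} + p e^{(a+b)t} = e^{bt}(q + p e^{at}).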
STANDARD DISTRIBUTIONS:

Binomial Distribution, X ∼ B(n, p):
  P.M.F:     P(X = x) = nCx pˣ qⁿ⁻ˣ, x = 0, 1, 2, 3, ..., n
  M.G.F:     M_X(t) = (q + peᵗ)ⁿ
  Mean:      E(X) = np
  Variance:  Var(X) = npq
  Remarks:   i) the n trials are independent; ii) n is small (n < 30).
             Here n = number of trials, p = probability of success, q = probability of failure,
             p + q = 1, and X = number of successes (out of n trials).

Exponential Distribution, X ∼ e(λ):
  P.D.F:     f(x) = λe^{−λx}, x ≥ 0; f(x) = 0 otherwise
  M.G.F:     M_X(t) = λ/(λ − t), λ > t
  Mean:      E(X) = 1/λ
  Variance:  Var(X) = 1/λ²
  Remarks:   If X is a random variable with exponential distribution, then X lacks memory,
             in the sense that P(X > s + t / X > s) = P(X > t), ∀ s, t > 0.

Gamma Distribution, X ∼ G(λ, α):
  P.D.F:     f(x) = λe^{−λx}(λx)^{α−1} / Γ(α), x ≥ 0; f(x) = 0 otherwise
  M.G.F:     M_X(t) = (λ/(λ − t))^α, λ > t
  Mean:      E(X) = α/λ
  Variance:  Var(X) = α/λ²
  Remarks:   If α = 1 the Gamma distribution becomes the exponential distribution.
             The parameters λ and α are positive.

Normal Distribution, X ∼ N(µ, σ):
  P.D.F:     f(x) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)}, −∞ < x < ∞
  M.G.F:     M_X(t) = e^{µt + σ²t²/2}
  Mean:      E(X) = µ
  Variance:  Var(X) = σ²
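Two of the tabulated facts lend themselves to a direct numerical check: the memoryless property of the exponential distribution, and the binomial mean np and variance npq. A Python sketch (the parameter values λ = 0.5, n = 10, p = 0.4 are arbitrary choices):

```python
import math
from math import comb

# Memoryless property: with survival function P(X > x) = e^{-lam*x},
# P(X > s + t | X > s) = P(X > s + t) / P(X > s) should equal P(X > t).
lam = 0.5
S = lambda x: math.exp(-lam * x)   # P(X > x)

s, t = 2.0, 3.0
cond = S(s + t) / S(s)             # P(X > s+t | X > s)
assert abs(cond - S(t)) < 1e-12

# Binomial: recompute mean and variance from the P.M.F. nCx p^x q^(n-x)
n, p = 10, 0.4
q = 1 - p
pmf = [comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]
mean = sum(x * pr for x, pr in zip(range(n + 1), pmf))
var = sum(x * x * pr for x, pr in zip(range(n + 1), pmf)) - mean ** 2
assert abs(mean - n * p) < 1e-9    # np = 4.0
assert abs(var - n * p * q) < 1e-9  # npq = 2.4
```

The memoryless check is exact because e^{−λ(s+t)}/e^{−λs} = e^{−λt}; the binomial check simply re-derives the tabulated mean and variance from the P.M.F. column.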