A Probability and Statistics Cheat Sheet

Copyright © Matthias Vallentin, 2011
[email protected]
[Figure: PMFs of selected discrete distributions: discrete Uniform on {a, ..., b}, Binomial (n = 30, p = 0.6), Geometric (p = 0.5), and Poisson (λ = 4).]
¹ We use Γ(s, x) and Γ(x) to refer to the Gamma functions (see 22.1), and B(x, y) and I_x to refer to the Beta functions (see 22.2).
1.2 Continuous Distributions
For each distribution: notation, CDF F_X(x), density f_X(x), E[X], V[X], and MGF M_X(s).

Uniform, Unif(a, b)
  F_X(x) = 0 for x < a; (x − a)/(b − a) for a < x < b; 1 for x > b
  f_X(x) = I(a < x < b)/(b − a)
  E[X] = (a + b)/2    V[X] = (b − a)²/12    M_X(s) = (e^{sb} − e^{sa})/(s(b − a))

Normal, N(μ, σ²)
  F_X(x) = Φ(x) = ∫_{−∞}^{x} φ(t) dt
  f_X(x) = φ(x) = (1/(σ√(2π))) exp{−(x − μ)²/(2σ²)}
  E[X] = μ    V[X] = σ²    M_X(s) = exp{μs + σ²s²/2}

Log-Normal, ln N(μ, σ²)
  F_X(x) = 1/2 + (1/2) erf[(ln x − μ)/(√2 σ)]
  f_X(x) = (1/(x σ√(2π))) exp{−(ln x − μ)²/(2σ²)}
  E[X] = e^{μ + σ²/2}    V[X] = (e^{σ²} − 1) e^{2μ + σ²}

Multivariate Normal, MVN(μ, Σ)
  f_X(x) = (2π)^{−k/2} |Σ|^{−1/2} exp{−(1/2)(x − μ)^T Σ^{−1}(x − μ)}
  E[X] = μ    V[X] = Σ    M_X(s) = exp{μ^T s + (1/2) s^T Σ s}

Student's t, Student(ν)
  F_X(x) expressed via the regularized incomplete Beta function I_x(ν/2, ν/2)
  f_X(x) = [Γ((ν + 1)/2)/(√(νπ) Γ(ν/2))] (1 + x²/ν)^{−(ν+1)/2}
  E[X] = 0 (ν > 1)    V[X] = ν/(ν − 2) (ν > 2)

Chi-square, χ²_k
  F_X(x) = γ(k/2, x/2)/Γ(k/2)
  f_X(x) = (1/(2^{k/2} Γ(k/2))) x^{k/2 − 1} e^{−x/2}
  E[X] = k    V[X] = 2k    M_X(s) = (1 − 2s)^{−k/2} (s < 1/2)

F, F(d₁, d₂)
  F_X(x) = I_{d₁x/(d₁x + d₂)}(d₁/2, d₂/2)
  f_X(x) = √[(d₁x)^{d₁} d₂^{d₂}/(d₁x + d₂)^{d₁+d₂}] / (x B(d₁/2, d₂/2))
  E[X] = d₂/(d₂ − 2) (d₂ > 2)    V[X] = 2d₂²(d₁ + d₂ − 2)/(d₁(d₂ − 2)²(d₂ − 4)) (d₂ > 4)

Exponential, Exp(β)
  F_X(x) = 1 − e^{−x/β}    f_X(x) = (1/β) e^{−x/β}
  E[X] = β    V[X] = β²    M_X(s) = 1/(1 − βs) (s < 1/β)

Gamma, Gamma(α, β)
  F_X(x) = γ(α, x/β)/Γ(α)    f_X(x) = (1/(Γ(α) β^α)) x^{α−1} e^{−x/β}
  E[X] = αβ    V[X] = αβ²    M_X(s) = (1 − βs)^{−α} (s < 1/β)

Inverse Gamma, InvGamma(α, β)
  F_X(x) = Γ(α, β/x)/Γ(α)    f_X(x) = (β^α/Γ(α)) x^{−α−1} e^{−β/x}
  E[X] = β/(α − 1) (α > 1)    V[X] = β²/((α − 1)²(α − 2)) (α > 2)
  M_X(s) = (2(−βs)^{α/2}/Γ(α)) K_α(√(−4βs))

Dirichlet, Dir(α)
  f_X(x) = [Γ(Σ_{i=1}^k α_i)/∏_{i=1}^k Γ(α_i)] ∏_{i=1}^k x_i^{α_i − 1}
  E[X_i] = α_i/Σ_{i=1}^k α_i    V[X_i] = E[X_i](1 − E[X_i])/(Σ_{i=1}^k α_i + 1)

Beta, Beta(α, β)
  F_X(x) = I_x(α, β)    f_X(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1}
  E[X] = α/(α + β)    V[X] = αβ/((α + β)²(α + β + 1))
  M_X(s) = 1 + Σ_{k=1}^∞ (∏_{r=0}^{k−1} (α + r)/(α + β + r)) s^k/k!

Weibull, Weibull(λ, k)
  F_X(x) = 1 − e^{−(x/λ)^k}    f_X(x) = (k/λ)(x/λ)^{k−1} e^{−(x/λ)^k}
  E[X] = λΓ(1 + 1/k)    V[X] = λ²Γ(1 + 2/k) − μ²    M_X(s) = Σ_{n=0}^∞ (s^n λ^n/n!) Γ(1 + n/k)

Pareto, Pareto(x_m, α)
  F_X(x) = 1 − (x_m/x)^α for x ≥ x_m    f_X(x) = α x_m^α/x^{α+1} for x ≥ x_m
  E[X] = α x_m/(α − 1) (α > 1)    V[X] = x_m² α/((α − 1)²(α − 2)) (α > 2)
  M_X(s) = α(−x_m s)^α Γ(−α, −x_m s) (s < 0)
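As a quick numerical cross-check of a few rows above, here is a minimal Python sketch. It assumes NumPy and SciPy are available and uses SciPy's scale parameterization, which matches the table's use of β as a scale parameter for the Exponential and Gamma distributions; the specific parameter values are arbitrary examples.

```python
import numpy as np
from scipy import stats

# Exponential Exp(beta): E[X] = beta, V[X] = beta^2 (beta is a scale parameter)
beta = 2.0
X = stats.expon(scale=beta)
assert np.isclose(X.mean(), beta) and np.isclose(X.var(), beta**2)

# Gamma(alpha, beta): E[X] = alpha*beta, V[X] = alpha*beta^2
alpha = 3.0
G = stats.gamma(a=alpha, scale=beta)
assert np.isclose(G.mean(), alpha * beta) and np.isclose(G.var(), alpha * beta**2)

# Beta(a, b): E[X] = a/(a+b), V[X] = ab/((a+b)^2 (a+b+1))
a, b = 2.0, 5.0
B = stats.beta(a, b)
assert np.isclose(B.mean(), a / (a + b))
assert np.isclose(B.var(), a * b / ((a + b) ** 2 * (a + b + 1)))

print("table moments confirmed for Exp, Gamma, Beta")
```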
[Figure: PDFs of the continuous distributions of Section 1.2 (Uniform, Normal, Log-Normal, Student's t, χ², F, Exponential, Gamma, Inverse Gamma, Beta, Weibull, Pareto) for selected parameter values.]
2 Probability Theory

Definitions
- Sample space Ω
- Probability space (Ω, A, P), where the probability measure P satisfies
  1. P[A] ≥ 0 for every A ∈ A
  2. P[Ω] = 1
  3. P[⨆_{i=1}^∞ A_i] = Σ_{i=1}^∞ P[A_i] for disjoint A_i

Law of Total Probability
  P[B] = Σ_{i=1}^n P[B | A_i] P[A_i], where Ω = ⨆_{i=1}^n A_i

3 Random Variables

Random variable: X : Ω → R
  P[a < X ≤ b] = F_X(b) − F_X(a) = ∫_a^b f_X(x) dx

Chebyshev: P[|X − E[X]| ≥ t] ≤ V[X]/t²

Chernoff: for a sum X of independent Bernoulli rvs with μ = E[X],
  P[X ≥ (1 + δ)μ] ≤ (e^δ/(1 + δ)^{1+δ})^μ, δ > −1

Jensen: E[φ(X)] ≥ φ(E[X]) for convex φ

Standard normal facts: Φ(−x) = 1 − Φ(x), φ′(x) = −xφ(x), φ″(x) = (x² − 1)φ(x).
Upper quantile of N(0, 1): z_α = Φ^{−1}(1 − α).

7 Distribution Relationships

Binomial
- X_i ∼ Bern(p) ⇒ Σ_{i=1}^n X_i ∼ Bin(n, p)
- X ∼ Bin(n, p), Y ∼ Bin(m, p) independent ⇒ X + Y ∼ Bin(n + m, p)
- lim_{n→∞} Bin(n, p) = Po(np) (n large, p small)

Gamma
- X ∼ Gamma(α, β) ⇔ X/β ∼ Gamma(α, 1)
- Gamma(α, β) ∼ Σ_{i=1}^α Exp(β) for integer α
- X_i ∼ Gamma(α_i, β) independent ⇒ Σ_i X_i ∼ Gamma(Σ_i α_i, β)
- Γ(α)/λ^α = ∫_0^∞ x^{α−1} e^{−λx} dx

Beta
- (1/B(α, β)) x^{α−1}(1 − x)^{β−1} = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1}
- E[X^k] = B(α + k, β)/B(α, β) = [(α + k − 1)/(α + β + k − 1)] E[X^{k−1}]
- Beta(1, 1) ∼ Unif(0, 1)
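A minimal Python sketch (assuming NumPy) that checks one of the Gamma relationships above by simulation: a sum of α iid Exp(β) draws should have the Gamma(α, β) mean αβ and variance αβ². The sample size and parameter values are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 4, 2.0, 100_000

# Sum of alpha iid Exp(beta) draws; Gamma(alpha, beta) has mean alpha*beta
# and variance alpha*beta^2, which the simulated sums should reproduce.
sums = rng.exponential(scale=beta, size=(n, alpha)).sum(axis=1)
print(sums.mean(), alpha * beta)      # ~8.0 vs 8.0
print(sums.var(), alpha * beta**2)    # ~16.0 vs 16.0
```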
8 Probability and Moment Generating Functions

- G_X(t) = E[t^X], |t| < 1
- M_X(t) = G_X(e^t) = E[e^{Xt}] = E[Σ_{i=0}^∞ (Xt)^i/i!] = Σ_{i=0}^∞ E[X^i] t^i/i!
- P[X = 0] = G_X(0)
- P[X = 1] = G′_X(0)
- P[X = i] = G_X^{(i)}(0)/i!
- E[X] = G′_X(1⁻)
- E[X^k] = M_X^{(k)}(0)
- E[X!/(X − k)!] = G_X^{(k)}(1⁻)
- V[X] = G″_X(1⁻) + G′_X(1⁻) − (G′_X(1⁻))²
- G_X(t) = G_Y(t) ⇒ X =_d Y

9 Multivariate Distributions

9.1 Standard Bivariate Normal
Let X, Z ∼ N(0, 1) be independent and Y = ρX + √(1 − ρ²) Z.
Joint density: f(x, y) = (1/(2π√(1 − ρ²))) exp{−(x² + y² − 2ρxy)/(2(1 − ρ²))}
Conditionals: (Y | X = x) ∼ N(ρx, 1 − ρ²) and (X | Y = y) ∼ N(ρy, 1 − ρ²)
Independence: X ⊥ Y ⇔ ρ = 0

9.2 Bivariate Normal
f(x, y) = (1/(2πσ_xσ_y√(1 − ρ²))) exp{−z/(2(1 − ρ²))}
z = ((x − μ_x)/σ_x)² + ((y − μ_y)/σ_y)² − 2ρ((x − μ_x)/σ_x)((y − μ_y)/σ_y)
Conditional mean and variance:
E[X | Y] = E[X] + ρ(σ_X/σ_Y)(Y − E[Y])
V[X | Y] = σ_X²(1 − ρ²)

9.3 Multivariate Normal
Covariance matrix Σ (precision matrix Σ^{−1}):
Σ = ( V[X_1] ... Cov[X_1, X_k] ; ... ; Cov[X_k, X_1] ... V[X_k] )
If X ∼ N(μ, Σ),
f_X(x) = (2π)^{−n/2} |Σ|^{−1/2} exp{−(1/2)(x − μ)^T Σ^{−1}(x − μ)}

Properties (a numerical sketch follows the convergence definitions below)
- Z ∼ N(0, 1) ⇒ X = μ + Σ^{1/2} Z ∼ N(μ, Σ)
- X ∼ N(μ, Σ) ⇒ Σ^{−1/2}(X − μ) ∼ N(0, 1)
- X ∼ N(μ, Σ) ⇒ AX ∼ N(Aμ, AΣA^T)
- X ∼ N(μ, Σ), a a vector of length k ⇒ a^T X ∼ N(a^T μ, a^T Σ a)

10 Convergence

Let {X_1, X_2, ...} be a sequence of rvs and let X be another rv. Let F_n denote the cdf of X_n and let F denote the cdf of X.

Types of Convergence
1. In distribution (weakly, in law): X_n →^D X: lim_{n→∞} F_n(t) = F(t) at all continuity points t of F
2. In probability: X_n →^P X: (∀ε > 0) lim_{n→∞} P[|X_n − X| > ε] = 0
3. Almost surely (strongly): X_n →^as X: P[lim_{n→∞} X_n = X] = P[ω ∈ Ω : lim_{n→∞} X_n(ω) = X(ω)] = 1
4. In quadratic mean (L²): X_n →^qm X: lim_{n→∞} E[(X_n − X)²] = 0
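As a numerical illustration of the multivariate normal properties in 9.3, here is a minimal Python sketch (assuming NumPy). A Cholesky factor plays the role of Σ^{1/2}; the particular μ, Σ, and a are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# X = mu + Sigma^{1/2} Z with Z ~ N(0, I); a Cholesky factor L serves as Sigma^{1/2}.
L = np.linalg.cholesky(Sigma)
Z = rng.standard_normal((100_000, 2))
X = mu + Z @ L.T

print(X.mean(axis=0))            # ~ mu
print(np.cov(X, rowvar=False))   # ~ Sigma

# Affine property: a^T X ~ N(a^T mu, a^T Sigma a)
a = np.array([0.5, 2.0])
proj = X @ a
print(proj.mean(), a @ mu)
print(proj.var(), a @ Sigma @ a)
```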
10.1 Law of Large Numbers (LLN)
Let {X_1, ..., X_n} be a sequence of iid rvs with E[X_1] = μ and V[X_1] < ∞.
Weak (WLLN): X̄_n →^P μ as n → ∞
Strong (SLLN): X̄_n →^as μ as n → ∞

10.2 Central Limit Theorem (CLT)
Let {X_1, ..., X_n} be a sequence of iid rvs with E[X_1] = μ and V[X_1] = σ². Then
Z_n := (X̄_n − μ)/(σ/√n) →^D Z, where Z ∼ N(0, 1); equivalently, X̄_n ≈ N(μ, σ²/n) for large n.

11 Statistical Inference

Notation: let X_1, ..., X_n iid ∼ F if not otherwise noted.

11.1 Point Estimation
- Point estimator θ̂_n of θ is a rv: θ̂_n = g(X_1, ..., X_n)
- bias(θ̂_n) = E[θ̂_n] − θ
- Consistency: θ̂_n →^P θ
- Sampling distribution: F(θ̂_n)
- Standard error: se(θ̂_n) = √(V[θ̂_n])
- Mean squared error: mse = E[(θ̂_n − θ)²] = bias(θ̂_n)² + V[θ̂_n]
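A minimal Python sketch (assuming NumPy) of the CLT: standardized means of Exp(1) samples should be approximately N(0, 1). Sample sizes and replication counts are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.0, 1.0          # Exp(1) has mean 1 and variance 1
n, reps = 50, 20_000

# Z_n = (Xbar_n - mu) / (sigma / sqrt(n)) should be approximately N(0, 1).
xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = (xbar - mu) / (sigma / np.sqrt(n))
print(z.mean(), z.std())            # ~0, ~1
print(np.mean(np.abs(z) <= 1.96))   # ~0.95
```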
p-values as evidence:
  p-value < 0.01: very strong evidence against H0
  0.01 to 0.05: strong evidence against H0
  0.05 to 0.1: weak evidence against H0
  > 0.1: little or no evidence against H0

Standard error based on the (observed) Fisher information: ŝe = √(1/J_n), where J_n = J_n(θ̂).

Wald Test
- Two-sided test of H0: θ = θ_0 vs. H1: θ ≠ θ_0
- Reject H0 when |W| > z_{α/2}, where W = (θ̂ − θ_0)/ŝe
- P[|W| > z_{α/2}] → α
- p-value = P_{θ_0}[|W| > |w|] ≈ P[|Z| > |w|] = 2Φ(−|w|)
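A minimal Python sketch of the Wald test (assuming NumPy and SciPy). The example uses a Bernoulli proportion with ŝe = √(p̂(1 − p̂)/n) and hypothetical data; both the data and H0: p = 0.5 are illustrative assumptions, not part of the original.

```python
import numpy as np
from scipy.stats import norm

def wald_test(theta_hat, theta0, se_hat, alpha=0.05):
    """Two-sided Wald test: reject H0 if |W| > z_{alpha/2}."""
    W = (theta_hat - theta0) / se_hat
    z = norm.ppf(1 - alpha / 2)
    p_value = 2 * norm.cdf(-abs(W))
    return W, p_value, abs(W) > z

# Example: Bernoulli proportion, H0: p = 0.5
x = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1])
p_hat = x.mean()
se_hat = np.sqrt(p_hat * (1 - p_hat) / len(x))
print(wald_test(p_hat, 0.5, se_hat))
```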
12.4 Parametric Bootstrap
Sample from f(x; θ̂_n) instead of from F̂_n, where θ̂_n could be the mle or the method of moments estimator.

Likelihood Ratio Test (LRT)
T(X) = sup_{θ∈Θ} L_n(θ) / sup_{θ∈Θ_0} L_n(θ) = L_n(θ̂_n)/L_n(θ̂_{n,0})
λ(X) = 2 log T(X) →^D χ²_{r−q}, where Σ_{i=1}^k Z_i² ∼ χ²_k with Z_1, ..., Z_k iid ∼ N(0, 1)
p-value = P_{θ_0}[λ(X) > λ(x)] ≈ P[χ²_{r−q} > λ(x)]

Multinomial LRT
Let p̂_n = (X_1/n, ..., X_k/n) be the mle.
T(X) = L_n(p̂_n)/L_n(p_0) = ∏_{j=1}^k (p̂_j/p_{0j})^{X_j}
λ(X) = 2 Σ_{j=1}^k X_j log(p̂_j/p_{0j}) →^D χ²_{k−1}
The approximate size-α LRT rejects H0 when λ(X) ≥ χ²_{k−1,α}.

Pearson χ² Test
T = Σ_{j=1}^k (X_j − E[X_j])²/E[X_j], where E[X_j] = np_{0j} under H0
T →^D χ²_{k−1}; p-value = P[χ²_{k−1} > T(x)]
Converges in distribution to χ²_{k−1} faster than the LRT, hence preferable for small n.

Independence Testing
- I rows, J columns, X multinomial sample of size n = I · J
- mles unconstrained: p̂_ij = X_ij/n
- mles under H0: p̂_{0ij} = p̂_{i·} p̂_{·j} = (X_{i·}/n)(X_{·j}/n)
- LRT: λ = 2 Σ_{i=1}^I Σ_{j=1}^J X_ij log(n X_ij/(X_{i·} X_{·j}))
- Pearson χ²: T = Σ_{i=1}^I Σ_{j=1}^J (X_ij − E[X_ij])²/E[X_ij]
- LRT and Pearson →^D χ²_ν, where ν = (I − 1)(J − 1)

14 Bayesian Inference

Definitions
- X^n = (X_1, ..., X_n), x^n = (x_1, ..., x_n)
- Prior density f(θ)
- Likelihood f(x^n | θ): joint density of the data. In particular, if X^n is iid, f(x^n | θ) = ∏_{i=1}^n f(x_i | θ) = L_n(θ)
- Posterior density f(θ | x^n)
- Normalizing constant c_n = f(x^n) = ∫ f(x | θ) f(θ) dθ
- Kernel: part of a density that depends on θ
- Posterior mean θ̄_n = ∫ θ f(θ | x^n) dθ = ∫ θ L_n(θ) f(θ) dθ / ∫ L_n(θ) f(θ) dθ

Bayes' Theorem
f(θ | x^n) = f(x^n | θ) f(θ)/f(x^n) = f(x^n | θ) f(θ) / ∫ f(x^n | θ) f(θ) dθ ∝ L_n(θ) f(θ)

14.1 Credible Intervals
Posterior interval: P[θ ∈ (a, b) | x^n] = ∫_a^b f(θ | x^n) dθ = 1 − α
Equal-tail credible interval: ∫_{−∞}^a f(θ | x^n) dθ = ∫_b^∞ f(θ | x^n) dθ = α/2
Highest posterior density (HPD) region R_n:
1. P[θ ∈ R_n] = 1 − α
2. R_n = {θ : f(θ | x^n) > k} for some k
R_n unimodal ⇒ R_n is an interval.

14.2 Function of Parameters
Let τ = g(θ) and A = {θ : g(θ) ≤ τ}.
Posterior CDF for τ: H(τ | x^n) = P[g(θ) ≤ τ | x^n] = ∫_A f(θ | x^n) dθ
Posterior density: h(τ | x^n) = H′(τ | x^n)
Bayesian delta method: τ | X^n ≈ N(g(θ̂), ŝe² g′(θ̂)²)
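A minimal Python sketch (assuming NumPy) of the posterior quantities in 14.1: a grid approximation of f(θ | x^n) for a Bernoulli likelihood with a flat prior, the posterior mean, and an equal-tail credible interval. The data and grid are hypothetical examples.

```python
import numpy as np

# Grid approximation of f(theta | x^n) for a Bernoulli likelihood with a flat prior.
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])          # hypothetical data
theta = np.linspace(1e-6, 1 - 1e-6, 10_001)      # grid over the parameter space
dt = theta[1] - theta[0]

log_lik = x.sum() * np.log(theta) + (len(x) - x.sum()) * np.log1p(-theta)
post = np.exp(log_lik - log_lik.max())           # flat prior: posterior ∝ likelihood
post /= post.sum() * dt                          # normalizing constant c_n

post_mean = (theta * post).sum() * dt

# Equal-tail 1 - alpha credible interval from the posterior CDF
alpha = 0.05
cdf = np.cumsum(post) * dt
lo, hi = np.interp([alpha / 2, 1 - alpha / 2], cdf, theta)
print(post_mean, (lo, hi))
```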
14.3 Priors

Choice
- Subjective Bayesianism: the prior should incorporate as much detail as possible, namely the researcher's a priori knowledge, via prior elicitation.
- Objective Bayesianism: the prior should incorporate as little detail as possible (non-informative prior).
- Robust Bayesianism: consider various priors and determine the sensitivity of our inferences to changes in the prior.

Types
- Flat: f(θ) ∝ constant
- Proper: ∫ f(θ) dθ = 1
- Improper: ∫ f(θ) dθ = ∞
- Jeffreys' prior (transformation-invariant): f(θ) ∝ √(I(θ)); in the multiparameter case f(θ) ∝ √(det I(θ))
- Conjugate: f(θ) and f(θ | x^n) belong to the same parametric family

14.3.1 Conjugate Priors

Discrete likelihood (Likelihood | Conjugate prior | Posterior hyperparameters):
- Bernoulli(p) | Beta(α, β) | α + Σ_{i=1}^n x_i, β + n − Σ_{i=1}^n x_i
- Binomial(p) | Beta(α, β) | α + Σ_{i=1}^n x_i, β + Σ_{i=1}^n N_i − Σ_{i=1}^n x_i
- Negative Binomial(p) | Beta(α, β) | α + rn, β + Σ_{i=1}^n x_i
- Poisson(λ) | Gamma(α, β) | α + Σ_{i=1}^n x_i, β + n
- Multinomial(p) | Dirichlet(α) | α + Σ_{i=1}^n x^{(i)}
- Geometric(p) | Beta(α, β) | α + n, β + Σ_{i=1}^n x_i

Continuous likelihood (subscript c denotes a constant):
- Uniform(0, θ) | Pareto(x_m, k) | max{x_{(n)}, x_m}, k + n
- Exponential(λ) | Gamma(α, β) | α + n, β + Σ_{i=1}^n x_i
- Normal(μ, σ_c²) | Normal(μ_0, σ_0²) | (μ_0/σ_0² + Σ_{i=1}^n x_i/σ_c²)/(1/σ_0² + n/σ_c²), (1/σ_0² + n/σ_c²)^{−1}
- Normal(μ_c, σ²) | Scaled Inverse Chi-square(ν, σ_0²) | ν + n, (νσ_0² + Σ_{i=1}^n (x_i − μ_c)²)/(ν + n)
- Normal(μ, σ²) | Normal-scaled Inverse Gamma(λ, ν, α, β) | (νλ + n x̄)/(ν + n), ν + n, α + n/2, β + (1/2) Σ_{i=1}^n (x_i − x̄)² + nν(x̄ − λ)²/(2(n + ν))
- MVN(μ, Σ_c) | MVN(μ_0, Σ_0) | (Σ_0^{−1} + nΣ_c^{−1})^{−1}(Σ_0^{−1}μ_0 + nΣ_c^{−1} x̄), (Σ_0^{−1} + nΣ_c^{−1})^{−1}
- MVN(μ_c, Σ) | Inverse-Wishart(ν, Ψ) | n + ν, Ψ + Σ_{i=1}^n (x_i − μ_c)(x_i − μ_c)^T
- Pareto(x_{m,c}, k) | Gamma(α, β) | α + n, β + Σ_{i=1}^n log(x_i/x_{m,c})
- Pareto(x_m, k_c) | Pareto(x_0, k_0) | x_0, k_0 − k_c n (where k_0 > k_c n)
- Gamma(α_c, β) | Gamma(α_0, β_0) | α_0 + nα_c, β_0 + Σ_{i=1}^n x_i

14.4 Bayesian Testing

If H0: θ ∈ Θ_0:
Prior probability P[H0] = ∫_{Θ_0} f(θ) dθ
Posterior probability P[H0 | x^n] = ∫_{Θ_0} f(θ | x^n) dθ

Let H_0, ..., H_{K−1} be K hypotheses. Suppose θ ∼ f(θ | H_k):
P[H_k | x^n] = f(x^n | H_k) P[H_k] / Σ_{k=1}^K f(x^n | H_k) P[H_k]
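A minimal Python sketch (assuming NumPy) of the first row of the conjugate-prior table above: a Beta(α, β) prior updated with Bernoulli observations. The prior hyperparameters and the data are hypothetical examples.

```python
import numpy as np

def beta_bernoulli_update(alpha, beta, x):
    """Conjugate update from the table: Beta(alpha, beta) prior, Bernoulli data."""
    x = np.asarray(x)
    return alpha + x.sum(), beta + len(x) - x.sum()

x = [1, 0, 1, 1, 1, 0, 1]            # hypothetical observations
a_post, b_post = beta_bernoulli_update(2.0, 2.0, x)
print(a_post, b_post)                # posterior hyperparameters
print(a_post / (a_post + b_post))    # posterior mean of p
```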
Marginal Likelihood
f(x^n | H_i) = ∫ f(x^n | θ, H_i) f(θ | H_i) dθ

Posterior Odds (of H_i relative to H_j)
P[H_i | x^n]/P[H_j | x^n] = [f(x^n | H_i)/f(x^n | H_j)] × [P[H_i]/P[H_j]], i.e., Bayes factor BF_ij times prior odds

Bayes Factor (log10 BF10 | BF10 | evidence):
  0 to 0.5 | 1 to 1.5 | Weak
  0.5 to 1 | 1.5 to 10 | Moderate
  1 to 2 | 10 to 100 | Strong
  > 2 | > 100 | Decisive

p* = (p/(1 − p)) BF10 / (1 + (p/(1 − p)) BF10), where p = P[H1] and p* = P[H1 | x^n]

16.1 The Bootstrap
1. Estimate V_F[T_n] with V_{F̂_n}[T_n].
2. Approximate V_{F̂_n}[T_n] using simulation:
   (a) Repeat the following B times to get T*_{n,1}, ..., T*_{n,B}, an iid sample from the sampling distribution implied by F̂_n.
   (b) v_boot = V_{F̂_n} = (1/B) Σ_{b=1}^B (T*_{n,b} − (1/B) Σ_{r=1}^B T*_{n,r})²

16.1.1 Bootstrap Confidence Intervals
Normal-based interval: T_n ± z_{α/2} ŝe_boot

16.2 Rejection Sampling (from the posterior)
Algorithm
1. Draw θ^cand ∼ f(θ)
2. Generate u ∼ Unif(0, 1)
3. Accept θ^cand if u ≤ L_n(θ^cand)/L_n(θ̂_n)

16.3 Importance Sampling
Sample from an importance function g rather than the target density h.
Algorithm to obtain an approximation to E[q(θ) | x^n]:
1. Sample from the prior θ_1, ..., θ_B iid ∼ f(θ)
2. For each i = 1, ..., B, calculate w_i = L_n(θ_i)/Σ_{i=1}^B L_n(θ_i)
3. E[q(θ) | x^n] ≈ Σ_{i=1}^B q(θ_i) w_i

17 Decision Theory

Definitions
- Unknown quantity affecting our decision: θ ∈ Θ
- Loss function L(θ, θ̂(x)): cost of using the estimate θ̂(x) when θ is true

(Frequentist) Risk
R(θ, θ̂) = ∫ L(θ, θ̂(x)) f(x | θ) dx = E_{X|θ}[L(θ, θ̂(X))]

Bayes Risk
r(f, θ̂) = ∫∫ L(θ, θ̂(x)) f(x, θ) dx dθ = E_{θ,X}[L(θ, θ̂(X))]
r(f, θ̂) = E_θ[E_{X|θ}[L(θ, θ̂(X))]] = E_θ[R(θ, θ̂)]
r(f, θ̂) = E_X[E_{θ|X}[L(θ, θ̂(X))]] = E_X[r(θ̂ | X)]

17.2 Admissibility
θ̂′ dominates θ̂ if
  ∀θ: R(θ, θ̂′) ≤ R(θ, θ̂) and ∃θ: R(θ, θ̂′) < R(θ, θ̂)
θ̂ is inadmissible if there is at least one other estimator θ̂′ that dominates it. Otherwise it is called admissible.
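A minimal Python sketch (assuming NumPy) of the bootstrap variance estimate and the normal-based interval of 16.1 and 16.1.1, using the sample median as T_n on hypothetical data; the statistic, data, and B are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=200)   # hypothetical sample
T = np.median                              # statistic T_n of interest

# Nonparametric bootstrap: resample from F_hat_n and recompute T_n B times.
B = 2000
t_star = np.array([T(rng.choice(x, size=len(x), replace=True)) for _ in range(B)])
se_boot = t_star.std(ddof=1)

# Normal-based interval: T_n +/- z_{alpha/2} * se_boot
z = 1.96
print(T(x), (T(x) - z * se_boot, T(x) + z * se_boot))
```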
17.3 Bayes Rule
Bayes rule (or Bayes estimator): r(f, θ̂) = inf_{θ̃} r(f, θ̃)
If θ̂(x) minimizes r(θ̂ | x) for every x, then r(f, θ̂) = ∫ r(θ̂ | x) f(x) dx.

Theorems
- Squared error loss: posterior mean
- Absolute error loss: posterior median
- Zero-one loss: posterior mode

17.4 Minimax Rules
Maximum risk: R̄(θ̂) = sup_θ R(θ, θ̂), R̄(a) = sup_θ R(θ, a)
Minimax rule: sup_θ R(θ, θ̂) = inf_{θ̃} R̄(θ̃) = inf_{θ̃} sup_θ R(θ, θ̃)
θ̂ = Bayes rule and ∃c: R(θ, θ̂) = c ⇒ θ̂ is minimax
Least favorable prior: θ̂^f = Bayes rule and R(θ, θ̂^f) ≤ r(f, θ̂^f) for all θ ⇒ θ̂^f is minimax and f is least favorable.

18 Linear Regression

Definitions
- Response variable Y
- Covariate X (aka predictor variable or feature)

18.1 Simple Linear Regression
Model: Y_i = β_0 + β_1 X_i + ε_i, with E[ε_i | X_i] = 0 and V[ε_i | X_i] = σ²
Fitted line: r̂(x) = β̂_0 + β̂_1 x
Predicted (fitted) values: Ŷ_i = r̂(X_i)
Residuals: ε̂_i = Y_i − Ŷ_i = Y_i − (β̂_0 + β̂_1 X_i)
Residual sums of squares (rss): rss(β̂_0, β̂_1) = Σ_{i=1}^n ε̂_i²

Least squares estimates: β̂ = (β̂_0, β̂_1)^T minimizes rss:
β̂_0 = Ȳ_n − β̂_1 X̄_n
β̂_1 = Σ_{i=1}^n (X_i − X̄_n)(Y_i − Ȳ_n)/Σ_{i=1}^n (X_i − X̄_n)² = (Σ_{i=1}^n X_i Y_i − nX̄Ȳ)/(Σ_{i=1}^n X_i² − nX̄²)

E[β̂ | X^n] = (β_0, β_1)^T
V[β̂ | X^n] = (σ²/(n s_X²)) ( (1/n)Σ_{i=1}^n X_i²   −X̄_n ;  −X̄_n   1 )
ŝe(β̂_0) = (σ̂/(s_X √n)) √(Σ_{i=1}^n X_i²/n)
ŝe(β̂_1) = σ̂/(s_X √n)
where s_X² = (1/n)Σ_{i=1}^n (X_i − X̄_n)² and σ̂² = (1/(n − 2)) Σ_{i=1}^n ε̂_i² (an unbiased estimate of σ²).

Further properties:
- Consistency: β̂_0 →^P β_0 and β̂_1 →^P β_1
- Asymptotic normality: (β̂_0 − β_0)/ŝe(β̂_0) →^D N(0, 1) and (β̂_1 − β_1)/ŝe(β̂_1) →^D N(0, 1)
- Approximate 1 − α confidence intervals for β_0 and β_1: β̂_0 ± z_{α/2} ŝe(β̂_0) and β̂_1 ± z_{α/2} ŝe(β̂_1)
- The Wald test for H0: β_1 = 0 vs. H1: β_1 ≠ 0: reject H0 if |W| > z_{α/2}, where W = β̂_1/ŝe(β̂_1).

R² = Σ_{i=1}^n (Ŷ_i − Ȳ)²/Σ_{i=1}^n (Y_i − Ȳ)² = 1 − Σ_{i=1}^n ε̂_i²/Σ_{i=1}^n (Y_i − Ȳ)² = 1 − rss/tss
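A minimal Python sketch (assuming NumPy) of the least squares formulas above, including the standard errors and the Wald statistic for H0: β_1 = 0. The data-generating parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = rng.uniform(0, 10, n)
Y = 1.0 + 0.5 * X + rng.normal(0, 1, n)       # hypothetical data, beta0=1, beta1=0.5

b1 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
b0 = Y.mean() - b1 * X.mean()

resid = Y - (b0 + b1 * X)
sigma2_hat = resid @ resid / (n - 2)          # unbiased estimate of sigma^2
sX2 = ((X - X.mean()) ** 2).mean()            # s_X^2

se_b1 = np.sqrt(sigma2_hat) / (np.sqrt(sX2) * np.sqrt(n))
se_b0 = se_b1 * np.sqrt((X ** 2).mean())

W = b1 / se_b1                                # Wald statistic for H0: beta1 = 0
print((b0, b1), (se_b0, se_b1), W)
```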
Likelihood
L = ∏_{i=1}^n f(X_i, Y_i) = ∏_{i=1}^n f_X(X_i) ∏_{i=1}^n f_{Y|X}(Y_i | X_i) = L_1 · L_2
L_1 = ∏_{i=1}^n f_X(X_i)
L_2 = ∏_{i=1}^n f_{Y|X}(Y_i | X_i) ∝ σ^{−n} exp{−(1/(2σ²)) Σ_{i=1}^n (Y_i − (β_0 + β_1 X_i))²}
Under the assumption of normality, the least squares estimator is also the mle, with σ̂² = (1/n) Σ_{i=1}^n ε̂_i².

Multiple regression: if the (k × k) matrix X^T X is invertible,
β̂ = (X^T X)^{−1} X^T Y
V[β̂ | X^n] = σ²(X^T X)^{−1}
β̂ ≈ N(β, σ²(X^T X)^{−1})
Estimated regression function: r̂(x) = Σ_{j=1}^k β̂_j x_j
Unbiased estimate for σ²: σ̂² = (1/(n − k)) Σ_{i=1}^n ε̂_i², with ε̂ = Xβ̂ − Y

Model selection: the training error is a downward-biased estimate of the prediction risk:
E[R̂_tr(S)] < R(S)
bias(R̂_tr(S)) = E[R̂_tr(S)] − R(S) = −2 Σ_{i=1}^n Cov[Ŷ_i, Y_i]

Adjusted R²: R̄²(S) = 1 − ((n − 1)/(n − k)) (rss/tss)
Mallows' C_p statistic: R̂(S) = R̂_tr(S) + 2kσ̂² = lack of fit + complexity penalty
Akaike information criterion (AIC): AIC(S) = ℓ_n(β̂_S, σ̂_S²) − k
Bayesian information criterion (BIC): BIC(S) = ℓ_n(β̂_S, σ̂_S²) − (k/2) log n
Validation and training: R̂_V(S) = Σ_{i=1}^m (Ŷ_i*(S) − Y_i*)², m = |{validation data}|, often n/4 or n/2
Leave-one-out cross-validation:
R̂_CV(S) = Σ_{i=1}^n (Y_i − Ŷ_{(i)})² = Σ_{i=1}^n [(Y_i − Ŷ_i(S))/(1 − U_ii(S))]²,
where U(S) = X_S(X_S^T X_S)^{−1} X_S^T is the hat matrix.

Density estimation (frequentist risk of an estimator f̂_n of f):
R(f, f̂_n) = E[L(f, f̂_n)] = ∫ b²(x) dx + ∫ v(x) dx
b(x) = E[f̂_n(x)] − f(x)
v(x) = V[f̂_n(x)]

19.1.1 Histograms

Definitions
- Number of bins m
- Binwidth h = 1/m
- Bin B_j has ν_j observations
- Define p̂_j = ν_j/n and p_j = ∫_{B_j} f(u) du

Histogram estimator: f̂_n(x) = Σ_{j=1}^m (p̂_j/h) I(x ∈ B_j)
E[f̂_n(x)] = p_j/h
V[f̂_n(x)] = p_j(1 − p_j)/(nh²)
R(f̂_n, f) ≈ (h²/12) ∫ (f′(u))² du + 1/(nh)
h* = (6/(n ∫ (f′(u))² du))^{1/3}
R*(f̂_n, f) ≈ C/n^{2/3}, with C = (3/4)^{2/3} (∫ (f′(u))² du)^{1/3}
Cross-validation estimate of E[J(h)]:
Ĵ_CV(h) = ∫ f̂_n²(x) dx − (2/n) Σ_{i=1}^n f̂_{(i)}(X_i) = 2/((n − 1)h) − ((n + 1)/((n − 1)h)) Σ_{j=1}^m p̂_j²
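A minimal Python sketch (assuming NumPy) of the histogram cross-validation score Ĵ_CV(h) above, used to pick the number of bins on data supported on [0, 1]; the data and the candidate range of m are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.beta(2, 5, size=500)          # hypothetical data already in [0, 1]
n = len(x)

def J_cv(m):
    """Cross-validation score for a histogram with m bins on [0, 1] (h = 1/m)."""
    h = 1.0 / m
    counts, _ = np.histogram(x, bins=m, range=(0.0, 1.0))
    p_hat = counts / n
    return 2 / ((n - 1) * h) - (n + 1) / ((n - 1) * h) * np.sum(p_hat ** 2)

scores = {m: J_cv(m) for m in range(1, 101)}
m_best = min(scores, key=scores.get)
print(m_best, scores[m_best])
```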
19.1.2 Kernel Density Estimator (KDE)

Kernel K:
- K(x) ≥ 0
- ∫ K(x) dx = 1
- ∫ x K(x) dx = 0
- ∫ x² K(x) dx ≡ σ_K² > 0

KDE: f̂_n(x) = (1/n) Σ_{i=1}^n (1/h) K((x − X_i)/h)
R(f, f̂_n) ≈ (1/4)(hσ_K)⁴ ∫ (f″(x))² dx + (1/(nh)) ∫ K²(x) dx
h* = c_1^{−2/5} c_2^{1/5} c_3^{−1/5} n^{−1/5}, where c_1 = σ_K², c_2 = ∫ K²(x) dx, c_3 = ∫ (f″(x))² dx
R*(f, f̂_n) = c_4 n^{−4/5}, where c_4 = (5/4) c_1^{2/5} c_2^{4/5} c_3^{1/5} and C(K) = c_1^{2/5} c_2^{4/5} depends only on the kernel

Non-parametric regression:
k-nearest neighbor estimator: r̂(x) = (1/k) Σ_{i: x_i ∈ N_k(x)} Y_i, where N_k(x) = {k values of x_1, ..., x_n closest to x}
Nadaraya-Watson kernel estimator: r̂(x) = Σ_{i=1}^n w_i(x) Y_i, with w_i(x) = K((x − x_i)/h)/Σ_{j=1}^n K((x − x_j)/h) ∈ [0, 1]
R(r̂_n, r) ≈ (h⁴/4)(∫ x²K(x) dx)² ∫ (r″(x) + 2r′(x) f′(x)/f(x))² dx + (σ² ∫ K²(x) dx)/(nh) ∫ dx/f(x)
The optimal bandwidth h* ∝ n^{−1/5} gives R*(r̂_n, r) ∝ n^{−4/5}.
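A minimal Python sketch (assuming NumPy) of the kernel density estimator f̂_n above with a Gaussian kernel; the data and the normal-reference bandwidth rule (h ∝ n^{−1/5}) are illustrative choices, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(0, 1, size=400)        # hypothetical sample

def kde(x, data, h):
    """f_hat_n(x) = (1/n) sum_i (1/h) K((x - X_i)/h) with a Gaussian kernel K."""
    u = (x[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h

grid = np.linspace(-4, 4, 201)
h = 1.06 * X.std() * len(X) ** (-1 / 5)   # normal-reference bandwidth, h ∝ n^{-1/5}
f_hat = kde(grid, X, h)
print(h, f_hat.sum() * (grid[1] - grid[0]))   # bandwidth and total mass (~1)
```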
Markov chain transition matrices (Section 20 remnants):
P_{m+n} = P_m P_n
P_n = P · P_{n−1} = P^n
Marginal probability: μ_n = μ_0 P^n, where μ_0 is the initial distribution.

Sample autocorrelation function: ρ̂(h) = γ̂(h)/γ̂(0)
Sample cross-covariance function: γ̂_xy(h) = (1/n) Σ_{t=1}^{n−h} (x_{t+h} − x̄)(y_t − ȳ)
Sample cross-correlation: ρ̂_xy(h) = γ̂_xy(h)/√(γ̂_x(0) γ̂_y(0))
Properties:
- σ_{ρ̂_x(h)} = 1/√n if x_t is white noise
- σ_{ρ̂_xy(h)} = 1/√n if x_t or y_t is white noise

Moving average smoother: for the symmetric filter (1/(2k+1)) Σ_{j=−k}^{k} x_{t−j}, a linear trend function μ_t = β_0 + β_1 t passes without distortion.

21.3 Non-Stationary Time Series

Classical decomposition model: x_t = μ_t + s_t + w_t, where
- μ_t = trend
- s_t = seasonal component
- w_t = random noise term
Differencing: if μ_t = β_0 + β_1 t, then ∇μ_t = β_1 (differencing removes a linear trend).

21.4 ARIMA Models

Autoregressive polynomial: φ(z) = 1 − φ_1 z − ... − φ_p z^p, z ∈ C, φ_p ≠ 0
Autoregressive operator: φ(B) = 1 − φ_1 B − ... − φ_p B^p
Autoregressive model of order p, AR(p): x_t = φ_1 x_{t−1} + ... + φ_p x_{t−p} + w_t ⇔ φ(B)x_t = w_t

AR(1): x_t = φ^k x_{t−k} + Σ_{j=0}^{k−1} φ^j w_{t−j} → (k → ∞, |φ| < 1) Σ_{j=0}^∞ φ^j w_{t−j} (a linear process)
E[x_t] = Σ_{j=0}^∞ φ^j E[w_{t−j}] = 0
γ(h) = Cov[x_{t+h}, x_t] = σ_w² φ^h/(1 − φ²)
ρ(h) = γ(h)/γ(0) = φ^h
ρ(h) = φ ρ(h − 1), h = 1, 2, ...
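A minimal Python sketch (assuming NumPy) simulating an AR(1) process and comparing its sample ACF with the theoretical ρ(h) = φ^h above; φ, σ_w, and the series length are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(7)
phi, n = 0.7, 5000
w = rng.normal(0, 1, n)

# Simulate AR(1): x_t = phi * x_{t-1} + w_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t]

def acf(x, h):
    """Sample autocorrelation rho_hat(h) = gamma_hat(h) / gamma_hat(0)."""
    x = x - x.mean()
    return (x[h:] * x[:-h]).sum() / (x * x).sum()

for h in (1, 2, 3):
    print(h, acf(x, h), phi ** h)   # sample ACF vs. theoretical phi^h
```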
Moving average polynomial: θ(z) = 1 + θ_1 z + ... + θ_q z^q, z ∈ C, θ_q ≠ 0
Moving average operator: θ(B) = 1 + θ_1 B + ... + θ_q B^q

MA(q) (moving average model of order q): x_t = w_t + θ_1 w_{t−1} + ... + θ_q w_{t−q} ⇔ x_t = θ(B)w_t
E[x_t] = Σ_{j=0}^q θ_j E[w_{t−j}] = 0
γ(h) = Cov[x_{t+h}, x_t] = σ_w² Σ_{j=0}^{q−h} θ_j θ_{j+h} for 0 ≤ h ≤ q; 0 for h > q

MA(1): x_t = w_t + θ w_{t−1}
γ(h) = (1 + θ²)σ_w² for h = 0; θσ_w² for h = 1; 0 for h > 1
ρ(h) = θ/(1 + θ²) for h = 1; 0 for h > 1

ARMA(p, q): x_t = φ_1 x_{t−1} + ... + φ_p x_{t−p} + w_t + θ_1 w_{t−1} + ... + θ_q w_{t−q} ⇔ φ(B)x_t = θ(B)w_t

Partial autocorrelation function (PACF):
x_i^{h−1}: regression of x_i on {x_{h−1}, x_{h−2}, ..., x_1}
φ_hh = corr(x_h − x_h^{h−1}, x_0 − x_0^{h−1}), h ≥ 2
E.g., φ_11 = corr(x_1, x_0) = ρ(1)

ARIMA(p, d, q): ∇^d x_t = (1 − B)^d x_t is ARMA(p, q), i.e., φ(B)(1 − B)^d x_t = θ(B)w_t

Exponentially weighted moving average (EWMA): x_t = x_{t−1} + w_t − λw_{t−1}
x_t = Σ_{j=1}^∞ (1 − λ)λ^{j−1} x_{t−j} + w_t when |λ| < 1
x̃_{n+1} = (1 − λ)x_n + λx̃_n

Seasonal ARIMA: denoted ARIMA(p, d, q) × (P, D, Q)_s:
Φ_P(B^s) φ(B) ∇_s^D ∇^d x_t = δ + Θ_Q(B^s) θ(B) w_t

21.4.1 Causality and Invertibility
ARMA(p, q) is causal (future-independent) ⇔ ∃{ψ_j}: Σ_{j=0}^∞ |ψ_j| < ∞ such that x_t = Σ_{j=0}^∞ ψ_j w_{t−j} = ψ(B)w_t
ARMA(p, q) is invertible ⇔ ∃{π_j}: Σ_{j=0}^∞ |π_j| < ∞ such that π(B)x_t = Σ_{j=0}^∞ π_j x_{t−j} = w_t

Properties:
ARMA(p, q) causal ⇔ roots of φ(z) lie outside the unit circle; ψ(z) = Σ_{j=0}^∞ ψ_j z^j = θ(z)/φ(z), |z| ≤ 1
ARMA(p, q) invertible ⇔ roots of θ(z) lie outside the unit circle; π(z) = Σ_{j=0}^∞ π_j z^j = φ(z)/θ(z), |z| ≤ 1

Behavior of the ACF and PACF for causal and invertible ARMA models:
         AR(p)                  MA(q)                  ARMA(p, q)
ACF      tails off              cuts off after lag q   tails off
PACF     cuts off after lag p   tails off              tails off

21.5 Spectral Analysis

Periodic process: x_t = A cos(2πωt + φ) = U_1 cos(2πωt) + U_2 sin(2πωt)
Frequency index ω (cycles per unit time), period 1/ω
Amplitude A, phase φ
U_1 = A cos φ and U_2 = A sin φ, often normally distributed rvs

Periodic mixture: x_t = Σ_{k=1}^q (U_{k1} cos(2πω_k t) + U_{k2} sin(2πω_k t))
U_{k1}, U_{k2}, for k = 1, ..., q, are independent zero-mean rvs with variances σ_k²
γ(h) = Σ_{k=1}^q σ_k² cos(2πω_k h)
γ(0) = E[x_t²] = Σ_{k=1}^q σ_k²

Discrete Fourier Transform (DFT): d(ω_j) = n^{−1/2} Σ_{t=1}^n x_t e^{−2πiω_j t}
Fourier/fundamental frequencies: ω_j = j/n
Inverse DFT: x_t = n^{−1/2} Σ_{j=0}^{n−1} d(ω_j) e^{2πiω_j t}
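A minimal Python sketch (assuming NumPy) of the DFT above with the n^{−1/2} scaling: the squared modulus |d(ω_j)|² peaks at the frequency of an injected cosine. The signal, noise level, and series length are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 256
t = np.arange(n)
omega = 10 / n                                   # a fundamental frequency j/n
x = 2.0 * np.cos(2 * np.pi * omega * t) + rng.normal(0, 0.5, n)

# DFT with the n^{-1/2} scaling: d(omega_j) = n^{-1/2} sum_t x_t e^{-2*pi*i*omega_j*t}
d = np.fft.fft(x) / np.sqrt(n)
periodogram = np.abs(d) ** 2
j_peak = np.argmax(periodogram[1 : n // 2]) + 1  # skip the zero frequency
print(j_peak / n, omega)                         # recovered vs. true frequency
```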
22.4 Combinatorics

Sampling k out of n:
                w/o replacement                                   w/ replacement
ordered         n^(k) = ∏_{i=0}^{k−1}(n − i) = n!/(n − k)!         n^k
unordered       C(n, k) = n^(k)/k! = n!/(k!(n − k)!)               C(n − 1 + k, k) = C(n − 1 + k, n − 1)

References
[3] R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications: With R Examples. Springer, 2006.
[4] A. Steger. Diskrete Strukturen, Band 1: Kombinatorik, Graphentheorie, Algebra. Springer, 2001.
[5] A. Steger. Diskrete Strukturen, Band 2: Wahrscheinlichkeitstheorie und Statistik. Springer, 2002.
[6] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2003.
Univariate distribution relationships, courtesy of Leemis and McQueston [2].