Time Series Analysis


STA6856

Time Series Analysis


Achraf Cohen
[email protected]

Department of Mathematics and Statistics


The University of West Florida

Spring 2021

1/106
Time Series Analysis
This course covers Time Series models.
We will use R/RStudio
3-4 assignments
Two take-home exams
These slides are being updated! If you catch a typo/error, please
send an email to [email protected]! Thank you!

2/106
Time Series Analysis
Definitions

A Time Series is a collection of observations xt made sequentially


in time.
A discrete-time time series is a collection of observations xt in
which the set T0 of times at which observations are made is a
discrete set.
A continuous-time time series is a collection of observations xt
made continuously over some time interval.
This course covers discrete-time time series

3/106
Time Series Analysis
Introduction

Time
We mean by time:
Seconds, hours, years,...
Spatial: 1st machine in a row, 2nd machine,...
Depth: one millimetre down, two millimetre down,...
The important point is to have an ordered variable, like time,
whose values carry a meaning of direction: from a given
observation, past, present, and future have a meaning.

4/106
Time Series Analysis
Introduction

Plotting a time series is an important early step in its analysis


In general, a plot can reveal:
Trend: upward or downward pattern that might be
extrapolated into the future
Periodicity: Repetition of behavior in a regular pattern
Seasonality: Periodic behavior with a known period (hourly,
monthly, every 2 months...)
Heteroskedasticity: changing variance
Dependence: positive (successive observations are similar) or
negative (successive observations are dissimilar)
Missing data, outliers, breaks...

5/106
Time Series Analysis
Introduction
Example 1: U.S.A. population at ten-year intervals from 1790-1990
There is an upward trend
There is a slight change in shape/structure
Nonlinear behavior

[Figure: Population of the U.S.A. (Millions), 1790-1990]

6/106
Time Series Analysis
Introduction
Example 2: Johnson & Johnson Quarterly Earnings
There are 84 quarters (21 years) measured from the 1st quarter of 1960 to the
last quarter of 1980.
Note the gradually increasing underlying trend and the rather regular variation
superimposed on the trend that seems to repeat over quarters.

7/106
Time Series Analysis
Introduction
Example 3: Global Warming
The data are the global mean land-ocean temperature index from 1880 to
2009. We note an apparent upward trend in the series during the latter part of
the 20th century that has been used as an argument for the global warming
hypothesis (whether the overall trend is natural or whether it is caused by some
human-induced interference)

8/106
Time Series Analysis
Introduction
Example 4: Airline passengers from 1949-1961
Trend? Seasonality? Heteroskedasticity? ...
Upward trend, seasonality on a 12 month interval, increasing variability
[Figure: Monthly totals of international airline passengers, 1949-1961]

9/106
Time Series Analysis
Introduction
Example 5: Monthly Employed persons from 1980-1991
Trend? Seasonality? Heteroskedasticity? ...
Upward trend, seasonality with a structural break

10/106
Time Series Analysis
Introduction
Example 6: Monthly Beer Production in Australia
Trend? Seasonality? Heteroskedasticity? breaks?... no trend in the last 100
months, no clear seasonality

11/106
Time Series Analysis
Introduction
Example 7: Annual number of Canadian Lynx trapped near
McKenzie River
Trend? Seasonality? Heteroskedasticity? breaks?... no trend, no clear
seasonality as it does not correspond to a known period, but there is periodicity

12/106
Time Series Analysis
Introduction
Example 8: Yield from a controlled chemical batch process
Trend? Seasonality? Heteroskedasticity? breaks?... Negative dependence:
successive observations tend to lie on opposite sides of the mean.

13/106
Time Series Analysis
Introduction
Example 9: Monthly real exchange rates between U.S and Canada
Trend? Seasonality? Heteroskedasticity? breaks?...

No obvious seasonality or trend. Hard to make a long-range prediction. Positive
dependence: successive observations tend to lie on the same side of the mean.

14/106
Time Series Analysis
Introduction

Remarks
The issue of distinguishing between dependence and trend is
difficult: there is no unique decomposition of a series into trend
and dependence behaviors.
The issue that hampers this question: we have only one realization.
If we had many realizations, we might be able to average them to
determine the trend.

15/106
Time Series Analysis
Introduction

Objectives
What do we hope to achieve with time series analysis?
Provide a model of the data (testing of scientific hypothesis,
etc.)
Predict future values (very common goal of analysis)
Produce a compact description of the data (a good model can
be used for "data compression")

16/106
Time Series Analysis
Introduction

Modeling
We take the approach that the data are a realization of random
variables. However, many statistical tools are based on assuming the
r.v.'s are IID.
In time series:
R.V. are usually not independent (affected by trend and
seasonality)
Variance may change significantly
R.V. are usually not identically distributed
The first goal in time series modeling is to reduce the analysis
needed to a simpler case: eliminate trend, seasonality, and
heteroskedasticity, then model the remainder as dependent but
identically distributed
17/106
Time Series Analysis
Introduction

The probabilistic model


A complete probabilistic model/description of a time series Xt
observed as a collection of n random variables at times t1 ,t2 ,. . . , tn
for any positive integer n is provided by the joint probability
distribution,

F (C1 , C2 , ..., Cn ) = P(X1 ≤ C1 , ..., Xn ≤ Cn )

This is generally difficult to write down, unless the variables are
jointly normal.
Thus, we look for other statistical tools => quantifying
dependencies

18/106
Time Series Analysis
Introduction

Recall the basic concepts


X and Y are r.v.’s with finite variance

Cov (X , Y ) = E ((X − E (X ))(Y − E (Y )))

and correlation

corr (X , Y ) = Cov (X , Y ) / (SX SY )

r.v.’s with zero correlation are uncorrelated


Uncorrelated does not imply Independence
Linear combination?

19/106
Time Series Analysis
Some properties of Expectation and Variances/Covariances

X, Y, W, and Z are r.v.’s


Cov(Y, Z) = E(YZ) − E(Y)E(Z)
Var(X) = Cov(X, X) = E(X²) − (E(X))²
Var(a + bX) = b² Var(X)
Cov(aX + bY, cZ + dW) = ac Cov(X, Z) + ad Cov(X, W) + bc Cov(Y, Z) + bd Cov(Y, W)
E(∑ Xi) = ∑ E(Xi)
We will make use of these rules a lot! Remember them :)

20/106
Time Series Analysis
Introduction

A time series model for the observed data xt


The mean function µX (t) = E (Xt )
The Covariance function
γX (r , s) = E ((Xr − µX (r ))(Xs − µX (s))) for all integers r and s
The focus will be to determine the mean function and the
Covariance function to define the time series model.

21/106
Time Series Analysis
Some zero-Mean Models
iid Noise
The simplest model for a time series: no trend or seasonal component and in
which the observations are IID with zero mean.
We can write, for any integer n and real numbers x1 , x2 ,...,xn ,

P(X1 ≤ x1 , ..., Xn ≤ xn ) = P(X1 ≤ x1 )...P(Xn ≤ xn )

It plays an important role as a building block for more complicated time series
models

[Figure: simulated white noise series, n = 500]
22/106
Time Series Analysis
Some zero-Mean Models
Random Walk
The random walk {St }, t = 0, 1, 2, .... is obtained by cumulatively summing iid
random variables, S0 = 0

St = X1 + X2 + · · · + Xt , t = 1, 2, ....

where Xt is iid noise. It plays an important role as a building block for more
complicated time series models

[Figure: simulated random walk, n = 200]
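As an illustration (not part of the slides), a random walk can be simulated in R by cumulatively summing iid noise; the length and seed below are arbitrary choices:

```r
# Minimal sketch: random walk S_t = X_1 + ... + X_t with X_t iid N(0,1)
set.seed(1)                      # for reproducibility
x <- cumsum(rnorm(200))          # cumulative sum of iid noise
plot.ts(x, main = "Random walk")
```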

23/106
Time Series Analysis
Models with Trend

[Figure: Population of the U.S.A. (Millions), 1790-1990]

In this case a zero-mean model for the data is clearly inappropriate. The graph
suggests trying a model of the form:

Xt = mt + Yt

where mt is a function known as the trend component and Yt has a zero mean.
Estimating mt ?
24/106
Time Series Analysis
Models with Trend

mt can be estimated using a least squares regression procedure


(quadratic regression)
The estimated trend component m̂t provides a natural predictor of
future values of Xt
X̂t = m̂t (since Yt has mean zero)
Example with R
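A minimal sketch of such a quadratic least-squares fit in base R, using simulated data (the series and coefficients below are made up purely for illustration):

```r
set.seed(1)
t <- 1:100
x <- 5 + 0.3 * t + 0.02 * t^2 + rnorm(100, sd = 5)   # simulated series with quadratic trend
fit   <- lm(x ~ t + I(t^2))                          # least-squares quadratic regression
m_hat <- fitted(fit)                                 # estimated trend component m_t
plot(t, x, type = "l"); lines(t, m_hat, col = "red")
```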

25/106
Time Series Analysis
Models with Seasonality

In this case a zero-mean model for the data is clearly inappropriate. The graph
suggests trying a model of the form:

Xt = St + Yt

where St is a function known as the seasonal component and Yt has a zero
mean. Estimating St ?
Example with R
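One rough way to estimate St in R is harmonic regression on a detrended series; this sketch assumes a period of d = 12 and uses the built-in AirPassengers data only as an illustration:

```r
x     <- as.numeric(log(AirPassengers))
t     <- seq_along(x)
det   <- residuals(lm(x ~ t))                                    # remove a linear trend first
fit   <- lm(det ~ cos(2 * pi * t / 12) + sin(2 * pi * t / 12))   # one harmonic at period 12
s_hat <- fitted(fit)                                             # estimated seasonal component
plot.ts(det); lines(t, s_hat, col = "red")
```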
26/106
Time Series Analysis
Time series Modeling

Plot the series => examine the main characteristics (trend,


seasonality, ...)
Remove the trend and seasonal components to get stationary
residuals/models
Choose a model to fit the residuals using sample statistics
(sample autocorrelation function)
Forecasting will be given by forecasting the residuals to arrive
at forecasts of the original series Xt

27/106
Time Series Analysis
Stationary and Autocorrelation function

Let Xt be a time series:


The mean function

µX (t) = E (Xt )

The Covariance function

γX (r , s) = Cov (Xr , Xs ) = E ((Xr − µX (r ))(Xs − µX (s)))

for all integers r and s

28/106
Time Series Analysis
Stationary and Autocorrelation function

Definitions
1 Xt is strictly stationary if {X1 , . . . Xn } and {X1+h , . . . Xn+h } have the
same joint distributions for all integers h and n > 0.
2 Xt is weakly stationary if
µX (t) is independent of t.
γX (t + h, t) is independent of t for each h.
3 Let Xt be a stationary time series. The autocovariance function (ACVF)
of Xt at lag h is
γX (h) = Cov (Xt+h , Xt )
The autocorrelation function (ACF) of Xt at lag h is
ρX (h) = γX (h) / γX (0) = Cor (Xt+h , Xt )

29/106
Time Series Analysis
Stationary and Autocorrelation function

iid Noise
If Xt is iid noise and E (Xt²) = σ² < ∞, then the process Xt is strictly
stationary, since the joint distribution can be written, for any integer n and real
numbers c1 , c2 ,...,cn , as follows:

P(X1 ≤ c1 , ..., Xn ≤ cn ) = P(X1 ≤ c1 )...P(Xn ≤ cn )


= F (c1 ) · · · F (cn ) = ∏_{i=1}^{n} F (ci )

This does not depend on t. Xt ∼ IID(0, σ 2 )

The autocovariance function is

γX (t + h, t) = σ² if h = 0, and 0 otherwise.   (1)

30/106
Time Series Analysis
Stationary and Autocorrelation function

White Noise
If Xt is a sequence of uncorrelated random variables, each with zero mean and
variance σ 2 , then clearly Xt is stationary with the same autocovariance function
as the iid noise. We can write

Xt ∼ WN(0, σ 2 )

Clearly, every IID(0, σ 2 ) sequence is WN(0, σ 2 ) but not conversely

31/106
Time Series Analysis
Stationary and Autocorrelation function
Random Walk
If {St } is the random walk obtained by cumulatively summing an IID(0, σ²) sequence Xt ,
with S0 = 0 and

St = X1 + X2 + · · · + Xt , t = 1, 2, ....

then E (St ) = 0 but Cov (St+h , St ) = t σ² for h ≥ 0, which depends on t. Hence the
random walk is not stationary.

[Figure: simulated random walk, n = 200]

32/106
Time Series Analysis
Stationary and Autocorrelation function

First-Order Moving Average or MA(1) Process


Consider the series defined by the equation

Xt = Zt + θZt−1 , t = 0, ±1, ....

where Zt is WN(0, σ 2 ) noise and θ is a real-valued constant.


E (Xt )?
γX (t + h, t)?
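For reference, a standard calculation (not spelled out on the slide) gives a zero mean and an autocovariance that vanishes beyond lag 1:

```latex
E(X_t) = 0, \qquad
\gamma_X(t+h,\, t) =
\begin{cases}
\sigma^2 (1+\theta^2), & h = 0,\\
\sigma^2 \theta,       & h = \pm 1,\\
0,                     & |h| > 1,
\end{cases}
\qquad
\rho_X(1) = \frac{\theta}{1+\theta^2}.
```

In particular, the MA(1) process is stationary for every value of θ.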

33/106
Time Series Analysis
Stationary and Autocorrelation function

First-Order AutoRegressive AR(1) Process


Let us assume now that Xt is a stationary series satisfying the equation

Xt = ΦXt−1 + Zt , t = 0, ±1, ....

where Zt is WN(0, σ 2 ) noise, |Φ| < 1, and Zt is uncorrelated with Xs for each
s < t.
E (Xt )?
γX (t + h, t)?
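For reference, the standard answers for the causal AR(1) (a routine calculation, not shown on the slide):

```latex
E(X_t) = 0, \qquad
\gamma_X(h) = \frac{\sigma^2\, \Phi^{|h|}}{1-\Phi^2}, \qquad
\rho_X(h) = \Phi^{|h|}.
```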

34/106
Time Series Analysis
The Sample Autocorrelation function
In practical problems, we do not start with a model, but with observed data
(x1 , x2 , . . . , xn ). To assess the degree of dependence in the data and to select
a model for the data, one of the important tools we use is the sample
autocorrelation function (Sample ACF).
Definition
Let x1 , x2 , . . . , xn be observations of a time series. The sample mean of
x1 , x2 , . . . , xn is
x̄ = (1/n) ∑_{t=1}^{n} xt

The sample autocovariance function is

γ̂(h) := (1/n) ∑_{t=1}^{n−|h|} (xt+|h| − x̄)(xt − x̄),   −n < h < n

The sample autocorrelation function is

ρ̂(h) = γ̂(h) / γ̂(0)
35/106
Time Series Analysis
The Sample Autocorrelation function

Remarks
1 The sample autocorrelation function (ACF) can be computed for any data
set and is not restricted to observations from a stationary time series.
2 For data containing a Trend, |ρ̂(h)| will display slow decay as h increases.
3 For data containing a substantial deterministic periodic component,
|ρ̂(h)| will exhibit similar behavior with the same periodicity.

36/106
Time Series Analysis
The Sample Autocorrelation function

We may recognize the sample autocorrelation function of many time series:

Remarks
1 White Noise => Zero
2 Trend => Slow decay
3 Periodic => Periodic
4 Moving Average (q) => Zero for |h| > q
5 AutoRegression (p) => Decay to zero exponentially

37/106
Time Series Analysis
The Sample Autocorrelation function

Examples with R
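A minimal sketch of computing and plotting the sample ACF with base R's acf(); the AR(1) data below are simulated only to have something to plot:

```r
set.seed(1)
x <- arima.sim(model = list(ar = 0.7), n = 200)   # simulated AR(1) series
acf(x, lag.max = 40)                              # plots rho-hat(h) with +/- 1.96/sqrt(n) bounds
acf(x, plot = FALSE)$acf[1:5]                     # the first few numeric values
```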

38/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

The first step in the analysis of any time series is to plot the data. Inspection
of the graph may suggest the possibility of representing the data as follows (the
classical decomposition):
Xt = mt + st + Yt
where
mt is the trend component
st is the seasonal component
Yt random noise component / Residuals
If seasonal and noise fluctuations appear to increase with the level of the
process => eliminate this effect by a preliminary transformation of the data
(natural log, ...).

39/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Approaches
1 Estimate and eliminate the trend and the seasonal components in the
hope that the residual Yt will turn out to be a stationary time series =>
Find a Model using stationary process theory.
2 Box and Jenkins (1976) proposed to apply differencing operators to the
series until the differenced observations resemble a realization of some
stationary time series.

40/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Trend Estimation
Moving average and spectral smoothing are essentially nonparametric
methods for trend (or signal) estimation

Xt = mt + Yt , E (Yt ) = 0

1 Smoothing with a finite moving average filter


2 Exponential smoothing
3 Smoothing by eliminating the high-frequency components (Fourier series)
4 Polynomial fitting (Regression)

41/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Trend Estimation: Smoothing Moving Average


Let q be a nonnegative integer and consider the two-sided moving average

Wt = (1/(2q + 1)) ∑_{j=−q}^{q} Xt−j

It is useful to think of m̂t as a process obtained from Xt by application of a
linear operator or linear filter m̂t = ∑_{j=−∞}^{∞} aj Xt−j with weights
aj = 1/(2q + 1) for −q ≤ j ≤ q and aj = 0 otherwise.
This particular filter is a low-pass filter in the sense that it takes the data and
removes from it the rapidly fluctuating (high-frequency) component Ŷt to
leave the slowly varying estimated trend term m̂t .
With R (smooth.ma{itsmr})
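The same filter can also be written directly with base R's stats::filter; this sketch (simulated data, q = 5 chosen arbitrarily) is an alternative to smooth.ma{itsmr}:

```r
set.seed(1)
x <- 10 + 0.05 * (1:200) + rnorm(200)        # simulated trend + noise
q <- 5
w <- rep(1 / (2 * q + 1), 2 * q + 1)         # equal weights a_j = 1/(2q+1)
m_hat <- stats::filter(x, w, sides = 2)      # two-sided moving average (NA at the q endpoints)
plot.ts(x); lines(as.numeric(m_hat), col = "red")
```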

42/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Trend Estimation: Exponential Smoothing


For any fixed α ∈ [0, 1], the one-sided moving averages m̂t defined by:

m̂t = αXt + (1 − α)m̂t−1 ; t = 2, ...n

and m̂1 = X1
With R (smooth.exp{itsmr})
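The recursion above is a short loop in R; a minimal sketch (α = 0.2 is an arbitrary choice) mirroring what smooth.exp{itsmr} computes:

```r
exp_smooth <- function(x, alpha) {
  m <- numeric(length(x))
  m[1] <- x[1]                                          # m-hat_1 = X_1
  for (t in 2:length(x)) m[t] <- alpha * x[t] + (1 - alpha) * m[t - 1]
  m
}
set.seed(1)
x <- 10 + 0.05 * (1:200) + rnorm(200)                   # simulated series
m_hat <- exp_smooth(x, alpha = 0.2)
plot.ts(x); lines(m_hat, col = "red")
```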

43/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Trend Estimation: Smoothing by eliminating of high frequency


Using Fourier Transform we can delete some high frequency.

With R (smooth.fft{itsmr})

Trend Estimation: Polynomial fitting


Using regression procedures

R function (we can use lm{stats} or trend{itsmr})

44/106
Time Series Analysis
Nonseasonal Model With Trend: Estimation

Remark: Smoothing Moving Average


There are many filters that could be used for smoothing!
Large q will allow a linear trend function mt = c0 + c1 t to pass
We must beware of choosing q too large if mt is not linear
(Example with R)
Clever choice of the weights aj can design a filter that will not only be
effective in attenuating noise in the data, but that will also allow a larger
class of trend functions (for example all polynomials of degree ≤ 3) to
pass.
Spencer 15-point moving average is a filter that passes polynomials of
degree ≤ 3 without distortion. Its weights are:
1/320[−3, −6, −5, 3, 21, 46, 67, 74, 67, 46, 21, 3, −5, −6, −3]
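For illustration, the Spencer filter can be applied with stats::filter; the cubic-plus-noise series below is simulated (made-up coefficients) just to show the cubic trend passing through:

```r
spencer <- c(-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3) / 320
set.seed(1)
t <- 1:200
x <- 1e-6 * t^3 + rnorm(200, sd = 0.5)          # cubic trend + noise
m_hat <- stats::filter(x, spencer, sides = 2)   # passes polynomials of degree <= 3
plot.ts(x); lines(as.numeric(m_hat), col = "red")
```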

45/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Trend Elimination: Differencing


Instead of attempting to remove the noise by smoothing as in Method 1, we
now attempt to eliminate the trend term by differencing. We define the lag-1
difference operator ∇ by:

∇Xt = Xt − Xt−1 = (1 − B)Xt

where B is the backward shift operator : BXt = Xt−1


With R (diff{base})
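A minimal sketch of lag-1 differencing with base R's diff(), on a simulated linear-trend series:

```r
set.seed(1)
x  <- 2 + 0.5 * (1:100) + rnorm(100)   # linear trend + noise
dx <- diff(x)                          # nabla x_t = x_t - x_{t-1}
plot.ts(dx)                            # roughly constant-mean series
# diff(x, differences = 2) applies nabla twice; diff(x, lag = 12) gives x_t - x_{t-12}
```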

46/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Trend Elimination: Differencing


Powers of the operators B and ∇ are defined as follows:

∇j (Xt ) = ∇(∇j−1 (Xt ))

where j ≥ 1, ∇0 (Xt ) = Xt , and

B j (Xt ) = Xt−j

Example: ∇2 Xt ?

If the operator ∇ is applied to a linear trend function mt = c0 + c1 t, then we


obtain the constant function ∇mt = mt − mt−1 = c1 . In the same way we can
show any polynomial trend of degree k can be reduced to a constant by
application of the operator ∇k .
It is found in practice that the order k of differencing required is quite small;
frequently 1 or 2.
47/106
Time Series Analysis
Estimation and Elimination of Trend and Seasonal
Components

Both Trend and Seasonal


The classical Decomposition model

Xt = mt + st + Yt

where E (Yt ) = 0, st+d = st , and ∑_{j=1}^{d} sj = 0;

d is the period of the seasonal component.


The methods described previously (for the trend) can be adapted in a natural
way to eliminate both the trend and seasonality components.

48/106
Time Series Analysis
Estimation: Both Trend and Seasonal [Method 1]
Suppose we have observations x1 , x2 , ...xn
1 The trend is first estimated by applying a moving average filter specially
to eliminate the seasonal component of period d
if the period is even, say d = 2q, then
m̂t = (0.5xt−q + xt−q+1 + · · · + xt+q−1 + 0.5xt+q )/d,   q < t ≤ n − q
if the period is odd, say d=2q+1 then we use the simple
moving average
2 Estimate the seasonal component; for each k=1,..., d, we compute the
average wk of the deviation xk+jd − m̂k+jd , q < k + jd ≤ n − q; and we
estimate the seasonal component as follows:
ŝk = wk − (1/d) ∑_{i=1}^{d} wi ;   k = 1, . . . , d
and ŝk = ŝk−d ; k>d
3 The deseasonalized data is then dt = xt − ŝt t=1,...n
4 Reestimate the trend from the deseasonalized data using one of the
methods already described.
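Base R's decompose() carries out essentially this moving-average scheme; a sketch using the built-in AirPassengers series as an example:

```r
dec <- decompose(AirPassengers)            # trend (moving average), seasonal (averaged), remainder
plot(dec)
deseason <- AirPassengers - dec$seasonal   # deseasonalized data d_t = x_t - s-hat_t
```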
49/106
Time Series Analysis
Eliminating: Both Trend and Seasonal [Method 2]
We can use the Differencing operator to eliminate the trend and the seasonal
component
1 Eliminate the seasonality of period d using ∇d Xt = Xt − Xt−d :
applying ∇d to Xt = mt + st + Yt gives ∇d Xt = mt − mt−d + Yt − Yt−d
(the seasonal terms cancel since st = st−d )
2 Eliminate the remaining trend mt − mt−d by applying powers of the operator ∇ (see the sketch below)
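A minimal sketch of Method 2 in R, again using AirPassengers (period d = 12) only as an illustration:

```r
y  <- diff(AirPassengers, lag = 12)   # remove the seasonal component: X_t - X_{t-12}
yy <- diff(y)                         # remove the remaining trend with one more difference
plot.ts(yy)
```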

50/106
Time Series Analysis
Testing Noise sequence

IID Null Hypothesis


1 QLB Ljung-Box Test (H0 : data are Independent vs. H1 : data are not
independent)
2 QML McLeod-Li Test (autocorrelations of squared data)
3 Turning point Test (IID vs. not IID)
4 The difference-sign Test (randomness)
5 The rank Test (detecting a linear trend)

Remember that as you increase the number of tests, the probability that at least
one test rejects the null hypothesis when it is true increases.
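A sketch of the Ljung-Box test with base R's Box.test(); the residuals here are simulated iid noise, standing in for residuals from a fitted model:

```r
set.seed(1)
res <- rnorm(200)                                # stand-in for model residuals
Box.test(res, lag = 20, type = "Ljung-Box")      # large p-value: no evidence against independence
```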

51/106
Time Series Analysis
Summary

Trend and Seasonality


Estimation and Elimination of Trend and Seasonal
Components
1 Smoothing methods
2 Differencing Operator ∇
3 Properties of the operator ∇
4 Procedure to estimate Both Trend and Seasonality (with R)
Testing Noise sequence (test(residuals) with R)

52/106
Time Series Analysis
Stationary Processes

Remarks
1 A key role in time series analysis is given by processes whose properties
do not vary with time.
2 If we wish to make predictions, then clearly we must assume that
something does not vary with time.
3 In time series analysis, our goal is to predict a series that contains a
random component; if this random component is (weakly) stationary,
then we can develop powerful techniques to forecast its future values.

53/106
Time Series Analysis
Stationary Processes

Basic Properties
1 The autocovariance function (ACVF)
γ(h) = Cov (Xt+h , Xt ), h = 0, ±1, ±2, . . .
2 The autocorrelation function (ACF) ρ(h) = γ(h)/γ(0)

3 γ(0) ≥ 0
4 |γ(h)| ≤ γ(0) for all h, i.e. |ρ(h)| ≤ 1 (by the Cauchy-Schwarz inequality
(E (XY ))² ≤ E (X ²)E (Y ²))
5 γ(h) = γ(−h)

54/106
Time Series Analysis
Stationary Processes

Prediction
The ACF and ACVF provide a useful measure of the degree of dependence
among the values of a time series at different times => Very important if we
consider the prediction of future values of the series in terms of the past and
present values.

Question
What is the role of the autocorrelation function in prediction?

55/106
Time Series Analysis
Stationary Processes

The role of autocorrelation in prediction


Consider Xt a stationary Gaussian time series (all of its joint distributions
are Multivariate Normal)
We observed Xn and we would like to find the function of Xn that gives
us the best predictor of Xn+h (the value of the series after h time units).
The best predictor will be given by the function of Xn that minimizes the
mean squared error (MSE).

Question
What is the function of Xn that gives us the best predictor of Xn+h ?

Answer
The best predictor of Xn+h in terms of MSE is given by
E (Xn+h |Xn ) = µ + ρ(h)(Xn − µ)

56/106
Time Series Analysis
Stationary Processes

The role of autocorrelation in prediction


For time series with non-normal joint distributions the calculations are in general
more complicated => we look at the best linear predictor l(Xn ) = aXn + b;
then our problem becomes finding a and b that minimize E ((Xn+h − aXn − b)2 ).

Answer
The best linear predictor of Xn+h in terms of MSE is given by

l(Xn ) = µ + ρ(h)(Xn − µ)

The fact that the best linear predictor depends only on the mean and the ACF
of the series Xt means that it can be calculated without more detailed
knowledge of the series Xt

57/106
Time Series Analysis
Stationary Processes: Examples

The MA(q) process


Xt is a moving-average process of order q if

Xt = Zt + θ1 Zt−1 + · · · + θq Zt−q

where Zt ∼ WN(0, σ 2 ) and θ1 , . . . , θq are constants.

Remarks
If Xt is a stationary q-correlated time series with mean 0, then it can be
represented as the MA(q) process

58/106
Time Series Analysis
Stationary Processes: Examples

The ARMA(1,1) process


The time series Xt is an ARMA(1,1) process if it is stationary and satisfies (for
every t)
Xt − ΦXt−1 = Zt + θZt−1
where Zt ∼ WN(0, σ 2 )

59/106
Time Series Analysis
Stationary Processes: Properties

The MA(q) process


One of the simplest ways to construct a time series that is strictly stationary is to
"filter" an iid sequence of random variables. Consider Zt ∼ IID; we define:

Xt = g (Zt , Zt−1 , . . . , Zt−q )

for some real-valued function g (., . . . , .). We can say that Xt is q-dependent.

Remarks
IID is 0-dependent
WN is 0-correlated
A stationary time series is q-correlated if γ(h) = 0 whenever |h| > q
MA(1) is 1-correlated
MA(q) is q-correlated

60/106
Time Series Analysis
Stationary Processes: Properties

The MA(q) process


Xt is a moving-average process of order q if

Xt = Zt + θ1 Zt−1 + · · · + θq Zt−q

where Zt ∼ WN(0, σ 2 ) and θ1 , . . . , θq are constants.

If Xt is a stationary q-correlated time series with mean 0, then it can be


represented as the MA(q) process given by the equation above.

61/106
Time Series Analysis
Stationary Processes: Properties

The class of Linear time series models, which includes the class of
Autoregressive Moving-Average (ARMA) models, provides a general framework
for studying stationary processes.

Linear processes
The time series Xt is a linear process if it has the representation:

Xt = ∑_{j=−∞}^{∞} ψj Zt−j    (2)

for all t, where Zt ∼ WN(0, σ²) and ψj is a sequence of constants with
∑_{j=−∞}^{∞} | ψj | < ∞.

1 The condition ∑_{j=−∞}^{∞} | ψj | < ∞ ensures the infinite sum in (2) converges
2 If ψj = 0 for all j < 0, then a linear process is called a moving average or
MA(∞)

62/106
Time Series Analysis
Stationary Processes: Linear Process

Proposition
Let Yt be a stationary series with mean 0 and autocovariance function γY . If
∑_{j=−∞}^{∞} | ψj | < ∞, then the time series

Xt = ∑_{j=−∞}^{∞} ψj Yt−j = ∑_{j=−∞}^{∞} ψj B^j Yt = ψ(B)Yt    (3)

is stationary with mean zero and autocovariance function

γX (h) = ∑_j ∑_k ψj ψk γY (h + k − j)

63/106
Time Series Analysis
Stationary Processes: Properties

The AR(1) process


The time series Xt is an AR(1) process if it is stationary and satisfies (for every t)

Xt − ΦXt−1 = Zt    (4)

where Zt ∼ WN(0, σ²), | Φ | < 1, and Zt is uncorrelated with Xs for each s < t


1 Xt is called a causal or future-independent function of Zt .
2 If | Φ |> 1, the causal series does not converge, but the AR(1) can be rewritten
in terms of future values of Zt (a noncausal stationary solution)
3 If Φ = ±1, there is no stationary solution of (4)

64/106
Time Series Analysis
Stationary Processes: Properties

The ARMA(1,1) process


The time series Xt is an ARMA(1,1) process if it is stationary and satisfies (for
every t)
Xt − ΦXt−1 = Zt + θZt−1
where Zt ∼ WN(0, σ²) and θ + Φ ≠ 0
1 A stationary ARMA(1,1) solution exists if and only if Φ ≠ ±1
2 If | Φ | < 1, then the unique stationary solution is
Xt = Zt + (θ + Φ) ∑_{j=1}^{∞} Φ^{j−1} Zt−j . This Xt is causal, since Xt can be
expressed in terms of the current and past values of Zs , s ≤ t
3 If | Φ | > 1, then the unique stationary solution is
Xt = −θΦ^{−1} Zt − (θ + Φ) ∑_{j=1}^{∞} Φ^{−j−1} Zt+j . This Xt is noncausal, since Xt
can be expressed in terms of the current and future values of Zs , s ≥ t
(an unnatural solution)

65/106
Time Series Analysis
Stationary Processes: Properties

The ARMA(1,1) process


The time series Xt is an ARMA(1,1) process if it is stationary and satisfies (for
every t)
Xt − ΦXt−1 = Zt + θZt−1
where Zt ∼ WN(0, σ²) and θ + Φ ≠ 0
1 If θ = ±1, then the ARMA(1,1) process is invertible in the more general
sense that Zt is a mean square limit of finite linear combinations of Xs ,
s ≤ t.
2 If | θ |< 1, then the ARMA(1,1) is invertible, since Zt can be expressed in
terms of the current and past values of Xs , s ≤ t
3 If | θ |> 1, then the ARMA(1,1) is noninvertible, since Zt can be
expressed in terms of the current and future values of Xs , s ≥ t.

66/106
Time Series Analysis
Stationary Processes: Properties

The Sample Mean and Autocorrelation Function


A weakly stationary time series Xt is characterized by its mean µ, its
autocovariance γ(.), and its autocorrelation ρ(.). The estimation of these
statistics plays a crucial role in problems of inference => constructing an
appropriate model for the data. We examine here some properties of the
sample estimates X̄n and ρ̂(.).
1 E (X n ) = µ
 
2 Var (X̄n ) = n^{−1} ∑_{h=−n}^{n} (1 − |h|/n) γ(h)
3 If the time series is Gaussian, then √n (X̄n − µ) ∼ N(0, ∑_{|h|<n} (1 − |h|/n) γ(h))
4 The confidence intervals for µ are given by X̄n ± z_{1−α/2} √(v̂/n), where
v̂ = ∑_{|h|<√n} (1 − |h|/√n) γ̂(h). For ARMA processes, this is a good
approximation of v for large n.

67/106
Time Series Analysis
Stationary Processes: Properties

The Sample Mean: Asymptotic distribution


If we know the asymptotic distribution of X n , we can use it to infer about µ
(e.g. is µ=0?). Similarly for ρ̂(h)

1 √n (X̄n − µ) ∼ AN(0, ∑_{|h|<n} (1 − |h|/n) γ(h))
2 In this case, the confidence intervals for µ are given by X̄n ± z_{1−α/2} √(v̂/n),
where v̂ = ∑_{|h|<√n} (1 − |h|/√n) γ̂(h). For ARMA processes, this is a good
approximation of v for large n.

Example
What are the approximate 95% confidence intervals for the mean of AR(1)?
"AN" means Asymptotically Normal

68/106
Time Series Analysis
Stationary Processes: Properties

The Estimation of γ(.) and ρ(.)


The sample autocovariance and autocorrelation functions are defined by:
γ̂(h) = (1/n) ∑_{t=1}^{n−|h|} (Xt+|h| − X̄n )(Xt − X̄n )

and

ρ̂(h) = γ̂(h) / γ̂(0)

1 For h only slightly smaller than n, the estimates of γ(h) and ρ(h) are
unreliable, since there are few pairs (Xt+h , Xt ) available (only one if h = n − 1).
A practical guide is to have at least n = 50 and h ≤ n/4

69/106
Time Series Analysis
Stationary Processes: Properties

The Estimation of γ(.) and ρ(.): Asymptotic distribution


The sampling distribution of ρ(.) can usually be approximated by a normal
distribution for large sample sizes. For Linear Models (ARMA):

ρ̂ = (ρ̂(1), . . . , ρ̂(k))′ ∼ AN(ρ, W /n)

where ρ = (ρ(1), . . . , ρ(k))′, and W is the covariance matrix whose (i, j)
element is given by Bartlett’s formula:

wij = ∑_{k=1}^{∞} {ρ(k + i) + ρ(k − i) − 2ρ(i)ρ(k)} × {ρ(k + j) + ρ(k − j) − 2ρ(j)ρ(k)}

1 IID Noise?

70/106
Time Series Analysis
Stationary Processes: Properties ACF

The sample ACF Examination - Results


If | ρ̂(h) | < 1.96/√n for all h ≥ 1, then assume MA(0), i.e., a WN sequence.
If | ρ̂(1) | > 1.96/√n, then we should look at the rest of the ρ̂(h) with bounds
±1.96 √((1 + 2ρ²(1))/n); we can replace ρ(1) by its estimate (you can also remark
that 2ρ̂²(1)/n ≈ 0 for large n).
In general, if | ρ̂(h0 ) | > 1.96/√n and | ρ̂(h) | < 1.96/√n for h > h0 , then assume an
MA(q) model with q = h0 .

71/106
Time Series Analysis
Stationary Processes: Forecasting

Forecasting Pn Xn+h
Now we consider the problem of predicting the values Xn+h ; h > 0. Let’s
assume Xt is a stationary time series with µ and γ.
The goal is to find the linear combination of 1, Xn , . . . , X1 that minimizes the
mean squared error. We will denote

Pn Xn+h = a0 + a1 Xn + · · · + an X1

It remains to find the coefficients ai that minimize:

E ((Xn+h − a0 − a1 Xn − · · · − an X1 )2 )

72/106
Time Series Analysis
Stationary Processes: Forecasting

Forecasting Pn Xn+h
We can show that Pn Xn+h is given by:
Pn Xn+h = µ + ∑_{i=1}^{n} ai (Xn+1−i − µ)

and

E ((Xn+h − Pn Xn+h )²) = γ(0) − a′n γn (h)

where a n satisfies Γn a n = γn (h), with a n = (a1 , . . . , an )′,

Γn = [γ(i − j)]_{i,j=1,...,n}  (the n×n matrix with γ(0) on the diagonal and γ(|i − j|) in position (i, j)),

γn (h) = (γ(h), γ(h + 1), . . . , γ(h + n − 1))′, and a0 = µ(1 − ∑_{i=1}^{n} ai )

73/106
Time Series Analysis
Stationary Processes: Forecasting

Prediction Algorithms
The following prediction algorithms use the idea that the one-step predictor Pn Xn+1
based on n previous observations can be used to calculate Pn+1 Xn+2 . This is
said to be recursive.
1 The Durbin-Levinson Algorithm (well suited to forecasting AR(p))
2 The Innovations Algorithm (well suited to forecasting MA(q))

74/106
Time Series Analysis
ARMA(p,q) models

ARMA(p, q) process
Xt is an ARMA(p, q) process if Xt is stationary and if for every t,

Xt − φ1 Xt−1 − · · · − φp Xt−p = Zt + θ1 Zt−1 + · · · + θq Zt−q

where Zt ∼ WN(0, σ 2 ) and the polynomials (1 − φ1 z − · · · − φp z p ) and


(1 + θ1 z + · · · + θq z q ) have no common factors.

The process Xt is said to be an ARMA(p,q) process with mean µ if Xt − µ is


an ARMA(p,q) process.

75/106
Time Series Analysis
ARMA Models

We can write:
φ(B)Xt = θ(B)Zt
where φ(.) and θ(.) are the pth and qth degree polynomials:

φ(z) = 1 − φ1 z − · · · − φp z p

and
θ(z) = 1 + θ1 z + · · · + θq z q
and B is the backward shift operator B j Xt = Xt−j , j = 0, ±1, ...

76/106
Time Series Analysis
ARMA models

ARMA(1, 1) process
Xt is an ARMA(1, 1) process if Xt is stationary and if for every t,

Xt − φ1 Xt−1 = Zt + θ1 Zt−1

where Zt ∼ WN(0, σ 2 )

Lecture notes on ARMA(1,1).

77/106
Time Series Analysis
ARMA models

ARMA(p, q) process: Existence, Causality, and Invertibility


A stationary solution Xt (existence and uniqueness) exists if and only if:

φ(z) = 1 − φ1 z − · · · − φp z p ≠ 0 for all | z | = 1

An ARMA(p,q) process Xt is causal if there exist constants ψj such that
∑_{j=0}^{∞} | ψj | < ∞ and Xt = ∑_{j=0}^{∞} ψj Zt−j for all t. Causality is equivalent
to the condition:

φ(z) = 1 − φ1 z − · · · − φp z p ≠ 0 for all | z | ≤ 1

An ARMA(p,q) process Xt is invertible if there exist constants πj such
that ∑_{j=0}^{∞} | πj | < ∞ and Zt = ∑_{j=0}^{∞} πj Xt−j for all t. Invertibility is
equivalent to the condition:

θ(z) = 1 + θ1 z + · · · + θq z q ≠ 0 for all | z | ≤ 1

More details can be found in Section 3.1, Chapter 3. Similar to ARMA(1,1).

78/106
Time Series Analysis
ARMA models

ARMA(p, q) process: Autocorrelation Function (ACF)


The autocorrelation function (ACF) of the causal ARMA(p,q) process Xt can
be found using the fact that Xt = ∑_{j=0}^{∞} ψj Zt−j (a causal MA(∞) process) and

γX (h) = σ² ∑_{j=0}^{∞} ψj ψj+|h|

Find the ACF of ARMA(1,1), using the fact that

ψ0 = 1;  ψj = (φ + θ)φ^{j−1} ,  j ≥ 1

79/106
Time Series Analysis
ARMA models

ARMA(1, 1) process: Autocorrelation Function (ACF)


γX (0) = σ² (1 + 2θφ + θ²) / (1 − φ²)

γX (1) = σ² (θ + φ)(1 + θφ) / (1 − φ²)

γX (h) = φ^{h−1} γX (1),  h ≥ 1

Section 3.2.1 has details about the calculation of the ACVF.

80/106
Time Series Analysis
ARMA models

Definition: Partial Autocorrelation Function (PACF)


The partial autocorrelation function (PACF) of ARMA process Xt is the
function α(.) defined by
α(0) = 1
and
α(h) = φhh ,  h ≥ 1

where φhh is the last component of φh = Γh^{−1} γh

Think about it as a conditional correlation Cor (Xt , Xt+h | Xt+1 , . . . , Xt+h−1 )

81/106
Time Series Analysis
Modeling ARMA models

Practical facts about PACF and ACF


The PACF is often best used to identify the AR(p) models
For AR(p) models, the theoretical PACF are equal to zero after h = p
The MA(q) models are better identified using ACF.

Examples with R
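A minimal sketch of these patterns on simulated data (the orders and coefficients below are arbitrary choices):

```r
set.seed(1)
ar2 <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 300)   # AR(2)
ma1 <- arima.sim(model = list(ma = 0.8), n = 300)           # MA(1)
pacf(ar2)   # PACF roughly zero after lag p = 2
acf(ma1)    # ACF roughly zero after lag q = 1
```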

82/106
Time Series Analysis
Modeling ARMA models (Chapter 5)

To determine an appropriate ARMA(p,q) model to represent an observed


stationary process, we need to:
Choose the orders p and q (order selection)Use ACF and PACF plots.
Estimate the mean Use the mean-corrected process Xt − X n .
Estimate the coefficients {φi , i = 1, . . . , p} and {θi , i = 1, . . . , q}
Estimate the white noise variance σ 2
Select a model

84/106
Time Series Analysis
Modeling ARMA models (Chapter 5)

When p and q are known and the time series is mean-corrected, good
estimators of the vectors φ and θ can be found by imagining the data to be
observations of a stationary Gaussian time series and maximizing the likelihood
with respect to the p + q + 1 parameters (φ1 , . . . , φp , θ1 , . . . , θq and σ²).
We can estimate these parameters using:

The Yule-Walker and Burg procedures for pure autoregressive models


AR(p) (yw {itsmr }, burg {itsmr })
The Innovations and Hannan-Rissanen procedures for ARMA(p,q)
(ia{itsmr }, hannan{itsmr })

85/106
Time Series Analysis
Modeling ARMA models (Chapter 5)

Properties
Burg’s algorithm usually gives higher likelihoods than the
Yule-Walker equations for AR(p)
For pure MA processes, the Innovations algorithm usually gives higher
likelihoods than the Hannan-Rissanen procedure
For ARMA models, the Hannan-Rissanen is more successful in finding
causal models
These preliminary estimations are required for initialization of the likelihood
maximization.

86/106
Time Series Analysis
Modeling ARMA models (Chapter 5)

Yule-Walker Estimation
The Sample Yule-Walker equations are:

φ̂ = (φ̂1 , . . . , φ̂p )′ = R̂p^{−1} ρ̂p

and

σ̂² = γ̂(0) (1 − ρ̂′p R̂p^{−1} ρ̂p )

where ρ̂p = (ρ̂(1), . . . , ρ̂(p))′


Large-Sample Distribution: φ̂ ∼ N(φ, σ² Γp^{−1} / n)

Confidence intervals for φpj : φ̂pj ± z_{1−α/2} √( (σ̂² Γ̂p^{−1})_{jj} / n )
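Besides yw{itsmr}, base R's ar.yw() solves the sample Yule-Walker equations; a sketch on simulated AR(2) data:

```r
set.seed(1)
x   <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 300)
fit <- ar.yw(x, order.max = 2, aic = FALSE)   # fix p = 2 instead of choosing by AIC
fit$ar             # Yule-Walker estimates phi-hat
fit$var.pred       # sigma^2-hat
fit$asy.var.coef   # asymptotic covariance of phi-hat (for confidence intervals)
```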

87/106
Time Series Analysis
Estimation ARMA models (Chapter 5)

We can use the Innovations algorithm in order to estimate parameters of


MA(q). The Confidence regions of the coefficients:

MA(q) Estimation
Confidence intervals for θj are θ̂mj ± z_{1−α/2} √( ∑_{i=0}^{j−1} θ̂²mi / n )

88/106
Time Series Analysis
Estimation ARMA Models (Chapter 5)

Suppose Xt is a Gaussian time series with mean zero

Maximum Likelihood Estimators (Section 5.2 from the Textbook)

σ̂² = S(φ̂, θ̂) / n

where S(φ̂, θ̂) = ∑_{j=1}^{n} (Xj − X̂j )² / rj−1 ,
and φ̂, θ̂ are the values of φ, θ that minimize:

ℓ(φ, θ) = ln( n^{−1} S(φ, θ) ) + n^{−1} ∑_{j=1}^{n} ln rj−1

Minimization of ℓ(φ, θ) must be done numerically. Initial values for (φ, θ) can be
obtained from the preliminary estimation algorithms (Yule-Walker, Burg,
Innovations, and Hannan-Rissanen).
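A sketch of maximum-likelihood fitting with base R's arima() (simulated ARMA(1,1) data, orders assumed known):

```r
set.seed(1)
x   <- arima.sim(model = list(ar = 0.5, ma = 0.4), n = 300)
fit <- arima(x, order = c(1, 0, 1), method = "ML", include.mean = FALSE)
fit$coef     # phi-hat, theta-hat
fit$sigma2   # sigma^2-hat
fit$loglik   # maximized log-likelihood
```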

89/106
Time Series Analysis
ARMA Models

Order selection
The Akaike Information criterion bias-corrected (AICC) is defined as follows:

AICC := −2 ln LX (β, SX (β)/n) + 2(p + q + 1)n / (n − p − q − 2)
It was designed to be an approximately unbiased estimate of the
Kullback-Leibler index of the fitted model relative to the true model.
We select p and q values for our fitted model to be those that minimize
AICC (β̂).
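A rough sketch of order selection by AICC computed from arima() fits; the helper below assumes k = p + q + 1 parameters, as in the formula above:

```r
aicc <- function(fit, n) {
  k <- length(fit$coef) + 1                   # + 1 for sigma^2
  -2 * fit$loglik + 2 * k * n / (n - k - 1)
}
set.seed(1)
n  <- 300
x  <- arima.sim(model = list(ar = 0.5, ma = 0.4), n = n)
f1 <- arima(x, order = c(1, 0, 1), include.mean = FALSE)
f2 <- arima(x, order = c(2, 0, 0), include.mean = FALSE)
c(aicc(f1, n), aicc(f2, n))                   # choose the smaller value
```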

90/106
Time Series Analysis
Modeling ARMA models

Parameter Redundancy
Consider a white noise process Xt = Zt . We can write this as

0.3Xt−1 = 0.3Zt−1

By subtracting the two representations we have:

Xt − 0.3Xt−1 = Zt − 0.3Zt−1

which looks like an ARMA(1,1) model. Of course Xt is still white noise. We


have this problem because of the parameter redundancy or
over-parameterization.
We can solve this problem by looking at the common factors of the two
polynomials φ(B) and θ(B).

91/106
Time Series Analysis
Forecasting ARMA models

Given X1 , X2 , . . . , Xn observations, we want to predict Xn+h . We know that the


best linear predictor is given by:
Pn Xn+h = X̂n+h = µ + ∑_{i=1}^{n} ai (Xn+1−i − µ)

where the vector an satisfies Γn an = γn (h). The mean squared error is:

E (Xn+h − Pn Xn+h )² = γ(0) − a′n γn (h) = γ(0)(1 − a′n ρn (h))

Use Durbin-Levinson and Innovations algorithms to solve these equations.

92/106
Time Series Analysis
Forecasting ARMA models

Examples
For AR(1): Pn Xn+1 = φXn
For AR(1) with nonzero-mean: Pn Xn+h = µ + φh (Xn − µ)
For AR(p): if n>p then Pn Xn+1 = φ1 Xn + · · · + φp Xn+1−p
For MA(q): Pn Xn+1 = ∑_{j=1}^{min(n,q)} θnj (Xn+1−j − X̂n+1−j )
For ARMA(p,q): if n > m = max(p, q), then for all h ≥ 1

Pn Xn+h = ∑_{i=1}^{p} φi Pn Xn+h−i + ∑_{j=h}^{q} θn+h−1,j (Xn+h−j − X̂n+h−j )

The calculations are performed automatically using the forecast{itsmr} function
in R
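For comparison, base R's predict() on an arima() fit returns the same kind of h-step forecasts; a sketch on simulated AR(1) data:

```r
set.seed(1)
x   <- arima.sim(model = list(ar = 0.7), n = 200)
fit <- arima(x, order = c(1, 0, 0))
fc  <- predict(fit, n.ahead = 10)
fc$pred   # forecasts P_n X_{n+h}, h = 1, ..., 10
fc$se     # corresponding prediction standard errors
```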

93/106
Time Series Analysis
ARIMA models

We have already seen the importance of the class of ARMA models for
representing stationary time series. A generalization of this class, which
includes a wide range of nonstationary series, is provided by the ARIMA
(AutoRegressive Integrated Moving Average) models.

Definition
If d is a nonnegative integer, then Xt is an ARIMA(p,d,q) process if

Yt := (1 − B)d Xt

is a causal ARMA(p,q) process.


B is the backward shift operator.

ARIMA processes reduce to ARMA processes when differenced finitely


many times
Xt is stationary if and only if d = 0

94/106
Time Series Analysis
ARIMA models

The definition means that Xt satisfies a difference equation of the form:

φ(B)(1 − B)d Xt = θ(B)Zt , Zt ∼ WN(0, σ 2 )

The polynomial φ∗ (B) = φ(B)(1 − B)d has now a zero of order d at z = 1.

Example
Consider Xt , an ARIMA(1,1,0) process with φ ∈ (−1, 1),

(1 − φB)(1 − B)Xt = Zt , Zt ∼ WN(0, σ 2 )

95/106
Time Series Analysis
ARIMA models

ACF for ARIMA models


A distinctive feature of the data that suggests the appropriateness of an
ARIMA model is the slowly decaying positive sample autocorrelation
function.
In order to fit an ARIMA model to the data, we would apply the operator
∇ = 1 − B repeatedly in the hope that for some j, ∇^j Xt will have a rapidly
decaying sample autocorrelation function compatible with that of an ARMA
process with no zeros of the autoregressive polynomial near the unit circle.
Example with R with φ = 0.8, n = 200, and σ² = 1
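A sketch of that example with base R only:

```r
set.seed(1)
x <- arima.sim(model = list(order = c(1, 1, 0), ar = 0.8), n = 200, sd = 1)
acf(x)          # slowly decaying sample ACF, typical of an ARIMA series with d >= 1
acf(diff(x))    # after one difference, an ACF compatible with a stationary AR(1)
```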

96/106
Time Series Analysis
ARIMA models

Modeling of ARIMA models


Deviations from stationarity (e.g. trend, seasonality, heteroskedasticity)
may be suggested by the graph of the series itself, by the sample
autocorrelation function, or both.
We have seen how to handle the Trend and Seasonality components
A logarithmic transformation is appropriate whenever the variance of the
series increases with the level of the process. A general class of
variance-stabilizing transformations is given by the Box-Cox transformation
fλ :
fλ (Xt ) = λ^{−1}(Xt^λ − 1) for Xt ≥ 0, λ > 0;  fλ (Xt ) = ln Xt for Xt > 0, λ = 0
In practice, λ is often 0 or 0.5
We can use powerTransform{car } to estimate λ.

Example with R using wine data

97/106
Time Series Analysis
ARIMA models

Units Roots in Time Series


The unit root problem arises when either the AR or MA polynomial of
ARMA model has a root on or near to the unit circle.
A root near 1 of the AR polynomial suggests that the data should be
differenced before fitting an ARMA model
A root near 1 of the MA polynomial suggests that the data were
overdifferenced.

98/106
Time Series Analysis
ARIMA models

Units Roots in Time Series:


Augmented Dickey-Fuller Test for AR processes
For MA processes it is more complicated (the general case is not fully resolved)

99/106
Time Series Analysis
SARIMA models

We have already seen how differencing the series Xt at lag s is a convenient
way of eliminating a seasonal component of period s.

Definition
If d and D are nonnegative integers, then Xt is a seasonal
ARIMA(p, d, q)x(P, D, Q)s process with period s if the difference series
Yt = (1 − B)d (1 − B s )D Xt is a causal ARMA process defined by

φ(B)Φ(B s )Yt = θ(B)Θ(B s )Zt

where B is the backward shift operator. Zt ∼ WN(0, σ 2 ).


φ(z) = 1 − φ1 z − · · · − φp z p , Φ(z) = 1 − Φ1 z − · · · − ΦP z P ,
θ(z) = 1 + θ1 z + · · · + θq z q , and Θ(z) = 1 + Θ1 z + · · · + ΘQ z Q

The process Yt is causal if and only if φ(z) ≠ 0 and Φ(z) ≠ 0 for | z | ≤ 1 (all zeros outside the unit circle)

100/106
Time Series Analysis
SARIMA models

A nonstationary process has often a seasonal component that repeats itself


after a regular period of time. The seasonal period can be:
Monthly: s = 12 (12 observations per year)
Quarterly: s = 4 (4 observations per year)
Daily: s = 365 (365 observations per year)
Daily per week: s = 5 (5 working days)
Weekly: s = 52 (52 observations per year)
Example lecture note
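A minimal sketch of fitting a seasonal model with base R's arima(); the (0,1,1)x(0,1,1)_12 "airline" specification and the AirPassengers data are chosen only as an illustration:

```r
fit <- arima(log(AirPassengers),
             order    = c(0, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
fit
predict(fit, n.ahead = 12)$pred   # 12-month-ahead forecasts (on the log scale)
```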

101/106
Time Series Analysis
Forecasting Techniques

We focused up to this point on fitting time series models for


stationary and nonstationary series. We present now 2 techniques
for forecasting:
The ARAR algorithm: use AR-ARMA modeling and
shortening.
The Holt-Winters (HW) algorithm: use of exponential
smoothing for forecasting.
See R code for examples.
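A sketch of the Holt-Winters technique with base R (AirPassengers is used only as an example series):

```r
hw <- HoltWinters(AirPassengers)   # level, trend, and seasonal exponential smoothing
predict(hw, n.ahead = 12)          # 12-step-ahead forecasts
```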

102/106
Time Series Analysis
Regression with ARMA errors

The regression model with ARMA errors is given by:

Yt = βXt + Rt where,
Rt = φ1 Rt−1 + · · · + φp Rt−p − θ1 zt−1 − · · · − θq zt−q + zt

The interpretation of the regression coefficients is the same as in ordinary
regression models.
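A sketch of fitting such a model with base R's arima() and its xreg argument (simulated data; the orders and coefficients are arbitrary):

```r
set.seed(1)
x <- 1:200                                                                 # regressor
y <- 2 + 0.05 * x + arima.sim(model = list(ar = 0.6, ma = 0.3), n = 200)   # ARMA(1,1) errors
fit <- arima(y, order = c(1, 0, 1), xreg = x)
fit$coef   # ARMA coefficients plus intercept and the regression coefficient on x
```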

103/106
Time Series Analysis
STL decomposition

The Seasonal and Trend decomposition using LOESS (STL) is a robust
method for decomposing time series
The seasonal component is allowed to change over time.
It can be robust to outliers.
Cleveland, R. B., Cleveland, W. S., McRae, J. E., Terpenning, I. J. (1990).
STL: A seasonal-trend decomposition procedure based on loess. Journal of
Official Statistics, 6(1), 3-33.
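A sketch with base R's stl() (log AirPassengers is an arbitrary example series):

```r
fit <- stl(log(AirPassengers), s.window = "periodic")   # fixed seasonal pattern
# use a numeric s.window (e.g. 13) to let the seasonal component evolve over time
plot(fit)   # panels: data, seasonal, trend, remainder
```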

104/106
Time Series Analysis
Financial time series models

Financial time series data are special because of their features,


which include tail heaviness, asymmetry, volatility, and serial
dependence without correlation.
Let us consider Pt , the price of a stock or other financial asset at
time t; then we can define the log return by
Zt = log(Pt ) − log(Pt−1 )
A model for the return series should reflect the fact that the
conditional variance ht of Zt is not independent of past
values of Zt .

105/106
Time Series Analysis
ARCH model
The idea of the ARCH (autoregressive conditional
heteroscedasticity) model is to incorporate the sequence ht in the
model by:
Zt = √(ht ) et ,  et ∼ IID N(0, 1)    (5)

ht is known as the volatility and is related to the past values of Zt² via the
ARCH(p) model:

ht = α0 + ∑_{i=1}^{p} αi Z²t−i ,    (6)

The GARCH(p,q) (generalized ARCH) is given by:

ht = α0 + ∑_{i=1}^{p} αi Z²t−i + ∑_{i=1}^{q} βi ht−i ,    (7)
α0 > 0 and αi ≥ 0, βi ≥ 0.
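A minimal sketch simulating an ARCH(1) series from equations (5)-(6); the parameter values below are made up for illustration:

```r
set.seed(1)
n  <- 1000
a0 <- 0.1; a1 <- 0.5                   # alpha_0 > 0, 0 <= alpha_1 < 1 (hypothetical values)
z  <- numeric(n); h <- numeric(n)
h[1] <- a0 / (1 - a1)                  # start at the unconditional variance
z[1] <- sqrt(h[1]) * rnorm(1)
for (t in 2:n) {
  h[t] <- a0 + a1 * z[t - 1]^2         # volatility driven by the last squared return
  z[t] <- sqrt(h[t]) * rnorm(1)        # Z_t = sqrt(h_t) e_t
}
acf(z)      # little correlation in Z_t ...
acf(z^2)    # ... but clear dependence in Z_t^2
```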
106/106
Time Series Analysis
