
Week 2:

Univariate Models –
Autoregression Part 1
Intermediate Econometrics - 5QQMN938
Dr Jack Fosten
This Week
Covered this week
• Introducing the AR(1) model
• Properties of AR(1): conditions for stationarity, mean, variance and autocorrelation
• Estimating the AR(1) model using OLS
• Forecasting: intro to point and interval forecasts
• Real-world case study: we will make a forecast of CPI inflation
Tutorial
• Running the AR(1) model on real world data and making our first forecast
Reading
• S&W: 14.3, 14.4 and Appendix (also Enders: 2.3, 2.7)
Looking Ahead
• We will look at extending the AR(1) model to the AR(p) model



Overview of the
AR(1) Model
What is the AR(1) model and how do we solve it?
Re-cap on Last Week

• We saw data transformations using lags, differences and growth rates
• We looked at time series correlation, or "autocorrelation", using a plot
• We introduced "stationarity" and "non-stationarity" and saw how the AC plot can detect this informally
• We used stationary CPI inflation and employment growth
• We need an econometric model which explains autocorrelation and can be used to forecast



Introduction to Autoregression

The autoregressive (AR) model expresses 𝑌𝑡 as a function of its own lag(s). A natural way to forecast 𝑌𝑡 is based on its past value 𝑌𝑡−1.

AR(1) Model
The stationary first-order autoregressive model, or AR(1) model, is:

𝑌𝑡 = 𝛽0 + 𝛽1𝑌𝑡−1 + 𝑢𝑡

where |𝛽1| < 1 and 𝑢𝑡 is a white noise error process with 𝐸(𝑢𝑡) = 0, 𝑉𝑎𝑟(𝑢𝑡) = 𝜎², 𝑐𝑜𝑣(𝑢𝑡, 𝑢𝑡+𝑗) = 0.

• The |𝛽1| < 1 assumption is critical for stationarity
• This models the serial correlation in 𝑌𝑡
• Simple and very useful for forecasting
• Often forecasts better than complex models

What does an AR(1) Process Look Like?

We can use Stata (or other software) to artificially generate a simulated AR(1) process. We can draw a sample of T = 200 error terms from a standard normal distribution using a random number generator.

Assume that 𝑌0 = 0 and let 𝛽0 = 1 and 𝛽1 = 0.5, so that:

𝑌𝑡 = 1 + 0.5𝑌𝑡−1 + 𝑢𝑡
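
Below is a minimal Stata sketch of how such a series could be simulated; the seed, variable names and recursive replace are illustrative choices, not taken from the slides.

* simulate T = 200 observations of Y_t = 1 + 0.5*Y_{t-1} + u_t with Y_0 = 0
clear
set seed 12345
set obs 200
gen t = _n
tsset t
gen u = rnormal()                       // standard normal errors
gen y = .
replace y = 1 + 0.5*0 + u in 1          // first observation uses Y_0 = 0
replace y = 1 + 0.5*y[_n-1] + u in 2/200
tsline y                                // plot the simulated AR(1) series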



Solving the AR(1) Model
The AR(1) model is a stochastic difference equation. Let’s try and solve it
as a function of the error term 𝑢𝑡
AR(1) Model with 𝜷𝟎 = 𝟎
To simplify our lives, let's assume the constant is zero (we're just getting rid of 𝛽0 in the AR(1) model 𝑌𝑡 = 𝛽0 + 𝛽1𝑌𝑡−1 + 𝑢𝑡):

𝑌𝑡 = 𝛽1𝑌𝑡−1 + 𝑢𝑡   (1)

Now by lagging the equation one period, we get:

𝑌𝑡−1 = 𝛽1 𝑌𝑡−2 + 𝑢𝑡−1 (2)

Now substitute (2) into (1) to get:

𝑌𝑡 = 𝛽1 (𝛽1 𝑌𝑡−2 + 𝑢𝑡−1 ) + 𝑢𝑡 (3)

Exercise: simplify (3) and repeat by substituting in for 𝒀𝒕−𝟐 . Do you notice a pattern?



Solving the AR(1) Model

𝑌𝑡 = 𝛽1²𝑌𝑡−2 + 𝛽1𝑢𝑡−1 + 𝑢𝑡
   = 𝛽1²(𝛽1𝑌𝑡−3 + 𝑢𝑡−2) + 𝛽1𝑢𝑡−1 + 𝑢𝑡   (using the fact that 𝑌𝑡−2 = 𝛽1𝑌𝑡−3 + 𝑢𝑡−2, i.e. lagging the AR(1) model twice)
   = 𝛽1³𝑌𝑡−3 + 𝛽1²𝑢𝑡−2 + 𝛽1𝑢𝑡−1 + 𝑢𝑡

Can you see a pattern emerging?
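
Making the pattern explicit (a small intermediate step): after substituting back k times we have

$$Y_t = \beta_1^k Y_{t-k} + \sum_{i=0}^{k-1} \beta_1^i u_{t-i},$$

and setting k = t, so that the remaining lag is the initial condition 𝑌0, gives the general solution below.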

Let's assume that we have a (finite) initial condition 𝑌0. Then:

General Solution to the AR(1) Model

$$Y_t = \beta_1^t Y_0 + \sum_{i=0}^{t-1} \beta_1^i u_{t-i}$$

Let’s analyse this expression further



Solving the AR(1) Model

• Let's now use the assumption |𝛽1| < 1 and look at what happens when 𝒕 grows to infinity, so 𝑡 → ∞ and 𝛽1ᵗ → 0
  (If 𝛽1 = 0.1, imagine this getting smaller and smaller: 𝛽1² = 0.01, 𝛽1³ = 0.001, …)
• The first part of the solution of 𝑌𝑡 vanishes to zero:

$$\lim_{t \to \infty} \beta_1^t Y_0 = 0$$

• Leaving the solution to be:

General Solution to the AR(1) Model as 𝒕 grows large:

$$Y_t = \sum_{i=0}^{\infty} \beta_1^i u_{t-i} = u_t + \beta_1 u_{t-1} + \beta_1^2 u_{t-2} + \cdots$$

• This expression only converges if |𝛽1| < 1
• We can use this infinite sum to show 𝑌𝑡 is stationary

Note
• Rewriting 𝑌𝑡 as a function of the error terms instead of 𝑌𝑡−1 is called "inverting" the AR(1) model
• This can only be done when |𝛽1| < 1. Can you think why?
• This is also called the "moving average (MA) representation" of the AR(1) model, because 𝑌𝑡 is written as a weighted average of error terms
• We will explore these concepts later in the course



Wold's Decomposition
The solution to the AR(1) process is a special case of a remarkable result called Wold's decomposition.

Wold's Decomposition
Any zero-mean weakly stationary time series 𝑌𝑡 can be decomposed as:

$$Y_t = \sum_{j=0}^{\infty} \alpha_j u_{t-j} + d_t$$

where 𝑢𝑡 is white noise with mean zero and variance 𝜎², 𝛼0 = 1, $\sum_{j=0}^{\infty} \alpha_j^2 < \infty$, and 𝑑𝑡 is a deterministic term.

• So ANY weakly stationary process can be written as a function of white noise errors
• Notice how the general solution to the AR(1) model with no constant is a special case of this decomposition where 𝛼0, 𝛼1, 𝛼2, … = 1, 𝛽1, 𝛽1², … and 𝑑𝑡 = 0



Properties of the
AR(1) Model
Mean, variance and ACF of the AR(1) model
Stationarity of the AR(1) Model

Last time, we saw that we require stationarity for consistency of OLS in time series regression. Stationarity is also important for forecasting time series data.

Recap: Weak Stationarity
A series 𝑌𝑡 with 𝑣𝑎𝑟(𝑌𝑡) < ∞ is weakly stationary if (i) 𝐸(𝑌𝑡) is constant, (ii) 𝑣𝑎𝑟(𝑌𝑡) is constant and (iii) 𝑐𝑜𝑣(𝑌𝑡, 𝑌𝑡+𝑗) only depends on 𝑗 and not 𝑡.

• Now that we have a model for 𝑌𝑡 we can check whether these conditions hold



Stationarity of the AR(1) Model

Show that the mean, variance and covariance do not depend on time.

Taking Expectations
Remember a couple of properties of expectations. If 𝑋 is a random variable and 𝑎 is a constant, then:

𝐸(𝑎𝑋) = 𝑎𝐸(𝑋)

and if 𝑌 is also a random variable, then:

𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌)

Mean
Using the solution (MA representation) for 𝑌𝑡 from before:

$$E(Y_t) = E\left(\sum_{i=0}^{\infty} \beta_1^i u_{t-i}\right) = E(u_t) + \beta_1 E(u_{t-1}) + \beta_1^2 E(u_{t-2}) + \cdots = 0$$

• The mean is independent of 𝑡, so we have shown the first condition for weak stationarity



Stationarity of the AR(1) Model

Infinite Geometric Series
If we have an infinite geometric series like:

1 + 𝑎 + 𝑎² + 𝑎³ + ⋯

where |𝑎| < 1, then the infinite sum is bounded and its value is given by:

1 / (1 − 𝑎)

For example, if 𝑎 = 1/2, then 1 + 1/2 + 1/4 + 1/8 + ⋯ = 1/(1 − 1/2) = 2. Try it for yourself!

Variance
Using the formula for 𝑌𝑡 and the fact that 𝐸(𝑌𝑡) = 0 from the last slide:

$$\begin{aligned}
var(Y_t) &= E\left[\left(Y_t - E(Y_t)\right)^2\right] \\
&= E\left[\left(\sum_{i=0}^{\infty} \beta_1^i u_{t-i} - E(Y_t)\right)^2\right] \\
&= E\left[\left(u_t + \beta_1 u_{t-1} + \beta_1^2 u_{t-2} + \cdots\right)^2\right] \\
&= E\left[u_t u_t + \beta_1 u_t u_{t-1} + \beta_1^2 u_t u_{t-2} + \cdots + \beta_1 u_{t-1} u_t + \beta_1^2 u_{t-1} u_{t-1} + \beta_1^3 u_{t-1} u_{t-2} + \cdots\right] \\
&= E(u_t^2) + \beta_1^2 E(u_{t-1}^2) + \beta_1^4 E(u_{t-2}^2) + \cdots \\
&= \sigma^2 \left(1 + \beta_1^2 + \beta_1^4 + \cdots\right)
\end{aligned}$$

• Since 𝑢𝑡 is uncorrelated white noise, all the cross products of 𝑢𝑡 with 𝑢𝑡−1 etc. are 0 in expectation
• The variance of the error term is assumed to be 𝑣𝑎𝑟(𝑢𝑡) = 𝐸(𝑢𝑡²) = 𝜎²



Stationarity of the AR(1) Model

Variance
• Finally! Using the assumption that |𝛽1| < 1 and the infinite geometric series result (substitute 𝑎 = 𝛽1² to match up the expressions) we get:

$$var(Y_t) = \sigma^2\left(1 + \beta_1^2 + \beta_1^4 + \cdots\right) = \frac{\sigma^2}{1 - \beta_1^2}$$

• Which does not vary over time, as required for weak stationarity
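
As an informal check, you could compare the sample variance of the simulated series with the theoretical value 𝜎²/(1 − 𝛽1²) = 1/(1 − 0.25) ≈ 1.33. A sketch, assuming the simulated series y from earlier (the sample value will differ a little from draw to draw):

display 1/(1 - 0.5^2)      // theoretical variance with beta1 = 0.5 and sigma^2 = 1
summarize y                // sample moments of the simulated series
display r(Var)             // sample variance stored by summarize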



Stationarity of the AR(1) Model

Covariance
The covariance is a bit more fiddly:

$$\begin{aligned}
cov(Y_t, Y_{t+j}) &= E\left[\left(Y_t - E(Y_t)\right)\left(Y_{t+j} - E(Y_{t+j})\right)\right] \\
&= E\left[\left(u_t + \beta_1 u_{t-1} + \beta_1^2 u_{t-2} + \cdots\right)\left(u_{t+j} + \beta_1 u_{t+j-1} + \cdots + \beta_1^j u_t + \beta_1^{j+1} u_{t-1} + \cdots\right)\right] \\
&= \beta_1^j \sigma^2 + \beta_1^{j+2} \sigma^2 + \beta_1^{j+4} \sigma^2 + \cdots \\
&= \beta_1^j \sigma^2 \left(1 + \beta_1^2 + \beta_1^4 + \cdots\right) \\
&= \frac{\beta_1^j \sigma^2}{1 - \beta_1^2}
\end{aligned}$$

• Remember 𝐸(𝑌𝑡+𝑗) = 0 for any 𝑗, and only the terms in 𝐸(𝑢𝑡²), 𝐸(𝑢𝑡−1²), … are non-zero because 𝑢𝑡 is white noise
• Now this expression is independent of 𝑡 and only depends on 𝑗
• This satisfies the final condition for weak stationarity



The AR(1) Model
with a Constant
If we now have the full model:

𝑌𝑡 = 𝛽0 + 𝛽1 𝑌𝑡−1 + 𝑢𝑡

• The only thing that changes is the mean:


Mean

$$E(Y_t) = \frac{\beta_0}{1 - \beta_1}$$

• This is known as the “long-run mean”


• You will show this formally in the activity

Variance and Covariance


• Same as before (see activity/problem set)
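
As a brief sketch of where the long-run mean above comes from (the activity asks you to show it properly): if 𝑌𝑡 is stationary then 𝐸(𝑌𝑡) = 𝐸(𝑌𝑡−1) = 𝜇, so taking expectations of both sides of the model gives

$$\mu = \beta_0 + \beta_1 \mu \quad\Rightarrow\quad \mu = \frac{\beta_0}{1 - \beta_1}.$$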



The ACF of the AR(1) Model

Last time we saw the autocorrelation coefficient. We can find all of the autocorrelations as a function of 𝛽1.

Autocorrelation Function

$$\rho_j = corr(Y_t, Y_{t-j}) = \frac{cov(Y_t, Y_{t-j})}{\sqrt{var(Y_t)\,var(Y_{t-j})}}$$

• And now we know 𝑐𝑜𝑣(𝑌𝑡, 𝑌𝑡−𝑗) and 𝑣𝑎𝑟(𝑌𝑡) for the AR(1) model, so we get:

$$\rho_j = \frac{\beta_1^j \sigma^2}{1 - \beta_1^2} \times \frac{1 - \beta_1^2}{\sigma^2} = \beta_1^j$$

• So the ACF of the AR(1) model with |𝛽1| < 1 shows a geometric decline

Back to the Simulated Data
Remember the simulated data from before with 𝛽1 = 0.5.
• What do we expect the ACF to be in theory?
• Is this quite close to what we expect?
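
A quick way to make this comparison in Stata, assuming the simulated series y from earlier (corrgram reports the sample autocorrelations):

corrgram y, lags(5)        // sample ACF of the simulated series
forvalues j = 1/5 {
    display "theoretical rho(" `j' ") = " 0.5^`j'
}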



Stationarity Condition
of the AR(1) Model
Playing around with the condition that |𝛽1| < 1
Playing around with 𝜷𝟏

The parameter 𝛽1 is crucial for stationarity of 𝑌𝑡

• The main assumption for stationarity in the AR(1) model is that we need |𝛽1| < 1
• To explore this, I created an Excel file for you
called “AR1 Simulator.xlsx” (see activity)
• Here you can play around with 𝛽0 and 𝛽1 and
watch what happens to 𝑌𝑡 as things change
• Try to understand the Excel formula for 𝑌𝑡
• On the right are the cases of 𝛽1 = 0.1 and
𝛽1 = 0.9
• Positive values for 𝛽1 are the most common in
economics



Non-stationarity Caused by 𝜷𝟏

We need |𝛽1| < 1 for stationarity. There are different cases where this does not hold.
Random Walk (𝜷𝟏 = 𝟏)
• When 𝛽1 = 1 we have what is called a
“unit root” or “random walk” process
• We will cover these in detail later, but we
will not allow for these just yet (see right,
above)
Explosive Time Series (𝜷𝟏 > 𝟏)
• Explosive time series do not really occur
in economics. We will rule these out (but
see graph on right, below, for 𝛽1 = 1.05)



Negative values of 𝜷𝟏

Negative values are allowed under |𝛽1| < 1, but strong negative serial correlation is rare.
• The graphs to the right show the cases of
𝛽1 = −0.1 and 𝛽1 = −0.9
• The second case is clearly not something
we would expect of economic time series

• We saw that negative autocorrelation estimates can happen with large outliers if they are not removed
  – In other words, we have a positively autocorrelated series, except the outlier erroneously causes negative correlation



Estimation and
Forecasting
Practicalities of the AR(1) model
Estimating the AR(1) Model

It's as easy as O-L-S!
• In the AR(1) model, we know 𝑌𝑡 and 𝑌𝑡−1
• We can just do OLS then!
• With a sample of data 𝑡 = 1, … , 𝑇, we estimate the AR(1) model using data from 𝑡 = 2, … , 𝑇
• This regression has 𝑇 − 1 observations
• We lose one observation by taking the lag. This affects the degrees of freedom (d.f.) when we do a 𝑡-test or 𝐹-test

Which Standard Errors?
• If we fear heteroskedasticity then we should use robust standard errors
• With dynamic models (with a lagged dependent variable) we usually do not worry about serial correlation. We'll come back to this

Stata Commands
Remember the simulated example from before:

𝑌𝑡 = 1 + 0.5𝑌𝑡−1 + 𝑢𝑡

We can run a simple AR(1) model on the data simulated in the previous slides. The Stata command is:

regress y L.y
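
A couple of closely related ways to run the same regression (a sketch; the variable name ylag is illustrative, and vce(robust) gives the heteroskedasticity-robust standard errors mentioned above):

gen ylag = L.y              // explicit lag; the first observation is missing
regress y ylag              // identical to: regress y L.y (T - 1 = 199 observations)
regress y L.y, vce(robust)  // same AR(1) regression with robust standard errors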



Estimating the AR(1) Model
. regress y L.y

Source | SS df MS Number of obs = 199


-------------+---------------------------------- F(1, 197) = 61.68
Model | 57.4302648 1 57.4302648 Prob > F = 0.0000
Residual | 183.428038 197 .931106793 R-squared = 0.2384
-------------+---------------------------------- Adj R-squared = 0.2346
Total | 240.858303 198 1.21645608 Root MSE = .96494

------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y |
L1. | .485165 .0617759 7.85 0.000 .3633381 .6069919
|
_cons | .8890972 .1262882 7.04 0.000 .6400468 1.138148
------------------------------------------------------------------------------

Note the degrees of freedom in the F-test is 𝑻 − 𝟏 − 𝒌 and 𝒌 here is 2, so we have 𝑻 − 𝟑 degrees of
freedom, i.e. 𝟐𝟎𝟎 − 𝟑 = 𝟏𝟗𝟕



Introduction to Forecasting
Why/who would want to forecast?
• Forecasting is one of the most important parts of time series modelling
• Forecasts are used in all sectors:
– “What do we expect GDP growth to be next quarter?” (Bank of England fan chart)
– “What will my company’s sales be next month?”
– “Should I buy, hold or sell this asset in the next week?”

• We will use the AR(1) model to get a simple 1-period ahead forecast
• We should also think about the uncertainty of the forecast, i.e. a confidence interval



Introduction to Forecasting

Different types of forecast
• A point forecast is a forecast of the mean of a variable (a single number)
• An interval forecast conveys uncertainty around the forecast
  – E.g. a 90% forecast interval "contains the future value in 90% of repeated applications"
• It is important we do not only report point forecasts
  – In the diagram opposite, each of the shaded bands is an interval
• A fan chart displays the uncertainty at different levels of confidence
  – The collection of shaded red parts forms the fan chart, or "the river of blood"

BoE Fan Charts
There is a page about the Bank of England's fan charts in S&W Chapter 14.

The BoE are credited with being the first to use this graphical display of forecast uncertainty. It's a way of conveying the BoE's own feelings about future uncertainty of GDP, inflation etc.

We will create one of these using models later in the course when we look at multi-step forecasting (at the moment we will only look at one-step-ahead forecasting).

Source: Bank of England – Inflation Report Aug 18



Point Forecasts: the AR(1) Case

Let's introduce some general forecasting notation. An "out-of-sample" point forecast of period 𝑇 + 1 using all data up to period 𝑇 is as follows:

Optimal point forecast using unknown coefficients
• 𝑌𝑇+1|𝑇 is the conditional expectation of 𝑌𝑇+1, given all information up to 𝑇

Point forecast using estimated coefficients
• In practice, we estimate the coefficients: Ŷ𝑇+1|𝑇 is the same but with estimated coefficients used in place of the unknown population coefficients

Now let's apply this to the AR(1) model we have just learnt. First, write out the true AR(1) model for 𝑌 at 𝑇 + 1:

$$Y_{T+1} = \beta_0 + \beta_1 Y_T + u_{T+1}$$

Optimal point forecast using unknown coefficients:

$$Y_{T+1|T} = E\left(\beta_0 + \beta_1 Y_T + u_{T+1} \mid Y_T\right) = \beta_0 + \beta_1 Y_T$$

Point forecast using estimated coefficients:

$$\hat{Y}_{T+1|T} = \hat{\beta}_0 + \hat{\beta}_1 Y_T$$

This is very simple to calculate using 𝛽̂0, 𝛽̂1 and 𝑌𝑇, which are all known.
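
For example, after running the regression on the simulated data, the point forecast can be computed by hand from the stored coefficients. A sketch, assuming y is tsset so that y[_N] is the last in-sample observation 𝑌𝑇:

regress y L.y
display _b[_cons] + _b[L.y]*y[_N]   // beta0_hat + beta1_hat * Y_T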



Interval Forecasts: the AR(1) Case

We should convey the uncertainty which surrounds the forecast Ŷ𝑇+1|𝑇.
• We want to assess the forecast error: 𝑌𝑇+1 − Ŷ𝑇+1|𝑇
• We can do this using the root mean squared forecast error (RMSFE):

$$\sqrt{E\left[\left(Y_{T+1} - \hat{Y}_{T+1|T}\right)^2\right]}$$

• This is just the square root of the MSFE
• The uncertainty has two parts:
  – Uncertainty about the future error term
  – Uncertainty about estimation of 𝛽0, 𝛽1

Now let's apply this to the AR(1) model we have just learnt. Taking the expressions above we get:

Forecast error

$$Y_{T+1} - \hat{Y}_{T+1|T} = u_{T+1} - \left[(\hat{\beta}_0 - \beta_0) + (\hat{\beta}_1 - \beta_1) Y_T\right]$$

Mean squared forecast error

$$E\left[\left(Y_{T+1} - \hat{Y}_{T+1|T}\right)^2\right] = \sigma^2 + var\left((\hat{\beta}_0 - \beta_0) + (\hat{\beta}_1 - \beta_1) Y_T\right)$$



Constructing Forecast Intervals

• If we assume the forecast error is normally distributed we can compute a forecast interval
• All we need is an estimate of the RMSFE, which we will call se(𝑌𝑇+1 − Ŷ𝑇+1|𝑇)
• The Stata command is like "predict sf, stdf", where stdf means standard error of the forecast

The 90% Forecast Interval

$$\hat{Y}_{T+1|T} \pm 1.645\,\mathrm{se}\left(Y_{T+1} - \hat{Y}_{T+1|T}\right)$$

where 1.645 is the 90% critical value from the standard normal.
• The 95%, 80%, 60% etc. intervals are computed using different critical values from the normal tables

Stata Commands
Let's run through the steps to point and interval forecasts.

1) Add a new observation at the end of the sample (the extra observation is the forecast period):

tsappend, add(1)

2) Run the AR(1) regression and save the predicted values:

regress y L.y
predict y1

3) Save the RMSFE using stdf, and save the 90% intervals:

predict sf1, stdf
gen y1L = y1 - 1.645*sf1
gen y1U = y1 + 1.645*sf1

Table: Critical Values for Commonly-used Confidence Levels

Significance 𝛼   Confidence 100(1 − 2𝛼)   Critical Value 𝑧𝛼
0.025            95%                       1.96
0.05             90%                       1.645
0.1              80%                       1.28
0.2              60%                       0.84



Worked Example with the Simulated Data

Don’t worry, we’ll practice this in tutorial!

Let’s look through the manual workings in the next slide



Worked Example with the Simulated Data

Formulae Recap
• Point forecast: Ŷ𝑇+1|𝑇 = 𝛽̂0 + 𝛽̂1𝑌𝑇
• 90% interval forecast: Ŷ𝑇+1|𝑇 ± 1.645 se(𝑌𝑇+1 − Ŷ𝑇+1|𝑇)

• The last observation in the sample is: 𝑌200 = 0.361
• The regression estimates are those from the Stata output shown earlier: 𝛽̂0 = 0.889 and 𝛽̂1 = 0.485
• The RMSFE estimate is: se(𝑌201 − Ŷ201|200) = 0.971 (Where did this come from!? See the last slide. We'll see this in the tutorial)

Point forecast

Ŷ201|200 = 0.889 + 0.485 × 0.361 = 1.064

90% Interval Forecast

Ŷ201|200 ± 1.645 se(𝑌201 − Ŷ201|200)
= 1.064 ± 1.645 × 0.971
= 1.064 ± 1.597
= [−0.533, 2.661]
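
To see where the 0.971 comes from in practice, the steps from the previous slide can be run on the simulated data. A sketch using the variable names y1 and sf1 from that slide; the exact numbers depend on the simulated draw:

tsappend, add(1)                   // adds observation 201 with y missing
regress y L.y
predict y1                         // fitted values; observation 201 holds the point forecast (about 1.064)
predict sf1, stdf                  // forecast standard error; about 0.971 at observation 201
display y1[_N] - 1.645*sf1[_N]     // 90% lower bound (about -0.533)
display y1[_N] + 1.645*sf1[_N]     // 90% upper bound (about 2.661)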



Finding Critical Values in the Normal Table

If we wanted some other confidence level, e.g. 66%, we would need to look up the critical value in the standard normal table.

Example: the highlighted row of the table gives the probability (area to the left) corresponding to the values -1.90, -1.91, -1.92, etc.

The value for -1.96 is 0.025 = 2.5%, which matches the critical-value table above.
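
Alternatively, software can do the lookup. For a 100(1 − 2𝛼)% interval we need the value with probability 1 − 𝛼 to its left, so a short Stata sketch is:

display invnormal(1 - 0.05)        // 90% interval critical value: 1.645
display invnormal(1 - 0.17)        // 66% interval critical value: roughly 0.95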



Real World Case Study
Forecasting U.S. CPI Inflation

Monthly CPI Inflation

U.S. inflation has been a hot topic in the news since 2022 (see the BBC links for December and January).

Compare the numbers in those articles to the graph of monthly CPI inflation and explain what you see.



Monthly CPI Inflation



Inflation AR(1) Regression
. reg dCPI L.dCPI

Source | SS df MS Number of obs = 465


-------------+---------------------------------- F(1, 463) = 134.29
Model | 7.58270528 1 7.58270528 Prob > F = 0.0000
Residual | 26.1434344 463 .056465301 R-squared = 0.2248
-------------+---------------------------------- Adj R-squared = 0.2232
Total | 33.7261397 464 .072685646 Root MSE = .23762

------------------------------------------------------------------------------
dCPI | Coefficient Std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
dCPI |
L1. | .4738204 .0408877 11.59 0.000 .393472 .5541688
|
_cons | .1203869 .0144974 8.30 0.000 .091898 .1488758
------------------------------------------------------------------------------



Inflation AR(1) Regression
. reg dCPI L.dCPI, robust

Linear regression Number of obs = 465


F(1, 463) = 38.81
Prob > F = 0.0000
R-squared = 0.2248
Root MSE = .23762

------------------------------------------------------------------------------
| Robust
dCPI | Coefficient std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
dCPI |
L1. | .4738204 .0760617 6.23 0.000 .3243516 .6232893
|
_cons | .1203869 .0211387 5.70 0.000 .0788473 .1619265
------------------------------------------------------------------------------



Forecasting U.S. CPI Inflation

• The last available data point is 2022M11 (as the data were downloaded in mid-December 2022)
• Monthly inflation was 0.096% in 2022M11
• Let's make a forecast of 2022M12!

Point forecast

Ŷ2022M12|2022M11 = Fill in the gap!

90% Interval Forecast
• The RMSFE is 0.238, so:

Ŷ2022M12|2022M11 ± 1.645 × 0.238 = Fill in the gap!

(Use the coefficients from the dCPI AR(1) regression output shown on the previous slides.)



Forecasting U.S. CPI Inflation

• The last available data point is 2022M11 (as the data were downloaded in mid-December 2022)
• Monthly inflation was 0.096% in 2022M11
• Let's make a forecast of 2022M12!

Point forecast

Ŷ2022M12|2022M11 = 0.120 + 0.474 × 0.096% = 0.166%

90% Interval Forecast
• The RMSFE is 0.238, so:

Ŷ2022M12|2022M11 ± 1.645 × 0.238
= 0.166% ± 0.39151%
= [−0.226%, 0.557%]

(Coefficients and Root MSE are taken from the dCPI AR(1) regression output shown earlier.)
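
The same numbers can be reproduced directly from the regression output with a few display commands (a sketch using the full-precision coefficients; small differences from the slide are just rounding):

display 0.1203869 + 0.4738204*0.096                    // point forecast, about 0.166
display 0.1203869 + 0.4738204*0.096 - 1.645*0.23762    // 90% lower bound, about -0.225
display 0.1203869 + 0.4738204*0.096 + 1.645*0.23762    // 90% upper bound, about 0.557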



Forecasting U.S. CPI Inflation
Our prediction is that CPI inflation was 0.166% in 2022M12

• Since the data were downloaded in December 2022, we actually now have the data release for December 2022, which arrived on 12th Jan 2023!
• You will often find there is a publication delay in many economic series
• Let's compare our prediction with what happened
• The Bureau of Labor Statistics (BLS) released CPI inflation as -0.1%
• (You can also find data releases on Trading Economics)
• Our prediction was pretty close to the released figure in the grand scheme of things!
• The 90% confidence interval included -0.1%

• Do you think our forecast is missing something? How could we improve the forecast?



Take-aways
What did we learn? Where are we going next?
What have we done this
week?
Re-cap
• We have identified the AR(1) model as a simple and powerful tool in time series
• We have spent time understanding the theoretical properties of this model:
– Mean, variance, covariance and conditions for stationarity
– How to estimate the model using OLS
– How to produce point and interval forecasts from this
• We computed point and interval forecasts from simulated data, and applied the methods to
forecast U.S. CPI inflation

Next Time
• We will generalise this to the AR(p) model
• We will learn how to choose between competing forecast models



Thank you
Thanks for your attention this week!
