Literature Review v02


Literature Review:

In this section, we provide a brief description of the models examined in this research paper.
To compare the results of the simple naive approach with those of more sophisticated
optimization models, we employ a number of minimum-variance optimization models in our
analysis. Applying the different models to the same dataset gives us an understanding of the
variation in outcomes while facilitating conclusive recommendations in the given context.

1. Equally Weighted Moving Average covariance:

This simple covariance model relies on time-series data over a fixed historic period to
estimate volatility and correlation; that is, it is a simple equally weighted moving average
over a look-back window that rolls forward through time. The statistical formula for the
n-period moving average covariance presented by Kevin Sheppard (Chapter 9) is:

\hat{\Sigma}_t = \frac{1}{n} \sum_{i=1}^{n} r_{t-i}\, r_{t-i}'   (1.1)

One specific argument about this model concerns the choice of a short or long look-back
period. A short historic range immediately reflects current market conditions but can be
biased due to volatility clustering. A long historic period, on the other hand, can generate
forecasts far from actual values if one or more extreme past data points are included in the
averaging period. The effect of an extreme data point remains in the model for the length of
the averaging period (n), until the point falls out of the averaging window. This model is
only suitable for long-term forecasts of average volatility, and the attempt to forecast
time-varying volatility from an estimate of constant volatility is a disadvantage
(Alexander, C., 2008).
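As a concrete sketch, the rolling average of outer products in equation (1.1) takes only a few lines of NumPy. The data, window length, and function name below are hypothetical illustrations, not part of the original study:

```python
import numpy as np

def moving_average_cov(returns, n):
    """Equation (1.1): equally weighted moving average covariance.

    returns : (T, k) array of demeaned asset returns
    n       : look-back window length
    Returns the (k, k) covariance estimate, the average of the last
    n outer products r_{t-i} r'_{t-i}.
    """
    window = returns[-n:]            # last n observations in the look-back window
    # Sigma_t = (1/n) * sum_{i=1}^{n} r_{t-i} r'_{t-i}
    return window.T @ window / n

# Toy data: 500 days of returns for 3 hypothetical assets
rng = np.random.default_rng(0)
r = rng.standard_normal((500, 3)) * 0.01
sigma = moving_average_cov(r, n=250)
print(sigma.shape)   # (3, 3)
```

Note how an extreme observation contributes with full weight 1/n for exactly n periods and then drops out abruptly, which is the behaviour criticised in the paragraph above.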

2. Exponentially Weighted Moving Average covariance:

EWMA, an alternative to the equally weighted moving average, was developed to address the
equal-weights issue by placing more weight on recent observations and exponentially
decreasing weights on older data. The decay parameter \lambda (0 < \lambda < 1) is
introduced to distribute the weights over historic data (Kevin Sheppard, Chapter 9):

\hat{\Sigma}_t = (1 - \lambda) \sum_{i=1}^{n} \lambda^{i-1}\, r_{t-i}\, r_{t-i}'   (1.2)

A high \lambda produces volatility estimates that are less reactive to market events but more
persistent, while a low \lambda produces highly reactive but less persistent estimates. The
selection of the \lambda value is often subjective, as the same value has to be used for all
assets in a multivariate context to ensure a positive definite correlation matrix (Engle,
2002). The RiskMetrics covariance (RiskMetrics 1994 Covariance) is computed as an EWMA with
\lambda = 0.94 for daily data and \lambda = 0.97 for monthly data (Kevin Sheppard, Chapter 9).
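The weighting scheme in equation (1.2) is usually implemented through its equivalent one-step recursion, \Sigma_t = (1-\lambda) r_{t-1} r_{t-1}' + \lambda \Sigma_{t-1}. A minimal sketch, with the initial covariance seeded from the full sample (an assumption; any reasonable seed washes out after many observations):

```python
import numpy as np

def ewma_cov(returns, lam=0.94, sigma0=None):
    """Recursive EWMA covariance, RiskMetrics-style:
    Sigma_t = (1 - lam) * r_{t-1} r'_{t-1} + lam * Sigma_{t-1}.

    lam = 0.94 is the RiskMetrics choice for daily data (0.97 for monthly).
    """
    # Seed the recursion with the full-sample covariance (illustrative choice)
    sigma = np.cov(returns, rowvar=False) if sigma0 is None else sigma0
    for t in range(returns.shape[0]):
        r = returns[t][:, None]                    # column vector r_t
        sigma = (1 - lam) * (r @ r.T) + lam * sigma
    return sigma

rng = np.random.default_rng(1)
r = rng.standard_normal((1000, 2)) * 0.01          # hypothetical daily returns
S = ewma_cov(r, lam=0.94)
print(S.shape)   # (2, 2)
```

Because the same scalar lam multiplies the whole matrix, the recursion preserves positive definiteness across all assets, which is why a single shared \lambda is required in the multivariate case.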

3. Constant Conditional Correlation:

The constant conditional correlation (CCC) model, introduced by Bollerslev in 1990, models
the conditional covariance matrix indirectly by estimating the conditional correlation
matrix. It assumes that the correlations between the residuals (\varepsilon_t) or observed
variables (r_{i,t}) are fixed over time while the conditional variances are dynamic. The CCC
model can be estimated in two steps (Bollerslev, 1990). First, n conditional variances are
modeled using univariate GARCH(1,1) models, yielding the vector of standardized residuals u_t:
\sigma_{i,t}^2 = \omega_i + \alpha_i\, r_{i,t-1}^2 + \beta_i\, \sigma_{i,t-1}^2, \quad i = 1, 2, \ldots, n   (1.3)

u_{i,t} = r_{i,t} / \sigma_{i,t}

Second, the constant conditional correlation is estimated by applying the standard
correlation estimator to the standardized residuals (Kevin Sheppard, Chapter 9). Though the
CCC model is convenient due to its interpretable parameters and easy estimation, its
fundamental assumption of time-invariant correlations has often been found unrealistic
(Annastiina and Timo, 2008). Also, the constant correlation estimator does not provide a
method to calculate consistent standard errors under the multi-stage estimation process
(Robert & Kevin, 2001).
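The two steps can be sketched as follows. For brevity, the GARCH(1,1) parameters are assumed known rather than fitted (in practice they come from univariate quasi-maximum-likelihood estimation, e.g. via the `arch` package); all data and parameter values are hypothetical:

```python
import numpy as np

def ccc_correlation(returns, omega, alpha, beta):
    """Two-step CCC sketch with assumed (not estimated) GARCH parameters.

    Step 1: filter each series with GARCH(1,1), equation (1.3):
            sigma2_{i,t} = omega_i + alpha_i r_{i,t-1}^2 + beta_i sigma2_{i,t-1},
            then standardize: u_{i,t} = r_{i,t} / sigma_{i,t}.
    Step 2: the CCC estimate is the ordinary sample correlation of u_t.
    """
    T, k = returns.shape
    sigma2 = np.empty((T, k))
    sigma2[0] = returns.var(axis=0)        # unconditional variance as the seed
    for t in range(1, T):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    u = returns / np.sqrt(sigma2)          # standardized residuals
    return np.corrcoef(u, rowvar=False)    # constant correlation matrix

rng = np.random.default_rng(2)
r = rng.standard_normal((2000, 2)) * 0.01
R = ccc_correlation(r,
                    omega=np.array([1e-6, 1e-6]),
                    alpha=np.array([0.05, 0.05]),
                    beta=np.array([0.90, 0.90]))
print(R.shape)   # (2, 2), unit diagonal
```

The key point of the model is visible in the last line: whatever happens to the conditional variances, the correlation matrix returned in step 2 is a single constant matrix for all t.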

4. Dynamic Conditional Correlation:

In order to avoid the time-invariant correlation assumption, Engle (2002) introduced the
dynamic conditional correlation (DCC) model, which imposes a GARCH-type structure on the
correlations (Annastiina and Timo, 2008). Unlike the CCC estimator, DCC allows the
correlation matrix R_t to vary with time. The form of Engle's DCC model is as follows:
H_t = D_t R_t D_t, \quad D_t = \mathrm{diag}\big(h_{11,t}^{1/2}, \ldots, h_{nn,t}^{1/2}\big)   (1.4)

D_t is the diagonal matrix containing the conditional standard deviations on the leading
diagonal, and R_t is the conditional correlation matrix. R_t can be estimated in two stages:
the first stage uses equation (1.5) to estimate the average (unconditional) correlation,
which is then used in equation (1.6) in the process of calculating the conditional
correlation from equation (1.7).
\bar{\rho}_{ij} = \frac{\sum_{t=1}^{T} \varepsilon_{i,t}\, \varepsilon_{j,t}}{\sqrt{\sum_{t=1}^{T} \varepsilon_{i,t}^2 \, \sum_{t=1}^{T} \varepsilon_{j,t}^2}}   (1.5)

Q_t = (1 - a - b)\, \bar{R} + a\, \varepsilon_{t-1} \varepsilon_{t-1}' + b\, Q_{t-1}   (1.6)

R_t = Q_t^{*}\, Q_t\, Q_t^{*}, \quad Q_t^{*} = \mathrm{diag}\big(q_{11,t}^{-1/2}, \ldots, q_{nn,t}^{-1/2}\big)   (1.7)

where \varepsilon_t is the vector of standardized residuals, a and b are non-negative
scalars, \bar{R} = E[\varepsilon_t \varepsilon_t'] is the unconditional correlation of the
standardized residuals, and Q_t^* is a diagonal matrix composed of the inverse of the square
root of the diagonal elements of Q_t to ensure that R_t is a well-defined correlation matrix
(Engle & Sheppard, 2008).
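Given standardized residuals from the first stage, the correlation recursion in equations (1.5)-(1.7) can be sketched directly. The scalars a and b are assumed values for illustration (in practice they are estimated by maximum likelihood), and the recursion is seeded at \bar{R}, a common but illustrative choice:

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.90):
    """DCC correlation recursion, equations (1.5)-(1.7); a, b assumed.

    eps : (T, k) standardized residuals from first-stage GARCH models
    Q_t = (1 - a - b) * Rbar + a * eps_{t-1} eps'_{t-1} + b * Q_{t-1}
    R_t = Qstar_t Q_t Qstar_t,  Qstar_t = diag(q_{ii,t}^{-1/2})
    """
    T, k = eps.shape
    Rbar = np.corrcoef(eps, rowvar=False)   # unconditional correlation, eq. (1.5)
    Q = Rbar.copy()                         # seed the recursion at Rbar
    R = np.empty((T, k, k))
    for t in range(T):
        # At t = 0 the lagged residual is unavailable; reuse eps[0] as a stand-in
        e = eps[t - 1][:, None] if t > 0 else eps[0][:, None]
        Q = (1 - a - b) * Rbar + a * (e @ e.T) + b * Q       # eq. (1.6)
        qstar = np.diag(1.0 / np.sqrt(np.diag(Q)))           # Q*_t
        R[t] = qstar @ Q @ qstar                             # eq. (1.7)
    return R

rng = np.random.default_rng(3)
eps = rng.standard_normal((500, 2))          # hypothetical standardized residuals
Rt = dcc_correlations(eps)
print(Rt.shape)   # (500, 2, 2)
```

Note that a and b are the only correlation-stage parameters regardless of k, which is the parsimony advantage discussed below, and the rescaling by Q_t^* guarantees each R_t has a unit diagonal.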

DCC addresses the time-invariant correlation issue of CCC but has the limitation that all
conditional correlations share the same dynamic structure. It keeps the covariance matrix
positive definite at every point in time (Annastiina and Timo, 2008) and has the clear
analytical advantage that the number of parameters to be estimated in the correlation process
is independent of the number of series to be correlated, thus making the estimation of very
large correlation matrices feasible (Elisabeth, 2009).
