Seemingly Unrelated Regressions


Hyungsik Roger Moon
Department of Economics, University of Southern California
[email protected]

Benoit Perron
Département de sciences économiques, CIREQ, and CIRANO, Université de Montréal
[email protected]

July 2006

Abstract

This article considers the seemingly unrelated regression (SUR) model first analyzed by Zellner (1962). We describe
estimators used in the basic model as well as recent extensions.

Seemingly unrelated regressions
A seemingly unrelated regression (SUR) system comprises several individual relationships that are linked by the fact
that their disturbances are correlated. Such models have found many applications. For example, demand functions can
be estimated for different households (or household types) for a given commodity. The correlation among the equation
disturbances could come from several sources such as correlated shocks to household income. Alternatively, one could
model the demand of a household for different commodities, but adding-up constraints lead to restrictions on the
parameters of different equations in this case. On the other hand, equations explaining some phenomenon in different
cities, states, countries, firms or industries provide a natural application as these various entities are likely to be subject
to spillovers from economy-wide or worldwide shocks.
There are two main motivations for use of SUR. The first one is to gain efficiency in estimation by combining
information on different equations. The second motivation is to impose and/or test restrictions that involve parameters
in different equations. Zellner (1962) provided the seminal work in this area, and a thorough treatment is available in the
book by Srivastava and Giles (1987). A recent survey can be found in Fiebig (2001). This chapter selectively overviews
the SUR model, some of the estimators used in such systems and their properties, and several extensions of the basic SUR
model. We adopt a Classical perspective, although much Bayesian analysis has been done with this model (including
Zellner’s contributions).

Basic linear SUR model


Suppose that $y_{it}$ is a dependent variable, $x_{it} = (1, x_{it,1}, x_{it,2}, \ldots, x_{it,K_i-1})'$ is a $K_i$-vector of explanatory variables for observational unit $i$, and $u_{it}$ is an unobservable error term, where the double index $it$ denotes the $t$th observation of the $i$th equation in the system. Often $t$ denotes time and we will refer to this as the time dimension, but in some applications $t$ could have other interpretations, for example as a location in space. A classical linear SUR model is a system of linear regression equations,

\[
\begin{aligned}
y_{1t} &= \beta_1' x_{1t} + u_{1t} \\
&\ \ \vdots \\
y_{Nt} &= \beta_N' x_{Nt} + u_{Nt}
\end{aligned}
\]

where $i = 1, \ldots, N$ and $t = 1, \ldots, T$. Denote $L = K_1 + \cdots + K_N$. Further simplification in notation can be accomplished by stacking the observations either in the $t$ dimension or for each $i$. For example, if we stack for each observation $t$, let $Y_t = [y_{1t}, \ldots, y_{Nt}]'$, $\tilde X_t = \mathrm{diag}(x_{1t}, x_{2t}, \ldots, x_{Nt})$, a block-diagonal matrix with $x_{1t}, \ldots, x_{Nt}$ on its diagonal, $U_t = [u_{1t}, \ldots, u_{Nt}]'$, and $\beta = [\beta_1', \ldots, \beta_N']'$. Then,
\[
Y_t = \tilde X_t' \beta + U_t . \tag{1}
\]

Another way to present the SUR model is to write it as a multivariate regression with parameter restrictions. For this, define $X_t = [x_{1t}', x_{2t}', \ldots, x_{Nt}']'$ and let $A(\beta) = \mathrm{diag}(\beta_1, \ldots, \beta_N)$ be an $(L \times N)$ block-diagonal coefficient matrix. Then, the SUR model in (1) can be rewritten as
\[
Y_t = A(\beta)' X_t + U_t , \tag{2}
\]
and the coefficient matrix $A(\beta)$ satisfies
\[
\mathrm{vec}(A(\beta)) = G\beta , \tag{3}
\]
for some $(NL \times L)$ full-rank matrix $G$. In the special case where $K_1 = \cdots = K_N = K$, we have $G = \mathrm{diag}(i_1, \ldots, i_N) \otimes I_K$, where $i_j$ denotes the $j$th column of the $N \times N$ identity matrix $I_N$.
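As a numerical illustration, the restriction (3) can be checked directly in the equal-$K$ case. In the following sketch all sizes and variable names are arbitrary choices made for the example, not notation fixed by the text:

```python
import numpy as np

# Minimal numerical check of vec(A(beta)) = G beta in the equal-K case,
# with G = diag(i_1, ..., i_N) ⊗ I_K.  N, K, and beta are illustrative.
rng = np.random.default_rng(0)
N, K = 3, 2
L = N * K

beta = rng.standard_normal(L)              # stacked (beta_1', ..., beta_N')'

# A(beta): L x N block-diagonal coefficient matrix
A = np.zeros((L, N))
for j in range(N):
    A[j*K:(j+1)*K, j] = beta[j*K:(j+1)*K]

# D = diag(i_1, ..., i_N): block-diagonal with the j-th column of I_N as block j
D = np.zeros((N * N, N))
for j in range(N):
    D[j*N + j, j] = 1.0
G = np.kron(D, np.eye(K))                  # (N*L) x L, full column rank

# vec() stacks columns, i.e. Fortran order in NumPy
assert np.allclose(A.flatten(order="F"), G @ beta)
```

The check passes because column $j$ of $A(\beta)$ places $\beta_j$ in rows $jK$ to $(j+1)K$, which is exactly where $G\beta$ puts it.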

Assumption:
In the classical linear SUR model, we assume that for each $i = 1, \ldots, N$, $x_i = [x_{i1}, \ldots, x_{iT}]'$ is of full rank $K_i$, and that conditional on all the regressors $X = [X_1, \ldots, X_T]'$, the errors $U_t$ are iid over time with mean zero and homoskedastic variance $\Sigma = E(U_t U_t' \mid X)$. Furthermore, we assume that $\Sigma$ is positive definite and denote by $\sigma_{ij}$ the $(i,j)$th element of $\Sigma$, that is, $\sigma_{ij} = E(u_{it} u_{jt} \mid X)$.
Under this assumption, the covariance matrix of the entire vector of disturbances $U = [U_1, \ldots, U_T]'$ is given by
\[
E\big[ \mathrm{vec}(U) \, (\mathrm{vec}(U))' \big] = \Sigma \otimes I_T .
\]
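The Kronecker structure $\Sigma \otimes I_T$ can be illustrated by simulation; in this sketch the dimensions, $\Sigma$, and the number of replications are arbitrary choices made for the example:

```python
import numpy as np

# Under the iid assumption, Cov(vec(U)) = Sigma ⊗ I_T.  Monte Carlo check
# with hypothetical T, N, Sigma, and replication count R.
rng = np.random.default_rng(0)
T, N, R = 3, 2, 200_000
Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])
C = np.linalg.cholesky(Sigma)

# R draws of the T x N disturbance matrix U, rows U_t' iid with covariance Sigma
U = rng.standard_normal((R, T, N)) @ C.T

# vec(U) stacks the columns of U (one equation after another)
vecU = U.transpose(0, 2, 1).reshape(R, N * T)
emp_cov = vecU.T @ vecU / R

assert np.allclose(emp_cov, np.kron(Sigma, np.eye(T)), atol=0.05)
```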

Estimation of $\beta$:

In this section we summarize four estimators of β that have been widely used in applications of the classical linear
SUR. Other estimators (such as Bayes, empirical Bayes, or shrinkage estimators) have also been proposed. Interested
readers should refer to Srivastava and Giles (1987) and Fiebig (2001).

1. Ordinary least squares (OLS) estimator:

The first estimator of $\beta$ is the ordinary least squares (OLS) estimator of $Y_t$ on the regressor $\tilde X_t$,
\[
\hat\beta_{OLS} = \Big( \sum_{t=1}^T \tilde X_t \tilde X_t' \Big)^{-1} \sum_{t=1}^T \tilde X_t Y_t .
\]
This is just the vector that stacks the equation-by-equation OLS estimators, $\hat\beta_{OLS} = \big( \hat\beta_{1,OLS}', \ldots, \hat\beta_{N,OLS}' \big)'$, where $\hat\beta_{i,OLS} = \big( \sum_{t=1}^T x_{it} x_{it}' \big)^{-1} \sum_{t=1}^T x_{it} y_{it}$.
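The equivalence between system OLS on the stacked block-diagonal design and equation-by-equation OLS can be verified numerically; the data generating process below is purely illustrative:

```python
import numpy as np

# Sketch: system OLS coincides with equation-by-equation OLS.
# The DGP, sizes, and names are hypothetical.
rng = np.random.default_rng(1)
N, T, K = 2, 200, 3
X = rng.standard_normal((N, T, K))              # X[i, t] = x_it
beta_true = np.array([[1.0, 2.0, -1.0], [0.5, 0.0, 1.5]])
C = np.linalg.cholesky(np.array([[1.0, 0.7], [0.7, 1.0]]))
U = rng.standard_normal((T, N)) @ C.T           # cross-equation correlation
Y = np.stack([X[i] @ beta_true[i] + U[:, i] for i in range(N)], axis=1)

# Equation-by-equation OLS: beta_i = (sum_t x_it x_it')^{-1} sum_t x_it y_it
beta_eq = [np.linalg.solve(X[i].T @ X[i], X[i].T @ Y[:, i]) for i in range(N)]

# Stacked system OLS on the block-diagonal design gives the same answer
Xb = np.zeros((N * T, N * K))
yb = np.zeros(N * T)
for i in range(N):
    Xb[i*T:(i+1)*T, i*K:(i+1)*K] = X[i]
    yb[i*T:(i+1)*T] = Y[:, i]
beta_sys = np.linalg.solve(Xb.T @ Xb, Xb.T @ yb)

assert np.allclose(beta_sys, np.concatenate(beta_eq))
```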

2. Generalized least squares (GLS) and feasible GLS (FGLS) estimator:

When the system covariance matrix $\Sigma$ is known, the GLS estimator of $\beta$ is
\[
\hat\beta_{GLS} = \Big( \sum_{t=1}^T \tilde X_t \Sigma^{-1} \tilde X_t' \Big)^{-1} \sum_{t=1}^T \tilde X_t \Sigma^{-1} Y_t .
\]
When the covariance matrix $\Sigma$ is unknown, a feasible GLS (FGLS) estimator is defined by replacing the unknown $\Sigma$ with a consistent estimate. A widely used estimator of $\Sigma$ is $\hat\Sigma = (\hat\sigma_{ij})$, where $\hat\sigma_{ij} = \frac{1}{T} \sum_{t=1}^T \hat e_{it} \hat e_{jt}$ and $\hat e_{kt}$ is the OLS residual of the $k$th equation, that is, $\hat e_{kt} = y_{kt} - \hat\beta_{k,OLS}' x_{kt}$, $k = i, j$. Then
\[
\hat\beta_{FGLS} = \Big( \sum_{t=1}^T \tilde X_t \hat\Sigma^{-1} \tilde X_t' \Big)^{-1} \sum_{t=1}^T \tilde X_t \hat\Sigma^{-1} Y_t .
\]
The FGLS estimator is a two-step estimator: OLS is used in the first step to obtain the residuals $\hat e_{kt}$ and an estimator of $\Sigma$, and the second step computes $\hat\beta_{FGLS}$ based on the estimated $\Sigma$ from the first step. This estimator is sometimes referred to as the restricted estimator, as opposed to the unrestricted estimator proposed by Zellner that uses the residuals from an OLS regression of (2) without imposing the coefficient restrictions (3), i.e. from regressing each regressand on all distinct regressors in the system.
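A minimal sketch of the two-step FGLS procedure on a simulated system with equal $K_i$; all names, sizes, and the DGP are illustrative choices:

```python
import numpy as np

# Two-step FGLS sketch: OLS residuals -> Sigma_hat -> GLS with Sigma_hat.
rng = np.random.default_rng(2)
N, T, K = 2, 500, 2
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.standard_normal((N, T, K))
beta_true = np.array([1.0, -1.0, 0.5, 2.0])      # stacked (beta_1', beta_2')'
U = rng.standard_normal((T, N)) @ np.linalg.cholesky(Sigma).T
Y = np.stack([X[i] @ beta_true[i*K:(i+1)*K] + U[:, i] for i in range(N)], axis=1)

# Step 1: equation-by-equation OLS residuals and Sigma_hat = (1/T) E'E
E = np.empty((T, N))
for i in range(N):
    b = np.linalg.solve(X[i].T @ X[i], X[i].T @ Y[:, i])
    E[:, i] = Y[:, i] - X[i] @ b
Sigma_hat = E.T @ E / T
S_inv = np.linalg.inv(Sigma_hat)

# Step 2: beta_FGLS = (sum_t Xt~ S^-1 Xt~')^{-1} sum_t Xt~ S^-1 Y_t
L = N * K
A_mat = np.zeros((L, L))
b_vec = np.zeros(L)
for t in range(T):
    Xt = np.zeros((L, N))                  # X~_t = diag(x_1t, ..., x_Nt)
    for i in range(N):
        Xt[i*K:(i+1)*K, i] = X[i, t]
    A_mat += Xt @ S_inv @ Xt.T
    b_vec += Xt @ S_inv @ Y[t]
beta_fgls = np.linalg.solve(A_mat, b_vec)
```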

3. Gaussian quasi-maximum likelihood estimator (QMLE):

The Gaussian log-likelihood function is
\[
L(\beta, \Sigma) = \mathrm{const} - \frac{T}{2} \log\det\Sigma - \frac{1}{2} \sum_{t=1}^T \big( Y_t - \tilde X_t' \beta \big)' \Sigma^{-1} \big( Y_t - \tilde X_t' \beta \big) ,
\]
or equivalently,
\[
L(\beta, \Sigma) = \mathrm{const} - \frac{T}{2} \log\det\Sigma - \frac{1}{2} \sum_{t=1}^T \big( Y_t - A(\beta)' X_t \big)' \Sigma^{-1} \big( Y_t - A(\beta)' X_t \big) ,
\]
where $A(\beta)$ denotes the coefficient matrix $A$ in (2) with the linear restriction (3), and the QMLE $\big( \hat\beta_{QMLE}, \hat\Sigma_{QMLE} \big)$ maximizes $L(\beta, \Sigma)$. When the vector $U_t$ has a normal distribution, this estimator is the maximum likelihood estimator.
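The log-likelihood above can be evaluated directly; the sketch below also checks the standard fact that, for fixed $\beta$, $L(\beta, \Sigma)$ is maximized in $\Sigma$ at the residual covariance $T^{-1} \sum_t \hat U_t \hat U_t'$. The helper name, sizes, and data are invented for the example:

```python
import numpy as np

# Sketch: the Gaussian SUR log-likelihood up to its constant.
def sur_loglik(beta, Sigma, X, Y):
    """X: (N, T, K) regressors, Y: (T, N) outcomes, beta: stacked (N*K,)."""
    N, T, K = X.shape
    _, logdet = np.linalg.slogdet(Sigma)
    S_inv = np.linalg.inv(Sigma)
    resid = np.stack([Y[:, i] - X[i] @ beta[i*K:(i+1)*K] for i in range(N)],
                     axis=1)                            # rows are U_t'
    quad = np.einsum("ti,ij,tj->", resid, S_inv, resid)
    return -0.5 * T * logdet - 0.5 * quad

rng = np.random.default_rng(3)
N, T, K = 2, 100, 2
X = rng.standard_normal((N, T, K))
beta = rng.standard_normal(N * K)
Y = np.stack([X[i] @ beta[i*K:(i+1)*K] for i in range(N)],
             axis=1) + rng.standard_normal((T, N))

# For fixed beta, L(beta, .) is maximized at the residual covariance matrix
resid = np.stack([Y[:, i] - X[i] @ beta[i*K:(i+1)*K] for i in range(N)], axis=1)
Sigma_b = resid.T @ resid / T
```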

4. Minimum distance (MD) estimator:

The idea of the MD estimator is to obtain an estimator $\hat A$ of the unrestricted coefficient matrix $A$ in (2) and then to obtain an estimator of $\beta$ by minimizing the distance between $\mathrm{vec}(\hat A)$ and $G\beta$ in (3). For this, assume that $T > L$ and that the whole regressor matrix $X$ has full rank $L$. When $\hat A$ is the OLS estimator of $A(\beta)$, that is, $\hat A = \big( \sum_{t=1}^T X_t X_t' \big)^{-1} \sum_{t=1}^T X_t Y_t'$, the optimal MD estimator $\hat\beta_{MD}$ minimizes the optimal MD objective function
\[
Q_{MD}(\beta) = \big( \mathrm{vec}(\hat A) - G\beta \big)' \Big( \hat\Sigma^{-1} \otimes \sum_{t=1}^T X_t X_t' \Big) \big( \mathrm{vec}(\hat A) - G\beta \big) .
\]
In this case, we have
\[
\hat\beta_{MD} = \Big( G' \Big( \hat\Sigma^{-1} \otimes \sum_{t=1}^T X_t X_t' \Big) G \Big)^{-1} G' \Big( \hat\Sigma^{-1} \otimes \sum_{t=1}^T X_t X_t' \Big) \mathrm{vec}\big( \hat A \big) .
\]
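A numerical check that, when both use the same $\hat\Sigma$, the optimal MD estimator coincides with FGLS (an identity discussed further below); the simulated design and all names are arbitrary choices:

```python
import numpy as np

# Sketch: with the same Sigma_hat, FGLS and the optimal MD estimator agree.
rng = np.random.default_rng(4)
N, T, K = 2, 150, 2
L = N * K
X = rng.standard_normal((N, T, K))
beta_true = np.array([1.0, -0.5, 0.3, 2.0])
C = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
U = rng.standard_normal((T, N)) @ C.T
Y = np.stack([X[i] @ beta_true[i*K:(i+1)*K] + U[:, i] for i in range(N)], axis=1)

# Sigma_hat from equation-by-equation OLS residuals
E = np.column_stack([Y[:, i] - X[i] @ np.linalg.solve(X[i].T @ X[i],
                     X[i].T @ Y[:, i]) for i in range(N)])
S_inv = np.linalg.inv(E.T @ E / T)

# FGLS
A_f = np.zeros((L, L)); b_f = np.zeros(L)
for t in range(T):
    Xt = np.zeros((L, N))
    for i in range(N):
        Xt[i*K:(i+1)*K, i] = X[i, t]
    A_f += Xt @ S_inv @ Xt.T
    b_f += Xt @ S_inv @ Y[t]
beta_fgls = np.linalg.solve(A_f, b_f)

# Optimal MD: A_hat from multivariate OLS of Y_t on the stacked X_t
Xs = np.concatenate([X[i] for i in range(N)], axis=1)   # T x L
SXX = Xs.T @ Xs
A_hat = np.linalg.solve(SXX, Xs.T @ Y)                  # L x N
D = np.zeros((N * N, N))
for j in range(N):
    D[j*N + j, j] = 1.0
G = np.kron(D, np.eye(K))                               # vec(A(beta)) = G beta
M = np.kron(S_inv, SXX)                                 # Sigma^{-1} ⊗ sum X_t X_t'
beta_md = np.linalg.solve(G.T @ M @ G, G.T @ M @ A_hat.flatten(order="F"))

assert np.allclose(beta_fgls, beta_md)
```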

Relationship among the estimators:

Some of the above estimators are tightly linked. For example, if we use the same consistent estimator $\hat\Sigma$, the FGLS and MD estimators above are identical, that is, $\hat\beta_{FGLS} = \hat\beta_{MD}$. Also, if we use the QMLE estimator of $\Sigma$, $\hat\Sigma_{QMLE}$, in place of $\hat\Sigma$, then $\hat\beta_{QMLE}$ is identical to $\hat\beta_{FGLS}$ and to $\hat\beta_{MD}$. By the Gauss-Markov theorem, the GLS estimator $\hat\beta_{GLS}$ is more efficient than the OLS estimator $\hat\beta_{OLS}$ when the system errors are correlated across equations. However, this efficiency gain disappears in some special cases described in Kruskal's theorem (Kruskal, 1968). A well-known special case of this theorem is when the regressors in each equation are the same. For other cases, readers can refer to Chapter 14 of Greene (2003) and Davidson and MacKinnon (1993, pp. 294-295). The efficiency gain relative to OLS tends to be larger when the correlation across equations is larger and when the correlation among regressors in different equations is smaller.
Note also that efficient estimators propagate misspecification and inconsistencies across equations. For example, if any equation is misspecified (for example, some relevant variable has been omitted), then the entire vector $\beta$ will be inconsistently estimated by the efficient methods. In this sense, equation-by-equation OLS provides some degree of robustness since it is not affected by misspecification in other equations in the system.

Distribution of the estimators:

In the literature on the classical linear SUR, the FGLS estimator $\hat\beta_{FGLS}$ is often called the SUR estimator (SURE). The usual asymptotic analysis of the SURE is carried out when the dimension of index $t$, $T$, increases to infinity with the dimension of index $i$, $N$, kept fixed. For asymptotic theories with large $N$ and $T$, one can refer to Phillips and Moon (1999). Under regularity conditions, the asymptotic distributions as $T \to \infty$ of the aforementioned estimators are:
\[
\sqrt{T}\big( \hat\beta_{OLS} - \beta \big) \Rightarrow N\Big( 0, \big[ E\big( \tilde X_t \tilde X_t' \big) \big]^{-1} E\big( \tilde X_t \Sigma \tilde X_t' \big) \big[ E\big( \tilde X_t \tilde X_t' \big) \big]^{-1} \Big)
\]
and
\[
\sqrt{T}\big( \hat\beta_{GLS} - \beta \big),\ \sqrt{T}\big( \hat\beta_{FGLS} - \beta \big),\ \sqrt{T}\big( \hat\beta_{MD} - \beta \big) \Rightarrow N\Big( 0, \big[ E\big( \tilde X_t \Sigma^{-1} \tilde X_t' \big) \big]^{-1} \Big) \equiv N\Big( 0, \big( G' \big( \Sigma^{-1} \otimes E( X_t X_t' ) \big) G \big)^{-1} \Big) .
\]
It is straightforward to show that the SUR estimator using the information in the system is more efficient (has a smaller variance) than the equation-by-equation estimators. Using the above distributional results, it is straightforward to construct statistics to test general nonlinear hypotheses.
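As an illustration of such tests, the following sketch forms a Wald statistic for the cross-equation restriction $\beta_{1,1} = \beta_{2,1}$ using FGLS and its estimated asymptotic covariance. The data generating process is hypothetical and violates the restriction by construction, so the test should reject:

```python
import numpy as np

# Sketch: Wald test of the cross-equation restriction beta_{1,1} = beta_{2,1}
# based on FGLS; here the restriction is false (1.0 vs 4.0), so we reject.
rng = np.random.default_rng(5)
N, T, K = 2, 500, 2
L = N * K
X = rng.standard_normal((N, T, K))
beta_true = np.array([1.0, 0.5, 4.0, -1.0])
C = np.linalg.cholesky(np.array([[1.0, 0.5], [0.5, 1.0]]))
U = rng.standard_normal((T, N)) @ C.T
Y = np.stack([X[i] @ beta_true[i*K:(i+1)*K] + U[:, i] for i in range(N)], axis=1)

E = np.column_stack([Y[:, i] - X[i] @ np.linalg.solve(X[i].T @ X[i],
                     X[i].T @ Y[:, i]) for i in range(N)])
S_inv = np.linalg.inv(E.T @ E / T)

H = np.zeros((L, L)); b = np.zeros(L)
for t in range(T):
    Xt = np.zeros((L, N))
    for i in range(N):
        Xt[i*K:(i+1)*K, i] = X[i, t]
    H += Xt @ S_inv @ Xt.T
    b += Xt @ S_inv @ Y[t]
beta_fgls = np.linalg.solve(H, b)
V = np.linalg.inv(H)        # estimated Cov(beta_fgls) = (sum_t Xt S^-1 Xt')^{-1}

R = np.zeros((1, L)); R[0, 0], R[0, K] = 1.0, -1.0     # beta_{1,1} - beta_{2,1}
wald = float((R @ beta_fgls).T @ np.linalg.solve(R @ V @ R.T, R @ beta_fgls))
# compare with the chi-square(1) 5% critical value, 3.84
```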
Finite sample properties of SURE have been studied extensively, either analytically in some restrictive cases (e.g. Zellner, 1963, 1972; Kakwani, 1967), by asymptotic expansions (e.g. Phillips, 1977; Srivastava and Maekawa, 1995), or by simulation (e.g. Kmenta and Gilbert, 1968). Most work has focused on the two-equation case. The above approximations appear to be good descriptions of the finite-sample behavior of the estimators analyzed when the number of observations, $T$, is large relative to the number of equations, $N$. In particular, efficient methods provide an efficiency gain in cases where the correlation among disturbances across equations is high and the correlation among regressors across equations is low. Non-normality of disturbances has also been found to deteriorate the quality of the above approximations. Bootstrap methods have also been proposed to remedy these documented departures from normality and improve the size of tests.

Extensions
In this section we discuss several extensions of the classical linear SUR model where the assumption on the error terms
is no longer satisfied.

Autocorrelation and heteroskedasticity:

As in standard univariate models, non-spherical disturbances can be accommodated by either modelling the residuals or computing robust covariance matrices. In addition to standard dynamic effects, serial correlation can arise in this environment due to the presence of individual effects (see Baltagi, 1980). One could define the equivalent of White (in the case of heteroskedasticity) or HAC (in the case of serial correlation) standard errors to conduct inference with the OLS estimator as in the single-equation framework.
For efficiency in estimation, some parametric assumption on the disturbance process is often imposed (see Greene, 2003). For example, in the case of heteroskedasticity, Hodgson, Linton, and Vorkink (2002) propose an adaptive estimator that is efficient under the assumption that the errors follow an elliptically symmetric distribution, which includes the normal as a special case. An intermediate approach is to use a restricted (or parametric) covariance matrix to try to capture some efficiency gains in estimation, and then use a nonparametric heteroskedasticity and autocorrelation consistent (HAC) estimator of the covariance matrix to do inference. This two-tier approach (dubbed quasi-FGLS) has been suggested by Creel and Farell (1996).

Endogenous regressors:

When the regressor $X_t$ in the SUR model is correlated with the error term $U_t$, one needs instrumental variables (IVs), say $Z_t = [z_{1t}', \ldots, z_{Nt}']'$, to estimate $\beta$. We suppose that the IVs satisfy the usual rank condition. The GMM estimator (or the IV estimator) then utilizes the moment condition
\[
E\big[ \mathrm{vec}( Z_t U_t' ) \big] = 0 .
\]
The optimal GMM estimator $\hat\beta_{GMM}$ is derived by minimizing the GMM objective function with the optimal choice of weighting matrix $\big( \hat\Sigma \otimes \sum_{t=1}^T Z_t Z_t' \big)^{-1}$,
\[
Q_{GMM}(\beta) = \Big[ \sum_{t=1}^T \mathrm{vec}\big\{ Z_t \big( Y_t - A(\beta)' X_t \big)' \big\} \Big]' \Big( \hat\Sigma \otimes \sum_{t=1}^T Z_t Z_t' \Big)^{-1} \Big[ \sum_{t=1}^T \mathrm{vec}\big\{ Z_t \big( Y_t - A(\beta)' X_t \big)' \big\} \Big] .
\]
Then, we have
\[
\hat\beta_{GMM} = \Big\{ G' \Big[ \hat\Sigma^{-1} \otimes \Big( \sum_{t=1}^T X_t Z_t' \Big) \Big( \sum_{t=1}^T Z_t Z_t' \Big)^{-1} \Big( \sum_{t=1}^T Z_t X_t' \Big) \Big] G \Big\}^{-1} G' \Big[ \hat\Sigma^{-1} \otimes \Big( \sum_{t=1}^T X_t Z_t' \Big) \Big( \sum_{t=1}^T Z_t Z_t' \Big)^{-1} \Big( \sum_{t=1}^T Z_t X_t' \Big) \Big] \mathrm{vec}\big( \hat A_{2SLS} \big) ,
\]
where
\[
\hat A_{2SLS} = \Big\{ \Big( \sum_{t=1}^T X_t Z_t' \Big) \Big( \sum_{t=1}^T Z_t Z_t' \Big)^{-1} \Big( \sum_{t=1}^T Z_t X_t' \Big) \Big\}^{-1} \Big( \sum_{t=1}^T X_t Z_t' \Big) \Big( \sum_{t=1}^T Z_t Z_t' \Big)^{-1} \Big( \sum_{t=1}^T Z_t Y_t' \Big)
\]
is the two-stage least squares estimator of $A(\beta)$. When $X_t$ is exogenous, so that $X_t = Z_t$, the GMM objective function $Q_{GMM}(\beta)$ and the minimum distance objective function $Q_{MD}(\beta)$ are identical, and we conclude that $\hat\beta_{GMM} = \hat\beta_{MD}$.
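A sketch of this GMM/IV estimator on simulated data with one endogenous regressor and one instrument per equation; the design and all names are invented for illustration:

```python
import numpy as np

# Sketch: SUR with an endogenous regressor per equation, estimated by GMM/IV.
rng = np.random.default_rng(6)
N, T = 2, 2000
beta_true = np.array([1.0, -0.5])

Z = rng.standard_normal((T, N))                 # instruments z_it
V = rng.standard_normal((T, N))
X = Z + V                                        # regressors, endogenous via V
U = V + rng.standard_normal((T, N))              # errors correlated with X
Y = X * beta_true + U                            # y_it = beta_i x_it + u_it

# Stacked-system cross products (X_t, Z_t are N-vectors here, K_i = M_i = 1)
SXZ = X.T @ Z
SZZ = Z.T @ Z
SZX = Z.T @ X
SZY = Z.T @ Y

# 2SLS estimator of the unrestricted coefficient matrix A
B = SXZ @ np.linalg.solve(SZZ, SZX)
A_2sls = np.linalg.solve(B, SXZ @ np.linalg.solve(SZZ, SZY))   # N x N

# Sigma_hat from unrestricted 2SLS residuals
E = Y - X @ A_2sls
S_inv = np.linalg.inv(E.T @ E / T)

# Restriction vec(A(beta)) = G beta with A(beta) = diag(beta_1, beta_2)
G = np.zeros((N * N, N))
for j in range(N):
    G[j*N + j, j] = 1.0

M = np.kron(S_inv, B)        # Sigma_hat^{-1} ⊗ (SXZ SZZ^{-1} SZX)
beta_gmm = np.linalg.solve(G.T @ M @ G, G.T @ M @ A_2sls.flatten(order="F"))

# Naive equation-by-equation OLS is inconsistent here; GMM recovers beta
beta_ols = np.array([np.dot(X[:, i], Y[:, i]) / np.dot(X[:, i], X[:, i])
                     for i in range(N)])
```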

Vector autoregressions:

When the index $t$ in the SUR model denotes time and the regressors $x_{it}$ include lagged dependent variables, the classical linear SUR model becomes a vector autoregression (VAR) model with exclusion restrictions. In this case, the regressors $X$ are no longer strictly exogenous, and the assumption in the previous section is violated. A special case is when the order of the lagged dependent variables is one. In this case, for $\{y_{it}\}_t$ to be stationary, it is necessary that the absolute value of the coefficient of $y_{it-1}$ be less than one. If the coefficient of $y_{it-1}$ is one, $\{y_{it}\}_t$ is nonstationary. Nonstationary SUR VAR models have been used in developing tests for unit roots and cointegration in panels with cross-sectional dependence; see for example Chang (2004), Groen and Kleibergen (2003), and Larsson, Lyhagen, and Lothgren (2001).

Seemingly unrelated cointegration regressions:

When the non-constant regressors in $X_t$ are integrated nonstationary variables but the errors in $U_t$ are stationary, we call model (1) (or equivalently (2)) a seemingly unrelated cointegration regression model; see Park and Ogaki (1991), Moon (1999), Mark, Ogaki, and Sul (2005), and Moon and Perron (2004). These papers showed that for efficient estimation of $\beta$, an estimator of the long-run variance of $U_t$, not of the contemporaneous covariance $\Sigma$ as in the previous section, should be used in FGLS. In addition, some modification of the regression is necessary when the integrated regressors and the stationary errors are correlated. Empirical applications in the main references include tests for purchasing power parity, the relation between national saving and investment, and tests of the forward rate unbiasedness hypothesis.

Nonlinear SUR (NSUR):

An NSUR model assumes that the conditional mean of $y_{it}$ given $x_{it}$ is nonlinear, say $h_i(\beta, x_{it})$, that is, $y_{it} = h_i(\beta, x_{it}) + u_{it}$. Defining $H(\beta, X_t) = (h_1(\beta, x_{1t}), \ldots, h_N(\beta, x_{Nt}))'$, we write the NSUR model in a multivariate nonlinear regression form,
\[
Y_t = H(\beta, X_t) + U_t .
\]
In this case, we may estimate $\beta$ using (quasi-)MLE, assuming that the $Y_t$ are Gaussian conditional on $X_t$, or GMM, utilizing the moment condition that $E[g(X_t) U_t'] = 0$ for any measurable transformation $g$ of $X_t$.
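As one possible illustration, the sketch below fits an NSUR model with $h_i(\beta, x_{it}) = \exp(b_i x_{it})$ by a feasible-GLS-type weighted nonlinear least squares; the model, data, and the use of `scipy.optimize.least_squares` are all choices made for this example, not methods prescribed by the text:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: NSUR with a hypothetical nonlinear mean h_i(b, x) = exp(b_i * x),
# fit by two-step weighted nonlinear least squares.
rng = np.random.default_rng(7)
N, T = 2, 400
b_true = np.array([0.5, -0.8])
X = rng.standard_normal((T, N))
C = np.linalg.cholesky(np.array([[0.04, 0.02], [0.02, 0.04]]))
U = rng.standard_normal((T, N)) @ C.T
Y = np.exp(X * b_true) + U                      # y_it = exp(b_i x_it) + u_it

def resid(b):
    return (Y - np.exp(X * b)).ravel()

# Step 1: unweighted NLS, then Sigma_hat from the residuals
b0 = least_squares(resid, x0=np.zeros(N)).x
E = Y - np.exp(X * b0)
P = np.linalg.cholesky(np.linalg.inv(E.T @ E / T)).T   # whitening: P'P = Sigma^-1

# Step 2: minimize sum_t U_t' Sigma_hat^{-1} U_t via whitened residuals
def wresid(b):
    return ((Y - np.exp(X * b)) @ P.T).ravel()

b_hat = least_squares(wresid, x0=b0).x
```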

References
[1] Baltagi, B. (1980): On Seemingly Unrelated Regressions with Error Components, Econometrica, 48, 1547-1552.

[2] Chang, Y. (2004): Bootstrap Unit Root Tests in Panels with Cross-Sectional Dependency, Journal of Econometrics,
120, 263-293.

[3] Creel, M. and M. Farell (1996): SUR Estimation of Multiple Time-Series Models with Heteroskedasticity and Serial
Correlation of Unknown Form, Economics Letters, 53, 239-245.

[4] Davidson, R. and J. MacKinnon (1993): Estimation and Inference in Econometrics, Oxford University Press.

[5] Fiebig, D. G. (2001): Seemingly Unrelated Regression, in Baltagi, B. ed., A Companion to Theoretical Econometrics,
Blackwell Publishers, 101-121.

[6] Greene, W. (2003): Econometric Analysis, 5th ed. New Jersey, Prentice Hall.

[7] Groen, J. and F. Kleibergen (2003): Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction
Models. Journal of Business and Economic Statistics, 21, 295-318.

[8] Hodgson, D., O. Linton, and K. Vorkink (2002): Testing the Capital Asset Pricing Model Efficiently under Elliptical
Symmetry: A Semiparametric Approach, Journal of Applied Econometrics, 17, 619-639.

[9] Larsson, R., J. Lyhagen, and M. Lothgren (2001): Likelihood-based Cointegration Tests in Heterogeneous Panels,
Econometrics Journal, 4, 109-142.

[10] Kakwani, N.C. (1967): The Unbiasedness of Zellner’s Seemingly Unrelated Regression Equations Estimator, Journal
of the American Statistical Association, 62, 141-142.

[11] Kmenta, J. and R.F. Gilbert (1968): Small Sample Properties of Alternative Estimators for Seemingly Unrelated
Regressions, Journal of the American Statistical Association, 63, 1180-1200.

[12] Kruskal, W. (1968): When are Gauss-Markov and Least Squares Estimators Identical?, Annals of Mathematical
Statistics, 39, 70-75.

[13] Mark, N., M. Ogaki, and D. Sul (2005): Dynamic Seemingly Unrelated Cointegrating Regressions, Review of Economic
Studies, 72, 797-820.

[14] Moon, H.R. (1999): A Note on Fully-Modified Estimation of Seemingly Unrelated Regressions Models with Integrated
Regressors, Economics Letters, 65, 25-31.

[15] Moon, H.R. and P. Perron (2004): Efficient Estimation of SUR Cointegration Regression Model and Testing for
Purchasing Power Parity, Econometric Reviews, 23, 293-323.

[16] Park, J. and M. Ogaki (1991): Seemingly Unrelated Canonical Cointegrating Regressions, University of Rochester
working paper 280.

[17] Phillips, P.C.B. (1977): Finite sample distribution of Zellner’s SURE, Journal of Econometrics, 6, 147-164.

[18] Srivastava, V. K. and D. E. A. Giles (1987): Seemingly Unrelated Regression Equations Models, New York: Marcel
Dekker Inc..

[19] Srivastava, V. K. and K. Maekawa (1995): Efficiency Properties of Feasible Generalized Least Squares Estimators in
SURE Models under Non-normal Disturbances, Journal of Econometrics, 66, 99-121.

[20] Zellner, A. (1962): An Efficient Method of Estimating Seemingly Unrelated Regression Equations and Tests of
Aggregation Bias, Journal of the American Statistical Association, 57, 500-509.

[21] Zellner, A. (1963): Estimators for Seemingly Unrelated Regression Equations: Some Finite Sample Results, Journal
of the American Statistical Association, 58, 977-992.

[22] Zellner, A. (1972): Corrigenda, Journal of the American Statistical Association, 67, 255.
