Asymptotic Statistics
Editorial Board:
Already published
1. Bootstrap Methods and Their Application, by A. C. Davison and D.V. Hinkley
2. Markov Chains, by J. Norris
Asymptotic Statistics
CAMBRIDGE UNIVERSITY PRESS
PUBLISHED BY THE PRESS SYNDICATE OF THE UNIVERSITY OF CAMBRIDGE
The Pitt Building, Trumpington Street, Cambridge, United Kingdom
A catalog record for this book is available from the British Library
1. Introduction 1
1.1. Approximate Statistical Procedures 1
1.2. Asymptotic Optimality Theory 2
1.3. Limitations 3
1.4. The Index n 4
2. Stochastic Convergence 5
2.1. Basic Theory 5
2.2. Stochastic o and O Symbols 12
*2.3. Characteristic Functions 13
*2.4. Almost-Sure Representations 17
*2.5. Convergence of Moments 17
*2.6. Convergence-Determining Classes 18
*2.7. Law of the Iterated Logarithm 19
*2.8. Lindeberg-Feller Theorem 20
*2.9. Convergence in Total Variation 22
Problems 24
3. Delta Method 25
3.1. Basic Result 25
3.2. Variance-Stabilizing Transformations 30
*3.3. Higher-Order Expansions 31
*3.4. Uniform Delta Method 32
*3.5. Moments 33
Problems 34
4. Moment Estimators 35
4.1. Method of Moments 35
*4.2. Exponential Families 37
Problems 40
5. M- and Z-Estimators 41
5.1. Introduction 41
5.2. Consistency 44
5.3. Asymptotic Normality 51
References 433
Index 439
Preface
This book grew out of courses that I gave at various places, including a graduate course in
the Statistics Department of Texas A&M University, Master's level courses for mathematics
students specializing in statistics at the Vrije Universiteit Amsterdam, a course in the DEA
program (graduate level) of Universite de Paris-sud, and courses in the Dutch AIO-netwerk
(graduate level).
The mathematical level is mixed. Some parts I have used for second year courses for
mathematics students (but they find it tough), other parts I would only recommend for a
graduate program. The text is written both for students who know about the technical
details of measure theory and probability, but little about statistics, and vice versa. This
requires brief explanations of statistical methodology, for instance of what a rank test or
the bootstrap is about, and there are similar excursions to introduce mathematical details.
Familiarity with (higher-dimensional) calculus is necessary in all of the manuscript. Metric
and normed spaces are briefly introduced in Chapter 18, when these concepts become
necessary for Chapters 19, 20, 21 and 22, but I do not expect that this would be enough as a
first introduction. For Chapter 25 basic knowledge of Hilbert spaces is extremely helpful,
although the bare essentials are summarized at the beginning. Measure theory is implicitly
assumed in the whole manuscript but can at most places be avoided by skipping proofs, by
ignoring the word "measurable" or with a bit of handwaving. Because we deal mostly with
i.i.d. observations, the simplest limit theorems from probability theory suffice. These are
derived in Chapter 2, but prior exposure is helpful.
Sections, results or proofs that are preceded by asterisks are either of secondary impor-
tance or are out of line with the natural order of the chapters. As the chart in Figure 0.1
shows, many of the chapters are independent from one another, and the book can be used
for several different courses.
A unifying theme is approximation by a limit experiment. The full theory is not developed
(another writing project is on its way), but the material is limited to the "weak topology"
on experiments, which in 90% of the book is exemplified by the case of smooth parameters
of the distribution of i.i.d. observations. For this situation the theory can be developed
by relatively simple, direct arguments. Limit experiments are used to explain efficiency
properties, but also why certain procedures asymptotically take a certain form.
A second major theme is the application of results on abstract empirical processes. These
already have benefits for deriving the usual theorems on M-estimators for Euclidean parameters but are indispensable if discussing more involved situations, such as M-estimators with nuisance parameters, chi-square statistics with data-dependent cells, or semiparametric models. The general theory is summarized in about 30 pages, and it is the applications
Figure 0.1. Dependence chart. A solid arrow means that a chapter is a prerequisite for a next chapter. A dotted arrow means a natural continuation. Vertical or horizontal position has no independent meaning.
that we focus on. In a sense, it would have been better to place this material (Chapters
18 and 19) earlier in the book, but instead we start with material of more direct statistical
relevance and of a less abstract character. A drawback is that a few (starred) proofs point
ahead to later chapters.
Almost every chapter ends with a "Notes" section. These are meant to give a rough
historical sketch, and to provide entries in the literature for further reading. They certainly
do not give sufficient credit to the original contributions by many authors and are not meant
to serve as references in this way.
Mathematical statistics obtains its relevance from applications. The subjects of this book
have been chosen accordingly. On the other hand, this is a mathematician's book in that
we have made some effort to present results in a nice way, without the (unnecessary) lists
of "regularity conditions" that are sometimes found in statistics books. Occasionally, this
means that the accompanying proof must be more involved. If this means that an idea could be lost, then an informal argument precedes the statement of a result.
This does not mean that I have striven for the greatest possible generality. A simple,
clean presentation was the main aim.
A* adjoint operator
B* dual space
Cb(T), UC(T), C(T) (bounded, uniformly) continuous functions on T
ℓ∞(T) bounded functions on T
ℒ_r(Q), L_r(Q) measurable functions whose rth powers are Q-integrable
‖f‖_{Q,r} norm of L_r(Q)
‖z‖_∞, ‖z‖_T uniform norm
lin linear span
ℂ, ℕ, ℚ, ℝ, ℤ number fields and sets
EX, E*X, var X, sd X, Cov X (outer) expectation, variance, standard deviation, covariance (matrix) of X
ℙ_n, G_n empirical measure and process
G_P P-Brownian bridge
N(μ, Σ), t_n, χ²_n normal, t and chi-square distributions
z_α, χ²_{n,α}, t_{n,α} upper α-quantiles of the normal, chi-square and t distributions
≪ absolutely continuous
◁, ◁▷ contiguous, mutually contiguous
≲ smaller than up to a constant
⇝ convergence in distribution
→_P convergence in probability
→_as convergence almost surely
N(ε, T, d), N_[](ε, T, d) covering and bracketing number
J(ε, T, d), J_[](ε, T, d) entropy integral
o_P(1), O_P(1) stochastic order symbols
1
Introduction
sup_x | P_μ( √n (X̄_n − μ)/S_n ≤ x ) − Φ(x) | → 0,

provided the variables X_i have a finite second moment. This variation on the central limit theorem is proved in the next chapter. A "large sample" level α test is to reject H₀: μ = μ₀ if |√n(X̄_n − μ₀)/S_n| exceeds the upper α/2 quantile of the standard normal distribution.
Table 1.1 gives the significance level of this test if the observations are either normally or exponentially distributed, and α = 0.05. For n ≥ 20 the approximation is quite reasonable in the normal case. If the underlying distribution is exponential, then the approximation is less satisfactory, because of the skewness of the exponential distribution.
n     Normal   Exponential
5     0.122    0.19
10    0.082    0.14
15    0.070    0.11
20    0.065    0.10
25    0.062    0.09
50    0.056    0.07
100   0.053    0.06
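Levels of this kind can be checked by simulation. The following sketch is my illustration, not the book's; the critical value 1.96 ≈ z_{0.025}, the sample sizes, and the simulation settings are all assumptions.

```python
import math
import random

random.seed(12345)

Z_HALF_ALPHA = 1.96  # upper alpha/2 = 0.025 quantile of the standard normal


def rejection_rate(sampler, n, mu0, reps=20000):
    """Fraction of simulated samples for which |sqrt(n)(mean - mu0)/S|
    exceeds the normal critical value, i.e. the actual level of the test."""
    rejections = 0
    for _ in range(reps):
        y = [sampler() for _ in range(n)]
        mean = sum(y) / n
        s = math.sqrt(sum((v - mean) ** 2 for v in y) / (n - 1))
        rejections += abs(math.sqrt(n) * (mean - mu0) / s) > Z_HALF_ALPHA
    return rejections / reps


# Normal observations, true mean 0: at n = 10 the level should be near the table's 0.082.
level_normal = rejection_rate(lambda: random.gauss(0.0, 1.0), n=10, mu0=0.0)
# Exponential observations, true mean 1: the skewness inflates the level.
level_expo = rejection_rate(lambda: random.expovariate(1.0), n=10, mu0=1.0)
print(level_normal, level_expo)
```

Both estimates should reproduce the corresponding row of Table 1.1 up to Monte Carlo error.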
In many ways the t-test is an uninteresting example. There are many other reasonable
test statistics for the same problem. Often their null distributions are difficult to calculate.
An asymptotic result similar to the one for the t-statistic would make them practically
applicable at least for large sample sizes. Thus, one aim of asymptotic statistics is to derive
the asymptotic distribution of many types of statistics.
There are similar benefits when obtaining confidence intervals. For instance, the given approximation result asserts that √n(X̄_n − μ)/S_n is approximately standard normally distributed if μ is the true mean, whatever its value. This means that, with probability approximately 1 − 2α,

−z_α ≤ √n(X̄_n − μ)/S_n ≤ z_α.

This can be rewritten as the confidence statement μ = X̄_n ± z_α S_n/√n in the usual manner. For large n its confidence level should be close to 1 − 2α.
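The asymptotic level of this interval can also be checked numerically. This sketch is my own, not from the book; the quantile 1.6449 ≈ z_{0.05} and the N(3, 4) sampling distribution are arbitrary assumptions.

```python
import math
import random

random.seed(7)

Z_ALPHA = 1.6449  # upper alpha = 0.05 quantile of the standard normal
MU = 3.0          # true mean of the simulated observations


def coverage(n, reps=20000):
    """Fraction of samples for which mean +/- z_alpha * S / sqrt(n) covers MU;
    by the normal approximation it should be close to 1 - 2*alpha = 0.90."""
    hits = 0
    for _ in range(reps):
        y = [random.gauss(MU, 2.0) for _ in range(n)]
        mean = sum(y) / n
        s = math.sqrt(sum((v - mean) ** 2 for v in y) / (n - 1))
        hits += abs(mean - MU) <= Z_ALPHA * s / math.sqrt(n)
    return hits / reps


cov = coverage(n=50)
print(cov)
```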
As another example, consider maximum likelihood estimators θ̂_n based on a sample of size n from a density p_θ. A major result in asymptotic statistics is that in many situations √n(θ̂_n − θ) is asymptotically normally distributed with zero mean and covariance matrix the inverse of the Fisher information matrix I_θ. If Z is k-variate normally distributed with mean zero and nonsingular covariance matrix Σ, then the quadratic form Zᵀ Σ⁻¹ Z possesses a chi-square distribution with k degrees of freedom. Thus, acting as if √n(θ̂_n − θ) possesses an N_k(0, I_θ⁻¹) distribution, we find that the ellipsoid

{ θ : n (θ̂_n − θ)ᵀ I_{θ̂_n} (θ̂_n − θ) ≤ χ²_{k,α} }

is an approximate 1 − α confidence region, if χ²_{k,α} is the appropriate critical value from the chi-square distribution. A closely related alternative is the region based on inverting the likelihood ratio test, which is also based on an asymptotic approximation.
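As a concrete one-dimensional instance (my construction, not the book's), take the exponential model Exp(θ), where the maximum likelihood estimator of the rate is 1/X̄_n and the Fisher information is I_θ = 1/θ²; the true θ = 2, the sample size, and the critical value 3.841 ≈ χ²_{1,0.05} are assumptions made for the illustration.

```python
import math
import random

random.seed(42)

CHI2_1_ALPHA = 3.841  # upper 0.05 quantile of the chi-square with 1 degree of freedom
THETA = 2.0           # true parameter of the Exp(theta) model


def region_covers(n):
    """The ellipsoid reduces to {theta : n (mle - theta)^2 / mle^2 <= chi2},
    using the plug-in information I_mle = 1/mle^2; report coverage of THETA."""
    y = [random.expovariate(THETA) for _ in range(n)]
    mle = n / sum(y)  # maximum likelihood estimator of the rate
    return n * (mle - THETA) ** 2 / mle ** 2 <= CHI2_1_ALPHA


cov = sum(region_covers(200) for _ in range(5000)) / 5000
print(cov)
```

The simulated coverage should be close to, though not exactly, 0.95.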
in certain exponential family models; the Rao-Blackwell theory allows us to conclude that certain estimators are of minimum variance among the unbiased estimators. An important and fairly general result is the Cramér-Rao bound for the variance of unbiased estimators, but it is often not sharp.
If exact optimality theory does not give results, be it because the problem is intractable
or because there exist no "optimal" procedures, then asymptotic optimality theory may
help. For instance, to compare two tests we might compare approximations to their power
functions. To compare estimators, we might compare asymptotic variances rather than
exact variances. A major result in this area is that for smooth parametric models maximum
likelihood estimators are asymptotically optimal. This roughly means the following. First,
maximum likelihood estimators are asymptotically consistent: The sequence of estimators
converges in probability to the true value of the parameter. Second, the rate at which maximum likelihood estimators converge to the true value is the fastest possible, typically 1/√n. Third, their asymptotic variance, the variance of the limit distribution of √n(θ̂_n − θ), is minimal; in fact, maximum likelihood estimators "asymptotically attain" the Cramér-Rao bound. Thus asymptotics justify the use of the maximum likelihood method in certain
situations. It is of interest here that, even though the method of maximum likelihood often
leads to reasonable estimators and has great intuitive appeal, in general it does not lead
to best estimators for finite samples. Thus the use of an asymptotic criterion simplifies
optimality theory considerably.
By taking limits we can gain much insight into the structure of statistical experiments. It
turns out that not only estimators and test statistics are asymptotically normally distributed,
but often also the whole sequence of statistical models converges to a model with a nor-
mal observation. Our good understanding of the latter "canonical experiment" translates
directly into understanding other experiments asymptotically. The mathematical beauty of
this theory is an added benefit of asymptotic statistics. Though we shall be mostly concerned
with normal limiting theory, this theory applies equally well to other situations.
1.3 Limitations
Although asymptotics is both practically useful and of theoretical importance, it should not
be taken for more than what it is: approximations. Clearly, a theorem that can be interpreted
as saying that a statistical procedure works fine for n → ∞ is of no use if the number of
available observations is n = 5.
In fact, strictly speaking, most asymptotic results that are currently available are logically
useless. This is because most asymptotic results are limit results, rather than approximations
consisting of an approximating formula plus an accurate error bound. For instance, to estimate a value a, we consider it to be the 25th element a = a₂₅ in a sequence a₁, a₂, …, and next take lim_{n→∞} a_n as an approximation. The accuracy of this procedure depends crucially on the choice of the sequence in which a₂₅ is embedded, and it seems impossible
and bad asymptotics and why two types of asymptotics sometimes lead to conflicting
claims.
Fortunately, many limit results of statistics do give reasonable answers. Because it may
be theoretically very hard to ascertain that approximation errors are small, one often takes
recourse to simulation studies to judge the accuracy of a certain approximation.
Just as care is needed if using asymptotic results for approximations, results on asymptotic
optimality must be judged in the right manner. One pitfall is that even though a certain
procedure, such as maximum likelihood, is asymptotically optimal, there may be many
other procedures that are asymptotically optimal as well. For finite samples these may
behave differently and possibly better. Then so-called higher-order asymptotics, which
yield better approximations, may be fruitful. See e.g., [7], [52] and [114]. Although we
occasionally touch on this subject, we shall mostly be concerned with what is known as
"first-order asymptotics."
1.5 Notation
A symbol index is given on page xv.
For brevity we often use operator notation for evaluation of expectations and have special
symbols for the empirical measure and process.
For P a measure on a measurable space (X, B) and f : X → ℝ^k a measurable function, Pf denotes the integral ∫ f dP; equivalently, the expectation E_P f(X₁) for X₁ a random variable distributed according to P. When applied to the empirical measure ℙ_n of a sample X₁, …, X_n, the discrete uniform measure on the sample values, this yields

ℙ_n f = (1/n) Σ_{i=1}^n f(X_i).

This formula can also be viewed as simply an abbreviation for the average on the right. The empirical process G_n f is the centered and scaled version of the empirical measure, defined by

G_n f = √n (ℙ_n f − Pf) = (1/√n) Σ_{i=1}^n ( f(X_i) − E_P f(X_i) ).
This is studied in detail in Chapter 19, but is used as an abbreviation throughout the book.
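A small numerical illustration of these two abbreviations (my choice of example, not the book's; P = Uniform(0, 1) and f(x) = x² are arbitrary): ℙ_n f is simply the sample average of f, and G_n f stays of order one as n grows.

```python
import math
import random

random.seed(1)

n = 100000
sample = [random.random() for _ in range(n)]  # X_1, ..., X_n i.i.d. from P = Uniform(0, 1)


def f(t):
    return t * t  # Pf = integral of t^2 on [0, 1] = 1/3


Pn_f = sum(f(x) for x in sample) / n       # empirical measure applied to f
Gn_f = math.sqrt(n) * (Pn_f - 1.0 / 3.0)   # empirical process at f, with Pf = 1/3
print(Pn_f, Gn_f)
```

For growing n, ℙ_n f converges to Pf = 1/3 by the law of large numbers, while G_n f fluctuates on the scale of its limiting normal distribution.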
2
Stochastic Convergence
for every x at which the limit distribution function x ↦ P(X ≤ x) is continuous. Alternative names are weak convergence and convergence in law. As the last name suggests, the convergence only depends on the induced laws of the vectors and not on the probability spaces on which they are defined. Weak convergence is denoted by X_n ⇝ X; if X has distribution L, or a distribution with a standard code, such as N(0, 1), then also by X_n ⇝ L or X_n ⇝ N(0, 1).
Let d(x, y) be a distance function on ℝ^k that generates the usual topology. For instance, the Euclidean distance

d(x, y) = ‖x − y‖ = ( Σ_{i=1}^k (x_i − y_i)² )^{1/2}.
A sequence of random variables X_n is said to converge in probability to X if for all ε > 0

P(d(X_n, X) > ε) → 0.
† More formally it is a Borel measurable map from some probability space into ℝ^k. Throughout it is implicitly understood that variables X, g(X), and so forth of which we compute expectations or probabilities are measurable maps on some probability space.
P(lim d(X_n, X) = 0) = 1.
2.2 Lemma (Portmanteau). For any random vectors X_n and X the following statements are equivalent.
(i) P(X_n ≤ x) → P(X ≤ x) for all continuity points of x ↦ P(X ≤ x);
(ii) Ef(X_n) → Ef(X) for all bounded, continuous functions f;
(iii) Ef(X_n) → Ef(X) for all bounded, Lipschitz† functions f;
(iv) lim inf Ef(X_n) ≥ Ef(X) for all nonnegative, continuous functions f;
(v) lim inf P(X_n ∈ G) ≥ P(X ∈ G) for every open set G;
(vi) lim sup P(X_n ∈ F) ≤ P(X ∈ F) for every closed set F;
(vii) P(X_n ∈ B) → P(X ∈ B) for all Borel sets B with P(X ∈ ∂B) = 0, where ∂B = B̄ − B̊ is the boundary of B.
Proof. (i) ⇒ (ii). Assume first that the distribution function of X is continuous. Then condition (i) implies that P(X_n ∈ I) → P(X ∈ I) for every rectangle I. Choose a sufficiently large, compact rectangle I with P(X ∉ I) < ε. A continuous function f is uniformly continuous on the compact set I. Thus there exists a partition I = ∪_j I_j into finitely many rectangles I_j such that f varies at most ε on every I_j. Take a point x_j from each I_j and define f_ε = Σ_j f(x_j) 1_{I_j}. Then |f − f_ε| < ε on I, whence if f takes its values in [−1, 1],
† A function is called Lipschitz if there exists a number L such that |f(x) − f(y)| ≤ L d(x, y), for every x and y. The least such number L is denoted ‖f‖_Lip.
For sufficiently large n, the right side of the first equation is smaller than 2ε as well. We combine this with

Together with the triangle inequality the three displays show that |Ef(X_n) − Ef(X)| is bounded by 5ε eventually. This being true for every ε > 0 implies (ii).
Call a set B a continuity set if its boundary ∂B satisfies P(X ∈ ∂B) = 0. The preceding argument is valid for a general X provided all rectangles I are chosen equal to continuity sets. This is possible, because the collection of discontinuity sets is sparse. Given any collection of pairwise disjoint measurable sets, at most countably many sets can have positive probability. Otherwise the probability of their union would be infinite. Therefore, given any collection of sets {B_α : α ∈ A} with pairwise disjoint boundaries, all except at most countably many sets are continuity sets. In particular, for each j at most countably many sets of the form {x : x_j ≤ α} are not continuity sets. Conclude that there exist dense subsets Q₁, …, Q_k of ℝ such that each rectangle with corners in the set Q₁ × ··· × Q_k is a continuity set. We can choose all rectangles I inside this set.
(iii) ⇒ (v). For every open set G there exists a sequence of Lipschitz functions with 0 ≤ f_m ↑ 1_G. For instance f_m(x) = (m d(x, Gᶜ)) ∧ 1. For every fixed m,

As m → ∞ the right side increases to P(X ∈ G) by the monotone convergence theorem.
(v) ⇔ (vi). Because a set is open if and only if its complement is closed, this follows by taking complements.
(v) + (vi) ⇒ (vii). Let B̊ and B̄ denote the interior and the closure of a set, respectively. By (vi) and (v),

lim sup P(X_n ∈ B) ≤ P(X ∈ B̄),  lim inf P(X_n ∈ B) ≥ P(X ∈ B̊).

If P(X ∈ ∂B) = 0, then left and right sides are equal, whence all inequalities are equalities. The probability P(X ∈ B) and the limit lim P(X_n ∈ B) are between the expressions on left and right and hence equal to the common value.
(vii) ⇒ (i). Every cell (−∞, x] such that x is a continuity point of x ↦ P(X ≤ x) is a continuity set.
The equivalence (ii) ⇔ (iv) is left as an exercise. ∎
2.3 Theorem (Continuous mapping). Let g : ℝ^k → ℝ^m be continuous at every point of a set C such that P(X ∈ C) = 1.
(i) If X_n ⇝ X, then g(X_n) ⇝ g(X);
(ii) If X_n →_P X, then g(X_n) →_P g(X);
(iii) If X_n →_as X, then g(X_n) →_as g(X).
Proof. (i). The event {g(X_n) ∈ F} is identical to the event {X_n ∈ g⁻¹(F)}. For every closed set F,

g⁻¹(F) ⊂ cl g⁻¹(F) ⊂ g⁻¹(F) ∪ Cᶜ.

To see the second inclusion, take x in the closure of g⁻¹(F). Thus, there exists a sequence x_m with x_m → x and g(x_m) ∈ F for every m. If x ∈ C, then g(x_m) → g(x), which is in F because F is closed; otherwise x ∈ Cᶜ. By the portmanteau lemma,

lim sup P(X_n ∈ g⁻¹(F)) ≤ lim sup P(X_n ∈ cl g⁻¹(F)) ≤ P(X ∈ cl g⁻¹(F)) ≤ P(X ∈ g⁻¹(F) ∪ Cᶜ).

Because P(X ∈ Cᶜ) = 0, the probability on the right is P(X ∈ g⁻¹(F)) = P(g(X) ∈ F). Apply the portmanteau lemma again, in the opposite direction, to conclude that g(X_n) ⇝ g(X).
(ii). Fix arbitrary ε > 0. For each δ > 0 let B_δ be the set of x for which there exists y with d(x, y) < δ, but d(g(x), g(y)) > ε. If X ∉ B_δ and d(g(X_n), g(X)) > ε, then d(X_n, X) ≥ δ. Consequently,

P(d(g(X_n), g(X)) > ε) ≤ P(X ∈ B_δ) + P(d(X_n, X) ≥ δ).

The second term on the right converges to zero as n → ∞ for every fixed δ > 0. Because B_δ ∩ C ↓ ∅ by continuity of g, the first term converges to zero as δ ↓ 0.
Assertion (iii) is trivial. ∎
Any random vector X is tight: For every ε > 0 there exists a constant M such that P(‖X‖ > M) < ε. A set of random vectors {X_α : α ∈ A} is called uniformly tight if M can be chosen the same for every X_α: For every ε > 0 there exists a constant M such that

sup_α P(‖X_α‖ > M) < ε.

Thus, there exists a compact set to which all X_α give probability "almost" one. Another name for uniformly tight is bounded in probability. It is not hard to see that every weakly converging sequence X_n is uniformly tight. More surprisingly, the converse of this statement is almost true: According to Prohorov's theorem, every uniformly tight sequence contains a weakly converging subsequence. Prohorov's theorem generalizes the Heine-Borel theorem from deterministic sequences x_n to random vectors.
Proof. (i). Fix a number M such that P(‖X‖ ≥ M) < ε. By the portmanteau lemma P(‖X_n‖ ≥ M) exceeds P(‖X‖ ≥ M) arbitrarily little for sufficiently large n. Thus there exists N such that P(‖X_n‖ ≥ M) < 2ε, for all n ≥ N. Because each of the finitely many variables X_n with n < N is tight, the value of M can be increased, if necessary, to ensure that P(‖X_n‖ ≥ M) < 2ε for every n.
The crux of the proof of Prohorov's theorem is Helly's lemma. This asserts that any given sequence of distribution functions contains a subsequence that converges weakly to a possibly defective distribution function. A defective distribution function is a function that has all the properties of a cumulative distribution function with the exception that it has limits less than 1 at ∞ and/or greater than 0 at −∞.
2.5 Lemma (Helly's lemma). Each given sequence F_n of cumulative distribution functions on ℝ^k possesses a subsequence F_{n_j} with the property that F_{n_j}(x) → F(x) at each continuity point x of a possibly defective distribution function F.
Proof. Let ℚ^k = {q₁, q₂, …} be the vectors with rational coordinates, ordered in an arbitrary manner. Because the sequence F_n(q₁) is contained in the interval [0, 1], it has a converging subsequence. Call the indexing subsequence {n_j¹}_{j=1}^∞ and the limit G(q₁). Next, extract a further subsequence {n_j²} ⊂ {n_j¹} along which F_n(q₂) converges to a limit G(q₂), a further subsequence {n_j³} ⊂ {n_j²} along which F_n(q₃) converges to a limit G(q₃), …, and so forth. The "tail" of the diagonal sequence n_j := n_j^j belongs to every sequence n_j^i. Hence F_{n_j}(q_i) → G(q_i) for every i = 1, 2, …. Because each F_n is nondecreasing, G(q) ≤ G(q′) if q ≤ q′. Define

F(x) = inf_{q > x} G(q).
Conclude that |lim inf F_{n_j}(x) − F(x)| < ε. Because this is true for every ε > 0 and the same result can be obtained for the lim sup, it follows that F_{n_j}(x) → F(x) at every continuity point x of F.
In the higher-dimensional case, it must still be shown that the expressions defining masses of cells are nonnegative. For instance, for k = 2, F is a (defective) distribution function only if F(b) + F(a) − F(a₁, b₂) − F(a₂, b₁) ≥ 0 for every a ≤ b. In the case that the four corners a, b, (a₁, b₂), and (a₂, b₁) of the cell are continuity points, this is immediate from the convergence of F_{n_j} to F and the fact that each F_n is a distribution function. Next, for general cells the property follows by right continuity. ∎
P(|X_n| > M) ≤ E|X_n|^p / M^p.

The right side can be made arbitrarily small, uniformly in n, by choosing sufficiently large M.
Because EX_n² = var X_n + (EX_n)², an alternative sufficient condition for uniform tightness is EX_n = O(1) and var X_n = O(1). This cannot be reversed. □
Consider some of the relationships among the three modes of convergence. Convergence in distribution is weaker than convergence in probability, which is in turn weaker than almost-sure convergence, except if the limit is constant.
Proof. (i). The sequence of sets A_n = ∪_{m≥n} {d(X_m, X) > ε} is decreasing for every ε > 0 and decreases to the empty set if X_n(ω) → X(ω) for every ω. If X_n →_as X, then P(d(X_n, X) > ε) ≤ P(A_n) → 0.
(iv). For every f with range [0, 1] and Lipschitz norm at most 1 and every ε > 0,

|Ef(X_n) − Ef(Y_n)| ≤ E|f(X_n) − f(Y_n)| ≤ ε + P(d(X_n, Y_n) > ε).

The second term on the right converges to zero as n → ∞. The first term can be made arbitrarily small by choice of ε. Conclude that the sequences Ef(X_n) and Ef(Y_n) have the same limit. The result follows from the portmanteau lemma.
(ii). Because d(X_n, X) →_P 0 and trivially X ⇝ X, it follows that X_n ⇝ X by (iv).
(iii). The "only if" part is a special case of (ii). For the converse let ball(c, ε) be the open ball of radius ε around c. Then P(d(X_n, c) ≥ ε) = P(X_n ∈ ball(c, ε)ᶜ). If X_n ⇝ c, then the lim sup of the last probability is bounded by P(c ∈ ball(c, ε)ᶜ) = 0, by the portmanteau lemma.
(v). First note that d((X_n, Y_n), (X_n, c)) = d(Y_n, c) →_P 0. Thus, according to (iv), it suffices to show that (X_n, c) ⇝ (X, c). For every continuous, bounded function (x, y) ↦ f(x, y), the function x ↦ f(x, c) is continuous and bounded. Thus Ef(X_n, c) → Ef(X, c) if X_n ⇝ X.
(vi). This follows from d((x₁, y₁), (x₂, y₂)) ≤ d(x₁, x₂) + d(y₁, y₂). ∎
2.8 Lemma (Slutsky). Let X_n, X and Y_n be random vectors or variables. If X_n ⇝ X and Y_n ⇝ c for a constant c, then
(i) X_n + Y_n ⇝ X + c;
(ii) Y_n X_n ⇝ cX;
(iii) Y_n⁻¹ X_n ⇝ c⁻¹ X provided c ≠ 0.
In (i) the "constant" c must be a vector of the same dimension as X, and in (ii) it is probably initially understood to be a scalar. However, (ii) is also true if every Y_n and c are matrices (which can be identified with vectors, for instance by aligning rows, to give a meaning to the convergence Y_n ⇝ c), simply because matrix multiplication (x, y) ↦ yx is a continuous operation. Even (iii) is valid for matrices Y_n and c and vectors X_n provided c ≠ 0 is understood as c being invertible, because taking an inverse is also continuous.
2.9 Example (t-statistic). Let Y₁, Y₂, … be independent, identically distributed random variables with EY₁ = 0 and EY₁² < ∞. Then the t-statistic √n Ȳ_n/S_n, where S_n² = (n − 1)⁻¹ Σ_{i=1}^n (Y_i − Ȳ_n)² is the sample variance, is asymptotically standard normal.
To see this, first note that by two applications of the weak law of large numbers and the continuous-mapping theorem for convergence in probability
for certain parameters θ and σ² depending on the underlying distribution, for every distribution in the model. Then θ = T_n ± z_α S_n/√n is a confidence interval for θ of asymptotic level 1 − 2α. More precisely, we have that the probability that θ is contained in [T_n − z_α S_n/√n, T_n + z_α S_n/√n] converges to 1 − 2α.
This is a consequence of the fact that the sequence √n(T_n − θ)/S_n is asymptotically standard normally distributed. □
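The asymptotic normality of the t-statistic from Example 2.9 can be seen in simulation. This is my own sanity check, not from the book; centered exponential observations (mean 0, variance 1), the sample size, and the replication count are assumptions.

```python
import math
import random

random.seed(2024)


def t_statistic(n):
    """sqrt(n) * mean / S for n centered exponential observations (mean 0, variance 1)."""
    y = [random.expovariate(1.0) - 1.0 for _ in range(n)]
    mean = sum(y) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in y) / (n - 1))
    return math.sqrt(n) * mean / s


# The simulated t-statistics should have roughly zero mean and unit standard deviation.
ts = [t_statistic(500) for _ in range(4000)]
m = sum(ts) / len(ts)
sd = math.sqrt(sum((t - m) ** 2 for t in ts) / (len(ts) - 1))
print(m, sd)
```

Despite the skewness of the observations, at n = 500 the empirical moments are already close to those of the standard normal limit.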
If the limit variable X has a continuous distribution function, then weak convergence X_n ⇝ X implies P(X_n ≤ x) → P(X ≤ x) for every x. The convergence is then even uniform in x.

2.11 Lemma. Suppose that X_n ⇝ X for a random vector X with a continuous distribution function. Then sup_x |P(X_n ≤ x) − P(X ≤ x)| → 0.
Proof. Let F_n and F be the distribution functions of X_n and X. First consider the one-dimensional case. Fix k ∈ ℕ. By the continuity of F there exist points −∞ = x₀ < x₁ < ··· < x_k = ∞ with F(x_i) = i/k. By monotonicity, we have, for x_{i−1} ≤ x ≤ x_i,

F_n(x) − F(x) ≤ F_n(x_i) − F(x_{i−1}) = F_n(x_i) − F(x_i) + 1/k,
F_n(x) − F(x) ≥ F_n(x_{i−1}) − F(x_i) = F_n(x_{i−1}) − F(x_{i−1}) − 1/k.

Thus |F_n(x) − F(x)| is bounded above by sup_i |F_n(x_i) − F(x_i)| + 1/k, for every x. The latter, finite supremum converges to zero as n → ∞, for each fixed k. Because k is arbitrary, the result follows.
In the higher-dimensional case, we follow a similar argument but use hyperrectangles, rather than intervals. We can construct the rectangles by intersecting the k partitions obtained by subdividing each coordinate separately as before. ∎
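Lemma 2.11 can also be seen numerically (my illustration, not the book's): for X_n the standardized sum of n Uniform(0, 1) variables, whose exact (Irwin-Hall) distribution function is available in closed form, the supremum distance to Φ shrinks as n grows.

```python
import math


def irwin_hall_cdf(x, n):
    """Exact CDF of the sum of n independent Uniform(0, 1) variables."""
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    s = sum((-1) ** k * math.comb(n, k) * (x - k) ** n
            for k in range(int(math.floor(x)) + 1))
    return s / math.factorial(n)


def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def sup_distance(n, grid=2000):
    """Grid approximation of sup_x |F_n(x) - Phi(x)| for the standardized sum."""
    mu, sd = n / 2.0, math.sqrt(n / 12.0)
    points = [-4.0 + 8.0 * i / grid for i in range(grid + 1)]
    return max(abs(irwin_hall_cdf(mu + sd * x, n) - phi(x)) for x in points)


d1, d12 = sup_distance(1), sup_distance(12)
print(d1, d12)
```

The distance for a single uniform is a few percent; for the sum of twelve it is already far below one percent.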
o_P(1) + o_P(1) = o_P(1)
o_P(1) + O_P(1) = O_P(1)
O_P(1) o_P(1) = o_P(1)
(1 + o_P(1))⁻¹ = O_P(1)
o_P(R_n) = R_n o_P(1)
O_P(R_n) = R_n O_P(1)
o_P(O_P(1)) = o_P(1).
To see the validity of these rules it suffices to restate them in terms of explicitly named vectors, where each o_P(1) and O_P(1) should be replaced by a different sequence of vectors that converges to zero or is bounded in probability. In this way the first rule says: If X_n →_P 0 and Y_n →_P 0, then Z_n = X_n + Y_n →_P 0. This is an example of the continuous-mapping theorem. The third rule is short for the following: If X_n is bounded in probability and Y_n →_P 0, then X_n Y_n →_P 0. If X_n also converged in distribution, then this would be statement (ii) of Slutsky's lemma (with c = 0). But by Prohorov's theorem, X_n converges in distribution "along subsequences" if it is bounded in probability, so that the third rule can still be deduced from Slutsky's lemma by "arguing along subsequences."
Note that both rules are in fact implications and should be read from left to right, even though they are stated with the help of the equality sign. Similarly, although it is true that o_P(1) + o_P(1) = 2 o_P(1), writing down this rule does not reflect understanding of the o_P symbol.
Two more complicated rules are given by the following lemma.
2.12 Lemma. Let R be a function defined on a domain in ℝ^k such that R(0) = 0. Let X_n be a sequence of random vectors with values in the domain of R that converges in probability to zero. Then, for every p > 0,
(i) if R(h) = o(‖h‖^p) as h → 0, then R(X_n) = o_P(‖X_n‖^p);
(ii) if R(h) = O(‖h‖^p) as h → 0, then R(X_n) = O_P(‖X_n‖^p).
Proof. Define g(h) as g(h) = R(h)/‖h‖^p for h ≠ 0 and g(0) = 0. Then R(X_n) = g(X_n)‖X_n‖^p.
(i) Because the function g is continuous at zero by assumption, g(X_n) →_P g(0) = 0 by the continuous-mapping theorem.
(ii) By assumption there exist M and δ > 0 such that |g(h)| ≤ M whenever ‖h‖ ≤ δ. Thus P(|g(X_n)| > M) ≤ P(‖X_n‖ > δ) → 0, and the sequence g(X_n) is tight. ∎
Each of the functions x ↦ e^{i tᵀ x} is continuous and bounded. Thus, by the portmanteau lemma, E e^{i tᵀ X_n} → E e^{i tᵀ X} for every t if X_n ⇝ X. By Lévy's continuity theorem the converse is also true.
2.13 Theorem (Lévy's continuity theorem). Let X_n and X be random vectors in ℝ^k. Then X_n ⇝ X if and only if E e^{i tᵀ X_n} → E e^{i tᵀ X} for every t ∈ ℝ^k. Moreover, if E e^{i tᵀ X_n} converges pointwise to a function φ(t) that is continuous at zero, then φ is the characteristic function of a random vector X and X_n ⇝ X.
Proof. If X_n ⇝ X, then Eh(X_n) → Eh(X) for every bounded continuous function h, in particular for the functions h(x) = e^{i tᵀ x}. This gives one direction of the first statement.
For the proof of the last statement, suppose first that we already know that the sequence X_n is uniformly tight. Then, according to Prohorov's theorem, every subsequence has a further subsequence that converges in distribution to some vector Y. By the preceding paragraph, the characteristic function of Y is the limit of the characteristic functions of the converging subsequence. By assumption, this limit is the function φ(t). Conclude that every weak limit point Y of a converging subsequence possesses characteristic function φ. Because a characteristic function uniquely determines a distribution (see Lemma 2.15), it follows that the sequence X_n has only one weak limit point. It can be checked that a uniformly tight sequence with a unique limit point converges to this limit point, and the proof is complete.
The uniform tightness of the sequence X_n can be derived from the continuity of φ at zero. Because marginal tightness implies joint tightness, it may be assumed without loss of generality that X_n is one-dimensional. For every δ > 0,

P(|X_n| > 2/δ) ≤ (1/δ) ∫_{−δ}^{δ} Re(1 − E e^{i t X_n}) dt.

Because φ is continuous at zero, there exists for every ε > 0 a δ > 0 such that |1 − φ(t)| < ε for |t| < δ. For this δ the integral is bounded by 2ε. Conclude that P(|X_n| > 2/δ) ≤ 2ε for sufficiently large n, whence the sequence X_n is uniformly tight. ∎
2.14 Example (Normal distribution). The characteristic function of the N_k(μ, Σ) distribution is the function t ↦ e^{i tᵀ μ − tᵀ Σ t/2}. This follows, for X a standard normal vector in ℝ^k, from

E e^{zᵀ(Σ^{1/2} X + μ)} = e^{zᵀ μ} ∫ e^{zᵀ Σ^{1/2} x} e^{−xᵀx/2} (2π)^{−k/2} dx = e^{zᵀ μ + zᵀ Σ z/2}.
For real-valued z, the last equality follows easily by completing the square in the exponent.
Evaluating the integral for complex z, such as z = it, requires some skill in complex
function theory. One method, which avoids further calculations, is to show that both the
left- and righthand sides of the preceding display are analytic functions of z. For the right
side this is obvious; for the left side we can justify differentiation under the expectation
sign by the dominated-convergence theorem. Because the two sides agree on the real axis,
they must agree on the complex plane by uniqueness of analytic continuation. □
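The identity in Example 2.14 can be checked numerically by comparing the empirical characteristic function of simulated normal draws with e^{−t²/2}. This is a minimal sketch using only Python's standard library; the sample size 200,000 and the grid of t-values are arbitrary choices.

```python
import cmath
import math
import random

def empirical_cf(samples, t):
    """Average of e^{itX} over the sample: a Monte Carlo estimate of E e^{itX}."""
    return sum(cmath.exp(1j * t * x) for x in samples) / len(samples)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
for t in (0.5, 1.0, 2.0):
    est = empirical_cf(xs, t)
    exact = math.exp(-t * t / 2)  # characteristic function of N(0, 1)
    print(f"t={t}: estimate={est.real:+.4f}{est.imag:+.4f}i, exact={exact:.4f}")
```

The imaginary parts should be near zero, reflecting the symmetry of the standard normal law.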
2.15 Lemma. Random vectors X and Y in ℝ^k are equal in distribution if and only if
E e^{it^T X} = E e^{it^T Y} for every t ∈ ℝ^k.

Proof. By Fubini's theorem and calculations as in the preceding example, for every σ > 0
and y ∈ ℝ^k,

    ∫ e^{−it^T y} e^{−½ σ² ‖t‖²} E e^{it^T X} dt = (2π)^{k/2} σ^{−k} E e^{−‖X − y‖²/(2σ²)}.

By the convolution formula for densities, the right-hand side is (2π)^k times the density
p_{X+σZ}(y) of the sum of X and σZ for a standard normal vector Z that is independent of X.
Conclude that if X and Y have the same characteristic function, then the vectors X + σZ
and Y + σZ have the same density and hence are equal in distribution for every σ > 0. By
Slutsky's lemma X + σZ ⇝ X as σ ↓ 0, and similarly for Y. Thus X and Y are equal in
distribution. •
The characteristic function of a sum of independent variables equals the product of the
characteristic functions of the individual variables. This observation, combined with Lévy's
theorem, yields simple proofs of both the law of large numbers and the central limit theorem.

2.16 Proposition (Weak law of large numbers). Let Y1, ..., Yn be i.i.d. random variables
with characteristic function φ. Then Ȳn →P μ for a real number μ if and only if φ is differ-
entiable at zero with iμ = φ′(0).
Proof. We only prove that differentiability is sufficient. For the converse, see, for exam-
ple, [127, p. 52]. Because φ(0) = 1, differentiability of φ at zero means that φ(t) = 1
+ tφ′(0) + o(t) as t → 0. Thus, for each fixed t, as n → ∞,

    E e^{it Ȳn} = φ(t/n)^n = (1 + itμ/n + o(1/n))^n → e^{itμ}.

The right side is the characteristic function of the constant variable μ. By Lévy's theorem
Ȳn ⇝ μ, and convergence in distribution to a constant is the same as convergence in
probability. •

A sufficient but not necessary condition for φ(t) = E e^{itY} to be differentiable at zero
is that E|Y| < ∞. In that case the dominated-convergence theorem allows differentiation
under the expectation sign, giving φ′(t) = E iY e^{itY}.
In particular, the derivative at zero is φ′(0) = iEY and hence Ȳn →P EY1.
If EY² < ∞, then the Taylor expansion can be carried a step further and we can obtain
a version of the central limit theorem.
2.17 Proposition (Central limit theorem). Let Y1, ..., Yn be i.i.d. random variables with
EYi = 0 and EYi² = 1. Then the sequence √n Ȳn converges in distribution to the standard
normal distribution.

Proof. A second differentiation under the expectation sign shows that φ″(0) = i²EY².
Because φ′(0) = iEY = 0, we obtain φ(t) = 1 − ½t² + o(t²) as t → 0, and hence

    E e^{it √n Ȳn} = φ(t/√n)^n = (1 − t²/(2n) + o(1/n))^n → e^{−½t²}.

The right side is the characteristic function of the standard normal distribution, so the
proposition follows from Lévy's continuity theorem. •
The characteristic function t ↦ E e^{it^T X} of a vector X is determined by the set of all
characteristic functions u ↦ E e^{iu(t^T X)} of linear combinations t^T X of the components of X.
Therefore, Lévy's continuity theorem implies that weak convergence of vectors is equivalent
to weak convergence of linear combinations:

    Xn ⇝ X if and only if t^T Xn ⇝ t^T X for every t ∈ ℝ^k.

This is known as the Cramér–Wold device.
2.18 Example (Multivariate central limit theorem). Let Y1, Y2, ... be i.i.d. random vec-
tors in ℝ^k with mean vector μ = EY1 and covariance matrix Σ = E(Y1 − μ)(Y1 − μ)^T.
Then

    √n (Ȳn − μ) = (1/√n) ∑_{i=1}^n (Yi − μ) ⇝ N_k(0, Σ).

(The sum is taken coordinatewise.) By the Cramér–Wold device, this can be proved by
finding the limit distribution of the sequences of real variables

    t^T ( (1/√n) ∑_{i=1}^n (Yi − μ) ) = (1/√n) ∑_{i=1}^n ( t^T Yi − t^T μ ).

Because the random variables t^T Y1 − t^T μ, t^T Y2 − t^T μ, ... are i.i.d. with zero mean and
variance t^T Σ t, this sequence is asymptotically N_1(0, t^T Σ t)-distributed by the univariate
central limit theorem. This is exactly the distribution of t^T X if X possesses an N_k(0, Σ)
distribution. □
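The Cramér–Wold reduction can be illustrated by simulation: for a fixed t, the linear combination t^T of the normalized sum should have variance close to t^T Σ t. The sketch below uses a deliberately non-normal two-dimensional vector whose covariance matrix is computable by hand; the particular distribution, sample sizes, and seed are arbitrary choices.

```python
import random

random.seed(1)

def draw_y():
    # A non-normal vector: U, V independent uniform on (-1, 1), Y = (U, U + V).
    u = random.uniform(-1.0, 1.0)
    v = random.uniform(-1.0, 1.0)
    return (u, u + v)

# Covariance of Y: var U = var V = 1/3, so Sigma = [[1/3, 1/3], [1/3, 2/3]].
t = (1.0, 2.0)
t_sigma_t = 1 / 3 + 2 * 1 * 2 * (1 / 3) + 4 * (2 / 3)  # t^T Sigma t = 13/3

n, reps = 200, 4000
zs = []
for _ in range(reps):
    s0 = s1 = 0.0
    for _ in range(n):
        y0, y1 = draw_y()
        s0 += y0
        s1 += y1
    # t^T ( n^{-1/2} * sum of (Y_i - mu) ), with mu = (0, 0) here
    zs.append((t[0] * s0 + t[1] * s1) / n ** 0.5)

mean = sum(zs) / reps
var = sum((z - mean) ** 2 for z in zs) / reps
print(f"sample variance of t^T Z_n: {var:.3f}; t^T Sigma t = {t_sigma_t:.3f}")
```

By the univariate central limit theorem the simulated linear combinations are approximately N(0, 13/3)-distributed.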
*2.4 Almost-Sure Representations
2.19 Theorem (Almost-sure representations). Suppose that the sequence of random vec-
tors Xn converges in distribution to a random vector X0. Then there exists a probability
space (Ω̃, Ũ, P̃) and random vectors X̃n defined on it such that X̃n is equal in distribution
to Xn for every n ≥ 0 and X̃n → X̃0 almost surely.

Proof. For random variables we can simply define X̃n = F_n^{−1}(U) for F_n the distribu-
tion function of Xn and U an arbitrary random variable with the uniform distribution on
[0, 1]. (The "quantile transformation"; see Section 21.1.) The simplest known construc-
tion for higher-dimensional vectors is more complicated. See, for example, Theorem 1.10.4
in [146], or [41]. •
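The quantile transformation in this proof is concrete enough to compute. The sketch below couples a sequence of exponential laws through one uniform variable; the rates 1 + 1/n (which converge to 1) are an illustrative assumption, not from the text.

```python
import math

def quantile_exp(u, rate):
    """Inverse distribution function F^{-1}(u) of the exponential(rate) law."""
    return -math.log1p(-u) / rate

# X_n ~ exponential(1 + 1/n) converges in distribution to X ~ exponential(1).
# Defining all X~_n = F_n^{-1}(U) from a single uniform U couples them so that
# X~_n(omega) -> X~(omega) for every realization of U.
u = 0.3  # one realization of U
path = [quantile_exp(u, 1 + 1 / n) for n in (1, 10, 100, 1000)]
limit = quantile_exp(u, 1.0)
print(path, "->", limit)
```

Each X̃n still has the distribution of Xn (by the quantile transformation), yet the coupled sequence converges pointwise, exactly as the theorem promises.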
*2.5 Convergence of Moments

Uniform integrability is the missing link between convergence in distribution and conver-
gence of moments.

2.20 Theorem. Let f: ℝ^k ↦ ℝ be measurable and continuous at every point in a set
C. Let Xn ⇝ X, where X takes its values in C. Then Ef(Xn) → Ef(X) if and only if the
sequence of random variables f(Xn) is asymptotically uniformly integrable.
Proof. We give the proof only in the most interesting direction. (See, for example, [146]
(p. 69) for the other direction.) Suppose that Yn = f(Xn) is asymptotically uniformly
integrable. Then we show that EYn → EY for Y = f(X). Assume without loss of
generality that Yn is nonnegative; otherwise argue the positive and negative parts separately.
By the continuous-mapping theorem, Yn ⇝ Y. By the triangle inequality,

    |EYn − EY| ≤ |EYn − E(Yn ∧ M)| + |E(Yn ∧ M) − E(Y ∧ M)| + |E(Y ∧ M) − EY|.

Because the function y ↦ y ∧ M is continuous and bounded on [0, ∞), it follows that the
middle term on the right converges to zero as n → ∞. The first term is bounded above by
E Yn 1{Yn > M}, and converges to zero as n → ∞ followed by M → ∞, by the uniform
integrability. By the portmanteau lemma (iv), the third term is bounded by the lim inf as
n → ∞ of the first and hence converges to zero as M ↑ ∞. •
2.21 Example. Suppose Xn is a sequence of random variables such that Xn ⇝ X and
lim sup E|Xn|^p < ∞ for some p. Then all moments of order strictly less than p converge
also: EXn^k → EX^k for every k < p.
By the preceding theorem, it suffices to prove that the sequence Xn^k is asymptotically
uniformly integrable. By Markov's inequality,

    E |Xn|^k 1{ |Xn|^k ≥ M } ≤ M^{1 − p/k} E|Xn|^p.

The limit superior, as n → ∞ followed by M → ∞, of the right side is zero if k < p. □
2.22 Theorem. Let Xn and X be random variables such that EXn^p → EX^p < ∞ for
every p ∈ ℕ. If the distribution of X is uniquely determined by its moments, then Xn ⇝ X.

Proof. Because EXn² = O(1), the sequence Xn is uniformly tight, by Markov's inequality.
By Prohorov's theorem, each subsequence has a further subsequence that converges weakly
to a limit Y. By the preceding example the moments of Y are the limits of the moments
of the subsequence. Thus the moments of Y are identical to the moments of X. Because,
by assumption, there is only one distribution with this set of moments, X and Y are equal
in distribution. Conclude that every subsequence of Xn has a further subsequence that
converges in distribution to X. This implies that the whole sequence converges to X. •
2.23 Example. The normal distribution is uniquely determined by its moments. (See, for
example, [123] or [133, p. 293].) Thus EXn^p → 0 for odd p and EXn^p → (p − 1)(p − 3)···1
for even p implies that Xn ⇝ N(0, 1). The converse is false. □
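The double-factorial formula (p − 1)(p − 3)···1 for the even moments of N(0, 1) can be confirmed against a crude numerical integral of x^p φ(x). This is a self-contained sketch; the cutoff 10 and step size are arbitrary numerical choices.

```python
import math

def normal_moment(p):
    """EX^p for X ~ N(0, 1): 0 for odd p, (p-1)(p-3)...1 for even p."""
    if p % 2 == 1:
        return 0.0
    m = 1.0
    for k in range(p - 1, 0, -2):
        m *= k
    return m

def numeric_moment(p, h=1e-3, cutoff=10.0):
    # Crude Riemann sum of the integral of x^p * phi(x) over [-cutoff, cutoff].
    total, x = 0.0, -cutoff
    while x < cutoff:
        total += x ** p * math.exp(-x * x / 2) * h
        x += h
    return total / math.sqrt(2 * math.pi)

for p in range(1, 9):
    print(p, normal_moment(p), round(numeric_moment(p), 4))
```

So EX² = 1, EX⁴ = 3, EX⁶ = 15, EX⁸ = 105, and all odd moments vanish by symmetry.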
*2.6 Convergence-Determining Classes

2.24 Lemma. On ℝ^k = ℝ^l × ℝ^m the set of functions (x, y) ↦ f(x)g(y), with f and g
ranging over all bounded, continuous functions on ℝ^l and ℝ^m, respectively, is convergence-
determining.

2.25 Lemma. There exists a countable set F of continuous functions f: ℝ^k ↦ [0, 1] that
is convergence-determining and, moreover, Xn ⇝ X implies that Ef(Xn) → Ef(X) uni-
formly in f ∈ F.
*2.7 Law of the Iterated Logarithm

2.26 Proposition (Law of the iterated logarithm). Let Y1, Y2, ... be i.i.d. random vari-
ables with mean zero and variance 1. Then

    lim sup_{n→∞} (Y1 + ··· + Yn) / √(n log log n) = √2, a.s.

Conversely, if this statement holds for both Yi and −Yi, then the variables have mean zero
and variance 1.
The law of the iterated logarithm gives an interesting illustration of the difference between
almost-sure and distributional statements. Under the conditions of the proposition, the
sequence n^{−1/2}(Y1 + ··· + Yn) is asymptotically normally distributed by the central limit
theorem. The limiting normal distribution is spread out over the whole real line. Apparently,
division by the factor √(log log n) is exactly right to keep n^{−1/2}(Y1 + ··· + Yn) within a
compact interval, eventually.
A simple application of Slutsky's lemma gives

    Zn := (Y1 + ··· + Yn) / √(n log log n) →P 0.

Thus Zn is with high probability contained in the interval (−ε, ε) eventually, for any ε > 0.
This appears to contradict the law of the iterated logarithm, which asserts that Zn reaches
the interval (√2 − ε, √2 + ε) infinitely often with probability one. The explanation is
that the set of ω such that Zn(ω) is in (−ε, ε) or (√2 − ε, √2 + ε) fluctuates with n. The
convergence in probability shows that at any advanced time a very large fraction of ω have
Zn(ω) ∈ (−ε, ε). The law of the iterated logarithm shows that for each particular ω the
sequence Zn(ω) drops in and out of the interval (√2 − ε, √2 + ε) infinitely often (and
hence out of (−ε, ε)).
The implications for statistics can be illustrated by considering confidence statements.
If μ and 1 are the true mean and variance of the sample Y1, Y2, ..., then the probability that

    Ȳn − 2/√n < μ < Ȳn + 2/√n
converges to Φ(2) − Φ(−2) ≈ 95%. Thus the given interval is an asymptotic confidence
interval of level approximately 95%. (The confidence level is exactly Φ(2) − Φ(−2) if the
observations are normally distributed. This may be assumed in the following; the accuracy
of the approximation is not an issue in this discussion.) The point μ = 0 is contained in
the interval if and only if the variable Zn satisfies

    |Zn| ≤ 2 / √(log log n).

Assume that μ = 0 is the true value of the mean, and consider the following argument. By
the law of the iterated logarithm, we can be sure that Zn hits the interval (√2 − ε, √2 + ε)
infinitely often. The expression 2/√(log log n) is close to zero for large n. Thus we can be
sure that the true value μ = 0 is outside the confidence interval infinitely often.
How can we solve the paradox that the usual confidence interval is wrong infinitely often?
There appears to be a conceptual problem if it is imagined that a statistician collects data in
a sequential manner, computing a confidence interval for every n. However, although the
frequentist interpretation of a confidence interval is open to the usual criticism, the paradox
does not seem to arise within the frequentist framework. In fact, from a frequentist point
of view the curious conclusion is reasonable. Imagine 100 statisticians, all of whom set
95% confidence intervals in the usual manner. They all receive one observation per day
and update their confidence intervals daily. Then every day about five of them should have
a false interval. It is only fair that as the days go by all of them take turns in being unlucky,
and that the same five do not have it wrong all the time. This, indeed, happens according
to the law of the iterated logarithm.
The paradox may be partly caused by the feeling that with a growing number of observa-
tions, the confidence intervals should become better. In contrast, the usual approach leads
to errors with certainty. However, this is only true if the usual approach is applied naively
in a sequential set-up. In practice one would do a genuine sequential analysis (including
the use of a stopping rule) or change the confidence level with n.
There is also another reason that the law of the iterated logarithm is of little practical
consequence. The argument in the preceding paragraphs is based on the assumption that
2/√(log log n) is close to zero and is nonsensical if this quantity is larger than √2. Thus the
argument requires at least n ≥ 1619, a respectable number of observations.
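The threshold 1619 follows from solving 2/√(log log n) ≤ √2, that is, log log n ≥ 2, or n ≥ e^{e²} ≈ 1618.2. A one-line search confirms it:

```python
import math

# Smallest n with 2 / sqrt(log log n) <= sqrt(2), i.e. log log n >= 2,
# i.e. n >= e^(e^2) ~ 1618.2.
n = 3  # log log n is defined and positive from n = 3 on
while 2 / math.sqrt(math.log(math.log(n))) > math.sqrt(2):
    n += 1
print(n)  # 1619
```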
*2.8 Lindeberg–Feller Theorem

2.27 Proposition (Lindeberg–Feller central limit theorem). For each n let Y_{n,1}, ...,
Y_{n,k_n} be independent random vectors with finite variances such that

    ∑_{i=1}^{k_n} E ‖Y_{n,i}‖² 1{ ‖Y_{n,i}‖ > ε } → 0, for every ε > 0,

    ∑_{i=1}^{k_n} Cov Y_{n,i} → Σ.

Then the sequence ∑_{i=1}^{k_n} (Y_{n,i} − EY_{n,i}) converges in distribution to a normal N(0, Σ)
distribution.
A result of this type is necessary to treat the asymptotics of, for instance, regression
problems with fixed covariates. We illustrate this by the linear regression model. The
application is straightforward but notationally a bit involved. Therefore, at other places
in the manuscript we find it more convenient to assume that the covariates are a random
sample, so that the ordinary central limit theorem applies.
2.28 Example (Linear regression). In the linear regression problem, we observe a vector
Y = Xβ + e for a known (n × p) matrix X of full rank, and an (unobserved) error vector e
with i.i.d. components with mean zero and variance σ². The least squares estimator of β is

    β̂ = (X^T X)^{−1} X^T Y.

This estimator is unbiased and has covariance matrix σ²(X^T X)^{−1}. If the error vector e is
normally distributed, then β̂ is exactly normally distributed. Under reasonable conditions
on the design matrix, the least squares estimator is asymptotically normally distributed for
a large range of error distributions. Here we fix p and let n tend to infinity.
This follows from the representation

    (X^T X)^{1/2} (β̂ − β) = (X^T X)^{−1/2} X^T e = ∑_{i=1}^n a_{ni} e_i,

where a_{n1}, ..., a_{nn} are the columns of the (p × n) matrix (X^T X)^{−1/2} X^T =: A. This sequence
is asymptotically normal if the vectors a_{n1}e_1, ..., a_{nn}e_n satisfy the Lindeberg conditions.
The norming matrix (X^T X)^{1/2} has been chosen to ensure that the vectors in the display
have covariance matrix σ²I for every n. The remaining condition is

    ∑_{i=1}^n ‖a_{ni}‖² E e_i² 1{ ‖a_{ni}‖ |e_i| > ε } → 0.

This can be simplified to other conditions in several ways. Because ∑ ‖a_{ni}‖² = trace(AA^T)
= p, it suffices that max_i E e_i² 1{ ‖a_{ni}‖ |e_i| > ε } → 0, which is equivalent to

    max_{1≤i≤n} ‖a_{ni}‖ → 0.
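The condition max_i ‖a_{ni}‖ → 0 can be checked concretely, because ‖a_{ni}‖² is the i-th diagonal element h_ii of the hat matrix X(X^T X)^{−1}X^T (the leverage of observation i): A^T A = X(X^T X)^{−1}X^T. The sketch below does this for a simple regression design with an intercept and one bounded covariate; the covariates t_i = i/n are an illustrative assumption.

```python
def max_leverage(ts):
    """Max_i h_ii for the design X = [1, t_i]; note ||a_ni||^2 = h_ii."""
    n = len(ts)
    s1 = sum(ts)
    s2 = sum(t * t for t in ts)
    det = n * s2 - s1 * s1
    # h_ii = [1, t_i] (X^T X)^{-1} [1, t_i]^T, with
    # (X^T X)^{-1} = (1/det) [[s2, -s1], [-s1, n]]
    return max((s2 - 2 * s1 * t + n * t * t) / det for t in ts)

levs = {}
for n in (10, 100, 1000):
    ts = [i / n for i in range(1, n + 1)]  # covariates kept in a bounded range
    levs[n] = max_leverage(ts)
    print(n, round(levs[n], 4))
```

The maximal leverage decreases roughly like a constant over n, so the Lindeberg condition holds and the least squares estimator is asymptotically normal for this design.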
It is reasonable to assume that the averages of the covariates and of their squares remain
bounded. Then the relevant matrices behave like fixed matrices, and the conditions for
asymptotic normality simplify accordingly.

*2.9 Convergence in Total Variation

A sequence of random vectors Xn converges in total variation to a vector X if

    sup_B | P(Xn ∈ B) − P(X ∈ B) | → 0,

where the supremum is taken over all measurable sets B. In view of the portmanteau lemma,
this type of convergence is stronger than convergence in distribution. Not only is it required
that the sequence P(Xn ∈ B) converges for every Borel set B, the convergence must also
be uniform in B. Such strong convergence occurs less frequently and is often more than
necessary, whence the concept is less useful.
A simple sufficient condition for convergence in total variation is pointwise convergence
of densities. If Xn and X have densities p_n and p with respect to a measure μ, then

    sup_B | P(Xn ∈ B) − P(X ∈ B) | = ½ ∫ |p_n − p| dμ.

Thus, convergence in total variation can be established by convergence theorems for inte-
grals from measure theory. The following proposition, which should be compared with the
monotone and dominated convergence theorems, is most appropriate.
2.29 Proposition. Suppose that f_n and f are arbitrary measurable functions such that
f_n → f μ-almost everywhere (or in μ-measure) and lim sup ∫ |f_n|^p dμ ≤ ∫ |f|^p dμ <
∞, for some p ≥ 1 and measure μ. Then ∫ |f_n − f|^p dμ → 0.

Proof. By the inequality (a + b)^p ≤ 2^p a^p + 2^p b^p, valid for every a, b ≥ 0, and the
assumption, 0 ≤ 2^p |f_n|^p + 2^p |f|^p − |f_n − f|^p → 2^{p+1} |f|^p almost everywhere. By
Fatou's lemma and the assumption on the norms,

    2^{p+1} ∫ |f|^p dμ ≤ lim inf ∫ ( 2^p |f_n|^p + 2^p |f|^p − |f_n − f|^p ) dμ
                      ≤ 2^{p+1} ∫ |f|^p dμ − lim sup ∫ |f_n − f|^p dμ.

Because ∫ |f|^p dμ is finite, it follows that lim sup ∫ |f_n − f|^p dμ ≤ 0. •
2.30 Corollary (Scheffé). Let Xn and X be random vectors with densities p_n and p with
respect to a measure μ. If p_n → p μ-almost everywhere, then the sequence Xn converges
to X in total variation.
2.31 Theorem (Central limit theorem in total variation). Let Y1, Y2, ... be i.i.d. random
variables with finite second moment and characteristic function φ such that ∫ |φ(t)|^ν dt <
∞ for some ν ≥ 1. Then Y1 + ··· + Yn satisfies the central limit theorem in total variation.
Proof. It can be assumed without loss of generality that EY1 = 0 and var Y1 = 1. By
the inversion formula for characteristic functions (see [47, p. 509]), the density p_n of
(Y1 + ··· + Yn)/√n can be written

    p_n(x) = (1/2π) ∫ e^{−itx} φ(t/√n)^n dt.
By the central limit theorem and Lévy's continuity theorem, the integrand converges to
e^{−itx} e^{−½t²}. It will be shown that the integral converges to

    (1/2π) ∫ e^{−itx} e^{−½t²} dt = e^{−½x²} / √(2π).
Then an application of Scheffe's theorem concludes the proof.
The integral can be split into two parts. First, for every ε > 0,

    ∫_{|t| > ε√n} | e^{−itx} φ(t/√n)^n | dt ≤ √n sup_{|t| > ε} |φ(t)|^{n−ν} ∫ |φ(t)|^ν dt.

Here sup_{|t|>ε} |φ(t)| < 1 by the Riemann–Lebesgue lemma and because φ is the characteristic
function of a nonlattice distribution (e.g., [47, pp. 501, 513]). Thus, the first part of the
integral converges to zero geometrically fast.
Second, a Taylor expansion yields that φ(t) = 1 − ½t² + o(t²) as t → 0, so that there
exists ε > 0 such that |φ(t)| ≤ 1 − t²/4 for every |t| < ε. It follows that, for |t| ≤ ε√n,

    | φ(t/√n)^n | ≤ (1 − t²/(4n))^n ≤ e^{−t²/4}.

The proof can be concluded by applying the dominated convergence theorem to the remain-
ing part of the integral. •
Notes
The results of this chapter can be found in many introductions to probability theory. A
standard reference for weak convergence theory is the first chapter of [11]. Another very
readable introduction is [41]. The theory of this chapter is extended to random elements
with values in general metric spaces in Chapter 18.
PROBLEMS
1. If Xn possesses a t-distribution with n degrees of freedom, then Xn ⇝ N(0, 1) as n → ∞.
Show this.
2. Does it follow immediately from the result of the previous exercise that EXn^p → E N(0, 1)^p for
every p ∈ ℕ? Is this true?
3. If Xn ⇝ N(0, 1) and Yn →P σ, then XnYn ⇝ N(0, σ²). Show this.
4. In what sense is a chi-square distribution with n degrees of freedom approximately a normal
distribution?
5. Find an example of sequences such that Xn ⇝ X and Yn ⇝ Y, but the joint sequence (Xn, Yn)
does not converge in law.
6. If Xn and Yn are independent random vectors for every n, then Xn ⇝ X and Yn ⇝ Y imply that
(Xn, Yn) ⇝ (X, Y), where X and Y are independent. Show this.
7. If every Xn and X possess discrete distributions supported on the integers, then Xn ⇝ X if and
only if P(Xn = x) → P(X = x) for every integer x. Show this.
8. If P(Xn = i/n) = 1/n for every i = 1, 2, ..., n, then Xn ⇝ X, but there exist Borel sets with
P(Xn ∈ B) = 1 for every n, but P(X ∈ B) = 0. Show this.
9. If P(Xn = x_n) = 1 for numbers x_n and x_n → x, then Xn ⇝ x. Prove this
(i) by considering distribution functions;
(ii) by using Theorem 2.7.
10. State the rule o_P(1) + O_P(1) = O_P(1) in terms of random vectors and show its validity.
11. In what sense is it true that o_P(1) = O_P(1)? Is it true that O_P(1) = o_P(1)?
12. The rules given by Lemma 2.12 are not simple plug-in rules.
(i) Give an example of a function R with R(h) = o(‖h‖) as h → 0 and a sequence of random
variables Xn such that R(Xn) is not equal to o_P(Xn).
(ii) Give an example of a function R such that R(h) = O(‖h‖) as h → 0 and a sequence of random
variables Xn such that Xn = O_P(1) but R(Xn) is not equal to O_P(Xn).
13. Find an example of a sequence of random variables such that Xn ⇝ 0, but EXn → ∞.
14. Find an example of a sequence of random variables such that Xn →P 0, but Xn does not converge
almost surely.
15. Let X1, ..., Xn be i.i.d. with density f_{λ,a}(x) = λe^{−λ(x−a)} 1{x ≥ a}. Calculate the maximum
likelihood estimator (λ̂n, ân) of (λ, a) and show that (λ̂n, ân) →P (λ, a).
16. Let X1, ..., Xn be i.i.d. standard normal variables. Show that the vector U = (X1, ..., Xn)/N,
where N² = ∑_{i=1}^n Xi², is uniformly distributed over the unit sphere S^{n−1} in ℝ^n, in the sense that
U and OU are identically distributed for every orthogonal transformation O of ℝ^n.
17. For each n, let Un be uniformly distributed over the unit sphere S^{n−1} in ℝ^n. Show that the vectors
√n (U_{n,1}, U_{n,2}) converge in distribution to a pair of independent standard normal variables.
18. If √n(Tn − θ) converges in distribution, then Tn converges in probability to θ. Show this.
19. If EXn → μ and var Xn → 0, then Xn →P μ. Show this.
20. If ∑_{n=1}^∞ P(|Xn| > ε) < ∞ for every ε > 0, then Xn converges almost surely to zero. Show this.
21. Use characteristic functions to show that binomial(n, λ/n) ⇝ Poisson(λ). Why does the central
limit theorem not hold?
22. If X1, ..., Xn are i.i.d. standard Cauchy, then X̄n is standard Cauchy.
(i) Show this by using characteristic functions.
(ii) Why does the weak law not hold?
23. Let X1, ..., Xn be i.i.d. with finite fourth moment. Find constants a, b, and c_n such that the
sequence c_n(X̄n − a, X̄²n − b) converges in distribution, and determine the limit law. Here X̄n
and X̄²n are the averages of the Xi and the Xi², respectively.
3
Delta Method
If √n(Tn − θ) ⇝ T for some variable T, then we expect that √n(φ(Tn) − φ(θ)) ⇝ φ′(θ)T.
In particular, if √n(Tn − θ) is asymptotically normal N(0, σ²), then we expect that
√n(φ(Tn) − φ(θ)) is asymptotically normal N(0, φ′(θ)²σ²). This is proved in greater
generality in the following theorem.
In the preceding paragraph it is silently understood that Tn is real-valued, but we are more
interested in considering statistics φ(Tn) that are formed out of several more basic statistics.
Consider the situation that Tn = (T_{n,1}, ..., T_{n,k}) is vector-valued, and that φ: ℝ^k ↦ ℝ^m is
a given function defined at least on a neighbourhood of θ. Recall that φ is differentiable at
θ if there exists a linear map (matrix) φ′_θ: ℝ^k ↦ ℝ^m such that

    φ(θ + h) − φ(θ) = φ′_θ(h) + o(‖h‖), as h → 0.

All the expressions in this equation are vectors of length m, and ‖h‖ is the Euclidean
norm. The linear map h ↦ φ′_θ(h) is sometimes called a "total derivative," as opposed to
partial derivatives. A sufficient condition for φ to be (totally) differentiable is that all partial
derivatives ∂φ_i(x)/∂x_j exist for x in a neighborhood of θ and are continuous at θ. (Just
existence of the partial derivatives is not enough.) In any case, the total derivative is found
from the partial derivatives. If φ is differentiable, then it is partially differentiable, and the
derivative map h ↦ φ′_θ(h) is matrix multiplication by the matrix

    φ′_θ = ( ∂φ_1/∂x_1 (θ)  ···  ∂φ_1/∂x_k (θ) )
           (       ⋮                  ⋮        )
           ( ∂φ_m/∂x_1 (θ)  ···  ∂φ_m/∂x_k (θ) ).

If the dependence of the derivative φ′_θ on θ is continuous, then φ is called continuously
differentiable.
It is better to think of a derivative as a linear approximation h ↦ φ′_θ(h) to the function
h ↦ φ(θ + h) − φ(θ) than as a set of partial derivatives. Thus the derivative at a point θ
is a linear map. If the range space of φ is the real line (so that the derivative is a horizontal
vector), then the derivative is also called the gradient of the function.
Note that what is usually called the derivative of a function φ: ℝ ↦ ℝ does not com-
pletely correspond to the present derivative. The derivative at a point, usually written φ′(θ),
is written here as φ′_θ. Although φ′(θ) is a number, the second object φ′_θ is identified with the
map h ↦ φ′_θ(h) = φ′(θ) h. Thus in the present terminology the usual derivative function
θ ↦ φ′(θ) is a map from ℝ into the set of linear maps from ℝ ↦ ℝ, not a map from
ℝ ↦ ℝ. Graphically the "affine" approximation h ↦ φ(θ) + φ′_θ(h) is the tangent to the
function φ at θ.
3.1 Theorem. Let φ: D_φ ⊂ ℝ^k ↦ ℝ^m be a map defined on a subset of ℝ^k and dif-
ferentiable at θ. Let Tn be random vectors taking their values in the domain of φ. If
r_n(Tn − θ) ⇝ T for numbers r_n → ∞, then r_n(φ(Tn) − φ(θ)) ⇝ φ′_θ(T). Moreover, the
difference between r_n(φ(Tn) − φ(θ)) and φ′_θ(r_n(Tn − θ)) converges to zero in probability.

Proof. Because the sequence r_n(Tn − θ) converges in distribution, it is uniformly tight and
Tn − θ converges to zero in probability. By the differentiability of φ the remainder function
R(h) = φ(θ + h) − φ(θ) − φ′_θ(h) satisfies R(h) = o(‖h‖) as h → 0. Lemma 2.12 allows
us to replace the fixed h by a random sequence and gives

    r_n R(Tn − θ) = o_P( r_n ‖Tn − θ‖ ) = o_P(1),

because r_n‖Tn − θ‖ = O_P(1). Consequently, r_n(φ(Tn) − φ(θ)) = φ′_θ(r_n(Tn − θ)) + o_P(1),
and the theorem follows from the continuous-mapping theorem and Slutsky's lemma. •
The map φ is differentiable at the point θ = (α1, α2)^T, with derivative φ′_{(α1,α2)} = (−2α1, 1).
Thus if the vector (T1, T2)^T possesses the normal distribution in the last display, then the
limit in distribution of the normalized sample variance is −2α1 T1 + T2.
The latter variable is normally distributed with zero mean and a variance that can be ex-
pressed in α1, ..., α4. In case α1 = 0, this variance is simply α4 − α2². The general case
can be reduced to this case, because S² does not change if the observations Xi are replaced
by the centered variables Yi = Xi − α1. Write μ_k = EYi^k for the central moments of the
Xi. Noting that S² = φ(Ȳ, Ȳ²) and that φ(μ1, μ2) = μ2 is the variance of the original
observations, we obtain

    √n (S² − μ2) ⇝ N(0, μ4 − μ2²).
3.3 Example (Level of the chi-square test). As an application of the preceding example,
consider the chi-square test for testing variance. Normal theory prescribes to reject the null
hypothesis H0: μ2 ≤ 1 for values of nS² exceeding the upper α-point χ²_{n−1,α} of the χ²_{n−1}
distribution. If the observations are sampled from a normal distribution, then the test has
exactly level α. Is this still approximately the case if the underlying distribution is not
normal? Unfortunately, the answer is negative.
For large values of n, this can be seen with the help of the preceding result. The central
limit theorem and the preceding example yield the two statements

    √n (S²/μ2 − 1) ⇝ N(0, κ + 2),    (χ²_{n−1,α} − n)/√n → zα √2,

where κ = μ4/μ2² − 3 is the kurtosis of the underlying distribution. Consequently,

    P_{μ2=1}( nS² > χ²_{n−1,α} ) = P( √n (S²/μ2 − 1) > (χ²_{n−1,α} − n)/√n )
                                 → 1 − Φ( zα √2 / √(κ + 2) ).

The asymptotic level reduces to 1 − Φ(zα) = α if and only if the kurtosis of the underlying
distribution is 0. This is the case for normal distributions. On the other hand, heavy-tailed
distributions have a much larger kurtosis. If the kurtosis of the underlying distribution is
"close to" infinity, then the asymptotic level is close to 1 − Φ(0) = 1/2. We conclude that
the level of the chi-square test is nonrobust against departures of normality that affect the
value of the kurtosis. At least this is true if the critical values of the test are taken from
the chi-square distribution with (n − 1) degrees of freedom. If, instead, we would use a
normal approximation to the distribution of √n(S²/μ2 − 1) the problem would not arise,
provided the asymptotic variance κ + 2 is estimated accurately. Table 3.1 gives the level
for two distributions with slightly heavier tails than the normal distribution. □

Table 3.1. Level of the chi-square test for variance under two nonnormal distributions.

    Law                           Level
    Laplace                       0.12
    0.95 N(0, 1) + 0.05 N(0, 9)   0.12
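The asymptotic level 1 − Φ(zα√2/√(κ + 2)) is easy to evaluate. The sketch below does so for a few kurtosis values; the quantile z_{0.05} ≈ 1.6449 is the standard normal value, and the excess kurtosis 3 for the Laplace distribution is a known fact, not from the text. (The finite-sample levels in Table 3.1 need not coincide with the asymptotic limits.)

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

Z_05 = 1.6449  # upper 5% standard normal quantile (standard value)

def asymptotic_level(kurtosis, z_alpha=Z_05):
    """Limit 1 - Phi(z_alpha * sqrt(2 / (kappa + 2))) of the rejection probability."""
    return 1 - Phi(z_alpha * math.sqrt(2 / (kurtosis + 2)))

for name, kappa in [("normal", 0.0), ("Laplace", 3.0), ("very heavy-tailed", 20.0)]:
    print(f"{name:18s} kurtosis={kappa:5.1f}  asymptotic level={asymptotic_level(kappa):.3f}")
```

For κ = 0 the level is the nominal α = 0.05; as κ grows the level creeps toward 1 − Φ(0) = 1/2, which is the nonrobustness the example describes.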
In the preceding example the asymptotic distribution of √n(S² − σ²) was obtained by the
delta method. Actually, it can also and more easily be derived by a direct expansion. Write

    √n (S² − σ²) = √n ( (1/n) ∑_{i=1}^n (Xi − μ)² − σ² ) − √n (X̄ − μ)².

The second term converges to zero in probability; the first term is asymptotically normal
by the central limit theorem. The whole expression is asymptotically normal by Slutsky's
lemma.
Thus it is not always a good idea to apply general theorems. However, in many exam-
ples the delta method is a good way to package the mechanics of Taylor expansions in a
transparent way.
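The limit √n(S² − σ²) ⇝ N(0, μ4 − σ⁴) can be checked by simulation. The sketch below uses Uniform(−1, 1) observations, for which σ² = 1/3 and μ4 = 1/5, so the asymptotic variance is 1/5 − 1/9 = 4/45; the sample sizes and seed are arbitrary choices.

```python
import random

random.seed(2)

def s2(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# X ~ Uniform(-1, 1): sigma^2 = 1/3, mu_4 = 1/5, so the asymptotic variance of
# sqrt(n) (S^2 - sigma^2) is mu_4 - sigma^4 = 1/5 - 1/9 = 4/45 ~ 0.0889.
n, reps = 200, 4000
devs = [n ** 0.5 * (s2([random.uniform(-1, 1) for _ in range(n)]) - 1 / 3)
        for _ in range(reps)]
mean = sum(devs) / reps
var = sum((d - mean) ** 2 for d in devs) / reps
print(f"simulated variance {var:.4f} vs mu_4 - sigma^4 = {4 / 45:.4f}")
```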
3.4 Example. Consider the joint limit distribution of the sample variance S² and the
t-statistic X̄/S. Again for the limit distribution it does not make a difference whether we
use a factor n or n − 1 to standardize S². For simplicity we use n. Then (S², X̄/S) can be
written as φ(X̄, X̄²) for the map φ: ℝ² ↦ ℝ² given by

    φ(x, y) = ( y − x², x/√(y − x²) ).
3.5 Example (Skewness). Write l_n for the sample skewness of a sample X1, ..., Xn. If
Yi = (Xi − μ)/σ are the standardized observations, with skewness λ = EYi³ and kurtosis
κ = EYi⁴ − 3, then the multivariate central limit theorem gives

    √n ( Ȳ, Ȳ² − 1, Ȳ³ − λ )^T ⇝ N₃( 0, Σ ),

    Σ = ( 1          λ             κ + 3       )
        ( λ          κ + 2         μ5/σ⁵ − λ   )
        ( κ + 3      μ5/σ⁵ − λ     μ6/σ⁶ − λ²  ).

The derivative of φ at the point (0, 1, λ) equals (−3, −3λ/2, 1). Hence, if T possesses the
normal distribution in the display, then √n(l_n − λ) is asymptotically normally distributed with
mean zero and variance equal to var(−3T1 − 3λT2/2 + T3). If the underlying distribution
is normal, then λ = μ5 = 0, κ = 0 and μ6/σ⁶ = 15. In that case the sample skewness is
asymptotically N(0, 6)-distributed.
An approximate level α test for normality based on the sample skewness could be to
reject normality if √n |l_n| > √6 z_{α/2}. Table 3.2 gives the level of this test for different
values of n. □
Table 3.2. Level of the skewness test for normality.

    n     Level
    10    0.02
    20    0.03
    30    0.03
    50    0.05
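The N(0, 6) limit for √n times the sample skewness under normality is easy to verify by simulation. This is a sketch only; the sample size, number of replications, and seed are arbitrary choices.

```python
import random

random.seed(3)

def skewness(xs):
    """Sample skewness m_3 / m_2^{3/2} based on central moments with factor 1/n."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

n, reps = 500, 1500
ls = [skewness([random.gauss(0, 1) for _ in range(n)]) for _ in range(reps)]
scaled = [n ** 0.5 * l for l in ls]
var = sum(s * s for s in scaled) / reps  # the mean is 0 by symmetry
print(f"variance of sqrt(n) * skewness: {var:.2f} (asymptotic value 6)")
```

The simulated variance should be close to 6, consistent with the N(0, 6) limit used by the normality test above.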
These are asymptotically of level 1 − 2α in that the probability that θ is covered by the
interval converges to 1 − 2α for every θ. Unfortunately, as stated previously, these intervals
are useless, because of their dependence on the unknown θ. One solution is to replace
the unknown standard deviations σ(θ) by estimators. If the sequence of estimators is
chosen consistent, then the resulting confidence interval still has asymptotic level 1 − 2α.
Another approach is to use a variance-stabilizing transformation, which often leads to a
better approximation.
The idea is that no problem arises if the asymptotic variances σ²(θ) are independent of θ.
Although this fortunate situation is rare, it is often possible to transform the parameter into
a different parameter η = φ(θ), for which this idea can be applied. The natural estimator
for η is φ(Tn). If φ is differentiable, then

    √n ( φ(Tn) − φ(θ) ) ⇝ N( 0, φ′(θ)² σ²(θ) ).

For φ chosen such that φ′(θ)σ(θ) ≡ 1, the asymptotic variance is constant and finding an
asymptotic confidence interval for η = φ(θ) is easy. The solution

    φ(θ) = ∫ 1/σ(θ) dθ

is a variance-stabilizing transformation.
3.6 Example (Correlation). Let (X1, Y1), ..., (Xn, Yn) be a sample from a bivariate nor-
mal distribution with correlation coefficient ρ. The sample correlation coefficient is defined
as

    r = ∑ (Xi − X̄)(Yi − Ȳ) / √( ∑ (Xi − X̄)² ∑ (Yi − Ȳ)² ).

With the help of the delta method, it is possible to derive that √n(r − ρ) is asymptotically
zero-mean normal, with variance depending on the (mixed) third and fourth moments of
(X, Y). This is true for general underlying distributions, provided the fourth moments exist.
Under the normality assumption the asymptotic variance can be expressed in the correlation
of X and Y. Tedious algebra gives

    √n (r − ρ) ⇝ N( 0, (1 − ρ²)² ).

It does not work very well to base an asymptotic confidence interval directly on this result.
Figure 3.1. Histogram of 1000 sample correlation coefficients, based on 1000 independent samples
of the bivariate normal distribution with correlation 0.6, and histogram of the arctanh of these
values.
The transformation

    φ(ρ) = ∫ 1/(1 − ρ²) dρ = ½ log( (1 + ρ)/(1 − ρ) ) = arctanh ρ

is variance-stabilizing: the sequence √n( arctanh r − arctanh ρ ) converges to a standard
normal distribution for every ρ.
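The arctanh transformation translates directly into a confidence interval for ρ: form an interval on the transformed scale and map it back with tanh. The helper below is a sketch of this recipe (some texts use variance 1/(n − 3) instead of the asymptotic 1/n; the name `fisher_ci` and the chosen inputs are illustrative assumptions).

```python
import math

def fisher_ci(r, n, z=1.96):
    """Approximate confidence interval for rho via the arctanh transform.

    arctanh(r) is approximately N(arctanh(rho), 1/n), so an interval on the
    transformed scale is mapped back to the correlation scale with tanh.
    """
    center = math.atanh(r)
    half = z / math.sqrt(n)
    return math.tanh(center - half), math.tanh(center + half)

lo, hi = fisher_ci(0.6, 100)
print(f"r = 0.6, n = 100: approximate 95% interval ({lo:.3f}, {hi:.3f})")
```

The back-transformed interval is automatically contained in (−1, 1) and is asymmetric around r, which is exactly why it behaves better than an interval based directly on √n(r − ρ) ⇝ N(0, (1 − ρ²)²).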
*3.3 Higher-Order Expansions

In the one-dimensional case, a Taylor expansion applied to a statistic Tn has the form

    φ(Tn) = φ(θ) + (Tn − θ) φ′(θ) + ½ (Tn − θ)² φ″(θ) + ··· .

Usually the linear term (Tn − θ)φ′(θ) is of higher order than the remainder, and thus
determines the order at which φ(Tn) − φ(θ) converges to zero: the same order as Tn − θ.
Then the approach of the preceding section gives the limit distribution of φ(Tn) − φ(θ). If
φ′(θ) = 0, this approach is still valid but not of much interest, because the resulting limit
distribution is degenerate at zero. Then it is more informative to multiply the difference
φ(Tn) − φ(θ) by a higher rate and obtain a nondegenerate limit distribution. Looking at
the Taylor expansion, we see that the linear term disappears if φ′(θ) = 0, and we expect
that the quadratic term determines the limit behavior of φ(Tn).
3.7 Example. Suppose that √n X̄ converges weakly to a standard normal distribution.
Because the derivative of x ↦ cos x is zero at x = 0, the standard delta method of the
preceding section yields that √n(cos X̄ − cos 0) converges weakly to 0. It should be
concluded that √n is not the right norming rate for the random sequence cos X̄ − 1. A
more informative statement is that −2n(cos X̄ − 1) converges in distribution to a chi-square
distribution with one degree of freedom. The explanation is that

    −2n (cos X̄ − 1) = n X̄² + n O_P(X̄⁴) = (√n X̄)² + o_P(1) ⇝ χ²₁,

by the continuous-mapping theorem. □
If the sequences T_{n,i} − θ_i are of different order, then it may happen, for instance, that the
linear part involving T_{n,i} − θ_i is of the same order as the quadratic part involving (T_{n,j} − θ_j)².
Thus, it is necessary to determine carefully the rate of all terms in the expansion, and to
rearrange these in decreasing order of magnitude, before neglecting the "remainder."
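The chi-square limit of Example 3.7 can be checked by simulation: with standard normal observations, −2n(cos X̄ − 1) should behave like a χ²₁ variable, whose mean is 1. A sketch with arbitrary sample size, replication count, and seed:

```python
import math
import random

random.seed(4)

n, reps = 400, 2000
stats = []
for _ in range(reps):
    xbar = sum(random.gauss(0, 1) for _ in range(n)) / n
    # The second-order delta method statistic; approximately (sqrt(n) * xbar)^2.
    stats.append(-2 * n * (math.cos(xbar) - 1))

mean = sum(stats) / reps
print(f"mean {mean:.3f} (chi-square_1 has mean 1); "
      f"all values nonnegative: {min(stats) >= 0}")
```

The statistic is nonnegative by construction (cos x ≤ 1), in line with a chi-square limit rather than a normal one.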
*3.4 Uniform Delta Method

Several applications of Slutsky's lemma and the delta method yield as limit in law the vector
φ′_θ(T + h) − φ′_θ(h) = φ′_θ(T), if T is the limit in distribution of √n(Tn − θn). For θn → θ
at a slower rate, this argument does not work. However, the same result is true under a
slightly stronger differentiability assumption on φ.
3.8 Theorem. Let φ : ℝᵏ ↦ ℝᵐ be a map defined and continuously differentiable in a neighborhood of θ. Let Tn be random vectors taking their values in the domain of φ. If rn(Tn − θn) ⇝ T for vectors θn → θ and numbers rn → ∞, then rn(φ(Tn) − φ(θn)) ⇝ φ′_θ(T). Moreover, the difference between rn(φ(Tn) − φ(θn)) and φ′_θ(rn(Tn − θn)) converges to zero in probability.
Proof. It suffices to prove the last assertion. Because convergence in probability to zero of vectors is equivalent to convergence to zero of the components separately, it is no loss of generality to assume that φ is real-valued. For 0 ≤ t ≤ 1 and fixed h, define gn(t) = φ(θn + th). For sufficiently large n and sufficiently small h, both θn and θn + h are in a ball around θ inside the neighborhood on which φ is differentiable. Then gn : [0, 1] ↦ ℝ is continuously differentiable with derivative g′n(t) = φ′_{θn+th}(h). By the mean-value theorem, gn(1) − gn(0) = g′n(ξ) for some 0 ≤ ξ ≤ 1. In other words,

    φ(θn + h) − φ(θn) = φ′_{θn+ξh}(h) = φ′_θ(h) + Rn(h).

By the continuity of the map θ ↦ φ′_θ there exists for every ε > 0 a δ > 0 such that ‖φ′_ϑ(h) − φ′_θ(h)‖ ≤ ε‖h‖ for every ‖ϑ − θ‖ < δ and every h. For sufficiently large n and ‖h‖ < δ/2, the vectors θn + ξh are within distance δ of θ, so that the norm ‖Rn(h)‖ of the right side of the preceding display is bounded by ε‖h‖. Thus, for any η > 0,

    P(‖rn(φ(Tn) − φ(θn)) − φ′_θ(rn(Tn − θn))‖ > η) ≤ P(‖Tn − θn‖ ≥ δ/2) + P(ε‖rn(Tn − θn)‖ > η).

The first term converges to zero as n → ∞. The second term can be made arbitrarily small by choosing ε small. ∎
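The theorem can be illustrated numerically. In the sketch below (the map φ = exp, the sequences θn = 1 + 1/n and rn = √n, and the sample sizes are illustrative choices, not from the text), the difference between rn(φ(Tn) − φ(θn)) and φ′(θ) rn(Tn − θn) shrinks as n grows, even though θn moves with n.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = np.exp                         # smooth map with phi'(1) = e
for n in (10, 10_000):
    theta_n = 1 + 1 / n              # theta_n -> theta = 1
    z = rng.standard_normal(100_000) # r_n (T_n - theta_n) ~ N(0, 1), r_n = sqrt(n)
    t_n = theta_n + z / np.sqrt(n)
    diff = np.sqrt(n) * (phi(t_n) - phi(theta_n)) - np.e * z
    print(n, np.abs(diff).mean())    # mean absolute difference decreases with n
```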
*3.5 Moments
So far we have discussed the stability of convergence in distribution under transformations. We can pose the same problem regarding moments: Can an expansion for the moments of φ(Tn) − φ(θ) be derived from a similar expansion for the moments of Tn − θ? In principle the answer is affirmative, but unlike in the distributional case, in which a simple derivative of φ is enough, global regularity conditions on φ are needed to argue that the remainder terms are negligible.
One possible approach is to apply the distributional delta method first, thus yielding the qualitative asymptotic behavior. Next, the convergence of the moments of φ(Tn) − φ(θ) (or a remainder term) is a matter of uniform integrability, in view of Lemma 2.20. If φ is uniformly Lipschitz, then this uniform integrability follows from the corresponding uniform integrability of Tn − θ. If φ has an unbounded derivative, then the connection between moments of φ(Tn) − φ(θ) and Tn − θ is harder to make, in general.
Notes
The Delta method belongs to the folklore of statistics. It is not entirely trivial; proofs are
sometimes based on the mean-value theorem and then require continuous differentiability in
a neighborhood. A generalization to functions on infinite-dimensional spaces is discussed
in Chapter 20.
PROBLEMS
1. Find the joint limit distribution of (√n(X̄ − μ), √n(S² − σ²)) if X̄ and S² are based on a sample of size n from a distribution with finite fourth moment. Under what condition on the underlying distribution are √n(X̄ − μ) and √n(S² − σ²) asymptotically independent?
2. Find the asymptotic distribution of √n(r − ρ) if r is the correlation coefficient of a sample of n bivariate vectors with finite fourth moments. (This is quite a bit of work. It helps to assume that the means and the variances are equal to 0 and 1, respectively.)
3. Investigate the asymptotic robustness of the level of the t-test for testing the mean that rejects H₀ : μ ≤ 0 if √n X̄/S is larger than the upper α quantile of the t_{n−1} distribution.
4. Find the limit distribution of the sample kurtosis kn = n⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X̄)⁴/S⁴ − 3, and design an asymptotic level α test for normality based on kn. (Warning: At least 500 observations are needed to make the normal approximation work in this case.)
5. Design an asymptotic level α test for normality based on the sample skewness and kurtosis jointly.
6. Let X₁, …, Xn be i.i.d. with expectation μ and variance 1. Find constants such that an(X̄n² − bn) converges in distribution if μ = 0 or μ ≠ 0.
7. Let X₁, …, Xn be a random sample from the Poisson distribution with mean θ. Find a variance-stabilizing transformation for the sample mean, and construct a confidence interval for θ based on this.
8. Let X₁, …, Xn be i.i.d. with expectation 1 and finite variance. Find the limit distribution of √n(X̄n⁻¹ − 1). If the random variables are sampled from a density f that is bounded and strictly positive in a neighborhood of zero, show that E|X̄n⁻¹| = ∞ for every n. (The density of X̄n is bounded away from zero in a neighborhood of zero for every n.)
4
Moment Estimators
For existence of the moment estimator, it is necessary that the vector ℙn f be in the range of the function e. If e is one-to-one, then the moment estimator is uniquely determined as θ̂n = e⁻¹(ℙn f) and

    √n(θ̂n − θ₀) = √n(e⁻¹(ℙn f) − e⁻¹(P_{θ₀} f)).

If ℙn f is asymptotically normal and e⁻¹ is differentiable, then the right side is asymptotically normal by the delta method.
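As a small numerical sketch (the exponential model and all constants are illustrative choices, not from the text): for an exponential distribution with rate θ we have e(θ) = E_θ X = 1/θ, so the moment estimator is θ̂n = e⁻¹(ℙn f) = 1/X̄, and the delta method gives √n(θ̂n − θ) ⇝ N(0, θ²).

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 400, 4000
# e(theta) = E X = 1/theta for the exponential(theta) distribution,
# so the moment estimator is theta_hat = 1/Xbar.
x = rng.exponential(scale=1 / theta, size=(reps, n))
theta_hat = 1 / x.mean(axis=1)
z = np.sqrt(n) * (theta_hat - theta)
print(z.std())   # delta method predicts standard deviation theta = 2
```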
The derivative of e⁻¹ at e(θ₀) is the inverse e′_{θ₀}⁻¹ of the derivative of e at θ₀. Because the function e⁻¹ is often not available explicitly, it is convenient to ascertain its differentiability from the differentiability of e. This is possible by the inverse function theorem. According to this theorem a map that is (continuously) differentiable throughout an open set with nonsingular derivatives is locally one-to-one, is of full rank, and has a differentiable inverse. Thus we obtain the following theorem.
4.1 Theorem. Suppose that e(θ) = P_θ f is one-to-one on an open set Θ ⊂ ℝᵏ and continuously differentiable at θ₀ with nonsingular derivative e′_{θ₀}. Moreover, assume that P_{θ₀}‖f‖² < ∞. Then moment estimators θ̂n exist with probability tending to one and satisfy

    √n(θ̂n − θ₀) ⇝ N(0, e′_{θ₀}⁻¹ Cov_{θ₀} f(X₁) (e′_{θ₀}⁻¹)ᵀ).
For completeness, the following two lemmas constitute, if combined, a proof of the inverse function theorem. If necessary the preceding theorem can be strengthened somewhat by applying the lemmas directly. Furthermore, the first lemma can be easily generalized to infinite-dimensional parameters, such as are used in the semiparametric models discussed in Chapter 25.
4.2 Lemma. Let Θ ⊂ ℝᵏ be arbitrary and let e : Θ ↦ ℝᵏ be one-to-one and differentiable at a point θ₀ with a nonsingular derivative. Then the inverse e⁻¹ (defined on the range of e) is differentiable at e(θ₀) provided it is continuous at e(θ₀).
Proof. Write η = e(θ₀) and Δh = e⁻¹(η + h) − e⁻¹(η). Because e⁻¹ is continuous at η, we have that Δh → 0 as h → 0. Thus

4.3 Lemma. Let Θ ⊂ ℝᵏ be open and let e : Θ ↦ ℝᵏ be continuously differentiable in a neighborhood of θ₀ with nonsingular derivative e′_{θ₀}. Then e maps a sufficiently small open neighborhood U of θ₀ onto an open set V and e⁻¹ : V ↦ U is well defined and continuous.
Proof. By assumption, e′_θ → A⁻¹ := e′_{θ₀} as θ → θ₀. Thus ‖I − A e′_θ‖ ≤ ½ for every θ in a sufficiently small neighborhood U of θ₀. Fix an arbitrary point η₁ = e(θ₁) from V = e(U) (where θ₁ ∈ U). Next find an ε > 0 such that ball(θ₁, ε) ⊂ U, and fix an arbitrary point η with ‖η − η₁‖ < δ := ½‖A‖⁻¹ε. It will be shown that η = e(θ) for some point θ ∈ ball(θ₁, ε). Hence every η ∈ ball(η₁, δ) has an original in ball(θ₁, ε). If e is one-to-one on U, so that the original is unique, then it follows that V is open and that e⁻¹ is continuous at η₁.
Define a function φ(θ) = θ + A(η − e(θ)). Because the norm of the derivative φ′_θ = I − A e′_θ is bounded by ½ throughout U, the map φ is a contraction on U. Furthermore, if ‖θ − θ₁‖ ≤ ε,

    ‖φ(θ) − θ₁‖ ≤ ‖φ(θ) − φ(θ₁)‖ + ‖φ(θ₁) − θ₁‖ ≤ ½‖θ − θ₁‖ + ‖A‖‖η − η₁‖ < ε.

Consequently, φ maps ball(θ₁, ε) into itself. Because φ is a contraction, it has a fixed point θ ∈ ball(θ₁, ε): a point with φ(θ) = θ. By definition of φ this satisfies e(θ) = η.
Any other θ̃ with e(θ̃) = η is also a fixed point of φ. In that case the difference θ̃ − θ = φ(θ̃) − φ(θ) has norm bounded by ½‖θ̃ − θ‖. This can only happen if θ̃ = θ. Hence e is one-to-one throughout U. ∎
4.4 Example. Let X₁, …, Xn be a random sample from the beta-distribution: The common density is equal to

    x ↦ (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1} 1_{0<x<1}.

The moment estimator for (α, β) is the solution of the system of equations

    X̄n = E_{α,β} X₁ = α/(α + β),
    n⁻¹ Σᵢ₌₁ⁿ Xᵢ² = E_{α,β} X₁² = (α + 1)α/((α + β + 1)(α + β)).

The right-hand side is a smooth and regular function of (α, β), and the equations can be solved explicitly. Hence, the moment estimators exist and are asymptotically normal. □
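Solving the two displayed equations gives closed-form estimators. The sketch below (the parameter values, sample size, and seed are arbitrary choices, not from the text) uses the identity m₁(1 − m₁)/(m₂ − m₁²) − 1 = α + β at the true moments, and recovers (α, β) from a simulated sample.

```python
import numpy as np

def beta_moment_estimator(x):
    # Solve  m1 = a/(a+b)  and  m2 = (a+1)a/((a+b+1)(a+b))  for (a, b).
    m1, m2 = x.mean(), (x ** 2).mean()
    v = m2 - m1 ** 2                    # empirical variance
    c = m1 * (1 - m1) / v - 1           # equals a + b at the true moments
    return m1 * c, (1 - m1) * c

rng = np.random.default_rng(3)
a_hat, b_hat = beta_moment_estimator(rng.beta(2.0, 5.0, size=100_000))
print(a_hat, b_hat)   # close to the true values (2, 5)
```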
Thus h and t = (t₁, …, t_k) are known functions on the sample space, and the family is given in its natural parametrization. The parameter set Θ must be contained in the natural parameter space for the family. This is the set of θ for which p_θ can define a probability density. If μ is the dominating measure, then this is the set

    {θ ∈ ℝᵏ : ∫ h(x) e^{θᵀt(x)} dμ(x) < ∞}.

It is a standard result (and not hard to see) that the natural parameter space is convex. It is usually open, in which case the family is called "regular." In any case, we assume that the true parameter is an inner point of Θ. Another standard result concerns the smoothness of the function θ ↦ c(θ), or rather of its inverse. (For a proof of the following lemma, see [100, p. 59] or [17, p. 39].)
The lemma implies that the log likelihood ℓ_θ(x) = log p_θ(x) can be differentiated (infinitely often) with respect to θ. The vector of partial derivatives (the score function) satisfies

    ℓ̇_θ(x) = (ċ/c)(θ) + t(x) = t(x) − E_θ t(X).

Here the second equality is an example of the general rule that score functions have zero means. It can formally be established by differentiating the identity ∫ p_θ dμ = 1 under the integral sign: Combine the lemma and the Leibniz rule to see that

    ∂/∂θᵢ ∫ p_θ dμ = ∫ (∂c(θ)/∂θᵢ) h(x) e^{θᵀt(x)} dμ(x) + ∫ c(θ) h(x) tᵢ(x) e^{θᵀt(x)} dμ(x).

The left side is zero and the equation can be rewritten as 0 = (ċ/c)(θ) + E_θ t(X).
It follows that the likelihood equations Σᵢ ℓ̇_θ(Xᵢ) = 0 reduce to the system of k equations

    ℙn t = E_θ t(X).

Thus, the maximum likelihood estimators are moment estimators. Their asymptotic properties depend on the function e(θ) = E_θ t(X), which is very well behaved on the interior of the natural parameter set. By differentiating E_θ t(X) under the expectation sign (which is justified by the lemma), we see that its derivative matrices are given by

    e′_θ = Cov_θ t(X).

The exponential family is said to be of full rank if no linear combination Σᵏᵢ₌₁ λᵢ tᵢ(X) is constant with probability 1; equivalently, if the covariance matrix of t(X) is nonsingular. In view of the preceding display, this ensures that the derivative e′_θ is strictly positive-definite throughout the interior of the natural parameter set. Then e is one-to-one, so that there exists at most one solution to the moment equations. (Cf. Problem 4.6.) In view of the expression for ℓ̇_θ, the matrix −n e′_θ is the second-derivative matrix (Hessian) of the log likelihood Σᵢ₌₁ⁿ ℓ_θ(Xᵢ). Thus, a solution to the moment equations must be a point of maximum of the log likelihood.
A solution can be shown to exist (within the natural parameter space) with probability 1 if the exponential family is "regular," or more generally "steep" (see [17]); it is then a point of absolute maximum of the likelihood. If the true parameter is in the interior of the parameter set, then a (unique) solution θ̂n exists with probability tending to 1 as n → ∞, in any case, by Theorem 4.1. Moreover, this theorem shows that the sequence √n(θ̂n − θ₀) is asymptotically normal with covariance matrix

    Cov_{θ₀} t(X)⁻¹.
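The identity e′_θ = Cov_θ t(X) can be illustrated numerically (the Poisson example and all constants below are illustrative, not from the text): for the Poisson family written with natural parameter θ = log(mean) and t(x) = x, one has e(θ) = E_θ X = e^θ, whose derivative e^θ equals the variance.

```python
import numpy as np

theta = 0.7
mu = np.exp(theta)                 # e(theta) = E_theta t(X) for t(x) = x
# Finite-difference derivative of e at theta
h = 1e-6
e_prime = (np.exp(theta + h) - np.exp(theta - h)) / (2 * h)
# Monte Carlo estimate of Var_theta t(X)
rng = np.random.default_rng(4)
var_t = rng.poisson(mu, size=1_000_000).var()
print(e_prime, var_t)              # both approximately exp(0.7)
```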
So far we have considered an exponential family in standard form. Many examples arise in the form

    p_θ(x) = d(θ) h(x) e^{Q(θ)ᵀ t(x)}.    (4.6)
4.6 Theorem. Let Θ ⊂ ℝᵏ be open and let Q : Θ ↦ ℝᵏ be one-to-one and continuously differentiable throughout Θ with nonsingular derivatives. Let the (exponential) family of densities p_θ be given by (4.6) and be of full rank. Then the likelihood equations have a unique solution θ̂n with probability tending to 1 and √n(θ̂n − θ) ⇝ N(0, I_θ⁻¹) for every θ.
Proof. According to the inverse function theorem, the range of Q is open and the inverse map Q⁻¹ is differentiable throughout this range. Thus, as discussed previously, the delta method ensures the asymptotic normality. It suffices to calculate the asymptotic covariance matrix. By the preceding discussion this is equal to

    (Q′_θ)⁻¹ Cov_θ t(X)⁻¹ ((Q′_θ)ᵀ)⁻¹.

By direct calculation, the score function for the model is equal to ℓ̇_θ(x) = (ḋ/d)(θ) + (Q′_θ)ᵀ t(x). As before, the score function has mean zero, so that this can be rewritten as ℓ̇_θ(x) = (Q′_θ)ᵀ(t(x) − E_θ t(X)). Thus, the Fisher information matrix equals I_θ = (Q′_θ)ᵀ Cov_θ t(X) Q′_θ. This is the inverse of the asymptotic covariance matrix given in the preceding display. ∎
Not all exponential families satisfy the conditions of the theorem. For instance, the normal N(θ, θ²) family is an example of a "curved exponential family." The map Q(θ) = (θ⁻², θ⁻¹) (with t(x) = (−x²/2, x)) does not fill up the natural parameter space of the normal location-scale family but only traces out a one-dimensional curve. In such cases the result of the theorem may still hold. In fact, the result is true for most models with "smooth parametrizations," as is seen in Chapter 5. However, the "easy" proof of this section is not valid.
PROBLEMS
1. Let X₁, …, Xn be a sample from the uniform distribution on [−θ, θ]. Find the moment estimator of θ based on the second sample moment n⁻¹ Σᵢ Xᵢ². Is it asymptotically normal? Can you think of an estimator for θ that converges faster to the parameter?
2. Let X₁, …, Xn be a sample from a density p_θ and f a function such that e(θ) = E_θ f(X) is differentiable with e′(θ) = E_θ ℓ̇_θ(X) f(X) for ℓ_θ = log p_θ.
(i) Show that the asymptotic variance of the moment estimator based on f equals var_θ(f)/cov_θ(f, ℓ̇_θ)².
(ii) Show that this is bigger than I_θ⁻¹ with equality for all θ if and only if the moment estimator is the maximum likelihood estimator.
(iii) Show that the latter happens only for exponential family members.
3. To what extent does the result of Theorem 4.1 require that the observations are i.i.d.?
4. Let the observations be a sample of size n from the N(μ, σ²) distribution. Calculate the Fisher information matrix for the parameter θ = (μ, σ²) and its inverse. Check directly that the maximum likelihood estimator is asymptotically normal with zero mean and covariance matrix I_θ⁻¹.
5. Establish the formula e′_θ = Cov_θ t(X) by differentiating e(θ) = E_θ t(X) under the integral sign. (Differentiating under the integral sign is justified by Lemma 4.5, because E_θ t(X) is the first derivative of log c(θ)⁻¹.)
6. Suppose a function e : Θ ↦ ℝᵏ is defined and continuously differentiable on a convex subset Θ ⊂ ℝᵏ with strictly positive-definite derivative matrix. Then e has at most one zero in Θ. (Consider the function g(λ) = (θ₁ − θ₂)ᵀ e(λθ₁ + (1 − λ)θ₂) for given θ₁ ≠ θ₂ and 0 ≤ λ ≤ 1. If g(0) = g(1) = 0, then there exists a point λ₀ with g′(λ₀) = 0 by the mean-value theorem.)
5
M- and Z-Estimators
5.1 Introduction
Suppose that we are interested in a parameter (or "functional") θ attached to the distribution of observations X₁, …, Xn. A popular method for finding an estimator θ̂n = θ̂n(X₁, …, Xn) is to maximize a criterion function of the type

    θ ↦ Mn(θ) = n⁻¹ Σᵢ₌₁ⁿ m_θ(Xᵢ).    (5.1)

Here m_θ : 𝒳 ↦ ℝ̄ are known functions. An estimator maximizing Mn(θ) over Θ is called an M-estimator. In this chapter we investigate the asymptotic behavior of sequences of M-estimators.
Often the maximizing value is sought by setting a derivative (or the set of partial derivatives in the multidimensional case) equal to zero. Therefore, the name M-estimator is also used for estimators satisfying systems of equations of the type

    Ψn(θ) = n⁻¹ Σᵢ₌₁ⁿ ψ_θ(Xᵢ) = 0.    (5.2)

Here ψ_θ are known vector-valued maps. For instance, if θ is k-dimensional, then ψ_θ typically has k coordinate functions ψ_θ = (ψ_θ,1, …, ψ_θ,k), and (5.2) is shorthand for the system of equations

    Σᵢ₌₁ⁿ ψ_θ,j(Xᵢ) = 0,    j = 1, 2, …, k.

Even though in many examples ψ_θ,j is the jth partial derivative of some function m_θ, this is irrelevant for the following. Equations, such as (5.2), defining an estimator are called estimating equations and need not correspond to a maximization problem. In the latter case it is probably better to call the corresponding estimators Z-estimators (for zero), but the use of the name M-estimator is widespread.
Sometimes the maximum of the criterion function Mn is not taken or the estimating equation does not have an exact solution. Then it is natural to use as estimator a value that almost maximizes the criterion function or is a near zero. This yields approximate M-estimators or Z-estimators. Estimators that are sufficiently close to being a point of maximum or a zero often have the same asymptotic behavior.
An operator notation for taking expectations simplifies the formulas in this chapter. We write P for the marginal law of the observations X₁, …, Xn, which we assume to be identically distributed. Furthermore, we write Pf for the expectation Ef(X) = ∫ f dP and abbreviate the average n⁻¹ Σᵢ₌₁ⁿ f(Xᵢ) to ℙn f. Thus ℙn is the empirical distribution: the (random) discrete distribution that puts mass 1/n at each of the observations X₁, …, Xn. The criterion functions now take the forms

    Mn(θ) = ℙn m_θ,    Ψn(θ) = ℙn ψ_θ.

We also abbreviate the centered sums n⁻¹ᐟ² Σᵢ₌₁ⁿ (f(Xᵢ) − Pf) to 𝔾n f, the empirical process at f.
5.3 Example (Maximum likelihood estimators). Suppose X₁, …, Xn have a common density p_θ. Then the maximum likelihood estimator maximizes the likelihood Πᵢ₌₁ⁿ p_θ(Xᵢ), or equivalently the log likelihood

    θ ↦ Σᵢ₌₁ⁿ log p_θ(Xᵢ).

(Define log 0 = −∞.) However, this function is not smooth in θ and there exists no natural version of (5.2). Thus, in this example the definition as the location of a maximum is more fundamental than the definition as a zero. □
respectively.† Both estimating equations involve functions of the form ψ(x − θ) for a function ψ that is monotone and odd around zero. It seems reasonable to study estimators that solve a general equation of the type

    Σᵢ₌₁ⁿ ψ(Xᵢ − θ) = 0.

The Huber estimators correspond to the functions

    ψ(x) = [x]ᵏ₋ₖ = { −k  if x < −k,
                      x   if |x| ≤ k,
                      k   if x ≥ k.
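A minimal computational sketch (the cutoff k = 1.345 and the toy data are arbitrary choices, not from the text): because θ ↦ Σ ψ(Xᵢ − θ) is nonincreasing for this ψ, the Huber location estimate can be found by bisection, and a single gross outlier moves it far less than it moves the mean.

```python
import numpy as np

def huber_psi(x, k):
    return np.clip(x, -k, k)              # psi(x) = max(-k, min(x, k))

def huber_location(x, k=1.345, tol=1e-10):
    # Z-estimator: solve sum psi(x_i - theta) = 0 by bisection;
    # theta -> sum psi(x_i - theta) is nonincreasing.
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if huber_psi(x - mid, k).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = np.array([0.1, -0.4, 0.3, 0.2, 50.0])   # one gross outlier
print(x.mean(), huber_location(x))          # mean is dragged to ~10; Huber stays near the bulk
```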
The Huber estimators were motivated by studies in robust statistics concerning the influence of extreme data points on the estimate. The exact values of the largest and smallest observations have very little influence on the value of the median, but a proportional influence on the mean. Therefore, the sample mean is considered nonrobust against outliers. If the extreme observations are thought to be rather unreliable, it is certainly an advantage to limit their influence on the estimate, but the median may be too successful in this respect. Depending on the value of k, the Huber estimators behave more like the mean (large k) or more like the median (small k) and thus bridge the gap between the nonrobust mean and very robust median.
Another example is the quantiles. A pth sample quantile is roughly a point θ such that pn observations are less than θ and (1 − p)n observations are greater than θ. The precise definition has to take into account that the value pn may not be an integer. One possibility is to call a pth sample quantile any θ that solves the inequalities

    −1 < Σᵢ₌₁ⁿ ((1 − p) 1{Xᵢ < θ} − p 1{Xᵢ > θ}) < 1.    (5.5)

† The sign-function is defined as sign(x) = −1, 0, 1 if x < 0, x = 0 or x > 0, respectively. Also x⁺ means x ∨ 0 = max(x, 0). For the median we assume that there are no tied observations (in the middle).
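The defining inequalities (5.5) can be verified directly for an order statistic; the sketch below (the sample, the value of p, and the choice of the ⌈pn⌉th order statistic are arbitrary illustrative choices, not from the text) checks them numerically.

```python
import numpy as np

rng = np.random.default_rng(5)
x, p = rng.standard_normal(15), 0.8
n = len(x)
theta = np.sort(x)[int(np.ceil(p * n)) - 1]   # the ceil(pn)-th order statistic
s = ((1 - p) * (x < theta) - p * (x > theta)).sum()
print(-1 < s < 1)   # theta solves the inequalities (5.5)
```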
Figure 5.1. The functions θ ↦ Ψn(θ) for the 80% quantile and the Huber estimator for samples of size 15 from the gamma(8, 1) and standard normal distribution, respectively.
All the estimators considered so far can also be defined as a solution of a maximization problem. Mean, median, Huber estimators, and quantiles minimize Σᵢ₌₁ⁿ m(Xᵢ − θ) for m equal to x², |x|, x² 1{|x| ≤ k} + (2k|x| − k²) 1{|x| > k} and (1 − p)x⁻ + px⁺, respectively. □
5.2 Consistency
If the estimator θ̂n is used to estimate the parameter θ, then it is certainly desirable that the sequence θ̂n converges in probability to θ. If this is the case for every possible value of the parameter, then the sequence of estimators is called asymptotically consistent. For instance, the sample mean X̄n is asymptotically consistent for the population mean EX (provided the population mean exists). This follows from the law of large numbers. Not surprisingly this extends to many other sample characteristics. For instance, the sample median is consistent for the population median, whenever this is well defined. What can be said about M-estimators in general? We shall assume that the set of possible parameters is a metric space, and write d for the metric. Then we wish to prove that d(θ̂n, θ₀) →P 0 for some value θ₀, which depends on the underlying distribution of the observations.
Suppose that the M-estimator θ̂n maximizes the random criterion function

    θ ↦ Mn(θ).

Clearly, the "asymptotic value" of θ̂n depends on the asymptotic behavior of the functions Mn. Under suitable normalization there typically exists a deterministic "asymptotic criterion function" θ ↦ M(θ) such that

    Mn(θ) →P M(θ),    every θ.    (5.6)

For instance, if Mn(θ) is an average of the form ℙn m_θ as in (5.1), then the law of large numbers gives this result with M(θ) = Pm_θ, provided this expectation exists.
It seems reasonable to expect that the maximizer θ̂n of Mn converges to the maximizing value θ₀ of M. This is what we wish to prove in this section, and we say that θ̂n is (asymptotically) consistent for θ₀. However, the convergence (5.6) is too weak to ensure
Figure 5.2. Example of a function whose point of maximum is not well separated.
the convergence of θ̂n. Because the value θ̂n depends on the whole function θ ↦ Mn(θ), an appropriate form of "functional convergence" of Mn to M is needed, strengthening the pointwise convergence (5.6). There are several possibilities. In this section we first discuss an approach based on uniform convergence of the criterion functions. Admittedly, the assumption of uniform convergence is too strong for some applications and it is sometimes not easy to verify, but the approach illustrates the general idea.
Given an arbitrary random function θ ↦ Mn(θ), consider estimators θ̂n that nearly maximize Mn, that is,

    Mn(θ̂n) ≥ sup_θ Mn(θ) − oP(1).

Then certainly Mn(θ̂n) ≥ Mn(θ₀) − oP(1), which turns out to be enough to ensure consistency. It is assumed that the sequence Mn converges to a nonrandom map M : Θ ↦ ℝ. Condition (5.8) of the following theorem requires that this map attains its maximum at a unique point θ₀, and that only parameters close to θ₀ may yield a value of M(θ) close to the maximum value M(θ₀). Thus, θ₀ should be a well-separated point of maximum of M. Figure 5.2 shows a function that does not satisfy this requirement.
5.7 Theorem. Let Mn be random functions and let M be a fixed function of θ such that for every ε > 0†

    sup_{θ∈Θ} |Mn(θ) − M(θ)| →P 0,
    sup_{θ: d(θ,θ₀)≥ε} M(θ) < M(θ₀).    (5.8)

Then any sequence of estimators θ̂n with Mn(θ̂n) ≥ Mn(θ₀) − oP(1) converges in probability to θ₀.

† Some of the expressions in this display may be nonmeasurable. Then the probability statements are understood in terms of outer measure.
Proof. By the property of θ̂n, we have Mn(θ̂n) ≥ Mn(θ₀) − oP(1). Because the uniform convergence of Mn to M implies the convergence Mn(θ₀) →P M(θ₀), the right side equals M(θ₀) − oP(1). It follows that Mn(θ̂n) ≥ M(θ₀) − oP(1), whence

    M(θ̂n) ≥ M(θ₀) − oP(1),

by the first part of assumption (5.8). By the second part of assumption (5.8), there exists for every ε > 0 a number η > 0 such that M(θ) < M(θ₀) − η for every θ with d(θ, θ₀) ≥ ε. Thus, the event {d(θ̂n, θ₀) ≥ ε} is contained in the event {M(θ̂n) < M(θ₀) − η}. The probability of the latter event converges to 0, in view of the preceding display. ∎
5.9 Theorem. Let Ψn be random vector-valued functions and let Ψ be a fixed vector-valued function of θ such that for every ε > 0

    sup_{θ∈Θ} ‖Ψn(θ) − Ψ(θ)‖ →P 0,
    inf_{θ: d(θ,θ₀)≥ε} ‖Ψ(θ)‖ > 0 = ‖Ψ(θ₀)‖.

Then any sequence of estimators θ̂n such that Ψn(θ̂n) = oP(1) converges in probability to θ₀.

Proof. This follows from the preceding theorem, on applying it to the functions Mn(θ) = −‖Ψn(θ)‖ and M(θ) = −‖Ψ(θ)‖. ∎
The conditions of both theorems consist of a stochastic and a deterministic part. The deterministic condition can be verified by drawing a picture of the graph of the function. A helpful general observation is that, for a compact set Θ and continuous function M or Ψ, uniqueness of θ₀ as a maximizer or zero implies the condition. (See Problem 5.27.)
For Mn(θ) or Ψn(θ) equal to averages as in (5.1) or (5.2), the uniform convergence required by the stochastic condition is equivalent to the set of functions {m_θ : θ ∈ Θ} or {ψ_θ,j : θ ∈ Θ, j = 1, …, k} being Glivenko-Cantelli. Glivenko-Cantelli classes of functions are discussed in Chapter 19. One simple set of sufficient conditions is that Θ be compact, that the functions θ ↦ m_θ(x) or θ ↦ ψ_θ(x) are continuous for every x, and that they are dominated by an integrable function.
Uniform convergence of the criterion functions as in the preceding theorems is much stronger than needed for consistency. The following lemma is one of the many possibilities to replace the uniformity by other assumptions.
5.10 Lemma. Let Θ be a subset of the real line and let Ψn be random functions and Ψ a fixed function of θ such that Ψn(θ) → Ψ(θ) in probability for every θ. Assume that each map θ ↦ Ψn(θ) is continuous and has exactly one zero θ̂n, or is nondecreasing with Ψn(θ̂n) = oP(1). Let θ₀ be a point such that Ψ(θ₀ − ε) < 0 < Ψ(θ₀ + ε) for every ε > 0. Then θ̂n →P θ₀.
Proof. If the map θ ↦ Ψn(θ) is continuous and has a unique zero at θ̂n, then

    P(Ψn(θ₀ − ε) < 0 < Ψn(θ₀ + ε)) ≤ P(θ₀ − ε < θ̂n < θ₀ + ε).

The left side converges to one, because Ψn(θ₀ ± ε) → Ψ(θ₀ ± ε) in probability. Thus the right side converges to one as well, and θ̂n is consistent.
If the map θ ↦ Ψn(θ) is nondecreasing and θ̂n is a zero, then the same argument is valid. More generally, if θ ↦ Ψn(θ) is nondecreasing, then Ψn(θ₀ − ε) < −η and θ̂n ≤ θ₀ − ε imply Ψn(θ̂n) < −η, which has probability tending to zero for every η > 0 if θ̂n is a near zero. This and a similar argument applied to the right tail show that, for every ε, η > 0,

    P(Ψn(θ₀ − ε) < −η, Ψn(θ₀ + ε) > η) ≤ P(θ₀ − ε < θ̂n < θ₀ + ε) + o(1).

For 2η equal to the smallest of the numbers −Ψ(θ₀ − ε) and Ψ(θ₀ + ε) the left side still converges to one. ∎
for every fixed θ. Thus, we expect that the sample median converges in probability to a point θ₀ such that P(X > θ₀) = P(X < θ₀): a population median.
This can be proved rigorously by applying Theorem 5.7 or 5.9. However, even though the conditions of the theorems are satisfied, they are not entirely trivial to verify. (The uniform convergence of Ψn to Ψ is proved essentially in Theorem 19.1.) In this case it is easier to apply Lemma 5.10. Because the functions θ ↦ Ψn(θ) are nonincreasing, it follows that θ̂n →P θ₀ provided that Ψ(θ₀ − ε) > 0 > Ψ(θ₀ + ε) for every ε > 0. This is the case if the population median is unique: P(X < θ₀ − ε) < ½ < P(X < θ₀ + ε) for all ε > 0. □
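The consistency of the sample median can be seen in a quick simulation (the exponential model, sample sizes, and replication count are illustrative choices, not from the text); the population median of the standard exponential distribution is log 2.

```python
import numpy as np

rng = np.random.default_rng(6)
errors = {}
for n in (100, 10_000):
    med = np.median(rng.exponential(size=(1000, n)), axis=1)
    errors[n] = np.abs(med - np.log(2)).mean()
    print(n, errors[n])   # mean absolute error shrinks as n grows
```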
In this subsection we consider an alternative set of conditions under which the maximizer θ̂n of the process Mn converges in probability to a point of maximum θ₀ of the function M. This "classical" approach to consistency was taken by Wald in 1949 for maximum likelihood estimators. It works best if the parameter set Θ is compact. If not, then the argument must be complemented by a proof that the estimators eventually are in a compact set, or be applied to a suitable compactification of the parameter set.
Assume that the map θ ↦ m_θ(x) is upper-semicontinuous for almost all x: For every θ,

    lim sup_{θ′→θ} m_{θ′}(x) ≤ m_θ(x),    almost surely.    (5.12)
Proof. If the function θ ↦ Pm_θ is identically −∞, then Θ₀ = Θ, and there is nothing to prove. Hence, we may assume that there exists θ₀ ∈ Θ₀ such that Pm_{θ₀} > −∞, whence P|m_{θ₀}| < ∞ by (5.13).
Fix some θ and let Uₗ ↓ θ be a decreasing sequence of open balls around θ of diameter converging to zero. Write m_U(x) for sup_{θ∈U} m_θ(x). The sequence m_{Uₗ} is decreasing and greater than m_θ for every l. Combination with (5.12) yields that m_{Uₗ} ↓ m_θ almost surely. In view of (5.13), we can apply the monotone convergence theorem and obtain that Pm_{Uₗ} ↓ Pm_θ (which may be −∞).
For θ ∉ Θ₀, we have Pm_θ < Pm_{θ₀}. Combine this with the preceding paragraph to see that for every θ ∉ Θ₀ there exists an open ball U_θ around θ with Pm_{U_θ} < Pm_{θ₀}. The set B = {θ ∈ K : d(θ, Θ₀) ≥ ε} is compact and is covered by the balls {U_θ : θ ∈ B}. Let U_{θ1}, …, U_{θp} be a finite subcover. Then, by the law of large numbers,

    sup_{θ∈B} ℙn m_θ ≤ sup_{j=1,…,p} ℙn m_{U_{θj}} →as sup_{j=1,…,p} P m_{U_{θj}} < Pm_{θ₀}.

If θ̂n ∈ B, then sup_{θ∈B} ℙn m_θ is at least ℙn m_{θ̂n}, which by definition of θ̂n is at least ℙn m_{θ₀} − oP(1) = Pm_{θ₀} − oP(1), by the law of large numbers. Thus

    {θ̂n ∈ B} ⊂ {sup_{θ∈B} ℙn m_θ ≥ Pm_{θ₀} − oP(1)}.

In view of the preceding display the probability of the event on the right side converges to zero as n → ∞. ∎
Even in simple examples, condition (5.13) can be restrictive. One possibility for relaxation is to divide the n observations in groups of approximately the same size. Then (5.13) may be replaced by, for some k and every k ≤ l < 2k,

    P^l sup_{θ∈U} Σᵢ₌₁ˡ m_θ(xᵢ) < ∞.    (5.15)

Surprisingly enough, this simple device may help. For instance, under condition (5.13) the preceding theorem does not apply to yield the asymptotic consistency of the maximum likelihood estimator of (μ, σ) based on a random sample from the N(μ, σ²) distribution (unless we restrict the parameter set for σ), but under the relaxed condition it does (with k = 2). (See Problem 5.25.) The proof of the theorem under (5.15) remains almost the same. Divide the n observations in groups of k observations and, possibly, a remainder group of l observations; next, apply the law of large numbers to the approximately n/k group sums.
5.16 Example (Cauchy likelihood). The maximum likelihood estimator for θ based on a random sample from the Cauchy distribution with location θ maximizes the map θ ↦ ℙn m_θ for

    m_θ(x) = −log(1 + (x − θ)²),    m_{±∞}(x) = −∞.

These infinite values should not worry us: They are permitted in the preceding theorem. Moreover, because we maximize θ ↦ ℙn m_θ, they ensure that the estimator θ̂n never takes the values ±∞, which is excellent.
We apply Wald's theorem with Θ = ℝ̄, equipped with, for instance, the metric d(θ₁, θ₂) = |arctan θ₁ − arctan θ₂|. Because the functions θ ↦ m_θ(x) are continuous and nonpositive, the conditions are trivially satisfied. Thus, taking K = ℝ̄, we obtain that d(θ̂n, Θ₀) →P 0. This conclusion is valid for any underlying distribution P of the observations for which the set Θ₀ is nonempty, because so far we have used the Cauchy likelihood only to motivate m_θ.
To conclude that the maximum likelihood estimator in a Cauchy location model is consistent, it suffices to show that Θ₀ = {θ₀} if P is the Cauchy distribution with center θ₀. This follows most easily from the identifiability of this model, as discussed in Lemma 5.35. □
5.17 Example (Current status data). Suppose that a "death" that occurs at time T is only observed to have taken place or not at a known "check-up time" C. We model the observations as a random sample X₁, …, Xn from the distribution of X = (C, 1{T ≤ C}), where T and C are independent random variables with completely unknown distribution functions F and G, respectively. The purpose is to estimate the "survival distribution" 1 − F.
If G has a density g with respect to Lebesgue measure λ, then X = (C, Δ) has a density

    p_F(c, δ) = g(c) F(c)^δ (1 − F(c))^{1−δ}

with respect to the product of λ and counting measure on the set {0, 1}. A maximum likelihood estimator for F can be defined as the distribution function F̂ that maximizes the likelihood

    F ↦ Πᵢ₌₁ⁿ (Δᵢ F(Cᵢ) + (1 − Δᵢ)(1 − F)(Cᵢ))

over all distribution functions on [0, ∞). Because this only involves the numbers F(C₁), …, F(Cₙ), the maximizer of this expression is not unique, but some thought shows that there is a unique maximizer F̂ that concentrates on (a subset of) the observation times C₁, …, Cₙ. This is commonly used as an estimator.
We can show the consistency of this estimator by Wald's theorem. By its definition $\hat F$ maximizes the function $F \mapsto \mathbb{P}_n \log p_F$, but the consistency proof proceeds in a smoother way by setting
$$m_F = \log \frac{p_F}{p_{(F + F_0)/2}} = \log \frac{2p_F}{p_F + p_{F_0}}.$$
Because the likelihood is bigger at $\hat F$ than it is at $\tfrac12\hat F + \tfrac12 F_0$, it follows that $\mathbb{P}_n m_{\hat F} \ge 0 = \mathbb{P}_n m_{F_0}$. (It is not claimed that $\hat F$ maximizes $F \mapsto \mathbb{P}_n m_F$; this is not true.)
Condition (5.13) is satisfied trivially, because $m_F \le \log 2$ for every $F$. We can equip the set of all distribution functions with the topology of weak convergence. If we restrict the parameter set to distributions on a compact interval $[0, \tau]$, then the parameter set is compact by Prohorov's theorem.† The map $F \mapsto m_F(c, \delta)$ is continuous at $F$, relative to the weak topology, for every $(c, \delta)$ such that $c$ is a continuity point of $F$. Under the assumption that $G$ has a density, this includes almost every $(c, \delta)$, for every given $F$. Thus, Theorem 5.14 shows that $\hat F_n$ converges under $F_0$ in probability to the set $\mathcal{F}_0$ of all distribution functions that maximize the map $F \mapsto P_{F_0} m_F$, provided $F_0 \in \mathcal{F}_0$. This set always contains $F_0$, but it does not necessarily reduce to this single point. For instance, if the density $g$ is zero on an interval $[a, b]$, then we receive no information concerning deaths inside the interval $[a, b]$, and there can be no hope that $\hat F_n$ converges to $F_0$ on $[a, b]$. In that case, $F_0$ is not "identifiable" on the interval $[a, b]$.
We shall show that $\mathcal{F}_0$ is the set of all $F$ such that $F = F_0$ almost everywhere according to $G$. Thus, the sequence $\hat F_n$ is consistent for $F_0$ "on the set of time points that have a positive probability of occurring."
Because $p_F = p_{F_0}$ under $P_{F_0}$ if and only if $F = F_0$ almost everywhere according to $G$, it suffices to prove that, for every pair of probability densities $p$ and $p_0$, $P_0 \log 2p/(p + p_0) \le 0$, with equality if and only if $p = p_0$ almost surely under $P_0$. If $P_0(p = 0) > 0$, then $\log 2p/(p + p_0) = -\infty$ with positive probability and hence, because the function is bounded above, $P_0 \log 2p/(p + p_0) = -\infty$. Thus we may assume that $P_0(p = 0) = 0$. Then, with $f(u) = -u\log(\tfrac12 + \tfrac12 u)$,
$$P_0 \log \frac{2p}{p + p_0} = P f\Bigl(\frac{p_0}{p}\Bigr) \le f\Bigl(P\frac{p_0}{p}\Bigr) = f(1) = 0,$$
by Jensen's inequality and the concavity of $f$, with equality only if $p_0/p = 1$ almost surely under $P$, and then also under $P_0$. This completes the proof. □

† Alternatively, consider all probability distributions on the compactification $[0, \infty]$, again equipped with the weak topology.

5.3 Asymptotic Normality
$$\sqrt{n}(\hat\theta_n - \theta_0) = \frac{-\sqrt{n}\,\Psi_n(\theta_0)}{\dot\Psi_n(\theta_0) + \tfrac12(\hat\theta_n - \theta_0)\,\ddot\Psi_n(\tilde\theta_n)}.$$
If $P\psi_{\theta_0}^2$ is finite, then the numerator $-\sqrt{n}\,\Psi_n(\theta_0) = -n^{-1/2}\sum \psi_{\theta_0}(X_i)$ is asymptotically normal by the central limit theorem. The asymptotic mean and variance are $P\psi_{\theta_0} = \Psi(\theta_0) = 0$ and $P\psi_{\theta_0}^2$, respectively. Next consider the denominator. The first term $\dot\Psi_n(\theta_0)$ is an average and can be analyzed by the law of large numbers: $\dot\Psi_n(\theta_0) \overset{P}{\to} P\dot\psi_{\theta_0}$, provided the expectation exists. The second term in the denominator is a product of $\hat\theta_n - \theta_0 = o_P(1)$ and $\ddot\Psi_n(\tilde\theta_n)$, and converges in probability to zero under the reasonable condition that $\ddot\Psi_n(\tilde\theta_n)$ (which is also an average) is $O_P(1)$. Together with Slutsky's lemma, these observations yield
$$\sqrt{n}(\hat\theta_n - \theta_0) \rightsquigarrow N\Bigl(0, \frac{P\psi_{\theta_0}^2}{(P\dot\psi_{\theta_0})^2}\Bigr). \tag{5.19}$$
The preceding derivation can be made rigorous by imposing appropriate conditions, often called "regularity conditions." The only real challenge is to show that $\ddot\Psi_n(\tilde\theta_n) = O_P(1)$ (see Problem 5.20 or section 5.6).
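By way of illustration (not from the text): for Huber's $\psi$ — the identity truncated to $[-k, k]$, with the conventional cut-off $k = 1.345$, an assumption of this sketch — both $P\psi^2$ and $P\psi'$ at the standard normal have closed forms, so the asymptotic variance in (5.19) can be checked against a small simulation:

```python
import math, random

K = 1.345  # Huber cut-off; 1.345 is a conventional choice, not from the text

def psi(x, k=K):                       # Huber's psi: identity, truncated at +-k
    return max(-k, min(k, x))

def z_estimate(xs, k=K, tol=1e-9):
    """Solve (1/n) sum psi(x_i - theta) = 0 by bisection; the left-hand
    side is nonincreasing in theta, positive at min(xs), negative at max(xs)."""
    lo, hi = min(xs), max(xs)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(psi(x - mid, k) for x in xs) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# closed forms of P psi' and P psi^2 under the standard normal
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
P_dpsi = 2 * Phi(K) - 1                     # = P(|X| < k)
P_psi2 = 2 * Phi(K) - 1 - 2 * K * phi(K) + 2 * K ** 2 * (1 - Phi(K))
avar = P_psi2 / P_dpsi ** 2                 # asymptotic variance in (5.19)

# Monte Carlo check: n * var(theta_hat) should be close to avar
random.seed(1)
n, reps = 100, 400
ests = [z_estimate([random.gauss(0, 1) for _ in range(n)]) for _ in range(reps)]
m = sum(ests) / reps
mc = n * sum((e - m) ** 2 for e in ests) / (reps - 1)
print(avar, mc)
```

The closed-form value is about 1.053, i.e. roughly a 5% efficiency loss at the normal relative to the sample mean.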
The derivation can be extended to higher-dimensional parameters. For a $k$-dimensional parameter, we use $k$ estimating equations. Then the criterion functions are maps $\Psi_n : \mathbb{R}^k \mapsto \mathbb{R}^k$, and the derivatives $\dot\Psi_n(\theta_0)$ are $(k \times k)$-matrices that converge to the $(k \times k)$ matrix $P\dot\psi_{\theta_0}$ with entries $P\,\partial/\partial\theta_j\,\psi_{\theta_0,i}$. The final statement becomes
$$\sqrt{n}(\hat\theta_n - \theta_0) = -\bigl(P\dot\psi_{\theta_0}\bigr)^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \psi_{\theta_0}(X_i) + o_P(1). \tag{5.20}$$
5.21 Theorem. For each $\theta$ in an open subset of Euclidean space, let $x \mapsto \psi_\theta(x)$ be a measurable vector-valued function such that, for every $\theta_1$ and $\theta_2$ in a neighborhood of $\theta_0$ and a measurable function $\dot\psi$ with $P\dot\psi^2 < \infty$,
$$\|\psi_{\theta_1}(x) - \psi_{\theta_2}(x)\| \le \dot\psi(x)\,\|\theta_1 - \theta_2\|.$$
Assume that $P\|\psi_{\theta_0}\|^2 < \infty$ and that the map $\theta \mapsto P\psi_\theta$ is differentiable at a zero $\theta_0$, with nonsingular derivative matrix $V_{\theta_0}$. If $\mathbb{P}_n\psi_{\hat\theta_n} = o_P(n^{-1/2})$, and $\hat\theta_n \overset{P}{\to} \theta_0$, then
$$\sqrt{n}(\hat\theta_n - \theta_0) = -V_{\theta_0}^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \psi_{\theta_0}(X_i) + o_P(1).$$
In particular, the sequence $\sqrt{n}(\hat\theta_n - \theta_0)$ is asymptotically normal with mean zero and covariance matrix $V_{\theta_0}^{-1}\, P\psi_{\theta_0}\psi_{\theta_0}^T\, (V_{\theta_0}^{-1})^T$.
Proof. For a fixed measurable function $f$, we abbreviate $\sqrt{n}(\mathbb{P}_n - P)f$ to $\mathbb{G}_n f$, the empirical process evaluated at $f$. The consistency of $\hat\theta_n$ and the Lipschitz condition on the maps $\theta \mapsto \psi_\theta$ imply that
$$\mathbb{G}_n\psi_{\hat\theta_n} - \mathbb{G}_n\psi_{\theta_0} \overset{P}{\to} 0. \tag{5.22}$$
For a nonrandom sequence $\hat\theta_n$ this is immediate from the fact that the means of these variables are zero, while the variances are bounded by $P\|\psi_{\hat\theta_n} - \psi_{\theta_0}\|^2 \le P\dot\psi^2\,\|\hat\theta_n - \theta_0\|^2$ and hence converge to zero. A proof for estimators $\hat\theta_n$ under the present mild conditions takes more effort. The appropriate tools are developed in Chapter 19. In Example 19.7 it is seen that the functions $\psi_\theta$ form a Donsker class. Next, (5.22) follows from Lemma 19.24. Here we accept the convergence as a fact and give the remainder of the proof.
By the definitions of $\hat\theta_n$ and $\theta_0$, we can rewrite $\mathbb{G}_n\psi_{\hat\theta_n}$ as $\sqrt{n}\,P(\psi_{\theta_0} - \psi_{\hat\theta_n}) + o_P(1)$. Combining this with the delta method (or Lemma 2.12) and the differentiability of the map $\theta \mapsto P\psi_\theta$, we find that
map, the inverse of the remainder term still converges to zero in probability. •
It is this expansion rather than the differentiability that is needed in the following theorem.
5.23 Theorem. For each $\theta$ in an open subset of Euclidean space, let $x \mapsto m_\theta(x)$ be a measurable function such that $\theta \mapsto m_\theta(x)$ is differentiable at $\theta_0$ for $P$-almost every $x$† with derivative $\dot m_{\theta_0}(x)$ and such that, for every $\theta_1$ and $\theta_2$ in a neighborhood of $\theta_0$ and a measurable function $\dot m$ with $P\dot m^2 < \infty$,
$$|m_{\theta_1}(x) - m_{\theta_2}(x)| \le \dot m(x)\,\|\theta_1 - \theta_2\|.$$
Furthermore, assume that the map $\theta \mapsto Pm_\theta$ admits a second-order Taylor expansion at a point of maximum $\theta_0$ with nonsingular symmetric second derivative matrix $V_{\theta_0}$. If $\mathbb{P}_n m_{\hat\theta_n} \ge \sup_\theta \mathbb{P}_n m_\theta - o_P(n^{-1})$ and $\hat\theta_n \overset{P}{\to} \theta_0$, then
$$\sqrt{n}(\hat\theta_n - \theta_0) = -V_{\theta_0}^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \dot m_{\theta_0}(X_i) + o_P(1).$$
In particular, the sequence $\sqrt{n}(\hat\theta_n - \theta_0)$ is asymptotically normal with mean zero and covariance matrix $V_{\theta_0}^{-1}\, P\dot m_{\theta_0}\dot m_{\theta_0}^T\, V_{\theta_0}^{-1}$.
*Proof. The Lipschitz property and the differentiability of the maps $\theta \mapsto m_\theta$ imply that, for every random sequence $\hat h_n$ that is bounded in probability,
$$n\mathbb{P}_n\bigl(m_{\theta_0 + \hat h_n/\sqrt{n}} - m_{\theta_0}\bigr) = \tfrac12\,\hat h_n^T V_{\theta_0}\hat h_n + \hat h_n^T\,\mathbb{G}_n\dot m_{\theta_0} + o_P(1).$$
The assertions of the preceding theorems must be in agreement with each other and also with the informal derivation leading to (5.20). If $\theta \mapsto m_\theta(x)$ is differentiable, then a maximizer of $\theta \mapsto \mathbb{P}_n m_\theta$ typically solves $\mathbb{P}_n\psi_\theta = 0$ for $\psi_\theta = \dot m_\theta$. Then the theorems and (5.20) are in agreement provided that
$$V_\theta = \frac{\partial^2}{\partial\theta^2} Pm_\theta = \frac{\partial}{\partial\theta} P\psi_\theta = P\dot\psi_\theta = P\ddot m_\theta.$$
This involves changing the order of differentiation (with respect to $\theta$) and integration (with respect to $x$), and is usually permitted. However, for instance, the second derivative of $Pm_\theta$ may exist without $\theta \mapsto m_\theta(x)$ being differentiable for all $x$, as is seen in the following example.
5.24 Example (Median). The sample median maximizes the criterion function $\theta \mapsto -\sum_{i=1}^n |X_i - \theta|$. Assume that the distribution function $F$ of the observations is differentiable at its median $\theta_0$ with positive derivative $f(\theta_0)$. Then the sample median is asymptotically normal.
This follows from Theorem 5.23 applied with $m_\theta(x) = |x| - |x - \theta|$ (which differs from the summands of the criterion only by the term $|x|$, free of $\theta$). As a consequence of the triangle inequality, this function satisfies the Lipschitz condition with $\dot m(x) = 1$. Furthermore, the map $\theta \mapsto m_\theta(x)$ is differentiable at $\theta_0$ except if $x = \theta_0$, with $\dot m_{\theta_0}(x) = \operatorname{sign}(x - \theta_0)$. By partial integration,
$$Pm_\theta = -\theta F(0) - \int_0^\theta (\theta - 2x)\,dF(x) + \theta\bigl(1 - F(\theta)\bigr) = \theta - 2\int_0^\theta F(x)\,dx.$$
If $F$ is sufficiently regular around $\theta_0$, then $Pm_\theta$ is twice differentiable with first derivative $1 - 2F(\theta)$ (which vanishes at $\theta_0$) and second derivative $-2f(\theta)$. More generally, under the minimal condition that $F$ is differentiable at $\theta_0$, the function $Pm_\theta$ has a Taylor expansion $Pm_{\theta_0} - \tfrac12(\theta - \theta_0)^2\,2f(\theta_0) + o(|\theta - \theta_0|^2)$, so that we set $V_{\theta_0} = -2f(\theta_0)$. Because $P\dot m_{\theta_0}^2 = \mathrm{E}\,1 = 1$, the asymptotic variance of the median is $1/(2f(\theta_0))^2$. Figure 5.3 gives an impression of the accuracy of the approximation. □

[Figure 5.3. The distribution function of the sample median (dotted curve) and its normal approximation for a sample of size 25 from the Laplace distribution.]
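A quick numerical check, not from the text: for the Laplace distribution (the distribution of Figure 5.3) the density at the median is $f(0) = 1/2$, so the asymptotic variance of the median is $1/(2f(0))^2 = 1$, and $n\cdot\mathrm{var}(\text{median})$ should be near 1. A sketch:

```python
import math, random, statistics

def laplace(rng):
    # standard Laplace by inverse CDF; density f(x) = 0.5 * exp(-|x|), f(0) = 1/2
    u = rng.random()
    return math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))

rng = random.Random(0)
n, reps = 101, 1500
meds = [statistics.median(laplace(rng) for _ in range(n)) for _ in range(reps)]
print(n * statistics.variance(meds))   # should be near 1/(2 f(0))^2 = 1
```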
of the true underlying distribution $P$ on the model using the Kullback-Leibler divergence, which is defined as $-P\log(p_\theta/p)$, as a "distance" measure: $p_{\theta_0}$ minimizes this quantity over all densities in the model. Second, we expect that $\sqrt{n}(\hat\theta_n - \theta_0)$ is asymptotically normal with mean zero and covariance matrix
$$V_{\theta_0}^{-1}\, P\dot\ell_{\theta_0}\dot\ell_{\theta_0}^T\, V_{\theta_0}^{-1}.$$
Here $\ell_\theta = \log p_\theta$, and $V_{\theta_0}$ is the second derivative matrix of the map $\theta \mapsto P\log p_\theta$. The preceding theorem with $m_\theta = \log p_\theta$ gives sufficient conditions for this to be true.
The asymptotics give insight into the practical value of the experimenter's estimate $\hat\theta_n$. This depends on the specific situation. However, if the model is not too far off from the truth, then the estimated density $p_{\hat\theta_n}$ may be a reasonable approximation for the true density. □
5.26 Example (Exponential frailty model). Suppose that the observations are a random sample $(X_1, Y_1), \ldots, (X_n, Y_n)$ of pairs of survival times. For instance, each $X_i$ is the survival time of a "father" and $Y_i$ the survival time of a "son." We assume that, given an unobservable value $z_i$, the survival times $X_i$ and $Y_i$ are independent and exponentially distributed with parameters $z_i$ and $\theta z_i$, respectively. The value $z_i$ may be different for each observation. The problem is to estimate the ratio $\theta$ of the parameters.
To fit this example into the i.i.d. set-up of this chapter, we assume that the values $z_1, \ldots, z_n$ are realizations of a random sample $Z_1, \ldots, Z_n$ from some given distribution (that we do not have to know or parametrize).
One approach is based on the sufficiency of the variable $X_i + \theta Y_i$ for $z_i$ in the case that $\theta$ is known. Given $Z_i = z$, this "statistic" possesses the gamma-distribution with shape parameter 2 and scale parameter $z$. Corresponding to this, the conditional density of an observation $(X, Y)$ factorizes, for a given $z$, as $h_\theta(x, y)\,g_\theta(x + \theta y \mid z)$, for $g_\theta(s \mid z) = z^2 s e^{-zs}$ the gamma-density and
$$h_\theta(x, y) = \frac{\theta}{x + \theta y}.$$
Because the density of $X_i + \theta Y_i$ depends on the unobservable value $z_i$, we might wish to discard the factor $g_\theta(s \mid z)$ from the likelihood and use the factor $h_\theta(x, y)$ only. Unfortunately, this "conditional likelihood" does not behave as an ordinary likelihood, in that the corresponding "conditional likelihood equation," based on the function $\dot h_\theta/h_\theta(x, y) = \partial/\partial\theta\,\log h_\theta(x, y)$, does not have mean zero under $\theta$. The bias can be corrected by conditioning on the sufficient statistic. Let $\psi_\theta$ denote the resulting corrected estimating function; then
$$P_{\theta_0}\psi_\theta = -\frac{\theta + \theta_0}{\theta - \theta_0} + \frac{2\theta\theta_0}{(\theta - \theta_0)^2}\log\frac{\theta}{\theta_0} = \frac{1}{3\theta_0}(\theta_0 - \theta) + o(\theta_0 - \theta).$$
Hence the zero of $\theta \mapsto P_{\theta_0}\psi_\theta$ is taken uniquely at $\theta = \theta_0$. Next, the sequence $\sqrt{n}(\hat\theta_n - \theta_0)$ can be shown to be asymptotically normal by Theorem 5.21. In fact, the functions $\dot\psi_\theta(x, y)$ are uniformly bounded in $x, y > 0$ and $\theta$ ranging over compacta in $(0, \infty)$, so that, by the mean value theorem, the function $\dot\psi$ in this theorem may be taken equal to a constant.
On the other hand, although this estimator is easy to compute, it can be shown that it is not asymptotically optimal. In Chapter 25 on semiparametric models, we discuss estimators with a smaller asymptotic variance. □
5.27 Example (Nonlinear least squares). Suppose that we observe a random sample $(X_1, Y_1), \ldots, (X_n, Y_n)$ from the distribution of a vector $(X, Y)$ that follows the regression model
$$Y = f_\theta(X) + e, \qquad \mathrm{E}(e \mid X) = 0.$$
The least squares estimator, which minimizes $\theta \mapsto \sum_{i=1}^n (Y_i - f_\theta(X_i))^2$, is an M-estimator for $m_\theta(x, y) = (y - f_\theta(x))^2$ (or rather minus this function). It should be expected to converge to the minimizer of the limit criterion function
$$\theta \mapsto Pm_\theta = \mathrm{E}\bigl(Y - f_\theta(X)\bigr)^2.$$
Thus the least squares estimator should be consistent if $\theta_0$ is identifiable from the model, in the sense that $\theta \ne \theta_0$ implies that $f_\theta(X) \ne f_{\theta_0}(X)$ with positive probability.
For sufficiently regular regression models, we have
$$Pm_\theta = Pm_{\theta_0} + (\theta - \theta_0)^T\, P\dot f_{\theta_0}\dot f_{\theta_0}^T\,(\theta - \theta_0) + o\bigl(\|\theta - \theta_0\|^2\bigr).$$
This suggests that the conditions of Theorem 5.23 are satisfied with $V_{\theta_0} = 2P\dot f_{\theta_0}\dot f_{\theta_0}^T$ and $\dot m_{\theta_0}(x, y) = -2\bigl(y - f_{\theta_0}(x)\bigr)\dot f_{\theta_0}(x)$. If $e$ and $X$ are independent, then this leads to the asymptotic covariance matrix $V_{\theta_0}^{-1}\,2\mathrm{E}e^2$. □
Besides giving the asymptotic normality of $\sqrt{n}(\hat\theta_n - \theta_0)$, the preceding theorems give an asymptotic representation
$$\hat\theta_n = \theta_0 + \frac{1}{n}\sum_{i=1}^n V_{\theta_0}^{-1}\psi_{\theta_0}(X_i) + o_P\Bigl(\frac{1}{\sqrt{n}}\Bigr).$$
If we neglect the remainder term,† then this means that $\hat\theta_n - \theta_0$ behaves as the average of the variables $V_{\theta_0}^{-1}\psi_{\theta_0}(X_i)$. Then the (asymptotic) "influence" of the $n$th observation on the estimate is of the order $n^{-1}V_{\theta_0}^{-1}\psi_{\theta_0}(X_n)$.
† To make the following derivation rigorous, more information concerning the remainder term would be necessary.
Because the "influence" of an extra observation $x$ is proportional to $V_\theta^{-1}\psi_\theta(x)$, the function $x \mapsto V_\theta^{-1}\psi_\theta(x)$ is called the asymptotic influence function of the estimator $\hat\theta_n$. Influence functions can be defined for many other estimators as well, but the method of Z-estimation is particularly convenient to obtain estimators with given influence functions. Because $V_{\theta_0}$ is a constant (matrix), any shape of influence function can be obtained by simply choosing the right functions $\psi_\theta$.
For the purpose of robust estimation, perhaps the most important aim is to bound the influence of each individual observation. Thus, a Z-estimator is called B-robust if the function $\psi_\theta$ is bounded.
5.28 Example (Robust regression). Consider the linear regression model
$$Y_i = \theta^T X_i + e_i,$$
for i.i.d. errors $e_1, \ldots, e_n$ that are independent of $X_1, \ldots, X_n$. The classical estimator for the regression parameter $\theta$ is the least squares estimator, which minimizes $\sum_{i=1}^n (Y_i - \theta^T X_i)^2$. Outlying values of $X_i$ ("leverage points") or extreme values of $(X_i, Y_i)$ jointly ("influence points") can have an arbitrarily large influence on the value of the least squares estimator, which therefore is nonrobust. As in the case of location estimators, a more robust estimator for $\theta$ can be obtained by replacing the square by a function $m(x)$ that grows less rapidly as $x \to \infty$, for instance $m(x) = |x|$ or $m(x)$ equal to the primitive function of Huber's $\psi$. Usually, minimizing an expression of the type $\sum_{i=1}^n m(Y_i - \theta^T X_i)$ is equivalent to solving a system of equations
$$\sum_{i=1}^n \psi\bigl(Y_i - \theta^T X_i\bigr)X_i = 0.$$
Because $\mathrm{E}\,\psi(Y - \theta_0^T X)X = \mathrm{E}\psi(e)\,\mathrm{E}X$, we can expect the resulting estimator to be consistent provided $\mathrm{E}\psi(e) = 0$. Furthermore, we should expect that, for $V_{\theta_0} = \mathrm{E}\,\psi'(e)XX^T$,
$$\sqrt{n}(\hat\theta_n - \theta_0) = \frac{1}{\sqrt{n}}\,V_{\theta_0}^{-1}\sum_{i=1}^n \psi\bigl(Y_i - \theta_0^T X_i\bigr)X_i + o_P(1).$$
Consequently, even for a bounded function $\psi$, the influence function $(x, y) \mapsto V_\theta^{-1}\psi(y - \theta^T x)x$ may be unbounded, and an extreme value of an $X_i$ may still have an arbitrarily large influence on the estimate (asymptotically). Thus, the estimators obtained in this way are protected against influence points but may still suffer from leverage points and hence are only partly robust. To obtain fully robust estimators, we can change the estimating equations to
$$\sum_{i=1}^n \psi\bigl((Y_i - \theta^T X_i)\,v(X_i)\bigr)\,w(X_i) = 0.$$
Here we protect against leverage points by choosing $w$ bounded. For more flexibility we have also allowed a weighting factor $v(X_i)$ inside $\psi$. The choices $\psi(x) = x$, $v(x) = 1$ and $w(x) = x$ correspond to the (nonrobust) least squares estimator.
The solution $\hat\theta_n$ of our final estimating equation should be expected to be consistent for the solution $\theta_0$ of
$$\mathrm{E}\,\psi\bigl((Y - \theta^T X)\,v(X)\bigr)\,w(X) = 0.$$
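A numerical sketch, not from the text, of solving such an estimating equation for a one-dimensional $\theta$. All concrete choices here — Huber's $\psi$ with cut-off 1.5, $v(x) = 1/(1+|x|)$, $w(x) = x/(1+x^2)$ — are illustrative assumptions; the point is that for $w(x) = x\,u(x)$ with $u > 0$ each term is monotone in $\theta$, so bisection applies:

```python
import random

def psi(x, k=1.5):                      # Huber's psi (illustrative cut-off)
    return max(-k, min(k, x))

v = lambda x: 1.0 / (1.0 + abs(x))      # bounded weight inside psi
w = lambda x: x / (1.0 + x * x)         # bounded outer weight (leverage points)

def robust_slope(xs, ys, lo=-100.0, hi=100.0, tol=1e-9):
    """Solve sum psi((y_i - theta*x_i) v(x_i)) w(x_i) = 0 by bisection.
    With w(x) = x * u(x), u > 0, each term is nonincreasing in theta."""
    g = lambda t: sum(psi((y - t * x) * v(x)) * w(x) for x, y in zip(xs, ys))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(3)
theta0 = 2.0
xs = [rng.gauss(0, 1) for _ in range(400)]
ys = [theta0 * x + rng.gauss(0, 1) for x in xs]
print(robust_slope(xs, ys))   # close to theta0 = 2
```

Consistency here relies on the symmetry of the errors and the antisymmetry of $\psi$, so that $\mathrm{E}\,\psi(e\,v(x)) = 0$ for every $x$.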
5.29 Example (Optimal robust estimators). Every sufficiently regular function $\psi$ defines a location estimator $\hat\theta_n$ through the equation $\sum_{i=1}^n \psi(X_i - \theta) = 0$. In order to choose among the different estimators, we could compare their asymptotic variances and use the one with the smallest variance under the postulated (or estimated) distribution $P$ of the observations. On the other hand, if we also wish to guard against extreme observations, then we should find a balance between robustness and asymptotic variance. One possibility is to use the estimator with the smallest asymptotic variance at the postulated, ideal distribution $P$ under the side condition that its influence function be uniformly bounded by some constant $c$. In this example we show that for $P$ the normal distribution, this leads to the Huber estimator.
The Z-estimator is consistent for the solution $\theta_0$ of the equation $P\psi(\cdot - \theta) = \mathrm{E}\psi(X_1 - \theta) = 0$. Suppose that we fix an underlying, ideal $P$ whose "location" $\theta_0$ is zero. Then the problem is to find $\psi$ that minimizes the asymptotic variance $P\psi^2/(P\psi')^2$ under the two side conditions, for a given constant $c$,
$$\sup_x\Bigl|\frac{\psi(x)}{P\psi'}\Bigr| \le c, \qquad\text{and}\qquad P\psi = 0.$$
The problem is homogeneous in $\psi$, and hence we may assume that $P\psi' = 1$ without loss of generality. Next, minimization of $P\psi^2$ under the side conditions $P\psi = 0$, $P\psi' = 1$ and $\|\psi\|_\infty \le c$ can be achieved by using Lagrange multipliers, as in Problem 14.6. This leads to minimizing
$$P\bigl(\psi^2 + \lambda\psi + \mu\,\psi\,(p'/p)\bigr)$$
for fixed "multipliers" $\lambda$ and $\mu$ under the side condition $\|\psi\|_\infty \le c$ with respect to $\psi$. This expectation is minimized by minimizing the integrand pointwise, for every fixed $x$. Thus the minimizing $\psi$ has the property that, for every $x$ separately, $y = \psi(x)$ minimizes the parabola $y^2 + \lambda y + \mu y\,(p'/p)(x)$ over $y \in [-c, c]$. This readily gives the solution, with $[y]_c^d$ the value $y$ truncated to the interval $[c, d]$,
$$\psi(x) = \Bigl[-\frac{\lambda}{2} - \frac{\mu}{2}\,\frac{p'}{p}(x)\Bigr]_{-c}^{c}.$$
The constants $\lambda$ and $\mu$ can be solved from the side conditions $P\psi = 0$ and $P\psi' = 1$. The normal distribution $P = \Phi$ has location score function $(p'/p)(x) = -x$, and by symmetry it follows that $\lambda = 0$ in this case. Then the optimal $\psi$ reduces to Huber's $\psi$ function. □
$$\sum_{i=1}^n \psi\Bigl(\frac{X_i - \theta}{\hat\sigma}\Bigr) = 0. \tag{5.30}$$
Here $\hat\sigma$ is an initial (robust) estimator of scale, which is meant to stabilize the robustness of the location estimator. For instance, the "cut-off" parameter $k$ in Huber's $\psi$-function determines the amount of robustness of Huber's estimator, but the effect of a particular choice of $k$ on bounding the influence of outlying observations is relative to the range of the observations. If the observations are concentrated in the interval $[-k, k]$, then Huber's $\psi$ yields nothing else but the sample mean; if all observations are outside $[-k, k]$, we get the median. Scaling the observations to a standard scale gives a clear meaning to the value of $k$. The use of the median absolute deviation from the median (see section 21.3) is often recommended for this purpose.
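A sketch of (5.30) in code; the data, the cut-off $k = 1.345$, and the normalizing factor 1.4826 (which makes the MAD consistent for the standard deviation at the normal — a standard convention, not from the text) are all illustrative choices:

```python
import statistics

def huber_psi(x, k=1.345):
    return max(-k, min(k, x))

def mad(xs):
    """Median absolute deviation from the median, normalized for the normal."""
    m = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - m) for x in xs)

def huber_location(xs, k=1.345, tol=1e-9):
    s = mad(xs)                        # fixed, estimated scale, as in (5.30)
    g = lambda t: sum(huber_psi((x - t) / s, k) for x in xs)
    lo, hi = min(xs), max(xs)          # g is nonincreasing; bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

data = [9.8, 10.1, 9.9, 10.2, 10.0, 55.0]   # one gross outlier
print(huber_location(data))                  # stays near 10; the mean is 17.5
```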
If the scale estimator is itself a Z-estimator, then we can treat the pair $(\hat\theta, \hat\sigma)$ as a Z-estimator for a system of equations, and next apply the preceding theorems. More generally, we can apply the following result. In this subsection we allow a condition in terms of Donsker classes, which are discussed in Chapter 19. The proof of the following theorem follows the same steps as the proof of Theorem 5.21.

5.31 Theorem. For each $\theta$ in an open subset of $\mathbb{R}^k$ and each $\eta$ in a metric space, let $x \mapsto \psi_{\theta,\eta}(x)$ be an $\mathbb{R}^k$-valued measurable function such that the class of functions $\{\psi_{\theta,\eta} : \|\theta - \theta_0\| < \delta,\ d(\eta, \eta_0) < \delta\}$ is Donsker for some $\delta > 0$, and such that $P\|\psi_{\theta,\eta} - \psi_{\theta_0,\eta_0}\|^2 \to 0$ as $(\theta, \eta) \to (\theta_0, \eta_0)$. Assume that $P\psi_{\theta_0,\eta_0} = 0$, and that the maps $\theta \mapsto P\psi_{\theta,\eta}$ are differentiable at $\theta_0$, uniformly in $\eta$ in a neighborhood of $\eta_0$, with nonsingular derivative matrices $V_{\theta_0,\eta}$ such that $V_{\theta_0,\eta} \to V_{\theta_0,\eta_0}$. If $\sqrt{n}\,\mathbb{P}_n\psi_{\hat\theta_n,\hat\eta_n} = o_P(1)$ and $(\hat\theta_n, \hat\eta_n) \overset{P}{\to} (\theta_0, \eta_0)$, then
$$\sqrt{n}(\hat\theta_n - \theta_0) = -V_{\theta_0,\eta_0}^{-1}\bigl(\mathbb{G}_n\psi_{\theta_0,\eta_0} + \sqrt{n}\,P\psi_{\theta_0,\hat\eta_n}\bigr) + o_P(1).$$
Under the conditions of this theorem, the limiting distribution of the sequence $\sqrt{n}(\hat\theta_n - \theta_0)$ depends on the estimator $\hat\eta_n$ through the "drift" term $\sqrt{n}\,P\psi_{\theta_0,\hat\eta_n}$. In general, this gives a contribution to the limiting distribution, and $\hat\eta_n$ must be chosen with care. If $\hat\eta_n$ is $\sqrt{n}$-consistent and the map $\eta \mapsto P\psi_{\theta_0,\eta}$ is differentiable, then the drift term can be analyzed using the delta method.
It may happen that the drift term is zero. If the parameters $\theta$ and $\eta$ are "orthogonal" in this sense, then the auxiliary estimators $\hat\eta_n$ may converge at an arbitrarily slow rate and affect the limit distribution of $\hat\theta_n$ only through their limiting value $\eta_0$.
5.32 Example (Symmetric location). Suppose that the distribution of the observations is symmetric about $\theta_0$. Let $x \mapsto \psi(x)$ be an antisymmetric function, and consider the Z-estimators that solve equation (5.30). Because $P\psi\bigl((X - \theta_0)/\sigma\bigr) = 0$ for every $\sigma$, by the symmetry of $P$ and the antisymmetry of $\psi$, the "drift term" due to $\hat\sigma$ in the preceding theorem is identically zero. The estimator $\hat\theta_n$ has the same limiting distribution whether we use an arbitrary consistent estimator of a "true scale" $\sigma_0$ or $\sigma_0$ itself. □
5.33 Example (Robust regression). In the linear regression model considered in Example 5.28, suppose that we choose the weight functions $v$ and $w$ dependent on the data and solve the robust estimator $\hat\theta_n$ of the regression parameters from
$$\sum_{i=1}^n \psi\bigl((Y_i - \theta^T X_i)\,\hat v_n(X_i)\bigr)\,\hat w_n(X_i) = 0.$$
This corresponds to defining a nuisance parameter $\eta = (v, w)$ and setting $\psi_{\theta,v,w}(x, y) = \psi\bigl((y - \theta^T x)v(x)\bigr)w(x)$. If the functions $\psi_{\theta,v,w}$ run through a Donsker class (and they easily do), and are continuous in $(\theta, v, w)$, and the map $\theta \mapsto P\psi_{\theta,v,w}$ is differentiable at $\theta_0$ uniformly in $(v, w)$, then the preceding theorem applies. If $\mathrm{E}\psi(eu) = 0$ for every $u$, then $P\psi_{\theta_0,v,w} = 0$ for any $v$ and $w$, and the limit distribution of $\sqrt{n}(\hat\theta_n - \theta_0)$ is the same, whether we use the random weight functions $(\hat v_n, \hat w_n)$ or their limit $(v_0, w_0)$ (assuming that this exists).
The purpose of using random weight functions could be, besides stabilizing the robustness, to improve the asymptotic efficiency of $\hat\theta_n$. The limit $(v_0, w_0)$ typically is not the same for every underlying distribution $P$, and the estimators $(\hat v_n, \hat w_n)$ can be chosen in such a way that the asymptotic variance is minimal. □
5.5 Maximum Likelihood Estimators

If $X_1, \ldots, X_n$ are a random sample from a density $p_\theta$, then the maximum likelihood estimator $\hat\theta_n$ maximizes the function $\theta \mapsto \sum \log p_\theta(X_i)$, or equivalently, the function
$$M_n(\theta) = \frac{1}{n}\sum_{i=1}^n \log\frac{p_\theta}{p_{\theta_0}}(X_i) = \mathbb{P}_n\log\frac{p_\theta}{p_{\theta_0}}.$$
This requires that the model for the observations is not the same under the parameters $\theta$ and $\theta_0$. Identifiability is a natural and even a necessary condition: If the parameter is not identifiable, then consistent estimators cannot exist.
Proof. First note that $M(\theta_0) = P_{\theta_0}\log 1 = 0$. Hence we wish to show that $M(\theta)$ is strictly negative for $\theta \ne \theta_0$.
Because $\log x \le 2(\sqrt{x} - 1)$ for every $x \ge 0$, we have, writing $\mu$ for the dominating measure,
$$M(\theta) = P_{\theta_0}\log\frac{p_\theta}{p_{\theta_0}} \le 2P_{\theta_0}\Bigl(\sqrt{\frac{p_\theta}{p_{\theta_0}}} - 1\Bigr) = 2\int\sqrt{p_\theta p_{\theta_0}}\,d\mu - 2 \le -\int\bigl(\sqrt{p_\theta} - \sqrt{p_{\theta_0}}\bigr)^2\,d\mu.$$
(The last inequality is an equality if $\int p_\theta\,d\mu = 1$.) This is always nonpositive, and is zero only if $p_\theta$ and $p_{\theta_0}$ are equal. By assumption the latter happens only if $\theta = \theta_0$. ■
Thus, under conditions such as in section 5.2 and identifiability, the sequence of maximum likelihood estimators is consistent for the true parameter.
† Presently we take the expectation $P_{\theta_0}$ under the parameter $\theta_0$, whereas the derivation in section 5.3 is valid for a generic underlying probability structure and does not conceptually require that the set of parameters $\theta$ indexes a set of underlying distributions.
Even though $\hat\theta$ does not maximize $\theta \mapsto \mathbb{P}_n m_\theta$, this inequality can be used as the starting point for a consistency proof, since Theorem 5.7 requires only that $M_n(\hat\theta) \ge M_n(\theta_0) - o_P(1)$. The true parameter is still identifiable from this criterion function, because, by the preceding lemma, $P_{\theta_0}m_\theta = 0$ implies that $(p_\theta + p_{\theta_0})/2 = p_{\theta_0}$, or $p_\theta = p_{\theta_0}$. A technical advantage is that $m_\theta \ge \log(1/2)$. For another variation, see Example 5.17.
Consider asymptotic normality. The maximum likelihood estimator solves the likelihood equations
$$\frac{\partial}{\partial\theta}\sum_{i=1}^n \log p_\theta(X_i) = 0.$$
Hence it is a Z-estimator for $\psi_\theta$ equal to the score function $\dot\ell_\theta = (\partial/\partial\theta)\log p_\theta$ of the model. In view of the results of section 5.3, we expect that the sequence $\sqrt{n}(\hat\theta_n - \theta)$ is, under $\theta$, asymptotically normal with mean zero and covariance matrix
$$\bigl(P_\theta\ddot\ell_\theta\bigr)^{-1}\,P_\theta\dot\ell_\theta\dot\ell_\theta^T\,\bigl(P_\theta\ddot\ell_\theta\bigr)^{-1}. \tag{5.36}$$
Under regularity conditions, this reduces to the inverse of the Fisher information matrix $I_\theta = P_\theta\dot\ell_\theta\dot\ell_\theta^T$.
To see this in the case of a one-dimensional parameter, differentiate the identity $\int p_\theta\,d\mu = 1$ twice with respect to $\theta$. Assuming that the order of differentiation and integration can be reversed, we obtain $\int\dot p_\theta\,d\mu = \int\ddot p_\theta\,d\mu = 0$. Together with the identities
$$\dot\ell_\theta = \frac{\dot p_\theta}{p_\theta}, \qquad \ddot\ell_\theta = \frac{\ddot p_\theta}{p_\theta} - \Bigl(\frac{\dot p_\theta}{p_\theta}\Bigr)^2,$$
this implies that $P_\theta\dot\ell_\theta = 0$ (scores have mean zero), and $P_\theta\ddot\ell_\theta = -I_\theta$ (the curvature of the likelihood is equal to minus the Fisher information). Consequently, (5.36) reduces to $I_\theta^{-1}$.
The higher-dimensional case follows in the same way, in which we should interpret the identities $P_\theta\dot\ell_\theta = 0$ and $P_\theta\ddot\ell_\theta = -I_\theta$ as a vector and a matrix identity, respectively.
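These identities can be checked numerically for a concrete one-parameter model; the exponential density $p_\theta(x) = \theta e^{-\theta x}$ used below is an illustrative choice, not from the text. Here $\dot\ell_\theta(x) = 1/\theta - x$, so $I_\theta = 1/\theta^2$:

```python
import math

theta = 1.7  # an arbitrary illustrative parameter value

def p(x):     return theta * math.exp(-theta * x)   # exponential density
def ldot(x):  return 1.0 / theta - x                # score: d/dtheta log p_theta(x)
def lddot(x): return -1.0 / theta ** 2              # second derivative of log p_theta(x)

def integrate(f, a=0.0, b=30.0, n=50000):
    # midpoint rule; adequate for these smooth, rapidly decaying integrands
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

score_mean = integrate(lambda x: ldot(x) * p(x))       # P_theta ldot = 0
info       = integrate(lambda x: ldot(x) ** 2 * p(x))  # I_theta = 1/theta^2
curvature  = integrate(lambda x: lddot(x) * p(x))      # P_theta lddot = -I_theta
print(score_mean, info, curvature)
```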
We conclude that maximum likelihood estimators typically satisfy
$$\sqrt{n}(\hat\theta_n - \theta) \rightsquigarrow N\bigl(0, I_\theta^{-1}\bigr).$$
This is a very important result, as it implies that maximum likelihood estimators are asymptotically optimal. The convergence in distribution means roughly that the maximum likelihood estimator $\hat\theta_n$ is $N\bigl(\theta, (nI_\theta)^{-1}\bigr)$-distributed for every $\theta$, for large $n$. Hence, it is asymptotically unbiased and asymptotically of variance $(nI_\theta)^{-1}$. According to the Cramér-Rao theorem, the variance of an unbiased estimator is at least $(nI_\theta)^{-1}$. Thus, we could infer that the maximum likelihood estimator is asymptotically uniformly minimum-variance unbiased, and in this sense optimal. We write "could" because the preceding reasoning is informal and unsatisfying. The asymptotic normality does not warrant any conclusion about the convergence of the moments $\mathrm{E}_\theta\hat\theta_n$ and $\operatorname{var}_\theta\hat\theta_n$; we have not introduced an asymptotic version of the Cramér-Rao theorem; and the Cramér-Rao bound does not make any assertion concerning asymptotic normality. Moreover, the unbiasedness required by the Cramér-Rao theorem is restrictive and can be relaxed considerably in the asymptotic situation.
However, the message that maximum likelihood estimators are asymptotically efficient
is correct. We give a precise discussion in Chapter 8. The justification through asymptotics
appears to be the only general justification of the method of maximum likelihood. In some
form, this result was found by Fisher in the 1920s, but a better and more general insight
was only obtained in the period from 1950 through 1970 through the work of Le Cam and
others.
In the preceding informal derivations and discussion, it is implicitly understood that the density $p_\theta$ possesses at least two derivatives with respect to the parameter. Although this can be relaxed considerably, a certain amount of smoothness of the dependence $\theta \mapsto p_\theta$ is essential for the asymptotic normality. Compare the behavior of the maximum likelihood estimators in the case of uniformly distributed observations: They are neither asymptotically normal nor asymptotically optimal.
5.37 Example (Uniform distribution). Let $X_1, \ldots, X_n$ be a sample from the uniform distribution on $[0, \theta]$. Then the maximum likelihood estimator is the maximum $X_{(n)}$ of the observations. Because the variance of $X_{(n)}$ is of the order $O(n^{-2})$, we expect that a suitable norming rate in this case is not $\sqrt{n}$, but $n$. Indeed, for each $x < 0$,
$$P_\theta\bigl(n(X_{(n)} - \theta) \le x\bigr) = P_\theta\Bigl(X_1 \le \theta + \frac{x}{n}\Bigr)^n = \Bigl(\frac{\theta + x/n}{\theta}\Bigr)^n \to e^{x/\theta}.$$
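A quick simulation of this limit, not part of the text: the limit distribution of $n(X_{(n)} - \theta)$ has distribution function $e^{x/\theta}$ on $(-\infty, 0]$, so $n(\theta - X_{(n)})$ is approximately exponential with mean $\theta$:

```python
import random

rng = random.Random(7)
theta, n, reps = 3.0, 500, 2000
vals = [n * (theta - max(rng.uniform(0, theta) for _ in range(n)))
        for _ in range(reps)]
print(sum(vals) / reps)   # close to theta = 3, the mean of the limit distribution
```

(The exact mean is $n\theta/(n+1)$, already very close to $\theta$ at $n = 500$.)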
We conclude this section with a theorem that establishes the asymptotic normality of maximum likelihood estimators rigorously. Clearly, the asymptotic normality follows from Theorem 5.23 applied to $m_\theta = \log p_\theta$, or from Theorem 5.21 applied with $\psi_\theta = \dot\ell_\theta$ equal to the score function of the model. The following result is a minor variation on the first theorem. Its conditions somehow also ensure the relationship $P_\theta\ddot\ell_\theta = -I_\theta$ and the twice-differentiability of the map $\theta \mapsto P_{\theta_0}\log p_\theta$, even though the existence of second derivatives is not part of the assumptions. This remarkable phenomenon results from the trivial fact that square roots of probability densities have squares that integrate to 1. To exploit this, we require the differentiability of the maps $\theta \mapsto \sqrt{p_\theta}$, rather than of the maps $\theta \mapsto \log p_\theta$. A statistical model $(P_\theta : \theta \in \Theta)$ is called differentiable in quadratic mean if there exists a measurable vector-valued function $\dot\ell_{\theta_0}$ such that, as $\theta \to \theta_0$,
$$\int\Bigl[\sqrt{p_\theta} - \sqrt{p_{\theta_0}} - \tfrac12(\theta - \theta_0)^T\dot\ell_{\theta_0}\sqrt{p_{\theta_0}}\Bigr]^2\,d\mu = o\bigl(\|\theta - \theta_0\|^2\bigr). \tag{5.38}$$
This property also plays an important role in asymptotic optimality theory. A discussion, including simple conditions for its validity, is given in Chapter 7. It should be noted that
$$\frac{\partial}{\partial\theta}\sqrt{p_\theta} = \frac{1}{2\sqrt{p_\theta}}\,\frac{\partial}{\partial\theta}p_\theta = \frac12\Bigl(\frac{\partial}{\partial\theta}\log p_\theta\Bigr)\sqrt{p_\theta}.$$
Thus, the function $\dot\ell_{\theta_0}$ in the integral really is the score function of the model (as the notation suggests), and the expression $I_{\theta_0} = P_{\theta_0}\dot\ell_{\theta_0}\dot\ell_{\theta_0}^T$ defines the Fisher information matrix. However, condition (5.38) does not require existence of $\partial/\partial\theta\,p_\theta(x)$ for every $x$.
5.39 Theorem. Suppose that the model $(P_\theta : \theta \in \Theta)$ is differentiable in quadratic mean at an inner point $\theta_0$ of $\Theta \subset \mathbb{R}^k$. Furthermore, suppose that there exists a measurable function $\dot\ell$ with $P_{\theta_0}\dot\ell^2 < \infty$ such that, for every $\theta_1$ and $\theta_2$ in a neighborhood of $\theta_0$,
$$|\log p_{\theta_1}(x) - \log p_{\theta_2}(x)| \le \dot\ell(x)\,\|\theta_1 - \theta_2\|.$$
If the Fisher information matrix $I_{\theta_0}$ is nonsingular and the maximum likelihood estimators $\hat\theta_n$ are consistent, then
$$\sqrt{n}(\hat\theta_n - \theta_0) = I_{\theta_0}^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \dot\ell_{\theta_0}(X_i) + o_P(1).$$
In particular, the sequence $\sqrt{n}(\hat\theta_n - \theta_0)$ is asymptotically normal with mean zero and covariance matrix $I_{\theta_0}^{-1}$.
*Proof. This theorem is a corollary of Theorem 5.23. We shall show that the conditions of the latter theorem are satisfied for $m_\theta = \log p_\theta$ and $V_{\theta_0} = -I_{\theta_0}$.
Fix an arbitrary converging sequence of vectors $h_n \to h$, and set
$$W_n = 2\Bigl(\sqrt{\frac{p_{\theta_0 + h_n/\sqrt{n}}}{p_{\theta_0}}} - 1\Bigr).$$
By the differentiability in quadratic mean, the sequence $\sqrt{n}\,W_n$ converges in $L_2(P_{\theta_0})$ to the function $h^T\dot\ell_{\theta_0}$. In particular, it converges in probability, whence by a delta method
$$\sqrt{n}\bigl(\log p_{\theta_0 + h_n/\sqrt{n}} - \log p_{\theta_0}\bigr) = \sqrt{n}\,2\log\bigl(1 + \tfrac12 W_n\bigr) \overset{P_{\theta_0}}{\to} h^T\dot\ell_{\theta_0}.$$
In view of the Lipschitz condition on the map $\theta \mapsto \log p_\theta$, we can apply the dominated-convergence theorem to strengthen this to convergence in $L_2(P_{\theta_0})$. This shows that the map $\theta \mapsto \log p_\theta$ is differentiable in probability, as required in Theorem 5.23. (The preceding argument considers only sequences $\theta_n$ of the special form $\theta_0 + h_n/\sqrt{n}$ approaching $\theta_0$. Because $h_n$ can be any converging sequence and $\sqrt{n+1}/\sqrt{n} \to 1$, these sequences are actually not so special. By re-indexing, the result can be seen to be true for any $\theta_n \to \theta_0$.)
Next, by computing means (which are zero) and variances, we see that
Equating this result to the expansion given by Theorem 7.2, we see that
Hence the map $\theta \mapsto P_{\theta_0}\log p_\theta$ is twice-differentiable with second derivative matrix $-I_{\theta_0}$, or at least permits the corresponding Taylor expansion of order 2. ■
Here $\Psi : \mathbb{R} \mapsto [0, 1]$ is a known continuously differentiable, monotone function. The choices $\Psi(\theta) = 1/(1 + e^{-\theta})$ (the logistic distribution function) and $\Psi = \Phi$ (the normal distribution function) correspond to the logit model and probit model, respectively. The maximum likelihood estimator $\hat\theta_n$ maximizes the (conditional) likelihood function
$$\theta \mapsto \prod_{i=1}^n p_\theta(Y_i \mid X_i) := \prod_{i=1}^n \Psi(\theta^T X_i)^{Y_i}\bigl(1 - \Psi(\theta^T X_i)\bigr)^{1 - Y_i}.$$
The consistency and asymptotic normality of $\hat\theta_n$ can be proved, for instance, by combining Theorems 5.7 and 5.39. (Alternatively, we may follow the classical approach given in section 5.6. The latter is particularly attractive for the logit model, for which the log likelihood is strictly concave in $\theta$, so that the point of maximum is unique.) For identifiability of $\theta$ we must assume that the distribution of the $X_i$ is not concentrated on a $(k-1)$-dimensional affine subspace of $\mathbb{R}^k$. For simplicity we assume that the range of $X_i$ is bounded.
The consistency can be proved by applying Theorem 5.7 with $m_\theta = \log\bigl((p_\theta + p_{\theta_0})/2\bigr)$. Because $p_{\theta_0}$ is bounded away from 0 (and $\infty$), the function $m_\theta$ is somewhat better behaved than the function $\log p_\theta$.
By Lemma 5.35, the parameter $\theta$ is identifiable from the density $p_\theta$. We can redo the proof to see that, with $\lesssim$ meaning "less than up to a constant,"
This shows that $\theta_0$ is the unique point of maximum of $\theta \mapsto P_{\theta_0}m_\theta$. Furthermore, if $P_{\theta_0}m_{\theta_k} \to P_{\theta_0}m_{\theta_0}$, then $\theta_k^T X \overset{P}{\to} \theta_0^T X$. If the sequence $\theta_k$ is also bounded, then $\mathrm{E}\bigl((\theta_k - \theta_0)^T X\bigr)^2 \to 0$, whence $\theta_k \to \theta_0$ by the nonsingularity of the matrix $\mathrm{E}XX^T$. On the other hand, $\|\theta_k\|$ cannot have a diverging subsequence, because in that case $\theta_k^T X/\|\theta_k\| \overset{P}{\to} 0$ and hence $\theta_k/\|\theta_k\| \to 0$ by the same argument. This verifies condition (5.8).
Checking the uniform convergence to zero of $\sup_\theta|\mathbb{P}_n m_\theta - Pm_\theta|$ is not trivial, but it becomes an easy exercise if we employ the Glivenko-Cantelli theorem, as discussed in Chapter 19. The functions $x \mapsto \Psi(\theta^T x)$ form a VC-class, and the functions $m_\theta$ take the form $m_\theta(x, y) = \phi\bigl(\Psi(\theta^T x), y, \Psi(\theta_0^T x)\bigr)$, where the function $\phi(u, y, \eta)$ is Lipschitz in its first argument with Lipschitz constant bounded above by $1/\eta + 1/(1 - \eta)$. This is enough to ensure that the functions $m_\theta$ form a Donsker class and hence certainly a Glivenko-Cantelli class, in view of Example 19.20.
The asymptotic normality of $\sqrt n(\hat\theta_n - \theta)$ is now a consequence of Theorem 5.39. The score function
\[
\dot\ell_\theta(y \mid x) = \frac{y - \Psi(\theta^T x)}{\Psi(\theta^T x)\bigl(1 - \Psi(\theta^T x)\bigr)}\, \Psi'(\theta^T x)\, x
\]
is uniformly bounded in $x$, $y$ and $\theta$ ranging over compacta, and continuous in $\theta$ for every $x$ and $y$. The Fisher information matrix
\[
I_\theta = \mathrm{E}\, \frac{\Psi'(\theta^T X)^2}{\Psi(\theta^T X)\bigl(1 - \Psi(\theta^T X)\bigr)}\, X X^T.
\]
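The theory can be illustrated numerically. The following sketch (illustrative only; the true parameter, sample size, and covariate distribution are arbitrary choices) fits a one-dimensional logit model by Newton-Raphson iteration on the likelihood equation $\sum_i \bigl(Y_i - \Psi(\theta X_i)\bigr) X_i = 0$:

```python
import math
import random

random.seed(0)
theta0, n = 1.0, 20000  # illustrative true parameter and sample size

def Psi(u):
    # logistic distribution function
    return 1.0 / (1.0 + math.exp(-u))

# simulate (X_i, Y_i) with P(Y = 1 | X = x) = Psi(theta0 * x)
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [1 if random.random() < Psi(theta0 * x) else 0 for x in xs]

def score(t):
    # likelihood equation: sum_i (y_i - Psi(t x_i)) x_i
    return sum((y - Psi(t * x)) * x for x, y in zip(xs, ys))

def score_deriv(t):
    # derivative in t: -sum_i Psi'(t x_i) x_i^2, with Psi' = Psi (1 - Psi)
    return -sum(Psi(t * x) * (1.0 - Psi(t * x)) * x * x for x in xs)

theta_hat = 0.0
for _ in range(25):  # Newton-Raphson; the log likelihood is strictly concave
    theta_hat -= score(theta_hat) / score_deriv(theta_hat)
print(abs(theta_hat - theta0))
```

The strict concavity of the logit log likelihood guarantees that the iteration converges to the unique maximizer.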
The estimator $\hat\theta_n$ is a zero of $\Psi_n$, and the true value $\theta_0$ a zero of $\Psi$. The essential condition of the following theorem is that the second-order partial derivatives of $\psi_\theta(x)$ with respect to $\theta$ exist for every $x$ and satisfy
\[
\Bigl\lvert \frac{\partial^2 \psi_{\theta,h}(x)}{\partial\theta_i\,\partial\theta_j} \Bigr\rvert \le \ddot\psi(x),
\]
for some integrable measurable function $\ddot\psi$. This should be true at least for every $\theta$ in a neighborhood of $\theta_0$.
5.41 Theorem. For each $\theta$ in an open subset of Euclidean space, let $\theta \mapsto \psi_\theta(x)$ be twice continuously differentiable for every $x$. Suppose that $P\psi_{\theta_0} = 0$, that $P\lVert\psi_{\theta_0}\rVert^2 < \infty$ and that the matrix $P\dot\psi_{\theta_0}$ exists and is nonsingular. Assume that the second-order partial derivatives are dominated by a fixed integrable function $\ddot\psi(x)$ for every $\theta$ in a neighborhood of $\theta_0$. Then every consistent estimator sequence $\hat\theta_n$ such that $\Psi_n(\hat\theta_n) = 0$ for every $n$ satisfies
\[
\sqrt n(\hat\theta_n - \theta_0) = -\bigl(P\dot\psi_{\theta_0}\bigr)^{-1} \frac{1}{\sqrt n} \sum_{i=1}^n \psi_{\theta_0}(X_i) + o_P(1).
\]
In particular, the sequence $\sqrt n(\hat\theta_n - \theta_0)$ is asymptotically normal with mean zero and covariance matrix $(P\dot\psi_{\theta_0})^{-1}\, P\psi_{\theta_0}\psi_{\theta_0}^T\, (P\dot\psi_{\theta_0})^{-1}$.
Proof. By Taylor's theorem there exists a (random) vector $\tilde\theta_n$ on the line segment between $\theta_0$ and $\hat\theta_n$ such that
\[
0 = \Psi_n(\hat\theta_n) = \Psi_n(\theta_0) + \dot\Psi_n(\theta_0)(\hat\theta_n - \theta_0) + \tfrac12 (\hat\theta_n - \theta_0)^T\, \ddot\Psi_n(\tilde\theta_n)\, (\hat\theta_n - \theta_0).
\]
The first term on the right, $\Psi_n(\theta_0)$, is an average of the i.i.d. random vectors $\psi_{\theta_0}(X_i)$, which have mean $P\psi_{\theta_0} = 0$. By the central limit theorem, the sequence $\sqrt n\,\Psi_n(\theta_0)$ converges in distribution to a multivariate normal distribution with mean 0 and covariance matrix $P\psi_{\theta_0}\psi_{\theta_0}^T$. The derivative $\dot\Psi_n(\theta_0)$ in the second term is an average as well. By the law of large numbers it converges in probability to the matrix $V = P\dot\psi_{\theta_0}$. The second derivative $\ddot\Psi_n(\tilde\theta_n)$ is a $k$-vector of $(k \times k)$ matrices depending on the second-order derivatives $\ddot\psi_\theta$. By assumption, there exists a ball $B$ around $\theta_0$ such that $\ddot\psi_\theta$ is dominated by $\ddot\psi$ for every $\theta \in B$. The probability of the event $\{\hat\theta_n \in B\}$ tends to 1. On this event $\lVert \ddot\Psi_n(\tilde\theta_n)\rVert \le \mathbb{P}_n \ddot\psi$, which is bounded in probability by the law of large numbers. Combination of these facts allows us to rewrite the preceding display as
\[
0 = \Psi_n(\theta_0) + \bigl(V + o_P(1)\bigr)(\hat\theta_n - \theta_0),
\]
because the sequence $(\hat\theta_n - \theta_0)\, O_P(1) = o_P(1) O_P(1)$ converges to 0 in probability if $\hat\theta_n$ is consistent for $\theta_0$. The probability that the matrix $V + o_P(1)$ is invertible tends to 1. Multiply the preceding equation by $\sqrt n$ and apply $(V + o_P(1))^{-1}$ left and right to complete the proof. ■
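The sandwich formula of the theorem can be checked by a small simulation. The sketch below is illustrative only: the estimating function $\psi_\theta(x) = \tanh(x-\theta)$ and the Laplace data distribution are arbitrary choices satisfying the smoothness conditions. It compares the empirical value of $n\,\mathrm{Var}(\hat\theta_n)$ with $(P\dot\psi_{\theta_0})^{-1} P\psi_{\theta_0}^2 (P\dot\psi_{\theta_0})^{-1}$, both approximated by Monte Carlo:

```python
import math
import random

random.seed(1)

def laplace():
    # standard Laplace (double-exponential) variable; the center theta0 = 0
    e = random.expovariate(1.0)
    return e if random.random() < 0.5 else -e

def z_estimate(xs):
    # Newton-Raphson on the estimating equation sum_i tanh(x_i - t) = 0
    t = 0.0
    for _ in range(30):
        num = sum(math.tanh(x - t) for x in xs)
        den = -sum(1.0 - math.tanh(x - t) ** 2 for x in xs)  # sum of psi-dot
        t -= num / den
    return t

n, reps = 400, 300
ests = [z_estimate([laplace() for _ in range(n)]) for _ in range(reps)]
emp = n * sum(e * e for e in ests) / reps  # empirical n Var(theta_hat)

# Monte Carlo approximation of P psi^2 and -P psi-dot at theta0 = 0
big = [laplace() for _ in range(100000)]
A = sum(math.tanh(x) ** 2 for x in big) / len(big)
B = sum(1.0 - math.tanh(x) ** 2 for x in big) / len(big)
sandwich = A / B ** 2
print(round(emp, 2), round(sandwich, 2))
```

The two printed numbers should agree up to Monte Carlo error.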
In the preceding sections, the existence and consistency of solutions $\hat\theta_n$ of the estimating equations is assumed from the start. The present smoothness conditions actually ensure the existence of solutions. (Again the conditions could be significantly relaxed, as shown in the next proof.) Moreover, provided there exists a consistent estimator sequence at all, it is always possible to select a consistent sequence of solutions.

5.42 Theorem. Under the conditions of the preceding theorem, the probability that the equation $\mathbb{P}_n \psi_\theta = 0$ has at least one root tends to 1, as $n \to \infty$, and there exists a sequence of roots $\hat\theta_n$ such that $\hat\theta_n \to \theta_0$ in probability. If $\psi_\theta = \dot m_\theta$ is the gradient of some function $m_\theta$ and $\theta_0$ is a point of local maximum of $\theta \mapsto P m_\theta$, then the sequence $\hat\theta_n$ can be chosen to be local maxima of the maps $\theta \mapsto \mathbb{P}_n m_\theta$.
Proof. Integrate the Taylor expansion of $\theta \mapsto \psi_\theta(x)$ with respect to $x$ to find that, for a point $\tilde\theta = \tilde\theta(x)$ on the line segment between $\theta_0$ and $\theta$,
\[
P\psi_\theta = P\psi_{\theta_0} + P\dot\psi_{\theta_0}(\theta - \theta_0) + \tfrac12 (\theta - \theta_0)^T\, P\ddot\psi_{\tilde\theta}\, (\theta - \theta_0).
\]
By the domination condition, $\lVert P\ddot\psi_{\tilde\theta}\rVert$ is bounded by $P\ddot\psi < \infty$ if $\theta$ is sufficiently close to $\theta_0$. Thus, the map $\Psi(\theta) = P\psi_\theta$ is differentiable at $\theta_0$. By the same argument $\Psi$ is differentiable throughout a small neighborhood of $\theta_0$, and by a similar expansion (but now to first order) the derivative $P\dot\psi_\theta$ can be seen to be continuous throughout this neighborhood. Because $P\dot\psi_{\theta_0}$ is nonsingular by assumption, we can make the neighborhood still smaller, if necessary, to ensure that the derivative of $\Psi$ is nonsingular throughout the neighborhood. Then, by the inverse function theorem, there exists, for every sufficiently small $\delta > 0$, an open neighborhood $G_\delta$ of $\theta_0$ such that the map $\Psi\colon G_\delta \mapsto \mathrm{ball}(0, \delta)$ is a homeomorphism. The diameter of $G_\delta$ is bounded by a multiple of $\delta$, by the mean-value theorem and the fact that the norms of the derivatives $(P\dot\psi_\theta)^{-1}$ of the inverse $\Psi^{-1}$ are bounded.
Combining the preceding Taylor expansion with a similar expansion for the sample version $\Psi_n(\theta) = \mathbb{P}_n \psi_\theta$, we see
\[
\sup_{\theta \in G_\delta} \bigl\lVert \Psi_n(\theta) - \Psi(\theta) \bigr\rVert \le o_P(1) + \delta\, o_P(1) + \delta^2 O_P(1),
\]
where the $o_P(1)$ terms and the $O_P(1)$ term result from the law of large numbers, and are uniform in small $\delta$. Because $P\bigl(o_P(1) + \delta\, o_P(1) > \tfrac12\delta\bigr) \to 0$ for every $\delta > 0$, there exists $\delta_n \downarrow 0$ such that $P\bigl(o_P(1) + \delta_n o_P(1) > \tfrac12\delta_n\bigr) \to 0$. If $K_{n,\delta}$ is the event where the left side of the preceding display is bounded above by $\delta$, then $P(K_{n,\delta_n}) \to 1$ as $n \to \infty$.
On the event $K_{n,\delta}$ the map $h \mapsto h - \Psi_n \circ \Psi^{-1}(h)$ maps $\mathrm{ball}(0, \delta)$ into itself, by the definitions of $G_\delta$ and $K_{n,\delta}$. Because the map is also continuous, it possesses a fixed point in $\mathrm{ball}(0, \delta)$, by Brouwer's fixed point theorem. This yields a zero of $\Psi_n$ in the set $G_\delta$, whence the first assertion of the theorem.
For the final assertion, first note that the Hessian $P\dot\psi_{\theta_0}$ of $\theta \mapsto P m_\theta$ at $\theta_0$ is negative-definite, by assumption. A Taylor expansion as in the proof of Theorem 5.41 shows that $\mathbb{P}_n \dot\psi_{\tilde\theta_n} - \mathbb{P}_n \dot\psi_{\theta_0} \xrightarrow{P} 0$ for every $\tilde\theta_n \to \theta_0$. Hence the Hessian $\mathbb{P}_n \dot\psi_{\hat\theta_n}$ of $\theta \mapsto \mathbb{P}_n m_\theta$ at any consistent zero $\hat\theta_n$ converges in probability to the negative-definite matrix $P\dot\psi_{\theta_0}$ and is negative-definite with probability tending to 1. ■
The assertion of the theorem that there exists a consistent sequence of roots of the estimating equations is easily misunderstood. It does not guarantee the existence of an asymptotically consistent sequence of estimators. The only claim is that a clairvoyant statistician (with preknowledge of $\theta_0$) can choose a consistent sequence of roots. In reality, it may be impossible to choose the right solutions based only on the data (and knowledge of the model). In this sense the preceding theorem, a standard result in the literature, looks better than it is.
The situation is not as bad as it seems. One interesting situation is if the solution of the estimating equation is unique for every $n$. Then our solutions must be the same as those of the clairvoyant statistician, and hence the sequence of solutions is consistent.
In general, the deficit can be repaired with the help of a preliminary sequence of estimators $\tilde\theta_n$. If the sequence $\tilde\theta_n$ is consistent, then it works to choose the root $\hat\theta_n$ of $\mathbb{P}_n \psi_\theta = 0$ that is closest to $\tilde\theta_n$. Because $\lVert\hat\theta_n - \tilde\theta_n\rVert$ is smaller than the distance $\lVert\hat\theta_n^* - \tilde\theta_n\rVert$ between the clairvoyant sequence $\hat\theta_n^*$ and $\tilde\theta_n$, both distances converge to zero in probability. Thus the sequence of closest roots is consistent.
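The closest-root device is easy to implement. A sketch (illustrative only; a standard Cauchy location sample, for which the likelihood equation may have several roots, with the median as the consistent preliminary estimator):

```python
import math
import random

random.seed(2)
n = 200
# standard Cauchy sample via the quantile transform; theta0 = 0
xs = sorted(math.tan(math.pi * (random.random() - 0.5)) for _ in range(n))

def score(t):
    return sum(2.0 * (x - t) / (1.0 + (x - t) ** 2) for x in xs)

med = xs[n // 2]  # consistent preliminary estimator

# locate the sign changes of the score on a grid around the median,
# refine each by bisection, then keep the root closest to the median
grid = [med + 0.05 * k for k in range(-200, 201)]
roots = []
for a, b in zip(grid, grid[1:]):
    if score(a) * score(b) < 0:
        lo, hi = a, b
        for _ in range(50):
            m = 0.5 * (lo + hi)
            if score(lo) * score(m) <= 0:
                hi = m
            else:
                lo = m
        roots.append(0.5 * (lo + hi))
theta_hat = min(roots, key=lambda r: abs(r - med))
```

The grid width and search range are arbitrary; in this smooth one-dimensional problem any root-finding scheme would do.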
The assertion of the theorem can also be used in a negative direction. The point $\theta_0$ in the theorem is required to be a zero of $\theta \mapsto P\psi_\theta$, but, apart from that, it may be arbitrary. Thus, the theorem implies at the same time that a malicious statistician can always choose a sequence of roots $\hat\theta_n$ that converges to any given zero. These may include other points besides the "true" value of $\theta$. Furthermore, inspection of the proof shows that the sequence of roots can also be chosen to jump back and forth between two (or more) zeros. If the function $\theta \mapsto P\psi_\theta$ has multiple roots, we must exercise care. We can be sure that certain roots of $\theta \mapsto \mathbb{P}_n \psi_\theta$ are bad estimators.
Part of the problem here is caused by using estimating equations, rather than maximization, to find estimators, which blurs the distinction between points of absolute maximum, local maximum, and even minimum. In the light of the results on consistency in section 5.2, we may expect the location of the point of absolute maximum of $\theta \mapsto \mathbb{P}_n m_\theta$ to converge to a point of absolute maximum of $\theta \mapsto P m_\theta$. As long as this is unique, the absolute maximizers of the criterion function are typically consistent.
5.43 Example (Weibull distribution). Let $X_1, \ldots, X_n$ be a sample from the Weibull distribution with density
\[
p_{\theta,\sigma}(x) = \frac{\theta}{\sigma} x^{\theta-1} e^{-x^\theta/\sigma}, \qquad x > 0,\ \theta > 0,\ \sigma > 0.
\]
(Then $\sigma^{1/\theta}$ is a scale parameter.) The score function is given by the partial derivatives of the log density with respect to $\theta$ and $\sigma$:
\[
\dot\ell_{\theta,\sigma}(x) = \Bigl( \frac{1}{\theta} + \log x - \frac{x^\theta}{\sigma} \log x,\ -\frac{1}{\sigma} + \frac{x^\theta}{\sigma^2} \Bigr).
\]
The likelihood equations $\sum_i \dot\ell_{\theta,\sigma}(x_i) = 0$ reduce to
\[
\sigma = \frac{1}{n} \sum_{i=1}^n x_i^\theta, \qquad \frac{1}{\theta} + \frac{1}{n}\sum_{i=1}^n \log x_i - \frac{\sum_{i=1}^n x_i^\theta \log x_i}{\sum_{i=1}^n x_i^\theta} = 0.
\]
The second equation is strictly decreasing in $\theta$, from $\infty$ at $\theta = 0$ to $\overline{\log x} - \log x_{(n)}$ at $\theta = \infty$, where $x_{(n)}$ is the largest observation. Hence a solution exists, and is unique, unless all $x_i$ are equal. Provided the higher-order derivatives of the score function exist and can be dominated, the sequence of maximum likelihood estimators $(\hat\theta_n, \hat\sigma_n)$ is asymptotically normal by Theorems 5.41 and 5.42. There exist four different third-order derivatives, given by
\[
\frac{\partial^3 \ell_{\theta,\sigma}(x)}{\partial\theta^3} = \frac{2}{\theta^3} - \frac{x^\theta}{\sigma} \log^3 x, \qquad
\frac{\partial^3 \ell_{\theta,\sigma}(x)}{\partial\theta^2\,\partial\sigma} = \frac{x^\theta}{\sigma^2} \log^2 x,
\]
\[
\frac{\partial^3 \ell_{\theta,\sigma}(x)}{\partial\theta\,\partial\sigma^2} = -\frac{2 x^\theta}{\sigma^3} \log x, \qquad
\frac{\partial^3 \ell_{\theta,\sigma}(x)}{\partial\sigma^3} = -\frac{2}{\sigma^3} + \frac{6 x^\theta}{\sigma^4}.
\]
For $\theta$ and $\sigma$ ranging over sufficiently small neighborhoods of $\theta_0$ and $\sigma_0$, these functions are dominated by a function of the form
\[
M(x) = A\bigl(1 + x^B\bigr)\bigl(1 + \lvert\log x\rvert^3\bigr)
\]
for sufficiently large $A$ and $B$. Because the Weibull distribution has an exponentially small tail, the mixed moment $\mathrm{E}_{\theta_0,\sigma_0} X^p \lvert\log X\rvert^q$ is finite for every $p, q \ge 0$. Thus, all moments of $\dot\ell_{\theta,\sigma}$ and $\ddot\ell_{\theta,\sigma}$ exist and $M$ is integrable. □
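Numerically, the likelihood equations above are easy to solve: substituting $\sigma = n^{-1}\sum_i x_i^\theta$ leaves a single strictly decreasing function of $\theta$, which can be bisected. A sketch (the true values and sample size are illustrative):

```python
import math
import random

random.seed(3)
theta0, sigma0, n = 2.0, 1.0, 3000  # illustrative true values
# X has distribution function 1 - exp(-x^theta/sigma), so X = (sigma E)^(1/theta), E ~ Exp(1)
xs = [(sigma0 * random.expovariate(1.0)) ** (1.0 / theta0) for _ in range(n)]
mean_log = sum(math.log(x) for x in xs) / n

def g(theta):
    # profiled likelihood equation in theta after substituting sigma:
    # 1/theta + mean(log x) - sum(x^theta log x)/sum(x^theta)
    s = sum(x ** theta for x in xs)
    sl = sum(x ** theta * math.log(x) for x in xs)
    return 1.0 / theta + mean_log - sl / s

lo, hi = 1e-6, 50.0
for _ in range(60):  # bisection; g is strictly decreasing
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
theta_hat = 0.5 * (lo + hi)
sigma_hat = sum(x ** theta_hat for x in xs) / n
```

Because $g$ decreases from $+\infty$ to a negative limit, the bisection always brackets the unique root.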
5.7 One-Step Estimators

This corresponds to replacing $\Psi_n(\theta)$ by its tangent at $\tilde\theta_n$ and is known as the method of Newton-Raphson in numerical analysis. The solution $\theta = \hat\theta_n$ is
\[
\hat\theta_n = \tilde\theta_n - \dot\Psi_n(\tilde\theta_n)^{-1}\, \Psi_n(\tilde\theta_n).
\]
In numerical analysis this procedure is iterated a number of times, taking $\hat\theta_n$ as the new preliminary guess, and so on. Provided that the starting point $\tilde\theta_n$ is well chosen, the sequence of solutions converges to a root of $\Psi_n$. Our interest here goes in a different direction. We suppose that the preliminary estimator $\tilde\theta_n$ is already within range $n^{-1/2}$ of the true value of $\theta$. Then, as we shall see, just one iteration of the Newton-Raphson scheme produces an estimator $\hat\theta_n$ that is as good as the Z-estimator defined by $\Psi_n$. In fact, it is better, in that its consistency is guaranteed, whereas the true Z-estimator may be inconsistent or not uniquely defined.
In this way consistency and asymptotic normality are effectively separated, which is useful because these two aims require different properties of the estimating equations. Good initial estimators can be constructed by ad hoc methods and take care of consistency. Next, these initial estimators can be improved by the one-step method. Thus, for instance, the good properties of maximum likelihood estimation can be retained even in cases in which the consistency fails.
In this section we impose the following condition on the random criterion functions $\Psi_n$. For every constant $M$ and a given nonsingular matrix $\dot\Psi_0$,
Condition (5.44) suggests that $\Psi_n$ is differentiable at $\theta_0$, with derivative tending to $\dot\Psi_0$, but this is not an assumption. We do not require that a derivative $\dot\Psi_n$ exists, and introduce
5.45 Theorem (One-step estimation). Let $\sqrt n\,\Psi_n(\theta_0) \rightsquigarrow Z$ and let (5.44) hold. Then the one-step estimator $\hat\theta_n$, for a given $\sqrt n$-consistent estimator sequence $\tilde\theta_n$ and estimators $\dot\Psi_{n,0} \xrightarrow{P} \dot\Psi_0$, satisfies
\[
\sqrt n(\hat\theta_n - \theta_0) \rightsquigarrow -\dot\Psi_0^{-1} Z.
\]

5.46 Addendum. For $\Psi_n(\theta) = \mathbb{P}_n \psi_\theta$ condition (5.44) is satisfied under the conditions of Theorem 5.21 with $\dot\Psi_0 = V_{\theta_0}$, and under the conditions of Theorem 5.41 with $\dot\Psi_0 = P\dot\psi_{\theta_0}$.

Proof. By (5.44) the second term can be replaced by $-\dot\Psi_0 \sqrt n(\tilde\theta_n - \theta_0) + o_P(1)$. Thus the expression can be rewritten as
The first term converges to zero in probability, and the theorem follows after application of Slutsky's lemma.
For a proof of the addendum, see the proofs of the corresponding theorems. ■
If the sequence $\sqrt n(\hat\theta_n - \theta_0)$ converges in distribution, then it is certainly uniformly tight. Consequently, a sequence of one-step estimators is $\sqrt n$-consistent and can itself be used as preliminary estimator for a second iteration of the modified Newton-Raphson algorithm. Presumably, this would give a value closer to a root of $\Psi_n$. However, the limit distribution of this "two-step estimator" is the same, so that repeated iteration does not give asymptotic improvement. In practice a multistep method may nevertheless give better results.
We close this section with a discussion of the discretization trick. This device is mostly of theoretical value and has been introduced to relax condition (5.44) to the following. For every nonrandom sequence $\theta_n = \theta_0 + O(n^{-1/2})$,
(5.47)
This new condition is less stringent and much easier to check. It is sufficiently strong if the preliminary estimators $\tilde\theta_n$ are discretized on grids of mesh width $n^{-1/2}$. For instance, $\tilde\theta_n$ is suitably discretized if all its realizations are points of the grid $n^{-1/2}\mathbb{Z}^k$ (consisting of the points $n^{-1/2}(i_1, \ldots, i_k)$ for integers $i_1, \ldots, i_k$). This is easy to achieve, but perhaps unnatural. Any preliminary estimator sequence $\tilde\theta_n$ can be discretized by replacing its values by the closest points of the grid. Because this changes each coordinate by at most $n^{-1/2}$, $\sqrt n$-consistency of $\tilde\theta_n$ is retained by discretization.
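Discretization itself is a one-line operation. A sketch (the dimension, sample size, and preliminary estimate below are illustrative values):

```python
import math

def discretize(theta, n):
    # round each coordinate to the grid n^(-1/2) Z^k
    h = 1.0 / math.sqrt(n)
    return tuple(h * round(t / h) for t in theta)

n = 400
theta_tilde = (0.123, -1.987)        # a hypothetical preliminary estimate
theta_disc = discretize(theta_tilde, n)
# each coordinate moves by at most half a mesh width, hence less than n^(-1/2)
assert all(abs(a - b) <= 0.5 / math.sqrt(n) for a, b in zip(theta_tilde, theta_disc))
print(theta_disc)
```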
Define a one-step estimator $\hat\theta_n$ as before, but now use a discretized version of the preliminary estimator.
5.48 Theorem (Discretized one-step estimation). Let $\sqrt n\,\Psi_n(\theta_0) \rightsquigarrow Z$ and let (5.47) hold. Then the one-step estimator $\hat\theta_n$, for a given $\sqrt n$-consistent, discretized estimator sequence $\tilde\theta_n$ and estimators $\dot\Psi_{n,0} \xrightarrow{P} \dot\Psi_0$, satisfies
\[
\sqrt n(\hat\theta_n - \theta_0) \rightsquigarrow -\dot\Psi_0^{-1} Z.
\]

5.49 Addendum. For $\Psi_n(\theta) = \mathbb{P}_n \psi_\theta$ and $\mathbb{P}_n$ the empirical measure of a random sample from a density $p_\theta$ that is differentiable in quadratic mean (5.38), condition (5.47) is satisfied, with $\dot\Psi_0 = -P_{\theta_0} \psi_{\theta_0} \dot\ell_{\theta_0}^T$, if, as $\theta \to \theta_0$,
converges to zero in probability. Fix $\varepsilon > 0$. By the $\sqrt n$-consistency, there exists $M$ with $P\bigl(\sqrt n \lVert\tilde\theta_n - \theta_0\rVert > M\bigr) < \varepsilon$. If $\sqrt n \lVert\tilde\theta_n - \theta_0\rVert \le M$, then $\tilde\theta_n$ equals one of the values in the set $S_n = \{\theta \in n^{-1/2}\mathbb{Z}^k\colon \lVert\theta - \theta_0\rVert \le n^{-1/2} M\}$. For each $M$ and $n$ there are only finitely many elements in this set. Moreover, for fixed $M$ the number of elements is bounded independently of $n$. Thus
If the score function $\dot\ell_\theta$ of the model also satisfies the conditions of the addendum, then the estimators $\dot\Psi_{n,0} = -\mathbb{P}_n \psi_{\tilde\theta_n} \dot\ell_{\tilde\theta_n}^T$ are consistent for $\dot\Psi_0$. This shows that discretized one-step estimation can be carried through under very mild regularity conditions. Note that the addendum requires only continuity of $\theta \mapsto \psi_\theta$, whereas (5.47) appears to require differentiability.
5.50 Example (Cauchy distribution). Suppose $X_1, \ldots, X_n$ are a sample from the Cauchy location family $p_\theta(x) = \pi^{-1}\bigl(1 + (x - \theta)^2\bigr)^{-1}$. Then the score function is given by
\[
\dot\ell_\theta(x) = \frac{2(x - \theta)}{1 + (x - \theta)^2}.
\]
[Figure 5.4. Cauchy log likelihood function of a sample of 25 observations, showing three local maxima. The value of the absolute maximum is well separated from the other maxima, and its location is close to the true value zero of the parameter.]
This function behaves like $1/x$ for $x \to \pm\infty$ and is bounded in between. The second moment of $\dot\ell_\theta(X_1)$ therefore exists, unlike the moments of the Cauchy distribution itself. Because the sample mean possesses the same (Cauchy) distribution as a single observation $X_1$, the sample mean is a very inefficient estimator. Instead we could use the median, or another M-estimator. However, the asymptotically best estimator should be based on maximum likelihood. We have
The tails of this function are of the order $1/x^3$, and the function is bounded in between. These bounds are uniform in $\theta$ varying over a compact interval. Thus the conditions of Theorems 5.41 and 5.42 are satisfied. Because the consistency follows from Example 5.16, the sequence of maximum likelihood estimators is asymptotically normal.
The Cauchy likelihood estimator has gained a bad reputation, because the likelihood equation $\sum_i \dot\ell_\theta(X_i) = 0$ typically has several roots. The number of roots behaves asymptotically as two times a Poisson$(1/\pi)$ variable plus 1 (see [126]). Therefore, the one-step method (or possibly a multistep method) is often recommended, with, for instance, the median as the initial estimator. Perhaps a better solution is not to use the likelihood equations, but to determine the maximum likelihood estimator by, for instance, visual inspection of a graph of the likelihood function, as in Figure 5.4. This is particularly appropriate because the difficulty of multiple roots does not occur in the two-parameter location-scale model. In the model with density $\sigma^{-1} p_\theta(x/\sigma)$, the maximum likelihood estimator for $(\theta, \sigma)$ is unique (see [25]). □
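The one-step recipe for the Cauchy location model, started at the median, takes only a few lines. A sketch (the true value $\theta_0 = 1$ and the sample size are illustrative choices):

```python
import math
import random

random.seed(4)
theta0, n = 1.0, 500
xs = sorted(theta0 + math.tan(math.pi * (random.random() - 0.5)) for _ in range(n))

def Psi_n(t):
    # average score: (1/n) sum 2(x - t)/(1 + (x - t)^2)
    return sum(2.0 * (x - t) / (1.0 + (x - t) ** 2) for x in xs) / n

def Psi_n_dot(t):
    # its derivative in t: (1/n) sum (2(x - t)^2 - 2)/(1 + (x - t)^2)^2
    return sum((2.0 * (x - t) ** 2 - 2.0) / (1.0 + (x - t) ** 2) ** 2 for x in xs) / n

med = xs[n // 2]                     # sqrt(n)-consistent preliminary estimator
one_step = med - Psi_n(med) / Psi_n_dot(med)
```

One Newton step from the median already attains the asymptotic efficiency of the maximum likelihood estimator, without any search among the multiple roots.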
5.51 Example (Mixtures). Let $f$ and $g$ be given, positive probability densities on the real line. Consider estimating the parameter $\theta = (\mu, \nu, \sigma, \tau, p)$ based on a random sample from the mixture density
\[
x \mapsto p\, f\Bigl(\frac{x - \mu}{\sigma}\Bigr) \frac{1}{\sigma} + (1 - p)\, g\Bigl(\frac{x - \nu}{\tau}\Bigr) \frac{1}{\tau}.
\]
If $f$ and $g$ are sufficiently regular, then this is a smooth five-dimensional parametric model, and the standard theory should apply. Unfortunately, the supremum of the likelihood over the natural parameter space is $\infty$, and there exists no maximum likelihood estimator. This is seen, for instance, from the fact that the likelihood is bigger than
5.8 Rates of Convergence

5.52 Theorem (Rate of convergence). Assume that for fixed constants $C$ and $\alpha > \beta$, for every $n$, and for every sufficiently small $\delta > 0$,
If the sequence $\hat\theta_n$ satisfies $\mathbb{P}_n m_{\hat\theta_n} \ge \mathbb{P}_n m_{\theta_0} - O_P\bigl(n^{\alpha/(2\beta - 2\alpha)}\bigr)$ and converges in outer probability to $\theta_0$, then $n^{1/(2\alpha - 2\beta)}\, d(\hat\theta_n, \theta_0) = O_P^*(1)$.
Proof. Set $r_n = n^{1/(2\alpha - 2\beta)}$ and suppose that $\hat\theta_n$ maximizes the map $\theta \mapsto \mathbb{P}_n m_\theta$ up to a variable $R_n = O_P(r_n^{-\alpha})$.
For each $n$, the parameter space minus the point $\theta_0$ can be partitioned into the "shells" $S_{j,n} = \{\theta\colon 2^{j-1} < r_n d(\theta, \theta_0) \le 2^j\}$, with $j$ ranging over the integers. If $r_n d(\hat\theta_n, \theta_0)$ is larger than $2^M$ for a given integer $M$, then $\hat\theta_n$ is in one of the shells $S_{j,n}$ with $j \ge M$. In that case the supremum of the map $\theta \mapsto \mathbb{P}_n m_\theta - \mathbb{P}_n m_{\theta_0}$ over this shell is at least $-R_n$ by the property of $\hat\theta_n$. Conclude that, for every $\varepsilon > 0$,
\[
P^*\bigl( r_n d(\hat\theta_n, \theta_0) > 2^M \bigr) \le \sum_{\substack{j \ge M \\ 2^j \le \varepsilon r_n}} P^*\Bigl( \sup_{\theta \in S_{j,n}} \bigl(\mathbb{P}_n m_\theta - \mathbb{P}_n m_{\theta_0}\bigr) \ge -\frac{K}{r_n^\alpha} \Bigr) + P^*\bigl( 2 d(\hat\theta_n, \theta_0) \ge \varepsilon \bigr) + P\bigl( r_n^\alpha R_n \ge K \bigr).
\]
If the sequence $\hat\theta_n$ is consistent for $\theta_0$, then the second probability on the right converges to 0 as $n \to \infty$, for every fixed $\varepsilon > 0$. The third probability on the right can be made arbitrarily small by choice of $K$, uniformly in $n$. Choose $\varepsilon > 0$ small enough to ensure that the conditions of the theorem hold for every $\delta \le \varepsilon$. Then for every $j$ involved in the sum, we have
\[
\sup_{\theta \in S_{j,n}} P(m_\theta - m_{\theta_0}) \le -C\, \frac{2^{(j-1)\alpha}}{r_n^\alpha}.
\]
For $\tfrac12 C 2^{(M-1)\alpha} \ge K$, the series can be bounded in terms of the empirical process $\mathbb{G}_n$ by
by Markov's inequality and the definition of $r_n$. The right side converges to zero for every $M = M_n \to \infty$. ■
Consider the special case that the parameter $\theta$ is a Euclidean vector. If the map $\theta \mapsto P m_\theta$ is twice-differentiable at the point of maximum $\theta_0$, then its first derivative at $\theta_0$ vanishes and a Taylor expansion of the limit criterion function takes the form
\[
P m_\theta = P m_{\theta_0} + \tfrac12 (\theta - \theta_0)^T V (\theta - \theta_0) + o\bigl(\lVert\theta - \theta_0\rVert^2\bigr).
\]
5.53 Corollary. For each $\theta$ in an open subset of Euclidean space let $x \mapsto m_\theta(x)$ be a measurable function such that, for every $\theta_1$ and $\theta_2$ in a neighborhood of $\theta_0$ and a measurable function $\dot m$ such that $P \dot m^2 < \infty$,
\[
\bigl\lvert m_{\theta_1}(x) - m_{\theta_2}(x) \bigr\rvert \le \dot m(x)\, \lVert \theta_1 - \theta_2 \rVert.
\]
Furthermore, suppose that the map $\theta \mapsto P m_\theta$ admits a second-order Taylor expansion at the point of maximum $\theta_0$ with nonsingular second derivative. If $\mathbb{P}_n m_{\hat\theta_n} \ge \mathbb{P}_n m_{\theta_0} - O_P(n^{-1})$, then $\sqrt n(\hat\theta_n - \theta_0) = O_P(1)$, provided that $\hat\theta_n \xrightarrow{P} \theta_0$.
Proof. By assumption, the first condition of Theorem 5.52 is valid with $\alpha = 2$. To see that the second one is valid with $\beta = 1$, we apply Corollary 19.35 to the class of functions $\mathcal{F} = \{m_\theta - m_{\theta_0}\colon \lVert\theta - \theta_0\rVert < \delta\}$. This class has envelope function $F = \dot m\, \delta$, whence
\[
\mathrm{E}^* \sup_{\lVert\theta - \theta_0\rVert < \delta} \bigl\lvert \mathbb{G}_n(m_\theta - m_{\theta_0}) \bigr\rvert \lesssim \int_0^{\lVert \dot m \rVert_{P,2}\,\delta} \sqrt{\log N_{[\,]}\bigl(\varepsilon, \mathcal{F}, L_2(P)\bigr)}\, d\varepsilon.
\]
The bracketing entropy of the class $\mathcal{F}$ is estimated in Example 19.7. Inserting the upper bound obtained there into the integral, we obtain that the preceding display is bounded above by a multiple of
\[
\int_0^{\lVert \dot m \rVert_{P,2}\,\delta} \sqrt{\log \frac{\delta}{\varepsilon}}\, d\varepsilon,
\]
which by substitution is seen to be bounded by a multiple of $\delta$, as required. ■
Rates of convergence different from $\sqrt n$ are quite common for M-estimators of infinite-dimensional parameters and may also be obtained through the application of Theorem 5.52. See Chapters 24 and 25 for examples. Rates slower than $\sqrt n$ may also arise for fairly simple parametric estimates.
5.54 Example (Modal interval). Suppose that we define an estimator of location $\hat\theta_n$ as the center of an interval of length 2 that contains the largest possible fraction of the observations. This is an M-estimator for the functions $m_\theta = 1_{[\theta-1, \theta+1]}$.
For many underlying distributions the first condition of Theorem 5.52 holds with $\alpha = 2$. It suffices that the map $\theta \mapsto P m_\theta = P[\theta - 1, \theta + 1]$ is twice-differentiable and has a proper maximum at some point $\theta_0$. Using the maximal inequality Corollary 19.35 (or Lemma 19.38), we can show that the second condition is valid with $\beta = \tfrac12$. Indeed, the bracketing entropy of the intervals in the real line is of the order $\delta/\varepsilon^2$, and the envelope function of the class of functions $1_{[\theta-1,\theta+1]} - 1_{[\theta_0-1,\theta_0+1]}$, as $\theta$ ranges over $(\theta_0 - \delta, \theta_0 + \delta)$, is bounded by $1_{[\theta_0-1-\delta,\,\theta_0-1+\delta]} + 1_{[\theta_0+1-\delta,\,\theta_0+1+\delta]}$, whose squared $L_2$-norm is bounded by a multiple of $\lVert p \rVert_\infty \delta$.
Thus Theorem 5.52 applies with $\alpha = 2$ and $\beta = \tfrac12$ and yields the rate of convergence $n^{1/3}$. The resulting location estimator is very robust against outliers. However, in view of its slow convergence rate, one should have good reasons to use it.
The use of an interval of length 2 is somewhat awkward. Every other fixed length would give the same result. More interestingly, we can also replace the fixed-length interval by the smallest interval that contains a fixed fraction, for instance 1/2, of the observations. This still yields a rate of convergence of $n^{1/3}$. The intuitive reason for this is that the length of the "shorth" settles down at a $\sqrt n$-rate, and hence its randomness is asymptotically negligible relative to that of its center. □
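The modal-interval estimator itself is simple to compute, because an optimal interval can always be taken to have a data point as its left endpoint. A sketch (illustrative; standard normal data, so $\theta_0 = 0$):

```python
import bisect
import random

random.seed(5)
n = 2000
xs = sorted(random.gauss(0.0, 1.0) for _ in range(n))

# for each left endpoint x_i, count the observations in [x_i, x_i + 2]
best_count, theta_hat = -1, 0.0
for i, x in enumerate(xs):
    count = bisect.bisect_right(xs, x + 2.0) - i
    if count > best_count:
        best_count, theta_hat = count, x + 1.0  # center of the interval
```

The restriction to left endpoints at data points loses nothing: sliding any interval to the right until its left endpoint hits the smallest observation it contains cannot drop observations.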
\[
\theta \mapsto \mathbb{P}_n m_{\theta, \hat\eta_n}.
\]
The sets $\Theta_n$ and $H_n$ need not be metric spaces, but instead we measure the discrepancies between $\hat\theta_n$ and $\theta_0$, and $\hat\eta_n$ and a limiting value $\eta_0$, by nonnegative functions $\theta \mapsto d_\eta(\theta, \theta_0)$ and $\eta \mapsto d(\eta, \eta_0)$, which may be arbitrary.

5.55 Theorem. Assume that, for arbitrary functions $e_n\colon \Theta_n \times H_n \mapsto \mathbb{R}$ and $\phi_n\colon (0, \infty) \mapsto \mathbb{R}$ such that $\delta \mapsto \phi_n(\delta)/\delta^\beta$ is decreasing for some $\beta < 2$, every $(\theta, \eta) \in \Theta_n \times H_n$, and every $\delta > 0$,
Let $\delta_n > 0$ satisfy $\phi_n(\delta_n) \le \sqrt n\, \delta_n^2$ for every $n$. If $P(\hat\theta_n \in \Theta_n, \hat\eta_n \in H_n) \to 1$ and $\mathbb{P}_n m_{\hat\theta_n, \hat\eta_n} \ge \mathbb{P}_n m_{\theta_0, \hat\eta_n} - O_P(\delta_n^2)$, then $d_{\hat\eta_n}(\hat\theta_n, \theta_0) = O_P^*\bigl(\delta_n + d(\hat\eta_n, \eta_0)\bigr)$.
Proof. For simplicity assume that $\mathbb{P}_n m_{\hat\theta_n, \hat\eta_n} \ge \mathbb{P}_n m_{\theta_0, \hat\eta_n}$, without a tolerance term. For each $n \in \mathbb{N}$, $j \in \mathbb{Z}$ and $M > 0$, let $S_{n,j,M}$ be the set
\[
\bigl\{ (\theta, \eta) \in \Theta_n \times H_n\colon 2^{j-1}\delta_n < d_\eta(\theta, \theta_0) \le 2^j \delta_n,\ d(\eta, \eta_0) \le 2^{-M} d_\eta(\theta, \theta_0) \bigr\}.
\]
\[
\le \sum_{j \ge M} P^*\Bigl( \sup_{(\theta, \eta) \in S_{n,j,M}} \mathbb{P}_n\bigl(m_{\theta,\eta} - m_{\theta_0,\eta}\bigr) \ge 0 \Bigr).
\]
5.9 Argmax Theorem

\[
h \mapsto M_n\Bigl(\theta_0 + \frac{h}{r_n}\Bigr) - M_n(\theta_0).
\]
Suppose that these, if suitably normed, converge to a limit process $h \mapsto M(h)$. Then the general principle is that the sequence $\hat h_n$ converges in distribution to the maximizer of this limit process.
For simplicity of notation we shall write the local criterion functions as $h \mapsto M_n(h)$. Let $\{M_n(h)\colon h \in H_n\}$ be arbitrary stochastic processes indexed by subsets $H_n$ of a given metric space. We wish to prove that the argmax functional is continuous: If $M_n \rightsquigarrow M$ and $H_n \to H$ in a suitable sense, then the (near) maximizers $\hat h_n$ of the random maps $h \mapsto M_n(h)$ converge in distribution to the maximizer $\hat h$ of the limit process $h \mapsto M(h)$. It is easy to find examples in which this is not true, but given the right definitions it is, under some conditions. Given a set $B$, set
\[
M(B) = \sup_{h \in B} M(h).
\]
Then convergence in distribution of the vectors $\bigl(M_n(A), M_n(B)\bigr)$ for given pairs of sets $A$ and $B$ is an appropriate form of convergence of $M_n$ to $M$. The following theorem gives some flexibility in the choice of the indexing sets. We implicitly either assume that the suprema $M_n(B)$ are measurable or understand the weak convergence in terms of outer probabilities, as in Chapter 18.
The result we are looking for is not likely to be true if the maximizer of the limit process is not well defined. Exactly as in Theorem 5.7, the maximum should be "well separated." Because in the present case the limit is a stochastic process, we require that every sample path $h \mapsto M(h)$ possesses a well-separated maximum (condition (5.57)).
5.56 Theorem (Argmax theorem). Let $M_n$ and $M$ be stochastic processes indexed by subsets $H_n$ and $H$ of a given metric space such that, for every pair of a closed set $F$ and a set $K$ in a given collection $\mathcal{K}$,
Furthermore, suppose that every sample path of the process $h \mapsto M(h)$ possesses a well-separated point of maximum $\hat h$ in that, for every open set $G$ and every $K \in \mathcal{K}$,
If $M_n(\hat h_n) \ge M_n(H_n) - o_P(1)$ and for every $\delta > 0$ there exists $K \in \mathcal{K}$ such that $\sup_n P(\hat h_n \notin K) < \delta$ and $P(\hat h \notin K) < \delta$, then $\hat h_n \rightsquigarrow \hat h$.

Proof. If $\hat h_n \in F \cap K$, then $M_n(F \cap K \cap H_n) \ge M_n(B) - o_P(1)$ for any set $B$. Hence, for every closed set $F$ and every $K \in \mathcal{K}$,
by Slutsky's lemma and the portmanteau lemma. If $\hat h \in F^c$, then $M(F \cap K \cap H)$ is strictly smaller than $M(\hat h)$ by (5.57), and hence on the intersection with the event in the far right side $\hat h$ cannot be contained in $K \cap H$. It follows that
The theorem works most smoothly if we can take $\mathcal{K}$ to consist only of the whole space. However, then we are close to assuming some sort of global uniform convergence of $M_n$ to $M$, and this may not hold or may be hard to prove. It is usually more economical in terms of conditions to show that the maximizers $\hat h_n$ are contained in certain sets $K$ with high probability. Then uniform convergence of $M_n$ to $M$ on $K$ is sufficient. The choice of compact sets $K$ corresponds to establishing the uniform tightness of the sequence $\hat h_n$ before applying the argmax theorem.
If the sample paths of the processes $M_n$ are bounded on $K$ and $H_n = H$ for every $n$, then the weak convergence of the processes $M_n$ viewed as elements of the space $\ell^\infty(K)$ implies the convergence condition of the argmax theorem. This follows by the continuous-mapping theorem, because the map

5.58 Corollary. Suppose that $M_n \rightsquigarrow M$ in $\ell^\infty(K)$ for every compact subset $K$ of $\mathbb{R}^k$, for a limit process $M$ with continuous sample paths that have unique points of maximum $\hat h$. If $H_n \to H$, $M_n(\hat h_n) \ge M_n(H_n) - o_P(1)$, and the sequence $\hat h_n$ is uniformly tight, then $\hat h_n \rightsquigarrow \hat h$.
Proof. The compactness of $K$ and the continuity of the sample paths $h \mapsto M(h)$ imply that the (unique) points of maximum $\hat h$ are automatically well separated in the sense of (5.57). Indeed, if this fails for a given open set $G \ni \hat h$ and $K$ (and a given $\omega$ in the underlying probability space), then there exists a sequence $h_m$ in $G^c \cap K \cap H$ such that $M(h_m) \to M(\hat h)$. Because $K$ is compact, this sequence can be chosen convergent. The limit $h_0$ must be in the closed set $G^c$ and hence cannot be $\hat h$. By the continuity of $M$ it also has the property that $M(h_0) = \lim M(h_m) = M(\hat h)$. This contradicts the assumption that $\hat h$ is a unique point of maximum.
If we can show that $\bigl(M_n(F \cap H_n), M_n(K \cap H_n)\bigr)$ converges to the corresponding limit for every pair of compact sets $F \subset K$, then the corollary follows from Theorem 5.56. If $H_n = H$ for every $n$, then this convergence is immediate from the weak convergence of $M_n$ to $M$ in $\ell^\infty(K)$, by the continuous-mapping theorem. For $H_n$ changing with $n$ this convergence may fail, and we need to refine the proof of Theorem 5.56. This goes through with minor changes if
\[
\limsup_{n \to \infty} P\bigl( M_n(F \cap H_n) - M_n(K \cap H_n) \ge x \bigr) \le P\bigl( M(F \cap H) - M(K \cap H) \ge x \bigr),
\]
for every $x$, every compact set $F$ and every large closed ball $K$. Define functions $g_n\colon \ell^\infty(K) \mapsto \mathbb{R}$ by
and $g$ similarly, but with $H$ replacing $H_n$. By an argument as in the proof of Theorem 18.11, the desired result follows if $\limsup g_n(z_n) \le g(z)$ for every sequence $z_n \to z$ in $\ell^\infty(K)$ with a continuous limit function $z$. (Then $\limsup P\bigl(g_n(M_n) \ge x\bigr) \le P\bigl(g(M) \ge x\bigr)$ for every $x$, for any weakly converging sequence $M_n \rightsquigarrow M$ with a limit with continuous sample paths.) This in turn follows if, for every precompact set $B \subset K$,
The argmax theorem can also be used to prove consistency, by applying it to the original criterion functions $\theta \mapsto M_n(\theta)$. Then the limit process $\theta \mapsto M(\theta)$ is degenerate and has a fixed point of maximum $\theta_0$. Weak convergence becomes convergence in probability, and the theorem now gives conditions for the consistency $\hat\theta_n \xrightarrow{P} \theta_0$. Condition (5.57) reduces to the well-separation of $\theta_0$, and the convergence
\[
\sup_{\theta \in F \cap K \cap \Theta_n} M_n(\theta) \xrightarrow{P} \sup_{\theta \in F \cap K \cap \Theta} M(\theta)
\]
is, apart from allowing $\Theta_n$ to depend on $n$, weaker than the uniform convergence of $M_n$ to $M$.
Notes

In the section on consistency we have given two main results (uniform convergence and Wald's proof) that have proven their value over the years, but there is more to say on this subject. The two approaches can be unified by replacing the uniform convergence by "one-sided uniform convergence," which in the case of i.i.d. observations can be established under the conditions of Wald's theorem by a bracketing approach as in Example 19.8 (but then one-sided). Furthermore, the use of special properties, such as convexity of the $\psi$ or $m$ functions, is often helpful. Examples such as Lemma 5.10, or the treatment of maximum likelihood estimators in exponential families in Chapter 4, appear to indicate that no single approach can be satisfactory.
The study of the asymptotic properties of maximum likelihood estimators and other M-estimators has a long history. Fisher [48], [50] was a strong advocate of the method of maximum likelihood and noted its asymptotic optimality as early as the 1920s. What we have labelled the classical conditions correspond to the rigorous treatment given by Cramer [27] in his authoritative book. Huber initiated the systematic study of M-estimators, with the purpose of developing robust statistical procedures. His paper [78] contains important ideas that are precursors for the application of techniques from the theory of empirical processes by, among others, Pollard, as in [117], [118], and [120]. For one-dimensional parameters these empirical process methods can be avoided by using a maximal inequality based on the $L_2$-norm (see, e.g., Theorem 2.2.4 in [146]). Surprisingly, then, a Lipschitz condition on the Hellinger distance (an integrated quantity) suffices; see, for example, [80] or [94]. For higher-dimensional parameters the results are also not the best possible, but I do not know of any simple better ones.
The books by Huber [79] and by Hampel, Ronchetti, Rousseeuw, and Stahel [73] are good sources for applications of M-estimators in robust statistics. These references also discuss the relative efficiency of the different M-estimators, which motivates, for instance, the use of Huber's $\psi$-function. In this chapter we have derived Huber's estimator as the solution of the problem of minimizing the asymptotic variance under the side condition of a uniformly bounded influence function. Originally Huber derived it as the solution to the problem of minimizing the maximum asymptotic variance $\sup_P \sigma_P^2$ for $P$ ranging over a contamination neighborhood $P = (1 - \varepsilon)\Phi + \varepsilon Q$ with $Q$ arbitrary. For M-estimators these two approaches turn out to be equivalent.
The one-step method can be traced back to numerical schemes for solving the likelihood equations, including Fisher's method of scoring. One-step estimators were introduced for their asymptotic efficiency by Le Cam in 1956, who later developed them for general locally asymptotically quadratic models and also introduced the discretization device (see [93]).
PROBLEMS
1. Let X₁, ..., Xₙ be a sample from a density that is strictly positive and symmetric about some point. Show that the Huber M-estimator for location is consistent for the symmetry point.
2. Find an expression for the asymptotic variance of the Huber estimator for location if the observations are normally distributed.
3. Define ψ(x) = p − 1, 0, p according as x < 0, x = 0, x > 0. Show that Eψ(X − θ) = 0 implies that P(X < θ) ≤ p ≤ P(X ≤ θ).
4. Let X₁, ..., Xₙ be i.i.d. N(μ, σ²)-distributed. Derive the maximum likelihood estimator for (μ, σ²) and show that it is asymptotically normal. Calculate the Fisher information matrix for this parameter and its inverse.
5. Let X₁, ..., Xₙ be i.i.d. Poisson(1/θ)-distributed. Derive the maximum likelihood estimator for θ and show that it is asymptotically normal.
6. Let X₁, ..., Xₙ be i.i.d. N(θ, θ)-distributed. Derive the maximum likelihood estimator for θ and show that it is asymptotically normal.
7. Find a sequence of fixed (nonrandom) functions Mₙ: ℝ → ℝ that converges pointwise to a limit M₀ and such that each Mₙ has a unique maximum at a point θₙ, but the sequence θₙ does not converge to θ₀. Can you also find a sequence Mₙ that converges uniformly?
8. Find a sequence of fixed (nonrandom) functions Mₙ: ℝ → ℝ that converges pointwise but not uniformly to a limit M₀ such that each Mₙ has a unique maximum at a point θₙ and the sequence θₙ converges to θ₀.
9. Let X₁, ..., Xₙ be i.i.d. observations from a uniform distribution on [0, θ]. Show that the sequence of maximum likelihood estimators is consistent. Show that it is not asymptotically normal.
10. Let X₁, ..., Xₙ be i.i.d. observations from an exponential density θe^{−θx}. Show that the sequence of maximum likelihood estimators is asymptotically normal.
11. Let 𝔽ₙ⁻¹(p) be a pth sample quantile of a sample from a cumulative distribution function F on ℝ that is differentiable with positive derivative f at the population pth quantile F⁻¹(p) = inf{x: F(x) ≥ p}. Show that √n(𝔽ₙ⁻¹(p) − F⁻¹(p)) is asymptotically normal with mean zero and variance p(1 − p)/f(F⁻¹(p))².
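The variance formula in the preceding problem can be checked by simulation; the sketch below (numpy assumed; the sample size and replication count are arbitrary) compares the scaled variance of the sample median of normal samples with the predicted value p(1 − p)/f(F⁻¹(p))², which equals π/2 for the median of the standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 400, 10000

# sample medians of N(0,1) samples; their variance, scaled by n,
# should approach p(1-p)/f(F^{-1}(p))^2 with p = 1/2
medians = np.median(rng.standard_normal((reps, n)), axis=1)
scaled_var = n * medians.var()

# predicted asymptotic variance: (1/4) / phi(0)^2 = pi/2
predicted = np.pi / 2
```

For the median of the normal, f(F⁻¹(1/2)) = φ(0) = 1/√(2π), so the limit variance is π/2 ≈ 1.571, and the simulated value is close to it already at moderate n.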
12. Derive a minimal condition on the distribution function F that guarantees the consistency of the
sample pth quantile.
13. Calculate the asymptotic variance of √n(θ̂ₙ − θ) in Example 5.26.
14. Suppose that we observe a random sample from the distribution of (X, Y) in the following errors-in-variables model:
$$X = Z + e, \qquad Y = \alpha + \beta Z + f,$$
where (e, f) is bivariate normally distributed with mean 0 and covariance matrix σ²I and is independent of the unobservable variable Z. In analogy to Example 5.26, construct a system of estimating equations for (α, β) based on a conditional likelihood, and study the limit properties of the corresponding estimators.
15. In Example 5.27, for what point is the least squares estimator θ̂ₙ consistent if we drop the condition that E(e | X) = 0? Derive an (implicit) solution in terms of the function E(e | X). Is it necessarily θ₀ if Ee = 0?
16. In Example 5.27, consider the asymptotic behavior of the least absolute-value estimator θ̂ that minimizes Σᵢ₌₁ⁿ |Yᵢ − φ_θ(Xᵢ)|.
17. Let X₁, ..., Xₙ be i.i.d. with density f_{λ,a}(x) = λe^{−λ(x−a)}1{x ≥ a}, where the parameters λ > 0 and a ∈ ℝ are unknown. Calculate the maximum likelihood estimator (λ̂ₙ, âₙ) of (λ, a) and derive its asymptotic properties.
18. Let X be Poisson-distributed with density p_θ(x) = θˣe^{−θ}/x!. Show by direct calculation that $\mathrm{E}_\theta\dot\ell_\theta(X) = 0$ and $\mathrm{E}_\theta\ddot\ell_\theta(X) = -\mathrm{E}_\theta\dot\ell_\theta^2(X)$. Compare this with the assertions in the introduction. Apparently, differentiation under the integral (sum) is permitted in this case. Is that obvious from results from measure theory or (complex) analysis?
19. Let X₁, ..., Xₙ be a sample from the N(θ, 1) distribution, where it is known that θ ≥ 0. Show that the maximum likelihood estimator is not asymptotically normal under θ = 0. Why does this not contradict the theorems of this chapter?
20. Show that $(\hat\theta_n - \theta_0)\ddot\Psi_n(\tilde\theta_n)$ in formula (5.18) converges in probability to zero if $\tilde\theta_n \overset{P}{\to} \theta_0$ and there exist an integrable function M and a δ > 0 with $\|\ddot\psi_\theta(x)\| \le M(x)$ for every x and every ‖θ − θ₀‖ < δ.
21. If θ̂ₙ maximizes Mₙ, then it also maximizes Mₙ⁺. Show that this may be used to relax the conditions of Theorem 5.7 to sup_θ |Mₙ⁺ − M⁺|(θ) → 0 in probability (if M(θ₀) > 0).
22. Suppose that for every ε > 0 there exists a set Θ_ε with lim inf P(θ̂ₙ ∈ Θ_ε) ≥ 1 − ε. Then uniform convergence of Mₙ to M in Theorem 5.7 can be relaxed to uniform convergence on every Θ_ε.
23. Show that Wald's consistency proof yields almost sure convergence of θ̂ₙ, rather than convergence in probability, if the parameter space is compact and Mₙ(θ̂ₙ) ≥ Mₙ(θ₀) − o(1).
24. Suppose that (X₁, Y₁), ..., (Xₙ, Yₙ) are i.i.d. and satisfy the linear regression relationship Yᵢ = θᵀXᵢ + eᵢ for (unobservable) errors e₁, ..., eₙ independent of X₁, ..., Xₙ. Show that the mean absolute deviation estimator, which minimizes Σ|Yᵢ − θᵀXᵢ|, is asymptotically normal under a mild condition on the error distribution.
25. (i) Verify the conditions of Wald's theorem for m_θ the log likelihood function of the N(μ, σ²)-distribution if the parameter set for θ = (μ, σ²) is a compact subset of ℝ × ℝ⁺.
(ii) Extend m_θ by continuity to the compactification of ℝ × ℝ⁺. Show that the conditions of Wald's theorem fail at the points (μ, 0).
(iii) Replace m_θ by the log likelihood function of a pair of two independent observations from the N(μ, σ²)-distribution. Show that Wald's theorem now does apply, also with a compactified parameter set.
26. A distribution on ℝᵏ is called ellipsoidally symmetric if it has a density of the form x ↦ g((x − μ)ᵀΣ⁻¹(x − μ)) for a function g: [0, ∞) → [0, ∞), a vector μ, and a symmetric positive-definite matrix Σ. Study the Z-estimators for location μ̂ that solve an equation of the form
$$\sum_{i=1}^n \psi\big((X_i - \mu)^T \hat\Sigma_n^{-1} (X_i - \mu)\big) = 0,$$
for given estimators Σ̂ₙ and, for instance, Huber's ψ-function. Is the asymptotic distribution of Σ̂ₙ important?
27. Suppose that Θ is a compact metric space and M: Θ → ℝ is continuous. Show that (5.8) is equivalent to the point θ₀ being a point of unique global maximum. Can you relax the continuity of M to some form of "semicontinuity"?
6
Contiguity
$$\Omega_P = \{p > 0\}, \qquad \Omega_Q = \{q > 0\}.$$
See Figure 6.1. Because P(Ω_P^c) = ∫_{p=0} p dμ = 0, the measure P is supported on the set Ω_P. Similarly, Q is supported on Ω_Q. The intersection Ω_P ∩ Ω_Q receives positive measure from both P and Q provided its measure under μ is positive. The measure Q can be written as the sum Q = Q_a + Q_⊥ of the measures
$$Q_a(A) = Q(A \cap \Omega_P), \qquad Q_\perp(A) = Q\big(A \cap \{p = 0\}\big).$$
As proved in the next lemma, Q_a ≪ P and Q_⊥ ⊥ P. Furthermore, for every measurable set A,
$$Q_a(A) = \int_A \frac{q}{p}\, dP.$$
The decomposition Q = Qa + Q.l is called the Lebesgue decomposition of Q with respect
to P. The measures Qa and Q.l are called the absolutely continuous part and the orthogonal
part (or singular part) of Q with respect to P, respectively. In view of the preceding display, the function q/p is a density of Q_a with respect to P. It is denoted dQ/dP (not: dQ_a/dP), so that
$$\frac{dQ}{dP} = \frac{q}{p}, \qquad P\text{-a.s.}$$
As long as we are only interested in the properties of the quotient q/p under P-probability, we may leave the quotient undefined for p = 0. The density dQ/dP is only P-almost surely unique by definition. Even though we have used densities to define them, dQ/dP and the Lebesgue decomposition are actually independent of the choice of densities and dominating measure.
In statistics a more common name for a Radon-Nikodym density is likelihood ratio. We shall think of it as a random variable dQ/dP: Ω → [0, ∞) and shall study its law under P.
6.2 Lemma. Let P and Q be probability measures with densities p and q with respect to a measure μ. Then, for the measures Q_a and Q_⊥ defined above:
(i) Q = Q_a + Q_⊥, Q_a ≪ P, Q_⊥ ⊥ P.
(ii) Q_a(A) = ∫_A (q/p) dP for every measurable set A.
(iii) Q ≪ P if and only if Q(p = 0) = 0 if and only if ∫(q/p) dP = 1.
Proof. The first statement of (i) is obvious from the definitions of Q_a and Q_⊥. For the second, we note that P(A) can be zero only if p(x) = 0 for μ-almost all x ∈ A. In this case, μ(A ∩ {p > 0}) = 0, whence Q_a(A) = Q(A ∩ {p > 0}) = 0 by the absolute continuity of Q with respect to μ. The third statement of (i) follows from P(p = 0) = 0 and Q_⊥(p > 0) = Q(∅) = 0.
Statement (ii) follows from
$$Q_a(A) = \int_{A \cap \{p>0\}} q\, d\mu = \int_A \frac{q}{p}\, p\, d\mu = \int_A \frac{q}{p}\, dP.$$
For (iii), note that by (ii) the total mass of Q_a is equal to Q_a(Ω) = ∫(q/p) dP. This is 1 if and only if Q_a = Q. ∎
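The lemma can be illustrated on a toy discrete example, with μ the counting measure on four points (the particular masses below are arbitrary, chosen so that Q has mass off the support of P):

```python
# densities w.r.t. counting measure mu on {0, 1, 2, 3}
p = {0: 0.5, 1: 0.5, 2: 0.0, 3: 0.0}   # P lives on {0, 1}
q = {0: 0.2, 1: 0.3, 2: 0.5, 3: 0.0}   # Q puts mass 0.5 outside {p > 0}

# Lebesgue decomposition: Qa(A) = Q(A ∩ {p > 0}), Qperp(A) = Q(A ∩ {p = 0})
qa    = {x: q[x] if p[x] > 0 else 0.0 for x in p}
qperp = {x: q[x] if p[x] == 0 else 0.0 for x in p}

# part (ii) of the lemma: Qa(A) = ∫_A (q/p) dP, here a finite sum over A
def qa_via_density(A):
    return sum((q[x] / p[x]) * p[x] for x in A if p[x] > 0)

# part (iii): ∫ (q/p) dP = 0.5 < 1 here, so Q is not absolutely continuous w.r.t. P
total_qa = qa_via_density(p.keys())
```

Here Q(p = 0) = 0.5 > 0, so all three characterizations in (iii) fail simultaneously, as the lemma predicts.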
6.2 Contiguity
If a probability measure Q is absolutely continuous with respect to a probability measure P, then the Q-law of a random vector X: Ω → ℝᵏ can be calculated from the P-law of the pair (X, dQ/dP) through the formula
$$\mathrm{E}_Q f(X) = \mathrm{E}_P f(X)\,\frac{dQ}{dP}.$$
With P^{X,V} equal to the law of the pair (X, V) = (X, dQ/dP) under P, this relationship can also be expressed as
$$Q(X \in B) = \mathrm{E}_P 1_B(X)\,\frac{dQ}{dP} = \int_{B\times\mathbb{R}} v\, dP^{X,V}(x, v).$$
The validity of these formulas depends essentially on the absolute continuity of Q with
respect to P, because a part of Q that is orthogonal with respect to P cannot be recovered
from any P-law.
Consider an asymptotic version of the problem. Let (Ωₙ, 𝒜ₙ) be measurable spaces, each equipped with a pair of probability measures Pₙ and Qₙ. Under what conditions can a Qₙ-limit law of random vectors Xₙ: Ωₙ → ℝᵏ be obtained from suitable Pₙ-limit laws? In view of the above it is necessary that Qₙ be "asymptotically absolutely continuous" with respect to Pₙ in a suitable sense. The right concept is contiguity.
The name "contiguous" is standard, but perhaps conveys a wrong image. "Contiguity"
suggests sequences of probability measures living next to each other, but the correct image
is "on top of each other" (in the limit).
† The algebraic identity dQ = (dQ/dP) dP is false, because the notation dQ/dP is used as shorthand for dQ_a/dP: If we write dQ/dP, then we are not implicitly assuming that Q ≪ P.
This equivalence persists if the three statements are replaced by their asymptotic counterparts: Sequences Pₙ and Qₙ satisfy Qₙ ◁ Pₙ if and only if the weak limit points of dPₙ/dQₙ under Qₙ give mass 0 to 0, if and only if the weak limit points of dQₙ/dPₙ under Pₙ have mean 1.
Proof. The equivalence of (i) and (iv) follows directly from the definition of contiguity: Given statistics Tₙ, consider the sets Aₙ = {‖Tₙ‖ > ε}; given sets Aₙ, consider the statistics Tₙ = 1_{Aₙ}.
(i) ⇒ (ii). For simplicity of notation, we write just {n} for the given subsequence along which dPₙ/dQₙ ⇝ U under Qₙ. For given n, we define the function gₙ(ε) = Qₙ(dPₙ/dQₙ < ε) − P(U < ε). By the portmanteau lemma, lim inf gₙ(ε) ≥ 0 for every ε > 0. Then, for εₙ ↓ 0 at a sufficiently slow rate, also lim inf gₙ(εₙ) ≥ 0. Thus,
$$P(U = 0) \le P(U < \varepsilon_n) = Q_n\Big(\frac{dP_n}{dQ_n} < \varepsilon_n\Big) - g_n(\varepsilon_n) \le Q_n\Big(\frac{dP_n}{dQ_n} < \varepsilon_n\Big) + o(1).$$
Furthermore,
$$P_n\Big(\frac{dP_n}{dQ_n} < \varepsilon_n\Big) = \int_{\{dP_n/dQ_n < \varepsilon_n\}} \frac{dP_n}{dQ_n}\, dQ_n \le \varepsilon_n \to 0.$$
If Qₙ is contiguous with respect to Pₙ, then the Qₙ-probability of the set on the left goes to zero also. But this is the probability on the right in the first display. Combination shows that P(U = 0) = 0.
(iii) ⇒ (i). If Pₙ(Aₙ) → 0, then the sequence 1_{Ωₙ−Aₙ} converges to 1 in Pₙ-probability. By Prohorov's theorem, every subsequence of {n} has a further subsequence along which (dQₙ/dPₙ, 1_{Ωₙ−Aₙ}) ⇝ (V, 1) under Pₙ, for some weak limit V. The function (v, t) ↦ vt is continuous and nonnegative on the set [0, ∞) × {0, 1}. By the portmanteau lemma,
$$\liminf_{n\to\infty}\, \mathrm{E}_{P_n}\, \frac{dQ_n}{dP_n}\, 1_{\Omega_n - A_n} \ge \mathrm{E}\, V \cdot 1.$$
Under (iii) the right side equals EV = 1. Then the left side is 1 as well and the sequence Qₙ(Aₙ) = 1 − Qₙ(Ωₙ − Aₙ) converges to zero.
(ii) ⇒ (iii). The probability measures μₙ = ½(Pₙ + Qₙ) dominate both Pₙ and Qₙ, for every n. The sum of the densities of Pₙ and Qₙ with respect to μₙ equals 2. Hence, each of the densities takes its values in the compact interval [0, 2]. By Prohorov's theorem every subsequence possesses a further subsequence along which
$$\frac{dP_n}{dQ_n} \underset{Q_n}{\rightsquigarrow} U, \qquad \frac{dQ_n}{dP_n} \underset{P_n}{\rightsquigarrow} V, \qquad W_n := \frac{dP_n}{d\mu_n} \underset{\mu_n}{\rightsquigarrow} W,$$
for certain random variables U, V, and W. Every Wₙ has expectation 1 under μₙ. In view of the boundedness, the weak convergence of the sequence Wₙ implies convergence of moments, and the limit variable has mean EW = 1 as well. For a given bounded, continuous function f, define a function g: [0, 2] → ℝ by g(w) = f(w/(2 − w))(2 − w) for 0 ≤ w < 2 and g(2) = 0. Then g is bounded and continuous. Because dPₙ/dQₙ = Wₙ/(2 − Wₙ) and dQₙ/dμₙ = 2 − Wₙ, the portmanteau lemma yields
$$\mathrm{E}_{Q_n} f\Big(\frac{dP_n}{dQ_n}\Big) = \mathrm{E}_{\mu_n} f\Big(\frac{dP_n}{dQ_n}\Big)\frac{dQ_n}{d\mu_n} = \mathrm{E}_{\mu_n}\, g(W_n) \to \mathrm{E}\, f\Big(\frac{W}{2-W}\Big)(2-W),$$
where the integrand on the right side is understood to be g(2) = 0 if W = 2. By assumption, the left side converges to Ef(U). Thus Ef(U) equals the right side of the display for every continuous and bounded function f. Take a sequence of such functions with 1 ≥ f_m ↓ 1_{{0}}, and conclude by the dominated-convergence theorem that
6.5 Example (Asymptotic log normality). The following special case plays an important role in the asymptotic theory of smooth parametric models. Let Pₙ and Qₙ be probability measures on arbitrary measurable spaces such that
$$\frac{dP_n}{dQ_n} \underset{Q_n}{\rightsquigarrow} e^{N(\mu,\, \sigma^2)}.$$
Then Qₙ ◁ Pₙ. Furthermore, Qₙ ◁▷ Pₙ if and only if μ = −½σ².
Because the (log normal) variable on the right is positive, the first assertion is immediate from (ii) of the theorem. The second follows from (iii) with the roles of Pₙ and Qₙ switched, on noting that E exp N(μ, σ²) = 1 if and only if μ = −½σ².
A mean equal to minus half times the variance looks peculiar, but we shall see that this situation arises naturally in the study of the asymptotic optimality of statistical procedures. □
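Example 6.5 can be illustrated by simulation for products of normal experiments; in the sketch below (numpy assumed; the choices of n, the number of replications, and h are arbitrary) the exact log likelihood ratio of Pₙ = N(0,1)ⁿ against Qₙ = N(h/√n, 1)ⁿ is computed under Qₙ, and its mean and variance are compared with μ = −σ²/2 and σ² = h².

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, h = 100, 20000, 2.0
delta = h / np.sqrt(n)

# data generated under Qn = N(delta, 1)^n
x = rng.normal(delta, 1.0, size=(reps, n))

# log dPn/dQn = -delta * sum(x) + n * delta^2 / 2, one value per replication
loglr = -delta * x.sum(axis=1) + n * delta ** 2 / 2

mu_hat, var_hat = loglr.mean(), loglr.var()
# limit is N(mu, sigma^2) with sigma^2 = h^2 and mu = -sigma^2 / 2
```

The simulated mean is close to −h²/2 = −2 and the variance close to h² = 4, in line with mutual contiguity of these sequences (h fixed, hence bounded).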
The following theorem solves the problem of obtaining a Qₙ-limit law from a Pₙ-limit law that we posed in the introduction. The result, a version of Le Cam's third lemma, is in perfect analogy with the nonasymptotic situation.
6.6 Theorem. Suppose that Qₙ ◁ Pₙ and
$$\Big(X_n, \frac{dQ_n}{dP_n}\Big) \underset{P_n}{\rightsquigarrow} (X, V).$$
Then L(B) = E 1_B(X)V defines a probability measure, and Xₙ ⇝ L under Qₙ.
Proof. Because V ≥ 0, it follows with the help of the monotone convergence theorem that L defines a measure. By contiguity, EV = 1 and hence L is a probability measure. It is immediate from the definition of L that ∫f dL = Ef(X)V for every measurable indicator function f. Conclude, in steps, that the same is true for every simple function f, every nonnegative measurable function, and every integrable function.
If f is continuous and nonnegative, then so is the function (x, v) ↦ f(x)v on ℝᵏ × [0, ∞). Thus
If, under Pₙ,
$$\Big(X_n, \log\frac{dQ_n}{dP_n}\Big) \rightsquigarrow N_{k+1}\left(\begin{pmatrix}\mu\\ -\tfrac12\sigma^2\end{pmatrix},\ \begin{pmatrix}\Sigma & \tau\\ \tau^T & \sigma^2\end{pmatrix}\right),$$
then
$$X_n \underset{Q_n}{\rightsquigarrow} N_k(\mu + \tau,\, \Sigma).$$
In this situation the asymptotic covariance matrices of the sequence Xₙ are the same under Pₙ and Qₙ, but the mean vectors differ by the asymptotic covariance τ between Xₙ and the log likelihood ratios.†
The statement is a special case of the preceding theorem. Let (X, W) have the given (k + 1)-dimensional normal distribution. By the continuous-mapping theorem, the sequence (Xₙ, dQₙ/dPₙ) converges in distribution under Pₙ to (X, e^W). Because W is N(−½σ², σ²)-distributed, the sequences Pₙ and Qₙ are mutually contiguous. According to the abstract
† We set log 0 = −∞; because the normal distribution does not charge the point −∞, the assumed asymptotic normality of log dQₙ/dPₙ includes the assumption that Pₙ(dQₙ/dPₙ = 0) → 0.
version of Le Cam's third lemma, Xₙ ⇝ L under Qₙ, with L(B) = E 1_B(X)e^W. The characteristic function of L is ∫e^{itᵀx} dL(x) = E e^{itᵀX}e^W. This is the characteristic function of the given normal distribution at the vector (t, −i). Thus
$$\mathrm{E}\, e^{it^T X} e^W = e^{it^T(\mu+\tau) - \frac12 t^T \Sigma t}.$$
The right side is the characteristic function of the N_k(μ + τ, Σ) distribution. □
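The formula L(B) = E 1_B(X)e^W behind Le Cam's third lemma can be checked numerically by reweighting samples drawn under Pₙ. The sketch below (numpy assumed; Xₙ = √n X̄ₙ with normal location alternatives, so that the log likelihood ratio is available in closed form, and all constants are arbitrary) recovers a Qₙ-probability from Pₙ-draws.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, h = 20, 100000, 1.0

# draw under Pn = N(0,1)^n; the statistic is Xn = sqrt(n) * sample mean
x = rng.standard_normal((reps, n))
xn = np.sqrt(n) * x.mean(axis=1)        # exactly N(0, 1) under Pn

# log dQn/dPn for Qn = N(h/sqrt(n), 1)^n reduces to h * Xn - h^2 / 2
w = h * xn - h ** 2 / 2

# L(B) = E 1_B(Xn) e^W should reproduce the N(h, 1) law of Xn under Qn
est = np.mean((xn <= h) * np.exp(w))    # target: P(N(h,1) <= h) = 1/2
shift_est = np.mean(xn * np.exp(w))     # target: mean under Qn, i.e. h = 1
```

The reweighted probability is close to 1/2 and the reweighted mean close to h, matching the limit law N_k(μ + τ, Σ) of the example (here k = 1, μ = 0, τ = h, Σ = 1).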
Notes
The concept and theory of contiguity was developed by Le Cam in [92]. In his paper the
results that were later to become known as Le Cam's lemmas are listed as a single theorem.
The names "first" and "third" appear to originate from [71]. (The second lemma is on
product measures and the first lemma is actually only the implication (iii) ⇒ (i).)
PROBLEMS
1. Let Pₙ = N(0, 1) and Qₙ = N(μₙ, 1). Show that the sequences Pₙ and Qₙ are mutually contiguous if and only if the sequence μₙ is bounded.
2. Let Pₙ and Qₙ be the distributions of the mean of a sample of size n from the N(0, 1) and the N(θₙ, 1) distribution, respectively. Show that Pₙ ◁▷ Qₙ if and only if θₙ = O(1/√n).
3. Let Pₙ and Qₙ be the laws of a sample of size n from the uniform distribution on [0, 1] or [0, 1 + 1/n], respectively. Show that Pₙ ◁ Qₙ. Is it also true that Qₙ ◁ Pₙ? Use Lemma 6.4 to derive your answers.
4. Suppose that ‖Pₙ − Qₙ‖ → 0, where ‖·‖ is the total variation distance ‖P − Q‖ = sup_A |P(A) − Q(A)|. Show that Pₙ ◁▷ Qₙ.
5. Given ε > 0 find an example of sequences such that Pₙ ◁▷ Qₙ, but ‖Pₙ − Qₙ‖ → 1 − ε. (The maximum total variation distance between two probability measures is 1.) This exercise shows that it is wrong to think of contiguous sequences as being close. (Try measures that are supported on just two points.)
6. Give a simple example in which Pₙ ◁ Qₙ, but it is not true that Qₙ ◁ Pₙ.
7. Show that the constant sequences {P} and {Q} are contiguous if and only if P and Q are absolutely continuous with respect to each other.
8. If P ≪ Q, then Q(Aₙ) → 0 implies P(Aₙ) → 0 for every sequence of measurable sets. How does this follow from Lemma 6.4?
7
Local Asymptotic Normality
7.1 Introduction
Suppose we observe a sample X₁, ..., Xₙ from a distribution P_θ on some measurable space (𝒳, 𝒜) indexed by a parameter θ that ranges over an open subset Θ of ℝᵏ. Then the full observation is a single observation from the product P_θⁿ of n copies of P_θ, and the statistical model is completely described as the collection of probability measures {P_θⁿ: θ ∈ Θ} on the sample space (𝒳ⁿ, 𝒜ⁿ). In the context of the present chapter we shall speak of a statistical experiment, rather than of a statistical model. In this chapter it is shown that many statistical experiments can be approximated by Gaussian experiments after a suitable reparametrization.
The reparametrization is centered around a fixed parameter θ₀, which should be regarded as known. We define a local parameter h = √n(θ − θ₀), rewrite P_θⁿ as Pⁿ_{θ₀+h/√n}, and thus obtain an experiment with parameter h. In this chapter we show that, for large n, the experiments
$$\big(P^n_{\theta_0+h/\sqrt n}: h \in \mathbb{R}^k\big) \quad\text{and}\quad \big(N(h,\, I_{\theta_0}^{-1}): h \in \mathbb{R}^k\big)$$
are similar in statistical properties, whenever the original experiments θ ↦ P_θ are "smooth" in the parameter. The second experiment consists of observing a single observation from a normal distribution with mean h and known covariance matrix (equal to the inverse of the Fisher information matrix). This is a simple experiment, which is easy to analyze, whence the approximation yields much information about the asymptotic properties of the original experiments. This information is extracted in several chapters to follow and concerns both asymptotic optimality theory and the behavior of statistical procedures such as the maximum likelihood estimator and the likelihood ratio test.
We have taken the local parameter set equal to ℝᵏ, which is not correct if the parameter set Θ is a true subset of ℝᵏ. If θ₀ is an inner point of the original parameter set, then the vector θ = θ₀ + h/√n is a parameter in Θ for a given h, for every sufficiently large n, and the local parameter set converges to the whole of ℝᵏ as n → ∞. Then taking the local parameter set equal to ℝᵏ does not cause errors. To give a meaning to the results of this chapter, the measure P_{θ₀+h/√n} may be defined arbitrarily if θ₀ + h/√n ∉ Θ.
$$\log\frac{p_{\theta+h}}{p_\theta}(x) = h\,\dot\ell_\theta(x) + \tfrac12 h^2\,\ddot\ell_\theta(x) + o_x(h^2).$$
The subscript x in the remainder term is a reminder of the fact that this term depends on x
as well as on h. It follows that
$$\log\prod_{i=1}^n \frac{p_{\theta+h/\sqrt n}}{p_\theta}(X_i) = \frac{h}{\sqrt n}\sum_{i=1}^n \dot\ell_\theta(X_i) + \frac12\,\frac{h^2}{n}\sum_{i=1}^n \ddot\ell_\theta(X_i) + \mathrm{Rem}_n.$$
Here the score has mean zero, $P_\theta\dot\ell_\theta = 0$, and $-P_\theta\ddot\ell_\theta = P_\theta\dot\ell_\theta^2 = I_\theta$ equals the Fisher information for θ (see, e.g., section 5.5). Hence the first term can be rewritten as $h\Delta_{n,\theta}$, where $\Delta_{n,\theta} = n^{-1/2}\sum_{i=1}^n \dot\ell_\theta(X_i)$ is asymptotically normal with mean zero and variance I_θ, by the central limit theorem. Furthermore, the second term in the expansion is asymptotically equivalent to −½h²I_θ, by the law of large numbers. The remainder term should behave as o(1/n) times a sum of n terms and hopefully is asymptotically negligible. Consequently,
under suitable conditions we have, for every h,
$$\log\prod_{i=1}^n \frac{p_{\theta+h/\sqrt n}}{p_\theta}(X_i) = h\,\Delta_{n,\theta} - \frac12 I_\theta h^2 + o_{P_\theta}(1).$$
In the next section we see that this is similar in form to the likelihood ratio process of a Gaussian experiment. Because this expansion concerns the likelihood process in a neighborhood of θ, we speak of "local asymptotic normality" of the sequence of models {P_θⁿ: θ ∈ Θ}.
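The expansion can be checked numerically for a concrete model; the sketch below (numpy assumed; θ, h, and n are arbitrary) uses Poisson(θ) observations, for which the score is ℓ̇_θ(x) = x/θ − 1 and I_θ = 1/θ, and compares the exact log likelihood ratio with hΔ_{n,θ} − ½I_θh².

```python
import numpy as np

rng = np.random.default_rng(3)
theta, h, n = 2.0, 1.0, 10000
delta = h / np.sqrt(n)

x = rng.poisson(theta, size=n)
s = x.sum()

# exact log likelihood ratio for Poisson: the log-factorial terms cancel
exact = s * np.log((theta + delta) / theta) - n * delta

# LAN approximation h * Delta_n - I * h^2 / 2, with score x/theta - 1, I = 1/theta
Delta_n = (s - n * theta) / (theta * np.sqrt(n))
approx = h * Delta_n - 0.5 * (1 / theta) * h ** 2

remainder = exact - approx   # should be small, of order o_P(1)
```

At n = 10000 the remainder is already of the order of a few thousandths, illustrating the o_P(1) term in the expansion.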
The preceding derivation can be made rigorous under moment or continuity conditions on the second derivative of the log likelihood. Local asymptotic normality was originally deduced in this manner. Surprisingly, it can also be established under a single condition that only involves a first derivative: differentiability of the root density θ ↦ √p_θ in quadratic mean. This entails the existence of a vector of measurable functions $\dot\ell_\theta = (\dot\ell_{\theta,1}, \ldots, \dot\ell_{\theta,k})^T$ such that, as h → 0,
$$\int\Big[\sqrt{p_{\theta+h}} - \sqrt{p_\theta} - \tfrac12 h^T\dot\ell_\theta\sqrt{p_\theta}\Big]^2 d\mu = o\big(\|h\|^2\big). \tag{7.1}$$
If this condition is satisfied, then the model (P_θ: θ ∈ Θ) is called differentiable in quadratic mean at θ.
Usually, $\tfrac12 h^T\dot\ell_\theta(x)\sqrt{p_\theta(x)}$ is the derivative of the map h ↦ √p_{θ+h}(x) at h = 0 for (almost) every x. In this case
$$\dot\ell_\theta(x) = 2\,\frac{1}{\sqrt{p_\theta(x)}}\,\frac{\partial}{\partial\theta}\sqrt{p_\theta(x)} = \frac{\partial}{\partial\theta}\log p_\theta(x).$$
Condition (7.1) does not require differentiability of the map θ ↦ p_θ(x) for any single x, but rather differentiability in (quadratic) mean. Admittedly, the latter is typically established by pointwise differentiability plus a convergence theorem for integrals. Because the condition is exactly right for its purpose, we establish in the following theorem local asymptotic normality under (7.1). A lemma following the theorem gives easily verifiable conditions in terms of pointwise derivatives.
7.2 Theorem. Suppose that Θ is an open subset of ℝᵏ and that the model (P_θ: θ ∈ Θ) is differentiable in quadratic mean at θ. Then $P_\theta\dot\ell_\theta = 0$ and the Fisher information matrix $I_\theta = P_\theta\dot\ell_\theta\dot\ell_\theta^T$ exists. Furthermore, for every converging sequence hₙ → h, as n → ∞,
$$\log\prod_{i=1}^n \frac{p_{\theta+h_n/\sqrt n}}{p_\theta}(X_i) = \frac{1}{\sqrt n}\sum_{i=1}^n h^T\dot\ell_\theta(X_i) - \frac12 h^T I_\theta h + o_{P_\theta}(1).$$
Proof. Given a converging sequence hₙ → h, we use the abbreviations pₙ, p, and g for $p_{\theta+h_n/\sqrt n}$, $p_\theta$, and $h^T\dot\ell_\theta$, respectively. By (7.1) the sequence √n(√pₙ − √p) converges in quadratic mean (i.e., in L₂(μ)) to ½g√p. This implies that the sequence √pₙ converges in quadratic mean to √p. By the continuity of the inner product,
$$\int \frac12 g\sqrt p\cdot 2\sqrt p\, d\mu = \lim_{n\to\infty} \sqrt n\int\big(\sqrt{p_n}-\sqrt p\,\big)\big(\sqrt{p_n}+\sqrt p\,\big)\, d\mu = \lim_{n\to\infty} \sqrt n\Big(\int p_n\, d\mu - \int p\, d\mu\Big).$$
The right side equals √n(1 − 1) = 0 for every n, because both probability densities integrate to 1. Thus Pg = 0.
The random variable $W_{ni} = 2\big[\sqrt{p_n/p}(X_i) - 1\big]$ is with P-probability 1 well defined. By (7.1),
$$\mathrm{var}\Big(\sum_{i=1}^n W_{ni} - \frac{1}{\sqrt n}\sum_{i=1}^n g(X_i)\Big) \le \mathrm{E}\big(\sqrt n\, W_{n1} - g(X_1)\big)^2 \to 0, \tag{7.3}$$
$$\mathrm{E}\sum_{i=1}^n W_{ni} = 2n\Big(\int\sqrt{p_n}\sqrt p\, d\mu - 1\Big) = -n\int\big(\sqrt{p_n}-\sqrt p\,\big)^2 d\mu \to -\frac14 Pg^2. \tag{7.4}$$
Next, we express the log likelihood ratio in $\sum_{i=1}^n W_{ni}$ through a Taylor expansion of the logarithm. If we write log(1 + x) = x − ½x² + x²R(2x), then R(x) → 0 as x → 0, and
$$\log\prod_{i=1}^n \frac{p_n}{p}(X_i) = 2\sum_{i=1}^n \log\Big(1 + \frac12 W_{ni}\Big) = \sum_{i=1}^n W_{ni} - \frac14\sum_{i=1}^n W_{ni}^2 + \frac12\sum_{i=1}^n W_{ni}^2 R(W_{ni}). \tag{7.5}$$
As a consequence of the right side of (7.3), it is possible to write $nW_{ni}^2 = g^2(X_i) + A_{ni}$ for random variables $A_{ni}$ such that $\mathrm{E}|A_{ni}| \to 0$. The averages $\bar A_n$ converge in mean and hence in probability to zero. Combination with the law of large numbers yields
$$\sum_{i=1}^n W_{ni}^2 = \overline{g^2}_n + \bar A_n \xrightarrow{\ P\ } Pg^2.$$
By the triangle inequality followed by Markov's inequality,
$$n P\big(|W_{n1}| > \varepsilon\sqrt 2\big) \le \frac{2}{\varepsilon^2}\,\mathrm{E}\big(\sqrt n\, W_{n1} - g(X_1)\big)^2 + \frac{2}{\varepsilon^2}\,\mathrm{E}\, g^2(X_1)\,1\big\{|g(X_1)| > \varepsilon\sqrt{n/2}\big\} \to 0.$$
The left side is an upper bound for $P\big(\max_{1\le i\le n}|W_{ni}| > \varepsilon\sqrt 2\big)$. Thus the sequence $\max_{1\le i\le n}|W_{ni}|$ converges to zero in probability. By the property of the function R, the sequence $\max_{1\le i\le n}|R(W_{ni})|$ converges in probability to zero as well. The last term on the right in (7.5) is bounded by $\max_{1\le i\le n}|R(W_{ni})|\sum_{i=1}^n W_{ni}^2$. Thus it is $o_P(1)O_P(1)$, and converges in probability to zero. Combine to obtain that
$$\log\prod_{i=1}^n \frac{p_n}{p}(X_i) = \sum_{i=1}^n W_{ni} - \frac14 Pg^2 + o_P(1).$$
Together with (7.4) this yields the theorem. ∎
7.6 Lemma. For every θ in an open subset of ℝᵏ let p_θ be a μ-probability density. Assume that the map θ ↦ s_θ(x) = √p_θ(x) is continuously differentiable for every x. If the elements of the matrix $I_\theta = \int(\dot p_\theta/p_\theta)(\dot p_\theta/p_\theta)^T p_\theta\, d\mu$ are well defined and continuous in θ, then the map θ ↦ √p_θ is differentiable in quadratic mean (7.1) with $\dot\ell_\theta$ given by $\dot p_\theta/p_\theta$.
Proof. By the chain rule, the map θ ↦ p_θ(x) = s_θ²(x) is differentiable for every x with gradient $\dot p_\theta = 2 s_\theta \dot s_\theta$. Because s_θ is nonnegative, its gradient $\dot s_\theta$ at a point at which s_θ = 0 must be zero. Conclude that we can write $\dot s_\theta = \tfrac12(\dot p_\theta/p_\theta)\sqrt{p_\theta}$, where the quotient $\dot p_\theta/p_\theta$ may be defined arbitrarily if p_θ = 0. By assumption, the map $\theta \mapsto I_\theta = 4\int \dot s_\theta \dot s_\theta^T\, d\mu$ is continuous.
Because the map θ ↦ s_θ(x) is continuously differentiable, the difference $s_{\theta+h}(x) - s_\theta(x)$ can be written as the integral $\int_0^1 h^T\dot s_{\theta+uh}(x)\, du$ of its derivative. By Jensen's (or Cauchy-Schwarz's) inequality, the square of this integral is bounded by the integral $\int_0^1\big(h^T\dot s_{\theta+uh}(x)\big)^2\, du$ of the square. Hence, for hₜ → h,
$$\int\Big(\frac{s_{\theta+th_t} - s_\theta}{t}\Big)^2 d\mu \le \int\!\!\int_0^1 \big(h_t^T\dot s_{\theta+uth_t}\big)^2\, du\, d\mu = \frac14\int_0^1 h_t^T I_{\theta+uth_t}\, h_t\, du,$$
where the last equality follows by Fubini's theorem and the definition of I_θ. For hₜ → h the right side converges to $\tfrac14 h^T I_\theta h = \int\big(h^T\dot s_\theta\big)^2 d\mu$ by the continuity of the map θ ↦ I_θ.
By the differentiability of the map θ ↦ s_θ(x) the integrand in
$$\int\Big(\frac{s_{\theta+th_t} - s_\theta}{t} - h^T\dot s_\theta\Big)^2 d\mu$$
converges pointwise to zero. The result of the preceding paragraph combined with Proposition 2.29 shows that the integral converges to zero. ∎
7.7 Example (Exponential families). The preceding lemma applies to most exponential family models
$$p_\theta(x) = d(\theta)\, h(x)\, e^{Q(\theta)^T t(x)}.$$
An exponential family model is smooth in its natural parameter (away from the boundary of the natural parameter space). Thus the maps θ ↦ √p_θ(x) are continuously differentiable if the maps θ ↦ Q(θ) are continuously differentiable and map the parameter set Θ into the interior of the natural parameter space. The score function and information matrix equal
$$\dot\ell_\theta(x) = \dot Q_\theta^T\big(t(x) - \mathrm{E}_\theta t(X)\big), \qquad I_\theta = \dot Q_\theta^T\,\mathrm{Cov}_\theta\, t(X)\,\dot Q_\theta.$$
Thus the asymptotic expansion of the local log likelihood is valid for most exponential families. □
7.8 Example (Location models). The preceding lemma also includes all location models {f(x − θ): θ ∈ ℝ} for a positive, continuously differentiable density f with finite Fisher information for location
$$I_f = \int\Big(\frac{f'}{f}\Big)^2 f\, dx.$$
The score function $\dot\ell_\theta(x)$ can be taken equal to −(f′/f)(x − θ). The Fisher information is equal to I_f for every θ and hence certainly continuous in θ.
By a refinement of the lemma, differentiability in quadratic mean can also be established for slightly irregular shapes, such as the Laplace density f(x) = ½e^{−|x|}. For the Laplace density the map θ ↦ log f(x − θ) fails to be differentiable at the single point θ = x. At other points the derivative exists and equals sign(x − θ). It can be shown that the Laplace location model is differentiable in quadratic mean with score function $\dot\ell_\theta(x) = \mathrm{sign}(x - \theta)$. This may be proved by writing the difference √f(x − h) − √f(x) as the integral $\int_0^1 \tfrac12 h\,\mathrm{sign}(x - uh)\sqrt{f(x - uh)}\, du$ of its derivative, which is possible even though the derivative does not exist everywhere. Next the proof of the preceding lemma applies. □
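The quadratic-mean differentiability of the Laplace location model can also be checked numerically; the sketch below (scipy assumed; the integration range and the two values of h are arbitrary) evaluates the remainder integral in (7.1) at θ = 0 and verifies that, divided by h², it decreases with h.

```python
import numpy as np
from scipy.integrate import quad

def s(x):
    # square root of the Laplace density f(x) = exp(-|x|) / 2
    return np.sqrt(0.5 * np.exp(-np.abs(x)))

def qmd_remainder(h):
    # integral of (sqrt f(x-h) - sqrt f(x) - (h/2) sign(x) sqrt f(x))^2 dx,
    # i.e. the left side of (7.1) at theta = 0 with score sign(x)
    integrand = lambda x: (s(x - h) - s(x) - 0.5 * h * np.sign(x) * s(x)) ** 2
    val, _ = quad(integrand, -20.0, 20.0, points=[0.0, h], limit=200)
    return val

# the ratio remainder / h^2 should tend to zero as h -> 0
r1 = qmd_remainder(0.2) / 0.2 ** 2
r2 = qmd_remainder(0.02) / 0.02 ** 2
```

Here the remainder is of order h³ (the mismatch is confined to an interval of length h around the kink), so the ratio shrinks roughly linearly in h, in line with the o(h²) requirement of (7.1).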
Thus, under (7.1) the total mass $P_{\theta+h}(p_\theta = 0)$ of $P_{\theta+h}$ that is orthogonal to $P_\theta$ must "disappear" as h → 0 at a rate faster than h².
This is not true for the uniform distribution, because, for h ≥ 0,
$$P_{\theta+h}(p_\theta = 0) = \int \frac{1}{\theta+h}\, 1_{[0,\theta+h]}(x)\,\big(1 - 1_{[0,\theta]}(x)\big)\, dx = \frac{h}{\theta+h}.$$
The orthogonal part does converge to zero, but only at the rate O(h). □
$$\log\frac{dN(h,\, I_\theta^{-1})}{dN(0,\, I_\theta^{-1})}(X) = h^T I_\theta X - \frac12 h^T I_\theta h.$$
The right side is very similar in form to the right side of the expansion of the log likelihood ratio $\log dP^n_{\theta+h/\sqrt n}/dP^n_\theta$ given in Theorem 7.2. In view of the similarity, the possibility of a normal approximation is not a complete surprise. The approximation in this section is "local" in nature: We fix θ and think of
$$\big(P^n_{\theta+h/\sqrt n}: h \in \mathbb{R}^k\big)$$
as a statistical model with parameter h, for "known" θ. We show that this can be approximated by the statistical model (N(h, I_θ⁻¹): h ∈ ℝᵏ).
A motivation for studying a local approximation is that, usually, asymptotically, the "true" parameter can be known with unlimited precision. The true statistical difficulty is therefore determined by the nature of the measures P_θ for θ in a small neighbourhood of the true value. In the present situation "small" turns out to be "of size O(1/√n)."
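The Gaussian log likelihood ratio displayed earlier in this section can be verified numerically; the sketch below (scipy assumed; the information matrix and the evaluation point are arbitrary choices for illustration) checks the identity log(dN(h, I⁻¹)/dN(0, I⁻¹))(x) = hᵀIx − ½hᵀIh in two dimensions.

```python
import numpy as np
from scipy.stats import multivariate_normal

# an arbitrary nonsingular "Fisher information" matrix and local parameter
info = np.array([[2.0, 0.5],
                 [0.5, 1.0]])
h = np.array([0.3, -0.7])
cov = np.linalg.inv(info)          # covariance of the limit experiment

x = np.array([0.4, 1.1])           # arbitrary observation point
lhs = (multivariate_normal.logpdf(x, mean=h, cov=cov)
       - multivariate_normal.logpdf(x, mean=np.zeros(2), cov=cov))
rhs = h @ info @ x - 0.5 * h @ info @ h
```

The two sides agree to floating-point precision, since expanding the two quadratic forms cancels the common −½xᵀIx term.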
A relationship between the models that can be statistically interpreted will be described through the possible (limit) distributions of statistics. For each n, let Tₙ = Tₙ(X₁, ..., Xₙ) be a statistic in the experiment (Pⁿ_{θ+h/√n}: h ∈ ℝᵏ) with values in a fixed Euclidean space. Suppose that the sequence of statistics Tₙ converges in distribution under every possible (local) parameter:
$$T_n \overset{h}{\rightsquigarrow} L_{\theta,h}, \qquad \text{every } h.$$
Here ⇝ʰ means convergence in distribution under the parameter θ + h/√n, and L_{θ,h} may be any probability distribution. According to the following theorem, the distributions {L_{θ,h}: h ∈ ℝᵏ} are necessarily the distributions of a statistic T in the normal experiment (N(h, I_θ⁻¹): h ∈ ℝᵏ). Thus, every weakly converging sequence of statistics is "matched" by a statistic in the limit experiment. (In the present set-up the vector θ is considered known and the vector h is the statistical parameter. Consequently, by "statistics" Tₙ and T are understood measurable maps that do not depend on h but may depend on θ.)
This principle of matching estimators is a method to give the convergence of models
a statistical interpretation. Most measures of quality of a statistic can be expressed in the
distribution of the statistic under different parameters. For instance, if a certain hypothesis
is rejected for values of a statistic Tn exceeding a number c, then the power function
h ↦ P_h(Tₙ > c) is relevant; alternatively, if Tₙ is an estimator of h, then the mean square error h ↦ E_h(Tₙ − h)², or a similar quantity, determines the quality of Tₙ. Both quality
measures depend on the laws of the statistics only. The following theorem asserts that as a
function of h the law of a statistic Tn can be well approximated by the law of some statistic
T. Then the quality of the approximating T is the same as the "asymptotic quality" of the
sequence Tn. Investigation of the possible T should reveal the asymptotic performance of
possible sequences Tn. Concrete applications of this principle to testing and estimation are
given in later chapters.
A minor technical complication is that it is necessary to allow randomized statistics in
the limit experiment. A randomized statistic T based on the observation X is defined as a
measurable map T = T(X, U) that depends on X but may also depend on an independent
variable U with a uniform distribution on [0, 1]. Thus, the statistician working in the limit
experiment is allowed to base an estimate or test on both the observation and the outcome of
an extra experiment that can be run without knowledge of the parameter. In most situations
such randomization is not useful, but the following theorem would not be true without it.†
7.10 Theorem. Assume that the experiment (P_θ: θ ∈ Θ) is differentiable in quadratic mean (7.1) at the point θ with nonsingular Fisher information matrix I_θ. Let Tₙ be statistics in the experiments (Pⁿ_{θ+h/√n}: h ∈ ℝᵏ) such that the sequence Tₙ converges in distribution under every h. Then there exists a randomized statistic T in the experiment (N(h, I_θ⁻¹): h ∈ ℝᵏ) such that Tₙ ⇝ʰ T for every h.
By assumption, the marginals of the sequence (Tₙ, Δₙ) converge in distribution under h = 0; hence they are uniformly tight by Prohorov's theorem. Because marginal tightness implies joint tightness, Prohorov's theorem can be applied in the other direction to see the existence of a subsequence of {n} along which
† It is not important that U is uniformly distributed. Any randomization mechanism that is sufficiently rich will do.
(Tn, Δn) ⇝ (S, Δ)

jointly, for some random vector (S, Δ). The vector Δ is necessarily a marginal weak limit of the sequence Δn and hence it is N(0, Iθ)-distributed. Combination with Theorem 7.2 yields
( Tn, log (dP_{n,h}/dP_{n,0}) ) ⇝ ( S, h^T Δ − ½ h^T Iθ h ).
In particular, the sequence log dP_{n,h}/dP_{n,0} converges to the normal N(−½ h^T Iθ h, h^T Iθ h)-distribution. By Example 6.5, the sequences P_{n,h} and P_{n,0} are contiguous. The limit law L_h of Tn under h can therefore be expressed in the joint law on the right, by the general form of Le Cam's third lemma: For each Borel set B,

L_h(B) = E 1_B(S) e^{h^T Δ − ½ h^T Iθ h}.

We need to find a statistic T in the normal experiment having this law under h (for every h), using only the knowledge that Δ is N(0, Iθ)-distributed.
By the lemma below there exists a randomized statistic T such that, with U uniformly distributed and independent of Δ, the pair (T(Δ, U), Δ) is equal in distribution to (S, Δ). Because the random vectors on the left and right sides have the same second marginal distribution, this is the same as saying that T(δ, U) is distributed according to the conditional distribution of S given Δ = δ, for almost every δ. As shown in the next lemma, this can be achieved by using the quantile transformation.
Let X be an observation in the limit experiment (N(h, Iθ^{-1}) : h ∈ ℝ^k). Then Iθ X is under h = 0 normally N(0, Iθ)-distributed and hence it is equal in distribution to Δ. Furthermore, by Fubini's theorem, the randomized statistic T(Iθ X, U) possesses the desired law L_h under every h. ∎
7.11 Lemma. Given a random vector (S, Δ) with values in ℝ^d × ℝ^k and an independent uniformly [0, 1] distributed random variable U (defined on the same probability space), there exists a jointly measurable map T on ℝ^k × [0, 1] such that (T(Δ, U), Δ) and (S, Δ) are equal in distribution.
Proof. The variable U can be decomposed into two independent uniformly distributed variables U1 and U2 (constructed, for instance, from the alternate digits of its binary expansion). We consider the case d = 2, writing S = (S1, S2); then it suffices to construct T such that T(δ, U1, U2) is equal in distribution to S given Δ = δ, for every δ ∈ ℝ^k. Let Q1(u1 | δ) and Q2(u2 | δ, s1) be the quantile functions of the conditional distributions of S1 given Δ = δ and of S2 given Δ = δ, S1 = s1, respectively. These are measurable functions in their two and three arguments, respectively. Furthermore, Q1(U1 | δ) has law P^{S1 | Δ = δ} and Q2(U2 | δ, s1) has law P^{S2 | Δ = δ, S1 = s1}, for every δ and s1. Set

T(δ, U1, U2) = ( Q1(U1 | δ), Q2(U2 | δ, Q1(U1 | δ)) ).

Then the first coordinate Q1(U1 | δ) of T(δ, U1, U2) possesses the distribution P^{S1 | Δ = δ}. Given that this first coordinate equals s1, the second coordinate is distributed as Q2(U2 | δ, s1), which has law P^{S2 | Δ = δ, S1 = s1} by construction. Thus T satisfies the requirements. ∎
Then the maximum likelihood estimator in the limit experiment is a "projection" of X, and the limit distribution of √n(θ̂n − θ) may change accordingly.
Let Θ be an arbitrary subset of ℝ^k and define Hn as the local parameter space Hn = √n(Θ − θ). Then ĥn = √n(θ̂n − θ) is the maximizer over Hn of the random function (or "process")

h ↦ log (dP^n_{θ+h/√n} / dP^n_θ).
If the experiment (Pθ : θ ∈ Θ) is differentiable in quadratic mean, then this sequence of processes converges (marginally) in distribution to the process

h ↦ log (dN(h, Iθ^{-1}) / dN(0, Iθ^{-1}))(X) = −½ (X − h)^T Iθ (X − h) + ½ X^T Iθ X.

If the sequence of sets Hn converges in a suitable sense to a set H, then we should expect, under regularity conditions, that the sequence ĥn converges to the maximizer ĥ of the latter process over H. This maximizer is the projection of the vector X onto the set H relative to the metric d(x, y) = (x − y)^T Iθ (x − y) (where a "projection" means a closest point); if H = ℝ^k, this projection reduces to X itself.
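A minimal sketch of this projection, under simplifying assumptions of our own (a diagonal information matrix and H the nonnegative orthant, the local parameter set when θ lies on the boundary of Θ = [0, ∞)^k): for diagonal Iθ the metric is a weighted Euclidean distance, so the closest point of the orthant is the coordinatewise positive part of X, and for H = ℝ^k the projection is X itself, as stated in the text.

```python
# Projection h_hat = argmin over h in H of (X - h)^T I_theta (X - h),
# assuming (hypothetically) a diagonal I_theta and H = [0, inf)^k.
# With diagonal weights the minimization separates over coordinates,
# so the orthant projection simply clips negative coordinates at zero.

def project(x, orthant=True):
    if not orthant:                       # H = R^k: the closest point is X
        return list(x)
    return [max(xi, 0.0) for xi in x]     # clip negative coordinates

X = [1.3, -0.7, 0.2]
print(project(X))                  # → [1.3, 0.0, 0.2]
print(project(X, orthant=False))   # → [1.3, -0.7, 0.2]
```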
An appropriate notion of convergence of sets is the following. Write Hn → H if H is the set of all limits lim hn of converging sequences hn with hn ∈ Hn for every n and, moreover, the limit h = lim_i h_{n_i} of every converging sequence h_{n_i} with h_{n_i} ∈ H_{n_i} for every i is contained in H.†
7.12 Theorem. Suppose that the experiment (Pθ : θ ∈ Θ) is differentiable in quadratic mean at θ0 with nonsingular Fisher information matrix I_{θ0}. Furthermore, suppose that for every θ1 and θ2 in a neighborhood of θ0 and a measurable function ℓ̇ with P_{θ0} ℓ̇² < ∞,

|log p_{θ1} − log p_{θ2}| ≤ ℓ̇ ‖θ1 − θ2‖.
*Proof. Let Gn = √n(ℙn − P_{θ0}) be the empirical process. In the proof of Theorem 5.39 it is shown that the map θ ↦ log p_θ is differentiable at θ0 in L2(P_{θ0}) with derivative ℓ̇_{θ0} and that the map θ ↦ P_{θ0} log p_θ permits a Taylor expansion of order 2 at θ0, with "second-derivative matrix" −I_{θ0}. Therefore, the conditions of Lemma 19.31 are satisfied for m_θ = log p_θ, whence, for every M,

sup_{‖h‖ ≤ M} | n ℙn log (p_{θ0+h/√n}/p_{θ0}) − h^T Gn ℓ̇_{θ0} + ½ h^T I_{θ0} h | →P 0.
estimators ĥn are bounded in probability and hence belong to the balls of radius Mn with probability tending to 1. Furthermore, the sequence of intersections Hn ∩ ball(0, Mn) converges to H, as do the original sets Hn. Thus, we may assume that the ĥn are the maximum likelihood estimators relative to local parameter sets Hn that are contained in the balls of radius Mn. Fix an arbitrary closed set F. If ĥn ∈ F, then the log likelihood is maximal on F. Hence P(ĥn ∈ F) is bounded above by
P( sup_{h ∈ F ∩ Hn} n ℙn log (p_{θ0+h/√n}/p_{θ0}) ≥ sup_{h ∈ Hn} n ℙn log (p_{θ0+h/√n}/p_{θ0}) )
  = P( sup_{h ∈ F ∩ Hn} ( h^T Gn ℓ̇_{θ0} − ½ h^T I_{θ0} h ) ≥ sup_{h ∈ Hn} ( h^T Gn ℓ̇_{θ0} − ½ h^T I_{θ0} h ) + oP(1) )
  = P( ‖I_{θ0}^{-1/2} Gn ℓ̇_{θ0} − I_{θ0}^{1/2}(F ∩ Hn)‖ ≤ ‖I_{θ0}^{-1/2} Gn ℓ̇_{θ0} − I_{θ0}^{1/2} Hn‖ + oP(1) ),
by completing the square. By Lemma 7.13 (ii) and (iii) ahead, we can replace Hn by H on both sides, at the cost of adding a further oP(1)-term and increasing the probability. Next, by the continuous-mapping theorem and the continuity of the map z ↦ ‖z − A‖ for every set A, the probability is asymptotically bounded above by, with Z a standard normal vector,
Proof. (i). Because the map x ↦ ‖x − H‖ is (Lipschitz) continuous for any set H, we have that ‖Xn − H‖ ⇝ ‖X − H‖ by the continuous-mapping theorem. If we also show that ‖Xn − Hn‖ − ‖Xn − H‖ →P 0, then the proof is complete after an application of Slutsky's lemma. By the uniform tightness of the sequence Xn, it suffices to show that ‖x − Hn‖ → ‖x − H‖ uniformly for x ranging over compact sets, or equivalently that ‖xn − Hn‖ → ‖x − H‖ for every converging sequence xn → x.
For every fixed vector xn, there exists a vector hn ∈ Hn with ‖xn − Hn‖ ≥ ‖xn − hn‖ − 1/n. Unless ‖xn − Hn‖ is unbounded, we can choose the sequence hn bounded. Then every subsequence of hn has a further subsequence along which it converges, to a limit h in H. Conclude that, in any case, lim inf ‖xn − Hn‖ ≥ ‖x − H‖. Conversely, for every ε > 0 there exist h ∈ H and a sequence hn → h with hn ∈ Hn and

‖x − H‖ ≥ ‖x − h‖ − ε = lim ‖xn − hn‖ − ε ≥ lim sup ‖xn − Hn‖ − ε.
Combination of the last two displays yields the desired convergence of the sequence ‖xn − Hn‖ to ‖x − H‖.
(ii). The assertion is equivalent to the statement P(‖Xn − Hn ∩ F‖ − ‖Xn − H ∩ F‖ > −ε) → 1 for every ε > 0. In view of the uniform tightness of the sequence Xn, this follows if lim inf ‖xn − Hn ∩ F‖ ≥ ‖x − H ∩ F‖ for every converging sequence xn → x. We can prove this by the method of the first half of the proof of (i), replacing Hn by Hn ∩ F.
(iii). Analogously to the situation under (ii), it suffices to prove that lim sup ‖xn − Hn ∩ G‖ ≤ ‖x − H ∩ G‖ for every converging sequence xn → x. This follows as in the second half of the proof of (i). ∎
log (dP^n_{θ+h/√n} / dP^n_θ) ⇝ N( −½ h^T Iθ h, h^T Iθ h ).
Therefore, in view of Example 6.5 the sequences of distributions P^n_{θ+h/√n} and P^n_θ are mutually contiguous. This is of great use in many proofs. With the help of Le Cam's third lemma it also allows us to obtain limit distributions of statistics under the parameters θ + h/√n, once the limit behavior under θ is known. Such limit distributions are of interest, for instance, in studying the asymptotic efficiency of estimators or tests.
The general scheme is as follows. Many sequences of statistics Tn allow an approximation by an average of the type

√n(Tn − μθ) = (1/√n) Σ_{i=1}^n ψθ(X_i) + o_{Pθ}(1).
According to Theorem 7.2, the sequence of log likelihood ratios can be approximated by an average as well: It is asymptotically equivalent to an affine transformation of n^{-1/2} Σ ℓ̇θ(X_i). The sequence of joint averages n^{-1/2} Σ (ψθ(X_i), ℓ̇θ(X_i)) is asymptotically multivariate normal under θ by the central limit theorem (provided ψθ has mean zero and finite second moment). With the help of Slutsky's lemma we obtain the joint limit distribution of Tn and the log likelihood ratios under θ:
Finally, we can apply Le Cam's third lemma, as in Example 6.7, to obtain the limit distribution of √n(Tn − μθ) under θ + h/√n. Concrete examples of this scheme are discussed in later chapters.
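This scheme can be illustrated in its very simplest instance (our choice, not the general argument): in the N(θ, 1) model with Tn the sample mean, √n(Tn − θ) is N(h, 1) under θ + h/√n, which is precisely the shifted limit law that Le Cam's third lemma delivers for general asymptotically linear statistics.

```python
import math, random
from statistics import mean, stdev

# Simulation sketch of the shifted limit under a local alternative:
# X_1, ..., X_n ~ N(theta + h/sqrt(n), 1), Tn the sample mean; then
# sqrt(n)(Tn - theta) should be (approximately) N(h, 1).
rng = random.Random(1)
theta, h, n, reps = 0.0, 2.0, 100, 4000
vals = []
for _ in range(reps):
    xbar = mean(rng.gauss(theta + h / math.sqrt(n), 1.0) for _ in range(n))
    vals.append(math.sqrt(n) * (xbar - theta))

print(f"under theta + h/sqrt(n): mean ~ {mean(vals):.2f} (h = {h}), "
      f"sd ~ {stdev(vals):.2f}")
```

For the sample mean this shift can be verified directly; the point of the lemma is that the same conclusion holds for any statistic admitting the linear approximation above, via contiguity rather than direct computation.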
wide variety of models satisfy a general form of local asymptotic normality and for that reason allow a unified treatment. These include models with independent, not identically distributed observations, but also models with dependent observations, such as those used in time series analysis or certain random fields. Because local asymptotic normality underlies a large part of asymptotic optimality theory and also explains the asymptotic normality of certain estimators, such as maximum likelihood estimators, it is worthwhile to formulate a general concept.
Suppose the observation at "time" n is distributed according to a probability measure P_{n,θ}, for a parameter θ ranging over an open subset Θ of ℝ^k.
log (dP_{n,θ+r_n^{-1}h} / dP_{n,θ}) = h^T Δ_{n,θ} − ½ h^T Iθ h + o_{P_{n,θ}}(1).
An inspection of the proof of Theorem 7.10 readily reveals that this depends on the local
asymptotic normality property only. Thus, the local experiments
7.17 Example (Gaussian time series). This example requires some knowledge of time-
series models. Suppose that at time n the observations are a stretch X 1, ... , Xn from a
stationary, Gaussian time series {X_t : t ∈ ℤ} with mean zero. The covariance matrix of n consecutive observations is denoted by Tn(fθ). The function fθ is the spectral density of the series. It is convenient to let the parameter enter the model through the spectral density, rather than directly through the density of the observations.
Let P_{n,θ} be the distribution (on ℝ^n) of the vector (X1, . . . , Xn), a normal distribution with mean zero and covariance matrix Tn(fθ). The periodogram of the observations is the function

I_n(λ) = (1/2πn) | Σ_{t=1}^n X_t e^{itλ} |².
Suppose that fθ is bounded away from zero and infinity, and that there exists a vector-valued function ℓ̇θ : (−π, π] → ℝ^d such that, as h → 0, the spectral densities f_{θ+h} permit a suitable first-order expansion with derivative determined by ℓ̇θ. Then the sequence of models (P_{n,θ} : θ ∈ Θ) is locally asymptotically normal. The proof is elementary, but involved, because it has to deal with the quadratic forms in the n-variate normal density, which involve vectors whose dimension converges to infinity (see [30]). □
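The periodogram of the example can be computed directly from its definition. The sketch below uses a simulated Gaussian AR(1) series as the stationary time series (an illustrative choice of ours; any stationary Gaussian series would do) and checks that the averaged periodogram is larger where the spectral density is larger.

```python
import cmath, math, random

# Periodogram I_n(lambda) = |sum_{t=1}^n X_t e^{i t lambda}|^2 / (2 pi n),
# evaluated by brute force for a simulated Gaussian AR(1) series.
rng = random.Random(2)
n, phi = 512, 0.5
x, prev = [], 0.0
for _ in range(n):
    prev = phi * prev + rng.gauss(0.0, 1.0)
    x.append(prev)

def periodogram(x, lam):
    s = sum(xt * cmath.exp(1j * t * lam) for t, xt in enumerate(x, start=1))
    return abs(s) ** 2 / (2 * math.pi * len(x))

# For this AR(1) the spectral density is largest near frequency zero, so
# averaged periodogram ordinates at low Fourier frequencies should exceed
# those near pi.
fourier = lambda j: 2 * math.pi * j / n
avg_low = sum(periodogram(x, fourier(j)) for j in range(1, 21)) / 20
avg_high = sum(periodogram(x, fourier(j)) for j in range(n // 2 - 20, n // 2)) / 20
print(f"average I_n near 0: {avg_low:.3f}, near pi: {avg_high:.3f}")
```

Individual periodogram ordinates fluctuate wildly (they are not consistent estimators of the spectral density), which is why the comparison is made on local averages.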
is twice continuously differentiable. If Σ(a_j + b_j) > 0, then the model is locally asymptotically normal at θ = 0 with, for Vn equal to the interval (n^{-1/2}(log n)^{-1/4}, (log n)^{-1}) around zero,†

Δ_{n,0} = (1/√(n log n)) Σ_{i=1}^n Σ_j ( 1{X_i − c_j ∈ Vn}/(X_i − c_j) − ∫_{Vn} (f(x + c_j)/x) dx ).
The sequence Δ_{n,0} may be thought of as "asymptotically sufficient" for the local parameter h. The definition of Δ_{n,0} shows that, asymptotically, all the "information" about the parameter is contained in the observations falling into the neighborhoods Vn + c_j. Thus, asymptotically, the problem is determined by the points of irregularity.
The remarkable rescaling rate √(n log n) can be explained by computing the Hellinger distance between the densities f(x − θ) and f(x) (see section 14.5). □
Notes
Local asymptotic normality was introduced by Le Cam [92], apparently motivated by the
study and construction of asymptotically similar tests. In this paper Le Cam defines two
sequences of models (P_{n,θ} : θ ∈ Θ) and (Q_{n,θ} : θ ∈ Θ) to be differentially equivalent if the distance between the corresponding localized measures converges to zero, uniformly over every bounded set K of local parameters and for every θ. He next shows that a sequence of statistics Tn in a given asymptotically differentiable sequence of experiments (roughly LAN) that is asymptotically equivalent to the centering sequence Δ_{n,θ} is asymptotically sufficient, in the sense that the original experiments and the experiments consisting of observing the Tn are differentially equivalent. After some interpretation this gives roughly the same message as Theorem 7.10. The latter is a concrete example of an abstract result in [95], with a different (direct) proof.
PROBLEMS
1. Show that the Poisson distribution with mean θ satisfies the conditions of Lemma 7.6. Find the information.
2. Find the Fisher information for location for the normal, logistic, and Laplace distributions.
3. Find the Fisher information for location for the Cauchy distributions.
4. Let f be a density that is symmetric about zero. Show that the Fisher information matrix (if it exists) of the location-scale family f((x − μ)/σ)/σ is diagonal.
5. Find an explicit expression for the o_{Pθ}(1)-term in Theorem 7.2 in the case that p_θ is the density of the N(θ, 1)-distribution.
6. Show that the Laplace location family is differentiable in quadratic mean.
† See, for example, [80, pp. 133–139] for a proof, and also a discussion of other almost regular situations, for instance, singularities of the form f(x) ≈ f(c_j) + |x − c_j|^{1/2} at points c_j with f(c_j) > 0.
7. Find the form of the score function for a location-scale family f((x − μ)/σ)/σ with parameter θ = (μ, σ) and apply Lemma 7.6 to find a sufficient condition for differentiability in quadratic mean.
8. Investigate for which parameters k the location family f(x − θ), for f the gamma(k, 1) density, is differentiable in quadratic mean.
9. Let P_{n,θ} be the distribution of the vector (X1, . . . , Xn) if {X_t : t ∈ ℤ} is a stationary Gaussian time series satisfying X_t = θX_{t−1} + Z_t for a given number |θ| < 1 and independent standard normal variables Z_t. Show that the model is locally asymptotically normal.
10. Investigate whether the log-normal family of distributions with density

(1 / (σ√(2π)(x − ξ))) e^{−(log(x − ξ) − μ)²/(2σ²)} 1{x > ξ}

is differentiable in quadratic mean.
8.1 Example (Hodges' estimator). Suppose that Tn is a sequence of estimators for a real parameter θ with standard asymptotic behavior in that, for each θ and certain limit distributions Lθ,

√n(Tn − θ) ⇝ Lθ.
As a specific example, let Tn be the mean of a sample of size n from the N ((), 1) -distribution.
Define a second estimator Sn through

Sn = Tn 1{|Tn| ≥ n^{-1/4}}.

If the estimator Tn is already close to zero, then it is changed to exactly zero; otherwise it is left unchanged. The truncation point n^{-1/4} has been chosen in such a way that the limit behavior of Sn is the same as that of Tn for every θ ≠ 0, but for θ = 0 there appears to be a great improvement. Indeed, for every rn,
rn Sn ⇝ 0   (under θ = 0),
√n(Sn − θ) ⇝ Lθ,   θ ≠ 0.
To see this, note first that the probability that Tn falls in the interval (θ − Mn^{-1/2}, θ + Mn^{-1/2}) converges to Lθ(−M, M) for most M and hence is arbitrarily close to 1 for M and n sufficiently large. For θ ≠ 0, the intervals (θ − Mn^{-1/2}, θ + Mn^{-1/2}) and (−n^{-1/4}, n^{-1/4}) are centered at different places and eventually disjoint. This implies that truncation will rarely occur: Pθ(Tn = Sn) → 1 if θ ≠ 0, whence the second assertion. On the other hand, the interval (−Mn^{-1/2}, Mn^{-1/2}) is contained in the interval (−n^{-1/4}, n^{-1/4}) eventually. Hence under θ = 0 we have truncation with probability tending to 1 and hence P0(Sn = 0) → 1; this is stronger than the first assertion.
At first sight, Sn is an improvement on Tn. For every θ ≠ 0 the estimators behave the same, while for θ = 0 the sequence Sn has an "arbitrarily fast" rate of convergence. However, this reasoning is a bad use of asymptotics.
Consider the concrete situation that Tn is the mean of a sample of size n from the normal N(θ, 1)-distribution. It is well known that Tn = X̄ is optimal in many ways for every fixed n and hence it ought to be asymptotically optimal also. Figure 8.1 shows why Sn = X̄ 1{|X̄| ≥ n^{-1/4}} is no improvement. It shows the graph of the risk function θ ↦ Eθ(Sn − θ)² for three different values of n. These functions are close to 1 on most
Figure 8.1. Quadratic risk functions of the Hodges estimator based on the means of samples of
size 10 (dashed), 100 (dotted), and 1000 (solid) observations from theN((}, I)-distribution.
of the domain but possess peaks close to zero. As n → ∞, the locations and widths of the peaks converge to zero but their heights to infinity. The conclusion is that Sn "buys" its better asymptotic behavior at θ = 0 at the expense of erratic behavior close to zero. Because the values of θ at which Sn is bad differ from n to n, the erratic behavior is not visible in the pointwise limit distributions under fixed θ. □
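The risk peaks of Figure 8.1 are easy to reproduce by simulation. The sketch below (with sample size and replication counts of our own choosing) evaluates the normalized quadratic risk n·Eθ(Sn − θ)² of the Hodges estimator at a few values of θ: close to 0 at θ = 0, close to 1 far from zero, but well above 1 near the truncation point.

```python
import math, random

# Monte Carlo risk of the Hodges estimator
# S_n = Xbar 1{|Xbar| >= n^(-1/4)} for N(theta, 1) samples.
rng = random.Random(3)

def risk(theta, n, reps=2000):
    """Monte Carlo estimate of n * E_theta (S_n - theta)^2."""
    cutoff = n ** -0.25
    total = 0.0
    for _ in range(reps):
        xbar = rng.gauss(theta, 1.0 / math.sqrt(n))  # law of the sample mean
        s = xbar if abs(xbar) >= cutoff else 0.0
        total += n * (s - theta) ** 2
    return total / reps

n = 100
risks = {theta: risk(theta, n) for theta in (0.0, 0.3, 1.0)}
for theta, r in risks.items():
    print(f"theta = {theta}: n * E(S_n - theta)^2 ~ {r:.2f}")
```

The "superefficiency" at θ = 0 is bought with a risk several times larger than that of X̄ at parameters of the order n^{-1/4}, exactly as the text describes.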
needed to meet the requirement with each of the estimators. Then, if it exists, the limit

lim_{ν→∞} n_{ν,2} / n_{ν,1}

is called the relative efficiency of the estimators. (In general, it depends on the parameter θ.)
Because √ν(T_{n_ν} − ψ(θ)) can be written as √(ν/n_ν) √(n_ν)(T_{n_ν} − ψ(θ)), it follows that necessarily n_ν → ∞, and also that n_ν/ν → σ²(θ). Thus, the relative efficiency of two estimator sequences with asymptotic variances σ_i²(θ) is just σ_2²(θ)/σ_1²(θ). If the value of this quotient is bigger than 1, then the second estimator sequence needs proportionally that many observations more than the first to achieve the same (asymptotic) precision.
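A classical instance (our illustrative choice): for samples from N(θ, 1) the sample mean has asymptotic variance 1 and the sample median π/2, so their relative efficiency is π/2 ≈ 1.57 — the median needs roughly 57% more observations for the same asymptotic precision.

```python
import math, random
from statistics import mean, median, variance

# Simulated relative efficiency of the sample median to the sample mean
# for N(theta, 1) data: the ratio of the variances of the standardized
# estimators should be close to pi/2.
rng = random.Random(4)
n, reps, theta = 200, 4000, 0.0
std_means, std_medians = [], []
for _ in range(reps):
    x = [rng.gauss(theta, 1.0) for _ in range(n)]
    std_means.append(math.sqrt(n) * (mean(x) - theta))
    std_medians.append(math.sqrt(n) * (median(x) - theta))

rel_eff = variance(std_medians) / variance(std_means)
print(f"simulated relative efficiency ~ {rel_eff:.2f} (theory: pi/2 ~ 1.57)")
```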
Then Tn can be considered a good estimator for ψ(θ) if the limit distributions L_{θ,h} are maximally concentrated near zero. If they are maximally concentrated for every h and some fixed θ, then Tn can be considered locally optimal at θ. Unless specified otherwise, we assume in the remainder of this chapter that the parameter set Θ is an open subset of ℝ^k, and that ψ maps Θ into ℝ^m. The derivative of θ ↦ ψ(θ) is denoted by ψ̇θ.
Suppose that the observations are a sample of size n from a distribution Pθ. If Pθ depends smoothly on the parameter, then
as experiments, in the sense of Theorem 7.10. This theorem shows which limit distributions are possible and can be specialized to the estimation problem in the following way.
8.3 Theorem. Assume that the experiment (Pθ : θ ∈ Θ) is differentiable in quadratic mean (7.1) at the point θ with nonsingular Fisher information matrix Iθ. Let ψ be differentiable at θ. Let Tn be estimators in the experiments (P^n_{θ+h/√n} : h ∈ ℝ^k) such that (8.2) holds for every h. Then there exists a randomized statistic T in the experiment (N(h, Iθ^{-1}) : h ∈ ℝ^k) such that T − ψ̇θ h has distribution L_{θ,h} for every h.
Proof. Apply Theorem 7.10 to Sn = √n(Tn − ψ(θ)). In view of the definition of L_{θ,h} and the differentiability of ψ, the sequence
This theorem shows that for most estimator sequences Tn there is a randomized estimator T such that the distribution of √n(Tn − ψ(θ + h/√n)) under θ + h/√n is, for large n, approximately equal to the distribution of T − ψ̇θ h under h. Consequently, the standardized distribution of the best possible estimator Tn for ψ(θ + h/√n) is approximately equal to the standardized distribution of the best possible estimator T for ψ̇θ h in the limit experiment. If we know the best estimator T for ψ̇θ h, then we know the "locally best" estimator sequence Tn for ψ(θ).
In this way, the asymptotic optimality problem is reduced to optimality in the experiment based on one observation X from a N(h, Iθ^{-1})-distribution, in which θ is known and h ranges over ℝ^k. This experiment is simple and easy to analyze. The observation itself is the customary estimator for its expectation h, and the natural estimator for ψ̇θ h is ψ̇θ X. This has several optimality properties: It is minimum variance unbiased, minimax, best equivariant, and Bayes with respect to the noninformative prior. Some of these properties are reviewed in the next section.
Let us agree, at least for the moment, that ψ̇θ X is a "best" estimator for ψ̇θ h. The distribution of ψ̇θ X − ψ̇θ h is normal with zero mean and covariance matrix ψ̇θ Iθ^{-1} ψ̇θ^T for every h. The parameter h = 0 in the limit experiment corresponds to the parameter θ in the original problem. We conclude that the "best" limit distribution of √n(Tn − ψ(θ)) under θ is the N(0, ψ̇θ Iθ^{-1} ψ̇θ^T)-distribution.
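The matrix ψ̇θ Iθ^{-1} ψ̇θ^T can be checked numerically in a toy model of our own choosing: X_i ~ N(θ, 1), so Iθ = 1, with ψ(θ) = θ², so ψ̇θ = 2θ and the bound equals 4θ². The plug-in estimator ψ(X̄) = X̄² attains it asymptotically.

```python
import math, random
from statistics import mean, variance

# Simulated variance of sqrt(n)(Xbar^2 - theta^2) against the bound
# psi_dot * I^{-1} * psi_dot^T = (2 theta)^2 in the N(theta, 1) model.
rng = random.Random(5)
theta, n, reps = 1.5, 200, 3000
vals = []
for _ in range(reps):
    xbar = mean(rng.gauss(theta, 1.0) for _ in range(n))
    vals.append(math.sqrt(n) * (xbar ** 2 - theta ** 2))

bound = (2 * theta) ** 2          # psi_dot * I^{-1} * psi_dot^T
print(f"simulated variance ~ {variance(vals):.2f}, bound = {bound}")
```

This is of course the delta method at work; the content of the present chapter is that no (regular) estimator sequence can do better than this variance.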
This is the main result of the chapter. The remaining sections discuss several ways of making this reasoning more rigorous. Because the expression ψ̇θ Iθ^{-1} ψ̇θ^T is precisely the Cramér-Rao lower bound for the covariance of unbiased estimators for ψ(θ), we can think of the results of this chapter as asymptotic Cramér-Rao bounds. This is helpful, even though it does not do justice to the depth of the present results. For instance, the Cramér-Rao bound in no way suggests that normal limiting distributions are best. Also, it is not completely true that an N(h, Iθ^{-1})-distribution is "best" (see section 8.8). We shall see exactly to what extent the optimality statement is false.
The measure M can be interpreted as the distribution of a noise factor that is added to the estimator AX. If no noise is best, then it follows that AX is best equivariant-in-law. A more precise argument can be made in terms of loss functions. In general, convolving a measure with another measure decreases its concentration. This is immediately clear in terms of variance: The variance of a sum of two independent variables is the sum of the variances, whence convolution increases variance. For normal measures this extends to all "bowl-shaped" symmetric loss functions. The name should convey the form of their graph. Formally, a function is defined to be bowl-shaped if the sublevel sets {x : ℓ(x) ≤ c} are convex and symmetric about the origin; it is called subconvex if, moreover, these sets are closed. A loss function is any function with values in [0, ∞). The following lemma quantifies the loss in concentration under convolution (for a proof, see, e.g., [80] or [114]).
sup_h E_h ℓ(T − Ah),

over all (randomized) estimators T. For every bowl-shaped loss function ℓ, this leads again to the estimator AX.
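The statement that convolution decreases concentration can be sketched by simulation: for a bowl-shaped loss, the expected loss of G + W (G normal, W independent noise) is at least that of G alone, as in Anderson's lemma. The loss functions below (squared error and a bounded bowl-shaped loss) are our illustrative choices.

```python
import random

# Convolution decreases concentration: compare E l(G) with E l(G + W)
# for l(x) = x^2 and the bounded bowl-shaped loss l(x) = min(x^2, 1).
rng = random.Random(6)
reps = 20000
g = [rng.gauss(0.0, 1.0) for _ in range(reps)]
w = [rng.gauss(0.0, 0.7) for _ in range(reps)]   # the added "noise" M

risks = []
for loss in (lambda x: x * x, lambda x: min(x * x, 1.0)):
    r_g = sum(map(loss, g)) / reps
    r_gw = sum(loss(gi + wi) for gi, wi in zip(g, w)) / reps
    risks.append((r_g, r_gw))
    print(f"E l(G) ~ {r_g:.3f}  <=  E l(G + W) ~ {r_gw:.3f}")
```

For squared error the inequality is just additivity of variances; the point of Anderson's lemma is that it persists for every bowl-shaped loss when G is normal.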
8.6 Proposition. For any bowl-shaped loss function ℓ, the maximum risk of any randomized estimator T of Ah is bounded below by E_0 ℓ(AX). Consequently, AX is a minimax estimator for Ah. If Ah is real and E_0 (AX)² ℓ(AX) < ∞, then AX is the only minimax estimator for Ah up to changes on sets of probability zero.
Proofs. For a proof of the uniqueness of the minimax estimator, see [18] or [80]. We prove the other assertions for subconvex loss functions, using a Bayesian argument.
Let H be a random vector with a normal N(0, Λ)-distribution, and consider the original N(h, Σ)-distribution as the conditional distribution of X given H = h. The randomization variable U in T(X, U) is constructed independently of the pair (X, H). In this notation, the distribution of the variable T − AH is equal to the "average" of the distributions of T − Ah under the different values of h in the original set-up, averaged over h using a N(0, Λ)-"prior distribution."
By a standard calculation, we find that the "a posteriori" distribution, the distribution of H given X, is the normal distribution with mean (Σ^{-1} + Λ^{-1})^{-1} Σ^{-1} X and covariance matrix (Σ^{-1} + Λ^{-1})^{-1}. Define the random vectors
These vectors are independent, because W_Λ is a function of (X, U) only, and the conditional distribution of G_Λ given X is normal with mean 0 and covariance matrix A(Σ^{-1} + Λ^{-1})^{-1}A^T, independent of X. As Λ = λI for a scalar λ → ∞, the sequence G_Λ converges in distribution to a N(0, AΣA^T)-distributed vector G. The sum of the two vectors yields T − AH, for every Λ.
Because a supremum is larger than an average, we obtain, where on the left we take the expectation with respect to the original model,

sup_h E_h ℓ(T − Ah) ≥ E ℓ(T − AH) = E ℓ(G_Λ + W_Λ) ≥ E ℓ(G_Λ),

by Anderson's lemma. This is true for every Λ. The lim inf of the right side as Λ → ∞ is at least Eℓ(G), by the portmanteau lemma. This concludes the proof that AX is minimax.
If T is equivariant-in-law with invariant law L, then the distribution of G_Λ + W_Λ = T − AH is L, for every Λ. It follows that the characteristic function of L factors as the product of the characteristic functions of G_Λ and W_Λ. As Λ → ∞, the left side remains fixed; the first factor on the right side converges to the characteristic function of G, which is positive. Conclude that the characteristic functions of W_Λ converge to a continuous function, whence W_Λ converges in distribution to some vector W, by Lévy's continuity theorem. By the independence of G_Λ and W_Λ for every Λ, the sequence (G_Λ, W_Λ) converges in distribution to a pair (G, W) of independent vectors with marginal distributions as before. Next, by the continuous-mapping theorem, the distribution of G_Λ + W_Λ, which is fixed at L, "converges" to the distribution of G + W. This proves that L can be written as a convolution, as claimed in Proposition 8.4.
If T is an equivariant-in-law estimator and T̄(X) = E(T(X, U) | X), then
8.8 Theorem (Convolution). Assume that the experiment (Pθ : θ ∈ Θ) is differentiable in quadratic mean (7.1) at the point θ with nonsingular Fisher information matrix Iθ. Let ψ be differentiable at θ. Let Tn be an at θ regular estimator sequence in the experiments (P^n_θ : θ ∈ Θ) with limit distribution Lθ. Then there exists a probability measure Mθ such that

Lθ = N(0, ψ̇θ Iθ^{-1} ψ̇θ^T) * Mθ.

In particular, if Lθ has covariance matrix Σθ, then the matrix Σθ − ψ̇θ Iθ^{-1} ψ̇θ^T is nonnegative-definite.
possible to improve on a given estimator sequence for selected parameters. In this section it is shown that improvement over an N(0, ψ̇θ Iθ^{-1} ψ̇θ^T)-distribution can be made on at most a Lebesgue null set of parameters. Thus the possibilities for improvement are very much restricted.
In particular, if Lθ has covariance matrix Σθ, then the matrix Σθ − ψ̇θ Iθ^{-1} ψ̇θ^T is nonnegative definite for Lebesgue almost every θ.
The theorem follows from the convolution theorem in the preceding section combined with the following remarkable lemma. Any estimator sequence with limit distributions is automatically regular at almost every θ along a subsequence of {n}.

8.10 Lemma. Let Tn be estimators in experiments (P_{n,θ} : θ ∈ Θ) indexed by a measurable subset Θ of ℝ^k. Assume that the map θ ↦ P_{n,θ}(A) is measurable for every measurable set A and every n, and that the map θ ↦ ψ(θ) is measurable. Suppose that there exist distributions Lθ such that for Lebesgue almost every θ

r_n(Tn − ψ(θ)) ⇝ Lθ,  under θ.

Then for every γn → 0 there exists a subsequence of {n} such that, for Lebesgue almost every (θ, h), along the subsequence,

r_n(Tn − ψ(θ + γn h)) ⇝ Lθ,  under θ + γn h.
Proof. Assume without loss of generality that Θ = ℝ^k; otherwise, fix some θ0 and let P_{n,θ} = P_{n,θ0} for every θ not in Θ. Write T_{n,θ} = r_n(Tn − ψ(θ)). There exists a countable collection F of uniformly bounded, left- or right-continuous functions f such that weak convergence of a sequence of maps Tn is equivalent to Ef(Tn) → ∫ f dL for every f ∈ F.† Suppose that for every f there exists a subsequence of {n} along which

† For continuous distributions L we can use the indicator functions of cells (−∞, c] with c ranging over ℚ^k. For general L replace every such indicator by an approximating sequence of continuous functions. Alternatively, see, e.g., Theorem 1.12.2 in [146]. Also see Lemma 2.25.
Secondly, for any fixed continuous and bounded function g_δ, the sequence E|g_δ(ϑ + γn H) − g_δ(ϑ)| converges to zero as n → ∞ by the dominated convergence theorem. Thus, by the triangle inequality, we obtain

This is the asymptotic maximum risk over an arbitrarily small neighborhood of θ. The following theorem concerns an even more refined (and smaller) version of the local maximum risk.
Here the first supremum is taken over all finite subsets I of ℝ^k.
Proof. We only give the proof under the further assumptions that the sequence √n(Tn − ψ(θ)) is uniformly tight under θ and that ℓ is (lower) semicontinuous.† Then Prohorov's theorem shows that every subsequence of {n} has a further subsequence along which the vectors
There exists a subsequence {n_k} of {n} such that this expression is equal to

lim_{k→∞} sup_{h ∈ I_k} E_{θ+h/√n_k} ℓ( √n_k ( T_{n_k} − ψ(θ + h/√n_k) ) ).
We apply the preceding argument to this subsequence and find a further subsequence along which Tn satisfies (8.2). For simplicity of notation write this as {n′} rather than with a double subscript. Because ℓ is nonnegative and lower semicontinuous, the portmanteau lemma gives, for every h,

lim inf E_{θ+h/√n′} ℓ( √n′ ( T_{n′} − ψ(θ + h/√n′) ) ) ≥ ∫ ℓ dL_{θ,h}.

Every rational vector h is contained in I_k for every sufficiently large k. Conclude that

R ≥ sup_{h ∈ ℚ^k} ∫ ℓ dL_{θ,h} = sup_{h ∈ ℚ^k} E_h ℓ(T − ψ̇θ h).

The risk function in the supremum on the right is lower semicontinuous in h, by the continuity of the Gaussian location family and the lower semicontinuity of ℓ. Thus the expression on the right does not change if ℚ^k is replaced by ℝ^k. This concludes the proof. ∎
Tn = X̄n − (k − 2) X̄n / (n ‖X̄n‖²).
Because X̄n converges in probability to the mean θ, the second term in the definition of Tn is OP(n^{-1}) if θ ≠ 0. In that case √n(Tn − X̄n) converges in probability to zero, whence the estimator sequence Tn is regular at every θ ≠ 0. For θ = h/√n, the variable √n X̄n is distributed as a variable X with an N(h, I)-distribution, and for every n the standardized estimator √n(Tn − h/√n) is distributed as T − h for

T(X) = X − (k − 2) X / ‖X‖².
This is the Stein shrinkage estimator. Because the distribution of T − h depends on h, the sequence Tn is not regular at θ = 0. The Stein estimator has the remarkable property that, for every h (see, e.g., [99, p. 300]),

E_h ‖T − h‖² < k = E_h ‖X − h‖².

It follows that, in terms of joint quadratic loss ℓ(x) = ‖x‖², the local limit distributions L_{0,h} of the sequence √n(Tn − h/√n) under θ = h/√n are all better than the N(0, I)-limit distribution of the best regular estimator sequence X̄n. □
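The risk improvement of the Stein estimator in the limit experiment is easy to verify by simulation (dimension, true mean, and replication count below are illustrative choices of ours).

```python
import random

# Quadratic risk of the Stein estimator T(X) = X - (k - 2) X / ||X||^2
# for a single observation X ~ N(h, I_k): for k >= 3 it stays strictly
# below k = E ||X - h||^2, the risk of the estimator X itself.
rng = random.Random(7)
k, reps = 10, 20000
h = [0.5] * k                      # a true mean fairly close to zero

risk_X = risk_T = 0.0
for _ in range(reps):
    x = [rng.gauss(hi, 1.0) for hi in h]
    sq = sum(xi * xi for xi in x)
    shrink = 1 - (k - 2) / sq      # shrinkage factor toward the origin
    risk_X += sum((xi - hi) ** 2 for xi, hi in zip(x, h))
    risk_T += sum((xi * shrink - hi) ** 2 for xi, hi in zip(x, h))

print(f"risk of X ~ {risk_X / reps:.2f} (= k = {k}), "
      f"risk of Stein estimator ~ {risk_T / reps:.2f}")
```

The improvement is largest for h near zero and fades as ‖h‖ grows, which is why it does not contradict the almost-everywhere theorem of the preceding section.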
The example of shrinkage estimators shows that, depending on the optimality criterion, a normal N(0, ψ̇θ Iθ^{-1} ψ̇θ^T)-limit distribution need not be optimal. In this light, is it reasonable to uphold that maximum likelihood estimators are asymptotically optimal? Perhaps not. On the other hand, the possibility of improvement over the N(0, ψ̇θ Iθ^{-1} ψ̇θ^T)-limit is restricted in two important ways.
First, improvement can be made only on a null set of parameters by Theorem 8.9.
Second, improvement is possible only for special loss functions, and improvement for one
loss function necessarily implies worse performance for other loss functions. This follows
from the next lemma.
Suppose that we require the estimator sequence to be locally asymptotically minimax for a given loss function ℓ, in the sense that its local asymptotic maximum risk attains the lower bound of the preceding theorem. This is a reasonable requirement, and few statisticians would challenge it. The following lemma shows that for one-dimensional parameters ψ(θ) local asymptotic minimaxity for even a single loss function implies regularity. Thus, if it is required that all coordinates of a certain estimator sequence be locally asymptotically minimax for some loss function, then the best regular estimator sequence is optimal without competition.
8.13 Lemma. Assume that the experiment (Pθ : θ ∈ Θ) is differentiable in quadratic mean (7.1) at θ with nonsingular Fisher information matrix Iθ. Let ψ be a real-valued map that is differentiable at θ. Then an estimator sequence Tn in the experiments (P^n_θ : θ ∈ ℝ^k) can be locally asymptotically minimax at θ for a bowl-shaped loss function ℓ such that 0 < ∫ x² ℓ(x) dN(0, ψ̇θ Iθ^{-1} ψ̇θ^T)(x) < ∞ only if Tn is best regular at θ.
Proof. We only give the proof under the further assumption that the sequence √n(Tn − ψ(θ)) is uniformly tight under θ. Then, by the same arguments as in the proof of Theorem 8.11, every subsequence of {n} has a further subsequence along which the sequence √n(Tn − ψ(θ + h/√n)) converges in distribution under θ + h/√n to the distribution L_{θ,h} of T − ψ̇θ h under h, for a randomized estimator T based on an N(h, Iθ^{-1})-distributed observation. Because Tn is locally asymptotically minimax, it follows that

sup_{h ∈ ℝ^k} E_h ℓ(T − ψ̇θ h) ≤ sup_{h ∈ ℝ^k} E_h ℓ(ψ̇θ X − ψ̇θ h).
**
If the convolution theorem is taken as the basis for asymptotic optimality, then an estimator
sequence is best if it is asymptotically regular with a N(O, 9 I 9- 1 9 T)-limit distribution.
An estimator sequence has this property if and only if the estimator is asymptotically linear
in the score function.
'-( Tn -1/t(e) ) = Jn
'V"
~ . I.lo(X;)
1 -8_1/toli + Op (1).
9
Then Tn is best regular estimator for 1/f(e) at e. Conversely, every best regular estimator
sequence satisfies this expansion.
Proof. The sequence Δ_{n,θ} = n^{−1/2} Σ_{i=1}^n ℓ̇_θ(X_i) converges in distribution to a vector Δ_θ with a N(0, I_θ)-distribution. By Theorem 7.2 the sequence log dP^n_{θ+h/√n}/dP^n_θ is asymptotically equivalent to h^T Δ_{n,θ} − ½ h^T I_θ h. If T_n is asymptotically linear, then √n(T_n − ψ(θ)) is asymptotically equivalent to ψ̇_θ I_θ^{−1} Δ_{n,θ}. Apply Slutsky's lemma and Le Cam's third lemma to find that √n(T_n − ψ(θ + h/√n)) converges under θ + h/√n in distribution to a N(0, ψ̇_θ I_θ^{−1} ψ̇_θ^T)-distribution, for every h; that is, T_n is best regular.
Conversely, suppose that S_n and T_n are two best regular estimator sequences. As in the proof of Theorem 8.11, along a subsequence the sequence √n(S_n − ψ(θ), T_n − ψ(θ)) converges in distribution under θ + h/√n to the law of (S − ψ̇_θ h, T − ψ̇_θ h) for a randomized estimator (S, T) in the normal-limit experiment. Because S_n and T_n are best regular, the estimators S and T are best equivariant-in-law. Thus S = T = ψ̇_θ X almost surely by Proposition 8.6, whence √n(S_n − T_n) converges in distribution to S − T = 0.
Thus every two best regular estimator sequences are asymptotically equivalent. The second assertion of the lemma follows on applying this to T_n and the estimators

  S_n = ψ(θ) + (1/√n) ψ̇_θ I_θ^{−1} Δ_{n,θ}.

Because the parameter θ is known in the local experiments (P^n_{θ+h/√n} : h ∈ ℝ^k), this indeed defines an estimator sequence within the present context. It is best regular by the first part of the lemma. ∎
Under regularity conditions, for instance those of Theorem 5.39, the maximum likelihood estimator θ̂_n in a parametric model satisfies

  √n(θ̂_n − θ) = (1/√n) Σ_{i=1}^n I_θ^{−1} ℓ̇_θ(X_i) + o_{P_θ}(1).

Then the maximum likelihood estimator is asymptotically optimal for estimating θ in terms of the convolution theorem. By the delta method, the estimator ψ(θ̂_n) for ψ(θ) can be seen to be asymptotically optimal as well.
8.15 Theorem. Suppose that the estimator sequence T_n is consistent for ψ(θ) under every θ. Then, for every ε > 0 and every θ_0,

  limsup_{n→∞} −(1/n) log P_{θ_0}( d(T_n, ψ(θ_0)) > ε ) ≤ inf_{θ: d(ψ(θ), ψ(θ_0)) > ε} −P_θ log (p_{θ_0}/p_θ).
Proof. If the right side is infinite, then there is nothing to prove. The Kullback-Leibler information −P_θ log p_{θ_0}/p_θ can be finite only if P_θ ≪ P_{θ_0}. Hence, it suffices to prove that −P_θ log p_{θ_0}/p_θ is an upper bound for the left side for every θ such that P_θ ≪ P_{θ_0} and d(ψ(θ), ψ(θ_0)) > ε. The variable A_n = n^{−1} Σ_{i=1}^n log(p_θ/p_{θ_0})(X_i) is well defined (possibly −∞). For every constant M,

  −(1/n) log P_{θ_0}( d(T_n, ψ(θ_0)) > ε ) ≤ M − (1/n) log P_θ( d(T_n, ψ(θ_0)) > ε, A_n < M ).

For M > P_θ log p_θ/p_{θ_0}, we have that P_θ(A_n < M) → 1 by the law of large numbers. Furthermore, by the consistency of T_n for ψ(θ), the probability P_θ( d(T_n, ψ(θ_0)) > ε )
converges to 1 for every θ such that d(ψ(θ), ψ(θ_0)) > ε. Conclude that the probability on the right side of the preceding display converges to 1, whence the lim sup of the left side is bounded by M. ∎
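Theorem 8.15 can be probed numerically in the normal location family, where the Kullback-Leibler information −P_θ log(p_{θ_0}/p_θ) equals (θ − θ_0)²/2, so that the infimum over |θ − θ_0| > ε is ε²/2. The following Python sketch is my own illustration (not part of the text); it estimates the finite-n rate −(1/n) log P_{θ_0}(|X̄_n − θ_0| > ε) for the sample mean by Monte Carlo.

```python
import math
import random

random.seed(2)

theta0, eps, n, reps = 0.0, 0.5, 20, 100_000

# right side of the theorem for N(theta, 1): inf over |theta - theta0| > eps
# of (theta - theta0)^2 / 2
kl_bound = eps ** 2 / 2

# left side for T_n = sample mean: estimate P_{theta0}(|Xbar_n - theta0| > eps)
exceed = 0
for _ in range(reps):
    xbar = sum(random.gauss(theta0, 1.0) for _ in range(n)) / n
    if abs(xbar - theta0) > eps:
        exceed += 1
p_hat = exceed / reps
rate = -math.log(p_hat) / n
print(kl_bound, p_hat, rate)
```

At n = 20 the empirical rate still lies somewhat above ε²/2 because of polynomial prefactors; as n grows it decreases to the Kullback-Leibler bound, which by the theorem is the fastest exponential decay attainable by any consistent estimator sequence.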
Notes
Chapter 32 of the famous book by Cramér [27] gives a rigorous proof of what we now know as the Cramér-Rao inequality and next goes on to define the asymptotic efficiency of an estimator as the quotient of the inverse Fisher information and the asymptotic variance. Cramér defines an estimator as asymptotically efficient if its efficiency (the quotient mentioned previously) equals one. These definitions lead to the conclusion that the method of
maximum likelihood produces asymptotically efficient estimators, as already conjectured
by Fisher [48, 50] in the 1920s. That there is a conceptual hole in the definitions was clearly
realized in 1951 when Hodges produced his example of a superefficient estimator. Not long
after this, in 1953, Le Cam proved that superefficiency can occur only on a Lebesgue null
set. Our present result, almost without regularity conditions, is based on later work by Le
Cam (see [95]). The asymptotic convolution and minimax theorems were obtained in the present form by Hájek in [69] and [70] after initial work by many authors. Our present
proofs follow the approach based on limit experiments, initiated by Le Cam in [95].
PROBLEMS
1. Calculate the asymptotic relative efficiency of the sample mean and the sample median for estimating θ, based on a sample of size n from the normal N(θ, 1) distribution.
2. As the previous problem, but now for the Laplace distribution (density p(x) = ½e^{−|x|}).
3. Consider estimating the distribution function P(X ≤ x) at a fixed point x based on a sample X_1, ..., X_n from the distribution of X. The "nonparametric" estimator is n^{−1}#(X_i ≤ x). If it is known that the true underlying distribution is normal N(θ, 1), another possible estimator is Φ(x − X̄). Calculate the relative efficiency of these estimators.
4. Calculate the relative efficiency of the empirical p-quantile and the estimator Φ^{−1}(p) S_n + X̄_n for estimating the p-th quantile of the distribution of a sample from the normal N(μ, σ²)-distribution.
5. Consider estimating the population variance by either the sample variance S² (which is unbiased) or else n^{−1} Σ_{i=1}^n (X_i − X̄)² = ((n − 1)/n) S². Calculate the asymptotic relative efficiency.
6. Calculate the asymptotic relative efficiency of the sample standard deviation and the interquartile range (corrected for unbiasedness) for estimating the standard deviation based on a sample of size n from the normal N(μ, σ²)-distribution.
7. Given a sample of size n from the uniform distribution on [0, θ], the maximum X_(n) of the observations is biased downwards. Because E_θ(θ − X_(n)) = E_θ X_(1), the bias can be removed by adding the minimum of the observations. Is X_(1) + X_(n) a good estimator for θ from an asymptotic point of view?
8. Consider the Hodges estimator S_n based on the mean of a sample from the N(θ, 1)-distribution.
(i) Show that √n(S_n − θ_n) ⇝ −∞ if θ_n → 0 in such a way that n^{1/4}θ_n → 0 and n^{1/2}θ_n → ∞.
(ii) Show that S_n is not regular at θ = 0.
(iii) Show that sup_{−δ<θ<δ} P_θ( √n|S_n − θ| > k_n ) → 1 for every k_n that converges to infinity sufficiently slowly.
9. Show that a loss function ℓ: ℝ → ℝ is bowl-shaped if and only if it has the form ℓ(x) = ℓ_0(|x|) for a nondecreasing function ℓ_0.
10. Show that a function of the form ℓ(x) = ℓ_0(‖x‖) for a nondecreasing function ℓ_0 is bowl-shaped.
11. Prove Anderson's lemma for the one-dimensional case, for instance by calculating the derivative of ∫ ℓ(x + h) dN(0, 1)(x). Does the proof generalize to higher dimensions?
12. What does Lemma 8.13 imply about the coordinates of the Stein estimator? Are they good estimators of the coordinates of the expectation vector?
13. All results in this chapter extend in a straightforward manner to general locally asymptotically
normal models. Formulate Theorem 8.9 and Lemma 8.14 for such models.
9
Limits of Experiments
9.1 Introduction
This chapter introduces a notion of convergence of statistical models or "experiments" to
a limit experiment. In this notion a sequence of models, rather than just a sequence of
estimators or tests, converges to a limit. The limit experiment serves two purposes. First,
it provides an absolute standard for what can be achieved asymptotically by a sequence of
tests or estimators, in the form of a "lower bound": No sequence of statistical procedures
can be asymptotically better than the "best" procedure in the limit experiment. For instance,
the best limiting power function is the best power function in the limit experiment; a best
sequence of estimators converges to a best estimator in the limit experiment. Statements
of this type are true irrespective of the precise meaning of "best." A second purpose of
a limit experiment is to explain the asymptotic behaviour of sequences of statistical procedures: for instance, the asymptotic normality or (in)efficiency of maximum likelihood estimators.
Many sequences of experiments converge to normal limit experiments. In particular,
the local experiments in a given locally asymptotically normal sequence of experiments,
as considered in Chapter 7, converge to a normal location experiment. The asymptotic
representation theorem given in the present chapter is therefore a generalization of Theo-
rem 7.10 (for the LAN case) to the general situation. The importance of the general concept
is illustrated by several examples of non-Gaussian limit experiments.
In the present context it is customary to speak of "experiment" rather than model, al-
though these terms are interchangeable. Formally an experiment is a measurable space
(X, A), the sample space, equipped with a collection of probability measures (P_h : h ∈ H). The set of probability measures serves as a statistical model for the observation, written as X. In this chapter the parameter is denoted by h (and not θ), because the results are typically applied to "local" parameters (such as h = √n(θ − θ_0)). The experiment is denoted
by (X, A, P_h : h ∈ H) and, if there can be no misunderstanding about the sample space, also by (P_h : h ∈ H).
Given a fixed parameter h_0 ∈ H, the likelihood ratio process with base h_0 is formed as

  ( dP_h/dP_{h_0}(X) : h ∈ H ).
Because a weak limit of a sequence of random vectors is unique, the marginal distributions
of the likelihood ratio process of a limit experiment are unique. The limit experiment itself
is not unique; even its sample space is not uniquely determined. This causes no problems.
Two experiments of which the likelihood ratio processes are equal in marginal distributions
are called equivalent or of the same type. Many examples of equivalent experiments arise
through sufficiency.
Consider a sequence of experiments ℰ_n = (P_{n,h} : h ∈ H) and statistics T_n defined in them. For instance, the statistics are test statistics for testing
certain hypotheses concerning the parameter h, or estimators of some function of h. Most
of the quality measures of the procedures based on the statistics Tn can be expressed in their
laws under the different parameters. For simplicity we assume that the sequence of statistics
Tn converges under a given parameter h in distribution to a limit Lh, for every parameter
h. Then the asymptotic quality of the sequence Tn may be judged from the set of limit
laws {Lh : h E H}. According to the following theorem the only possible sets of limit laws
are the laws of randomized statistics in the limit experiment: Every weakly converging se-
quence of statistics converges to a statistic in the limit experiment. One consequence is that
asymptotically no sequence of statistical procedures can be better than the best procedure in
the limit experiment. This is true for every meaning of "good" that is expressible in terms of
laws. In this way the limit experiment obtains the character of an asymptotic lower bound.
We assume that the limit experiment ℰ = (P_h : h ∈ H) is dominated: This requires the existence of a σ-finite measure μ such that P_h ≪ μ for every h. Recall that a randomized statistic T in the experiment (X, A, P_h : h ∈ H) with values in ℝ^k is a measurable map T: X × [0, 1] → ℝ^k for the product σ-field A × Borel sets on the space X × [0, 1]. Its law under h is to be computed under the product measure P_h × uniform [0, 1].
9.3 Theorem. Let ℰ_n = (X_n, A_n, P_{n,h} : h ∈ H) be a sequence of experiments that converges to a dominated experiment ℰ = (X, A, P_h : h ∈ H). Let T_n be a sequence of statistics in ℰ_n that converges in distribution for every h. Then there exists a randomized statistic T in ℰ such that T_n ⇝ T under every h.
Proof. The proof of the theorem starting from the definition of convergence of experiments is long and can best be broken up into parts of independent interest. This goes beyond the scope of this book.
The proof for the case of locally asymptotically normal sequences of experiments is given in Chapter 7. (It is shown in Theorem 9.4 that such a sequence of experiments converges to a Gaussian location experiment.) Many other examples can be treated by the same method of proof.† ∎
9.4 Theorem. Let ℰ_n = (P_{n,h} : h ∈ H) be experiments indexed by a subset H of ℝ^k such that

  log (dP_{n,h}/dP_{n,0}) = h^T Δ_n − ½ h^T J h + o_{P_{n,0}}(1)

for a sequence of statistics Δ_n that converges weakly under h = 0 to a N(0, J)-distribution. Then the sequence ℰ_n converges to the experiment (N(Jh, J) : h ∈ H).
Proof. The log likelihood ratio process with base h_0 for the normal experiment has coordinates

  log (dN(Jh, J)/dN(Jh_0, J))(X) = (h − h_0)^T X − ½ h^T J h + ½ h_0^T J h_0.

If J is nonsingular, then this follows by simple algebra, because the left side is the quotient of two normal densities. The case that J is singular perhaps requires some thought.
By the assumption combined with Slutsky's lemma, the sequence log p_{n,h}/p_{n,0} is under h = 0 asymptotically normal with mean −½ h^T J h and variance h^T J h. This implies contiguity of the sequences of measures P_{n,h} and P_{n,0} for every h, by Example 6.5. Therefore, the probability of the set on which one of p_{n,0}, p_{n,h}, or p_{n,h_0} is zero converges to zero. Outside this set we can write

  log (p_{n,h}/p_{n,h_0}) = log (p_{n,h}/p_{n,0}) − log (p_{n,h_0}/p_{n,0}).
Because this is true with probability tending to 1, the difference between the left and the right sides converges to zero in probability. Apply the (local) asymptotic normality assumption twice to obtain that

  log (p_{n,h}/p_{n,h_0}) = (h − h_0)^T Δ_n − ½ h^T J h + ½ h_0^T J h_0 + o_{P_{n,h_0}}(1).

On comparing this to the expression for the normal likelihood ratio process, we see that it suffices to show that the sequence Δ_n converges under h_0 in law to X: In that case the vector (p_{n,h}/p_{n,h_0})_{h∈I} converges in distribution to (dN(Jh, J)/dN(Jh_0, J)(X))_{h∈I}, by Slutsky's lemma and the continuous-mapping theorem.
By assumption, the sequence (Δ_n, h_0^T Δ_n) converges in distribution under h = 0 to a vector (Δ, h_0^T Δ), where Δ is N(0, J)-distributed. By local asymptotic normality and Slutsky's lemma, the sequence of vectors (Δ_n, log p_{n,h_0}/p_{n,0}) converges to the vector (Δ, h_0^T Δ − ½ h_0^T J h_0). By Le Cam's third lemma the sequence Δ_n therefore converges under h_0 in distribution to the N(Jh_0, J)-distribution, which is the distribution of X under h_0. ∎
9.5 Corollary. Let Θ be an open subset of ℝ^d, and let the sequence of statistical models (P_{n,θ} : θ ∈ Θ) be locally asymptotically normal at θ with norming matrices r_n and a nonsingular matrix I_θ. Then the sequence of experiments (P_{n,θ+r_n^{−1}h} : h ∈ ℝ^d) converges to the experiment (N(h, I_θ^{−1}) : h ∈ ℝ^d).
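As a sanity check on Theorem 9.4, the i.i.d. normal location model makes the quadratic structure exact: for X_1, ..., X_n from N(h/√n, 1) (compare Problem 1 below), log dP_{n,h}/dP_{n,0} = hΔ_n − h²/2 with Δ_n = √n X̄_n, which is N(0, 1)-distributed under h = 0. A small Python check, my illustration rather than the book's:

```python
import math
import random

random.seed(3)

n, h, reps = 50, 1.0, 20_000
max_gap, llrs = 0.0, []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]        # sample under h = 0
    # exact log likelihood ratio of N(h/sqrt(n), 1)^n versus N(0, 1)^n
    llr = sum(-0.5 * (x - h / math.sqrt(n)) ** 2 + 0.5 * x ** 2 for x in xs)
    delta_n = math.sqrt(n) * (sum(xs) / n)                 # central sequence
    max_gap = max(max_gap, abs(llr - (h * delta_n - h * h / 2)))
    llrs.append(llr)

mean = sum(llrs) / len(llrs)
var = sum((z - mean) ** 2 for z in llrs) / len(llrs)
print(max_gap, mean, var)
```

The gap between the exact log likelihood ratio and the quadratic hΔ_n − h²/2 is pure floating-point round-off, and the simulated log likelihood ratios have mean near −h²/2 and variance near h², the N(−½h²J, h²J)-law with J = 1 predicted by the theorem.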
9.4 Uniform Distribution
9.6 Theorem. Let P_θ^n be the distribution of a random sample of size n from a uniform distribution on [0, θ]. Then the sequence of experiments (P^n_{θ−h/n} : h ∈ ℝ) converges for each fixed θ > 0 to the experiment consisting of observing one observation from the shifted exponential density z ↦ e^{−(z−h)/θ} 1{z > h}/θ.†
Proof. In the limit experiment the likelihood ratio of h relative to h_0 is e^{(h−h_0)/θ} 1{z > h}/1{z > h_0}, which equals e^{(h−h_0)/θ} 1{z > h} almost surely under h_0, because the indicator 1{z > h_0} in the denominator equals 1 almost surely if h_0 is the true parameter.
The joint density of a random sample X_1, ..., X_n from the uniform [0, θ] distribution can be written in the form (1/θ)^n 1{X_(n) ≤ θ}. The likelihood ratios therefore take the form

  (dP^n_{θ−h/n}/dP^n_{θ−h_0/n})(X_1, ..., X_n) = ((θ − h_0/n)^n 1{X_(n) ≤ θ − h/n}) / ((θ − h/n)^n 1{X_(n) ≤ θ − h_0/n}).

Under the parameter θ − h_0/n, the maximum of the observations is certainly bounded above by θ − h_0/n and the indicator in the denominator equals 1. Thus, with probability 1 under θ − h_0/n, the likelihood ratio in the preceding display can be written

  ((θ − h_0/n)/(θ − h/n))^n 1{Z_n ≥ h},  Z_n = n(θ − X_(n)).

The deterministic factor converges to e^{(h−h_0)/θ}, and under θ − h_0/n the sequence Z_n converges in distribution to the shifted exponential distribution of the limit experiment with parameter h_0. Hence the likelihood ratio processes converge marginally in distribution to those of the limit experiment. ∎
Along the same lines it may be proved that in the case of uniform distributions with
both endpoints unknown a limit experiment based on observation of two independent ex-
ponential variables pertains. These types of experiments are completely determined by the
discontinuities of the underlying densities at their left and right endpoints. It can be shown
more generally that exponential limit experiments are obtained for any densities that have
jumps at one or both of their endpoints and are smooth in between. For densities with
discontinuities in the middle, or weaker singularities, other limit experiments pertain.
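A quick simulation of Theorem 9.6 (my own illustration, not part of the text): under the parameter θ − h/n the variable Z_n = n(θ − X_(n)) should be approximately h plus an exponential variable with mean θ, so its support starts at h and its mean is close to θ + h. The maximum of n uniforms on [0, b] can be drawn directly as bU^{1/n}:

```python
import random

random.seed(4)

theta, h, n, reps = 1.0, 2.0, 1000, 100_000
b = theta - h / n                             # local parameter theta - h/n
zs = []
for _ in range(reps):
    # X_(n) = b * U^{1/n} for the maximum of n i.i.d. uniforms on [0, b]
    xmax = b * random.random() ** (1.0 / n)
    zs.append(n * (theta - xmax))             # Z_n = n(theta - X_(n))

mean_z = sum(zs) / reps
print(min(zs), mean_z)
```

With θ = 1 and h = 2 the smallest simulated value sits at the shift h = 2 and the mean is close to θ + h = 3, matching the shifted exponential limit.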
The convergence to a limit experiment combined with the asymptotic representation
theorem, Theorem 9.3, allows one to obtain asymptotic lower bounds for sequences of
estimators, much as in the locally asymptotically normal case in Chapter 8. We give only
one concrete statement.
9.7 Corollary. Let T_n be estimators based on a sample X_1, ..., X_n from the uniform distribution on [0, θ] such that the sequence n(T_n − θ) converges under θ in distribution to a limit L_θ, for every θ. Then for Lebesgue almost-every θ we have ∫ |x| dL_θ(x) ≥ E|Z − med Z| and ∫ x² dL_θ(x) ≥ E(Z − EZ)² for a random variable Z exponentially distributed with mean θ.
Proof (Sketch). By Lemma 8.10, the estimator sequence T_n is automatically almost regular in the sense that n(T_n − θ + h/n) converges under θ − h/n in distribution to L_θ for Lebesgue almost-every θ and h, at least along a subsequence. Thus, it is matched in the limit experiment by an equivariant-in-law estimator for almost every θ. More precisely, for almost every θ there exists a randomized statistic T_θ such that the law of T_θ(Z + h, U) − h does not depend on h (if Z is exponentially distributed with mean θ). By classical statistical decision theory the given lower bounds are the (constant) risks of the best equivariant-in-law estimators in the exponential limit experiment in terms of absolute-error and mean-square-error loss functions, respectively. ∎
In view of this corollary, the maximum likelihood estimator X_(n) is asymptotically inefficient. This is not surprising given its downward bias, but it is encouraging for the present approach that the small bias, which is of the order 1/n, is visible in the "first-order" asymptotics. The bias can be corrected by a multiplicative factor, which, unfortunately, must depend on the loss function. The sequences of estimators

  ((n + log 2)/n) X_(n)  and  ((n + 1)/n) X_(n)

are asymptotically efficient in terms of absolute-value and quadratic loss, respectively.
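These claims are easy to probe by simulation (my sketch, not the book's): among the three estimators, the log 2 correction should win under absolute loss and the (n + 1)/n correction under quadratic loss, with limiting local risks E|θ log 2 − Z| and E(θ − Z)² = θ², respectively, for Z exponential with mean θ, against EZ = θ and EZ² = 2θ² for the uncorrected maximum.

```python
import math
import random

random.seed(5)

theta, n, reps = 1.0, 200, 100_000
factors = {
    "mle": 1.0,                        # X_(n) itself
    "abs": (n + math.log(2)) / n,      # correction tuned to absolute loss
    "quad": (n + 1) / n,               # correction tuned to quadratic loss
}
abs_risk = {k: 0.0 for k in factors}
quad_risk = {k: 0.0 for k in factors}
for _ in range(reps):
    # X_(n) for a uniform [0, theta] sample, drawn directly as theta * U^{1/n}
    xmax = theta * random.random() ** (1.0 / n)
    for name, c in factors.items():
        err = n * (c * xmax - theta)   # local error n(T_n - theta)
        abs_risk[name] += abs(err) / reps
        quad_risk[name] += err * err / reps

print(abs_risk)
print(quad_risk)
```

Each correction is best exactly under its own loss, and both dominate the raw maximum under both losses, as the corollary's lower bounds suggest.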
9.5 Pareto Distribution

Consider estimating the parameters of the Pareto distribution, with density

  x ↦ (α μ^α / x^{α+1}) 1{x > μ},

based on a random sample X_1, ..., X_n, where α > 0 and μ > 0 are unknown. Under the local parameters (α + g/√n, μ + h/n) the sequence

  Δ_n = (1/√n) Σ_{i=1}^n ( log(X_i/μ) − 1/α )

converges weakly to a normal distribution with mean −g/α² and variance 1/α²; and the sequence Z_n = n(X_(1) − μ) converges in distribution to the (shifted) exponential distribution with mean μ/α + h and variance (μ/α)². The two sequences are asymptotically independent. Thus the likelihood is a product of a locally asymptotically normal and a "locally asymptotically exponential" factor. The local limit experiment consists of observing a pair (Δ, Z) of independent variables Δ and Z with a N(g, α²)-distribution and an exp(α/μ) + h distribution, respectively.
The maximum likelihood estimators for the parameters α and μ are given by

  α̂_n = n / Σ_{i=1}^n log(X_i/X_(1))  and  μ̂_n = X_(1).
The sequence √n(α̂_n − α − g/√n) converges in distribution under the parameters (α + g/√n, μ + h/n) to the variable Δ − g. Because the distribution of Z does not depend on g, and Δ follows a normal location model, the variable Δ can be considered an optimal estimator for g based on the observation (Δ, Z). This optimality is carried over into the asymptotic optimality of the maximum likelihood estimator α̂_n. A precise formulation could be given in terms of a convolution or a minimax theorem.
On the other hand, the maximum likelihood estimator for μ is asymptotically inefficient. Because the sequence n(μ̂_n − μ − h/n) converges in distribution to Z − h, the estimators μ̂_n are asymptotically biased upwards.
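A simulation sketch of the two assertions (my illustration, not from the text): μ̂_n = X_(1) always overshoots μ, with n(μ̂_n − μ) approximately exponential with mean μ/α, while α̂_n is approximately unbiased on the √n scale. Pareto variables can be drawn by inversion as μU^{−1/α}.

```python
import math
import random

random.seed(6)

alpha, mu, n, reps = 2.0, 1.0, 500, 5_000
mu_errors, alpha_hats = [], []
for _ in range(reps):
    # Pareto(alpha, mu) sample via inverse CDF: X = mu * U^{-1/alpha}
    xs = [mu * random.random() ** (-1.0 / alpha) for _ in range(n)]
    xmin = min(xs)
    alpha_hats.append(n / sum(math.log(x / xmin) for x in xs))  # MLE of alpha
    mu_errors.append(n * (xmin - mu))                           # n(muhat_n - mu)

mean_err = sum(mu_errors) / reps
mean_alpha = sum(alpha_hats) / reps
print(min(mu_errors), mean_err, mean_alpha)
```

Every simulated error n(μ̂_n − μ) is strictly positive (the upward bias), with mean close to μ/α, and the average of α̂_n sits at α up to a small O(1/n) bias.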
9.6 Asymptotic Mixed Normality

A sequence of experiments (P_{n,θ} : θ ∈ Θ) is called locally asymptotically mixed normal at θ if there exist matrices γ_{n,θ} → 0 such that

  log (dP_{n,θ+γ_{n,θ}h_n}/dP_{n,θ}) = h^T Δ_{n,θ} − ½ h^T J_{n,θ} h + o_{P_{n,θ}}(1)

for every converging sequence h_n → h, and random vectors Δ_{n,θ} and random matrices J_{n,θ} such that (Δ_{n,θ}, J_{n,θ}) ⇝ (Δ_θ, J_θ) for a random vector such that the conditional distribution of Δ_θ given J_θ = J is normal N(0, J).

Locally asymptotically mixed normal is often abbreviated to LAMN. Locally asymptotically normal, or LAN, is the special case in which the matrix J_θ is deterministic. Sequences of experiments whose likelihood ratios allow a quadratic approximation as in the preceding display (but without the specific limit distribution of (Δ_{n,θ}, J_{n,θ})) and that are
such that P_{n,θ+γ_{n,θ}h} and P_{n,θ} are mutually contiguous are called locally asymptotically quadratic, or LAQ. We note that LAQ or LAMN requires much more than the mere existence of two derivatives of the likelihood: There is no reason why, in general, the remainder would be negligible.
9.8 Theorem. Assume that the sequence of experiments (P_{n,θ} : θ ∈ Θ) is locally asymptotically mixed normal at θ. Then the sequence of experiments (P_{n,θ+γ_{n,θ}h} : h ∈ ℝ^d) converges to the experiment consisting of observing a pair (Δ, J) such that J is marginally distributed as J_θ for every h and the conditional distribution of Δ given J is normal N(Jh, J).
Proof. Write P_{θ,h} for the distribution of (Δ, J) under h. Because the marginal distribution of J does not depend on h and the conditional distribution of Δ given J is Gaussian, the likelihood ratios take the form

  (dP_{θ,h}/dP_{θ,h_0})(Δ, J) = exp( (h − h_0)^T Δ − ½ h^T J h + ½ h_0^T J h_0 ).

By Slutsky's lemma and the assumptions, the sequence dP_{n,θ+γ_{n,θ}h}/dP_{n,θ} converges under θ in distribution to exp(h^T Δ_θ − ½ h^T J_θ h). Because the latter variable has mean one, it follows that the sequences of distributions P_{n,θ+γ_{n,θ}h} and P_{n,θ} are mutually contiguous. In particular, the probability under θ that dP_{n,θ+γ_{n,θ}h_0} is zero converges to zero for every h_0, so that

  log (dP_{n,θ+γ_{n,θ}h}/dP_{n,θ+γ_{n,θ}h_0}) = (h − h_0)^T Δ_{n,θ} − ½ h^T J_{n,θ} h + ½ h_0^T J_{n,θ} h_0 + o_{P_{n,θ}}(1).
Conclude that it suffices to show that the sequence (Δ_{n,θ}, J_{n,θ}) converges under θ + γ_{n,θ}h_0 to the distribution of (Δ, J) under h_0. Using the general form of Le Cam's third lemma we obtain that the limit distribution of the sequence (Δ_{n,θ}, J_{n,θ}) under θ + γ_{n,θ}h_0 takes the form

  B ↦ E 1_B(Δ_θ, J_θ) exp( h_0^T Δ_θ − ½ h_0^T J_θ h_0 ).

On noting that the distribution of (Δ, J) under h = 0 is the same as the distribution of (Δ_θ, J_θ), we see that this is equal to E_0 1_B(Δ, J) dP_{θ,h_0}/dP_{θ,0}(Δ, J) = P_{h_0}( (Δ, J) ∈ B ). ∎
It is possible to develop a theory of asymptotic "lower bounds" for LAMN models, much
as is done for LAN models in Chapter 8. Because conditionally on the ancillary statistic
J, the limit experiment is a Gaussian shift experiment, the lower bounds take the form of
mixtures of the lower bounds for the LAN case. We give only one example, leaving the
details to the reader.
We include two examples to give some idea of the application of local asymptotic mixed
normality. In both examples the sequence of models is LAMN rather than LAN due to an
explosive growth of information, occurring at certain supercritical parameter values. The
second derivative of the log likelihood, the information, remains random. In both examples
there is also (approximate) Gaussianity present in every single observation. This appears
to be typical, unlike the situation with LAN, in which the normality results from sums over
(approximately) independent observations. In explosive models of this type the likelihood is
dominated by a few observations, and normality cannot be brought in through (martingale)
central limit theorems.
9.10 Example (Branching). Suppose we observe successive generations X_0, X_1, ..., X_n of a Galton-Watson branching process whose offspring distribution has probabilities

  P_θ(Z = z) = a_z θ^z c(θ),  z = 0, 1, 2, ...,

for given numbers a_0, a_1, .... The natural parameter space is the set of all θ such that c(θ)^{−1} = Σ_z a_z θ^z is finite (an interval). We shall concentrate on parameters in the interior of the natural parameter space such that μ(θ) := E_θ Z > 1. Set σ²(θ) = var_θ Z.
The sequence X_1, X_2, ... is a Markov chain with transition density

  p_θ(y | x) = P_θ(X_n = y | X_{n−1} = x) = (a ∗ ⋯ ∗ a)(y) θ^y c(θ)^x,

where a ∗ ⋯ ∗ a denotes the x-fold convolution of the sequence a_0, a_1, ....
To obtain a two-term Taylor expansion of the log likelihood ratios, let ℓ_θ(y | x) be the log transition density, and calculate that

  ℓ̇_θ(y | x) = y/θ + x (ċ/c)(θ) = (y − x μ(θ))/θ.

(The fact that the score function of the model θ ↦ P_θ(Z = z) has expectation zero yields the identity μ(θ) = −θ(ċ/c)(θ), as is usual for exponential families.) Thus, the Fisher information in the observation (X_1, ..., X_n) equals (note that E_θ(X_j | X_{j−1}) = X_{j−1}μ(θ))

  −E_θ Σ_{j=1}^n ℓ̈_θ(X_j | X_{j−1}) = E_θ Σ_{j=1}^n X_{j−1} μ̇(θ)/θ.
For μ(θ) > 1, this converges to infinity at a much faster rate than "usually." Because the total information in (X_1, ..., X_n) is of the same order as the information in the last observation X_n, the model is "explosive" in terms of growth of information. The calculation suggests the rescaling rate γ_{n,θ} = μ(θ)^{−n/2}, which is roughly the inverse root of the information.
A Taylor expansion of the log likelihood ratio yields the existence of a point θ_n between θ and θ + γ_{n,θ}h such that

  log Π_{j=1}^n (p_{θ+γ_{n,θ}h}/p_θ)(X_j | X_{j−1})
    = (h/μ(θ)^{n/2}) Σ_{j=1}^n ℓ̇_θ(X_j | X_{j−1}) + (h²/2μ(θ)^n) Σ_{j=1}^n ℓ̈_{θ_n}(X_j | X_{j−1}).
Because E_θ(X_n | X_{n−1}, ..., X_1) = X_{n−1}μ(θ), the sequence of random variables μ(θ)^{−n} X_n is a martingale under θ. Some algebra shows that its second moments are bounded as n → ∞. Thus, by a martingale convergence theorem (e.g., Theorem 10.5.4 of [42]), there exists a random variable V such that μ(θ)^{−n} X_n → V almost surely. By the Toeplitz lemma (Problem 9.6) and again some algebra, we obtain that, almost surely under θ,

  μ(θ)^{−n} Σ_{j=1}^n X_{j−1} → V/(μ(θ) − 1).
It follows that the point θ_n in the expansion of the log likelihood can be replaced by θ at the cost of adding a term that converges to zero in probability under θ. Furthermore, almost surely under θ,

  J_{n,θ} := −μ(θ)^{−n} Σ_{j=1}^n ℓ̈_θ(X_j | X_{j−1}) → (μ̇(θ)/θ) V/(μ(θ) − 1).

It remains to derive the limit distribution of the sequence Δ_{n,θ}. If we write X_j = Σ_{i=1}^{X_{j−1}} Z_{j,i} for independent copies Z_{j,i} of the offspring variable Z, then

  Δ_{n,θ} = μ(θ)^{−n/2} Σ_{j=1}^n ℓ̇_θ(X_j | X_{j−1}) = (1/θ) μ(θ)^{−n/2} Σ_{i=1}^{V_n} (Z_i − μ(θ))

for independent copies Z_1, Z_2, ... of Z and V_n = Σ_{j=1}^n X_{j−1}. Even though Z_1, Z_2, ... and the total number V_n of variables in the sum are dependent, a central limit theorem applies to the right side: conditionally on the event {V > 0} (on which V_n → ∞), the sequence V_n^{−1/2} Σ_{i=1}^{V_n} (Z_i − μ(θ)) converges in distribution to σ(θ) times a standard normal variable G. Furthermore, if we define G independent of V, conditionally on {V > 0},†
  (Δ_{n,θ}, J_{n,θ}) ⇝ ( (σ(θ)/θ) √(V/(μ(θ) − 1)) G, (μ̇(θ)/θ) V/(μ(θ) − 1) ).   (9.11)
† See the appendix of [81] or, e.g., Theorem 3.5.1 and its proof in [146].
It is well known that the event {V = 0} coincides with the event {lim X_n = 0} of extinction of the population. (This occurs with positive probability if and only if a_0 > 0.) Thus, on the set {V = 0} the series Σ_{j=1}^∞ X_j converges almost surely, whence Δ_{n,θ} → 0. Interpreting zero as the product of a standard normal variable and zero, we see that again (9.11) is valid. Thus the sequence (Δ_{n,θ}, J_{n,θ}) converges also unconditionally to this limit. Finally, note that σ²(θ)/θ = μ̇(θ), so that the limit distribution has the right form.
The maximum likelihood estimator for μ(θ) can be shown to be asymptotically efficient (see, e.g., [29] or [81]). □
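A simulation of the explosive information growth (my own illustration; I take Poisson(θ) offspring, which is of the exponential-family form with a_z = 1/z!, c(θ) = e^{−θ}, and μ(θ) = θ). For Poisson offspring the estimator Σ_{j≤n} X_j / Σ_{j≤n} X_{j−1} of μ(θ) — total offspring over total parents — is the maximum likelihood estimator, and the martingale μ(θ)^{−n} X_n should visibly stabilize:

```python
import math
import random

random.seed(7)

theta = 2.0   # Poisson(theta) offspring: a_z = 1/z!, c(theta) = e^{-theta}, mu(theta) = theta

def poisson(lam):
    # Knuth's algorithm; adequate for small lam
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

history = [10]   # start from 10 ancestors so that extinction is very unlikely
for _ in range(15):
    history.append(sum(poisson(theta) for _ in range(history[-1])))

w = [x / theta ** t for t, x in enumerate(history)]     # martingale mu(theta)^{-n} X_n
mu_hat = sum(history[1:]) / sum(history[:-1])           # offspring / parents
print(w[-3:], mu_hat)
```

The last few martingale values w agree to a few hundredths (the limit V), and because the accumulated information Σ X_{j−1} is of the order μ(θ)^n, the estimate of μ(θ) = 2 is already accurate to a few thousandths after only 15 generations.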
9.12 Example (Gaussian AR). The canonical example of an LAMN sequence of experiments is obtained from an explosive autoregressive process of order one with Gaussian innovations. (The Gaussianity is essential.) Let |θ| > 1 and ε_1, ε_2, ... be an i.i.d. sequence of standard normal variables independent of a fixed variable X_0. We observe the vector (X_0, X_1, ..., X_n) generated by the recursive formula X_t = θX_{t−1} + ε_t.
The observations form a Markov chain with transition density p(· | x_{t−1}) equal to the N(θx_{t−1}, 1)-density. Therefore, the log likelihood ratio process takes the form

  log (p_{n,θ+γ_{n,θ}h}/p_{n,θ})(X_0, ..., X_n) = h γ_{n,θ} Σ_{t=1}^n (X_t − θX_{t−1}) X_{t−1} − ½ h² γ²_{n,θ} Σ_{t=1}^n X²_{t−1}.
This already has the appropriate quadratic structure. To establish LAMN, it suffices to find the right rescaling rate and to establish the joint convergence of the linear and the quadratic term. The rescaling rate may be chosen proportional to the inverse root of the Fisher information and is taken equal to γ_{n,θ} = θ^{−n}.
By repeated application of the defining autoregressive relationship, we see that

  θ^{−t} X_t = X_0 + Σ_{s=1}^t θ^{−s} ε_s → V := X_0 + Σ_{s=1}^∞ θ^{−s} ε_s,

almost surely as well as in second mean. Given the variable X_0, the limit V is normally distributed with mean X_0 and variance (θ² − 1)^{−1}. An application of the Toeplitz lemma (Problem 9.6) yields, almost surely,

  θ^{−2n} Σ_{t=1}^n X²_{t−1} → V²/(θ² − 1).
The linear term in the quadratic representation of the log likelihood can (under θ) be rewritten as θ^{−n} Σ_{t=1}^n ε_t X_{t−1}, and satisfies, by the Cauchy-Schwarz inequality and the Toeplitz lemma,

  E | θ^{−n} Σ_{t=1}^n ε_t X_{t−1} − θ^{−n} Σ_{t=1}^n ε_t θ^{t−1} V |
    ≤ |θ|^{−n} Σ_{t=1}^n |θ|^{t−1} ( E(θ^{−t+1} X_{t−1} − V)² )^{1/2} → 0.
It follows that the sequence of vectors (Δ_{n,θ}, J_{n,θ}) has the same limit distribution as the sequence of vectors (V θ^{−n} Σ_{t=1}^n θ^{t−1} ε_t, V²/(θ² − 1)). For every n the variable θ^{−n} Σ_{t=1}^n θ^{t−1} ε_t is normally distributed, and its covariance with V converges to zero; hence in the limit it is distributed as a N(0, (θ² − 1)^{−1})-variable W independent of V. Conditionally on J = V²/(θ² − 1), the first coordinate VW of the limit is N(0, J)-distributed, so that the sequence of experiments is LAMN. □
9.7 Heuristics
The asymptotic representation theorem, Theorem 9.3, shows that every sequence of statistics
in a converging sequence of experiments is matched by a statistic in the limit experiment.
It is remarkable that this is true under the present definition of convergence of experiments,
which involves only marginal convergence and is very weak.
Under appropriate stronger forms of convergence more can be said about the nature of
the matching procedure in the limit experiment. For instance, a sequence of maximum like-
lihood estimators converges to the maximum likelihood estimator in the limit experiment,
or a sequence of likelihood ratio statistics converges to the likelihood ratio statistic in the
limit experiment. We do not introduce such stronger convergence concepts in this section
but only note the potential of this argument as a heuristic principle. See section 5.9 for
rigorous results.
For the maximum likelihood estimator the heuristic argument takes the following form.
If ĥ_n maximizes the likelihood h ↦ dP_{n,h}, then it also maximizes the likelihood ratio process h ↦ dP_{n,h}/dP_{n,h_0}. The latter sequence of processes converges (marginally) in distribution to the likelihood ratio process h ↦ dP_h/dP_{h_0} of the limit experiment. It is reasonable to expect that the maximizer ĥ_n converges in distribution to the maximizer of the process h ↦ dP_h/dP_{h_0}, which is the maximum likelihood estimator for h in the limit experiment. (Assume that this exists and is unique.) If the converging experiments are
the local experiments corresponding to a given sequence of experiments with a parameter θ, then the argument suggests that the sequence of local maximum likelihood estimators ĥ_n = r_n(θ̂_n − θ) converges, under θ, in distribution to the maximum likelihood estimator in the local limit experiment, under h = 0.
Besides yielding the limit distribution of the maximum likelihood estimator, the argu-
ment also shows to what extent the estimator is asymptotically efficient. It is efficient, or
inefficient, in the same sense as the maximum likelihood estimator is efficient or ineffi-
cient in the limit experiment. That maximum likelihood estimators are often asymptotically
efficient is a consequence of the fact that often the limit experiment is Gaussian and the
maximum likelihood estimator of a Gaussian location parameter is optimal in a certain
sense. If the limit experiment is not Gaussian, there is no a priori reason to expect that the
maximum likelihood estimators are asymptotically efficient.
A variety of examples shows that the conclusions of the preceding heuristic arguments are
often but not universally valid. The reason for failures is that the convergence of experiments
is not well suited to allow claims about maximum likelihood estimators. Such claims require
stronger forms of convergence than marginal convergence only.
For the case of experiments consisting of a random sample from a smooth parametric model, the argument is made precise in section 7.4. Next to the convergence of experiments, it is required only that the maximum likelihood estimator is consistent and that the log density is locally Lipschitz in the parameter. The preceding heuristic argument also extends to the other examples of convergence to limit experiments considered in this chapter. For instance, the maximum likelihood estimator based on a sample from the uniform distribution on [0, θ] converges, after rescaling, to the maximum likelihood estimator in the exponential limit experiment (cf. Problem 10).
Notes
This chapter presents a few examples from a large body of theory. The notion of a limit
experiment was introduced by Le Cam in [95]. He defined convergence of experiments
through convergence of all finite subexperiments relative to his deficiency distance, rather
than through convergence of the likelihood ratio processes. This deficiency distance in-
troduces a "strong topology" next to the "weak topology" corresponding to convergence
of experiments. For experiments with a finite parameter set, the two topologies coincide.
There are many general results that can help to prove the convergence of experiments and
to find the limits (also in the examples discussed in this chapter). See [82], [89], [96], [97],
[115], [138], [142] and [144] for more information and more examples. For nonlocal ap-
proximations in the strong topology see, for example, [96] or [110].
PROBLEMS
1. Let X_1, ..., X_n be an i.i.d. sample from the normal N(h/√n, 1) distribution, in which h ∈ ℝ. The corresponding sequence of experiments converges to a normal experiment by the general results. Can you see this directly?
2. If the nth experiment corresponds to the observation of a sample of size n from the uniform
[0, 1 − h/n] distribution, then the limit experiment corresponds to observation of a shifted exponential variable
Z. The sequences −n(X_(n) − 1) and √n(2X̄_n − 1) both converge in distribution under every h.
According to the representation theorem their sets of limit distributions are the distributions of
randomized statistics based on Z. Find these randomized statistics explicitly. Any implications
regarding the quality of X_(n) and X̄_n as estimators?
3. Let the nth experiment consist of one observation from the binomial distribution with parameters
n and success probability h/n, with 0 < h < 1 unknown. Show that this sequence of experiments
converges to the experiment consisting of observing a Poisson variable with mean h.
4. Let the nth experiment consist of observing an i.i.d. sample of size n from the uniform
[−1 − h/n, 1 + h/n] distribution. Find the limit experiment.
5. Prove the asymptotic representation theorem for the case in which the nth experiment corresponds
to an i.i.d. sample from the uniform [0, θ − h/n] distribution with h > 0, by mimicking the proof
of this theorem for the locally asymptotically normal case.
6. (Toeplitz lemma.) If a_n is a sequence of nonnegative numbers with Σ a_n = ∞, and x_n → x an
arbitrary convergent sequence of numbers, then the sequence Σ_{i=1}^n a_i x_i / Σ_{i=1}^n a_i converges to
x as well. Show this.
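The statement can be checked numerically. The following sketch (not part of the text) uses the arbitrary choices a_i = i^{-1/2}, which gives a divergent series of nonnegative weights, and x_i = 2 + 1/i, which converges to 2.

```python
# Numerical sanity check of the Toeplitz lemma: for nonnegative weights a_i
# with a divergent sum and a convergent sequence x_i -> x, the weighted
# averages sum(a_i * x_i) / sum(a_i) should also converge to x.
import math

N = 200_000
a = [1.0 / math.sqrt(i) for i in range(1, N + 1)]  # nonnegative, sum diverges
x = [2.0 + 1.0 / i for i in range(1, N + 1)]       # converges to 2

num = sum(ai * xi for ai, xi in zip(a, x))
den = sum(a)
print(num / den)  # close to 2
```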
7. Derive a limit experiment in the case of Galton-Watson branching with μ(θ) < 1.
8. Derive a limit experiment in the case of a Gaussian AR(1) process with θ = 1.
9. Derive a limit experiment for sampling from a U[σ, τ] distribution with both endpoints unknown.
10. In the case of sampling from the U[0, θ] distribution show that the maximum likelihood estimator
for θ converges to the maximum likelihood estimator in the limit experiment. Why is the latter
not a good estimator?
11. Formulate and prove a local asymptotic minimax theorem for estimating θ from a sample from
a U[0, θ] distribution, using ℓ(x) = x² as loss function.
10
Bayes Procedures
10.1 Introduction
In Bayesian terminology the distribution P_{n,θ} of an observation X_n under a parameter θ
is viewed as the conditional law of X_n given that a random variable Θ_n is equal to θ.
The distribution Π of the "random parameter" Θ_n is called the prior distribution, and the
conditional distribution of Θ_n given X_n is the posterior distribution. If Θ_n possesses a
density π and P_{n,θ} admits a density p_{n,θ} (relative to given dominating measures), then the
density of the posterior distribution is given by Bayes' formula

\[ \pi_{\Theta_n \mid X_n}(\theta \mid x) = \frac{p_{n,\theta}(x)\,\pi(\theta)}{\int p_{n,\theta}(x)\,\pi(\theta)\,d\theta}. \]

This expression may define a probability density even if π is not a probability density itself.
A prior distribution with infinite mass is called improper.
The calculation of the posterior measure can be considered the ultimate aim of a Bayesian
analysis. Alternatively, one may wish to obtain a "point estimator" for the parameter θ,
using the posterior distribution. The posterior mean E(Θ_n | X_n) = ∫ θ π_{Θ_n|X_n}(θ | X_n) dθ is
often used for this purpose, but other location estimators are also reasonable.
A choice of point estimator may be motivated by a loss function. The Bayes risk of an
estimator T_n relative to the loss function ℓ and prior measure Π is defined as

\[ \int E_\theta\, \ell(T_n - \theta)\, d\Pi(\theta). \]

Here the expectation E_θ ℓ(T_n − θ) is the risk function of T_n in the usual set-up and is identical
to the conditional risk E(ℓ(T_n − Θ_n) | Θ_n = θ) in the Bayesian notation. The corresponding
Bayes estimator is the estimator T_n that minimizes the Bayes risk. Because the Bayes risk
can be written in the form E E(ℓ(T_n − Θ_n) | X_n), the value T_n = T_n(x) minimizes, for every
fixed x, the "posterior risk"

\[ E\bigl(\ell(t - \Theta_n) \mid X_n = x\bigr). \]

Minimizing this expression may again be a well-defined problem even for prior densities
of infinite total mass. For the loss function ℓ(y) = ‖y‖², the solution T_n is the posterior
mean E(Θ_n | X_n); for absolute loss ℓ(y) = ‖y‖, the solution is the posterior median.
Other Bayesian point estimators are the posterior mode, which reduces to the maximum
likelihood estimator in the case of a uniform prior density; or a maximum probability
estimator, such as the center of the smallest ball that contains at least posterior mass 1/2
(the "posterior shorth" in dimension one).
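Not part of the text: the point estimators just listed, computed from Monte Carlo draws of a toy one-dimensional posterior (an assumed N(1, 1/4) distribution). The posterior mean, the posterior median, and the center of the shortest interval of posterior mass 1/2 (the posterior shorth) all land near the posterior's center.

```python
# Bayes point estimators computed from posterior draws (toy example).
import random
import statistics

random.seed(0)
draws = sorted(random.gauss(1.0, 0.5) for _ in range(20_000))

post_mean = statistics.fmean(draws)     # Bayes estimator for squared loss
post_median = statistics.median(draws)  # Bayes estimator for absolute loss

# Center of the shortest interval containing half of the posterior mass
# (the "posterior shorth"; in dimension one the smallest ball is an interval).
n = len(draws)
k = n // 2
best = min(range(n - k), key=lambda i: draws[i + k] - draws[i])
post_shorth = 0.5 * (draws[best] + draws[best + k])

print(post_mean, post_median, post_shorth)  # all close to 1
```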
If the underlying experiments converge, in a suitable sense, to a Gaussian location
experiment, then all these possibilities are typically asymptotically equivalent. Consider
the case that the observation consists of a random sample of size n from a density p_θ that
depends smoothly on a Euclidean parameter θ. Thus the density p_{n,θ} has a product form,
and, for a given prior Lebesgue density π, the posterior density takes the form

\[ \pi_{\Theta_n \mid X_1,\dots,X_n}(\theta) = \frac{\prod_{i=1}^n p_\theta(X_i)\,\pi(\theta)}{\int \prod_{i=1}^n p_\theta(X_i)\,\pi(\theta)\,d\theta}. \]
Typically, the distribution corresponding to this measure converges to the measure that is
degenerate at the true parameter value θ0, as n → ∞. In this sense Bayes estimators are
usually consistent. A further discussion is given in sections 10.2 and 10.4. To obtain a
more interesting limit, we rescale the parameter in the usual way and study the sequence of
posterior distributions of √n(Θ_n − θ0), whose densities are given by

\[ \pi_{\sqrt n(\Theta_n-\theta_0) \mid X_1,\dots,X_n}(h) = \frac{\prod_{i=1}^n p_{\theta_0+h/\sqrt n}(X_i)\,\pi(\theta_0 + h/\sqrt n)}{\int \prod_{i=1}^n p_{\theta_0+h/\sqrt n}(X_i)\,\pi(\theta_0 + h/\sqrt n)\,dh}. \]
If the prior density π is continuous, then π(θ0 + h/√n), for large n, behaves like the constant
π(θ0), and π cancels from the expression for the posterior density. For densities p_θ that
are sufficiently smooth in the parameter, the sequence of models (P_{θ0+h/√n} : h ∈ ℝ^k) is
locally asymptotically normal, as discussed in Chapter 7. This means that the likelihood
ratio processes h ↦ ∏_{i=1}^n (p_{θ0+h/√n}/p_{θ0})(X_i) behave asymptotically as the likelihood ratio
process of the normal experiment (N(h, I_{θ0}^{-1}) : h ∈ ℝ^k). Then we may expect the preceding
display to be asymptotically equivalent in distribution to

\[ \frac{dN(h, I_{\theta_0}^{-1})(X)}{\int dN(h, I_{\theta_0}^{-1})(X)\,dh} = dN(X, I_{\theta_0}^{-1})(h), \]
where dN(μ, Σ) denotes the density of the normal distribution. The expression in the
preceding display is exactly the posterior density for the experiment (N(h, I_{θ0}^{-1}) : h ∈ ℝ^k),
relative to the (improper) Lebesgue prior distribution. The expression on the right shows
that this is a normal distribution with mean X and covariance matrix I_{θ0}^{-1}.
This heuristic argument leads us to expect that the posterior distribution of √n(Θ_n −
θ0) "converges" under the true parameter θ0 to the posterior distribution of the Gaussian
limit experiment relative to the Lebesgue prior. The latter is equal to the N(X, I_{θ0}^{-1})
distribution, for X possessing the N(0, I_{θ0}^{-1})-distribution. The notion of convergence in this
statement is a complicated one, because a posterior distribution is a conditional, and hence
stochastic, probability measure, but there is no need to make the heuristics precise at this
point. On the other hand, the convergence should certainly include that "nice" Euclidean-
valued functionals applied to the posterior laws converge in distribution in the usual sense.
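The heuristic can be illustrated numerically. The following sketch (not part of the text) uses Bernoulli observations with a Beta(2, 2) prior, both arbitrary choices; the exact Beta posterior has closed-form mean and variance, which are compared with the normal approximation N(θ̂, I(θ̂)^{-1}/n) suggested above.

```python
# Numerical illustration of the Bernstein-von Mises heuristic for
# Bernoulli(theta0) data: the Beta posterior is close to the normal
# distribution centered at the MLE with variance 1/(n * Fisher information).
import random

random.seed(1)
theta0, n = 0.3, 10_000
s = sum(random.random() < theta0 for _ in range(n))  # number of successes

a, b = 2 + s, 2 + (n - s)  # Beta(2, 2) prior -> Beta(a, b) posterior
post_mean = a / (a + b)
post_var = a * b / ((a + b) ** 2 * (a + b + 1))

theta_hat = s / n                          # maximum likelihood estimator
bvm_var = theta_hat * (1 - theta_hat) / n  # inverse Fisher information / n

print(abs(post_mean - theta_hat))   # tiny: the prior washes out
print(abs(post_var / bvm_var - 1))  # variance ratio close to 1
```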
10.2 Bernstein-von Mises Theorem
Define the standardized score

\[ \Delta_{n,\theta_0} = \frac{1}{\sqrt n}\sum_{i=1}^n I_{\theta_0}^{-1}\,\dot\ell_{\theta_0}(X_i), \]
where ℓ̇_θ is the score function of the model. The Bernstein-von Mises theorem asserts
that the total variation distance between the posterior distribution of √n(Θ_n − θ0) and the
random distribution N(Δ_{n,θ0}, I_{θ0}^{-1}) converges to zero. Because Δ_{n,θ0} converges in
distribution to X, this has as a consequence that the posterior distribution of √n(Θ_n − θ0)
converges, in any reasonable sense, in distribution to N(X, I_{θ0}^{-1}).
The conditions of the following version of the Bernstein-von Mises theorem are re-
markably weak. Besides differentiability in quadratic mean of the model, it is assumed
that there exists a sequence of uniformly consistent tests for testing H0 : θ = θ0 against
H1 : ‖θ − θ0‖ ≥ ε, for every ε > 0. In other words, it must be possible to separate the
true value θ0 from the complements of balls centered at θ0. Because the theorem implies
that the posterior distributions eventually concentrate on balls of radii M_n/√n around θ0,
for every M_n → ∞, this separation hypothesis appears to be very reasonable. Even more
so, since, as is noted in Lemmas 10.4 and 10.6, under continuity and identifiability of the
model, separation by tests of H0 : θ = θ0 from H1 : ‖θ − θ0‖ ≥ ε for a single (large) ε > 0
already implies separation for every ε > 0. Furthermore, if Θ is compact and the model
continuous and identifiable, then even the separation condition is superfluous (because it is
automatically satisfied).†
10.1 Theorem (Bernstein-von Mises). Let the experiment (P_θ : θ ∈ Θ) be differentiable
in quadratic mean at θ0 with nonsingular Fisher information matrix I_{θ0}, and suppose that
for every ε > 0 there exists a sequence of tests φ_n such that

\[ P_{\theta_0}^n \varphi_n \to 0, \qquad \sup_{\|\theta-\theta_0\| \ge \varepsilon} P_\theta^n (1 - \varphi_n) \to 0. \]

Furthermore, let the prior measure be absolutely continuous in a neighborhood of θ0 with
a continuous positive density at θ0. Then the corresponding posterior distributions satisfy

\[ \bigl\| P_{\sqrt n(\Theta_n-\theta_0) \mid X_1,\dots,X_n} - N\bigl(\Delta_{n,\theta_0},\, I_{\theta_0}^{-1}\bigr) \bigr\| \xrightarrow{P_{\theta_0}^n} 0. \]
Proof. Throughout the proof we rescale the parameter to the local parameter h =
√n(θ − θ0). Let Π_n be the corresponding prior distribution on h (hence Π_n(B) = Π(θ0 +
B/√n)), and for a given set C let Π_n^C be the probability measure obtained by restricting
Π_n to C and next renormalizing. Write P_{n,h} for the distribution of X_n = (X_1, ..., X_n)
under the original parameter θ0 + h/√n, and let P_{n,C} = ∫ P_{n,h} dΠ_n^C(h). Finally, let
H_n = √n(Θ_n − θ0), and denote the posterior distributions relative to Π_n and Π_n^C by P_{H_n|X_n}
and P_{H_n|X_n}^C, respectively.
The proof consists of two steps. First, it is shown that the difference between the posterior
measures relative to the priors Π_n and Π_n^{C_n}, for C_n the ball with radius M_n, is asymptotically
negligible, for any M_n → ∞. Next it is shown that the difference between N(Δ_{n,θ0}, I_{θ0}^{-1})
and the posterior measures relative to the priors Π_n^{C_n} converges to zero in probability, for
some M_n → ∞.
For U, a ball of fixed radius around zero, the sequences P_{n,U} and P_{n,0} are mutually
contiguous, because this is true for P_{n,h_n} and P_{n,0} for every bounded sequence h_n, by
Theorem 7.2. Thus, when showing convergence to zero in probability, we may always
exchange P_{n,0} and P_{n,U}.
† Recall that a test is a measurable function of the observations taking values in the interval [0, 1]; in the present
context this means a measurable function φ_n : 𝒳^n → [0, 1].
Let C_n be the ball of radius M_n. By writing out the conditional densities we see that, for
any measurable set B,

\[ P_{H_n\mid X_n}(B) - P^{C_n}_{H_n\mid X_n}(B) = P_{H_n\mid X_n}(C_n^c \cap B) - P_{H_n\mid X_n}(C_n^c)\, P^{C_n}_{H_n\mid X_n}(B). \]

Taking the supremum over B yields the bound

\[ \bigl\| P_{H_n\mid X_n} - P^{C_n}_{H_n\mid X_n} \bigr\| \le 2\, P_{H_n\mid X_n}(C_n^c). \]
Manipulating again the expressions for the posterior densities, we can rewrite the first term
on the right as
For the tests given in the statement of the theorem, the integrand on the right converges
to zero pointwise, but this is not enough. By Lemma 10.3, there automatically exist tests
φ_n for which the convergence is exponentially fast. For the tests given by the lemma the
preceding display is bounded above by
Here Π_n(U) = Π(θ0 + U/√n) is bounded below by a term of the order 1/√n^k, by the
positivity and continuity of the density π at θ0. Splitting the integral into the domains
M_n ≤ ‖h‖ ≤ D√n and ‖h‖ ≥ D√n, for D ≤ 1 sufficiently small that π(θ) is uniformly
bounded on ‖θ − θ0‖ ≤ D, we see that the expression is bounded above by a multiple of
\[ \le \int\!\!\int \Bigl(1 - \frac{p_{n,g}(X_n)\,\pi_n(g)\, dN^C(\Delta_{n,\theta_0}, I_{\theta_0}^{-1})(h)}{p_{n,h}(X_n)\,\pi_n(h)\, dN^C(\Delta_{n,\theta_0}, I_{\theta_0}^{-1})(g)}\Bigr)^+ dN^C(\Delta_{n,\theta_0}, I_{\theta_0}^{-1})(g)\, dP_{H_n\mid X_n}(h), \]
because (1 − EY)^+ ≤ E(1 − Y)^+. This can be further bounded by replacing the third
occurrence of N^C(Δ_{n,θ0}, I_{θ0}^{-1}) by a multiple of the uniform measure λ_C on C. By the
dominated-convergence theorem, the double integral on the right side converges to zero in
mean under P_{n,C} if the integrand converges to zero in probability under the measure
10.3 Lemma. Under the conditions of Theorem 10.1, there exists for every M_n → ∞ a
sequence of tests φ_n and a constant c > 0 such that, for every sufficiently large n and every
‖θ − θ0‖ ≥ M_n/√n,

\[ P_{\theta_0}^n \varphi_n \to 0, \qquad P_\theta^n(1 - \varphi_n) \le e^{-c\,n\,(\|\theta-\theta_0\|^2 \wedge 1)}. \]
Proof. We shall construct two sequences of tests, which "work" for the ranges M_n/√n ≤
‖θ − θ0‖ ≤ ε and ‖θ − θ0‖ > ε, respectively, for a given ε > 0. Then the φ_n of the lemma
can be defined as the maximum of the two sequences.
First consider the range M_n/√n ≤ ‖θ − θ0‖ ≤ ε. Let ℓ̇_{θ0,L} be the score function
truncated (coordinatewise) to the interval [−L, L]. By the dominated convergence theorem,
P_{θ0} ℓ̇_{θ0,L} ℓ̇_{θ0,L}ᵀ → I_{θ0} as L → ∞. Hence, there exists L > 0 such that the matrix P_{θ0} ℓ̇_{θ0,L} ℓ̇_{θ0,L}ᵀ is
nonsingular. Fix such an L and define
By the central limit theorem, P_{θ0}^n ω_n → 0, so that ω_n satisfies the first requirement. By the
triangle inequality,
by Hoeffding's inequality (e.g., Appendix B in [117]), for a sufficiently small constant C.
Next, consider the range ‖θ − θ0‖ > ε for an arbitrary fixed ε > 0. By assumption there
exist tests φ_n such that
It suffices to show that these tests can be replaced, if necessary, by tests for which the
convergence to zero is exponentially fast. Fix k large enough that P_{θ0}^k φ_k and P_θ^k(1 − φ_k)
are smaller than 1/4 for every ‖θ − θ0‖ > ε. Let n = mk + r for 0 ≤ r < k, and define
Y_{n,1}, ..., Y_{n,m} as φ_k applied in turn to X_1, ..., X_k, to X_{k+1}, ..., X_{2k}, and so forth. Let Ȳ_{n,m}
be their average, and then define ω_n = 1{Ȳ_{n,m} ≥ 1/2}. Because E_θ Y_{n,j} ≥ 3/4 for every
‖θ − θ0‖ > ε and every j, Hoeffding's inequality implies that
Because m is proportional to n, this gives the desired exponential decay. Because E_{θ0} Y_{n,j} ≤
1/4, the expectations P_{θ0}^n ω_n are similarly bounded. ∎
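Not part of the text: a simulation of the block-majority construction in the last paragraph of the proof. A fixed test with error probabilities below 1/4 is applied to successive blocks of observations, and the combined test rejects when at least half of the blocks do; by Hoeffding's inequality the error probabilities of the combined test decay exponentially in the number m of blocks. The weak test is simulated here as a biased coin.

```python
# Exponential error decay of a majority vote over m weak tests, each of
# which errs with probability 1/4 independently across blocks.
import random

random.seed(2)

def majority_error(p_block_error, m, trials=2_000):
    """Fraction of trials in which at least half of m weak tests err."""
    bad = 0
    for _ in range(trials):
        errors = sum(random.random() < p_block_error for _ in range(m))
        if errors >= m / 2:  # the combined test errs
            bad += 1
    return bad / trials

errs = {m: majority_error(0.25, m) for m in (1, 10, 50)}
for m, e in errs.items():
    print(m, e)  # error shrinks rapidly as m grows
```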
The Bernstein-von Mises theorem is sometimes written with a different "centering se-
quence." By Theorem 8.14 any sequence of standardized asymptotically efficient estimators
√n(θ̂_n − θ0) is asymptotically equivalent in probability to Δ_{n,θ0}. Because the total variation
distance between two normal distributions with a common covariance matrix converges to
zero when the difference of their means does, the theorem remains valid with Δ_{n,θ0} replaced
by √n(θ̂_n − θ0). Under regularity conditions this is true for the maximum likelihood
estimators θ̂_n. Combining this with Theorem 5.39 we then have, informally, that the posterior
law of Θ_n given the observations and the sampling law of θ̂_n are both approximately normal
with covariance matrix I_θ^{-1}/n and mean the other quantity, since conditioning θ̂_n on Θ_n = θ
gives the usual "frequentist" distribution of θ̂_n under θ.
This gives a remarkable symmetry.
Le Cam's version of the Bernstein-von Mises theorem requires the existence of tests
that are uniformly consistent for testing H0 : θ = θ0 versus H1 : ‖θ − θ0‖ ≥ ε, for every
ε > 0. Such tests certainly exist if there exist estimators T_n that are uniformly consistent,
in that, for every ε > 0,

\[ \sup_\theta P_\theta^n\bigl(\|T_n - \theta\| \ge \varepsilon\bigr) \to 0. \]

In that case, we can define φ_n = 1{‖T_n − θ0‖ ≥ ε/2}. Thus the condition of the Bernstein-
von Mises theorem that certain tests exist can be replaced by the condition that uniformly
consistent estimators exist. This is often the case. For instance, the next lemma shows that
this is the case for a Euclidean sample space 𝒳 provided, for F_θ the distribution functions
corresponding to the P_θ,

\[ \inf_{\|\theta-\theta'\| \ge \varepsilon} \sup_t \bigl| F_\theta(t) - F_{\theta'}(t) \bigr| > 0, \qquad \text{every } \varepsilon > 0. \]
For compact parameter sets, this is implied by identifiability and continuity of the maps θ ↦
F_θ. We generalize and formalize this in a second lemma, which shows that uniformity on
compact subsets is always achievable if the model (P_θ : θ ∈ Θ) is differentiable in quadratic
mean at every θ and the parameter θ is identifiable.
A class of measurable functions 𝓕 is a uniform Glivenko-Cantelli class (in probability)
if, for every ε > 0,

\[ \sup_P P\bigl( \|\mathbb P_n - P\|_{\mathcal F} \ge \varepsilon \bigr) \to 0. \]

Here the supremum is taken over all probability measures P on the sample space, and
‖Q‖_𝓕 = sup_{f∈𝓕} |Qf|. An example is the collection of indicators of all cells (−∞, t] in a
Euclidean sample space.
10.4 Lemma. Suppose that there exists a uniform Glivenko-Cantelli class 𝓕 such that, for
every ε > 0,

\[ \inf_{d(\theta,\theta') > \varepsilon} \|P_\theta - P_{\theta'}\|_{\mathcal F} > 0. \qquad (10.5) \]

Then there exists a sequence of estimators that is uniformly consistent on Θ for estimat-
ing θ.
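Not part of the text: a minimum-distance estimator of the kind underlying this lemma, sketched for the N(θ, 1) location family. With 𝓕 the indicators of cells (−∞, t], the distance ‖P_θ − P_θ'‖_𝓕 is the Kolmogorov distance; the estimator minimizes the distance between the empirical distribution function and F_θ over a grid. Sample size, grid, and true value are arbitrary choices.

```python
# Minimum Kolmogorov-distance estimation for the N(theta, 1) model.
import math
import random

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kolmogorov(sorted_xs, theta):
    """sup_t |F_n(t) - F_theta(t)| over the jump points of F_n."""
    n = len(sorted_xs)
    return max(
        max(abs((i + 1) / n - Phi(x - theta)), abs(i / n - Phi(x - theta)))
        for i, x in enumerate(sorted_xs)
    )

random.seed(3)
theta_true = 1.7
xs = sorted(random.gauss(theta_true, 1.0) for _ in range(500))

grid = [i / 50 for i in range(-250, 251)]  # candidate thetas in [-5, 5]
theta_md = min(grid, key=lambda t: kolmogorov(xs, t))
print(theta_md)  # close to 1.7
```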
10.6 Lemma. Suppose that Θ is σ-compact, P_θ ≠ P_{θ'} for every pair θ ≠ θ', and the maps
θ ↦ P_θ are continuous for the total variation norm. Then there exists a sequence of
estimators that is uniformly consistent on every compact subset of Θ.
This means that (10.5) is satisfied. The class 𝓕 is also a uniform Glivenko-Cantelli class,
because by Chebyshev's inequality,
Then there exists a sequence m_n → ∞ such that a_{n,m_n} → 0 as n → ∞. It is not hard to see
that θ̂_n = T_{n,m_n} satisfies the requirements. ∎
As a consequence of the second lemma, if there exists a sequence of tests φ_n such that
(10.2) holds for some ε > 0, then it holds for every ε > 0. In that case we can replace the
given sequence φ_n by the minimum of φ_n and the tests 1{‖T_n − θ0‖ ≥ ε/2} for a sequence
of estimators T_n that is uniformly consistent on a sufficiently large subset of Θ.
10.7 Lemma. Let the set of probability measures 𝒫 on a measurable space (𝒳, 𝒜) be
separable for the total variation norm. Then there exists a countable subset 𝒜₀ ⊂ 𝒜 such
that P₁ = P₂ on 𝒜₀ implies P₁ = P₂ for every P₁, P₂ ∈ 𝒫.
Proof. The set 𝒫 can be identified with a subset of L₁(μ) for a suitable probability measure
μ. For instance, μ can be taken a convex linear combination of a countable dense set. Let
𝒫₀ be a countable dense subset, and let 𝒜₀ be the set of all finite intersections of the sets
p⁻¹(B) for p ranging over a choice of densities of the set 𝒫₀ ⊂ L₁(μ) and B ranging over
a countable generator of the Borel sets in ℝ.
Then every density p ∈ 𝒫₀ is σ(𝒜₀)-measurable by construction. A density of a measure
P ∈ 𝒫 − 𝒫₀ can be approximated in L₁(μ) by a sequence from 𝒫₀ and hence can be chosen
σ(𝒜₀)-measurable, without loss of generality.
Because 𝒜₀ is intersection-stable (a "π-system"), two probability measures that agree
on 𝒜₀ automatically agree on the σ-field σ(𝒜₀) generated by 𝒜₀. Then they also give the
same expectation to every σ(𝒜₀)-measurable function f : 𝒳 → [0, 1]. If the measures
have σ(𝒜₀)-measurable densities, then they must agree on 𝒜, because P(A) = E_μ 1_A p =
E_μ E_μ(1_A | σ(𝒜₀)) p if p is σ(𝒜₀)-measurable. ∎
posterior laws converges to the same location functional applied to the limiting Gaussian
posterior distribution. For most choices this means to X, or a N(0, I_{θ0}^{-1})-distribution.
In this section we consider more general Bayes point estimators that are defined as the
minimizers of the posterior risk functions relative to some loss function. For a given loss
function ℓ : ℝ^k → [0, ∞), let T_n, for fixed X₁, ..., X_n, minimize the posterior risk
It is not immediately clear that the minimizing values T_n can be selected as a measurable
function of the observations. This is an implicit assumption, or otherwise the statements
are to be understood relative to outer probabilities. We also make it an implicit assumption
that the integrals in the preceding display exist, for almost every sequence of observations.
To derive the limit distribution of √n(T_n − θ0), we apply general results on M-estimators,
in particular the argmax continuous-mapping theorem, Theorem 5.56.
We restrict ourselves to loss functions with the property, for every M > 0,

\[ \sup_{\|h\| \le M} \ell(h) \le \inf_{\|h\| \ge 2M} \ell(h), \]

with strict inequality for at least one M.† This is true, for instance, for loss functions of
the form ℓ(h) = ℓ₀(‖h‖) for a nondecreasing function ℓ₀ : [0, ∞) → [0, ∞) that is not
constant on (0, ∞). Furthermore, we suppose that ℓ grows at most polynomially: For some
constant p ≥ 0,
10.8 Theorem. Let the conditions of Theorem 10.1 hold, and let ℓ satisfy the conditions as
listed, for a p such that ∫ ‖θ‖^p dΠ(θ) < ∞. Then the sequence √n(T_n − θ0) converges
under θ0 in distribution to the minimizer of t ↦ ∫ ℓ(t − h) dN(X, I_{θ0}^{-1})(h), for X possessing
the N(0, I_{θ0}^{-1})-distribution, provided that any two minimizers of this process coincide almost
surely. In particular, for every nonzero, subconvex loss function it converges to X.
*Proof. We adopt the notation as listed in the first paragraph of the proof of Theorem 10.1.
The last assertion of the theorem is a consequence of Anderson's lemma, Lemma 8.5.
The standardized estimator √n(T_n − θ0) minimizes the function

\[ t \mapsto Z_n(t) = \frac{\int \ell(t-h)\, p_{n,h}(X_n)\, d\Pi_n(h)}{\int p_{n,h}(X_n)\, d\Pi_n(h)} = P_{H_n \mid X_n}\, \ell_t, \]

where ℓ_t is the function h ↦ ℓ(t − h). The proof consists of three parts. First it is shown
that integrals over the sets ‖h‖ ≥ M_n can be neglected for every M_n → ∞. Next, it is proved
that the sequence √n(T_n − θ0) is uniformly tight. Finally, it is shown that the stochastic
processes t ↦ Z_n(t) converge in distribution in the space ℓ^∞(K), for every compact K, to
the process

\[ t \mapsto Z(t) = \int \ell(t-h)\, dN(X, I_{\theta_0}^{-1})(h). \]

The sample paths of this limit process are continuous in t, in view of the subexponential
growth of ℓ and the smoothness of the normal density. Hence the theorem follows from the
argmax theorem, Corollary 5.58.
Let C_n be the ball of radius M_n for a given, arbitrary sequence M_n → ∞. We first show
that, for every measurable function f that grows subpolynomially of order p,

\[ P_{H_n \mid X_n}\bigl(f\, 1_{C_n^c}\bigr) \xrightarrow{P_{n,0}} 0. \qquad (10.9) \]

To see this, we utilize the tests φ_n for testing H0 : θ = θ0 that exist by assumption. In view
of Lemma 10.3, these may be assumed without loss of generality to satisfy the stronger
property as given in the statement of this lemma. Furthermore, they can be constructed
to be nonrandomized (i.e., to have range {0, 1}). Then it is immediate that (P_{H_n|X_n} f) φ_n
converges to zero in P_{n,0}-probability for every measurable function f. Next, by writing out
the posterior densities, we see that, for U a fixed ball around the origin,
Here Π_n(U) is bounded below by a term of the order 1/√n^k, by the positivity and continuity
at θ0 of the prior density π. Split the integral over the domains M_n ≤ ‖h‖ ≤ D√n and
‖h‖ ≥ D√n, and use the fact that ∫ ‖θ‖^p dΠ(θ) < ∞ to bound the right side of the display
by terms of the order e^{−AM_n²} and √n^{k+p} e^{−Bn}, for some A, B > 0. These converge to zero,
whence (10.9) has been proved.
Define ℓ̄(M) as the supremum of ℓ(h) over the ball of radius M, and ℓ̲(M) as the
infimum over the complement of this ball. By assumption, there exists δ > 0 such that
η := ℓ̲(2δ) − ℓ̄(δ) > 0. Let U be the ball of radius δ around 0. For every ‖t‖ ≥ 3M_n and
sufficiently large M_n, we have ℓ(t − h) − ℓ(−h) ≥ η if h ∈ U, and ℓ(t − h) − ℓ(−h) ≥
ℓ̲(2M_n) − ℓ̄(M_n) ≥ 0 if h ∈ U^c ∩ C_n, by assumption. Therefore,
Here the posterior probability P_{H_n|X_n}(U) of U converges in distribution to N(X, I_{θ0}^{-1})(U),
by the Bernstein-von Mises theorem. This limit is positive almost surely. The second
term in the preceding display converges to zero in probability by (10.9). Conclude that the
infimum of Z_n(t) − Z_n(0) over the set of t with ‖t‖ ≥ 3M_n is bounded below by variables
that converge in distribution to a strictly positive variable. Thus this infimum is positive
with probability tending to one. This implies that the probability that t ↦ Z_n(t) has a
minimizer in the set ‖t‖ ≥ 3M_n converges to zero. Because this is true for any M_n → ∞,
it follows that the sequence √n(T_n − θ0) is uniformly tight.
Let C be the ball of fixed radius M around 0, and fix some compact set K ⊂ ℝ^k. Define
stochastic processes
The function h ↦ ℓ(t − h)1_C(h) is bounded, uniformly as t ranges over the compact K.
Hence, by the Bernstein-von Mises theorem, Z_{n,M} − W_{n,M} → 0 in probability in ℓ^∞(K) as
n → ∞, for every fixed M. Second, by the continuous-mapping theorem, W_{n,M} ⇝ W_M in
ℓ^∞(K), as n → ∞, for fixed M. Next W_M → Z in ℓ^∞(K) as M → ∞, or equivalently C ↑ ℝ^k.
Conclude that there exists a sequence M_n → ∞ such that the processes Z_{n,M_n} ⇝ Z in ℓ^∞(K).
Because, by (10.9), Z_n(t) − Z_{n,M_n}(t) → 0, we finally conclude that Z_n ⇝ Z in ℓ^∞(K). ∎
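Not part of the text: a one-dimensional numerical check of the last assertion of Theorem 10.8. For the subconvex loss ℓ(h) = min(|h|, 1) and an arbitrary value X = 0.8, the map t ↦ ∫ ℓ(t − h) dN(X, 1)(h) should be minimized at t = X, in accordance with Anderson's lemma; the grids and truncation below are arbitrary numerical choices.

```python
# Grid search for the minimizer of t -> integral of l(t - h) dN(X, 1)(h),
# with l(h) = min(|h|, 1), approximating the integral by a Riemann sum.
import math

X = 0.8

def normal_pdf(h, mean, var=1.0):
    return math.exp(-(h - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def risk(t, mean=X, step=0.01):
    # Riemann sum over a wide truncation [mean - 8, mean + 8] of the real line
    hs = [mean - 8 + step * i for i in range(int(16 / step) + 1)]
    return step * sum(min(abs(t - h), 1.0) * normal_pdf(h, mean) for h in hs)

grid = [X - 2 + 0.05 * i for i in range(81)]  # candidate minimizers around X
t_star = min(grid, key=risk)
print(round(t_star, 2))  # 0.8
```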
*10.4 Consistency
A sequence of posterior measures P_{Θ_n|X_1,...,X_n} is called consistent under θ if under P_θ^∞ it
converges in distribution to the measure δ_θ that is degenerate at θ, in probability; it is
strongly consistent if this happens for almost every sequence X₁, X₂, ....
Given that, usually, ordinary consistent point estimators of θ exist, consistency of
posterior measures is a modest requirement. If we could know θ with almost complete
accuracy as n → ∞, then we would use a Bayes estimator only if this would also yield the
true value with similar accuracy. Fortunately, posterior measures are usually consistent.
The following famous theorem by Doob shows that under hardly any conditions we already
have consistency under almost every parameter.
Recall that Θ is assumed to be Euclidean and the maps θ ↦ P_θ(A) to be measurable
for every measurable set A.
10.10 Theorem (Doob's consistency theorem). Suppose that the sample space (𝒳, 𝒜) is
a subset of Euclidean space with its Borel σ-field. Suppose that P_θ ≠ P_{θ'} whenever θ ≠ θ'.
Then for every prior probability measure Π on Θ the sequence of posterior measures is
consistent for Π-almost every θ.
Proof. On an arbitrary probability space construct random vectors Θ and X₁, X₂, ...
such that Θ is marginally distributed according to Π and such that given Θ = θ the vectors
X₁, X₂, ... are i.i.d. according to P_θ. Then the posterior distribution based on the first n
observations is P_{Θ|X_1,...,X_n}. Let Q be the distribution of (X₁, X₂, ..., Θ) on 𝒳^∞ × Θ.
The main part of the proof consists of showing that there exists a measurable function
h : 𝒳^∞ → Θ with

\[ h(X_1, X_2, \dots) = \Theta, \qquad Q\text{-a.s.} \qquad (10.11) \]

Suppose that this is true. Then, for any bounded, measurable function f : Θ → ℝ, by
Doob's martingale convergence theorem,

\[ E\bigl(f(\Theta) \mid X_1,\dots,X_n\bigr) \to E\bigl(f(\Theta) \mid X_1, X_2, \dots\bigr) = f(\Theta), \qquad Q\text{-a.s.} \]
This statement refers to the marginal distribution of (X₁, X₂, ...) under Q. We wish to
translate it into a statement concerning the P_θ^∞-measures. Let C ⊂ 𝒳^∞ × Θ be the inter-
section of the sets on which the weak convergence holds and on which (10.11) is valid. By
Fubini's theorem

\[ \int\!\!\int \bigl| h_n(x) - P_\theta(A) \bigr|\, dQ(x, \theta) \to 0. \]
The merit of the preceding theorem is that it imposes hardly any conditions, but its
drawback is that it gives the consistency only up to null sets of possible parameters (de-
pending on the prior). In certain ways these null sets can be quite large, and examples have
been constructed where Bayes estimators behave badly. To guarantee consistency under
every parameter it is necessary to impose some further conditions. Because in this chapter
we are mainly concerned with asymptotic normality of Bayes estimators (which implies
consistency with a rate), we omit a discussion.
Proof. By assumption, the map h : Θ → ℝ^𝓕 defined by h(θ)f = f(θ) is measurable and
one-to-one. Because 𝓕 is countable, the Borel σ-field on ℝ^𝓕 (for the product topology) is
equal to the σ-field generated by the coordinate projections. Hence the σ-fields generated
by h and 𝓕 (viewed as Borel measurable maps into ℝ^𝓕 and ℝ, respectively) are identical.
Now h⁻¹, defined on the range of h, is automatically Borel measurable, by Proposition 8.3.5
in [24], and hence Θ and h(Θ) are Borel isomorphic. ∎
Notes
The Bernstein-von Mises theorem has that name because, as Le Cam and Yang [97] write, it
was first discovered by Laplace. The theorem that is presented in this chapter is considerably
more elegant than the results by these early authors, and also much better than the result
in Le Cam [91], who revived the theorem in order to prove results on superefficiency. We
adapted it from Le Cam [96] and Le Cam and Yang [97].
Ibragimov and Hasminskii [80] discuss the convergence of Bayes point estimators in
greater generality, and also cover non-Gaussian limit experiments, but their discussion of
the i.i.d. case as discussed in the present chapter is limited to bounded parameter sets and
requires stronger assumptions. Our treatment uses some elements of their proof, but is
heavily based on Le Cam's Bernstein-von Mises theorem. Inspection of the proof shows
that the conditions on the loss function can be relaxed significantly, for instance allowing
exponential growth.
Doob's theorem originates in [39]. The potential null sets of inconsistency that it leaves
open really exist in some situations, particularly if the parameter set is infinite dimensional,
and have attracted much attention. See [34], which is accompanied by evaluations of the
phenomenon by many authors, including Bayesians.
PROBLEMS
1. Verify the conditions of the Bernstein-von Mises theorem for the experiment in which P_θ is the
Poisson measure of mean θ.
2. Let P_θ be the k-dimensional normal distribution with mean θ and covariance matrix the identity.
Find the a posteriori law for the prior Π = N(τ, Λ) and some nonsingular matrix Λ. Can you
see directly that the Bernstein-von Mises theorem is true in this case?
3. Let P_θ be the Bernoulli distribution with mean θ. Find the posterior distribution relative to the
beta-prior measure, which has density

\[ \theta \mapsto \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, \theta^{\alpha-1}(1-\theta)^{\beta-1}, \qquad 0 < \theta < 1. \]

4. Suppose that, in the case of a one-dimensional parameter, we use the loss function ℓ(h) =
1_{(−1,2)}(h). Find the limit distribution of the corresponding Bayes point estimator, assuming that
the conditions of the Bernstein-von Mises theorem hold.
11
Projections
11.1 Projections
A common method to derive the limit distribution of a sequence of statistics Tn is to show that
it is asymptotically equivalent to a sequence Sn of which the limit behavior is known. The
basis of this method is Slutsky's lemma, which shows that the sequence T_n = T_n − S_n + S_n
converges in distribution to S if both T_n − S_n → 0 in probability and S_n ⇝ S.
How do we find a suitable sequence Sn? First, the variables Sn must be of a simple form,
because the limit properties of the sequence Sn must be known. Second, Sn must be close
enough. One solution is to search for the closest Sn of a certain predetermined form. In
this chapter, "closest" is taken as closest in square expectation.
Let T and {S : S ∈ 𝒮} be random variables (defined on the same probability space)
with finite second moments. A random variable Ŝ is called a projection of T onto 𝒮 (or
L₂-projection) if Ŝ ∈ 𝒮 and minimizes

\[ S \mapsto E(T - S)^2, \qquad S \in \mathcal S. \]
11.1 Theorem. Let 𝒮 be a linear space of random variables with finite second moments.
Then Ŝ is the projection of T onto 𝒮 if and only if Ŝ ∈ 𝒮 and

\[ E(T - \hat S)\, S = 0, \qquad \text{every } S \in \mathcal S. \]

Every two projections of T onto 𝒮 are almost surely equal. If the linear space 𝒮 contains
the constant variables, then ET = EŜ and cov(T − Ŝ, S) = 0 for every S ∈ 𝒮.
Proof. For any S ∈ 𝒮,

\[ E(T - S)^2 = E(T - \hat S)^2 + 2\, E(T - \hat S)(\hat S - S) + E(\hat S - S)^2. \]

If Ŝ satisfies the orthogonality condition, then the middle term is zero (because Ŝ − S ∈ 𝒮), and we conclude
that E(T − S)² ≥ E(T − Ŝ)², with strict inequality unless E(Ŝ − S)² = 0. Thus, the
orthogonality condition implies that Ŝ is a projection, and also that it is unique.
Conversely, for any number a,

\[ E(T - \hat S - aS)^2 - E(T - \hat S)^2 = a^2\, ES^2 - 2a\, E(T - \hat S) S. \]

If Ŝ is a projection, then this expression is nonnegative for every a. But the parabola
a ↦ a²ES² − 2aE(T − Ŝ)S is nonnegative for all a if and only if the orthogonality condition
E(T − Ŝ)S = 0 is satisfied.
If the constants are in 𝒮, then the orthogonality condition implies E(T − Ŝ)c = 0 for every
constant c, whence the last assertions of the theorem follow. ∎
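Not part of the text: a sketch of the projection and the Pythagorean rule for the simplest nontrivial case, the linear space spanned by the constants and a single variable S. The four-point joint distribution of (T, S) is an arbitrary toy example; the normal equations below are exactly the orthogonality conditions of the theorem.

```python
# L2-projection of T onto span{1, S}, computed from exact moments of a
# discrete toy distribution; the orthogonality conditions are then verified.
outcomes = [(1.0, 0.0), (2.0, 1.0), (4.0, 1.0), (5.0, 2.0)]  # (t, s) pairs
p = 1.0 / len(outcomes)                                       # equal probabilities

def E(f):
    return sum(p * f(t, s) for t, s in outcomes)

ET, ES = E(lambda t, s: t), E(lambda t, s: s)
cov_TS = E(lambda t, s: t * s) - ET * ES
var_S = E(lambda t, s: s * s) - ES ** 2

beta = cov_TS / var_S        # projection is alpha + beta * S
alpha = ET - beta * ES

def proj(s):
    return alpha + beta * s

# Orthogonality: T - S_hat is orthogonal to the constants and to S.
resid_mean = E(lambda t, s: t - proj(s))
resid_dot_S = E(lambda t, s: (t - proj(s)) * s)
print(resid_mean, resid_dot_S)  # both zero

# Pythagorean rule: E T^2 = E (T - S_hat)^2 + E S_hat^2.
lhs = E(lambda t, s: t * t)
rhs = E(lambda t, s: (t - proj(s)) ** 2) + E(lambda t, s: proj(s) ** 2)
print(lhs, rhs)  # equal
```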
The theorem does not assert that projections always exist. This is not true: the in-
fimum inf_{S∈𝒮} E(T − S)² need not be achieved. A sufficient condition for existence is that
𝒮 is closed for the second-moment norm, but existence is usually more easily established
directly.
The orthogonality of T − Ŝ and Ŝ yields the Pythagorean rule ET² = E(T − Ŝ)² + EŜ².
(See Figure 11.1.) If the constants are contained in 𝒮, then this is also true for variances
instead of second moments.
Now suppose a sequence of statistics T,. and linear spaces S,. is given. For each n, Jet S11
be the projection of T11 on S,.. Then the limiting behavior of the sequence T,. follows from
that of S,., and vice versa, provided the quotient varT,.fvarS,. converges to I.
11.2 Theorem. Let S_n be linear spaces of random variables with finite second moments
that contain the constants. Let T_n be random variables with projections Ŝ_n onto S_n. If
var T_n/var Ŝ_n → 1, then

    (T_n − ET_n)/sd T_n − (Ŝ_n − EŜ_n)/sd Ŝ_n → 0    in probability.
Proof. We shall prove convergence in second mean, which is stronger. The expectation
of the difference is zero. Its variance is equal to

    2 − 2 cov(T_n, Ŝ_n)/(sd T_n sd Ŝ_n).

By the orthogonality of T_n − Ŝ_n and Ŝ_n, it follows that ET_nŜ_n = EŜ_n². Because the constants
are in S_n, this implies that cov(T_n, Ŝ_n) = var Ŝ_n, whence the quotient in the display equals
sd Ŝ_n/sd T_n → 1, and the theorem follows. ∎
The condition var T_n/var Ŝ_n → 1 in the theorem implies that the projections Ŝ_n are
asymptotically of the same size as the original T_n. This explains why "nothing is lost" in
the limit, and why the standardized difference between T_n and its projection converges to zero. In the
preceding theorem it is essential that the Ŝ_n are the projections of the variables T_n, because
the condition var T_n/var S_n → 1 for general sequences S_n and T_n does not imply anything.
11.2 Conditional Expectation

Let X be a random variable with EX² < ∞, and let Y be an arbitrary random variable or
vector defined on the same probability space. The conditional expectation E(X | Y) can be
defined as the measurable function g₀(Y) of Y that minimizes E(X − g(Y))²
over all measurable functions g. In the terminology of the preceding section, E(X | Y) is
the projection of X onto the linear space of all measurable functions of Y. It follows that
the conditional expectation is the unique measurable function E(X | Y) of Y that satisfies
the orthogonality relation

    E(X − E(X | Y)) g(Y) = 0,    every measurable function g.
11.4 Example. If X = f(Y) for a measurable function f, then E(X | Y) = X. This
follows immediately from the definition, in which the minimum can be reduced to zero.
The interpretation is that X is perfectly predictable given knowledge of Y. □
11.5 Example. Suppose that (X, Y) has a joint probability density f(x, y) with respect
to a σ-finite product measure μ × ν, and let f(x | y) = f(x, y)/f_Y(y) be the conditional
density of X given Y = y. Then

    E(X | Y) = ∫ x f(x | Y) dμ(x).

(This is well defined only if f_Y(Y) > 0.) Thus the conditional expectation as defined above
concurs with our intuition.
The formula can be established by writing

    E(X − g(Y))² = ∫ [∫ (x − g(y))² f(x | y) dμ(x)] f_Y(y) dν(y).

To minimize this expression over g, it suffices to minimize the inner integral (between
square brackets) by choosing the value of g(y) for every y separately. For each y, the integral
∫ (x − a)² f(x | y) dμ(x) is minimized for a equal to the mean of the density x ↦
f(x | y). □
11.6 Example. If X and Y are independent, then E(X | Y) = EX. Thus, the extra knowledge
of an unrelated variable Y does not change the expectation of X.
The relationship follows from the fact that independent random variables are uncorrelated:
Because E(X − EX)g(Y) = 0 for every measurable function g, the orthogonality relationship
holds for g₀(Y) = EX. □
11.7 Example. If f is measurable, then E(f(Y)X | Y) = f(Y)E(X | Y) for any X and
Y for which these expressions are well defined. The interpretation is that, given Y, the factor f(Y) behaves like a constant and can be
"taken out" of the conditional expectation.
Formally, the rule can be established by checking the orthogonality relationship. For
every measurable function g,

    E(f(Y)X − f(Y)E(X | Y)) g(Y) = E(X − E(X | Y)) (fg)(Y) = 0,

because fg is itself a measurable function of Y. □
11.8 Example. For any random variables X, Y, and Z with EX² < ∞, E(E(X | Y, Z) | Y) = E(X | Y).
This expresses that a projection can be carried out in steps: The projection onto a smaller
set can be obtained by projecting the projection onto a bigger set a second time.
Formally, the relationship can be proved by verifying the orthogonality relationship
E(E(X | Y, Z) − E(X | Y))g(Y) = 0 for all measurable functions g. By Example 11.7, the
left side of this equation is equivalent to EE(Xg(Y) | Y, Z) − EE(g(Y)X | Y) = 0, which
is true because conditional expectations retain expectations. □
11.3 Projection onto Sums

Let X₁, ..., X_n be independent random vectors, and consider the set S of all random
variables of the form

    Σ_{i=1}^n g_i(X_i),

for arbitrary measurable functions g_i with Eg_i²(X_i) < ∞. This class is of interest, because
the convergence in distribution of such sums can be derived from the central limit theorem.
The projection of a variable onto this class is known as its Hájek projection.
11.10 Lemma. Let X₁, ..., X_n be independent random vectors. Then the projection of
an arbitrary random variable T with finite second moment onto the class S is given by

    Ŝ = Σ_{i=1}^n E(T | X_i) − (n − 1) ET.
Proof. The random variable on the right side is certainly an element of S. Therefore, the
assertion can be verified by checking the orthogonality relation. Because the variables X_i
are independent, the conditional expectation E(E(T | X_i) | X_j) is equal to the expectation
EE(T | X_i) = ET for every i ≠ j. Consequently, E(Ŝ | X_j) = E(T | X_j) for every j,
whence

    E(T − Ŝ) g_j(X_j) = E[E(T − Ŝ | X_j) g_j(X_j)] = 0

for every j and every measurable function g_j. ∎
Consider the special case that X₁, ..., X_n are not only independent but also identically
distributed, and that T = T(X₁, ..., X_n) is a permutation-symmetric, measurable function
of the X_i. Then the conditional expectations E(T | X_i = x) do not depend on i.
Consequently, the projection Ŝ is also the projection of T onto the
smaller set of variables of the form Σ_{i=1}^n g(X_i), where g is an arbitrary measurable function.
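Lemma 11.10 can be checked numerically in a classical special case (a sketch under my own choices; compare Problem 2 of Chapter 12): for mean-zero i.i.d. observations, the Hájek projection of the sample variance T = (n − 1)⁻¹ Σ(X_i − X̄)² is, up to the common mean, Ŝ = n⁻¹ Σ X_i². The simulation below checks the orthogonality of T − Ŝ with a function of a single observation, and that var T/var Ŝ is close to 1, as Theorem 11.2 requires.

```python
import random

random.seed(2)

def one_draw(n):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]   # mean-zero observations
    xbar = sum(xs) / n
    t = sum((x - xbar) ** 2 for x in xs) / (n - 1)    # T = sample variance
    s = sum(x * x for x in xs) / n                    # Hajek projection S-hat
    return t, s, xs[0] ** 2                           # g(X_1) = X_1^2

n, reps = 10, 50_000
draws = [one_draw(n) for _ in range(reps)]
ts = [d[0] for d in draws]
ss = [d[1] for d in draws]
gs = [d[2] for d in draws]

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# Orthogonality of T - S-hat with a function of a single observation:
assert abs(cov([t - s for t, s in zip(ts, ss)], gs)) < 0.02
# var T / var S-hat is already close to 1 for moderate n (here n = 10):
ratio = cov(ts, ts) / cov(ss, ss)
assert 0.9 < ratio < 1.3
```

For normal data the exact ratio is (2/(n − 1))/(2/n) = n/(n − 1) ≈ 1.11 at n = 10, consistent with the Monte Carlo check.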
11.4 Hoeffding Decomposition

Because a projection onto a sum of orthogonal spaces is the sum of the projections
onto the individual spaces, it is convenient to decompose the proposed projection space
into a sum of orthogonal spaces. Given independent variables X₁, ..., X_n and a subset
A ⊂ {1, ..., n}, let H_A denote the set of all square-integrable random variables of the
type

    g_A(X_i : i ∈ A),    with E(g_A(X_i : i ∈ A) | X_j : j ∈ B) = 0 for every strict subset B of A,

for g_A ranging over the measurable functions of |A| arguments. (Define E(T | ∅) = ET.) By the independence of X₁, ..., X_n the condition in the last
display is automatically valid for any B ⊂ {1, 2, ..., n} that does not contain A. Consequently,
the spaces H_A, when A ranges over all subsets of {1, ..., n}, are pairwise
orthogonal. Stated in its present form, the condition reflects the intention to build approximations
of increasing complexity by projecting a given variable in turn onto the
spaces H_A. The first few projections take the forms

    P_∅ T = ET,
    P_{i} T = E(T | X_i) − ET,
    P_{i,j} T = E(T | X_i, X_j) − E(T | X_i) − E(T | X_j) + ET.
Now the general formula given by the following lemma should not be surprising.
11.11 Lemma. Let X₁, ..., X_n be independent random variables, and let T be an arbitrary
random variable with ET² < ∞. Then the projection of T onto H_A is given by

    P_A T = Σ_{B⊂A} (−1)^{|A|−|B|} E(T | X_i : i ∈ B).

Furthermore, if T is orthogonal to H_B for every subset B of a given set A, then
E(T | X_i : i ∈ A) = 0. Consequently, every square-integrable random variable of the
form T(X_i : i ∈ A) can be decomposed as T = Σ_{B⊂A} P_B T.
Proof. By the independence of the X_i, E(E(T | A) | C) = E(T | A ∩ C) for any subsets A
and C of {1, ..., n}, where E(T | B) is shorthand for E(T | X_i : i ∈ B). Thus, for P_A T as defined in the lemma and a set C strictly contained
in A,

    E(P_A T | C) = Σ_{B⊂A} (−1)^{|A|−|B|} E(T | B ∩ C).

Grouping the sets B according to their intersection D = B ∩ C, we can write the right side
as a double sum Σ_{D⊂C} E(T | D) Σ (−1)^{|A|−|B|}, the inner sum ranging over the B with B ∩ C = D.
By the binomial formula, the inner sum is zero for every D. Thus the left side is zero. In
view of the form of P_A T, it was not a loss of generality to assume that C ⊂ A. Hence P_A T
is contained in H_A.
Next we verify the orthogonality relationship. For any measurable function g_A of (X_i : i ∈ A),

    E(T − P_A T) g_A = ET g_A − Σ_{B⊂A} (−1)^{|A|−|B|} E[E(T | B) E(g_A | B)].

This is zero for any g_A ∈ H_A, because then E(g_A | B) = 0 for every strict subset B of A,
while the remaining term EE(T | A) g_A equals ET g_A. This concludes the proof that P_A T
is as given.
We prove the second assertion of the lemma by induction on r = |A|. If T ⊥ H_∅,
then E(T | ∅) = ET = 0. Thus the assertion is true for r = 0. Suppose that it is true
for 0, ..., r − 1, and consider a set A of r elements. If T ⊥ H_B for every B ⊂ A, then
certainly T ⊥ H_C for every C ⊂ B. Consequently, the induction hypothesis shows that
E(T | B) = 0 for every B ⊂ A of r − 1 or fewer elements. The formula for P_A T now shows
that P_A T = E(T | A). By assumption the left side is zero. This concludes the induction
argument.
The final assertion of the lemma follows if the variable T_A := T − Σ_{B⊂A} P_B T is zero
for every T that depends on (X_i : i ∈ A) only. But in this case T_A depends on (X_i : i ∈ A)
only and hence equals E(T_A | A), which is zero, because T_A ⊥ H_B for every B ⊂ A. ∎
If T = T(X₁, ..., X_n) is permutation-symmetric and X₁, ..., X_n are independent and
identically distributed, then the Hoeffding decomposition of T can be simplified to

    T = Σ_{r=0}^n Σ_{|A|=r} g_r(X_i : i ∈ A),

for

    g_r(x₁, ..., x_r) = Σ_{B⊂{1,...,r}} (−1)^{r−|B|} ET(x_i, i ∈ B; X_i, i ∉ B),

the expectation being taken with the arguments x_i for i ∈ B fixed and the remaining
arguments random. The inner sum in the representation of T is for each r a U-statistic of order r (as discussed
in Chapter 12), with degenerate kernel. All terms in the sum are orthogonal, whence
the variance of T can be found as var T = Σ_{r=1}^n (n choose r) Eg_r²(X₁, ..., X_r).
Notes

Orthogonal projections in Hilbert spaces (complete inner product spaces) are a classical subject
in functional analysis. We have limited our discussion to the Hilbert space L₂(Ω, U, P)
of all square-integrable random variables on a probability space.
PROBLEMS
1. Show that "projecting decreases second moment": If Ŝ is the projection of T onto a linear space S,
then EŜ² ≤ ET². If S contains the constants, then also var Ŝ ≤ var T.
2. Another idea of projection is based on minimizing variance instead of second moment. Show
that var(T − Ŝ) is minimized over a linear space S by Ŝ if and only if cov(T − Ŝ, S) = 0 for
every S ∈ S.
3. Show that if X ≥ Y almost surely, then E(X | Z) ≥ E(Y | Z).
4. For an arbitrary random variable X ≥ 0 (not necessarily square-integrable), define a conditional
expectation E(X | Y) by lim_{M→∞} E(X ∧ M | Y).
(i) Show that this is well defined (the limit exists almost surely).
(ii) Show that this coincides with the earlier definition if EX² < ∞.
(iii) If EX < ∞, show that E(X − E(X | Y))g(Y) = 0 for every bounded, measurable function g.
(iv) Show that E(X | Y) is the almost surely unique measurable function of Y that satisfies the
orthogonality relationship of (iii).
How would you define E(X | Y) for a random variable with E|X| < ∞?
5. Show that a projection Ŝ of a variable T onto a convex set S is almost surely unique.
6. Find the conditional expectation E(X | Y) if (X, Y) possesses a bivariate normal distribution.
7. Find the conditional expectation E(X₁ | X_(n)) if X₁, ..., X_n are a random sample of standard
uniform variables.
8. Find the conditional expectation E(X₁ | X̄_n) if X₁, ..., X_n are i.i.d.
9. Show that for any random variables S and T: (i) sd(S + T) ≤ sd S + sd T, and (ii) |sd S − sd T| ≤
sd(S − T).
10. If S_n and T_n are arbitrary sequences of random variables such that var(S_n − T_n)/var T_n → 0,
then

    (S_n − ES_n)/sd S_n − (T_n − ET_n)/sd T_n → 0    in probability.

Moreover, var S_n/var T_n → 1. Show this.
11. Show that P_A h(X_j : j ∈ B) = 0 for every set B that does not contain A.
12
U-Statistics
12.1 One-Sample U-Statistics

Let X₁, ..., X_n be a random sample from an unknown distribution, and let h be a known,
measurable function of r ≤ n arguments. Consider estimation of the parameter

    θ = Eh(X₁, ..., X_r).

In order to simplify the formulas, it is assumed throughout this section that the function
h is permutation-symmetric in its r arguments. (A given h could always be replaced by
a symmetric one.) The statistic h(X₁, ..., X_r) is an unbiased estimator for θ, but it is
unnatural, as it uses only the first r observations. A U-statistic with kernel h remedies this;
it is defined as

    U = (n choose r)⁻¹ Σ_β h(X_{β₁}, ..., X_{β_r}),

where the sum is taken over the set of all unordered subsets β of r different integers chosen
from {1, ..., n}. Because the observations are i.i.d., U is an unbiased estimator for θ
also. Moreover, U is permutation-symmetric in X₁, ..., X_n and has smaller variance than
h(X₁, ..., X_r). In fact, if X_(1), ..., X_(n) denote the values X₁, ..., X_n stripped of their
order (the order statistics in the case of real-valued variables), then

    U = E(h(X₁, ..., X_r) | X_(1), ..., X_(n)).
In this section it is shown that the sequence √n(U − θ) is asymptotically normal under
the condition that Eh²(X₁, ..., X_r) < ∞. For example, the kernel h(x₁, x₂) = ½(x₁ − x₂)² of
order 2 leads to the U-statistic

    U = (n choose 2)⁻¹ Σ_{i<j} ½(X_i − X_j)² = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄)²,

the (unbiased) sample variance.
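The definition can be implemented directly by averaging the kernel over all r-subsets; the sketch below (helper name mine) confirms the identity of the U-statistic with kernel h(x, y) = ½(x − y)² and the unbiased sample variance.

```python
from itertools import combinations
from math import comb
import random

def u_statistic(h, xs, r):
    """Average of the kernel h over all unordered r-subsets of the sample."""
    return sum(h(*sub) for sub in combinations(xs, r)) / comb(len(xs), r)

random.seed(3)
xs = [random.gauss(0.0, 1.0) for _ in range(25)]

u = u_statistic(lambda x, y: 0.5 * (x - y) ** 2, xs, 2)

xbar = sum(xs) / len(xs)
s2 = sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)   # sample variance

assert abs(u - s2) < 1e-9
```

The brute-force sum over all (n choose r) subsets is only practical for small n; it serves here as an executable restatement of the definition, not as an efficient algorithm.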
12.3 Theorem. If Eh²(X₁, ..., X_r) < ∞, then √n(U − θ − Û) → 0 in probability, where Û
is the Hájek projection of U − θ,

    Û = (r/n) Σ_{i=1}^n h₁(X_i),    h₁(x) = Eh(x, X₂, ..., X_r) − θ.

Consequently, the sequence √n(U − θ) is asymptotically normal with mean 0 and variance r²ζ₁, where,
with X₁, ..., X_r, X₂′, ..., X_r′ denoting i.i.d. variables,

    ζ₁ = cov(h(X₁, X₂, ..., X_r), h(X₁, X₂′, ..., X_r′)) = var h₁(X₁).

The form of the projection Û follows from Lemma 11.10 together with the identity

    E(h(X_{β₁}, ..., X_{β_r}) − θ | X_i = x) = h₁(x) if i ∈ β,  and = 0 if i ∉ β.

To calculate E(U − θ | X_i), we take the average over all β. Then the first case occurs for
(n − 1 choose r − 1) of the vectors β in the definition of U. The factor r/n in the formula for the projection
Û arises as r/n = (n − 1 choose r − 1)/(n choose r).
12.4 Example (Signed rank statistic). The parameter θ = P(X₁ + X₂ > 0) corresponds
to the kernel h(x₁, x₂) = 1{x₁ + x₂ > 0}. The corresponding U-statistic is

    U = (n choose 2)⁻¹ Σ_{i<j} 1{X_i + X_j > 0}.

This statistic is the fraction of pairs (X_i, X_j) with positive sum X_i + X_j > 0, and
can be used as a test statistic for investigating whether the distribution of the observations
is located at zero. If many pairs (X_i, X_j) yield a positive sum (relative to the total number
of pairs), then we have an indication that the distribution is centered to the right of zero.
The sequence √n(U − θ) is asymptotically normal with mean zero and variance 4ζ₁. If
F denotes the cumulative distribution function of the observations, then the projection of
U − θ can be written

    Û = −(2/n) Σ_{i=1}^n (F(−X_i) − EF(−X_i)).

This formula is useful in subsequent discussion and is also convenient to express the asymptotic
variance in terms of F.
The statistic is particularly useful for testing the null hypothesis that the underlying
distribution function is continuous and symmetric about zero: F(x) = 1 − F(−x) for
every x. Under this hypothesis the parameter θ equals 1/2, and the asymptotic variance
reduces to 4 var F(X₁) = 1/3, because F(X₁) is uniformly distributed. Thus, under the null
hypothesis of continuity and symmetry, the limit distribution of the sequence √n(U − 1/2) is
normal N(0, 1/3), independent of the underlying distribution. The last property means that
the sequence U_n is asymptotically distribution-free under the null hypothesis of symmetry
and makes it easy to set critical values. The test that rejects H₀ if √(3n)(U − 1/2) ≥ z_α is
asymptotically of level α for every F in the null hypothesis.
This test is asymptotically equivalent to the signed rank test of Wilcoxon. Let R₁⁺, ...,
R_n⁺ denote the ranks of the absolute values |X₁|, ..., |X_n| of the observations: R_i⁺ = k
means that |X_i| is the kth smallest in the sample of absolute values. More precisely, R_i⁺ =
Σ_{j=1}^n 1{|X_j| ≤ |X_i|}. Suppose that there are no pairs of tied observations X_i = X_j. Then
the signed rank statistic is defined as W⁺ = Σ_{i=1}^n R_i⁺ 1{X_i > 0}. Some algebra shows that

    W⁺ = (n choose 2) U + Σ_{i=1}^n 1{X_i > 0}.

The second term on the right is of much lower order than the first, and hence it follows that
n^(−3/2)(W⁺ − EW⁺) ⇝ N(0, 1/12). □
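The relation between W⁺, the U-statistic, and the positive-sign statistic (see also Problem 9) is easy to verify numerically; the following sketch, with illustrative data, computes W⁺ from the ranks of the absolute values and compares.

```python
from itertools import combinations
from math import comb
import random

random.seed(4)
n = 15
xs = [random.gauss(0.3, 1.0) for _ in range(n)]   # continuous, so no ties a.s.

# Ranks of absolute values: R_i+ = #{j : |X_j| <= |X_i|}.
ranks = [sum(1 for y in xs if abs(y) <= abs(x)) for x in xs]
w_plus = sum(r for r, x in zip(ranks, xs) if x > 0)

# U-statistic with kernel 1{x1 + x2 > 0}, and the positive-sign statistic S.
u = sum(1 for a, b in combinations(xs, 2) if a + b > 0) / comb(n, 2)
s = sum(1 for x in xs if x > 0)

assert abs(w_plus - (comb(n, 2) * u + s)) < 1e-9
```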
12.5 Example (Kendall's τ). The U-statistic theorem requires that the observations X₁,
..., X_n are independent, but they need not be real-valued. In this example the observations
are a sample of bivariate vectors, for convenience (somewhat abusing notation) written as
(X₁, Y₁), ..., (X_n, Y_n). Kendall's τ-statistic is

    τ = (4/(n(n − 1))) Σ_{i<j} 1{(Y_j − Y_i)(X_j − X_i) > 0} − 1.

This statistic is a measure of dependence between X and Y and counts the number of
concordant pairs (X_i, Y_i) and (X_j, Y_j) in the observations. Two pairs are concordant if the
indicator in the definition of τ is equal to 1. Large values indicate positive dependence (or
concordance), whereas small values indicate negative dependence. Under independence of
X and Y and continuity of their distributions, the distribution of τ is centered about zero,
and in the extreme cases that all or none of the pairs are concordant, τ is identically 1 or
−1, respectively.
The statistic τ + 1 is a U-statistic of order 2 for the kernel

    h((x₁, y₁), (x₂, y₂)) = 2 · 1{(y₂ − y₁)(x₂ − x₁) > 0}.

Hence the sequence √n(τ + 1 − 2P((Y₂ − Y₁)(X₂ − X₁) > 0)) is asymptotically normal
with mean zero and variance 4ζ₁. With the notation F(x, y) = P(X < x, Y < y) and
F̄(x, y) = P(X > x, Y > y), the projection of U − θ takes the form

    Û = (4/n) Σ_{i=1}^n [F(X_i, Y_i) + F̄(X_i, Y_i) − E(F(X_i, Y_i) + F̄(X_i, Y_i))].

In particular, suppose that X and Y are independent with continuous
marginal distributions. Then √n τ ⇝ N(0, 4/9), which leads to the test for "independence":
Reject independence if √(9n/4) |τ| > z_{α/2}. □
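A minimal sketch of the τ-statistic and the resulting independence test (data, names, and the final sanity bound are mine; z_{0.025} ≈ 1.96 would be the actual critical value):

```python
from itertools import combinations
from math import sqrt
import random

def kendall_tau(pairs):
    """tau = 4/(n(n-1)) * (number of concordant pairs) - 1."""
    n = len(pairs)
    conc = sum(1 for (x1, y1), (x2, y2) in combinations(pairs, 2)
               if (y2 - y1) * (x2 - x1) > 0)
    return 4.0 * conc / (n * (n - 1)) - 1.0

# Perfectly concordant illustrative data: every pair concordant, tau = 1.
inc = [(i, 2 * i + 1) for i in range(20)]
assert abs(kendall_tau(inc) - 1.0) < 1e-12

# Independent data: reject independence when sqrt(9n/4)|tau| > z_{alpha/2}.
random.seed(5)
n = 500
indep = [(random.random(), random.random()) for _ in range(n)]
stat = sqrt(9 * n / 4) * abs(kendall_tau(indep))
# Under independence, stat is approximately |N(0, 1)|; crude sanity bound:
assert stat < 5.0
```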
12.2 Two-Sample U-Statistics

Suppose next that the observations consist of two independent random samples X₁, ..., X_m
and Y₁, ..., Y_n, i.i.d. within each sample, and let h(x₁, ..., x_r, y₁, ..., y_s) be a kernel that
is permutation-symmetric in its first r and in its last s arguments separately. A two-sample
U-statistic with kernel h is defined as

    U_{m,n} = (m choose r)⁻¹ (n choose s)⁻¹ Σ_α Σ_β h(X_{α₁}, ..., X_{α_r}, Y_{β₁}, ..., Y_{β_s}),

where α and β range over the collections of all subsets of r different elements from
{1, 2, ..., m} and of s different elements from {1, 2, ..., n}, respectively. Clearly, U is
an unbiased estimator of the parameter

    θ = Eh(X₁, ..., X_r, Y₁, ..., Y_s).

The sequence U_{m,n} can be shown to be asymptotically normal by the same arguments as
for one-sample U-statistics. Here we let both m → ∞ and n → ∞, in such a way that the
numbers of X_i and Y_j are of the same order. Specifically, if N = m + n is the total number
of observations, we assume that, as m, n → ∞,

    m/N → λ,    n/N → 1 − λ,    0 < λ < 1.

To give an exact meaning to m, n → ∞, we may think of m = m_ν and n = n_ν indexed by
a third index ν ∈ ℕ. Next, we let m_ν → ∞ and n_ν → ∞ as ν → ∞ in such a way that
m_ν/N_ν → λ.
The projection of U − θ onto the set of all functions of the form Σ_{i=1}^m k_i(X_i) + Σ_{j=1}^n l_j(Y_j)
is given by

    Û = (r/m) Σ_{i=1}^m h_{1,0}(X_i) + (s/n) Σ_{j=1}^n h_{0,1}(Y_j),

for h_{1,0}(x) = Eh(x, X₂, ..., X_r, Y₁, ..., Y_s) − θ and h_{0,1}(y) = Eh(X₁, ..., X_r, y, Y₂, ..., Y_s) − θ.
This follows, as before, by first applying the Hájek projection lemma, and next expressing
E(U | X_i) and E(U | Y_j) in the kernel function.
If the kernel is square-integrable, then the sequence Û is asymptotically normal by the
central limit theorem. The difference between Û and U − θ is asymptotically negligible.

12.6 Theorem. If Eh²(X₁, ..., X_r, Y₁, ..., Y_s) < ∞, then the sequence √N(U − θ − Û)
converges in probability to zero. Consequently, the sequence √N(U − θ) converges in
distribution to the normal law with mean zero and variance r²ζ_{1,0}/λ + s²ζ_{0,1}/(1 − λ),
where, with the X_i being i.i.d. variables independent of the i.i.d. variables Y_j,

    ζ_{1,0} = cov(h(X₁, X₂, ..., X_r, Y₁, ..., Y_s), h(X₁, X₂′, ..., X_r′, Y₁′, ..., Y_s′)),
    ζ_{0,1} = cov(h(X₁, ..., X_r, Y₁, Y₂, ..., Y_s), h(X₁′, ..., X_r′, Y₁, Y₂′, ..., Y_s′)).

It can be checked from this that both the sequence N var Û and the sequence N var U
converge to the number r²ζ_{1,0}/λ + s²ζ_{0,1}/(1 − λ). ∎
12.7 Example (Mann-Whitney statistic). The kernel for the parameter θ = P(X ≤ Y) is
h(x, y) = 1{x ≤ y}, which is of order 1 in both x and y. The corresponding U-statistic is

    U = (1/(mn)) Σ_{i=1}^m Σ_{j=1}^n 1{X_i ≤ Y_j}.

The statistic mnU is known as the Mann-Whitney statistic and is used to test for a difference
in location between the two samples. A large value indicates that the Y_j are "stochastically
larger" than the X_i.
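With continuous data (no ties), the Mann-Whitney count mnU equals the Wilcoxon rank-sum of the second sample minus n(n + 1)/2, a standard identity that the following sketch (names and data mine) verifies:

```python
import random

random.seed(6)
m, n = 12, 15
xs = [random.gauss(0.0, 1.0) for _ in range(m)]   # first sample (illustrative)
ys = [random.gauss(0.5, 1.0) for _ in range(n)]   # second sample

# Mann-Whitney count: mnU = #{(i, j) : X_i <= Y_j}.
mn_u = sum(1 for x in xs for y in ys if x <= y)

# Wilcoxon rank-sum: ranks of the Y_j in the pooled sample (no ties a.s.).
pooled = xs + ys
rank_sum = sum(sum(1 for z in pooled if z <= y) for y in ys)

assert mn_u == rank_sum - n * (n + 1) // 2
```

The identity holds exactly: the pooled rank of each Y_j counts the X_i below it plus its rank among the Y's, and the latter sum to n(n + 1)/2.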
If the X_i and Y_j have cumulative distribution functions F and G, respectively, then the
projection of U − θ can be written

    Û = −(1/m) Σ_{i=1}^m (G₋(X_i) − EG₋(X_i)) + (1/n) Σ_{j=1}^n (F(Y_j) − EF(Y_j)),

where G₋(x) = P(Y < x). It is easy to obtain the limit distribution of the projections Û (and hence of U) from this
formula. In particular, under the null hypothesis that the pooled sample X₁, ..., X_m, Y₁, ..., Y_n
is i.i.d. with continuous distribution function F = G, the sequence √(12mn/N)(U − 1/2)
converges in distribution to the standard normal distribution.
12.3 Degenerate U-Statistics
The Hoeffding decomposition of Section 11.4, applied to U − θ, yields the representation

    U − θ = Σ_{c=1}^r (r choose c) U_{n,c}.

Here, for each 0 ≤ c ≤ r, the variable U_{n,c} is a U-statistic of order c with kernel

    h_c(x₁, ..., x_c) = Σ_{B⊂{1,...,c}} (−1)^{c−|B|} Eh(x_i, i ∈ B; X_i, i ∉ B),

the expectation being taken with the arguments x_i for i ∈ B fixed and the remaining
r − |B| arguments random.
To see this, fix a set A with c elements. Because the space H_A is orthogonal to all functions
g(X_j : j ∈ B) (i.e., to the space Σ_{C⊂B} H_C) for every set B that does not contain A, the
projection P_A h(X_{β₁}, ..., X_{β_r}) is zero unless A ⊂ β = {β₁, ..., β_r}. For the remaining
β the projection P_A h(X_{β₁}, ..., X_{β_r}) does not depend on β (i.e., on the r − c elements of
β − A) and is a fixed function h_c of (X_j : j ∈ A). This follows by symmetry, or explicitly
from the formula for the projections in Section 11.4. The function h_c is indeed the function
as given previously. There are (n − c choose r − c) vectors β that contain the set A. The claim that U_{n,c} is
a U-statistic with kernel h_c now follows by simple algebra, using the fact that
(n − c choose r − c)/(n choose r) = (r choose c)/(n choose c).
By the defining properties of the space H_{{1,...,c}}, it follows that the kernel h_c is degenerate
for c ≥ 2. In fact, it is strongly degenerate in the sense that the conditional expectation of
h_c(X₁, ..., X_c) given any strict subset of the variables X₁, ..., X_c is zero. In other words,
the integral ∫ h_c(x, x₂, ..., x_c) dP(x) with respect to any single argument vanishes. By
the same reasoning, U_{n,c} is uncorrelated with every measurable function that depends on
strictly fewer than c elements of X₁, ..., X_n.
We shall show that the sequence n^{c/2} U_{n,c} converges in distribution to a limit with variance
c! Eh_c²(X₁, ..., X_c) for every c ≥ 1. Then it follows that the sequence n^{c/2}(U_n − θ)
converges in distribution for c equal to the smallest value such that h_c ≢ 0. For c ≥ 2 the
limit distribution is not normal but is known as Gaussian chaos.
Because the idea is simple, but the statement of the theorem (apparently) necessarily
complicated, first consider a special case: c = 3 and a "product kernel" of the form

    h₃(x₁, x₂, x₃) = f₁(x₁) f₂(x₂) f₃(x₃),

for functions f₁, f₂, f₃ with Ef_i(X₁) = 0.
The limit is a polynomial of order 3 in the Gaussian vector (Gf₁, Gf₂, Gf₃).
There is no similarly simple formula for the limit of a general sequence of degenerate U-
statistics. However, any kernel can be written as an infinite linear combination of product
kernels. Because a U -statistic is linear in its kernel, the limit of a general sequence of
degenerate U -statistics is a linear combination of limits of the previous type.
To carry through this program, it is convenient to employ a decomposition of a given
kernel in terms of an orthonormal basis of product kernels. This is always possible. We
assume that L₂(X, A, P) is separable, so that it has a countable basis.
12.8 Example (General kernel). If 1 = f₀, f₁, f₂, ... is an orthonormal basis of L₂(X,
A, P), then the functions f_{k₁} × ··· × f_{k_c}, with (k₁, ..., k_c) ranging over the nonnegative
integers, form an orthonormal basis of L₂(X^c, A^c, P^c). Any square-integrable kernel can
be written in the form h_c(x₁, ..., x_c) = Σ a(k₁, ..., k_c) f_{k₁} × ··· × f_{k_c}, for a(k₁, ..., k_c) =
⟨h_c, f_{k₁} × ··· × f_{k_c}⟩ the inner products of h_c with the basis functions. □
12.9 Example (Second-order kernel). In the case that c = 2, there is a choice that is specially
adapted to our purposes. Because the kernel h₂ is symmetric and square-integrable
by assumption, the integral operator K: L₂(X, A, P) → L₂(X, A, P) defined by Kf(x) =
∫ h₂(x, y) f(y) dP(y) is self-adjoint and Hilbert-Schmidt. Therefore, it has at most countably
many eigenvalues λ₀, λ₁, ..., satisfying Σ λ_k² < ∞, and there exists an orthonormal
basis of eigenfunctions f₀, f₁, .... (See, for instance, Theorem VI.16 in [124].) The kernel
h₂ can be expressed relative to this basis as

    h₂(x, y) = Σ_{k=0}^∞ λ_k f_k(x) f_k(y).

For a degenerate kernel h₂ the function 1 is an eigenfunction for the eigenvalue 0, and we
can take f₀ = 1 without loss of generality.
The gain over the decomposition in the general case is that only product functions of the
type f × f are needed. □
12.10 Theorem. Let f₀ = 1, f₁, f₂, ... be an orthonormal basis of L₂(X, A, P), and let h_c
be a degenerate kernel with Eh_c²(X₁, ..., X_c) < ∞. Then

    n^{c/2} U_{n,c} ⇝ Σ_{k=(k₁,...,k_c)∈ℕ^c} ⟨h_c, f_{k₁} × ··· × f_{k_c}⟩ Π_{i=1}^{d(k)} H_{a_i(k)}(Gψ_i(k)).

Here G is a P-Brownian bridge process, the functions ψ₁(k), ..., ψ_{d(k)}(k) are the different
elements in f_{k₁}, ..., f_{k_c}, and a_i(k) is the number of times ψ_i(k) occurs among f_{k₁}, ..., f_{k_c}.
The variance of the limit variable is equal to c! Eh_c²(X₁, ..., X_c).
Proof. The function h_c can be represented in L₂(X^c, A^c, P^c) as the series Σ_k ⟨h_c, f_{k₁} ×
··· × f_{k_c}⟩ f_{k₁} × ··· × f_{k_c}. By the degeneracy of h_c the sum can be restricted to k = (k₁, ..., k_c)
with every k_i ≥ 1. If U_{n,c}h denotes the U-statistic with kernel h(x₁, ..., x_c), then, for a
pair of degenerate kernels h and g,

    cov(U_{n,c}h, U_{n,c}g) = (n choose c)⁻¹ ⟨h, g⟩.

This means that the map h ↦ (n^{c/2}/√(c!)) U_{n,c}h is close to being an isometry between L₂(P^c)
and L₂(P^n). Consequently, the series Σ_k ⟨h_c, f_{k₁} × ··· × f_{k_c}⟩ U_{n,c} f_{k₁} × ··· × f_{k_c} converges
in L₂(P^n) and equals U_{n,c}h_c = U_{n,c}. Furthermore, if it can be shown that the
finite-dimensional distributions of the sequence of processes {n^{c/2} U_{n,c} f_{k₁} × ··· × f_{k_c} : k ∈
ℕ^c} converge weakly to the corresponding finite-dimensional distributions of the process
{Π_{i=1}^{d(k)} H_{a_i(k)}(Gψ_i(k)) : k ∈ ℕ^c}, then the partial sums of the series converge, and the proof
can be concluded by approximation arguments.
There exists a polynomial P_{n,c} of degree c, with random coefficients, such that

    n^{c/2} U_{n,c} f_{k₁} × ··· × f_{k_c} = P_{n,c}(G_n f_{k₁}, ..., G_n f_{k_c}),

for G_n the empirical process of X₁, ..., X_n. (See the example for c = 3 and Problem 12.13.) The only term of degree c in this polynomial
is equal to G_n f_{k₁} G_n f_{k₂} ··· G_n f_{k_c}. The coefficients of the polynomials P_{n,c} converge in
probability to constants. Conclude that the sequence n^{c/2} U_{n,c} f_{k₁} × ··· × f_{k_c} converges
in distribution to P_c(G f_{k₁}, ..., G f_{k_c}) for a polynomial P_c of degree c with leading term,
and only term of degree c, equal to G f_{k₁} G f_{k₂} ··· G f_{k_c}. This convergence is simultaneous
in sets of finitely many k.
It suffices to establish the representation of this limit in terms of Hermite polynomials.
This could be achieved directly by algebraic and combinatorial arguments, but then the
occurrence of the Hermite polynomials would remain somewhat mysterious. Alternatively,
the representation can be derived from the definition of the Hermite polynomials and covariance
calculations. By the degeneracy of the kernel f_{k₁} × ··· × f_{k_c}, the U-statistic
U_{n,c} f_{k₁} × ··· × f_{k_c} is orthogonal to all measurable functions of c − 1 or fewer elements
of X₁, ..., X_n, and their linear combinations. This includes the functions Π_i (G_n g_i)^{a_i} for
arbitrary functions g_i and nonnegative integers a_i with Σ a_i < c. Taking limits, we conclude
that P_c(Gf_{k₁}, ..., Gf_{k_c}) must be orthogonal to every polynomial in Gf_{k₁}, ..., Gf_{k_c}
of degree at most c − 1. By the orthonormality of the basis f_i, the variables Gf_i are
independent standard normal variables. Because the Hermite polynomials form a basis for
the polynomials in one variable, their (tensor) products form a basis for the polynomials
of more than one argument. The polynomial P_c can be written as a linear combination of
elements from this basis. By the orthogonality, the coefficients of base elements of degree
< c vanish. From the base elements of degree c only the product as in the theorem can
occur, as follows from consideration of the leading term of P_c. ∎
12.11 Example. For c = 2 and a basis 1 = f₀, f₁, ... of eigenfunctions of the kernel
h₂, we obtain a limit of the form Σ_{k=1}^∞ ⟨h₂, f_k × f_k⟩ H₂(Gf_k). By the orthonormality of the
basis this variable is distributed as Σ_k λ_k (Z_k² − 1), for Z₁, Z₂, ... a sequence of independent
standard normal variables. □
12.12 Example (Sample variance). The kernel h(x₁, x₂) = ½(x₁ − x₂)² yields the sample
variance S_n². Because this has asymptotic variance μ₄ − μ₂² (see Example 3.2), the kernel
is degenerate if and only if μ₄ = μ₂². This can happen only if (X₁ − α₁)² is constant, for
α₁ = EX₁. If we center the observations, so that α₁ = 0, then this means that X₁ only
takes the values −a and a = √μ₂, each with probability 1/2. This is a very degenerate
situation, and it is easy to find the limit distribution directly, but perhaps it is instructive to
apply the general theorem. The kernels h_c take the forms (see Section 11.4)

    h₁(x) = ½(x² − μ₂).

The kernel is degenerate if h₁ = 0 almost surely, and then the second-order kernel is
h₂(x₁, x₂) = ½(x₁ − x₂)² − μ₂. Because the underlying distribution has only two support
points, the eigenfunctions f of the corresponding integral operator can be identified
with vectors (f(−a), f(a)) in ℝ². Some linear algebra shows that they are (1, 1) and
(−1, 1), corresponding to the eigenvalues 0 and −a², respectively. Correspondingly, under
degeneracy the kernel allows the decomposition

    h₂(x₁, x₂) = −a² f₁(x₁) f₁(x₂),    f₁(x) = x/a.

We can conclude that the sequence n(S_n² − μ₂) converges in distribution to −a²(Z₁² − 1). □
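The two-point conclusion lends itself to simulation (a sketch with my own parameter choices): for X uniform on {−a, a}, the limit −a²(Z² − 1) has mean 0 and variance 2a⁴, which the Monte Carlo moments of n(S_n² − μ₂) should approximately reproduce.

```python
import random

random.seed(7)
a, n, reps = 1.0, 200, 10_000

def draw():
    xs = [a if random.random() < 0.5 else -a for _ in range(n)]
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    return n * (s2 - a * a)          # n (S_n^2 - mu_2), since mu_2 = a^2

vals = [draw() for _ in range(reps)]
mean_val = sum(vals) / reps
var_val = sum((u - mean_val) ** 2 for u in vals) / reps

# The limit -a^2 (Z^2 - 1) has mean 0 and variance 2 a^4.
assert abs(mean_val) < 0.1
assert abs(var_val - 2 * a ** 4) < 0.3
```

Note the non-normal limit appears at the scale n, not √n, as expected for a degenerate kernel of order 2.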
12.13 Example (Cramér-von Mises). Let 𝔽_n(x) = n⁻¹ Σ_{i=1}^n 1{X_i ≤ x} be the empirical
distribution function of a random sample X₁, ..., X_n of real-valued random variables. The
Cramér-von Mises statistic for testing the (null) hypothesis that the underlying cumulative
distribution function equals a given, continuous distribution function F is

    n ∫ (𝔽_n − F)² dF = (1/n) Σ_{i=1}^n Σ_{j=1}^n ∫ (1{X_i ≤ x} − F(x)) (1{X_j ≤ x} − F(x)) dF(x).

The double sum restricted to the off-diagonal terms is a U-statistic with, under H₀, a
degenerate kernel. Thus, this statistic converges to a nondegenerate limit distribution. The
diagonal terms contribute the constant ∫ F(1 − F) dF to the limit distribution, by the law
of large numbers. If F is uniform, then the kernel of the U-statistic is

    h(x, y) = ∫₀¹ (1{x ≤ u} − u)(1{y ≤ u} − u) du = (x² + y²)/2 − max(x, y) + 1/3.
Notes

The main part of this chapter has its roots in the paper by Hoeffding [76]. Because the asymptotic
variance is smaller than the true variance of a U-statistic, Hoeffding recommends to
apply a standard normal approximation to (U − EU)/sd U. Degenerate U-statistics were
considered, among others, in [131], within the context of more general linear combinations
of symmetric kernels. Arcones and Giné [2] have studied the weak convergence of
"U-processes", stochastic processes indexed by classes of kernels, in spaces of bounded
functions, as discussed in Chapter 18.
PROBLEMS
1. Derive the asymptotic distribution of Gini's mean difference, which is defined as
(n choose 2)⁻¹ ΣΣ_{i<j} |X_i − X_j|.
2. Derive the projection of the sample variance.
3. Find a kernel for the parameter θ = E(X − EX)³.
4. Find a kernel for the parameter θ = cov(X, Y). Show that the corresponding U-statistic is the
sample covariance Σ_{i=1}^n (X_i − X̄)(Y_i − Ȳ)/(n − 1).
5. Find the limit distribution of U = (n choose 2)⁻¹ ΣΣ_{i<j} (Y_j − Y_i)(X_j − X_i).
6. Let U_{n1} and U_{n2} be U-statistics with kernels h₁ and h₂, respectively. Derive the joint asymptotic
distribution of (U_{n1}, U_{n2}).
7. Suppose EX₁² < ∞. Derive the asymptotic distribution of the sequence n⁻¹ ΣΣ_{i≠j} X_i X_j.
Can you give a two-line proof without using the U-statistic theorem? What happens if EX₁ = 0?
8. (Mann's test against trend.) To test the null hypothesis that a sample X₁, ..., X_n is i.i.d. against
the alternative hypothesis that the distributions of the X_i are stochastically increasing in i, Mann
suggested to reject the null hypothesis if the number of pairs (X_i, X_j) with i < j and X_i < X_j
is large. How can we choose the critical value for large n?
9. Show that the U-statistic U with kernel 1{x₁ + x₂ > 0}, the signed rank statistic W⁺, and the
positive-sign statistic S = Σ_{i=1}^n 1{X_i > 0} are related by W⁺ = (n choose 2)U + S in the case that there
are no tied observations.
10. A V-statistic of order 2 is of the form n⁻² Σ_{i=1}^n Σ_{j=1}^n h(X_i, X_j), where h(x, y) is symmetric
in x and y. Assume that Eh²(X₁, X₁) < ∞ and Eh²(X₁, X₂) < ∞. Obtain the asymptotic
distribution of a V-statistic from the corresponding result for a U-statistic.
11. Define a V-statistic of general order r and give conditions for its asymptotic normality.
12. Derive the asymptotic distribution of n(S_n² − μ₂) in the case that μ₄ = μ₂² by using the delta
method (see Example 12.12). Does it make a difference whether we divide by n or n − 1?
13. For any (n × c) matrix (a_{ij}), the sum Σ a_{i₁1} a_{i₂2} ··· a_{i_cc}, taken over all ordered subsets
(i₁, ..., i_c) of different integers from {1, ..., n}, can be expanded as a sum over all partitions B
of {1, ..., c} into nonempty sets (see [131]). Derive this expansion.
14. Given a sequence of i.i.d. random variables X₁, X₂, ..., let A_n be the σ-field generated by all
functions of (X₁, X₂, ...) that are symmetric in their first n arguments. Prove that a sequence
U_n of U-statistics with a fixed kernel h of order r is a reverse martingale (for n ≥ r) with respect
to the filtration A_r ⊃ A_{r+1} ⊃ ···.
15. (Strong law.) If E|h(X₁, ..., X_r)| < ∞, then the sequence U_n of U-statistics with kernel h
converges almost surely to Eh(X₁, ..., X_r). (For r > 1 the condition is not necessary, but a
simple necessary and sufficient condition appears to be unknown.) Prove this. (Use the preceding
problem, the martingale convergence theorem, and the Hewitt-Savage 0-1 law.)
13
Rank, Sign, and Permutation Statistics
Statistics that depend on the observations only through their ranks can be
used to test hypotheses on departures from the null hypothesis that the observations
are identically distributed. Such rank statistics are attractive,
because they are distribution-free under the null hypothesis and need not
be less efficient, asymptotically. In the case of a sample from a symmetric
distribution, statistics based on the ranks of the absolute values and the
signs of the observations have a similar property. Rank statistics are a
special example of permutation statistics.
13.1 Rank Statistics

The rank R_{Ni} of X_i in a sample (or pooled sample) X₁, ..., X_N is its position number in
the ordered sample X_{N(1)} ≤ ··· ≤ X_{N(N)}; if all observations are distinct, it is determined
by the requirement X_i = X_{N(R_{Ni})}. If X_i is tied with some other observations, this definition is invalid. Then the rank R_{Ni} is
defined as the average of all indices j such that X_i = X_{N(j)} (sometimes called the midrank),
or alternatively as Σ_{j=1}^N 1{X_j ≤ X_i} (which is something like an up rank).
In this section it is assumed that the random variables X₁, ..., X_N have continuous
distribution functions, so that ties in the observations occur with probability zero. We shall
neglect the latter null set. The ranks and order statistics are written with double subscripts,
because N varies and we shall consider order statistics of samples of different sizes. The
vectors of order statistics and ranks are abbreviated to X_{N(·)} and R_N, respectively.
A rank statistic is any function of the ranks. A linear rank statistic is a rank statistic of
the special form Σ_{i=1}^N a_N(i, R_{Ni}) for a given (N × N) matrix (a_N(i, j)). In this chapter
we are concerned with the subclass of simple linear rank statistics, which take the form

    Σ_{i=1}^N c_{Ni} a_{N,R_{Ni}}.

Here (c_{N1}, ..., c_{NN}) and (a_{N1}, ..., a_{NN}) are given vectors in ℝ^N and are called the coefficients
and scores, respectively. The class of simple linear rank statistics is sufficiently large
Proof. Statements (i) through (iv) are well known and elementary. For the proof of (v), it
is helpful to write $T(X_1, \ldots, X_N)$ as a function of the ranks and the order statistics. Next,
we apply (i). For the proof of statement (vi), we use that the distributions of the variables $R_{Ni}$
and the vectors $(R_{Ni}, R_{Nj})$ for $i \ne j$ are uniform on the sets $I = \{1, \ldots, N\}$ and $\{(i,j) \in I^2: i \ne j\}$, respectively. Furthermore, a double sum of the form $\sum_{i \ne j}(b_i - \bar b)(b_j - \bar b)$ is
equal to $-\sum_i (b_i - \bar b)^2$. ■
It follows that rank statistics are distribution-free over the set of all models in which the
observations are independent and identically distributed. On the one hand, this makes them
statistically useless in situations in which the observations are, indeed, a random sample
from some distribution. On the other hand, it makes them of great interest to detect certain
differences in distribution between the observations, such as in the two-sample problem. If
the null hypothesis is taken to assert that the observations are identically distributed, then
the critical values for a rank test can be chosen in such a way that the probability of an
error of the first kind is equal to a given level a, for any probability distribution in the null
hypothesis. Somewhat surprisingly, this gain is not necessarily counteracted by a loss in
asymptotic efficiency, as we see in Chapter 14.
13.2 Example (Two-sample location problem). Suppose that the total set of observations
consists of two independent random samples, inconsistently with the preceding notation
written as $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$. Set $N = m + n$ and let $R_N$ be the rank vector of
the pooled sample $X_1, \ldots, X_m, Y_1, \ldots, Y_n$.
We are interested in testing the null hypothesis that the two samples are identically dis-
tributed (according to a continuous distribution) against the alternative that the distribution
of the second sample is stochastically larger than the distribution of the first sample. Even
without a more precise description of the alternative hypothesis, we can discuss a collection of useful rank statistics. If the $Y_j$ are a sample from a stochastically larger distribution, then the ranks of the $Y_j$ in the pooled sample should be relatively large. Thus, any
measure of the size of the ranks $R_{N,m+1}, \ldots, R_{NN}$ can be used as a test statistic. It will be
distribution-free under the null hypothesis.
The most popular choice in this problem is the Wilcoxon statistic
$$W = \sum_{i=m+1}^N R_{Ni}.$$
This is a simple linear rank statistic with coefficients $c = (0, \ldots, 0, 1, \ldots, 1)$ and scores
$a = (1, \ldots, N)$. The null hypothesis is rejected for large values of the Wilcoxon statistic.
(The Wilcoxon statistic is equivalent to the Mann-Whitney statistic $U = \sum_{i,j} 1\{X_i \le Y_j\}$
in that $W = U + \tfrac12 n(n+1)$.)
There are many other reasonable choices of rank statistics, some of which are of special
interest and have names. For instance, the van der Waerden statistic is defined as
$$\sum_{i=m+1}^N \Phi^{-1}\Bigl(\frac{R_{Ni}}{N+1}\Bigr).$$
Here $\Phi^{-1}$ is the standard normal quantile function. We shall see ahead that this statistic is
particularly attractive if it is believed that the underlying distribution of the observations
is approximately normal. A general method to generate useful rank statistics is discussed
below. □
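As a numerical sanity check on the statistics of this example, the following sketch (sample sizes, data, and variable names are mine) computes the Wilcoxon, Mann-Whitney, and van der Waerden statistics for simulated two-sample data and verifies the identity $W = U + \tfrac12 n(n+1)$:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
m, n = 6, 8                       # arbitrary sample sizes
x = rng.normal(size=m)            # first sample
y = rng.normal(loc=0.5, size=n)   # second sample, shifted for illustration

pooled = np.concatenate([x, y])
N = m + n
ranks = pooled.argsort().argsort() + 1   # ranks R_N1, ..., R_NN

# Wilcoxon statistic: sum of the ranks of the second sample
W = ranks[m:].sum()

# Mann-Whitney statistic: number of pairs with X_i <= Y_j
U = sum(1 for xi in x for yj in y if xi <= yj)
assert W == U + n * (n + 1) // 2         # W = U + n(n+1)/2

# van der Waerden statistic: sum of Phi^{-1}(R_Ni/(N+1)) over the second sample
vdw = sum(NormalDist().inv_cdf(r / (N + 1)) for r in ranks[m:])
print(W, U, round(vdw, 3))
```

With continuous data, ties occur with probability zero, so the simple argsort-based ranking suffices; real data with ties would need midranks.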
A critical value for a test based on a (distribution-free) rank statistic can be found by
simply tabulating its null distribution. For a large number of observations this is a bit tedious.
In most cases it is also unnecessary, because there exist accurate asymptotic approximations.
The remainder of this section is concerned with proving asymptotic normality of simple
linear rank statistics under the null hypothesis. Apart from being useful for finding critical
values, the theorem is used subsequently to study the asymptotic efficiency of rank tests.
Consider a rank statistic of the form $T_N = \sum_{i=1}^N c_{Ni} a_{N,R_{Ni}}$. For a sequence of this type to
be asymptotically normal, some restrictions on the coefficients $c$ and scores $a$ are necessary.
In most cases of interest, the scores are "generated" through a given function $\phi: [0,1] \mapsto \mathbb{R}$
in one of two ways. Either
$$a_{Ni} = \mathrm{E}\,\phi\bigl(U_{N(i)}\bigr), \tag{13.3}$$
where $U_{N(1)}, \ldots, U_{N(N)}$ are the order statistics of a sample of size $N$ from the uniform
distribution on $[0,1]$; or
$$a_{Ni} = \phi\Bigl(\frac{i}{N+1}\Bigr). \tag{13.4}$$
For well-behaved functions $\phi$, these definitions are closely related and almost identical,
because $i/(N+1) = \mathrm{E}\,U_{N(i)}$. Scores of the first type correspond to the locally most
powerful rank tests that are discussed ahead; scores of the second type are attractive in view
of their simplicity.
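The closeness of the two score-generating schemes can be checked numerically; in this sketch (sizes and simulation counts are arbitrary choices), the approximate scores (13.4) for $\phi(u) = u$ are compared with a Monte Carlo estimate of the exact scores (13.3), which coincide here because $\mathrm{E}\,U_{N(i)} = i/(N+1)$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10
i = np.arange(1, N + 1)

# Approximate scores (13.4): a_Ni = phi(i/(N+1)), here for phi(u) = u
approx = i / (N + 1)

# Exact scores (13.3): a_Ni = E phi(U_(i)) for uniform order statistics,
# estimated here by Monte Carlo
samples = np.sort(rng.uniform(size=(200_000, N)), axis=1)
exact = samples.mean(axis=0)     # estimates E U_(i) = i/(N+1)

assert np.allclose(exact, approx, atol=5e-3)
```

For a nonlinear $\phi$ the two score vectors differ for finite $N$ but merge as $N \to \infty$, which is the content of the equivalence claims in Theorem 13.5.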
13.5 Theorem. Let $R_N$ be the rank vector of an i.i.d. sample $X_1, \ldots, X_N$ from the
continuous distribution function $F$. Let the scores $a_N$ be generated according to (13.3) for
a measurable function $\phi$ that is not constant almost everywhere, and satisfies $\int_0^1 \phi^2(u)\,du < \infty$. Define the variables
$$T_N = \sum_{i=1}^N c_{Ni}\,a_{N,R_{Ni}}, \qquad \hat T_N = N \bar c_N \bar a_N + \sum_{i=1}^N (c_{Ni} - \bar c_N)\,\phi\bigl(F(X_i)\bigr).$$
Then the sequences $T_N$ and $\hat T_N$ are asymptotically equivalent in the sense that $\mathrm{E}T_N = \mathrm{E}\hat T_N$
and $\operatorname{var}(T_N - \hat T_N)/\operatorname{var} T_N \to 0$. The same is true if the scores are generated according to
(13.4) for a function $\phi$ that is continuous almost everywhere, is nonconstant, and satisfies
$N^{-1}\sum_{i=1}^N \phi^2\bigl(i/(N+1)\bigr) \to \int_0^1 \phi^2(u)\,du < \infty$.
Proof. Set $U_i = F(X_i)$, and view the rank vector $R_N$ as the ranks of the first $N$ elements
of the infinite sequence $U_1, U_2, \ldots$. In view of statement (v) of Lemma 13.1, the
definition (13.3) is equivalent to
$$a_{N,R_{Ni}} = \mathrm{E}\bigl(\phi(U_i) \mid R_N\bigr).$$
This immediately yields that the projection of $\hat T_N$ onto the set of all square-integrable
functions of $R_N$ is equal to $T_N = \mathrm{E}(\hat T_N \mid R_N)$. It is straightforward to compute that
$$\frac{\operatorname{var} T_N}{\operatorname{var} \hat T_N} = \frac{\sum_{i=1}^N (a_{Ni} - \bar a_N)^2}{(N-1)\operatorname{var}\phi(U_1)}.$$
If it can be shown that the right side converges to 1, then the sequences $T_N$ and $\hat T_N$ are
asymptotically equivalent by the projection theorem, Theorem 11.2, and the proof for the
scores (13.3) is complete.
Using a martingale convergence theorem, we shall show the stronger statement
$$\mathrm{E}\bigl(a_{N,R_{N1}} - \phi(U_1)\bigr)^2 \to 0. \tag{13.6}$$
Because each rank vector $R_{j-1}$ is a function of the next rank vector $R_j$ (for one observation
more), it follows that $a_{N,R_{N1}} = \mathrm{E}\bigl(\phi(U_1) \mid R_1, \ldots, R_N\bigr)$ almost surely. Because $\phi$ is square-integrable, a martingale convergence theorem (e.g., Theorem 10.5.4 in [42]) yields that the
sequence $a_{N,R_{N1}}$ converges in second mean and almost surely to $\mathrm{E}\bigl(\phi(U_1) \mid R_1, R_2, \ldots\bigr)$. If
$\phi(U_1)$ is measurable with respect to the $\sigma$-field generated by $R_1, R_2, \ldots$, then the conditional expectation reduces to $\phi(U_1)$ and (13.6) follows.
The projection of $U_1$ onto the set of measurable functions of $R_{N1}$ equals the conditional
expectation $\mathrm{E}(U_1 \mid R_{N1}) = R_{N1}/(N+1)$. By a straightforward calculation, the sequence
$\operatorname{var}\bigl(R_{N1}/(N+1)\bigr)$ converges to $1/12 = \operatorname{var} U_1$. By the projection theorem, Theorem 11.2, it follows
that $R_{N1}/(N+1) \to U_1$ in quadratic mean. Because $R_{N1}$ is measurable with respect to the $\sigma$-field
generated by $R_1, R_2, \ldots$ for every $N$, so is its limit $U_1$. This concludes the proof
that $\phi(U_1)$ is measurable with respect to the $\sigma$-field generated by $R_1, R_2, \ldots$, and hence the
proof of the theorem for the scores (13.3).
Next, consider the case that the scores are generated by (13.4). To avoid confusion, write
these scores as $b_{Ni} = \phi\bigl(i/(N+1)\bigr)$, and let $a_{Ni}$ be defined by (13.3) as before. We shall
prove that the sequences of rank statistics $S_N$ and $T_N$ defined from the scores $a_N$ and $b_N$,
respectively, are asymptotically equivalent.
Because $R_{N1}/(N+1)$ converges in probability to $U_1$, and $\phi$ is continuous almost everywhere, it follows that $\phi\bigl(R_{N1}/(N+1)\bigr) \to \phi(U_1)$ in probability. The assumption on $\phi$ is exactly
that $\mathrm{E}\phi^2\bigl(R_{N1}/(N+1)\bigr)$ converges to $\mathrm{E}\phi^2(U_1)$. By Proposition 2.29, we conclude that
$\phi\bigl(R_{N1}/(N+1)\bigr) \to \phi(U_1)$ in second mean. Combining this with (13.6), we obtain that
$\mathrm{E}\bigl(a_{N,R_{N1}} - b_{N,R_{N1}}\bigr)^2 \to 0$, whence the asymptotic equivalence. ■
Under the conditions of the preceding theorem, the sequence of rank statistics $\sum c_{Ni} a_{N,R_{Ni}}$
is asymptotically equivalent to a sum of independent variables. This sum is asymptotically
normal under the Lindeberg-Feller condition, given in Proposition 2.27. In the present case,
because the variables $\phi\bigl(F(X_i)\bigr)$ are independent and identically distributed, this is implied by
$$\frac{\max_{1\le i\le N}(c_{Ni} - \bar c_N)^2}{\sum_{i=1}^N (c_{Ni} - \bar c_N)^2} \to 0. \tag{13.7}$$
This is satisfied by the most important choices of vectors of coefficients.
13.8 Corollary. If the vector of coefficients $c_N$ satisfies (13.7), and the scores are generated according to (13.3) for a measurable, nonconstant, square-integrable function $\phi$, then
the sequence of standardized rank statistics $(T_N - \mathrm{E}T_N)/\mathrm{sd}\,T_N$ converges weakly to an
$N(0,1)$-distribution. The same is true if the scores are generated by (13.4) for a function
$\phi$ that is continuous almost everywhere, is nonconstant, and satisfies $N^{-1}\sum_{i=1}^N \phi^2\bigl(i/(N+1)\bigr) \to \int_0^1 \phi^2(u)\,du$.
13.9 Example (Monotone score generating functions). Any nondecreasing, nonconstant
function $\phi$ satisfies the conditions imposed on score-generating functions of the type (13.4)
in the preceding theorem and corollary. The same is true for every $\phi$ that is of bounded
variation, because any such $\phi$ is a difference of two monotone functions.
To see this, we recall from the preceding proof that it is always true that $R_{N1}/(N+1) \to U_1$, almost surely. Furthermore,
$$\mathrm{E}\phi^2\Bigl(\frac{R_{N1}}{N+1}\Bigr) = \frac1N \sum_{i=1}^N \phi^2\Bigl(\frac{i}{N+1}\Bigr) \le \frac{N+1}{N}\sum_{i=1}^N \int_{i/(N+1)}^{(i+1)/(N+1)} \phi^2(u)\,du.$$
The right side converges to $\int_0^1 \phi^2(u)\,du$. Because $\phi$ is continuous almost everywhere, it
follows by Proposition 2.29 that $\phi\bigl(R_{N1}/(N+1)\bigr) \to \phi(U_1)$ in quadratic mean. □
13.10 Example (Two-sample coefficients). In the two-sample problem, the natural coefficients are
$$c_{Ni} = \begin{cases} 0, & i = 1, \ldots, m, \\ 1, & i = m+1, \ldots, m+n. \end{cases}$$
In this case $\bar c_N = n/N$ and $\sum_{i=1}^N (c_{Ni} - \bar c_N)^2 = mn/N$. The Lindeberg condition is satisfied
provided both $m \to \infty$ and $n \to \infty$. □
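A two-line numerical check of the computations in this example (the sample sizes are arbitrary):

```python
m, n = 5, 7                       # arbitrary two-sample sizes
N = m + n
c = [0] * m + [1] * n             # two-sample coefficients
cbar = sum(c) / N
assert cbar == n / N              # mean coefficient is n/N
ss = sum((ci - cbar) ** 2 for ci in c)
assert abs(ss - m * n / N) < 1e-12   # sum of squared deviations is mn/N
```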
13.11 Example (Wilcoxon test). The function $\phi(u) = u$ generates the scores $a_{Ni} = i/(N+1)$. Combined with "two-sample coefficients," it yields a multiple of the Wilcoxon
statistic. According to the preceding theorem, the sequence of Wilcoxon statistics $W_N = \sum_{i=m+1}^N R_{Ni}/(N+1)$ is asymptotically equivalent to
$$-\frac{n}{N}\sum_{i=1}^m F(X_i) + \frac{m}{N}\sum_{j=1}^n F(Y_j) + \frac{n}{2}. \qquad \square$$
13.12 Example (Median test). The score-generating function $\phi(u) = 1\{u \le \tfrac12\}$ combined with the two-sample coefficients yields the statistic
$$\sum_{i=m+1}^N 1\Bigl\{R_{Ni} \le \frac{N+1}{2}\Bigr\}.$$
This counts the number of $Y_j$ less than the median of the pooled sample. Large values of
this test statistic indicate that the distribution of the second sample is stochastically smaller
than the distribution of the first sample. □
The examples of rank statistics discussed so far have a direct intuitive meaning as statistics
measuring a difference in location. It is not always obvious how to find a rank statistic appropriate
for testing certain hypotheses. Which rank statistics measure a difference in scale, for
instance?
A general method of generating rank statistics for a specific situation is as follows.
Suppose that it is required to test the null hypothesis that $X_1, \ldots, X_N$ are i.i.d. versus the
alternative that $X_1, \ldots, X_N$ are independent with $X_i$ having a distribution with density $f_{c_{Ni}\theta}$
for a given one-dimensional parametric model $\theta \mapsto f_\theta$. According to the Neyman-Pearson
lemma, the most powerful rank test for testing $H_0\colon \theta = 0$ against a simple alternative $\theta$ rejects the null hypothesis for large values of the quotient
$$\frac{P_\theta(R_N = r)}{P_0(R_N = r)} = N!\,P_\theta(R_N = r).$$
Equivalently, the null hypothesis is rejected for large values of $P_\theta(R_N = r)$. This test
depends on the alternative $\theta$, but this dependence disappears if we restrict ourselves to
alternatives $\theta$ close to zero. Under regularity conditions, as $\theta \downarrow 0$,
$$P_\theta(R_N = r) = \frac{1}{N!} + \theta \int_{R_N = r} \sum_{i=1}^N c_{Ni}\,\frac{\dot f_0}{f_0}(x_i)\,\prod_{i=1}^N f_0(x_i)\,dx_1 \cdots dx_N + o(\theta)$$
$$= \frac{1}{N!} + \frac{\theta}{N!}\sum_{i=1}^N c_{Ni}\,\mathrm{E}_0\Bigl(\frac{\dot f_0}{f_0}(X_i)\,\Big|\,R_N = r\Bigr) + o(\theta).$$
Conclude that, for small $\theta > 0$, large values of $P_\theta(R_N = r)$ correspond to large values of
the simple linear rank statistic $T_N = \sum_{i=1}^N c_{Ni} a_{N,R_{Ni}}$, for the vector $a_N$ of scores given by
$$a_{Ni} = \mathrm{E}_0\,\frac{\dot f_0}{f_0}\bigl(X_{N(i)}\bigr).$$
13.13 Example (Two-sample scale). To generate a test statistic for the two-sample scale
problem, let $f_\theta(x) = e^{-\theta} f(e^{-\theta}x)$ for a fixed density $f$. If $X_i$ has density $f_{c_{Ni}\theta}$ and the
vector $c$ is chosen equal to the usual vector of two-sample coefficients, then the first $m$
observations have density $f_0 = f$; the last $n = N - m$ observations have density $f_\theta$. The
alternative hypothesis that the second sample has larger scale corresponds to $\theta > 0$. The
scores for the locally most powerful rank test are given by
$$a_{Ni} = -\mathrm{E}\Bigl(1 + X_{N(i)}\,\frac{f'}{f}\bigl(X_{N(i)}\bigr)\Bigr),$$
in which $X_{N(1)}, \ldots, X_{N(N)}$ are the order statistics of a random sample from $f$. For instance, for $f$ equal to the standard normal density this leads to the rank statistic
$\sum_{i=m+1}^N a_{N,R_{Ni}}$ with scores
$$a_{Ni} = \mathrm{E}\,\Phi^{-1}\bigl(U_{N(i)}\bigr)^2 - 1.$$
The same test is found for $f$ equal to a normal density with a different mean or variance.
This follows by direct calculation, or alternatively from the fact that rank statistics are
location and scale invariant. The latter implies that the probabilities $P_{\mu,\sigma,\theta}(R_N = r)$ of
the rank vector $R_N$ of a sample of independent variables $X_1, \ldots, X_N$ with $X_i$ distributed
according to the density $e^{-\theta} f\bigl(e^{-\theta}(x - \mu)/\sigma\bigr)/\sigma$ do not depend on $(\mu, \sigma)$. Thus the procedure to
generate locally most powerful scores yields the same result for any $(\mu, \sigma)$. □
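The normal scale scores $\mathrm{E}\,\Phi^{-1}(U_{N(i)})^2 - 1$ can be estimated by simulating normal order statistics. In this sketch (sizes and replication counts are mine), the scores sum to approximately zero and are positive only for extreme ranks, as one expects for a scale test:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
# Scale scores for normal f: a_Ni = E Phi^{-1}(U_(i))^2 - 1,
# i.e. E Z_(i)^2 - 1 for normal order statistics Z_(1) <= ... <= Z_(N)
z = np.sort(rng.normal(size=(400_000, N)), axis=1)
a = (z ** 2).mean(axis=0) - 1

# The scores sum to zero, since sum_i Z_(i)^2 = sum_i Z_i^2 has mean N
assert abs(a.sum()) < 0.03
# Extreme ranks get positive scores, central ranks negative ones
assert a[0] > 0 and a[-1] > 0 and a[N // 2] < 0
```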
13.14 Example (Two-sample location). In order to find locally most powerful tests for
location, we choose $f_\theta(x) = f(x - \theta)$ for a fixed density $f$ and the coefficients $c$ equal to
the two-sample coefficients. Then the first $m$ observations have density $f(x)$ and the last
$n = N - m$ observations have density $f(x - \theta)$. The scores for a locally most powerful
rank test are
$$a_{Ni} = -\mathrm{E}\,\frac{f'}{f}\bigl(X_{N(i)}\bigr).$$
For the standard normal density, this leads to a variation of the van der Waerden statistic.
The Wilcoxon statistic corresponds to the logistic density. □
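For the logistic density the location score function is $-f'/f(x) = 2F(x) - 1$, so the locally most powerful scores are linear in $i$, which recovers the Wilcoxon test. A Monte Carlo sketch (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
i = np.arange(1, N + 1)

# Logistic order statistics and the location score -f'/f(x) = 2F(x) - 1
x = np.sort(rng.logistic(size=(400_000, N)), axis=1)
score = 2 / (1 + np.exp(-x)) - 1          # 2F(x) - 1 for the logistic F

# Locally most powerful location scores a_Ni = -E (f'/f)(X_(i))
a = score.mean(axis=0)

# They are linear in i: a_Ni = 2i/(N+1) - 1, equivalent to Wilcoxon scores
assert np.allclose(a, 2 * i / (N + 1) - 1, atol=5e-3)
```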
13.15 Example (Log rank test). The cumulative hazard function corresponding to a continuous distribution function $F$ is the function $\Lambda = -\log(1 - F)$. This is an important
modeling tool in survival analysis. Suppose that we wish to test the null hypothesis that two
samples with cumulative hazard functions $\Lambda_X$ and $\Lambda_Y$ are identically distributed against
the alternative that they are not. The hypothesis of proportional hazards postulates that
$\Lambda_Y = \theta \Lambda_X$ for a constant $\theta$, meaning that the second sample is a factor $\theta$ more "at risk" at
any time. If we wish to have large power against alternatives that satisfy this postulate, then
it makes sense to use the locally most powerful scores corresponding to a family defined
by $\Lambda_\theta = \theta \Lambda_1$. The corresponding family of cumulative distribution functions $F_\theta$ satisfies
$1 - F_\theta = (1 - F_1)^\theta$ and is known as the family of Lehmann alternatives. The locally most
powerful scores for this family correspond to the generating function
$$\phi(u) = -\log(1 - u) - 1.$$
It is fortunate that the score-generating function does not depend on the baseline hazard
function $\Lambda_1$. The resulting test is known as the log rank test. The test is related to the
Savage test, which uses the scores
$$a_{Ni} = \sum_{j=N-i+1}^{N} \frac{1}{j} - 1.$$
The log rank test is a very popular test in survival analysis. In practice, it usually needs to be
extended to the situation that the observations are censored. □
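Assuming the standard form of the Savage scores, $a_{Ni} = \sum_{j=N-i+1}^N 1/j - 1$, they can be checked against simulation: $-\log(1 - U)$ is standard exponential, and the mean of the $i$th exponential order statistic out of $N$ is $\sum_{j=N-i+1}^N 1/j$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10
i = np.arange(1, N + 1)

# Savage scores: a_Ni = sum_{j=N-i+1}^{N} 1/j - 1
savage = np.array([sum(1.0 / j for j in range(N - k + 1, N + 1))
                   for k in i]) - 1

# They equal E(-log(1 - U_(i))) - 1: -log(1 - U) is standard exponential,
# and the i-th exponential order statistic has mean sum_{j=N-i+1}^N 1/j
e = np.sort(rng.exponential(size=(400_000, N)), axis=1)
assert np.allclose(e.mean(axis=0) - 1, savage, atol=0.02)
```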
13.16 Example (More-sample problem). Suppose the problem is to test the hypothesis
that $k$ independent random samples $X_1, \ldots, X_{N_1}, X_{N_1+1}, \ldots, X_{N_2}, \ldots, X_{N_{k-1}+1}, \ldots, X_{N_k}$
are identical in distribution. Let $N = N_k$ be the total number of observations, and let $R_N$ be
the rank vector of the pooled sample $X_1, \ldots, X_N$. Given scores $a_N$, inference can be based
on the rank statistics
$$T_{N1} = \sum_{i=1}^{N_1} a_{N,R_{Ni}}, \qquad T_{N2} = \sum_{i=N_1+1}^{N_2} a_{N,R_{Ni}}, \qquad \ldots, \qquad T_{Nk} = \sum_{i=N_{k-1}+1}^{N_k} a_{N,R_{Ni}}.$$
The testing procedure can consist of several two-sample tests, comparing pairs of (pooled)
subsamples, or on an overall statistic. One possibility for an overall statistic is the chi-square
statistic. For $n_j = N_j - N_{j-1}$ equal to the number of observations in the $j$th sample, define
$$C_N^2 = \sum_{j=1}^k \frac{\bigl(T_{Nj} - n_j \bar a_N\bigr)^2}{n_j \operatorname{var}\phi(U_1)}.$$
If the scores are generated by (13.3) or (13.4) and all sample sizes $n_j$ tend to infinity, then
every sequence $T_{Nj}$ is asymptotically normal under the null hypothesis, under the conditions
of Theorem 13.5. In fact, because the approximations $\hat T_{Nj}$ are jointly asymptotically normal
by the multivariate central limit theorem, the vector $T_N = (T_{N1}, \ldots, T_{Nk})$ is asymptotically
normal as well. By elementary calculations, if $n_j/N \to \lambda_j$, the vector $T_N$, suitably normalized, has limit covariance matrix
$$\begin{pmatrix} \lambda_1(1-\lambda_1) & -\lambda_1\lambda_2 & \cdots & -\lambda_1\lambda_k \\ -\lambda_2\lambda_1 & \lambda_2(1-\lambda_2) & \cdots & -\lambda_2\lambda_k \\ \vdots & & \ddots & \vdots \\ -\lambda_k\lambda_1 & -\lambda_k\lambda_2 & \cdots & \lambda_k(1-\lambda_k) \end{pmatrix}.$$
This limit distribution is similar to the limit distribution of a sequence of multinomial vectors.
Analogously to the situation in the case of Pearson's chi-square tests for a multinomial
distribution (see Chapter 17), the sequence $C_N^2$ converges in distribution to a chi-square
distribution with $k - 1$ degrees of freedom.
There are many reasonable choices of scores. The most popular choice is based on
$\phi(u) = u$ and leads to the Kruskal-Wallis test. Its test statistic is usually written in the form
$$\frac{12}{N(N+1)} \sum_{j=1}^k n_j \Bigl(\bar R_{j\cdot} - \frac{N+1}{2}\Bigr)^2,$$
in which $\bar R_{j\cdot}$ is the average of the ranks of the observations in the $j$th sample. This test
statistic measures the distance of the average scores of the $k$ samples to the average
score $(N+1)/2$ of the pooled sample.
An alternative is to use locally asymptotically powerful scores for a family of distribu-
tions of interest. Also, choosing the same score generating function for all subsamples is
convenient, but not necessary, provided the chi-square statistic is modified. D
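A minimal sketch of the Kruskal-Wallis statistic (the function name and simulation design are mine); under the null hypothesis its exact mean is $k - 1$, in agreement with the chi-square limit with $k - 1$ degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(5)
k, n_j = 3, 30                    # three samples of 30 observations each

def kruskal_wallis(samples):
    """12/(N(N+1)) * sum_j n_j (Rbar_j - (N+1)/2)^2 for pooled ranks."""
    pooled = np.concatenate(samples)
    ranks = pooled.argsort().argsort() + 1.0
    total = len(pooled)
    stat, start = 0.0, 0
    for s in samples:
        r_mean = ranks[start:start + len(s)].mean()
        stat += len(s) * (r_mean - (total + 1) / 2) ** 2
        start += len(s)
    return 12.0 / (total * (total + 1)) * stat

# Under the null (identical continuous distributions) the statistic is
# approximately chi-square with k - 1 = 2 degrees of freedom, mean k - 1
stats = [kruskal_wallis([rng.normal(size=n_j) for _ in range(k)])
         for _ in range(2000)]
assert abs(np.mean(stats) - (k - 1)) < 0.3
```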
13.2 Signed Rank Statistics

The ordinary ranks of a sample can always be derived from the combined set of absolute
ranks and signs. Thus, the vectors of absolute ranks and signs are together statistically more
informative than the ordinary ranks. The difference is dramatic if testing the location of a
symmetric density of a given form, in which case the class of signed rank statistics contains
asymptotically efficient test statistics in great generality.
The main attraction of signed rank statistics is their simplicity, particularly their being
distribution-free over the set of all symmetric distributions. Write $|X|$, $R_N^+$, and $\operatorname{sign}_N(X)$
for the vectors of absolute values, absolute ranks, and signs.
13.17 Lemma. Let $X_1, \ldots, X_N$ be a random sample from a continuous distribution that
is symmetric about zero. Then
(i) the vectors $(|X|, R_N^+)$ and $\operatorname{sign}_N(X)$ are independent;
(ii) the vector $R_N^+$ is uniformly distributed over the $N!$ permutations of $\{1, \ldots, N\}$;
(iii) the vector $\operatorname{sign}_N(X)$ is uniformly distributed over $\{-1, 1\}^N$;
(iv) for any signed rank statistic, $\operatorname{var} \sum_{i=1}^N a_{N,R_{Ni}^+}\operatorname{sign}(X_i) = \sum_{i=1}^N a_{Ni}^2$.
Consequently, for testing the null hypothesis that a sample is i.i.d. from a continuous,
symmetric distribution, the critical level of a signed rank statistic can be set without further
knowledge of the "shape" of the underlying distribution.
The null hypothesis of symmetry arises naturally in the two-sample problem with paired
observations. Suppose that, given independent observations $(X_1, Y_1), \ldots, (X_N, Y_N)$, it is
desired to test the hypothesis that the distribution of $X_i - Y_i$ is "centered at zero." If the
observations $(X_i, Y_i)$ are exchangeable, that is, the pairs $(X_i, Y_i)$ and $(Y_i, X_i)$ are equal
in distribution, then $X_i - Y_i$ is symmetrically distributed about zero. This is the case, for
instance, if, given a third variable (usually called "factor"), the observations $X_i$ and $Y_i$ are
conditionally independent and identically distributed. For the vector of absolute ranks to
be uniformly distributed on the set of all permutations it is necessary to assume in addition
that the differences are identically distributed.
For the signs alone to be distribution-free, it suffices, of course, that the pairs are independent and that $P(X_i < Y_i) = P(X_i > Y_i) = \tfrac12$ for every $i$. Consequently, tests based on
only the signs have a wider applicability than the more general signed rank tests. However,
depending on the model they may be less efficient.
Then the sequences $T_N$ and $\hat T_N$ are asymptotically equivalent in the sense that $N^{-1}\operatorname{var}(T_N - \hat T_N) \to 0$. Consequently, the sequence $N^{-1/2} T_N$ is asymptotically normal with mean zero
and variance $\int_0^1 \phi^2(u)\,du$. The same is true if the scores are generated according to (13.4)
for a function $\phi$ that is continuous almost everywhere and satisfies $N^{-1}\sum_{i=1}^N \phi^2\bigl(i/(N+1)\bigr) \to \int_0^1 \phi^2(u)\,du < \infty$.
Proof. Because the vectors $\operatorname{sign}_N(X)$ and $(|X|, R_N^+)$ are independent and $\mathrm{E}\operatorname{sign}_N(X) = 0$, the means of both $T_N$ and $\hat T_N$ are zero. Furthermore, by the independence and the
orthogonality of the signs,
$$\frac1N \operatorname{var}(T_N - \hat T_N) = \mathrm{E}\bigl(a_{N,R^+_{N1}} - \phi\bigl(F^+(|X_1|)\bigr)\bigr)^2.$$
The expectation on the right side is exactly the expression in (13.6), evaluated for the
special choice $U_1 = F^+(|X_1|)$, for $F^+$ the distribution function of $|X_1|$. This can be shown to converge to zero as in the proof of
Theorem 13.5. ■
13.19 Example (Wilcoxon signed rank statistic). The Wilcoxon signed rank statistic
$W_N = \sum_{i=1}^N R^+_{Ni}\operatorname{sign}(X_i)$ is obtained from the score-generating function $\phi(u) = u$. Large
values of this statistic indicate that large absolute values $|X_i|$ tend to go together with positive $X_i$. Thus large values of the Wilcoxon statistic suggest that the location of the $X_i$ is
larger than zero. Under the null hypothesis that $X_1, \ldots, X_N$ are i.i.d. and symmetrically
distributed about zero, the sequence $N^{-3/2} W_N$ is asymptotically normal $N(0, 1/3)$. The
variance of $W_N$ is equal to $N(N+1)(2N+1)/6$.
The signed rank statistic is asymptotically equivalent to the $U$-statistic with kernel
$h(x_1, x_2) = 1\{x_1 + x_2 > 0\}$. (See problem 12.9.) This connection yields the limit distribution also under nonsymmetric distributions. □
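The null distribution of the Wilcoxon signed rank statistic is easy to simulate, because under the null hypothesis the signs are independent of the absolute ranks. This sketch (sizes are arbitrary) checks the variance formula $N(N+1)(2N+1)/6$:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 50
x = rng.normal(size=N)            # symmetric about zero: the null hypothesis

# Absolute ranks and signs
abs_ranks = np.abs(x).argsort().argsort() + 1
W = (abs_ranks * np.sign(x)).sum()

# Null variance: sum of squared scores = sum i^2 = N(N+1)(2N+1)/6
var_null = N * (N + 1) * (2 * N + 1) / 6

# Monte Carlo check over independent sign vectors attached to fixed scores
sims = rng.choice([-1.0, 1.0], size=(20_000, N)) * np.arange(1, N + 1)
assert abs(sims.sum(axis=1).var() / var_null - 1) < 0.05
```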
Signed rank statistics that are locally most powerful can be obtained in a similar fashion
as locally most powerful rank statistics were obtained in the previous section. Let $f$ be
a symmetric density, and let $X_1, \ldots, X_N$ be a random sample from the density $f(\cdot - \theta)$.
Then, under regularity conditions,
$$P_\theta\bigl(R_N^+ = r, \operatorname{sign}_N(X) = s\bigr) = \frac{1}{2^N N!} - \frac{\theta}{2^N N!}\sum_{i=1}^N s_i\,\mathrm{E}_0\Bigl(\frac{f'}{f}\bigl(|X_i|\bigr)\,\Big|\,R^+_{Ni} = r_i\Bigr) + o(\theta).$$
Here it is used that $f'/f(x)$ is equal to $\operatorname{sign}(x)\,f'/f(|x|)$ by the skew
symmetry of $f'/f$. It follows that locally most powerful signed rank statistics for testing
$f$ against $f(\cdot - \theta)$ are obtained from the scores
$$a_{Ni} = -\mathrm{E}_0\,\frac{f'}{f}\bigl(|X|_{N(i)}\bigr),$$
in which $|X|_{N(1)}, \ldots, |X|_{N(N)}$ are the order statistics of $|X_1|, \ldots, |X_N|$.
13.20 Example. The Laplace density has score function $-f'/f(x) = \operatorname{sign}(x)$, which equals $1$ for
$x \ge 0$. This leads to the locally most powerful scores $a_{Ni} = 1$. The corresponding test
statistic is the sign statistic $T_N = \sum_{i=1}^N \operatorname{sign}(X_i)$. Is it surprising that this simple statistic
possesses an optimality property? It is shown to be asymptotically optimal for testing
$H_0\colon \theta = 0$ in Chapter 15. □
13.21 Example. The locally most powerful scores for the normal distribution are $a_{Ni} = \mathrm{E}\,\Phi^{-1}\bigl((U_{N(i)} + 1)/2\bigr)$. These are appropriately known as the normal (absolute) scores. □
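The normal absolute scores can be approximated by simulation; the transformation $\Phi^{-1}\bigl((u+1)/2\bigr)$ carries a uniform variable into the distribution of $|Z|$ for standard normal $Z$, so the scores are the expected order statistics of absolute normal values. A sketch (simulation sizes are mine):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(9)
N = 8
# Normal (absolute) scores: a_Ni = E Phi^{-1}((U_(i) + 1)/2)
u = np.sort(rng.uniform(size=(50_000, N)), axis=1)
inv = np.vectorize(NormalDist().inv_cdf)
scores = inv((u + 1) / 2).mean(axis=0)

# Equivalently, the mean order statistics of N absolute normal values
absz = np.sort(np.abs(rng.normal(size=(50_000, N))), axis=1)
assert np.allclose(scores, absz.mean(axis=0), atol=0.02)
```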
13.3 Rank Statistics for Independence

Suppose that $(X_1, Y_1), \ldots, (X_N, Y_N)$ is a random sample from a bivariate distribution, and
let $R_N$ and $S_N$ be the rank vectors of $X_1, \ldots, X_N$ and $Y_1, \ldots, Y_N$, respectively. Statistics
of the form $\sum_{i=1}^N a_{N,R_{Ni}} b_{N,S_{Ni}}$, with $a_N$ and $b_N$ increasing vectors, are reasonable choices
for testing independence.
Under the null hypothesis of independence of $X_i$ and $Y_i$, the vectors $R_N$ and $S_N$ are
independent and both uniformly distributed on the permutations of $\{1, \ldots, N\}$. Let $R'_N$ be
the vector of ranks of $X_1, \ldots, X_N$ if first the pairs $(X_1, Y_1), \ldots, (X_N, Y_N)$ have been put in
the increasing order $Y_1 < Y_2 < \cdots < Y_N$. The coordinates of $R'_N$ are called the antiranks.
Under the null hypothesis, the antiranks are also uniformly distributed on the permutations
of $\{1, \ldots, N\}$. By the definition of the antiranks,
$$\sum_{i=1}^N a_{N,R_{Ni}}\,b_{N,S_{Ni}} = \sum_{i=1}^N a_{N,R'_{Ni}}\,b_{Ni}.$$
The right side is a simple linear rank statistic and can be shown to be asymptotically normal
by Theorem 13.5.
Thus the tests based on the rank correlation coefficient $\rho_N$ are equivalent to tests based on
the linear rank statistic $\sum R_{Ni} S_{Ni}$.
It is straightforward to derive from Theorem 13.5 that the sequence $\sqrt N\,\rho_N$ is asymptotically standard normal under the null hypothesis of independence. □
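The antirank identity can be verified directly (data and score vectors below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 12
x, y = rng.normal(size=N), rng.normal(size=N)

R = x.argsort().argsort() + 1     # ranks of the X's
S = y.argsort().argsort() + 1     # ranks of the Y's
a = np.arange(1, N + 1, dtype=float)        # increasing score vectors
b = np.sqrt(np.arange(1, N + 1, dtype=float))

# Antiranks: ranks of the X's after sorting the pairs by increasing Y
order = np.argsort(y)
R_anti = x[order].argsort().argsort() + 1

# The bivariate rank statistic reduces to a simple linear rank statistic
lhs = (a[R - 1] * b[S - 1]).sum()
rhs = (a[R_anti - 1] * b).sum()
assert np.isclose(lhs, rhs)
```

The reduction works because sorting the pairs by $Y$ makes the $Y$-rank of the $j$th pair equal to $j$, so only the permuted $X$-ranks remain random.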
13.4 Rank Statistics under Alternatives

The preceding sections give the asymptotic behavior of rank statistics, but under the strong assumption that the distribution functions $F_i$ are all equal. This is sufficient for setting critical values of rank tests for the null hypothesis of identical distributions,
but for studying their asymptotic efficiency we also need the asymptotic behavior under
alternatives. For instance, in the two-sample problem we are interested in the asymptotic
distributions under alternatives of the form $F, \ldots, F, G, \ldots, G$, where $F$ and $G$ are the
distributions of the two samples.
For alternatives that converge to the null hypothesis "sufficiently fast," the best approach
is to use Le Cam's third lemma. In particular, if the log likelihood ratios of the alternatives
$F_n, \ldots, F_n, G_n, \ldots, G_n$ with respect to the null distributions $F, \ldots, F, F, \ldots, F$ allow
an asymptotic approximation by a sum of the type $\sum_i f_i(X_i)$, then the joint asymptotic
distribution of the rank statistics and the log likelihood ratios under the null hypothesis
can be obtained from the multivariate central limit theorem and Slutsky's lemma, because
Theorem 13.5 yields a similar approximation for the rank statistics. Next, we can apply Le
Cam's third lemma, as in Example 6.7, to find the limit distribution of the rank statistics
under the alternatives. This approach is relatively easy, and is sufficiently general for most
of the questions of interest. See sections 7.5 and 14.1.1 for examples.
More general alternatives must be handled directly and appear to require stronger conditions on the score-generating function. One possibility is to write the rank statistic as a
functional of the empirical distribution function $\mathbb F_N$ and the weighted empirical distribution function
$\mathbb F_N^c(x) = N^{-1}\sum_{i=1}^N c_{Ni} 1\{X_i \le x\}$ of the observations. Because $R_{Ni} = N \mathbb F_N(X_i)$, we have
$$T_N = \sum_{i=1}^N c_{Ni}\,a_{N, N\mathbb F_N(X_i)} = N \int a_{N, N\mathbb F_N}\,d\mathbb F_N^c.$$
Next, we can apply a von Mises analysis, using the convergence of the empirical distribution
functions to Brownian bridges. This method is explained in general in Chapter 20.
In this section we illustrate another method, based on Hájek's projection lemma. To
avoid technical complications, we restrict ourselves to smooth score-generating functions.
Let $\bar F_N$ be the average of $F_1, \ldots, F_N$, let $\bar F_N^c$ be the weighted sum $N^{-1}\sum_{i=1}^N c_{Ni} F_i$, and
define
$$\hat T_N = \sum_{i=1}^N c_{Ni}\,\phi\bigl(\bar F_N(X_i)\bigr) + \sum_{i=1}^N \int 1\{X_i \le y\}\,\phi'\bigl(\bar F_N(y)\bigr)\,d\bar F_N^c(y).$$
We shall show that the variables $\hat T_N$ are the Hájek projections of approximations to the
variables $T_N$, up to centering at mean zero. The Hájek projections of the variables $T_N$
themselves give a better approximation but are more complicated.
13.23 Lemma. If $\phi\colon [0,1] \mapsto \mathbb{R}$ is twice continuously differentiable, then there exists a
universal constant $K$ such that
$$\operatorname{var}(T_N - \hat T_N) \le K\,\frac1N \sum_{i=1}^N (c_{Ni} - \bar c_N)^2\,\bigl(\|\phi'\|_\infty^2 + \|\phi''\|_\infty^2\bigr).$$
Proof. Because the inequality is for every fixed $N$, we delete the index $N$ in the proof.
Furthermore, because the assertion concerns a variance and both $T_N$ and $\hat T_N$ change by a
constant if the $c_{Ni}$ are replaced by $c_{Ni} - \bar c_N$, it is not a loss of generality to assume that
$\bar c_N = 0$. (Evaluate the integral defining $\hat T_N$ to see this.)
The rank of $X_i$ can be written as $R_i = 1 + \sum_{k \ne i} 1\{X_k \le X_i\}$. This representation and
a little algebra show that
$$\mathrm{E}\Bigl(\frac{R_i}{N+1} - \bar F(X_i)\Bigr)^4 \lesssim \mathrm{E}\,\mathrm{E}\biggl(\Bigl(\frac1N \sum_{k \ne i}\bigl(1\{X_k \le X_i\} - F_k(X_i)\bigr)\Bigr)^4\,\Big|\,X_i\biggr) + \frac{1}{N^4} \lesssim \frac{1}{N^2}.$$
Next, developing $\phi$ in a two-term Taylor expansion around $\bar F(X_i)$, for each term in the sum
that defines $T$, we see that there exist random variables $K_i$ that are bounded by $\|\phi''\|_\infty$ such
that
$$T = \sum_{i=1}^N c_i\,\phi\bigl(\bar F(X_i)\bigr) + \sum_{i=1}^N c_i\Bigl(\frac{R_{Ni}}{N+1} - \bar F(X_i)\Bigr)\phi'\bigl(\bar F(X_i)\bigr) + \sum_{i=1}^N c_i\Bigl(\frac{R_{Ni}}{N+1} - \bar F(X_i)\Bigr)^2 \frac{K_i}{2} =: T_0 + T_1 + T_2.$$
Using the Cauchy-Schwarz inequality and the fourth-moment bound obtained previously,
we see that the quadratic term $T_2$ is bounded above in second mean as in the lemma. The
leading term $T_0$ is a sum of functions of the single variables $X_i$, and is the first part of $\hat T$.
We shall show that the linear term $T_1$ is asymptotically equivalent to its Hájek projection,
which, moreover, is asymptotically equivalent to the second part of $\hat T$, up to a constant. The
Hájek projection of $T_1$ is equal to, up to a constant,
$$\sum_i c_i\,\mathrm{E}\Bigl[\frac{R_i}{N+1}\,\phi'\bigl(\bar F(X_i)\bigr)\,\Big|\,X_i\Bigr] - \sum_i c_i\,\bar F(X_i)\,\phi'\bigl(\bar F(X_i)\bigr) + \frac{1}{N+1}\sum_i c_i \sum_{j \ne i}\mathrm{E}\bigl(1\{X_j \le X_i\}\,\phi'\bigl(\bar F(X_i)\bigr)\,\big|\,X_j\bigr) + \text{constant}.$$
If we replace $N+1$ by $N$, write out the conditional expectation, add the diagonal terms, and
remove the constant, then we obtain the second term in the definition of $\hat T$. The difference
between these two expressions is bounded above in second mean as in the lemma.
To conclude the proof it suffices to show that the difference between $T_1$ and its Hájek
projection is negligible. We employ the Hoeffding decomposition. Because each of the
variables $R_i\,\phi'\bigl(\bar F(X_i)\bigr)$ is contained in the space $\sum_{|A| \le 2} H_A$, the difference between $T_1$
and its Hájek projection is equal to the projection of $T_1$ onto the space $\sum_{|A| = 2} H_A$. This
projection has second moment
$$\frac{1}{(N+1)^2}\sum_{|A|=2}\mathrm{E}\Bigl(P_A \sum_i c_i \sum_k 1\{X_k \le X_i\}\,\phi'\bigl(\bar F(X_i)\bigr)\Bigr)^2.$$
The projection of the variable $1\{X_k \le X_i\}\,\phi'\bigl(\bar F(X_i)\bigr)$, which is contained in the space $H_{\{k,i\}}$,
onto the space $H_{\{a,b\}}$ is zero unless $\{a, b\} \subset \{k, i\}$. Thus, the expression in the preceding
display is bounded by a multiple of $N^{-1}\sum_i c_i^2\,\|\phi'\|_\infty^2$, as in the lemma. ■
As a consequence of the lemma, the sequences $(T_N - \mathrm{E}T_N)/\mathrm{sd}\,T_N$ and $(\hat T_N - \mathrm{E}\hat T_N)/\mathrm{sd}\,\hat T_N$
have the same limiting distribution (if any) if
$$\frac1N \sum_{i=1}^N (c_{Ni} - \bar c_N)^2\,\bigl(\|\phi'\|_\infty^2 + \|\phi''\|_\infty^2\bigr) \Big/ \operatorname{var} \hat T_N \to 0.$$
Furthermore,
$$\mathrm{E}T_N \approx \bar c_N \sum_{i=1}^N \phi\Bigl(\frac{i}{N+1}\Bigr) + \sum_{i=1}^N (c_{Ni} - \bar c_N)\,\mathrm{E}\phi\bigl(\bar F_N(X_i)\bigr).$$
The square of the difference is bounded by the upper bound of the lemma.
The preceding lemma is restricted to smooth score-generating functions. One possibility
to extend the result to more general scores is to show that the difference between the rank
statistics of interest and suitable approximations by rank statistics with smooth scores is
small. The following lemma is useful for this purpose, although it is suboptimal if the
observations are identically distributed. (For a proof, see Theorem 3.1 in [68].)
13.24 Lemma (Variance inequality). For nondecreasing scores $a_{N1} \le \cdots \le a_{NN}$
and arbitrary coefficients $c_{N1}, \ldots, c_{NN}$,
13.5 Permutation Tests

These statistics include, for instance, the score statistics for testing that the two samples
have distributions $p_\theta$ and $p_0$, respectively, for which we take $f$ equal to the score function
$\dot p_0/p_0$ of the model. Because a permutation test is conditional on the observed values,
and $T_N$ is fixed once $\sum_j f(Y_j)$ and $\sum_i f(Z_i)$ are fixed, it would be equivalent to consider
statistics of the form $\sum_j f(Y_j)$.
Let $(\pi_{N1}, \ldots, \pi_{NN})$ be uniformly distributed on the $N!$ permutations of the numbers
$1, 2, \ldots, N$, and be independent of $X_1, \ldots, X_m, Y_1, \ldots, Y_n$.
13.25 Theorem. Let both $\mathrm{E}f^2(X_1)$ and $\mathrm{E}f^2(Y_1)$ be finite, and suppose that $m, n \to \infty$
such that $m/N \to \lambda \in (0,1)$. Then, given almost every sequence $X_1, X_2, \ldots, Y_1, Y_2, \ldots$,
the sequence $\sqrt N\,T_N(Z_{\pi_{N1}}, \ldots, Z_{\pi_{NN}})$ is asymptotically normal with mean zero. Under the
null hypothesis the asymptotic variance is equal to $\operatorname{var} f(X_1)/\bigl(\lambda(1-\lambda)\bigr)$.
Proof. Conditionally on the values of the pooled sample, the statistic $\sqrt N\,T_N(Z_{\pi_{N1}}, \ldots, Z_{\pi_{NN}})$ is distributed as the simple linear rank statistic $\sum_{i=1}^N c_{Ni} a_{N,R_{Ni}}$ with coefficients and
scores
$$c_{Ni} = \frac{f(z_{(i)})}{\sqrt N}, \qquad a_{Ni} = \begin{cases} N/m, & i \le m, \\ -N/n, & i > m. \end{cases}$$
Here $R_{N1}, \ldots, R_{NN}$ are the antiranks of $\pi_{N1}, \ldots, \pi_{NN}$, defined by the equation $\sum c_{N,\pi_{Ni}} a_{Ni} = \sum c_{Ni} a_{N,R_{Ni}}$ (for any numbers $c_{Ni}$ and $a_{Ni}$).
The scores are generated as $a_{Ni} = \phi_N\bigl(i/(N+1)\bigr)$ for the functions
$$\phi_N(u) = \begin{cases} \dfrac{N}{m}, & u \le \dfrac{m}{N+1}, \\[4pt] -\dfrac{N}{n}, & u > \dfrac{m}{N+1}. \end{cases}$$
These functions depend on $N$, unlike the situation of Theorem 13.5, but they converge
to the fixed function $\phi = \lambda^{-1} 1_{[0,\lambda]} - (1-\lambda)^{-1} 1_{(\lambda,1]}$. By a minor extension of Theorem 13.5, the sequence $\sum c_{Ni} a_{N,R_{Ni}}$ is asymptotically equivalent to $\sum (c_{Ni} - \bar c_N)\,\phi(U_i)$,
for a uniform sample $U_1, \ldots, U_N$. The (asymptotic) variance of the latter variable is easy to
compute. ■
$$\sigma^2 = \frac{\operatorname{var} f(X_1)}{\lambda(1 - \lambda)}.$$
The limit is the same as the conditional limit distribution of the sequence $\sqrt N\,T_N(Z_{\pi_{N1}}, \ldots, Z_{\pi_{NN}})$ under the null hypothesis. Thus, we have a choice of two sequences of tests, both of
asymptotic level $\alpha$, rejecting $H_0$ if:
- $\sqrt N\,T_N(X_1, \ldots, X_m, Y_1, \ldots, Y_n) \ge z_\alpha \sigma$; or
- $\sqrt N\,T_N(X_1, \ldots, X_m, Y_1, \ldots, Y_n) \ge c_N(X_1, \ldots, X_m, Y_1, \ldots, Y_n)$, where
$c_N(X_1, \ldots, X_m, Y_1, \ldots, Y_n)$ is the upper $\alpha$-quantile of the conditional
distribution of $\sqrt N\,T_N(Z_{\pi_{N1}}, \ldots, Z_{\pi_{NN}})$ given $Z_{(1)}, \ldots, Z_{(N)}$.
The second test is just the permutation test discussed previously. By the preceding theorem
the "random critical values" $c_N(X_1, \ldots, X_m, Y_1, \ldots, Y_n)$ converge in probability to $z_\alpha \sigma$
under $H_0$. Therefore the two tests are asymptotically equivalent under the null hypothesis. Furthermore, this equivalence remains under "contiguous alternatives" (for which
again $c_N(X_1, \ldots, X_m, Y_1, \ldots, Y_n) \to z_\alpha \sigma$ in probability; see Chapter 6), and hence the local asymptotic power functions as discussed in Chapter 14 are the same for the two sequences of
tests.
The preceding theorem also shows that the sequence of "critical values" CN (X I, ... , Xm,
YI, ... , Yn) remains bounded in probability under every alternative. Because ../NTN
(XI, ... , Xm, YI, ... , Yn)- ooifE/(XI) > Ef(YI), thepoweratany alternative with this
property converges to 1. Thus, permutation tests are an attractive alternative to both rank
and classical tests. Their main drawback is computational complexity. The dependence of
the null distribution on the observed values means that it cannot be tabulated and must be
computed for every new data set.
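For small samples the conditional null distribution can be enumerated exactly. The following sketch (not from the text; it uses the difference of sample means as the test statistic) computes a one-sided permutation p-value by brute force:

```python
import itertools
import statistics

def permutation_pvalue(x, y):
    """One-sided permutation p-value for T = mean(x) - mean(y).

    The conditional null distribution of T is its distribution over all
    reassignments of the pooled sample to groups of sizes m and n, so the
    p-value is the fraction of reassignments with T at least as large as
    the observed value."""
    pooled = x + y
    m = len(x)
    t_obs = statistics.mean(x) - statistics.mean(y)
    hits = total = 0
    for idx in itertools.combinations(range(len(pooled)), m):
        chosen = set(idx)
        xs = [pooled[i] for i in chosen]
        ys = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        hits += statistics.mean(xs) - statistics.mean(ys) >= t_obs - 1e-12
        total += 1
    return hits / total

p = permutation_pvalue([5, 6, 7, 8], [1, 2, 3, 4])
```

For $m = n = 4$ there are $\binom84 = 70$ reassignments and only the observed one attains the maximal value of $T$, so $p = 1/70$. For realistic sample sizes one samples reassignments at random instead of enumerating all of them, which is the computational burden referred to above.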
13.26 Theorem (Rank central limit theorem). Let $T_N = \sum c_{Ni}a_{N,R_{Ni}}$ be the simple linear rank statistic with coefficients and scores such that $\max_{1\le i\le N}|a_{Ni} - \bar a_N|/A_N \to 0$ and $\max_{1\le i\le N}|c_{Ni} - \bar c_N|/C_N \to 0$, and let the rank vector $R_N$ be uniformly distributed on the set of all $N!$ permutations of $\{1, 2, \ldots, N\}$. Then the sequence $(T_N - \mathrm{E}T_N)/\operatorname{sd} T_N$ converges in distribution to a standard normal distribution if and only if, for every $\varepsilon > 0$,
Notes
The classical reference on rank statistics is the book by Hájek and Šidák [71], which still
makes wonderful reading and gives extensive references. Its treatment of rank statistics for
nonidentically distributed observations is limited to contiguous alternatives, as in the first
sections of this chapter. The papers [43] and [68] remedied this, shortly after the publication
of the book. Section 13.4 reports only a few of the results from these papers, which, as
does the book, use the projection method. An alternative approach to obtaining the limit
distribution of rank statistics, initiated by Chernoff and Savage in the late 1950s and refined
many times, is to write them as functions of empirical measures and next apply the von
Mises method. We discuss examples of this approach in Chapter 20. See [134] for a more
comprehensive treatment and further references.
PROBLEMS
1. This problem asks one to give a precise meaning to the notion of a locally most powerful test. Let $T_N$ be a rank statistic based on the "locally most powerful scores." Let $\alpha = P_0(T_N > c_\alpha)$ for a given number $c_\alpha$. (Then $\alpha$ is a natural level of the test statistic, a level that is attained without randomization.) Then there exists $\varepsilon > 0$ such that the test that rejects the null hypothesis if $T_N > c_\alpha$ is most powerful within the class of all rank tests at level $\alpha$, uniformly in the alternatives $\theta \in (0, \varepsilon)$.
(i) Prove the statement.
(ii) Can the statement be extended to arbitrary levels?
2. Find the asymptotic distribution of the median test statistic under the null hypothesis that the two
samples are identically distributed and continuous.
3. Show that $\sqrt n$ times Spearman's rank correlation coefficient is asymptotically standard normal.
4. Find the scores for a locally most powerful two-sample rank test for location for the Laplace
family of densities.
Problems 191
5. Find the scores for a locally most powerful two-sample rank test for location for the Cauchy
family of densities.
6. For which density is the Wilcoxon signed rank statistic locally most powerful?
7. Show that Spearman's rank correlation coefficient is a linear combination of Kendall's $\tau$ and the $U$-statistic with (asymmetric) kernel $h(x, y, z) = \operatorname{sign}(x_1 - y_1)\operatorname{sign}(x_2 - z_2)$. This decomposition yields another method to prove the asymptotic normality.
8. The symmetrized Siegel-Tukey test is a two-sample test with score vector of the form aN =
(1, 3, 5, ... , 5, 3, 1). For which type of alternative hypothesis would you use this test?
9. For any $a_{Ni}$ given by (13.3), show that $\bar a_N = \int_0^1 \phi(u)\,du$.
14
Relative Efficiency of Tests
The quality of sequences of tests can be judged from their power at alternatives that become closer and closer to the null hypothesis. This motivates the study of local asymptotic power functions. The relative efficiency of two sequences of tests is the quotient of the numbers of observations needed with the two tests to obtain the same level and power. We discuss several types of asymptotic relative efficiencies.
The aim of this chapter is to compare tests asymptotically. We consider sequences of tests with power functions $\pi_n$ and $\tilde\pi_n$ and wish to decide which of the sequences is best as $n \to \infty$. Typically, the tests corresponding to a sequence $\pi_1, \pi_2, \ldots$ are of the same type. For instance, they are all based on a certain $U$-statistic or rank statistic, and only the number of observations changes with $n$. Otherwise the comparison would have little relevance.

14.1 Asymptotic Power Functions

Call a sequence of tests asymptotically of level $\alpha$ if $\limsup_{n\to\infty} \sup_{\theta\in\Theta_0} \pi_n(\theta) \le \alpha$. (An alternative definition is to drop the supremum and require only that $\limsup \pi_n(\theta) \le \alpha$ for every $\theta \in \Theta_0$.) A test with power function $\pi_n$ is better than a test with power function $\tilde\pi_n$ if both
$$\pi_n(\theta) \le \tilde\pi_n(\theta), \quad \theta \in \Theta_0, \qquad\text{and}\qquad \pi_n(\theta) \ge \tilde\pi_n(\theta), \quad \theta \in \Theta_1.$$
A first idea is to consider limiting power functions of the form
$$\pi(\theta) = \lim_{n\to\infty} \pi_n(\theta).$$
If this limit exists for all $\theta$, and the same is true for the competing tests $\tilde\pi_n$, then the sequence $\pi_n$ is better than the sequence $\tilde\pi_n$ if the limiting power function $\pi$ is better than the
limiting power function $\tilde\pi$. It turns out that this approach is too naive. The limiting power functions typically exist, but they are trivial and identical for all reasonable sequences of tests.
14.1 Example (Sign test). Suppose the observations $X_1,\ldots,X_n$ are a random sample from a distribution with unique median $\theta$. The null hypothesis $H_0: \theta = 0$ can be tested against the alternative $H_1: \theta > 0$ by means of the sign statistic $S_n = n^{-1}\sum_{i=1}^n 1\{X_i > 0\}$. If $F(x - \theta)$ is the distribution function of the observations, then the expectation and variance of $S_n$ are equal to $\mu(\theta) = 1 - F(-\theta)$ and $\sigma^2(\theta)/n = \bigl(1 - F(-\theta)\bigr)F(-\theta)/n$, respectively. By the normal approximation to the binomial distribution, the sequence $\sqrt n\bigl(S_n - \mu(\theta)\bigr)$ is asymptotically normal $N\bigl(0, \sigma^2(\theta)\bigr)$. Under the null hypothesis the mean and variance are equal to $\mu(0) = 1/2$ and $\sigma^2(0) = 1/4$, respectively, so that $\sqrt n(S_n - \tfrac12) \rightsquigarrow N(0, \tfrac14)$. The test that rejects the null hypothesis if $\sqrt n(S_n - \tfrac12)$ exceeds the critical value $\tfrac12 z_\alpha$ has power function
$$\pi_n(\theta) = 1 - \Phi\biggl(\frac{\tfrac12 z_\alpha - \sqrt n\bigl(F(0) - F(-\theta)\bigr)}{\sigma(\theta)}\biggr) + o(1).$$
Because $F(0) - F(-\theta) > 0$ for every $\theta > 0$, it follows that, for $\alpha = \alpha_n \to 0$ sufficiently slowly,
$$\pi_n(\theta) \to \begin{cases} 0, & \text{if } \theta = 0,\\ 1, & \text{if } \theta > 0.\end{cases}$$
The limit power function corresponds to the perfect test with all error probabilities equal to zero. □
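The convergence of the power to 1 at a fixed alternative can be illustrated by simulation. A small sketch (not from the text; normal errors, level $\alpha = 0.05$, and a fixed seed are assumptions of the illustration):

```python
import math
import random

def sign_test_power(theta, n, reps=2000, z_alpha=1.6449):
    """Monte Carlo power of the level-0.05 sign test when the observations
    are N(theta, 1): reject H0 if sqrt(n) * (S_n - 1/2) > z_alpha / 2."""
    rng = random.Random(0)  # fixed seed for a reproducible illustration
    rejections = 0
    for _ in range(reps):
        s_n = sum(rng.gauss(theta, 1) > 0 for _ in range(n)) / n
        rejections += math.sqrt(n) * (s_n - 0.5) > z_alpha / 2
    return rejections / reps

powers = [sign_test_power(0.5, n) for n in (10, 40, 160)]
```

With $\theta = 0.5$ the estimated powers increase towards 1 as $n$ grows through 10, 40, 160, while the rejection rate under $\theta = 0$ stays near the level.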
The example exhibits a sequence of tests whose (pointwise) limiting power function is
the perfect power function. This type of behavior is typical for all reasonable tests. The
point is that, with arbitrarily many observations, it should be possible to tell the null and
alternative hypotheses apart with complete accuracy. The power at every fixed alternative
should therefore converge to 1.
14.2 Definition. A sequence of tests with power functions $\theta \mapsto \pi_n(\theta)$ is asymptotically consistent at level $\alpha$ at (or against) the alternative $\theta$ if it is asymptotically of level $\alpha$ and $\pi_n(\theta) \to 1$. If a family of sequences of tests contains for every level $\alpha \in (0, 1)$ a sequence that is consistent against every alternative, then the corresponding tests are simply called consistent.
Consistency is an optimality criterion for tests, but because most sequences of tests are
consistent, it is too weak to be really useful. To make an informative comparison between
sequences of (consistent) tests, we shall study the performance of the tests in problems
that become harder as more observations become available. One way of making a testing
problem harder is to choose null and alternative hypotheses closer to each other. In this
section we fix the null hypothesis and consider the power at sequences of alternatives that
converge to the null hypothesis.
14.3 Example (Sign test, continued). Consider the power of the sign test at sequences of alternatives $\theta_n \downarrow 0$. Suppose that the null hypothesis $H_0: \theta = 0$ is rejected if $\sqrt n(S_n - \tfrac12) \ge \tfrac12 z_\alpha$. Extension of the argument of the preceding example yields
$$\pi_n(\theta_n) = 1 - \Phi\biggl(\frac{\tfrac12 z_\alpha - \sqrt n\bigl(F(0) - F(-\theta_n)\bigr)}{\sigma(\theta_n)}\biggr) + o(1).$$
Because $\sigma(0) = \tfrac12$, the levels $\pi_n(0)$ of the tests converge to $1 - \Phi(z_\alpha) = \alpha$. The asymptotic power at $\theta_n$ depends on the rate at which $\theta_n \to 0$. If $\theta_n$ converges to zero fast enough to ensure that $\sqrt n\bigl(F(0) - F(-\theta_n)\bigr) \to 0$, then the power $\pi_n(\theta_n)$ converges to $\alpha$: the sign test is not able to discriminate these alternatives from the null hypothesis. If $\theta_n$ converges to zero at a slow rate, then $\sqrt n\bigl(F(0) - F(-\theta_n)\bigr) \to \infty$, and the asymptotic power is equal to 1: these alternatives are too easy. The intermediate rates, which yield a nontrivial asymptotic power, appear to be of most interest. Suppose that the underlying distribution function $F$ is differentiable at zero with positive derivative $f(0) > 0$. Then
$$\sqrt n\bigl(F(0) - F(-\theta_n)\bigr) = \sqrt n\,\theta_n f(0) + o\bigl(\sqrt n\,\theta_n\bigr).$$
This is bounded away from zero and infinity if $\theta_n$ converges to zero at rate $\theta_n = O(n^{-1/2})$. For such rates the power $\pi_n(\theta_n)$ is asymptotically strictly between $\alpha$ and 1. In particular, for every $h$,
$$\pi_n\bigl(h/\sqrt n\bigr) \to 1 - \Phi\bigl(z_\alpha - 2f(0)h\bigr). \qquad\square$$
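The quality of the local approximation can be checked against the exact binomial power. The sketch below (not from the text) assumes standard normal errors, so that $f(0) = 1/\sqrt{2\pi}$, and compares the exact power at $\theta_n = h/\sqrt n$ with the limit $1 - \Phi\bigl(z_\alpha - 2f(0)h\bigr)$:

```python
import math

def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def exact_local_power(h, n, z_alpha=1.6449):
    """Exact power of the sign test at theta_n = h / sqrt(n) for N(theta, 1)
    observations: the number of positive signs is binomial with success
    probability p = Phi(h / sqrt(n))."""
    p = phi(h / math.sqrt(n))
    k_min = math.floor(n / 2 + z_alpha * math.sqrt(n) / 2) + 1
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

h, n = 1.0, 400
limit = 1 - phi(1.6449 - 2 * h / math.sqrt(2 * math.pi))  # 1 - Phi(z_a - 2 f(0) h)
```

For $h = 1$ and $n = 400$ the exact and limiting powers agree to about two decimal places, and at $h = 0$ the exact power is close to the level.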
In the preceding example only alternatives $\theta_n$ that converge to the null hypothesis at rate $O(1/\sqrt n)$ lead to a nontrivial asymptotic power. This is typical for parameters that depend "smoothly" on the underlying distribution. In this situation a reasonable method for asymptotic comparison of two sequences of tests for $H_0: \theta = 0$ versus $H_1: \theta > 0$ is to consider local limiting power functions, defined as
$$\pi(h) = \lim_{n\to\infty} \pi_n\bigl(h/\sqrt n\bigr).$$
Often $\mu(\theta)$ and $\sigma^2(\theta)$ can be taken to be the mean and the variance of $T_n$, but this is not necessary. Because the convergence (14.4), that is,
$$\sqrt n\,\frac{T_n - \mu(\theta_n)}{\sigma(\theta_n)} \rightsquigarrow_{\theta_n} N(0, 1), \tag{14.4}$$
is under a law indexed by $\theta_n$ that changes with $n$, it is not implied by
$$\sqrt n\,\frac{T_n - \mu(\theta)}{\sigma(\theta)} \rightsquigarrow_{\theta} N(0, 1), \qquad \text{every fixed } \theta. \tag{14.5}$$
For $\theta_n = h/\sqrt n$, the sequence $\sqrt n\bigl(\mu(\theta_n) - \mu(0)\bigr)$ converges to $h\mu'(0)$ if $\mu$ is differentiable at zero. If $\sigma(\theta_n) \to \sigma(0)$, then under (14.4)
$$\pi_n\bigl(h/\sqrt n\bigr) \to 1 - \Phi\Bigl(z_\alpha - h\,\frac{\mu'(0)}{\sigma(0)}\Bigr).$$
The limiting power function depends on the sequence of test statistics only through the quantity $\mu'(0)/\sigma(0)$. This is called the slope of the sequence of tests. Two sequences of tests can be asymptotically compared by simply comparing the sizes of their slopes. The bigger the slope, the better the test for $H_0: \theta = 0$ versus $H_1: \theta > 0$. The size of the slope depends on the rate $\mu'(0)$ of change of the asymptotic mean of the test statistics relative to their asymptotic dispersion $\sigma(0)$. A good quantitative measure of comparison is the square of the quotient of two slopes. This quantity is called the asymptotic relative efficiency and is discussed in Section 14.3.
If $\theta$ is the only unknown parameter in the problem, then the available tests can be ranked
in asymptotic quality simply by the value of their slopes. In many problems there are also
nuisance parameters (for instance the shape of a density), and the slope is a function of the
nuisance parameter rather than a number. This complicates the comparison considerably.
For every value of the nuisance parameter a different test may be best, and additional criteria
are needed to choose a particular test.
14.8 Example (Sign test). According to Example 14.3, the sign test has slope $2f(0)$. This can also be obtained from the preceding theorem, in which we can choose $\mu(\theta) = 1 - F(-\theta)$ and $\sigma^2(\theta) = \bigl(1 - F(-\theta)\bigr)F(-\theta)$. □
14.9 Example (t-test). Let $X_1,\ldots,X_n$ be a random sample from a distribution with mean $\theta$ and finite variance. The $t$-test rejects the null hypothesis for large values of $\sqrt n\,\bar X/S$. The sample variance $S^2$ converges in probability to the variance $\sigma^2$ of a single observation. The central limit theorem and Slutsky's lemma give
$$\sqrt n\Bigl(\frac{\bar X}{S} - \frac{\theta_n}{\sigma}\Bigr) \rightsquigarrow_{\theta_n} N(0, 1), \qquad \theta_n = O(1/\sqrt n).$$
Thus Theorem 14.7 applies with $\mu(\theta) = \theta/\sigma$ and $\sigma(\theta) = 1$. The slope of the $t$-test equals $1/\sigma$.† □
14.10 Example (Sign versus t-test). Let $X_1,\ldots,X_n$ be a random sample from a density $f(x - \theta)$, where $f$ is symmetric about zero. We shall compare the performance of the sign test and the $t$-test for testing the hypothesis $H_0: \theta = 0$ that the observations are symmetrically distributed about zero. Assume that the distribution with density $f$ has a unique median and a finite second moment.

It suffices to compare the slopes of the two tests. By the preceding examples these are $2f(0)$ and $\bigl(\int x^2 f(x)\,dx\bigr)^{-1/2}$, respectively. Clearly the outcome of the comparison depends on the shape $f$. It is interesting that the two slopes depend on the underlying shape in an almost orthogonal manner. The slope of the sign test depends only on the height of $f$ at zero; the slope of the $t$-test depends mainly on the tails of $f$. For the standard normal distribution the slopes are $\sqrt{2/\pi}$ and 1. The superiority of the $t$-test in this case is not surprising, because the $t$-test is uniformly most powerful for every $n$. For the Laplace distribution, the ordering is reversed: The slopes are 1 and $1/\sqrt 2$. The superiority of the sign test has much to do with the "unsmooth" character of the Laplace density at its mode.

The relative efficiency of the sign test versus the $t$-test is equal to
$$4f(0)^2 \int x^2 f(x)\,dx.$$
Table 14.1 summarizes these numbers for a selection of shapes. For the uniform distribution, the relative efficiency of the sign test with respect to the $t$-test equals 1/3. It can be shown that this is the minimal possible value over all densities with mode zero (problem 14.7). On the other hand, it is possible to construct distributions for which this relative efficiency is arbitrarily large, by shifting mass into the tails of the distribution. The sign test is "robust" against heavy tails, the $t$-test is not. □
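The entries of Table 14.1 can be reproduced from the two slopes. A minimal sketch (not from the text; the densities at zero and the variances of standardized shapes are entered by hand):

```python
import math

def sign_vs_t_efficiency(f0, variance):
    """Square of the quotient of the slopes 2 f(0) and 1 / sigma."""
    return 4 * variance * f0 ** 2

# density at zero and variance of the standardized shapes
shapes = {
    "logistic": (0.25, math.pi ** 2 / 3),      # f(x) = e^-x / (1 + e^-x)^2
    "normal": (1 / math.sqrt(2 * math.pi), 1.0),
    "laplace": (0.5, 2.0),                     # f(x) = e^-|x| / 2
    "uniform": (0.5, 1 / 3),                   # uniform on [-1, 1]
}
efficiencies = {name: sign_vs_t_efficiency(*fv) for name, fv in shapes.items()}
```

Since the efficiency $4\sigma^2 f(0)^2$ is invariant under a change of scale, any standardization of the shapes gives the same values: $\pi^2/12$, $2/\pi$, $2$, and $1/3$.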
The simplicity of comparing slopes is attractive on the one hand, but indicates the potential weakness of asymptotics on the other. For instance, the slope of the sign test was seen to be $2f(0)$, but it is clear that this value alone cannot always give an accurate indication
† Although (14.4) holds with this choice of $\mu$ and $\sigma$, it is not true that the sequence $\sqrt n\bigl(\bar X/S - \theta/\sigma\bigr)$ is asymptotically standard normal for every fixed $\theta$. Thus (14.5) is false for this choice of $\mu$ and $\sigma$. For fixed $\theta$ the contribution of $S - \sigma$ to the limit distribution cannot be neglected, but for our present purpose it can.
Table 14.1. Relative efficiency of the sign test versus the $t$-test.

  Distribution   Efficiency
  Logistic       $\pi^2/12$
  Normal         $2/\pi$
  Laplace        $2$
  Uniform        $1/3$
of the quality of the sign test. Consider a density that is basically a normal density, but a tiny proportion of $10^{-10}\%$ of its total mass is located under an extremely thin but enormously high peak at zero. The large value $f(0)$ would strongly favor the sign test. However, at moderate sample sizes the observations would not differ significantly from a sample from a normal distribution, so that the $t$-test is preferable. In this situation the asymptotics are only valid for unrealistically large sample sizes.
Even though asymptotic approximations should always be interpreted with care, in the
present situation there is actually little to worry about. Even for n = 20, the comparison of
slopes of the sign test and the t-test gives the right message for the standard distributions
listed in Table 14.1.
14.11 Example (Mann-Whitney test). Suppose we observe two independent random samples $X_1,\ldots,X_m$ and $Y_1,\ldots,Y_n$ from distributions $F(x)$ and $G(y - \theta)$, respectively. The base distributions $F$ and $G$ are fixed, and it is desired to test the null hypothesis $H_0: \theta = 0$ versus the alternative $H_1: \theta > 0$. Set $N = m + n$ and assume that $m/N \to \lambda \in (0, 1)$. Furthermore, assume that $G$ has a bounded density $g$.

The Mann-Whitney test rejects the null hypothesis for large values of $U = (mn)^{-1}\sum_i\sum_j 1\{X_i \le Y_j\}$. By the two-sample $U$-statistic theorem
$$U - \mathrm{E}_\theta U = \frac1m\sum_{i=1}^m\bigl(1 - G(X_i - \theta) - \mathrm{E}_\theta U\bigr) + \frac1n\sum_{j=1}^n\bigl(F(Y_j) - \mathrm{E}_\theta U\bigr) + o_P\Bigl(\frac1{\sqrt N}\Bigr).$$
This readily yields the asymptotic normality (14.5) for every fixed $\theta$, with
$$\mu(\theta) = P_\theta(X \le Y) = \int\bigl(1 - G(x - \theta)\bigr)\,dF(x), \qquad \sigma^2(\theta) = \frac{\operatorname{var} G(X_1 - \theta)}{\lambda} + \frac{\operatorname{var} F(Y_1)}{1 - \lambda}.$$
14.12 Example (Two-sample t-test). In the set-up of the preceding example suppose that the base distributions $F$ and $G$ have equal means and finite variances. Then $\theta = \mathrm{E}(Y - X)$
Table 14.2. Relative efficiency of the Mann-Whitney test versus the two-sample $t$-test.

  Distribution   Efficiency
  Logistic       $\pi^2/9$
  Normal         $3/\pi$
  Laplace        $3/2$
  Uniform        $1$
and the $t$-test rejects the null hypothesis $H_0: \theta = 0$ for large values of the statistic $(\bar Y - \bar X)/S$, where $S^2/N = S_X^2/m + S_Y^2/n$ is the unbiased estimator of $\operatorname{var}(\bar Y - \bar X)$. The sequence $S^2$ converges in probability to $\sigma^2 = \operatorname{var} X_1/\lambda + \operatorname{var} Y_1/(1 - \lambda)$. By Slutsky's lemma and the central limit theorem
$$\sqrt N\Bigl(\frac{\bar Y - \bar X}{S} - \frac{\theta_N}{\sigma}\Bigr) \rightsquigarrow_{\theta_N} N(0, 1), \qquad \theta_N = O(1/\sqrt N).$$
Thus (14.4) is satisfied and Theorem 14.7 applies with $\mu(\theta) = \theta/\sigma$ and $\sigma(\theta) = 1$. The slope of the $t$-test equals $\mu'(0)/\sigma(0) = 1/\sigma$. □
14.13 Example (t-test versus Mann-Whitney test). Suppose we observe two independent random samples $X_1,\ldots,X_m$ and $Y_1,\ldots,Y_n$ from distributions $F(x)$ and $G(x - \theta)$, respectively. The base distributions $F$ and $G$ are fixed and are assumed to have equal means and bounded densities. It is desired to test the null hypothesis $H_0: \theta = 0$ versus the alternative $H_1: \theta > 0$. Set $N = m + n$ and assume that $m/N \to \lambda \in (0, 1)$.

The slopes of the Mann-Whitney test and the $t$-test depend on the nuisance parameters $F$ and $G$. According to the preceding examples, in the case that $F = G$ possesses density $f$ and variance $\sigma^2$, the relative efficiency of the two sequences of tests equals
$$12\,\sigma^2\Bigl(\int f^2(x)\,dx\Bigr)^2.$$
Table 14.2 gives this quantity for a number of shapes. For the normal distribution it equals $3/\pi \approx 0.95$. This high efficiency makes the Mann-Whitney test a strong competitor to the $t$-test, even in situations in which the underlying distribution is thought to be approximately normal. □
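For $F = G$ with density $f$ and variance $\sigma^2$, the relative efficiency $12\sigma^2\bigl(\int f^2\bigr)^2$ of the Mann-Whitney test with respect to the $t$-test can be evaluated numerically. A sketch (not from the text; a simple trapezoidal rule stands in for exact integration):

```python
import math

def mw_vs_t_efficiency(density, variance, lo, hi, steps=20000):
    """12 * sigma^2 * (integral of f^2)^2 for F = G with density f;
    the integral is computed with the trapezoidal rule on [lo, hi]."""
    h = (hi - lo) / steps
    integral = sum((0.5 if i in (0, steps) else 1.0) * density(lo + i * h) ** 2
                   for i in range(steps + 1)) * h
    return 12 * variance * integral ** 2

normal = mw_vs_t_efficiency(
    lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi), 1.0, -10, 10)
laplace = mw_vs_t_efficiency(
    lambda x: 0.5 * math.exp(-abs(x)), 2.0, -30, 30)
logistic = mw_vs_t_efficiency(
    lambda x: math.exp(-x) / (1 + math.exp(-x)) ** 2, math.pi ** 2 / 3, -30, 30)
```

The computed values reproduce the entries $3/\pi$, $3/2$, and $\pi^2/9$ of Table 14.2.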
14.14 Example (Median test). In the two-sample set-up of Example 14.11, suppose that $F = G$ is a continuous distribution function with finite Fisher information for location $I_f$. The median test rejects the null hypothesis $H_0: \theta = 0$ for large values of the rank statistic $T_N = N^{-1}\sum_{i=m+1}^N 1\bigl\{R_{Ni} \le \tfrac12(N+1)\bigr\}$. By the rank central limit theorem, Theorem 13.5, under the null hypothesis,

Under the null hypothesis the sequence of variables on the right side is asymptotically normal with mean zero and variance $\sigma^2(0) = \lambda(1-\lambda)/4$. By Theorem 7.2, for every $\theta_N = h/\sqrt N$,

By the multivariate central limit theorem, the linear approximations on the right sides of the two preceding displays are jointly asymptotically normal. By Slutsky's lemma the same is true for the left sides. Consequently, by Le Cam's third lemma the sequence $\sqrt N\bigl(T_N - n/(2N)\bigr)$ converges under the alternatives $\theta_N = h/\sqrt N$ in distribution to a normal distribution with variance $\sigma^2(0)$ and mean the asymptotic covariance $\tau(h)$ of the linear approximations. This is given by
14.2 Consistency
After noting that the power at fixed alternatives typically tends to 1, we focused attention
on the performance of tests at alternatives converging to the null hypothesis. The comparison of local power functions is only of interest if the sequences of tests are consistent at
14.16 Lemma. Let $\mu$ and $\sigma$ be functions of $\theta$ such that (14.4) holds for every sequence $\theta_n = h/\sqrt n$. Suppose that $\mu$ is differentiable and that $\sigma$ is continuous at zero, with $\mu'(0) > 0$ and $\sigma(0) > 0$. Suppose that the tests that reject the null hypothesis for large values of $T_n$ possess nondecreasing power functions $\theta \mapsto \pi_n(\theta)$. Then this family of tests is consistent against every alternative $\theta > 0$. Moreover, if $\pi_n(0) \to \alpha$, then $\pi_n(\theta_n) \to \alpha$ or $\pi_n(\theta_n) \to 1$ when $\sqrt n\,\theta_n \to 0$ or $\sqrt n\,\theta_n \to \infty$, respectively.
Proofs. For the first lemma, suppose that the tests reject the null hypothesis if $T_n$ exceeds the critical value $c_n$. By assumption, the probability under $\theta = 0$ that $T_n$ is outside the interval $\bigl(\mu(0) - \varepsilon, \mu(0) + \varepsilon\bigr)$ converges to zero as $n \to \infty$, for every fixed $\varepsilon > 0$. If the asymptotic level $\lim P_0(T_n > c_n)$ is positive, then it follows that $c_n < \mu(0) + \varepsilon$ eventually. On the other hand, under $\theta$ the probability that $T_n$ is in $\bigl(\mu(\theta) - \varepsilon, \mu(\theta) + \varepsilon\bigr)$ converges to 1. For sufficiently small $\varepsilon$ and $\mu(\theta) > \mu(0)$, this interval is to the right of $\mu(0) + \varepsilon$. Thus for sufficiently large $n$, the power $P_\theta(T_n > c_n)$ can be bounded below by $P_\theta\bigl(T_n \in (\mu(\theta) - \varepsilon, \mu(\theta) + \varepsilon)\bigr) \to 1$.

For the proof of the second lemma, first note that by Theorem 14.7 the sequence of local power functions $\pi_n(h/\sqrt n)$ converges to $\pi(h) = 1 - \Phi\bigl(z_\alpha - h\mu'(0)/\sigma(0)\bigr)$, for every $h$, if the asymptotic level is $\alpha$. If $\sqrt n\,\theta_n \to 0$, then eventually $\theta_n < h/\sqrt n$ for every given $h > 0$. By the monotonicity of the power functions, $\pi_n(\theta_n) \le \pi_n(h/\sqrt n)$ for sufficiently large $n$. Thus $\limsup \pi_n(\theta_n) \le \pi(h)$ for every $h > 0$. For $h \downarrow 0$ the right side converges to $\pi(0) = \alpha$. Combination with the inequality $\pi_n(\theta_n) \ge \pi_n(0) \to \alpha$ gives $\pi_n(\theta_n) \to \alpha$. The case that $\sqrt n\,\theta_n \to \infty$ can be handled similarly. Finally, the power $\pi_n(\theta)$ at fixed alternatives is bounded below by $\pi_n(\theta_n)$ eventually, for every sequence $\theta_n \downarrow 0$. Thus $\pi_n(\theta) \to 1$, and the sequence of tests is consistent at $\theta$. ∎
The following examples show that the $t$-test and Mann-Whitney test are both consistent against large sets of alternatives, albeit not exactly the same sets. They are both tests to compare the locations of two samples, but the pertaining definitions of "location" are not the same. The $t$-test can be considered a test to detect a difference in mean; the Mann-Whitney test is designed to find a difference of $P(X \le Y)$ from its value 1/2 under the null hypothesis. This evaluation is justified by the following examples and is further underscored by the consideration of asymptotic efficiency in nonparametric models. It is shown in Section 25.6 that the tests are asymptotically efficient for testing the parameters $\mathrm{E}Y - \mathrm{E}X$ or $P(X \le Y)$ if the underlying distributions $F$ and $G$ are completely unknown.
14.17 Example (t-test). The two-sample $t$-statistic $(\bar Y - \bar X)/S$ converges in probability to $\mathrm{E}(Y - X)/\sigma$, where $\sigma^2 = \lim \operatorname{var} \sqrt N(\bar Y - \bar X)$. If the null hypothesis postulates that $\mathrm{E}Y = \mathrm{E}X$, then the test that rejects the null hypothesis for large values of the $t$-statistic is consistent against every alternative for which $\mathrm{E}Y > \mathrm{E}X$. □
14.3 Asymptotic Relative Efficiency
If two sequences of tests are available, then we prefer the sequence for which the numbers $n_\nu$ are smallest. Suppose that $n_{\nu,1}$ and $n_{\nu,2}$ observations are needed for two given sequences of tests. Then, if it exists, the limit
$$\lim_{\nu\to\infty} \frac{n_{\nu,2}}{n_{\nu,1}}$$
is called the (asymptotic) relative efficiency or Pitman efficiency of the first with respect to the second sequence of tests. A relative efficiency larger than 1 indicates that fewer observations are needed with the first sequence of tests, which may then be considered the better one.

In principle, the relative efficiency may depend on $\alpha$, $\gamma$, and the sequence of alternatives $\theta_\nu$. The concept is mostly of interest if the relative efficiency is the same for all possible choices of these parameters. This is often the case. In particular, in the situations considered previously, the relative efficiency turns out to be the square of the quotient of the slopes.
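The relationship between sample sizes and slopes can be made concrete with the local normal approximation to the power: a test with slope $s$ needs roughly $n \approx \bigl((z_\alpha + \Phi^{-1}(\gamma))/(s\theta)\bigr)^2$ observations for power $\gamma$ at level $\alpha$ against $\theta$. The following sketch (hypothetical slopes 0.8 and 1.0) shows that the quotient of the required sample sizes is the squared quotient of the slopes, whatever $\alpha$ and $\gamma$:

```python
from statistics import NormalDist

def required_n(slope, theta, alpha=0.05, gamma=0.9):
    """n solving 1 - Phi(z_alpha - sqrt(n) * slope * theta) = gamma in the
    local normal approximation to the power function."""
    nd = NormalDist()
    return ((nd.inv_cdf(1 - alpha) + nd.inv_cdf(gamma)) / (slope * theta)) ** 2

slope1, slope2, theta = 0.8, 1.0, 0.05
efficiency = required_n(slope2, theta) / required_n(slope1, theta)  # n2 / n1
```

The quotient equals $(0.8/1.0)^2 = 0.64$ regardless of the chosen level and power, in line with the theorem below.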
14.19 Theorem. Consider statistical models $(P_{n,\theta}: \theta \ge 0)$ such that $\|P_{n,\theta} - P_{n,0}\| \to 0$ as $\theta \to 0$, for every $n$. Let $T_{n,1}$ and $T_{n,2}$ be sequences of statistics that satisfy (14.4) for every sequence $\theta_n \downarrow 0$ and functions $\mu_i$ and $\sigma_i$ such that $\mu_i$ is differentiable at zero and $\sigma_i$ is continuous at zero, with $\mu_i'(0) > 0$ and $\sigma_i(0) > 0$. Then the relative efficiency of the tests that reject the null hypothesis $H_0: \theta = 0$ for large values of $T_{n,i}$ is equal to
$$\biggl(\frac{\mu_1'(0)/\sigma_1(0)}{\mu_2'(0)/\sigma_2(0)}\biggr)^2,$$
for every sequence of alternatives $\theta_\nu \downarrow 0$, independently of $\alpha > 0$ and $\gamma \in (\alpha, 1)$. If the power functions of the tests based on $T_{n,i}$ are nondecreasing for every $n$, then the assumption of asymptotic normality of $T_{n,i}$ can be relaxed to asymptotic normality under every sequence $\theta_n = O(1/\sqrt n)$ only.
Proof. Fix $\alpha$ and $\gamma$ as in the introduction and, given alternatives $\theta_\nu \downarrow 0$, let $n_{\nu,i}$ observations be used with each of the two tests. The assumption that $\|P_{n,\theta_\nu} - P_{n,0}\| \to 0$ as $\nu \to \infty$ for each fixed $n$ forces $n_{\nu,i} \to \infty$. Indeed, the sum of the probabilities of the errors of the first and second kind of the test with critical region $K_n$ equals

the relative efficiency of the two sequences of tests equals
14.4 Other Relative Efficiencies

The device to let the alternatives $\theta_\nu$ tend to the null hypothesis was introduced to make the testing problems harder and harder, so that the required numbers of observations tend to infinity, and the comparison becomes an asymptotic one. There are other possibilities that can serve the same end. The testing problem is harder as $\alpha$ is smaller, as $\gamma$ is larger, and (typically) as $\theta$ is closer to the null hypothesis. Thus, we could also let $\alpha$ tend to zero, or $\gamma$ tend to one, keeping the other parameters fixed, or even let two or all three of the parameters vary. For each possible method we could define the relative efficiency of two sequences of tests as the limit of the quotient of the minimal numbers of observations that are needed.
Most of these possibilities have been studied in the literature. Next to the Pitman efficiency, the most popular efficiency measure appears to be the Bahadur efficiency, which is defined as
$$\lim_{\nu\to\infty} \frac{n_2(\alpha_\nu, \gamma, \theta)}{n_1(\alpha_\nu, \gamma, \theta)}.$$
Here $\alpha_\nu$ tends to zero, but $\gamma$ and $\theta$ are fixed. Typically, the Bahadur efficiency depends on $\theta$, but not on $\gamma$, and not on the particular sequence $\alpha_\nu \downarrow 0$ that is used.
Whereas the calculation of Pitman efficiencies is most often based on distributional limit theorems, Bahadur efficiencies are derived from large deviations results. The reason is that the probabilities of errors of the first or second kind for testing a fixed null hypothesis against a fixed alternative usually tend to zero at an exponential speed. Large deviations theorems quantify this speed. Suppose that the null hypothesis $H_0: \theta = 0$ is rejected for large values of a test statistic $T_n$, and that
$$-\frac2n \log P_0(T_n \ge t) \to e(t), \qquad \text{every } t, \tag{14.20}$$
$$T_n \xrightarrow{\ P_\theta\ } \mu(\theta). \tag{14.21}$$
The first result is a large-deviation-type result, and the second a "law of large numbers." The observed significance level of the test is defined as $P_0(T_n \ge t)\big|_{t=T_n}$. Under the null hypothesis, this random variable is uniformly distributed if $T_n$ possesses a continuous distribution function. For a fixed alternative $\theta$, it typically converges to zero at an exponential rate. For instance, under the preceding conditions, if $e$ is continuous at $\mu(\theta)$, then (because $e$ is necessarily monotone) it is immediate that
$$-\frac2n \log P_0(T_n \ge t)\Big|_{t=T_n} \xrightarrow{\ P_\theta\ } e\bigl(\mu(\theta)\bigr).$$
The quantity $e\bigl(\mu(\theta)\bigr)$ is called the Bahadur slope of the test (or rather the limit in probability of the left side, if it exists). The quotient of the slopes of two sequences of test statistics gives the Bahadur relative efficiency.
14.22 Theorem. Let $T_{n,1}$ and $T_{n,2}$ be sequences of statistics in statistical models $(P_{n,0}, P_{n,\theta})$ that satisfy (14.20) and (14.21) for functions $e_i$ and numbers $\mu_i(\theta)$ such that $e_i$ is continuous at $\mu_i(\theta)$. Then the Bahadur relative efficiency of the sequences of tests that reject for large values of $T_{n,i}$ is equal to $e_1\bigl(\mu_1(\theta)\bigr)/e_2\bigl(\mu_2(\theta)\bigr)$, for every $\alpha_\nu \downarrow 0$ and every $1 > \gamma > \sup_n P_{n,\theta}(P_{n,0} = 0)$.
Proof. For simplicity of notation, we drop the index $i \in \{1, 2\}$ and write $n_\nu$ for the minimal numbers of observations needed to obtain level $\alpha_\nu$ and power $\gamma$ with the test statistics $T_n$. The sample sizes $n_\nu$ necessarily converge to $\infty$ as $\nu \to \infty$. If not, then there would exist a fixed value $n$ and a (sub)sequence of tests with levels tending to 0 and powers at least $\gamma$. However, for any fixed $n$, and any sequence of measurable sets $K_m$ with $P_{n,0}(K_m) \to 0$ as $m \to \infty$, the probabilities $P_{n,\theta}(K_m) = P_{n,\theta}\bigl(K_m \cap \{P_{n,0} = 0\}\bigr) + o(1)$ are eventually strictly smaller than $\gamma$, by assumption.

The most powerful level-$\alpha_\nu$ test that rejects for large values of $T_n$ has critical region $\{T_n \ge c_n\}$ or $\{T_n > c_n\}$ for $c_n = \inf\{c: P_0(T_n \ge c) \le \alpha_\nu\}$, where we use $\ge$ if $P_0(T_n \ge c_n) \le \alpha_\nu$ and $>$ otherwise. Equivalently, with the notation $L_n = P_0(T_n \ge t)\big|_{t=T_n}$, this is the test with critical region $\{L_n \le \alpha_\nu\}$. By the definition of $n_\nu$ we conclude that
$$P_\theta\Bigl(-\frac2n \log L_n \ge -\frac2n \log\alpha_\nu\Bigr)\ \begin{cases} \ge \gamma & \text{for } n = n_\nu,\\ < \gamma & \text{for } n = n_\nu - 1.\end{cases}$$
By (14.20) and (14.21), the random variable inside the probability converges in probability to the number $e\bigl(\mu(\theta)\bigr)$ as $n \to \infty$. Thus, the probability converges to 0 or 1 if $-(2/n)\log\alpha_\nu$ is asymptotically strictly bigger or smaller than $e\bigl(\mu(\theta)\bigr)$, respectively. Conclude that
$$\limsup_{\nu\to\infty} -\frac{2}{n_\nu}\log\alpha_\nu \le e\bigl(\mu(\theta)\bigr), \qquad \liminf_{\nu\to\infty} -\frac{2}{n_\nu - 1}\log\alpha_\nu \ge e\bigl(\mu(\theta)\bigr).$$
Combined, this yields the asymptotic equivalence $n_\nu \sim -2\log\alpha_\nu / e\bigl(\mu(\theta)\bigr)$. Applying this for both $n_{\nu,1}$ and $n_{\nu,2}$ and taking the quotient, we obtain the theorem. ∎
Bahadur and Pitman efficiencies do not always yield the same ordering of sequences of tests. In numerical comparisons, the Pitman efficiencies appear to be more relevant for moderate sample sizes. This is explained by their method of calculation. By the preceding theorem, Bahadur efficiencies follow from a large deviations result under the null hypothesis and a law of large numbers under the alternative. A law of large numbers is of less accuracy than a distributional limit result. Furthermore, large deviation results, while mathematically interesting, often yield poor approximations for the probabilities of interest. For instance, condition (14.20) shows that $P_0(T_n \ge t) = e^{-\frac12 n e(t)}e^{o(n)}$. Nothing guarantees that the term $e^{o(n)}$ is close to 1.
On the other hand, often the Bahadur efficiencies as a function of $\theta$ are more informative than Pitman efficiencies. The Pitman slopes are obtained under the condition that the sequence $\sqrt n\bigl(T_n - \mu(\theta)\bigr)$ is asymptotically normal with mean zero and variance $\sigma^2(\theta)$. Suppose, for the present argument, that $T_n$ is normally distributed for every finite $n$, with the parameters $\mu(\theta)$ and $\sigma^2(\theta)/n$. Then, because $1 - \Phi(t) \sim \phi(t)/t$ as $t \to \infty$,
$$-\frac2n \log P_0\bigl(T_n \ge \mu(0) + t\bigr) = -\frac2n \log\biggl(1 - \Phi\Bigl(\frac{t\sqrt n}{\sigma(0)}\Bigr)\biggr) \to \frac{t^2}{\sigma^2(0)}, \qquad \text{every } t.$$
The Bahadur slope would be equal to $\bigl(\mu(\theta) - \mu(0)\bigr)^2/\sigma^2(0)$. For $\theta \to 0$, this is approximately equal to $\theta^2$ times the square of the Pitman slope $\mu'(0)^2/\sigma^2(0)$. Consequently, the limit of the Bahadur efficiencies as $\theta \to 0$ would yield the Pitman efficiency.
Now, the preceding argument is completely false if $T_n$ is only approximately normally distributed: Departures from normality that are negligible in the sense of weak convergence need not be so for large-deviation probabilities. The difference between the "approximate Bahadur slopes" just obtained and the true slopes is often substantial. However, the argument tends to be "more correct" as $t$ approaches $\mu(0)$, and the conclusion that limiting Bahadur efficiencies are equal to Pitman efficiencies is often correct.†
The main tool needed to evaluate Bahadur efficiencies is the large-deviation result (14.20). For averages $T_n$, this follows from the Cramér-Chernoff theorem, which can be thought of as the analogue of the central limit theorem for large deviations. It is a refinement of the weak law of large numbers that yields the exponential rate at which probabilities of deviations from the mean converge to zero.

The cumulant generating function of a random variable $Y$ is the function $u \mapsto K(u) = \log \mathrm{E}e^{uY}$. If we allow the value $\infty$, then this is well defined for every $u \in \mathbb{R}$. The set of $u$ such that $K(u)$ is finite is an interval that may or may not contain its boundary points and may be just the point $\{0\}$.

14.23 Proposition (Cramér-Chernoff). Let $\bar Y_n$ be the average of i.i.d. random variables $Y_1, \ldots, Y_n$ with cumulant generating function $K$. Then
$$\frac1n \log P(\bar Y_n \ge t) \to \inf_{u \ge 0}\bigl(K(u) - tu\bigr).$$
Proof. The cumulant generating function of the variables Y; - t is equal to u ~--+ K (u) - ut.
Therefore, we can restrict ourselves to the case t = 0. The proof consists of separate upper
and lower bounds on the probabilities P(Y 2: 0).
The upper bound is easy and is valid for every n. By Markov's inequality, for every
u 2:0,
Take logarithms, divide by n, and take the infimum over u 2: 0 to find one half of the
proposition.
For the proof of the lower bound, first consider the cases that Yᵢ is nonnegative or nonpositive. If P(Y₁ < 0) = 0, then the function u ↦ K(u) is monotonely increasing on ℝ and its infimum on u ≥ 0 is equal to 0 (attained at u = 0); this is equal to n⁻¹ log P(Ȳn ≥ 0) for every n. Second, if P(Y₁ > 0) = 0, then the function u ↦ K(u) is monotonely decreasing on ℝ with K(∞) = log P(Y₁ = 0); this is equal to n⁻¹ log P(Ȳn ≥ 0) for every n. Thus, the theorem is valid in both cases, and we may exclude them from now on.
First, assume that K(u) is finite for every u ∈ ℝ. Then the function u ↦ K(u) is analytic on ℝ, and, by differentiating under the expectation, we see that K′(0) = EY₁. Because Y₁ takes both negative and positive values, K(u) → ∞ as u → ±∞. Thus, the infimum of the function u ↦ K(u) over u ∈ ℝ is attained at a point u₀ such that K′(u₀) = 0.
The case that u₀ < 0 is trivial, but requires an argument. By the convexity of the function u ↦ K(u), K is nondecreasing on [u₀, ∞). If u₀ < 0, then it attains its minimum value over u ≥ 0 at u = 0, which is K(0) = 0. Furthermore, in this case EY₁ = K′(0) > K′(u₀) = 0 (strict inequality under our restrictions, for instance because K″(0) = var Y₁ > 0), and hence P(Ȳn ≥ 0) → 1 by the law of large numbers. Thus, the limit of the left side of the proposition (with t = 0) is 0 as well.
For u₀ ≥ 0, let Z₁, Z₂, … be i.i.d. random variables with the distribution given by

$$dP_{Z_1}(z) = e^{u_0 z - K(u_0)}\, dP_{Y_1}(z).$$

Then Z₁ has cumulant generating function u ↦ K(u₀ + u) − K(u₀), and, as before, its mean can be found by differentiating this function at u = 0: EZ₁ = K′(u₀) = 0. For every ε > 0,

$$P(\bar Y_n \ge 0) = E\,1\{\bar Z_n \ge 0\}\, e^{-u_0 n \bar Z_n}\, e^{nK(u_0)} \ge e^{-u_0 n\varepsilon}\, e^{nK(u_0)}\, P(0 \le \bar Z_n < \varepsilon).$$

By the central limit theorem, P(0 ≤ Z̄n < ε) → ½, so that the lim inf of n⁻¹ log P(Ȳn ≥ 0) is at least −u₀ε + K(u₀). Let ε ↓ 0 to see that the proposition (with t = 0) is valid whenever K is finite on all of ℝ, because K(u₀) = inf_{u≥0} K(u) for u₀ ≥ 0.
by the preceding argument applied to the truncated variables. Let s be the limit of the right side as M → ∞, and let A_M be the set {u ≥ 0 : K_M(u) ≤ s}. Then the sets A_M are nonempty and compact for sufficiently large M (as soon as K_M(u) → ∞ as u → ±∞), with A₁ ⊃ A₂ ⊃ ⋯, whence ∩A_M is nonempty as well. Because K_M converges pointwise to K as M → ∞, any point u₁ ∈ ∩A_M satisfies K(u₁) = lim K_M(u₁) ≤ s. Conclude that s is at least the right side of the proposition (with t = 0). ∎
14.24 Example (Sign statistic). The cumulant generating function of a variable Y that is −1 and 1, each with probability ½, is equal to K(u) = log cosh u. Its derivative is K′(u) = tanh u, and hence the infimum of K(u) − tu over u ∈ ℝ is attained for u = arctanh t. By the Cramér–Chernoff theorem, for 0 < t < 1,

$$-\frac{2}{n}\log P(\bar Y_n \ge t) \to e(t) := -2\log\cosh(\operatorname{arctanh} t) + 2t\operatorname{arctanh} t.$$
We can apply this result to find the Bahadur slope of the sign statistic Tn = n⁻¹ Σᵢ₌₁ⁿ sign(Xᵢ). If the null distribution of the random variables X₁, …, Xn is continuous and symmetric about zero, then (14.20) is valid with e(t) as in the preceding display and with μ(θ) = E_θ sign(X₁). Figure 14.2 shows the slopes of the sign statistic and the sample mean for testing the location of the Laplace distribution. The local optimality of the sign statistic is reflected in the Bahadur slopes, but for detecting large differences of location the mean is better than the sign statistic. However, it should be noted that the power of the sign test in this range is so close to 1 that improvement may be irrelevant; for example, the power is 0.999 at level 0.007 for n = 25 at θ = 2. □
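The closed form of e(t) can be checked numerically. The following sketch (an illustration, not from the text) minimizes K(u) − tu over a grid for K(u) = log cosh u and compares −2 times the infimum with the displayed formula for e(t).

```python
import math

def sign_slope(t, grid=300_000, umax=3.0):
    """Evaluate e(t) = -2 inf_{u>=0} (K(u) - t*u) on a grid,
    for K(u) = log cosh u, the cgf of a random sign (+1/-1 w.p. 1/2)."""
    best = min(math.log(math.cosh(u)) - t * u
               for u in (umax * j / grid for j in range(grid + 1)))
    return -2 * best

t = 0.5
closed_form = -2 * math.log(math.cosh(math.atanh(t))) + 2 * t * math.atanh(t)
print(sign_slope(t), closed_form)
```

The grid minimum agrees with the closed form, since the minimizing point arctanh t ≈ 0.55 lies well inside the grid.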
14.4 Other Relative Efficiencies 207
Figure 14.2. Bahadur slopes of the sign statistic (solid line) and the sample mean (dotted line) for testing that a random sample from the Laplace distribution has mean zero versus the alternative that the mean is θ, as a function of θ.
14.25 Example (Student statistic). Suppose that X₁, …, Xn are a random sample from a normal distribution with mean μ and variance σ². We shall consider σ known and compare the slopes of the sample mean and the Student statistic X̄n/Sn for testing H₀: μ = 0.
The cumulant generating function of the normal distribution is equal to K(u) = uμ + ½u²σ². By the Cramér–Chernoff theorem, for t > 0,

$$-\frac{2}{n}\log P_0(\bar X_n \ge t) \to e(t) := \frac{t^2}{\sigma^2}.$$

Thus, the Bahadur slope of the sample mean is equal to μ²/σ², for every μ > 0.
Under the null hypothesis, the statistic √n X̄n/Sn possesses the t-distribution with (n − 1) degrees of freedom. Thus, for a random sample Z₀, Z₁, … of standard normal variables, for every t > 0,

$$P_0\Bigl(\sqrt{\frac{n}{n-1}}\,\frac{\bar X_n}{S_n} \ge t\Bigr) = \frac12 P\Bigl(\frac{t_{n-1}^2}{n-1} \ge t^2\Bigr) = \frac12 P\Bigl(Z_0^2 - t^2\sum_{i=1}^{n-1} Z_i^2 \ge 0\Bigr).$$

This probability is not of the same form as in the Cramér–Chernoff theorem, but it concerns almost an average, and we can obtain the large deviation probabilities from the cumulant generating function in an analogous way. The cumulant generating function of a square of a standard normal variable is equal to u ↦ −½ log(1 − 2u), and hence the cumulant generating function of the variable Z₀² − t² Σᵢ₌₁ⁿ⁻¹ Zᵢ² is equal to

$$u \mapsto -\tfrac12\log(1-2u) - \tfrac{n-1}{2}\log(1+2t^2u).$$

Minimizing this over u ≥ 0, dividing by n, and arguing as before, we find

$$-\frac{2}{n}\log P_0\Bigl(\sqrt{\frac{n}{n-1}}\,\frac{\bar X_n}{S_n} \ge t\Bigr) \to e(t) = \log(t^2+1).$$

Because the statistic √(n/(n−1)) X̄n/Sn converges in probability to μ/σ, the Bahadur slope of the Student statistic is log(1 + μ²/σ²), which is smaller than the slope μ²/σ² of the sample mean. □
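The convergence of the normalized infimum of this cumulant generating function to −½ log(1 + t²) can be illustrated numerically. The sketch below (an illustration, not from the text) minimizes the cumulant generating function of Z₀² − t² Σ Zᵢ² over a grid for two sample sizes.

```python
import math

def student_rate(n, t, grid=200_000):
    """-2/n times the infimum over 0 <= u < 1/2 of the cumulant generating
    function of Z_0^2 - t^2 * sum of (n-1) squared standard normals."""
    best = min(-0.5 * math.log(1 - 2 * u)
               - 0.5 * (n - 1) * math.log(1 + 2 * t * t * u)
               for u in (0.5 * j / grid for j in range(grid)))
    return -2 * best / n

t = 0.5
limit = math.log(1 + t * t)
v400, v4000 = student_rate(400, t), student_rate(4000, t)
print(v400, v4000, limit)
```

The values increase with n toward log(1 + t²), reflecting that the per-n infimum approaches the limiting rate from below.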
For statistics that are not means, the Cramér–Chernoff theorem is not applicable, and we need other methods to compute the Bahadur efficiencies. An important approach applies to functions of means and is based on more general versions of Cramér's theorem. A first generalization asserts that, for certain sets B, not necessarily of the form [t, ∞),

$$\frac{1}{n}\log P(\bar Y_n \in B) \to -\inf_{y\in B} I(y), \qquad I(y) = \sup_u\,\bigl(uy - K(u)\bigr).$$

For a given statistic of the form φ(Ȳn), the large deviation probabilities of interest P(φ(Ȳn) ≥ t) can be written in the form P(Ȳn ∈ B_t) for the inverse images B_t = φ⁻¹[t, ∞). If B_t is an eligible set in the preceding display, then the desired large deviations result follows, although we shall still have to evaluate the repeated "inf sup" on the right side. Now, according to Cramér's theorem, the display is valid for every set such that the right side does not change if B is replaced by its interior or its closure. In particular, if φ is continuous, then B_t is closed and its interior B̊_t contains the set φ⁻¹(t, ∞). Then we obtain a large deviations result if the difference set φ⁻¹{t} is "small" in that it does not play a role when evaluating the right side of the display.
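The Legendre transform I(y) = sup_u (uy − K(u)) is easy to evaluate numerically. For the standard normal distribution, K(u) = u²/2 gives I(y) = y²/2, so that e(t) = 2I(t) = t² reproduces the slope t²/σ² (with σ = 1) of the sample mean found in Example 14.25. A small sketch (illustration only):

```python
import math

def rate(y, K, grid=200_000, umax=5.0):
    """Legendre transform I(y) = sup_u (u*y - K(u)), evaluated on a grid."""
    return max(u * y - K(u)
               for u in (-umax + 2 * umax * j / grid for j in range(grid + 1)))

K_normal = lambda u: 0.5 * u * u      # cgf of N(0, 1)
y = 0.8
print(rate(y, K_normal), 0.5 * y * y)
```

The supremum is attained at u = y, which lies inside the grid, so the numerical value matches y²/2 closely.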
Transforming a univariate mean Ȳn into a statistic φ(Ȳn) can be of interest (for example, to study the two-sided test statistics |Ȳn|), but the real promise of this approach is in its applications to multivariate and infinite-dimensional means. Cramér's theorem has been generalized to these situations. General large deviation theorems can best be formulated as separate upper and lower bounds. A sequence of random maps Xn: Ω → 𝔻 from a probability space (Ω, 𝒰, P) into a topological space 𝔻 is said to satisfy the large deviation principle with rate function I if, for every closed set F and for every open set G,

$$\limsup_{n\to\infty}\frac1n \log P^*(X_n \in F) \le -\inf_{y\in F} I(y), \qquad \liminf_{n\to\infty}\frac1n \log P_*(X_n \in G) \ge -\inf_{y\in G} I(y).$$

The rate function I: 𝔻 → [0, ∞] is assumed to be lower semicontinuous and is called a good rate function if the sublevel sets {y : I(y) ≤ M} are compact, for every M ∈ ℝ. The inner and outer probabilities that Xn belongs to a general set B are sandwiched between the probabilities that it belongs to the interior B̊ and the closure B̄. Thus, we obtain a large deviation result with equality for every set B such that inf{I(y): y ∈ B̊} = inf{I(y): y ∈ B̄}. An implication for the slopes of test statistics of the form φ(Xn) is as follows.
14.27 Lemma. Suppose that φ: 𝔻 → ℝ is continuous at every y such that I(y) < ∞, and suppose that inf{I(y): φ(y) > t} = inf{I(y): φ(y) ≥ t}. If the sequence Xn satisfies the large-deviation principle with the rate function I under P₀, then Tn = φ(Xn) satisfies (14.20) with e(t) = 2 inf{I(y): φ(y) ≥ t}. Furthermore, if I is a good rate function, then e is continuous at t.
Proof. Define sets A_t = φ⁻¹(t, ∞) and B_t = φ⁻¹[t, ∞), and let 𝔻₀ be the set where I is finite. By the continuity of φ, B̄_t ∩ 𝔻₀ = B_t ∩ 𝔻₀ and B̊_t ∩ 𝔻₀ ⊃ A_t ∩ 𝔻₀. (If y ∉ B̊_t, then there is a net y_n ∈ B_tᶜ with y_n → y; if also y ∈ 𝔻₀, then φ(y) = lim φ(y_n) ≤ t, and hence y ∉ A_t.) Consequently, the infimum of I over B̄_t is equal to the infimum over B_t, which is the infimum over A_t by assumption, and this is at least the infimum over B̊_t; hence all these infima are equal. Condition (14.20) follows upon applying the large deviation principle to B̊_t and B̄_t.
The function e is nondecreasing. The condition on the pair (I, φ) is exactly that e is right-continuous, because e(t+) = 2 inf{I(y): φ(y) > t}. To prove the left-continuity of e, let t_m ↑ t. Then e(t_m) ↑ a for some a ≤ e(t). If a = ∞, then e(t) = ∞ and e is left-continuous. If a < ∞, then there exists a sequence y_m with φ(y_m) ≥ t_m and 2I(y_m) ≤ a + 1/m. By the goodness of I, this sequence has a converging subnet y_{m′} → y. Then 2I(y) ≤ lim inf 2I(y_{m′}) ≤ a by the lower semicontinuity of I, and φ(y) ≥ t by the continuity of φ. Thus e(t) ≤ 2I(y) ≤ a. ∎
Empirical distributions can be viewed as means (of Dirac measures), and are therefore potential candidates for a large-deviation theorem. Cramér's theorem for empirical distributions is known as Sanov's theorem. Let L₁(𝒳, 𝒜) be the set of all probability measures on the measurable space (𝒳, 𝒜), which we assume to be a complete, separable metric space with its Borel σ-field. The τ-topology on L₁(𝒳, 𝒜) is defined as the weak topology generated by the collection of all maps P ↦ Pf, for f ranging over the set of all bounded, measurable functions f: 𝒳 → ℝ.
14.28 Theorem (Sanov's theorem). Let ℙn be the empirical measure of a random sample of size n from a fixed measure P. Then the sequence ℙn, viewed as maps into L₁(𝒳, 𝒜), satisfies the large deviation principle relative to the τ-topology, with the good rate function I(Q) = −Q log(p/q).
For 𝒳 equal to the real line, L₁(𝒳, 𝒜) can be identified with the set of cumulative distribution functions. The τ-topology is stronger than the topology obtained from the uniform norm on the distribution functions. This follows from the fact that if both Fn(x) → F(x) and Fn{x} → F{x} for every x ∈ ℝ, then ‖Fn − F‖∞ → 0 (see Problem 19.9). Thus any function φ that is continuous with respect to the uniform norm is also continuous with respect to the τ-topology, and we obtain a large collection of functions to which we can apply the preceding lemma. Trimmed means are just one example.
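The rate function of Sanov's theorem can be glimpsed in the simplest discrete case: for fair coin tosses, the probability that the empirical frequency of heads is at least 0.7 decays at the exponential rate I(Q), the Kullback–Leibler divergence of Bernoulli(0.7) from Bernoulli(0.5). The sketch below (an illustration, not from the text) computes the binomial tail exactly with big integers.

```python
import math

def tail_rate(n, q=0.7):
    """-(1/n) log P(Bin(n, 1/2) >= q*n), computed exactly."""
    k0 = math.ceil(q * n)
    prob = sum(math.comb(n, k) for k in range(k0, n + 1)) / 2 ** n
    return -math.log(prob) / n

# I(Q) = -Q log(p/q) for Q = Bernoulli(0.7), P = Bernoulli(0.5)
kl = 0.7 * math.log(0.7 / 0.5) + 0.3 * math.log(0.3 / 0.5)
r200, r1000 = tail_rate(200), tail_rate(1000)
print(r200, r1000, kl)
```

The exact rates decrease toward the Kullback–Leibler divergence as n grows; the Chernoff bound guarantees they stay above it for every n.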
14.29 Example (Trimmed means). Let 𝔽n be the empirical distribution function of a random sample of size n from the distribution function F, and let 𝔽n⁻¹ be the corresponding quantile function. The function φ(𝔽n) = (1 − 2α)⁻¹ ∫_α^{1−α} 𝔽n⁻¹(s) ds yields a version of the α-trimmed mean (see Chapter 22). We assume that 0 < α < ½ and (partly for simplicity) that the null distribution F₀ is continuous.
If we show that the conditions of Lemma 14.27 are fulfilled, then we can conclude, by Sanov's theorem,

$$-\frac{2}{n}\log P_{F_0}\bigl(\phi(\mathbb F_n) \ge t\bigr) \to e(t) := 2\inf_{G:\,\phi(G)\ge t}\Bigl(-G\log\frac{f_0}{g}\Bigr).$$
Then G_m is a probability distribution for suitably chosen c_m > 0, and, by the dominated convergence theorem, G_m log(f₀/g_m) → G log(f₀/g) as m → ∞. Because G_m(x) ≤ G(x) for all x, with strict inequality (at least) for all x ≤ c such that G(x) > 0, we have that G_m⁻¹(s) ≥ G⁻¹(s) for all s, with strict inequality for all s ∈ (0, G(c)]. Hence the trimmed mean φ(G_m) is strictly bigger than the trimmed mean φ(G), for every m. □
14.5 Rescaling Rates
14.30 Lemma. The power function πn of any test in (𝒳n, 𝒜n, P_{n,θ}: θ ∈ Θ) satisfies

$$\pi_n(\theta) - \pi_n(\theta_0) \le \tfrac12\,\bigl\|P_{n,\theta} - P_{n,\theta_0}\bigr\|.$$

For any θ and θ₀ there exists a test whose power function attains equality.

Proof. If πn is the power function of the test φn, then the difference on the left side can be written as ∫φn (p_{n,θ} − p_{n,θ₀}) dμn. This expression is maximized for the test function φn = 1{p_{n,θ} > p_{n,θ₀}}. Next, for any pair of probability densities p and q we have ∫_{q>p}(q − p) dμ = ½∫|p − q| dμ, since ∫(p − q) dμ = 0. ∎
(i) If ‖P_{n,θn} − P_{n,θ₀}‖ → 2, then there exists a sequence of tests with power πn(θn) tending to 1 and size πn(θ₀) tending to 0 (a perfect sequence of tests).
(ii) If ‖P_{n,θn} − P_{n,θ₀}‖ → 0, then the power of any sequence of tests is asymptotically less than the level (every sequence of tests is worthless).
(iii) If ‖P_{n,θn} − P_{n,θ₀}‖ is bounded away from 0 and 2, then there exists no perfect sequence of tests, but not every test is worthless.
The rescaling rate h/√n used in earlier sections corresponds to the third possibility. These examples concern models with independent observations. Because the total variation distance between product measures cannot be easily expressed in the distances for the individual factors, we translate the results into the Hellinger distance and next study the implications for product experiments.
The Hellinger distance H(P, Q) between two probability measures is the L₂-distance between the square roots of the corresponding densities. Thus, its square H²(P, Q) is equal to ∫(√p − √q)² dμ. The distance is convenient if considering product measures. First, the Hellinger distance can be expressed in the Hellinger affinity A(P, Q) = ∫√p √q dμ, through the formula

$$H^2(P, Q) = 2 - 2A(P, Q).$$

Next, by Fubini's theorem, the affinity of two product measures is the product of the affinities. Thus we arrive at the formula

$$H^2(P^n, Q^n) = 2 - 2A(P, Q)^n = 2 - 2\bigl(1 - \tfrac12 H^2(P, Q)\bigr)^n.$$
212 Relative Efficiency of Tests
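For discrete measures the product formula is easy to verify directly. The sketch below (illustrative; the two Bernoulli measures are arbitrary choices) computes the affinity of a triple product measure by enumeration and compares it with A(P, Q)³.

```python
import itertools
import math

p, q = [0.3, 0.7], [0.5, 0.5]                      # two Bernoulli measures
A1 = sum(math.sqrt(a * b) for a, b in zip(p, q))   # Hellinger affinity A(P, Q)
H2 = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

# affinity of the product measures on {0,1}^3, by brute-force enumeration
A3 = sum(math.sqrt(math.prod(p[x] for x in xs) * math.prod(q[x] for x in xs))
         for xs in itertools.product([0, 1], repeat=3))

print(H2, 2 - 2 * A1)   # H^2(P, Q) = 2 - 2 A(P, Q)
print(A3, A1 ** 3)      # affinity of products = product of affinities
```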
14.31 Lemma. Given a statistical model (P_θ: θ ∈ Θ), set P_{n,θ} = P_θⁿ. Then the possibilities (i), (ii), and (iii) arise when nH²(P_{θn}, P_{θ₀}) converges to ∞, converges to 0, or is bounded away from 0 and ∞, respectively. In particular, if H²(P_θ, P_{θ₀}) is of the order |θ − θ₀|^α as θ → θ₀, then the possibilities (i), (ii), and (iii) are valid when n^{1/α}|θn − θ₀| converges to ∞, converges to 0, or is bounded away from 0 and ∞, respectively.

Proof. The possibilities (i), (ii), and (iii) can equivalently be described by replacing the total variation distance ‖P_{θn}ⁿ − P_{θ₀}ⁿ‖ by the squared Hellinger distance H²(P_{θn}ⁿ, P_{θ₀}ⁿ). This follows from the inequalities, for any probability measures P and Q,

$$H^2(P, Q) \le \|P - Q\| \le 2H(P, Q).$$
14.33 Example (Uniform law). If P_θ is the uniform measure on [0, θ], then H²(P_θ, P_{θ₀}) = O(|θ − θ₀|). The intermediate rate of convergence is n. In this case we would study asymptotic power functions defined as the limits of the local power functions of the form h ↦ πn(θ₀ + h/n). For instance, the level α tests that reject the null hypothesis H₀: θ = θ₀ for large values of the maximum X₍ₙ₎ of the observations have power functions

$$\pi_n\Bigl(\theta_0 + \frac{h}{n}\Bigr) \to 1 - (1-\alpha)\,e^{-h/\theta_0}, \qquad h > 0.$$

Relative to this rescaling rate, the level α tests that reject the null hypothesis for large values of the mean X̄n have asymptotic power function α (no power). □
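Under the natural form of this test (reject when X₍ₙ₎ exceeds the null (1 − α)-quantile θ₀(1 − α)^{1/n}), the finite-sample power at θ = θ₀ + h/n can be computed exactly and compared with the limiting power function. A Python sketch under these assumptions:

```python
import math

def power_max_test(n, h, theta0=1.0, alpha=0.05):
    """Exact power at theta = theta0 + h/n of the test that rejects when
    X_(n) > c, with c = theta0 * (1 - alpha)^(1/n) giving size exactly alpha."""
    c = theta0 * (1 - alpha) ** (1 / n)
    theta = theta0 + h / n
    return 1 - (c / theta) ** n      # P_theta(X_(n) <= c) = (c/theta)^n

h, theta0, alpha = 2.0, 1.0, 0.05
limit = 1 - (1 - alpha) * math.exp(-h / theta0)
p10, p1000 = power_max_test(10, h), power_max_test(1000, h)
print(p10, p1000, limit)
```

Already for moderate n the exact power is close to the limit, consistent with the 1/n rescaling rate.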
14.34 Example (Triangular law). Let P_θ be the probability distribution with density x ↦ (1 − |x − θ|)⁺ on the real line. Some clever integrations show that H²(P_ε, P₀) = ½ε² log(1/ε) + O(ε²) as ε ↓ 0. (It appears easiest to compute the affinity first.) This leads to the intermediate rate of convergence √(n log n). □
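The ½ε² log(1/ε) order can be checked by numerical integration. The crude midpoint-rule sketch below (an illustration, not from the text) computes H²(P_ε, P₀)/ε² for decreasing ε; the formula predicts growth by about ½ log 10 per decade of ε.

```python
import math

def hellinger2_triangular(eps, step=1e-5):
    """H^2 between triangular densities centered at eps and at 0 (midpoint rule)."""
    f = lambda x: max(0.0, 1.0 - abs(x))
    n = int((2 + eps) / step)
    total = 0.0
    for j in range(n):
        x = -1.0 + (j + 0.5) * step
        total += (math.sqrt(f(x - eps)) - math.sqrt(f(x))) ** 2
    return total * step

r = [hellinger2_triangular(e) / e ** 2 for e in (0.1, 0.01, 0.001)]
print(r)
```

The ratio H²/ε² keeps growing as ε decreases, which is exactly why the rescaling rate picks up the extra √log n factor.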
The preceding lemmas concern testing a given simple null hypothesis against a simple
alternative hypothesis. In many cases the rate obtained from considering simple hypotheses
does not depend on the hypotheses and is also globally attainable at every parameter in
the parameter space. If not, then the global problems have to be taken into account from
the beginning. One possibility is discussed within the context of density estimation in
section 24.3.
Lemma 14.31 gives rescaling rates for problems with independent observations. In models with dependent observations quite different rates may pertain.
Notes
Apparently, E. J. G. Pitman introduced the efficiencies that are named for him in an unpublished set of lecture notes in 1949. A published proof of a slightly more general result can be found in [109].
Cramér [26] was interested in more precise approximations to probabilities of large deviations than are presented in this chapter and obtained the theorem under the condition that the moment-generating function is finite on ℝ. Chernoff [20] proved the theorem as presented here, by a different argument. Chernoff used it to study the minimum weighted sums of error probabilities of tests that reject for large values of a mean and showed that, for any 0 < π < 1,

$$\frac1n \log \inf_t\Bigl(\pi P_0(\bar Y_n > t) + (1-\pi)P_1(\bar Y_n \le t)\Bigr) \to \inf_{E_0Y_1 < t < E_1Y_1}\ \inf_u\bigl(K_0(u)-ut\bigr) \vee \inf_u\bigl(K_1(u)-ut\bigr).$$
Furthermore, for Ȳn the likelihood ratio statistic for testing P₀ versus P₁, the right side of this display can be expressed in the Hellinger integral of the experiment (P₀, P₁) as

$$\inf_{0<u<1} \log \int p_0^u\, p_1^{1-u}\, d\mu.$$
Thus, this expression is a lower bound for lim inf_{n→∞} n⁻¹ log(αn + βn), for αn and βn the error probabilities of any test of P₀ versus P₁. That the Bahadur slope of Neyman–Pearson tests is twice the Kullback–Leibler divergence (Example 14.26) is essentially known as Stein's lemma and is apparently among those results by Stein that he never cared to publish.
A first version of Sanov's theorem was proved by Sanov in 1957. Subsequently, many authors contributed to strengthening the result, the version presented here being given in [65]. Large-deviation theorems are the subject of current research by probabilists, particularly with extensions to more complicated objects than sums of independent variables. See [31] and [32]. For further information and references concerning applications in statistics, we refer to [4] and [61], as well as to Chapters 8, 16, and 17.
For applications and extensions of the results on rescaling rates, see [37].
PROBLEMS
1. Show that the power function of the Wilcoxon two-sample test is monotone under shift of location.
2. Let X₁, …, Xn be a random sample from the N(μ, σ²)-distribution, where σ² is known. A test for H₀: μ = 0 against H₁: μ > 0 can be based on either X̄/σ or X̄/S. Show that the asymptotic relative efficiency of the two sequences of tests is 1. Does it make a difference whether normal or t-critical values are used?
3. Let X₁, …, Xn be a random sample from a density f(x − θ), where f is symmetric about zero. Calculate the relative efficiency of the t-test and the test that rejects for large values of ΣΣ_{i<j} 1{Xᵢ + Xⱼ > 0} for f equal to the logistic, normal, Laplace, and uniform shapes.
4. Calculate the relative efficiency of the van der Waerden test with respect to the t-test in the
two-sample problem.
5. Calculate the relative efficiency of the tests based on Kendall's τ and the sample correlation coefficient to test independence for bivariate normal pairs of observations.
6. Suppose φ: ℱ → ℝ and ψ: ℱ → ℝᵏ are arbitrary maps on an arbitrary set ℱ, and we wish to find the minimum value of φ over the set {f ∈ ℱ: ψ(f) = 0}. If the map f ↦ φ(f) + aᵀψ(f) attains its minimum over ℱ at f_a, for each fixed a in an arbitrary set A, and there exists a₀ ∈ A such that ψ(f_{a₀}) = 0, then the desired minimum value is φ(f_{a₀}). This is a rather trivial use of Lagrange multipliers, but it is helpful to solve the next problems. (φ(f_{a₀}) = φ(f_{a₀}) + a₀ᵀψ(f_{a₀}) is the minimum of φ(f) + a₀ᵀψ(f) over ℱ and hence smaller than the minimum of φ(f) + a₀ᵀψ(f) over {f ∈ ℱ: ψ(f) = 0}.)
7. Show that 4f(0)² ∫y² f(y) dy ≥ 1/3 for every probability density f that has its mode at 0. (The minimum is equal to the minimum of 4∫y² f(y) dy over all probability densities f that are bounded by 1.)
8. Show that 12(∫f²(y) dy)² ∫y² f(y) dy ≥ 108/125 for every probability density f with mean zero. (The minimum is equal to 12 times the minimum of the square of φ(f) = ∫f²(y) dy over all probability densities with mean 0 and variance 1.)
9. Study the asymptotic power function of the sign test if the observations are a sample from
a distribution that has a positive mass at its median. Is it good or bad to have a nonsmooth
distribution?
10. Calculate the Hellinger and total variation distance between two uniform U[0, θ] measures.
11. Calculate the Hellinger and total variation distance between two normal N(μ, σ²) measures.
12. Let X₁, …, Xn be a sample from the uniform distribution on [−θ, θ].
(i) Calculate the asymptotic power functions of the tests that reject H₀: θ = θ₀ for large values of X₍ₙ₎, X₍ₙ₎ ∨ (−X₍₁₎), and X₍ₙ₎ − X₍₁₎.
(ii) Calculate the asymptotic relative efficiencies of these tests.
13. If two sequences of test statistics satisfy (14.4) for every θn ↓ 0, but with norming rate n^a instead of √n, how would Theorem 14.19 have to be modified to find the Pitman relative efficiency?
15
Efficiency of Tests
Proof. We give the proof for the special case of experiments that satisfy the following assumption: Every sequence of statistics Tn that is tight under every given parameter h possesses a subsequence (not depending on h) that converges in distribution to a limit under every h. See Problem 15.2 for a method to extend the proof to the general situation.
The additional condition is valid in the case of local asymptotic normality. With the notation of the proof of Theorem 7.10, we argue first that the sequence (Tn, Δn) is uniformly tight under h = 0 and hence possesses a weakly convergent subsequence by Prohorov's theorem. Next, by the expansion of the likelihood and Slutsky's lemma, the sequence (Tn, log dP_{n,h}/dP_{n,0}) converges under h = 0 along the same subsequence, for every h. Finally, we conclude by Le Cam's third lemma that the sequence Tn converges under h, along the subsequence.
Let φn be tests with power functions πn. Because each φn takes its values in the compact interval [0, 1], the sequence of random variables φn is certainly uniformly tight. By assumption, there exists a subsequence of {n} along which φn converges in distribution under every h. Thus, the assumption of the asymptotic representation theorem, Theorem 9.3 or Theorem 7.10, is satisfied along some subsequence of the statistics φn. By this theorem, there exists a randomized statistic T = T(X, U) in the limit experiment such that φn ⇝ T along the subsequence, for every h. The randomized statistic may be assumed to take its values in [0, 1]. Because the φn are uniformly bounded, E_h φn → E_h T. Combination with the assumption yields π(h) = E_h T for every h. The randomized statistic T is not a test function (it is a "doubly randomized" test). However, the test φ(x) = E(T(X, U) | X = x) satisfies the requirements. ∎
The theorem suggests that the best possible limiting power function is the power function
of the best test in the limit experiment. In classical testing theory an "absolutely best" test
is defined as a uniformly most powerful test of the required level. Depending on the
experiment, such a test may or may not exist. If it does not exist, then the classical solution
is to find a uniformly most powerful test in a restricted class, such as the class of all
unbiased or invariant tests; to use the maximin criterion; or to use a conditional test. In
combination with the preceding theorem, each of these approaches leads to a criterion for
asymptotic quality. We do not pursue this in detail but note that, in general, we would avoid
any sequence of tests that is matched in the limit experiment by a test that is considered
suboptimal.
In the remainder of this chapter we consider the implications for locally asymptotically
normal models in more detail. We start with reviewing testing in normal location models.
Proof. Fix h₁ with cᵀh₁ > 0. Define h₀ = h₁ − (cᵀh₁/cᵀΣc)Σc. Then cᵀh₀ = 0. By the Neyman–Pearson lemma, the most powerful test for testing the simple hypotheses H₀: h = h₀ and H₁: h = h₁ rejects H₀ for large values of cᵀX.
The natural test for the two-sided problem H₀: cᵀh = 0 versus H₁: cᵀh ≠ 0 rejects the null hypothesis for large values of |cᵀX|. This test is not uniformly most powerful, because its power is dominated by the uniformly most powerful tests for the two one-sided alternatives whose union is H₁. However, the test with critical region {x: |cᵀx| ≥ z_{α/2}√(cᵀΣc)} is uniformly most powerful among the unbiased level α tests (see Problem 15.1).
A second problem of interest is to test a simple null hypothesis H₀: h = 0 versus the alternative H₁: h ≠ 0. If the parameter set is one-dimensional, then this reduces to the problem in the preceding paragraph. However, if θ is of dimension k > 1, then there exists no uniformly most powerful test, not even among the unbiased tests. A variety of tests are reasonable, and whether a test is "good" depends on the alternatives at which we desire high power. For instance, the test that is most sensitive to detect the alternatives such that cᵀh > 0 (for a given c) is the test given in the preceding theorem. Probably in most situations no particular "direction" is of special importance, and we would use a test that distributes the power over all directions. It is known that any test with as critical region the complement of a closed, convex set C is admissible (see, e.g., [138, p. 137]). In particular, complements of closed, convex, and symmetric sets are admissible critical regions and cannot easily be ruled out a priori. The shape of C determines the power function, the directions in which C extends little receiving large power (although the power also depends on Σ).
The most popular test rejects the null hypothesis for large values of XᵀΣ⁻¹X. This test arises as the limit version of the Wald test, the score test, and the likelihood ratio test. One advantage is a simple choice of critical values, because XᵀΣ⁻¹X is chi square-distributed with k degrees of freedom. The power function of this test is, with Z a standard normal vector,

$$h \mapsto P\bigl(\|Z + \Sigma^{-1/2}h\|^2 \ge \chi^2_{k,\alpha}\bigr).$$

By the rotational symmetry of the standard normal distribution, this depends only on the noncentrality parameter ‖Σ^{−1/2}h‖. The power is relatively large in the directions h for which ‖Σ^{−1/2}h‖ is large. In particular, it increases most steeply in the direction of the eigenvector corresponding to the smallest eigenvalue of Σ. Note that the test does not distribute the power evenly, but depending on Σ. Two optimality properties of this test are given in Problems 15.3 and 15.4, but these do not really seem convincing.
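That the power depends on h only through ‖Σ^{−1/2}h‖ is easy to see in a simulation. The sketch below is illustrative only (Σ and the alternatives are arbitrary choices); it uses k = 2, for which the level-0.05 critical value is exactly −2 log 0.05.

```python
import math
import random

random.seed(1)
crit = -2 * math.log(0.05)   # chi-square_2 upper 0.05 quantile
sd = [1.0, 2.0]              # Sigma = diag(1, 4), so Sigma^(-1/2) = diag(1, 1/2)

def power(h, reps=20_000):
    """Monte Carlo power of the test ||Sigma^(-1/2) X||^2 > crit at X ~ N(h, Sigma)."""
    mu = [h[0] / sd[0], h[1] / sd[1]]   # noncentrality vector Sigma^(-1/2) h
    hits = 0
    for _ in range(reps):
        z = [random.gauss(0, 1) + m for m in mu]
        if z[0] ** 2 + z[1] ** 2 > crit:
            hits += 1
    return hits / reps

# two alternatives with the same noncentrality norm 1.5, and the null h = 0
p_a, p_b, p_null = power([1.5, 0.0]), power([0.0, 3.0]), power([0.0, 0.0])
print(p_a, p_b, p_null)
```

The two alternatives point in different directions but produce (up to simulation noise) the same power, while the null rejection rate stays near 0.05.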
Due to the lack of an acceptable optimal test in the limit problem, a satisfactory asymp-
totic optimality theory of testing simple hypotheses on multidimensional parameters is
impossible.
A sequence of power functions θ ↦ πn(θ) in the original experiments induces the sequence of power functions h ↦ πn(θ₀ + h/√n) in the local experiments. Suppose that πn(θ₀ + h/√n) → π(h) for every h and some function π. Then, by the asymptotic representation theorem, the limit π is a power function in the Gaussian limit experiment.
Suppose for the moment that θ is real and that the sequence πn is of asymptotic level α for testing H₀: θ ≤ θ₀ versus H₁: θ > θ₀. Then π(0) = lim πn(θ₀) ≤ α, and hence π corresponds to a level α test for H₀: h = 0 versus H₁: h > 0 in the limit experiment. It must be bounded above by the power function of the uniformly most powerful level α test in the limit experiment, which is given by Proposition 15.2. Conclude that

$$\lim_{n\to\infty} \pi_n\Bigl(\theta_0 + \frac{h}{\sqrt n}\Bigr) \le 1 - \Phi\bigl(z_\alpha - h\sqrt{I_{\theta_0}}\bigr), \qquad \text{every } h > 0.$$

(Apply the proposition with c = 1 and Σ = I_{θ₀}⁻¹.) We have derived an absolute upper bound on the local asymptotic power of level α tests.
In Chapter 14 a sequence of power functions such that πn(θ₀ + h/√n) → 1 − Φ(z_α − hs) for every h is said to have slope s. It follows from the present upper bound that the square root √I_{θ₀} of the Fisher information is the largest possible slope. The quantity

$$\frac{I_{\theta_0}}{s^2}$$

is the relative efficiency of the best test and the test with slope s. It can be interpreted as the number of observations needed with the given sequence of tests with slope s divided by the number of observations needed with the best test to obtain the same power.
With a bit of work, the assumption that πn(θ₀ + h/√n) converges to a limit for every h can be removed. Also, the preceding derivation does not use the special structure of i.i.d. observations but only uses the convergence to a Gaussian experiment. We shall rederive the result within the context of local asymptotic normality and also indicate how to construct optimal tests.
Suppose that at "time" n the observation is distributed according to a distribution Pn,fJ with
parameter ranging over an open subset e ofll~k. The sequence of experiments ( Pn,e : () E E>)
is locally asymptotically normal at ()0 if
dP ~ -lh I
log n,.,o+r. = hT 1::!.. - -hT l h + 0 (1) (15.3)
d P, n,fJo 2 9o Pn,OQ '
n,fJo
15.3 Local Asymptotic Normality 219
15.4 Theorem. Let Θ ⊂ ℝᵏ be open and let ψ: Θ → ℝ be differentiable at θ₀, with nonzero gradient ψ̇_{θ₀} and such that ψ(θ₀) = 0. Let the sequence of experiments (P_{n,θ}: θ ∈ Θ) be locally asymptotically normal at θ₀ with nonsingular Fisher information I_{θ₀}, for constants r_n → ∞. Then the power functions θ ↦ πn(θ) of any sequence of level α tests for testing H₀: ψ(θ) ≤ 0 versus H₁: ψ(θ) > 0 satisfy, for every h such that ψ̇_{θ₀}h > 0,

$$\limsup_{n\to\infty} \pi_n(\theta_0 + r_n^{-1}h) \le 1 - \Phi\Bigl(z_\alpha - \frac{\dot\psi_{\theta_0} h}{\bigl(\dot\psi_{\theta_0} I_{\theta_0}^{-1} \dot\psi_{\theta_0}^T\bigr)^{1/2}}\Bigr).$$

Let Tn = ψ̇_{θ₀} I_{θ₀}⁻¹ Δ_{n,θ₀}/(ψ̇_{θ₀} I_{θ₀}⁻¹ ψ̇_{θ₀}ᵀ)^{1/2}. Then the sequence of tests that reject for values of Tn exceeding z_α is asymptotically optimal in the sense that the sequence P_{θ₀+r_n⁻¹h}(Tn ≥ z_α) converges to the right side of the preceding display, for every h.

By Le Cam's third lemma, the sequence Δ_{n,θ₀} converges in distribution under θ₀ + r_n⁻¹h to a N(I_{θ₀}h, I_{θ₀})-distribution. Thus, the sequence Tn converges under θ₀ + r_n⁻¹h in distribution to a normal distribution with mean ψ̇_{θ₀}h/(ψ̇_{θ₀} I_{θ₀}⁻¹ ψ̇_{θ₀}ᵀ)^{1/2} and variance 1. ∎
The point θ₀ in the preceding theorem is on the boundary of the null and the alternative hypotheses. If the dimension k is larger than 1, then this boundary is typically (k − 1)-dimensional, and there are many possible values for θ₀. The upper bound is valid at every possible choice.
If k = 1, the boundary point θ₀ is typically unique and hence known, and we could use Tn = I_{θ₀}^{−1/2} Δ_{n,θ₀} to construct an optimal sequence of tests for the problem H₀: θ = θ₀. These are known as score tests.
Another possibility is to base a test on an estimator sequence. Not surprisingly, efficient estimators yield efficient tests.

$$\Delta_{n,\theta} = \frac{1}{\sqrt n}\sum_{i=1}^n \dot\ell_\theta(X_i).$$
Under regularity conditions, the maximum likelihood estimator qualifies. Suppose that θ ↦ ℓ̇_θ is continuous, and that ψ is continuously differentiable with nonzero gradient. Then the sequence of tests that reject H₀: ψ(θ) ≤ 0 if
15.4 One-Sample Location

$$1 - \Phi\bigl(z_\alpha - h\sqrt{I_f}\bigr).$$

This function is an upper bound for lim sup πn(h/√n), for every h > 0, for every sequence of level α power functions. Suppose that Tn are statistics with

$$T_n = -\frac{1}{\sqrt n}\,\frac{1}{\sqrt{I_f}} \sum_{i=1}^n \frac{f'}{f}(X_i) + o_{P_0}(1). \tag{15.7}$$

Then, according to the second assertion of Theorem 15.4, the sequence of tests that reject the null hypothesis if Tn ≥ z_α attains the bound and hence is asymptotically optimal. We shall discuss several ways of constructing test statistics with this property.
If the shape of the distribution is completely known, then the test statistics Tn can simply be taken equal to the right side of (15.7), without the remainder term, and we obtain the score test. It is more realistic to assume that the underlying distribution is only known up to scale. If the underlying density takes the form f(x) = f₀(x/σ)/σ for a known density f₀ that is symmetric about zero, but for an unknown scale parameter σ, then

$$\frac{f'}{f}(x) = \frac{1}{\sigma}\,\frac{f_0'}{f_0}\Bigl(\frac{x}{\sigma}\Bigr).$$
15.8 Example (t-test). The standard normal density f₀ possesses score function −f₀′/f₀(x) = x and Fisher information I_{f₀} = 1. Consequently, if the underlying distribution is normal, then the optimal test statistics should satisfy Tn = √n X̄n/σ + o_{P₀}(1). The t-statistics √n X̄n/Sn fulfill this requirement. This is not surprising, because in the case of normally distributed observations the t-test is uniformly most powerful for every finite n and hence is certainly asymptotically optimal. □
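The asymptotic equivalence of √n X̄n/Sn and the known-scale statistic √n X̄n/σ is visible in a single simulated sample. A small sketch (illustrative; the values of n and σ are arbitrary choices):

```python
import math
import random
import statistics

random.seed(0)
n, sigma = 400, 2.0
x = [random.gauss(0, sigma) for _ in range(n)]
mean = statistics.fmean(x)
t_stat = math.sqrt(n) * mean / statistics.stdev(x)   # sqrt(n) * mean / S_n
z_stat = math.sqrt(n) * mean / sigma                 # known-scale score statistic
print(t_stat, z_stat)
```

The two statistics differ only by a term of order 1/√n, as (15.7)-type expansions predict.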
The t-statistic in the preceding example simply replaces the unknown standard deviation σ by an estimate. This approach can be followed for most scale families. Under some regularity conditions, the statistics

$$T_n = -\frac{1}{\sqrt n}\,\frac{1}{\sqrt{I_{f_0}}} \sum_{i=1}^n \frac{f_0'}{f_0}\Bigl(\frac{X_i}{\hat\sigma_n}\Bigr)$$

should yield asymptotically optimal tests, given a consistent sequence of scale estimators σ̂n.
Rather than using score-type tests, we could use a test based on an efficient estimator for the unknown symmetry point and efficient estimators for possible nuisance parameters, such as the scale (for instance, the maximum likelihood estimators). This method is indicated in general in Example 15.8 and leads to the Wald test.
Perhaps the most attractive approach is to use signed rank statistics. We summarize some definitions and conclusions from Chapter 13. Let R^+_{n1}, ..., R^+_{nn} be the ranks of the absolute values |X_1|, ..., |X_n| in the ordered sample of absolute values. A linear signed rank statistic takes the form

$$T_n = \frac{1}{\sqrt n}\sum_{i=1}^n a_{nR_{ni}^{+}}\operatorname{sign}(X_i),$$

for given numbers a_{n1}, ..., a_{nn}, which are called the scores of the statistic. Particular examples are the Wilcoxon signed rank statistic, which has scores a_{ni} = i, and the sign statistic, which corresponds to scores a_{ni} = 1. In general, the scores can be chosen to weigh the observations through a score-generating function φ, via

$$a_{ni} = \mathrm{E}\,\varphi(U_{n(i)}).$$

(Here U_{n(1)}, ..., U_{n(n)} are the order statistics of a random sample of size n from the uniform distribution on [0, 1].) Under the condition that ∫_0^1 φ²(u) du < ∞, Theorem 13.18 shows that, under the null hypothesis, and with F^+(x) = 2F(x) − 1 denoting the distribution function of |X_1|,

$$T_n = \frac{1}{\sqrt n}\sum_{i=1}^n \varphi\circ F^{+}(|X_i|)\operatorname{sign}(X_i) + o_P(1).$$
Because the score-generating function φ can be chosen freely, this allows the construction of an asymptotically optimal rank statistic for any given shape f. The choice

$$\varphi(u) = -\frac{1}{\sqrt{I_f}}\,\frac{f'}{f}\bigl((F^{+})^{-1}(u)\bigr) \qquad (15.9)$$

yields the locally most powerful scores, as discussed in Chapter 13. Because f'/f(|x|) sign(x) = f'/f(x) by the symmetry of f, it follows that the signed rank statistics T_n satisfy (15.7). Thus, the locally most powerful scores yield asymptotically optimal signed rank tests. This surprising result, that the class of signed rank statistics contains asymptotically efficient tests for every given (symmetric) shape of the underlying distribution, is sometimes expressed by saying that the signs and absolute ranks are "asymptotically sufficient" for testing the location of a symmetry point.
15.10 Corollary. Let T_n be the simple linear signed rank statistic with scores a_{ni} = Eφ(U_{n(i)}) generated by the function φ defined in (15.9). Then T_n satisfies (15.7), and hence the sequence of tests that reject H_0: θ = 0 if T_n ≥ z_α is asymptotically optimal at θ = 0.
Signed rank statistics were originally constructed because of their attractive property of being distribution-free under the null hypothesis. Apparently, this can be achieved without losing (asymptotic) power. Thus, rank tests are strong competitors of classical parametric tests. Note also that signed rank statistics automatically adapt to the unknown scale: Even though the definition of the optimal scores appears to depend on f, they are actually identical for every member of a scale family f(x) = f_0(x/σ)/σ (because (F^+)^{-1}(u) = σ(F_0^+)^{-1}(u)). Thus, no auxiliary estimate for σ is necessary for their definition.
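This invariance is easy to verify numerically. The sketch below uses the approximate normal scores Φ^{-1}((1 + i/(n+1))/2) as a stand-in for the exact expected scores Eφ(U_{n(i)}) (an assumption for illustration; the exact expectations differ slightly):

```python
import numpy as np
from scipy.stats import rankdata, norm

# A linear signed rank statistic sum_i a_{n,R_i^+} sign(X_i) / sqrt(n),
# with approximate normal scores.  Because only signs and absolute ranks
# enter, the statistic is unchanged when the sample is rescaled.
def signed_rank_stat(x):
    n = len(x)
    r_plus = rankdata(np.abs(x))               # ranks of |X_1|, ..., |X_n|
    scores = norm.ppf((1 + r_plus / (n + 1)) / 2)
    return scores @ np.sign(x) / np.sqrt(n)

rng = np.random.default_rng(1)
x = rng.normal(size=100)
s_original = signed_rank_stat(x)
s_rescaled = signed_rank_stat(5.0 * x)         # same signs, same ranks
```
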
15.11 Example (Laplace). The sign statistic T_n = n^{-1/2} Σ_{i=1}^n sign(X_i) satisfies (15.7) for f equal to the Laplace density. Thus the sign test is asymptotically optimal for testing location in the Laplace scale family. □
15.12 Example (Normal). The standard normal density has score function for location f_0'/f_0(x) = -x and Fisher information I_{f_0} = 1. The optimal signed rank statistic for the normal scale family has score-generating function

$$\varphi(u) = \Phi^{-1}\Bigl(\frac{u+1}{2}\Bigr).$$

We conclude that the corresponding sequence of rank tests has the same asymptotic slope as the t-test if the underlying distribution is normal. (For other distributions the two sequences of tests have different asymptotic behavior.) □
Even the assumption that the underlying distribution of the observations is known up to scale is often unrealistic. Because rank statistics are distribution-free under the null hypothesis, the level of a rank test is independent of the underlying distribution, which is the best possible protection of the level against misspecification of the model. On the other hand, the power of a rank test is not necessarily robust against deviations from the postulated model. This might lead to the use of the best test for the wrong model. The dependence of the power on the underlying distribution may be relaxed as well, by a procedure known as adaptation. This entails estimating the underlying density from the data and next using an optimal test for the estimated density. A remarkable fact is that this approach can be completely successful: There exist test statistics that are asymptotically optimal for any shape f. In fact, without prior knowledge of f (other than that it is symmetric with finite and positive Fisher information for location), estimators θ̂_n and Î_n can be constructed such that, for every θ and f, Î_n is consistent for I_f and

$$\sqrt n\,(\hat\theta_n - \theta) = -\frac{1}{\sqrt n}\sum_{i=1}^n \frac{1}{I_f}\,\frac{f'}{f}(X_i - \theta) + o_{P_{\theta,f}}(1).$$

We give such a construction in section 25.8.1. Then the test statistics T_n = √n θ̂_n Î_n^{1/2} satisfy (15.7) and hence are asymptotically (locally) optimal at θ = 0 for every given shape f. Moreover, for every θ > 0 and every f,

$$P_{\theta,f}\bigl(T_n \ge z_\alpha\bigr) \to 1.$$

Hence, the sequence of tests based on T_n is also consistent at every (θ, f) in the alternative hypothesis H_1: θ > 0.
15.5 Two-Sample Problems

15.13 Example (Testing shift). If p_μ(x) = f(x − μ) and q_ν(y) = g(y − ν) for two densities f and g that have the same "location," then we obtain the two-sample location problem. The alternative hypothesis asserts that the second sample is "stochastically larger." □
The alternatives of greatest interest for the study of the asymptotic performance of tests are sequences (μ_N, ν_N) that converge to the boundary between null and alternative hypotheses. In the study of relative efficiency, in Chapter 14, we restricted ourselves to vertical perturbations (θ, θ + h/√N). Here we shall use the sequences (θ + g/√N, θ + h/√N), which approach the boundary in the direction of a general vector (g, h).
If both p_μ and q_ν define differentiable models, then the sequence of experiments (P_μ^m ⊗ Q_ν^n : (μ, ν) ∈ ℝ²) is locally asymptotically normal with norming rate √N. If the score functions are denoted by ℓ̇_μ and k̇_ν, and the Fisher informations by I_μ and J_ν, respectively, then the information matrix of local asymptotic normality is (with λ the limiting fraction m/N of observations in the first sample)

$$I_{(\mu,\nu)} = \begin{pmatrix} \lambda I_\mu & 0 \\ 0 & (1-\lambda)J_\nu \end{pmatrix}.$$

The corresponding limit experiment consists of observing two independent normally distributed variables with means g and h and variances λ^{-1} I_μ^{-1} and (1 − λ)^{-1} J_ν^{-1}, respectively.
15.14 Corollary. Suppose that the models (P_μ : μ ∈ ℝ) and (Q_ν : ν ∈ ℝ) are differentiable in quadratic mean, and let m, n → ∞ such that m/N → λ ∈ (0, 1). Then the power functions of any sequence of level α tests for H_0: ν = μ satisfy, for every μ and for every h > g,

$$\limsup_{N\to\infty}\pi_N\Bigl(\mu+\frac{g}{\sqrt N},\,\mu+\frac{h}{\sqrt N}\Bigr) \le 1 - \Phi\biggl(z_\alpha - (h-g)\sqrt{\frac{\lambda(1-\lambda)I_\mu J_\mu}{\lambda I_\mu+(1-\lambda)J_\mu}}\,\biggr).$$

Proof. This is a special case of Theorem 15.4, with ψ(μ, ν) = ν − μ and Fisher information matrix diag(λI_μ, (1 − λ)J_μ). It is slightly different in that the null hypothesis H_0: ψ(θ) = 0 takes the form of an equality, which gives a weaker requirement on the sequence T_n. The proof goes through because of the linearity of ψ. ∎
15.5 Two-Sample Problems 225
A.(l- A.)/11-J/J.
A./11- + (1-A.)J/J.
The square of the quotient of the actual slope of a sequence of tests and this number is a
good absolute measure of the asymptotic quality of the sequence of tests.
According to the second assertion of Theorem 15.4, an optimal sequence of tests can be based on any sequence of statistics T_N with

$$T_N = s_{\mathrm{opt}}(\mu)\Bigl[\frac{1}{(1-\lambda)J_\mu}\frac{1}{\sqrt N}\sum_{j=1}^{n}\dot k_\mu(Y_j) - \frac{1}{\lambda I_\mu}\frac{1}{\sqrt N}\sum_{i=1}^{m}\dot\ell_\mu(X_i)\Bigr] + o_P(1).$$

(The multiplicative factor s_opt(μ) ensures that the sequence T_N is asymptotically normally distributed with variance 1.) Test statistics with this property can be constructed using a variety of methods. For instance, in many cases we can use asymptotically efficient estimators for the parameters μ and ν, combined with estimators for possible nuisance parameters, along the lines of Example 15.6.
If P11- = q/1- = !11- are equal and are densities on the real line, then rank statistics are
attractive. Let RNl, ... ' RNN be the ranks of the pooled sample xi .... ' Xm. Y!, ... ' Yn.
Consider the two-sample rank statistics
Up to a constant these are the locally most powerful scores introduced in Chapter 13. By
Theorem 13.5, because aN= j 01 4J(u) du = 0,
Thus, the locally most powerful rank statistics yield asymptotically optimal tests. In general,
the optimal rank test depends on JL, and other parameters in the model, which must be
estimated from the data, but in the most interesting cases this is not necessary.
15.15 Example (Wilcoxon statistic). For f_μ equal to the logistic density with mean μ, the scores a_{N,i} are proportional to i. Thus, the Wilcoxon (or Mann-Whitney) two-sample statistic is asymptotically uniformly most powerful for testing a difference in location between two samples from logistic densities with different means. □
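The equivalence of the rank-sum form and the Mann-Whitney form is quick to check numerically (the logistic samples below are illustrative; `scipy.stats.mannwhitneyu` supplies the U statistic):

```python
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

# With scores proportional to i, the two-sample rank statistic reduces
# to the rank-sum of the second sample in the pooled sample (the
# Wilcoxon statistic), which equals Mann-Whitney U plus a constant.
rng = np.random.default_rng(2)
x = rng.logistic(loc=0.0, size=30)   # first sample
y = rng.logistic(loc=0.5, size=40)   # second sample, shifted to the right

ranks = rankdata(np.concatenate([x, y]))
w = ranks[len(x):].sum()             # Wilcoxon rank-sum of the second sample

# Mann-Whitney U for the second sample; W = U + n2 * (n2 + 1) / 2.
u = mannwhitneyu(y, x, alternative="greater").statistic
```
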
15.16 Example (Log rank test). The log rank test is asymptotically optimal for testing proportional hazard alternatives, given any baseline distribution. □
Notes
Absolute bounds on asymptotic power functions, as developed in this chapter, are less well known
than the absolute bounds on estimator sequences given in Chapter 8. Testing problems were
nevertheless an important subject in Wald [149], who is credited by Le Cam for having first
conceived of the method of approximating experiments by Gaussian experiments, albeit
in a somewhat different way than later developed by Le Cam. From the point of view
of statistical decision theory, there is no difference between testing and estimating, and
hence the asymptotic bounds for tests in this chapter fit in the general theory developed
in [99]. Wald appears to use the Gaussian approximation to transfer the optimality of the
likelihood ratio and the Wald test (that is now named for him) in the Gaussian experiment
to the sequence of experiments. In our discussion we use the Gaussian approximation to
show that, in the multidimensional case, "asymptotic optimality" can only be defined in a
somewhat arbitrary manner, because optimality in the Gaussian experiment is not easy to
define. That is a difference of taste.
PROBLEMS
1. Consider the two-sided testing problem H_0: c^T h = 0 versus H_1: c^T h ≠ 0 based on an N_k(h, Σ)-distributed observation X. A test for testing H_0 versus H_1 is called unbiased if sup_{h∈H_0} π(h) ≤ inf_{h∈H_1} π(h). The test that rejects H_0 for large values of |c^T X| is uniformly most powerful among the unbiased tests. More precisely, for every power function π of a test based on X the conditions
and
16
Likelihood Ratio Tests

The critical values of the likelihood ratio test are usually based on an asymptotic approximation. We derive the asymptotic distribution of the likelihood ratio statistic and investigate its asymptotic quality through its asymptotic power function and its Bahadur efficiency.
16.1 Introduction

Suppose that we observe a sample X_1, ..., X_n from a density p_θ, and wish to test the null hypothesis H_0: θ ∈ Θ_0 versus the alternative H_1: θ ∈ Θ_1. If both the null and the alternative hypotheses consist of single points, then a most powerful test can be based on the log likelihood ratio, by the Neyman-Pearson theory. If the two points are θ_0 and θ_1, respectively, then the optimal test statistic is given by

$$\log\prod_{i=1}^n \frac{p_{\theta_1}}{p_{\theta_0}}(X_i).$$
For certain special models and hypotheses, the most powerful test turns out not to depend on θ_1, and the test is uniformly most powerful for a composite hypothesis Θ_1. Sometimes the null hypothesis can be extended as well, and the testing problem has a fully satisfactory solution. Unfortunately, in many situations there is no single best test, not even in an asymptotic sense (see Chapter 15). A variety of ideas lead to reasonable tests. A sensible extension of the idea behind the Neyman-Pearson theory is to base a test on the log likelihood ratio

$$\Lambda_n = 2\log\frac{\sup_{\theta\in\Theta_1}\prod_{i=1}^n p_\theta(X_i)}{\sup_{\theta\in\Theta_0}\prod_{i=1}^n p_\theta(X_i)}.$$

The single points are replaced by maxima over the hypotheses. As before, the null hypothesis is rejected for large values of the statistic.
Because the distributional properties of Λ_n can be somewhat complicated, one usually replaces the supremum in the numerator by a supremum over the whole parameter set Θ = Θ_0 ∪ Θ_1. This changes the test statistic only if Λ_n ≤ 0, which is inessential, because in most cases the critical value will be positive. We study the asymptotic properties of the statistic

$$\Lambda_n = 2\log\frac{\sup_{\theta\in\Theta}\prod_{i=1}^n p_\theta(X_i)}{\sup_{\theta\in\Theta_0}\prod_{i=1}^n p_\theta(X_i)}.$$

The most important conclusion of this chapter is that, under the null hypothesis, the sequence Λ_n is asymptotically chi square-distributed. The main conditions are that the model is differentiable in θ and that the null hypothesis Θ_0 and the full parameter set Θ are (locally) equal to linear spaces. The number of degrees of freedom is equal to the difference of the (local) dimensions of Θ and Θ_0. Then the test that rejects the null hypothesis if Λ_n exceeds the upper α-quantile of the chi-square distribution is asymptotically of level α. Throughout the chapter we assume that Θ ⊂ ℝ^k.
The "local linearity" of the hypotheses is essential for the chi-square approximation, which fails already in a number of simple examples. An open set is certainly locally linear at every one of its points, and so is a relatively open subset of an affine subspace. On the other hand, a half-line or half-space, which arises, for instance, when testing a one-sided hypothesis H_0: θ_1 ≤ 0, or a ball H_0: ‖θ‖ ≤ 1, is not locally linear at its boundary points. In that case the asymptotic null distribution of the likelihood ratio statistic is not chi-square, but the distribution of a certain functional of a Gaussian vector.
Besides testing, the likelihood ratio statistic is often used for constructing confidence regions for a parameter ψ(θ). These are defined, as usual, as the values τ for which a null hypothesis H_0: ψ(θ) = τ is not rejected. Asymptotic confidence sets obtained by using the chi-square approximation are thought to be of better coverage accuracy than those obtained by other asymptotic methods.
The likelihood ratio test has the desirable property of automatically achieving reduction
of the data by sufficiency: The test statistic depends on a minimal sufficient statistic only.
This is immediate from its definition as a quotient and the characterization of sufficiency by
the factorization theorem. Another property of the test is also immediate: The likelihood
ratio statistic is invariant under transformations of the parameter space that leave the null
and alternative hypotheses invariant. This requirement is often imposed on test statistics
but is not necessarily desirable.
16.1 Example (Multinomial vector). A vector N = (N_1, ..., N_k) that possesses the multinomial distribution with parameters n and p = (p_1, ..., p_k) can be viewed as the sum of n independent multinomial vectors with parameters 1 and p. By the sufficiency reduction, the likelihood ratio statistic based on N is the same as the statistic based on the single observations. Thus our asymptotic results apply to the likelihood ratio statistic based on N, if n → ∞.
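As a numerical sketch, for a simple null hypothesis H_0: p = p_0 the likelihood ratio statistic can be computed directly from the cell counts as 2 Σ_j N_j log(N_j/(n p_{0j})) (the cell probabilities p_0 below are made up for illustration):

```python
import numpy as np
from scipy.stats import chi2

# Likelihood ratio (G-) statistic for a simple multinomial null
# hypothesis, compared with its chi-square approximation on k - 1
# degrees of freedom.
rng = np.random.default_rng(3)
p0 = np.array([0.2, 0.3, 0.5])               # hypothesized cell probabilities
n = 1000
N = rng.multinomial(n, p0)                    # data simulated under H0

with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(N > 0, N * np.log(N / (n * p0)), 0.0)
lam = 2 * terms.sum()

crit = chi2.ppf(0.95, df=len(p0) - 1)         # upper 5% chi-square quantile
reject = lam >= crit
```
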
If the success probabilities are completely unknown, then their maximum likelihood estimator is N/n. Thus, the log likelihood ratio statistic for testing a null hypothesis H_0: p ∈ P_0 against the alternative H_1: p ∉ P_0 is given by

$$\Lambda_n = 2\inf_{p\in P_0}\sum_{j=1}^k N_j\log\frac{N_j}{np_j}.$$

The full parameter set can be identified with an open subset of ℝ^{k−1}, if vectors p with zero coordinates are excluded. The null hypothesis may take many forms. For a simple null hypothesis
as the numbers of observations falling into each of the sets of the partition. Then the vector N = (N_1, ..., N_k) possesses a multinomial distribution, and the original problem can be translated into testing the null hypothesis that the success probabilities p take the form (P_θ(𝒳_1), ..., P_θ(𝒳_k)) for some θ. □
16.2 Example (Exponential families). Suppose that the observations are sampled from a density p_θ in the k-dimensional exponential family

$$p_\theta(x) = c(\theta)h(x)e^{\theta^T t(x)}.$$

Let Θ ⊂ ℝ^k be the natural parameter space, and consider testing a null hypothesis Θ_0 ⊂ Θ versus its complement Θ − Θ_0. The log likelihood ratio statistic is given by

$$\Lambda_n = 2n\Bigl[\sup_{\theta\in\Theta}\bigl(\theta^T\bar t_n + \log c(\theta)\bigr) - \sup_{\theta\in\Theta_0}\bigl(\theta^T\bar t_n + \log c(\theta)\bigr)\Bigr], \qquad \bar t_n = \frac1n\sum_{i=1}^n t(X_i). \quad\Box$$

† It is also proved in Chapter 17 by relating the likelihood ratio statistic to the chi-square statistic.

16.2 Taylor Expansion

To find the limit behavior of this sequence of random variables, we might replace Σ_i ℓ_θ(X_i) in

$$\Lambda_n = 2\sum_{i=1}^n \ell_{\hat\theta_n}(X_i) - 2\sum_{i=1}^n \ell_{\hat\theta_{n,0}}(X_i)$$

by its Taylor expansion around the maximum likelihood estimator θ = θ̂_n. If θ ↦ ℓ_θ(x)
is twice continuously differentiable for every x, then there exists a vector θ̃_n between θ̂_{n,0} and θ̂_n such that the preceding display is equal to

$$-2(\hat\theta_{n,0}-\hat\theta_n)^T\sum_{i=1}^n \dot\ell_{\hat\theta_n}(X_i) - (\hat\theta_{n,0}-\hat\theta_n)^T\sum_{i=1}^n \ddot\ell_{\tilde\theta_n}(X_i)\,(\hat\theta_{n,0}-\hat\theta_n).$$

Because θ̂_n is the maximum likelihood estimator in the unrestrained model, the linear term in this expansion vanishes as soon as θ̂_n is an inner point of Θ. If the averages −n^{-1} Σ_i ℓ̈_{θ̃_n}(X_i) converge in probability to the Fisher information matrix I_ϑ, and the sequence √n(θ̂_{n,0} − θ̂_n) is bounded in probability, then we obtain the approximation

$$\Lambda_n = n(\hat\theta_{n,0}-\hat\theta_n)^T I_\vartheta(\hat\theta_{n,0}-\hat\theta_n) + o_P(1). \qquad (16.3)$$
In view of the results of Chapter 5, the latter conditions are reasonable if ϑ ∈ Θ_0, for then both θ̂_n and θ̂_{n,0} can be expected to be √n-consistent. The preceding approximation, if it can be justified, sheds some light on the quality of the likelihood ratio test. It shows that, asymptotically, the likelihood ratio test measures a certain distance between the maximum likelihood estimators under the null and the full hypotheses. Such a procedure is intuitively reasonable, even though many other distance measures could be used as well. The use of the likelihood ratio statistic entails a choice as to how to weigh the different "directions" in which the estimators may differ, and thus a choice of weights for "distributing power" over different deviations. This is further studied in section 16.4.
If the null hypothesis is a single point Θ_0 = {θ_0}, then θ̂_{n,0} = θ_0, and the quadratic form in the preceding display reduces under H_0: θ = θ_0 (i.e., ϑ = θ_0) to h_n^T I_ϑ h_n for h_n = √n(θ̂_n − ϑ). In view of the results of Chapter 5, the sequence h_n can be expected to converge in distribution to a variable h with a normal N(0, I_ϑ^{-1})-distribution. Then the sequence Λ_n converges under the null hypothesis in distribution to the quadratic form h^T I_ϑ h. This is the squared length of the standard normal vector I_ϑ^{1/2}h, and it possesses a chi-square distribution with k degrees of freedom. Thus the chi-square approximation announced in the introduction follows.
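A Monte Carlo sketch of this chi-square limit, in the artificial case of a N(θ, I_2) sample with H_0: θ = 0, where the likelihood ratio statistic works out to exactly n‖X̄_n‖²:

```python
import numpy as np
from scipy.stats import chi2

# For X_1, ..., X_n i.i.d. N(theta, I_2) and H0: theta = 0, the LR
# statistic is Lambda_n = n * ||mean||^2, chi-square with k = 2 degrees
# of freedom under H0.  We check the rejection rate of the chi-square
# critical value by simulation.
rng = np.random.default_rng(4)
k, n, reps = 2, 200, 5000
xbar = rng.normal(size=(reps, n, k)).mean(axis=1)
lam = n * (xbar ** 2).sum(axis=1)              # LR statistics under H0

# Empirical level at the upper 5% chi2_k quantile should be close to 5%.
level = (lam >= chi2.ppf(0.95, df=k)).mean()
```
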
The situation is more complicated if the null hypothesis is composite. If the sequence √n(θ̂_{n,0} − ϑ, θ̂_n − ϑ) converges jointly to a variable (h_0, h), then the sequence Λ_n is asymptotically distributed as (h − h_0)^T I_ϑ (h − h_0). A null hypothesis Θ_0 that is (a segment of) a lower-dimensional affine linear subspace is itself a "regular" parametric model. If it contains ϑ as a relative inner point, then the maximum likelihood estimator θ̂_{n,0} may be expected to be asymptotically normal within this affine subspace, and the pair √n(θ̂_{n,0} − ϑ, θ̂_n − ϑ) may be expected to be jointly asymptotically normal. Then the likelihood ratio statistic is asymptotically distributed as a quadratic form in normal variables. Closer inspection shows that this quadratic form possesses a chi-square distribution with k − l degrees of freedom, where k and l are the dimensions of the full and null hypotheses. In comparison with the case of a simple null hypothesis, l degrees of freedom are "lost."
Because we shall rigorously derive the limit distribution by a different approach in the next section, we make this argument precise only in the particular case that the null hypothesis Θ_0 consists of all points (θ_1, ..., θ_l, 0, ..., 0), as θ ranges over an open subset Θ of ℝ^k. Then the score function for θ under the null hypothesis consists of the first l coordinates of the score function ℓ̇_ϑ for the whole model, and the information matrix under the null hypothesis is equal to the (l × l) principal submatrix of I_ϑ. Write these as ℓ̇_{ϑ,≤l} and I_{ϑ,≤l,≤l}, respectively, and use a similar partitioning notation for other vectors and matrices.
Under regularity conditions we have the linear approximations (see Theorem 5.39)

$$\sqrt n(\hat\theta_n-\vartheta) = I_\vartheta^{-1}\frac{1}{\sqrt n}\sum_{i=1}^n \dot\ell_\vartheta(X_i) + o_P(1), \qquad \sqrt n(\hat\theta_{n,0,\le l}-\vartheta_{\le l}) = I_{\vartheta,\le l,\le l}^{-1}\frac{1}{\sqrt n}\sum_{i=1}^n \dot\ell_{\vartheta,\le l}(X_i) + o_P(1).$$

Given these approximations, the multivariate central limit theorem and Slutsky's lemma yield the joint asymptotic normality of the maximum likelihood estimators. From the form of the asymptotic covariance matrix we see, after some matrix manipulation, that

$$n(\hat\theta_{n,0}-\hat\theta_n)^T I_\vartheta(\hat\theta_{n,0}-\hat\theta_n) = n\,\hat\theta_{n,>l}^T\bigl((I_\vartheta^{-1})_{>l,>l}\bigr)^{-1}\hat\theta_{n,>l} + o_P(1). \qquad (16.4)$$

The matrix (I_ϑ^{-1})_{>l,>l} is the asymptotic covariance matrix of the sequence √n θ̂_{n,>l}, whence we obtain an asymptotic chi-square distribution with k − l degrees of freedom, by the same argument as before.
We close this section by relating the likelihood ratio statistic to two other test statistics. Under the simple null hypothesis Θ_0 = {θ_0}, the likelihood ratio statistic is asymptotically equivalent to both the maximum likelihood statistic (or Wald statistic) and the score statistic. These are given by

$$n(\hat\theta_n-\theta_0)^T I_{\theta_0}(\hat\theta_n-\theta_0) \qquad\text{and}\qquad \frac1n\Bigl[\sum_{i=1}^n\dot\ell_{\theta_0}(X_i)\Bigr]^T I_{\theta_0}^{-1}\Bigl[\sum_{i=1}^n\dot\ell_{\theta_0}(X_i)\Bigr],$$

respectively. The Wald statistic is a natural statistic, but it is often criticized for necessarily yielding ellipsoidal confidence sets, even if the data are not symmetric. The score statistic has the advantage that calculation of the supremum of the likelihood is unnecessary, but it appears to perform less well for smaller values of n.
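For a concrete comparison, consider the exponential model with rate θ and a simple null H_0: θ = θ_0, where I_θ = 1/θ² and the maximum likelihood estimator is 1/X̄_n; all three statistics then have closed forms (the data below are simulated for illustration):

```python
import numpy as np

# Likelihood ratio, Wald, and score statistics for H0: theta = theta0
# in the exponential(rate theta) model.  With theta_hat = 1/mean:
#   LR    = 2n [log(theta_hat/theta0) - 1 + theta0 * mean]
#   Wald  = n (theta_hat - theta0)^2 / theta0^2        (I_theta0 = 1/theta0^2)
#   Score = n (1 - theta0 * mean)^2
rng = np.random.default_rng(5)
theta0 = 1.0
x = rng.exponential(scale=1.0 / theta0, size=500)   # data under H0
n, xbar = len(x), x.mean()
theta_hat = 1.0 / xbar

lr    = 2 * n * (np.log(theta_hat / theta0) - 1 + theta0 * xbar)
wald  = n * (theta_hat - theta0) ** 2 / theta0 ** 2
score = n * (1 - theta0 * xbar) ** 2

# Under H0 each statistic is approximately chi-square with 1 degree of
# freedom, and the three values agree to first order for large n.
```
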
In the case of a composite hypothesis, a Wald statistic is given in (16.4), and a score statistic can be obtained by substituting in (16.4) the approximation

$$n\,\hat\theta_{n,>l} \approx (I_\vartheta^{-1})_{>l,>l}\sum_{i=1}^n \dot\ell_{\hat\theta_{n,0},>l}(X_i).$$

(This approximation is obtainable by linearizing Σ_i(ℓ̇_{θ̂_n} − ℓ̇_{θ̂_{n,0}})(X_i).) In both cases we also replace the unknown parameter ϑ by an estimator.
16.3 Using Local Asymptotic Normality

In Chapter 7 it is seen that, for large n, the rescaled likelihood ratio process h ↦ Π_{i=1}^n (p_{ϑ+h/√n}/p_ϑ)(X_i) is similar to the likelihood ratio process of the normal experiment (N(h, I_ϑ^{-1}) : h ∈ ℝ^k). This suggests that, if the local parameter sets H_n = √n(Θ − ϑ) and H_{n,0} = √n(Θ_0 − ϑ) converge in a suitable sense to sets H and H_0, the sequence Λ_n converges in distribution to the random variable Λ obtained by substituting the normal likelihood ratios, given by

$$\Lambda = 2\sup_{h\in H}\log\frac{dN(h, I_\vartheta^{-1})}{dN(0, I_\vartheta^{-1})}(X) - 2\sup_{h\in H_0}\log\frac{dN(h, I_\vartheta^{-1})}{dN(0, I_\vartheta^{-1})}(X).$$

This is exactly the likelihood ratio statistic for testing the null hypothesis H_0: h ∈ H_0 versus the alternative H_1: h ∈ H − H_0 based on the observation X in the normal experiment. Because the latter experiment is simple, this heuristic is useful not only to derive the asymptotic distribution of the sequence Λ_n, but also to understand the asymptotic quality of the corresponding sequence of tests.
The likelihood ratio statistic for the normal experiment is

$$\Lambda = \bigl\|I_\vartheta^{1/2}X - I_\vartheta^{1/2}H_0\bigr\|^2 - \bigl\|I_\vartheta^{1/2}X - I_\vartheta^{1/2}H\bigr\|^2. \qquad (16.5)$$

16.6 Lemma. Let X be a k-dimensional random vector with a standard normal distribution and let H_0 be an l-dimensional linear subspace of ℝ^k. Then ‖X − H_0‖² is chi square-distributed with k − l degrees of freedom.
Proof. Take an orthonormal base of ℝ^k such that the first l elements span H_0. By Pythagoras' theorem, the squared distance of a vector z to the space H_0 equals the sum of squares Σ_{i>l} z_i² of its last k − l coordinates with respect to this basis. A change of base corresponds to an orthogonal transformation of the coordinates. Because the standard normal distribution is invariant under orthogonal transformations, the coordinates of X with respect to any orthonormal base are independent standard normal variables. Thus ‖X − H_0‖² = Σ_{i>l} X_i² is chi square-distributed with k − l degrees of freedom. ∎
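A Monte Carlo sketch of the lemma (the subspace below, spanned by the columns of a random matrix, is an arbitrary illustrative choice):

```python
import numpy as np

# Squared distance of a standard normal vector in R^k to an
# l-dimensional linear subspace H0, which by Lemma 16.6 should be
# chi-square with k - l degrees of freedom.
rng = np.random.default_rng(6)
k, l, reps = 5, 2, 4000
A = rng.normal(size=(k, l))                    # H0 = column span of A
P = A @ np.linalg.solve(A.T @ A, A.T)          # orthogonal projection onto H0

X = rng.normal(size=(reps, k))
resid = X - X @ P                              # P is symmetric
dist2 = (resid ** 2).sum(axis=1)               # squared distances to H0

# E dist2 should be close to k - l = 3, the chi-square(k - l) mean.
mean_dist2 = dist2.mean()
```
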
If ϑ is an inner point of Θ, then the set H is the full space ℝ^k and the second term on the right of (16.5) is zero. Thus, if the local null parameter spaces √n(Θ_0 − ϑ) converge to a linear subspace of dimension l, then the asymptotic null distribution of the likelihood ratio statistic is chi-square with k − l degrees of freedom.
The following theorem makes the preceding informal derivation rigorous under the same mild conditions employed to obtain the asymptotic normality of the maximum likelihood estimator in Chapter 5. It uses the following notion of convergence of sets. Write H_n → H if H is the set of all limits lim h_n of converging sequences h_n with h_n ∈ H_n for every n and, moreover, the limit h = lim_i h_{n_i} of every converging sequence h_{n_i} with h_{n_i} ∈ H_{n_i} for every i is contained in H.
16.3 Using Local Asymptotic Normality 233
16.7 Theorem. Let the model (P_θ : θ ∈ Θ) be differentiable in quadratic mean at ϑ with nonsingular Fisher information matrix, and suppose that for every θ_1 and θ_2 in a neighborhood of ϑ and for a measurable function ℓ̇ such that P_ϑ ℓ̇² < ∞,

$$\bigl|\log p_{\theta_1}(x) - \log p_{\theta_2}(x)\bigr| \le \dot\ell(x)\,\|\theta_1-\theta_2\|.$$

If the maximum likelihood estimators θ̂_{n,0} and θ̂_n are consistent under ϑ and the sets H_{n,0} and H_n converge to sets H_0 and H, then the sequence of likelihood ratio statistics Λ_n converges under ϑ + h/√n in distribution to Λ given in (16.5), for X normally distributed with mean h and covariance matrix I_ϑ^{-1}.
*Proof. Let 𝔾_n = √n(ℙ_n − P_ϑ) be the empirical process, and define stochastic processes Z_n by

$$Z_n(h) = n\mathbb P_n\log\frac{p_{\vartheta+h/\sqrt n}}{p_\vartheta} - h^T\mathbb G_n\dot\ell_\vartheta + \tfrac12 h^T I_\vartheta h.$$

The differentiability of the model implies that Z_n(h) → 0 in probability, for every h. In the proof of Theorem 7.12 this is strengthened to the uniform convergence

$$\sup_{\|h\|\le M}\bigl|Z_n(h)\bigr| \xrightarrow{P} 0, \qquad \text{every } M.$$

Furthermore, it follows from this proof that both θ̂_{n,0} and θ̂_n are √n-consistent under ϑ. (These statements can also be proved by elementary arguments, but under stronger regularity conditions.)

The preceding display is also valid for every sequence M_n that increases to ∞ sufficiently slowly. Fix such a sequence. By the √n-consistency, the estimators θ̂_{n,0} and θ̂_n are contained in the ball of radius M_n/√n around ϑ with probability tending to 1. Thus, the limit distribution of Λ_n does not change if we replace the sets H_n and H_{n,0} in its definition by the sets H_n ∩ ball(0, M_n) and H_{n,0} ∩ ball(0, M_n). These "truncated" sequences of sets still converge to H and H_0, respectively. Now, by the uniform convergence to zero of the processes Z_n(h) on H_n and H_{n,0}, and simple algebra,

$$\begin{aligned}
\Lambda_n &= 2\sup_{h\in H_n} n\mathbb P_n\log\frac{p_{\vartheta+h/\sqrt n}}{p_\vartheta} - 2\sup_{h\in H_{n,0}} n\mathbb P_n\log\frac{p_{\vartheta+h/\sqrt n}}{p_\vartheta} \\
&= 2\sup_{h\in H_n}\Bigl(h^T\mathbb G_n\dot\ell_\vartheta - \tfrac12 h^T I_\vartheta h\Bigr) - 2\sup_{h\in H_{n,0}}\Bigl(h^T\mathbb G_n\dot\ell_\vartheta - \tfrac12 h^T I_\vartheta h\Bigr) + o_P(1).
\end{aligned}$$
16.8 Example (Generalized linear models). In a generalized linear model a typical observation (X, Y), consisting of a "covariate vector" X and a "response" Y, possesses a density of the form

$$p_\beta(x, y) = e^{\bigl(y\,k(\beta^T x) - b\circ k(\beta^T x)\bigr)/\phi}\, c_\phi(y)\, p_X(x).$$

(It may be more natural to model the covariates as (observed) constants, but to fit the model into our i.i.d. setup, we consider them to be a random sample from a density p_X.) Thus, given X, the variable Y follows an exponential family density e^{(yθ − b(θ))/φ} c_φ(y) with parameters θ = k(β^T X) and φ. Using the identities for exponential families based on Lemma 4.5, we obtain

$$\mathrm E_{\beta,\phi}(Y\mid X) = b'\circ k(\beta^T X), \qquad \operatorname{var}_{\beta,\phi}(Y\mid X) = \phi\, b''\circ k(\beta^T X).$$

The function (b' ∘ k)^{-1} is called the link function of the model and is assumed known. To make the parameter β identifiable, we assume that the matrix E XX^T exists and is nonsingular.
To judge the goodness-of-fit of a generalized linear model to a given set of data (X_1, Y_1), ..., (X_n, Y_n), it is customary to calculate, for fixed φ, the log likelihood ratio statistic for testing the model as described previously within the model in which each Y_i, given X_i, still follows the given exponential family density, but in which the parameters θ (and hence the conditional means E(Y_i | X_i)) are allowed to be arbitrary values θ_i, unrelated across the n observations (X_i, Y_i). This statistic, with the parameter φ set to 1, is known as the deviance, and takes the form, with β̂_n the maximum likelihood estimator for β,†

$$D(Y_n, \hat\mu) = 2\sum_{i=1}^n\bigl[Y_i\tilde\theta_i - b(\tilde\theta_i) - Y_i\hat\theta_i + b(\hat\theta_i)\bigr],$$

where θ̂_i = k(β̂_n^T X_i) and θ̃_i is the saturated value, solving b'(θ̃_i) = Y_i.
In our present setup, the codimension of the null hypothesis within the "full model" is equal to n − k, if β is k-dimensional, and hence the preceding theory does not apply to the deviance. (This could be different if there were multiple responses for every given covariate and the asymptotics were relative to the number of responses.) On the other hand, the preceding theory allows an "analysis of deviance" to test nested sequences of regression models corresponding to inclusion or exclusion of a given covariate (i.e., column of the regression matrix). For instance, if D_i(Y_n, μ̂^{(i)}) is the deviance of the model in which the (i+1)th, ..., kth coordinates of β are a priori set to zero, then the difference D_{i−1}(Y_n, μ̂^{(i−1)}) − D_i(Y_n, μ̂^{(i)}) is the log likelihood ratio statistic for testing that the ith coordinate of β is zero within the model in which all higher coordinates are zero. According to the theory of this chapter, φ^{-1} times this statistic is asymptotically chi square-distributed with one degree of freedom under the smaller of the two models.
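A minimal sketch of such an analysis of deviance, for a Poisson log-linear model with canonical link and φ = 1 (the design, coefficients, and the bare-bones Newton-Raphson fitter are illustrative; statistical software computes these deviances directly):

```python
import numpy as np
from scipy.stats import chi2

# Fit a Poisson log-linear model by Newton-Raphson; the log likelihood
# is strictly concave for the canonical link, so Newton converges.
def fit_poisson(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T * mu @ X, X.T @ (y - mu))
    return beta

# Poisson deviance: 2 * sum [ y log(y/mu) - (y - mu) ], with y log y = 0 at y = 0.
def deviance(X, y, beta):
    mu = np.exp(X @ beta)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2 * (term - (y - mu)).sum()

rng = np.random.default_rng(7)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(0.3 + 0.0 * X[:, 1]))    # second coefficient truly zero

d_small = deviance(X[:, :1], y, fit_poisson(X[:, :1], y))
d_full  = deviance(X, y, fit_poisson(X, y))
drop = d_small - d_full                         # LR statistic for beta_2 = 0
p_value = chi2.sf(drop, df=1)                   # chi-square(1) approximation
```
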
To see this formally, it suffices to verify the conditions of the preceding theorem. Using the identities for exponential families based on Lemma 4.5, the score function and Fisher information matrix can be computed to be

$$\dot\ell_\beta(x, y) = \frac{y - b'\circ k(\beta^T x)}{\phi}\,k'(\beta^T x)\,x, \qquad I_\beta = \frac{1}{\phi}\,\mathrm E\, b''\circ k(\beta^T X)\,k'(\beta^T X)^2\,XX^T.$$

The score function is dominated by a function that is square-integrable, for every small neighborhood U, and the Fisher information is continuous. Thus, the local conditions on the model are easily satisfied.

† The arguments Y_n and μ̂ of D are the vectors of estimated (conditional) means of Y given the full model and the generalized linear model, respectively. Thus μ̂_i = b' ∘ k(β̂_n^T X_i).
Proving the consistency of the maximum likelihood estimator may be more involved, depending on the link function. If the parameter β is restricted to a compact set, then most approaches to proving consistency apply without further work, including Wald's method, Theorem 5.7, and the classical approach of section 5.7. The last is particularly attractive in the case of canonical link functions, which correspond to setting k equal to the identity. Then the second-derivative matrix ℓ̈_β is equal to −b''(β^T x)xx^T, whence the likelihood is a strictly concave function of β whenever the observed covariate vectors are of full rank. Consequently, the point of maximum of the likelihood function is unique and hence consistent under the conditions of Theorem 5.14.† □
16.9 Example (Location scale). Suppose we observe a sample from the density f((x − μ)/σ)/σ for a given probability density f, and a location-scale parameter θ = (μ, σ) ranging over the set Θ = ℝ × ℝ⁺. We consider two testing problems.

(i). Testing H_0: μ = 0 versus H_1: μ ≠ 0 corresponds to setting Θ_0 = {0} × ℝ⁺. For a given point ϑ = (0, σ) from the null hypothesis the set √n(Θ_0 − ϑ) equals {0} × (−√nσ, ∞) and converges to the linear space {0} × ℝ. Under regularity conditions on f, the sequence of likelihood ratio statistics is asymptotically chi square-distributed with 1 degree of freedom.
(ii). Testing H_0: μ ≤ 0 versus H_1: μ > 0 corresponds to setting Θ_0 = (−∞, 0] × ℝ⁺. For a given point ϑ = (0, σ) on the boundary of the null hypothesis, the sets √n(Θ_0 − ϑ) converge to H_0 = (−∞, 0] × ℝ. In this case, the limit distribution of the likelihood ratio statistics is not chi-square but equals the distribution of the squared distance of a standard normal vector to the set I_ϑ^{1/2}H_0 = {h : ⟨h, I_ϑ^{-1/2}e_1⟩ ≤ 0}: The latter is a half-space with boundary line through the origin. Because a standard normal vector is rotationally symmetric, the distribution of its distance to a half-space of this type does not depend on the orientation of the half-space. Thus the limit distribution is equal to the distribution of the squared distance of a standard normal vector to the half-space {h : h_2 ≤ 0}: the distribution of (Z ∨ 0)² for a standard normal variable Z. Because P((Z ∨ 0)² > c) = ½P(Z² > c) for every c > 0, we must choose the critical value of the test equal to the upper 2α-quantile of the chi-square distribution with 1 degree of freedom. Then the asymptotic level of the test is α for every ϑ on the boundary of the null hypothesis (provided α < 1/2).
For a point ϑ in the interior of the null hypothesis H_0: μ ≤ 0, the sets √n(Θ_0 − ϑ) converge to ℝ × ℝ, and the sequence of likelihood ratio statistics converges in distribution to the squared distance to the whole space, which is zero. This means that the probability of an error of the first kind converges to zero for every ϑ in the interior of the null hypothesis. □
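A Monte Carlo sketch of this boundary case, checking that the upper 2α chi-square quantile indeed gives asymptotic level α:

```python
import numpy as np
from scipy.stats import chi2

# The limit variable is (Z v 0)^2 for Z standard normal, so
# P((Z v 0)^2 > c) = P(Z^2 > c) / 2, and a level-alpha test uses the
# upper 2*alpha quantile of the chi-square distribution with 1 df.
rng = np.random.default_rng(8)
z = rng.normal(size=200_000)
stat = np.maximum(z, 0.0) ** 2                 # draws from the limit law

alpha = 0.05
crit = chi2.ppf(1 - 2 * alpha, df=1)           # upper 2*alpha quantile
level = (stat > crit).mean()                   # rejection rate, close to alpha
```
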
16.10 Example (Testing a ball). Suppose we wish to test the null hypothesis H_0: ‖θ‖ ≤ 1 that the parameter belongs to the unit ball versus the alternative H_1: ‖θ‖ > 1 that this is not the case.

If the true parameter ϑ belongs to the interior of the null hypothesis, then the sets √n(Θ_0 − ϑ) converge to the whole space, whence the sequence of likelihood ratio statistics converges in distribution to zero.

For ϑ on the boundary of the unit ball, the sets √n(Θ_0 − ϑ) grow to the half-space H_0 = {h : ⟨h, ϑ⟩ ≤ 0}. The sequence of likelihood ratio statistics converges in distribution to the distribution of the squared distance of a standard normal vector to the half-space I_ϑ^{1/2}H_0 = {h : ⟨h, I_ϑ^{-1/2}ϑ⟩ ≤ 0}. By the same argument as in the preceding example, this is the distribution of (Z ∨ 0)² for a standard normal variable Z. Once again we find an asymptotic level-α test by using a 2α-quantile. □
16.11 Example (Testing a range). Suppose that the null hypothesis is equal to the image $\Theta_0 = g(T)$ of an open subset $T$ of a Euclidean space of dimension $l \le k$. If $g$ is a homeomorphism, continuously differentiable, and of full rank, then the sets $\sqrt{n}\bigl(\Theta_0 - g(\tau)\bigr)$ converge to the range of the derivative of $g$ at $\tau$, which is a subspace of dimension $l$.
Indeed, for any $\eta \in \mathbb{R}^l$ the vectors $\tau + \eta/\sqrt{n}$ are contained in $T$ for sufficiently large $n$, and the sequence $\sqrt{n}\bigl(g(\tau + \eta/\sqrt{n}) - g(\tau)\bigr)$ converges to $g_\tau' \eta$. Furthermore, if a subsequence of $\sqrt{n}\bigl(g(t_n) - g(\tau)\bigr)$ converges to a point $h$ for a given sequence $t_n$ in $T$, then the corresponding subsequence of $\sqrt{n}(t_n - \tau)$ converges to $\eta = (g^{-1})_{g(\tau)}' h$ by the differentiability of the inverse mapping $g^{-1}$, and hence $\sqrt{n}\bigl(g(t_n) - g(\tau)\bigr) \to g_\tau' \eta$. (We can use the rank theorem to give a precise definition of the differentiability of the map $g^{-1}$ on the manifold $g(T)$.) □
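The first convergence in this argument is easy to check numerically for a concrete map. A minimal sketch, using the (hypothetical) circle parametrization $g(\tau) = (\cos\tau, \sin\tau)$, for which $g_\tau'\eta = \eta(-\sin\tau, \cos\tau)$:

```python
import math

def g(t):
    # hypothetical smooth, full-rank parametrization of the unit circle
    return (math.cos(t), math.sin(t))

def g_prime(t):
    # its derivative at t
    return (-math.sin(t), math.cos(t))

tau, eta = 0.7, 1.3
limit = tuple(eta * d for d in g_prime(tau))   # the claimed limit g'_tau eta

def scaled_difference(n):
    # sqrt(n) * (g(tau + eta/sqrt(n)) - g(tau))
    rn = math.sqrt(n)
    return tuple(rn * (a - b) for a, b in zip(g(tau + eta / rn), g(tau)))

approx = scaled_difference(10**8)
```

For large $n$ the scaled difference agrees with $g_\tau'\eta$ up to the second-order term of size $O(1/\sqrt{n})$.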
The variable $\Lambda$ is the squared distance of the vector $Z$ to the affine subspace $-I_\vartheta^{1/2} h + I_\vartheta^{1/2} H_0$. By the rotational invariance of the normal distribution, the distribution of $\Lambda$ does not depend on the orientation of the affine subspace, but only on its codimension and its distance $\delta = \bigl\| I_\vartheta^{1/2} h - I_\vartheta^{1/2} H_0 \bigr\|$ to the origin. This distribution is known as the noncentral chi-square distribution with noncentrality parameter $\delta$. Thus the local limit power functions can be expressed in the tail probabilities of noncentral chi-square distributions, with a noncentrality parameter that depends on the direction of the local alternative $h$.
The test performs best for departures from the null hypothesis in the direction of the eigenvector corresponding to the largest eigenvalue. Even though the likelihood ratio test gives power in all directions, it does not treat the directions equally. This may be worrisome if the eigenvalues are very inhomogeneous.
Further insight is gained by comparing the likelihood ratio test to tests that are designed to be optimal in given directions. Let $X$ be an observation in the limit experiment, having a $N(h, I_\vartheta^{-1})$-distribution. The test that rejects the null hypothesis $H_0 = \{0\}$ if $|\sqrt{\lambda_e}\, e^T X| > z_{\alpha/2}$, for $e$ a unit-length eigenvector of $I_\vartheta$ with eigenvalue $\lambda_e$, has level $\alpha$ and power function
$$h \mapsto \Phi\bigl(-z_{\alpha/2} + \sqrt{\lambda_e}\, e^T h\bigr) + \Phi\bigl(-z_{\alpha/2} - \sqrt{\lambda_e}\, e^T h\bigr).$$
For large $k$ this is a considerably better power function than the power function of the likelihood ratio test (Figure 16.1), but the forms of the power functions are similar. In particular, the optimal power functions show a similar dependence on the eigenvalues of
Figure 16.1. The functions $\mu^2 \mapsto P\bigl(\chi_k^2(\mu) > \chi_{k,\alpha}^2\bigr)$ for $k = 1$ (solid), $k = 5$ (dotted), and $k = 15$ (dashed), respectively, for $\alpha = 0.05$.
the covariance matrix. In this sense, the apparently unequal distribution of power over the
different directions is not unfair in that it reflects the intrinsic difficulty of detecting changes
in different directions. This is not to say that we should never change the (automatic)
emphasis given by the likelihood ratio test.
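Power curves of the kind shown in Figure 16.1 can be reproduced by simulation. A rough Monte Carlo sketch (standard library only; all values are approximate) of $\delta \mapsto P\bigl(\chi_k^2(\delta) > \chi_{k,\alpha}^2\bigr)$:

```python
import random

def chi2_upper_quantile(k, alpha, n_sims=50_000, seed=2):
    # Monte Carlo upper alpha-point of the central chi-square_k distribution
    rng = random.Random(seed)
    draws = sorted(sum(rng.gauss(0, 1) ** 2 for _ in range(k))
                   for _ in range(n_sims))
    return draws[int((1 - alpha) * n_sims)]

def noncentral_power(k, delta, crit, n_sims=50_000, seed=3):
    # P(chi-square_k(delta) > crit): squared norm of a standard normal
    # vector whose mean has length delta (shift the first coordinate).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        s = (rng.gauss(0, 1) + delta) ** 2
        s += sum(rng.gauss(0, 1) ** 2 for _ in range(k - 1))
        hits += s > crit
    return hits / n_sims

k, alpha = 5, 0.05
crit = chi2_upper_quantile(k, alpha)
powers = [noncentral_power(k, d, crit) for d in (0.0, 1.0, 2.0, 3.0)]
```

At $\delta = 0$ the power reduces to the level $\alpha$, and it increases monotonically in the noncentrality, as in the figure.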
$$\frac{r\Lambda_n}{E_{\theta_0}\Lambda_n}.$$
The distribution of this statistic is next approximated by a chi-square distribution with $r$ degrees of freedom. Unfortunately, the mean $E_{\theta_0}\Lambda_n$ may be hard to calculate, and it may depend on an unknown null parameter $\theta_0$. Therefore, one first obtains an expression for the mean of the form
$$E_{\theta_0}\Lambda_n = r\Bigl(1 + \frac{b(\theta_0)}{n} + \cdots\Bigr).$$
Next, with $\hat{b}_n$ an appropriate estimator for the parameter $b(\theta_0)$, the corrected statistic takes the form
$$\frac{\Lambda_n}{1 + \hat{b}_n/n}.$$
The surprising fact is that this recipe works in some generality. Ordinarily, improved approximations would be obtained by writing down and next inverting an Edgeworth expansion of the probabilities $P(\Lambda_n \le x)$; the correction would depend on $x$. In the present case this is equivalent to a simple correction of the mean, independent of $x$. The technical reason is that the polynomial in $x$ in the $(1/n)$-term of the Edgeworth expansion is of degree 1.†
† For a further discussion, see [5], [9], and [83], and the references cited there.
16.6 Bahadur Efficiency
in Chapter 7. The purpose of this section is to show that, under regularity conditions, the
likelihood ratio test is asymptotically optimal in a different setting, the one of Bahadur
efficiency.
For simplicity we restrict ourselves to the testing of finite hypotheses. Given finite sets $\mathcal{P}_0$ and $\mathcal{P}_1$ of probability measures on a measurable space $(\mathcal{X}, \mathcal{A})$ and a random sample $X_1, \ldots, X_n$, we study the log likelihood ratio statistic
$$\Lambda_n = \log \frac{\sup_{Q \in \mathcal{P}_1} \prod_{i=1}^n q(X_i)}{\sup_{P \in \mathcal{P}_0} \prod_{i=1}^n p(X_i)}.$$
More general hypotheses can be treated, under regularity conditions, by finite approximation
(see e.g., Section 10 of [4]).
The observed level of a test that rejects for large values of a statistic $T_n$ is defined as
$$L_n = \sup_{P \in \mathcal{P}_0} P\bigl(T_n \ge t\bigr)\Big|_{t = T_n}.$$
The test that rejects the null hypothesis if $L_n \le \alpha$ has level $\alpha$. The power of this test is maximal if $L_n$ is "minimal" under the alternative (in a stochastic sense). The Bahadur slope under the alternative $Q$ is defined as the limit in probability under $Q$ (if it exists) of the sequence $(-2/n)\log L_n$. If this is "large," then $L_n$ is small and hence we prefer sequences of test statistics that have a large slope. The same conclusion is reached in
section 14.4 by considering the asymptotic relative Bahadur efficiencies. It is indicated
there that the Neyman-Pearson tests for testing the simple null and alternative hypotheses
$P$ and $Q$ have Bahadur slope $-2Q\log(p/q)$. Because these are the most powerful tests, this is the maximal slope for testing $P$ versus $Q$. (We give a precise proof in the following theorem.) Consequently, the slope for a general null hypothesis cannot be bigger than $\inf_{P \in \mathcal{P}_0} -2Q\log(p/q)$. The sequence of likelihood ratio statistics attains equality, even if the alternative hypothesis is composite.
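For finite hypotheses this optimal slope is easy to evaluate. A small sketch (function names are ours) that computes $2\inf_{P\in\mathcal{P}_0} Q\log(q/p)$ for distributions given as probability mass functions on a finite sample space:

```python
import math

def kl(q, p):
    # Q log(q/p), interpreted as infinity if Q charges a point that p misses
    total = 0.0
    for x, qx in q.items():
        if qx == 0:
            continue
        if p.get(x, 0) == 0:
            return math.inf
        total += qx * math.log(qx / p[x])
    return total

def optimal_bahadur_slope(null_pmfs, q):
    # 2 inf_{P in P0} Q log(q/p); the infimum is a minimum since P0 is finite
    return 2 * min(kl(q, p) for p in null_pmfs)

# illustrative example: two coins in the null, a 0.75-coin as alternative
null_pmfs = [{"H": 0.5, "T": 0.5}, {"H": 0.6, "T": 0.4}]
q = {"H": 0.75, "T": 0.25}
slope = optimal_bahadur_slope(null_pmfs, q)
```

The slope is attained at the null distribution closest to $Q$ in Kullback-Leibler divergence, here the 0.6-coin.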
16.12 Theorem. The Bahadur slope of any sequence of test statistics for testing an arbitrary null hypothesis $H_0: P \in \mathcal{P}_0$ versus a simple alternative $H_1: P = Q$ is bounded above by $\inf_{P \in \mathcal{P}_0} -2Q\log(p/q)$, for any probability measure $Q$. If $\mathcal{P}_0$ and $\mathcal{P}_1$ are finite sets of probability measures, then the sequence of likelihood ratio statistics for testing $H_0: P \in \mathcal{P}_0$ versus $H_1: P \in \mathcal{P}_1$ attains equality for every $Q \in \mathcal{P}_1$.
Proof. Because the observed level is a supremum over $\mathcal{P}_0$, it suffices to prove the upper bound of the theorem for a simple null hypothesis $\mathcal{P}_0 = \{P\}$. If $-2Q\log(p/q) = \infty$, then there is nothing to prove. Thus, we can assume without loss of generality that $Q$ is absolutely continuous with respect to $P$. Write $\Lambda_n$ for $\log \prod_{i=1}^n (q/p)(X_i)$. Then, for any constants $B > A > Q\log(q/p)$,
$$P_Q\bigl(L_n < e^{-nB}, \Lambda_n < nA\bigr) = E_P 1\bigl\{L_n < e^{-nB}, \Lambda_n < nA\bigr\} e^{\Lambda_n} \le e^{nA} P_P\bigl(L_n < e^{-nB}\bigr).$$
Because $L_n$ is superuniformly distributed under the null hypothesis, the last expression is bounded above by $\exp\bigl(-n(B - A)\bigr)$. Thus, the sum of the probabilities on the left side over $n \in \mathbb{N}$ is finite, whence $-(2/n)\log L_n \le 2B$ or $\Lambda_n \ge nA$ for all sufficiently large $n$, almost surely under $Q$, by the Borel-Cantelli lemma. Because the sequence $n^{-1}\Lambda_n$
converges almost surely under $Q$ to $Q\log(q/p) < A$, by the strong law of large numbers, the second possibility can occur only finitely many times. It follows that $-(2/n)\log L_n \le 2B$ eventually, almost surely under $Q$. This having been established for any $B > Q\log(q/p)$, the proof of the first assertion is complete.
To prove that the likelihood ratio statistic attains equality, it suffices to prove that its slope is bigger than the upper bound. Write $\Lambda_n$ for the log likelihood ratio statistic, and write $\sup_P$ and $\sup_Q$ for suprema over the null and alternative hypotheses. Because $(1/n)\Lambda_n$ is bounded above by $\sup_Q \mathbb{P}_n \log(q/p)$, we have, by Markov's inequality,
$$P_P\Bigl(\frac{1}{n}\Lambda_n \ge t\Bigr) \le \sum_{Q \in \mathcal{P}_1} P_P\bigl(\mathbb{P}_n \log(q/p) \ge t\bigr) \le |\mathcal{P}_1|\, e^{-nt} \max_{Q \in \mathcal{P}_1} E_P \prod_{i=1}^n \frac{q}{p}(X_i).$$
The expectation on the right side is the $n$th power of the integral $\int (q/p)\,dP = Q(p > 0) \le 1$. Take logarithms left and right and multiply by $-(2/n)$ to find that
$$-\frac{2}{n}\log P_P\Bigl(\frac{1}{n}\Lambda_n \ge t\Bigr) \ge 2t - \frac{2\log|\mathcal{P}_1|}{n}.$$
Because this is valid uniformly in $t$ and $P$, we can take the infimum over $P$ on the left side; next evaluate the left and right sides at $t = (1/n)\Lambda_n$. By the law of large numbers, $\mathbb{P}_n \log(q/p) \to Q\log(q/p)$ almost surely under $Q$, and this remains valid if we first add the infimum over the (finite) set $\mathcal{P}_0$ on both sides. Thus, the limit inferior of the sequence $(1/n)\Lambda_n \ge \inf_P \mathbb{P}_n \log(q/p)$ is bounded below by $\inf_P Q\log(q/p)$ almost surely under $Q$, where we interpret $Q\log(q/p)$ as $\infty$ if $Q(p = 0) > 0$. Insert this lower bound in the preceding display to conclude that the Bahadur slope of the likelihood ratio statistics is bounded below by $2\inf_P Q\log(q/p)$. ∎
Notes
The classical references on the asymptotic null distribution of likelihood ratio statistics are papers by Chernoff [21] and Wilks [150]. Our main theorem appears to be better than Chernoff's, which uses the "classical regularity conditions" and a different notion of approximation of sets, but is not essentially different. Wilks' treatment would not be acceptable to
present-day referees but maybe is not so different either. He appears to be saying that we
can replace the original likelihood by the likelihood for having observed only the maximum
likelihood estimator (the error is asymptotically negligible), next refers to work by Doob
to infer that this is a Gaussian likelihood, and continues to compute the likelihood ratio
statistic for a Gaussian likelihood, which is easy, as we have seen. The approach using a
Taylor expansion and the asymptotic distributions of both likelihood estimators is one way
to make the argument rigorous, but it seems to hide the original intuition.
Bahadur [3] presented the efficiency of the likelihood ratio statistic at the fifth Berkeley
symposium. Kallenberg [84] shows that the likelihood ratio statistic remains asymptotically
optimal in the setting in which both the desired level and the alternative tend to zero, at least
in exponential families. As the proof of Theorem 16.12 shows, the composite nature of the
alternative hypothesis "disappears" elegantly by taking $(1/n)\log$ of the error probabilities -
too elegantly to attach much value to this type of optimality?
PROBLEMS
1. Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be a sample from the bivariate normal distribution with mean vector $(\mu, \nu)$ and covariance matrix the diagonal matrix with entries $\sigma^2$ and $\tau^2$. Calculate (or characterize) the likelihood ratio statistic for testing $H_0: \mu = \nu$ versus $H_1: \mu \neq \nu$.
2. Let $N$ be a $kr$-dimensional multinomial variable written as a $(k \times r)$ matrix $(N_{ij})$. Calculate the likelihood ratio statistic for testing the null hypothesis of independence $H_0: p_{ij} = p_{i\cdot}\, p_{\cdot j}$ for every $i$ and $j$. Here the dot denotes summation over all columns and rows, respectively. What is the limit distribution under the null hypothesis?
3. Calculate the likelihood ratio statistic for testing $H_0: \mu = \nu$ based on independent samples of size $n$ from multivariate normal distributions $N_r(\mu, \Sigma)$ and $N_r(\nu, \Sigma)$. The matrix $\Sigma$ is unknown. What is the limit distribution under the null hypothesis?
4. Calculate the likelihood ratio statistic for testing $H_0: \mu_1 = \cdots = \mu_k$ based on $k$ independent samples of size $n$ from $N(\mu_i, \sigma^2)$-distributions. What is the asymptotic distribution under the null hypothesis?
5. Show that $\bigl(I_\vartheta^{-1}\bigr)_{>l,>l}$ is the inverse of the matrix $I_{\vartheta,>l,>l} - I_{\vartheta,>l,\le l}\, I_{\vartheta,\le l,\le l}^{-1}\, I_{\vartheta,\le l,>l}$.
6. Study the asymptotic distribution of the sequence $\Lambda_n$ if the true parameter is contained in both the null and alternative hypotheses.
7. Study the asymptotic distribution of the likelihood ratio statistics for testing the hypothesis $H_0: \sigma = -\tau$ based on a sample of size $n$ from the uniform distribution on $[\sigma, \tau]$. Does the asymptotic distribution correspond to a likelihood ratio statistic in a limit experiment?
17
Chi-Square Tests
Proof. There exists an orthogonal matrix $O$ such that $O\Sigma O^T = \operatorname{diag}(\lambda_i)$. Then the vector $OX$ is $N_k\bigl(0, \operatorname{diag}(\lambda_i)\bigr)$-distributed, which is the same as the distribution of the vector $(\sqrt{\lambda_1} Z_1, \ldots, \sqrt{\lambda_k} Z_k)$. Now $\|X\|^2 = \|OX\|^2$ has the same distribution as $\sum (\sqrt{\lambda_i} Z_i)^2$. ∎
We shall show that the sequence $C_n(a)$ converges in distribution to a chi-square distribution if the null hypothesis is true. The practical relevance is that we can use the chi-square table to find critical values for the test. The proof shows why Pearson divided the squares by $na_i$ and did not propose the simpler statistic $\|X_n - na\|^2$.
17.2 Theorem. If the vectors $X_n$ are multinomially distributed with parameters $n$ and $a = (a_1, \ldots, a_k) > 0$, then the sequence $C_n(a)$ converges under $a$ in distribution to the $\chi^2_{k-1}$-distribution.
Proof. The vector $X_n$ can be thought of as the sum of $n$ independent multinomial vectors $Y_1, \ldots, Y_n$ with parameters $1$ and $a = (a_1, \ldots, a_k)$. Then
$$\operatorname{Cov} Y_1 = \begin{pmatrix} a_1(1-a_1) & -a_1 a_2 & \cdots & -a_1 a_k \\ -a_2 a_1 & a_2(1-a_2) & \cdots & -a_2 a_k \\ \vdots & \vdots & \ddots & \vdots \\ -a_k a_1 & -a_k a_2 & \cdots & a_k(1-a_k) \end{pmatrix}.$$
By the multivariate central limit theorem, the sequence $n^{-1/2}(X_n - na)$ converges in distribution to the $N_k(0, \operatorname{Cov} Y_1)$-distribution. Consequently, with $\sqrt{a}$ the vector with coordinates $\sqrt{a_i}$,
$$\Bigl(\frac{X_{n,1} - na_1}{\sqrt{na_1}}, \ldots, \frac{X_{n,k} - na_k}{\sqrt{na_k}}\Bigr) \rightsquigarrow N\bigl(0,\, I - \sqrt{a}\sqrt{a}^T\bigr).$$
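A quick simulation (standard library only; all names are ours) illustrates the theorem: the Pearson statistic of a multinomial sample fluctuates around $k - 1$, the mean of its $\chi^2_{k-1}$ limit:

```python
import random

def pearson_statistic(counts, a):
    # C_n(a) = sum_i (X_{n,i} - n a_i)^2 / (n a_i)
    n = sum(counts)
    return sum((x - n * ai) ** 2 / (n * ai) for x, ai in zip(counts, a))

def sample_multinomial(n, a, rng):
    # n independent draws from the cell distribution a, tallied per cell
    counts = [0] * len(a)
    for c in rng.choices(range(len(a)), weights=a, k=n):
        counts[c] += 1
    return counts

a = [0.2, 0.3, 0.1, 0.4]          # k = 4 cells, so the limit is chi-square_3
rng = random.Random(4)
stats = [pearson_statistic(sample_multinomial(500, a, rng), a)
         for _ in range(2000)]
mean_stat = sum(stats) / len(stats)
```

In fact the exact mean of $C_n(a)$ is $k - 1$ for every $n$, so the sample mean should be close to 3 here.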
The number of degrees of freedom in the chi-square approximation for Pearson's statistic is the number of cells of the multinomial vector that have positive probability minus one. However, the quality of the approximation also depends on the size of the cell probabilities $a_i$. For instance, if 1001 cells have null probabilities $10^{-23}, \ldots, 10^{-23}, 1 - 10^{-20}$, then it is clear that for moderate values of $n$ all cells except one are empty, and a huge value of $n$ is necessary to make a $\chi^2_{1000}$-approximation work. As a rule of thumb, it is often advised to choose the partitioning sets such that each number $na_i$ is at least 5. This criterion depends on the (possibly unknown) null distribution and is not the same as saying that the number of observations in each cell must satisfy an absolute lower bound, which could be very unlikely if the null hypothesis is false. The rule of thumb is meant to protect the level.
The Pearson statistic is oddly asymmetric in the observed and the true frequencies (which
is motivated by the form of the asymptotic covariance matrix). One method to symmetrize
Up to a multiplicative constant this is the Hellinger distance between the discrete probability distributions on $\{1, \ldots, k\}$ with probability vectors $a$ and $X_n/n$, respectively. Because $X_n/n - a \xrightarrow{P} 0$, the Hellinger statistic is asymptotically equivalent to the Pearson statistic.
$$C_n(\hat{a}_n) = \sum_{i=1}^k \frac{(X_{n,i} - n\hat{a}_{n,i})^2}{n\hat{a}_{n,i}}.$$
The estimator an is constructed to be a good estimator if the null hypothesis is true. The
asymptotic distribution of this modified Pearson statistic is not necessarily chi-square but
depends on the estimators $\hat{a}_n$ being used. Most often the estimators are asymptotically normal, and the statistics
$$\frac{X_{n,i} - n\hat{a}_{n,i}}{\sqrt{n\hat{a}_{n,i}}}$$
are asymptotically normal as well. Then the modified chi-square statistic is asymptotically
distributed as a quadratic form in a multivariate-normal vector. In general, the eigenvalues
determining this form are not restricted to 0 or 1, and their values may depend on the
unknown parameter. Then the critical value cannot be taken from a table of the chi-square
distribution. There are two popular possibilities to avoid this problem.
First, the Pearson statistic is a certain quadratic form in the observations that is motivated
by the asymptotic covariance matrix of a multinomial vector. If the parameter a is estimated,
the asymptotic covariance matrix changes in form, and it is natural to change the quadratic
form in such a way that the resulting statistic is again chi-square distributed. This idea leads
to the Rao-Robson-Nikulin modification of the Pearson statistic, of which we discuss an
example in section 17.5.
Second, we can retain the form of the Pearson statistic but use special estimators $\hat{a}_n$. In particular, the maximum likelihood estimator based on the multinomial vector $X_n$, or the minimum-chi square estimator $\bar{a}_n$ defined by, with $\mathcal{P}_0$ being the null hypothesis,
$$\sum_{i=1}^k \frac{(X_{n,i} - n\bar{a}_{n,i})^2}{n\bar{a}_{n,i}} = \inf_{p \in \mathcal{P}_0} \sum_{i=1}^k \frac{(X_{n,i} - np_i)^2}{np_i}.$$
The right side of this display is the "minimum-chi square distance" of the observed frequen-
cies to the null hypothesis and is an intuitively reasonable test statistic. The null hypothesis
is rejected if the distance of the observed frequency vector $X_n/n$ to the set $\mathcal{P}_0$ is large. A disadvantage is greater computational complexity.
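As a sketch of that computational side, the minimum-chi square distance over a one-dimensional null hypothesis can be found by brute force. Here we use the (hypothetical) null model consisting of the cell probabilities of a $\text{Binomial}(2, \theta)$ observation, minimized by grid search:

```python
def pearson_distance(counts, p):
    # sum_i (X_i - n p_i)^2 / (n p_i) for a candidate probability vector p
    n = sum(counts)
    return sum((x - n * pi) ** 2 / (n * pi) for x, pi in zip(counts, p))

def cell_probs(theta):
    # null model: cell probabilities of a Binomial(2, theta) observation
    return [(1 - theta) ** 2, 2 * theta * (1 - theta), theta ** 2]

def min_chisq(counts, grid_size=10_000):
    # crude grid search over the one-dimensional null hypothesis
    best = None
    for i in range(1, grid_size):
        theta = i / grid_size
        d = pearson_distance(counts, cell_probs(theta))
        if best is None or d < best[0]:
            best = (d, theta)
    return best  # (minimum chi-square distance, minimizing theta)

# counts chosen to match the Binomial(2, 0.4) proportions exactly (n = 100)
dist, theta_hat = min_chisq([36, 48, 16])
```

When the observed frequencies lie on the null hypothesis, the minimum distance is zero and the minimizer recovers the true parameter.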
These two modifications, using the minimum-chi square estimator or the maximum
likelihood estimator based on Xn, may seem natural but are artificial in some applications.
For instance, in goodness-of-fit testing, the multinomial vector is formed by grouping the
"raw data," and it is more natural to base the estimators on the raw data rather than on the
grouped data. On the other hand, using the maximum likelihood or minimum-chi square
estimator based on Xn has the advantage of a remarkably simple limit theory: If the null
hypothesis is "locally linear," then the modified Pearson statistic is again asymptotically
chi-square distributed, but with the number of degrees of freedom reduced by the (local)
dimension of the estimated parameter.
This interesting asymptotic result is most easily explained in terms of the minimum-
chi square statistic, as the loss of degrees of freedom corresponds to a projection (i.e., a
minimum distance) of the limiting normal vector. We shall first show that the two types
of modifications are asymptotically equivalent and are asymptotically equivalent to the
likelihood ratio statistic as well. The likelihood ratio statistic for testing the null hypothesis $H_0: p \in \mathcal{P}_0$ is given by (see Example 16.1)
$$L_n(\hat{a}_n) = 2\sum_{i=1}^k X_{n,i}\log\frac{X_{n,i}}{n\hat{a}_{n,i}}.$$
17.3 Lemma. Let $\mathcal{P}_0$ be a closed subset of the unit simplex, and let $\hat{a}_n$ be the maximum likelihood estimator of $a$ under the null hypothesis $H_0: a \in \mathcal{P}_0$ (based on $X_n$). Then
$$\inf_{p \in \mathcal{P}_0} \sum_{i=1}^k \frac{(X_{n,i} - np_i)^2}{np_i} = C_n(\hat{a}_n) + o_P(1) = L_n(\hat{a}_n) + o_P(1).$$
Proof. Let $\bar{a}_n$ be the minimum-chi square estimator of $a$ under the null hypothesis. Both sequences of estimators $\hat{a}_n$ and $\bar{a}_n$ are $\sqrt{n}$-consistent. For the maximum likelihood estimator this follows from Corollary 5.53. The minimum-chi square estimator satisfies, by its definition,
$$\sum_{i=1}^k \frac{(X_{n,i} - n\bar{a}_{n,i})^2}{n\bar{a}_{n,i}} \le \sum_{i=1}^k \frac{(X_{n,i} - na_i)^2}{na_i} = O_P(1).$$
This implies that each term in the sum on the left is $O_P(1)$, whence $n|\bar{a}_{n,i} - a_i|^2 = O_P(\bar{a}_{n,i}) + O_P\bigl(|X_{n,i} - na_i|^2/n\bigr)$ and hence the $\sqrt{n}$-consistency.
Next, the two-term Taylor expansion $\log(1 + x) = x - \tfrac{1}{2}x^2 + o(x^2)$ combined with Lemma 2.12 yields, for any $\sqrt{n}$-consistent estimator sequence $\hat{p}_n$,
$$\sum_{i=1}^k X_{n,i} \log \frac{X_{n,i}}{n\hat{p}_{n,i}} = -\sum_{i=1}^k X_{n,i} \Bigl(\frac{n\hat{p}_{n,i}}{X_{n,i}} - 1\Bigr) + \frac{1}{2} \sum_{i=1}^k X_{n,i} \Bigl(\frac{n\hat{p}_{n,i}}{X_{n,i}} - 1\Bigr)^2 + o_P(1) = 0 + \frac{1}{2} \sum_{i=1}^k \frac{(X_{n,i} - n\hat{p}_{n,i})^2}{X_{n,i}} + o_P(1).$$
In the last expression we can also replace $X_{n,i}$ in the denominator by $n\hat{p}_{n,i}$, so that we find the relation $L_n(\hat{p}_n) = C_n(\hat{p}_n) + o_P(1)$ between the likelihood ratio and the Pearson statistic, for
every $\sqrt{n}$-consistent estimator sequence $\hat{p}_n$. By the definitions of $\hat{a}_n$ and $\bar{a}_n$, we conclude that, up to $o_P(1)$-terms, $C_n(\bar{a}_n) \le C_n(\hat{a}_n) = L_n(\hat{a}_n) \le L_n(\bar{a}_n) = C_n(\bar{a}_n)$. The lemma follows. ∎
17.4 Theorem. Let $\mathcal{P}_0$ be a subset of the unit simplex such that the sequence of sets $\sqrt{n}(\mathcal{P}_0 - a)$ converges to a set $H$ (in $\mathbb{R}^k$), and suppose that $a > 0$. Then, under $a$,
$$\inf_{p \in \mathcal{P}_0} \sum_{i=1}^k \frac{(X_{n,i} - np_i)^2}{np_i} \rightsquigarrow \inf_{h \in H} \Bigl\| X - \frac{1}{\sqrt{a}}\, h \Bigr\|^2,$$
for a vector $X$ with the $N(0, I - \sqrt{a}\sqrt{a}^T)$-distribution. Here $(1/\sqrt{a})H$ is the set of vectors $(h_1/\sqrt{a_1}, \ldots, h_k/\sqrt{a_k})$ as $h$ ranges over $H$.
17.5 Corollary. Let $\mathcal{P}_0$ be a subset of the unit simplex such that the sequence of sets $\sqrt{n}(\mathcal{P}_0 - a)$ converges to a linear subspace of dimension $l$ (of $\mathbb{R}^k$), and let $a > 0$. Then both the sequence of minimum-chi square statistics and the sequence of modified Pearson statistics $C_n(\hat{a}_n)$ converge in distribution to the chi-square distribution with $k - 1 - l$ degrees of freedom.
Proof. The map $(a_1, \ldots, a_{k-1}, b_1, \ldots, b_{r-1}) \mapsto (a \times b)$ from $\mathbb{R}^{k+r-2}$ into $\mathbb{R}^{kr}$ is continuously differentiable and of full rank. The true values $(a_1, \ldots, a_{k-1}, b_1, \ldots, b_{r-1})$ are interior to the domain of this map. Thus the sequence of sets $\sqrt{n}(\mathcal{P}_0 - a \times b)$ converges to a $(k + r - 2)$-dimensional linear subspace of $\mathbb{R}^{kr}$. ∎
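For this independence setting, the resulting Pearson statistic and its degrees of freedom can be sketched directly (a minimal illustration; the function name is ours):

```python
def independence_statistic(table):
    # Pearson statistic for independence in a k x r contingency table with
    # the marginals estimated from the data; degrees of freedom (k-1)(r-1).
    k, r = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(r)]
    stat = 0.0
    for i in range(k):
        for j in range(r):
            expected = row_tot[i] * col_tot[j] / n  # n * (a_i b_j) estimate
            stat += (table[i][j] - expected) ** 2 / expected
    df = (k - 1) * (r - 1)
    return stat, df

stat, df = independence_statistic([[20, 30], [30, 20]])  # → stat 4.0, df 1
```

The degrees of freedom $kr - 1 - (k + r - 2) = (k-1)(r-1)$ match the dimension count of the corollary.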
Here $\hat{P}(\mathcal{X}_j)$ is an estimate of $P(\mathcal{X}_j)$ under the null hypothesis and can take a variety of forms.
Theorem 17.4 applies but is restricted to the case that the estimates $\hat{P}(\mathcal{X}_j)$ are based on the frequencies $n\bigl(\mathbb{P}_n(\mathcal{X}_1), \ldots, \mathbb{P}_n(\mathcal{X}_k)\bigr)$ only. In the present situation it is more natural to base the estimates on the original observations $X_1, \ldots, X_n$. Usually, this results in a non-chi-square limit distribution. For instance, Table 17.2 shows the "errors" in the level of a chi-square test for testing normality, if the unknown mean and variance are estimated by the sample mean and the sample variance but the critical value is chosen from the chi-square distribution. The size of the errors depends on the number of cells, the errors being small if there are many cells and few estimated parameters.
17.8 Example (Parametric model). Consider testing the null hypothesis that the true distribution belongs to a regular parametric model $\{P_\theta : \theta \in \Theta\}$. It appears natural to estimate the unknown parameter $\theta$ by an estimator $\hat{\theta}_n$ that is asymptotically efficient under the null hypothesis and is based on the original sample $X_1, \ldots, X_n$, for instance the maximum likelihood estimator. If $\mathbb{G}_n = \sqrt{n}(\mathbb{P}_n - P_\theta)$ denotes the empirical process, then efficiency entails the approximation $\sqrt{n}(\hat{\theta}_n - \theta) = I_\theta^{-1} \mathbb{G}_n \dot{\ell}_\theta + o_P(1)$. The delta method can then be applied to the standardized cell frequencies $\sqrt{n}\bigl(\mathbb{P}_n(\mathcal{X}_j) - P_{\hat{\theta}_n}(\mathcal{X}_j)\bigr)/\sqrt{a_j}$.
Table 17.2. True levels of the chi-square test for normality using $\chi^2_{k-3,\alpha}$-quantiles as critical values but estimating unknown mean and variance by sample mean and sample variance. Chi-square statistic based on partitions of $[-10, 10]$ into $k = 5$, $10$, or $20$ equiprobable cells under the standard normal law.
(The map $\theta \mapsto P_\theta(A)$ has derivative $P_\theta 1_A \dot{\ell}_\theta$.) The sequence of vectors $(\mathbb{G}_n 1_{\mathcal{X}_1}, \ldots, \mathbb{G}_n 1_{\mathcal{X}_k}, \mathbb{G}_n \dot{\ell}_\theta)$ converges in distribution to a multivariate-normal distribution. Some matrix manipulations show that the vectors in the preceding display are asymptotically distributed as a Gaussian vector $X$ with mean zero and covariance matrix
$$I - \sqrt{a}\sqrt{a}^T - C_\theta^T I_\theta^{-1} C_\theta, \qquad (C_\theta)_{ij} = \frac{P_\theta \dot{\ell}_{\theta,i} 1_{\mathcal{X}_j}}{\sqrt{a_j}}.$$
In general, the covariance matrix of $X$ is not a projection matrix, and the variable $\|X\|^2$ does not possess a chi-square distribution.
Because $P_\theta \dot{\ell}_\theta = 0$, we have that $C_\theta \sqrt{a} = 0$, and hence the covariance matrix of $X$ can be rewritten as the product $(I - \sqrt{a}\sqrt{a}^T)(I - C_\theta^T I_\theta^{-1} C_\theta)$. Here the first matrix is the projection onto the orthocomplement of the vector $\sqrt{a}$, and the second matrix is a positive-definite transformation that leaves $\sqrt{a}$ invariant, thus acting only on the orthocomplement $\sqrt{a}^\perp$. This geometric picture shows that $\operatorname{Cov}_\theta X$ has the same system of eigenvectors as the matrix $I - C_\theta^T I_\theta^{-1} C_\theta$, and also the same eigenvalues, except for the eigenvalue corresponding to the eigenvector $\sqrt{a}$, which is 0 for $\operatorname{Cov}_\theta X$ and 1 for $I - C_\theta^T I_\theta^{-1} C_\theta$. Because both matrices $C_\theta^T I_\theta^{-1} C_\theta$ and $I - C_\theta^T I_\theta^{-1} C_\theta$ are nonnegative-definite, the eigenvalues are contained in $[0, 1]$. One eigenvalue (corresponding to eigenvector $\sqrt{a}$) is 0, $\dim N(C_\theta) - 1$ eigenvalues (corresponding to the eigenspace $N(C_\theta) \cap \sqrt{a}^\perp$) are 1, but the other eigenvalues may be contained in $(0, 1)$ and then typically depend on $\theta$. By Lemma 17.1, the variable $\|X\|^2$ is distributed as
$$\sum_{i=1}^{\dim N(C_\theta)-1} Z_i^2 + \sum_{i=\dim N(C_\theta)}^{k-1} \lambda_i(\theta)\, Z_i^2,$$
for independent standard normal variables $Z_1, \ldots, Z_{k-1}$. This means that it is stochastically "between" the chi-square distributions with $\dim N(C_\theta) - 1$ and $k - 1$ degrees of freedom.
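This mixture representation is easy to simulate. A sketch with illustrative values ($m = \dim N(C_\theta) = 3$, $k - 1 = 4$ terms, eigenvalues $0.7$ and $0.2$ chosen arbitrarily), confirming that the mean sits between the means of the two bracketing chi-square laws:

```python
import random

def mixture_draw(m, lambdas, rng):
    # one draw of sum_{i=1}^{m-1} Z_i^2 + sum_i lambda_i Z_i^2,
    # the Lemma 17.1 representation of ||X||^2
    s = sum(rng.gauss(0, 1) ** 2 for _ in range(m - 1))
    s += sum(lam * rng.gauss(0, 1) ** 2 for lam in lambdas)
    return s

m, lambdas = 3, [0.7, 0.2]        # k - 1 = 4 squared-normal terms in total
rng = random.Random(5)
draws = [mixture_draw(m, lambdas, rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)    # expectation (m - 1) + sum(lambdas) = 2.9
```

The mean lies strictly between $m - 1 = 2$ (the $\chi^2_{m-1}$ mean) and $k - 1 = 4$ (the $\chi^2_{k-1}$ mean), reflecting the "in between" behavior of the distribution.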
The inconvenience that this distribution is not standard and depends on $\theta$ can be remedied by not using efficient estimators $\hat{\theta}_n$ or, alternatively, by not using the Pearson statistic.
The square root of the matrix $I - C_\theta^T I_\theta^{-1} C_\theta$ is the positive-definite matrix with the same eigenvectors, but with the square roots of the eigenvalues. Thus, it also leaves the vector $\sqrt{a}$ invariant and acts only on the orthocomplement $\sqrt{a}^\perp$. It follows that this square root commutes with the matrix $I - \sqrt{a}\sqrt{a}^T$, and hence
$$\Bigl(I - C_{\hat{\theta}_n}^T I_{\hat{\theta}_n}^{-1} C_{\hat{\theta}_n}\Bigr)^{-1/2} \sqrt{n}\Bigl(\frac{\mathbb{P}_n(\mathcal{X}_j) - P_{\hat{\theta}_n}(\mathcal{X}_j)}{\sqrt{a_j}}\Bigr)_{j=1,\ldots,k} \rightsquigarrow N\bigl(0,\, I - \sqrt{a}\sqrt{a}^T\bigr).$$
(We assume that the matrix $I - C_\theta^T I_\theta^{-1} C_\theta$ is nonsingular, which is typically the case; see Problem 17.6.) By the continuous-mapping theorem, the squared norm of the left side is asymptotically chi square-distributed with $k - 1$ degrees of freedom. This squared norm is the Rao-Robson-Nikulin statistic. □
It is tempting to choose the partitioning sets $\mathcal{X}_j$ dependent on the observed data $X_1, \ldots, X_n$, for instance to ensure that all cells have positive probability under the null hypothesis. This is permissible under some conditions: The choice of a "random partition" typically does not change the distributional properties of the chi-square statistic. Consider partitioning sets $\hat{\mathcal{X}}_j = \mathcal{X}_j(X_1, \ldots, X_n)$ that possibly depend on the data, and a further modified Pearson statistic of the type
$$\sum_{j=1}^k \frac{n\bigl(\mathbb{P}_n(\hat{\mathcal{X}}_j) - \hat{P}(\hat{\mathcal{X}}_j)\bigr)^2}{\hat{P}(\hat{\mathcal{X}}_j)}.$$
If the random partitions settle down to a fixed partition eventually, then this statistic is asymptotically equivalent to the statistic for which the partition had been set equal to the limit partition in advance. We discuss this for the case that the null hypothesis is a model $\{P_\theta : \theta \in \Theta\}$ indexed by a subset $\Theta$ of a normed space. We use the language of Donsker classes as discussed in Chapter 19.
17.9 Theorem. Suppose that the sets $\hat{\mathcal{X}}_j$ belong to a $P_{\theta_0}$-Donsker class $\mathcal{C}$ of sets and that $P_{\theta_0}(\hat{\mathcal{X}}_j \triangle \mathcal{X}_j) \xrightarrow{P} 0$ under $P_{\theta_0}$, for given nonrandom sets $\mathcal{X}_j$ such that $P_{\theta_0}(\mathcal{X}_j) > 0$. Furthermore, assume that $\sqrt{n}\|\hat{\theta} - \theta_0\| = O_P(1)$, and suppose that the map $\theta \mapsto P_\theta$ from $\Theta$ into $\ell^\infty(\mathcal{C})$ is differentiable at $\theta_0$ with derivative $\dot{P}_{\theta_0}$ such that $\dot{P}_{\theta_0}(\hat{\mathcal{X}}_j) - \dot{P}_{\theta_0}(\mathcal{X}_j) \xrightarrow{P} 0$ for every $j$. Then
$$\sqrt{n}\bigl(\mathbb{P}_n(\hat{\mathcal{X}}_j) - P_{\hat{\theta}}(\hat{\mathcal{X}}_j)\bigr) - \sqrt{n}\bigl(\mathbb{P}_n(\mathcal{X}_j) - P_{\hat{\theta}}(\mathcal{X}_j)\bigr) \xrightarrow{P} 0, \qquad \text{for every } j.$$
Proof. Let $\mathbb{G}_n = \sqrt{n}(\mathbb{P}_n - P_{\theta_0})$ be the empirical process and define $\mathbb{H}_n = \sqrt{n}(P_{\hat{\theta}} - P_{\theta_0})$. Then $\sqrt{n}\bigl(\mathbb{P}_n(\hat{\mathcal{X}}_j) - P_{\hat{\theta}}(\hat{\mathcal{X}}_j)\bigr) = (\mathbb{G}_n - \mathbb{H}_n)(\hat{\mathcal{X}}_j)$, and similarly with $\mathcal{X}_j$ replacing $\hat{\mathcal{X}}_j$. The condition that the sets $\hat{\mathcal{X}}_j$ belong to a Donsker class, combined with the continuity condition $P_{\theta_0}(\hat{\mathcal{X}}_j \triangle \mathcal{X}_j) \xrightarrow{P} 0$, implies that $\mathbb{G}_n(\hat{\mathcal{X}}_j) - \mathbb{G}_n(\mathcal{X}_j) \xrightarrow{P} 0$ (see Lemma 19.24). The differentiability of the map $\theta \mapsto P_\theta$ implies that $\mathbb{H}_n = \dot{P}_{\theta_0}\sqrt{n}(\hat{\theta} - \theta_0) + o_P(1)$, uniformly on $\mathcal{C}$. Together with the continuity $\dot{P}_{\theta_0}(\hat{\mathcal{X}}_j) - \dot{P}_{\theta_0}(\mathcal{X}_j) \xrightarrow{P} 0$ and the $\sqrt{n}$-consistency of $\hat{\theta}$, this shows that $\mathbb{H}_n(\hat{\mathcal{X}}_j) - \mathbb{H}_n(\mathcal{X}_j) \xrightarrow{P} 0$. In particular, because $P_{\theta_0}(\hat{\mathcal{X}}_j) \xrightarrow{P} P_{\theta_0}(\mathcal{X}_j)$, both $P_{\hat{\theta}}(\hat{\mathcal{X}}_j)$ and $P_{\hat{\theta}}(\mathcal{X}_j)$ converge in probability to $P_{\theta_0}(\mathcal{X}_j) > 0$. The theorem follows. ∎
The conditions on the random partitions that are imposed in the preceding theorem are mild. An interesting choice is a partition into sets $\mathcal{X}_j(\hat{\theta})$ such that $P_\theta\bigl(\mathcal{X}_j(\theta)\bigr) = a_j$ is independent of $\theta$. The corresponding modified Pearson statistic is known as the Watson-Roy statistic and takes the form
$$\sum_{j=1}^k \frac{n\bigl(\mathbb{P}_n(\mathcal{X}_j(\hat{\theta})) - a_j\bigr)^2}{a_j}.$$
Here the null probabilities have been reduced to fixed values again, but the cell frequencies are "doubly random." If the model is smooth and the parameter and the sets $\mathcal{X}_j(\theta)$ are not too wild, then this statistic has the same null limit distribution as the modified Pearson statistic with a fixed partition.
17.10 Example (Location-scale). Consider testing the null hypothesis that the true underlying measure of the observations belongs to a location-scale family $\{F_0\bigl((\cdot - \mu)/\sigma\bigr) : \mu \in \mathbb{R}, \sigma > 0\}$, given a fixed distribution $F_0$ on $\mathbb{R}$. It is reasonable to choose a partition into sets $\hat{\mathcal{X}}_j = \hat{\mu} + \hat{\sigma}(c_{j-1}, c_j]$, for a fixed partition $-\infty = c_0 < c_1 < \cdots < c_k = \infty$ and estimators $\hat{\mu}$ and $\hat{\sigma}$ of the location and scale parameter. The partition could, for instance, be chosen equal to $c_j = F_0^{-1}(j/k)$, although, in general, the partition should depend on the type of deviation from the null hypothesis that one wants to detect.
If we use the same location and scale estimators to "estimate" the null probabilities $F_0\bigl((\hat{\mathcal{X}}_j - \hat{\mu})/\hat{\sigma}\bigr)$ of the random cells $\hat{\mathcal{X}}_j = \hat{\mu} + \hat{\sigma}(c_{j-1}, c_j]$, then the estimators cancel, and we find the fixed null probabilities $F_0(c_j) - F_0(c_{j-1})$. □
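The cancellation can be verified numerically for the normal location-scale family. A standard-library sketch (the values of $\hat\mu, \hat\sigma$ are arbitrary; $\Phi^{-1}$ is approximated by bisection):

```python
import math

def Phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    # crude inverse cdf by bisection on [-10, 10]
    for _ in range(80):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k = 4
c = [-math.inf] + [Phi_inv(j / k) for j in range(1, k)] + [math.inf]

# Random cells mu_hat + sigma_hat * (c_{j-1}, c_j]: the estimated null
# probabilities Phi((endpoint - mu_hat)/sigma_hat) reduce to the fixed
# values Phi(c_j) - Phi(c_{j-1}) = 1/k, whatever mu_hat and sigma_hat are.
mu_hat, sigma_hat = 2.5, 1.7
endpoints = [mu_hat + sigma_hat * cj for cj in c]
probs = [Phi((b - mu_hat) / sigma_hat) - Phi((a - mu_hat) / sigma_hat)
         for a, b in zip(endpoints, endpoints[1:])]
```

Each estimated cell probability comes out as $1/k$ regardless of the estimated location and scale, which is exactly the cancellation described above.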
Pearson and the likelihood ratio statistics as functions of an empirical measure, and next can apply Sanov's theorem to compute the desired limits of large deviations probabilities. Define maps $C$ and $K$ by
$$C(p, a) = \sum_{i=1}^k \frac{(p_i - a_i)^2}{a_i}, \qquad K(p, a) = P\log\frac{p}{a} = \sum_{i=1}^k p_i \log\frac{p_i}{a_i}.$$
Then the Pearson and likelihood ratio statistics are equivalent to $C(\mathbb{P}_n, a)$ and $K(\mathbb{P}_n, a)$, respectively.
Under the assumption that $a > 0$, both maps are continuous in $p$ on the $k$-dimensional unit simplex. Furthermore, for $t$ in the interior of the ranges of $C$ and $K$, the sets $B_t = \{p : C(p, a) \ge t\}$ and $\tilde{B}_t = \{p : K(p, a) \ge t\}$ are equal to the closures of their interiors. Two applications of Sanov's theorem yield
$$\frac{1}{n}\log P_a\bigl(\mathbb{P}_n \in B_t\bigr) \to -\inf_{p \in B_t} K(p, a), \qquad \frac{1}{n}\log P_a\bigl(\mathbb{P}_n \in \tilde{B}_t\bigr) \to -\inf_{p \in \tilde{B}_t} K(p, a).$$
We take the function $e(t)$ of (14.20) equal to minus two times the right sides. Because $\mathbb{P}_n\{i\} \to p_i$ by the law of large numbers, whence $C(\mathbb{P}_n, a) \to C(p, a)$ and $K(\mathbb{P}_n, a) \to K(p, a)$ almost surely, the Bahadur slopes of the Pearson and likelihood ratio tests at the alternative $H_1: p = q$ are given by
$$2 \inf_{p : C(p,a) \ge C(q,a)} K(p, a) \qquad \text{and} \qquad 2K(q, a).$$
It is clear from these expressions that the likelihood ratio test has a bigger slope. This is in agreement with the fact that the likelihood ratio test is asymptotically Bahadur optimal in any smooth parametric model. Figure 17.1 shows the difference of the slopes in one particular case. The difference is small in a neighborhood of the null hypothesis $a$, in agreement with the fact that the Pitman efficiency is equal to 1, but can be substantial for alternatives away from $a$.
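The two slopes are easy to compare numerically. A grid-search sketch over the simplex for $a = (1/3, 1/3, 1/3)$ (the alternative $q$ and the grid size are arbitrary illustrative choices):

```python
import math

def C(p, a):
    # Pearson discrepancy sum_i (p_i - a_i)^2 / a_i
    return sum((pi - ai) ** 2 / ai for pi, ai in zip(p, a))

def K(p, a):
    # Kullback-Leibler divergence sum_i p_i log(p_i / a_i), with 0 log 0 = 0
    return sum(pi * math.log(pi / ai) for pi, ai in zip(p, a) if pi > 0)

a = (1 / 3, 1 / 3, 1 / 3)
q = (0.6, 0.3, 0.1)

lr_slope = 2 * K(q, a)           # likelihood ratio slope: 2 K(q, a)
# Pearson slope: 2 inf{ K(p, a) : C(p, a) >= C(q, a) }, by grid search
t = C(q, a)
best = math.inf
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        p = (i / steps, j / steps, (steps - i - j) / steps)
        if C(p, a) >= t:
            best = min(best, K(p, a))
pearson_slope = 2 * best
```

On this grid the Pearson slope is no larger than the likelihood ratio slope, as the display asserts.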
Notes
Pearson introduced his statistic in 1900 in [112]. The modification with estimated parameters, using the multinomial frequencies, was considered by Fisher [49], who corrected the mistaken belief that estimating the parameters does not change the limit distribution. Chernoff and Lehmann [22] showed that using maximum likelihood estimators based on the original data for the parameter in a goodness-of-fit statistic destroys the asymptotic chi-square distribution. They note that the errors in the level are small in the case of testing a Poisson distribution and somewhat larger when testing normality.
Figure 17.1. The difference of the Bahadur slopes of the likelihood ratio and Pearson tests for testing $H_0: p = (1/3, 1/3, 1/3)$ based on a multinomial vector with parameters $n$ and $p = (p_1, p_2, p_3)$, as a function of $(p_1, p_2)$.
The choice of the partition in chi-square goodness-of-fit tests is an important issue that we
have not discussed. Several authors have studied the optimal number of cells in the partition.
This number depends, of course, on the alternative for which one desires large power. The
conclusions of these studies are not easily summarized. For alternatives $p$ such that the likelihood ratio $p/p_{\theta_0}$ with respect to the null distribution is "wild," the number of cells $k$ should tend to infinity with $n$. Then the chi-square approximation of the null distribution
needs to be modified. Normal approximations are used, because a chi-square distribution
with a large number of degrees of freedom is approximately a normal distribution. See [40],
[60], and [86] for results and further references.
PROBLEMS
1. Let $N = (N_{ij})$ be a multinomial matrix with success probabilities $p_{ij}$. Design a test statistic for the null hypothesis of symmetry $H_0: p_{ij} = p_{ji}$ and derive its asymptotic null distribution.
2. Derive the limit distribution of the chi-square goodness-of-fit statistic for testing normality when using the sample mean and sample variance as estimators for the unknown mean and variance. Use two or three cells to keep the calculations simple. Show that the limit distribution is not chi-square.
3. Suppose that X_m and Y_n are independent multinomial vectors with parameters (m, a₁, ..., a_k) and (n, b₁, ..., b_k), respectively. Under the null hypothesis H₀: a = b, a natural estimator of the unknown probability vector a = b is ĉ = (m + n)⁻¹(X_m + Y_n), and a natural test statistic is given by
$$\sum_{i=1}^{k} \frac{(X_{mi} - m\hat c_i)^2}{m\hat c_i} + \sum_{i=1}^{k} \frac{(Y_{ni} - n\hat c_i)^2}{n\hat c_i}.$$
Show that ĉ is the maximum likelihood estimator and show that the sequence of test statistics is asymptotically chi square-distributed as m, n → ∞.
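As a numerical companion to Problem 3 (a sketch, not from the text; the function name and sample values are illustrative), the pooled statistic can be computed directly:

```python
def pooled_chisq(x, y):
    """Two-sample chi-square statistic with the pooled estimate
    c = (m + n)^(-1) (X_m + Y_n) of the common probability vector."""
    m, n = sum(x), sum(y)
    c = [(xi + yi) / (m + n) for xi, yi in zip(x, y)]
    return (sum((xi - m * ci) ** 2 / (m * ci) for xi, ci in zip(x, c))
            + sum((yi - n * ci) ** 2 / (n * ci) for yi, ci in zip(y, c)))

# Proportional counts give (up to rounding) statistic 0; under H0 the
# statistic is asymptotically chi-square with k - 1 degrees of freedom.
print(pooled_chisq([10, 20, 30], [20, 40, 60]))
print(pooled_chisq([5, 5], [5, 15]))
```

Simulating the statistic under H₀ and comparing its upper quantiles with those of a chi-square distribution with k − 1 degrees of freedom illustrates the convergence claimed in the problem.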
6. Show that the matrix I − C_θ I_θ⁻¹ C_θᵀ in Example 17.8 is nonsingular unless the empirical estimator (ℙ_n(𝒳₁), ..., ℙ_n(𝒳_k)) is asymptotically efficient. (The estimator (P_θ̂(𝒳₁), ..., P_θ̂(𝒳_k)) is asymptotically efficient and has asymptotic covariance matrix diag(√p_θ) C_θ I_θ⁻¹ C_θᵀ diag(√p_θ); the empirical estimator has asymptotic covariance matrix diag(√p_θ)(I − √p_θ √p_θᵀ) diag(√p_θ).)
18
Stochastic Convergence in Metric Spaces
18.1 Metric and Normed Spaces

A semimetric satisfies (i) and (ii), but not necessarily (iii). An open ball is a set of the form {y : d(x, y) < r}. A subset of a metric space is open if and only if it is the union of open balls; it is closed if and only if its complement is open. A sequence x_n converges to x if and only if d(x_n, x) → 0; this is denoted by x_n → x. The closure Ā of a set A ⊂ 𝔻 consists of all points that are the limit of a sequence in A; it is the smallest closed set containing A. The interior Å is the collection of all points x such that x ∈ G ⊂ A for some open set G; it is the largest open set contained in A. A function f: 𝔻 ↦ 𝔼 between two metric spaces is continuous at a point x if and only if f(x_n) → f(x) for every sequence x_n → x; it is continuous at every x if and only if the inverse image f⁻¹(G) of every open set G ⊂ 𝔼 is open in 𝔻. A subset of a metric space is dense if and only if its closure is the whole space. A metric space is separable if and only if it has a countable dense subset. A subset K of a metric space is compact if and only if it is closed and every sequence in K has a converging subsequence. A subset K is totally bounded if and only if for every ε > 0 it can be covered by finitely many balls of radius ε. A semimetric space is complete if every Cauchy sequence, a sequence such that d(x_n, x_m) → 0 as n, m → ∞, has a limit. A subset of a complete semimetric space is compact if and only if it is totally bounded and closed.

A normed space 𝔻 is a vector space equipped with a norm. A norm is a map ‖·‖: 𝔻 ↦ [0, ∞) such that, for every x, y in 𝔻, and α ∈ ℝ,
A seminorm satisfies (i) and (ii), but not necessarily (iii). Given a norm, a metric can be defined by d(x, y) = ‖x − y‖.
18.1 Definition. The Borel σ-field on a metric space 𝔻 is the smallest σ-field that contains the open sets (and then also the closed sets). A function defined relative to (one or two) metric spaces is called Borel-measurable if it is measurable relative to the Borel σ-field(s). A Borel-measurable map X: Ω ↦ 𝔻 defined on a probability space (Ω, 𝒰, P) is referred to as a random element with values in 𝔻.
For Euclidean spaces, Borel measurability is just the usual measurability. Borel measur-
ability is probably the natural concept to use with metric spaces. It combines well with the
topological structure, particularly if the metric space is separable. For instance, continuous
maps are Borel-measurable.
18.2 Lemma. Any continuous map between metric spaces is Borel-measurable.

Proof. A map g: 𝔻 ↦ 𝔼 is continuous if and only if the inverse image g⁻¹(G) of every open set G ⊂ 𝔼 is open in 𝔻. In particular, for every open G the set g⁻¹(G) is a Borel set in 𝔻. By definition, the open sets in 𝔼 generate the Borel σ-field. Thus, the inverse image of a generator of the Borel sets in 𝔼 is contained in the Borel σ-field in 𝔻. Because the inverse image g⁻¹(𝒢) of a generator 𝒢 of a σ-field ℬ generates the σ-field g⁻¹(ℬ), it follows that the inverse image of every Borel set is a Borel set. ■
18.3 Example (Euclidean spaces). The Euclidean space ℝᵏ is a normed space with respect to the Euclidean norm (whose square is ‖x‖² = Σᵢ₌₁ᵏ xᵢ²), but also with respect to many other norms, for instance ‖x‖ = maxᵢ |xᵢ|, all of which are equivalent. By the Heine-Borel theorem a subset of ℝᵏ is compact if and only if it is closed and bounded. A Euclidean space is separable, with, for instance, the vectors with rational coordinates as a countable dense subset.

The Borel σ-field is the usual σ-field, generated by the intervals of the type (−∞, x]. □
18.4 Example (Extended real line). The extended real line ℝ̄ = [−∞, ∞] is the set consisting of all real numbers and the additional elements −∞ and ∞. It is a metric space with respect to
$$d(x, y) = |\Phi(x) - \Phi(y)|.$$
Here Φ can be any fixed, bounded, strictly increasing continuous function, for instance the normal distribution function (with Φ(−∞) = 0 and Φ(∞) = 1). Convergence of a sequence x_n → x with respect to this metric has the usual meaning, also if the limit x is −∞ or ∞ (normally we would say that x_n "diverges"). Consequently, every sequence has a converging subsequence, and hence the extended real line is compact. □
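As a small illustration (a sketch, not from the text; Φ is taken here to be the standard normal distribution function, computed via `math.erf`), the sequence x_n = n converges to +∞ in this metric:

```python
import math

def Phi(x):
    """Standard normal distribution function, extended to -inf and +inf."""
    if x == math.inf:
        return 1.0
    if x == -math.inf:
        return 0.0
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def d(x, y):
    """Metric d(x, y) = |Phi(x) - Phi(y)| on the extended real line."""
    return abs(Phi(x) - Phi(y))

# d(n, +inf) decreases to 0, so x_n = n -> +inf in this metric space.
print([d(n, math.inf) for n in (1, 2, 5, 10)])
```

The printed distances shrink rapidly toward zero, even though the sequence "diverges" in the usual sense.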
18.5 Example (Uniform norm). Given an arbitrary set T, let ℓ∞(T) be the collection of all bounded functions z: T ↦ ℝ. Define sums z₁ + z₂ and products with scalars αz pointwise. For instance, z₁ + z₂ is the element of ℓ∞(T) such that (z₁ + z₂)(t) = z₁(t) + z₂(t) for every t. The uniform norm is defined as
$$\|z\|_T = \sup_{t \in T} |z(t)|.$$
With this notation, the space ℓ∞(T) consists exactly of all functions z: T ↦ ℝ such that ‖z‖_T < ∞. The space ℓ∞(T) is separable if and only if T is countable. □
18.6 Example (Skorohod space). Let T = [a, b] be an interval in the extended real line. We denote by C[a, b] the set of all continuous functions z: [a, b] ↦ ℝ and by D[a, b] the set of all functions z: [a, b] ↦ ℝ that are right continuous and whose limits from the left exist everywhere in [a, b]. (The functions in D[a, b] are called càdlàg: continue à droite, limites à gauche.) It can be shown that C[a, b] ⊂ D[a, b] ⊂ ℓ∞[a, b]. We always equip the spaces C[a, b] and D[a, b] with the uniform norm ‖z‖_T, which they "inherit" from ℓ∞[a, b]. The space D[a, b] is referred to here as the Skorohod space, although Skorohod did not consider the uniform norm but equipped the space with the "Skorohod metric" (which we do not use or discuss).

The space C[a, b] is separable, but the space D[a, b] is not (relative to the uniform norm). □
18.8 Example (Product spaces). Given a pair of metric spaces 𝔻 and 𝔼 with metrics d and e, the Cartesian product 𝔻 × 𝔼 is a metric space with respect to the metric. For this metric, convergence of a sequence (x_n, y_n) → (x, y) is equivalent to both x_n → x and y_n → y.

For a product metric space, there exist two natural σ-fields: the product of the Borel σ-fields and the Borel σ-field of the product metric. In general, these are not the same, the second one being bigger. A sufficient condition for them to be equal is that the metric spaces 𝔻 and 𝔼 are separable (e.g., Chapter 1.4 in [146]).

The possible inequality of the two σ-fields causes an inconvenient problem. If X: Ω ↦ 𝔻 and Y: Ω ↦ 𝔼 are Borel-measurable maps, defined on some measurable space (Ω, 𝒰), then (X, Y): Ω ↦ 𝔻 × 𝔼 is always measurable for the product of the Borel σ-fields. This is an easy fact from measure theory. However, if the two σ-fields are different, then the map (X, Y) need not be Borel-measurable. If X and Y have separable range, then they are. □
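One standard concrete choice of product metric, sketched here as an illustrative assumption (any equivalent metric, such as the sum of the coordinate metrics, works as well), is the maximum of the coordinate metrics, under which coordinatewise convergence is immediate:

```python
def product_metric(d, e):
    """Metric on the Cartesian product D x E built from metrics d and e.
    The maximum is one standard choice; the sum works equally well."""
    return lambda p, q: max(d(p[0], q[0]), e(p[1], q[1]))

# Two copies of the real line with the usual metric.
absdist = lambda x, y: abs(x - y)
rho = product_metric(absdist, absdist)

# (x_n, y_n) -> (x, y) if and only if x_n -> x and y_n -> y.
dists = [rho((1 + 1 / n, 2 - 1 / n), (1.0, 2.0)) for n in range(1, 6)]
print(dists)  # strictly decreasing toward 0
```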
Weak convergence in a metric space can be characterized in terms of weak convergence of random vectors: X_n ⇝ X in the metric space 𝔻 if and only if g(X_n) ⇝ g(X) for every continuous map g: 𝔻 ↦ ℝᵏ.
18.9 Lemma (Portmanteau). For arbitrary maps X_n: Ω_n ↦ 𝔻 and every random element X with values in 𝔻, the following statements are equivalent.
(i) E* f(X_n) → E f(X) for all bounded, continuous functions f.
(ii) E* f(X_n) → E f(X) for all bounded, Lipschitz functions f.
(iii) lim inf P*(X_n ∈ G) ≥ P(X ∈ G) for every open set G.
(iv) lim sup P*(X_n ∈ F) ≤ P(X ∈ F) for every closed set F.
(v) P*(X_n ∈ B) → P(X ∈ B) for all Borel sets B with P(X ∈ ∂B) = 0.
18.10 Theorem. For arbitrary maps X_n, Y_n: Ω_n ↦ 𝔻 and every random element X with values in 𝔻:
(i) X_n →as X implies X_n →P X.
(ii) X_n →P X implies X_n ⇝ X.
(iii) X_n →P c for a constant c if and only if X_n ⇝ c.
(iv) if X_n ⇝ X and d(X_n, Y_n) →P 0, then Y_n ⇝ X.
(v) if X_n ⇝ X and Y_n →P c for a constant c, then (X_n, Y_n) ⇝ (X, c).
(vi) if X_n →P X and Y_n →P Y, then (X_n, Y_n) →P (X, Y).
18.11 Theorem (Continuous mapping). Let 𝔻_n ⊂ 𝔻 be arbitrary subsets and g_n: 𝔻_n ↦ 𝔼 be arbitrary maps (n ≥ 0) such that for every sequence x_n ∈ 𝔻_n: if x_{n′} → x along a subsequence and x ∈ 𝔻₀, then g_{n′}(x_{n′}) → g₀(x). Then, for arbitrary maps X_n: Ω_n ↦ 𝔻_n and every random element X with values in 𝔻₀ such that g₀(X) is a random element in 𝔼:
(i) If X_n ⇝ X, then g_n(X_n) ⇝ g₀(X).
(ii) If X_n →P X, then g_n(X_n) →P g₀(X).
(iii) If X_n →as X, then g_n(X_n) →as g₀(X).

Proof. The proofs for 𝔻_n = 𝔻 and g_n = g fixed, where g is continuous at every point of 𝔻₀, are the same as in the case of Euclidean spaces. We prove the refinement only for (i). The other refinements are not needed in the following.
For every closed set F, we have the inclusion
$$\bigcap_{k=1}^{\infty} \overline{\bigcup_{m \ge k} g_m^{-1}(F)} \subset g_0^{-1}(F) \cup \mathbb{D}_0^c.$$
Indeed, suppose that x is in the set on the left side. Then for every k there is an m_k ≥ k and an element x_{m_k} ∈ g_{m_k}⁻¹(F) with d(x_{m_k}, x) < 1/k. Thus, there exist a sequence m_k → ∞ and elements x_{m_k} ∈ 𝔻_{m_k} with x_{m_k} → x. Then either g_{m_k}(x_{m_k}) → g₀(x) or x ∉ 𝔻₀. Because the set F is closed, this implies that g₀(x) ∈ F or x ∉ 𝔻₀.
Now, for every fixed k, by the portmanteau lemma,
The extension of Prohorov's theorem requires more care.† In a Euclidean space, a set is compact if and only if it is closed and bounded. In general metric spaces, a compact set is closed and bounded, but a closed, bounded set is not necessarily compact. It is the compactness that we employ in the definition of tightness. A Borel-measurable random element X into a metric space is tight if for every ε > 0 there exists a compact set K such that P(X ∉ K) < ε. A sequence of arbitrary maps X_n: Ω_n ↦ 𝔻 is called asymptotically tight if for every ε > 0 there exists a compact set K such that
$$\liminf_{n\to\infty} P_*\bigl(X_n \in K^\delta\bigr) \ge 1 - \varepsilon, \qquad \text{every } \delta > 0.$$
Here K^δ is the δ-enlargement {y : d(y, K) < δ} of the set K. It can be shown that, for Borel-measurable maps in ℝᵏ, this is identical to "uniformly tight," as defined in Chapter 2.
In order to obtain a theory that applies to a sufficient number of applications, again we do not wish to assume that the X_n are Borel-measurable. However, Prohorov's theorem is true only under, at least, "measurability in the limit." An arbitrary sequence of maps X_n is called asymptotically measurable if
$$\mathrm{E}^* f(X_n) - \mathrm{E}_* f(X_n) \to 0, \qquad \text{every } f \in C_b(\mathbb{D}).$$
Here E_* denotes the inner expectation, which is defined in analogy with the outer expectation, and C_b(𝔻) is the collection of all bounded, continuous functions f: 𝔻 ↦ ℝ. A Borel-measurable sequence of random elements X_n is certainly asymptotically measurable, because then both the outer and the inner expectations in the preceding display are equal to the expectation, and the difference is identically zero.
18.12 Theorem (Prohorov's theorem). Let X_n: Ω_n ↦ 𝔻 be arbitrary maps into a metric space.
(i) If X_n ⇝ X for some tight random element X, then {X_n : n ∈ ℕ} is asymptotically tight and asymptotically measurable.
(ii) If X_n is asymptotically tight and asymptotically measurable, then there is a subsequence and a tight random element X such that X_{n_j} ⇝ X as j → ∞.

† The following Prohorov's theorem is not used in this book. For a proof see, for instance, [146].
18.3 Bounded Stochastic Processes
A stochastic process X = {X_t : t ∈ T} with bounded sample paths can be viewed as a map X: Ω ↦ ℓ∞(T). If T = [a, b] and the sample paths are continuous or càdlàg, then X is also a map with values in C[a, b] or D[a, b].

Because C[a, b] ⊂ D[a, b] ⊂ ℓ∞[a, b], we can consider the weak convergence of a sequence of maps with values in C[a, b] relative to C[a, b], but also relative to D[a, b], or ℓ∞[a, b]. The following lemma shows that this does not make a difference, as long as we use the uniform norm for all three spaces.
18.13 Lemma. Let 𝔻₀ ⊂ 𝔻 be arbitrary metric spaces equipped with the same metric. If X and every X_n take their values in 𝔻₀, then X_n ⇝ X as maps in 𝔻₀ if and only if X_n ⇝ X as maps in 𝔻.

Proof. Because a set G₀ in 𝔻₀ is open if and only if it is of the form G ∩ 𝔻₀ for an open set G in 𝔻, this is an easy corollary of (iii) of the portmanteau lemma. ■
Thus, we may concentrate on weak convergence in the space ℓ∞(T), and automatically obtain characterizations of weak convergence in C[a, b] or D[a, b]. The next theorem gives a characterization by finite approximation. It is required that, for any ε > 0, the index set T can be partitioned into finitely many sets T₁, ..., T_k such that (asymptotically) the variation of the sample paths t ↦ X_{n,t} is less than ε on every one of the sets T_i, with large probability. Then the behavior of the process can be described, within a small error margin, by the behavior of the marginal vectors (X_{n,t₁}, ..., X_{n,t_k}) for arbitrary fixed points t_i ∈ T_i. If these marginals converge, then the processes converge.
18.14 Theorem. A sequence of arbitrary maps X_n: Ω_n ↦ ℓ∞(T) converges weakly to a tight random element if and only if both of the following conditions hold:
(i) the sequence (X_{n,t₁}, ..., X_{n,t_k}) converges in distribution in ℝᵏ for every finite set of points t₁, ..., t_k in T;
(ii) for every ε, η > 0 there exists a partition of T into finitely many sets T₁, ..., T_k such that
$$\limsup_{n\to\infty} P^*\Bigl(\sup_i \sup_{s,t \in T_i} |X_{n,s} - X_{n,t}| \ge \varepsilon\Bigr) \le \eta.$$
Proof. We only give the proof of the more constructive part, the sufficiency of (i) and (ii). For each natural number m, partition T into sets T₁ᵐ, ..., T_{k_m}ᵐ as in (ii), corresponding to ε = η = 2⁻ᵐ. Because the probabilities in (ii) decrease if the partition is refined, we can assume without loss of generality that the partitions are successive refinements as m increases. For fixed m define a semimetric ρ_m on T by ρ_m(s, t) = 0 if s and t belong to the same partitioning set T_iᵐ, and by ρ_m(s, t) = 1 otherwise. Every ρ_m-ball of radius 0 < ε < 1 coincides with a partitioning set. In particular, T is totally bounded for ρ_m, and the ρ_m-diameter of a set T_iᵐ is zero. By the nesting of the partitions, ρ₁ ≤ ρ₂ ≤ ···. Define ρ(s, t) = Σ_{m=1}^∞ 2⁻ᵐ ρ_m(s, t). Then ρ is a semimetric such that the ρ-diameter of T_iᵐ is smaller than Σ_{k>m} 2⁻ᵏ = 2⁻ᵐ, and hence T is totally bounded for ρ. Let T₀ be the countable ρ-dense subset constructed by choosing an arbitrary point t_jᵐ from every T_jᵐ.

By assumption (i) and Kolmogorov's consistency theorem (e.g., [133, p. 244] or [42, p. 347]), we can construct a stochastic process {X_t : t ∈ T₀} on some probability space such that (X_{n,t₁}, ..., X_{n,t_k}) ⇝ (X_{t₁}, ..., X_{t_k}) for every finite set of points t₁, ..., t_k in T₀. By the
In the course of the proof of the preceding theorem a semimetric ρ is constructed such that the weak limit X has uniformly ρ-continuous sample paths, and such that (T, ρ) is totally bounded. This is surprising: even though we are discussing stochastic processes with values in the very large space ℓ∞(T), the limit is concentrated on a much smaller space of continuous functions. Actually, this is a consequence of imposing the condition (ii), which can be shown to be equivalent to asymptotic tightness. It can be shown, more generally, that every tight random element X in ℓ∞(T) necessarily concentrates on UC(T, ρ) for some semimetric ρ (depending on X) that makes T totally bounded.

In view of this connection between the partitioning condition (ii), continuity, and tightness, we shall sometimes refer to this condition as the condition of asymptotic tightness or asymptotic equicontinuity.
We record the existence of the semimetric for later reference and note that, for a Gaussian limit process, this can always be taken equal to the "intrinsic" standard deviation semimetric.

18.15 Lemma. Under the conditions (i) and (ii) of the preceding theorem there exists a semimetric ρ on T for which T is totally bounded, and such that the weak limit of the sequence X_n can be constructed to have almost all sample paths in UC(T, ρ). Furthermore, if the weak limit X is zero-mean Gaussian, then this semimetric can be taken equal to ρ(s, t) = sd(X_s − X_t).
Notes
The theory in this chapter was developed in increasing generality over the course of many
years. Work by Donsker around 1950 on the approximation of the empirical process and
the partial sum process by the Brownian bridge and Brownian motion processes was an
important motivation. The first type of approximation is discussed in Chapter 19. For
further details and references concerning the material in this chapter, see, for example, [76]
or [146].
PROBLEMS
1. (i) Show that a compact set is totally bounded.
(ii) Show that a compact set is separable.
2. Show that a function f: 𝔻 ↦ 𝔼 is continuous at every x ∈ 𝔻 if and only if f⁻¹(G) is open in 𝔻 for every open G ⊂ 𝔼.
3. (Projection σ-field.) Show that the σ-field generated by the coordinate projections z ↦ z(t) on C[a, b] is equal to the Borel σ-field generated by the uniform norm. (First, show that the space C[a, b] is separable. Next show that every open set in a separable metric space is a countable union of open balls. Next, it suffices to prove that every open ball is measurable for the projection σ-field.)
4. Show that D[a, b] is not separable for the uniform norm.
5. Show that every function in D[a, b] is bounded.
6. Let h be an arbitrary element of D[−∞, ∞] and let ε > 0. Show that there exists a grid u₀ = −∞ < u₁ < ··· < u_m = ∞ such that h varies at most ε on every interval [u_i, u_{i+1}). Here "varies at most ε" means that |h(u) − h(v)| is less than ε for every u, v in the interval. (Make sure that all points at which h jumps more than ε are grid points.)
7. Suppose that H_n and H₀ are subsets of a semimetric space H such that H_n → H₀ in the sense that
(i) every h ∈ H₀ is the limit of a sequence h_n ∈ H_n;
(ii) if a subsequence h_{n_j} converges to a limit h, then h ∈ H₀.
Suppose that A_n are stochastic processes indexed by H that converge in distribution in the space ℓ∞(H) to a stochastic process A that has uniformly continuous sample paths. Show that sup_{h∈H_n} A_n(h) ⇝ sup_{h∈H₀} A(h).
19
Empirical Processes
In this chapter we improve on these results by considering t ↦ 𝔽_n(t) as a random function, rather than as a real-valued estimator for each t separately. This is of interest on its own account but also provides a useful starting tool for the asymptotic analysis of other statistics, such as quantiles, rank statistics, or trimmed means.
The Glivenko-Cantelli theorem extends the law of large numbers and gives uniform convergence. The uniform distance
$$\|\mathbb{F}_n - F\|_\infty = \sup_t |\mathbb{F}_n(t) - F(t)|$$
is known as the Kolmogorov-Smirnov statistic.
19.1 Theorem (Glivenko-Cantelli). If X₁, X₂, ... are i.i.d. random variables with distribution function F, then ‖𝔽_n − F‖∞ →as 0.

Proof. By the strong law of large numbers, both 𝔽_n(t) →as F(t) and 𝔽_n(t−) →as F(t−) for every t. Given a fixed ε > 0, there exists a partition −∞ = t₀ < t₁ < ··· < t_k = ∞ such that F(t_i−) − F(t_{i−1}) < ε for every i. (Points at which F jumps more than ε are points of the partition.) Now, for t_{i−1} ≤ t < t_i,
$$\mathbb{F}_n(t) - F(t) \le \mathbb{F}_n(t_i-) - F(t_{i-1}) \le \mathbb{F}_n(t_i-) - F(t_i-) + \varepsilon,$$
$$\mathbb{F}_n(t) - F(t) \ge \mathbb{F}_n(t_{i-1}) - F(t_i-) \ge \mathbb{F}_n(t_{i-1}) - F(t_{i-1}) - \varepsilon.$$
The convergence of 𝔽_n(t) and 𝔽_n(t−) for every fixed t is certainly uniform for t in the finite set {t₁, ..., t_{k−1}}. Conclude that lim sup ‖𝔽_n − F‖∞ ≤ ε, almost surely. This is true for every ε > 0 and hence the limit superior is zero. ■
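For a Uniform(0, 1) sample (so that F(t) = t), the uniform distance ‖𝔽_n − F‖∞ can be computed exactly from the order statistics, since 𝔽_n is piecewise constant. The following sketch (function name and sample sizes are illustrative, not from the text) shows the distance shrinking as n grows:

```python
import random

def sup_dist_uniform(sample):
    """sup_t |F_n(t) - t| for the Uniform(0,1) df: the supremum is attained
    at a jump of F_n, approached from the left or from the right."""
    x = sorted(sample)
    n = len(x)
    return max(max(i / n - x[i - 1], x[i - 1] - (i - 1) / n)
               for i in range(1, n + 1))

random.seed(0)
for n in (100, 1000, 10000):
    print(n, sup_dist_uniform([random.random() for _ in range(n)]))
```

The printed distances decrease roughly like 1/√n, anticipating the central limit theorem refinement discussed next.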
The extension of the central limit theorem to a "uniform" or "functional" central limit theorem is more involved. A first step is to prove the joint weak convergence of finitely many coordinates. By the multivariate central limit theorem, for every t₁, ..., t_k,
$$\bigl(\sqrt{n}(\mathbb{F}_n - F)(t_1), \ldots, \sqrt{n}(\mathbb{F}_n - F)(t_k)\bigr) \rightsquigarrow \bigl(G_F(t_1), \ldots, G_F(t_k)\bigr),$$
where the vector on the right has a multivariate-normal distribution, with mean zero and covariances
$$\operatorname{cov}\bigl(G_F(s), G_F(t)\bigr) = F(s \wedge t) - F(s)F(t). \qquad (19.2)$$
This suggests that the sequence of empirical processes √n(𝔽_n − F), viewed as random functions, converges in distribution to a Gaussian process G_F with zero mean and covariance function as in the preceding display. According to an extension of Donsker's theorem, this is true in the sense of weak convergence of these processes in the Skorohod space D[−∞, ∞] equipped with the uniform norm. The limit process G_F is known as an F-Brownian bridge process, and as a standard (or uniform) Brownian bridge if F is the uniform distribution λ on [0, 1]. From the form of the covariance function it is clear that the F-Brownian bridge is obtainable as G_λ ∘ F from a standard bridge G_λ. The name "bridge" results from the fact that the sample paths of the process are zero (one says "tied down") at the endpoints −∞ and ∞. This is a consequence of the fact that the difference of two distribution functions is zero at these points.
19.3 Theorem (Donsker). If X₁, X₂, ... are i.i.d. random variables with distribution function F, then the sequence of empirical processes √n(𝔽_n − F) converges in distribution in the space D[−∞, ∞] to a tight random element G_F, whose marginal distributions are zero-mean normal with covariance function (19.2).

Proof. The proof of this theorem is long. Because there is little to be gained by considering the special case of cells in the real line, we deduce the theorem from a more general result in the next section. ■
Figure 19.1 shows some realizations of the uniform empirical process. The roughness of the sample path for n = 5000 is remarkable, and typical. It is carried over onto the limit process.
Figure 19.1. Three realizations of the uniform empirical process, of 50 (top), 500 (middle), and 5000 (bottom) observations, respectively.
A related process is the Brownian motion process, which can be defined by Z_λ(t) = G_λ(t) + tZ for a standard normal variable Z independent of G_λ. The addition of tZ "liberates" the sample paths at t = 1 but retains the "tie" at t = 0. The Brownian motion process has the same modulus of continuity as the Brownian bridge and is considered an appropriate model for the physical Brownian movement of particles in a gas. The three coordinates of a particle starting at the origin at time 0 would be taken equal to three independent Brownian motions.
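The decomposition Z_λ(t) = G_λ(t) + tZ can be mimicked in simulation (a sketch; the grid size is arbitrary): a Brownian motion W built from independent Gaussian increments yields a bridge via G(t) = W(t) − tW(1), tied down at both endpoints.

```python
import math
import random

random.seed(1)
n = 1000                       # grid resolution on [0, 1]
dt = 1.0 / n
# Brownian motion W: cumulative sums of independent N(0, dt) increments.
w = [0.0]
for _ in range(n):
    w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))
# Brownian bridge G(t) = W(t) - t * W(1): tied down at t = 0 and t = 1.
bridge = [wt - (i / n) * w[-1] for i, wt in enumerate(w)]
# Conversely, W(t) = G(t) + t * Z with Z = W(1), a standard normal variable.
print(bridge[0], abs(bridge[-1]) < 1e-12)
```

Plotting `bridge` against an equally spaced grid reproduces the rough sample paths visible in Figure 19.1.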
The one-dimensional empirical process and its limits have been studied extensively.† For instance, the Glivenko-Cantelli theorem can be strengthened to a law of the iterated logarithm,
$$\limsup_{n\to\infty} \sqrt{\frac{n}{2\log\log n}}\,\|\mathbb{F}_n - F\|_\infty \le \frac{1}{2}, \qquad \text{a.s.},$$
with equality if F takes on the value ½. This can be further strengthened to Strassen's theorem
$$\sqrt{\frac{n}{2\log\log n}}\,(\mathbb{F}_n - F) \rightsquigarrow \mathcal{H} \circ F, \qquad \text{a.s.}$$
Here ℋ ∘ F is the class of all functions h ∘ F if h: [0, 1] ↦ ℝ ranges over the set of absolutely continuous functions‡ with h(0) = h(1) = 0 and ∫₀¹ h′(s)² ds ≤ 1. The notation h_n ⇝ ℋ means that the sequence h_n is relatively compact with respect to the uniform norm, with the collection of all limit points being exactly equal to ℋ. Strassen's theorem gives a fairly precise idea of the fluctuations of the empirical process √n(𝔽_n − F), while it converges in law to G_F.
The preceding results show that the uniform distance of 𝔽_n to F is maximally of the order √(log log n / n) as n → ∞. It is also known that the uniform distance is asymptotically (along the sequence) at least of the order 1/√(n log log n).
A famous theorem, the DKW inequality after Dvoretzky, Kiefer, and Wolfowitz, gives a bound on the tail probabilities of ‖𝔽_n − F‖∞. For every x,
$$P\bigl(\sqrt{n}\,\|\mathbb{F}_n - F\|_\infty > x\bigr) \le 2e^{-2x^2}.$$
The original DKW inequality did not specify the leading constant 2, which cannot be improved. In this form the inequality was found as recently as 1990 (see [103]).
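The bound with the sharp constant 2 is easy to probe by Monte Carlo for the uniform distribution. This is a sketch, with sample size, repetitions, and x chosen arbitrarily; because the bound is nearly attained at moderate x, a small slack is added for simulation error:

```python
import math
import random

def sup_dist_uniform(sample):
    """sup_t |F_n(t) - t| for a Uniform(0,1) sample, via order statistics."""
    x = sorted(sample)
    n = len(x)
    return max(max(i / n - x[i - 1], x[i - 1] - (i - 1) / n)
               for i in range(1, n + 1))

random.seed(2)
n, reps, x = 200, 2000, 1.0
exceed = sum(
    math.sqrt(n) * sup_dist_uniform([random.random() for _ in range(n)]) > x
    for _ in range(reps)
)
freq = exceed / reps
bound = 2 * math.exp(-2 * x * x)     # DKW bound, about 0.271 at x = 1
print(freq, "<=", bound)
# The DKW bound is nearly sharp here, so allow for Monte Carlo error:
assert freq <= bound + 0.05
```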
The central limit theorem can be strengthened through strong approximations. These give a special construction of the empirical process and Brownian bridges, on the same probability space, that are close not only in a distributional sense but also in a pointwise sense. One such result asserts that there exists a probability space carrying i.i.d. random variables X₁, X₂, ... with law F and a sequence of Brownian bridges G_{F,n} such that
$$\limsup_{n\to\infty} \frac{\sqrt{n}}{(\log n)^2}\,\bigl\|\sqrt{n}(\mathbb{F}_n - F) - G_{F,n}\bigr\|_\infty < \infty, \qquad \text{a.s.}$$
† See [134] for the following and many other results on the univariate empirical process.
‡ A function is absolutely continuous if it is the primitive function x ↦ ∫₀ˣ g(s) ds of an integrable function g. Then it is almost-everywhere differentiable with derivative g.
Because, by construction, every G_{F,n} is equal in law to G_F, this implies that √n(𝔽_n − F) ⇝ G_F as a process (Donsker's theorem), but it implies a lot more. Apparently, the distance between the sequence and its limit is of the order O((log n)²/√n). After the method of proof and the country of origin, results of this type are also known as Hungarian embeddings. Another construction yields the estimate, for fixed constants a, b, and c and every x > 0,
$$P\Bigl(\bigl\|\sqrt{n}(\mathbb{F}_n - F) - G_{F,n}\bigr\|_\infty > \frac{a\log n + x}{\sqrt{n}}\Bigr) \le b\,e^{-cx}.$$
$$Pf = \int f\,dP.$$
Actually, this chapter is concerned with these maps rather than with ℙ_n as a measure.

By the law of large numbers, the sequence ℙ_n f converges almost surely to Pf, for every f such that Pf is defined. The abstract Glivenko-Cantelli theorems make this result uniform in f ranging over a class of functions. A class ℱ of measurable functions f: 𝒳 ↦ ℝ is called P-Glivenko-Cantelli if
$$\|\mathbb{P}_n f - Pf\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} |\mathbb{P}_n f - Pf| \to 0, \qquad \text{almost surely.}$$
The empirical process evaluated at f is defined as 𝔾_n f = √n(ℙ_n − P)f. By the multivariate central limit theorem, for every finite set of measurable functions f_i with Pf_i² < ∞,
$$\bigl(\mathbb{G}_n f_1, \ldots, \mathbb{G}_n f_k\bigr) \rightsquigarrow \bigl(G_P f_1, \ldots, G_P f_k\bigr),$$
where the vector on the right possesses a multivariate-normal distribution with mean zero and covariances
$$\mathrm{E}\,G_P f\,G_P g = Pfg - Pf\,Pg.$$
The abstract Donsker theorems make this result "uniform" in classes of functions. A class ℱ of measurable functions f: 𝒳 ↦ ℝ is called P-Donsker if the sequence of processes {𝔾_n f : f ∈ ℱ} converges in distribution to a tight limit process in the space ℓ∞(ℱ). Then the limit process is a Gaussian process G_P with zero mean and covariance function as given in the preceding display and is known as a P-Brownian bridge. Of course, the Donsker property includes the requirement that the sample paths f ↦ 𝔾_n f are uniformly bounded for every n and every realization of X₁, ..., X_n. This is the case, for instance, if the class ℱ
has a finite and integrable envelope function F: a function such that |f(x)| ≤ F(x) < ∞, for every x and f. It is not required that the function x ↦ F(x) be uniformly bounded.

For convenience of terminology, we define a class ℱ of vector-valued functions f: 𝒳 ↦ ℝᵏ to be Glivenko-Cantelli or Donsker if each of the classes of coordinates f_i: 𝒳 ↦ ℝ with f = (f₁, ..., f_k) ranging over ℱ (i = 1, 2, ..., k) is Glivenko-Cantelli or Donsker. It can be shown that this is equivalent to the union of the k coordinate classes being Glivenko-Cantelli or Donsker.
Whether a class of functions is Glivenko-Cantelli or Donsker depends on the "size" of the class. A finite class of integrable functions is always Glivenko-Cantelli, and a finite class of square-integrable functions is always Donsker. On the other hand, the class of all square-integrable functions is Glivenko-Cantelli, or Donsker, only in trivial cases. A relatively simple way to measure the size of a class ℱ is in terms of entropy. We shall mainly consider the bracketing entropy relative to the L_r(P)-norm
$$\|f\|_{P,r} = \bigl(P|f|^r\bigr)^{1/r}.$$
For most classes of interest, the bracketing numbers N_{[]}(ε, ℱ, L_r(P)) grow to infinity as ε ↓ 0. A sufficient condition for a class to be Donsker is that they do not grow too fast. The speed can be measured in terms of the bracketing integral
$$J_{[]}\bigl(\delta, \mathcal{F}, L_2(P)\bigr) = \int_0^\delta \sqrt{\log N_{[]}\bigl(\varepsilon, \mathcal{F}, L_2(P)\bigr)}\,d\varepsilon.$$
If this integral is finite for δ = 1, then the class ℱ is P-Donsker.
Proof. Let 𝒢 be the collection of all differences f − g if f and g range over ℱ. With a given set of ε-brackets [l_i, u_i] over ℱ we can construct 2ε-brackets over 𝒢 by taking differences [l_i − u_j, u_i − l_j] of upper and lower bounds. Therefore, the bracketing numbers N_{[]}(ε, 𝒢, L₂(P)) are bounded by the squares of the bracketing numbers N_{[]}(ε/2, ℱ, L₂(P)). Taking a logarithm turns the square into a multiplicative factor 2, and hence the entropy integrals of ℱ and 𝒢 are proportional.
For a given, small δ > 0, choose a minimal number of brackets of size δ that cover ℱ, and use them to form a partition of ℱ = ∪_i ℱ_i into sets of diameters smaller than δ. The subset of 𝒢 consisting of differences f − g of functions f and g belonging to the same partitioning set consists of functions of L₂(P)-norm smaller than δ. Hence, by Lemma 19.34 ahead, there exists a finite number a(δ) such that
$$\mathrm{E}^* \sup_i \sup_{f,g \in \mathcal{F}_i} |\mathbb{G}_n(f - g)| \lesssim J_{[]}\bigl(\delta, \mathcal{G}, L_2(P)\bigr) + \sqrt{n}\,P^* F\,1\bigl\{F > a(\delta)\sqrt{n}\bigr\}.$$
Here the envelope function F can be taken equal to the supremum of the absolute values of the upper and lower bounds of finitely many brackets that cover ℱ, for instance a minimal set of brackets of size 1. This F is square-integrable.

The second term on the right is bounded by a(δ)⁻¹ P F² 1{F > a(δ)√n} and hence converges to zero as n → ∞ for every fixed δ. The integral converges to zero as δ → 0. The theorem follows from Theorem 18.14, in view of Markov's inequality. ■
19.7 Example (Parametric class). Let ℱ = {f_θ : θ ∈ Θ} be a class of measurable functions indexed by a bounded subset Θ ⊂ ℝᵈ such that, for some measurable function m,
$$|f_{\theta_1}(x) - f_{\theta_2}(x)| \le m(x)\,\|\theta_1 - \theta_2\|, \qquad \text{every } \theta_1, \theta_2 \in \Theta.$$
If P|m|^r < ∞, then there exists a constant K, depending on Θ and d only, such that the bracketing numbers satisfy
$$N_{[]}\bigl(\varepsilon\|m\|_{P,r}, \mathcal{F}, L_r(P)\bigr) \le K\Bigl(\frac{\operatorname{diam}\Theta}{\varepsilon}\Bigr)^d, \qquad \text{every } 0 < \varepsilon < \operatorname{diam}\Theta.$$
Thus the entropy is at most of the order log(1/ε). Hence the bracketing entropy integral certainly converges, and the class of functions ℱ is Donsker.

To establish the upper bound we use brackets of the type [f_θ − εm, f_θ + εm] for θ ranging over a suitably chosen subset of Θ. These brackets have L_r(P)-size 2ε‖m‖_{P,r}. If θ ranges over a grid of meshwidth ε over Θ, then the brackets cover ℱ, because, by the Lipschitz condition, f_{θ₁} − εm ≤ f_{θ₂} ≤ f_{θ₁} + εm if ‖θ₁ − θ₂‖ ≤ ε. Thus, we need as many brackets as we need balls of radius ε/2 to cover Θ.
The size of Θ in every fixed dimension is at most diam Θ. We can cover Θ with fewer than (diam Θ/ε)ᵈ cubes of size ε. The circumscribed balls have radius a multiple of ε and also cover Θ. If we replace the centers of these balls by their projections into Θ, then the balls of twice the radius still cover Θ. □
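The covering step can be made concrete (a sketch; the unit cube stands in for a generic bounded Θ): a grid of mesh ε uses at most ⌈1/ε⌉^d cubes, matching the (diam Θ/ε)^d order of the bound.

```python
import itertools
import math

def grid_cover(d, eps):
    """Centers of closed cubes of side <= eps that cover the unit cube [0,1]^d."""
    k = math.ceil(1.0 / eps)                 # cubes per axis, side 1/k <= eps
    axis = [(i + 0.5) / k for i in range(k)]
    return list(itertools.product(axis, repeat=d))

for d in (1, 2, 3):
    for eps in (0.5, 0.25, 0.1):
        centers = grid_cover(d, eps)
        # number of cubes is ceil(1/eps)^d, i.e., of the order (1/eps)^d
        assert len(centers) == math.ceil(1.0 / eps) ** d
print(len(grid_cover(2, 0.1)))  # 100
```

The exponential growth in the dimension d is exactly the (diam Θ/ε)^d factor appearing in the bracketing bound of the example above.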
19.8 Example (Pointwise compact class). The parametric class in Example 19.7 is certainly Glivenko-Cantelli, but for this a much weaker continuity condition also suffices. Let ℱ = {f_θ : θ ∈ Θ} be a collection of measurable functions with integrable envelope function F indexed by a compact metric space Θ such that the map θ ↦ f_θ(x) is continuous for every x. Then the L₁-bracketing numbers of ℱ are finite and hence ℱ is Glivenko-Cantelli.

We can construct the brackets in the obvious way in the form [f_B, f^B], where B is an open ball and f_B and f^B are the infimum and supremum of f_θ for θ ∈ B, respectively. Given a sequence of balls B_m with common center a given θ and radii decreasing to 0, we have f^{B_m} − f_{B_m} ↓ f_θ − f_θ = 0 by the continuity, pointwise in x and hence also in L₁ by the dominated-convergence theorem and the integrability of the envelope. Thus, given ε > 0, for every θ there exists an open ball B around θ such that the bracket [f_B, f^B] has size at most ε. By the compactness of Θ, the collection of balls constructed in this way has a finite subcover. The corresponding brackets cover ℱ.

This construction shows that the bracketing numbers are finite, but it gives no control on their sizes. □
19.9 Example (Smooth functions). Let ℝᵈ = ∪_j I_j be a partition in cubes of volume 1 and let ℱ be the class of all functions f: ℝᵈ → ℝ whose partial derivatives up to order α exist and are uniformly bounded by constants M_j on each of the cubes I_j. (The condition includes bounds on the "zero-th derivative," which is f itself.) Then the bracketing numbers of ℱ satisfy, for every V ≥ d/α and every probability measure P,
19.10 Example (Sobolev classes). Let ℱ be the set of all functions f: [0, 1] ↦ ℝ such that ‖f‖∞ ≤ 1 and the (k − 1)-th derivative is absolutely continuous with ∫(f⁽ᵏ⁾)²(x) dx ≤ 1 for some fixed k ∈ ℕ. Then there exists a constant K such that, for every ε > 0,‡
$$\log N_{[]}\bigl(\varepsilon, \mathcal{F}, \|\cdot\|_\infty\bigr) \le K\Bigl(\frac{1}{\varepsilon}\Bigr)^{1/k}.$$

† The upper bound and this sufficient condition can be slightly improved. For this and a proof of the upper bound, see, e.g., [146, Corollary 2.7.4].
‡ See [16].
19.2 Empirical Distributions 273
19.11 Example (Bounded variation). Let ℱ be the collection of all monotone functions f : ℝ ↦ [−1, 1] or, bigger, the set of all functions that are of variation bounded by 1. These are the differences of pairs of monotonely increasing functions that together increase at most 1. Then there exists a constant K such that, for every r ≥ 1 and probability measure P,†
19.12 Example (Weighted distribution function). Let w : (0, 1) ↦ ℝ⁺ be a fixed, continuous function. The weighted empirical process of a sample of real-valued observations is the process

t ↦ G_n^w(t) = √n (F_n − F)(t) w(F(t))

(defined to be zero if F(t) = 0 or F(t) = 1). For a bounded function w, the map z ↦ z · w∘F is continuous from ℓ∞[−∞, ∞] into ℓ∞[−∞, ∞], and hence the weak convergence of the weighted empirical process follows from the convergence of the ordinary empirical process and the continuous-mapping theorem. Of more interest are weight functions that are unbounded at 0 or 1, which can be used to rescale the empirical process at its two extremes −∞ and ∞. Because the difference (F_n − F)(t) converges to 0 as t → ±∞, the sample paths of the process t ↦ G_n^w(t) may be bounded even for unbounded w, and the rescaling increases our knowledge of the behavior at the two extremes.
A simple condition for the weak convergence of the weighted empirical process in ℓ∞(−∞, ∞) is that the weight function w is monotone around 0 and 1 and satisfies ∫₀¹ w²(s) ds < ∞. The square-integrability is almost necessary, because the convergence is known to fail for w(t) = 1/√(t(1 − t)). The Chibisov-O'Reilly theorem gives necessary and sufficient conditions but is more complicated.
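The dividing line drawn by the square-integrability condition can be checked numerically. The following sketch (an illustration of ours, not from the text) compares truncated integrals of w² for w(s) = (s(1−s))^{−1/4}, which satisfies the condition, with those for the critical weight w(s) = (s(1−s))^{−1/2}, for which convergence fails:

```python
# Numerical check (our illustration) of the square-integrability condition.
# For w(s) = (s(1-s))^(-1/4): the integral of w^2 is ∫ ds / sqrt(s(1-s)) = pi,
# so the weighted empirical process converges weakly.  For the critical weight
# w(s) = (s(1-s))^(-1/2) the integral of w^2 diverges and convergence fails.
import math

def truncated_integral(w2, eps, steps=200000):
    """Midpoint rule for the integral of w2 over (eps, 1 - eps)."""
    h = (1.0 - 2.0 * eps) / steps
    return sum(w2(eps + (i + 0.5) * h) for i in range(steps)) * h

w2_ok = lambda s: 1.0 / math.sqrt(s * (1.0 - s))   # w^2 for the good weight
w2_bad = lambda s: 1.0 / (s * (1.0 - s))           # w^2 for the critical weight

vals_ok = [truncated_integral(w2_ok, e) for e in (1e-3, 1e-5, 1e-7)]
vals_bad = [truncated_integral(w2_bad, e) for e in (1e-3, 1e-5, 1e-7)]
# vals_ok stabilizes near pi; vals_bad grows without bound, like 2 log(1/eps).
```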
We shall give the proof for the case that w is unbounded at only one endpoint and decreases from w(0) = ∞ to w(1) = 0. Furthermore, we assume that F is the uniform measure on [0, 1]. (The general case can be treated in the same way, or by the quantile transformation.) Then the function v(s) = w²(s) with domain [0, 1] has an inverse v⁻¹(t) = w⁻¹(√t) with domain [0, ∞]. A picture of the graphs shows that ∫₀^∞ w⁻¹(√t) dt = ∫₀¹ w²(t) dt, which is finite by assumption. Thus, given an ε > 0, we can choose partitions 0 = s₀ < s₁ < ··· < s_k = 1 and 0 = t₀ < t₁ < ··· < t_l = ∞ such that, for every i,
This corresponds to slicing the area under w² both horizontally and vertically into pieces of size ε². Let the partition 0 = u₀ < u₁ < ··· < u_m = 1 consist of all points s_i and all points w⁻¹(√t_j). Then, for every i,
have L₁(λ)-size 2ε². Their square roots are brackets for the functions of interest x ↦ w(t)1_{[0,t]}(x), and have L₂(λ)-size √2 ε, because P|√u − √l|² ≤ P|u − l|. Because the number m of points in the partitions can be chosen of the order (1/ε)² for small ε, the bracketing integral of the class of functions x ↦ w(t)1_{[0,t]}(x) converges easily. □
The conditions given by the preceding theorems are not necessary, but the theorems cover many examples. Simple necessary and sufficient conditions are not known and may not exist. An alternative set of relatively simple conditions is based on "uniform covering numbers." The covering number N(ε, ℱ, L₂(Q)) is the minimal number of L₂(Q)-balls of radius ε needed to cover the set ℱ. The entropy is the logarithm of the covering number. The following theorems show that the bracketing numbers in the preceding Glivenko-Cantelli and Donsker theorems can be replaced by the uniform covering numbers

sup_Q N(ε‖F‖_{Q,r}, ℱ, L_r(Q)).

Here the supremum is taken over all probability measures Q for which the class ℱ is not identically zero (and hence ‖F‖^r_{Q,r} = QF^r > 0). The uniform covering numbers are relative to a given envelope function F. This is fortunate, because the covering numbers under different measures Q typically are more stable if standardized by the norm ‖F‖_{Q,r} of the envelope function. In comparison, in the case of bracketing numbers we consider a single distribution P, and standardization by an envelope does not make much of a difference.
The uniform entropy integral is defined as

J(δ, ℱ, L₂) = ∫₀^δ √(log sup_Q N(ε‖F‖_{Q,2}, ℱ, L₂(Q))) dε.
The condition that the class ℱ be "suitably measurable" is satisfied in most examples but cannot be omitted. We do not give a general definition here but note that it suffices that there exists a countable collection 𝒢 of functions such that each f is the pointwise limit of a sequence g_m in 𝒢.†
An important class of examples for which good estimates on the uniform covering numbers are known are the so-called Vapnik-Cervonenkis classes, or VC classes, which are defined through combinatorial properties and include many well-known examples.
† See, for example, [117], [120], or [146] for proofs of the preceding theorems and other unproven results in this section.
Say that a collection 𝒞 of subsets of the sample space 𝒳 picks out a certain subset A of the finite set {x₁, ..., x_n} ⊂ 𝒳 if it can be written as A = {x₁, ..., x_n} ∩ C for some C ∈ 𝒞. The collection 𝒞 is said to shatter {x₁, ..., x_n} if 𝒞 picks out each of its 2ⁿ subsets. The VC index V(𝒞) of 𝒞 is the smallest n for which no set of size n is shattered by 𝒞. A collection 𝒞 of measurable sets is called a VC class if its index V(𝒞) is finite.
More generally, we can define VC classes of functions. A collection ℱ is a VC class of functions if the collection of all subgraphs {(x, t) : f(x) < t}, as f ranges over ℱ, forms a VC class of sets in 𝒳 × ℝ (Figure 19.2). It is not difficult to see that a collection of sets 𝒞 is a VC class of sets if and only if the collection of corresponding indicator functions 1_C is a VC class of functions. Thus, it suffices to consider VC classes of functions.
By definition, a VC class of sets picks out strictly less than 2ⁿ subsets from any set of n ≥ V(𝒞) elements. The surprising fact, known as Sauer's lemma, is that such a class can necessarily pick out only a polynomial number O(n^{V(𝒞)−1}) of subsets, well below the 2ⁿ − 1 that the definition appears to allow. Now, the number of subsets picked out by a collection 𝒞 is closely related to the covering numbers of the class of indicator functions {1_C : C ∈ 𝒞} in L₁(Q) for discrete, empirical-type measures Q. By a clever argument, Sauer's lemma can be used to bound the uniform covering (or entropy) numbers for this class.
19.15 Lemma. There exists a universal constant K such that for any VC class ℱ of functions, any r ≥ 1 and 0 < ε < 1,

sup_Q N(ε‖F‖_{Q,r}, ℱ, L_r(Q)) ≤ K V(ℱ) (16e)^{V(ℱ)} (1/ε)^{r(V(ℱ)−1)}.

Consequently, VC classes are examples of polynomial classes in the sense that their covering numbers are bounded by a polynomial in 1/ε. They are relatively small. The
upper bound shows that VC classes satisfy the entropy conditions for the Glivenko-Cantelli theorem and Donsker theorem discussed previously (with much to spare). Thus, they are P-Glivenko-Cantelli and P-Donsker under the moment conditions P*F < ∞ and P*F² < ∞ on their envelope function, if they are "suitably measurable." (The VC property does not imply the measurability.)
19.16 Example (Cells). The collection of all cells (−∞, t] in the real line is a VC class of index V(𝒞) = 2. This follows because every one-point set {x₁} is shattered, but no two-point set {x₁, x₂} is shattered: If x₁ < x₂, then the cells (−∞, t] cannot pick out {x₂}. □
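Shattering is a finitary notion and can be checked by brute force. The following sketch (our illustration; the function names are ours) verifies the computation of this example and, at the same time, the polynomial growth promised by Sauer's lemma, since the cells pick out only n + 1 subsets of n distinct points:

```python
# Brute-force check (our illustration) that the cells (-inf, t] shatter every
# one-point set but no two-point set, so V(C) = 2, and that on n points they
# pick out only n + 1 subsets - the polynomial growth promised by Sauer's lemma.
from itertools import chain, combinations

def picked_out(points):
    """All subsets of `points` of the form {x in points : x <= t}."""
    pts = sorted(points)
    subsets = {frozenset()}
    for i in range(len(pts)):
        subsets.add(frozenset(pts[: i + 1]))
    return subsets

def shatters(points):
    """True if the cells pick out every subset of `points`."""
    power_set = {frozenset(c) for c in chain.from_iterable(
        combinations(points, r) for r in range(len(points) + 1))}
    return picked_out(points) == power_set

one_point = shatters([3.0])                    # every one-point set is shattered
two_point = shatters([1.0, 2.0])               # False: {2.0} cannot be picked out
n_subsets = len(picked_out(list(range(10))))   # n + 1 = 11 subsets, not 2^10
```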
19.17 Example (Vector spaces). Let ℱ be the set of all linear combinations Σ λ_i f_i of a given, finite set of functions f₁, ..., f_k on 𝒳. Then ℱ is a VC class and hence has a finite uniform entropy integral. Furthermore, the same is true for the class of all sets {f > c} as f ranges over ℱ and c over ℝ.
For instance, we can construct ℱ to be the set of all polynomials of degree less than some number, by taking basis functions 1, x, x², ... on ℝ, and functions x₁^{k₁} ··· x_d^{k_d} more generally. For polynomials of degree up to 2 the collection of sets {f > 0} already contains all half-spaces and all ellipsoids. Thus, for instance, the collection of all ellipsoids is Glivenko-Cantelli and Donsker for any P.
To prove that ℱ is a VC class, consider any collection of n = k + 2 points (x₁, t₁), ..., (x_n, t_n) in 𝒳 × ℝ. We shall show this set is not shattered by ℱ, whence V(ℱ) ≤ n. By assumption, the vectors (f(x₁) − t₁, ..., f(x_n) − t_n)ᵀ are contained in a (k + 1)-dimensional subspace of ℝⁿ. Any vector a that is orthogonal to this subspace satisfies

Σ_{i: a_i > 0} a_i (f(x_i) − t_i) = Σ_{i: a_i < 0} (−a_i)(f(x_i) − t_i),   for every f ∈ ℱ.

(Define a sum over the empty set to be zero.) There exists a vector a with at least one strictly positive coordinate. Then the set {(x_i, t_i) : a_i > 0} is nonempty and is not picked out by the subgraphs of ℱ. If it were, then it would be of the form {(x_i, t_i) : t_i < f(x_i)} for some f, but then the left side of the display would be strictly positive and the right side nonpositive. □
A number of operations allow one to build new VC classes or Donsker classes out of known VC classes or Donsker classes.
19.18 Example (Stability properties). The class of all complements C^c, all intersections C ∩ D, all unions C ∪ D, and all Cartesian products C × D of sets C and D that range over VC classes 𝒞 and 𝒟 is VC.
The class of all suprema f ∨ g and infima f ∧ g of functions f and g that range over VC classes ℱ and 𝒢 is VC.
The proof that the collection of all intersections is VC is easy upon using Sauer's lemma, according to which a VC class can pick out only a polynomial number of subsets. From n given points 𝒞 can pick out at most O(n^{V(𝒞)}) subsets. From each of these subsets 𝒟 can pick out at most O(n^{V(𝒟)}) further subsets. A subset picked out by C ∩ D is equal to the subset picked out by C intersected with D. Thus we get all subsets by following the two-step procedure, and hence the class of all intersections C ∩ D can pick out at most O(n^{V(𝒞)+V(𝒟)}) subsets. For large n this is well below 2ⁿ, whence this class cannot pick out all subsets.
That the set of all complements is VC is an immediate consequence of the definition. Next the result for the unions follows by combination, because C ∪ D = (C^c ∩ D^c)^c.
The results for functions are consequences of the results for sets, because the subgraphs of suprema and infima are the unions and intersections of the subgraphs, respectively. □
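The two-step counting argument can be watched in action in a small example (ours, not from the text): with 𝒞 the cells (−∞, t] and 𝒟 the half-lines (s, ∞), the intersections are the intervals (s, t], which pick out only the contiguous blocks of a point set:

```python
# Illustration (ours) of the two-step counting argument: with C the cells
# (-inf, t] and D the half-lines (s, inf), the intersections C ∩ D are the
# intervals (s, t].  On n points these pick out only the contiguous blocks,
# n(n+1)/2 + 1 subsets in total - polynomial in n and far below 2^n.

def picked_by_intervals(n):
    """Subsets of {0, ..., n-1} of the form {i : s < i <= t}."""
    picked = {frozenset()}
    for i in range(n):
        for j in range(i, n):
            picked.add(frozenset(range(i, j + 1)))
    return picked

for n in (3, 6, 10):
    count = len(picked_by_intervals(n))
    assert count == n * (n + 1) // 2 + 1   # polynomial growth in n
    assert count < 2 ** n                  # no shattering for n >= 3
```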
19.19 Example (Uniform entropy). If ℱ and 𝒢 possess a finite uniform entropy integral, relative to envelope functions F and G, then so does the class ℱ·𝒢 of all functions x ↦ f(x)g(x), relative to the envelope function FG.
More generally, suppose that φ : ℝ² ↦ ℝ is a function such that, for given functions L_f and L_g and every x,
These statistics, as well as many others, are continuous functions of the empirical process.
The continuous-mapping theorem and Theorem 19.3 immediately imply the following
result.
19.21 Corollary. If X₁, X₂, ... are i.i.d. random variables with distribution function F, then the sequences of Kolmogorov-Smirnov statistics and Cramér-von Mises statistics converge in distribution to ‖G_F‖_∞ and ∫ G_F² dF, respectively. The distributions of these limits are the same for every continuous distribution function F.
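In practice these statistics and their limit p-values are available in scientific libraries. The following sketch (our illustration; it assumes SciPy and NumPy are available) computes the Kolmogorov-Smirnov statistic via scipy.stats.kstest and the Cramér-von Mises statistic from the standard computing formula W_n² = 1/(12n) + Σ_i (U_{(i)} − (2i − 1)/(2n))², with U_{(i)} the ordered values F(X_i):

```python
# Computing the two statistics with SciPy/NumPy (our illustration; the text
# itself does not use software).  Because the limit laws are distribution-free
# over continuous F, the p-value from scipy.stats.kstest is valid for any
# continuous null distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Kolmogorov-Smirnov: D_n = sup_t |F_n(t) - F(t)| and its asymptotic p-value.
ks = stats.kstest(x, "norm")

# Cramer-von Mises: n * integral (F_n - F)^2 dF via the computing formula.
u = np.sort(stats.norm.cdf(x))       # probability-transformed, ordered sample
n = len(u)
cvm = 1.0 / (12 * n) + float(np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2))
```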
For many goodness-of-fit statistics of this type, the limit distribution follows from the limit distribution of √n(P_n − P_θ̂). This is not a Brownian bridge but also contains a "drift," due to θ̂. Informally, if θ ↦ P_θ has a derivative Ṗ_θ in an appropriate sense, then
By the continuous-mapping theorem, the limit distribution of the last approximation can be derived from the limit distribution of the sequence (√n(P_n − P_θ), √n(θ̂ − θ)). The first component converges in distribution to a Brownian bridge. Its joint behavior with √n(θ̂ − θ) can most easily be obtained if the latter sequence is asymptotically linear. Assume that

√n(θ̂_n − θ) = (1/√n) Σ_{i=1}^n ψ_θ(X_i) + o_{P_θ}(1),

for "influence functions" ψ_θ with P_θ ψ_θ = 0 and P_θ‖ψ_θ‖² < ∞.
Proof. In view of the differentiability of the map θ ↦ P_θ and Lemma 2.12,
This justifies the approximation (19.22). The class 𝒢 obtained by adding the k components of ψ_θ to ℱ is Donsker. (The union of two Donsker classes is Donsker, in general. In
† This means that there exists a map Ṗ_θ : 𝒳 ↦ ℝᵏ such that ‖P_{θ+h} − P_θ − hᵀṖ_θ‖_ℱ = o(‖h‖) as h → 0; see Chapter 20.
the present case, the result also follows directly from Theorem 18.14.) The variables (√n(P_n − P_θ), n^{−1/2} Σ ψ_θ(X_i)) are obtained from the empirical process, seen as an element of ℓ∞(𝒢), by a continuous map. Finally, apply Slutsky's lemma. ∎
The preceding theorem implies, for instance, that the sequences of modified Kolmogorov-Smirnov statistics √n‖F_n − F_θ̂‖_∞ converge in distribution to the supremum of a certain Gaussian process. The distribution of the limit may depend on the model θ ↦ F_θ, the estimators θ̂_n, and even on the parameter value θ. Typically, this distribution is not known in closed form but has to be approximated numerically or by simulation. On the other hand, the limit distribution of the true Kolmogorov-Smirnov statistic under a continuous distribution can be derived from properties of the Brownian bridge, and is given by†

P(‖G_λ‖_∞ > x) = 2 Σ_{j=1}^∞ (−1)^{j+1} e^{−2j²x²}.
With the Donsker theorem in hand, the route via the Brownian bridge is probably the
most convenient. In the 1940s Smirnov obtained the right side as the limit of an explicit
expression for the distribution function of the Kolmogorov-Smirnov statistic.
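This classical series is implemented, for instance, in SciPy, whose scipy.special.kolmogorov is exactly the tail probability above; the following sketch (ours, not from the text) checks the partial sums against it:

```python
# Check (our illustration) of the Kolmogorov limit series against SciPy's
# implementation: scipy.special.kolmogorov(x) equals P(sup_t |G_lambda(t)| > x).
import math
from scipy.special import kolmogorov

def ks_tail(x, terms=100):
    """Partial sum of 2 * sum_{j>=1} (-1)^(j+1) * exp(-2 j^2 x^2)."""
    return 2.0 * sum((-1) ** (j + 1) * math.exp(-2.0 * j * j * x * x)
                     for j in range(1, terms + 1))

# The alternating series converges extremely fast; 100 terms is far more
# than enough for machine precision at these arguments.
for x in (0.5, 1.0, 1.36, 2.0):
    assert abs(ks_tail(x) - kolmogorov(x)) < 1e-8
```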
The preceding lemma can also be proved by reference to an almost-sure representation for the converging sequence G_n ⇝ G_P. Such a representation, a generalization of Theorem 2.19, exists. However, the correct handling of measurability issues makes its application involved.
19.25 Example (Mean absolute deviation). The mean absolute deviation of a random sample X₁, ..., X_n is the scale estimator

M_n = (1/n) Σ_{i=1}^n |X_i − X̄_n|.

The absolute value bars make the derivation of its asymptotic distribution surprisingly difficult. (Try and do it by elementary means.) Denote the distribution function of the observations by F, and assume for simplicity of notation that they have mean Fx equal to zero. We shall write F_n|x − θ| for the stochastic process θ ↦ n⁻¹ Σ_{i=1}^n |X_i − θ|, and use the notations G_n|x − θ| and F|x − θ| in a similar way.
If Fx² < ∞, then the set of functions x ↦ |x − θ| with θ ranging over a compact, such as [−1, 1], is F-Donsker by Example 19.7. Because, by the triangle inequality, F(|x − X̄_n| − |x|)² ≤ |X̄_n|² → 0 in probability, the preceding lemma shows that G_n|x − X̄_n| − G_n|x| → 0 in probability.
This can be rewritten as
as centering at the true mean. In this case not knowing the true mean does not hurt the scale estimator. In comparison, for the sample variance this is true for any F. □
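The conclusion, that centering at the sample mean is asymptotically as harmless as centering at the true mean, is easy to illustrate by simulation (our sketch, not from the text; for the standard normal distribution E|X| = √(2/π) ≈ 0.798):

```python
# Simulation (our illustration) of the mean absolute deviation of Example
# 19.25 for standard normal data, where E|X - mu| = sqrt(2/pi) ~ 0.798.
import math
import random

rng = random.Random(42)
n = 20000
xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
xbar = sum(xs) / n

m_n = sum(abs(x - xbar) for x in xs) / n   # centered at the sample mean
m_true = sum(abs(x) for x in xs) / n       # centered at the (known) true mean 0
target = math.sqrt(2.0 / math.pi)

# A deterministic consequence of the triangle inequality: the two centerings
# differ by at most |xbar|, which is of order 1/sqrt(n).
assert abs(m_n - m_true) <= abs(xbar) + 1e-12
```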
Perhaps the most important application of the preceding lemma is to the theory of Z-estimators. In Theorem 5.21 we imposed a pointwise Lipschitz condition on the maps θ ↦ ψ_θ to ensure the convergence (5.22):
In view of Example 19.7, this is now seen to be a consequence of the preceding lemma. The display is valid if the class of functions {ψ_θ : ‖θ − θ₀‖ < δ} is Donsker for some δ > 0 and ψ_θ → ψ_{θ₀} in quadratic mean. Imposing a Lipschitz condition is just one method to ensure these conditions, and hence Theorem 5.21 can be extended considerably. In particular, in its generalized form the theorem covers the sample median, corresponding to the choice ψ_θ(x) = sign(x − θ). The sign functions can be bracketed just as the indicator functions of cells considered in Example 19.6 and thus form a Donsker class.
For the treatment of semiparametric models (see Chapter 25), it is useful to extend the results on Z-estimators to the case of infinite-dimensional parameters. A differentiability or Lipschitz condition on the maps θ ↦ ψ_θ would preclude most applications of interest. However, if we use the language of Donsker classes, the extension is straightforward and useful.
If the parameter θ ranges over a subset of an infinite-dimensional normed space, then we use an infinite number of estimating equations, which we label by some set H and assume to be sums. Thus the estimator θ̂_n (nearly) solves an equation P_n ψ_{θ,h} = 0 for every h ∈ H. We assume that, for every fixed x and θ, the map h ↦ ψ_{θ,h}(x), which we denote by ψ_θ(x), is uniformly bounded, and the same for the map h ↦ Pψ_{θ,h}, which we denote by Pψ_θ.
19.26 Theorem. For each θ in a subset Θ of a normed space and every h in an arbitrary set H, let x ↦ ψ_{θ,h}(x) be a measurable function such that the class {ψ_{θ,h} : ‖θ − θ₀‖ < δ, h ∈ H} is P-Donsker for some δ > 0, with finite envelope function. Assume that, as a map into ℓ∞(H), the map θ ↦ Pψ_θ is Fréchet-differentiable at a zero θ₀, with a derivative V : lin Θ ↦ ℓ∞(H) that has a continuous inverse on its range. Furthermore, assume that ‖P(ψ_{θ,h} − ψ_{θ₀,h})²‖_H → 0 as θ → θ₀. If ‖P_n ψ_{θ̂_n}‖_H = o_P(n^{−1/2}) and θ̂_n converges in probability to θ₀, then
Furthermore, we suppose that the classes ℱ_n possess envelope functions F_n that satisfy the Lindeberg condition

PF_n² = O(1),    PF_n² 1{F_n > ε√n} → 0, for every ε > 0.

Then the central limit theorem holds under an entropy condition. As before, we can use either bracketing or uniform entropy.
Proof. Under bracketing the proof of the following theorem is similar to the proof of
Theorem 19.5. We omit the proof under uniform entropy.
For every given δ > 0 we can use the semimetric ρ and condition (19.27) to partition T into finitely many sets T₁, ..., T_k such that, for every sufficiently large n,
(This is the only role for the totally bounded semimetric ρ; alternatively, we could assume the existence of partitions as in this display directly.) Next we apply Lemma 19.34 to obtain the bound
Here a_n(δ) is the number given in Lemma 19.34 evaluated for the class of functions ℱ_n − ℱ_n, and F_n is its envelope, but the corresponding number and envelope of the class ℱ_n differ only by constants. Because J_{[]}(δ_n, ℱ_n, L₂(P)) → 0 for every δ_n ↓ 0, we must have that J_{[]}(δ, ℱ_n, L₂(P)) = O(1) for every δ > 0 and hence a_n(δ) is bounded away from zero. Then the second term in the preceding display converges to zero for every fixed δ > 0, by the Lindeberg condition. The first term can be made arbitrarily small as n → ∞ by choosing δ small, by assumption. ∎
19.29 Example (Local empirical measure). Consider the functions f_{n,t} = r_n 1_{(a, a+tδ_n]} for t ranging over a compact in ℝ, say [0, 1], a fixed number a, and sequences δ_n ↓ 0 and r_n → ∞. This leads to a multiple of the local empirical measure: P_n f_{n,t} is r_n times the fraction (1/n)#(X_i ∈ (a, a + tδ_n]) of observations falling into the shrinking interval (a, a + tδ_n].
Assume that the distribution of the observations is continuous with density p. Then
In the treatment of M-estimators and the likelihood ratio statistic in Chapters 5 and 16, we encountered random functions resulting from rescaling a given class of functions. Given functions x ↦ m_θ(x) indexed by a Euclidean parameter θ, we needed conditions that ensure that, for a given sequence r_n → ∞ and any random sequence ĥ_n = O_P(1),
(19.30)
We shall prove this under a Lipschitz condition, but it should be clear from the following proof and the preceding theorem that there are other possibilities.
19.31 Lemma. For each θ in an open subset of Euclidean space let x ↦ m_θ(x) be a measurable function such that the map θ ↦ m_θ(x) is differentiable at θ₀ for almost every x (or in probability) with derivative ṁ_{θ₀}(x) and such that, for every θ₁ and θ₂ in a neighborhood of θ₀, and for a measurable function ṁ with Pṁ² < ∞,
Then (19.30) is valid for every random sequence ĥ_n that is bounded in probability.
Proof. The random variables G_n(r_n(m_{θ₀+h/r_n} − m_{θ₀}) − hᵀṁ_{θ₀}) have mean zero and their variance converges to 0, by the differentiability of the maps θ ↦ m_θ and the Lipschitz condition, which allows application of the dominated-convergence theorem. In other words, this sequence, seen as stochastic processes indexed by h, converges marginally in distribution to zero. Because the sequence ĥ_n is bounded in probability, it suffices to strengthen this to uniform convergence in ‖h‖ ≤ 1. This follows if the sequence of processes converges weakly in the space ℓ∞(h : ‖h‖ ≤ 1), because taking a supremum is a continuous operation and, by the marginal convergence, the weak limit is then necessarily zero. By Theorem 18.14, we can confine ourselves to proving asymptotic tightness (i.e., condition (ii) of this theorem). Because the linear processes h ↦ hᵀG_nṁ_{θ₀} are trivially tight, we may concentrate on the processes h ↦ G_n(r_n(m_{θ₀+h/r_n} − m_{θ₀})), the empirical process indexed by the classes of functions r_nℳ_{1/r_n}, for ℳ_δ = {m_θ − m_{θ₀} : ‖θ − θ₀‖ ≤ δ}.
By Example 19.7, the bracketing numbers of the classes of functions ℳ_δ satisfy
The constant C is independent of ε and δ. The function M_δ = δṁ is an envelope function of ℳ_δ. The left side also gives the bracketing numbers of the rescaled classes ℳ_δ/δ relative to the envelope functions M_δ/δ = ṁ. Thus, we compute
The right side converges to zero as δ_n ↓ 0, uniformly in δ. The envelope functions M_δ/δ = ṁ also satisfy the Lindeberg condition. The lemma follows from Theorem 19.28. ∎
Proof. The leading term 2 results from separate bounds on the right and left tail probabilities. It suffices to bound the right tail probabilities by the exponential, because the left tail inequality follows from the right tail inequality applied to −f. By Markov's inequality, for every λ > 0,
by Fubini's theorem and next developing the exponential function in its power series. The term for k = 1 vanishes because P(f − Pf) = 0, so that a factor 1/n can be moved outside the sum. We apply this inequality with the choice
Next, with λ₁ and λ₂ defined as in the preceding display, we insert the bound λᵏ ≤ λ₁λ₂^{k−2}λ and use the inequality |P(f − Pf)ᵏ| ≤ Pf²(2‖f‖_∞)^{k−2}, and we obtain

P(G_n f > x) ≤ e^{−λx} (1 + (1/n) Σ_{k=2}^∞ (1/k!) (λx/2))ⁿ.

Because Σ_{k=2}^∞ 1/k! = e − 2 ≤ 1 and (1 + a)ⁿ ≤ e^{an}, the right side of this inequality is bounded by exp(−λx/2), which is the exponential in the lemma. ∎
19.33 Lemma. For any finite class ℱ of bounded, measurable, square-integrable functions, with |ℱ| elements,
Proof. Define a = 24‖f‖_∞/√n and b = 24Pf². For x ≥ b/a and x ≤ b/a the exponent in Bernstein's inequality is bounded above by −3x/a and −3x²/b, respectively.
† The constant 1/4 can be replaced by 1/2 (which is the best possible constant) by a more precise argument.
For the truncated variables A_f = G_n f 1{|G_n f| > b/a} and B_f = G_n f 1{|G_n f| ≤ b/a}, Bernstein's inequality yields the bounds, for all x > 0,

Eψ₁(|A_f|/a) = E ∫₀^{|A_f|/a} eˣ dx = ∫₀^∞ P(|A_f| > xa) eˣ dx ≤ 1.

By a similar argument we find that Eψ₂(|B_f|/√b) ≤ 1. Because the function ψ₁ is convex and nonnegative, we next obtain, by Jensen's inequality,
Proof. Because |G_n f| ≤ √n(P_n + P)g for every pair of functions |f| ≤ g, we obtain, for F an envelope function of ℱ,
To see this, first cover ℱ with minimal numbers of L₂(P)-brackets of size 2^{−q} and replace these by as many disjoint sets, each of them equal to a bracket minus "previous" brackets. This gives partitions that satisfy the conditions with Δ_{qi} equal to the difference of the upper and lower brackets. If this sequence of partitions does not yet consist of successive refinements, then replace the partition at stage q by the set of all intersections of the form ∩_{p=q₀}^{q} ℱ_{p,i_p}. This gives partitions into N̄_q = N_{q₀}···N_q sets. Using the inequality (log Π N_p)^{1/2} ≤ Σ (log N_p)^{1/2} and rearranging sums, we see that the first of the two displayed conditions is still satisfied.
Choose for each q ≥ q₀ a fixed element f_{qi} from each partitioning set ℱ_{qi}, and set
Then π_q f and Δ_q f run through a set of N̄_q functions as f runs through ℱ. Define for each fixed n and q ≥ q₀ numbers and indicator functions
Then A_q f and B_q f are constant in f on each of the partitioning sets ℱ_{qi} at level q, because the partitions are nested. Our construction of partitions and choice of q₀ also ensure that 2a(δ) ≤ a_{q₀}, whence A_{q₀}f = 1. Now decompose, pointwise in x (which is suppressed in the notation),
The idea here is to write the left side as the sum of f − π_{q₁}f and the telescopic sum Σ_{q=q₀+1}^{q₁}(π_q f − π_{q−1}f) for the largest q₁ = q₁(f, x) such that each of the bounds Δ_q f on the "links" π_q f − π_{q−1}f in the "chain" is uniformly bounded by √n a_q (with q₁ possibly infinite). We note that either all B_q f are 1 or there is a unique q₁ > q₀ with B_{q₁}f = 1. In the first case A_q f = 1 for every q; in the second case A_q f = 1 for q < q₁ and A_q f = 0 for q ≥ q₁.
Next we apply the empirical process G_n to both series on the right separately, take absolute values, and next take suprema over f ∈ ℱ. We shall bound the means of the resulting two variables.
First, because the partitions are nested, Δ_q f B_q f ≤ Δ_{q−1}f B_q f ≤ √n a_{q−1}; trivially, P(Δ_q f)² B_q f ≤ 2^{−2q}. Because |G_n f| ≤ G_n g + 2√n Pg for every pair of functions |f| ≤ g, we obtain, by the triangle inequality and next Lemma 19.33,
In view of the definition of a_q, the series on the right can be bounded by a multiple of the series Σ_{q=q₀+1}^∞ 2^{−q} √(log N̄_q).
Second, there are at most N̄_q functions π_q f − π_{q−1}f and at most N̄_{q−1} indicator functions A_{q−1}f. Because the partitions are nested, the function |π_q f − π_{q−1}f| A_{q−1}f is bounded by Δ_{q−1}f A_{q−1}f ≤ √n a_{q−1}. The L₂(P)-norm of |π_q f − π_{q−1}f| is bounded by 2^{−q+1}. Apply Lemma 19.33 to find
Again this is bounded above by a multiple of the series Σ_{q=q₀+1}^∞ 2^{−q} √(log N̄_q).
To conclude the proof it suffices to consider the terms π_{q₀}f. Because |π_{q₀}f| ≤ F ≤ a(δ)√n ≤ √n a_{q₀} and P(π_{q₀}f)² ≤ δ² by assumption, another application of Lemma 19.33 yields
By the choice of q₀, this is bounded by a multiple of the first few terms of the series Σ_{q=q₀+1}^∞ 2^{−q} √(log N̄_q). ∎
19.35 Corollary. For any class ℱ of measurable functions with envelope function F,
The second term in the maximal inequality Lemma 19.34 results from a crude majorization in the first step of its proof. This bound can be improved by taking special properties of the class of functions ℱ into account, or by using different norms to measure the brackets. The following lemmas, which are used in Chapter 25, exemplify this.† The first uses the L₂(P)-norm but is limited to uniformly bounded classes; the second uses a stronger norm, which we call the "Bernstein norm," as it relates to a strengthening of Bernstein's inequality. Actually, this is not a true norm, but it can be used in the same way to measure the size of brackets. It is defined by
† For a proof of the following lemmas and further results, see Lemmas 3.4.2 and 3.4.3 and Chapter 2.14 in [146]. Also see [14], [15], and [51].
19.37 Lemma. For any class ℱ of measurable functions f : 𝒳 ↦ ℝ such that ‖f‖_{P,B} < δ for every f,

E*‖G_n‖_ℱ ≲ J_{[]}(δ, ℱ, ‖·‖_{P,B}) (1 + J_{[]}(δ, ℱ, ‖·‖_{P,B}) / (δ²√n)).
Instead of brackets, we may also use uniform covering numbers to obtain maximal inequalities. As is the case for the Glivenko-Cantelli and Donsker theorems, the inequality given by Corollary 19.35 has a complete uniform entropy counterpart. This appears to be untrue for the inequality given by Lemma 19.34, for it appears difficult to use the information that a class ℱ is contained in a small L₂(P)-ball directly in a uniform entropy maximal inequality.†
19.38 Lemma. For any suitably measurable class ℱ of measurable functions f : 𝒳 ↦ ℝ we have, with θ̂_n² = sup_{f∈ℱ} P_n f²/P_n F²,
Notes
The law of large numbers for the empirical distribution function was derived by Glivenko [59] and Cantelli [19] in the 1930s. The Kolmogorov-Smirnov and Cramér-von Mises statistics were introduced and studied in the same period. The limit distributions of these
statistics were obtained by direct methods. That these were the same as the distribution
of corresponding functions of the Brownian bridge was noted and proved by Doob before
Donsker [38] formalized the theory of weak convergence in the space of continuous func-
tions in 1952. Donsker's main examples were the empirical process on the real line, and
the partial sum process. Abstract empirical processes were studied more recently. The
bracketing central limit theorem presented here was obtained by Ossiander [111] and the uniform entropy central limit theorem by Pollard [116] and Kolčinskii [88]. In both cases these
were generalizations of earlier results by Dudley, who also was influential in developing
a theory of weak convergence that can deal with the measurability problems, which were
partly ignored by Donsker. The maximal inequality Lemma 19.34 was proved in [119].
The first Vapnik-Cervonenkis classes were considered in [147].
For further results on the classical empirical process, including an introduction to strong
approximations, see [134]. For the abstract empirical process, see [57], [117], [120] and
[146]. For connections with limit theorems for random elements with values in Banach
spaces, see [98].
PROBLEMS
1. Derive a formula for the covariance function of the Gaussian process that appears in the limit of the modified Kolmogorov-Smirnov statistic for testing normality.
t For a proof of the following lemma, see, for example, [120], or Theorem 2.14.1 in [146].
Functional Delta Method

φ(P + tH) − φ(P) = t φ′_P(H) + ··· + (1/m!) tᵐ φ_P^{(m)}(H) + o(tᵐ).
Substituting t = 1/√n and H = G_n, for G_n = √n(P_n − P) the empirical process of the observations, we obtain the von Mises expansion
Actually, because the empirical process G_n is dependent on n, it is not a legal choice for H under the assumed type of differentiability: There is no guarantee that the remainder is small. However, we make this our working hypothesis. This is reasonable, because the remainder has one factor 1/√n more, and the empirical process G_n shares at least one property with a fixed H: It is "bounded." Then the asymptotic distribution of φ(P_n) − φ(P) should be determined by the first nonzero term in the expansion, which is usually the first-order term φ′_P(G_n). A method to make our wishful thinking rigorous is discussed in the next section. Even in cases in which it is hard to make the differentiation operation rigorous, the von Mises expansion still has heuristic value. It may suggest the type of limiting behavior of φ(P_n) − φ(P), which can next be further investigated by ad hoc methods.
We discuss this in more detail for the case that m = 1. A first derivative typically gives a linear approximation to the original function. If, indeed, the map H ↦ φ′_P(H) is linear, then, writing P_n as the linear combination P_n = n⁻¹ Σ δ_{X_i} of the Dirac measures at the observations, we obtain

φ(P_n) − φ(P) ≈ (1/n) Σ_{i=1}^n φ′_P(δ_{X_i} − P).    (20.1)

Thus, the difference φ(P_n) − φ(P) behaves as an average of the independent random variables φ′_P(δ_{X_i} − P). If these variables have zero means and finite second moments, then a normal limit distribution of √n(φ(P_n) − φ(P)) may be expected. Here the zero mean ought to be automatic, because we may expect that

∫ φ′_P(δ_x − P) dP(x) = φ′_P(∫ (δ_x − P) dP(x)) = φ′_P(0) = 0.

The interchange of the order of integration and application of φ′_P is motivated by the linearity (and continuity) of this derivative operator.
The function x ↦ φ′_P(δ_x − P) is known as the influence function of the function φ. It can be computed as the ordinary derivative

φ′_P(δ_x − P) = (d/dt)|_{t=0} φ((1 − t)P + tδ_x).

The name "influence function" originated in the development of robust statistics. The function measures the change in the value φ(P) if an infinitesimally small part of P is replaced by a point mass at x. In robust statistics, functions and estimators with an unbounded influence function are suspect, because a small fraction of the observations would have too much influence on the estimator if their values were equal to an x where the influence function is large.
In many examples the derivative takes the form of an "expectation operator" φ′_P(H) = ∫ φ̇_P dH, for some function φ̇_P with ∫ φ̇_P dP = 0, at least for a subset of H. Then the influence function is precisely the function φ̇_P.
20.1 von Mises Calculus 293
20.2 Example (Mean). The sample mean is obtained as $\phi(\mathbb{P}_n)$ from the mean function $\phi(P) = \int s\,dP(s)$. The influence function is

$$\phi'_P(\delta_x - P) = \frac{d}{dt}\Big|_{t=0} \int s\,d\bigl[(1-t)P + t\delta_x\bigr](s) = x - \int s\,dP(s).$$

In this case, the approximation (20.1) is an identity, because the function is linear already. If the sample space is a Euclidean space, then the influence function is unbounded and hence the sample mean is not robust. □
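The derivative computation can be checked numerically. The following is a minimal sketch (our own, not from the text; it assumes the numpy library), which differentiates $t \mapsto \phi\bigl((1-t)P + t\delta_x\bigr)$ by a finite difference for the mean functional and compares with the closed form $x - \int s\,dP$:

```python
import numpy as np

# Sketch: the influence function of the mean functional phi(P) = int s dP(s)
# is x - int s dP.  Differentiate t |-> phi((1 - t)P + t*delta_x) at t = 0
# by a finite difference and compare with the formula.

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.0, size=100_000)  # stand-in for P

def phi_mixture(t, x):
    # mean of the contaminated measure (1 - t)P + t*delta_x
    return (1 - t) * sample.mean() + t * x

x = 5.0
t = 1e-6
influence_numeric = (phi_mixture(t, x) - phi_mixture(0.0, x)) / t
influence_formula = x - sample.mean()

print(influence_numeric, influence_formula)
```

Because the mean functional is linear in $P$, the finite difference recovers the formula up to floating-point rounding, at any contamination size $t$.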
20.3 Example (Wilcoxon). Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be a random sample from a bivariate distribution. Write $\mathbb{F}_n$ and $\mathbb{G}_n$ for the empirical distribution functions of the $X_i$ and $Y_j$, respectively, and consider the Mann-Whitney statistic

$$\int \mathbb{F}_n\,d\mathbb{G}_n = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n 1\{X_i \le Y_j\}.$$

This statistic corresponds to the function $\phi(F, G) = \int F\,dG$, which can be viewed as a function of two distribution functions, or also as a function of a bivariate distribution function with marginals $F$ and $G$. (We have assumed that the sample sizes of the two samples are equal, to fit the example into the previous discussion, which, for simplicity, is restricted to i.i.d. observations.) The influence function is

$$\phi'_{(F,G)}\bigl(\delta_x - F,\ \delta_y - G\bigr) = \int (\delta_x - F)\,dG + \int F\,d(\delta_y - G) = 1 - G_{-}(x) + F(y) - 2\int F\,dG. \qquad \Box$$
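As a small numerical illustration (ours, assuming numpy), the double-sum and empirical-integral forms of the Mann-Whitney statistic can be computed and compared:

```python
import numpy as np

# Sketch: the Mann-Whitney statistic int F_n dG_n = n^{-2} sum_i sum_j 1{X_i <= Y_j},
# computed both as a double sum and by averaging the empirical cdf of the X's
# over the Y's.  The two forms agree exactly.

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = rng.normal(loc=0.3, size=n)

# double-sum form
double_sum = np.mean(x[:, None] <= y[None, :])

# integral form: F_n evaluated at each Y_j, then averaged
Fn_at_y = np.searchsorted(np.sort(x), y, side="right") / n
integral_form = Fn_at_y.mean()

print(double_sum, integral_form)
```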
20.4 Example (Z-functions). For every $\theta$ in an open subset of $\mathbb{R}^k$, let $x \mapsto \psi_\theta(x)$ be a given, measurable map into $\mathbb{R}^k$. The corresponding Z-function assigns to a probability measure $P$ a zero $\phi(P)$ of the map $\theta \mapsto P\psi_\theta$. (Consider only $P$ for which a unique zero exists.) If applied to the empirical distribution, this yields a Z-estimator $\phi(\mathbb{P}_n)$.
Differentiating with respect to $t$ across the identity $\int \psi_{\phi(P_t)}\,dP_t = 0$, for $P_t = (1-t)P + t\delta_x$, and solving for $(d/dt)\phi(P_t)\big|_{t=0}$ yields the influence function of the Z-function. Informally, this is seen to be equal to

$$-\Bigl(\frac{\partial}{\partial\theta}\,P\psi_\theta\Big|_{\theta=\phi(P)}\Bigr)^{-1}\psi_{\phi(P)}(x).$$

In robust statistics we look for estimators with bounded influence functions. Because the influence function is, up to a constant, equal to $\psi_{\phi(P)}(x)$, this is easy to achieve with Z-estimators!

The Z-estimators are discussed at length in Chapter 5. The theorems discussed there give sufficient conditions for asymptotic normality, and an asymptotic expansion for $\sqrt{n}\bigl(\phi(\mathbb{P}_n) - \phi(P)\bigr)$. This is of the type (20.1) with the influence function as in the preceding display. □
20.5 Example (Quantiles). The $p$th quantile of a distribution function $F$ is, roughly, the number $\phi(F) = F^{-1}(p)$ such that $F\bigl(F^{-1}(p)\bigr) = p$. We set $F_t = (1-t)F + t\delta_x$, and differentiate with respect to $t$ the identity

$$F_t\bigl(F_t^{-1}(p)\bigr) = p.$$

This "identity" may actually be only an inequality for certain values of $p$, $t$, and $x$, but we do not worry about this. We find that

$$\phi'_F(\delta_x - F) = -\frac{1\{x \le F^{-1}(p)\} - p}{f\bigl(F^{-1}(p)\bigr)}.$$

The graph of this function is given in Figure 20.1 and has the following interpretation. Suppose the $p$th quantile has been computed for a large sample, but an additional observation $x$ is obtained. If $x$ is to the left of the $p$th quantile, then the $p$th quantile decreases; if $x$ is to the right, then the quantile increases. In both cases the rate of change is constant, irrespective of the location of $x$. Addition of an observation $x$ at the $p$th quantile has an unstable effect.
[Figure 20.1. Influence function of the $p$th quantile: a step function taking the value $-(1-p)/f(F^{-1}(p))$ to the left of $F^{-1}(p)$ and $p/f(F^{-1}(p))$ to the right.]
The von Mises calculus suggests that the sequence of empirical quantiles $\sqrt{n}\bigl(\mathbb{F}_n^{-1}(p) - F^{-1}(p)\bigr)$ is asymptotically normal with variance $\operatorname{var}_F \phi'_F(\delta_{X_1} - F) = p(1-p)/f \circ F^{-1}(p)^2$. In Chapter 21 this is proved rigorously by the delta method of the following section. Alternatively, a $p$th quantile may be viewed as an M-estimator, and we can apply the results of Chapter 5. □
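A quick Monte Carlo experiment (our own sketch, assuming numpy) supports the suggested variance formula for the median of the standard exponential distribution, where $F^{-1}(1/2) = \log 2$, $f(\log 2) = 1/2$, and hence $p(1-p)/f(F^{-1}(p))^2 = 1$:

```python
import numpy as np

# Sketch: the empirical median of n standard-exponential draws should have
# variance approximately p(1-p)/(n f(F^{-1}(p))^2) = 1/n for p = 1/2.

rng = np.random.default_rng(2)
p = 0.5
n, reps = 400, 2000
medians = np.array([np.quantile(rng.exponential(size=n), p) for _ in range(reps)])
mc_var = n * medians.var()              # Monte Carlo estimate of n * var
theory_var = p * (1 - p) / 0.5**2       # = 1.0

print(mc_var, theory_var)
```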
$= \phi''_P(0,\ \delta_y - P) = 0.$
If we delete the diagonal, then a V-statistic turns into a U-statistic and hence we can apply Theorem 12.10 to find the limit distribution of $n\bigl(\phi(\mathbb{P}_n) - \phi(P)\bigr)$. We expect that

$$n\bigl(\phi(\mathbb{P}_n) - \phi(P)\bigr) = \frac{2}{n}\sum_{i<j} h_P(X_i, X_j) + \frac{1}{n}\sum_{i=1}^n h_P(X_i, X_i) + o_P(1).$$

If the function $x \mapsto h_P(x, x)$ is $P$-integrable, then the second term on the right only contributes a constant to the limit distribution. If the function $(x, y) \mapsto h_P^2(x, y)$ is $(P \times P)$-integrable, then the first term on the right converges to an infinite linear combination of independent $\chi^2_1$-variables, according to Example 12.12.
20.6 Example (Cramér-von Mises). The Cramér-von Mises statistic is the function $\phi(\mathbb{F}_n)$ for $\phi(F) = \int (F - F_0)^2\,dF_0$ and a fixed cumulative distribution function $F_0$. By direct calculation,

$$\phi'_F(H) = 2\int (F - F_0)\,H\,dF_0, \qquad \phi''_F(H) = 2\int H^2\,dF_0.$$

Consequently, the first derivative vanishes at $F = F_0$ and the second derivative is equal to $\phi''_{F_0}(H) = 2\int H^2\,dF_0$. The von Mises calculus suggests the approximation

$$\phi(\mathbb{F}_n) - \phi(F_0) \approx \frac{1}{2}\,\frac{\phi''_{F_0}(\mathbb{G}_n)}{n} = \frac{1}{n}\int \mathbb{G}_n^2\,dF_0.$$
296 Functional Delta Method
This is certainly correct, because it is just the definition of the statistic. The preceding discussion is still of some interest in that it suggests that the limit distribution is nonnormal and can be obtained using the theory of V-statistics. Indeed, by squaring the sum that is hidden in $\mathbb{G}_n^2$, we see that

$$\int \mathbb{G}_n^2\,dF_0 = \frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n \int \bigl(1\{X_i \le s\} - F_0(s)\bigr)\bigl(1\{X_j \le s\} - F_0(s)\bigr)\,dF_0(s).$$

In Example 12.13 we used this representation to find that the sequence $n\,\phi(\mathbb{F}_n) \rightsquigarrow \frac{1}{6} + \sum_{j=1}^\infty j^{-2}\pi^{-2}(Z_j^2 - 1)$ for an i.i.d. sequence of standard normal variables $Z_1, Z_2, \ldots$, if the true distribution $F_0$ is continuous. □
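The nonnormal limit can be observed in simulation. The sketch below (ours, assuming numpy) uses a standard closed form for $n\,\phi(\mathbb{F}_n)$ under a continuous $F_0$ and compares the Monte Carlo mean with the limit mean $1/6 = \sum_j 1/(j^2\pi^2)$:

```python
import numpy as np

# Sketch: the Cramer-von Mises statistic n*phi(F_n) for uniform F_0 has the
# closed form 1/(12n) + sum_i (u_(i) - (2i-1)/(2n))^2.  Its mean should be
# close to the limit mean 1/6.

rng = np.random.default_rng(3)

def cvm_statistic(u):
    n = len(u)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((np.sort(u) - (2 * i - 1) / (2 * n)) ** 2)

stats = np.array([cvm_statistic(rng.uniform(size=200)) for _ in range(3000)])
print(stats.mean())   # close to 1/6
```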
For $\mathbb{E}$ the real line, this is precisely the differentiability as introduced in the preceding section. Gateaux differentiability is also called "directional differentiability," because for every possible direction $h$ in the domain the derivative value $\phi'_\theta(h)$ measures the direction of the infinitesimal change in the value of the function $\phi$. More formally, the $o(t)$ term in the previous displayed equation means that

$$\Bigl\|\frac{\phi(\theta + th) - \phi(\theta)}{t} - \phi'_\theta(h)\Bigr\| \to 0, \qquad \text{as } t \downarrow 0.$$

The suggestive notation $\phi'_\theta(h)$ for the "tangent vectors" encourages one to think of the directional derivative as a map $\phi'_\theta: \mathbb{D} \mapsto \mathbb{E}$, which approximates the difference map $\phi(\theta + h) - \phi(\theta): \mathbb{D} \mapsto \mathbb{E}$. It is usually included in the definition of Gateaux differentiability that this map $\phi'_\theta: \mathbb{D} \mapsto \mathbb{E}$ be linear and continuous.

However, Gateaux differentiability is too weak for the present purposes, and we need a stronger concept. A map $\phi: \mathbb{D}_\phi \mapsto \mathbb{E}$, defined on a subset $\mathbb{D}_\phi$ of a normed space $\mathbb{D}$ that contains $\theta$, is called Hadamard differentiable at $\theta$ if there exists a continuous, linear map $\phi'_\theta: \mathbb{D} \mapsto \mathbb{E}$ such that

$$\Bigl\|\frac{\phi(\theta + th_t) - \phi(\theta)}{t} - \phi'_\theta(h)\Bigr\| \to 0, \qquad \text{as } t \downarrow 0, \text{ for every } h_t \to h. \qquad (20.7)$$
20.8 Theorem (Delta method). Let $\mathbb{D}$ and $\mathbb{E}$ be normed linear spaces. Let $\phi: \mathbb{D}_\phi \subset \mathbb{D} \mapsto \mathbb{E}$ be Hadamard differentiable at $\theta$ tangentially to $\mathbb{D}_0$. Let $T_n: \Omega_n \mapsto \mathbb{D}_\phi$ be maps such that $r_n(T_n - \theta) \rightsquigarrow T$ for some sequence of numbers $r_n \to \infty$ and a random element $T$ that takes its values in $\mathbb{D}_0$. Then $r_n\bigl(\phi(T_n) - \phi(\theta)\bigr) \rightsquigarrow \phi'_\theta(T)$. If $\phi'_\theta$ is defined and continuous on the whole space $\mathbb{D}$, then we also have $r_n\bigl(\phi(T_n) - \phi(\theta)\bigr) = \phi'_\theta\bigl(r_n(T_n - \theta)\bigr) + o_P(1)$.
Proof. To prove that $r_n\bigl(\phi(T_n) - \phi(\theta)\bigr) \rightsquigarrow \phi'_\theta(T)$, define for each $n$ a map $g_n(h) = r_n\bigl(\phi(\theta + r_n^{-1}h) - \phi(\theta)\bigr)$ on the domain $\mathbb{D}_n = \{h: \theta + r_n^{-1}h \in \mathbb{D}_\phi\}$. By Hadamard differentiability, this sequence of maps satisfies $g_{n'}(h_{n'}) \to \phi'_\theta(h)$ for every subsequence $h_{n'} \to h \in \mathbb{D}_0$. Therefore, $g_n\bigl(r_n(T_n - \theta)\bigr) \rightsquigarrow \phi'_\theta(T)$ by the extended continuous-mapping theorem, Theorem 18.11, which is the first assertion.

The seemingly stronger last assertion of the theorem actually follows from this, if applied to the function $\psi = (\phi, \phi'_\theta): \mathbb{D} \mapsto \mathbb{E} \times \mathbb{E}$. This is Hadamard-differentiable at $\theta$ with derivative $\psi'_\theta = (\phi'_\theta, \phi'_\theta)$. Thus, by the preceding paragraph, $r_n\bigl(\psi(T_n) - \psi(\theta)\bigr)$ converges weakly to $\bigl(\phi'_\theta(T), \phi'_\theta(T)\bigr)$ in $\mathbb{E} \times \mathbb{E}$. By the continuous-mapping theorem, the difference $r_n\bigl(\phi(T_n) - \phi(\theta)\bigr) - \phi'_\theta\bigl(r_n(T_n - \theta)\bigr)$ converges weakly to $\phi'_\theta(T) - \phi'_\theta(T) = 0$. Weak convergence to a constant is equivalent to convergence in probability. ■
298 Functional Delta Method
Without the chain rule, Hadamard differentiability would not be as interesting. Consider maps $\phi: \mathbb{D} \mapsto \mathbb{E}$ and $\psi: \mathbb{E} \mapsto \mathbb{F}$ that are Hadamard-differentiable at $\theta$ and $\phi(\theta)$, respectively. Then the composed map $\psi \circ \phi: \mathbb{D} \mapsto \mathbb{F}$ is Hadamard-differentiable at $\theta$, and the derivative is the map obtained by composing the two derivative maps. (For Euclidean spaces this means that the derivative can be found through matrix multiplication of the two derivative matrices.) The attraction of the chain rule is that it allows a calculus of Hadamard-differentiable maps, in which differentiability of a complicated map can be established by decomposing it into a sequence of basic maps whose Hadamard differentiability is known or can be proved easily. This is analogous to the chain rule for real functions, which allows us, for instance, to see the differentiability of the map $x \mapsto \exp \cos \log(1 + x^2)$ at a glance.
20.9 Theorem (Chain rule). Let $\phi: \mathbb{D}_\phi \mapsto \mathbb{E}_\psi$ and $\psi: \mathbb{E}_\psi \mapsto \mathbb{F}$ be maps defined on subsets $\mathbb{D}_\phi$ and $\mathbb{E}_\psi$ of normed spaces $\mathbb{D}$ and $\mathbb{E}$, respectively. Let $\phi$ be Hadamard-differentiable at $\theta$ tangentially to $\mathbb{D}_0$ and let $\psi$ be Hadamard-differentiable at $\phi(\theta)$ tangentially to $\phi'_\theta(\mathbb{D}_0)$. Then $\psi \circ \phi: \mathbb{D}_\phi \mapsto \mathbb{F}$ is Hadamard-differentiable at $\theta$ tangentially to $\mathbb{D}_0$ with derivative $\psi'_{\phi(\theta)} \circ \phi'_\theta$.
20.10 Lemma. Let $\phi: [0, 1] \mapsto \mathbb{R}$ be twice continuously differentiable. Then the function $(F_1, F_2) \mapsto \int \phi(F_1)\,dF_2$ is Hadamard-differentiable from the domain $D[-\infty, \infty] \times BV_1[-\infty, \infty] \subset D[-\infty, \infty] \times D[-\infty, \infty]$ into $\mathbb{R}$ at every pair of functions of bounded variation $(F_1, F_2)$. The derivative is given by†

$$(h_1, h_2) \mapsto \phi(F_1)\,h_2\Big|_{-\infty}^{\infty} - \int h_{2-}\,d(\phi\circ F_1) + \int \phi'(F_1)\,h_1\,dF_2.$$

† We denote by $h_-$ the left-continuous version of a cadlag function $h$ and abbreviate $h\big|_a^b = h(b) - h(a)$.
20.3 Some Examples 299
Furthermore, the function $(F_1, F_2) \mapsto \int_{(-\infty,\cdot]} \phi(F_1)\,dF_2$ is Hadamard-differentiable as a map into $D[-\infty, \infty]$.
Proof. Let $h_{1t} \to h_1$ and $h_{2t} \to h_2$ in $D[-\infty, \infty]$ be such that $F_{2t} = F_2 + th_{2t}$ is a function of variation bounded by 1 for each $t$. Because $F_2$ is of bounded variation, it follows that $h_{2t}$ is of bounded variation for every $t$. Now, with $F_{1t} = F_1 + th_{1t}$,

$$\frac{1}{t}\Bigl(\int \phi(F_{1t})\,dF_{2t} - \int \phi(F_1)\,dF_2\Bigr) = \int \Bigl(\frac{\phi(F_{1t}) - \phi(F_1)}{t} - \phi'(F_1)\,h_1\Bigr)\,dF_{2t} + \int \phi(F_1)\,dh_{2t} + \int \phi'(F_1)\,h_1\,dF_{2t}.$$

By partial integration, the second term on the right can be rewritten as $\phi\circ F_1\,h_{2t}\big|_{-\infty}^{\infty} - \int h_{2t-}\,d(\phi\circ F_1)$. Under the assumption on $h_{2t}$, this converges to the first part of the derivative as given in the lemma. The first term is bounded above by $\bigl(\|\phi''\|_\infty\,t\,\|h_{1t}\|_\infty^2 + \|\phi'\|_\infty\,\|h_{1t} - h_1\|_\infty\bigr)\int d|F_{2t}|$. Because the measures $F_{2t}$ are of total variation at most 1 by assumption, this expression converges to zero. To analyze the third term on the right, take a grid $u_0 = -\infty < u_1 < \cdots < u_m = \infty$ such that the function $\phi' \circ F_1\,h_1$ varies less than a prescribed value $\varepsilon > 0$ on each interval $[u_{i-1}, u_i)$. Such a grid exists for every element of $D[-\infty, \infty]$ (Problem 18.6). Then

$$\Bigl|\int \phi'(F_1)\,h_1\,d(F_{2t} - F_2)\Bigr| \le 2\varepsilon + \Bigl|\sum_{i=1}^m \phi'(F_1)h_1(u_{i-1})\,(F_{2t} - F_2)\big|_{u_{i-1}}^{u_i}\Bigr|.$$

The first term is bounded by $\varepsilon\,O(1)$, in which the $\varepsilon$ can be made arbitrarily small by the choice of the partition. For each fixed partition, the second term converges to zero as $t \downarrow 0$. Hence the left side converges to zero as $t \downarrow 0$.

This proves the first assertion. The second assertion follows similarly. ■
20.11 Example (Wilcoxon). Let $\mathbb{F}_m$ and $\mathbb{G}_n$ be the empirical distribution functions of two independent random samples $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ from distribution functions $F$ and $G$, respectively. As usual, consider both $m$ and $n$ as indexed by a parameter $\nu$, let $N = m + n$, and assume that $m/N \to \lambda \in (0, 1)$ as $\nu \to \infty$. By Donsker's theorem and Slutsky's lemma,

$$\sqrt{N}\,(\mathbb{F}_m - F,\ \mathbb{G}_n - G) \rightsquigarrow \Bigl(\frac{\mathbb{G}_F}{\sqrt{\lambda}},\ \frac{\mathbb{G}_G}{\sqrt{1 - \lambda}}\Bigr),$$

in the space $D[-\infty, \infty] \times D[-\infty, \infty]$, for a pair of independent Brownian bridges $\mathbb{G}_F$ and $\mathbb{G}_G$. The preceding lemma together with the delta method imply that

$$\sqrt{N}\Bigl(\int \mathbb{F}_m\,d\mathbb{G}_n - \int F\,dG\Bigr) \rightsquigarrow \int \frac{\mathbb{G}_F}{\sqrt{\lambda}}\,dG - \int \frac{\mathbb{G}_{G-}}{\sqrt{1 - \lambda}}\,dF.$$

That the limit variable is normally distributed can be more easily seen by applying the delta method in its stronger form, which implies that the limit variable is the limit in distribution of the sequence

$$\sqrt{N}\Bigl(\int (\mathbb{F}_m - F)\,dG - \int (\mathbb{G}_n - G)_-\,dF\Bigr).$$

This can be rewritten as the difference of two sums of independent random variables, and next we can apply the central limit theorem for real variables. □
20.12 Example (Two-sample rank statistics). Let $\mathbb{H}_N$ be the empirical distribution function of a sample $X_1, \ldots, X_m, Y_1, \ldots, Y_n$ obtained by "pooling" two independent random samples from distributions $F$ and $G$, respectively. Let $R_{N1}, \ldots, R_{NN}$ be the ranks of the pooled sample and let $\mathbb{G}_n$ be the empirical distribution function of the second sample. If no observations are tied, then $N\mathbb{H}_N(Y_j)$ is the rank of $Y_j$ in the pooled sample. Thus,
$$\Lambda_F(t) = \int_{[0,t]} \frac{dF}{1 - F_-}.$$
In particular, if $F$ has a density $f$, then $\Lambda_F$ has a density $\lambda_F = f/(1 - F)$. If $F(t)$ gives the probability of "survival" of a person or object until time $t$, then $d\Lambda_F(t)$ can be interpreted as the probability of "instant death at time $t$, given survival until $t$." The hazard function is an important modeling tool in survival analysis.

The correspondence between distribution functions and hazard functions is one-to-one. The cumulative distribution function can be explicitly recovered from the cumulative hazard function as the product integral of $-\Lambda$ (see the proof of Lemma 25.74),

$$1 - F_\Lambda(t) = \prod_{0 < s \le t}\bigl(1 - \Lambda\{s\}\bigr)\,e^{-\Lambda^c(t)}, \qquad (20.13)$$

for $\Lambda^c$ the continuous part of $\Lambda$.
20.14 Lemma. Let $\mathbb{D}_\phi$ be the set of all nondecreasing cadlag functions $F: [0, \tau] \mapsto \mathbb{R}$ with $F(0) = 0$ and $1 - F(\tau) \ge \varepsilon > 0$ for some $\varepsilon > 0$, and let $\mathbb{E}_\psi$ be the set of all nondecreasing cadlag functions $\Lambda: [0, \tau] \mapsto \mathbb{R}$ with $\Lambda(0) = 0$ and $\Lambda(\tau) \le M$ for some $M \in \mathbb{R}$.
(i) The map $\phi: \mathbb{D}_\phi \subset D[0, \tau] \mapsto D[0, \tau]$ defined by $\phi(F) = \Lambda_F$ is Hadamard differentiable.
(ii) The map $\psi: \mathbb{E}_\psi \subset D[0, \tau] \mapsto D[0, \tau]$ defined by $\psi(\Lambda) = F_\Lambda$ is Hadamard differentiable.
Proof. Part (i) follows from the chain rule and the Hadamard differentiability of each of the three maps in the decomposition

$$F \mapsto (F,\ 1 - F_-) \mapsto \Bigl(F,\ \frac{1}{1 - F_-}\Bigr) \mapsto \int_{[0,\cdot]} \frac{1}{1 - F_-}\,dF.$$

The cumulative hazard function can also be written

$$\Lambda(t) = \int_{[0,t]} \frac{1}{1 - F_-}\,dF = \int_{[0,t]} \frac{1}{1 - H_-}\,dH_1,$$
for $1 - H = (1 - F)(1 - G)$ and $dH_1 = (1 - G_-)\,dF$, and every choice of distribution function $G$. If we assume that the censoring times $C_1, \ldots, C_n$ are a random sample from $G$ and are independent of the failure times $T_i$, then $H$ is precisely the distribution function of $X_i$ and $H_1$ is a "subdistribution function," $H_1(t) = P(X_i \le t,\ \Delta_i = 1)$, for $\Delta_i$ the indicator that $X_i$ is uncensored. The estimator obtained by replacing $H$ and $H_1$ by their empirical versions is

$$\hat\Lambda_n(t) = \int_{[0,t]} \frac{1}{1 - \mathbb{H}_{n-}}\,d\,\mathbb{H}_{1n}.$$
Because they are empirical distribution functions, the pair $(\mathbb{H}_n, \mathbb{H}_{1n})$ is asymptotically normal in the space $D[-\infty, \infty] \times D[-\infty, \infty]$. The easiest way to see this is to consider them as continuous transformations of the (bivariate) empirical distribution function of the pairs $(X_i, \Delta_i)$. The Nelson-Aalen estimator is constructed through the maps

$$(\mathbb{H}_n, \mathbb{H}_{1n}) \mapsto \Bigl(\frac{1}{1 - \mathbb{H}_{n-}},\ \mathbb{H}_{1n}\Bigr) \mapsto \int_{[0,\cdot]} \frac{1}{1 - \mathbb{H}_{n-}}\,d\,\mathbb{H}_{1n}.$$

These maps are Hadamard differentiable on suitable domains, so that the asymptotic normality of the Nelson-Aalen estimator $\hat\Lambda_n(t)$ follows for every $t$ such that $H(t) < 1$, and even as a process in $D[0, \tau]$ for every $\tau$ such that $H(\tau) < 1$.
If we apply the product integral given in (20.13) to the Nelson-Aalen estimator, then we obtain an estimator $1 - \hat F_n$ for the distribution function, known as the product limit estimator or Kaplan-Meier estimator. For a discrete hazard function the product integral is an ordinary product over the jumps, by definition, and it can be seen that

$$1 - \hat F_n(t) = \prod_{i: X_i \le t} \frac{\#(j: X_j \ge X_i) - \Delta_i}{\#(j: X_j \ge X_i)} = \prod_{i: X_{(i)} \le t} \Bigl(\frac{n - i}{n - i + 1}\Bigr)^{\Delta_{(i)}}.$$

This estimator sequence is asymptotically normal by the Hadamard differentiability of the product integral. □
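The displayed product formula translates directly into code. The following minimal sketch (our own; numpy assumed, with exponential failure and censoring times as illustrative choices) compares the Kaplan-Meier estimate of the survival function at $t = 1$ with the true value $e^{-1}$:

```python
import numpy as np

# Sketch: product-limit (Kaplan-Meier) estimator via the no-ties formula
# 1 - F_n(t) = prod_{i: X_(i) <= t} ((n - i)/(n - i + 1))^{Delta_(i)}.

rng = np.random.default_rng(4)
n = 1000
failure = rng.exponential(scale=1.0, size=n)   # T_i
censor = rng.exponential(scale=2.0, size=n)    # C_i ~ G
x = np.minimum(failure, censor)                # X_i = T_i ^ C_i
delta = (failure <= censor).astype(float)      # Delta_i = 1{T_i <= C_i}

order = np.argsort(x)
d_sorted = delta[order]
i = np.arange(1, n + 1)
factors = ((n - i) / (n - i + 1.0)) ** d_sorted
survival = np.cumprod(factors)                 # 1 - F_hat at the ordered X's

t0 = 1.0
km_at_t0 = survival[np.searchsorted(np.sort(x), t0, side="right") - 1]
print(km_at_t0, np.exp(-t0))                   # Kaplan-Meier vs true survival
```

Each factor leaves the estimate unchanged at a censored observation ($\Delta_{(i)} = 0$) and multiplies in $(n-i)/(n-i+1)$ at an uncensored one, exactly as the formula prescribes.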
Notes
A calculus of "differentiable statistical functions" was proposed by von Mises [104]. Von Mises considered functions $\phi(\mathbb{F}_n)$ of the empirical distribution function (which he calls the "repartition of the real quantities $x_1, \ldots, x_n$") as in the first section of this chapter. Following Volterra, he calls $\phi$ $m$ times differentiable at $F$ if the first $m$ derivatives of the map $t \mapsto \phi(F + tH)$ at $t = 0$ exist and have representations of the form

$$\frac{d^k}{dt^k}\,\phi(F + tH)\Big|_{t=0} = \int\cdots\int \phi_k(x_1, \ldots, x_k)\,dH(x_1)\cdots dH(x_k).$$
Euclidean spaces. This is convenient, because many functions of statistical interest, such as zeros, inverses, or integrals, are defined only on irregularly shaped subsets of a normed space, which, besides being a linear space, should be chosen big enough to support the limit distribution of $T_n$.
PROBLEMS
1. Let $\phi(P) = \int\int h(u, v)\,dP(u)\,dP(v)$ for a fixed given function $h$. The corresponding estimator $\phi(\mathbb{P}_n)$ is known as a V-statistic. Find the influence function.
2. Find the influence function of the function $\phi(F) = \int a(F_1 + F_2)\,dF_2$ if $F_1$ and $F_2$ are the marginals of the bivariate distribution function $F$, and $a$ is a fixed, smooth function. Write out $\phi(\mathbb{F}_n)$. What asymptotic variance do you expect?
3. Find the influence function of the map $F \mapsto \int_{[0,t]} (1 - F_-)^{-1}\,dF$ (the cumulative hazard function).
4. Show that a map $\phi: \mathbb{D} \mapsto \mathbb{E}$ is Hadamard differentiable at a point $\theta$ if and only if for every compact set $K \subset \mathbb{D}$ the expression in (20.7) converges to zero uniformly in $h \in K$ as $t \to 0$.
5. Show that the symmetrization map $(\theta, F) \mapsto \frac{1}{2}\bigl(F(t) + 1 - F(2\theta - t)\bigr)$ is (tangentially) Hadamard differentiable under appropriate conditions.
6. Let $g: [a, b] \mapsto \mathbb{R}$ be a continuously differentiable function. Show that the map $z \mapsto g \circ z$ with domain the functions $z: T \mapsto [a, b]$ contained in $\ell^\infty(T)$ is Hadamard differentiable. What does this imply for the function $z \mapsto 1/z$?
7. Show that the map $F \mapsto \int_{[a,b]} s\,dF(s)$ is Hadamard differentiable from the domain of all distribution functions to $\mathbb{R}$, for each pair of finite numbers $a$ and $b$. View the distribution functions as a subset of $D[-\infty, \infty]$ equipped with the supremum norm. What if $a$ or $b$ are infinite?
8. Find the first- and second-order derivative of the function $\phi(F) = \int (F - F_0)^2\,dF$ at $F = F_0$. What limit distribution do you expect for $\phi(\mathbb{F}_n)$?
21
Quantiles and Order Statistics
It is a left-continuous function with range equal to the support of F and hence is often
unbounded. The following lemma records some useful properties.
Proof. The proofs of the inequalities in (i) through (iv) are best given by a picture. The equalities (v) follow from (ii) and (iv) and the monotonicity of $F$ and $F^{-1}$. If $p = F(x)$ for some $x$, then, by (ii), $p \le F \circ F^{-1}(p) = F \circ F^{-1} \circ F(x) = F(x) = p$, by (iv). This proves the first statement in (v); the second is immediate from the inequalities in (ii) and (iii). Statement (vi) follows from (i) and the definition of $(F \circ G)^{-1}$. ■
304
21.2 Asymptotic Normality 305
Consequences of (ii) and (iv) are that $F \circ F^{-1}(p) = p$ on $(0, 1)$ if and only if $F$ is continuous (i.e., has range $[0, 1]$), and $F^{-1} \circ F(x) = x$ on $\mathbb{R}$ if and only if $F$ is strictly increasing (i.e., has no "flats"). Thus $F^{-1}$ is a proper inverse if and only if $F$ is both continuous and strictly increasing, as one would expect.

By (i) the random variable $F^{-1}(U)$ has distribution function $F$ if $U$ is uniformly distributed on $[0, 1]$. This is called the quantile transformation. On the other hand, by (i) and (ii) the variable $F(X)$ is uniformly distributed on $[0, 1]$ if and only if $X$ has a continuous distribution function $F$. This is called the probability integral transformation.
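Both transformations are easy to illustrate by simulation. A minimal sketch (ours, assuming numpy), using the exponential distribution $F(x) = 1 - e^{-x}$:

```python
import numpy as np

# Sketch: the quantile transformation F^{-1}(U) ~ F, and the probability
# integral transformation F(X) ~ Uniform[0,1], for F(x) = 1 - exp(-x).

rng = np.random.default_rng(5)
u = rng.uniform(size=200_000)

x = -np.log(1 - u)          # F^{-1}(u): exponentially distributed
print(x.mean())             # close to the exponential mean 1

v = 1 - np.exp(-x)          # F(X): recovers uniform variables
print(v.mean(), v.var())    # close to 1/2 and 1/12
```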
A sequence of quantile functions is defined to converge weakly to a limit quantile function, denoted $F_n^{-1} \rightsquigarrow F^{-1}$, if and only if $F_n^{-1}(t) \to F^{-1}(t)$ at every $t$ where $F^{-1}$ is continuous. This type of convergence is not only analogous in form to the weak convergence of distribution functions, it is the same.

21.2 Lemma. For any sequence of cumulative distribution functions, $F_n^{-1} \rightsquigarrow F^{-1}$ if and only if $F_n \rightsquigarrow F$.
Proof. Let $U$ be uniformly distributed on $[0, 1]$. Because $F^{-1}$ has at most countably many discontinuity points, $F_n^{-1} \rightsquigarrow F^{-1}$ implies that $F_n^{-1}(U) \to F^{-1}(U)$ almost surely. Consequently, $F_n^{-1}(U)$ converges in law to $F^{-1}(U)$, which is exactly $F_n \rightsquigarrow F$ by the quantile transformation.

For a proof of the converse, let $V$ be a normally distributed random variable. If $F_n \rightsquigarrow F$, then $F_n(V) \to F(V)$ almost surely, because convergence can fail only at discontinuity points of $F$. Thus $\Phi\bigl(F_n^{-1}(t)\bigr) = P\bigl(F_n(V) < t\bigr)$ (by (i) of the preceding lemma) converges to $P\bigl(F(V) < t\bigr) = \Phi\bigl(F^{-1}(t)\bigr)$ at every $t$ at which the limit function is continuous. This includes every $t$ at which $F^{-1}$ is continuous. By the continuity of $\Phi^{-1}$, $F_n^{-1}(t) \to F^{-1}(t)$ for every such $t$. ■
An advantage of this method is that it is not restricted to empirical quantiles but applies to the quantiles of any estimator of the distribution function.

For a nondecreasing function $F \in D[a, b]$, $[a, b] \subset [-\infty, \infty]$, and a fixed $p \in \mathbb{R}$, let $\phi(F) \in [a, b]$ be an arbitrary point in $[a, b]$ such that

$$F\bigl(\phi(F)-\bigr) \le p \le F\bigl(\phi(F)\bigr).$$

The natural domain $\mathbb{D}_\phi$ of the resulting map $\phi$ is the set of all nondecreasing $F$ such that there exists a solution to the pair of inequalities. If there exists more than one solution, then the precise choice of $\phi(F)$ is irrelevant. In particular, $\phi(F)$ may be taken equal to the $p$th quantile $F^{-1}(p)$.
21.3 Lemma. Let $F \in \mathbb{D}_\phi$ be differentiable at a point $\xi_p \in (a, b)$ such that $F(\xi_p) = p$, with positive derivative. Then $\phi: \mathbb{D}_\phi \subset D[a, b] \mapsto \mathbb{R}$ is Hadamard-differentiable at $F$ tangentially to the set of functions $h \in D[a, b]$ that are continuous at $\xi_p$, with derivative $\phi'_F(h) = -h(\xi_p)/F'(\xi_p)$.
Proof. Let $h_t \to h$ uniformly on $[a, b]$ for a function $h$ that is continuous at $\xi_p$. Write $\xi_{pt}$ for $\phi(F + th_t)$. By the definition of $\phi$, for every $\varepsilon_t > 0$,

$$(F + th_t)(\xi_{pt} - \varepsilon_t) \le p \le (F + th_t)(\xi_{pt}).$$

Choose $\varepsilon_t$ positive and such that $\varepsilon_t = o(t)$. Because the sequence $h_t$ converges uniformly to a bounded function, it is uniformly bounded. Conclude that $F(\xi_{pt} - \varepsilon_t) + O(t) \le p \le F(\xi_{pt}) + O(t)$. By assumption, the function $F$ is monotone and bounded away from $p$ outside any interval $(\xi_p - \varepsilon, \xi_p + \varepsilon)$ around $\xi_p$. To satisfy the preceding inequalities the numbers $\xi_{pt}$ must be to the right of $\xi_p - \varepsilon$ eventually, and the numbers $\xi_{pt} - \varepsilon_t$ must be to the left of $\xi_p + \varepsilon$ eventually. In other words, $\xi_{pt} \to \xi_p$.

By the uniform convergence of $h_t$ and the continuity of the limit, $h_t(\xi_{pt} - \varepsilon_t) \to h(\xi_p)$ for every $\varepsilon_t \to 0$. Using this and Taylor's formula on the preceding display yields

$$p = F(\xi_p) + F'(\xi_p)(\xi_{pt} - \xi_p) + t\,h(\xi_p) + o(t) + o(\xi_{pt} - \xi_p).$$

Conclude first that $\xi_{pt} - \xi_p = O(t)$. Next, use this to replace the $o(\xi_{pt} - \xi_p)$ terms in the display by $o(t)$ terms and conclude that $(\xi_{pt} - \xi_p)/t \to -(h/F')(\xi_p)$. ■
Instead of a single quantile we can consider the quantile function $F \mapsto \bigl(F^{-1}(p)\bigr)_{p_1 < p < p_2}$, for fixed numbers $0 \le p_1 < p_2 \le 1$. Because any quantile function is bounded on an interval $[p_1, p_2]$ strictly contained in $(0, 1)$, we may hope that a quantile estimator converges in distribution in $\ell^\infty(p_1, p_2)$ for such an interval. The quantile function of a distribution with compact support is bounded on the whole interval $(0, 1)$, and then we may hope to strengthen the result to weak convergence in $\ell^\infty(0, 1)$.

Given an interval $[a, b] \subset \mathbb{R}$, let $\mathbb{D}_1$ be the set of all restrictions of distribution functions on $\mathbb{R}$ to $[a, b]$, and let $\mathbb{D}_2$ be the subset of $\mathbb{D}_1$ of distribution functions of measures that give mass 1 to $(a, b]$.
21.4 Lemma.
(i) Let $0 < p_1 < p_2 < 1$, and let $F$ be continuously differentiable on the interval $[a, b] = [F^{-1}(p_1) - \varepsilon,\ F^{-1}(p_2) + \varepsilon]$ for some $\varepsilon > 0$, with strictly positive derivative $f$. Then the inverse map $G \mapsto G^{-1}$, as a map $\mathbb{D}_1 \subset D[a, b] \mapsto \ell^\infty[p_1, p_2]$, is Hadamard differentiable at $F$ tangentially to $C[a, b]$.
(ii) Let $F$ have compact support $[a, b]$ and be continuously differentiable on its support with strictly positive derivative $f$. Then the inverse map $G \mapsto G^{-1}$, as a map $\mathbb{D}_2 \subset D[a, b] \mapsto \ell^\infty(0, 1)$, is Hadamard differentiable at $F$ tangentially to $C[a, b]$.
In both cases the derivative is the map $h \mapsto -(h/f) \circ F^{-1}$.
Proof. It suffices to make the proof of the preceding lemma uniform in $p$. We use the same notation.

(i). Because the function $F$ has a positive density, it is strictly increasing on an interval $[\xi_{p_1'}, \xi_{p_2'}]$ that strictly contains $[\xi_{p_1}, \xi_{p_2}]$. Then on $[p_1', p_2']$ the quantile function $F^{-1}$ is the ordinary inverse of $F$ and is (uniformly) continuous and strictly increasing. Let $h_t \to h$ uniformly on $[\xi_{p_1'}, \xi_{p_2'}]$ for a continuous function $h$. By the proof of the preceding lemma, $\xi_{p_i t} \to \xi_{p_i}$, and hence every $\xi_{pt}$ for $p_1 \le p \le p_2$ is contained in $[\xi_{p_1'}, \xi_{p_2'}]$ eventually. The remainder of the proof is the same as the proof of the preceding lemma.

(ii). Let $h_t \to h$ uniformly in $D[a, b]$, where $h$ is continuous and $F + th_t$ is contained in $\mathbb{D}_2$ for all $t$. Abbreviate $F^{-1}(p)$ and $(F + th_t)^{-1}(p)$ to $\xi_p$ and $\xi_{pt}$, respectively. Because $F$ and $F + th_t$ are concentrated on $(a, b]$ by assumption, we have $a < \xi_{pt}, \xi_p \le b$ for all $0 < p < 1$. Thus the numbers $\varepsilon_{pt} = t^2 \wedge (\xi_{pt} - a)$ are positive, whence, by definition,

$$(F + th_t)(\xi_{pt} - \varepsilon_{pt}) \le p \le (F + th_t)(\xi_{pt}).$$

The $o(t)$ terms are uniform in $0 < p < 1$. The far left side and the far right side are $O(t)$; the expression in the middle is bounded above and below by a constant times $|\xi_{pt} - \xi_p|$. Conclude that $|\xi_{pt} - \xi_p| = O(t)$, uniformly in $p$. Next, the lemma follows by the uniform differentiability of $F$. ■
21.5 Corollary. Fix $0 < p < 1$. If $F$ is differentiable at $F^{-1}(p)$ with positive derivative $f\bigl(F^{-1}(p)\bigr)$, then

$$\sqrt{n}\bigl(\mathbb{F}_n^{-1}(p) - F^{-1}(p)\bigr) = -\frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{1\{X_i \le F^{-1}(p)\} - p}{f\bigl(F^{-1}(p)\bigr)} + o_P(1).$$
308 Quantiles and Order Statistics
Consequently, the sequence $\sqrt{n}\bigl(\mathbb{F}_n^{-1}(p) - F^{-1}(p)\bigr)$ is asymptotically normal with mean 0 and variance $p(1-p)/f^2\bigl(F^{-1}(p)\bigr)$. Furthermore, if $F$ satisfies the conditions (i) or (ii) of the preceding lemma, then $\sqrt{n}\bigl(\mathbb{F}_n^{-1} - F^{-1}\bigr)$ converges in distribution in $\ell^\infty[p_1, p_2]$ or $\ell^\infty(0, 1)$, respectively, to the process $\mathbb{G}_\lambda/f\bigl(F^{-1}(p)\bigr)$, where $\mathbb{G}_\lambda$ is a standard Brownian bridge.
Proof. By Theorem 19.3, the empirical process $\mathbb{G}_{n,F} = \sqrt{n}(\mathbb{F}_n - F)$ converges in distribution in $D[-\infty, \infty]$ to an $F$-Brownian bridge process $\mathbb{G}_F = \mathbb{G}_\lambda \circ F$. The sample paths of the limit process are continuous at the points at which $F$ is continuous. By Lemma 21.3, the quantile function $F \mapsto F^{-1}(p)$ is Hadamard-differentiable tangentially to the range of the limit process. By the functional delta method, the sequence $\sqrt{n}\bigl(\mathbb{F}_n^{-1}(p) - F^{-1}(p)\bigr)$ is asymptotically equivalent to the derivative of the quantile function evaluated at $\mathbb{G}_{n,F}$, that is, to $-\mathbb{G}_{n,F}\bigl(F^{-1}(p)\bigr)/f\bigl(F^{-1}(p)\bigr)$. This is the first assertion. Next, the asymptotic normality of the sequence $\sqrt{n}\bigl(\mathbb{F}_n^{-1}(p) - F^{-1}(p)\bigr)$ follows by the central limit theorem. The convergence of the quantile process follows similarly, this time using Lemma 21.4. ■
21.6 Example. The uniform distribution function has derivative 1 on its compact support. Thus, the uniform empirical quantile process converges weakly in $\ell^\infty(0, 1)$. The limiting process is a standard Brownian bridge.

The normal and Cauchy distribution functions have continuous derivatives that are bounded away from zero on any compact interval. Thus, the normal and Cauchy empirical quantile processes converge in $\ell^\infty[p_1, p_2]$, for every $0 < p_1 < p_2 < 1$. □
The empirical quantile function at a point is equal to an order statistic of the sample. In estimating a quantile, we could also use the order statistics directly, not necessarily in the way that $\mathbb{F}_n^{-1}$ picks them. For the $k_n$-th order statistic $X_{n(k_n)}$ to be a consistent estimator for $F^{-1}(p)$, we need minimally that $k_n/n \to p$ as $n \to \infty$. For mean-zero asymptotic normality, we also need that $k_n/n \to p$ faster than $1/\sqrt{n}$, which is necessary to ensure that $X_{n(k_n)}$ and $\mathbb{F}_n^{-1}(p)$ are asymptotically equivalent. This still allows considerable freedom for choosing $k_n$.
21.7 Lemma. Let $F$ be differentiable at $F^{-1}(p)$ with positive derivative, and let $k_n/n = p + c/\sqrt{n} + o(1/\sqrt{n})$. Then

$$\sqrt{n}\bigl(X_{n(k_n)} - \mathbb{F}_n^{-1}(p)\bigr) \xrightarrow{P} \frac{c}{f\bigl(F^{-1}(p)\bigr)}.$$
Proof. First assume that $F$ is the uniform distribution function. Denote the observations by $U_i$, rather than $X_i$. Define a function $g_n: \ell^\infty(0, 1) \mapsto \mathbb{R}$ by $g_n(z) = z(k_n/n) - z(p)$. Then $g_n(z_n) \to z(p) - z(p) = 0$ whenever $z_n \to z$ for a function $z$ that is continuous at $p$. Because the uniform quantile process $\sqrt{n}(\mathbb{G}_n^{-1} - G^{-1})$ converges in distribution in $\ell^\infty(0, 1)$, the extended continuous-mapping theorem, Theorem 18.11, yields $g_n\bigl(\sqrt{n}(\mathbb{G}_n^{-1} - G^{-1})\bigr) = \sqrt{n}\bigl(U_{n(k_n)} - \mathbb{G}_n^{-1}(p)\bigr) - \sqrt{n}(k_n/n - p) \rightsquigarrow 0$. This is the result in the uniform case.

A sample from a general distribution function $F$ can be generated as $F^{-1}(U_i)$, by the quantile transformation. Then $\sqrt{n}\bigl(X_{n(k_n)} - \mathbb{F}_n^{-1}(p)\bigr)$ is equal to

$$\sqrt{n}\Bigl(F^{-1}\bigl(U_{n(k_n)}\bigr) - F^{-1}\bigl(\mathbb{G}_n^{-1}(p)\bigr)\Bigr).$$

Apply the delta method to the two terms to see that $f\bigl(F^{-1}(p)\bigr)$ times their difference is asymptotically equivalent to $\sqrt{n}\bigl(U_{n(k_n)} - p\bigr) - \sqrt{n}\bigl(\mathbb{G}_n^{-1}(p) - p\bigr)$. ■
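The lemma can be checked by simulation. In the sketch below (our own, assuming numpy), $F$ is the standard exponential, $p = 1/2$, and $k_n/n = p + c/\sqrt{n}$ with $c = 1$, so that the limit $c/f(F^{-1}(p))$ equals 2:

```python
import numpy as np

# Sketch: for k_n/n = p + c/sqrt(n), the k_n-th order statistic and the
# empirical quantile differ by roughly c/(sqrt(n) f(F^{-1}(p))) (Lemma 21.7).
# For the standard exponential and p = 1/2, f(F^{-1}(p)) = 1/2.

rng = np.random.default_rng(6)
p, c, n, reps = 0.5, 1.0, 10_000, 2000
kn = int(round(n * (p + c / np.sqrt(n))))

diffs = np.empty(reps)
for r in range(reps):
    s = np.sort(rng.exponential(size=n))
    order_stat = s[kn - 1]                      # k_n-th order statistic
    emp_quantile = s[int(np.ceil(n * p)) - 1]   # F_n^{-1}(p)
    diffs[r] = np.sqrt(n) * (order_stat - emp_quantile)

print(diffs.mean())   # near c / f(F^{-1}(p)) = 2
```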
$$U_{n(k)},\ U_{n(l)} = \mathbb{G}_n^{-1}(p) \mp z_\alpha\sqrt{\frac{p(1-p)}{n}} + o_P\Bigl(\frac{1}{\sqrt{n}}\Bigr).$$

Thus the event $U_{n(k)} < p \le U_{n(l)}$ is asymptotically equivalent to the event $\sqrt{n}\,\bigl|\mathbb{G}_n^{-1}(p) - p\bigr| \le z_\alpha\sqrt{p(1-p)}$. Its probability converges to $1 - 2\alpha$.

An alternative is to use the asymptotic normality of the empirical quantiles $\mathbb{F}_n^{-1}$, but this has the unattractive feature of having to estimate the density $f\bigl(F^{-1}(p)\bigr)$, because this appears in the denominator of the asymptotic variance. If using the distribution-free method, we do not even have to assume that the density exists. □
The expression on the left is known as the standardized empirical difference process. "Standardized" refers to the leading factor $f(\xi_p)$. That a sum is called a difference is curious, but stems from the fact that minus the second term is approximately equal to the first term. The identity shows an interesting symmetry between the empirical distribution and quantile processes, particularly in the case that $F$ is uniform, when $f(\xi_p) = 1$ and $\xi_p = p$. The result that $R_n(p) \xrightarrow{P} 0$ can be refined considerably. If $F$ is twice-differentiable at $\xi_p$ with positive first derivative, then, by the Bahadur-Kiefer theorems,

$$\limsup_{n\to\infty}\,\frac{n^{1/4}}{(\log\log n)^{3/4}}\,\bigl|R_n(p)\bigr| = \Bigl[\frac{32}{27}\,p(1-p)\Bigr]^{1/4}, \qquad \text{a.s.},$$

$$n^{1/4}R_n(p) \rightsquigarrow T, \qquad P(T \le x) = 2\int_0^\infty \Phi\Bigl(\frac{x\sqrt{y}}{\sqrt{p(1-p)}}\Bigr)\,\phi(y)\,dy.$$

The right side in the last display is a distribution function as a function of the argument $x$. Thus, the magnitude of the empirical difference process is $O_P(n^{-1/4})$, with the rate of its fluctuations being equal to $n^{-1/4}(\log\log n)^{3/4}$. Under some regularity conditions on $F$, which are satisfied by, for instance, the uniform, the normal, the exponential, and the logistic distributions, versions of the preceding results are also valid in supremum norm,

$$\limsup_{n\to\infty}\,\frac{n^{1/4}}{(\log n)^{1/2}(2\log\log n)^{1/4}}\,\|R_n\|_\infty = \frac{1}{\sqrt{2}}, \qquad \text{a.s.},$$

$$\frac{n^{1/4}}{(\log n)^{1/2}}\,\|R_n\|_\infty \rightsquigarrow \sqrt{\|Z_\lambda\|_\infty}.$$

Here $Z_\lambda$ is a standard Brownian motion indexed by the interval $[0, 1]$.
$m_F + m_0$ and/or $m_F - m_0$. Then the map $\phi: D[-\infty, \infty] \mapsto \mathbb{R}$, with as domain the distribution functions, is Hadamard-differentiable at $F$, tangentially to the set of functions that are continuous both at $m_F$ and on neighborhoods of $m_F + m_0$ and $m_F - m_0$. The derivative $\phi'_F(H)$ is given by

$$\frac{H(m_F)}{f(m_F)}\,\frac{f(m_F + m_0) - f(m_F - m_0)}{f(m_F + m_0) + f(m_F - m_0)} - \frac{H(m_F + m_0) - H(m_F - m_0)}{f(m_F + m_0) + f(m_F - m_0)}.$$
Proof. Define the maps $\phi_i$ as indicated previously.

By Lemma 21.3, the map $\phi_1: D[-\infty, \infty] \mapsto \mathbb{R} \times D[-\infty, \infty]$ is Hadamard-differentiable at $F$ tangentially to the set of functions $H$ that are continuous at $m_F$.

The map $\phi_2: \mathbb{R} \times D[-\infty, \infty] \mapsto D[m_0 - \varepsilon, m_0 + \varepsilon]$ is Hadamard-differentiable at the point $(m_F, F)$ tangentially to the set of points $(g, H)$ such that $H$ is continuous on the intervals $[m_F \pm m_0 - 2\varepsilon, m_F \pm m_0 + 2\varepsilon]$, for sufficiently small $\varepsilon > 0$. This follows because, if $a_t \to a$ and $H_t \to H$ uniformly,

The $F$-Brownian bridge process $\mathbb{G}_F$ has sample paths that are continuous everywhere that $F$ is continuous. Under the conditions of the lemma, they are continuous at the point $m_F$ and in neighborhoods of the points $m_F + m_0$ and $m_F - m_0$. Thus, in view of the lemma and the delta method, the sequence $\sqrt{n}\bigl(\phi(\mathbb{F}_n) - \phi(F)\bigr)$ converges in distribution to the variable $\phi'_F(\mathbb{G}_F)$.
21.10 Example (Symmetric F). If $F$ has a density that is symmetric about 0, then its median $m_F$ is 0 and the median absolute deviation $m_0$ is equal to $F^{-1}(3/4)$. Then the first term in the definition of the derivative vanishes, and the derivative $\phi'_F(\mathbb{G}_F)$ at the $F$-Brownian bridge reduces to $-\bigl(\mathbb{G}_\lambda(3/4) - \mathbb{G}_\lambda(1/4)\bigr)/2f\bigl(F^{-1}(3/4)\bigr)$ for a standard Brownian bridge $\mathbb{G}_\lambda$. Then the asymptotic variance of $\sqrt{n}(\mathrm{MAD}_n - m_0)$ is equal to $(1/16)/f \circ F^{-1}(3/4)^2$. □
21.11 Example (Normal distribution). If F is equal to the normal distribution with mean zero and variance σ², then m_F = 0 and m₀ = σΦ⁻¹(3/4). We find an asymptotic variance (σ²/16) φ(Φ⁻¹(3/4))⁻², for φ the standard normal density. As an estimator for the standard deviation σ, we use the estimator MAD_n/Φ⁻¹(3/4), and as an estimator for σ² the square of this. By the delta method, the latter estimator has asymptotic variance equal to (1/4)σ⁴ φ(Φ⁻¹(3/4))⁻² Φ⁻¹(3/4)⁻², which is approximately equal to 5.44σ⁴. The relative efficiency, relative to the sample variance, is approximately equal to 37%, and hence we should not use this estimator without a good reason. □
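The constants in this example are easy to verify numerically. The following sketch (Python standard library only; variable names are our own) recomputes the 5.44σ⁴ constant and the 37% relative efficiency for σ = 1:

```python
from statistics import NormalDist

# Numerical check of Example 21.11 with sigma = 1.  The asymptotic
# variance of (MAD_n / Phi^{-1}(3/4))^2 at the normal is
# (1/4) / (phi(c)^2 * c^2), with c = Phi^{-1}(3/4) and phi the
# standard normal density.
norm = NormalDist()
c = norm.inv_cdf(0.75)                       # Phi^{-1}(3/4) ~ 0.6745
avar_mad_sq = 0.25 / (norm.pdf(c) ** 2 * c ** 2)

# The sample variance has asymptotic variance 2*sigma^4 at the normal.
rel_eff = 2.0 / avar_mad_sq

print(round(avar_mad_sq, 2))  # -> 5.44
print(round(rel_eff, 2))      # -> 0.37
```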
21.12 Lemma. For any sequence of numbers x_n and any τ ∈ [0, ∞], we have P(X_{n(n)} ≤ x_n) → e^{−τ} if and only if n(1 − F(x_n)) → τ.

In view of the lemma we can find "interesting limits" for the probabilities P(X_{n(n)} ≤ x_n) only for sequences x_n such that 1 − F(x_n) = O(1/n). Depending on F this may mean that x_n is bounded or converges to infinity.
Suppose that we wish to find constants a_n and b_n > 0 such that b_n⁻¹(X_{n(n)} − a_n) converges in distribution to a nontrivial limit. Then we must choose a_n and b_n such that 1 − F(a_n + b_n x) = O(1/n) for a nontrivial set of x. Depending on F such constants may or may not exist. It is a bit surprising that the set of possible limit distributions is extremely small.†
21.13 Theorem (Extremal types). If b_n⁻¹(X_{n(n)} − a_n) ⇝ G for a nondegenerate cumulative distribution function G, then G belongs to the location-scale family of a distribution of one of the following forms:
(i) e^{−e^{−x}}, with support ℝ;
(ii) e^{−1/x^α}, with support [0, ∞) and α > 0;
(iii) e^{−(−x)^α}, with support (−∞, 0] and α > 0.
21.14 Example (Uniform). If the distribution has finite support [0, 1] with 1 − F(t) = (1 − t)^α, then n(1 − F(1 + n^{−1/α}x)) → (−x)^α for every x ≤ 0. In view of Lemma 21.12, the sequence n^{1/α}(X_{n(n)} − 1) converges in distribution to a limit of type (iii). The uniform distribution is the special case with α = 1, for which the limit distribution is the negative of an exponential distribution. □
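A small Monte Carlo experiment illustrates this limit for the standard uniform distribution, where n(1 − X_{n(n)}) should be approximately standard exponential. The sample sizes below are arbitrary choices of ours:

```python
import random

# Simulate n * (1 - max of n uniforms); by Example 21.14 (alpha = 1)
# this is approximately Exp(1) for large n.
random.seed(0)
n, reps = 1000, 2000
draws = [n * (1.0 - max(random.random() for _ in range(n)))
         for _ in range(reps)]

mean_d = sum(draws) / reps                        # Exp(1) has mean 1
frac = sum(1 for d in draws if d <= 1.0) / reps   # P(Exp(1) <= 1) ~ 0.632
print(round(mean_d, 2), round(frac, 2))
```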
t For a proof of the following theorem, see [66] or Theorem 1.4.2 in [90].
21.15 Example (Pareto). The survival function of the Pareto distribution satisfies 1 − F(t) = (μ/t)^α for t ≥ μ. Thus n(1 − F(n^{1/α}μx)) → 1/x^α for every x > 0. In view of Lemma 21.12, the sequence n^{−1/α}X_{n(n)}/μ converges in distribution to a limit of type (ii). □
21.16 Example (Normal). For the normal distribution the calculations are similar, but more delicate. We choose
\[
a_n = \sqrt{2\log n} - \frac{\log\log n + \log 4\pi}{2\sqrt{2\log n}}, \qquad b_n = \frac{1}{\sqrt{2\log n}}.
\]
Using Mills' ratio, which asserts that 1 − Φ(t) ∼ φ(t)/t as t → ∞, it is straightforward to see that n(1 − Φ(a_n + b_n x)) → e^{−x} for every x. In view of Lemma 21.12, the sequence √(2 log n)(X_{n(n)} − a_n) converges in distribution to a limit of type (i). □
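The convergence n(1 − Φ(a_n + b_n x)) → e^{−x} can be checked numerically. In the sketch below the value of n is an arbitrary choice of ours; the agreement is only rough, since the convergence is at logarithmic rates:

```python
from math import exp, log, pi, sqrt
from statistics import NormalDist

norm = NormalDist()
n = 10 ** 6                      # arbitrary; the limit is as n -> infinity
r = sqrt(2 * log(n))
a_n = r - (log(log(n)) + log(4 * pi)) / (2 * r)
b_n = 1 / r

# n * P(X > a_n + b_n * x) for a standard normal X should be near e^{-x}.
approx = {x: n * (1 - norm.cdf(a_n + b_n * x)) for x in (0.0, 1.0)}
for x, v in approx.items():
    print(x, round(v, 3), round(exp(-x), 3))
```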
21.17 Theorem. There exist constants a_n and b_n such that the sequence b_n⁻¹(X_{n(n)} − a_n) converges in distribution if and only if, as t ↑ τ_F (the upper endpoint of F), one of the following holds:
(i) there exists a strictly positive function g on ℝ such that (1 − F(t + g(t)x))/(1 − F(t)) → e^{−x}, for every x ∈ ℝ;
(ii) τ_F = ∞ and (1 − F(tx))/(1 − F(t)) → 1/x^α, for every x > 0;
(iii) τ_F < ∞ and (1 − F(τ_F − (τ_F − t)x))/(1 − F(t)) → x^α, for every x > 0.
The constants (a_n, b_n) can be taken equal to (u_n, g(u_n)), (0, u_n), and (τ_F, τ_F − u_n), respectively, for u_n = F⁻¹(1 − 1/n).
Proof. We only give the proof for the "only if" part, which follows the same lines as the preceding examples. In each of the three cases, n(1 − F(u_n)) → 1. To see this it suffices to show that the jump F(u_n) − F(u_n−) is o(1/n). In case (i) this follows because, for every x < 0, the jump is smaller than F(u_n) − F(u_n + g(u_n)x), which is of the order (1 − F(u_n))(e^{−x} − 1) ≤ (1/n)(e^{−x} − 1). The right side can be made smaller than ε(1/n) for any ε > 0, by choosing x close to 0. In case (ii), we can bound the jump at u_n by F(u_n) − F(xu_n) for every x < 1, which is of the order (1 − F(u_n))(1/x^α − 1) ≤ (1/n)(1/x^α − 1). In case (iii) we argue similarly.

We conclude the proof by applying Lemma 21.12. For instance, in case (i) we have n(1 − F(u_n + g(u_n)x)) ∼ n(1 − F(u_n)) e^{−x} → e^{−x} for every x, and the result follows. The argument under the assumptions (ii) or (iii) is similar. ■
21.18 Theorem. If b_n⁻¹(X_{n(n)} − a_n) ⇝ G, then b_n⁻¹(X_{n(n−k)} − a_n) ⇝ H for the distribution function H(x) = G(x) Σ_{i=0}^{k} (−log G(x))^i / i!.
Proof. If p_n = 1 − F(a_n + b_n x), then np_n → −log G(x) for every x at which G is continuous (all x), by Lemma 21.12. Furthermore,
By the same, but more complicated, arguments, the sample extremes can be seen to
converge jointly in distribution also, but we omit a discussion.
Any order statistic depends, by its definition, on all observations. However, asymptotically, central and extreme order statistics depend on the observations in orthogonal ways and become stochastically independent. One way to prove this is to note that central order statistics are asymptotically equivalent to means, and averages and extreme order statistics are asymptotically independent, which is a result of interest on its own.
Proof. Let U_n = n^{−1/2} Σ_{i=1}^{n−1} g(X_{n(i)}) and V_n = b_n⁻¹(X_{n(n)} − a_n). Because Fg² < ∞, it follows that max_{1≤i≤n} |g(X_i)| = o_P(√n). Hence n^{−1/2}|g(X_{n(n)})| → 0 in probability, whence the distance between (G_n g, V_n) and (U_n, V_n) converges to zero in probability. It suffices to show that (U_n, V_n) ⇝ (U, V). Suppose that we can show that, for every u,
By taking linear combinations, we readily see from the preceding lemma that the empirical process G_n and b_n⁻¹(X_{n(n)} − a_n), if they converge, are asymptotically independent as well. This independence carries over onto statistics whose asymptotic distribution can be derived from the empirical process by the delta method, including central order statistics X_{n(k_n)} with k_n/n = p + O(1/√n), because these are asymptotically equivalent to averages.
Notes
For more results concerning the empirical quantile function, the books [28] and [134] are
good starting points. For results on extreme order statistics, see [66] or the book [90].
PROBLEMS
1. Suppose that F_n → F uniformly. Does this imply that F_n⁻¹ → F⁻¹ uniformly or pointwise? Give a counterexample.
2. Show that the asymptotic lengths of the two types of asymptotic confidence intervals for a quantile, discussed in Example 21.8, are within o_P(1/√n). Assume that the asymptotic variance of the sample quantile (involving 1/f∘F⁻¹(p)) can be estimated consistently.
3. Find the limit distribution of the median absolute deviation from the mean, med_{1≤i≤n} |X_i − X̄_n|.
4. Find the limit distribution of the pth quantile of the absolute deviation from the median.
5. Prove that X̄_n and X_{n(n−1)} are asymptotically independent.
22
L-Statistics
22.1 Introduction
Let X_{n(1)}, ..., X_{n(n)} be the order statistics of a sample of real-valued random variables. A linear combination of (transformed) order statistics, or L-statistic, is a statistic of the form
\[
\sum_{i=1}^{n} c_{ni}\, a(X_{n(i)}).
\]
The coefficients c_{ni} are a triangular array of constants and a is some fixed function. This "score function" can without much loss of generality be taken equal to the identity function, for an L-statistic with monotone function a can be viewed as a linear combination of the order statistics of the variables a(X₁), ..., a(X_n), and an L-statistic with a function a of bounded variation can be dealt with similarly, by splitting the L-statistic into two parts.
22.1 Example (Trimmed and Winsorized means). The simplest example of an L-statistic is the sample mean. More interesting are the α-trimmed mean†
\[
\frac{1}{n - 2\lfloor \alpha n\rfloor} \sum_{i=\lfloor \alpha n\rfloor + 1}^{n - \lfloor \alpha n\rfloor} X_{n(i)},
\]
and the α-Winsorized mean
\[
\frac{1}{n}\Bigl[\lfloor \alpha n\rfloor X_{n(\lfloor \alpha n\rfloor)} + \sum_{i=\lfloor \alpha n\rfloor + 1}^{n - \lfloor \alpha n\rfloor} X_{n(i)} + \lfloor \alpha n\rfloor X_{n(n - \lfloor \alpha n\rfloor + 1)}\Bigr].
\]
† The notation ⌊x⌋ is used for the greatest integer that is less than or equal to x. Also, ⌈x⌉ denotes the smallest integer greater than or equal to x. For a natural number n and a real number 0 ≤ x ≤ n one has ⌊n − x⌋ = n − ⌈x⌉ and ⌈n − x⌉ = n − ⌊x⌋.
[Figure 22.1: four panels, labeled Cauchy, Laplace, normal, and logistic, each plotting the asymptotic variance against the trimming proportion α over the range 0.0 to 0.5.]
Figure 22.1. Asymptotic variance of the α-trimmed mean of a sample from a distribution F as a function of α, for four distributions F.
The α-trimmed mean is the average of the middle (1 − 2α)th fraction of the observations; the α-Winsorized mean replaces the αth fractions of smallest and largest data by X_{n(⌊αn⌋)} and X_{n(n−⌊αn⌋+1)}, respectively, and next takes the average. Both estimators were already used in the early days of statistics as location estimators in situations in which the data were suspected to contain outliers. Their properties were studied systematically in the context of robust estimation in the 1960s and 1970s. The estimators were shown to have good properties in situations in which the data follow a heavier-tailed distribution than the normal one. Figure 22.1 shows the asymptotic variances of the trimmed means as a function of α for four distributions. (A formula for the asymptotic variance is given in Example 22.11.) The four graphs suggest that 10% to 15% trimming may give an improvement over the sample mean in some cases and does not cost much even for the normal distribution. □
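As a concrete illustration of the two formulas in Example 22.1 (function names and data are our own), both estimators can be computed in a few lines; note how the single gross outlier is discarded or capped:

```python
from math import floor

def trimmed_mean(xs, alpha):
    # alpha-trimmed mean: drop the floor(alpha*n) smallest and largest
    # observations and average the rest.
    n, k = len(xs), floor(alpha * len(xs))
    xs = sorted(xs)
    return sum(xs[k:n - k]) / (n - 2 * k)

def winsorized_mean(xs, alpha):
    # alpha-Winsorized mean, following the display in Example 22.1:
    # weight floor(alpha*n) on X_(floor(alpha*n)) and X_(n-floor(alpha*n)+1).
    n, k = len(xs), floor(alpha * len(xs))
    xs = sorted(xs)
    return (k * xs[k - 1] + sum(xs[k:n - k]) + k * xs[n - k]) / n

data = [1, 2, 3, 4, 5, 6, 7, 8, 20, 100]     # one gross outlier
print(trimmed_mean(data, 0.2))               # averages X_(3),...,X_(8): 5.5
print(winsorized_mean(data, 0.2))            # caps the extremes: 7.7
```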
22.2 Example (Ranges). Two estimators of dispersion are the interquartile range X_{n(⌈3n/4⌉)} − X_{n(⌈n/4⌉)} and the range X_{n(n)} − X_{n(1)}. Of these, the range does not have a normal limit distribution and is not within the scope of the results of this chapter. □
We present two methods to prove the asymptotic normality of L-statistics. The first method is based on the Hájek projection; the second uses the delta method. The second method is preferable in that it applies to more general statistics, but it necessitates the study of empirical processes and does not cover the simplest L-statistic: the sample mean.
\[
E\bigl(X_{n(i)}\mid X_k = x\bigr) = -\int_x^{\infty}\bigl(P(X_{n-1(i)} > y) - P(X_{n-1(i-1)} > y)\bigr)\,dy + E X_{n-1(i)}.
\]
The second expression is valid for x < 0 as well, as can be seen by a similar argument. Because X_{n−1(i−1)} ≤ X_{n−1(i)}, the difference between the two probabilities in the last integral is equal to the probability of the event {X_{n−1(i−1)} ≤ y < X_{n−1(i)}}. This is precisely the probability that a binomial(n − 1, F(y))-variable is equal to i − 1. If we write this probability as B_{n−1,F(y)}(i − 1), then the Hájek projection X̂_{n(i)} of X_{n(i)} satisfies, with 𝔽_n the empirical distribution function of X₁, ..., X_n,
\[
\hat X_{n(i)} - E \hat X_{n(i)} = -\sum_{k=1}^{n} \int_{X_k}^{\infty} B_{n-1,F(y)}(i-1)\,dy + c_n
\]
for a constant c_n.
22.3 Theorem. Suppose that EX₁² < ∞ and that c_{ni} = φ(i/(n + 1)) for a bounded function φ that is continuous at F(y) for Lebesgue almost-every y. Then the sequence n^{−1/2}(T_n − ET_n) is asymptotically normal with mean zero and variance σ²(φ, F).
\[
c_n(y) = \sum_{i=1}^{n} c_{ni}\, B_{n-1,F(y)}(i-1) = E\,\phi\Bigl(\frac{B_n + 1}{n + 1}\Bigr),
\]
for B_n binomially distributed with parameters (n − 1, F(y)). By the law of large numbers, (B_n + 1)/(n + 1) → F(y) almost surely. Because φ is bounded, c_n(y) → c(y) := φ(F(y)) for every y such that φ is continuous at F(y), by the dominated-convergence theorem. By assumption, this includes almost every y.
By Theorem 11.2, the sequence n^{−1/2}(T_n − T̂_n) converges in second mean to zero if the variances of n^{−1/2}T_n and n^{−1/2}T̂_n converge to the same number. Because n^{−1/2}(T̂_n − ET̂_n) = −∫ G_n(y) c_n(y) dy, the second variance is easily computed to be
\[
\frac{1}{n}\operatorname{var}\hat T_n = \iint \bigl(F(x\wedge y) - F(x)F(y)\bigr)\,c_n(x)\,c_n(y)\,dx\,dy.
\]
\[
\frac{1}{n}\operatorname{var} T_n = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n} c_{ni}c_{nj}\operatorname{cov}\bigl(X_{n(i)}, X_{n(j)}\bigr) = \iint R_n(x, y)\,dx\,dy,
\]
where, because cov(X, Y) = ∬ cov(1{X ≤ x}, 1{Y ≤ y}) dx dy for any pair of variables (X, Y),
\[
R_n(x, y) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n} \phi\Bigl(\frac{i}{n+1}\Bigr)\phi\Bigl(\frac{j}{n+1}\Bigr)\operatorname{cov}\bigl(1\{X_{n(i)} \le x\},\, 1\{X_{n(j)} \le y\}\bigr).
\]
Because the order statistics are positively correlated, all covariances in the double sum are nonnegative. Furthermore,
\[
\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n} \operatorname{cov}\bigl(1\{X_{n(i)} \le x\}, 1\{X_{n(j)} \le y\}\bigr) = \operatorname{cov}\bigl(G_n(x), G_n(y)\bigr) = F(x\wedge y) - F(x)F(y).
\]
For pairs (i, j) such that i ≈ nF(x) and j ≈ nF(y), the coefficient of the covariance is approximately c(x)c(y), by the continuity of φ. The covariances corresponding to other pairs (i, j) are negligible. Indeed, for i ≥ nF(x) + nε_n,

by Hoeffding's inequality.† Thus, because φ is bounded, the terms with i ≥ nF(x) + nε_n contribute exponentially little if ε_n → 0 not too fast (e.g., ε_n² = n^{−1/2}). A similar argument applies to the terms with i ≤ nF(x) − nε_n or |j − nF(y)| ≥ nε_n. Conclude that, for every (x, y) such that φ is continuous at both F(x) and F(y),
Finally, we apply the dominated-convergence theorem to see that the double integral of this expression, which is equal to n^{−1} var T_n, converges to σ²(φ, F).
This concludes the proof that T_n and T̂_n are asymptotically equivalent. To show that the sequence n^{−1/2}(T_n − ET_n) is asymptotically normal, define S_n = −Σ_{k=1}^n ∫_{X_k}^∞ c(y) dy, so that n^{−1/2}(S_n − ES_n) = −∫ G_n(y) c(y) dy. Then, by the same arguments as before, n^{−1} var(S_n − T̂_n) → 0. Furthermore, the sequence n^{−1/2}(S_n − ES_n) is asymptotically normal by the central limit theorem. ■
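The identity c_n(y) = Eφ((B_n + 1)/(n + 1)) and the convergence c_n(y) → φ(F(y)) used in the proof can be checked numerically. In the sketch below the score function φ and the value p = F(y) are arbitrary choices of ours:

```python
from math import comb

def binom_pmf(k, n, p):
    if k < 0 or k > n:
        return 0.0
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def c_n(phi, p, n):
    # c_n(y) = sum_i phi(i/(n+1)) * B_{n-1,p}(i-1), with p playing the
    # role of F(y); equivalently E phi((B_n + 1)/(n + 1)) for
    # B_n ~ binomial(n - 1, p).
    return sum(phi(i / (n + 1)) * binom_pmf(i - 1, n - 1, p)
               for i in range(1, n + 1))

phi = lambda u: u * (1 - u)   # a bounded, continuous score (our choice)
p = 0.3
values = {n: c_n(phi, p, n) for n in (10, 100, 1000)}
for n, v in values.items():
    print(n, round(v, 4))     # approaches phi(p) = 0.21 as n grows
```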
The empirical quantile function satisfies
\[
\mathbb{F}_n^{-1}(s) = X_{n(i)} \qquad \text{for } \frac{i-1}{n} < s \le \frac{i}{n}.
\]
Consequently, an L-statistic can be expressed in the empirical distribution function as well.
Given a fixed function a and a fixed signed measure K on (0, 1),‡ consider the function φ(F) = ∫ a∘F⁻¹ dK.
View φ as a map from the set of distribution functions into ℝ. Clearly,
\[
\phi(\mathbb{F}_n) = \sum_{i=1}^{n} K\Bigl(\frac{i-1}{n}, \frac{i}{n}\Bigr]\, a(X_{n(i)}). \tag{22.4}
\]
The right side is an L-statistic with coefficients c_{ni} = K((i − 1)/n, i/n]. Not all possible arrays of coefficients c_{ni} can be "generated" through a measure K in this manner. However, most L-statistics of interest are almost of the form (22.4), so that not much generality is lost by assuming this structure. An advantage is simplicity in the formulation of the asymptotic properties of the statistics, which can be derived with the help of the von Mises method. More importantly, the function φ(F) can also be applied to other estimators besides 𝔽_n. The results of this section yield their asymptotic normality in general.
22.5 Example. The α-trimmed mean corresponds to the uniform distribution K on the interval (α, 1 − α) and a the identity function. More precisely, the L-statistic generated by
† See, for example, the appendix of [117]. This inequality gives more than needed. For instance, it also works to apply Markov's inequality for fourth moments.
‡ A signed measure is a difference K = K₁ − K₂ of two finite measures K₁ and K₂.
this measure is
\[
\frac{1}{1 - 2\alpha}\int_{\alpha}^{1-\alpha} \mathbb{F}_n^{-1}(s)\,ds
= \frac{1}{n - 2\alpha n}\Bigl[(\lceil \alpha n\rceil - \alpha n)\, X_{n(\lceil \alpha n\rceil)}
+ \sum_{i=\lceil \alpha n\rceil + 1}^{n - \lceil \alpha n\rceil} X_{n(i)}
+ (\lceil \alpha n\rceil - \alpha n)\, X_{n(n - \lceil \alpha n\rceil + 1)}\Bigr].
\]
Except for the slightly different weight factor and the treatment of the two extremes in the averages, this agrees with the α-trimmed mean as introduced before. Because X_{n(k_n)} converges in probability to F⁻¹(p) if k_n/n → p, and (n − 2⌊αn⌋)/(n − 2αn) = 1 + O(1/n), the difference between the two versions of the trimmed mean can be seen to be O_P(1/n). For the purpose of this chapter this is negligible.
The α-Winsorized mean corresponds to the measure K that is the sum of Lebesgue measure on (α, 1 − α) and the discrete measure with pointmasses of size α at each of the points α and 1 − α. Again, the difference between the estimator generated by this K and the Winsorized mean is negligible.

The interquartile range corresponds to the discrete, signed measure K that has pointmasses of sizes −1 and 1 at the points 1/4 and 3/4, respectively. □
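As a sketch of the correspondence in Example 22.5 (function and variable names are our own), the coefficients c_ni generated by the rescaled uniform measure reproduce the explicit trimmed-mean weights:

```python
from math import ceil

def trimmed_weights(n, alpha):
    # c_ni = K((i-1)/n, i/n] for K the uniform distribution on
    # (alpha, 1 - alpha) with density 1/(1 - 2*alpha), cf. Example 22.5.
    lo, hi, scale = alpha, 1 - alpha, 1 / (1 - 2 * alpha)
    out = []
    for i in range(1, n + 1):
        overlap = max(0.0, min(i / n, hi) - max((i - 1) / n, lo))
        out.append(scale * overlap)
    return out

n, alpha = 10, 0.25
w = trimmed_weights(n, alpha)
k = ceil(alpha * n)
# The boundary order statistics X_(k) and X_(n-k+1) get the fractional
# weight (ceil(alpha*n) - alpha*n)/(n - 2*alpha*n); the middle ones 1/(n - 2*alpha*n).
print([round(x, 2) for x in w])
```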
\[
\frac{d}{dt}\int_0^1 a(F_t^{-1})\,dK = \int_0^1 a'(F_t^{-1})\Bigl[\frac{d}{dt} F_t^{-1}\Bigr]\,dK.
\]
Here the expression within square brackets, evaluated at t = 0, is the influence function of the quantile function, which is derived in Example 20.5. Substituting the representation given there, we see that the influence function of the L-function φ(F) = ∫ a(F⁻¹) dK takes the form
\[
\phi'_F(\delta_x - F) = -\int_0^1 a'(F^{-1}(u))\,\frac{1_{[x,\infty)}(F^{-1}(u)) - u}{f(F^{-1}(u))}\,dK(u) \tag{22.6}
\]
\[
= -\int a'(y)\,\bigl(1_{[x,\infty)}(y) - F(y)\bigr)\,dK\circ F(y).
\]
The second equality follows by (a generalization of) the quantile transformation.

An alternative derivation of the influence function starts with rewriting φ(F) in the form
\[
\phi(F) = \int_{(0,\infty)} \bar K\circ F_-\,da - \int_{(-\infty,0]} K\circ F_-\,da. \tag{22.7}
\]
Here K̄∘F(x) = K∘F(∞) − K∘F(x), and the partial integration can be justified for a function a of bounded variation with a(0) = 0 (see Problem 22.6; the assumption that a(0) = 0 simplifies the formula and is made for convenience). This formula for φ(F) suggests as influence function
\[
\phi'_F(\delta_x - F) = -\int K'\circ F_-(y)\,\bigl(1_{[x,\infty)}(y) - F(y)\bigr)\,da(y). \tag{22.8}
\]
Under appropriate conditions each of the two formulas (22.6) and (22.8) for the influence function is valid. However, already for the defining expressions to make sense, very different conditions are needed. Informally, equation (22.6) requires that a and F be differentiable, with a positive derivative for F; equation (22.8) requires that K be differentiable. For this reason both expressions are valuable, and they yield nonoverlapping results.

Corresponding to the two derivations of the influence function, there are two basic approaches towards proving asymptotic normality of L-statistics by the delta method, valid under different sets of conditions. Roughly, one approach requires that F and a be smooth, and the other that K be smooth.
The simplest method is to view the L-statistic as a function of the empirical quantile function, through the map 𝔽_n⁻¹ ↦ ∫ a∘𝔽_n⁻¹ dK, and next apply the functional delta method to the map Q ↦ ∫ a∘Q dK. The asymptotic normality of the empirical quantile function is obtained in Chapter 21.
22.9 Lemma. Let a: ℝ → ℝ be continuously differentiable with a bounded derivative. Let K be a signed measure on the interval (α, β) ⊂ (0, 1). Then the map Q ↦ ∫ a(Q) dK from ℓ^∞(α, β) to ℝ is Hadamard-differentiable at every Q. The derivative is the map H ↦ ∫ a′(Q) H dK.
If the underlying distribution has unbounded support, then its quantile function is unbounded on the domain (0, 1), and no estimator can converge in ℓ^∞(0, 1). Then the preceding lemma can apply only to generating measures K with support (α, β) strictly within (0, 1). Fortunately, such generating measures are the most interesting ones, as they yield bounded influence functions and hence robust L-statistics.

A more serious limitation of using the preceding lemma is that it could require unnecessary smoothness conditions on the distribution of the observations. For instance, the empirical quantile process converges in distribution in ℓ^∞(α, β) only if the underlying distribution has a positive density between its α- and β-quantiles. This is true for most standard distributions, but unnecessary for the asymptotic normality of empirical L-statistics generated by smooth measures K. Thus we present a second lemma that applies to smooth measures K and does not require that F be smooth. Let D_F[−∞, ∞] be the set of all distribution functions.
22.10 Lemma. Let a: ℝ → ℝ be of bounded variation on bounded intervals with ∫(a₊ + a₋) d|K∘F| < ∞ and a(0) = 0. Let K be a signed measure on (0, 1) whose distribution function K is differentiable at F(x) for a-almost every x and satisfies |K(u + h) − K(u)| ≤ M(u)|h| for every sufficiently small |h|, and some function M such that ∫ M(F_−) d|a| < ∞. Then the map F ↦ ∫ a∘F⁻¹ dK from D_F[−∞, ∞] ⊂ D[−∞, ∞] to ℝ is Hadamard-differentiable at F, with derivative H ↦ −∫(K′∘F_−) H da.
Proof. First rewrite the function in the form (22.7). Suppose that H_t → H uniformly and set F_t = F + tH_t. By the continuity of K, (K∘F)_− = K(F_−). Because K∘F(∞) = K(1) for all F, the difference φ(F_t) − φ(F) can be rewritten as −∫(K∘F_{t−} − K∘F_−) da. Consider the integral

The integrand converges a-almost everywhere to zero and is bounded by M(F_−)(‖H_t‖_∞ + ‖H‖_∞) ≤ M(F_−)(2‖H‖_∞ + 1), for small t. Thus, the lemma follows by the dominated-convergence theorem. ■
22.11 Example (Trimmed mean). The cumulative distribution function K of the uniform distribution on (α, 1 − α) is uniformly Lipschitz and fails to be differentiable only at the points α and 1 − α. Thus, the trimmed-mean function is Hadamard-differentiable at every F such that the set {x: F(x) = α or F(x) = 1 − α} has Lebesgue measure zero. (We assume that α > 0.) In other words, F should not have flats at height α or 1 − α. For such F the trimmed mean is asymptotically normal, with asymptotic influence function
\[
x \mapsto -\frac{1}{1-2\alpha}\int_{F^{-1}(\alpha)}^{F^{-1}(1-\alpha)} \bigl(1\{x \le y\} - F(y)\bigr)\,dy
\]
(see (22.8)), and asymptotic variance
\[
\frac{1}{(1-2\alpha)^2}\int_{F^{-1}(\alpha)}^{F^{-1}(1-\alpha)}\int_{F^{-1}(\alpha)}^{F^{-1}(1-\alpha)} \bigl(F(x\wedge y) - F(x)F(y)\bigr)\,dx\,dy.
\]
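The double integral above can be evaluated numerically for a concrete F. The following sketch (a crude midpoint rule; the grid size and the values of α are arbitrary choices of ours) does so for the standard normal and reproduces the qualitative behavior of the normal panel in Figure 22.1:

```python
from statistics import NormalDist

def trimmed_var(alpha, m=300):
    # Midpoint-rule approximation of the asymptotic variance
    # (1-2a)^{-2} * int int (F(x^y) - F(x)F(y)) dx dy over
    # [F^{-1}(a), F^{-1}(1-a)]^2, for F the standard normal cdf.
    norm = NormalDist()
    lo, hi = norm.inv_cdf(alpha), norm.inv_cdf(1 - alpha)
    h = (hi - lo) / m
    F = [norm.cdf(lo + (j + 0.5) * h) for j in range(m)]
    total = 0.0
    for i, Fi in enumerate(F):
        for j, Fj in enumerate(F):
            total += F[i if i < j else j] - Fi * Fj   # F(x^y) - F(x)F(y)
    return total * h * h / (1 - 2 * alpha) ** 2

v05, v25 = trimmed_var(0.05), trimmed_var(0.25)
# At the normal, the variance increases with alpha and stays a bit above
# 1 (the sample-mean value); compare the normal panel of Figure 22.1.
print(round(v05, 3), round(v25, 3))
```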
22.12 Example (Winsorized mean). The generating measure of the Winsorized mean is the sum of a discrete measure on the two points α and 1 − α, and Lebesgue measure on the interval (α, 1 − α). The Winsorized mean itself can be decomposed correspondingly. Suppose that the underlying distribution function F has a positive derivative at the points F⁻¹(α) and F⁻¹(1 − α). Then the first part of the decomposition is asymptotically linear in view of Lemma 22.9 and Lemma 21.3; the second part is asymptotically linear by Lemma 22.10 and Theorem 19.3. Combined, this yields the asymptotic linearity of the Winsorized mean and hence its asymptotic normality. □
\[
\sqrt{n}\,(T_n - \theta) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \frac{1}{I_f}\,\frac{-f'}{f}(X_i - \theta) + o_P(1).
\]
Comparison with equation (22.8) for the influence function of an L-statistic shows that the choices of generating measure K and transformation a such that

lead to an L-statistic with the optimal asymptotic influence function. This can be accommodated by setting a(x) = x and
Notes
This chapter gives only a few of the many results available on L-statistics. For instance, the results on Hadamard differentiability can be refined by using a weighted uniform norm combined with convergence of the weighted empirical process. This allows greater weights for the extreme order statistics. For further results and references, see [74], [134], and [136].
PROBLEMS
1. Find a formula for the asymptotic variance of the Winsorized mean.
2. Let T(F) = ∫ F⁻¹(u) k(u) du.
(i) Show that T(F) = 0 for every distribution F that is symmetric about zero if and only if k is symmetric about 1/2.
(ii) Show that T(F) is location equivariant if and only if ∫ k(u) du = 1.
(iii) Show that "efficient" L-statistics obtained from symmetric densities possess both properties (i) and (ii).
3. Let X₁, ..., X_n be a random sample from a continuous distribution function. Show that, conditionally on (X_{n(k)}, X_{n(l)}) = (x, y), the variables X_{n(k+1)}, ..., X_{n(l−1)} are distributed as the order statistics of a random sample of size l − k − 1 from the conditional distribution of X₁ given that x ≤ X₁ ≤ y. How can you use this to study the properties of trimmed means?
4. Find an optimal L-statistic for estimating the location in the logistic and Laplace location
families.
5. Does there exist a distribution for which the trimmed mean is asymptotically optimal for esti-
mating location?
6. Let a and b be functions of bounded variation with a(0) = 0. Then
\[
\int a\,db = \int_{(0,\infty)} \bigl(b(\infty) - b_-\bigr)\,da + \int_{(-\infty,0]} \bigl(b(-\infty) - b_-\bigr)\,da.
\]
Prove this. If a is also bounded, then the right-hand side can be written more succinctly as ab|_{−∞}^{∞} − ∫ b_− da. (Substitute a(x) = ∫_{(0,x]} da for x > 0 and a(x) = −∫_{(x,0]} da for x ≤ 0 into the left side of the equation, and use Fubini's theorem separately on the integrals over the positive and negative parts of the real line.)
23
Bootstrap
23.1 Introduction
In most estimation problems it is important to give an indication of the precision of a given
estimate. A simple method is to provide an estimate of the bias and variance of the estimator;
more accurate is a confidence interval for the parameter. In this chapter we concentrate on
bootstrap confidence intervals and, more generally, discuss the bootstrap as a method of
estimating the distribution of a given statistic.
Let θ̂ be an estimator of some parameter θ attached to the distribution P of the observations. The distribution of the difference θ̂ − θ contains all the information needed for assessing the precision of θ̂. In particular, if ξ_α is the upper α-quantile of the distribution of (θ̂ − θ)/σ̂, then
for quantiles, and next confidence intervals, are obtained from the bootstrap estimator for
the distribution.
The following type of notation is customary. Let θ̂* and σ̂* be computed from (hypothetic) observations obtained according to P̂ in the same way θ̂ and σ̂ are computed from the true observations with distribution P. If θ̂ is related to P̂ in the same way θ is related to P, then the bootstrap estimator for the distribution of (θ̂ − θ)/σ̂ under P is the distribution of (θ̂* − θ̂)/σ̂* under P̂. The latter is evaluated given the original observations, that is, for a fixed realization of P̂.

A bootstrap estimator for a quantile ξ_α of (θ̂ − θ)/σ̂ is a quantile of the distribution of (θ̂* − θ̂)/σ̂* under P̂. This is the smallest value x = ξ̂_α that satisfies the inequality
\[
P\Bigl(\frac{\hat\theta^* - \hat\theta}{\hat\sigma^*} \le x \,\Big|\, \hat P\Bigr) \ge 1 - \alpha.
\]
The notation P(·|P̂) indicates that the distribution of (θ̂*, σ̂*) must be evaluated assuming that the observations are sampled according to P̂ given the original observations. In particular, in the preceding display θ̂ is to be considered nonrandom. The left side of the preceding display is a function of the original observations, whence the same is true for ξ̂_α.

If P̂ is close to the true underlying distribution P, then the bootstrap quantiles should be close to the true quantiles, whence it should be true that
\[
P\Bigl(\frac{\hat\theta - \theta}{\hat\sigma} \le \hat\xi_\alpha\Bigr) \approx 1 - \alpha.
\]
In this chapter we show that this approximation is valid in an asymptotic sense: the probability on the left converges to 1 − α as the number of observations tends to infinity. Thus, the bootstrap confidence interval
\[
[\hat\theta - \hat\xi_\beta\hat\sigma,\; \hat\theta - \hat\xi_{1-\alpha}\hat\sigma]
= \Bigl\{\theta: \hat\xi_{1-\alpha} \le \frac{\hat\theta - \theta}{\hat\sigma} \le \hat\xi_\beta\Bigr\}
\]
possesses asymptotic confidence level 1 − α − β.
The statistic σ̂ is typically chosen equal to an estimator of the (asymptotic) standard deviation of θ̂. The resulting bootstrap method is known as the percentile t-method, in view of the fact that it is based on estimating quantiles of the "studentized" statistic (θ̂ − θ)/σ̂. (The notion of a t-statistic is used here in an abstract manner to denote a centered statistic divided by a scale estimate; in general, there is no relationship with Student's t-distribution from normal theory.) A simpler method is to choose σ̂ independent of the data. If we choose σ̂ = σ̂* = 1, then the bootstrap quantiles ξ̂_α are the quantiles of the centered statistic θ̂* − θ̂. This is known as the percentile method. Both methods yield asymptotically correct confidence levels, although the percentile t-method is generally more accurate.
A third method, Efron's percentile method, proposes the confidence interval [ξ̂_{1−β}, ξ̂_α], for ξ̂_α equal to the upper α-quantile of θ̂*: the smallest value x = ξ̂_α such that

The logical justification for this interval is less strong than for the intervals based on bootstrapping θ̂ − θ, but it appears to work well. The two types of intervals coincide in the case that the conditional distribution of θ̂* − θ̂ is symmetric about zero. We shall see that the difference is asymptotically negligible if θ̂* − θ̂ converges to a normal distribution.

Efron's percentile interval is the only one among the three intervals that is invariant under monotone transformations. For instance, when setting a confidence interval for the correlation coefficient, the sample correlation coefficient might be transformed by Fisher's transformation before carrying out the bootstrap scheme. Next, the confidence interval for the transformed correlation can be transformed back into a confidence interval for the correlation coefficient. This operation would have no effect on Efron's percentile interval, but it can improve the other intervals considerably, in view of the skewness of the statistic. In this sense Efron's method automatically "finds" useful (stabilizing) transformations. The fact that it does not become better through transformations of course does not imply that it is good, but the invariance appears desirable.
Several of the elements of the bootstrap scheme are still unspecified. The missing probability α + β can be distributed over the two tails of the confidence interval in several ways. In many situations equal-tailed confidence intervals, corresponding to the choice α = β, are reasonable. In general, these do not have θ̂ exactly as the midpoint of the interval. An alternative is the interval

with ξ̃_α equal to the upper α-quantile of |θ̂* − θ̂|/σ̂*. A further possibility is to choose α and β under the side condition that the difference ξ̂_β − ξ̂_{1−α}, which is proportional to the length of the confidence interval, is minimal.
More interesting is the choice of the estimator P̂ for the underlying distribution. If the original observations are a random sample X₁, ..., X_n from a probability distribution P, then one candidate is the empirical distribution ℙ_n = n⁻¹ Σᵢ δ_{X_i} of the observations, leading to the empirical bootstrap. Generating a random sample from the empirical distribution amounts to resampling with replacement from the set {X₁, ..., X_n} of original observations. The name "bootstrap" derives from this resampling procedure, which might be surprising at first, because the observations are "sampled twice." If we view the bootstrap as a nonparametric plug-in estimator, we see that there is nothing peculiar about resampling.

We shall be mostly concerned with the empirical bootstrap, even though there are many other possibilities. If the observations are thought to follow a specified parametric model, then it is more reasonable to set P̂ equal to P_θ̂ for a given estimator θ̂. This is what one would have done in the first place, but it is called the parametric bootstrap within the present context. That the bootstrapping methodology is far from obvious is clear from the fact that the literature also considers the exchangeable, the Bayesian, the smoothed, and the wild bootstrap, as well as several schemes for bootstrap corrections. Even "resampling" can be carried out differently, for instance by sampling fewer than n variables, or without replacement.
It is almost never possible to calculate the bootstrap quantiles ξ̂_α exactly. In practice, these estimators are approximated by a simulation procedure. A large number of independent bootstrap samples X₁*, ..., X_n* are generated according to the estimated distribution P̂. Each sample gives rise to a bootstrap value (θ̂* − θ̂)/σ̂* of the standardized statistic. Finally, the bootstrap quantiles ξ̂_α are estimated by the empirical quantiles of these bootstrap values. This simulation scheme always produces an additional (random) error in the coverage probability of the resulting confidence interval. In principle, by using a sufficiently large number of bootstrap samples, possibly combined with an efficient method of simulation, this error can be made arbitrarily small. Therefore the additional error is usually ignored in the theory of the bootstrap procedure. This chapter follows this custom and concerns the "exact" distribution and quantiles of (θ̂* − θ̂)/σ̂*, without taking a simulation error into account.
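The simulation procedure just described can be sketched in a few lines for the percentile t-method applied to a sample mean, with the empirical bootstrap. The function name, the number of bootstrap samples B, and the confidence level are our own choices, and the empirical quantiles are taken in a simple approximate way:

```python
import random
from statistics import mean, stdev

def percentile_t_ci(xs, level=0.90, B=2000, rng=random):
    # Empirical-bootstrap percentile-t interval for the mean (a sketch).
    # sigma-hat is the usual standard error of the mean.
    n = len(xs)
    theta, se = mean(xs), stdev(xs) / n ** 0.5
    t_stars = []
    for _ in range(B):
        bs = [xs[rng.randrange(n)] for _ in range(n)]   # resample with replacement
        t_stars.append((mean(bs) - theta) / (stdev(bs) / n ** 0.5))
    t_stars.sort()
    alpha = (1 - level) / 2
    upper_a = t_stars[int((1 - alpha) * B) - 1]   # upper alpha-quantile
    upper_1a = t_stars[int(alpha * B)]            # upper (1 - alpha)-quantile
    return theta - upper_a * se, theta - upper_1a * se

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(50)]
lo, hi = percentile_t_ci(sample)
print(round(lo, 3), round(hi, 3))
```

Setting the studentizing scale to 1 inside the loop instead would give the percentile method.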
23.2 Consistency
A confidence interval [θ̂_{n,1}, θ̂_{n,2}] is (conservatively) asymptotically consistent at level 1 − α − β if, for every possible P,
\[
\liminf_{n\to\infty} P\bigl(\hat\theta_{n,1} \le \theta \le \hat\theta_{n,2}\bigr) \ge 1 - \alpha - \beta.
\]
It is not a great loss of generality to assume that the sequence (θ̂_n − θ)/σ̂_n converges in distribution to a continuous distribution function F (in our examples Φ). Then consistency relative to the Kolmogorov-Smirnov distance is equivalent to the requirements, for every x,

(See Problem 23.1.) This type of consistency implies the asymptotic consistency of confidence intervals.
23.3 Lemma. Suppose that (θ̂_n − θ)/σ̂_n ⇝ T, and that (θ̂_n* − θ̂_n)/σ̂_n* ⇝ T given the original observations, in probability, for a random variable T with a continuous distribution function. Then the bootstrap confidence intervals [θ̂_n − ξ̂_{n,β}σ̂_n, θ̂_n − ξ̂_{n,1−α}σ̂_n] are asymptotically consistent at level 1 − α − β. If the conditions hold for nonrandom σ̂_n = σ̂_n*, and T is symmetrically distributed about zero, then the same is true for Efron's percentile intervals.
Proof. Every subsequence has a further subsequence along which the sequence (θ̂_n* − θ̂_n)/σ̂_n* converges weakly to T, conditionally, almost surely. For simplicity, assume that the whole sequence converges almost surely; otherwise, argue along subsequences.

If a sequence of distribution functions F_n converges weakly to a distribution function F, then the corresponding quantile functions F_n⁻¹ converge to the quantile function F⁻¹ at every continuity point (see Lemma 21.2). Apply this to the (random) distribution functions F̂_n of (θ̂_n* − θ̂_n)/σ̂_n* and a continuity point 1 − α of the quantile function F⁻¹ of T to conclude
that $\hat\xi_{n,\alpha} = \hat F_n^{-1}(1-\alpha)$ converges almost surely to $F^{-1}(1-\alpha)$. By Slutsky's lemma, the sequence $(\hat\theta_n - \theta)/\hat\sigma_n - \hat\xi_{n,\alpha}$ converges weakly to $T - F^{-1}(1-\alpha)$. Thus
$$P\big(\theta \ge \hat\theta_n - \hat\sigma_n\hat\xi_{n,\alpha}\big) = P\Big(\frac{\hat\theta_n - \theta}{\hat\sigma_n} \le \hat\xi_{n,\alpha} \,\Big|\, P\Big) \to P\big(T \le F^{-1}(1-\alpha)\big) = 1 - \alpha.$$
This argument applies to all except at most countably many $\alpha$. Because both the left and the right sides of the preceding display are monotone functions of $\alpha$ and the right side is continuous, it must be valid for every $\alpha$. The consistency of the bootstrap confidence
interval follows.
Efron's percentile interval is the interval $[t_{n,1-\beta},\, t_{n,\alpha}]$, where $t_{n,\alpha} = \hat\theta_n + \hat\xi_{n,\alpha}\hat\sigma_n$. By the preceding argument,
$$P(\theta \ge t_{n,1-\beta}) = P\Big(\frac{\hat\theta_n - \theta}{\hat\sigma_n} \le -\hat\xi_{n,1-\beta} \,\Big|\, P\Big) \to P\big(T \le -F^{-1}(\beta)\big) = 1 - \beta.$$
The last equality follows by the symmetry of $T$. The consistency follows. ∎
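As a rough numerical sketch of the two interval constructions above (the normal sample, the replicate count, and the levels are all illustrative assumptions, not prescribed by the text), the bootstrap quantiles of the studentized statistic and Efron's percentile interval can be computed as:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, beta = 100, 0.025, 0.025
x = rng.normal(loc=2.0, size=n)                     # theta = E X = 2 (unknown in practice)
theta_hat = x.mean()
sigma_hat = x.std(ddof=1) / np.sqrt(n)              # estimated scale of theta_hat

idx = rng.integers(0, n, size=(5000, n))            # 5000 bootstrap resamples
samples = x[idx]
theta_star = samples.mean(axis=1)
sigma_star = samples.std(axis=1, ddof=1) / np.sqrt(n)

# Upper quantiles xi_{n,a} of the studentized bootstrap statistic
t_star = (theta_star - theta_hat) / sigma_star
xi = lambda a: np.quantile(t_star, 1 - a)

# Bootstrap-t interval [theta_hat - xi_{n,beta} sigma_hat, theta_hat - xi_{n,1-alpha} sigma_hat]
boot_t = (theta_hat - xi(beta) * sigma_hat, theta_hat - xi(1 - alpha) * sigma_hat)

# Efron's percentile interval: quantiles of theta* itself
efron = (np.quantile(theta_star, beta), np.quantile(theta_star, 1 - alpha))
print(boot_t, efron)
```

For a symmetric limit distribution the two intervals nearly coincide, as the lemma suggests; for skewed statistics they can differ noticeably.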
From now on we consider the empirical bootstrap; that is, $\hat P_n = \mathbb P_n$ is the empirical distribution of a random sample $X_1, \dots, X_n$. We shall establish (23.2) for a large class of statistics, with $F$ the normal distribution. Our method is first to prove the consistency for $\hat\theta_n$ equal to the sample mean and next to show that the consistency is retained under application of the delta method. Combining these results, we obtain the consistency of many bootstrap procedures, for instance for setting confidence intervals for the correlation coefficient.
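For instance, a sketch of a basic (nonstudentized) bootstrap confidence interval for the correlation coefficient; the bivariate normal sample, sizes, and interval form below are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Bivariate normal sample with true correlation 0.6 (an arbitrary choice)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)

def corr(a):
    """Sample correlation coefficient of a two-column array."""
    return np.corrcoef(a[:, 0], a[:, 1])[0, 1]

r_hat = corr(z)
# Bootstrap: resample rows of the data with replacement
r_star = np.array([corr(z[rng.integers(0, n, n)]) for _ in range(2000)])

# Basic bootstrap interval: r_hat minus the quantiles of r* - r_hat
lo, hi = r_hat - np.quantile(r_star - r_hat, [0.975, 0.025])
print(lo, hi)
```

The consistency argument of this section justifies exactly this use: the conditional distribution of $r^* - \hat r$ mimics that of $\hat r - \rho$.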
In view of Slutsky's lemma, weak convergence of the centered sequence $\sqrt n(\hat\theta_n - \theta)$ combined with convergence in probability of $\sqrt n\,\hat\sigma_n$ yields the weak convergence of the studentized statistics $(\hat\theta_n - \theta)/\hat\sigma_n$. An analogous statement is true for the bootstrap statistic, for which the convergence in probability of $\sqrt n\,\hat\sigma_n^*$ must be shown conditionally on the original observations. Establishing (conditional) consistency of $\sqrt n\,\hat\sigma_n$ and $\sqrt n\,\hat\sigma_n^*$ is usually not hard. Therefore, we restrict ourselves to studying the nonstudentized statistics.
Let $\bar X_n$ be the mean of a sample of $n$ random vectors from a distribution with finite mean vector $\mu$ and covariance matrix $\Sigma$. According to the multivariate central limit theorem, the sequence $\sqrt n(\bar X_n - \mu)$ is asymptotically $N(0, \Sigma)$-distributed. We wish to show the same for $\sqrt n(\bar X_n^* - \bar X_n)$, in which $\bar X_n^*$ is the average of $n$ observations from $\mathbb P_n$, that is, of $n$ values resampled from the set of original observations $\{X_1, \dots, X_n\}$ with replacement.
23.4 Theorem (Sample mean). Let $X_1, X_2, \dots$ be i.i.d. random vectors with mean $\mu$ and covariance matrix $\Sigma$. Then, conditionally on $X_1, X_2, \dots$, for almost every sequence $X_1, X_2, \dots$,
$$\sqrt n\,(\bar X_n^* - \bar X_n) \rightsquigarrow N(0, \Sigma).$$
Proof. For a fixed sequence $X_1, X_2, \dots$, the variable $\bar X_n^*$ is the average of $n$ observations $X_1^*, \dots, X_n^*$ sampled from the empirical distribution $\mathbb P_n$. The (conditional) mean and covariance matrix of these observations are
$$E\big(X_1^* \mid \mathbb P_n\big) = \sum_{i=1}^n \frac1n X_i = \bar X_n, \qquad \operatorname{Cov}\big(X_1^* \mid \mathbb P_n\big) = \frac1n\sum_{i=1}^n X_iX_i^T - \bar X_n\bar X_n^T.$$
By the strong law of large numbers, the conditional covariance converges to $\Sigma$ for almost every sequence $X_1, X_2, \dots$.
The asymptotic distribution of $\bar X_n^*$ can be established by the central limit theorem. Because the observations $X_1^*, \dots, X_n^*$ are sampled from a different distribution $\mathbb P_n$ for every $n$, a central limit theorem for a triangular array is necessary. The Lindeberg central limit theorem, Theorem 2.27, is appropriate. It suffices to show that, for every $\delta > 0$,
$$E\big(\|X_1^*\|^2 1\{\|X_1^*\| > \delta\sqrt n\} \,\big|\, \mathbb P_n\big) = \frac1n\sum_{i=1}^n \|X_i\|^2\, 1\{\|X_i\| > \delta\sqrt n\} \to 0.$$
The left side is smaller than $n^{-1}\sum_{i=1}^n \|X_i\|^2\, 1\{\|X_i\| > M\}$ as soon as $\delta\sqrt n \ge M$. By the strong law of large numbers, the latter average converges to $E\|X_1\|^2\, 1\{\|X_1\| > M\}$ for almost every sequence $X_1, X_2, \dots$. For sufficiently large $M$, this expression is arbitrarily small. Conclude that the limit superior of the left side of the preceding display is smaller than any number $\eta > 0$ almost surely and hence the left side converges to zero for almost every sequence $X_1, X_2, \dots$. ∎
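A quick simulation in the spirit of the theorem just proved (the exponential sample and all sizes are arbitrary illustrative choices): conditionally on a fixed sample, the centered and scaled resampled mean should be approximately normal with variance equal to the sample variance:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.exponential(size=n)        # a fixed "observed" sample; Var X = 1
sigma2_hat = x.var()               # conditional variance of one draw from P_n

# Conditionally on x, sqrt(n)(Xbar* - Xbar) should be close to N(0, sigma2_hat).
idx = rng.integers(0, n, size=(4000, n))
stat = np.sqrt(n) * (x[idx].mean(axis=1) - x.mean())
print(stat.mean(), stat.var(), sigma2_hat)
```

The empirical mean and variance of the bootstrap statistic match the conditional limit prescribed by the theorem up to Monte Carlo error.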
Assume that $\hat\theta_n$ is a statistic, and that $\phi$ is a given differentiable map. If the sequence $\sqrt n(\hat\theta_n - \theta)$ converges in distribution, then so does the sequence $\sqrt n(\phi(\hat\theta_n) - \phi(\theta))$, by the delta method. The bootstrap estimator for the distribution of $\phi(\hat\theta_n) - \phi(\theta)$ is $\phi(\hat\theta_n^*) - \phi(\hat\theta_n)$. If the bootstrap is consistent for estimating the distribution of $\hat\theta_n - \theta$, then it is also consistent for estimating the distribution of $\phi(\hat\theta_n) - \phi(\theta)$.
23.5 Theorem (Delta method for bootstrap). Let $\phi: \mathbb R^k \mapsto \mathbb R^m$ be a measurable map defined and continuously differentiable in a neighborhood of $\theta$. Let $\hat\theta_n$ be random vectors taking their values in the domain of $\phi$ that converge almost surely to $\theta$. If $\sqrt n(\hat\theta_n - \theta) \rightsquigarrow T$, and $\sqrt n(\hat\theta_n^* - \hat\theta_n) \rightsquigarrow T$ conditionally almost surely, then both $\sqrt n(\phi(\hat\theta_n) - \phi(\theta)) \rightsquigarrow \phi_\theta'(T)$ and $\sqrt n(\phi(\hat\theta_n^*) - \phi(\hat\theta_n)) \rightsquigarrow \phi_\theta'(T)$, conditionally almost surely.
23.6 Example (Sample variance). The (biased) sample variance $S_n^2 = n^{-1}\sum_{i=1}^n (X_i - \bar X_n)^2$ equals $\phi(\bar X_n, \overline{X_n^2})$ for the map $\phi(x, y) = y - x^2$. The empirical bootstrap is consistent
for estimation of the distribution of $(\bar X_n, \overline{X_n^2}) - (\alpha_1, \alpha_2)$, by Theorem 23.4, provided that the fourth moment of the underlying distribution is finite. The delta method shows that the empirical bootstrap is consistent for estimating the distribution of $S_n^2 - \sigma^2$. □
The bootstrap empirical process satisfies
$$\mathbb G_n^* = \sqrt n\,(\mathbb P_n^* - \mathbb P_n) = \frac{1}{\sqrt n}\sum_{i=1}^n (M_{ni} - 1)\,\delta_{X_i},$$
in which $M_{ni}$ is the number of times that $X_i$ is "redrawn" from $\{X_1, \dots, X_n\}$ to form $X_1^*, \dots, X_n^*$. By construction, the vector of counts $(M_{n1}, \dots, M_{nn})$ is independent of $X_1, \dots, X_n$ and multinomially distributed with parameters $n$ and (probabilities) $1/n, \dots, 1/n$.
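The multinomial representation can be mimicked directly. In the sketch below (sizes and the particular function $f$ are illustrative choices), the counts $M_{ni}$ are obtained by tallying a resample, which produces exactly the multinomial vector described above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)

# Drawing n values with replacement and counting how often each X_i appears
# yields the multinomial vector (M_n1, ..., M_nn) with parameters n and 1/n, ..., 1/n.
m = np.bincount(rng.integers(0, n, size=n), minlength=n)

xbar_star = (m * x).sum() / n           # the resampled mean, written via the counts
f = (x <= 0.0).astype(float)            # one function f = 1{x <= 0} from a class F
g_star_f = (m - 1) @ f / np.sqrt(n)     # G_n* f = n^{-1/2} sum_i (M_ni - 1) f(X_i)
print(m.sum(), xbar_star, g_star_f)
```

Writing the bootstrap process through the weights $M_{ni} - 1$ is what makes the conditional (multiplier-type) limit theory tractable.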
If the class $\mathcal F$ has a finite envelope function $F$, then both the empirical process $\mathbb G_n$ and the bootstrap process $\mathbb G_n^*$ can be viewed as maps into the space $\ell^\infty(\mathcal F)$. The analogue of Theorem 23.4 is that the sequence $\mathbb G_n^*$ converges in $\ell^\infty(\mathcal F)$ conditionally in distribution to the same limit as the sequence $\mathbb G_n$, a tight Brownian bridge process $\mathbb G_P$. To give a precise meaning to "conditional weak convergence" in $\ell^\infty(\mathcal F)$, we use the bounded Lipschitz metric. It can be shown that a sequence of random elements in $\ell^\infty(\mathcal F)$ converges in distribution to a tight limit in $\ell^\infty(\mathcal F)$ if and only if the bounded Lipschitz distance between the two laws converges to zero (together with asymptotic measurability of the sequence).†
We use the notation $E_M$ to denote "taking the expectation conditionally on $X_1, \dots, X_n$," or the expectation with respect to the multinomial vectors $M_n$ only.
† For a metric space $\mathbb D$, the set $\mathrm{BL}_1(\mathbb D)$ consists of all functions $h: \mathbb D \to [-1, 1]$ that are uniformly Lipschitz: $|h(z_1) - h(z_2)| \le d(z_1, z_2)$ for every pair $(z_1, z_2)$. See, for example, Chapter 1.12 of [146].
‡ For a proof of Theorem 23.7, see the original paper [58], or, for example, Chapter 3.6 of [146].
23.7 Theorem (Empirical bootstrap). For every Donsker class $\mathcal F$ of measurable functions with finite envelope function $F$,
$$\sup_{h\in\mathrm{BL}_1(\ell^\infty(\mathcal F))}\big|E_M h(\mathbb G_n^*) - E h(\mathbb G_P)\big| \to 0, \quad\text{outer almost surely.}$$
Next, consider an analogue of Theorem 23.5, using the functional delta method. Theorem 23.5 goes through without too many changes. However, for many infinite-dimensional
applications of the delta method the condition of continuous differentiability imposed in
Theorem 23.5 fails. This problem may be overcome in several ways. In particular, contin-
uous differentiability is not necessary for the consistency of the bootstrap "in probability"
(rather than "almost surely"). Because this appears to be sufficient for statistical applica-
tions, we shall limit ourselves to this case.
Consider sequences of maps $\hat\theta_n$ and $\hat\theta_n^*$ with values in a normed space $\mathbb D$ (e.g., $\ell^\infty(\mathcal F)$) such that the sequence $\sqrt n(\hat\theta_n - \theta)$ converges unconditionally in distribution to a tight random element $T$, and the sequence $\sqrt n(\hat\theta_n^* - \hat\theta_n)$ converges conditionally given $X_1, X_2, \dots$ in distribution to the same random element $T$. A precise formulation of the second is that
$$\sup_{h\in\mathrm{BL}_1(\mathbb D)}\Big|E_M h\big(\sqrt n(\hat\theta_n^* - \hat\theta_n)\big) - E h(T)\Big| \xrightarrow{P} 0. \tag{23.8}$$
Here the notation $E_M$ means the conditional expectation given the original data $X_1, X_2, \dots$ and is motivated by the application to the bootstrap empirical distribution.‡ By the preceding theorem, the empirical distribution $\hat\theta_n = \mathbb P_n$ satisfies condition (23.8) if viewed as a map in $\ell^\infty(\mathcal F)$ for a Donsker class $\mathcal F$.
23.9 Theorem (Delta method for bootstrap). Let $\mathbb D$ be a normed space and let $\phi: \mathbb D_\phi \subset \mathbb D \mapsto \mathbb R^k$ be Hadamard differentiable at $\theta$ tangentially to a subspace $\mathbb D_0$. Let $\hat\theta_n$ and $\hat\theta_n^*$ be maps with values in $\mathbb D_\phi$ such that $\sqrt n(\hat\theta_n - \theta) \rightsquigarrow T$ and such that (23.8) holds, in which $\sqrt n(\hat\theta_n^* - \hat\theta_n)$ is asymptotically measurable and $T$ is tight and takes its values in $\mathbb D_0$. Then the sequence $\sqrt n(\phi(\hat\theta_n^*) - \phi(\hat\theta_n))$ converges conditionally in distribution to $\phi_\theta'(T)$, given $X_1, X_2, \dots$, in probability.
Proof. By the Hahn-Banach theorem it is not a loss of generality to assume that the derivative $\phi_\theta': \mathbb D \mapsto \mathbb R^k$ is defined and continuous on the whole space. For every $h \in \mathrm{BL}_1(\mathbb R^k)$, the function $h \circ \phi_\theta'$ is contained in $\mathrm{BL}_{\|\phi_\theta'\|}(\mathbb D)$. Thus (23.8) implies, for every $\varepsilon > 0$,
$$\sup_{h\in\mathrm{BL}_1(\mathbb R^k)}\Big|E_M h\big(\sqrt n(\phi(\hat\theta_n^*) - \phi(\hat\theta_n))\big) - E h\big(\phi_\theta'(T)\big)\Big| \le o_P(1) + \varepsilon + 2\,P_M\Big(\big\|\sqrt n\big(\phi(\hat\theta_n^*) - \phi(\hat\theta_n)\big) - \phi_\theta'\big(\sqrt n(\hat\theta_n^* - \hat\theta_n)\big)\big\| > \varepsilon\Big). \tag{23.10}$$
The theorem is proved once it has been shown that the conditional probability on the right converges to zero in outer probability.
The sequence $\sqrt n(\hat\theta_n^* - \hat\theta_n,\ \hat\theta_n - \theta)$ converges (unconditionally) in distribution to a pair of two independent copies of $T$. This follows, because conditionally given $X_1, X_2, \dots$, the second component is deterministic, and the first component converges in distribution to $T$, which is the same for every sequence $X_1, X_2, \dots$. Therefore, by the continuous-mapping theorem both sequences $\sqrt n(\hat\theta_n - \theta)$ and $\sqrt n(\hat\theta_n^* - \theta)$ converge (unconditionally) in distribution to separable random elements that concentrate on the linear space $\mathbb D_0$. By Theorem 20.8,
$$\sqrt n\big(\phi(\hat\theta_n) - \phi(\theta)\big) - \phi_\theta'\big(\sqrt n(\hat\theta_n - \theta)\big) \xrightarrow{P} 0, \qquad \sqrt n\big(\phi(\hat\theta_n^*) - \phi(\theta)\big) - \phi_\theta'\big(\sqrt n(\hat\theta_n^* - \theta)\big) \xrightarrow{P} 0.$$
Subtract the second from the first equation to conclude that the sequence $\sqrt n\big(\phi(\hat\theta_n^*) - \phi(\hat\theta_n)\big) - \phi_\theta'\big(\sqrt n(\hat\theta_n^* - \hat\theta_n)\big)$ converges (unconditionally) to 0 in outer probability. Thus, the conditional probability on the right in (23.10) converges to zero in outer mean. This concludes the proof. ∎
23.11 Example (Empirical distribution function). Because the cells $(-\infty, t] \subset \mathbb R$ form a Donsker class, the empirical distribution function $\mathbb F_n$ of a random sample of real-valued variables satisfies the condition of the preceding theorem. Thus, conditionally on $X_1, X_2, \dots$, the sequence $\sqrt n\big(\phi(\mathbb F_n^*) - \phi(\mathbb F_n)\big)$ converges in distribution to the same limit as $\sqrt n\big(\phi(\mathbb F_n) - \phi(F)\big)$, for every Hadamard-differentiable function $\phi$.
This includes, among others, quantiles and trimmed means, under the same conditions on the underlying measure $F$ that ensure that empirical quantiles and trimmed means are asymptotically normal. See Lemmas 21.3, 22.9, and 22.10. □
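As an illustrative check of this example (the Cauchy sample, the sizes, and the choice of the median as the Hadamard-differentiable functional are all assumptions made here, not in the text), the bootstrap distribution of the sample median should mimic its sampling distribution, whose asymptotic standard deviation is $\sqrt{p(1-p)}/f(F^{-1}(p)) = \pi/2$ at $p = 1/2$ for the standard Cauchy density:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.standard_cauchy(size=n)       # median 0, density f(0) = 1/pi
med = np.median(x)

idx = rng.integers(0, n, size=(3000, n))
med_star = np.median(x[idx], axis=1)  # bootstrap medians

# sqrt(n)(med* - med) should be approximately N(0, pi^2/4), i.e. sd pi/2
s = np.sqrt(n) * (med_star - med)
print(s.std(), np.pi / 2)
```

Note that the underlying Cauchy distribution has no moments at all; the bootstrap works here because the quantile functional, not the mean, is being bootstrapped.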
the left and right tails. In many cases the percentile method, the percentile $t$-method, and the normal approximation all yield correct two-tailed confidence intervals up to order $O(n^{-1})$. Their relative qualities may be studied by a more refined analysis. This must also take into account the length of the confidence intervals, for an increase in length of order $O_P(n^{-3/2})$ may easily reduce the coverage error to the order $O(n^{-k})$ for any $k$.
The technical tools to obtain these results are Edgeworth expansions. Edgeworth's classical expansion is a refinement of the central limit theorem that shows the magnitude of the difference between the distribution function of a sample mean and its normal approximation. Edgeworth expansions have subsequently been obtained for many other statistics as well.
An Edgeworth expansion for the distribution function of a statistic $(\hat\theta_n - \theta)/\hat\sigma_n$ is typically an expansion in increasing powers of $1/\sqrt n$ of the form
$$P\Big(\frac{\hat\theta_n - \theta}{\hat\sigma_n} \le x \,\Big|\, P\Big) = \Phi(x) + p_1(x \mid P)\,\frac{\phi(x)}{\sqrt n} + p_2(x \mid P)\,\frac{\phi(x)}{n} + \cdots. \tag{23.12}$$
23.13 Example (Sample mean). Let $\bar X_n$ be the mean of a random sample of size $n$, and let $S_n^2 = n^{-1}\sum_{i=1}^n (X_i - \bar X_n)^2$ be the (biased) sample variance. If $\mu$, $\sigma^2$, $\lambda$ and $\kappa$ are the mean, variance, skewness, and kurtosis of the observations, then
$$P\Big(\frac{\bar X_n - \mu}{\sigma/\sqrt n} \le x \,\Big|\, P\Big) = \Phi(x) - \frac{\lambda(x^2 - 1)}{6\sqrt n}\,\phi(x) - \Big(\frac{\kappa(x^3 - 3x)}{24} + \frac{\lambda^2(x^5 - 10x^3 + 15x)}{72}\Big)\frac{\phi(x)}{n} + \cdots.$$
These are the first two terms of the classical expansion of Edgeworth. If the standard deviation of the observations is unknown, an Edgeworth expansion of the $t$-statistic is of more interest. This takes the form (see [72, pp. 71-73])
$$P\Big(\frac{\bar X_n - \mu}{S_n/\sqrt n} \le x \,\Big|\, P\Big) = \Phi(x) + \frac{\lambda(2x^2 + 1)}{6\sqrt n}\,\phi(x) + \cdots.$$
Although the polynomials are different, these expansions are of the same form. Note that the polynomial appearing in the $1/\sqrt n$ term is even in both cases.
These expansions generally fail if the underlying distribution of the observations is discrete. Cramér's condition requires that the modulus of the characteristic function of the observations be bounded away from unity on closed intervals that do not contain the origin. This condition is satisfied if the observations possess a density with respect to Lebesgue measure. Next to Cramér's condition a sufficient number of moments of the observations must exist. □
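The gain from the $1/\sqrt n$ term can be checked numerically. In the sketch below (the Exp(1) observations, with skewness $\lambda = 2$, and all sizes are illustrative assumptions), the one-term Edgeworth approximation to the distribution of the standardized mean is compared with the plain normal approximation, using a large Monte Carlo sample as a stand-in for the exact probability:

```python
import numpy as np
from math import erf, exp, pi, sqrt

n, x0 = 20, 2.0
lam = 2.0                                  # skewness of the Exp(1) distribution

Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
phi = lambda t: exp(-t * t / 2.0) / sqrt(2.0 * pi)

# One-term Edgeworth correction; note the 1/sqrt(n) polynomial -(lam/6)(x^2 - 1)
# is even, as remarked in the text.
edge = Phi(x0) - lam * (x0 * x0 - 1.0) * phi(x0) / (6.0 * sqrt(n))

rng = np.random.default_rng(6)
z = rng.exponential(size=(400_000, n))
stat = sqrt(n) * (z.mean(axis=1) - 1.0)    # (Xbar - mu)/(sigma/sqrt(n)), sigma = 1
exact = (stat <= x0).mean()                # Monte Carlo stand-in for the exact value
print(abs(exact - Phi(x0)), abs(exact - edge))
```

For this skewed distribution the Edgeworth-corrected value is roughly an order of magnitude closer to the truth than the normal approximation, consistent with the $O(1/\sqrt n)$ versus $O(1/n)$ error rates.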
23.14 Example (Studentized quantiles). The $p$th quantile $F^{-1}(p)$ of a distribution function $F$ may be estimated by the empirical $p$th quantile $\mathbb F_n^{-1}(p)$. This is the $r$th order statistic of the sample for $r$ equal to the largest integer not greater than $np$. Its mean square error can be computed explicitly. An empirical estimator $\hat\sigma_n^2$ for the mean square error of $\mathbb F_n^{-1}(p)$ is obtained by replacing $F$ by the empirical distribution function. If the distribution has a differentiable density $f$, then the polynomial $p_1(x \mid F)$ of the resulting Edgeworth expansion can be computed explicitly; it involves $f'(F^{-1}(p))$, the quantities $p$ and $\sqrt{p(1-p)}$, and the rounding difference $r - np$.
This expansion is unusual in two respects. First, the remainder is of the order $O(n^{-3/4})$ rather than of the order $O(n^{-1})$. Second, the polynomial appearing in the first term is not even. For this reason several of the conclusions of this section are not valid for sample quantiles. In particular, the order of correctness of all empirical bootstrap procedures is $O_P(n^{-1/2})$, not greater. In this case, a "smoothed bootstrap" based on "resampling" from a density estimator (as in Chapter 24) may be preferable, depending on the underlying distribution. □
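A minimal sketch of such a smoothed bootstrap (the normal-reference bandwidth $1.06\,s\,n^{-1/5}$ and all other concrete choices are illustrative, not prescribed by the text): resampling from a normal-kernel density estimate amounts to resampling the data with replacement and then adding kernel-distributed noise:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
h = 1.06 * x.std() * n ** (-1 / 5)   # a common normal-reference bandwidth choice

def smoothed_resample():
    """One sample of size n from the kernel density estimate: resample + kernel noise."""
    return x[rng.integers(0, n, n)] + h * rng.normal(size=n)

# Smoothed-bootstrap distribution of the sample median
q_star = np.array([np.median(smoothed_resample()) for _ in range(1000)])
print(q_star.mean(), np.median(x))
```

Because the resampled values are no longer tied to the original order statistics, the discreteness that spoils the Edgeworth expansion for quantiles is smoothed away.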
If the distribution function of $(\hat\theta_n - \theta)/\hat\sigma_n$ admits an Edgeworth expansion (23.12), then it is immediate that the normal approximation is correct up to order $O(1/\sqrt n)$. Evaluation of the expansion at the normal quantiles $z_{\alpha_n}$ and $z_{1-\alpha_n}$ yields
$$P\big(\hat\theta_n - z_{\alpha_n}\hat\sigma_n \le \theta \le \hat\theta_n - z_{1-\alpha_n}\hat\sigma_n \,\big|\, P\big) = 1 - 2\alpha_n + O\Big(\frac1n\Big).$$
The $O(n^{-1})$ term results from the Edgeworth expansion (23.12) and is universal, independent of the sequence $\alpha_n$. For $\alpha_n = \alpha - M/n$ and a sufficiently large constant $M$, the right side becomes
$$1 - 2\alpha + \frac{2M}{n} + O\Big(\frac1n\Big) \ge 1 - 2\alpha - O\Big(\frac{1}{n^k}\Big).$$
Thus, a slight widening of the normal confidence interval yields asymptotically correct (conservative) coverage probabilities up to any order $O(n^{-k})$. If $\hat\sigma_n = O_P(n^{-1/2})$, then the widened interval is $2(z_{\alpha_n} - z_\alpha)\hat\sigma_n = O_P(n^{-3/2})$ wider than the normal confidence interval. This difference is small relative to the absolute length of the interval, which is $O_P(n^{-1/2})$. Also, the choice of the scale estimator $\hat\sigma_n$ (which depends on $\hat\theta_n$) influences the width of the interval more strongly than replacing $z_\alpha$ by $z_{\alpha_n}$.
An Edgeworth expansion usually remains valid in a conditional sense if a good estimator $\hat P_n$ is substituted for the true underlying distribution $P$. The bootstrap version of expansion (23.12) is
$$P\Big(\frac{\hat\theta_n^* - \hat\theta_n}{\hat\sigma_n^*} \le x \,\Big|\, \hat P_n\Big) = \Phi(x) + p_1(x \mid \hat P_n)\,\frac{\phi(x)}{\sqrt n} + p_2(x \mid \hat P_n)\,\frac{\phi(x)}{n} + \cdots.$$
In this expansion the remainder term is a random variable, which ought to be of smaller order in probability than the last term. In the given expansion the remainder ought to be $o_P(n^{-1})$ uniformly in $x$. Subtract the bootstrap expansion from the unconditional expansion (23.12) to obtain that
The functions $q_i$ are polynomials, which are generally different from the polynomials
occurring in the Edgeworth expansion for the studentized statistic. The bootstrap version
of this expansion is
The Kolmogorov-Smirnov distance between the distribution functions on the left in the preceding displays is of the same order as the difference between the leading terms $\Phi(x/\sigma_n) - \Phi(x/\hat\sigma_n)$ on the right. Because the estimator $\hat\sigma_n$ is typically not closer than $O_P(n^{-1/2})$ to $\sigma_n$, this difference may be expected to be at best of the order $O_P(n^{-1/2})$. Thus, the percentile method for estimating a distribution is correct only up to the order $O_P(n^{-1/2})$, whereas the percentile $t$-method is seen to be correct up to the order $O_P(n^{-1})$.
One-sided bootstrap percentile $t$ and percentile confidence intervals attain orders of correctness that are equal to the orders of correctness of the bootstrap estimators of the distribution functions: $O_P(n^{-1})$ and $O_P(n^{-1/2})$, respectively. For equal-tailed confidence intervals both methods typically have coverage error of the order $O_P(n^{-1})$. The decrease in coverage error is due to the cancellation of the errors contributed by the left and right tails, just as in the case of normal confidence intervals. The proofs of these assertions are somewhat technical. The coverage probabilities can be expressed in probabilities of the type
$$P\Big(\frac{\hat\theta_n - \theta}{\hat\sigma_n} \le \hat\xi_{n,\alpha} \,\Big|\, P\Big). \tag{23.15}$$
Thus we need an Edgeworth expansion of the distribution of $(\hat\theta_n - \theta)/\hat\sigma_n - \hat\xi_{n,\alpha}$, or a related quantity. A technical complication is that the random variables $\hat\xi_{n,\alpha}$ are only implicitly defined, as the solution of (23.1).
To find the expansions, first evaluate the Edgeworth expansion for $(\hat\theta_n^* - \hat\theta_n)/\hat\sigma_n^*$ at its upper quantile $\hat\xi_{n,\alpha}$ to find that
$$\Phi(\hat\xi_{n,\alpha}) + p_1(\hat\xi_{n,\alpha} \mid \hat P_n)\,\frac{\phi(\hat\xi_{n,\alpha})}{\sqrt n} + \cdots = 1 - \alpha.$$
After expanding $\Phi$, $p_1$ and $\phi$ in Taylor series around $z_\alpha$, we can invert this equation to obtain the (conditional) Cornish-Fisher expansion
$$\hat\xi_{n,\alpha} = z_\alpha - \frac{p_1(z_\alpha \mid \hat P_n)}{\sqrt n} + O_P\Big(\frac1n\Big).$$
$O(n^{-1})$, whence the coverage error of a percentile $t$-confidence interval is of the order $O(n^{-1})$.
For percentile intervals we proceed in the same manner, this time inverting the Edgeworth expansion of the unstudentized statistic. The (conditional) Cornish-Fisher expansion for the quantile $\hat\xi_{n,\alpha}$ of $\hat\theta_n^* - \hat\theta_n$ takes the form
$$\frac{\hat\xi_{n,\alpha}}{\hat\sigma_n} = z_\alpha - \frac{q_1(z_\alpha \mid \hat P_n)}{\sqrt n} + O_P\Big(\frac1n\Big).$$
The coverage probabilities of percentile confidence intervals can be expressed in probabilities of the type
$$P\big(\hat\theta_n - \theta \le \hat\xi_{n,\alpha} \,\big|\, P\big) = P\Big(\frac{\hat\theta_n - \theta}{\hat\sigma_n} \le \frac{\hat\xi_{n,\alpha}}{\hat\sigma_n} \,\Big|\, P\Big).$$
Insert the Cornish-Fisher expansion, again neglect the $O_P(n^{-1})$ term, and use the Edgeworth expansion (23.12) to rewrite this as
Because $p_1$ and $q_1$ are different, the cancellation that was found for the percentile $t$-method does not occur, and this is generally equal to $1 - \alpha$ only up to the order $O(n^{-1/2})$. Consequently, asymmetric percentile intervals have coverage error of the order $O(n^{-1/2})$. On the other hand, the coverage probability of the symmetric confidence interval $[\hat\theta_n - \hat\xi_{n,\alpha},\, \hat\theta_n - \hat\xi_{n,1-\alpha}]$ is equal to the expression in the preceding display minus this expression evaluated for $1 - \alpha$ instead of $\alpha$. In the common situation that both polynomials $p_1$ and $q_1$ are even, the terms of order $O(n^{-1/2})$ cancel, and the difference is equal to $1 - 2\alpha$ up to the order $O(n^{-1})$. Then the percentile two-tailed confidence interval has the same order of correctness as the symmetric normal interval and the percentile $t$-intervals.
Notes
For a wider scope on the applications of the bootstrap, see the book [44], whose first
author Efron is the inventor of the bootstrap. Hall [72] gives a detailed treatment of higher-
order expansions of a number of bootstrap schemes. For more information concerning
the consistency of the empirical bootstrap, and the consistency of the bootstrap under the
application of the delta method, see Chapter 3.6 and Section 3.9.3 of [146], or the paper by
Giné and Zinn [58].
PROBLEMS
1. Let $\hat F_n$ be a sequence of random distribution functions and $F$ a continuous, fixed distribution function. Show that the following statements are equivalent:
(i) $\hat F_n(x) \xrightarrow{P} F(x)$ for every $x$.
(ii) $\sup_x |\hat F_n(x) - F(x)| \xrightarrow{P} 0$.
2. Compare in a simulation study Efron's percentile method, the normal approximation in combina-
tion with Fisher's transformation, and the percentile method to set a confidence interval for the
correlation coefficient.
3. Let $X_{(n)}$ be the maximum of a sample of size $n$ from the uniform distribution on $[0, 1]$, and let $X_{(n)}^*$ be the maximum of a sample of size $n$ from the empirical distribution $\mathbb P_n$ of the first sample. Show that $P(X_{(n)}^* = X_{(n)} \mid \mathbb P_n) \to 1 - e^{-1}$. What does this mean regarding the consistency of the empirical bootstrap estimator of the distribution of the maximum?
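A numerical check of the limit claimed in this problem (the sizes below are arbitrary illustrative choices): the sample maximum survives resampling iff it is drawn at least once, which happens with probability $1 - (1 - 1/n)^n \to 1 - e^{-1}$:

```python
import numpy as np

n = 1000
# Exact conditional probability: the maximum is drawn at least once
p_exact = 1 - (1 - 1 / n) ** n

rng = np.random.default_rng(8)
x = rng.uniform(size=n)
# Simulate 5000 bootstrap resamples and check how often the maximum recurs
hits = (x[rng.integers(0, n, size=(5000, n))].max(axis=1) == x.max()).mean()
print(p_exact, hits, 1 - np.exp(-1))
```

The positive limiting probability of a point mass at $X_{(n)}$ is exactly what makes the bootstrap inconsistent for the maximum.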
4. Devise a bootstrap scheme for setting confidence intervals for $\beta$ in the linear regression model $Y_i = \alpha + \beta x_i + e_i$. Show consistency.
5. (Parametric bootstrap.) Let $\hat\theta_n$ be an estimator based on observations from a parametric model $P_\theta$ such that $\sqrt n(\hat\theta_n - \theta - h_n/\sqrt n)$ converges under $\theta + h_n/\sqrt n$ to a continuous distribution $L_\theta$ for every converging sequence $h_n$ and every $\theta$. (This is slightly stronger than regularity as defined in the chapter on asymptotic efficiency.) Show that the parametric bootstrap is consistent: If $\hat\theta_n^*$ is $\hat\theta_n$ computed from observations obtained from $P_{\hat\theta_n}$, then $\sqrt n(\hat\theta_n^* - \hat\theta_n) \rightsquigarrow L_\theta$ conditionally on the original observations, in probability. (The conditional law of $\sqrt n(\hat\theta_n^* - \hat\theta_n)$ is $L_{n,\hat\theta_n}$ if $L_{n,\theta}$ is the distribution of $\sqrt n(\hat\theta_n - \theta)$ under $\theta$.)
6. Suppose that $\sqrt n(\hat\theta_n - \theta) \rightsquigarrow T$ and $\sqrt n(\hat\theta_n^* - \hat\theta_n) \rightsquigarrow T$ in probability given the original observations. Show that $\sqrt n(\phi(\hat\theta_n^*) - \phi(\hat\theta_n)) \rightsquigarrow \phi_\theta'(T)$ in probability for every map $\phi$ that is differentiable at $\theta$.
7. Let $U_n$ be a $U$-statistic based on a random sample $X_1, \dots, X_n$ with kernel $h(x, y)$ such that both $Eh(X_1, X_1)$ and $Eh^2(X_1, X_2)$ are finite. Let $U_n^*$ be the same $U$-statistic based on a sample $X_1^*, \dots, X_n^*$ from the empirical distribution of $X_1, \dots, X_n$. Show that $\sqrt n(U_n^* - U_n)$ converges conditionally in distribution to the same limit as $\sqrt n(U_n - \theta)$, almost surely.
8. Suppose that $\sqrt n(\hat\theta_n - \theta) \rightsquigarrow T$ and $\sqrt n(\hat\theta_n^* - \hat\theta_n) \rightsquigarrow T$ in probability given the original observations. Show that, unconditionally, $\sqrt n(\hat\theta_n - \theta,\ \hat\theta_n^* - \hat\theta_n) \rightsquigarrow (S, T)$ for independent copies $S$ and $T$ of $T$. Deduce the unconditional limit distribution of $\sqrt n(\hat\theta_n^* - \theta)$.
24
Nonparametric Density
Estimation
24.1 Introduction
Statistical models are called parametric models if they are described by a Euclidean param-
eter (in a nice way). For instance, the binomial model is described by a single parameter
p, and the normal model is given through two unknowns: the mean and the variance of
the observations. In many situations there is insufficient motivation for using a particular
parametric model, such as a normal model. An alternative at the other end of the scale
is a nonparametric model, which leaves the underlying distribution of the observations
essentially free. In this chapter we discuss one example of a problem of nonparametric
estimation: estimating the density of a sample of observations if nothing is known a priori.
From the many methods for this problem, we present two: kernel estimation and monotone
estimation. Notwithstanding its simplicity, this method can be fully asymptotically efficient.
If the underlying density were known to be normal, it could simply be estimated by the parametric fit
$$x \mapsto \frac{1}{s_n\sqrt{2\pi}}\, e^{-\frac12 (x - \bar X_n)^2/s_n^2},$$
in which $\bar X_n$ and $s_n^2$ are the sample mean and variance.
Figure 24.1. The kernel estimator with normal kernel and two observations for three bandwidths:
small (left), intermediate (center) and large (right). The figures show both the contributions of the
two observations separately (dotted lines) and the kernel estimator (solid lines), which is the sum of
the two dotted lines.
In this section we suppose that we have no prior knowledge of the form of f and want to
"let the data speak as much as possible for themselves."
Let K be a probability density with mean 0 and variance 1, for instance the standard
normal density. A kernel estimator with kernel or window K is defined as
$$x \mapsto \frac{1}{nh}\sum_{i=1}^n K\Big(\frac{x - X_i}{h}\Big).$$
Every small mountain is centred around an observation $X_i$ and has area $1/n$ under it, for
any bandwidth h. For a small bandwidth the mountain is very concentrated (a peak), while
for a large bandwidth the mountain is low and flat. Figure 24.1 shows how the mountains
add up to a single estimator. If the bandwidth is small, then the mountains remain separated
and their sum is peaky. On the other hand, if the bandwidth is large, then the sum of the
individual mountains is too flat. Intermediate values of the bandwidth should give the best
results.
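A direct implementation of the kernel estimator defined above (the normal kernel, the sample, the grid, and the bandwidth are illustrative choices, loosely mirroring the figures):

```python
import numpy as np

def kernel_estimate(x, data, h):
    """Kernel density estimate with standard normal kernel K and bandwidth h:
    f_hat(x) = (1/(n h)) sum_i K((x - X_i)/h)."""
    u = (x[:, None] - data[None, :]) / h
    k = np.exp(-u * u / 2) / np.sqrt(2 * np.pi)   # one "mountain" per observation
    return k.sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(9)
data = rng.normal(size=15)
grid = np.linspace(-4, 4, 801)
f_hat = kernel_estimate(grid, data, h=0.68)

# Each mountain has area 1/n, so the estimate integrates to (about) 1.
area = f_hat.sum() * (grid[1] - grid[0])          # simple Riemann approximation
print(area)
```

Varying `h` reproduces the qualitative behavior in the figures: a peaky sum of separated mountains for small bandwidths, an overly flat curve for large ones.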
Figure 24.2 shows the kernel method in action on a sample from the normal distribution.
The solid and dotted lines are the estimator and the true density, respectively. The three
pictures give the kernel estimates using three different bandwidths - small, intermediate,
and large - each time with the standard normal kernel.
Figure 24.2. Kernel estimates for the density of a sample of size 15 from the standard normal density for three different bandwidths, $h = 0.68$ (left), $1.82$ (center), and $4.5$ (right), using a normal kernel. The dotted line gives the true density.
A popular criterion to judge the quality of density estimators is the mean integrated square error (MISE), which is defined as
$$\mathrm{MISE}(\hat f_n; f) = \int E_f\big(\hat f_n(x) - f(x)\big)^2\,dx = \int \operatorname{var}_f \hat f_n(x)\,dx + \int \big(E_f \hat f_n(x) - f(x)\big)^2\,dx.$$
This is the mean square error $E_f(\hat f_n(x) - f(x))^2$ of $\hat f_n(x)$ as an estimator of $f(x)$ integrated over the argument $x$. If the mean integrated square error is small, then the function $\hat f_n$ is close to the function $f$. (We assume that $\hat f_n$ is jointly measurable to make the mean square error well defined.)
As can be seen from the second representation, the mean integrated square error is the sum of an integrated "variance term" and a "bias term." The mean integrated square error can be small only if both terms are small. We shall show that the two terms are of the orders
$$\frac{1}{nh}\int K^2(y)\,dy \qquad\text{and}\qquad h^4\Big(\int y^2 K(y)\,dy\Big)^2 \int f''(x)^2\,dx,$$
respectively. Then it follows that the variance and the bias terms are balanced for $(nh)^{-1} \asymp h^4$, which implies an optimal choice of bandwidth $h \sim n^{-1/5}$ and yields a mean integrated square error of order $n^{-4/5}$.
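The balancing act between the two terms can be made concrete numerically. In the sketch below (the sample size and the normal reference density are illustrative; the constants $\int K^2 = 1/(2\sqrt\pi)$ and $\int (f'')^2 = 3/(8\sqrt\pi)$ are those of the standard normal kernel and density), the asymptotic MISE is minimized over $h$ and compared with the closed-form minimizer $h = \big(\int K^2 / (n \int (f'')^2)\big)^{1/5}$:

```python
import numpy as np

def amise(h, n, rK, mu2K, rf2):
    """Asymptotic MISE: variance term 1/(nh) int K^2 plus squared-bias term
    (h^4/4) (int y^2 K)^2 int (f'')^2."""
    return rK / (n * h) + 0.25 * mu2K ** 2 * rf2 * h ** 4

n = 1000
rK = 1 / (2 * np.sqrt(np.pi))     # int K^2 for the standard normal kernel
rf2 = 3 / (8 * np.sqrt(np.pi))    # int (f'')^2 for the standard normal density
hs = np.linspace(0.05, 1.0, 500)
h_best = hs[np.argmin(amise(hs, n, rK, 1.0, rf2))]

# Closed-form minimizer: h = (rK / (mu2K^2 rf2 n))^(1/5) ~ n^(-1/5)
h_formula = (rK / (rf2 * n)) ** 0.2
print(h_best, h_formula)
```

For the normal reference this yields the familiar $h \approx 1.06\,n^{-1/5}$ rule, and the $n^{-1/5}$ scaling of the minimizer is exactly the bandwidth order derived in the text.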
Informally, these orders follow from simple Taylor expansions. For instance, the bias of $\hat f_n(x)$ can be written
$$E_f \hat f_n(x) - f(x) = \int \frac1h K\Big(\frac{x - t}{h}\Big) f(t)\,dt - f(x) = \int K(y)\big(f(x - hy) - f(x)\big)\,dy.$$
Developing $f$ in a Taylor series around $x$ and using that $\int y K(y)\,dy = 0$, we see, informally, that this is equal to
$$\tfrac12\, h^2 f''(x) \int y^2 K(y)\,dy + \cdots.$$
Thus, the squared bias is of the order $h^4$. The variance term can be handled similarly. A precise theorem is as follows.
Take the integral with respect to $x$ on both left and right sides. Because $\int f(x - hy)\,dx = 1$ is the same for every value of $hy$, the right side reduces to $(nh)^{-1}\int K^2(y)\,dy$, by Fubini's theorem. This concludes the proof for the variance term.
To upper bound the bias term we first write the bias $E_f \hat f_n(x) - f(x)$ in the form as given preceding the statement of the theorem. Next we insert the formula
$$f(x - hy) = f(x) - hy\,f'(x) + h^2 y^2 \int_0^1 f''(x - shy)(1 - s)\,ds.$$
Because the kernel $K$ has mean zero by assumption, the first term inside the square brackets can be deleted. Using the Cauchy-Schwarz inequality $(EUV)^2 \le EU^2\,EV^2$ on the variables $U = Y$ and $V = Y f''(x - ShY)(1 - S)$ for $Y$ distributed with density $K$ and $S$ uniformly distributed on $[0, 1]$ independent of $Y$, we see that the square of the bias is bounded above by
$$h^4 \int K(y)\,y^2\,dy \int\!\!\int_0^1 K(y)\,y^2 f''(x - shy)^2 (1 - s)^2\,ds\,dy.$$
After taking the integral with respect to $x$, this is bounded above by
$$\tfrac13\, h^4 \Big(\int K(y)\,y^2\,dy\Big)^2 \int f''(x)^2\,dx.$$
This concludes the derivation for the bias term.
The last assertion of the theorem is trivial. •
The rate $O(n^{-4/5})$ for the mean integrated square error is not impressive if we compare it to the rate that could be achieved if we knew a priori that $f$ belonged to some parametric family of densities $f_\theta$. Then, likely, we would be able to estimate $\theta$ by an estimator such that $\hat\theta = \theta + O_P(n^{-1/2})$, and we would expect
$$E_\theta \int \big(f_{\hat\theta}(x) - f_\theta(x)\big)^2\,dx = O\Big(\frac1n\Big).$$
This is a factor $n^{-1/5}$ smaller than the mean square error of a kernel estimator.
This loss in efficiency is only a modest price. After all, the kernel estimator works for
every density that is twice continuously differentiable whereas the parametric estimator
presumably fails miserably if the true density does not belong to the postulated parametric
model.
346 Nonparametric Density Estimation
Moreover, the lost factor $n^{-1/5}$ can be (almost) recovered if we assume that $f$ has sufficiently many derivatives. Suppose that $f$ is $m$ times continuously differentiable. Drop the condition that the kernel $K$ is a probability density, but use a kernel $K$ such that
$$\int K(y)\,dy = 1, \qquad \int y^j K(y)\,dy = 0 \quad (j = 1, \dots, m-1), \qquad \int |y|^m |K(y)|\,dy < \infty.$$
Then, by the same arguments as before, the bias term can be expanded in the form
$$\frac{(-h)^m}{m!}\, f^{(m)}(x) \int y^m K(y)\,dy + \cdots.$$
24.2 Theorem. Let $\mathcal F_{m,M}$ be the class of probability densities $f$ that are $m$ times continuously differentiable with $\int |f^{(m)}(x)|^2\,dx \le M$. Then, for the bandwidth $h \sim n^{-1/(2m+1)}$, there exists a constant $C_{m,M}$ such that
$$\sup_{f\in\mathcal F_{m,M}} E_f \int \big(\hat f_n(x) - f(x)\big)^2\,dx \le C_{m,M}\,\Big(\frac1n\Big)^{2m/(2m+1)}.$$
24.3 Rate Optimality
In this section we show that this upper bound is sharp, and the kernel estimator rate-optimal, in that the maximum risk on the left side is bounded below by a similar expression for every density estimator $\hat f_n$, for every fixed $m$ and $M$.
The proof is based on a construction of subsets $\mathcal F_n \subset \mathcal F_{m,M}$, consisting of $2^{r_n}$ functions, with $r_n = \lfloor n^{1/(2m+1)}\rfloor$, and on bounding the supremum over $\mathcal F_{m,M}$ by the average over $\mathcal F_n$. Thus the number of elements in the average grows fairly rapidly with $n$. An approach, such as in Section 14.5, based on the comparison of $\hat f_n$ at only two elements of $\mathcal F_{m,M}$ does not seem to work for the integrated risk, although such an approach readily yields a lower bound for the maximum risk $\sup_f E_f(\hat f_n(x) - f(x))^2$ at a fixed $x$.
The subset $\mathcal F_n$ is indexed by the set of all vectors $\theta \in \{0, 1\}^{r_n}$ consisting of sequences of $r_n$ zeros and ones. For $h_n = n^{-1/(2m+1)}$, let $x_{n,1} < x_{n,2} < \cdots < x_{n,r_n}$ be a regular grid of meshwidth $2h_n$. For a fixed probability density $f$ and a fixed function $K$ with support $(-1, 1)$, define, for every $\theta \in \{0, 1\}^{r_n}$,
$$f_{n,\theta}(x) = f(x) + h_n^m \sum_{j=1}^{r_n} \theta_j\, K\Big(\frac{x - x_{n,j}}{h_n}\Big).$$
If $f$ is bounded away from zero on an interval containing the grid, $|K|$ is bounded, and $\int K(x)\,dx = 0$, then $f_{n,\theta}$ is a probability density, at least for large $n$. It follows that there exist many choices of $f$ and $K$ such that $f_{n,\theta} \in \mathcal F_{m,M}$ for every $\theta$.
The following lemma gives a lower bound for the maximum risk over the parameter set $\{0, 1\}^r$, in an abstract form, applicable to the problem of estimating an arbitrary quantity $\psi(\theta)$ belonging to a metric space (with metric $d$). Let $H(\theta, \theta') = \sum_{j=1}^{r} |\theta_j - \theta_j'|$ be the Hamming distance on $\{0, 1\}^r$, which counts the number of positions at which $\theta$ and $\theta'$ differ. For two probability measures $P$ and $Q$ with densities $p$ and $q$, write $\|P \wedge Q\|$ for $\int p \wedge q\,d\mu$.
24.3 Lemma (Assouad). For any estimator $T$ based on an observation in the experiment $(P_\theta: \theta \in \{0, 1\}^r)$, and any $p > 0$,
$$\max_\theta 2^p\, E_\theta\, d^p\big(T, \psi(\theta)\big) \ge \min_{H(\theta,\theta') \ge 1} \frac{d^p\big(\psi(\theta), \psi(\theta')\big)}{H(\theta, \theta')}\; \frac{r}{2}\, \min_{H(\theta,\theta') = 1} \|P_\theta \wedge P_{\theta'}\|.$$
Proof. Define an estimator $S$, taking its values in $\Theta = \{0, 1\}^r$, by letting $S = \theta$ if $\theta' \mapsto d(T, \psi(\theta'))$ is minimal over $\Theta$ at $\theta' = \theta$. (If the minimum is not unique, choose a point of minimum in any consistent way.) By the triangle inequality, for any $\theta$, $d(\psi(S), \psi(\theta)) \le d(\psi(S), T) + d(\psi(\theta), T)$, which is bounded by $2\,d(\psi(\theta), T)$, by the definition of $S$. If $d^p(\psi(\theta), \psi(\theta')) \ge \alpha H(\theta, \theta')$ for all pairs $\theta \ne \theta'$, then $2^p E_\theta\, d^p(T, \psi(\theta)) \ge E_\theta\, d^p(\psi(S), \psi(\theta)) \ge \alpha\, E_\theta H(S, \theta)$.
The maximum of this expression over $\Theta$ is bounded below by the average, which, apart
This is minimized over $S$ by choosing $S_j$ for each $j$ separately to minimize the $j$th term in the sum. The expression within brackets is the sum of the error probabilities of a test of $P_{0,j}$ versus $P_{1,j}$. Equivalently, it is equal to 1 minus the difference of power and level. In Lemma 14.30 this was seen to be at least $1 - \tfrac12\|P_{0,j} - P_{1,j}\| = \|P_{0,j} \wedge P_{1,j}\|$. Hence the preceding display is bounded below by
$$\frac12 \sum_{j=1}^{r} \|P_{0,j} \wedge P_{1,j}\|.$$
Because the minimum $\bar p_m \wedge \bar q_m$ of two averages of numbers is bounded below by the average $m^{-1}\sum p_i \wedge q_i$ of the minima, the same is true for the total variation norm of a minimum: $\|\bar P_m \wedge \bar Q_m\| \ge m^{-1}\sum \|P_i \wedge Q_i\|$. The $2^{r-1}$ terms $P_\theta$ and $P_{\theta'}$ in the averages $P_{0,j}$ and $P_{1,j}$ can be ordered and matched such that each pair $\theta$ and $\theta'$ differs only in the $j$th coordinate. Conclude that the preceding display is bounded below by $\frac12 \sum_{j=1}^{r} \min \|P_\theta \wedge P_{\theta'}\|$, in which the minimum is taken over all pairs $\theta$ and $\theta'$ that differ by exactly one coordinate. ∎
We wish to apply Assouad's lemma to the product measures resulting from the densities $f_{n,\theta}$. Then the following inequality, obtained in the proof of Lemma 14.31, is useful. It relates the total variation, affinity, and Hellinger distance of product measures:
$$\|P^n \wedge Q^n\| \ge \tfrac12\Big(\int \sqrt{p\,q}\,d\mu\Big)^{2n} = \tfrac12\Big(1 - \tfrac12 H^2(p, q)\Big)^{2n}.$$
24.4 Theorem. There exists a constant $D_{m,M}$ such that for any density estimator $\hat f_n$
$$\sup_{f\in\mathcal F_{m,M}} E_f \int \big(\hat f_n(x) - f(x)\big)^2\,dx \ge D_{m,M}\,\Big(\frac1n\Big)^{2m/(2m+1)}.$$
Proof. Because the functions f_{n,θ} are bounded away from zero and infinity, uniformly in θ, the squared Hellinger distance

$$\int \bigl(f_{n,\theta}^{1/2} - f_{n,\theta'}^{1/2}\bigr)^2\, dx = \int \frac{\bigl(f_{n,\theta} - f_{n,\theta'}\bigr)^2}{\bigl(f_{n,\theta}^{1/2} + f_{n,\theta'}^{1/2}\bigr)^2}\, dx$$

is up to constants equal to the squared L₂-distance between f_{n,θ} and f_{n,θ′}. Because the
24.4 Estimating a Unimodal Density 349
functions K((x − x_{n,j})/h_n) have disjoint supports, the latter is equal to

$$h_n^{2m} \sum_j |\theta_j - \theta_j'|^2 \int K^2\Bigl(\frac{x - x_{n,j}}{h_n}\Bigr)\, dx = h_n^{2m+1}\, H(\theta, \theta') \int K^2(x)\, dx.$$

This is of the order 1/n. Inserting this in the lower bound given by Assouad's lemma, with ψ(θ) = f_{n,θ} and d(ψ(θ), ψ(θ′)) the L₂-distance, we find up to a constant the lower bound h_n^{2m+1} (r_n/2) (1 − O(1/n))^{2n}. ∎
$$\max_f \prod_{i=1}^{n} f(X_i),$$

the maximum being taken over all nonincreasing probability densities on [0, ∞).
This optimization problem would not have a solution if f were only restricted by possessing a certain number of derivatives, because very high peaks at the observations would yield an arbitrarily large likelihood. However, under monotonicity there is a unique solution.

The solution must necessarily be a left-continuous step function, with steps only at the observations. Indeed, if for a given f the limit from the right at X_(i−1) is bigger than the limit from the left at X_(i), then we can redistribute the mass on the interval (X_(i−1), X_(i)] by raising the value f(X_(i)) and lowering f(X_(i−1)+), for instance by setting f equal to the constant value (X_(i) − X_(i−1))^{−1} ∫_{X_(i−1)}^{X_(i)} f(t) dt on the whole interval, resulting in an increase of the likelihood. By the same reasoning we see that the maximum likelihood estimator must be zero on (X_(n), ∞) (and (−∞, 0)). Thus, with f_i = f(X_(i)), finding the maximum likelihood estimator reduces to maximizing ∏_{i=1}^n f_i under the side conditions (with X_(0) = 0)

$$f_1 \ge f_2 \ge \cdots \ge f_n \ge 0, \qquad \sum_{i=1}^{n} f_i \bigl(X_{(i)} - X_{(i-1)}\bigr) = 1.$$
This problem has a nice graphical solution. The least concave majorant of the empirical distribution function 𝔽_n is defined as the smallest concave function F̂_n with F̂_n(x) ≥ 𝔽_n(x) for every x. This can be found by attaching a rope at the origin (0, 0) and winding this (from above) around the empirical distribution function 𝔽_n (Figure 24.3). Because F̂_n is
Figure 24.3. The empirical distribution and its concave majorant of a sample of size 75 from the
exponential distribution.
Figure 24.4. The derivative of the concave majorant of the empirical distribution and the true density
of a sample of size 75 from the exponential distribution.
concave, its derivative is nonincreasing. Figure 24.4 shows the derivative of the concave majorant in Figure 24.3.

24.5 Lemma. The maximum likelihood estimator f̂_n is the left derivative of the least concave majorant F̂_n of the empirical distribution 𝔽_n; that is, on each of the intervals (X_(i−1), X_(i)] it is equal to the slope of F̂_n on this interval.
Proof. In this proof, let f̂_n denote the left derivative of the least concave majorant. We shall show that this maximizes the likelihood. Because the maximum likelihood estimator
is necessarily constant on each interval (X_(i−1), X_(i)], we may restrict ourselves to densities f with this property. For such an f we can write log f = Σ a_i 1_[0, X_(i)) for the constants a_i = log f_i/f_{i+1} (with f_{n+1} = 1), and we obtain

For f = f̂_n this becomes an equality. To see this, let y₁ ≤ y₂ ≤ ⋯ be the points where F̂_n touches 𝔽_n. Then f̂_n is constant on each of the intervals (y_{i−1}, y_i], so that we can write log f̂_n = Σ b_i 1_[0, y_i), and obtain

Third, by the identifiability property of the Kullback–Leibler divergence (see Lemma 5.35), for any probability density f,

$$\int \log \hat f_n\, d\hat F_n \;\ge\; \int \log f\, d\hat F_n,$$

with strict inequality unless f̂_n = f. Combining the three displays, we see that f̂_n is the unique maximizer of f ↦ ∫ log f d𝔽_n. ∎
Maximizing the likelihood is an important motivation for taking the derivative of the concave majorant, but this operation also has independent value. Taking the concave majorant (or convex minorant) of the primitive function of an estimator and next differentiating the result may be viewed as a "smoothing" device, which is useful if the target function is known to be monotone. The estimator f̂_n can be viewed as the result of this procedure applied to the "naive" density estimator

$$\bar f_n(x) = \frac{1}{n\bigl(X_{(i)} - X_{(i-1)}\bigr)}, \qquad x \in \bigl(X_{(i-1)}, X_{(i)}\bigr].$$

This function is very rough and certainly not suitable as an estimator. Its primitive function is the polygon that linearly interpolates the extreme points of the empirical distribution function 𝔽_n, and its smallest concave majorant coincides with the one of 𝔽_n. Thus the derivative of the concave majorant of this polygon is exactly f̂_n.
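Lemma 24.5 turns the abstract characterization into an algorithm: form the vertices (X_(i), i/n) of 𝔽_n, compute the least concave majorant as an upper hull, and read off its slopes. A minimal Python sketch (the function name and interface are mine, not the book's):

```python
import numpy as np

def grenander(x):
    """Left derivative of the least concave majorant of the empirical cdf.

    Returns (knots, slopes): the estimate equals slopes[j] on (knots[j], knots[j+1]]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Vertices of the empirical cdf: (0, 0), (X_(1), 1/n), ..., (X_(n), 1).
    px = np.concatenate(([0.0], x))
    py = np.arange(n + 1) / n
    # Graham-scan-style stack for the upper (concave) hull.
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # Drop b if it lies on or below the chord from a to i.
            if (px[b] - px[a]) * (py[i] - py[a]) - (py[b] - py[a]) * (px[i] - px[a]) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    knots = px[hull]
    slopes = np.diff(py[hull]) / np.diff(knots)
    return knots, slopes

# Setting of Figures 24.3-24.4: 75 observations from the exponential distribution.
rng = np.random.default_rng(0)
knots, slopes = grenander(rng.exponential(size=75))
```

The slopes come out nonincreasing, and the resulting step function integrates to one, as a density estimator must.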
Consider the rate of convergence of the maximum likelihood estimator. Is the assumption of monotonicity sufficient to obtain a reasonable performance? The answer is affirmative if a rate of convergence of n^{1/3} is considered reasonable. This rate is slower than the rate n^{m/(2m+1)} of a kernel estimator if m > 1 derivatives exist and is comparable to this rate given one bounded derivative (even though we have not established a rate under m = 1). The rate of convergence n^{1/3} can be shown to be best possible if only monotonicity is assumed. It is achieved by the maximum likelihood estimator.
24.6 Theorem. If the observations are sampled from a compactly supported, bounded, monotone density f, then h(f̂_n, f) = O_P(n^{−1/3}).
Therefore, we can obtain the rate of convergence of f̂_n by an application of Theorem 5.52 or 5.55 with m_f = √(2f/(f + f₀)). Because (m_f − m_{f₀})(f₀ − f) ≤ 0 for every f and f₀, it follows that F₀(m_f − m_{f₀}) ≤ F(m_f − m_{f₀}), and hence

$$F_0(m_f - m_{f_0}) \le \tfrac{1}{2}(F_0 + F)(m_f - m_{f_0}) = -\tfrac{1}{2} h^2\bigl(f, \tfrac{1}{2} f + \tfrac{1}{2} f_0\bigr) \lesssim -h^2(f, f_0),$$

in which the last inequality is elementary calculus. Thus the first condition of Theorem 5.52 is satisfied relative to the Hellinger distance h, with α = 2.
The map f ↦ m_f is increasing. Therefore, it turns brackets [f₁, f₂] for the functions x ↦ f(x) into brackets [m_{f₁}, m_{f₂}] for the functions x ↦ m_f(x). The squared L₂(F₀)-size of these brackets satisfies

It follows that the L₂(F₀)-bracketing numbers of the class of functions m_f can be bounded by the h-bracketing numbers of the functions f. The latter are the L₂(λ)-bracketing numbers of the functions √f, which are monotone and bounded by assumption. In view of Example 19.11,

$$\log N_{[\,]}\bigl(2\varepsilon, \{m_f : f \in \mathcal{F}\}, L_2(F_0)\bigr) \le \log N_{[\,]}\bigl(\varepsilon, \sqrt{\mathcal{F}}, L_2(\lambda)\bigr) \lesssim \frac{1}{\varepsilon}.$$
Because the functions m_f are uniformly bounded, the maximal inequality Lemma 19.36 gives, with J(δ) = ∫₀^δ √(1/ε) dε = 2√δ,

$$\operatorname{E}_{f_0} \sup_{h(f, f_0) < \delta} \bigl|\mathbb{G}_n(m_f - m_{f_0})\bigr| \lesssim \sqrt{\delta}\Bigl(1 + \frac{\sqrt{\delta}}{\delta^2 \sqrt{n}}\Bigr).$$

Therefore, Theorem 5.55 applies with φ_n(δ) equal to the right side, and the Hellinger distance, and we conclude that h(f̂_n, f₀) = O_P(n^{−1/3}). ∎
Figure 24.5. If f̂_n(x) ≤ a, then a line of slope a moved down vertically from +∞ first hits 𝔽_n to the left of x. The point where the line hits is the point at which 𝔽_n is farthest above the line of slope a through the origin.
The most striking known results about estimating a monotone density concern limit distributions of the maximum likelihood estimator, for instance at a point.

24.7 Theorem. If f is differentiable at x > 0 with derivative f′(x) < 0, then, with {Z(h): h ∈ ℝ} a standard Brownian motion process (two-sided with Z(0) = 0),

$$n^{1/3}\bigl(\hat f_n(x) - f(x)\bigr) \rightsquigarrow \bigl|4 f(x) f'(x)\bigr|^{1/3} \operatorname*{argmax}_h \bigl\{Z(h) - h^2\bigr\},$$

in which the largest value is chosen when multiple maximizers exist. The suggestive notation f̂_n^{−1}, for the function a ↦ f̂_n^{−1}(a) equal to the largest point of maximum of s ↦ 𝔽_n(s) − as, is justified, as this function is the inverse of the maximum likelihood estimator f̂_n in that f̂_n(x) ≤ a if and only if f̂_n^{−1}(a) ≤ x, for every x and a. This is explained in Figure 24.5.

Proof. We first derive the limit distribution of f̂_n^{−1}. Let δ_n = n^{−1/3}. By the change of variable s ↦ x + hδ_n in the definition of f̂_n^{−1}, we have

Because the location of a maximum does not change by a vertical shift of the whole function, we can drop the term f(x)x in the right side, and we may add a term 𝔽_n(x). For the same
reason we may also multiply the process in the right side by n^{2/3}. Thus the preceding display is equal to the point of maximum ĥ_n of the process

The first term is the local empirical process studied in Example 19.29 and converges in distribution to the process h ↦ √f(x) Z(h), for Z a standard Brownian motion process, in ℓ^∞(K), for every compact interval K. The second term is a deterministic "drift" process and converges on compacta to h ↦ ½f′(x)h². This suggests that
This argument remains valid if we replace x by x_n = x − δ_n b throughout, where the limit is the same for every b ∈ ℝ.

We can write the limit in a more attractive form by using the fact that the processes h ↦ Z(ah) and h ↦ √a Z(h) are equal in distribution for every a > 0. First, apply the change of variables h ↦ ah, next pull a out of Z(ah), then divide the process by √(f(x)a), and finally choose a such that the quadratic term reduces to −h², that is, √(f(x)a) = −½f′(x)a². Then we obtain, for every b ∈ ℝ,
(In Theorem 5.52 the function m_δ can be allowed to depend on n, as is clear from its generalization, Theorem 5.55.) By its definition, ĝ_n = δ_n ĥ_n maximizes g ↦ 𝔽_n m_g, whence we wish to show that ĝ_n = O_P(δ_n). By Example 19.6 the bracketing numbers of the class of functions {1_[0, x_n+g] − 1_[0, x_n]: |g| < δ} are of the order δ/ε²; the envelope function |1_[0, x_n+δ] − 1_[0, x_n]| has L₂(F)-norm of the order √(f(x)δ). By Corollary 19.35,
By the concavity of F, the function g ↦ F(x_n + g) − F(x_n) − f(x_n)g is nonpositive and nonincreasing as g moves away from 0 in either direction (draw a picture). Because f′(x_n) → f′(x) < 0, there exists a constant C such that, for sufficiently large n,

If we knew already that ĝ_n →_P 0, then Theorem 5.52, applied with α = 2 and β = ½, would yield that ĝ_n = O_P(δ_n).

The consistency of ĝ_n can be shown by a direct argument. By the Glivenko–Cantelli theorem, for every ε > 0,

Because the right side is strictly smaller than 0 = 𝔽_n m_0, the maximizer ĝ_n must be contained in [−ε, ε] eventually. ∎
Results on density estimators at a point are perhaps not of greatest interest, because it is the overall shape of a density that counts. Hence it is interesting that the preceding theorem is also true in an L₁-sense, in that

This is true for every strictly decreasing, compactly supported, twice continuously differentiable true density f. For boundary cases, such as the uniform distribution, the behavior of f̂_n is very different. Note that the right side of the preceding display is degenerate. This is explained by the fact that the random variables n^{1/3}(f̂_n(x) − f(x)) for different values of x are asymptotically independent, because they depend only on the observations X_i very close to x, so that the integral aggregates a large number of approximately independent variables. It is also known that n^{1/6} times the difference between the left side and the right side converges in distribution to a normal distribution with mean zero and variance not depending on f. For uniformly distributed observations, the estimator f̂_n(x) remains dependent on all n observations, even asymptotically, and attains a √n-rate of convergence (see [62]).
We define a density f on the real line to be unimodal if there exists a number M_f such that f is nondecreasing on the interval (−∞, M_f] and nonincreasing on [M_f, ∞). The mode M_f need not be unique. Suppose that we observe a random sample from a unimodal density.

If the true mode M_f is known a priori, then a natural extension of the preceding discussion is to estimate the distribution function F of the observations by the distribution function F̂_n that is the least concave majorant of 𝔽_n on the interval [M_f, ∞) and the greatest convex minorant on (−∞, M_f]. Next we estimate f by the derivative f̂_n of F̂_n. Provided that none of the observations takes the value M_f, this estimator maximizes the likelihood, as can be shown by arguments as before. The limit results on monotone densities can also be extended to the present case. In particular, because the key in the proof of Theorem 24.7 is the characterization of f̂_n as the derivative of the concave majorant of 𝔽_n, this theorem remains true in the unimodal case, with the same limit distribution.
If the mode is not known a priori, then the maximum likelihood estimator does not exist: The likelihood can be maximized to infinity by placing an arbitrarily large mode at some fixed observation. It has been proposed to remedy this problem by restricting the likelihood to densities that have a modal interval of a given length (in which f must be constant and maximal). Alternatively, we could estimate the mode by an independent method and next apply the procedure for a known mode. Both of these possibilities break down unless f possesses some additional properties. A third possibility is to try every possible value M as a mode, calculate the estimator f̂_n^M for the known mode M, and select the best-fitting one. Here "best" could be operationalized as (nearly) minimizing the Kolmogorov–Smirnov distance ‖F̂_n^M − 𝔽_n‖_∞. It can be shown (see [13]) that this procedure renders the effect of the mode being unknown asymptotically negligible, in that
Notes
The literature on nonparametric density estimation, or "smoothing," is large, and there is an equally large literature concerning the parallel problem of nonparametric regression. Next to kernel estimation, popular methods are based on classical series approximations, spline functions, and, most recently, wavelet approximation. Besides different methods, a good deal is known concerning other loss functions, for instance L₁-loss, and automatic methods to choose a bandwidth. More recently, there has been a revived interest in obtaining exact constants in minimax bounds, rather than just rates of convergence. See, for instance, [14], [15], [36], [121], [135], [137], and [148] for introductions and further references. The kernel estimator is often named after its pioneers in the 1960s, Parzen and Rosenblatt, and was originally developed for smoothing the periodogram in spectral density estimation.
A lower bound for the maximum risk over Hölder classes for estimating a density at a single point was obtained in [46]. The lower bound for the L₂-risk is more recent. Birgé [12] gives a systematic study of upper and lower bounds and their relationship to the metric entropy of the model. An alternative for Assouad's lemma is Fano's lemma, which uses the Kullback–Leibler distance and can be found in, for example, [80].

The maximum likelihood estimator for a monotone density is often called the Grenander estimator, after the author who first characterized it in 1956. The very short proof of Lemma 24.5 is taken from [64]. The limit distribution of the Grenander estimator at a point was first obtained by Prakasa Rao in 1969; see [121]. Groeneboom [63] gives a characterization of the limit distribution and other interesting related results.
PROBLEMS
1. Show, informally, that under sufficient regularity conditions

2. Let X₁, ..., X_n be a random sample from the normal distribution with variance 1. Calculate the mean square error of the estimator φ(x − X̄_n) of the common density.

3. Using the argument of section 14.5 and a submodel as in section 24.3, but with r_n = 1, show that the best rate for estimating a density at a fixed point is also n^{−m/(2m+1)}.

4. Using the argument of section 14.5, show that the rate of convergence n^{1/3} of the maximum likelihood estimator for a monotone density is best possible.

5. (Marshall's lemma.) Suppose that F is concave on [0, ∞) with F(0) = 0. Show that the least concave majorant F̂_n of 𝔽_n satisfies the inequality ‖F̂_n − F‖_∞ ≤ ‖𝔽_n − F‖_∞. What does this imply about the limiting behavior of F̂_n?
25
Semiparametric Models
25.1 Introduction
Semiparametric models are statistical models in which the parameter is not a Euclidean vector but ranges over an "infinite-dimensional" parameter set. A different name is "model with a large parameter space." In the situation in which the observations consist of a random sample from a common distribution P, the model is simply the set 𝒫 of all possible values of P: a collection of probability measures on the sample space. The simplest type of infinite-dimensional model is the nonparametric model, in which we observe a random sample from a completely unknown distribution. Then 𝒫 is the collection of all probability measures on the sample space, and, as we shall see and as is intuitively clear, the empirical distribution is an asymptotically efficient estimator for the underlying distribution. More interesting are the intermediate models, which are not "nicely" parametrized by a Euclidean parameter, as are the standard classical models, but do restrict the distribution in an important way. Such models are often parametrized by infinite-dimensional parameters, such as distribution functions or densities, that express the structure under study. Many aspects of these parameters are estimable by the same order of accuracy as classical parameters, and efficient estimators are asymptotically normal. In particular, the model may have a natural parametrization (θ, η) ↦ P_{θ,η}, where θ is a Euclidean parameter and η runs through a nonparametric class of distributions, or some other infinite-dimensional set. This gives a semiparametric model in the strict sense, in which we aim at estimating θ and consider η as a nuisance parameter. More generally, we focus on estimating the value ψ(P) of some function ψ: 𝒫 → ℝ^k on the model.

In this chapter we extend the theory of asymptotic efficiency, as developed in Chapters 8 and 15, from parametric to semiparametric models and discuss some methods of estimation and testing. Although the efficiency theory (lower bounds) is fairly complete, there are still important holes in the estimation theory. In particular, the extent to which the lower bounds are sharp is unclear. We limit ourselves to parameters that are √n-estimable, although in most semiparametric models there are many "irregular" parameters of interest that are outside the scope of "asymptotically normal" theory. Semiparametric testing theory has little more to offer than the comforting conclusion that tests based on efficient estimators are efficient. Thus, we shall be brief about it.
We conclude this introduction with a list of examples that shows the scope of semiparametric theory. In this description, X denotes a typical observation. Random vectors Y, Z, e, and W are used to describe the model but are not necessarily observed. The parameters θ and ν are always Euclidean.
25.1 Example (Regression). Let Z and e be independent random vectors and suppose that Y = μ_θ(Z) + σ_θ(Z)e for functions μ_θ and σ_θ that are known up to θ. The observation is the pair X = (Y, Z). If the distribution of e is known to belong to a certain parametric family, such as the family of N(0, σ²)-distributions, and the independent variables Z are modeled as constants, then this is just a classical regression model, allowing for heteroscedasticity. Semiparametric versions are obtained by letting the distribution of e range over all distributions on the real line with mean zero, or, alternatively, over all distributions that are symmetric about zero. □
25.2 Example (Projection pursuit regression). Let Z and e be independent random vectors and let Y = η(θᵀZ) + e for a function η ranging over a set of (smooth) functions, and e having an N(0, σ²)-distribution. In this model θ and η are confounded, but the direction of θ is estimable up to its sign. This type of regression model is also known as a single-index model and is intermediate between the classical regression model in which η is known and the nonparametric regression model Y = η(Z) + e with η an unknown smooth function. An extension is to let the error distribution range over an infinite-dimensional set as well. □
25.3 Example (Logistic regression). Given a vector Z, let the random variable Y take the value 1 with probability 1/(1 + e^{−r(Z)}) and be 0 otherwise. Let Z = (Z₁, Z₂), and let the function r be of the form r(z₁, z₂) = η(z₁) + θᵀz₂. Observed is the pair X = (Y, Z). This is a semiparametric version of the logistic regression model, in which the response is allowed to be nonlinear in part of the covariate. □
25.6 Example (Transformation regression). Suppose that X = (Y, Z), where the random vectors Y and Z are known to satisfy η(Y) = θᵀZ + e for an unknown map η and independent random vectors e and Z with known or parametrically specified distributions. The transformation η ranges over an infinite-dimensional set, for instance the set of all monotone functions. □
25.7 Example (Cox). The observation is a pair X = (T, Z) of a "survival time" T and a covariate Z. The distribution of Z is unknown, and the conditional hazard function of T given Z is of the form e^{θᵀz} λ(t) for λ a completely unknown hazard function. The parameter θ has an interesting interpretation in terms of a ratio of hazards. For instance, if the ith coordinate Z_i of the covariate is a 0-1 variable, then e^{θ_i} can be interpreted as the ratio of the hazards of two individuals whose covariates are Z_i = 1 and Z_i = 0, respectively, but who are identical otherwise. □
25.9 Example (Frailty). Two survival times Y₁ and Y₂ are conditionally independent given variables (Z, W), with hazard function of the form W e^{θᵀz} λ(y). The random variable W is not observed, possesses a gamma(ν, ν)-distribution, and is independent of the variable Z, which possesses a completely unknown distribution. The observation is X = (Y₁, Y₂, Z). The variable W can be considered an unobserved regression variable in a Cox model. □
25.10 Example (Random censoring). A "time of death" T is observed only if death occurs before the time C of a "censoring event" that is independent of T; otherwise C is observed. A typical observation X is a pair of a survival time and a 0-1 variable and is distributed as (T ∧ C, 1{T ≤ C}). The distributions of T and C may be completely unknown. □

25.11 Example (Interval censoring). A "death" that occurs at time T is only observed to have taken place or not at a known "check-up time" C. The observation is X = (C, 1{T ≤ C}), and T and C are assumed independent with completely unknown or partially specified distributions. □
and norm ‖g‖ = √(Pg²).

Given a Hilbert space ℍ, the projection lemma asserts that for every g ∈ ℍ and convex, closed subset C ⊂ ℍ, there exists a unique element Πg ∈ C that minimizes c ↦ ‖g − c‖ over C. If C is a closed, linear subspace, then the projection Πg can be characterized by the orthogonality relationship

$$\langle g - \Pi g, c\rangle = 0, \qquad \text{every } c \in C.$$

The proof is the same as in Chapter 11. If C₁ ⊂ C₂ are two nested, closed subspaces, then the projection onto C₁ can be found by first projecting onto C₂ and next onto C₁. Two subsets C₁ and C₂ are orthogonal, notation C₁ ⊥ C₂, if ⟨c₁, c₂⟩ = 0 for every pair of c_i ∈ C_i. The projection onto the sum of two orthogonal closed subspaces is the sum of the projections. The orthocomplement C^⊥ of a set C is the set of all g ⊥ C.
A Banach space is a complete, normed space. The dual space 𝔹* of a Banach space 𝔹 is the set of all continuous, linear maps b*: 𝔹 → ℝ, equivalently, all linear maps such that |b*(b)| ≤ ‖b*‖‖b‖ for every b ∈ 𝔹 and some number ‖b*‖. The smallest number with this property is denoted by ‖b*‖ and defines a norm on the dual space. According to the Riesz representation theorem for Hilbert spaces, the dual of a Hilbert space ℍ consists of all maps

$$h \mapsto \langle h, h^*\rangle,$$

where h* ranges over ℍ. Thus, in this case the dual space ℍ* can be identified with the space ℍ itself. This identification is an isometry by the Cauchy–Schwarz inequality |⟨h, h*⟩| ≤ ‖h‖‖h*‖.
A linear map A: 𝔹₁ → 𝔹₂ from one Banach space into another is continuous if and only if ‖Ab₁‖₂ ≤ ‖A‖‖b₁‖₁ for every b₁ ∈ 𝔹₁ and some number ‖A‖. The smallest number with this property is denoted by ‖A‖ and defines a norm on the set of all continuous, linear maps, also called operators, from 𝔹₁ into 𝔹₂. Continuous, linear operators are also called "bounded," even though they are only bounded on bounded sets. To every continuous, linear operator A: 𝔹₁ → 𝔹₂ there exists an adjoint map A*: 𝔹₂* → 𝔹₁* defined by (A*b₂*)b₁ = b₂*Ab₁. This is a continuous, linear operator of the same norm ‖A*‖ = ‖A‖. For Hilbert spaces the dual space can be identified with the original space, and then the adjoint of A: ℍ₁ → ℍ₂ is a map A*: ℍ₂ → ℍ₁. It is characterized by the property

$$\langle A h_1, h_2\rangle_2 = \langle h_1, A^* h_2\rangle_1, \qquad \text{every } h_1 \in \mathbb{H}_1,\ h_2 \in \mathbb{H}_2.$$

An operator between Euclidean spaces can be identified with a matrix, and its adjoint with the transpose. The adjoint of a restriction A₀: ℍ₁,₀ ⊂ ℍ₁ → ℍ₂ of A is the composition Π ∘ A* of the projection Π: ℍ₁ → ℍ₁,₀ and the adjoint of the original A.
The range R(A) = {Ab₁: b₁ ∈ 𝔹₁} of a continuous, linear operator is not necessarily closed. By the "bounded inverse theorem" the range of a one-to-one, continuous, linear operator between Banach spaces is closed if and only if its inverse is continuous. In contrast, the kernel N(A) = {b₁: Ab₁ = 0} of a continuous, linear operator is always closed. For an operator between Hilbert spaces the relationship R(A)^⊥ = N(A*) follows readily from the characterization of the adjoint. The range of A is closed if and only if the range of A* is closed if and only if the range of A*A is closed. In that case R(A*) = R(A*A).

If A*A: ℍ₁ → ℍ₁ is continuously invertible (i.e., is one-to-one and onto with a continuous inverse), then A(A*A)^{−1}A*: ℍ₂ → R(A) is the orthogonal projection onto the range of A, as follows easily by checking the orthogonality relationship.
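For operators between Euclidean spaces the last statement can be verified by direct computation. A small numpy sketch (the dimensions are arbitrary):

```python
import numpy as np

# A(A*A)^{-1}A* as the orthogonal projection onto range(A), in the matrix case.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))            # columns span a 3-dim subspace of R^6
P = A @ np.linalg.inv(A.T @ A) @ A.T       # candidate projection onto range(A)
g = rng.standard_normal(6)
Pg = P @ g
# Orthogonality relationship: g - Pg is orthogonal to the range of A.
assert np.allclose(A.T @ (g - Pg), 0)
# A projection is idempotent and self-adjoint.
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```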
$$\int \biggl[ \frac{dP_t^{1/2} - dP^{1/2}}{t} - \frac{1}{2}\, g\, dP^{1/2} \biggr]^2 \to 0.\dagger \qquad (25.13)$$
In other words, the parametric submodel {P_t: 0 < t < ε} is differentiable in quadratic mean at t = 0 with score function g. Letting t ↦ P_t range over a collection of submodels, we obtain a collection of score functions, which we call a tangent set of the model 𝒫 at P and denote by Ṗ_P. Because Pg² is automatically finite, the tangent set can be identified with a subset of L₂(P), up to equivalence classes. The tangent set is often a linear space, in which case we speak of a tangent space.

Geometrically, we may visualize the model 𝒫, or rather the corresponding set of "square roots of measures" dP^{1/2}, as a subset of the unit ball of L₂(P), and Ṗ_P, or rather the set of all objects ½ g dP^{1/2}, as its tangent set.

Usually, we construct the submodels t ↦ P_t such that, for every x,
† If P and every one of the measures P_t possess densities p and p_t with respect to a measure μ, then the expressions dP and dP_t can be replaced by p and p_t, and the integral can be understood relative to μ (add dμ on the right). We use the notations dP_t and dP, because some models 𝒫 of interest are not dominated, and the choice of μ is irrelevant. However, the model could be taken dominated for simplicity, and then dP_t and dP are just the densities of P_t and P.
25.3 Tangent Spaces and Information 363
However, the differentiability (25.13) is the correct definition for defining information, because it ensures a type of local asymptotic normality. The following lemma is proved in the same way as Theorem 7.2.

25.14 Lemma. If the path t ↦ P_t in 𝒫 satisfies (25.13), then Pg = 0, Pg² < ∞, and

$$\log \prod_{i=1}^{n} \frac{dP_{1/\sqrt{n}}}{dP}(X_i) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} g(X_i) - \frac{1}{2} P g^2 + o_P(1).$$
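For a concrete path the expansion of Lemma 25.14 can be seen exactly. A small numerical sketch (the Gaussian location path is my illustration, not part of the lemma): take P = N(0, 1) and P_t = N(t, 1), so that g(x) = x and Pg² = 1; for this path the log likelihood ratio equals the right side without any remainder term.

```python
import numpy as np

# Path P_t = N(t, 1) through P = N(0, 1); score g(x) = x, Pg^2 = 1.
rng = np.random.default_rng(2)
n = 100_000
x = rng.standard_normal(n)          # sample from P
t = 1 / np.sqrt(n)
# Exact log likelihood ratio: sum of log dP_t/dP(X_i).
loglik = np.sum(-0.5 * (x - t) ** 2 + 0.5 * x ** 2)
# Expansion of Lemma 25.14: n^{-1/2} sum g(X_i) - Pg^2 / 2.
expansion = x.sum() / np.sqrt(n) - 0.5
assert abs(loglik - expansion) < 1e-8
```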
For defining the information for estimating ψ(P), only those submodels t ↦ P_t along which the parameter t ↦ ψ(P_t) is differentiable are of interest. Thus, we consider only submodels t ↦ P_t such that t ↦ ψ(P_t) is differentiable at t = 0. More precisely, we define ψ: 𝒫 → ℝ^k to be differentiable at P relative to a given tangent set Ṗ_P if there exists a continuous linear map ψ̇_P: L₂(P) → ℝ^k such that for every g ∈ Ṗ_P and a submodel t ↦ P_t with score function g,

$$\frac{\psi(P_t) - \psi(P)}{t} \to \dot\psi_P g.$$

This requires that the derivative of the map t ↦ ψ(P_t) exists in the ordinary sense, and also that it has a special representation. (The map ψ̇_P is much like a Hadamard derivative of ψ viewed as a map on the space of "square roots of measures.") Our definition is also relative to the submodels t ↦ P_t, but we speak of "relative to Ṗ_P" for simplicity.
By the Riesz representation theorem for Hilbert spaces, the map ψ̇_P can always be written in the form of an inner product with a fixed vector-valued, measurable function ψ̃_P: 𝒳 → ℝ^k,

$$\dot\psi_P g = \langle \tilde\psi_P, g \rangle_P = \int \tilde\psi_P\, g\, dP.$$

Here the function ψ̃_P is not uniquely defined by the functional ψ and the model 𝒫, because only inner products of ψ̃_P with elements of the tangent set are specified, and the tangent set does not span all of L₂(P). However, it is always possible to find a candidate ψ̃_P whose coordinate functions are contained in lin Ṗ_P, the closure of the linear span of the tangent set. This function is unique and is called the efficient influence function. It can be found as the projection of any other "influence function" onto the closed linear span of the tangent set.
In the preceding set-up the tangent sets Ṗ_P are made to depend both on the model 𝒫 and the functional ψ. We do not always want to use the "maximal tangent set," which is the set of all score functions of differentiable submodels t ↦ P_t, because the parameter ψ may not be differentiable relative to it. We consider every subset of a tangent set a tangent set itself.

The maximal tangent set is a cone: If g ∈ Ṗ_P and a ≥ 0, then ag ∈ Ṗ_P, because the path t ↦ P_{at} has score function ag when t ↦ P_t has score function g. It is rarely a loss of generality to assume that the tangent set we work with is a cone as well.
25.15 Example (Parametric model). Consider a parametric model with parameter θ ranging over an open subset Θ of ℝ^m, given by densities p_θ with respect to some measure μ. Suppose that there exists a vector-valued measurable map ℓ̇_θ such that, as h → 0,

$$\int \Bigl[ p_{\theta+h}^{1/2} - p_\theta^{1/2} - \tfrac{1}{2} h^T \dot\ell_\theta\, p_\theta^{1/2} \Bigr]^2\, d\mu = o\bigl(\|h\|^2\bigr).$$

Then a tangent set at P_θ is given by the linear space {hᵀℓ̇_θ: h ∈ ℝ^m} spanned by the score functions for the coordinates of the parameter θ.

If the Fisher information matrix I_θ = P_θ ℓ̇_θ ℓ̇_θᵀ is invertible, then every map χ: Θ → ℝ^k that is differentiable in the ordinary sense as a map between Euclidean spaces is differentiable as a map ψ(P_θ) = χ(θ) on the model relative to the given tangent space. This follows because the submodel t ↦ P_{θ+th} has score hᵀℓ̇_θ and

$$\frac{\partial}{\partial t}\Big|_{t=0} \chi(\theta + th) = \dot\chi_\theta h = P_\theta\bigl[\bigl(\dot\chi_\theta I_\theta^{-1} \dot\ell_\theta\bigr)\bigl(h^T \dot\ell_\theta\bigr)\bigr].$$

This equation shows that the function ψ̃_{P_θ} = χ̇_θ I_θ^{−1} ℓ̇_θ is the efficient influence function. In view of the results of Chapter 8, asymptotically efficient estimator sequences for χ(θ) are asymptotically linear in this function, which justifies the name "efficient influence function." □
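As a quick numerical illustration of the formula ψ̃_{P_θ} = χ̇_θ I_θ^{−1} ℓ̇_θ (the specific model and parameter are my choice, not from the text): in the N(θ, 1) model with χ(θ) = θ², the score is x − θ, I_θ = 1, and the efficient influence function 2θ(x − θ) has mean zero and second moment equal to the Cramér–Rao bound χ̇_θ I_θ^{−1} χ̇_θ = 4θ².

```python
import numpy as np

# Model N(theta, 1), functional chi(theta) = theta^2 (illustrative choice).
theta = 1.5
score = lambda x: x - theta                 # l-dot for the normal location model
I_theta = 1.0                               # Fisher information of N(theta, 1)
chi_dot = 2.0 * theta
eff_inf = lambda x: chi_dot / I_theta * score(x)   # chi-dot I^{-1} l-dot

# Monte Carlo sanity checks: mean zero, second moment = Cramer-Rao bound.
x = np.random.default_rng(1).normal(theta, 1.0, 200_000)
assert abs(eff_inf(x).mean()) < 0.05
assert abs((eff_inf(x) ** 2).mean() - 4 * theta ** 2) < 0.2
```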
By a direct calculation or by using Lemma 7.6, we see that both models also have score function g at t = 0 in the L₂-sense (25.13). For an unbounded function g, these submodels are not necessarily well defined. However, the models have the common structure p_t(x) = c(t) k(t g(x)) p₀(x) for a nonnegative function k with k(0) = k′(0) = 1. The function k(x) = 2(1 + e^{−2x})^{−1} is bounded and can be used with any g. □
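The stated properties of k are immediate to check; a short numerical confirmation (the finite-difference step is arbitrary):

```python
import numpy as np

# k(x) = 2(1 + e^{-2x})^{-1}: bounded between 0 and 2, with k(0) = k'(0) = 1.
k = lambda x: 2.0 / (1.0 + np.exp(-2.0 * x))
eps = 1e-6
assert abs(k(0.0) - 1.0) < 1e-12                              # k(0) = 1
assert abs((k(eps) - k(-eps)) / (2 * eps) - 1.0) < 1e-6       # k'(0) = 1
assert 0.0 < k(-50.0) and k(50.0) < 2.0 + 1e-12               # boundedness
```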
25.17 Example (Cox model).\The density of an observation in the Cox model takes the
form \
(t, z) ~--+ e-eor'A(t) A(t) eerz pz(z).
Differentiating the logarithm of this expression with respect to e gives the score function
fore,
z - ze 9r z A(t).
We can also insert appropriate parametric models $s \mapsto \Lambda_s$ and differentiate with respect to $s$. If $a$ is the derivative of $\log\lambda_s$ at $s = 0$, then the corresponding score for the model for the observation is
$$a(t) - e^{\theta^T z}\int_0^t a\,d\Lambda.$$
Finally, scores for the density $p_Z$ are functions $b(z)$. The tangent space contains the linear span of all these functions. Note that the scores for $\Lambda$ can be found as an "operator" working on functions $a$. □
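The score for $\theta$ can be checked by finite-differencing the log density; taking the baseline hazard $\lambda(t) = 1$ (so $\Lambda(t) = t$) and scalar $\theta$ and $z$ is an assumption made only for this sketch, and the factor $p_Z(z)$ is omitted because it does not involve $\theta$.

```python
import numpy as np

def log_density(theta, t, z):
    # log of exp(-exp(theta*z) * Lambda(t)) * lambda(t) * exp(theta*z),
    # with lambda(t) = 1 and Lambda(t) = t (illustrative baseline hazard)
    return -np.exp(theta * z) * t + theta * z

theta, t, z = 0.3, 1.5, 0.7
eps = 1e-6
fd_score = (log_density(theta + eps, t, z) - log_density(theta - eps, t, z)) / (2 * eps)
analytic = z - z * np.exp(theta * z) * t    # z - z exp(theta^T z) Lambda(t)
print(abs(fd_score - analytic))             # close to 0
```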
25.3 Tangent Spaces and Information 365
$$\frac{\phi'}{\phi}\bigl(q(y) - \theta^T z\bigr)\,a(y) + \frac{a'}{q'}(y),$$
where $a$ is the derivative of the path perturbing $q$. If the distributions of $e$ and $Z$ are (partly) unknown, then there are additional score functions corresponding to their distributions. Again scores take the form of an operator acting on a set of functions. □
To motivate the definition of information, assume for simplicity that the parameter $\psi(P)$ is one-dimensional. The Fisher information about $t$ in a submodel $t \mapsto P_t$ with score function $g$ at $t = 0$ is $Pg^2$. Thus, the "optimal asymptotic variance" for estimating the function $t \mapsto \psi(P_t)$, evaluated at $t = 0$, is the Cramér-Rao bound
$$\frac{\bigl(d\psi(P_t)/dt\,\big|_{t=0}\bigr)^2}{Pg^2} = \frac{\langle \tilde\psi_P, g\rangle_P^2}{\langle g, g\rangle_P}.$$
The supremum of this expression over all submodels, equivalently over all elements of the tangent set, is a lower bound for estimating $\psi(P)$ given the model $\mathcal P$, if the "true measure" is $P$. This supremum can be expressed in the norm of the efficient influence function $\tilde\psi_P$.
25.19 Lemma. Suppose that the functional $\psi : \mathcal P \mapsto \mathbb{R}$ is differentiable at $P$ relative to the tangent set $\dot{\mathcal P}_P$. Then
$$\sup_{g \in \operatorname{lin}\dot{\mathcal P}_P}\; \frac{\langle\tilde\psi_P, g\rangle_P^2}{\langle g, g\rangle_P} = P\tilde\psi_P^2.$$
Proof. This is a consequence of the Cauchy-Schwarz inequality $(P\tilde\psi_P g)^2 \le P\tilde\psi_P^2\, Pg^2$ and the fact that, by definition, the efficient influence function $\tilde\psi_P$ is contained in the closure of $\operatorname{lin}\dot{\mathcal P}_P$. •
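The equality of Lemma 25.19 can be illustrated numerically in a finite-dimensional toy version, where "functions" are vectors on a discrete carrier and the tangent set is a small linear span; the weights and tangent vectors below are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete carrier with weights p; <f, g>_P = sum(f * g * p)
p = np.array([0.2, 0.3, 0.1, 0.25, 0.15])
def inner(f, g):
    return np.sum(f * g * p)

# a two-dimensional "tangent space" of mean-zero scores (illustrative)
g1 = np.array([1., -1., 0., 2., -2.]); g1 = g1 - inner(g1, np.ones(5))
g2 = np.array([0., 1., -3., 1., 0.]);  g2 = g2 - inner(g2, np.ones(5))

# efficient influence function: an element of the closed linear span
psi = 0.7 * g1 - 1.3 * g2
bound = inner(psi, psi)                      # P psi^2

ratios = []
for _ in range(2000):
    a, b = rng.normal(size=2)
    g = a * g1 + b * g2
    ratios.append(inner(psi, g) ** 2 / inner(g, g))

# Cauchy-Schwarz: every ratio stays below P psi^2, with equality at g = psi
print(max(ratios) <= bound + 1e-12)
print(np.isclose(inner(psi, psi) ** 2 / inner(psi, psi), bound))
```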
Thus, the squared norm $P\tilde\psi_P^2$ of the efficient influence function plays the role of an "optimal asymptotic variance," just as does the expression $\dot\psi_\theta I_\theta^{-1}\dot\psi_\theta^T$ in Chapter 8. Similar considerations (take linear combinations) show that the "optimal asymptotic covariance" for estimating a higher-dimensional parameter $\psi : \mathcal P \mapsto \mathbb{R}^k$ is given by the covariance matrix $P\tilde\psi_P\tilde\psi_P^T$ of the efficient influence function.
In Chapter 8, we developed three ways to give a precise meaning to optimal asymptotic
covariance: the convolution theorem, the almost-everywhere convolution theorem, and
the minimax theorem. The almost-everywhere theorem uses the Lebesgue measure on the
Euclidean parameter set, and does not appear to have an easy parallel for semiparametric
models. On the other hand, the two other results can be generalized.
For every $g$ in a given tangent set $\dot{\mathcal P}_P$, write $P_{t,g}$ for a submodel with score function $g$ along which the function $\psi$ is differentiable. As usual, an estimator $T_n$ is a measurable function $T_n(X_1, \ldots, X_n)$ of the observations. An estimator sequence $T_n$ is called regular at $P$ for estimating $\psi(P)$ (relative to $\dot{\mathcal P}_P$) if there exists a probability measure $L$ such that
$$\sqrt n\bigl(T_n - \psi(P_{1/\sqrt n, g})\bigr) \rightsquigarrow L \quad\text{under } P^n_{1/\sqrt n, g}, \qquad \text{every } g \in \dot{\mathcal P}_P.$$
25.20 Theorem (Convolution). Let the function $\psi : \mathcal P \mapsto \mathbb{R}^k$ be differentiable at $P$ relative to the tangent cone $\dot{\mathcal P}_P$ with efficient influence function $\tilde\psi_P$. Then the asymptotic covariance matrix of every regular sequence of estimators is bounded below by $P\tilde\psi_P\tilde\psi_P^T$. Furthermore, if $\dot{\mathcal P}_P$ is a convex cone, then every limit distribution $L$ of a regular sequence of estimators can be written $L = N(0, P\tilde\psi_P\tilde\psi_P^T) * M$ for some probability distribution $M$.
25.21 Theorem (LAM). Let the function $\psi : \mathcal P \mapsto \mathbb{R}^k$ be differentiable at $P$ relative to the tangent cone $\dot{\mathcal P}_P$ with efficient influence function $\tilde\psi_P$. If $\dot{\mathcal P}_P$ is a convex cone, then, for any estimator sequence $\{T_n\}$ and subconvex function $\ell : \mathbb{R}^k \mapsto [0, \infty)$,
$$\sup_{I}\;\liminf_{n\to\infty}\;\sup_{g \in I}\; \mathrm{E}_{P_{1/\sqrt n, g}}\,\ell\Bigl(\sqrt n\bigl(T_n - \psi(P_{1/\sqrt n, g})\bigr)\Bigr) \ge \int \ell\, dN\bigl(0, P\tilde\psi_P\tilde\psi_P^T\bigr).$$
Here the first supremum is taken over all finite subsets $I$ of the tangent set.
Proofs. These results follow essentially by applying the corresponding theorems for parametric models to sufficiently rich finite-dimensional submodels. However, because we have defined the tangent set using one-dimensional submodels $t \mapsto P_{t,g}$, it is necessary to rework the proofs a little.
Assume first that the tangent set is a linear space, and fix an orthonormal base $g_P = (g_1, \ldots, g_m)^T$ of an arbitrary finite-dimensional subspace. For every $g \in \operatorname{lin} g_P$ select a submodel $t \mapsto P_{t,g}$ as used in the statement of the theorems. Each of the submodels $t \mapsto P_{t,g}$ is locally asymptotically normal at $t = 0$ by Lemma 25.14. Therefore, because the covariance matrix of $g_P$ is the identity matrix,
$$\bigl(P^n_{1/\sqrt n,\, h^T g_P} : h \in \mathbb{R}^m\bigr) \rightsquigarrow \bigl(N(h, I) : h \in \mathbb{R}^m\bigr)$$
in the sense of convergence of experiments. The function $\psi_n(h) = \psi(P_{1/\sqrt n,\, h^Tg_P})$ satisfies
$$\sqrt n\bigl(\psi_n(h) - \psi_n(0)\bigr) \to \langle \tilde\psi_P,\; h^Tg_P\rangle_P = (P\tilde\psi_P g_P^T)\,h =: Ah.$$
For this $(k \times m)$ matrix $A$, the function $Ag_P$ is the orthogonal projection of $\tilde\psi_P$ onto $\operatorname{lin} g_P$, and it has covariance matrix $AA^T$. Because $\tilde\psi_P$ is, by definition, contained in the closed linear span of the tangent set, we can choose $g_P$ such that $\tilde\psi_P$ is arbitrarily close to its projection and hence $AA^T$ is arbitrarily close to $P\tilde\psi_P\tilde\psi_P^T$.
Under the assumption of the convolution theorem, the limit distribution of the sequence $\sqrt n\bigl(T_n - \psi_n(h)\bigr)$ under $P^n_{1/\sqrt n,\, h^Tg_P}$ is the same for every $h \in \mathbb{R}^m$. By the asymptotic representation theorem, Proposition 7.10, there exists a randomized statistic $T$ in the limit experiment such that the distribution of $T - Ah$ under $h$ does not depend on $h$. By Proposition 8.4, the null distribution of $T$ contains a normal $N(0, AA^T)$-distribution as a convolution factor. The proof of the convolution theorem is complete upon letting $AA^T$ tend to $P\tilde\psi_P\tilde\psi_P^T$.
Under the assumption that the sequence $\sqrt n\bigl(T_n - \psi(P)\bigr)$ is tight, the minimax theorem is proved similarly, by first bounding the left side by the minimax risk relative to the submodel corresponding to $g_P$, and next applying Proposition 8.6. The tightness assumption can be dropped by a compactification argument (see, e.g., [139] or [146]).
If the tangent set is a convex cone but not a linear space, then the submodel constructed previously can only be used for $h$ ranging over a convex cone in $\mathbb{R}^m$. The argument can
remain the same, except that we need to replace Propositions 8.4 and 8.6 by stronger results that refer to convex cones. These extensions exist and can be proved by the same Bayesian argument, now choosing priors that flatten out inside the cone (see, e.g., [139]).
If the tangent set is a cone that is not convex, but the estimator sequence is regular, then we use the fact that the matching randomized estimator $T$ in the limit experiment satisfies $\mathrm{E}_h T = Ah + \mathrm{E}_0 T$ for every eligible $h$, that is, every $h$ such that $h^Tg_P \in \dot{\mathcal P}_P$. Because the tangent set is a cone, the latter set includes parameters $h = th_i$ for $t \ge 0$ and directions $h_i$ spanning $\mathbb{R}^m$. The estimator $T$ is unbiased for estimating $Ah + \mathrm{E}_0 T$ on this parameter set, whence the covariance matrix of $T$ is bounded below by $AA^T$, by the Cramér-Rao inequality. •
Both theorems have the interpretation that the matrix $P\tilde\psi_P\tilde\psi_P^T$ is an optimal asymptotic covariance matrix for estimating $\psi(P)$ given the model $\mathcal P$. We might wish that this could be formulated in a simpler fashion, but this is precluded by the problem of superefficiency, as is already the case for the parametric analogues, discussed in Chapter 8. That the notion of asymptotic efficiency used in the present interpretation should not be taken absolutely is shown by the shrinkage phenomena discussed in Section 8.8, but we use it in this chapter. We shall say that an estimator sequence is asymptotically efficient at $P$ if it is regular at $P$ with limit distribution $L = N(0, P\tilde\psi_P\tilde\psi_P^T)$.†
The efficient influence function $\tilde\psi_P$ plays the same role as the normalized score function $I_\theta^{-1}\dot\ell_\theta$ in parametric models. In particular, a sequence of estimators $T_n$ is asymptotically efficient at $P$ if
$$\sqrt n\bigl(T_n - \psi(P)\bigr) = \frac{1}{\sqrt n}\sum_{i=1}^n \tilde\psi_P(X_i) + o_P(1). \tag{25.22}$$
25.23 Lemma. Let the function $\psi : \mathcal P \mapsto \mathbb{R}^k$ be differentiable at $P$ relative to the tangent cone $\dot{\mathcal P}_P$ with efficient influence function $\tilde\psi_P$. A sequence of estimators $T_n$ is regular at $P$ with limiting distribution $N(0, P\tilde\psi_P\tilde\psi_P^T)$ if and only if it satisfies (25.22).
Proof. Because the submodels $t \mapsto P_{t,g}$ are locally asymptotically normal at $t = 0$, "if" follows with the help of Le Cam's third lemma, by the same arguments as for the analogous result for parametric models in Lemma 8.14.
To prove the necessity of (25.22), we adopt the notation of the proof of Theorem 25.20. The statistics $S_n = \psi(P) + n^{-1}\sum_{i=1}^n\tilde\psi_P(X_i)$ depend on $P$ but can be considered a true estimator sequence in the local subexperiments. The sequence $S_n$ trivially satisfies (25.22) and hence is another asymptotically efficient estimator sequence. We may assume for simplicity that the sequence $\sqrt n\bigl(S_n - \psi(P_{1/\sqrt n,g}),\; T_n - \psi(P_{1/\sqrt n,g})\bigr)$ converges under every local parameter $g$ in distribution. Otherwise, we argue along subsequences, which can be
† If the tangent set is not a linear space, then the situation becomes even more complicated. If the tangent set is a convex cone, then the minimax risk on the left side of Theorem 25.21 cannot fall below the normal risk on the right side, but there may be nonregular estimator sequences for which there is equality. If the tangent set is not convex, then the assertion of Theorem 25.21 may fail. Convex tangent cones arise frequently; fortunately, nonconvex tangent cones are rare.
selected with the help of Le Cam's third lemma. By Theorem 9.3, there exists a matching randomized estimator $(S, T) = (S, T)(X, U)$ in the normal limit experiment. By the efficiency of both sequences $S_n$ and $T_n$, the variables $S - Ah$ and $T - Ah$ are, under $h$, marginally normally distributed with mean zero and covariance matrix $P\tilde\psi_P\tilde\psi_P^T$. In particular, the expectations $\mathrm{E}_h S = \mathrm{E}_h T$ are identically equal to $Ah$. Differentiate with respect to $h$ at $h = 0$ to find that
$$\mathrm{Cov}_0(S, X) = A = \mathrm{Cov}_0(T, X).$$
It follows that the orthogonal projections of $S$ and $T$ onto the linear space spanned by the coordinates of $X$ are identical and given by $\Pi S = \Pi T = AX$, and hence
$$\mathrm{E}_0(S - T)(S - T)^T \le 2\bigl(\mathrm{Cov}_0\, S - AA^T\bigr) + 2\bigl(\mathrm{Cov}_0\, T - AA^T\bigr).$$
(The inequality means that the difference of the matrices on the right and the left is nonnegative-definite.) We have obtained this for a fixed orthonormal set $g_P = (g_1, \ldots, g_m)$.
If we choose $g_P$ such that $AA^T$ is arbitrarily close to $P\tilde\psi_P\tilde\psi_P^T$, then $\mathrm{Cov}_0\,\Pi T = AA^T = \mathrm{Cov}_0\,\Pi S$ is arbitrarily close to $\mathrm{Cov}_0\, T = P\tilde\psi_P\tilde\psi_P^T = \mathrm{Cov}_0\, S$, and then the right side of the preceding display is arbitrarily close to zero, whence $S - T = 0$ almost surely. The proof is complete on noting that $\sqrt n(S_n - T_n) \rightsquigarrow S - T$. •
25.4 Efficient Score Functions
As submodels, we use paths of the form $t \mapsto P_{\theta+ta,\,\eta_t}$, for given paths $t \mapsto \eta_t$ in the parameter set $H$. The score functions for such submodels (if they exist) typically have the form of a sum of "partial derivatives" with respect to $\theta$ and $\eta$. If $\dot\ell_{\theta,\eta}$ is the ordinary score function for $\theta$ in the model in which $\eta$ is fixed, then we expect
$$\frac{\partial}{\partial t}\Big|_{t=0}\log dP_{\theta+ta,\,\eta_t} = a^T\dot\ell_{\theta,\eta} + g.$$
The function $g$ has the interpretation of a score function for $\eta$ when $\theta$ is fixed, and runs through an infinite-dimensional set if we are concerned with a "true" semiparametric model. We refer to this set as the tangent set for $\eta$, and denote it by ${}_\eta\dot{\mathcal P}_{P_{\theta,\eta}}$.
The parameter $\psi(P_{\theta+ta,\,\eta_t}) = \theta + ta$ is certainly differentiable with respect to $t$ in the ordinary sense but is, by definition, differentiable as a parameter on the model if and only if there exists a function $\tilde\psi_{\theta,\eta}$ such that
$$a = \frac{\partial}{\partial t}\Big|_{t=0}\psi(P_{\theta+ta,\,\eta_t}) = \bigl\langle \tilde\psi_{\theta,\eta},\; a^T\dot\ell_{\theta,\eta} + g \bigr\rangle_{P_{\theta,\eta}}.$$
Setting $a = 0$, we see that $\tilde\psi_{\theta,\eta}$ must be orthogonal to the tangent set ${}_\eta\dot{\mathcal P}_{P_{\theta,\eta}}$ for the nuisance parameter. Define $\Pi_{\theta,\eta}$ as the orthogonal projection onto the closure of the linear span of ${}_\eta\dot{\mathcal P}_{P_{\theta,\eta}}$ in $L_2(P_{\theta,\eta})$.
The functions defined by
$$\tilde\ell_{\theta,\eta} = \dot\ell_{\theta,\eta} - \Pi_{\theta,\eta}\dot\ell_{\theta,\eta} \qquad\text{and}\qquad \tilde I_{\theta,\eta} = P_{\theta,\eta}\tilde\ell_{\theta,\eta}\tilde\ell_{\theta,\eta}^T$$
are called the efficient score function for $\theta$ and the efficient information matrix, respectively.
25.25 Lemma. Suppose that for every $a \in \mathbb{R}^k$ and every $g \in {}_\eta\dot{\mathcal P}_{P_{\theta,\eta}}$ there exists a path $t \mapsto \eta_t$ in $H$ such that
$$\int\Bigl[\frac{dP_{\theta+ta,\,\eta_t}^{1/2} - dP_{\theta,\eta}^{1/2}}{t} - \tfrac12\bigl(a^T\dot\ell_{\theta,\eta} + g\bigr)\,dP_{\theta,\eta}^{1/2}\Bigr]^2 \to 0. \tag{25.26}$$
If $\tilde I_{\theta,\eta}$ is nonsingular, then the parameter $\psi(P_{\theta+ta,\,\eta_t}) = \theta + ta$ is differentiable at $P_{\theta,\eta}$ relative to the given tangent set, with efficient influence function $\tilde\psi_{\theta,\eta} = \tilde I_{\theta,\eta}^{-1}\tilde\ell_{\theta,\eta}$.
Proof. The given set ${}_\eta\dot{\mathcal P}_{P_{\theta,\eta}}$ is a tangent set by assumption. The function $\psi$ is differentiable with respect to this tangent set because
$$\bigl\langle \tilde I_{\theta,\eta}^{-1}\tilde\ell_{\theta,\eta},\; a^T\dot\ell_{\theta,\eta} + g \bigr\rangle_{P_{\theta,\eta}} = \tilde I_{\theta,\eta}^{-1}\,P_{\theta,\eta}\tilde\ell_{\theta,\eta}\tilde\ell_{\theta,\eta}^T\,a = a.$$
The first equality follows because $g$ is orthogonal to $\tilde\ell_{\theta,\eta}$ and because the inner product of a function and its orthogonal projection is equal to the square length of the projection. Thus, we may replace $\dot\ell_{\theta,\eta}$ by $\tilde\ell_{\theta,\eta}$. •
In view of (25.22), a sequence of estimators $T_n$ is asymptotically efficient for estimating $\theta$ if
$$\sqrt n\bigl(T_n - \theta\bigr) = \frac{1}{\sqrt n}\sum_{i=1}^n \tilde I_{\theta,\eta}^{-1}\tilde\ell_{\theta,\eta}(X_i) + o_P(1).$$
This equation is very similar to the equation derived for efficient estimators in parametric models in Chapter 8. It differs only in that the ordinary score function $\dot\ell_{\theta,\eta}$ has been replaced by the efficient score function (and similarly for the information). The intuitive explanation is that a part of the score function for $\theta$ can also be accounted for by score functions for the nuisance parameter $\eta$. If the nuisance parameter is unknown, a part of the information for $\theta$ is "lost," and this corresponds to a loss of a part of the score function.
25.27 Example (Symmetric location). Suppose that the model consists of all densities $x \mapsto \eta(x - \theta)$ with $\theta \in \mathbb{R}$ and the "shape" $\eta$ symmetric about 0 with finite Fisher information for location $I_\eta$. Thus, the observations are sampled from a density that is symmetric about $\theta$.
By the symmetry, the density can equivalently be written as $\eta(|x - \theta|)$. It follows that any score function for the nuisance parameter $\eta$ is necessarily a function of $|x - \theta|$. This suggests a tangent set containing functions of the form $a(\eta'/\eta)(x - \theta) + b(|x - \theta|)$. It is not hard to show that all square-integrable functions of this type with mean zero occur as score functions in the sense of (25.26).†
A symmetric density has an antisymmetric derivative and hence an antisymmetric score function for location. Therefore, for every $b$,
$$\int \frac{\eta'}{\eta}(x - \theta)\,b\bigl(|x - \theta|\bigr)\,\eta(x - \theta)\,dx = 0.$$
Thus the score function for location is orthogonal to the tangent set for the nuisance parameter: the efficient score function coincides with the ordinary score function, and no information for estimating $\theta$ is lost through not knowing $\eta$. □
25.28 Example (Regression). Suppose that we observe a vector $(X, Y)$ following the regression model
$$Y = g_\theta(X) + e, \qquad \mathrm{E}(e \mid X) = 0.$$
This model includes the logistic regression model, for $g_\theta(x) = 1/(1 + e^{-\theta^T x})$. It is also a version of the ordinary linear regression model. However, in this example we do not assume that $X$ and $e$ are independent, but only the relations in the preceding display, apart from qualitative smoothness conditions that ensure the existence of score functions, and the existence of moments. We shall write the formulas assuming that $(X, e)$ possesses a density $\eta$. Thus, the observation $(X, Y)$ has a density $\eta\bigl(x,\, y - g_\theta(x)\bigr)$, in which $\eta$ is (essentially) only restricted by the relation $\int e\,\eta(x, e)\,de = 0$.
Because any perturbation $\eta_t$ of $\eta$ within the model must satisfy this same relation $\int e\,\eta_t(x, e)\,de = 0$, it is clear that score functions for the nuisance parameter $\eta$ are functions
† That no other functions can occur is shown in, for example, [8, pp. 56-57], but this need not concern us here.
25.5 Score and Information Operators 371
The efficient information takes the form $\tilde I_{\theta,\eta} = \mathrm{E}\bigl(\dot g_\theta\dot g_\theta^T(X)/\mathrm{E}(e^2 \mid X)\bigr)$. □
$$g = A_\eta b.$$
Then $A_\eta\dot H_\eta$ is a tangent set for the model $\mathcal P$ at $P_\eta$. Because $A_\eta$ turns scores for the model $H$ into scores for the model $\mathcal P$, it is called a score operator.
The functional $\psi(P_\eta) = \chi(\eta)$ is differentiable with efficient influence function $\tilde\psi_{P_\eta}$ if and only if $\langle \tilde\psi_{P_\eta},\, A_\eta b\rangle_{P_\eta} = \langle \dot\chi_\eta, b\rangle_\eta$ for every $b$. This equation can be rewritten in terms of the adjoint score operator $A_\eta^* : L_2(P_\eta) \mapsto \overline{\operatorname{lin}}\,\dot H_\eta$. By definition this satisfies $\langle h, A_\eta b\rangle_{P_\eta} = \langle A_\eta^* h, b\rangle_\eta$ for every $h \in L_2(P_\eta)$ and $b \in \dot H_\eta$.† The preceding display is equivalent to
$$A_\eta^*\,\tilde\psi_{P_\eta} = \dot\chi_\eta. \tag{25.29}$$
We conclude that the function $\psi(P_\eta) = \chi(\eta)$ is differentiable relative to the tangent set $\dot{\mathcal P}_{P_\eta} = A_\eta\dot H_\eta$ if and only if this equation can be solved for $\tilde\psi_{P_\eta}$; equivalently, if and only if $\dot\chi_\eta$ is contained in the range of the adjoint $A_\eta^*$. Because $A_\eta^*$ is not necessarily onto $\overline{\operatorname{lin}}\,\dot H_\eta$, not even if it is one-to-one, this is a real condition.
For multivariate functionals, (25.29) is to be understood coordinatewise. Two solutions $\tilde\psi_{P_\eta}$ of (25.29) can differ only by an element of the kernel $N(A_\eta^*)$ of $A_\eta^*$, which is the orthocomplement $R(A_\eta)^\perp$ of the range of $A_\eta : \operatorname{lin}\dot H_\eta \mapsto L_2(P_\eta)$. Thus, there is at most one solution $\tilde\psi_{P_\eta}$ that is contained in $\overline{R(A_\eta)} = \overline{\operatorname{lin}}\,A_\eta\dot H_\eta$, the closure of the range of $A_\eta$, as required.
If $\dot\chi_\eta$ is contained in the smaller range of $A_\eta^*A_\eta$, then (25.29) can be solved, of course, and the solution can be written in the attractive form
$$\tilde\psi_{P_\eta} = A_\eta\bigl(A_\eta^*A_\eta\bigr)^-\,\dot\chi_\eta. \tag{25.30}$$
Here $A_\eta^*A_\eta$ is called the information operator, and $(A_\eta^*A_\eta)^-$ is a "generalized inverse." (Here this will not mean more than that $b = (A_\eta^*A_\eta)^-\dot\chi_\eta$ is a solution to the equation $A_\eta^*A_\eta b = \dot\chi_\eta$.) In the preceding equation the operator $A_\eta^*A_\eta$ performs a role similar to that of the matrix $X^TX$ in the least-squares solution of a linear regression model. The operator $A_\eta(A_\eta^*A_\eta)^{-1}A_\eta^*$ (if it exists) is the orthogonal projection onto the range space of $A_\eta$.
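The least-squares analogy becomes literal in finite dimensions, where the score operator is a matrix $A$: then $A^TA$ is the information operator, $A(A^TA)^-\dot\chi$ is the analogue of (25.30), and $A(A^TA)^{-1}A^T$ is the projection onto the range of $A$. The matrix and the vector $\dot\chi$ below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))             # "score operator" on a 3-dim direction space

info = A.T @ A                          # information operator, the analogue of X^T X
chi = np.array([1.0, -2.0, 0.5])        # derivative of the functional (illustrative)

psi = A @ np.linalg.solve(info, chi)    # analogue of (25.30): psi = A (A^T A)^- chi
print(np.allclose(A.T @ psi, chi))      # psi solves the analogue of (25.29)

proj = A @ np.linalg.solve(info, A.T)   # A (A^T A)^{-1} A^T
print(np.allclose(proj @ proj, proj))   # idempotent: an orthogonal projection
print(np.allclose(proj @ psi, psi))     # psi lies in the range of A
```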
So far we have assumed that the parameter $\eta$ is a probability distribution, but this is not necessary. Consider the more general situation of a model $\mathcal P = \{P_\eta : \eta \in H\}$ indexed by a parameter $\eta$ running through an arbitrary set $H$. Let $\mathbb{H}_\eta$ be a subset of a Hilbert space that indexes "directions" $b$ in which $\eta$ can be approximated within $H$. Suppose that there exist continuous, linear operators $A_\eta : \operatorname{lin}\mathbb{H}_\eta \mapsto L_2(P_\eta)$ and $\dot\chi_\eta : \operatorname{lin}\mathbb{H}_\eta \mapsto \mathbb{R}^k$, and for every $b \in \mathbb{H}_\eta$ a path $t \mapsto \eta_t$ such that, as $t \downarrow 0$,
$$\int\Bigl[\frac{dP_{\eta_t}^{1/2} - dP_\eta^{1/2}}{t} - \tfrac12\,A_\eta b\, dP_\eta^{1/2}\Bigr]^2 \to 0.$$
† Note that we define $A_\eta^*$ to have range $\overline{\operatorname{lin}}\,\dot H_\eta$, so that it is the adjoint of $A_\eta : \dot H_\eta \mapsto L_2(P_\eta)$. This is the adjoint of an extension $A_\eta : L_2(\eta) \mapsto L_2(P_\eta)$ followed by the orthogonal projection onto $\overline{\operatorname{lin}}\,\dot H_\eta$.
By the Riesz representation theorem for Hilbert spaces, the "derivative" $\dot\chi_\eta$ has a representation as an inner product $\dot\chi_\eta b = \langle \chi_\eta, b\rangle_{\mathbb{H}_\eta}$ for an element $\chi_\eta \in \overline{\operatorname{lin}}\,\mathbb{H}_\eta$. The preceding discussion can be extended to this abstract setup.
25.31 Theorem. The map $\psi : \mathcal P \mapsto \mathbb{R}^k$ given by $\psi(P_\eta) = \chi(\eta)$ is differentiable at $P_\eta$ relative to the tangent set $A_\eta\mathbb{H}_\eta$ if and only if each coordinate function of $\chi_\eta$ is contained in the range of $A_\eta^* : L_2(P_\eta) \mapsto \overline{\operatorname{lin}}\,\mathbb{H}_\eta$. The efficient influence function $\tilde\psi_{P_\eta}$ satisfies (25.29). If each coordinate function of $\chi_\eta$ is contained in the range of $A_\eta^*A_\eta : \operatorname{lin}\mathbb{H}_\eta \mapsto \operatorname{lin}\mathbb{H}_\eta$, then it also satisfies (25.30).
Proof. By assumption, the set $A_\eta\mathbb{H}_\eta$ is a tangent set. The map $\psi$ is differentiable relative to this tangent set (and the corresponding submodels $t \mapsto P_{\eta_t}$) by the argument leading up to (25.29). •
The condition (25.29) is not automatically satisfied. By definition, the influence function $\chi_\eta$ is contained in the closed linear span of $\mathbb{H}_\eta$, and the operator $A_\eta^*$ maps $L_2(P_\eta)$ into $\overline{\operatorname{lin}}\,\mathbb{H}_\eta$. Therefore, the condition is certainly satisfied if $A_\eta^*$ is onto. There are two reasons why it may fail to be onto. First, its range $R(A_\eta^*)$ may be a proper subspace of $\overline{\operatorname{lin}}\,\mathbb{H}_\eta$. Because $b \perp R(A_\eta^*)$ if and only if $b \in N(A_\eta)$, this can happen only if $A_\eta$ is not one-to-one. This means that two different directions $b$ may lead to the same score function $A_\eta b$, so that the information matrix for the corresponding two-dimensional submodel is singular. A rough interpretation is that the parameter is not locally identifiable. Second, the range space $R(A_\eta^*)$ may be dense but not closed. Then for any $\chi_\eta$ there exist elements in $R(A_\eta^*)$ that are arbitrarily close to $\chi_\eta$, but (25.29) may still fail. This happens quite often. The following theorem shows that failure has serious consequences.†
25.32 Theorem. Suppose that $\eta \mapsto \chi(\eta)$ is differentiable with influence function $\chi_\eta \notin R(A_\eta^*)$. Then there exists no estimator sequence for $\chi(\eta)$ that is regular at $P_\eta$.
onto the score space for $\eta$. Presently, the latter space is the range of the operator $B_{\theta,\eta}$. If the operator $B_{\theta,\eta}^*B_{\theta,\eta}$ is continuously invertible (but in many examples it is not), then the operator $B_{\theta,\eta}(B_{\theta,\eta}^*B_{\theta,\eta})^{-1}B_{\theta,\eta}^*$ is the orthogonal projection onto the nuisance score space, and
$$\tilde\ell_{\theta,\eta} = \dot\ell_{\theta,\eta} - B_{\theta,\eta}\bigl(B_{\theta,\eta}^*B_{\theta,\eta}\bigr)^{-1}B_{\theta,\eta}^*\,\dot\ell_{\theta,\eta}. \tag{25.33}$$
This means that $b = -(B_{\theta,\eta}^*B_{\theta,\eta})^{-1}B_{\theta,\eta}^*\dot\ell_{\theta,\eta}$ is a "least favorable direction" in $H$, for estimating $\theta$. If $\theta$ is one-dimensional, then the submodel $t \mapsto P_{\theta+t,\,\eta_t}$, where $\eta_t$ approaches $\eta$ in this direction, has the least information for estimating $t$ and score function $\tilde\ell_{\theta,\eta}$ at $t = 0$.
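A finite-dimensional sketch of (25.33): with the score for $\theta$ and the nuisance scores represented as vectors in $L_2(P)$ over a discrete carrier, the efficient score is the residual of the former after projection onto the span of the latter. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete carrier with uniform weights; <f, g>_P = f^T W g
p = np.full(10, 0.1)
W = np.diag(p)

score_theta = rng.normal(size=10)            # ordinary score for theta (illustrative)
B = rng.normal(size=(10, 3))                 # columns: nuisance scores B b

# orthogonal projection onto the nuisance score space in the P-inner product
proj = B @ np.linalg.solve(B.T @ W @ B, B.T @ W)
eff_score = score_theta - proj @ score_theta   # the analogue of (25.33)

# the efficient score is orthogonal to every nuisance score ...
print(np.allclose(B.T @ W @ eff_score, 0))
# ... and the efficient information is at most the ordinary information
info = score_theta @ W @ score_theta
eff_info = eff_score @ W @ eff_score
print(eff_info <= info)
```

The drop from `info` to `eff_info` is the finite-dimensional version of the information "lost" to the unknown nuisance parameter.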
A function $\chi(\eta)$ of the nuisance parameter can, despite the name, also be of interest. The efficient influence function for this parameter can be found from (25.29). The adjoint of $A_{\theta,\eta} : \mathbb{R}^k \times \mathbb{H}_\eta \mapsto L_2(P_{\theta,\eta})$, and the corresponding information operator $A_{\theta,\eta}^*A_{\theta,\eta} : \mathbb{R}^k \times \mathbb{H}_\eta \mapsto \mathbb{R}^k \times \overline{\operatorname{lin}}\,\mathbb{H}_\eta$, are given by, with $B_{\theta,\eta}^* : L_2(P_{\theta,\eta}) \mapsto \overline{\operatorname{lin}}\,\mathbb{H}_\eta$ the adjoint of $B_{\theta,\eta}$,
$$A_{\theta,\eta}^*h = \begin{pmatrix} P_{\theta,\eta}\,h\dot\ell_{\theta,\eta} \\ B_{\theta,\eta}^*h \end{pmatrix}, \qquad A_{\theta,\eta}^*A_{\theta,\eta} = \begin{pmatrix} I_{\theta,\eta} & P_{\theta,\eta}\,\dot\ell_{\theta,\eta}B_{\theta,\eta} \\ B_{\theta,\eta}^*\dot\ell_{\theta,\eta}^T & B_{\theta,\eta}^*B_{\theta,\eta} \end{pmatrix}.$$
The diagonal elements in the matrix are the information operators for the parameters $\theta$ and $\eta$, respectively, the former being just the ordinary Fisher information matrix $I_{\theta,\eta}$ for $\theta$. If $\eta \mapsto \chi(\eta)$ is differentiable as before, then the function $(\theta, \eta) \mapsto \chi(\eta)$ is differentiable with influence function $(0, \chi_\eta)$. Thus, for a real parameter $\chi(\eta)$, equation (25.29) becomes
$$P_{\theta,\eta}\,\tilde\psi_{P_{\theta,\eta}}\dot\ell_{\theta,\eta} = 0, \qquad B_{\theta,\eta}^*\,\tilde\psi_{P_{\theta,\eta}} = \chi_\eta.$$
If $\tilde I_{\theta,\eta}$ is invertible and $\chi_\eta$ is contained in the range of $B_{\theta,\eta}^*B_{\theta,\eta}$, then the solution $\tilde\psi_{P_{\theta,\eta}}$ of these equations is
$$\tilde\psi_{P_{\theta,\eta}} = B_{\theta,\eta}\bigl(B_{\theta,\eta}^*B_{\theta,\eta}\bigr)^-\chi_\eta - \Bigl(P_{\theta,\eta}\,\dot\ell_{\theta,\eta}\,B_{\theta,\eta}\bigl(B_{\theta,\eta}^*B_{\theta,\eta}\bigr)^-\chi_\eta\Bigr)^T\,\tilde I_{\theta,\eta}^{-1}\,\tilde\ell_{\theta,\eta}.$$
The second part of this function is the part of the efficient score function for $\chi(\eta)$ that is "lost" due to the fact that $\theta$ is unknown. Because it is orthogonal to the first part, it adds a positive contribution to the variance.
If we consider the scores $b$ and $g$ as the carriers of information about $t$ in the variables $Y$ with law $\eta_t$ and $X$ with law $P_{\eta_t}$, respectively, then the intuitive meaning of the conditional expectation operator is clear. The information contained in the observation $X$ is the information contained in $Y$ diluted (and reduced) through conditioning.†
25.34 Lemma. Suppose that $\{\eta_t : 0 < t < 1\}$ is a collection of probability measures on a measurable space $(\mathcal Y, \mathcal B)$ such that, for some measurable function $b : \mathcal Y \mapsto \mathbb{R}$,
$$\int\Bigl[\frac{d\eta_t^{1/2} - d\eta^{1/2}}{t} - \tfrac12\,b\,d\eta^{1/2}\Bigr]^2 \to 0.$$
Then the induced distributions $P_{\eta_t}$ of $X$ satisfy
$$\int\Bigl[\frac{dP_{\eta_t}^{1/2} - dP_\eta^{1/2}}{t} - \tfrac12\,A_\eta b\,dP_\eta^{1/2}\Bigr]^2 \to 0, \qquad A_\eta b(x) = \mathrm{E}_\eta\bigl(b(Y)\mid X = x\bigr).$$
If we consider $A_\eta$ as an operator $A_\eta : L_2(\eta) \mapsto L_2(P_\eta)$, then its adjoint $A_\eta^* : L_2(P_\eta) \mapsto L_2(\eta)$ is a conditional expectation operator also, reversing the roles of $X$ and $Y$:
$$A_\eta^*g(y) = \mathrm{E}\bigl(g(X)\mid Y = y\bigr).$$
This follows because, by the usual rules for conditional expectations, $\mathrm{E}\,\mathrm{E}(g(X)\mid Y)\,b(Y) = \mathrm{E}\,g(X)b(Y) = \mathrm{E}\,g(X)\,\mathrm{E}(b(Y)\mid X)$. In the "calculus of scores" of Theorem 25.31 the adjoint is understood to be the adjoint of $A_\eta : \mathbb{H}_\eta \mapsto L_2(P_\eta)$ and hence to have range $\overline{\operatorname{lin}}\,\mathbb{H}_\eta \subset L_2(\eta)$. Then the conditional expectation in the preceding display needs to be followed by the orthogonal projection onto $\overline{\operatorname{lin}}\,\mathbb{H}_\eta$.
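On a discrete joint distribution the score operator $A b = \mathrm{E}(b(Y)\mid X)$ and its adjoint $A^*g = \mathrm{E}(g(X)\mid Y)$ can be written down explicitly, and the defining adjoint identity $\langle Ab, g\rangle_{P_X} = \langle b, A^*g\rangle_{P_Y}$ checked directly. The $3\times 4$ joint pmf below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
joint = rng.random((3, 4)); joint /= joint.sum()    # p(x, y), x in {0,1,2}, y in {0,..,3}
px, py = joint.sum(axis=1), joint.sum(axis=0)       # marginals of X and Y

b = rng.normal(size=4)                              # a function of y
g = rng.normal(size=3)                              # a function of x

Ab = (joint @ b) / px                               # A b (x)  = E(b(Y) | X = x)
Astar_g = (joint.T @ g) / py                        # A* g (y) = E(g(X) | Y = y)

lhs = np.sum(Ab * g * px)                           # <A b, g>_{P_X}
rhs = np.sum(b * Astar_g * py)                      # <b, A* g>_{P_Y}
print(np.isclose(lhs, rhs))                         # True: the adjoint identity
```

Both sides equal $\sum_{x,y} g(x)\,b(y)\,p(x,y)$, which is exactly the conditional expectation argument in the text.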
This is a missing data problem if we think of $X$ as a function of the pair $Y = (X, Z)$. A score for the mixing distribution $\eta$ in the model for $Y$ is a function $b(z)$. Thus, a score space for the mixing distribution in the model for $X$ consists of the functions
† For a proof of the following lemma, see, for example, [139, pp. 188-193].
in the maximal tangent set $\{g \in L_2(P_\eta) : P_\eta g = 0\}$, for every $\eta$ whose support contains an interval. This has as a consequence that empirical estimators $\mathbb{P}_n g$, for a fixed square-integrable function $g$, are efficient estimators for the functional $\psi(\eta) = P_\eta g$. For instance, the sample mean is asymptotically efficient for estimating the mean of the observations.
Thus nonparametric mixtures over an exponential family form very large models, which are only slightly smaller than the nonparametric model. For estimating a functional such as the mean of the observations, it is of relatively little use to know that the underlying distribution is a mixture. More precisely, the additional information does not decrease the asymptotic variance, although there may be an advantage for finite $n$. On the other hand, the mixture structure may express a structure in reality, and the mixing distribution $\eta$ may define the functional of interest.
The closure of the range of the operator $A_\eta$ is the orthocomplement of the kernel $N(A_\eta^*)$ of its adjoint. Hence our claim is proved if this kernel is zero. The equation
$$0 = A_\eta^*g(z) = \mathrm{E}\bigl(g(X)\mid Z = z\bigr) = \int g(x)\,p(x\mid z)\,d\mu(x)$$
says exactly that $g(X)$ is a zero-estimator under $p(x\mid z)$. Because the adjoint is defined on $L_2(\eta)$, the equation $0 = A_\eta^*g$ should be taken to mean $A_\eta^*g(Z) = 0$ almost surely under $\eta$. In other words, the display is valid for every $z$ in a set of $\eta$-measure 1. If the support of $\eta$ contains a limit point, then this set is rich enough to conclude that $g = 0$, by the completeness of the exponential family.
If the support of $\eta$ does not contain a limit point, then the preceding approach fails. However, we may reach almost the same conclusion by using a different type of scores. The paths $\eta_t = (1 - ta)\eta + ta\eta_1$ are well defined for $0 \le ta \le 1$, for any fixed $a \ge 0$ and $\eta_1$, and lead to scores
$$a\Bigl(\frac{dP_{\eta_1}}{dP_\eta} - 1\Bigr).$$
If the set of distributions $\{P_\eta : \eta \in H\}$ is complete, then we can typically conclude that $g = 0$ almost surely. Then the closed linear span of the tangent set is equal to the nonparametric, maximal tangent set. Because this set of scores is also a convex cone, Theorems 25.20 and 25.21 next show that nonparametric estimators are asymptotically efficient. □
25.36 Example (Semiparametric mixtures). In the preceding example, replace the density $p(x\mid z)$ by a parametric family $p_\theta(x\mid z)$. Then the model $p_\theta(x\mid z)\,d\eta(z)$ for the unobserved data $Y = (X, Z)$ has scores for both $\theta$ and $\eta$. Suppose that the model $t \mapsto \eta_t$ is differentiable with score $b$, and that
Then the function $a^T\dot\ell_\theta(x\mid z) + b(z)$ can be shown to be a score function corresponding to the model $t \mapsto p_{\theta+ta}(x\mid z)\,d\eta_t(z)$. Next, by Lemma 25.34, the function
$$\mathrm{E}_{\theta,\eta}\bigl(a^T\dot\ell_\theta(X\mid Z) + b(Z)\bigm| X = x\bigr)$$
is a score function for the model of the observation $X$.
25.37 Example (Random censoring). Suppose that the time $T$ of an event is only observed if the event takes place before a censoring time $C$ that is generated independently of $T$; otherwise we observe $C$. Thus the observation $X = (Y, \Delta)$ is the pair of transformations $Y = T \wedge C$ and $\Delta = 1\{T \le C\}$ of the "full data" $(T, C)$. If $T$ has a distribution function $F$ and $t \mapsto F_t$ is a differentiable path with score function $a$, then the submodel $t \mapsto P_{F_t,G}$ for $X$ has score function
$$A_{F,G}a(x) = \mathrm{E}_F\bigl(a(T)\mid X = (y, \delta)\bigr) = (1 - \delta)\,\frac{\int_y^\infty a\,dF}{1 - F(y)} + \delta\,a(y).$$
A score operator for the distribution of $C$ can be defined similarly, and takes the form, with $G$ the distribution of $C$,
$$B_{F,G}b(x) = (1 - \delta)\,b(y) + \delta\,\frac{\int_y^\infty b\,dG}{1 - G(y)}.$$
The scores $A_{F,G}a$ and $B_{F,G}b$ form orthogonal spaces, as can be checked directly from the formulas, because $\mathrm{E}\,A_{F,G}a(X)\,B_{F,G}b(X) = Fa\,Gb$. (This is also explained by the product structure in the likelihood.) A consequence is that knowing $G$ does not help for estimating $F$, in the sense that the information for estimating parameters of the form $\psi(P_{F,G}) = \chi(F)$ is the same in the models in which $G$ is known or completely unknown, respectively. To see this, note first that the influence function of such a parameter must be orthogonal to every score function for $G$, because $d/dt\,\psi(P_{F,G_t}) = 0$. Thus, due to the orthogonality of the two score spaces, an influence function of this parameter that is contained in the closed linear span of $R(A_{F,G}) + R(B_{F,G})$ is automatically contained in $\overline{R(A_{F,G})}$. □
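The orthogonality $\mathrm{E}\,A_{F,G}a(X)\,B_{F,G}b(X) = Fa\,Gb$ can be checked by Monte Carlo. Taking $T$ and $C$ exponential is an assumption made only so that the conditional expectations in the score formulas have closed forms (by memorylessness); with mean-zero $a$ and $b$ the product should average to approximately zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam = 400_000, 0.5
T = rng.exponential(1.0, size=n)             # event times, T ~ F = Exp(1)
C = rng.exponential(1.0 / lam, size=n)       # censoring times, C ~ G = Exp(lam)
Y, D = np.minimum(T, C), (T <= C).astype(float)

# a(t) = t - 1 has Fa = 0; b(c) = c - 1/lam has Gb = 0.
# By memorylessness, int_y^inf a dF / (1 - F(y)) = E(T | T > y) - 1 = y,
# and int_y^inf b dG / (1 - G(y)) = E(C | C > y) - 1/lam = y.
Aa = (1 - D) * Y + D * (Y - 1.0)             # A_{F,G} a (X)
Bb = (1 - D) * (Y - 1.0 / lam) + D * Y       # B_{F,G} b (X)

print(np.mean(Aa * Bb))                      # approximately Fa * Gb = 0
```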
25.38 Example (Current status censoring). Suppose that we only observe whether an event at time $T$ has happened or not at an observation time $C$. Then we observe the transformation $X = (C, 1\{T \le C\}) = (C, \Delta)$ of the pair $(C, T)$. If $T$ and $C$ are independent with distribution functions $F$ and $G$, respectively, then the score operators for $F$ and $G$ are given by, with $x = (c, \delta)$,
$$A_{F,G}a(x) = \mathrm{E}_F\bigl(a(T)\mid C = c, \Delta = \delta\bigr) = (1 - \delta)\,\frac{\int_{(c,\infty)}a\,dF}{1 - F(c)} + \delta\,\frac{\int_{[0,c]}a\,dF}{F(c)},$$
$$B_{F,G}b(x) = \mathrm{E}\bigl(b(C)\mid C = c, \Delta = \delta\bigr) = b(c).$$
These score functions can be seen to be orthogonal with the help of Fubini's theorem. If we take $F$ to be completely unknown, then the set of $a$ can be taken to be all functions in $L_2(F)$ with $Fa = 0$, and the adjoint operator $A_{F,G}^*$ restricted to the set of mean-zero functions in $L_2(P_{F,G})$ is given by
$$A_{F,G}^*h(t) = \mathrm{E}\bigl(h(C, \Delta)\mid T = t\bigr) = \int_{[t,\infty)}h(c, 1)\,dG(c) + \int_{[0,t)}h(c, 0)\,dG(c).$$
For simplicity assume that the true $F$ and $G$ possess continuous Lebesgue densities, which are positive on their supports. The range of $A_{F,G}^*$ consists of functions as in the preceding display for functions $h$ that are contained in $L_2(P_{F,G})$, or equivalently
$$\int h^2(u, 0)\,(1 - F)(u)\,dG(u) < \infty \qquad\text{and}\qquad \int h^2(u, 1)\,F(u)\,dG(u) < \infty.$$
Thus the functions $h(u, 1)$ and $h(u, 0)$ are square-integrable with respect to $G$ on any interval inside the support of $F$. Consequently, the range of the adjoint $A_{F,G}^*$ contains only absolutely continuous functions, and hence (25.29) fails for every parameter $\chi(F)$ with an influence function $\chi_F$ that is discontinuous; more precisely, for parameters $\chi(F)$ with influence functions that are not almost surely equal under $F$ to an absolutely continuous function. Because this includes the functions $1_{[0,t]} - F(t)$, the distribution function $F \mapsto \chi(F) = F(t)$ at a point is not a differentiable functional of the model. In view of Theorem 25.32 this means that this parameter is not estimable at $\sqrt n$-rate, and the usual normal theory does not apply to it.
On the other hand, parameters with a smooth influence function $\chi_F$ may be differentiable. The score operator for the model $P_{F,G}$ is the sum $(a, b) \mapsto A_{F,G}a + B_{F,G}b$ of the score operators for $F$ and $G$ separately. Its adjoint is the map $h \mapsto (A_{F,G}^*h,\, B_{F,G}^*h)$. A parameter of the form $(F, G) \mapsto \chi(F)$ has an influence function of the form $(\chi_F, 0)$. Thus, for a parameter of this type equation (25.29) takes the form
$$A_{F,G}^*\,\tilde\psi = \chi_F, \qquad B_{F,G}^*\,\tilde\psi = 0.$$
The kernel $N(A_{F,G}^*)$ consists of the functions $h \in L_2(P_{F,G})$ such that $h(u, 0) = h(u, 1)$ almost surely under $F$ and $G$. This is precisely the range of $B_{F,G}$, and we can conclude that
$$L_2(P_{F,G}) = \overline{R(A_{F,G})} \oplus R(B_{F,G}).$$
Therefore, we can solve the preceding display by first solving $A_{F,G}^*h = \chi_F$ and next projecting a solution $h$ onto the closure of the range of $A_{F,G}$. By the orthogonality of the ranges of $A_{F,G}$ and $B_{F,G}$, the latter projection is the identity minus the projection onto $R(B_{F,G})$. This is convenient, because the projection onto $R(B_{F,G})$ is the conditional expectation relative to $C$.
For example, consider a function $\chi(F) = Fa$ for some fixed known, continuously differentiable function $a$. Differentiating the equation $a = A_{F,G}^*h$, we find $a'(c) = \bigl(h(c, 0) - h(c, 1)\bigr)g(c)$. This can happen for some $h \in L_2(P_{F,G})$ only if, for any $\tau$ such that $0 < F(\tau) < 1$,
$$\int_\tau^\infty \Bigl(\frac{a'}{g}\Bigr)^2(1 - F)\,dG = \int_\tau^\infty\bigl(h(u, 0) - h(u, 1)\bigr)^2(1 - F)(u)\,dG(u) < \infty,$$
$$\int_0^\tau \Bigl(\frac{a'}{g}\Bigr)^2 F\,dG = \int_0^\tau\bigl(h(u, 0) - h(u, 1)\bigr)^2 F(u)\,dG(u) < \infty.$$
If the left sides of these equations are finite, then the parameter $P_{F,G} \mapsto Fa$ is differentiable. An influence function is given by the function $h$ defined by these relations. The efficient influence function is found by projecting this onto $\overline{R(A_{F,G})}$, and is given by
$$h(c, \delta) - \mathrm{E}_{F,G}\bigl(h(C, \Delta)\mid C = c\bigr) = \bigl(h(c, 1) - h(c, 0)\bigr)\bigl(\delta - F(c)\bigr) = -\delta\,\frac{1 - F(c)}{g(c)}\,a'(c) + (1 - \delta)\,\frac{F(c)}{g(c)}\,a'(c).$$
For example, for the mean $\chi(F) = \int u\,dF(u)$, the influence function certainly exists if the density $g$ is bounded away from zero on the compact support of $F$. □
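A minimal numerical sketch of the efficient influence function computed above for $\chi(F) = Fa$, under the assumption $F = G = \mathrm{Uniform}(0, 1)$ and $a(u) = u$ (so $a' = 1$ and $g = 1$): the formula reduces to $-\delta(1 - c) + (1 - \delta)c$, and a simulation confirms that it has mean zero under the model, as an influence function must.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
T = rng.uniform(size=n)                      # unobserved event times, T ~ F
C = rng.uniform(size=n)                      # observation times, C ~ G
D = (T <= C).astype(float)                   # delta = 1{T <= C}

# with F(c) = c, g(c) = 1 and a'(c) = 1, the efficient influence function is
eff_if = -D * (1 - C) + (1 - D) * C
print(np.mean(eff_if))                       # approximately 0
```

Conditionally on $C = c$, the indicator $\Delta$ is Bernoulli$(c)$, so $\mathrm{E}(\text{eff\_if}\mid C = c) = -c(1 - c) + (1 - c)c = 0$ exactly.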
25.39 Example (Missing at random). If $\Delta \in \{0, 1\}$, the requirements are both that $P(\Delta = 0\mid Y = y)$ depends only on $\phi(y, 0)$ and $0$, and that $P(\Delta = 1\mid Y = y)$ depends only on $\phi(y, 1)$ and $1$. Thus the two functions $y \mapsto P(\Delta = 0\mid Y = y)$ and $y \mapsto P(\Delta = 1\mid Y = y)$ may be different (fortunately) but may depend on $y$ only through $\phi(y, 0)$ and $\phi(y, 1)$, respectively.
If $\phi(y, 1) = y$, then $\delta = 1$ corresponds to observing $y$ completely. Then the requirement reduces to $P(\Delta = 0\mid Y = y)$ being a function of $\phi(y, 0)$ only. If $Y = (Y_1, Y_2)$ and $\phi(y, 0) = y_1$, then CAR reduces to MAR as defined in the introduction. □
† The density is relative to a dominating measure $\nu$ on the sample space for $\Delta$, and we suppose that $(\delta, y) \mapsto r(\delta\mid y)$ is a Markov kernel.
Denote by $\mathcal Q$ and $\mathcal R$ the parameter spaces for the distribution $Q$ of $Y$ and the kernels $r(\delta\mid y)$ giving the conditional distribution of $\Delta$ given $Y$, respectively. Let $\mathcal Q \times \mathcal R = \{Q \times R : Q \in \mathcal Q, R \in \mathcal R\}$ and $\mathcal P = \{P_{Q,R} : Q \in \mathcal Q, R \in \mathcal R\}$ be the models for $(Y, \Delta)$ and $X$, respectively.
25.40 Theorem. Suppose that the distribution $Q$ of $Y$ is completely unspecified and the Markov kernel $r(\delta\mid y)$ is restricted by CAR, and only by CAR. Then there exists a tangent set $\dot{\mathcal P}_{P_{Q,R}}$ for the model $\mathcal P = \{P_{Q,R} : Q \in \mathcal Q, R \in \mathcal R\}$ whose closure consists of all mean-zero functions in $L_2(P_{Q,R})$. Furthermore, any element of $\dot{\mathcal P}_{P_{Q,R}}$ can be orthogonally decomposed as
$$\mathrm{E}_{Q,R}\bigl(a(Y)\mid X\bigr) + b(X),$$
where $a \in \dot{\mathcal Q}_Q$ and $b \in \dot{\mathcal R}_R$. The functions $a$ and $b$ range exactly over the functions $a \in L_2(Q)$ with $Qa = 0$ and $b \in L_2(P_{Q,R})$ with $\mathrm{E}_R(b(X)\mid Y) = 0$ almost surely, respectively.
Proof. Fix a differentiable submodel $t \mapsto Q_t$ with score $a$. Furthermore, for every fixed $y$ fix a differentiable submodel $t \mapsto r_t(\cdot\mid y)$ for the conditional density of $\Delta$ given $Y = y$ with score $b_0(\delta\mid y)$ such that
Because the conditional densities satisfy CAR, the function $b_0(\delta\mid y)$ must actually be a function $b(x)$ of $x$ only. Because it corresponds to a score for the conditional model, it is further restricted by the equations $\int b_0(\delta\mid y)\,r(\delta\mid y)\,d\nu(\delta) = \mathrm{E}_R\bigl(b(X)\mid Y = y\bigr) = 0$ for every $y$. Apart from this and square integrability, $b_0$ can be chosen freely, for instance bounded.
By a standard argument, with $Q \times R$ denoting the law of $(Y, \Delta)$ under $Q$ and $r$,
The interest of the representation of scores given in the preceding theorem goes beyond
the case that the models 𝒬 and ℛ are restricted by CAR only, as is assumed in the theorem.
It shows that, under CAR, any tangent space for 𝒫 can be decomposed into two orthogonal
pieces, the first part consisting of the conditional expectations E_{Q,R}(a(Y) | X) of scores a
for the model of Y (and their limits) and the second part being scores b for the model ℛ
describing the "missingness pattern." CAR ensures that the latter are functions of x already
and need not be projected, and also that the two sets of scores are orthogonal. By the product
structure of the likelihood q(y) r(δ | y), scores a and b for q and r in the model 𝒬 × ℛ are
always orthogonal. This orthogonality may be lost by projecting them on the functions of
x, but not so under CAR, because b is equal to its projection.
In models in which there is a positive probability of observing the complete data, there
is an interesting way to obtain all influence functions of a given parameter P_{Q,R} ↦ χ(Q).
Let C be a set of possible values of Δ leading to a complete observation, that is, φ(y, δ) = y
whenever δ ∈ C, and suppose that R(C | y) = P_R(Δ ∈ C | Y = y) is positive almost surely.
Suppose for the moment that R is known, so that the tangent space for X consists only of
functions of the form E_{Q,R}(a(Y) | X). If χ̃_Q(Y) is an influence function of the parameter
Q ↦ χ(Q) on the model 𝒬, then

ψ̃_{P_{Q,R}}(x) = (1{δ ∈ C} / R(C | y)) χ̃_Q(y)
is an influence function for the parameter ψ(P_{Q,R}) = χ(Q) on the model 𝒫. To see this,
first note that, indeed, it is a function of x, as the indicator 1{δ ∈ C} is nonzero only if
(y, δ) = x. Second,

E_{Q,R} ψ̃_{P_{Q,R}}(X) E_{Q,R}(a(Y) | X) = E_{Q,R} (1{Δ ∈ C} / R(C | Y)) χ̃_Q(Y) a(Y)
= E_{Q,R} χ̃_Q(Y) a(Y).
The influence function we have found is just one of many influence functions, the other ones
being obtainable by adding the orthocomplement of the tangent set. This particular influence
function corresponds to ignoring incomplete observations altogether but reweighting the
influence function for the full model to eliminate the bias caused by such neglect. Usually,
ignoring all partial observations does not yield an efficient procedure, and correspondingly
this influence function is usually not the efficient influence function.
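The reweighting identity behind this construction, E[1{Δ ∈ C} χ̃_Q(Y)/R(C | Y)] = E χ̃_Q(Y), can be checked numerically. The following Python sketch is not part of the text: the distribution of Y, the influence function of the mean, and the (known) missingness probability π(y) with C = {1} are all chosen for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Full data Y with EY = 1, and the influence function of the mean,
# chi(y) = y - EY.
y = rng.normal(loc=1.0, scale=1.0, size=n)
chi = y - 1.0

# Missingness: Delta = 1 marks a complete observation; the probability
# pi(y) = P(Delta = 1 | Y = y) is assumed known and bounded away from zero.
pi = np.clip(1.0 / (1.0 + np.exp(-y)), 0.05, 0.95)
delta = rng.binomial(1, pi)

# Reweighted complete-case influence function 1{Delta = 1} chi(Y) / pi(Y).
ipw = delta / pi * chi

# Reweighting removes the bias caused by ignoring incomplete observations:
# the average is close to E chi(Y) = 0.
print(ipw.mean())
```

Although the deletion of incomplete observations is selective, the weights 1/π(Y) restore unbiasedness; as remarked above, the resulting influence function is usually not the efficient one.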
All other influence functions, including the efficient influence function, can be found by
adding the orthocomplement of the tangent set. An attractive way of doing this is:
- by varying χ̃_Q over all possible influence functions for Q ↦ χ(Q), combined with
- by adding all functions b(x) with E_R(b(X) | Y) = 0.
This is proved in the following lemma. We still assume that R is known; if it is not, then
the resulting functions need not even be influence functions.
25.41 Lemma. Suppose that the parameter Q ↦ χ(Q) on the model 𝒬 is differentiable
at Q, and that the conditional probability R(C | Y) = P(Δ ∈ C | Y) of having a complete
observation is bounded away from zero. Then the parameter P_{Q,R} ↦ χ(Q) on the model
(P_{Q,R} : Q ∈ 𝒬) is differentiable at P_{Q,R} and any of its influence functions can be written
in the form

(1{δ ∈ C} / R(C | y)) χ̃_Q(y) + b(x),

for χ̃_Q an influence function of the parameter Q ↦ χ(Q) on the model 𝒬 and a function
b ∈ L₂(P_{Q,R}) satisfying E_R(b(X) | Y) = 0. This decomposition is unique. Conversely,
every function of this form is an influence function.
Proof. The function in the display with b = 0 has already been seen to be an influence
function. (Note that it is square-integrable, as required.) Any function b(X) such that
E_R(b(X) | Y) = 0 satisfies E_{Q,R} b(X) E_{Q,R}(a(Y) | X) = 0 and hence is orthogonal to the
tangent set, whence it can be added to any influence function.
To see that the decomposition is unique, it suffices to show that the function as given in
the lemma can be identically zero only if χ̃_Q = 0 and b = 0. If it is zero, then its conditional
expectation with respect to Y, which is χ̃_Q, is zero, and reinserting this we find that b = 0
as well.
Conversely, an arbitrary influence function ψ̃_{P_{Q,R}} of P_{Q,R} ↦ χ(Q) can be written in the
form

ψ̃_{P_{Q,R}}(x) = (1{δ ∈ C} / R(C | y)) χ̃(y) + [ψ̃_{P_{Q,R}}(x) − (1{δ ∈ C} / R(C | y)) χ̃(y)].

For χ̃(Y) = E_R(ψ̃_{P_{Q,R}}(X) | Y), the conditional expectation of the part within square brackets
with respect to Y is zero and hence this part qualifies as a function b. This function χ̃ is an
influence function for Q ↦ χ(Q), as follows from the equality E_{Q,R} E_R(ψ̃_{P_{Q,R}}(X) | Y) a(Y) =
E_{Q,R} ψ̃_{P_{Q,R}}(X) E_{Q,R}(a(Y) | X) for every a. ∎
Even though the functions χ̃_Q and b in the decomposition given in the lemma are
uniquely determined, the decomposition is not orthogonal, and (even under CAR) the
decomposition does not agree with the decomposition of the (nonparametric) tangent space
given in Theorem 25.40. The second term is as the functions b in this theorem, but the
leading term is not in the maximal tangent set for Q.
The preceding lemma is valid without assuming CAR. Under CAR it acquires an inter-
esting interpretation, because in that case the functions b range exactly over all scores for
the parameter r that we would have had if R were completely unknown. If R is known,
then these scores are in the orthocomplement of the tangent set and can be added to any
influence function to find other influence functions.
A second special feature of CAR is that a similar representation becomes available in
the case that R is (partially) unknown. Because the tangent set for the model (P_{Q,R} : Q ∈
𝒬, R ∈ ℛ) contains the tangent set for the model (P_{Q,R} : Q ∈ 𝒬) in which R is known,
the influence functions for the bigger model are a subset of the influence functions of the
smaller model. Because our parameter χ(Q) depends on Q only, they are exactly those
influence functions in the smaller model that are orthogonal to the set ℛṖ_{P_{Q,R}} of all score
functions for R. This is true in general, also without CAR. Under CAR they can be found
by subtracting the projections onto the set of scores for R.
25.42 Corollary. Suppose that the conditions of the preceding lemma hold and that the
tangent space Ṗ_{P_{Q,R}} for the model (P_{Q,R} : Q ∈ 𝒬, R ∈ ℛ) is taken to be the sum 𝒬Ṗ_{P_{Q,R}} +
ℛṖ_{P_{Q,R}} of tangent spaces of scores for Q and R separately. If 𝒬Ṗ_{P_{Q,R}} and ℛṖ_{P_{Q,R}} are
orthogonal, in particular under CAR, any influence function of P_{Q,R} ↦ χ(Q) for the model
(P_{Q,R} : Q ∈ 𝒬, R ∈ ℛ) can be obtained by taking the functions given by the preceding
lemma and subtracting their projection onto lin ℛṖ_{P_{Q,R}}.
Proof. The influence functions for the bigger model are exactly those influence functions
for the model in which R is known that are orthogonal to ℛṖ_{P_{Q,R}}. These do not change
by subtracting their projection onto this space. Thus we can find all influence functions as
claimed.
If the score spaces for Q and R are orthogonal, then the projection of an influence function
onto lin ℛṖ_{P_{Q,R}} is orthogonal to 𝒬Ṗ_{P_{Q,R}}, and hence the inner products with elements of this
set are unaffected by subtracting it. Thus we necessarily obtain an influence function. ∎
The efficient influence function ψ̃_{P_{Q,R}} is an influence function and hence can be written
in the form of Lemma 25.41 for some χ̃_Q and b. By definition it is the unique influence
function that is contained in the closed linear span of the tangent set. Because the parameter
of interest depends on Q only, the efficient influence function is the same (under CAR or,
more generally, if 𝒬Ṗ_{P_{Q,R}} ⊥ ℛṖ_{P_{Q,R}}), whether we assume R known or not. One way of
finding the efficient influence function is to minimize the variance of an arbitrary influence
function as given in Lemma 25.41 over χ̃_Q and b.
25.43 Example (Missing at random). In the case of MAR models there is a simple rep-
resentation for the functions b(x) in Lemma 25.41. Because MAR is a special case of CAR,
these functions can be obtained by computing all the scores for R in the model for (Y, Δ)
under the assumption that R is completely unknown, by Theorem 25.40. Suppose that Δ
takes only the values 0 and 1, where 1 indicates a full observation, as in Example 25.39,
and set π(y) := P(Δ = 1 | Y = y). Under MAR π(y) is actually a function of φ(y, 0) only.
The likelihood for (Y, Δ) takes the form

q(y) r(δ | y) = q(y) π(y)^δ (1 − π(y))^{1−δ}.

Insert a path π_t = π + tc, and differentiate the log likelihood with respect to t at t = 0 to
obtain a score for R of the form

(δ / π(y)) c(y) − ((1 − δ) / (1 − π(y))) c(y) = ((δ − π(y)) / (π(y)(1 − π(y)))) c(y).

To remain within the model the functions π_t and π, whence c, may depend on y only
through φ(y, 0). Apart from this restriction, the preceding display gives a candidate for b
in Lemma 25.41 for any c, and it gives all such b.
Thus, with a slight change of notation any influence function can be written in the form

(δ / π(y)) χ̃_Q(y) − ((δ − π(y)) / π(y)) c(y).

One approach to finding the efficient influence function in this case is first to minimize the
variance of this influence function with respect to c and next to optimize over χ̃_Q. The first
step of this plan can be carried out in general. Minimizing with respect to c is a weighted
least-squares problem, whose solution is given by

c(y) = E_Q(χ̃_Q(Y) | φ(Y, 0) = φ(y, 0)).

Splitting the inner product of these functions on the first minus sign, we obtain two terms,
both of which reduce to E_{Q,R} χ̃_Q(Y) c(Y)(1 − π)(Y)/π(Y). □
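The variance reduction achieved by the optimal c can be checked numerically. In the following Python sketch, which is not from the text, the joint law of Y = (Y₁, Y₂) and the propensity π are invented for the example: φ(y, 0) = y₁ is always observed, the parameter is χ(Q) = E Y₂, and the optimal choice is c(y₁) = E(χ̃_Q(Y) | Y₁ = y₁).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# Y = (Y1, Y2) with phi(y, 0) = y1 always observed; Y2 is seen only if delta = 1.
y1 = rng.normal(size=n)
y2 = y1 + rng.normal(size=n)        # parameter chi(Q) = E[Y2] = 0
chi = y2                            # influence function y2 - E[Y2]

# MAR: pi(y) = P(Delta = 1 | Y = y) depends on y only through y1.
pi = np.clip(1.0 / (1.0 + np.exp(-y1)), 0.1, 0.9)
delta = rng.binomial(1, pi)

def influence(c):
    # The displayed class of influence functions, indexed by the function c.
    return delta / pi * chi - (delta - pi) / pi * c

ipw = influence(np.zeros(n))        # c = 0: pure inverse-probability weighting
aipw = influence(y1)                # optimal c(y1) = E(chi | Y1 = y1) = y1

print(ipw.mean(), aipw.mean())      # both are nearly unbiased for 0
print(ipw.var(), aipw.var())        # the optimal c reduces the variance
```

Setting c = 0 recovers the reweighted complete-case influence function of the previous section; the correction term indexed by the optimal c exploits the always-observed coordinate and strictly lowers the variance whenever χ̃_Q is correlated with φ(Y, 0).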
25.6 Testing
The problem of testing a null hypothesis H₀ : ψ(P) ≤ 0 versus the alternative H₁ : ψ(P) > 0
is closely connected to the problem of estimating the function ψ(P). It ought to be true that
a test based on an asymptotically efficient estimator of ψ(P) is, in an appropriate sense,
asymptotically optimal. For real-valued parameters ψ(P) this optimality can be taken in
the absolute sense of an asymptotically (locally) uniformly most powerful test. With higher-
dimensional parameters we run into the same problem of defining a satisfactory notion of
asymptotic optimality as encountered for parametric models in Chapter 15. We leave the
latter case undiscussed and concentrate on real-valued functionals ψ : 𝒫 ↦ ℝ.
Given a model 𝒫 and a measure P on the boundary of the hypotheses, that is, ψ(P) = 0,
we want to study the "local asymptotic power" in a neighborhood of P. Defining a local
power function in the present infinite-dimensional case is somewhat awkward, because
there is no natural "rescaling" of the parameter set, such as in the Euclidean case. We
shall utilize submodels corresponding to a tangent set. Given an element g in a tangent
set Ṗ_P, let t ↦ P_{t,g} be a differentiable submodel with score function g along which ψ is
differentiable. For every such g for which ψ̇_P g = Pψ̃_P g > 0, the submodel P_{t,g} belongs
to the alternative hypothesis H₁ for (at least) every sufficiently small, positive t, because
ψ(P_{t,g}) = t Pψ̃_P g + o(t) if ψ(P) = 0. We shall study the power at the alternatives P_{h/√n,g}.

25.44 Theorem. Let the functional ψ : 𝒫 ↦ ℝ be differentiable at P with ψ(P) = 0. Then
for every sequence of power functions πₙ of level-α tests for H₀ : ψ(P) ≤ 0, every h ≥ 0,
and every g ∈ Ṗ_P,

limsup πₙ(P_{h/√n,g}) ≤ 1 − Φ(z_α − h Pψ̃_P g / (Pψ̃_P²)^{1/2}).
Proof. This theorem is essentially Theorem 15.4 applied to sufficiently rich submodels.
Because the present situation does not fit exactly in the framework of Chapter 15, we
rework the proof. Fix arbitrary h₁ and g₁ for which we desire to prove the upper bound.
For notational convenience assume that Pψ̃_P² = 1.
Fix an orthonormal base g_P = (g₁, ..., g_m)ᵀ of an arbitrary finite-dimensional subspace
of Ṗ_P (containing the fixed g₁). For every g ∈ lin g_P, let t ↦ P_{t,g} be a submodel with score
g along which the parameter ψ is differentiable. Each of the submodels t ↦ P_{t,g} is locally
asymptotically normal at t = 0 by Lemma 25.14. Therefore, with S^{m−1} the unit sphere
of ℝ^m,

(P^n_{h/√n, aᵀg_P} : h ≥ 0, a ∈ S^{m−1}) → (N(ha, I) : h ≥ 0, a ∈ S^{m−1}),
in the sense of convergence of experiments. Fix a subsequence along which the limsup
in the statement of the theorem is taken for h = h₁ and g = g₁. By contiguity arguments,
we can extract a further subsequence along which the functions πₙ(P_{h/√n,aᵀg_P}) converge
pointwise to a limit π(h, a) for every (h, a). By Theorem 15.1, the function π(h, a) is the
power function of a test in the normal limit experiment. If it can be shown that this test is
of level α for testing H₀ : aᵀPψ̃_P g_P ≤ 0, then Proposition 15.2 shows that, for every (a, h),
π(h, a) ≤ 1 − Φ(z_α − h aᵀPψ̃_P g_P / ((Pψ̃_P g_P)ᵀ(Pψ̃_P g_P))^{1/2}).
The orthogonal projection of ψ̃_P onto lin g_P is equal to (Pψ̃_P g_Pᵀ) g_P, and has squared
length (Pψ̃_P g_P)ᵀ(Pψ̃_P g_P). By choosing lin g_P large enough, we can ensure that this length is
arbitrarily close to Pψ̃_P². Choosing (h, a) = (h₁, e₁) completes the proof, because
limsup πₙ(P_{h₁/√n,g₁}) ≤ π(h₁, e₁), by construction.
To complete the proof, we show that π is of level α. Fix any h > 0 and an a ∈ S^{m−1}
such that aᵀPψ̃_P g_P < 0. Then ψ(P_{h/√n,aᵀg_P}) = h aᵀPψ̃_P g_P/√n + o(n^{−1/2}) < 0 for all
sufficiently large n, so that these measures eventually belong to the null hypothesis, whence
π(h, a) = lim πₙ(P_{h/√n,aᵀg_P}) ≤ α.
Thus, the test with power function π is of level α for testing H₀ : aᵀPψ̃_P g_P < 0. By
continuity it is of level α for testing H₀ : aᵀPψ̃_P g_P ≤ 0. ∎
25.45 Lemma. Let the functional ψ : 𝒫 ↦ ℝ be differentiable at P with ψ(P) = 0. Sup-
pose that the sequence Tₙ is regular at P with a N(0, Pψ̃_P²)-limit distribution. Furthermore,
suppose that Sₙ² →_P Pψ̃_P². Then, for every h ≥ 0 and g ∈ Ṗ_P,

P_{h/√n,g}(√n Tₙ/Sₙ ≥ z_α) → 1 − Φ(z_α − h Pψ̃_P g / (Pψ̃_P²)^{1/2}).

Proof. By the efficiency of Tₙ and the differentiability of ψ, the sequence √n Tₙ converges
under P_{h/√n,g} to a normal distribution with mean hPψ̃_P g and variance Pψ̃_P². ∎
25.46 Example (Wilcoxon test). Suppose that the observations are two independent ran-
dom samples X₁, ..., Xₙ and Y₁, ..., Yₙ from distribution functions F and G, respectively.
To fit this two-sample problem in the present i.i.d. set-up, we pair the two samples and think
of (Xᵢ, Yᵢ) as a single observation from the product measure F × G on ℝ². We wish to
test the null hypothesis H₀ : ∫ F dG ≤ ½ versus the alternative H₁ : ∫ F dG > ½. The
Wilcoxon test, which rejects for large values of ∫ 𝔽ₙ d𝔾ₙ, is asymptotically efficient, rel-
ative to the model in which F and G are completely unknown. This gives a different
perspective on this test, which in Chapters 14 and 15 was seen to be asymptotically effi-
cient for testing location in the logistic location-scale family. Actually, this finding is an
example of the general principle that, in the situation that the underlying distribution of the
observations is completely unknown, empirical-type statistics are asymptotically efficient
for whatever they naturally estimate or test (also see Example 25.24 and section 25.7). The
present conclusion concerning the Wilcoxon test extends to most other test statistics.
By the preceding lemma, the efficiency of the test follows from the efficiency of the
Wilcoxon statistic as an estimator for the function ψ(F × G) = ∫ F dG. This may be
proved by Theorem 25.47, or by the following direct argument.
The model 𝒫 is the set of all product measures F × G. To generate a tangent set, we can
perturb both F and G. If t ↦ F_t and t ↦ G_t are differentiable submodels (of the collection
of all probability distributions on ℝ) with score functions a and b at t = 0, respectively, then
the submodel t ↦ F_t × G_t has score function a(x) + b(y). Thus, as a tangent space we may
take the set of all square-integrable functions with mean zero of this type. For simplicity, we
could restrict ourselves to bounded functions a and b and use the paths dF_t = (1 + ta) dF
and dG_t = (1 + tb) dG. The closed linear span of the resulting tangent set is the same as
before. Then, by simple algebra,

(d/dt) ψ(F_t × G_t)|_{t=0} = ∫ (1 − G₋) a dF + ∫ F b dG.

We conclude that the function (x, y) ↦ (1 − G₋)(x) + F(y) is an influence function of ψ.
This is of the form a(x) + b(y) but does not have mean zero; the efficient influence function
is found by subtracting the mean.
The efficiency of the Wilcoxon statistic is now clear from Lemma 25.23 and the asymp-
totic linearity of the Wilcoxon statistic, which is proved by various methods in Chapters 12,
13, and 20. □
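The two ingredients of this argument, the statistic ∫ 𝔽ₙ d𝔾ₙ and its influence function, can be illustrated numerically. The Python sketch below is not part of the text; it assumes F = G = Φ (the standard normal distribution function), so that ∫ F dG = ½ and the centered influence function is (1 − Φ(x)) + Φ(y) − 1.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n = 2000

# Paired two-sample data (X_i, Y_i); both samples N(0,1), so F = G = Phi.
x = rng.normal(size=n)
y = rng.normal(size=n)

# Wilcoxon statistic int F_n dG_n: the fraction of pairs with X_i <= Y_j.
t_n = (x[:, None] <= y[None, :]).mean()

# Influence function (x, y) -> (1 - G(x)) + F(y) - 2 * int F dG, with
# F = G the standard normal distribution function.
cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
infl = (1.0 - cdf(x)) + cdf(y) - 1.0

# Asymptotic linearity: T_n - 1/2 equals the average influence value
# up to a remainder of smaller order than 1/sqrt(n).
print(t_n - 0.5, infl.mean())
```

The two printed numbers agree to within the order 1/n remainder of the U-statistic decomposition, which is what the asymptotic linearity cited above asserts.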
The asymptotic efficiency of φ(Tₙ) follows, provided that the function x ↦ φ′_{ψ(P)} ψ̃_P(x)
is the efficient influence function of the parameter P ↦ φ ∘ ψ(P). If the coordinates of
ψ̃_P are contained in the closed linear span of the tangent set, then so are the coordinates
of φ′_{ψ(P)} ψ̃_P, because multiplication by the matrix φ′_{ψ(P)} means taking linear combinations.
Furthermore, if ψ is differentiable at P (as a statistical parameter on the model 𝒫) and φ is
differentiable at ψ(P) (in the ordinary sense of calculus), then

(φ ∘ ψ(P_t) − φ ∘ ψ(P)) / t → φ′_{ψ(P)} ψ̇_P g = P φ′_{ψ(P)} ψ̃_P g.

Thus the function φ′_{ψ(P)} ψ̃_P is an influence function and hence the efficient influence func-
tion.
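As a concrete finite-dimensional illustration (not from the text, with the exponential sampling distribution chosen for the example), take ψ(P) = (EX, EX²), estimated efficiently by the empirical moments, and φ(a, b) = b − a², so that φ ∘ ψ(P) is the variance of X. The chain rule gives the influence function φ′_{ψ(P)} ψ̃_P(x) = (x² − EX²) − 2 EX (x − EX).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
x = rng.exponential(1.0, size=n)       # EX = 1, EX^2 = 2, var X = 1

# psi(P) = (EX, EX^2), estimated by the (efficient) empirical moments;
# phi(a, b) = b - a^2 maps psi(P) to the variance of X.
a, b = 1.0, 2.0                        # the true value psi(P)
t_n = np.array([x.mean(), (x**2).mean()])
phi_tn = t_n[1] - t_n[0]**2

# Chain rule: the influence function of phi o psi is
# phi'_{psi(P)} psi_tilde(x) = (x^2 - b) - 2a(x - a).
infl = (x**2 - b) - 2.0 * a * (x - a)

# Asymptotic linearity of the plug-in estimator phi(T_n):
print(phi_tn - (b - a**2), infl.mean())
```

The difference between the two printed quantities is exactly −(x̄ − EX)², of order 1/n, so the linearization is visibly accurate already at moderate sample sizes.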
More involved is the same question, but with Tₙ an estimator of a parameter in a Banach
space, for instance a distribution in the space D[−∞, ∞] or in a space ℓ^∞(ℱ). The question
is empty until we have defined efficiency for this situation. A definition of asymptotic
efficiency of Banach-valued estimators can be based on generalizations of the convolution
and minimax theorems to general Banach spaces.† We shall avoid this route and take a
more naive approach.
The dual space 𝔻* of a Banach space 𝔻 is defined as the collection of all continuous,
linear maps d* : 𝔻 ↦ ℝ. If Tₙ is a 𝔻-valued estimator for a parameter ψ(P) ∈ 𝔻, then d*Tₙ
is a real-valued estimator for the parameter d*ψ(P) ∈ ℝ. This suggests defining Tₙ to
be asymptotically efficient at P ∈ 𝒫 if √n(Tₙ − ψ(P)) converges under P in distribution
to a tight limit and d*Tₙ is asymptotically efficient at P for estimating d*ψ(P), for every
d* ∈ 𝔻*.
This definition presumes that the parameters d*ψ are differentiable at P in the sense of
section 25.3. We shall require a bit more. Say that ψ : 𝒫 ↦ 𝔻 is differentiable at P relative
to a given tangent set Ṗ_P if there exists a continuous linear map ψ̇_P : L₂(P) ↦ 𝔻 such that,
for every g ∈ Ṗ_P and a submodel t ↦ P_t with score function g,

(ψ(P_t) − ψ(P)) / t → ψ̇_P g.

This implies that every parameter d*ψ : 𝒫 ↦ ℝ is differentiable at P, whence, for every
d* ∈ 𝔻*, there exists a function ψ̃_{P,d*} : 𝒳 ↦ ℝ in the closed linear span of Ṗ_P such that
d*ψ̇_P(g) = P ψ̃_{P,d*} g for every g ∈ Ṗ_P. The efficiency of d*Tₙ for d*ψ can next be
understood in terms of asymptotic linearity of d* √n(Tₙ − ψ(P)), as in (25.22), with
influence function ψ̃_{P,d*}.
To avoid measurability issues, we also allow nonmeasurable functions Tₙ = Tₙ(X₁, ...,
Xₙ) of the data as estimators in this section. Let both 𝔻 and 𝔼 be Banach spaces.
25.47 Theorem. Suppose that ψ : 𝒫 ↦ 𝔻 is differentiable at P and takes its values in
a subset 𝔻_φ ⊂ 𝔻, and suppose that φ : 𝔻_φ ⊂ 𝔻 ↦ 𝔼 is Hadamard-differentiable at ψ(P)
tangentially to lin ψ̇_P(Ṗ_P). Then φ ∘ ψ : 𝒫 ↦ 𝔼 is differentiable at P. If Tₙ is a sequence
of estimators with values in 𝔻_φ that is asymptotically efficient at P for estimating ψ(P),
then φ(Tₙ) is asymptotically efficient at P for estimating φ ∘ ψ(P).

Proof. The differentiability of φ ∘ ψ is essentially a consequence of the chain rule for
Hadamard-differentiable functions (see Theorem 20.9) and is proved in the same way. The
derivative is the composition φ′_{ψ(P)} ∘ ψ̇_P.

† See, for example, Chapter 3.11 in [146] for some possibilities and references.
First, we show that the limit distribution L of the sequence √n(Tₙ − ψ(P)) concentrates
on the subspace lin ψ̇_P(Ṗ_P). By the Hahn-Banach theorem, for any S ⊂ 𝔻,

lin S = ⋂ {d ∈ 𝔻 : d*d = 0},

the intersection being over all d* ∈ 𝔻* with d*s = 0 for every s ∈ S. For a separable set S,
we can replace the intersection by a countable subintersection. Because L is tight, it
concentrates on a separable set S, and hence L gives mass 1 to the left side provided
L(d : d*d = 0) = 1 for every d* as on the right side. This probability is equal to
N(0, ‖ψ̃_{P,d*}‖²_{P,2}){0} = 1.
Now we can conclude that under the assumptions the sequence √n(φ(Tₙ) − φ ∘ ψ(P))
converges in distribution to a tight limit, by the functional delta method, Theorem 20.8.
Furthermore, for every e* ∈ 𝔼*,

e* √n(φ(Tₙ) − φ ∘ ψ(P)) = e*φ′_{ψ(P)} √n(Tₙ − ψ(P)) + o_P(1),

where, if necessary, we can extend the definition of d* = e*φ′_{ψ(P)} to all of 𝔻 in view of
the Hahn-Banach theorem. Because d* ∈ 𝔻*, the asymptotic efficiency of the sequence
Tₙ implies that the latter sequence is asymptotically linear in the influence function ψ̃_{P,d*}.
This is also the influence function of the real-valued map e*φ ∘ ψ, because

(e*φ ∘ ψ(P_t) − e*φ ∘ ψ(P)) / t → e*φ′_{ψ(P)} ψ̇_P g = P ψ̃_{P,d*} g.

Thus, e*φ(Tₙ) is asymptotically efficient at P for estimating e*φ ∘ ψ(P), for every
e* ∈ 𝔼*. ∎
The proof of the preceding theorem is relatively simple, because our definition of an
efficient estimator sequence, although not unnatural, is relatively involved.
Consider, for instance, the case that 𝔻 = ℓ^∞(S) for some set S. This corresponds to
estimating a (bounded) function s ↦ ψ(P)(s) by a random function s ↦ Tₙ(s). Then the
"marginal estimators" d*Tₙ include the estimators π_s Tₙ = Tₙ(s) for every fixed s (the
coordinate projections π_s : d ↦ d(s) are elements of the dual space ℓ^∞(S)*), but include
many other, more complicated functions of Tₙ as well. Checking the efficiency of every
marginal of the general type d*Tₙ may be cumbersome.
The deeper result of this section is that this is not necessary. Under the conditions
of Theorem 17.14, the limit distribution of the sequence √n(Tₙ − ψ(P)) in ℓ^∞(S) is
determined by the limit distributions of these processes evaluated at finite sets of "times"
s₁, ..., s_k. Thus, we may hope that the asymptotic efficiency of Tₙ can also be characterized
by the behavior of the marginals Tₙ(s) only. Our definition of a differentiable parameter
ψ : 𝒫 ↦ 𝔻 is exactly right for this purpose.
25.48 Theorem (Efficiency in ℓ^∞(S)). Suppose that ψ : 𝒫 ↦ ℓ^∞(S) is differentiable at
P, and suppose that Tₙ(s) is asymptotically efficient at P for estimating ψ(P)(s), for every
s ∈ S. Then Tₙ is asymptotically efficient at P provided that the sequence √n(Tₙ − ψ(P))
converges under P in distribution to a tight limit in ℓ^∞(S).
The theorem is a consequence of a more general principle that obtains the efficiency of
Tₙ from the efficiency of d*Tₙ for a sufficient number of elements d* ∈ 𝔻*. By definition,
efficiency of Tₙ means efficiency of d*Tₙ for all d* ∈ 𝔻*. In the preceding theorem the
efficiency is deduced from efficiency of the estimators π_s Tₙ for all coordinate projections π_s
on ℓ^∞(S). The coordinate projections are a fairly small subset of the dual space of ℓ^∞(S).
What makes them work is the fact that they are of norm 1 and satisfy ‖z‖_S = sup_s |π_s z|.
25.49 Lemma. Suppose that ψ : 𝒫 ↦ 𝔻 is differentiable at P, and suppose that d′Tₙ is
asymptotically efficient at P for estimating d′ψ(P) for every d′ in a subset 𝔻′ ⊂ 𝔻* such
that, for some constant C,

‖d‖ ≤ C sup {|d′(d)| : d′ ∈ 𝔻′, ‖d′‖ ≤ 1},    for every d ∈ 𝔻.

Then Tₙ is asymptotically efficient at P for estimating ψ(P), provided that the sequence
√n(Tₙ − ψ(P)) converges under P in distribution to a tight limit.
Proof. The efficiency of all estimators d′Tₙ for every d′ ∈ 𝔻′ implies their asymptotic
linearity. This shows that d′Tₙ is also asymptotically linear and efficient for every d′ ∈
lin 𝔻′. Thus, it is no loss of generality to assume that 𝔻′ is a linear space.
By Prohorov's theorem, every subsequence of √n(Tₙ − ψ(P)) has a further subsequence
that converges weakly under P to a tight limit T. For simplicity, assume that the whole
sequence converges; otherwise argue along subsequences. By the continuous-mapping the-
orem, d* √n(Tₙ − ψ(P)) converges in distribution to d*T for every d* ∈ 𝔻*. By the
assumption of efficiency, the sequence d* √n(Tₙ − ψ(P)) is asymptotically linear in the
influence function ψ̃_{P,d*} for every d* ∈ 𝔻′. Thus, the variable d*T is normally distributed
with mean zero and variance Pψ̃²_{P,d*} for every d* ∈ 𝔻′. We show below that this is then
automatically true for every d* ∈ 𝔻*.
By Le Cam's third lemma (which by inspection of its proof can be seen to be valid
for general metric spaces), the sequence √n(Tₙ − ψ(P)) is asymptotically tight under
P_{t/√n} as well, for every differentiable path t ↦ P_t. By the differentiability of ψ, the
sequence √n(Tₙ − ψ(P_{t/√n})) is tight also. Then, exactly as in the preceding paragraph,
we can conclude that the sequence d* √n(Tₙ − ψ(P_{t/√n})) converges in distribution to
a normal distribution with mean zero and variance Pψ̃²_{P,d*}, for every d* ∈ 𝔻*. Thus,
d*Tₙ is asymptotically efficient for estimating d*ψ(P) for every d* ∈ 𝔻* and hence Tₙ is
asymptotically efficient for estimating ψ(P), by definition.
It remains to prove that a tight, random element T in 𝔻 such that d*T has law
N(0, ‖ψ̃_{P,d*}‖²_{P,2}) for every d* ∈ 𝔻′ necessarily verifies this same relation for every d* ∈ 𝔻*.† First
assume that 𝔻 = ℓ^∞(S) and that 𝔻′ is the linear space spanned by all coordinate projections.
Because T is tight, there exists a semimetric ρ on S such that S is totally bounded
and almost all sample paths of T are contained in UC(S, ρ) (see Lemma 18.15). Then
automatically the range of ψ̇_P is contained in UC(S, ρ) as well.
To see the latter, we note first that the map s ↦ ET(s)T(u) is contained in UC(S, ρ)
for every fixed u: If ρ(s_m, t_m) → 0, then T(s_m) − T(t_m) → 0 almost surely and hence in
second mean, in view of the zero-mean normality of T(s_m) − T(t_m) for every m, whence
|ET(s_m)T(u) − ET(t_m)T(u)| → 0 by the Cauchy-Schwarz inequality. Thus, the map

s ↦ ψ̇_P(ψ̃_{P,π_u})(s) = π_s ψ̇_P(ψ̃_{P,π_u}) = ⟨ψ̃_{P,π_u}, ψ̃_{P,π_s}⟩_P = ET(u)T(s)
† The proof of this lemma would be considerably shorter if we knew already that there exists a tight random
element T with values in 𝔻 such that d*T has a N(0, ‖ψ̃_{P,d*}‖²_{P,2})-distribution for every d* ∈ 𝔻*. Then it
suffices to show that the distribution of T is uniquely determined by the distributions of d*T for d* ∈ 𝔻′.
is contained in the space UC(S, ρ) for every u. By the linearity and continuity of the
derivative ψ̇_P, the same is then true for the map s ↦ ψ̇_P(g)(s) for every g in the closed
linear span of the gradients ψ̃_{P,π_u} as u ranges over S. It is even true for every g in the
tangent set, because ψ̇_P(g)(s) = ψ̇_P(Πg)(s) for every g and s, with Π the projection onto
the closure of lin ψ̃_{P,π_u}.
By a minor extension of the Riesz representation theorem for the dual space of C(S, ρ),
the restriction of a fixed d* ∈ 𝔻* to UC(S, ρ) takes the form d*z = ∫ z dμ, for a finite
signed Borel measure μ.
Another useful application of the lemma concerns the estimation of functionals ψ(P) =
(ψ₁(P), ψ₂(P)) with values in a product 𝔻₁ × 𝔻₂ of two Banach spaces. Even though
marginal weak convergence does not imply joint weak convergence, marginal efficiency
implies joint efficiency!
25.50 Theorem (Efficiency in product spaces). Suppose that ψᵢ : 𝒫 ↦ 𝔻ᵢ is differentiable
at P, and suppose that T_{n,i} is asymptotically efficient at P for estimating ψᵢ(P), for i = 1, 2.
Then (T_{n,1}, T_{n,2}) is asymptotically efficient at P for estimating (ψ₁(P), ψ₂(P)) provided
that the sequences √n(T_{n,i} − ψᵢ(P)) are asymptotically tight in 𝔻ᵢ under P, for i = 1, 2.

Proof. Let 𝔻′ be the set of all maps (d₁, d₂) ↦ dᵢ*(dᵢ) for dᵢ* ranging over 𝔻ᵢ*, and i = 1, 2.
By the Hahn-Banach theorem, ‖dᵢ‖ = sup {|dᵢ*(dᵢ)| : ‖dᵢ*‖ = 1, dᵢ* ∈ 𝔻ᵢ*}. Thus, the product
norm ‖(d₁, d₂)‖ = ‖d₁‖ ∨ ‖d₂‖ satisfies the condition of the preceding lemma (with C = 1
and equality). ∎
25.51 Example (Random censoring). In section 25.10.1 it is seen that the distribution of
X = (C ∧ T, 1{T ≤ C}) in the random censoring model can be any distribution on the
sample space. It follows by Example 20.16 that the empirical subdistribution functions ℍ_{0n}
and ℍ_{1n} are asymptotically efficient. By Example 20.15 the product limit estimator is a
Hadamard-differentiable functional of the empirical subdistribution functions. Thus, the
product limit estimator is asymptotically efficient. □
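The construction of the product limit estimator as a functional of the empirical subdistribution of (X, Δ) can be sketched numerically. In the following Python example (not from the text), the exponential survival and censoring distributions are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Random censoring: observe X = min(T, C) and the indicator 1{T <= C}.
t = rng.exponential(1.0, size=n)         # survival times, S(t) = exp(-t)
c = rng.exponential(2.0, size=n)         # censoring times
x = np.minimum(t, c)
d = (t <= c).astype(float)               # 1 = uncensored

# Product-limit estimator built from the empirical (sub)distributions:
# at each observed time, the factor 1 - dH1_n(s) / (1 - H_n(s-)).
order = np.argsort(x)
x_sorted, d_sorted = x[order], d[order]
at_risk = n - np.arange(n)               # n * (1 - H_n(s-)) at each ordered time
factors = 1.0 - d_sorted / at_risk       # jumps of H1_n enter only when d = 1
survival = np.cumprod(factors)

# Compare with the true survival function at t0 = 1.
t0 = 1.0
km_at_t0 = survival[np.searchsorted(x_sorted, t0) - 1]
print(km_at_t0, np.exp(-t0))             # the two are close
```

The estimator uses only the ordered observation times and the censoring indicators, that is, only the pair (ℍ_{0n}, ℍ_{1n}); the efficiency asserted in the example then transfers through the Hadamard-differentiable product-limit map.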
25.8 Efficient Score Equations
Here we use the efficient score function instead of the ordinary score function, and we
substitute an estimator η̂ₙ for the unknown nuisance parameter. A refinement of this method
has been applied successfully to a number of examples, and the method is likely to work
in many other examples. A disadvantage is that the method requires an explicit form of
the efficient score function, or an efficient algorithm to compute it. Because, in general,
the efficient score function is defined only implicitly as an orthogonal projection, this may
preclude practical implementation.
A variation on this approach is to obtain an estimator η̂ₙ(θ) of η for each given value of
θ, and next to solve θ from the equation
Σⁿᵢ₌₁ ℓ̃_{θ,η̂ₙ(θ)}(Xᵢ) = 0.

The analysis rests on two conditions on the estimated efficient score function:

√n P_{θ,η} ℓ̃_{θ,η̂ₙ} →_P 0,    (25.52)

P_{θ,η} ‖ℓ̃_{θ,η̂ₙ} − ℓ̃_{θ,η}‖² →_P 0.    (25.53)
† The notation P ℓ̃_{θ,η̂} is an abbreviation for the integral ∫ ℓ̃_{θ,η̂}(x) dP(x). Thus the expectation is taken with respect
to x only and not with respect to η̂.
The second condition (25.53) merely requires that the "plug-in" estimator ℓ̃_{θ,η̂ₙ} is a consistent
estimator for the true efficient influence function. Because P_{θ,η} ℓ̃_{θ,η} = 0, the first condition
(25.52) requires that the "bias" of the plug-in estimator, due to estimating the nuisance
parameter, converge to zero faster than 1/√n. Such a condition comes out naturally in the
proofs. A partial motivation is that the efficient score function is orthogonal to the score
functions for the nuisance parameter, so that its expectation should be insensitive to changes
in η.
25.54 Theorem. Suppose that the model {P_{θ,η} : θ ∈ Θ} is differentiable in quadratic mean
with respect to θ at (θ, η) and let the efficient information matrix Ĩ_{θ,η} be nonsingular. Assume
that (25.52) and (25.53) hold. Let θ̂ₙ satisfy √n ℙₙ ℓ̃_{θ̂ₙ,η̂ₙ} = o_P(1) and be consistent for
θ. Furthermore, suppose that there exists a Donsker class with square-integrable envelope
function that contains every function ℓ̃_{θ̂ₙ,η̂ₙ} with probability tending to 1. Then the sequence
θ̂ₙ is asymptotically efficient at (θ, η).
Proof. Let 𝔾ₙ(θ′, η′) = √n(ℙₙ − P_{θ,η}) ℓ̃_{θ′,η′} be the empirical process indexed by the func-
tions ℓ̃_{θ′,η′}. By the assumption that the functions ℓ̃_{θ̂ₙ,η̂ₙ} are contained in a Donsker class,
together with (25.53), 𝔾ₙ(θ̂ₙ, η̂ₙ) = 𝔾ₙ(θ, η) + o_P(1), whence, by √n ℙₙ ℓ̃_{θ̂ₙ,η̂ₙ} = o_P(1),

−√n P_{θ,η} ℓ̃_{θ̂ₙ,η̂ₙ} = √n ℙₙ ℓ̃_{θ,η} + o_P(1).

The remainder of the proof consists of showing that the left side is asymptotically equivalent
to (Ĩ_{θ,η} + o_P(1)) √n(θ̂ₙ − θ), from which the theorem follows. Because Ĩ_{θ,η} = P_{θ,η} ℓ̃_{θ,η} ℓ̇ᵀ_{θ,η},
the difference of the left side of the preceding display and Ĩ_{θ,η} √n(θ̂ₙ − θ) can be written
as the sum of three terms:
√n ∫ ℓ̃_{θ̂ₙ,η̂ₙ} (p^{1/2}_{θ̂ₙ,η̂ₙ} + p^{1/2}_{θ,η}) [p^{1/2}_{θ̂ₙ,η̂ₙ} − p^{1/2}_{θ,η} − ½ (θ̂ₙ − θ)ᵀ ℓ̇_{θ,η} p^{1/2}_{θ,η}] dμ
+ √n ∫ ℓ̃_{θ̂ₙ,η̂ₙ} (p^{1/2}_{θ̂ₙ,η̂ₙ} − p^{1/2}_{θ,η}) ½ (θ̂ₙ − θ)ᵀ ℓ̇_{θ,η} p^{1/2}_{θ,η} dμ
+ √n ∫ (ℓ̃_{θ̂ₙ,η̂ₙ} − ℓ̃_{θ,η}) ℓ̇ᵀ_{θ,η} p_{θ,η} dμ (θ̂ₙ − θ).
The first and third term can easily be seen to be o_P(√n ‖θ̂ₙ − θ‖) by applying the Cauchy-
Schwarz inequality together with the differentiability of the model and (25.53). The square
of the norm of the integral in the middle term can for every sequence of constants mₙ → ∞
be bounded by a multiple of
mₙ² (∫ ‖ℓ̃_{θ̂ₙ,η̂ₙ}‖ p^{1/2}_{θ,η} |p^{1/2}_{θ̂ₙ,η̂ₙ} − p^{1/2}_{θ,η}| dμ)²
+ ∫_{‖ℓ̃_{θ̂ₙ,η̂ₙ}‖ > mₙ} ‖ℓ̃_{θ̂ₙ,η̂ₙ}‖² (p_{θ̂ₙ,η̂ₙ} + p_{θ,η}) dμ ∫ ‖ℓ̇_{θ,η}‖² p_{θ,η} dμ.
In view of (25.53), the differentiability of the model in θ, and the Cauchy-Schwarz inequal-
ity, the first term converges to zero in probability provided mₙ → ∞ sufficiently slowly
to ensure that mₙ ‖θ̂ₙ − θ‖ →_P 0. (Such a sequence exists. If Zₙ →_P 0, then there exists
a sequence εₙ ↓ 0 such that P(|Zₙ| > εₙ) → 0. Then εₙ^{−1/2} Zₙ →_P 0.) In view of the last
part of (25.53), the second term converges to zero in probability for every mₙ → ∞. This
concludes the proof of the theorem. ∎
The preceding theorem is best understood as applying to the efficient score functions
ℓ̃_{θ,η}. However, its proof only uses this to ensure that, at the true value (θ, η),

Ĩ_{θ,η} = P_{θ,η} ℓ̃_{θ,η} ℓ̇ᵀ_{θ,η}.

The theorem remains true for arbitrary, mean-zero functions ℓ̃_{θ,η} provided that this identity
holds. Thus, if an estimator (θ̂, η̂) only approximately satisfies the efficient score equation,
then the latter can be replaced by an approximation.
The theorem applies to many examples, but its conditions may be too stringent. A modification that can be theoretically carried through under minimal conditions is based on the one-step method. Suppose that we are given a sequence of initial estimators $\tilde\theta_n$ that is $\sqrt{n}$-consistent for $\theta$. We can assume without loss of generality that the estimators are discretized on a grid of meshwidth $n^{-1/2}$, which simplifies the constructions and proof. Then the one-step estimator is defined as

$$\hat\theta_n = \tilde\theta_n + \Bigl(\sum_{i=1}^n \tilde\ell_{\tilde\theta_n,\hat\eta_{n,i}}\tilde\ell_{\tilde\theta_n,\hat\eta_{n,i}}^T(X_i)\Bigr)^{-1}\sum_{i=1}^n \tilde\ell_{\tilde\theta_n,\hat\eta_{n,i}}(X_i),$$

where, with $m = \lfloor n/2 \rfloor$, the nuisance estimator $\hat\eta_{n,i}$ is based on the first half $X_1, \ldots, X_m$ of the sample if $i > m$, and on the second half $X_{m+1}, \ldots, X_n$ if $i \le m$. Thus, for $X_i$ belonging to the first half of the sample, we use an estimator $\hat\eta_{n,i}$ based on the second half of the sample, and vice versa. This sample-splitting trick is convenient in the proof, because the estimator of $\eta$ used in $\tilde\ell_{\theta,\eta}(X_i)$ is always independent of $X_i$, simultaneously for $X_i$ running through each of the two halves of the sample.
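As a purely illustrative sketch (the model, names, and numbers below are assumptions of the illustration, not part of the text), the one-step construction with sample splitting can be carried out in Python for the toy model $X \sim N(\theta, \sigma^2)$, in which $\sigma$ plays the role of the nuisance parameter and the efficient score for $\theta$ is $(x - \theta)/\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta0, sigma0 = 1000, 2.0, 1.5
x = rng.normal(theta0, sigma0, n)

theta_init = np.median(x)   # sqrt(n)-consistent initial estimator
m = n // 2                  # split the sample into two halves

def score(xs, theta, sigma):
    # efficient score for the location parameter in N(theta, sigma^2)
    return (xs - theta) / sigma**2

# the nuisance estimate used for each half comes from the *other* half
sigma_for_first_half = x[m:].std()
sigma_for_second_half = x[:m].std()
shat = np.concatenate([
    score(x[:m], theta_init, sigma_for_first_half),
    score(x[m:], theta_init, sigma_for_second_half),
])

info = np.mean(shat**2)     # estimate of the efficient information
theta_onestep = theta_init + shat.mean() / info
```

The update adds to the initial estimator the inverse of the estimated information times the average of the estimated efficient scores, as in the display above.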
The discretization of $\tilde\theta_n$ and the sample-splitting are mathematical devices that rarely are useful in practice. However, the conditions of the preceding theorem can now be relaxed to, for every deterministic sequence $\theta_n = \theta + O(n^{-1/2})$,

$$\sqrt{n}\,P_{\theta_n,\eta}\tilde\ell_{\theta_n,\hat\eta_{n,i}} \xrightarrow{P} 0, \qquad P_{\theta_n,\eta}\bigl\|\tilde\ell_{\theta_n,\hat\eta_{n,i}} - \tilde\ell_{\theta_n,\eta}\bigr\|^2 \xrightarrow{P} 0, \tag{25.55}$$

$$P_{\theta_n,\eta}\bigl\|\tilde\ell_{\theta_n,\eta} - \tilde\ell_{\theta,\eta}\bigr\|^2 \to 0. \tag{25.56}$$
25.57 Theorem. Suppose that the model $\{P_{\theta,\eta} : \theta \in \Theta\}$ is differentiable in quadratic mean with respect to $\theta$ at $(\theta, \eta)$, and let the efficient information matrix $\tilde I_{\theta,\eta}$ be nonsingular.
394 Semiparametric Models
Assume that (25.55) and (25.56) hold. Then the sequence $\hat\theta_n$ is asymptotically efficient at $(\theta, \eta)$.
Proof. Given $X_{m+1}, \ldots, X_n$, the variables in the first half of the sample are i.i.d., with conditional mean and variance

$$\mathrm{E}_{\theta_n,\eta}\bigl(\sqrt{m}\,\mathbb{P}_m(\tilde\ell_{\theta_n,\hat\eta_{n,i}} - \tilde\ell_{\theta_n,\eta}) \mid X_{m+1}, \ldots, X_n\bigr) = \sqrt{m}\,P_{\theta_n,\eta}\tilde\ell_{\theta_n,\hat\eta_{n,i}},$$
$$\mathrm{var}_{\theta_n,\eta}\bigl(\sqrt{m}\,\mathbb{P}_m(\tilde\ell_{\theta_n,\hat\eta_{n,i}} - \tilde\ell_{\theta_n,\eta}) \mid X_{m+1}, \ldots, X_n\bigr) \le P_{\theta_n,\eta}\bigl\|\tilde\ell_{\theta_n,\hat\eta_{n,i}} - \tilde\ell_{\theta_n,\eta}\bigr\|^2.$$
Both expressions converge to zero in probability by assumption (25.55). We conclude that the sum inside the conditional expectations converges conditionally, and hence also unconditionally, to zero in probability. By symmetry, the same is true for the second half of the sample, whence

$$\sqrt{n}\,\mathbb{P}_n\bigl(\tilde\ell_{\theta_n,\hat\eta_{n,i}} - \tilde\ell_{\theta_n,\eta}\bigr) \xrightarrow{P} 0.$$

We have proved this for the probability under $(\theta_n, \eta)$, but by contiguity the convergence is also under $(\theta, \eta)$.
The second part of the proof is technical, and we only report the result. The condition of differentiability of the model and (25.56) imply that

$$\sqrt{n}\,P_{\theta,\eta}\tilde\ell_{\theta_n,\eta} = -\tilde I_{\theta,\eta}\sqrt{n}(\theta_n - \theta) + o(1)$$

(see [139], p. 185). (Under stronger regularity conditions, this can also be proved by a Taylor expansion of $\tilde\ell_{\theta,\eta}$ in $\theta$.) By the definition of the efficient score function as an orthogonal projection, $P_{\theta,\eta}\tilde\ell_{\theta,\eta}\dot\ell_{\theta,\eta}^T = \tilde I_{\theta,\eta}$. Combining the preceding displays, we find that

$$\sqrt{n}\,\mathbb{P}_n\tilde\ell_{\theta_n,\hat\eta_{n,i}} = \sqrt{n}\,\mathbb{P}_n\tilde\ell_{\theta,\eta} - \tilde I_{\theta,\eta}\sqrt{n}(\theta_n - \theta) + o_P(1).$$

In view of the discretized nature of $\tilde\theta_n$, this remains true if the deterministic sequence $\theta_n$ is replaced by $\tilde\theta_n$; see the argument in the proof of Theorem 5.48.
Next we study the estimator for the information matrix. For any vector $h \in \mathbb{R}^k$, the triangle inequality yields

$$\Bigl|\bigl(\mathbb{P}_m(h^T\tilde\ell_{\theta_n,\hat\eta_{n,i}})^2\bigr)^{1/2} - \bigl(\mathbb{P}_m(h^T\tilde\ell_{\theta_n,\eta})^2\bigr)^{1/2}\Bigr| \le \Bigl(\mathbb{P}_m\bigl(h^T(\tilde\ell_{\theta_n,\hat\eta_{n,i}} - \tilde\ell_{\theta_n,\eta})\bigr)^2\Bigr)^{1/2}.$$

By (25.55), the conditional expectation under $(\theta_n, \eta)$ of the right side given $X_{m+1}, \ldots, X_n$ converges in probability to zero. A similar statement is valid for the second half of the observations. Combining this with (25.56) and the law of large numbers, we see that

$$\mathbb{P}_n\tilde\ell_{\theta_n,\hat\eta_{n,i}}\tilde\ell_{\theta_n,\hat\eta_{n,i}}^T \xrightarrow{P} \tilde I_{\theta,\eta}.$$

In view of the discretized nature of $\tilde\theta_n$, this remains true if the deterministic sequence $\theta_n$ is replaced by $\tilde\theta_n$.
The theorem follows by combining the results of the last two paragraphs with the definition of $\hat\theta_n$. ∎
A further refinement is not to restrict the estimator for the efficient score function to be a plug-in type estimator. Both theorems go through if $\tilde\ell_{\theta,\hat\eta}$ is replaced by a general estimator $\hat\ell_{n,\theta} = \hat\ell_{n,\theta}(\cdot\,; X_1, \ldots, X_n)$, provided that this satisfies the appropriately modified conditions of the theorems, and in the second theorem we use the sample-splitting scheme. In the generalization of Theorem 25.57, condition (25.55) must be replaced by

$$\sqrt{n}\,P_{\theta_n,\eta}\hat\ell_{n,\theta_n} \xrightarrow{P} 0, \qquad P_{\theta_n,\eta}\bigl\|\hat\ell_{n,\theta_n} - \tilde\ell_{\theta_n,\eta}\bigr\|^2 \xrightarrow{P} 0. \tag{25.58}$$
The proofs are the same. This opens the door to more tricks and further relaxation of the regularity conditions. An intermediate theorem concerning one-step estimators, but without discretization or sample-splitting, can also be proved under the conditions of Theorem 25.54. This removes the conditions of existence and consistency of solutions to the efficient score equation.

The theorems reduce the problem of efficient estimation of $\theta$ to estimation of the efficient score function. The estimator of the efficient score function must satisfy a "no-bias" and a consistency condition. The consistency is usually easy to arrange, but the no-bias condition, such as (25.52) or the first part of (25.58), is connected to the structure and the size of the model, as the bias of the efficient score equations must converge to zero at a rate faster than $1/\sqrt{n}$. Within the context of Theorem 25.54, condition (25.52) is necessary. If it fails, then the sequence $\hat\theta_n$ is not asymptotically efficient and may even converge at a slower rate than $\sqrt{n}$. This follows by inspection of the proof, which reveals the following adaptation of the theorem. We assume that $\tilde\ell_{\theta,\eta}$ is the efficient score function for the true parameter $(\theta, \eta)$ but allow it to be arbitrary (mean-zero) for other parameters.
25.59 Theorem. Suppose that the conditions of Theorem 25.54 hold except possibly condition (25.52). Then

$$\sqrt{n}(\hat\theta_n - \theta) = \tilde I_{\theta,\eta}^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \tilde\ell_{\theta,\eta}(X_i) + \tilde I_{\theta,\eta}^{-1}\sqrt{n}\,P_{\hat\theta_n,\eta}\tilde\ell_{\hat\theta_n,\hat\eta} + o_P\bigl(1 + \sqrt{n}\,\|\hat\theta_n - \theta\|\bigr).$$

Because by Lemma 25.23 the sequence $\hat\theta_n$ can be asymptotically efficient (regular with $N(0, \tilde I_{\theta,\eta}^{-1})$-limit distribution) only if it is asymptotically equivalent to the first sum on the right, condition (25.52) is seen to be necessary for efficiency.
The verification of the no-bias condition may be easy due to special properties of the model but may also require considerable effort. The derivative of $P_{\theta,\eta}\tilde\ell_{\theta,\hat\eta}$ with respect to $\theta$ ought to converge to $\partial/\partial\theta\, P_{\theta,\eta}\tilde\ell_{\theta,\eta} = 0$. Therefore, condition (25.52) can usually be simplified to

$$\sqrt{n}\,P_{\theta,\eta}\tilde\ell_{\theta,\hat\eta} \xrightarrow{P} 0.$$
The dependence on $\hat\eta$ is more interesting and complicated. The verification may boil down to a type of Taylor expansion of $P_{\theta,\eta}\tilde\ell_{\theta,\hat\eta}$ in $\hat\eta$ combined with establishing a rate of convergence for $\hat\eta$. Because $\eta$ is infinite-dimensional, a Taylor series may be nontrivial. If $\hat\eta - \eta$ can occur as a direction of approach to $\eta$ that leads to a score function $B_{\theta,\eta}(\hat\eta - \eta)$, then we can write

$$P_{\theta,\eta}\tilde\ell_{\theta,\hat\eta} = -\int \tilde\ell_{\theta,\hat\eta}\bigl(p_{\theta,\hat\eta} - p_{\theta,\eta} - B_{\theta,\eta}(\hat\eta - \eta)\,p_{\theta,\eta}\bigr)\,d\mu - \int \bigl(\tilde\ell_{\theta,\hat\eta} - \tilde\ell_{\theta,\eta}\bigr)B_{\theta,\eta}(\hat\eta - \eta)\,p_{\theta,\eta}\,d\mu. \tag{25.60}$$

We have used the fact that $P_{\theta,\eta}\tilde\ell_{\theta,\eta}B_{\theta,\eta}h = 0$ for every $h$, by the orthogonality property of the efficient score function. (The use of $B_{\theta,\eta}(\hat\eta - \eta)$ corresponds to a score operator that yields scores $B_{\theta,\eta}h$ from paths of the form $\eta_t = \eta + th$. If we use paths $d\eta_t = (1 + th)\,d\eta$, then $B_{\theta,\eta}(d\hat\eta/d\eta - 1)$ is appropriate.) The display suggests that the no-bias condition (25.52) is certainly satisfied if $\|\hat\eta - \eta\| = O_P(n^{-1/2})$, for $\|\cdot\|$ a norm relative to which the two terms on the right are both of the order $o_P(\|\hat\eta - \eta\|)$. In cases in which the nuisance parameter is not estimable at $\sqrt{n}$-rate, the Taylor expansion must be carried into its second-order term. If the two terms on the right are both $O_P(\|\hat\eta - \eta\|^2)$, then it is still sufficient to have $\|\hat\eta - \eta\| = o_P(n^{-1/4})$. This observation is based on a crude bound on the bias, an integral in which cancellation could occur, by norms, and can therefore be too pessimistic. (See [35] for an example.) Special properties of the model may also allow one to take the Taylor expansion even further, with the lower-order derivatives vanishing, and then a slower rate of convergence of the nuisance parameter may be sufficient, but no examples of this appear to be known. However, the extreme case that the expression in (25.52) is identically zero occurs in the important class of models that are convex-linear in the parameter.
25.61 Example (Convex-linear models). Suppose that for every fixed $\theta$ the model $\{P_{\theta,\eta} : \eta \in H\}$ is convex-linear: $H$ is a convex subset of a linear space, and the dependence $\eta \mapsto P_{\theta,\eta}$ is linear. Then for every pair $(\eta_1, \eta)$ and number $0 \le t \le 1$, the convex combination $\eta_t = t\eta_1 + (1-t)\eta$ is a parameter and the distribution $tP_{\theta,\eta_1} + (1-t)P_{\theta,\eta} = P_{\theta,\eta_t}$ belongs to the model. The score function at $t = 0$ of the submodel $t \mapsto P_{\theta,\eta_t}$ is

$$\frac{\partial}{\partial t}\Big|_{t=0} \log dP_{\theta,t\eta_1 + (1-t)\eta} = \frac{dP_{\theta,\eta_1}}{dP_{\theta,\eta}} - 1.$$

Because the efficient score function for $\theta$ is orthogonal to the tangent set for the nuisance parameter, it should satisfy

$$P_{\theta,\eta_1}\tilde\ell_{\theta,\eta} = P_{\theta,\eta}\tilde\ell_{\theta,\eta} = 0, \qquad \text{every } \eta_1 \in H.$$

This means that the unbiasedness conditions in (25.52) and (25.55) are trivially satisfied, with the expectations $P_{\theta,\eta}\tilde\ell_{\theta,\hat\eta}$ even equal to 0.
A particular case in which this convex structure arises is the case of estimating a linear functional in an information-loss model. Suppose we observe $X = m(Y)$ for a known function $m$ and an unobservable variable $Y$ that has an unknown distribution $\eta$ on a measurable space $(\mathcal{Y}, \mathcal{A})$. The distribution $P_\eta = \eta \circ m^{-1}$ of $X$ depends linearly on $\eta$. Furthermore, if we are interested in a linear function $\theta = \chi(\eta)$, then the nuisance-parameter space $H_\theta = \{\eta : \chi(\eta) = \theta\}$ is a convex subset of the set of probability measures on $(\mathcal{Y}, \mathcal{A})$. □
Because this function is skew-symmetric about the point $\theta$, the bias condition in (25.58) is satisfied, with a bias of zero. Because the efficient score function can be written in the form

$$\tilde\ell_{\theta,\eta}(x) = -\frac{g'}{g}\bigl(|x - \theta|\bigr)\,\mathrm{sign}(x - \theta),$$

the consistency condition in (25.58) reduces to consistency of $\hat k_n$ for the function $g'/g$ in that

$$\int \Bigl(\hat k_n - \frac{g'}{g}\Bigr)^2(s)\,g(s)\,ds \xrightarrow{P} 0. \tag{25.62}$$
Estimators $\hat k_n$ can be constructed by several methods, a simple one being the kernel method of density estimation. For a fixed twice continuously differentiable probability density $w$ with compact support, a bandwidth parameter $a_n$, and further positive tuning parameters $\alpha_n$, $\beta_n$, and $\gamma_n$, set

$$\hat g_n(s) = \frac{1}{na_n}\sum_{i=1}^n w\Bigl(\frac{s - T_i}{a_n}\Bigr), \qquad \hat k_n(s) = \frac{\hat g_n'}{\hat g_n}(s)\,1_{\hat B_n}(s), \tag{25.63}$$

for the sets $\hat B_n = \bigl\{s \ge \gamma_n : \hat g_n(s) \ge \beta_n,\ |\hat g_n'(s)| \le \alpha_n\bigr\}$ determined by the tuning parameters.
25.64 Lemma. Let $T_1, \ldots, T_n$ be a random sample from a density $g$ that is supported and absolutely continuous on $[0, \infty)$ and satisfies $\int (g'/\sqrt{g})^2(s)\,ds < \infty$. Then $\hat k_n$ given by (25.63) for a probability density $w$ that is twice continuously differentiable and supported on $[-1, 1]$ satisfies (25.62), if $\alpha_n \uparrow \infty$, $\gamma_n \downarrow 0$, $\beta_n \downarrow 0$, and $a_n \downarrow 0$ in such a way that $a_n \lesssim \gamma_n$, $\alpha_n^2 a_n/\beta_n^2 \to 0$, and $na_n^4\beta_n^2 \to \infty$.
Proof. Start by noting that $\|g\|_\infty \le \int |g'(s)|\,ds \le \bigl(\int (g'/\sqrt{g})^2(s)\,ds\bigr)^{1/2}$, by the Cauchy-Schwarz inequality. The expectations and variances of $\hat g_n$ and its derivative are given by

$$\bar g_n(s) := \mathrm{E}\,\hat g_n(s) = \int w(t)\,g(s - a_n t)\,dt, \qquad \mathrm{var}\,\hat g_n(s) = \frac{1}{na_n^2}\,\mathrm{var}\,w\Bigl(\frac{s - T_1}{a_n}\Bigr) \le \frac{1}{na_n^2}\,\|w\|_\infty^2,$$
$$\bar g_n'(s) := \mathrm{E}\,\hat g_n'(s) = \int w(t)\,g'(s - a_n t)\,dt, \qquad \mathrm{var}\,\hat g_n'(s) \le \frac{1}{na_n^4}\,\|w'\|_\infty^2.$$
By the dominated-convergence theorem, $\bar g_n(s) \to g(s)$, for every $s > 0$. Combining this with the preceding display, we conclude that $\hat g_n(s) \xrightarrow{P} g(s)$. If $g'$ is sufficiently smooth, then the analogous statement is true for $\hat g_n'(s)$. Under only the condition of finite Fisher information for location, this may fail, but we still have that $\hat g_n'(s) - \bar g_n'(s) \xrightarrow{P} 0$ for every $s$; furthermore, $\bar g_n' 1_{[a,\infty)} \to g'$ in $L_1$, because

$$\int_a^\infty \bigl|\bar g_n' - g'\bigr|(s)\,ds \le \int w(t)\int \bigl|g'(s - a_n t) - g'(s)\bigr|\,ds\,dt \to 0,$$

by the $L_1$-continuity theorem on the inner integral, and next the dominated-convergence theorem on the outer integral.
The expectation of the integral in (25.62) restricted to the complement of the set $\hat B_n$ is equal to $\mathrm{E}\int_{\hat B_n^c} (g'/g)^2(s)\,g(s)\,ds$, because $\hat k_n$ vanishes there. On $\hat B_n$ we can decompose

$$\Bigl(\hat k_n - \frac{g'}{g}\Bigr)g^{1/2} = \frac{\hat g_n'}{\hat g_n}\bigl(g^{1/2} - \hat g_n^{1/2}\bigr) + \frac{\hat g_n' - \bar g_n'}{\hat g_n^{1/2}} + \bar g_n'\Bigl(\frac{1}{\hat g_n^{1/2}} - \frac{1}{\bar g_n^{1/2}}\Bigr) + \Bigl(\frac{\bar g_n'}{\bar g_n^{1/2}} - \frac{g'}{g^{1/2}}\Bigr).$$

On $\hat B_n$ the sum of the squares of the four terms on the right can be bounded term by term.
The expectations of the integrals over $\hat B_n$ of these four terms converge to zero. First, the integral over the first term is bounded above by

$$\frac{\alpha_n^2}{\beta_n^2}\int_{s > \gamma_n}\int \bigl|g(s - a_n t) - g(s)\bigr|\,w(t)\,dt\,ds \le \frac{\alpha_n^2 a_n}{\beta_n^2}\int |g'(t)|\,dt\int |t|\,w(t)\,dt.$$

Next, the sum of the second and third terms gives the contribution

$$\frac{1}{na_n^4\beta_n^2}\,\|w'\|_\infty^2\int \bar g_n(s)\,ds + \frac{1}{na_n^2\beta_n^2}\,\|w\|_\infty^2\int \Bigl(\frac{\bar g_n'}{\bar g_n^{1/2}}\Bigr)^2(s)\,ds.$$
The first term in this last display converges to zero, and the second as well, provided the integral remains finite. The latter is certainly the case if the fourth term converges to zero. By the Cauchy-Schwarz inequality,

$$\frac{\bigl(\int g'(s - a_n y)\,w(y)\,dy\bigr)^2}{\int g(s - a_n y)\,w(y)\,dy} \le \int \Bigl(\frac{g'}{g^{1/2}}\Bigr)^2(s - a_n y)\,w(y)\,dy.$$
Using Fubini's theorem, we see that, for any set $B$ and its $a_n$-enlargement $B^{a_n}$,

$$\int_B \Bigl(\frac{\bar g_n'}{\bar g_n^{1/2}}\Bigr)^2(s)\,ds \le \int_{B^{a_n}} \Bigl(\frac{g'}{g^{1/2}}\Bigr)^2\,ds.$$

In particular, we have this for $B = B^{a_n} = \mathbb{R}$, and $B = \{s : g(s) = 0\}$. For the second choice of $B$, the sets $B^{a_n}$ decrease to $B$, by the continuity of $g$. On the complement of $B$, $\bar g_n'/\bar g_n^{1/2} \to g'/g^{1/2}$ in Lebesgue measure. Thus, by Proposition 2.29, the integral of the fourth term converges to zero. ∎
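As a numerical illustration of an estimator of the type (25.63), with numbers, truncation levels, and the exponential target all chosen for this sketch (they are not from the text), one can estimate the score $g'/g$ of a standard exponential density, for which $g'/g = -1$ on $(0, \infty)$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, a = 4000, 0.3                 # sample size and bandwidth a_n
T = rng.exponential(1.0, n)      # g(s) = exp(-s), so g'/g = -1

def w(t):
    # triweight kernel: twice continuously differentiable, support [-1, 1]
    return np.where(np.abs(t) < 1, (35 / 32) * (1 - t**2) ** 3, 0.0)

def wprime(t):
    return np.where(np.abs(t) < 1, (35 / 32) * (-6 * t) * (1 - t**2) ** 2, 0.0)

s = np.linspace(0.5, 1.5, 51)    # evaluation points away from the origin
u = (s[:, None] - T[None, :]) / a
ghat = w(u).sum(axis=1) / (n * a)           # kernel density estimate g_n-hat
gphat = wprime(u).sum(axis=1) / (n * a**2)  # its derivative

# truncate where the density estimate is too small (the role of B_n-hat)
khat = np.where(ghat > 0.05, gphat / np.maximum(ghat, 1e-12), 0.0)
```

The truncation plays the role of the set $\hat B_n$: the ratio $\hat g_n'/\hat g_n$ is only used where $\hat g_n$ is bounded away from zero.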
25.8.2 Errors-in-Variables
Let the observations be a random sample of pairs $(X_i, Y_i)$ with the same distribution as

$$X = Z + e, \qquad Y = \alpha + \beta Z + f,$$

for a bivariate normal vector $(e, f)$ with mean zero and covariance matrix $\Sigma$ and a random variable $Z$ with distribution $\eta$, independent of $(e, f)$. Thus $Y$ is a linear regression on a variable $Z$ that is observed with error. The parameter of interest is $\theta = (\alpha, \beta, \Sigma)$ and the nuisance parameter is $\eta$. To make the parameters identifiable one can put restrictions on either $\Sigma$ or $\eta$. It suffices that $\eta$ is not normal (if a degenerate distribution is considered normal with variance zero); alternatively it can be assumed that $\Sigma$ is known up to a scalar.

Given $(\theta, \Sigma)$, the statistic $\psi_\theta(X, Y) = (1, \beta)\Sigma^{-1}(X, Y - \alpha)^T$ is sufficient (and complete) for $\eta$. This suggests defining estimators for $(\alpha, \beta, \Sigma)$ as the solution of the "conditional score equation" $\mathbb{P}_n\tilde\ell_{\theta,\hat\eta} = 0$, for
Therefore, the no-bias condition is trivially satisfied, and the estimator $\hat\eta$ need only be consistent for $\eta$ (in the sense of (25.53)). One possibility for $\hat\eta$ is the maximum likelihood estimator, which can be shown to be consistent by Wald's theorem, under some regularity conditions.

As the notation suggests, the function $\tilde\ell_{\theta,\eta}$ is equal to the efficient score function for $\theta$. We can prove this by showing that the closed linear span of the set of nuisance scores contains all measurable, square-integrable functions of $\psi_\theta(x, y)$, because then projecting on the nuisance scores is identical to taking the conditional expectation.

As explained in Example 25.61, the functions $p_{\theta,\eta_1}/p_{\theta,\eta} - 1$ are score functions for the nuisance parameter (at $(\theta, \eta)$). As is clear from the factorization theorem or direct calculation, they are functions of the sufficient statistic $\psi_\theta(X, Y)$. If some function $b(\psi_\theta(x, y))$ is orthogonal to all scores of this type and has mean zero, then
25.65 Lemma. For every $0 < \alpha \le 1$ and every probability distribution $\eta_0$ on $\mathbb{R}$ and compact $K \subset (0, \infty)$, there exists an open neighborhood $U$ of $\eta_0$ in the weak topology such that the class $\mathcal{F}$ of all functions

$$(x, y) \mapsto (a_0 + a_1 x + a_2 y)\,\frac{\int z\,e^{z(b_0 + b_1 x + b_2 y)}\,e^{-cz^2}\,d\eta(z)}{\int e^{z(b_0 + b_1 x + b_2 y)}\,e^{-cz^2}\,d\eta(z)},$$

with $\eta$ ranging over $U$, $c$ ranging over $K$, and $a$ and $b$ ranging over compacta in $\mathbb{R}^3$, satisfies
We retain the requirement that the estimator is $\sqrt{n}$-consistent and regular at every distribution $P$ in the model. A somewhat stronger but still reasonable requirement is that it be asymptotically linear in that

$$\sqrt{n}\bigl(T_n - \psi(P)\bigr) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \dot\psi_P(X_i) + o_P(1).$$

This type of expansion and regularity implies that $\dot\psi_P$ is an influence function of the parameter $\psi(P)$, and the difference $\dot\psi_P - \tilde\psi_P$ must be orthogonal to the tangent set $\dot{\mathcal{P}}_P$. This suggests that we compute the set of all influence functions to obtain an indication of which estimators $T_n$ might be possible. If there is a nice parametrization $\dot\psi_{\theta,\tau}$ of these sets of functions in terms of a parameter of interest $\theta$ and a nuisance parameter $\tau$, then a possible estimation procedure is to solve $\theta$ from the estimating equation, for given $\tau$,

$$\sum_{i=1}^n \dot\psi_{\theta,\tau}(X_i) = 0.$$

The choice of the parameter $\tau$ determines the efficiency of the estimator $\hat\theta$. Rather than fixing it at some value we also can make it data-dependent to obtain efficiency at every element of a given submodel, or perhaps even the whole model. The resulting estimator can be analyzed with the help of, for example, Theorem 5.31.
If the model is parametrized by a partitioned parameter $(\theta, \eta)$, then any influence function for $\theta$ must be orthogonal to the scores for the nuisance parameter $\eta$. The parameter $\tau$ might be indexing both the nuisance parameter $\eta$ and the "position" in the tangent set at a given $(\theta, \eta)$. Then the unknown $\eta$ (or the aspect of it that plays a role in $\tau$) must be replaced by an estimator. The same reasoning as for the "no-bias" condition discussed in (25.60) allows us to hope that the resulting estimator for $\theta$ behaves as if the true $\eta$ had been used.
25.66 Example (Regression). In the regression model considered in Example 25.28, the set of nuisance scores is the orthocomplement of the set $e\mathcal{H}$ of all functions of the form $(x, y) \mapsto (y - g_\theta(x))h(x)$, up to centering at mean zero. The efficient score function for $\theta$ is equal to the projection of the score for $\theta$ onto the set $e\mathcal{H}$, and an arbitrary influence function is obtained, up to a constant, by adding any element from $e\mathcal{H}$ to this. The estimating equation

$$\sum_{i=1}^n \bigl(Y_i - g_\theta(X_i)\bigr)h(X_i) = 0$$

leads to an estimator with influence function in the direction of $(y - g_\theta(x))h(x)$. Because the equation is unbiased for any $h$, we easily obtain $\sqrt{n}$-consistent estimators, even for data-dependent $h$. The estimator is more efficient if $h$ is closer to the function $\dot g_\theta(x)/\mathrm{E}_\eta(e^2 \mid X = x)$, which gives the efficient influence function. For full efficiency it is necessary to estimate the function $x \mapsto \mathrm{E}_\eta(e^2 \mid X = x)$ nonparametrically, where consistency (for the right norm) suffices. □
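To make the mechanics concrete, here is a small simulation, a sketch with an assumed heteroscedastic design chosen only for this illustration, solving the estimating equation for the linear regression $g_\theta(x) = \theta x$ with two choices of the weight function $h$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, theta0 = 2000, 1.5
X = rng.uniform(1.0, 3.0, n)
var_e = X**2                             # assumed E(e^2 | X = x) = x^2
Y = theta0 * X + rng.normal(0.0, np.sqrt(var_e))

def solve(h):
    # solves sum_i (Y_i - theta X_i) h(X_i) = 0 in theta, for g_theta(x) = theta x
    return np.sum(h * Y) / np.sum(h * X)

theta_unweighted = solve(X)              # h(x) = x: ordinary least squares
theta_efficient = solve(X / var_e)       # h(x) = g-dot(x)/E(e^2|X=x) = 1/x
```

Both solutions are $\sqrt{n}$-consistent; the second uses the weight $\dot g_\theta(x)/\mathrm{E}(e^2 \mid X = x)$ and is the more efficient of the two.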
25.67 Example (Missing at random). In Lemma 25.41 and Example 25.43 the influence functions in a MAR model are characterized as the sums of reweighted influence functions in the original model and the influence functions obtained from the MAR specification. If the function $\pi$ is known, then this leads to estimating equations of the corresponding form.

25.10 Maximum Likelihood Estimators

25.68 Example (Empirical distribution). Let $\mathcal{P}$ be the set of all probability distributions on the measurable space $(\mathcal{X}, \mathcal{A})$ (in which one-point sets are measurable), and consider the likelihood

$$P \mapsto \prod_{i=1}^n P\{X_i\},$$

called the empirical likelihood. For $n$ fixed different values $x_1, \ldots, x_n$, the vector $(P\{x_1\}, \ldots, P\{x_n\})$ ranges over all vectors $p \ge 0$ such that $\sum_i p_i \le 1$ when $P$ ranges over $\mathcal{P}$. To maximize $p \mapsto \prod_i p_i$, it is clearly best to choose $p$ maximal: $\sum_i p_i = 1$. Then, by symmetry, the maximizer must be $p = (1/n, \ldots, 1/n)$. Thus, the empirical distribution $\mathbb{P}_n = n^{-1}\sum \delta_{X_i}$ maximizes the empirical likelihood over the nonparametric model, whence it is referred to as the nonparametric maximum likelihood estimator.
If there are ties in the observations, this argument must be adapted, but the result is the same.

The empirical likelihood is appropriate for the nonparametric model. For instance, in the case of a Euclidean space, even if the model is restricted to distributions with a continuous Lebesgue density $p$, we still cannot use the map $p \mapsto \prod_{i=1}^n p(X_i)$ as a likelihood. The supremum of this likelihood is infinite, for we could choose $p$ to have an arbitrarily high, very thin peak at some observation. □
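The maximization argument can be checked numerically; the following sketch (an illustration with arbitrarily chosen numbers, not from the text) compares the log empirical likelihood of the uniform weights with that of randomly drawn probability vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
log_uniform = n * np.log(1.0 / n)   # log prod p_i at p = (1/n, ..., 1/n)

# random probability vectors on the simplex
candidates = rng.dirichlet(np.ones(n), size=2000)
log_values = np.sum(np.log(candidates), axis=1)
best_random = log_values.max()      # never exceeds the uniform value (AM-GM)
```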
Given a partitioned parameter $(\theta, \eta)$, it is sometimes helpful to consider the profile likelihood. Given a likelihood $\mathrm{lik}_n(\theta, \eta)(X_1, \ldots, X_n)$, the profile likelihood for $\theta$ is defined as the function

$$\theta \mapsto \sup_\eta\, \mathrm{lik}_n(\theta, \eta)(X_1, \ldots, X_n).$$

The supremum is taken over all possible values of $\eta$. The point of maximum of the profile likelihood is exactly the first coordinate of the maximum likelihood estimator $(\hat\theta, \hat\eta)$. We are simply computing the maximum of the likelihood over $(\theta, \eta)$ in two steps.

It is rarely possible to compute a profile likelihood explicitly, but its numerical evaluation is often feasible. Then the profile likelihood may serve to reduce the dimension of the likelihood function. Profile likelihood functions are often used in the same way as (ordinary) likelihood functions of parametric models. Apart from taking their points of maximum as estimators $\hat\theta$, the second derivative at $\hat\theta$ is used as an estimate of minus the inverse of the asymptotic covariance matrix of $\hat\theta$. Recent research appears to validate this practice.
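For a model in which the supremum over the nuisance parameter is available in closed form, the profile likelihood is easy to evaluate numerically. The sketch below (with illustrative numbers only) profiles out $\sigma^2$ in the $N(\mu, \sigma^2)$ model, checks that the maximizer of the profile is the first coordinate of the maximum likelihood estimator, and uses the curvature at the maximum as a variance estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(3.0, 2.0, 200)
n = len(x)

def profile_loglik(mu):
    # sup over sigma^2 is attained at sigma^2(mu) = mean((x - mu)^2)
    s2 = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(s2) + 1.0)

grid = np.linspace(2.0, 4.0, 401)
mu_hat = grid[int(np.argmax([profile_loglik(m) for m in grid]))]

# numerical second derivative at the maximum
h = 1e-3
curv = (profile_loglik(mu_hat + h) - 2 * profile_loglik(mu_hat)
        + profile_loglik(mu_hat - h)) / h**2
var_estimate = -1.0 / curv   # approximately sigma^2-hat / n
```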
25.69 Example (Cox model). Suppose that we observe a random sample from the distribution of $X = (T, Z)$, where the conditional hazard function of the "survival time" $T$ with covariate $Z$ takes the form

$$\lambda(t \mid z) = e^{\theta^T z}\,\lambda(t).$$

The hazard function $\lambda$ is completely unspecified. The density of the observation $X = (T, Z)$ is equal to

$$(t, z) \mapsto e^{\theta^T z}\,\lambda(t)\,e^{-e^{\theta^T z}\Lambda(t)}\,p_Z(z),$$

where $\Lambda$ is the primitive function of $\lambda$ (with $\Lambda(0) = 0$) and $p_Z$ the marginal density of $Z$. The usual estimator for $(\theta, \Lambda)$ based on a sample of size $n$ from this model is the maximum likelihood estimator $(\hat\theta, \hat\Lambda)$, where the likelihood is defined as, with $\Lambda\{t\}$ the jump of $\Lambda$ at $t$,

$$(\theta, \Lambda) \mapsto \prod_{i=1}^n e^{\theta^T z_i}\,\Lambda\{t_i\}\,e^{-e^{\theta^T z_i}\Lambda(t_i)}.$$

This is the product of the density at the observations, but with the hazard function $\lambda(t)$ replaced by the jumps $\Lambda\{t\}$ of the cumulative hazard function. (This likelihood is close but not exactly equal to the empirical likelihood of the model.) The form of the likelihood forces the maximizer $\hat\Lambda$ to be a jump function with jumps at the observed "deaths" $t_i$ only, and hence the likelihood can be reduced to a function of the unknowns $\Lambda\{t_1\}, \ldots, \Lambda\{t_n\}$. It appears to be impossible to derive the maximizers $(\hat\theta, \hat\Lambda)$ in closed-form formulas, but we can make some headway in characterizing the maximum likelihood estimators by "profiling out" the nuisance parameter $\Lambda$. Elementary calculus shows that, for a fixed $\theta$, the function
$$(\lambda_1, \ldots, \lambda_n) \mapsto \prod_{i=1}^n e^{\theta^T z_i}\,\lambda_i\,e^{-e^{\theta^T z_i}\sum_{j: t_j \le t_i}\lambda_j}$$

is maximal for

$$\lambda_i = \Bigl(\sum_{j: t_j \ge t_i} e^{\theta^T z_j}\Bigr)^{-1}.$$

The profile likelihood for $\theta$ is the supremum of the likelihood over $\lambda$ for fixed $\theta$. In view of the preceding display this is given, up to a constant, by

$$\theta \mapsto \prod_{i=1}^n \frac{e^{\theta^T z_i}}{\sum_{j: t_j \ge t_i} e^{\theta^T z_j}}.$$

The latter expression is known as the Cox partial likelihood. The original motivation for this criterion function is that the terms in the product are the conditional probabilities that the $i$th subject dies at time $t_i$ given that one of the subjects at risk dies at that time. The maximum likelihood estimator for $\Lambda$ is the step function with jumps

$$\hat\Lambda\{t_k\} = \frac{1}{\sum_{i: t_i \ge t_k} e^{\hat\theta^T z_i}}.$$
The estimators $\hat\theta$ and $\hat\Lambda$ are asymptotically efficient, under some restrictions. (See section 25.12.1.) We note that we have ignored the fact that jumps of hazard functions are smaller than 1 and have maximized over all measures $\Lambda$. □
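In Python the partial likelihood and the jumps $\hat\Lambda\{t_k\}$ take only a few lines. The following simulation is an illustrative sketch (sample size, seed, and the unit baseline hazard are assumptions of the illustration) that recovers $\theta$ by a grid search:

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta0 = 800, 1.0
z = rng.normal(0.0, 1.0, n)
# hazard exp(theta0 * z) * lambda(t) with baseline lambda = 1
t = rng.exponential(scale=np.exp(-theta0 * z))
order = np.argsort(t)
t, z = t[order], z[order]        # sort by time so risk sets are suffixes

def risk_sums(theta):
    # sum_{j: t_j >= t_i} exp(theta z_j), for every i (data sorted by t)
    return np.cumsum(np.exp(theta * z)[::-1])[::-1]

def partial_loglik(theta):
    return np.sum(theta * z - np.log(risk_sums(theta)))

grid = np.linspace(0.0, 2.0, 201)
theta_hat = grid[int(np.argmax([partial_loglik(g) for g in grid]))]
jumps = 1.0 / risk_sums(theta_hat)   # jumps of the estimator for Lambda
```

Here `jumps[k]` is $\hat\Lambda\{t_k\}$; its cumulative sum is the estimated cumulative hazard.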
25.70 Example (Scale mixture). Suppose we observe a sample from the distribution of $X = \theta + Ze$, where the unobservable variables $Z$ and $e$ are independent with completely unknown distribution $\eta$ and a known density $\xi$, respectively. Thus, the observation has a mixture density $\int p_\theta(x \mid z)\,d\eta(z)$ for the kernel

$$p_\theta(x \mid z) = \frac{1}{z}\,\xi\Bigl(\frac{x - \theta}{z}\Bigr).$$

If $\xi$ is symmetric about zero, then the mixture density is symmetric about $\theta$, and we can estimate $\theta$ asymptotically efficiently with a fully adaptive estimator, as discussed in Section 25.8.1. Alternatively, we can take the mixture form of the underlying distribution into account and use, for instance, the maximum likelihood estimator, which maximizes the likelihood

$$(\theta, \eta) \mapsto \prod_{i=1}^n \int \frac{1}{z}\,\xi\Bigl(\frac{X_i - \theta}{z}\Bigr)\,d\eta(z).$$

Because the efficient score function for $\theta$ equals the ordinary score function for $\theta$, the maximum likelihood estimator satisfies the efficient score equation $\mathbb{P}_n\tilde\ell_{\hat\theta,\hat\eta} = 0$. By the convexity of the model in $\eta$, this equation is unbiased in $\eta$. Thus, the asymptotic efficiency of the maximum likelihood estimator $\hat\theta$ follows under the regularity conditions of Theorem 25.54. Consistency of the sequence of maximum likelihood estimators $(\hat\theta_n, \hat\eta_n)$ for the product of the Euclidean and the weak topology can be proved by the method of Wald. The verification that the functions $\tilde\ell_{\theta,\eta}$ form a Donsker class is nontrivial but is possible using the techniques of Chapter 19. □
25.71 Example (Penalized logistic regression). In this model we observe a random sample from the distribution of $X = (V, W, Y)$, for a 0-1 variable $Y$ that follows the logistic regression model

$$P_{\theta,\eta}(Y = 1 \mid V, W) = \frac{1}{1 + e^{-\theta^T V - \eta(W)}},$$

for a Euclidean parameter $\theta$ and a smooth function $\eta$, estimated by maximizing the log likelihood minus the penalty $\lambda_n^2 J^2(\eta)$, with $J^2(\eta) = \int (\eta^{(k)})^2(z)\,dz$. Here $k \ge 1$ is a fixed integer and $\eta^{(k)}$ is the $k$th derivative of $\eta$ with respect to $z$. Here $\lambda_n$ is a "smoothing parameter" that determines the importance of the penalty $J^2(\eta)$. A large value of $\lambda_n$ leads to smooth maximizers $\hat\eta$; for small values the maximizer is more like the unrestricted maximum likelihood estimator. Intermediate values are best and are often chosen by a data-dependent scheme, such as cross-validation. The penalized estimator $\hat\theta$ can be shown to be asymptotically efficient if the smoothing parameter is constructed to satisfy $\lambda_n^2 = o_P(n^{-1/2})$ and $\lambda_n^{-1} = O_P(n^{k/(2k+1)})$ (see [102]). □
25.72 Example (Proportional odds). Suppose that we observe a random sample from the distribution of the variable $X = (T \wedge C, 1\{T \le C\}, Z)$, in which, given $Z$, the variables $T$ and $C$ are independent, as in the random censoring model, but with the distribution function $F(t \mid z)$ of $T$ given $Z$ restricted by

$$\frac{F(t \mid z)}{1 - F(t \mid z)} = e^{z^T\theta}\,\eta(t).$$

In other words, the conditional odds given $z$ of survival until $t$ follow a Cox-type regression model. The unknown parameter $\eta$ is a nondecreasing, cadlag function from $[0, \infty)$ into itself with $\eta(0) = 0$. It is the odds of survival if $\theta = 0$ and $T$ is independent of $Z$.

If $\eta$ is absolutely continuous, then the density of $X = (Y, \Delta, Z)$ is

$$\biggl(\frac{e^{-z^T\theta}\,\eta'(y)\,\bigl(1 - F_C(y- \mid z)\bigr)}{\bigl(\eta(y) + e^{-z^T\theta}\bigr)^2}\biggr)^{\delta}\,\biggl(\frac{e^{-z^T\theta}\,f_C(y \mid z)}{\eta(y) + e^{-z^T\theta}}\biggr)^{1-\delta}\,f_Z(z).$$
We cannot use this density as a likelihood, for the supremum is infinite unless we restrict $\eta$ in an important way. Instead, we view $\eta$ as the distribution function of a measure and use the empirical likelihood. The probability that $X = x$ is given by the corresponding expressions for subdistribution functions: in the random censoring model with survival distribution $F$ and censoring distribution $G$,

$$P_{F,G}(X \le x, \Delta = 0) = \int_{[0,x]} (1 - F)\,dG, \qquad P_{F,G}(X \le x, \Delta = 1) = \int_{[0,x]} (1 - G_-)\,dF.$$
Consequently, if $F$ and $G$ have densities $f$ and $g$ (relative to some dominating measures), then $(X, \Delta)$ has density

$$(x, \delta) \mapsto \bigl(f(x)\,(1 - G_-(x))\bigr)^{\delta}\,\bigl(g(x)\,(1 - F(x))\bigr)^{1-\delta}.$$

Because the empirical distribution maximizes $P \mapsto \prod_{i=1}^n P\{X_i, \Delta_i\}$ over all distributions, it follows that $(\hat F, \hat G)$ maximizes $(F, G) \mapsto \prod_{i=1}^n P_{F,G}\{X_i, \Delta_i\}$ over all $(F, G)$. That $\hat F$ is the product limit estimator next follows from Example 20.15.
To complete the discussion, we study the map $(F, G) \mapsto P_{F,G}$. A probability distribution on $[0, \infty) \times \{0, 1\}$ can be identified with a pair $(H_0, H_1)$ of subdistribution functions on $[0, \infty)$ such that $H_0(\infty) + H_1(\infty) = 1$, by letting $H_i(x)$ be the mass of the set $[0, x] \times \{i\}$. A given pair of distribution functions $(F_0, F_1)$ on $[0, \infty)$ yields such a pair of subdistribution functions $(H_0, H_1)$, by (25.73). Conversely, the pair $(F_0, F_1)$ can be recovered from a given pair $(H_0, H_1)$ by, with $\Delta H_i$ the jump in $H_i$, $H = H_0 + H_1$, and $A_i^c$ the continuous part of $A_i$,

$$A_0(x) = \int_{[0,x]} \frac{dH_0}{1 - H_- - \Delta H_1}, \qquad A_1(x) = \int_{[0,x]} \frac{dH_1}{1 - H_-},$$

$$1 - F_i(x) = \prod_{0 \le s \le x} \bigl(1 - A_i\{s\}\bigr)\,e^{-A_i^c(x)}.$$
25.74 Lemma. Given any pair $(H_0, H_1)$ of subdistribution functions on $[0, \infty)$ such that $H_0(\infty) + H_1(\infty) = 1$, the preceding display defines a pair $(F_0, F_1)$ of subdistribution functions on $[0, \infty)$ such that (25.73) holds.
Proof. For any distribution function $A$ and cumulative hazard function $B$ on $[0, \infty)$, with $B^c$ the continuous part of $B$,

$$1 - A(t) = \prod_{0 \le s \le t} \bigl(1 - B\{s\}\bigr)\,e^{-B^c(t)} \quad\text{iff}\quad B(t) = \int_{[0,t]} \frac{dA}{1 - A_-}.$$

To see this, rewrite the second equality as $(1 - A_-)\,dB = dA$ and $B(0) = A(0)$, and integrate this to rewrite it again as the Volterra equation

$$(1 - A) = 1 + \int_{[0,\cdot]} (1 - A_-)\,d(-B).$$

It is well known that the Volterra equation has the first equation of the display as its unique solution.†

Combined with the definition of $F_i$, the equivalence in the preceding display implies immediately that $dA_i = dF_i/(1 - F_{i-})$. Secondly, as immediate consequences of the definitions,

$$(1 - F_0)(1 - F_1)(t) = \prod_{s \le t} \bigl(1 - \Delta A_0 - \Delta A_1 + \Delta A_0\Delta A_1\bigr)(s)\,e^{-(A_0 + A_1)^c(t)}.$$
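The identification of $(H_0, H_1)$ with the pair of distribution functions can be checked numerically on a discrete example, for which $A_i^c = 0$. In the sketch below (illustrative, using the random censoring interpretation with survival distribution $F$ and censoring distribution $G$, all atoms and masses chosen arbitrarily), the inversion formulas recover $F$ exactly, and $G$ up to the largest support point of $F$, beyond which the censoring distribution is not identifiable:

```python
import numpy as np

ts = np.array([1.0, 1.5, 2.0, 3.0, 3.5])   # common grid of atoms
dF = np.array([0.2, 0.0, 0.3, 0.5, 0.0])   # survival time T on {1, 2, 3}
dG = np.array([0.0, 0.4, 0.0, 0.0, 0.6])   # censoring time C on {1.5, 3.5}
F, G = np.cumsum(dF), np.cumsum(dG)
Fm, Gm = F - dF, G - dG                     # left limits F_-, G_-

# subdistribution functions of (X, Delta): the displayed P_{F,G} formulas
dH1 = (1.0 - Gm) * dF                       # uncensored observations
dH0 = (1.0 - F) * dG                        # censored observations
H = np.cumsum(dH0 + dH1)
Hm = H - dH0 - dH1                          # left limit H_-

# the inversion formulas of the lemma (purely discrete case)
A1 = np.divide(dH1, 1.0 - Hm, out=np.zeros_like(dH1), where=(1.0 - Hm) > 0)
den = 1.0 - Hm - dH1
A0 = np.divide(dH0, den, out=np.zeros_like(dH0), where=den > 0)
F_rec = 1.0 - np.cumprod(1.0 - A1)
G_rec = 1.0 - np.cumprod(1.0 - A0)
```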
If

$$\tilde\ell_{\hat\theta,\hat\eta}(x) = \frac{\partial}{\partial t}\Big|_{t=0} \log\,\mathrm{lik}\bigl(\hat\theta + t,\, \eta_t(\hat\theta, \hat\eta)\bigr)(x),$$

then the maximum likelihood estimator satisfies the efficient score equation; if not, then this is not clear. The existence of an exact least favorable submodel appears to be particularly uncertain at the maximum likelihood estimator $(\hat\theta, \hat\eta)$, as this tends to be on the "boundary" of the parameter set.
A method around this difficulty is to replace the efficient score equation by an approximation. First, it suffices that $(\hat\theta, \hat\eta)$ satisfies the efficient score equation approximately, for Theorem 25.54 goes through provided $\sqrt{n}\,\mathbb{P}_n\tilde\ell_{\hat\theta,\hat\eta} = o_P(1)$. Second, it was noted following the proof of Theorem 25.54 that this theorem is valid for estimating equations of the form $\mathbb{P}_n\ell_{\theta,\eta} = 0$ for arbitrary mean-zero functions $\ell_{\theta,\eta}$; its assertion remains correct provided that at the true value of $(\theta, \eta)$ the function $\ell_{\theta,\eta}$ is the efficient score function. This suggests replacing, in our proof, the function $\tilde\ell_{\theta,\eta}$ by functions $\kappa_{\theta,\eta}$ that are proper score functions and are close to the efficient score function, at least for the true value of the parameter. These are derived from "approximately least favorable submodels."

We define such submodels as maps $t \mapsto \eta_t(\theta, \eta)$ from a neighborhood of $0 \in \mathbb{R}^k$ to the parameter set for $\eta$ with $\eta_0(\theta, \eta) = \eta$ (for every $(\theta, \eta)$) such that
25.77 Theorem. Suppose that the model $\{P_{\theta,\eta} : \theta \in \Theta\}$ is differentiable in quadratic mean with respect to $\theta$ at $(\theta_0, \eta_0)$, and let the efficient information matrix $\tilde I_{\theta_0,\eta_0}$ be nonsingular. Assume that $\kappa_{\theta,\eta}$ are the score functions of approximately least-favorable submodels (at $(\theta_0, \eta_0)$), that the functions $\kappa_{\hat\theta,\hat\eta}$ belong to a $P_{\theta_0,\eta_0}$-Donsker class with square-integrable envelope with probability tending to 1, and that (25.75) and (25.76) hold. Then the maximum likelihood estimator $\hat\theta_n$ is asymptotically efficient at $(\theta_0, \eta_0)$, provided that it is consistent.
The no-bias condition (25.75) can be analyzed as in (25.60), with $\tilde\ell_{\theta,\hat\eta}$ replaced by $\kappa_{\theta,\hat\eta}$. Alternatively, it may be useful to avoid evaluating the efficient score function at $\hat\theta$ or $\hat\eta$, and (25.60) may be adapted accordingly. Replacing $\hat\theta$ by $\theta_0$ should make at most a difference of $o_P(\|\hat\theta - \theta_0\|)$, which is negligible in the preceding display, but the presence of $\hat\eta$ may require a rate of convergence for $\hat\eta$. Theorem 5.55 yields such rates in some generality and can be translated to the present setting as follows.
25.81 Theorem. Suppose that (25.79) and (25.80) are valid for functions $\phi_n$ such that $\delta \mapsto \phi_n(\delta)/\delta^\alpha$ is decreasing for some $\alpha < 2$ and sets $\Theta_n \times H_n$ such that $P(\hat\theta_n \in \Theta_n, \hat\eta_n \in H_n) \to 1$. Then $d(\hat\eta_n, \eta_0) \le O_P^*\bigl(\delta_n + \|\hat\theta_n - \theta_0\|\bigr)$ for any sequence of positive numbers $\delta_n$ such that $\phi_n(\delta_n) \le \sqrt{n}\,\delta_n^2$ for every $n$.
For every nondecreasing, nonnegative function $h$ and positive number $t$, the submodel $\Lambda_t = \Lambda + th$ is well defined. Inserting this in the log likelihood and differentiating with respect to $t$ at $t = 0$, we obtain a score function for $\Lambda$ of the form
25.11 Approximately Least-Favorable Submodels 411
The linear span of these score functions contains $B_{\theta,\Lambda}h$ for all bounded functions $h$ of bounded variation. In view of the similar structure of the scores for $\theta$ and $\Lambda$, projecting $\dot\ell_{\theta,\Lambda}$ onto the closed linear span of the nuisance scores is a weighted least-squares problem with weight function $Q_{\theta,\Lambda}$. The solution is given by the vector-valued function

$$h_{\theta,\Lambda}(c) = \Lambda(c)\,\frac{\mathrm{E}_{\theta,\Lambda}\bigl(Z\,Q_{\theta,\Lambda}^2(X) \mid C = c\bigr)}{\mathrm{E}_{\theta,\Lambda}\bigl(Q_{\theta,\Lambda}^2(X) \mid C = c\bigr)}. \tag{25.82}$$

The efficient score function for $\theta$ takes the form
Then $\Lambda_t(\theta, \Lambda)$ is a cumulative hazard function for every $t$ that is sufficiently close to zero, because for every $u \le v$ the increment $\Lambda_t(\theta, \Lambda)(v) - \Lambda_t(\theta, \Lambda)(u)$ is nonnegative for such $t$. If evaluated at $(\theta_0, \Lambda_0)$ this reduces to the efficient score function $\tilde\ell_{\theta_0,\Lambda_0}(x)$ provided $\phi(\Lambda_0) = 1$, whence the submodel is approximately least favorable. To prove the asymptotic efficiency of $\hat\theta_n$ it suffices to verify the conditions of Theorem 25.77.

The function $\phi$ is a technical device that has been introduced in order to ensure that $0 \le \Lambda_t(\theta, \Lambda) \le M$ for all $t$ that are sufficiently close to 0. This is guaranteed if $0 \le y\phi(y) \le c\bigl(y \wedge (M - y)\bigr)$ for every $0 \le y \le M$, for a sufficiently large constant $c$. Because by assumption $[\Lambda_0(\sigma-), \Lambda_0(\tau)] \subset (0, M)$, there exists such a function $\phi$ that also fulfills $\phi(\Lambda_0) = 1$ on $[\sigma, \tau]$.
In order to verify the no-bias condition (25.75), we need a rate of convergence for $\hat\Lambda_n$.

25.83 Lemma. Under the conditions listed previously, $\hat\theta_n$ is consistent and $\|\hat\Lambda_n - \Lambda_0\|_{P_0,2} = O_P(n^{-1/3})$.
The densities $p_{\theta,\Lambda}$ are bounded above by 1, and under our assumptions the density $p_0$ is bounded away from zero. It follows that the functions $m_{\theta,\Lambda}(x)$ are uniformly bounded in $(\theta, \Lambda)$ and x.
By the concavity of the logarithm and the definition of $(\hat\theta, \hat\Lambda)$,
$$\phi_n(\delta) = \sqrt{\delta}\Bigl(1 + \frac{\sqrt{\delta}}{\delta^2\sqrt{n}}\Bigr).$$
This leads to a convergence rate of $n^{-1/3}$ for both $\|\hat\theta - \theta_0\|$ and $\|\hat\Lambda - \Lambda_0\|_2$. •
To verify the no-bias condition (25.75), we use the decomposition (25.78). The integrands in the two terms on the right can both be seen to be bounded, up to a constant, by $(\Lambda - \Lambda_0)^2$, with probability tending to one. Thus the bias $P_{\hat\theta,\hat\Lambda}\kappa_{\hat\theta,\hat\Lambda}$ is actually of the order $O_P(n^{-2/3})$.
The functions $x \mapsto \kappa_{\theta,\Lambda}(x)$ can be written in the form $\psi(z, e^{\theta^T z}, \Lambda(c), \delta)$ for a function $\psi$ that is Lipschitz in its first three coordinates, for $\delta \in \{0, 1\}$ fixed. (Note that $\Lambda \mapsto \Lambda Q_{\theta,\Lambda}$ is Lipschitz, as $\Lambda \mapsto h_{\theta_0,\Lambda_0} \circ \Lambda_0^{-1}(\Lambda)/\Lambda = (h_{\theta_0,\Lambda_0}/\Lambda_0) \circ \Lambda_0^{-1}(\Lambda)$.) The functions $z \mapsto z$, $z \mapsto \exp \theta^T z$, $c \mapsto \Lambda(c)$ and $\delta \mapsto \delta$ form Donsker classes if $\theta$ and $\Lambda$ range freely. Hence the functions $x \mapsto \Lambda(c)Q_{\theta,\Lambda}(x)$ form a Donsker class, by Example 19.20. The efficiency of $\hat\theta_n$ follows by Theorem 25.77.
25.84 Lemma. Under the conditions listed previously, there exists a constant C such that, for every $\varepsilon > 0$,
$$\log N_{[\,]}\bigl(\varepsilon, \{m_{\theta,\Lambda}\}, L_2(P_{\theta_0,\Lambda_0})\bigr) \le \frac{C}{\varepsilon}.$$
Proof. First consider the class of functions $m_{\theta,\Lambda}$ for a fixed $\theta$. These functions depend on $\Lambda$ monotonely if considered separately for $\delta = 0$ and $\delta = 1$. Thus a bracket $\Lambda_1 \le \Lambda \le \Lambda_2$ for $\Lambda$ leads, by substitution, readily to a bracket for $m_{\theta,\Lambda}$. Furthermore, because this dependence is Lipschitz, there exists a constant D such that
Thus, brackets for $\Lambda$ of $L_2$-size $\varepsilon$ translate into brackets for $m_{\theta,\Lambda}$ of $L_2(P_{\theta,\Lambda})$-size proportional to $\varepsilon$. By Example 19.11 we can cover the set of all $\Lambda$ by $\exp C(1/\varepsilon)$ brackets of size $\varepsilon$.
Next, we allow $\theta$ to vary freely as well. Because $\theta$ is finite-dimensional and $\partial/\partial\theta\, m_{\theta,\Lambda}(x)$ is uniformly bounded in $(\theta, \Lambda, x)$, this increases the entropy only slightly. •
25.85 Lemma. Under the conditions listed previously there exist constants $C, \varepsilon > 0$ such that, for all $\Lambda$ and all $\|\theta - \theta_0\| < \varepsilon$,
$$\int \bigl(p^{1/2}_{\theta,\Lambda} - p^{1/2}_{\theta_0,\Lambda_0}\bigr)^2\, d\mu \ge C \int_\sigma^\tau (\Lambda - \Lambda_0)^2(c)\, dc + C\|\theta - \theta_0\|^2.$$
Proof. The left side equals
$$\int \frac{(p_{\theta,\Lambda} - p_{\theta_0,\Lambda_0})^2}{\bigl(p^{1/2}_{\theta,\Lambda} + p^{1/2}_{\theta_0,\Lambda_0}\bigr)^2}\, d\mu.$$
Because $p_0$ is bounded away from zero, and the densities $p_{\theta,\Lambda}$ are uniformly bounded, the denominator can be bounded above and below by positive constants. Thus the Hellinger distance (in the display) is equivalent to the $L_2$-distance between the densities, which can be rewritten
Let $g(t)$ be the function $\exp\bigl(-e^{\theta_t^T z}\Lambda_t(c)\bigr)$ evaluated at $\theta_t = t\theta + (1 - t)\theta_0$ and $\Lambda_t = t\Lambda + (1 - t)\Lambda_0$, for fixed $(c, z)$. Then the integrand is equal to $(g(1) - g(0))^2$, and hence, by the mean value theorem, there exists $0 \le t = t(c, z) \le 1$ such that the preceding display is equal to
$$\bigl(P_0(B_{\theta_0,\Lambda_0}g)\, a^T\dot\ell_{\theta_0,\Lambda_0}\bigr)^2 = \bigl(P_0(B_{\theta_0,\Lambda_0}g)\, a^T(\dot\ell_{\theta_0,\Lambda_0} - \tilde\ell_0)\bigr)^2 \le P_0(B_{\theta_0,\Lambda_0}g)^2\; a^T(I_0 - \tilde I_0)a,$$
25.86 Lemma. Let $h$, $g_1$ and $g_2$ be measurable functions such that $c_1 \le h \le c_2$ and $(Pg_1g_2)^2 \le c\, Pg_1^2\, Pg_2^2$ for a constant $c < 1$ and constants $c_1 < 1 < c_2$ close to 1. Then
Next, we integrate this with respect to P, and use the inequality for $Pg_1g_2$ on the second term to see that the left side of the lemma is bounded below by
This is a density with respect to Lebesgue measure on the positive quadrant of $\mathbb{R}^2$, and we may take the likelihood equal to just the joint density of the observations. Let $(\hat\theta_n, \hat\eta_n)$ maximize
$$(\theta, \eta) \mapsto \prod_{i=1}^n p_{\theta,\eta}(U_i, V_i).$$
This estimator can be shown to be consistent, under some conditions, for the Euclidean and weak topology, respectively, by, for instance, the method of Wald, Theorem 5.14.
The "statistic" $\psi_\theta(U, V) = U + \theta V$ is, for fixed and known $\theta$, sufficient for the nuisance parameter. Because the likelihood depends on $\eta$ only through this statistic, the tangent set for $\eta$ consists of functions of $U + \theta V$ only. Furthermore, because $U + \theta V$ is distributed according to a mixture over an exponential family (a gamma distribution with shape parameter 2), the closed linear span of this tangent set consists of all mean-zero, square-integrable functions of $U + \theta V$, by Example 25.35. Thus, the projection onto the closed linear span of the tangent set is the conditional expectation with respect to $U + \theta V$, and the efficient score function for $\theta$ is the "conditional score," given by
where we may use that, given $U + \theta V = s$, the variables $U$ and $\theta V$ are uniformly distributed on the interval $[0, s]$. This function turns out to be also an actual score function, in that there exists an exact least favorable submodel, given by
Inserting $\eta_t(\theta, \eta)$ in the log likelihood, making the change of variables $z(1 - t/(2\theta)) \to z$, and computing the (ordinary) derivative with respect to t at t = 0, we obtain $\tilde\ell_{\theta,\eta}(x)$. It follows that the maximum likelihood estimator satisfies the efficient score equation, and its asymptotic normality can be proved with the help of Theorem 25.54.
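The distributional fact used above is easy to check by simulation: if $U \sim \mathrm{Exp}(\eta)$ and $V \sim \mathrm{Exp}(\theta\eta)$ are independent, then $\theta V \sim \mathrm{Exp}(\eta)$, and given $U + \theta V = s$ the variable $U$ is uniform on $[0, s]$. A minimal Monte Carlo sketch (the parameter values are arbitrary choices of ours):

```python
import random

random.seed(0)
theta, eta = 2.0, 1.5
n = 200_000
s, tol = 3.0, 0.05   # condition on U + theta*V falling in a narrow window around s

us = []
for _ in range(n):
    u = random.expovariate(eta)           # U ~ Exp(eta)
    v = random.expovariate(theta * eta)   # V ~ Exp(theta*eta), so theta*V ~ Exp(eta)
    if abs(u + theta * v - s) < tol:
        us.append(u)

m = sum(us) / len(us)
var = sum((x - m) ** 2 for x in us) / len(us)
# Uniform[0, s] would give mean s/2 = 1.5 and variance s^2/12 = 0.75.
print(len(us), m, var)
```

The empirical mean and variance of the conditioned draws match the uniform distribution on $[0, s]$ up to Monte Carlo error.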
The linearity of the model in $\eta$ (or the formula involving the conditional expectation) implies that
where $(a_1, \ldots, a_3)$ ranges over a bounded subset of $\mathbb{R}^3$, $(b_1, b_2)$ ranges over a compact subset of $(0, \infty)^2$, and $\eta$ ranges over $V$, is $P_{\theta_0,\eta_0}$-Donsker with square-integrable envelope.
Suppose that we observe a random sample from the distribution of $X = (V, W, Y)$, in which, for some unobservable error $e$ independent of $(V, W)$,
Thus, the dependent variable $Y$ is a regression on $(V, W)$ that is linear in $V$ with slope $\theta$ but may depend on $W$ in a nonlinear way. We assume that $V$ and $W$ take their values in the unit interval $[0, 1]$, and that $\eta$ is twice differentiable with $J(\eta) < \infty$, for
We cannot use this directly to define a maximum likelihood estimator for $(\theta, \eta)$, as a maximizer for $\eta$ will interpolate the data exactly: A choice of $\eta$ such that $\eta(W_i) = y_i - \theta v_i$ for every i maximizes $\prod p_{\theta,\eta}(x_i)$ but does not provide a useful estimator. The problem is that so far $\eta$ has only been restricted to be differentiable, and this does not prevent it from being very wiggly. To remedy this we use a penalized log likelihood estimator, defined as the minimizer of
Here $\hat\lambda_n$ is a "smoothing parameter" that may depend on the data and determines the weight of the "penalty" $J^2(\eta)$. A large value of $\hat\lambda_n$ gives much influence to the penalty term and hence leads to a smooth estimate of $\eta$, and conversely. Intermediate values are best. For the purpose of estimating $\theta$ we may use any values in the range
$$\hat\lambda_n^2 = o_P(n^{-1/2}), \qquad \hat\lambda_n^{-1} = O_P(n^{2/5}).$$
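A minimal numerical sketch of such a penalized criterion, using a discrete second-difference penalty as a stand-in for the smoothing-spline penalty $J^2(\eta)$ (so this only approximates the natural-cubic-spline solution; all names, the data-generating choices, and the value of the smoothing parameter are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta0, lam = 300, 1.0, 0.5
w = np.sort(rng.uniform(0, 1, n))
v = rng.uniform(0, 1, n) + 0.5 * w              # V depends on W but is nondegenerate given W
y = theta0 * v + np.sin(2 * np.pi * w) + rng.normal(0, 0.5, n)

# Discrete second differences of eta(w_1), ..., eta(w_n): a crude proxy for J^2(eta).
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Stacked least squares: minimize |y - theta v - eta|^2 + lam^2 |D eta|^2
# jointly over (theta, eta(w_1), ..., eta(w_n)).
X = np.column_stack([v, np.eye(n)])
P = np.hstack([np.zeros((n - 2, 1)), D])
A = np.vstack([X, lam * P])
b = np.concatenate([y, np.zeros(n - 2)])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
theta_hat, eta_hat = sol[0], sol[1:]
print(theta_hat)
```

Without the penalty rows the design would be underdetermined and $\eta$ would interpolate the data; the penalty restores identifiability of $\theta$, exactly as the text describes.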
There are simple numerical schemes to compute the maximizer $(\hat\theta_n, \hat\eta_n)$, the function $\hat\eta_n$ being a natural cubic spline with knots at the values $W_1, \ldots, W_n$. The sequence $\hat\theta_n$ can be shown to be asymptotically efficient provided that the regression components involving V and W are not confounded or degenerate. More precisely, we assume that the conditional distribution of V given W is nondegenerate, that the distribution of W has at least two support points, and that $h_0(w) = E(V \mid W = w)$ has a version with $J(h_0) < \infty$. Then we have the following lemma on the behavior of $(\hat\theta_n, \hat\eta_n)$.
Let $\|\cdot\|_W$ denote the norm of $L_2(P_W)$.
25.88 Lemma. Under the conditions listed previously, the sequence $\hat\theta_n$ is consistent for $\theta_0$, $\|\hat\eta_n\|_\infty = O_P(1)$, $J(\hat\eta_n) = O_P(1)$, and $\|\hat\eta_n - \eta_0\|_W = O_P(\hat\lambda_n)$, under $(\theta_0, \eta_0)$.
Proof. Write $g(v, w) = \theta v + \eta(w)$, let $\mathbb{P}_n$ and $P_0$ be the empirical and true distribution of the variables $(e_i, V_i, W_i)$, and define functions
This suggests applying Theorem 25.81 with $\tau = (\theta, \eta)$ and $d_\lambda^2(\tau, \tau_0)$ equal to the sum of the first three terms on the right.
Because $\hat\lambda_n^{-1} = O_P(1/\lambda_n)$ for $\lambda_n = n^{-2/5}$, it is not a real loss of generality to assume that $\hat\lambda_n \in A_n = [\lambda_n, \infty)$. Then $d_\lambda(\tau, \tau_0) < \delta$ and $\lambda \in A_n$ implies that $|\theta - \theta_0| < \delta$, that $\|\eta - \eta_0\|_W < \delta$ and that $J(\eta) \le \delta/\lambda_n$. Assume first that it is known already that $|\hat\theta|$ and $\|\hat\eta\|_\infty$ are bounded in probability, so that it is not a real loss of generality to assume that $|\hat\theta| \vee \|\hat\eta\|_\infty \le 1$. Then
Thus a bound on the $\|\cdot\|_W$-norm of $\eta$ yields a bound on the "Bernstein norm" of $e\eta$ (given on the left) of proportional magnitude. A bracket $[\eta_1, \eta_2]$ for $\eta$ induces a bracket $[e^+\eta_1 - e^-\eta_2,\, e^+\eta_2 - e^-\eta_1]$ for the functions $e\eta$. In view of Lemma 19.37 and Example 19.10, we obtain a maximal inequality with
$$\phi_n(\delta) = \delta^{3/4}\Bigl(1 + \frac{\delta}{\lambda_n}\Bigr)^{1/4} \lesssim \delta^{3/4} + \frac{\delta}{\lambda_n^{1/4}}.$$
This bound remains valid if we replace $\eta - \eta_0$ by $g - g_0$, for the parametric part $\theta v$ adds little to the entropy. We can obtain a similar maximal inequality for the process $\mathbb{G}_n(g - g_0)^2$, in view of the inequality $P_0(g - g_0)^4 \le 4P_0(g - g_0)^2$, still under our assumption that $|\theta| \vee \|\eta\|_\infty \le 1$. We conclude that Theorem 25.81 applies and yields the rate of convergence
$$|\hat\theta - \theta_0| + \|\hat\eta - \eta_0\|_W = O_P(n^{-2/5} + \hat\lambda_n) = O_P(\hat\lambda_n).$$
Finally, we must prove that $\hat\theta$ and $\|\hat\eta\|_\infty$ are bounded in probability. By the Cauchy-Schwarz inequality, for every w and $\eta$,
The empirical measure applied to the square of the left side is equal to $a^T A_n a$ for $a = (\theta, \eta(0), \eta'(0))$ and $A_n = \mathbb{P}_n (v, 1, w)(v, 1, w)^T$ the sample second-moment matrix of the variables $(V_i, 1, W_i)$. By the conditions on the distribution of $(V, W)$, the corresponding population matrix is positive-definite, whence we can conclude that $a$ is bounded in probability as soon as $a^T A_n a$ is bounded in probability, which is certainly the case if $\mathbb{P}_n \hat g^2$ and $J(\hat\eta)$ are bounded in probability.
We can prove the latter by applying the preceding argument conditionally, given the sequence $V_1, W_1, V_2, W_2, \ldots$. Given these variables, the variables $e_i$ are the only random part in $m_{g,\lambda} - m_{g_0,\lambda}$ and the parts $(g - g_0)^2$ only contribute to the centering function. We apply Theorem 25.81 with square distance equal to
An appropriate maximal inequality can be derived from, for example, Corollary 2.2.8 in [146], because the stochastic process $\mathbb{G}_n eg$ is sub-Gaussian relative to the $L_2(\mathbb{P}_n)$-metric on the set of g. Because $d_\lambda(\tau, \tau_0) < \delta$ implies that $\mathbb{P}_n(g - g_0)^2 < \delta^2$, $J(\eta) \le \delta/\lambda_n$, and $|\theta|^2 \vee \|\eta\|_\infty^2 \le C\bigl(\mathbb{P}_n(g - g_0)^2 + J^2(\eta)\bigr)$ for C dependent on the smallest eigenvalue of the second-moment matrix $A_n$, the maximal inequality has a similar form as before, and we conclude that $\mathbb{P}_n(g - g_0)^2 + \hat\lambda^2 J^2(\hat\eta) = O_P(\hat\lambda^2)$. This implies the desired result. •
The normality of the error e motivates the least-squares criterion and is essential for the efficiency of $\hat\theta$. However, the penalized least-squares method makes sense also for nonnormal error distributions. The preceding lemma remains true under the more general condition of exponentially small error tails: $Ee^{c|e|} < \infty$ for some $c > 0$.
Under the normality assumption (with $\sigma = 1$ for simplicity) the score function for $\theta$ is given by
Given a function h with $J(h) < \infty$, the path $\eta_t = \eta + th$ defines a submodel indexed by the nuisance parameter. This leads to the nuisance score function
On comparing these expressions, we see that finding the projection of $\dot\ell_{\theta,\eta}$ onto the set of $\eta$-scores is a weighted least-squares problem. By the independence of e and (V, W), it follows easily that the projection is equal to $B_{\theta,\eta}h_0$ for $h_0(w) = E(V \mid W = w)$, whence the efficient score function for $\theta$ is given by
Therefore, an exact least-favorable path is given by $\eta_t(\theta, \eta) = \eta - th_0$.
Because $(\hat\theta_n, \hat\eta_n)$ maximizes a penalized likelihood rather than an ordinary likelihood, it certainly does not satisfy the efficient score equation as considered in section 25.8. However, it satisfies this equation up to a term involving the penalty. Inserting $(\hat\theta + t, \eta_t(\hat\theta, \hat\eta))$ into the least-squares criterion, and differentiating at t = 0, we obtain the stationary equation
$$\mathbb{P}_n \tilde\ell_{\hat\theta,\hat\eta} - 2\hat\lambda^2 \int_0^1 \hat\eta''(w)\, h_0''(w)\, dw = 0.$$
The second term is the derivative of $\hat\lambda^2 J^2(\eta_t(\hat\theta, \hat\eta))$ at t = 0. By the Cauchy-Schwarz inequality, it is bounded in absolute value by $2\hat\lambda^2 J(\hat\eta)J(h_0) = o_P(n^{-1/2})$, by the first assumption on $\hat\lambda$ and because $J(\hat\eta) = O_P(1)$ by Lemma 25.88. We conclude that $(\hat\theta_n, \hat\eta_n)$ satisfies the efficient score equation up to an $o_P(n^{-1/2})$-term. Within the context of Theorem 25.54 a remainder term of this small order is negligible, and we may use the theorem to obtain the asymptotic normality of $\hat\theta_n$.
A formulation that also allows other estimators $\hat\eta$ is as follows.
That the functions $\tilde\ell_{\hat\theta,\hat\eta}$ are contained in a Donsker class, with probability tending to 1, follows from Example 19.10 and Theorem 19.5.
The remaining regularity conditions of Theorem 25.54 can be seen to be satisfied by
standard arguments. •
In this example we use the smoothness of $\eta$ to define a penalized likelihood estimator for $\theta$. This automatically yields a rate of convergence of $n^{-2/5}$ for $\hat\eta$. However, efficient estimators for $\theta$ exist under weaker smoothness assumptions on $\eta$, and the minimal smoothness of $\eta$ can be traded against smoothness of the function $g(w) = E(V \mid W = w)$, which also appears in the formula for the efficient score function and is unknown in practice. The trade-off is a consequence of the bias $P_{\theta,\hat\eta,\hat g}\tilde\ell_{\theta,\hat\eta,\hat g}$ being equal to the cross product of the biases in $\hat\eta$ and $\hat g$. The square terms in the second-order expansion (25.60), in which the derivative relative to $(\eta, g)$ (instead of $\eta$) is a $(2 \times 2)$-matrix, vanish. See [35] for a detailed study of this model.
This is the usual maximum likelihood equation, except that we evaluate the score function at the joint estimator $(\hat\theta, \hat\eta)$, rather than at the single value $\theta$. A precise condition for this equation to be valid is that the partial derivative of $\log \mathrm{lik}(\theta, \eta)(x)$ with respect to $\theta$ exists and is equal to $\dot\ell_{\theta,\eta}(x)$, for every x (at least for $\eta = \hat\eta$ and at $\theta = \hat\theta$).
Varying the nuisance parameter $\eta$ is conceptually more difficult. Typically, we can use a selection of the submodels $t \mapsto \eta_t$ used for defining the tangent set and the information in the model. If scores for $\eta$ take the form of an "operator" $B_{\theta,\eta}$ working on a set of indices h, then a typical likelihood equation takes the form
Here we have made it explicit in our notation that a score function always has mean zero, by writing the score function as $x \mapsto B_{\theta,\eta}h(x) - P_{\theta,\eta}B_{\theta,\eta}h$ rather than as $x \mapsto B_{\theta,\eta}h(x)$. The preceding display is valid if, for every $(\theta, \eta)$, there exists some path $t \mapsto \eta_t(\theta, \eta)$ such that $\eta_0(\theta, \eta) = \eta$ and, for every x,
The expectation of these maps under the parameter $(\theta_0, \eta_0)$ is the deterministic map $\Psi = (\Psi_1, \Psi_2)$ given by
By construction, the maximum likelihood estimators $(\hat\theta_n, \hat\eta_n)$ and the "true" parameter $(\theta_0, \eta_0)$ are zeros of these maps,
25.90 Theorem. Suppose that the functions $\dot\ell_{\theta,\eta}$ and $B_{\theta,\eta}h$, if h ranges over $\mathcal{H}$ and $(\theta, \eta)$ over a neighborhood of $(\theta_0, \eta_0)$, are contained in a $P_{\theta_0,\eta_0}$-Donsker class, and that
Furthermore, suppose that the map $\Psi: \Theta \times H \mapsto \mathbb{R}^k \times \ell^\infty(\mathcal{H})$ is Fréchet-differentiable at $(\theta_0, \eta_0)$, with a derivative $\dot\Psi_0: \mathbb{R}^k \times \mathrm{lin}\, H \mapsto \mathbb{R}^k \times \ell^\infty(\mathcal{H})$ that has a continuous inverse
25.12 Likelihood Equations 421
on its range. If the sequence $(\hat\theta_n, \hat\eta_n)$ is consistent for $(\theta_0, \eta_0)$ and satisfies $\Psi_n(\hat\theta_n, \hat\eta_n) = o_P(n^{-1/2})$, then
The theorem gives the joint asymptotic distribution of $\hat\theta_n$ and $\hat\eta_n$. Because $\sqrt{n}\,\Psi_n(\theta_0, \eta_0)$ is the empirical process indexed by the Donsker class consisting of the functions $\dot\ell_{\theta_0,\eta_0}$ and $B_{\theta_0,\eta_0}h$, this process is asymptotically normally distributed. Because normality is retained under a continuous, linear map, such as $\dot\Psi_0^{-1}$, the limit distribution of the sequence $\sqrt{n}(\hat\theta_n - \theta_0, \hat\eta_n - \eta_0)$ is Gaussian as well.
The case of a partitioned parameter $(\theta, \eta)$ is an interesting one and illustrates most aspects of the application of the preceding theorem. Therefore, we continue to write the formulas in the corresponding partitioned form. However, the preceding theorem applies more generally. In Example 25.5.1 we wrote the score operator for a semiparametric model in the form
Corresponding to this, the system of likelihood equations can be written in the form
If the partitioned parameter $(\theta, \eta)$ and the partitioned "directions" $(a, b)$ are replaced by a general parameter $\tau$ and general direction c, then this formulation extends to general models. The maps $\Psi_n$ and $\Psi$ then take the forms
The theorem requires that these can be considered maps from the parameter set into a Banach space, for instance a space $\ell^\infty(\mathcal{C})$.
To gain more insight, consider the case that $\eta$ is a measure on a measurable space $(\mathcal{Z}, \mathcal{C})$. Then the directions h can often be taken equal to bounded functions $h: \mathcal{Z} \mapsto \mathbb{R}$, corresponding to the paths $d\eta_t = (1 + th)\, d\eta$ if $\eta$ is a completely unknown measure, or $d\eta_t = (1 + t(h - \eta h))\, d\eta$ if the total mass of each $\eta$ is fixed to one. In the remainder of the discussion, we assume the latter. Now the derivative map $\dot\Psi_0$ typically takes the form
$$\dot\Psi_0\begin{pmatrix}\theta - \theta_0\\ \eta - \eta_0\end{pmatrix} = \begin{pmatrix}\dot\Psi_{11} & \dot\Psi_{12}\\ \dot\Psi_{21} & \dot\Psi_{22}\end{pmatrix}\begin{pmatrix}\theta - \theta_0\\ \eta - \eta_0\end{pmatrix},$$
where
$$\begin{aligned}
\dot\Psi_{11}(\theta - \theta_0) &= -P_{\theta_0,\eta_0}\dot\ell_{\theta_0,\eta_0}\dot\ell_{\theta_0,\eta_0}^T(\theta - \theta_0),\\
\dot\Psi_{12}(\eta - \eta_0) &= -\int B^*_{\theta_0,\eta_0}\dot\ell_{\theta_0,\eta_0}\, d(\eta - \eta_0),\\
\dot\Psi_{21}(\theta - \theta_0)h &= -P_{\theta_0,\eta_0}(B_{\theta_0,\eta_0}h)\dot\ell_{\theta_0,\eta_0}^T(\theta - \theta_0), \qquad (25.91)\\
\dot\Psi_{22}(\eta - \eta_0)h &= -\int B^*_{\theta_0,\eta_0}B_{\theta_0,\eta_0}h\, d(\eta - \eta_0).
\end{aligned}$$
For instance, to find the last identity in an informal manner, consider a path $\eta_t$ in the direction of g, so that $d\eta_t - d\eta_0 = tg\, d\eta_0 + o(t)$. Then by the definition of a derivative
$$\begin{aligned}
\Psi_2(\theta_0, \eta_t)h - \Psi_2(\theta_0, \eta_0)h &= -(P_{\theta_0,\eta_t} - P_{\theta_0,\eta_0})B_{\theta_0,\eta_t}h\\
&\approx -tP_{\theta_0,\eta_0}(B_{\theta_0,\eta_0}g)(B_{\theta_0,\eta_0}h) + o(t)\\
&= -\int \bigl(B^*_{\theta_0,\eta_0}B_{\theta_0,\eta_0}h\bigr)\, tg\, d\eta_0 + o(t).
\end{aligned}$$
On comparing the preceding pair of displays, we obtain the last line of (25.91), at least for $d\eta - d\eta_0 = g\, d\eta_0$. These arguments are purely heuristic, and this form of the derivative must be established for every example. For instance, within the context of Theorem 25.90, we may need to apply $\dot\Psi_0$ to $\eta$ that are not absolutely continuous with respect to $\eta_0$. Then the validity of (25.91) depends on the version that is used to define the adjoint operator $B^*_{\theta_0,\eta_0}$. By definition, an adjoint is an operator between $L_2$-spaces and hence maps equivalence classes into equivalence classes.
The four partial derivatives $\dot\Psi_{ij}$ in (25.91) involve the four parts of the information operator $A^*_{\theta,\eta}A_{\theta,\eta}$, which was written in a partitioned form in Example 25.5.1. In particular, the map $\dot\Psi_{11}$ is exactly the Fisher information for $\theta$, and the operator $\dot\Psi_{22}$ is defined in terms of the information operator for the nuisance parameter. This is no coincidence, because the formulas can be considered a version of the general identity "expectation of the second derivative is equal to minus the information." An abstract form of the preceding argument applied to the map $\Psi(\tau)c = P_{\tau_0}A_\tau c - P_\tau A_\tau c$ leads to the identity, with $\tau_t$ a path with derivative $\dot\tau_0$ at t = 0 and score function $A_{\tau_0}d$,
In the case of a partitioned parameter $\tau = (\theta, \eta)$, the inner product on the right is defined as $\langle(a, b), (\alpha, \beta)\rangle_{\tau_0} = a^T\alpha + \int b\beta\, d\eta_0$, and the four formulas in (25.91) follow by Example 25.5.1 and some algebra. A difference with the finite-dimensional situation is that the derivatives $\dot\tau_0$ may not be dense in the domain of $\dot\Psi_0$, so that the formula determines $\dot\Psi_0$ only partly.
An important condition in Theorem 25.90 is the continuous invertibility of the derivative. Because a linear map between Euclidean spaces is automatically continuous, in the finite-dimensional set-up this condition reduces to the derivative being one-to-one. For infinite-dimensional systems of estimating equations, the continuity is far from automatic and may be the condition that is hardest to verify. Because it refers to the $\ell^\infty(\mathcal{H})$-norm, we have some control over it while setting up the system of estimating equations and choosing the set of functions $\mathcal{H}$. A bigger set $\mathcal{H}$ makes $\dot\Psi_0^{-1}$ more readily continuous but makes the differentiability of $\Psi$ and the Donsker condition more stringent.
In the partitioned case, the continuous invertibility of $\dot\Psi_0$ can be verified by ascertaining the continuous invertibility of the two operators $\dot\Psi_{11}$ and $V = \dot\Psi_{22} - \dot\Psi_{21}\dot\Psi_{11}^{-1}\dot\Psi_{12}$. In that case we have
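In finite dimensions the roles of $\dot\Psi_{11}$ and the Schur complement $V$ can be made concrete: invertibility of these two blocks produces the inverse of the whole partitioned derivative. A toy check with hypothetical matrices of our own choosing (the nuisance part discretized to five coordinates):

```python
import numpy as np

# Hypothetical partitioned derivative: theta-block, eta-block, and couplings.
P11 = np.array([[2.0, 0.3], [0.3, 1.5]])        # plays the role of the Fisher information for theta
P22 = np.eye(5) + 0.2 * np.ones((5, 5))         # plays the role of the nuisance information operator
P12 = 0.1 * np.arange(10.0).reshape(2, 5)
P21 = 0.1 * np.ones((5, 2))
Psi = np.block([[P11, P12], [P21, P22]])

P11i = np.linalg.inv(P11)
V = P22 - P21 @ P11i @ P12                      # Schur complement of P11
Vi = np.linalg.inv(V)
# Standard block-inversion formula: inverting P11 and V inverts the whole map.
inv = np.block([
    [P11i + P11i @ P12 @ Vi @ P21 @ P11i, -P11i @ P12 @ Vi],
    [-Vi @ P21 @ P11i, Vi],
])
print(np.allclose(inv @ Psi, np.eye(7)))   # True
```

In the infinite-dimensional setting the same algebra applies, but the continuity of $V^{-1}$ on $\ell^\infty(\mathcal{H})$ must be argued separately.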
The operator $\dot\Psi_{11}$ is the Fisher information matrix for $\theta$ if $\eta$ is known. If this were not invertible, then there would be no hope of finding asymptotically normal estimators for $\theta$. The operator V has the form
The operator $V: \mathrm{lin}\, H \mapsto \ell^\infty(\mathcal{H})$ is certainly continuously invertible if there exists a positive number $\varepsilon$ such that
In the case that $\eta$ is identified with the map $h \mapsto \eta h$ in $\ell^\infty(\mathcal{H})$, the norm on the right is given by $\sup_{h \in \mathcal{H}}|(\eta - \eta_0)h|$. Then the display is certainly satisfied if, for some $\varepsilon > 0$,
25.92 Lemma. Let $\mathbb{B}$ be a Banach space contained in $\ell^\infty(\mathcal{C})$. If $\tilde I_{\theta_0,\eta_0}$ is nonsingular, $B^*_{\theta_0,\eta_0}B_{\theta_0,\eta_0}: \mathbb{B} \mapsto \mathbb{B}$ is onto and continuously invertible and $B^*_{\theta_0,\eta_0}\dot\ell_{\theta_0,\eta_0} \in \mathbb{B}$, then $B^*_{\theta_0,\eta_0}B_{\theta_0,\eta_0} + K: \mathbb{B} \mapsto \mathbb{B}$ is onto and continuously invertible.
Proof. Abbreviate the index $(\theta_0, \eta_0)$ to 0. The operator K is compact, because it has a finite-dimensional range. Therefore, by Lemma 25.93 below, the operator $B_0^*B_0 + K$ is continuously invertible provided that it is one-to-one.
Suppose that $(B_0^*B_0 + K)h = 0$ for some $h \in \mathbb{B}$. By assumption there exists a path $t \mapsto \eta_t$ with score function $\bar B_0 h = B_0 h - P_0 B_0 h$ at t = 0. Then the submodel indexed by $t \mapsto (\theta_0 + ta_0, \eta_t)$, for $a_0 = -I_0^{-1}P_0(\bar B_0 h)\dot\ell_0$, has score function $a_0^T\dot\ell_0 + \bar B_0 h$ at t = 0, and
information
$$a_0^T I_0 a_0 + 2a_0^T P_0\dot\ell_0(\bar B_0 h) + P_0(\bar B_0 h)^2 = P_0(\bar B_0 h)^2 - a_0^T I_0 a_0.$$
Because the efficient information matrix is nonsingular, this information must be strictly positive, unless $a_0 = 0$. On the other hand,
This expression is at least the right side of the preceding display and is positive if $a_0 \neq 0$. Thus $a_0 = 0$, whence $Kh = 0$. Reinserting this in the equation $(B_0^*B_0 + K)h = 0$, we find that $B_0^*B_0h = 0$ and hence h = 0. •
The proof of the preceding lemma is based on the Fredholm theory of linear operators. An operator $K: \mathbb{B} \mapsto \mathbb{B}$ is compact if it maps the unit ball into a totally bounded set. The following lemma shows that for certain operators continuous invertibility is a consequence of their being one-to-one, as is true for matrix operators on Euclidean space.† It is also useful to prove the invertibility of the information operator itself.
25.93 Lemma. Let $\mathbb{B}$ be a Banach space, let the operator $A: \mathbb{B} \mapsto \mathbb{B}$ be continuous, onto and continuously invertible and let $K: \mathbb{B} \mapsto \mathbb{B}$ be a compact operator. Then $R(A + K)$ is closed and has codimension equal to the dimension of $N(A + K)$. In particular, if $A + K$ is one-to-one, then $A + K$ is onto and continuously invertible.
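A finite-dimensional caricature of the lemma: take A the identity and K a rank-one (hence compact) perturbation. Then $A + K$ is invertible exactly when it is one-to-one, and the Sherman-Morrison formula exhibits the bounded inverse. The vectors below are arbitrary fixed choices of ours:

```python
import numpy as np

n = 6
u = np.linspace(0.1, 0.6, n)       # K = u w^T is rank-one, hence compact
w = np.ones(n)
M = np.eye(n) + np.outer(u, w)     # "A + K" with A = I
# M is one-to-one iff 1 + w.u != 0; here w.u = 2.1, so 1 + w.u = 3.1 != 0.
Minv = np.eye(n) - np.outer(u, w) / (1.0 + w @ u)   # Sherman-Morrison inverse
print(np.allclose(Minv @ M, np.eye(n)))   # True
```

When $1 + w^Tu = 0$ the operator annihilates a one-dimensional space, and the codimension of the range equals the dimension of the null space, exactly as the lemma states.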
The asymptotic covariance matrix of the sequence $\sqrt{n}(\hat\theta_n - \theta_0)$ can be computed from the expression for $\dot\Psi_0$ and the covariance function of the limiting process of the sequence $\sqrt{n}\,\Psi_n(\theta_0, \eta_0)$. However, it is easier to use an asymptotic representation of $\sqrt{n}(\hat\theta_n - \theta_0)$ as a sum. For a continuously invertible information operator $B^*_{\theta_0,\eta_0}B_{\theta_0,\eta_0}$ this can be obtained as follows.
In view of (25.91), the assertion of Theorem 25.90 can be rewritten as the system of equations, with a subscript 0 denoting $(\theta_0, \eta_0)$,
random censoring model, and T follows the Cox model. Thus, the density of $X = (Y, \Delta, Z)$ is given by
We define a likelihood for the parameters $(\theta, \Lambda)$ by dropping the factors involving the distribution of $(C, Z)$, and replacing $\lambda(y)$ by the point mass $\Lambda\{y\}$,
For any bounded, measurable function $h: [0, \tau] \mapsto \mathbb{R}$, the path defined by $d\Lambda_t = (1 + th)\, d\Lambda$ defines a submodel passing through $\Lambda$ at t = 0. Its score function at t = 0 takes the form
The function $h \mapsto B_{\theta,\Lambda}h(x)$ is bounded on every set of uniformly bounded functions h, for any finite measure $\Lambda$, and is even uniformly bounded in x and in $(\theta, \Lambda)$ ranging over a neighborhood of $(\theta_0, \Lambda_0)$.
It is not difficult to find a formula for the adjoint $B^*_{\theta,\Lambda}$ of $B_{\theta,\Lambda}: L_2(\Lambda) \mapsto L_2(P_{\theta,\Lambda})$, but this is tedious and not insightful. The information operator $B^*_{\theta,\Lambda}B_{\theta,\Lambda}: L_2(\Lambda) \mapsto L_2(\Lambda)$ can be calculated from the identity $P_{\theta,\Lambda}(B_{\theta,\Lambda}g)(B_{\theta,\Lambda}h) = \Lambda\bigl(g\, B^*_{\theta,\Lambda}B_{\theta,\Lambda}h\bigr)$. For continuous $\Lambda$ it takes the surprisingly simple form
To see this, write the product $B_{\theta,\Lambda}g\, B_{\theta,\Lambda}h$ as the sum of four terms
Take the expectation under $P_{\theta,\Lambda}$ and interchange the order of the integrals to represent $B^*_{\theta,\Lambda}B_{\theta,\Lambda}h$ also as a sum of four terms. Partially integrate the fourth term to see that this cancels the second and third terms. We are left with the first term. The function $B^*_{\theta,\Lambda}\dot\ell_{\theta,\Lambda}$ can be obtained by a similar argument, starting from the identity $P_{\theta,\Lambda}\dot\ell_{\theta,\Lambda}B_{\theta,\Lambda}h = \Lambda\bigl((B^*_{\theta,\Lambda}\dot\ell_{\theta,\Lambda})h\bigr)$. It is given by
The calculation of the information operator in this way is instructive, but serves only to check (25.91) for this example. As in other examples a direct derivation of the derivative of the map $\Psi = (\Psi_1, \Psi_2)$ given by $\Psi_1(\theta, \Lambda) = P_0\dot\ell_{\theta,\Lambda}$ and $\Psi_2(\theta, \Lambda)h = P_0B_{\theta,\Lambda}h$ requires less work. In the present case this is almost trivial, for the map $\Psi$ is already linear in $\Lambda$. Writing $G_0(y \mid z)$ for the distribution function of Y given Z, this map can be written as
If we take $\mathcal{H}$ equal to the unit ball of the space $BV[0, \tau]$ of bounded functions of bounded variation, then the map $\Psi: \mathbb{R} \times \ell^\infty(\mathcal{H}) \mapsto \mathbb{R} \times \ell^\infty(\mathcal{H})$ is linear and continuous in $\Lambda$, and its partial derivatives with respect to $\theta$ can be found by differentiation under the expectation and are continuous in a neighborhood of $(\theta_0, \Lambda_0)$. Several applications of Fubini's theorem show that the derivative takes the form (25.91).
We can consider $B_0^*B_0$ as an operator of the space $BV[0, \tau]$ into itself. Then it is continuously invertible if the function $y \mapsto E_{\theta_0,\Lambda_0}1_{Y \ge y}e^{\theta_0 Z}$ is bounded away from zero on $[0, \tau]$. This we have (indirectly) assumed. Thus, we can apply Lemma 25.92. The efficient score function takes the form (25.33), and, with $M_i(y) = E_{\theta_0,\Lambda_0}1_{Y \ge y}Z^i e^{\theta_0 Z}$, the efficient information reduces to
$$\tilde I_{\theta_0,\Lambda_0} = E\, e^{\theta_0 Z}\int \Bigl(Z - \frac{M_1}{M_0}(y)\Bigr)^2\,\bigl(1 - G_0(y \mid Z)\bigr)\, d\Lambda_0(y).$$
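The displayed efficient information can be approximated by its empirical plug-in version, since the displayed integral also equals $E\,\Delta\bigl(Z - (M_1/M_0)(Y)\bigr)^2$. The sketch below takes a hypothetical specification of ours (binary Z, $\theta_0 = 0$, standard exponential T, administrative censoring at $\tau = 2$), for which $M_1/M_0 = 1/2$ and $\tilde I = (1 - e^{-2})/4$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta0, tau = 2000, 0.0, 2.0
Z = rng.integers(0, 2, n).astype(float)   # binary covariate
T = rng.exponential(1.0, n)               # baseline cumulative hazard Lambda0(y) = y
Y = np.minimum(T, tau)                    # administrative censoring at tau
D = (T <= tau).astype(float)

def M(j, y):
    # Empirical version of M_j(y) = E 1{Y >= y} Z^j e^{theta0 Z}.
    return np.mean((Y >= y) * Z**j * np.exp(theta0 * Z))

m = np.array([M(1, y) / M(0, y) for y in Y])
I_hat = np.mean(D * (Z - m) ** 2)   # plug-in estimate of the efficient information
print(I_hat)
```

With $\theta_0 = 0$ and Z independent of T the target value is $(1 - e^{-2})/4 \approx 0.216$, which the plug-in estimate reproduces up to sampling error.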
$$(x, y, z) \mapsto \int p_\theta(x \mid s)\, d\eta(s)\; p_\theta(y \mid z)\, d\eta(z) =: p_\theta(x \mid \eta)\, p_\theta(y \mid z)\, d\eta(z).$$
Here the parameter $\eta$ is a completely unknown distribution, and the kernel $p_\theta(\cdot \mid s)$ is a given parametric model indexed by the parameters $\theta$ and s, relative to some dominating measure $\mu$. Thus, we obtain equal numbers of "bad" and "good" (direct) observations concerning $\eta$. Typically, by themselves the bad observations do not contribute positive information concerning the cumulative distribution function $\eta$, but along with the good observations they help to cut the asymptotic variance of the maximum likelihood estimators.
25.94 Example. This model can arise if we are interested in the relationship between a response Y and a covariate Z, but because of the cost of measurement we do not observe Z for a fraction of the population. For instance, a full observation $(Y, Z) = (D, W, Z)$ could consist of
- a logistic regression of D on exp Z with intercept and slope $\beta_0$ and $\beta_1$, respectively, and
- a linear regression of W on Z with intercept and slope $\alpha_0$ and $\alpha_1$, respectively, and an $N(0, \sigma^2)$-error.
Given Z the variables D and W are assumed independent, and Z has a completely unspecified distribution $\eta$ on an interval in $\mathbb{R}$. The kernel is equal to, with $\Psi$ denoting the logistic distribution function and $\phi$ denoting the standard normal density,
The precise form of this density does not play a major role in the following.
In this situation the covariate Z is a gold standard, but, in view of the costs of measurement, for a selection of observations only the "surrogate covariate" W is available. For instance, Z corresponds to the LDL cholesterol level and W to total cholesterol, and we are interested in the occurrence of heart disease, D = 1. For simplicity, each observation in our set-up consists of one full observation $(Y, Z) = (D, W, Z)$ and one reduced observation $X = (D, W)$. □
25.95 Example. If the kernel $p_\theta(y \mid z)$ is equal to the normal density with mean z and variance $\theta$, then the observations are a random sample $Z_1, \ldots, Z_n$ from $\eta$, a random sample $X_1, \ldots, X_n$ from $\eta$ perturbed by an additive (unobserved) normal error, and a sample $Y_1, \ldots, Y_n$ of random variables that given $Z_1, \ldots, Z_n$ are normally distributed with means $Z_i$ and variance $\theta$. In this case the interest is perhaps focused on estimating $\eta$, rather than $\theta$. □
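A quick simulation of this specification (with $\eta$ uniform on $[0, 1]$ and $\theta = 1/2$, arbitrary choices of ours) shows how the indirect observations X still carry information on $\theta$ through their spread:

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta = 50_000, 0.5
Z = rng.uniform(0, 1, n)                                     # direct sample from eta
X = rng.uniform(0, 1, n) + rng.normal(0, np.sqrt(theta), n)  # eta-draws plus N(0, theta) noise
Y = Z + rng.normal(0, np.sqrt(theta), n)                     # Y_i | Z_i ~ N(Z_i, theta)
# The deconvolution structure gives Var(X) = Var(eta) + theta = 1/12 + 1/2.
print(np.var(X), 1 / 12 + theta)
```

Of course, the maximum likelihood estimator uses the full mixture likelihood rather than this moment relation; the simulation only illustrates the data structure.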
The distribution of an observation (X, Y, Z) is given by two densities and a nonparametric part. We choose as likelihood
Thus, for the completely unknown distribution $\eta$ of Z we use the empirical likelihood; for the other part of the observations we use the density, as usual. It is clear that the maximum likelihood estimator $\hat\eta$ charges all observed values $Z_1, \ldots, Z_n$, but the term $p_\theta(x \mid \eta)$ leads to some additional support points as well. In general, these are not equal to values of the observations.
The score function for $\theta$ is given by
$$\dot\ell_{\theta,\eta}(x, y, z) = \dot\kappa_{\theta,\eta}(x) + \dot\kappa_\theta(y \mid z) = \frac{\int \dot\kappa_\theta(x \mid s)\, p_\theta(x \mid s)\, d\eta(s)}{p_\theta(x \mid \eta)} + \dot\kappa_\theta(y \mid z).$$
Here $\dot\kappa_\theta(y \mid z) = \partial/\partial\theta \log p_\theta(y \mid z)$ is the score function for $\theta$ of the conditional density $p_\theta(y \mid z)$, and $\dot\kappa_{\theta,\eta}(x)$ is the score function for $\theta$ of the mixture density $p_\theta(x \mid \eta)$.
Paths of the form $d\eta_t = (1 + th)\, d\eta$ (with $\eta h = 0$) yield scores
The operator $C_{\theta,\eta}: L_2(\eta) \mapsto L_2\bigl(p_\theta(\cdot \mid \eta)\bigr)$ is the score operator for the mixture part of the model. Its Hilbert-space adjoint is given by
The range of $B_{\theta,\eta}$ is contained in the subset G of $L_2\bigl(p_\theta(\cdot \mid \eta) \times \eta\bigr)$ consisting of functions of the form $(x, z) \mapsto g_1(x) + g_2(z) + c$. This representation of a function of this type is unique if both $g_1$ and $g_2$ are taken to be mean-zero functions. With $P_{\theta,\eta}$ the distribution of the observation (X, Y, Z),
The unit ball of one of these spaces is a good choice for the set $\mathcal{H}$ indexing the likelihood equations if the maps $z \mapsto p_{\theta_0}(x \mid z)$ are sufficiently smooth.
25.96 Lemma. Let $\mathcal{Z}$ be a bounded, convex subset of $\mathbb{R}^d$ and assume that the maps $z \mapsto p_{\theta_0}(x \mid z)$ are continuously differentiable for each x with partial derivatives $\partial/\partial z_i\, p_{\theta_0}(x \mid z)$
Proof. By its strict positive-definiteness in the Hilbert-space sense, the operator $B_0^*B_0: \ell^\infty(\mathcal{Z}) \mapsto \ell^\infty(\mathcal{Z})$ is certainly one-to-one in that $B_0^*B_0h = 0$ implies that h = 0 almost surely under $\eta_0$. On reinserting this we find that $-h = C_0^*C_0h = C_0^*0 = 0$ everywhere. Thus $B_0^*B_0$ is also one-to-one in a pointwise sense. If it can be shown that $C_0^*C_0: C^\beta(\mathcal{Z}) \mapsto C^\beta(\mathcal{Z})$ is compact, then $B_0^*B_0$ is onto and continuously invertible, by Lemma 25.93.
It follows from the Lipschitz condition on the partial derivatives that $C_0^*h(z)$ is differentiable for every bounded function $h: \mathcal{X} \mapsto \mathbb{R}$, and its partial derivatives can be found by differentiating under the integral sign:
$$\frac{\partial}{\partial z_i}\, C_0^*h(z) = \int h(x)\, \frac{\partial}{\partial z_i}\, p_0(x \mid z)\, d\mu(x).$$
The two conditions of the lemma imply that this function has Lipschitz norm of order $\alpha$ bounded by $K\|h\|_\infty$. Let $h_n$ be a uniformly bounded sequence in $\ell^\infty(\mathcal{X})$. Then the partial derivatives of the sequence $C_0^*h_n$ are uniformly bounded and have uniformly bounded Lipschitz norms of order $\alpha$. Because $\mathcal{Z}$ is totally bounded, it follows by a strengthening of the Arzelà-Ascoli theorem that the sequences of partial derivatives are precompact with respect to the Lipschitz norm of order $\beta$ for every $\beta < \alpha$. Thus there exists a subsequence along which the partial derivatives converge in the Lipschitz norm of order $\beta$. By the Arzelà-Ascoli theorem there exists a further subsequence such that the functions $C_0^*h_n(z)$ converge uniformly to a limit. If both a sequence of functions itself and their continuous partial derivatives converge uniformly to limits, then the limit of the functions must have the limits of the sequences of partial derivatives as its partial derivatives. We conclude that $C_0^*h_n$ converges in the $\|\cdot\|_{1+\beta}$-norm, whence $C_0^*: \ell^\infty(\mathcal{X}) \mapsto C^\beta(\mathcal{Z})$ is compact. Then the operator $C_0^*C_0$ is certainly compact as an operator from $C^\beta(\mathcal{Z})$ into itself. •
Because the efficient information for $\theta$ is bounded below by the information for $\theta$ in a "good" observation (Y, Z), it is typically positive. Then the preceding lemma together with Lemma 25.92 shows that the derivative $\dot\Psi_0$ is continuously invertible as a map from $\mathbb{R}^k \times \ell^\infty(\mathcal{H})$ to $\mathbb{R}^k \times \ell^\infty(\mathcal{H})$ for $\mathcal{H}$ the unit ball of $C^\beta(\mathcal{Z})$. This is useful in the cases that the dimension of $\mathcal{Z}$ is not bigger than 3, for, in view of Example 19.9, we must have $\beta > d/2$ in order that the functions $B_{\theta,\eta}h = C_{\theta,\eta}h \oplus h$ form a Donsker class, as required by Theorem 25.90. Thus $\beta > 1/2, 1, 3/2$ suffice in dimensions 1, 2, 3, but we need $\beta > 2$ if $\mathcal{Z}$ is of dimension 4.
Sets Z of higher dimension can be treated by extending Lemma 25.96 to take into account
higher-order derivatives, or alternatively, by not using a C^α(Z)-unit ball for 𝓗. The general
requirements for a class 𝓗 that is the unit ball of a Banach space 𝔹 are that 𝓗 is η₀-Donsker,
that C_0^*C_0𝔹 ⊂ 𝔹, and that C_0^*C_0: 𝔹 → 𝔹 is compact. For instance, if p_θ(x|z) corresponds
to a linear regression on z, then the functions z ↦ C_0^*C_0h(z) are of the form z ↦ g(aᵀz)
for functions g with a one-dimensional domain. Then the dimensionality of Z does not
really play an important role, and we can apply similar arguments, under weaker conditions
than required by treating Z as general higher-dimensional, with, for instance, 𝔹 equal to
the Banach space consisting of the linear span of the functions z ↦ g(aᵀz) in C^β(Z) and
𝓗 its unit ball.
The second main condition of Theorem 25.90 is that the functions ℓ̇_{θ,η} and B_{θ,η}h form
a Donsker class. Depending on the kernel p_θ(x|z), a variety of methods may be used
to verify this condition. One possibility is to employ smoothness of the kernel in x in
combination with Example 19.9. If the map x ↦ p_θ(x|z) is appropriately smooth, then so
is the map x ↦ C_{θ,η}h(x). Straightforward differentiation yields

    ∂/∂x_i C_{θ,η}h(x) = ∫ h(z) (∂/∂x_i log p_θ(x|z)) p_θ(x|z) dη(z) / p_{θ,η}(x)
                         − C_{θ,η}h(x) ∂/∂x_i log p_{θ,η}(x).
Depending on the function ∂/∂x_i log p_θ(x|z), this leads to a bound on the first derivative of
the function x ↦ C_{θ,η}h(x). If X is an interval in ℝ, then this is sufficient for applicability of
Example 19.9. If X is higher-dimensional, then we can bound higher-order partial derivatives
in a similar manner.
If the main interest is in the estimation of η rather than θ, then there is also a nontechnical
criterion for the choice of 𝓗, because the final result gives the asymptotic distribution of η̂h
for every h ∈ 𝓗, but not necessarily for h ∉ 𝓗. Typically, a particular h of interest can be
added to a set 𝓗 that is chosen for technical reasons without violating the results as given
previously. The addition of an infinite set would require additional arguments. Reference
[107] gives more details concerning this example.
Notes
Most of the results in this chapter were obtained during the past 15 years, and the area is
still in development. The monograph by Bickel, Klaassen, Ritov, and Wellner [8] gives
many detailed information calculations, and heuristic discussions of methods to construct
estimators. See [77], [101], [102], [113], [122], [145] for a number of other, also more re-
cent, papers. For many applications in survival analysis, counting processes offer a flexible
modeling tool, as shown in Andersen, Borgan, Gill, and Keiding [1], who also treat semi-
parametric models for survival analysis. The treatment of maximum likelihood estimators
is motivated by (partially unpublished) joint work with Susan Murphy. Apparently, the
present treatment of the Cox model is novel, although proofs using the profile likelihood
function and martingales go back at least 15 years. In connection with estimating equations
and CAR models we profited from discussions with James Robins, the representation in
section 25.53 going back to [129]. The use of the empirical likelihood goes back a long
way, in particular in survival analysis. More recently it has gained popularity as a basis
for constructing likelihood ratio based confidence intervals. Limitations of the information
bounds and the type of asymptotics discussed in this chapter are pointed out in [128]. For
further information concerning this chapter consult recent journals, both in statistics and
econometrics.
PROBLEMS
1. Suppose that the underlying distribution of a random sample of real-valued observations is known
to have mean zero but is otherwise unknown.
(i) Derive a tangent set for the model.
(ii) Find the efficient influence function for estimating ψ(P) = P(C) for a fixed set C.
(iii) Find an asymptotically efficient sequence of estimators for ψ(P).
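For intuition, the gain that parts (ii)-(iii) point to can be checked numerically. The sketch below is an illustration, not part of the text: it assumes C = (0, ∞) and X uniform on (−1, 1), and compares the plain empirical frequency with a one-step estimator built from the candidate efficient influence function 1_C − P(C) − (E X1_C / E X²)·X, which exploits the known constraint E X = 0.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 500, 2000
est_plain, est_eff = [], []
for _ in range(reps):
    x = rng.uniform(-1, 1, size=n)      # a distribution with known mean zero
    ind = (x > 0).astype(float)         # 1_C for the set C = (0, infinity)
    plain = ind.mean()                  # empirical estimator of P(C)
    # projection coefficient E[1_C X] / E[X^2], estimated from the sample:
    b = np.mean(ind * x) / np.mean(x ** 2)
    est_plain.append(plain)
    est_eff.append(plain - b * x.mean())  # correction using E X = 0

print(np.var(est_plain), np.var(est_eff))  # the corrected estimator has smaller variance
```

Both estimators are centered near P(C) = 1/2, but the corrected one has markedly smaller Monte Carlo variance, reflecting the projection onto the tangent set.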
2. Suppose that the model consists of densities p(x − θ) on ℝ^k, where p is a smooth density with
p(x) = p(−x). Find the efficient influence function for estimating θ.
3. In the regression model of Example 25.28, assume in addition that e and X are independent. Find
the efficient score function for θ.
4. Find a tangent set for the set of mixture distributions ∫ p(x|z) dF(z), for x ↦ p(x|z) the
uniform distribution on [z, z + 1]. Is the linear span of this set equal to the nonparametric tangent
set?
5. (Neyman-Scott problem) Suppose that a typical observation is a pair (X, Y) of variables that
are conditionally independent and N(Z, θ)-distributed given an unobservable variable Z with a
completely unknown distribution η on ℝ. A natural approach to estimating θ is to "eliminate"
the unobservable Z by taking the difference X − Y. The maximum likelihood estimator based
on a sample of such differences is T_n = (2n)⁻¹ Σ_{i=1}^n (X_i − Y_i)².
(i) Show that the closed linear span of the tangent set for η contains all square-integrable,
mean-zero functions of X + Y.
(ii) Show that T_n is asymptotically efficient.
(iii) Is T_n equal to the semiparametric maximum likelihood estimator?
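As a sanity check on T_n, a quick Monte Carlo sketch (our illustration, with an arbitrarily chosen exponential mixing distribution for the unobservable Z):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 2.0, 100_000

# Unobservable Z_i from a "completely unknown" mixing distribution (exponential here).
z = rng.exponential(scale=3.0, size=n)
x = rng.normal(z, np.sqrt(theta))
y = rng.normal(z, np.sqrt(theta))

# X - Y ~ N(0, 2*theta) whatever the distribution of Z, so
# T_n = (2n)^{-1} * sum (X_i - Y_i)^2 estimates theta.
t_n = np.mean((x - y) ** 2) / 2
print(t_n)  # close to theta = 2.0
```

The differencing removes Z exactly, which is why the estimate does not depend on the mixing distribution chosen in the simulation.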
6. In Example 25.72, calculate the score operator and the information operator for η.
7. In Example 25.12, express the density of an observation X in the marginal distributions F and
G of Y and C and
(i) Calculate the score operators for F and G.
(ii) Show that the empirical distribution functions F̂* and Ĝ* of the Y_i and C_i are asymptotically
efficient for estimating the marginal distributions F* and G* of Y and C, respectively;
(iii) Prove the asymptotic normality of the estimator for F given by

    F̂(y) = 1 − ∏_{0≤s≤y} (1 − Λ̂{s}),    Λ̂(y) = ∫_[0,y] dF̂* / (1 − F̂*_− − Ĝ*_−).
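The product-limit construction in (iii) is easy to compute. The following sketch is our illustration, assuming right-censored data (T_i, Δ_i) with T_i = min(Y_i, C_i) and Δ_i = 1{Y_i ≤ C_i}; with no censoring it reduces to the empirical distribution function.

```python
import numpy as np

def product_limit(times, uncensored):
    """Product-limit estimate of F at the distinct observed times.

    times: observed T_i = min(Y_i, C_i); uncensored: indicators {Y_i <= C_i}.
    F_hat(s) = 1 - prod_{u <= s} (1 - dLambda{u}), with
    dLambda{u} = (# uncensored events at u) / (# at risk at u).
    """
    times = np.asarray(times, dtype=float)
    d = np.asarray(uncensored, dtype=bool)
    order = np.argsort(times)
    t, d = times[order], d[order]
    s, idx = np.unique(t, return_index=True)       # distinct jump points
    at_risk = len(t) - idx                         # #{i : T_i >= s_j}
    events = np.add.reduceat(d.astype(int), idx)   # uncensored events at s_j
    surv = np.cumprod(1.0 - events / at_risk)      # prod_{u <= s} (1 - dLambda{u})
    return s, 1.0 - surv

# With no censoring the product-limit estimate is (up to rounding) the empirical cdf:
s, F = product_limit([1, 2, 2, 3], [True, True, True, True])
print(s, F)  # F close to (0.25, 0.75, 1.0)
```

The cumulative product telescopes against the at-risk counts, which is the algebraic identity behind the no-censoring reduction.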
9. Show that a U-statistic is an asymptotically efficient estimator for its expectation if the model is
nonparametric.
10. Suppose that the model consists of all probability distributions on the real line that are symmetric.
(i) If the symmetry point is known to be 0, find the maximum likelihood estimator relative to
the empirical likelihood.
(ii) If the symmetry point is unknown, characterize the maximum likelihood estimators relative
to the empirical likelihood; are they useful?
11. Find the profile likelihood function for the parameter θ in the Cox model with censoring discussed
in Section 25.12.1.
12. Let P be the set of all probability distributions on ℝ with a positive density and let ψ(P) be the
median of P.
(i) Find the influence function of ψ.
(ii) Prove that the sample median is asymptotically efficient.
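The claim in (ii) can be made concrete for the standard normal: the influence function of the median is sgn(x − m)/(2p(m)), so the asymptotic variance is 1/(4p(m)²) = π/2, and n·Var(median) should approach that value. A small simulation (ours, with arbitrarily chosen n and replication count):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 400, 4000

# For P with positive density p and median m, the influence function of the
# median is sgn(x - m) / (2 p(m)); its variance is 1 / (4 p(m)^2).
# Standard normal: p(0) = 1/sqrt(2*pi), so the asymptotic variance is pi/2.
medians = np.median(rng.normal(size=(reps, n)), axis=1)
print(n * medians.var())  # close to pi/2 = 1.5707...
```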
References
[1] Andersen, P.K., Borgan, Ø., Gill, R.D., and Keiding, N. (1992). Statistical Models Based on
Counting Processes. Springer, Berlin.
[2] Arcones, M.A., and Giné, E. (1993). Limit theorems for U-processes. Annals of Probability
21, 1494-1542.
[3] Bahadur, R.R. (1967). An optimal property of the likelihood ratio statistic. Proceedings of
the Fifth Berkeley Symposium on Mathematical Statistics and Probability (1965/66) I, 13-26.
University of California Press, Berkeley.
[4] Bahadur, R.R. (1971). Some limit theorems in statistics. Conference Board of the Mathematical
Sciences Regional Conference Series in Applied Mathematics 4. Society for Industrial and
Applied Mathematics, Philadelphia.
[5] Barndorff-Nielsen, O.E., and Hall, P. (1988). On the level-error after Bartlett adjustment of the
likelihood ratio statistic. Biometrika 75, 374-378.
[6] Bauer, H. (1981). Probability Theory and Elements of Measure Theory. Holt, Rinehart, and
Winston, New York.
[7] Bentkus, V., Götze, F., and van Zwet, W.R. (1997). An Edgeworth expansion for symmetric statistics.
Annals of Statistics 25, 851-896.
[8] Bickel, P.J., Klaassen, C.A.J., Ritov, Y., and Wellner, J.A. (1993). Efficient and Adaptive
Estimation for Semiparametric Models. Johns Hopkins University Press, Baltimore.
[9] Bickel, P.J., and Ghosh, J.K. (1990). A decomposition for the likelihood ratio statistic and the
Bartlett correction-a Bayesian argument. Annals of Statistics 18, 1070-1090.
[10] Bickel, P.J., and Rosenblatt, M. (1973). On some global measures of the deviations of density
function estimates. Annals of Statistics 1, 1071-1095.
[11] Billingsley, P. (1968). Convergence of Probability Measures. John Wiley, New York.
[12] Birgé, L. (1983). Approximation dans les espaces métriques et théorie de l'estimation.
Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65, 181-238.
[13] Birgé, L. (1997). Estimation of unimodal densities without smoothness assumptions. Annals
of Statistics 25, 970-981.
[14] Birgé, L., and Massart, P. (1993). Rates of convergence for minimum contrast estimators.
Probability Theory and Related Fields 97, 113-150.
[15] Birgé, L., and Massart, P. (1997). From model selection to adaptive estimation. Festschrift for
Lucien Le Cam. Springer, New York, 55-87.
[16] Birman, M.S., and Solomjak, M.Z. (1967). Piecewise-polynomial approximation of functions
of the classes W_p. Mathematics of the USSR Sbornik 13, 295-317.
[17] Brown, L. (1987). Fundamentals of Statistical Exponential Families with Applications in Statistical
Decision Theory. Institute of Mathematical Statistics, California.
[18] Brown, L.D., and Fox, M. (1974). Admissibility of procedures in two-dimensional location
parameter problems. Annals of Statistics 2, 248-266.
[19] Cantelli, F.P. (1933). Sulla determinazione empirica delle leggi di probabilità. Giornale
dell'Istituto Italiano degli Attuari 4, 421-424.
[20] Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the
sum of observations. Annals of Mathematical Statistics 23, 493-507.
[21] Chernoff, H. (1954). On the distribution of the likelihood ratio statistic. Annals of Mathematical
Statistics 25, 573-578.
[22] Chernoff, H., and Lehmann, E.L. (1954). The use of maximum likelihood estimates in χ² tests
for goodness of fit. Annals of Mathematical Statistics 25, 579-586.
[23] Chow, Y.S., and Teicher, H. (1978). Probability Theory. Springer-Verlag, New York.
[24] Cohn, D.L. (1980). Measure Theory. Birkhauser, Boston.
[25] Copas, J. (1975). On the unimodality of the likelihood for the Cauchy distribution. Biometrika
62, 701-704.
[26] Cramér, H. (1938). Sur un nouveau théorème-limite de la théorie des probabilités. Actualités
Scientifiques et Industrielles 736, 5-23.
[27] Cramér, H. (1946). Mathematical Methods of Statistics. Princeton University Press, Princeton.
[28] Csörgő, M. (1983). Quantile Processes with Statistical Applications. CBMS-NSF Regional
Conference Series in Applied Mathematics 42. Society for Industrial and Applied Mathematics
(SIAM), Philadelphia.
[29] Dacunha-Castelle, D., and Duflo, M. (1993). Probabilités et Statistiques, tome II. Masson,
Paris.
[30] Davies, R.B. (1973). Asymptotic inference in stationary Gaussian time-series. Advances in
Applied Probability 4, 469-497.
[31] Dembo, A., and Zeitouni, O. (1993). Large Deviation Techniques and Applications. Jones and
Bartlett Publishers, Boston.
[32] Deuschel, J.D., and Stroock, D.W. (1989). Large Deviations. Academic Press, New York.
[33] Devroye, L., and Györfi, L. (1985). Nonparametric Density Estimation: The L1-View. John
Wiley & Sons, New York.
[34] Diaconis, P., and Freedman, D. (1986). On the consistency of Bayes estimates. Annals of
Statistics 14, 1-26.
[35] Donald, S.G., and Newey, W.K. (1994). Series estimation of semilinear models. Journal of
Multivariate Analysis 50, 30-40.
[36] Donoho, D.L., and Johnstone, I.M. (1994). Ideal spatial adaptation by wavelet shrinkage.
Biometrika 81, 425-455.
[37] Donoho, D.L., and Liu, R.C. (1991). Geometrizing rates of convergence II, III. Annals of
Statistics 19, 633-701.
[38] Donsker, M.D. (1952). Justification and extension of Doob's heuristic approach to the
Kolmogorov-Smirnov theorems. Annals of Mathematical Statistics 23, 277-281.
[39] Doob, J. (1948). Application of the theory of martingales. Le Calcul des Probabilités et ses
Applications. Colloques Internationaux du CNRS, Paris, 22-28.
[40] Drost, F. C. (1988). Asymptotics for Generalized Chi-Square Goodness-of-Fit Tests. CWI tract
48. Centrum voor Wiskunde en Informatica, Amsterdam.
[41] Dudley, R.M. (1976). Probabilities and Metrics: Convergence of Laws on Metric Spaces.
Mathematics Institute Lecture Notes Series 45. Aarhus University, Denmark.
[42] Dudley, R.M. (1989). Real Analysis and Probability, Wadsworth, Belmont, California.
[43] Dupač, V., and Hájek, J. (1969). Asymptotic normality of simple linear rank statistics under
alternatives II. Annals of Mathematical Statistics 40, 1992-2017.
[44] Efron, B., and Tibshirani, R.J. (1993). An Introduction to the Bootstrap. Chapman and Hall,
London.
[45] Fahrmeir, L., and Kaufmann, H. (1985). Consistency and asymptotic normality of the maximum
likelihood estimator in generalized linear models. Annals of Statistics 13, 342-368. (Correction:
Annals of Statistics 14, 1643.)
[46] Farrell, R.H. (1972). On the best obtainable asymptotic rates of convergence in estimation of
a density function at a point. Annals of Mathematical Statistics 43, 170-180.
[47] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, vol. II. John
Wiley & Sons, New York.
[48] Fisher, R.A. (1922). On the mathematical foundations of theoretical statistics. Philosophical
Transactions of the Royal Society of London, Series A 222, 309-368.
[49] Fisher, R.A. (1924). The conditions under which χ² measures the discrepancy between observations
and hypothesis. Journal of the Royal Statistical Society 87, 442-450.
[50] Fisher, R.A. (1925). Theory of statistical estimation. Proceedings of the Cambridge Philosophical
Society 22, 700-725.
[51] van de Geer, S.A. (1988). Regression Analysis and Empirical Processes. CWI Tract 45. Centrum
voor Wiskunde en Informatica, Amsterdam.
[52] Ghosh, J.K. (1994). Higher Order Asymptotics. Institute of Mathematical Statistics, Hayward.
[53] Gill, R.D. (1989). Non- and semi-parametric maximum likelihood estimators and the von Mises
method (part 1). Scandinavian Journal of Statistics 16, 97-128.
[54] Gill, R.D. (1994). Lectures on survival analysis. Lecture Notes in Mathematics 1581, 115-241.
[55] Gill, R.D., and Johansen, S. (1990). A survey of product-integration with a view towards
application in survival analysis. Annals of Statistics 18, 1501-1555.
[56] Gill, R.D., and van der Vaart, A.W. (1993). Non- and semi-parametric maximum likelihood
estimators and the von Mises method (part II). Scandinavian Journal of Statistics 20, 271-288.
[57] Giné, E., and Zinn, J. (1986). Lectures on the central limit theorem for empirical processes.
Lecture Notes in Mathematics 1221, 50-113.
[58] Giné, E., and Zinn, J. (1990). Bootstrapping general empirical measures. Annals of Probability
18, 851-869.
[59] Glivenko, V. (1933). Sulla determinazione empirica delle leggi di probabilità. Giornale
dell'Istituto Italiano degli Attuari 4, 92-99.
[60] Greenwood, P.E., and Nikulin, M.S. (1996). A Guide to Chi-Squared Testing. John Wiley &
Sons, New York.
[61] Groeneboom, P. (1980). Large Deviations and Bahadur Efficiencies. MC tract 118, Centrum
voor Wiskunde en Informatica, Amsterdam.
[62] Groeneboom, P. (1985). Estimating a monotone density. Proceedings of the Berkeley Conference
in Honor of Jerzy Neyman and Jack Kiefer 2, 539-555. Wadsworth, Monterey, California.
[63] Groeneboom, P. (1988). Brownian Motion with a parabolic drift and Airy functions. Probability
Theory and Related Fields 81, 79-109.
[64] Groeneboom, P., and Lopuhaä, H.P. (1993). Isotonic estimators of monotone densities and
distribution functions: basic facts. Statistica Neerlandica 47, 175-183.
[65] Groeneboom, P., Oosterhoff, J., and Ruymgaart, F. (1979). Large deviation theorems for em-
pirical probability measures. Annals of Probability 7, 553-586.
[66] de Haan, L. (1976). Sample extremes: An elementary introduction. Statistica Neerlandica 30,
161-172.
[67] Hájek, J. (1961). Some extensions of the Wald-Wolfowitz-Noether theorem. Annals of
Mathematical Statistics 32, 506-523.
[68] Hájek, J. (1968). Asymptotic normality of simple linear rank statistics under alternatives. Annals
of Mathematical Statistics 39, 325-346.
[69] Hájek, J. (1970). A characterization of limiting distributions of regular estimates. Zeitschrift
für Wahrscheinlichkeitstheorie und Verwandte Gebiete 14, 323-330.
[70] Hájek, J. (1972). Local asymptotic minimax and admissibility in estimation. Proceedings of
the Sixth Berkeley Symposium on Mathematical Statistics and Probability 1, 175-194.
[71] Hájek, J., and Šidák, Z. (1967). Theory of Rank Tests. Academic Press, New York.
[72] Hall, P. (1992). The Bootstrap and Edgeworth Expansion. Springer Series in Statistics. Springer-
Verlag, New York.
[73] Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J., and Stahel, W.A. (1986). Robust Statistics:
the Approach Based on Influence Functions. Wiley, New York.
[74] Helmers, R. (1982). Edgeworth Expansions for Linear Combinations of Order Statistics. Math-
ematical Centre Tracts 105. Mathematisch Centrum, Amsterdam.
[75] Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. Annals of
Mathematical Statistics 19, 293-325.
[104] von Mises, R. (1947). On the asymptotic distribution of differentiable statistical functions.
Annals of Mathematical Statistics 18, 309-348.
[105] Murphy, S.A., Rossini, A.J., and van der Vaart, A.W. (1997). MLE in the proportional odds
model. Journal of the American Statistical Association 92, 968-976.
[106] Murphy, S.A., and van der Vaart, A.W. (1997). Semiparametric likelihood ratio inference.
Annals of Statistics 25, 1471-1509.
[107] Murphy, S.A., and van der Vaart, A.W. (1996). Semiparametric mixtures in case-control studies.
[108] Murphy, S.A., and van der Vaart, A.W. (1996). Likelihood ratio inference in the errors-in-
variables model. Journal of Multivariate Analysis 59, 81-108.
[109] Noether, G.E. (1955). On a theorem of Pitman. Annals of Mathematical Statistics 26, 64-68.
[110] Nussbaum, M. (1996). Asymptotic equivalence of density estimation and Gaussian white noise.
Annals of Statistics 24, 2399-2430.
[111] Ossiander, M. (1987). A central limit theorem under metric entropy with L2 bracketing. Annals
of Probability 15, 897-919.
[112] Pearson, K. (1900). On the criterion that a given system of deviations from the probable in
the case of a correlated system of variables is such that it can be reasonably supposed to have
arisen from random sampling. Philosophical Magazine, Series 5 50, 157-175. (Reprinted in:
Karl Pearson's Early Statistical Papers, Cambridge University Press, 1956.)
[113] Pfanzagl, J., and Wefelmeyer, W. (1982). Contributions to a General Asymptotic Statistical
Theory. Lecture Notes in Statistics 13. Springer-Verlag, New York.
[114] Pfanzagl, J., and Wefelmeyer, W. (1985). Asymptotic Expansions for General Statistical Mod-
els. Lecture Notes in Statistics 31. Springer-Verlag, New York.
[115] Pflug, G.C. (1983). The limiting loglikelihood process for discontinuous density families.
Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 64, 15-35.
[116] Pollard, D. (1982). A central limit theorem for empirical processes. Journal of the Australian
Mathematical Society A 33, 235-248.
[117] Pollard, D. (1984). Convergence of Stochastic Processes. Springer-Verlag, New York.
[118] Pollard, D. (1985). New ways to prove central limit theorems. Econometric Theory 1, 295-314.
[119] Pollard, D. (1989). A maximal inequality for sums of independent processes under a bracketing
condition.
[120] Pollard, D. (1990). Empirical Processes: Theory and Applications. NSF-CBMS Regional Con-
ference Series in Probability and Statistics 2. Institute of Mathematical Statistics and American
Statistical Association. Hayward, California.
[121] Prakasa Rao, B.L.S. (1983). Nonparametric Functional Estimation. Academic Press, Orlando.
[122] Qin, J., and Lawless, J. (1994). Empirical likelihood and general estimating equations. Annals
of Statistics 22, 300-325.
[123] Rao, C.R. (1973). Linear Statistical Inference and Its Applications. Wiley, New York.
[124] Reed, M., and Simon, B. (1980). Functional Analysis. Academic Press, Orlando.
[125] Reeds, J.A. (1976). On the Definition of von Mises Functionals. Ph.D. dissertation, Department
of Statistics, Harvard University, Cambridge, MA.
[126] Reeds, J.A. (1985). Asymptotic number of roots of Cauchy location likelihood equations.
Annals of Statistics 13, 775-784.
[127] Révész, P. (1968). The Laws of Large Numbers. Academic Press, New York.
[128] Robins, J.M., and Ritov, Y. (1997). Towards a curse of dimensionality appropriate (CODA)
asymptotic theory for semi-parametric models. Statistics in Medicine 16, 285-319.
[129] Robins, J.M., and Rotnitzky, A. (1992). Recovery of information and adjustment for dependent
censoring using surrogate markers. In AIDS Epidemiology-Methodological Issues, 297-331,
eds: N. Jewell, K. Dietz, and V. Farewell. Birkhäuser, Boston.
[130] Roussas, G.G. (1972). Contiguity of Probability Measures. Cambridge University Press,
Cambridge.
[131] Rubin, H., and Vitale, R.A. (1980). Asymptotic distribution of symmetric statistics. Annals of
Statistics 8, 165-170.
[132] Rudin, W. (1973). Functional Analysis. McGraw-Hill, New York.