Mathematical Relations


Chapter 1

Bernstein's theorem on monotone functions

In real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value.
Total monotonicity (sometimes also complete monotonicity) of a function f means that f is continuous on [0, ∞), infinitely differentiable on (0, ∞), and satisfies

(−1)ⁿ (dⁿ/dtⁿ) f(t) ≥ 0

for all nonnegative integers n and for all t > 0. Another convention puts the opposite inequality in the above definition.
The weighted average statement can be characterized thus: there is a non-negative finite Borel measure on [0, ∞), with cumulative distribution function g, such that

f(t) = ∫₀^∞ e^(−tx) dg(x),

the integral being a Riemann–Stieltjes integral.
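As a concrete check (an illustrative numerical sketch, not part of the original text): f(t) = 1/(1 + t) is totally monotone, and indeed f(t) = ∫₀^∞ e^(−tx) e^(−x) dx, so it is the mixture of exponentials with weight e^(−x) dx. The JavaScript below (matching the language of the implementation later in this document) confirms the identity by a crude midpoint rule; the step size and cutoff are arbitrary choices.

```javascript
// Check numerically that 1/(1+t) equals the exponential mixture
// integral_0^inf e^(-t x) e^(-x) dx, as Bernstein's theorem predicts
// for this totally monotone function.
function mixture(t) {
    var sum = 0, dx = 1e-3, cutoff = 50; // crude midpoint rule; ad-hoc choices
    for (var x = dx / 2; x < cutoff; x += dx) {
        sum += Math.exp(-t * x) * Math.exp(-x) * dx;
    }
    return sum;
}
var maxErr = 0;
for (var t = 0; t <= 5; t += 0.5) {
    maxErr = Math.max(maxErr, Math.abs(mixture(t) - 1 / (1 + t)));
}
```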


Nonnegative functions whose derivative is completely monotone are called Bernstein functions. Every Bernstein function has the Lévy–Khintchine representation:

f(t) = a + bt + ∫₀^∞ (1 − e^(−tx)) μ(dx)

where a, b ≥ 0 and μ is a measure on the positive real half-line such that

∫₀^∞ min(1, x) μ(dx) < ∞.

In more abstract language, the theorem characterises Laplace transforms of positive Borel measures on [0, ∞). In this form it is known as the Bernstein–Widder theorem, or Hausdorff–Bernstein–Widder theorem. Felix Hausdorff had earlier characterised completely monotone sequences. These are the sequences occurring in the Hausdorff moment problem.

1.1 References
S. N. Bernstein (1928). "Sur les fonctions absolument monotones". Acta Mathematica. 52: 1–66. doi:10.1007/BF02592679.


D. Widder (1941). The Laplace Transform. Princeton University Press.

René Schilling, Renming Song and Zoran Vondraček (2010). Bernstein functions. De Gruyter.

1.2 External links


MathWorld page on completely monotonic functions
Chapter 2

Monotone cubic interpolation

In the mathematical subfield of numerical analysis, monotone cubic interpolation is a variant of cubic interpolation
that preserves monotonicity of the data set being interpolated.
Monotonicity is preserved by linear interpolation but not guaranteed by cubic interpolation.

2.1 Monotone cubic Hermite interpolation

Example showing non-monotone cubic interpolation (in red) and monotone cubic interpolation (in blue) of a monotone data set.

Monotone interpolation can be accomplished using cubic Hermite spline with the tangents mi modied to ensure the
monotonicity of the resulting Hermite spline.
An algorithm is also available for monotone quintic Hermite interpolation.


2.1.1 Interpolant selection


There are several ways of selecting interpolating tangents for each data point. This section will outline the use of the Fritsch–Carlson method.
Let the data points be (xk, yk) for k = 1, ..., n.

1. Compute the slopes of the secant lines between successive points:

Δk = (yk+1 − yk) / (xk+1 − xk)

for k = 1, ..., n − 1.

2. Initialize the tangents at every data point as the average of the secants,

mk = (Δk−1 + Δk) / 2

for k = 2, ..., n − 1; if Δk−1 and Δk have different signs, set mk = 0. These may be updated in further steps. For the endpoints, use one-sided differences: m1 = Δ1 and mn = Δn−1.

3. For k = 1, ..., n − 1, if Δk = 0 (if two successive yk = yk+1 are equal), then set mk = mk+1 = 0, as the spline connecting these points must be flat to preserve monotonicity. Ignore steps 4 and 5 for those k.
4. Let αk = mk / Δk and βk = mk+1 / Δk. If αk or βk−1 are computed to be less than zero, then the input data points are not strictly monotone, and (xk, yk) is a local extremum. In such cases, piecewise monotone curves can still be generated by choosing mk = 0, although global strict monotonicity is not possible.

5. To prevent overshoot and ensure monotonicity, at least one of the following conditions must be met:

(a) the function

φ(α, β) = α − (2α + β − 3)² / (3(α + β − 2))

must have a value greater than or equal to zero;

(b) α + 2β − 3 ≤ 0; or

(c) 2α + β − 3 ≤ 0.

If monotonicity must be strict, then φ(α, β) must have a value strictly greater than zero.
One simple way to satisfy this constraint is to restrict the vector (αk, βk) to a circle of radius 3. That is, if αk² + βk² > 9, then set mk = τk αk Δk and mk+1 = τk βk Δk, where τk = 3 / √(αk² + βk²).
Alternatively, it is sufficient to restrict αk ≤ 3 and βk ≤ 3. To accomplish this, if αk > 3, then set mk = 3Δk. Similarly for β.
Note that only one pass of the algorithm is required.
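The steps above can be sketched directly in JavaScript (an illustrative helper, not the article's reference implementation; it assumes the data are already sorted by x and that n ≥ 2):

```javascript
// Fritsch–Carlson tangent selection (steps 1–5 above) for sorted data.
function fritschCarlsonTangents(xs, ys) {
    var n = xs.length, k;
    // Step 1: secant slopes delta_k.
    var delta = [];
    for (k = 0; k < n - 1; k++) {
        delta.push((ys[k + 1] - ys[k]) / (xs[k + 1] - xs[k]));
    }
    // Step 2: average neighbouring secants (zero if they differ in sign);
    // one-sided differences at the endpoints.
    var m = [delta[0]];
    for (k = 1; k < n - 1; k++) {
        m.push(delta[k - 1] * delta[k] <= 0 ? 0 : (delta[k - 1] + delta[k]) / 2);
    }
    m.push(delta[n - 2]);
    // Steps 3–5: flatten across zero secants, then clamp (alpha, beta)
    // to the circle of radius 3.
    for (k = 0; k < n - 1; k++) {
        if (delta[k] === 0) { m[k] = 0; m[k + 1] = 0; continue; }
        var alpha = m[k] / delta[k], beta = m[k + 1] / delta[k];
        var dist = alpha * alpha + beta * beta;
        if (dist > 9) {
            var tau = 3 / Math.sqrt(dist);
            m[k] = tau * alpha * delta[k];
            m[k + 1] = tau * beta * delta[k];
        }
    }
    return m;
}
```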

2.1.2 Cubic interpolation


After the preprocessing, evaluation of the interpolated spline is equivalent to cubic Hermite spline, using the data xk, yk, and mk for k = 1, ..., n.
To evaluate at x, find the smallest value larger than x, xupper, and the largest value smaller than x, xlower, among xk such that xlower ≤ x ≤ xupper. Calculate

h = xupper − xlower and t = (x − xlower) / h

then the interpolant is

finterpolated(x) = ylower h00(t) + h mlower h10(t) + yupper h01(t) + h mupper h11(t)

where h00, h10, h01, h11 are the basis functions for the cubic Hermite spline.
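On a single interval this evaluation can be written out explicitly; a minimal sketch with the standard Hermite basis polynomials spelled out (function and variable names are illustrative):

```javascript
// Evaluate one cubic Hermite segment on [xLower, xUpper] with endpoint
// values yLower, yUpper and tangents mLower, mUpper, using the standard
// basis h00, h10, h01, h11.
function hermiteSegment(x, xLower, xUpper, yLower, yUpper, mLower, mUpper) {
    var h = xUpper - xLower, t = (x - xLower) / h;
    var h00 =  2*t*t*t - 3*t*t + 1; // value basis at the lower endpoint
    var h10 =    t*t*t - 2*t*t + t; // tangent basis at the lower endpoint
    var h01 = -2*t*t*t + 3*t*t;     // value basis at the upper endpoint
    var h11 =    t*t*t -   t*t;     // tangent basis at the upper endpoint
    return yLower*h00 + h*mLower*h10 + yUpper*h01 + h*mUpper*h11;
}
```

With both tangents equal to the secant slope, the segment reduces to the straight line through the endpoints, which is a quick sanity check on the basis.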

2.2 Example implementation


The following JavaScript implementation takes a data set and produces a monotone cubic spline interpolant function:
/* Monotone cubic spline interpolation
   Usage example:
       var f = createInterpolant([0, 1, 2, 3, 4], [0, 1, 4, 9, 16]);
       var message = '';
       for (var x = 0; x <= 4; x += 0.5) {
           var xSquared = f(x);
           message += x + ' squared is about ' + xSquared + '\n';
       }
       alert(message);
*/
var createInterpolant = function(xs, ys) {
    var i, length = xs.length;

    // Deal with length issues
    if (length != ys.length) { throw 'Need an equal count of xs and ys.'; }
    if (length === 0) { return function(x) { return 0; }; }
    if (length === 1) {
        // Impl: Precomputing the result prevents problems if ys is mutated later and allows garbage collection of ys
        // Impl: Unary plus properly converts values to numbers
        var result = +ys[0];
        return function(x) { return result; };
    }

    // Rearrange xs and ys so that xs is sorted
    var indexes = [];
    for (i = 0; i < length; i++) { indexes.push(i); }
    indexes.sort(function(a, b) { return xs[a] < xs[b] ? -1 : 1; });
    var oldXs = xs, oldYs = ys;
    // Impl: Creating new arrays also prevents problems if the input arrays are mutated later
    xs = []; ys = [];
    // Impl: Unary plus properly converts values to numbers
    for (i = 0; i < length; i++) { xs.push(+oldXs[indexes[i]]); ys.push(+oldYs[indexes[i]]); }

    // Get consecutive differences and slopes
    var dys = [], dxs = [], ms = [];
    for (i = 0; i < length - 1; i++) {
        var dx = xs[i + 1] - xs[i], dy = ys[i + 1] - ys[i];
        dxs.push(dx); dys.push(dy); ms.push(dy/dx);
    }

    // Get degree-1 coefficients
    var c1s = [ms[0]];
    for (i = 0; i < dxs.length - 1; i++) {
        var m = ms[i], mNext = ms[i + 1];
        if (m*mNext <= 0) {
            c1s.push(0);
        } else {
            var dx_ = dxs[i], dxNext = dxs[i + 1], common = dx_ + dxNext;
            c1s.push(3*common/((common + dxNext)/m + (common + dx_)/mNext));
        }
    }
    c1s.push(ms[ms.length - 1]);

    // Get degree-2 and degree-3 coefficients
    var c2s = [], c3s = [];
    for (i = 0; i < c1s.length - 1; i++) {
        var c1 = c1s[i], m_ = ms[i], invDx = 1/dxs[i], common_ = c1 + c1s[i + 1] - m_ - m_;
        c2s.push((m_ - c1 - common_)*invDx); c3s.push(common_*invDx*invDx);
    }

    // Return interpolant function
    return function(x) {
        // The rightmost point in the dataset should give an exact result
        var i = xs.length - 1;
        if (x == xs[i]) { return ys[i]; }

        // Search for the interval x is in, returning the corresponding y if x is one of the original xs
        var low = 0, mid, high = c3s.length - 1;
        while (low <= high) {
            mid = Math.floor(0.5*(low + high));
            var xHere = xs[mid];
            if (xHere < x) { low = mid + 1; }
            else if (xHere > x) { high = mid - 1; }
            else { return ys[mid]; }
        }
        i = Math.max(0, high);

        // Interpolate
        var diff = x - xs[i], diffSq = diff*diff;
        return ys[i] + c1s[i]*diff + c2s[i]*diffSq + c3s[i]*diff*diffSq;
    };
};

2.3 References
Fritsch, F. N.; Carlson, R. E. (1980). "Monotone Piecewise Cubic Interpolation". SIAM Journal on Numerical Analysis. SIAM. 17 (2): 238–246. doi:10.1137/0717021.

Dougherty, R.L.; Edelman, A.; Hyman, J.M. (April 1989). "Positivity-, monotonicity-, or convexity-preserving cubic and quintic Hermite interpolation". Mathematics of Computation. 52 (186): 471–494. doi:10.2307/2008477.

2.4 External links


GPLv3 licensed C++ implementation: MonotCubicInterpolator.cpp MonotCubicInterpolator.hpp
Chapter 3

Monotonic function

"Monotonicity" redirects here. For information on monotonicity as it pertains to voting systems, see monotonicity criterion.
"Monotonic" redirects here. For other uses, see Monotone (disambiguation).
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.

Figure 1. A monotonically increasing function. It is strictly increasing on the left and right while just monotonic (unchanging) in the middle.

Figure 2. A monotonically decreasing function

3.1 Monotonicity in calculus and analysis


In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if and only if it is either entirely non-increasing, or entirely non-decreasing. That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase, it simply must not decrease.
A function is called monotonically increasing (also increasing or non-decreasing) if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing) if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order (see Figure 2).
If the order ≤ in the definition of monotonicity is replaced by the strict order <, then one obtains a stronger requirement. A function with this property is called strictly increasing. Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing. Functions that are strictly increasing or decreasing are one-to-one (because for x not equal to y, either x < y or x > y and so, by monotonicity, either f(x) < f(y) or f(x) > f(y), thus f(x) is not equal to f(y)).
When functions between discrete sets are considered in combinatorics, it is not always obvious that "increasing" and "decreasing" are taken to include the possibility of repeating the same value at successive arguments, so one finds the terms weakly increasing and weakly decreasing to stress this possibility.

Figure 3. A function that is not monotonic

The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the function of Figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
The term monotonic transformation can also possibly cause some confusion because it refers to a transformation by a strictly increasing function. Notably, this is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences).[1]
A function f(x) is said to be absolutely monotonic over an interval (a, b) if the derivatives of all orders of f are nonnegative or all nonpositive at all points on the interval.

3.1.1 Some basic applications and results

The following properties are true for a monotonic function f : R → R:

f has limits from the right and from the left at every point of its domain;

f has a limit at positive or negative infinity (±∞) of either a real number, ∞, or −∞;

f can only have jump discontinuities;

f can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval (a, b).

These properties are the reason why monotonic functions are useful in technical work in analysis. Two facts about
these functions are:

if f is a monotonic function defined on an interval I, then f is differentiable almost everywhere on I, i.e. the set of numbers x in I such that f is not differentiable at x has Lebesgue measure zero. In addition, this result cannot be improved to countable: see Cantor function.
if f is a monotonic function defined on an interval [a, b], then f is Riemann integrable.

An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative distribution function FX(x) = Prob(X ≤ x) is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.
When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f.
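Since a strictly increasing continuous function is injective, its inverse on the range can be approximated by bisection; a minimal sketch (names are illustrative):

```javascript
// Invert a strictly increasing continuous function f on [lo, hi] by
// bisection: returns x with f(x) approximately equal to y, assuming
// f(lo) <= y <= f(hi). Strict monotonicity guarantees uniqueness.
function invertIncreasing(f, lo, hi, y) {
    for (var i = 0; i < 100; i++) { // 100 halvings: ample precision for doubles
        var mid = 0.5 * (lo + hi);
        if (f(mid) < y) { lo = mid; } else { hi = mid; }
    }
    return 0.5 * (lo + hi);
}
```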

3.2 Monotonicity in topology


A map f : X → Y is said to be monotone if each of its fibers is connected, i.e. for each element y in Y the (possibly empty) set f⁻¹(y) is connected.

3.3 Monotonicity in functional analysis


In functional analysis on a topological vector space X, a (possibly non-linear) operator T : X → X∗ is said to be a monotone operator if

(Tu − Tv, u − v) ≥ 0 for all u, v ∈ X.

Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset G of X × X∗ is said to be a monotone set if for every pair [u1, w1] and [u2, w2] in G,

(w1 − w2, u1 − u2) ≥ 0.

G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operator G(T) is a monotone set. A monotone operator is said to be maximal monotone if its graph is a maximal monotone set.
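In finite dimensions, a linear map T(u) = Au with A positive semidefinite is a simple monotone operator, since (Tu − Tv, u − v) = (A(u − v), u − v) ≥ 0. A quick sketch checking the inequality for one such matrix (an illustrative example, not from the original text):

```javascript
// Check (Tu - Tv, u - v) >= 0 for T(u) = A u with the positive definite
// matrix A = [[2, 1], [1, 2]] (eigenvalues 1 and 3), over random pairs.
var A = [[2, 1], [1, 2]];
function T(u) { return [A[0][0]*u[0] + A[0][1]*u[1], A[1][0]*u[0] + A[1][1]*u[1]]; }
function dot(a, b) { return a[0]*b[0] + a[1]*b[1]; }
var monotoneOK = true;
for (var trial = 0; trial < 1000; trial++) {
    var u = [Math.random()*4 - 2, Math.random()*4 - 2];
    var v = [Math.random()*4 - 2, Math.random()*4 - 2];
    var Tu = T(u), Tv = T(v);
    var diff = [u[0] - v[0], u[1] - v[1]];
    if (dot([Tu[0] - Tv[0], Tu[1] - Tv[1]], diff) < 0) { monotoneOK = false; }
}
```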

3.4 Monotonicity in order theory


Order theory deals with arbitrary partially ordered sets and preordered sets in addition to real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations < and > are of little use in many non-total orders and hence no additional terminology is introduced for them.
A monotone function is also called isotone, or order-preserving. The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function f satisfies the property

x ≤ y implies f(x) ≥ f(y),

for all x and y in its domain. The composite of two monotone mappings is also monotone.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain
of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which x ≤ y if and only if f(x) ≤ f(y)) and order isomorphisms (surjective order embeddings).

3.5 Monotonicity in the context of search algorithms


In the context of search algorithms monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic h(n) is monotonic if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:

h(n) ≤ c(n, a, n′) + h(n′).

This is a form of triangle inequality, with n, n′, and the goal Gn closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. In some heuristic algorithms, such as A*, the algorithm can be considered optimal if it is monotonic.[2]
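Consistency can be verified mechanically by checking h(n) ≤ c(n, a, n′) + h(n′) over every edge; a sketch on a small, made-up graph (node names, costs, and heuristic values are invented for illustration):

```javascript
// Verify h(n) <= c(n, n') + h(n') over every edge of a toy undirected
// graph with goal G. Edges are [from, to, cost] triples.
var edges = [['A', 'B', 2], ['B', 'C', 3], ['A', 'C', 4], ['C', 'G', 1]];
var h = { A: 5, B: 4, C: 1, G: 0 }; // heuristic estimates, chosen consistent
var consistent = edges.every(function(e) {
    // Check both directions, since the graph is undirected here.
    return h[e[0]] <= e[2] + h[e[1]] && h[e[1]] <= e[2] + h[e[0]];
});
```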

3.6 Boolean functions


In Boolean algebra, a monotonic function is one such that for all ai and bi in {0, 1}, if a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn (i.e. the Cartesian product {0, 1}ⁿ is ordered coordinatewise), then f(a1, ..., an) ≤ f(b1, ..., bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that a Boolean function is monotonic when in its Hasse diagram (dual of its Venn diagram), there is no 1 connected to a higher 0.
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance "at least two of a, b, c hold" is a monotonic function of a, b, c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).
The number of such functions on n variables is known as the Dedekind number of n.
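The "at least two of a, b, c" example can be checked for monotonicity by brute force over all coordinatewise-comparable pairs in {0, 1}³; a minimal sketch:

```javascript
// Brute-force monotonicity check over {0,1}^3 for "at least two of
// a, b, c", i.e. (a and b) or (a and c) or (b and c).
function atLeastTwo(a, b, c) { return (a && b) || (a && c) || (b && c) ? 1 : 0; }
function bits(i) { return [(i >> 2) & 1, (i >> 1) & 1, i & 1]; }
var monotone = true;
for (var i = 0; i < 8; i++) {
    for (var j = 0; j < 8; j++) {
        var x = bits(i), y = bits(j);
        // Only compare inputs that are ordered coordinatewise: x <= y.
        if (x[0] <= y[0] && x[1] <= y[1] && x[2] <= y[2]) {
            if (atLeastTwo(x[0], x[1], x[2]) > atLeastTwo(y[0], y[1], y[2])) { monotone = false; }
        }
    }
}
```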

3.7 See also


Monotone cubic interpolation

Pseudo-monotone operator

Spearman's rank correlation coefficient - a measure of how monotonic a set of data is

Total monotonicity

3.8 Notes
[1] See the section on Cardinal Versus Ordinal Utility in Simon & Blume (1994).

[2] "Conditions for optimality: Admissibility and consistency" pp. 94–95 (Russell & Norvig 2010).

3.9 Bibliography
Bartle, Robert G. (1976). The elements of real analysis (second ed.).

Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. ISBN 0-7167-0442-0.
Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manchester University Press. ISBN 0-7190-3341-1.

Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.

Riesz, Frigyes & Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN 978-0-486-66289-3.

Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.

Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (first ed.). ISBN 978-0-393-95733-4. (Definition 9.31)

3.10 External links


Hazewinkel, Michiel, ed. (2001) [1994], "Monotone function", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

"Convergence of a Monotonic Sequence" by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram Demonstrations Project.

Weisstein, Eric W. "Monotonic Function". MathWorld.


Chapter 4

Pseudo-monotone operator

In mathematics, a pseudo-monotone operator from a reflexive Banach space into its continuous dual space is one that is, in some sense, almost as well-behaved as a monotone operator. Many problems in the calculus of variations can be expressed using operators that are pseudo-monotone, and pseudo-monotonicity in turn implies the existence of solutions to these problems.

4.1 Definition

Let (X, ‖·‖) be a reflexive Banach space. A map T : X → X∗ from X into its continuous dual space X∗ is said to be pseudo-monotone if T is a bounded operator (not necessarily continuous) and if whenever

uj ⇀ u in X as j → ∞

(i.e. uj converges weakly to u) and

lim sup_{j→∞} ⟨T(uj), uj − u⟩ ≤ 0,

it follows that, for all v ∈ X,

lim inf_{j→∞} ⟨T(uj), uj − v⟩ ≥ ⟨T(u), u − v⟩.

4.2 Properties of pseudo-monotone operators


Using a very similar proof to that of the Browder–Minty theorem, one can show the following:
Let (X, ‖·‖) be a real, reflexive Banach space and suppose that T : X → X∗ is continuous, coercive and pseudo-monotone. Then, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g.

4.3 References
Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 367. ISBN 0-387-00444-0. (Definition 9.56, Theorem 9.57)

Chapter 5

Spearman's rank correlation coefficient

A Spearman correlation of 1 results when the two variables being compared are monotonically related, even if their relationship is
not linear. This means that all data-points with greater x-values than that of a given data-point will have greater y-values as well.
In contrast, this does not give a perfect Pearson correlation.

In statistics, Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as rs, is a nonparametric measure of rank correlation (statistical dependence between the ranking of two variables). It assesses how well the relationship between two variables can be described using a monotonic function.

When the data are roughly elliptically distributed and there are no prominent outliers, the Spearman correlation and Pearson correlation give similar values.


The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.
Spearman's coefficient is appropriate for both continuous and discrete ordinal variables.[1][2] Both Spearman's ρ and Kendall's τ can be formulated as special cases of a more general correlation coefficient.

5.1 Denition and calculation


The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the ranked variables.[3] For a sample of size n, the n raw scores Xi, Yi are converted to ranks rgXi, rgYi, and rs is computed from:

The Spearman correlation is less sensitive than the Pearson correlation to strong outliers that are in the tails of both samples. That is because Spearman's rho limits the outlier to the value of its rank.

rs = ρ(rgX, rgY) = cov(rgX, rgY) / (σ(rgX) σ(rgY))

where

ρ denotes the usual Pearson correlation coefficient, but applied to the rank variables,
cov(rgX, rgY) is the covariance of the rank variables,
σ(rgX) and σ(rgY) are the standard deviations of the rank variables.

Only if all n ranks are distinct integers can it be computed using the popular formula

rs = 1 − 6 Σ di² / (n(n² − 1))

where

di = rg(Xi) − rg(Yi) is the difference between the two ranks of each observation,
n is the number of observations.

Identical values are usually[4] each assigned fractional ranks equal to the average of their positions in the ascending
order of the values, which is equivalent to averaging over all possible permutations.
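The definition translates directly into code: assign fractional (averaged) ranks as described above, then apply the Pearson formula to the ranks. A sketch in JavaScript (helper names are illustrative; not an optimized implementation):

```javascript
// Spearman's r_s: the Pearson correlation of the rank variables,
// with ties given fractional (averaged) ranks.
function ranks(values) {
    var idx = values.map(function(_, i) { return i; })
        .sort(function(a, b) { return values[a] - values[b]; });
    var r = new Array(values.length);
    for (var i = 0; i < idx.length; ) {
        var j = i;
        while (j < idx.length && values[idx[j]] === values[idx[i]]) { j++; }
        var avg = (i + j + 1) / 2; // average of 1-based positions i+1 .. j
        for (var k = i; k < j; k++) { r[idx[k]] = avg; }
        i = j;
    }
    return r;
}
function pearson(xs, ys) {
    var n = xs.length;
    var mx = xs.reduce(function(a, b) { return a + b; }, 0) / n;
    var my = ys.reduce(function(a, b) { return a + b; }, 0) / n;
    var sxy = 0, sxx = 0, syy = 0;
    for (var i = 0; i < n; i++) {
        sxy += (xs[i] - mx) * (ys[i] - my);
        sxx += (xs[i] - mx) * (xs[i] - mx);
        syy += (ys[i] - my) * (ys[i] - my);
    }
    return sxy / Math.sqrt(sxx * syy);
}
function spearman(xs, ys) { return pearson(ranks(xs), ranks(ys)); }
```

Any monotone increasing relationship, linear or not, gives rs = 1 under this definition.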

If ties are present in the data set, this equation yields incorrect results: only if in both variables all ranks are distinct, then σ(rgX) σ(rgY) = Var(rgX) = Var(rgY) = n(n² − 1)/6 (cf. tetrahedral number Tn−1). The first equation (normalizing by the standard deviation) may be used even when ranks are normalized to [0, 1] ("relative ranks") because it is insensitive both to translation and linear scaling.
This method should also not be used in cases where the data set is truncated; that is, when the Spearman correlation coefficient is desired for the top X records (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation coefficient formula given above.
The standard error of the coefficient (σ) was determined by Pearson in 1907 and Gosset in 1920. It is

σ(rs) = 0.6325 / √(n − 1)

5.2 Related quantities


Main article: Correlation and dependence

There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, which is a similar correlation method to Spearman's rank, that measures the linear relationships between the raw numbers rather than between their ranks.
An alternative name for the Spearman rank correlation is the "grade correlation";[5] in this, the "rank" of an observation is replaced by the "grade". In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the "grade" of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term "grade correlation" is still in use.[6]

5.3 Interpretation
The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for Y to either increase or decrease when X increases. The Spearman correlation increases in magnitude as X and Y become closer to being perfect monotone functions of each other. When X and Y are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfect monotone increasing relationship implies that for any two pairs of data values Xi, Yi and Xj, Yj, that Xi − Xj and Yi − Yj always have the same sign. A perfect monotone decreasing relationship implies that these differences always have opposite signs.
The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings: First, a perfect Spearman correlation results when X and Y are related by any monotonic function. Contrast this with the Pearson correlation, which only gives a perfect value when X and Y are related by a linear function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of X and Y.

5.4 Example
In this example, the raw data in the table below is used to calculate the correlation between the IQ of a person and the number of hours spent in front of TV per week.
Firstly, evaluate di². To do so, use the following steps, reflected in the table below.

1. Sort the data by the first column (Xi). Create a new column xi and assign it the ranked values 1, 2, 3, ..., n.

2. Next, sort the data by the second column (Yi). Create a fourth column yi and similarly assign it the ranked values 1, 2, 3, ..., n.

3. Create a fifth column di to hold the differences between the two rank columns (xi and yi).

4. Create one final column di² to hold the value of column di squared.


With di² found, add them to find Σ di² = 194. The value of n is 10. These values can now be substituted back into the equation

ρ = 1 − 6 Σ di² / (n(n² − 1))

to give

ρ = 1 − (6 × 194) / (10(10² − 1))

which evaluates to ρ = −29/165 ≈ −0.175757575... with a P-value = 0.627188 (using the t-distribution).
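The arithmetic can be checked directly (using Σ di² = 194 and n = 10 from above; note the resulting value is negative):

```javascript
// Verify rho = 1 - 6*194/(10*(10^2 - 1)) = -29/165 for the worked example.
var rho = 1 - (6 * 194) / (10 * (10 * 10 - 1));
```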

Chart of the data presented. It can be seen that there might be a negative correlation, but that the relationship does not appear definitive.

This low value shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given ranks, as described above).

5.5 Determining signicance


One approach to test whether an observed value of ρ is significantly different from zero (r will always maintain −1 ≤ r ≤ 1) is to calculate the probability that it would be greater than or equal to the observed r, given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values there are in the sample, and the way they are treated in computing the rank correlation.

Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ρ can be carried out using the Fisher transformation:

F(r) = (1/2) ln((1 + r) / (1 − r)) = artanh(r).

If F(r) is the Fisher transformation of r, the sample Spearman rank correlation coefficient, and n is the sample size, then

z = √((n − 3) / 1.06) · F(r)

is a z-score for r which approximately follows a standard normal distribution under the null hypothesis of statistical independence (ρ = 0).[7][8]
One can also test for significance using

t = r √((n − 2) / (1 − r²))

which is distributed approximately as Student's t-distribution with n − 2 degrees of freedom under the null hypothesis.[9] A justification for this result relies on a permutation argument.[10]
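Both test statistics are straightforward to compute; a minimal sketch (function names are illustrative):

```javascript
// Fisher z-score and Student t statistic for a sample Spearman
// correlation r with sample size n, per the formulas above.
function fisherZ(r, n) { return Math.sqrt((n - 3) / 1.06) * Math.atanh(r); }
function tStat(r, n) { return r * Math.sqrt((n - 2) / (1 - r * r)); }
```

Under the null hypothesis of independence, z is compared against the standard normal distribution and t against Student's t with n − 2 degrees of freedom.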
A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page[11] and is usually referred to as Page's trend test for ordered alternatives.

5.6 Correspondence analysis based on Spearmans rho


Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized.
There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's rho or Kendall's tau.[12]

5.7 See also


Kendall tau rank correlation coefficient

Chebyshev's sum inequality, rearrangement inequality (These two articles may shed light on the mathematical properties of Spearman's ρ.)

Distance correlation

5.8 References
[1] "Scale types".

[2] Lehman, Ann (2005). Jmp For Basic Univariate And Multivariate Statistics: A Step-by-step Guide. Cary, NC: SAS Press. p. 123. ISBN 1-59047-576-3.

[3] Myers, Jerome L.; Well, Arnold D. (2003). Research Design and Statistical Analysis (2nd ed.). Lawrence Erlbaum. p. 508. ISBN 0-8058-4037-0.

[4] Dodge, Yadolah (2010). The Concise Encyclopedia of Statistics. Springer-Verlag New York. p. 502. ISBN 978-0-387-31742-7.

[5] Yule, G. U.; Kendall, M. G. (1968) [1950]. An Introduction to the Theory of Statistics (14th ed.). Charles Griffin & Co. p. 268.

[6] Piantadosi, J.; Howlett, P.; Boland, J. (2007). "Matching the grade correlation coefficient using a copula with maximum disorder". Journal of Industrial and Management Optimization. 3 (2): 305–312.

[7] Choi, S. C. (1977). "Tests of Equality of Dependent Correlation Coefficients". Biometrika. 64 (3): 645–647. doi:10.1093/biomet/64.3.645.

[8] Fieller, E. C.; Hartley, H. O.; Pearson, E. S. (1957). "Tests for rank correlation coefficients. I". Biometrika. 44: 470–481. doi:10.1093/biomet/44.3-4.470.

[9] Press; Vettering; Teukolsky; Flannery (1992). Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). p. 640.

[10] Kendall, M. G.; Stuart, A. (1973). The Advanced Theory of Statistics, Volume 2: Inference and Relationship. Griffin. ISBN 0-85264-215-6. (Sections 31.19, 31.21)

[11] Page, E. B. (1963). "Ordered hypotheses for multiple treatments: A significance test for linear ranks". Journal of the American Statistical Association. 58 (301): 216–230. doi:10.2307/2282965.

[12] Kowalczyk, T.; Pleszczyńska, E.; Ruland, F., eds. (2004). Grade Models and Methods for Data Analysis with Applications for the Analysis of Data Populations. Studies in Fuzziness and Soft Computing. 151. Berlin Heidelberg New York: Springer Verlag. ISBN 978-3-540-21120-4.

5.9 Further reading


Corder, G.W. & Foreman, D.I. (2014). Nonparametric Statistics: A Step-by-Step Approach. Wiley. ISBN
978-1118840313.
Daniel, Wayne W. (1990). "Spearman rank correlation coefficient". Applied Nonparametric Statistics (2nd
ed.). Boston: PWS-Kent. pp. 358–365. ISBN 0-534-91976-6.
Spearman C (1904). "The proof and measurement of association between two things". American Journal of
Psychology. 15: 72–101. doi:10.2307/1412159.
Bonett DG, Wright, TA (2000). "Sample size requirements for Pearson, Kendall, and Spearman correlations".
Psychometrika. 65: 23–28. doi:10.1007/bf02294183.
Kendall MG (1970). Rank correlation methods (4th ed.). London: Griffin. ISBN 978-0-852-6419-96. OCLC
136868.
Hollander M, Wolfe DA (1973). Nonparametric statistical methods. New York: Wiley. ISBN 978-0-471-
40635-8. OCLC 520735.
Caruso JC, Cliff N (1997). "Empirical size, coverage, and power of confidence intervals for Spearman's Rho".
Educational and Psychological Measurement. 57: 637–654. doi:10.1177/0013164497057004009.

5.10 External links


Understanding Correlation vs. Copulas in Excel by Eric Torkia, Technology Partnerz 2011
Table of critical values of ρ for significance with small samples
Chapter 3 part 1 shows the formula to be used when there are ties
An example of how to calculate Spearman's Rho along with basic R code.
Spearman's Rank Correlation Coefficient Excel Guide: sample data and formulae for Excel, developed by
the Royal Geographical Society.
Spearman's rank correlation: Simple notes for students with an example of usage by biologists and a spreadsheet
for Microsoft Excel for calculating it (a part of materials for a Research Methods in Biology course).

5.11 Text and image sources, contributors, and licenses


5.11.1 Text
Bernstein's theorem on monotone functions Source: https://en.wikipedia.org/wiki/Bernstein%27s_theorem_on_monotone_functions?
oldid=587727813 Contributors: Michael Hardy, Charles Matthews, Giftlite, MarSch, R.e.b., SmackBot, RDBury, G716, Vanish2,
LordAnubisBOT, Yecril, VolkovBot, Melcombe, Addbot, DOI bot, JWroblewski, FrescoBot, Biker333, EmausBot, Brad7777 and
Anonymous: 4
Monotone cubic interpolation Source: https://en.wikipedia.org/wiki/Monotone_cubic_interpolation?oldid=769464647 Contributors:
Damian Yerrick, Olathe, Chadernook, Bender235, Rjwilmsi, Miquelfire, InverseHypercube, Nbarth, Berland, Lambiam, Cydebot,
BSVulturis, Cesarth~enwiki, Glorfindel83, Apparition11, Addbot, DOI bot, AndrewGarber, Rubinbot, Materialscientist, Ivar1337,
Luca Ghio, Induction314, Monkbot, Narky Blert, Tuello and Anonymous: 13
Monotonic function Source: https://en.wikipedia.org/wiki/Monotonic_function?oldid=785811989 Contributors: AxelBoldt, Andre
Engels, Miguel~enwiki, Edemaine, Patrick, Michael Hardy, Ixfd64, TakuyaMurata, Cherkash, Charles Matthews, Dino, Fibonacci,
Henrygb, Epheterson, Tea2min, Tosha, Giftlite, Markus Krötzsch, BenFrantzDale, MSGJ, Macrakis, Luqui, Smyth, Paul August,
Bender235, Rgdboer, Scott Ritchie, Haham hanuka, AppleJuggler, Jumbuck, Jérôme, Caesura, SteinbDJ, Totalcynic, Oleg Alexandrov,
MFH, Smmurphy, Qwertyus, FreplySpang, Sinsearach, Intgr, Kri, Chobot, RussBot, Trovatore, SmackBot, KocjoBot~enwiki, Eskimbot,
Mcld, Mhss, Roberto.zanasi, Craig t moore, Berland, Addshore, TrogdorPolitiks, SashatoBot, DafadGoch, Supertigerman, Jackzhp,
CBM, Myasuda, Simeon, Gregbard, Dugwiki, Escarbot, JAnDbot, Jdrumgoole, Albmont, Sullivan.t.j, David Eppstein, Hans Dunkelberg,
Policron, Alsosaid1987, Yecril, VolkovBot, Pleasantville, Saxobob, Gavin.collins, SieBot, Gerakibot, Dawn Bard, AlanUS, ClueBot,
Justin W Smith, Watchduck, Bender2k14, Kausikghatak, Qwfp, Marc van Leeuwen, Addbot, Fgnievinski, Topology Expert, Econotechie,
Tide rolls, PV=nRT, Luckas-bot, TaBOT-zerem, Calle, AnomieBOT, Citation bot, Papppfae, Isheden, Hisoka-san, ANDROBETA,
RandomDSdevel, Unbitwise, EmausBot, 478jjjz, TuHan-Bot, Dcirovic, ZroBot, Informavoreglutton, Jonpatterns, Zap Rowsdower,
ResearchRave, ClueBot NG, Helpful Pixie Bot, Costeaeb, The1337gamer, Miguelcruzf, BattyBot, ChrisGualtieri, Monkbot, Mgkrupa,
Trackteur, Ksabalo, GeoffreyT2000, Some1Redirects4You, Maformatiker, Tomasvasconcelos and Anonymous: 80
Pseudo-monotone operator Source: https://en.wikipedia.org/wiki/Pseudo-monotone_operator?oldid=723782477 Contributors: Oleg
Alexandrov, Phuzion, Sullivan.t.j, Dcirovic, BattyBot and Anonymous: 1
Spearman's rank correlation coefficient Source: https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient?oldid=801404677
Contributors: The Anome, Gareth Owen, Edward, Michael Hardy, Oliver Pereira, Menchi, Tomi, Den fjättrade ankan~enwiki, Jtzg,
Cherkash, Charles Matthews, Tpbradbury, Mrand, Jmartinezot, Robbot, Seglea, Giftlite, Chinasaur, Andris, Beland, MarkSweep, Karol
Langner, Johnux, Thorwald, Mike Rosoft, Rich Farmbrough, Bender235, Sietse Snel, Jpgordon, Arcadian, Danski14, Alansohn, Hoary,
Cgmusselman, Yuckfoo, Drummond, Oleg Alexandrov, Male1979, Rjwilmsi, Joe Decker, MZMcBride, Spudbeach, Ms2ger, Yonidebest,
Fmccown, Arthur Rubin, Cedar101, Reyk, Darrel francis, KnightRider~enwiki, SmackBot, Imz, Melchoir, Blue520, Jtneill, Mcld,
Gilliam, Hitman012, Ladislav Mecir, Avb, Richard001, G716, Qmwne235, Lambiam, Archimerged, Tim bates, CyrilB, Rofti, Alex
Selby, HisSpaceResearch, Megajuice, Chris53516, Hammer Raccoon, CBM, Nunquam Dormio, AndrewHowse, Cydebot, Iss246,
Talgalili, Thijs!bot, Hunt.topher, Headbomb, AntiVandalBot, WinBot, Jvstone, Rayjapan, JAnDbot, Olaf, Wser, BrotherE, Michaltomek,
Username550, MartinBot, Aetheogamous, Manticore, Reedy Bot, Paulmkgordon, Chuckles Richardson, WWGB, Xenonice,
TreasuryTag, DrMicro, Philip Trueman, Selain03, George bennett, Gdupont, EJF, SieBot, Winchelsea, Skedra, Jholdn, Faradayplank,
Melcombe, ClueBot, PipepBot, The Thing That Should Not Be, Rjd0060, AirdishStraus, Puchiko, Baladorr, Shabbychef, Skbkekas,
SchreiberBike, Qwfp, Gjnaasaa, Rror, ErkinBatu, Mudfud85, Northdevonian, Tayste, Addbot, Jncraton, MrOllie, Luckas-bot, Yobot,
AnomieBOT, Citation bot, White whirlwind, iedas, Nicolas Perrault III, Hobsonlane, Gstatistics, Dger, Chulp, Ichbin-dcw,
Kiefer.Wolfowitz, 10jerome10, Savinashr, Vrenator, Larry.europe, Tominic0307, J36miles, Rotcaeroib, Wijobs, Dcirovic, Aseemasters,
Kash 959, Chire, Quondum, Donner60, ClueBot NG, Mathstat, Piast93, Vldscore, Frietjes, Habil zare, Timutre, Helpful Pixie Bot,
Marcocapelle, Joydeep, Xmu alan, Jerry Hintze, Dexbot, 7804j, Epicgenius, Blythwood, FooCow, Leegrc, Jicawi, Fred628496,
Yourvaskar, I am grungy, HelpUsStopSpam, Begab~huwiki, Michelangelo1992 and Anonymous: 211

5.11.2 Images
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Orig-
inal artist: ?
File:Fisher_iris_versicolor_sepalwidth.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/40/Fisher_iris_versicolor_sepalwidth.
svg License: CC BY-SA 3.0 Contributors: en:Image:Fisher iris versicolor sepalwidth.png Original artist: en:User:Qwfp (original); Pbroks13
(talk) (redraw)
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:MonotCubInt.png Source: https://upload.wikimedia.org/wikipedia/en/f/fe/MonotCubInt.png License: PD Contributors: ? Original
artist: ?
File:Monotonicity_example1.png Source: https://upload.wikimedia.org/wikipedia/commons/3/32/Monotonicity_example1.png License:
Public domain Contributors: Own work with Inkscape Original artist: Oleg Alexandrov
File:Monotonicity_example2.png Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Monotonicity_example2.png License:
Public domain Contributors: self-made with en:Inkscape Original artist: Oleg Alexandrov
File:Monotonicity_example3.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8c/Monotonicity_example3.png License:
Public domain Contributors: self-made with en:Inkscape Original artist: Oleg Alexandrov
File:People_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: Open-
Clipart Original artist: OpenClipart
File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors:
? Original artist: ?

File:Spearman%27s_Rank_chart.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8f/Spearman%27s_Rank_chart.png
License: CC BY-SA 4.0 Contributors: Own work Original artist: Blythwood
File:Spearman_fig1.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4e/Spearman_fig1.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: Skbkekas
File:Spearman_fig2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/Spearman_fig2.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: Skbkekas
File:Spearman_fig3.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Spearman_fig3.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: Skbkekas
File:Spearman_fig4.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a6/Spearman_fig4.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: Skbkekas
File:Spearman_fig5.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Spearman_fig5.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: Skbkekas
File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_
with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg
from the Tango project. Original artist: Benjamin D. Esham (bdesham)
File:Wikiversity-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Wikiversity-logo.svg License: CC BY-SA
3.0 Contributors: Snorky (optimized and cleaned up by verdy_p) Original artist: Snorky (optimized and cleaned up by verdy_p)

5.11.3 Content license


Creative Commons Attribution-Share Alike 3.0
