Texts in Computer Science
Series Editors
David Gries, Department of Computer Science, Cornell University, Ithaca, NY,
USA
Orit Hazzan, Faculty of Education in Technology and Science, Technion—Israel Institute of Technology, Haifa, Israel
Titles in this series now included in the Thomson Reuters Book Citation Index!
‘Texts in Computer Science’ (TCS) delivers high-quality instructional content for
undergraduates and graduates in all areas of computing and information science,
with a strong emphasis on core foundational and theoretical material but inclusive
of some prominent applications-related content. TCS books should be reasonably
self-contained and aim to provide students with modern and clear accounts of topics
ranging across the computing curriculum. As a result, the books are ideal for
semester courses or for individual self-study in cases where people need to expand
their knowledge. All texts are authored by established experts in their fields,
reviewed internally and by the series editors, and provide numerous examples,
problems, and other pedagogical tools; many contain fully worked solutions.
The TCS series comprises high-quality, self-contained books that have broad and comprehensive coverage, are generally in hardback format, and
sometimes contain color. For undergraduate textbooks that are likely to be more
brief and modular in their approach, require only black and white, and are under
275 pages, Springer offers the flexibly designed Undergraduate Topics in Computer
Science series, to which we refer potential authors.
Tomas Hrycej • Bernhard Bermeitinger • Matthias Cetto • Siegfried Handschuh

Mathematical Foundations of Data Science
Tomas Hrycej
Institute of Computer Science, University of St. Gallen, St. Gallen, Switzerland

Bernhard Bermeitinger
Institute of Computer Science, University of St. Gallen, St. Gallen, Switzerland
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Data Science is a rapidly expanding field with increasing relevance, and there are correspondingly numerous textbooks about the topic. They usually focus on the various Data Science methods. In a growing field, however, there is a danger that the number of methods grows at such a pace that it becomes difficult to compare their specific merits and application focus.

Faced with this avalanche of methods, the user is left alone with the judgment about method selection. He or she can be helped only if some basic principles, such as fitting models to data, generalization, and the abilities of numerical algorithms, are thoroughly explained, independently of the particular methodical approach. Unfortunately, these principles are hardly covered in the textbook variety. This book aims to close this gap.
Besides students as the intended audience, we also see a benefit for researchers in the field who want to gain a proper understanding of the mathematical foundations, rather than sole computing experience, as well as for practitioners, who will get mathematical exposure directed at making the causalities clear.
Comprehension Checks
In all chapters, important theses are summarized in their own paragraphs. All
chapters have comprehension checks for the students.
Acknowledgments
During the writing of this book, we have greatly benefited from students taking our course and providing feedback on earlier drafts of the book. We would like to explicitly mention the help of Jonas Herrmann for his thorough reading of the manuscript. He gave us many helpful hints for making the explanations comprehensible, in particular from a student's viewpoint. Further, we want to thank Wayne Wheeler and Sriram Srinivas from Springer for their support and their patience with us in finishing the book.
Finally, we would like to thank our families for their love and support.
Part II Applications

6 Specific Problems of Natural Language Processing
  6.1 Word Embeddings
  6.2 Semantic Similarity
  6.3 Recurrent Versus Sequence Processing Approaches
  6.4 Recurrent Neural Networks
  6.5 Attention Mechanism
  6.6 Autocoding and Its Modification
  6.7 Transformer Encoder
    6.7.1 Self-attention
    6.7.2 Position-Wise Feedforward Networks
    6.7.3 Residual Connection and Layer Normalization
  References
7 Specific Problems of Computer Vision
  7.1 Sequence of Convolutional Operators
    7.1.1 Convolutional Layer
    7.1.2 Pooling Layers
Index
Acronyms

AI Artificial Intelligence
ARMA Autoregressive Moving Average
BERT Bidirectional Encoder Representations from Transformers
CNN Convolutional Neural Network
CV Computer Vision
DL Deep Learning
DS Data Science
FIR Finite Impulse Response
GRU Gated Recurrent Unit
IIR Infinite Impulse Response
ILSVRC ImageNet Large Scale Visual Recognition Challenge
LSTM Long Short-Term Memory Neural Network
MIMO Multiple Input/Multiple Output
MSE Mean Square Error
NLP Natural Language Processing
OOV Out-of-Vocabulary
PCA Principal Component Analysis
ReLU Rectified Linear Unit
ResNet Residual Neural Network
RNN Recurrent Neural Network
SGD Stochastic Gradient Descent
SISO Single Input/Single Output
SVD Singular Value Decomposition
SVM Support Vector Machine
1 Data Science and Its Tasks
As the name Data Science (DS) suggests, it is a scientific field concerned with data. However, this definition would encompass the whole of information technology, which is not the intention behind delimiting Data Science as a field. Rather, the focus is on extracting useful information from data.
In recent decades, the volume of processed and digitally stored data has reached huge dimensions. This has led to a search for innovative methods capable of coping with large data volumes. A naturally analogous setting is that of intelligent information processing by higher living organisms. They are supplied with a continuous stream of voluminous sensory data (delivered by senses such as vision, hearing, or touch) and use this stream for immediate or delayed action favorable to the organism. This fact makes the field of Artificial Intelligence (AI) a natural source of potential ideas for Data Science. These technologies complement the findings and methods developed by the classical disciplines concerned with data analysis, the most prominent of which is statistics.
The research subject of Artificial Intelligence (AI) is all aspects of sensing, recognition, and acting necessary for intelligent or autonomous behavior. The scope of Data Science is similar but focused on the aspects of recognition. Given the data, collected by sensing or by other data accumulation processes, the Data Science tasks consist in recognizing patterns that are interesting or important in some defined sense. More concretely, these tasks can adopt various forms, some of which are sketched below.
Depending on the character of the task, the data processing may be static or dynamic. The static variant is characterized by a fixed data set in which a pattern is to be recognized. This corresponds to the mathematical concept of a mapping: data patterns are mapped to their pattern labels. Static recognition is a widespread setting in image processing, text search, fraud detection, and many other applications.

With dynamic processing, the recognition takes place on a stream of data provided continuously in time. The pattern searched for can be found only by observing this stream and its dynamics. A typical example is speech recognition.
Historically, the first approaches to solving these tasks date back several centuries and have been continually developed since. The traditional disciplines have been statistics as well as systems theory, which investigates dynamic system behavior. These disciplines provide a large pool of scientifically founded findings and methods. Their natural focus on linear systems results from the fact that such systems are substantially easier to treat analytically. Although some powerful theory extensions to nonlinear systems are available, a widespread approach is to treat nonlinear systems as locally linear and to use linear theory tools.
AI has passed through several phases. Its origins in the 1950s focused on simple learning principles, mimicking basic aspects of the behavior of biological neuron cells. The information to be processed was represented by real-valued vectors. The corresponding computing procedures can be counted to the domain of numerical mathematics. The complexity of the algorithms was limited by the computing power of the information processing devices available at that time. The typical tasks solved were simple classification problems encompassing the separation of two classes.

Limitations of this approach with the given information processing technology led to an alternative view: logic-based AI. Instead of focusing on sensory information, logical statements, and correspondingly, logically sound conclusions were investigated. Such data represented some body of knowledge, motivating the label knowledge-based for the approach. The software systems for such processing were called "expert systems" because of the necessity of encoding expert knowledge in an appropriate logical form.
This field has reached a considerable degree of maturity in the machine processing of logical statements. However, the next obstacle had to be surmounted: the possibility of describing the real world in logical terms showed its limits. Many relationships important for intelligent information processing and behavior turned out to be too diffuse for the unambiguous language of logic. Although some attempts to extend logic by probabilistic or pseudo-probabilistic attributes (fuzzy logic) delivered applicable results, another paradigm change took place.
With the fast increase of computing power, also exploiting interconnected computer networks, interest in the approach based on numerical processing of real-valued data revived. The computing architectures are, once more, inspired by the neural systems of living organisms. In addition to the huge growth of computing resources, this phase
The authors hope to present concise and transparent answers to these questions
wherever allowed by the state of the art.
Part I
Mathematical Foundations

2 Application-Specific Mappings and Measuring the Fit to Data
with a parameter vector w. For linear mappings of type (2.2), the parameter vector
w consists of the elements of matrix B.
There are several basic application types with their own interpretation of the
mapping sought. The task of fitting a mapping of a certain type to the data requires a
measure of how good this fit is. An appropriate definition of this measure is important
for several reasons:
• In most cases, a perfect fit with no deviation is not possible. To select from alternative solutions, comparing the values of the fit measure is necessary.
• For optimum mappings of a simple type, such as linear ones, analytical solutions are known. Others can only be found by numerical search methods. To control the search, repeated evaluation of the fit measure is required.
• The most efficient search methods require smooth fit measures with existing or even continuous gradients, to determine the search directions where the chance for improvement is high.
For some mapping types, these two groups of requirements are difficult to meet
in a single fit measure.
There are also requirements concerning the correspondence of the fit measure
appropriate from the viewpoint of the task on one hand and of that used for (mostly
numerical) optimization on the other hand:
• The very basic requirement is that both fit measures should be the same. This
seemingly trivial requirement may be difficult to satisfy for some tasks such as
classification.
• It is desirable that a perfect fit leads to a zero minimum of the fit measure. This is also not always satisfied, for example, with likelihood-based measures. Difficulties in satisfying these requirements frequently lead to using different measures for the search on one hand and for the evaluation of the fit on the other. In such cases, it is preferable if both measures have at least a common optimum.
The most straightforward application type is using the mapping as what it mathe-
matically is: a mapping of real-valued input vectors to equally real-valued output
vectors. This type encompasses many physical, technical, and econometric applica-
tions. Examples of this may be:
• Failure rates (y) determined from operation time and conditions of a component
(x).
• Credit scoring, mapping the descriptive features (x) of the credit recipient to a
number denoting the creditworthiness (y).
• Macroeconomic magnitudes such as inflation rate (y) estimated from others such
as unemployment rate and economic growth (x).
For a vector mapping f(x, w), the error (2.4) is a column vector. The vector product $e^\top e$ is the sum of the squares of the errors of the individual output vector elements. Summing these errors over the K training examples results in the error measure

$$E = \sum_{k=1}^{K} e_k^\top e_k = \sum_{k=1}^{K} \sum_{m=1}^{M} e_{mk}^2 \qquad (2.5)$$
Different scaling of the individual elements of the vector patterns can make scaling weights $S = \mathrm{diag}(s_1, \ldots, s_M)$ appropriate. Also, some training examples may be more important than others, which can be expressed by additional weights $r_k$. The error measure (2.5) then has the generalized form

$$E = \sum_{k=1}^{K} e_k^\top S e_k\, r_k = \sum_{k=1}^{K} \sum_{m=1}^{M} e_{mk}^2\, s_m r_k \qquad (2.6)$$
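As a small illustration, the following sketch (our own, assuming NumPy is available; the array names and toy values are hypothetical) evaluates the error measures (2.5) and (2.6):

```python
import numpy as np

def mse(E):
    """Plain error measure (2.5): E is a (K, M) array of errors e_mk."""
    return np.sum(E ** 2)

def weighted_mse(E, s, r):
    """Generalized error measure (2.6).

    E : (K, M) array of errors e_mk
    s : (M,) scaling weights for the output elements
    r : (K,) importance weights for the training examples
    """
    return np.sum((E ** 2) * s[np.newaxis, :] * r[:, np.newaxis])

# hypothetical toy data: K = 4 examples, M = 2 output elements
E = np.array([[0.1, -0.2], [0.0, 0.3], [-0.1, 0.1], [0.2, 0.0]])
s = np.array([1.0, 2.0])            # second output element counts twice
r = np.array([1.0, 1.0, 0.5, 1.0])  # third example is less important
print(mse(E), weighted_mse(E, s, r))
```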
For linear mappings (2.2), explicit solutions for the minimum of the error measures (2.5) and (2.6) are known. Their properties have been thoroughly investigated, and some important aspects are discussed in Chap. 4. Unfortunately, most practical applications deviate to a greater or lesser extent from the linearity assumption. Good analytical tractability may be sufficient motivation to accept a linear approximation if the expected deviations from the linearity assumption are not excessive. However, many applications will not allow such an approximation. Then, some nonlinear approach must be used.

Modeling nonlinearities in the mappings can be done in two ways that differ strongly in their application.
The first approach preserves linearity in the parameters. The mapping (2.3) is expressed as

$$y = B h(x) \qquad (2.7)$$

with a nonparametric function h(x), which plays the role of the input vector x itself. In other words, h(x) can be substituted for x in all algebraic relationships valid for linear systems. This also includes the explicit solutions for the Mean Square Error (MSE) measures (2.5) and (2.6).
The function h (x) can be an arbitrary function but a typical choice is a polynomial
in vector x. This is motivated by the well-known Taylor expansion of an arbitrary
multivariate function [7]. This expansion enables an approximation of a multivariate
function by a polynomial of a given order on an argument interval, with known error
bounds.
For a vector x with two elements x₁ and x₂, a quadratic polynomial is

$$h(x_1, x_2) = \begin{pmatrix} 1 & x_1 & x_2 & x_1^2 & x_2^2 & x_1 x_2 \end{pmatrix}^\top \qquad (2.8)$$

For a vector x with three elements x₁, x₂, and x₃, it is already as complex as follows:

$$h(x_1, x_2, x_3) = \begin{pmatrix} 1 & x_1 & x_2 & x_3 & x_1^2 & x_2^2 & x_3^2 & x_1 x_2 & x_1 x_3 & x_2 x_3 \end{pmatrix}^\top \qquad (2.9)$$
For a vector x of length N, the length of the vector h(x) is

$$1 + N + N + \frac{(N-1)N}{2} = 1 + \frac{N^2 + 3N}{2} \qquad (2.10)$$
For a polynomial of order p, the size of the vector h(x) grows with the pth power of N. This is the major shortcoming of the polynomial approach for typical DS applications, where input dimensions of many thousands are common. Already with quadratic polynomials, the input width would grow to millions and more.
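This growth can be checked directly. A minimal sketch (our own illustration; it uses the standard count of monomials of degree at most p in N variables, which for p = 2 reproduces (2.10)):

```python
from math import comb

def poly_feature_count(N, p):
    """Number of monomials of degree <= p in N variables: C(N + p, p)."""
    return comb(N + p, p)

# quadratic polynomial (p = 2): C(N+2, 2) equals 1 + (N**2 + 3*N) / 2 of (2.10)
for N in (2, 3, 1000, 10000):
    print(N, poly_feature_count(N, 2))
# N = 10000 already yields about 5 * 10**7 features
```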
Another disadvantage is the growth of the higher polynomial powers outside of the interval covered by the training set: a minor extrapolation may lead to excessively high output values.
So, modeling the multivariate nonlinearities by polynomials is practical only for low-dimensional problems, or for problems in which it is justified to refrain from using full polynomials (e.g., using only powers of individual scalar variables). With such problems, it is possible to benefit from the existence of analytical optima and from statistically well-founded statements about the properties of the results.
These properties of parameterized mappings linear in parameters have led to high interest in more general approximation functions. They form the second approach: mappings nonlinear in parameters. A prominent example is neural networks, discussed in detail in Chap. 3. In spite of intensive research, practical statements about their representational capacity are scarce and overly general, although there are some interesting concepts such as the Vapnik–Chervonenkis dimension [21]. Neural networks with bounded activation functions such as the sigmoid do not exhibit the danger of unbounded extrapolation. They frequently lead to good results if the number of parameters scales linearly with the input dimension, although the optimality or appropriateness of their size is difficult to show. Determining their optimum size is frequently the result of lengthy experiments.
Minimizing the MSE (2.5) or (2.6) leads to a mapping making a good (or even
perfect, in the case of a zero error) forecast of the output vector y. This corresponds
to the statistical concept of point estimation of the expected value of y.
In the presence of an effect unexplained by the input variables or of some type of noise, the true values of the output will usually not be exactly equal to their expected values. Rather, they will fluctuate around these expected values according to some probability distribution. If the scope of these fluctuations is different for different input patterns x, knowledge of the probability distribution may be of crucial interest for the application. In this case, it would be necessary to determine a conditional probability distribution of the output pattern y conditioned on the input pattern x:

$$g(y \mid x) \qquad (2.11)$$

If the expected probability distribution type is parameterized by a parameter vector p, then (2.11) extends to

$$g(y \mid x, p) \qquad (2.12)$$
From the statistical viewpoint, the input/output mapping (2.3) maps the input
pattern x directly to the point estimator of the output pattern y. However, we are
free to adopt a different definition: input pattern x can be mapped to the conditional
parameter vector p of the distribution of output pattern y. This parameter vector
has nothing in common with the fitted parameters of the mapping—it consists of
parameters that determine the shape of a particular probability distribution of the
output patterns y, given an input pattern x. After the fitting process, the conditional
probability distribution (2.12) becomes
$$g(y, f(x, w)) \qquad (2.13)$$
It is an unconditional distribution of output pattern y with distribution parameters
determined by the function f (x, w). The vector w represents the parameters of the
mapping “input pattern x ⇒ conditional probability distribution parameters p” and
should not be confused with the distribution parameters p themselves. For example,
in the case of mapping f () being represented by a neural network, w would corre-
spond to the network weights. Distribution parameters p would then correspond to
the activation of the output layer of the network for a particular input pattern x.
This can be illustrated with the example of a multivariate normal distribution with a mean vector m and covariance matrix C. The distribution (2.12) becomes

$$g(y \mid x, p) = N(m(x), C(x)) = \frac{1}{\sqrt{(2\pi)^N \lvert C(x)\rvert}}\, e^{-\frac{1}{2}(y - m(x))^\top C(x)^{-1}(y - m(x))} \qquad (2.14)$$
The vector y can, for example, represent the forecast of temperature and humidity for the next day, depending on today's meteorological measurements x. Since a point forecast would scarcely hit tomorrow's state exactly and would thus be of limited use, it is substituted by the forecast that the temperature/humidity vector is expected to have the mean m(x) and the covariance matrix C(x), both depending on today's measurement vector x. The mean vector and the elements of the covariance matrix together constitute the distribution parameter vector p in (2.12). This parameter vector depends on the vector of meteorological measurements x as in (2.13).
What remains is to choose an appropriate method for finding the optimal mappings m(x) and C(x), which depend on the input pattern x. In other words, we need some optimality measure for the fit, which is not as simple as in the case of point estimation with its square error. The principle widely used in statistics is that of maximum likelihood. It consists of selecting the distribution parameters (here: m and C) such that the probability density value for the given data is maximal.
For a training set pattern pair (x_k, y_k), the probability density value is

$$\frac{1}{\sqrt{(2\pi)^N \lvert C(x_k)\rvert}}\, e^{-\frac{1}{2}(y_k - m(x_k))^\top C(x_k)^{-1}(y_k - m(x_k))} \qquad (2.15)$$
For independent samples (x_k, y_k), the likelihood of the entire training set is the product

$$\prod_{k=1}^{K} \frac{1}{\sqrt{(2\pi)^N \lvert C(x_k)\rvert}}\, e^{-\frac{1}{2}(y_k - m(x_k))^\top C(x_k)^{-1}(y_k - m(x_k))} \qquad (2.16)$$
• Every symmetric positive definite matrix such as C(x_k)^{-1} can be expressed as the product of a lower triangular matrix L and its transpose L^⊤, that is, C(x_k)^{-1} = L(x_k) L(x_k)^⊤ (the Cholesky decomposition).
• The determinant of a lower triangular matrix L is the product of its diagonal elements.
• The determinant of L L^⊤ is the square of the determinant of L.
• The inverse L^{-1} of a lower triangular matrix L is itself lower triangular, and its determinant is the reciprocal of the determinant of L.
We are then seeking the parameter pair (β(x), η(x)), depending on the input pattern x, such that the log-likelihood over the training set

$$\sum_{k=1}^{K}\left[\ln\frac{\beta(x_k)}{\eta(x_k)} + (\beta(x_k)-1)\ln\frac{y_k}{\eta(x_k)} - \left(\frac{y_k}{\eta(x_k)}\right)^{\beta(x_k)}\right] = \sum_{k=1}^{K}\left[\ln\beta(x_k) - \beta(x_k)\ln\eta(x_k) + (\beta(x_k)-1)\ln y_k - \left(\frac{y_k}{\eta(x_k)}\right)^{\beta(x_k)}\right] \qquad (2.23)$$
is maximal. The parameter pair can, for example, be the output layer (of size 2) activation vector

$$\begin{pmatrix}\beta \\ \eta\end{pmatrix} = f(x, w) \qquad (2.24)$$
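Formula (2.23) is the log-likelihood of a Weibull distribution with shape β and scale η. A sketch of its evaluation (our own illustration, assuming NumPy; the function name and toy data are hypothetical, and β, η are assumed positive, e.g., via an exponential output transformation of the mapping (2.24)):

```python
import numpy as np

def weibull_log_likelihood(y, beta, eta):
    """Sum of the per-pattern terms of the log-likelihood (2.23).

    y, beta, eta : (K,) arrays; beta = beta(x_k) and eta = eta(x_k) are the
    two outputs of the fitted mapping (2.24), assumed positive.
    """
    return np.sum(np.log(beta) - beta * np.log(eta)
                  + (beta - 1) * np.log(y)
                  - (y / eta) ** beta)

# hypothetical usage: evaluate constant parameters on Weibull samples
rng = np.random.default_rng(1)
beta_true, eta_true = 2.0, 3.0
y = eta_true * rng.weibull(beta_true, size=1000)
K = y.shape[0]
print(weibull_log_likelihood(y, np.full(K, beta_true), np.full(K, eta_true)))
```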
2.2 Classification

Classification assigns patterns to one of several discrete classes. Typical examples of patterns to be classified are:
• images in which the object type is sought (e.g., a face, a door, etc.);
• radar signature assigned to flying objects;
• object categories on the road or in its environment during autonomous driving.
Sometimes, the classes are only discrete substitutes for a continuous scale. Discrete credit scores such as "fully creditworthy" or "conditionally creditworthy" are only distinct values of a continuous variable "creditworthiness score". Also, many social science surveys classify the answers into "I fully agree", "I partially agree", "I am indifferent", "I partially disagree", and "I fully disagree", which can be mapped to a continuous scale, for example [−1, 1]. Generally, this is the case whenever the classes can be ordered in an unambiguous way.
Apart from this case with inherent continuity, the classes may be an order-free set of exclusive alternatives. (Nonexclusive classifications can be viewed as separate tasks, each nonexclusive class corresponding to a dichotomy task "member" vs. "nonmember".) For such class sets, a basic measure of the fit to a given training or test set is the misclassification error. The misclassification error for a given pattern may be defined as a variable equal to zero if the classification by the model corresponds to the correct class and equal to one if it does not. More generally, erroneously assigning an object of the correct class i to the class j is evaluated by a nonnegative real number called the loss L_ij. The loss of a correct class assignment is L_ii = 0.
The so-defined misclassification loss is a transparent measure, frequently directly
reflecting application domain priorities. By contrast, it is less easy to make it opera-
tional for fitting or learning algorithms. This is due to its discontinuous character—a
class assignment can only be correct or wrong. So far, solutions have been found
only for special cases.
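As a small illustration (our own; the classes and loss values are hypothetical), the total loss over a set of class assignments can be accumulated from such a loss matrix with L_ii = 0:

```python
import numpy as np

# hypothetical loss matrix: rows = correct class i, columns = assigned class j
loss = np.array([[0.0, 1.0, 5.0],
                 [1.0, 0.0, 1.0],
                 [10.0, 1.0, 0.0]])   # L_ii = 0 on the diagonal

true_classes = np.array([0, 1, 2, 2, 0])
assigned     = np.array([0, 2, 2, 0, 1])
total_loss = loss[true_classes, assigned].sum()
print(total_loss)   # 0 + 1 + 0 + 10 + 1 = 12
```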
Let us consider a simple problem with two classes and two-dimensional patterns
[x1 , x2 ] as shown in Fig. 2.3. The points corresponding to Class 1 and Class 2
can be completely separated by a straight line, without any misclassification. This
is why such classes are called linearly separable. The attainable misclassification
error is zero.
The existence of a separating line guarantees the possibility of defining regions in the pattern vector space corresponding to the individual classes. What is further needed is a function whose value indicates the membership of a pattern in a particular class. Such a function for the classes of Fig. 2.3 is that of Fig. 2.4. Its value is unity for patterns from Class 1 and zero for those from Class 2.
Unfortunately, this function has properties disadvantageous for treatment by numerical algorithms. It is discontinuous along the separating line and has zero gradient elsewhere. This is why it is usual to use an indicator function of the type shown in Fig. 2.5. It is a linear function of the pattern variables. The patterns are assigned to Class 1 if this function is positive and to Class 2 otherwise.
Many, or even most, class pairs cannot be separated by a linear hyperplane. It is not easy to determine whether they can be separated by an arbitrary function if the family of these functions is not fixed. However, some classes can be separated by simple surfaces such as quadratic ones. An example of this is given in Fig. 2.6. The separating curve corresponds to the points where the separating function of Fig. 2.7 intersects the plane y = 0.
The discrete separating function such as that of Fig. 2.4 can be viewed as a nonlinear step function applied to the linear function of Fig. 2.5, that is,

$$s(b^\top x) = \begin{cases} 1 & \text{for } b^\top x \ge 0 \\ 0 & \text{for } b^\top x < 0 \end{cases} \qquad (2.25)$$
To avoid explicitly mentioning the absolute term, it will be assumed that the last element of the input pattern vector x is equal to unity, so that

$$b^\top x = \begin{pmatrix} b_1 & \cdots & b_{N-1} & b_N \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_{N-1} \\ 1 \end{pmatrix} = \begin{pmatrix} b_1 & \cdots & b_{N-1} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_{N-1} \end{pmatrix} + b_N$$
The misclassification sum for a training set with input/output pairs (x_k, y_k) is equal to

$$E = \sum_{k=1}^{K} \left(s(b^\top x_k) - y_k\right)^2 \qquad (2.26)$$
Here, y_k is the class indicator of the kth training pattern, with values 0 or 1. For most numerical minimization methods for error functions E, the gradient of E with respect to the parameters b is required to determine the direction of descent towards low values of E. The gradient is

$$\frac{\partial E}{\partial b} = 2 \sum_{k=1}^{K} \left(s(b^\top x_k) - y_k\right) \frac{ds}{dz}\, x_k \qquad (2.27)$$

with z being the argument of the function s(z).
However, the derivative of the nonlinear step function (2.25) is zero everywhere except at the discontinuity at z = 0, where it does not exist. To obtain a useful descent direction, the famous perceptron rule [16] uses a gradient modification. This pioneering algorithm iteratively updates the weight vector b in the direction of the (negatively taken) modified gradient

$$\frac{\partial E}{\partial b} = \sum_{k=1}^{K} \left(s(b^\top x_k) - y_k\right) x_k \qquad (2.28)$$

This modified gradient can be viewed as (2.27) with ds/dz substituted by unity (the derivative of the linear function s(z) = z). Taking a continuous gradient approximation is an idea also used by optimization algorithms for non-smooth functions, called subgradient algorithms [17].
The algorithm using the perceptron rule converges to zero misclassification rate
if the classes, as defined by the training set, are separable. Otherwise, convergence
is not guaranteed.
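A minimal sketch of this rule in code (our own rendering, assuming NumPy; the step size, iteration limit, and toy data are arbitrary choices, and the stopping test assumes the separable case):

```python
import numpy as np

def step(z):
    return (z >= 0).astype(float)   # the step function s of (2.25)

def perceptron_fit(X, y, lr=0.1, epochs=100):
    """X : (K, N) patterns whose last column is 1 (absolute term),
    y : (K,) class indicators in {0, 1}."""
    b = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = (step(X @ b) - y) @ X    # modified gradient (2.28)
        b -= lr * grad                  # descend along its negative
        if np.all(step(X @ b) == y):    # separable case: stop at zero error
            break
    return b

# hypothetical linearly separable toy data
rng = np.random.default_rng(2)
X0 = rng.normal(loc=[-2, -2], size=(20, 2))
X1 = rng.normal(loc=[2, 2], size=(20, 2))
X = np.hstack([np.vstack([X0, X1]), np.ones((40, 1))])
y = np.array([0] * 20 + [1] * 20)
b = perceptron_fit(X, y)
print(np.mean(step(X @ b) == y))   # expected to reach 1.0 here
```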
An error measure focusing on critical patterns in the proximity of the separating line is used by the approach called the support vector machine (SVM) [2]. This approach looks for a separating line with the largest orthogonal distance to the nearest patterns of both classes. In Fig. 2.8, the separating line is surrounded by the corridor defined by two boundaries against both classes, touching the respective nearest points. The goal is to find a separating line for which the width of this corridor is the largest. In contrast to the class indicator of Fig. 2.4 (with unity for Class 1 and zero for Class 2), the support vector machine rule is easier to represent with a symmetric class indicator y equal to 1 for one class and to −1 for the other. With this class indicator, and with the input pattern vector containing the element 1 to provide for the absolute bias term, the classification task is formulated as a constrained optimization task with constraints

$$y_k\, b^\top x_k \ge 1 \qquad (2.29)$$

If these constraints are satisfied, the product b^⊤x_k is at least 1 for Class 1 and at most −1 for Class 2.
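For illustration, such a maximum-margin separating line can be computed with an off-the-shelf solver. A sketch assuming scikit-learn is available (the toy data are hypothetical; the hard margin of separable classes is approximated by a large penalty parameter C, which here is a solver setting, not the covariance matrix of the later sections):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([-2, -2], size=(20, 2)),
               rng.normal([2, 2], size=(20, 2))])
y = np.array([-1] * 20 + [1] * 20)          # symmetric class indicator

svm = SVC(kernel="linear", C=1e6).fit(X, y)  # large C approximates hard margin
b, bias = svm.coef_[0], svm.intercept_[0]
margin_width = 2.0 / np.linalg.norm(b)       # width of the corridor
print(b, bias, margin_width)
```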
The separating function b^⊤x of (2.29) is a hyperplane crossing the x₁/x₂-coordinate plane at the separating line (red line in Fig. 2.8). At the boundary lines, b^⊤x is equal to constants no smaller than 1 (boundary of Class 1) and no larger than −1 (boundary of Class 2). However, there are infinitely many such separating functions. In the cross section perpendicular to the separating line (i.e., viewing the x₁/x₂-coordinate plane "from aside"), they may appear as in Fig. 2.9.
There are infinitely many such hyperplanes (appearing as dotted lines in the cross section of Fig. 2.9), some of which become very "steep". The most desirable variant would be the one exactly touching the critical points of both classes at unity "height" (solid line). This is why the optimal solution of the SVM is the one with the minimum norm of the vector b:
simple: “separated” (successful fit) and “non-separated” (failing to fit). The absence
of intermediary results makes the problem of discontinuous misclassification error
or loss irrelevant—every separation is a full success.
For Gaussian classes with column vector means m₁ and m₂ and common covariance matrix C, the matrix A and some parts of the constant d become zero. The discriminant function becomes linear:

$$b^\top x + d > 0$$

with

$$b^\top = (m_1 - m_2)^\top C^{-1}$$
$$d = -\frac{1}{2}\, b^\top (m_1 + m_2) + \ln\frac{p_1}{p_2} = -\frac{1}{2}(m_1 - m_2)^\top C^{-1}(m_1 + m_2) + \ln\frac{p_1}{p_2} \qquad (2.37)$$
This linear function is widely used in linear discriminant analysis.
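A sketch of estimating the discriminant (2.37) from samples (our own illustration, assuming NumPy; the pooled-covariance estimate and the function name are our choices):

```python
import numpy as np

def linear_discriminant(X1, X2):
    """Return (b, d) of the linear discriminant (2.37) for two Gaussian
    classes with a common covariance matrix, estimated from samples X1, X2."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled (common) intraclass covariance estimate
    C = (np.cov(X1, rowvar=False) * (len(X1) - 1)
         + np.cov(X2, rowvar=False) * (len(X2) - 1)) / (len(X1) + len(X2) - 2)
    p1 = len(X1) / (len(X1) + len(X2))
    p2 = 1 - p1
    b = np.linalg.solve(C, m1 - m2)             # b^T = (m1 - m2)^T C^{-1}
    d = -0.5 * b @ (m1 + m2) + np.log(p1 / p2)  # absolute term of (2.37)
    return b, d

# classify a pattern x as Class 1 whenever b @ x + d > 0
```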
Interestingly, the separating function (2.37) can, under some assumptions, also be obtained with a least squares approach. For simplicity, it will be assumed that the mean over both classes, m₁p₁ + m₂p₂, is zero. Class 1 and Class 2 are coded by 1 and −1, and the pattern vector x contains 1 at the last position.
The zero gradient is reached at

$$b^\top X^\top X = y^\top X \qquad (2.38)$$

By dividing both sides by the number of samples, the matrices X^⊤X and y^⊤X come to contain sample moments (means and covariances). The expected values are

$$E\left[\frac{1}{K}\, b^\top X^\top X\right] = E\left[\frac{1}{K}\, y^\top X\right] \qquad (2.39)$$
The expression X^⊤X corresponds to the sample second moment matrix. With the zero mean, as assumed above, it is equal to the sample covariance matrix. Every covariance matrix over a population divided into classes can be decomposed into the intraclass covariance C (in this case, identical for both classes) and the interclass covariance

$$M = \begin{pmatrix} m_1 & m_2 \end{pmatrix}, \qquad P = \begin{pmatrix} p_1 & 0 \\ 0 & p_2 \end{pmatrix}, \qquad C_{cl} = M P M^\top \qquad (2.40)$$
This can then be rewritten as

$$b^\top \begin{pmatrix} C + M P M^\top & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} (p_1 m_1 - p_2 m_2)^\top & p_1 - p_2 \end{pmatrix} \qquad (2.41)$$

resulting in

$$b^\top = \begin{pmatrix} (p_1 m_1 - p_2 m_2)^\top & p_1 - p_2 \end{pmatrix} \begin{pmatrix} C + C_{cl} & 0 \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} (p_1 m_1 - p_2 m_2)^\top (C + C_{cl})^{-1} & p_1 - p_2 \end{pmatrix} \qquad (2.42)$$
It is interesting to compare the linear discriminant (2.37) with the least squares solution (2.42). With the additional assumption that both classes have identical prior probabilities p₁ = p₂ (and identical counts in the training set), the absolute term of both (2.37) and (2.42) becomes zero. The matrix C_cl contains the covariances of only two classes and is thus of maximum rank two. The additional condition of the overall mean being equal to zero reduces the rank to one. The result is that the least squares-based separating vector b is only rescaled in comparison with that of the separating function (2.37). This statement can be inferred in the following way.
This statement can be inferred in the following way.
In the case of identical prior probabilities of both classes, the condition of zero
mean of distribution of all patterns is m 1 +m 2 = 0, or m 2 = −m 1 . It can be rewritten
as m 1 = m and m 2 = −m with the help of a single column vector of class means m.
The difference of both means is m 1 − m 2 = 2m. The matrix Ccl is
1 1
Ccl = m1 m2 m1 m2 = m 1 m 1 + m 2 m 2 = mm (2.43)
2 2
with rank equal to one—it is an outer product of only one vector m with itself.
The equation for the separating function b of the linear discriminant is

$$b^\top C = 2 m^\top \qquad (2.44)$$

while for the separating function b_LS of least squares, it is

$$b_{LS}^\top (C + C_{cl}) = 2 m^\top \qquad (2.45)$$
Let us assume the proportionality of both solutions by a factor d:

$$b_{LS} = d\, b \qquad (2.46)$$

Then

$$d\, b^\top (C + C_{cl}) = 2 d\, m^\top + 2 d\, m^\top C^{-1} C_{cl} = 2 m^\top \qquad (2.47)$$

or

$$m^\top C^{-1} C_{cl} = m^\top C^{-1} m m^\top = \frac{1-d}{d}\, m^\top = e\, m^\top \qquad (2.48)$$

with

$$e = \frac{1-d}{d} \qquad (2.49)$$

and

$$d = \frac{1}{1+e} \qquad (2.50)$$
The scalar proportionality factor e in (2.48) can always be found since C_cl = mm^⊤ is (up to a scalar) a projection operator onto a one-dimensional space. It projects every vector, including the vector m^⊤C^{-1}, onto the space spanned by the vector m. In other words, these two vectors are always proportional. Consequently, a scalar proportionality factor d for the separating functions can always be determined via (2.50). This means that the proportional separating functions are equivalent, since they separate identical regions.
The result of this admittedly tedious argument is that the least squares solution fitting the training set to the class indicators 1 and −1 is equivalent to the optimum linear discriminant, under the assumptions made above: Gaussian classes with a common covariance matrix, identical prior probabilities, and a zero overall mean.
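This equivalence can be checked numerically. A minimal sketch (our own, with hypothetical toy data) compares the least squares direction with the discriminant direction from b^⊤C = 2m^⊤ of (2.44):

```python
import numpy as np

rng = np.random.default_rng(4)
m = np.array([1.0, 0.5])                   # m1 = m, m2 = -m (zero overall mean)
A = rng.normal(size=(2, 2))
C = A @ A.T + np.eye(2)                    # common intraclass covariance
X1 = rng.multivariate_normal(m, C, size=5000)
X2 = rng.multivariate_normal(-m, C, size=5000)
X = np.hstack([np.vstack([X1, X2]), np.ones((10000, 1))])  # last element 1
y = np.array([1.0] * 5000 + [-1.0] * 5000)  # symmetric class indicators

b_ls, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares fit
b_lda = np.linalg.solve(C, 2 * m)             # b^T C = 2 m^T, see (2.44)

# the directions should agree up to the scalar factor d of (2.46)
cos = (b_ls[:2] @ b_lda) / (np.linalg.norm(b_ls[:2]) * np.linalg.norm(b_lda))
print(cos)   # expected to be close to 1
```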
This makes the least squares solution interesting, since it can be applied without assumptions about the distribution—of course with the caveat that it is not Bayes-optimal for other distributions. This seems to be the foundation of the popularity of this approach beyond the statistical community, for example, in neural network-based classification.
Its weakness is that the MSE reached cannot be interpreted in terms of the misclassification error—we only know that at the MSE minimum, we are close to the optimum separating function. The reason for this lack of interpretability is that the values of the separating function grow with the distance from the hyperplane separating both classes, while the class indicators (1 and −1) do not—they remain constant at any distance. Consequently, the MSE attained by optimization may be large even if the classes are perfectly separated. This can be seen by imagining a "lateral view" of the vector space, given in Fig. 2.10. It is a cross section in the direction of the class separating line. The class indicators are constant: 1 (Class 1 to the left) and −1 (Class 2 to the right).
More formally, the separating function (for the case of separable classes) assigns the patterns, according to the test b^⊤x + d > 0 for Class 1 membership, to the respective correct class. However, the value of b^⊤x + d is not equal to the class indicator y (1 or −1). Consequently, the MSE $(b^\top x + d - y)^2$ is far from zero in the optimum. Although alternative separating functions with identical separating lines can have different slopes, none of them can reach zero MSE. So, the MSE does not reflect the misclassification rate.
This shortcoming can be alleviated by using a particular nonlinear function of
the term b x + d. Since this function is usually used in the form producing class
indicators 1 for Class 1 and zero for Class 2, it will reflect the rescaled linear
situation of Fig. 2.11.
The nonlinear function is called the logistic or logit function in statistics and econometrics. With neural networks, it is usually referred to as the sigmoid function, related via rescaling to the hyperbolic tangent (tanh). It is a function of a scalar argument z:

$$y = s(z) = \frac{1}{1 + e^{-z}} \qquad (2.51)$$

This function maps the argument z ∈ (−∞, ∞) to the interval [0, 1], as shown in Fig. 2.12.
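In code, the logistic function is a one-liner (a trivial sketch of ours, assuming NumPy):

```python
import numpy as np

def sigmoid(z):
    """Logistic function (2.51), mapping any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # approx [4.5e-05, 0.5, 0.99995]
```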
Applying (2.51) to the linear separating function b^⊤x + d, that is, using the nonlinear separating function

$$y = s(b^\top x + d) = \frac{1}{1 + e^{-(b^\top x + d)}} \qquad (2.52)$$

will change the picture of Fig. 2.11 to that of Fig. 2.13. The forecast class indicators (red crosses) are now close to the original ones (blue and green circles).
The MSE is

$$\left(s(b^\top x + d) - y\right)^2 \qquad (2.53)$$

For separable classes, the MSE can be made arbitrarily close to zero, as depicted in Fig. 2.14. The proximity of the forecast and true class indicators can be increased
where the exponents y_k and 1 − y_k acquire values 0 or 1 and thus "select" the correct alternative from (2.55).

For a training set of mutually independent samples, the likelihood over this sample is the product

$$\prod_{k=1}^{K} f(x_k, w)^{y_k}\, (1 - f(x_k, w))^{1 - y_k} = \prod_{k:\, y_k = 1} f(x_k, w) \prod_{k:\, y_k = 0} (1 - f(x_k, w)) \qquad (2.57)$$
If the training set is a representative sample from the statistical population associated with the patterns x_k, the expected value of the likelihood per pattern, L/K, can be evaluated. The only random variable in (2.58) is the class indicator y, with probability p of being equal to one and 1 − p of being zero.
With a parameterized approximator f(x, w) that can exactly compute the class probability for a given pattern x and some parameter vector w, the exact fit lies at both the maximum of the likelihood and the minimum of the MSE (i.e., least squares). Of course, to reach this exact fit, an optimization algorithm capable of finding the optimum numerically has to be available. This may be difficult for strongly nonlinear approximators.

Least squares with a logistic activation function thus seems to be the approach to classification that satisfies relatively well the requirements formulated at the beginning of Sect. 2.2.
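To make the two fit measures concrete for a binary classifier output f(x_k, w) ∈ (0, 1), here is a small sketch (our own; the toy values are hypothetical). The negative log of the likelihood (2.57) is the familiar binary cross-entropy:

```python
import numpy as np

def neg_log_likelihood(f, y):
    """Negative log of the likelihood (2.57): binary cross-entropy."""
    return -np.sum(y * np.log(f) + (1 - y) * np.log(1 - f))

def mse(f, y):
    """Least squares fit measure on the same class indicators."""
    return np.sum((f - y) ** 2)

# hypothetical forecast probabilities and true 0/1 class indicators
f = np.array([0.9, 0.2, 0.8, 0.6])
y = np.array([1, 0, 1, 0])
print(neg_log_likelihood(f, y), mse(f, y))
```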