Exam in Statistical Machine Learning — Statistisk Maskininlärning (1RT700)
Number of problems: 5
• For subproblems (a), (b), (c), . . . , it is usually possible to answer later subproblems
independently of the earlier subproblems (for example, you can most often answer
(b) without answering (a)).
With the exception of Problem 1, all your answers must be clearly motivated!
A correct answer without a proper motivation will score zero points!
Good luck!
Some relevant formulas
This section contains some expressions that may or may not be useful for solving the exam
problems. It is not a complete list of the formulas used in the course, and some of the
problems may require knowledge of expressions not listed here. Furthermore, the formulas
listed below are not self-explanatory: you need to be familiar with the expressions already
in order to interpret them. They are intended as a support for solving the problems, not as
a comprehensive summary of the course.
where $\log \ell(\beta) = \sum_{i=1}^{n} \log p(y_i \mid x_i; \beta)$ is the log-likelihood function (the last equality holds when the $n$ training data points are modeled to be independent).
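For illustration, a minimal Python sketch of this sum, using made-up values for the probabilities $p(y_i \mid x_i; \beta)$:

```python
import numpy as np

# For independent training data, the log-likelihood is a sum of
# per-point log-probabilities: log l(beta) = sum_i log p(y_i | x_i; beta).
# The probabilities below are made-up example values.
p = np.array([0.9, 0.8, 0.6, 0.95])
log_lik = np.sum(np.log(p))
print(log_lik)  # approx -0.89
```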
Logistic regression: Logistic regression combines linear regression with the logistic
function to model the class probability
$$p(y = 1 \mid x) = \frac{e^{\beta^T x}}{1 + e^{\beta^T x}}.$$
For multi-class logistic regression we use the softmax function and model
$$p(y = k \mid x_i) = \frac{e^{\beta_k^T x_i}}{\sum_{l=1}^{K} e^{\beta_l^T x_i}}.$$
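For illustration, a minimal Python sketch of these two class-probability models (the parameter and input values are made up):

```python
import numpy as np

def logistic_prob(beta, x):
    """p(y = 1 | x) = e^{beta^T x} / (1 + e^{beta^T x})."""
    z = beta @ x
    return np.exp(z) / (1.0 + np.exp(z))

def softmax_probs(B, x):
    """p(y = k | x) for k = 1..K, with B holding one beta_k per row."""
    z = B @ x            # K scores beta_k^T x
    z = z - z.max()      # numerical stability; does not change the result
    e = np.exp(z)
    return e / e.sum()

beta = np.array([0.5, -1.2])
x = np.array([1.0, 2.0])   # include a constant 1 if an offset term is used
print(logistic_prob(beta, x))
print(softmax_probs(np.array([[0.5, -1.2], [0.1, 0.3], [-0.4, 0.8]]), x))
```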
Linear discriminant analysis (LDA): The parameter estimates are
$$\hat{\pi}_k = n_k/n \quad \text{for } k = 1, \dots, K,$$
$$\hat{\mu}_k = \frac{1}{n_k} \sum_{i: y_i = k} x_i \quad \text{for } k = 1, \dots, K,$$
$$\hat{\Sigma} = \frac{1}{n - K} \sum_{k=1}^{K} \sum_{i: y_i = k} (x_i - \hat{\mu}_k)(x_i - \hat{\mu}_k)^T.$$
Quadratic discriminant analysis (QDA): The estimates $\hat{\mu}_k$ and $\hat{\pi}_k$ are as for LDA, and
$$\hat{\Sigma}_k = \frac{1}{n_k - 1} \sum_{i: y_i = k} (x_i - \hat{\mu}_k)(x_i - \hat{\mu}_k)^T.$$
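For illustration, a minimal Python sketch of these parameter estimates on a small made-up data set:

```python
import numpy as np

# Rows of X are inputs x_i; y holds the class labels y_i (made-up data).
X = np.array([[3.0, -2.0], [3.0, -7.0], [9.0, -3.0],
              [10.0, -5.0], [-7.0, 1.0], [0.0, 5.0]])
y = np.array([1, 0, 1, 1, 0, 0])
classes = np.unique(y)
n, K = len(y), len(classes)

pi_hat = {k: np.mean(y == k) for k in classes}          # pi_k = n_k / n
mu_hat = {k: X[y == k].mean(axis=0) for k in classes}   # class means

# LDA: one pooled covariance estimate, normalized by 1 / (n - K)
Sigma_hat = sum((X[y == k] - mu_hat[k]).T @ (X[y == k] - mu_hat[k])
                for k in classes) / (n - K)

# QDA: one covariance estimate per class, normalized by 1 / (n_k - 1)
Sigma_hat_k = {k: (X[y == k] - mu_hat[k]).T @ (X[y == k] - mu_hat[k])
                  / (np.sum(y == k) - 1) for k in classes}
```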
Classification trees: The cost function for tree splitting is $\sum_{m=1}^{|T|} n_m Q_m$, where $T$ is the
tree, $|T|$ the number of terminal nodes, $n_m$ the number of training data points falling in
node $m$, and $Q_m$ the impurity of node $m$. Three common impurity measures for splitting
classification trees are the misclassification rate $1 - \max_k \hat{\pi}_{mk}$, the Gini index $\sum_{k=1}^{K} \hat{\pi}_{mk}(1 - \hat{\pi}_{mk})$ and the entropy $-\sum_{k=1}^{K} \hat{\pi}_{mk} \log \hat{\pi}_{mk}$, where
$$\hat{\pi}_{mk} = \frac{1}{n_m} \sum_{i: x_i \in R_m} I(y_i = k)$$
is the proportion of training points in node (region) $R_m$ that belong to class $k$.
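For illustration, a minimal Python sketch of the three impurity measures as functions of the node proportions $\hat{\pi}_{mk}$:

```python
import numpy as np

def impurities(pi):
    """Impurity of a node given its class proportions pi_mk."""
    pi = np.asarray(pi, dtype=float)
    misclass = 1.0 - pi.max()                            # misclassification rate
    gini = np.sum(pi * (1.0 - pi))                       # Gini index
    entropy = -np.sum(pi[pi > 0] * np.log(pi[pi > 0]))   # entropy
    return misclass, gini, entropy

print(impurities([0.5, 0.5]))   # maximally impure two-class node
print(impurities([1.0, 0.0]))   # pure node: all three measures are 0
```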
Loss functions for classification: For a binary classifier expressed as $\hat{y}(x) = \operatorname{sign}\{C(x)\}$,
for some real-valued function $C(x)$, the margin is defined as $y \cdot C(x)$ (note the convention
$y \in \{-1, 1\}$ here). A few common loss functions can be expressed in terms of the margin,
$L(y, C(x))$; for example, the misclassification loss
$$L(y, c) = \begin{cases} 1 & \text{for } yc < 0, \\ 0 & \text{otherwise.} \end{cases}$$
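For illustration, a minimal Python sketch of the misclassification loss as a function of the margin (the scores are made up):

```python
import numpy as np

# Misclassification loss in terms of the margin y * C(x),
# with the convention y in {-1, 1}.
def misclassification_loss(y, c):
    return np.where(y * c < 0, 1.0, 0.0)

y = np.array([1, -1, 1, -1])
c = np.array([2.3, 0.7, -0.1, -1.5])   # assumed classifier scores C(x)
print(misclassification_loss(y, c))     # -> [0. 1. 1. 0.]
```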
1. This problem is composed of 10 true-or-false statements. You only have to classify
these as either true or false. For this problem (only!) no motivation is required.
Each correct answer scores 1 point, each incorrect answer scores -1 point and each
missing answer scores 0 points. The total score for the whole problem is capped
below at 0.
i. LASSO and Ridge Regression are two different methods for regularization
ii. Regularization decreases the bias of the model
iii. Deep learning is a nonparametric method
iv. The model y = β0 + β1 x1 + β2 sin(x2 ) + ε is a linear regression model
(β0 , β1 and β2 are the unknown parameters)
v. Classification problems have only qualitative inputs
vi. k-NN is a linear classifier if k = 1
vii. Random forest is a special version of boosting with trees
viii. The training error usually increases when we increase the model flexibility
ix. An epoch, in the context of stochastic gradient descent, is the number of
iterations required for the training to converge
x. The partitioning of the input space shown below could be generated by
recursive binary splitting
[Figure: a partitioning of the input space, with x1 on the horizontal axis and x2 on the vertical axis.]
(10p)
2. A wholesaler of chocolate has asked you to build a model for predicting the
purchase price of different chocolate bars based on various types of information
about the chocolate's origin, production, etc. They have collected a database with
their data, containing the following columns:
(a) The customer wants to try a simple model first, such as linear regression or logistic
regression. Which one of these two methods do you suggest? (2p)
(b) For each column of the customer’s database as listed above, specify whether you
would consider that variable as an input of the model, an output of the model, or
neither. (3p)
(c) For each of the inputs and outputs of your model (from the previous question),
specify whether that variable is best viewed as quantitative or qualitative. (3p)
(d) In a previous attempt to design such a system, the inputs used were origin (treated
as qualitative), producer (treated as qualitative) and percentage (treated as
quantitative). At that time, the database contained 183 rows. No satisfactory
performance was obtained. Give a plausible explanation why. (2p)
3. Consider the following training data

    i    x1   x2    y
    1     3   -2    1
    2     3   -7    0
    3     9   -3    1
    4    10   -5    1
    5     2   -2    1
    6    -7    1    0
    7     0    5    0
    8     9   -8    0

where x = (x1, x2) is the two-dimensional input variable, y the output and i is the
data point index.
(a) Illustrate the training data points in a graph with x1 and x2 on the two axes.
Represent the points belonging to class 0 with a cross and those belonging to
class 1 with a circle. Also annotate the data points with their data point indices.
(1p)
(b) Perform leave-one-out cross-validation (which is equivalent to 8-fold cross-validation
here) for k-NN with k = 1 and k = 3 to estimate the misclassification rate for new
data points.
It is OK to determine the closest neighbors graphically using your figure from
problem (a), as long as your approach is well documented by your solution. A
misclassification rate stated without any comments or explanations will score
0 points.
(4p)
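For illustration only (the problem asks you to document a graphical solution), this leave-one-out procedure can be written as the following Python sketch:

```python
import numpy as np

# Leave-one-out cross-validation for k-NN on the training data above.
X = np.array([[3, -2], [3, -7], [9, -3], [10, -5],
              [2, -2], [-7, 1], [0, 5], [9, -8]], dtype=float)
y = np.array([1, 0, 1, 1, 1, 0, 0, 0])

def loo_error(k):
    errors = 0
    for i in range(len(y)):
        dists = np.linalg.norm(X - X[i], axis=1)   # Euclidean distances
        dists[i] = np.inf                          # leave point i out
        nearest = np.argsort(dists)[:k]            # k nearest neighbours
        y_hat = 1 if y[nearest].mean() > 0.5 else 0  # majority vote
        errors += (y_hat != y[i])
    return errors / len(y)

for k in (1, 3):
    print(k, loo_error(k))
```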
(c) What is your conclusion from (b) regarding a good choice of k in k-NN for this
problem?
(1p)
(d) Describe (∼ 1/2 page) how the flexibility of k-NN varies with the value
of k (in general, not restricted to the specific problem in (a)–(c)), and discuss how
it relates to the bias–variance trade-off. What does this mean in practice for a user of
k-NN, and why can we not achieve low bias and low variance at the same time?
(3p)
(e) For the binary classification problem, what potential issue arises when using an even k
(such as k = 2) in k-NN, and how can it be handled in practice?
(1p)
4. (a) Figure 1 shows the training data for a binary classification problem with two inputs,
where the two classes are marked by blue dots and orange crosses, respectively. A
logistic regression classifier is constructed for this problem, and it attains zero
misclassification error on the training data. Describe how this is possible despite the fact that
logistic regression is a linear classifier (i.e., it has linear decision boundaries).
(2p)
[Figure 1: training data for the binary classification problem; both inputs range from −1 to 1.]
(b) Derive an expression for the decision boundary in QDA for a binary classification
problem, when the ‘decision threshold’ is 0.5. The expression should be of the
form $\{x : x^T v + x^T B x = c\}$, with $c$ a scalar, $v$ a vector and $B$ a matrix.
(3p)
(c) Give an example of an application of binary classification where it could be
justified to use a ‘decision threshold’ different from 0.5.
(2p)
(d) Your colleague has a regression problem he needs to solve, with two possible inputs
$x_1$ and $x_2$. He tries two different linear regression models
(M1) $y = \beta_0 + \beta_1 x_1 + \varepsilon$, and
(M2) $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$.
He trains the two models successfully on the same data set using least squares,
and obtains
(T1) $\hat{y} = 7.2 + 1.1 x_1$, and
(T2) $\hat{y} = 8.1 - 2.9 x_1 + 5.1 x_2$.
Your colleague studies the result, becomes puzzled and asks you: “If I increase
the input $x_1$ by one unit in model (T1), my prediction $\hat{y}$ will increase by 1.1.
However, a unit increase in $x_1$ in model (T2) will instead decrease my prediction
$\hat{y}$ by 2.9. How can it be that the two models do not even agree on whether an
increase in $x_1$ should decrease or increase $y$?”
Give a plausible explanation to your colleague of possible reasons for this situation.
(3p)
5. (a) Explain in a few sentences (at most ∼ 1/2 page) the differences and similarities
between bagging and boosting. (4p)
(b) Consider a binary classification problem with one input and with training data set
where $\alpha_b > 0$ is the “confidence” of the $b$th classifier, and each base classifier is
assumed to be linear, i.e., of the form
(c) Show that it is not possible to attain zero training error using only two ensemble
members in the previous question. You may assume that $\alpha_1 \neq \alpha_2$ for simplicity
(though, the statement holds also for $\alpha_1 = \alpha_2$). (2p)
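For illustration of the kind of ensemble referred to in (b), here is a minimal Python sketch; it assumes the standard weighted-vote combination $C(x) = \sum_b \alpha_b \hat{y}_b(x)$ with thresholded linear base classifiers in one input, and all names and values below are illustrative:

```python
import numpy as np

# Weighted-vote ensemble of B thresholded linear base classifiers
# yhat_b(x) = s_b * sign(x - t_b), combined as C(x) = sum_b alpha_b * yhat_b(x).
def ensemble_predict(x, alphas, thresholds, signs):
    C = sum(a * s * np.sign(x - t)
            for a, t, s in zip(alphas, thresholds, signs))
    return np.sign(C)  # final prediction yhat(x) = sign{C(x)}

x = np.linspace(-2, 2, 9)
print(ensemble_predict(x, alphas=[0.8, 0.4],
                       thresholds=[-0.6, 0.3], signs=[1, -1]))
```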