Applied Statistics PDF

2019-10-02
Contents

1 Introduction
  1.1 About This Book
  1.2 Conventions
  1.3 Acknowledgements
  1.4 License

2 Introduction to R
  2.1 Getting Started
  2.2 Basic Calculations
  2.3 Getting Help
  2.4 Installing Packages

4 Summarizing Data
  4.1 Summary Statistics
  4.2 Plotting
    4.2.1 Histograms
    4.2.2 Barplots
    4.2.3 Boxplots
    4.2.4 Scatterplots

6 R Resources
  6.1 Beginner Tutorials and References
  6.2 Intermediate References
  6.3 Advanced References
  6.4 Quick Comparisons to Other Languages
  6.5 RStudio and RMarkdown Videos
  6.6 RMarkdown Template

14 Transformations
  14.1 Response Transformation
    14.1.1 Variance Stabilizing Transformations
    14.1.2 Box-Cox Transformations
  14.2 Predictor Transformation
    14.2.1 Polynomials
    14.2.2 A Quadratic Model
    14.2.3 Overfitting and Extrapolation
    14.2.4 Comparing Polynomial Models
    14.2.5 poly() Function and Orthogonal Polynomials
    14.2.6 Inhibit Function
    14.2.7 Data Example
  14.3 R Markdown

15 Collinearity
  15.1 Exact Collinearity
  15.2 Collinearity
    15.2.1 Variance Inflation Factor
  15.3 Simulation
  15.4 R Markdown

18 Beyond
  18.1 What's Next
  18.2 RStudio
  18.3 Tidy Data
  18.4 Visualization
  18.5 Web Applications
  18.6 Experimental Design
  18.7 Machine Learning
    18.7.1 Deep Learning
  18.8 Time Series
  18.9 Bayesianism
  18.10 High Performance Computing
  18.11 Further R Resources

19 Appendix
Chapter 1
Introduction
1.1 About This Book

This book was originally (and currently) designed for use with STAT 420, Methods of Applied Statistics, at the University of Illinois at Urbana-Champaign. It may certainly be used elsewhere, but any references to "this course" in this book specifically refer to STAT 420.
This book is under active development. When possible, it would be best to always access the text online to be sure you are using the most up-to-date version. Also, the html version provides additional features such as changing text size, font, and colors. If you are in need of a local copy, a pdf version is continuously maintained; however, because a pdf uses pages, the formatting may not be as functional. (In other words, the author needs to go back and spend some time working on the pdf formatting.)
Since this book is under active development you may encounter errors ranging from typos, to broken code, to poorly explained topics. If you do, please let us know! Simply send an email (dalpiaz2 AT illinois DOT edu) and we will make the changes as soon as possible. Or, if you know RMarkdown and are familiar with GitHub, make a pull request and fix an issue yourself! This process is partially automated by the edit button in the top-left corner of the html version. If your suggestion or fix becomes part of the book, you will be added to the list at the end of this chapter. We'll also link to your GitHub account, or personal website upon request.
This text uses MathJax to render mathematical notation for the web. Occasionally, but rarely, a JavaScript error will prevent MathJax from rendering correctly. In this case, you will see the "code" instead of the expected mathematical equations. From experience, this is almost always fixed by simply refreshing the page. You'll also notice that if you right-click any equation you can obtain the MathML Code (for copying into Microsoft Word) or the TeX command used to generate the equation.
$$ a^2 + b^2 = c^2 $$
1.2 Conventions
R code will be typeset using a monospace font which is syntax highlighted.
a = 3
b = 4
sqrt(a ^ 2 + b ^ 2)
R output lines, which would appear in the console will begin with ##. They will
generally not be syntax highlighted.
## [1] 5
We use the quantity p to refer to the number of β parameters in a linear model,
not the number of predictors. Don’t worry if you don’t know what this means
yet!
1.3 Acknowledgements
Material in this book was heavily influenced by:
• Alex Stepanov
– Longtime instructor of STAT 420 at the University of Illinois at
Urbana-Champaign. The author of this book actually took Alex’s
STAT 420 class many years ago! Alex provided or inspired many of
the examples in the text.
• David Unger
– Another STAT 420 instructor at the University of Illinois at Urbana-
Champaign. Co-taught with the author during the summer of 2016
while this book was first being developed. Provided endless hours of
copy editing and countless suggestions.
• James Balamuta
– Current graduate student at the University of Illinois at Urbana-
Champaign. Provided the initial push to write this book by intro-
ducing the author to the bookdown package in R. Also a frequent
contributor via GitHub.
Your name could be here! Suggest an edit! Correct a typo! If you submit a
correction and would like to be listed below, please provide your name as you
would like it to appear, as well as a link to a GitHub, LinkedIn, or personal
website.
• Daniel McQuillan
• Mason Rubenstein
• Yuhang Wang
• Zhao Liu
• Jinfeng Xiao
• Somu Palaniappan
• Michael Hung-Yiu Chan
• Eloise Rosen
• Kiomars Nassiri
• Jeff Gerlach
• Brandon Ching
• Ray Fix
• Tyler Kim
• Yeongho Kim
• Elmar Langholz
• Thai Duy Cuong Nguyen
• Junyoung Kim
• Sezgin Kucukcoban
• Tony Ma
• Radu Manolescu
• Dileep Pasumarthi
1.4 License
Chapter 2
Introduction to R
RStudio has a large number of useful keyboard shortcuts. A list of these can be
found using a keyboard shortcut – the keyboard shortcut to rule them all:
• On Windows: Alt + Shift + K
• On Mac: Option + Shift + K
The RStudio team has developed a number of “cheatsheets” for working with
both R and RStudio. This particular cheatsheet for “Base” R will summarize
many of the concepts in this document. (“Base” R is a name used to differentiate
the practice of using built-in R functions, as opposed to using functions from
outside packages, in particular, those from the tidyverse. More on this later.)
When programming, it is often a good practice to follow a style guide. (Where do
spaces go? Tabs or spaces? Underscores or CamelCase when naming variables?)
No style guide is "correct" but it helps to be aware of what others do. The more important thing is to be consistent within your own code.
• Hadley Wickham Style Guide from Advanced R
• Google Style Guide
For this course, our main deviation from these two guides is the use of = in place
of <-. (More on that later.)
2.2 Basic Calculations

| Math      | R             | Result |
|-----------|---------------|--------|
| 3 + 2     | 3 + 2         | 5      |
| 3 − 2     | 3 - 2         | 1      |
| 3 · 2     | 3 * 2         | 6      |
| 3 / 2     | 3 / 2         | 1.5    |

Exponents

| Math      | R             | Result |
|-----------|---------------|--------|
| 3^2       | 3 ^ 2         | 9      |
| 2^(−3)    | 2 ^ (-3)      | 0.125  |
| 100^(1/2) | 100 ^ (1 / 2) | 10     |
| √100      | sqrt(100)     | 10     |
Mathematical Constants

| Math | R      | Result    |
|------|--------|-----------|
| π    | pi     | 3.1415927 |
| e    | exp(1) | 2.7182818 |

Logarithms

Note that we will use ln and log interchangeably to mean the natural logarithm. There is no ln() in R; instead, it uses log() to mean the natural logarithm.

| Math         | R                 | Result |
|--------------|-------------------|--------|
| log(e)       | log(exp(1))       | 1      |
| log_10(1000) | log10(1000)       | 3      |
| log_2(8)     | log2(8)           | 3      |
| log_4(16)    | log(16, base = 4) | 2      |

Trigonometry

| Math     | R           | Result |
|----------|-------------|--------|
| sin(π/2) | sin(pi / 2) | 1      |
| cos(0)   | cos(0)      | 1      |
2.3 Getting Help

Frequently one of the most difficult things to do when learning R is asking for
help. First, you need to decide to ask for help, then you need to know how
to ask for help. Your very first line of defense should be to Google your error
message or a short description of your issue. (The ability to solve problems
using this method is quickly becoming an extremely valuable skill.) If that fails,
and it eventually will, you should ask for help. There are a number of things
you should include when emailing an instructor, or posting to a help website
such as Stack Exchange.
• Describe what you expect the code to do.
• State the end goal you are trying to achieve. (Sometimes what you expect
the code to do is not what you want to actually do.)
• Provide the full text of any errors you have received.
• Provide enough code to recreate the error. Often for the purpose of this
course, you could simply email your entire .R or .Rmd file.
• Sometimes it is also helpful to include a screenshot of your entire RStudio
window when the error occurs.
If you follow these steps, you will get your issue resolved much quicker, and
possibly learn more in the process. Do not be discouraged by running into
errors and difficulties when learning R. (Or any technical skill.) It is simply part
of the learning process.
2.4 Installing Packages

Once a package is installed, it must be loaded into your current R session before
being used. Think of this as taking the book off of the shelf and opening it up
to read.
library(ggplot2)
Once you close R, all the packages are closed and put back on the imaginary
shelf. The next time you open R, you do not have to install the package again,
but you do have to load any packages you intend to use by invoking library().
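For example, installing and then loading a package might look like the following (ggplot2 simply mirrors the example above; installation only needs to happen once per machine, so the install line is usually left commented out after the first run).

# install.packages("ggplot2")  # download and install the package (run once)
library(ggplot2)                # load the package for the current R session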
Chapter 3

Data and Programming
3.2.1 Vectors

Many operations in R make heavy use of vectors. One way to create a vector is with the c() function, which combines the values given to it.

c(1, 3, 5, 7, 8, 9)

## [1] 1 3 5 7 8 9
Here R simply outputs this vector. If we would like to store this vector in
a variable we can do so with the assignment operator =. In this case the
variable x now holds the vector we just created, and we can access the vector
by typing x.
x = c(1, 3, 5, 7, 8, 9)
x
## [1] 1 3 5 7 8 9
As an aside, there is a long history of the assignment operator in R, partially
due to the keys available on the keyboards of the creators of the S language.
(Which preceded R.) For simplicity we will use =, but know that often you will
see <- as the assignment operator.
The pros and cons of these two are well beyond the scope of this book, but
know that for our purposes you will have no issue if you simply use =. If you
are interested in the weird cases where the difference matters, check out The R
Inferno.
If you wish to use <-, you will still need to use =, however only for argument
passing. Some users like to keep assignment (<-) and argument passing (=)
separate. No matter what you choose, the more important thing is that you
stay consistent. Also, if working on a larger collaborative project, you should
use whatever style is already in place.
Because vectors must contain elements that are all the same type, R will automatically coerce to a single type when attempting to create a vector that combines multiple types. For example, combining a number and a logical coerces the logical to numeric.

c(42, TRUE)

## [1] 42 1
Frequently you may wish to create a vector based on a sequence of numbers.
The quickest and easiest way to do this is with the : operator, which creates a
sequence of integers between two specified integers.
(y = 1:100)
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
## [18] 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
## [35] 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
## [52] 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68
## [69] 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85
## [86] 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
Here we see R labeling the rows after the first since this is a large vector. Also,
we see that by putting parentheses around the assignment, R both stores the
vector in a variable called y and automatically outputs y to the console.
Note that scalars do not exist in R. They are simply vectors of length 1.
2
## [1] 2
If we want to create a sequence that isn’t limited to integers and increasing by
1 at a time, we can use the seq() function.
seq(from = 1.5, to = 4.2, by = 0.1)
## [1] 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1
## [18] 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2
We will discuss functions in detail later, but note here that the input labels
from, to, and by are optional.
seq(1.5, 4.2, 0.1)
## [1] 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1
## [18] 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2
Another common operation to create a vector is rep(), which can repeat a
single value a number of times.
rep("A", times = 10)
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A"
The rep() function can be used to repeat a vector some number of times.
rep(x, times = 3)
## [1] 1 3 5 7 8 9 1 3 5 7 8 9 1 3 5 7 8 9
We have now seen four different ways to create vectors:
• c()
• :
• seq()
• rep()
So far we have mostly used them in isolation, but they are often used together.
c(x, rep(seq(1, 9, 2), 3), c(1, 2, 3), 42, 2:4)
## [1] 1 3 5 7 8 9 1 3 5 7 9 1 3 5 7 9 1 3 5 7 9 1 2
## [24] 3 42 2 3 4
The length of a vector can be obtained with the length() function.
length(x)
## [1] 6
length(y)
## [1] 100
3.2.1.1 Subsetting
x

## [1] 1 3 5 7 8 9
x[1]
## [1] 1
x[3]
## [1] 5
We see that x[1] returns the first element, and x[3] returns the third element.
x[-2]
## [1] 1 5 7 8 9
We can also exclude certain indexes, in this case the second element.
x[1:3]
## [1] 1 3 5
x[c(1,3,4)]
## [1] 1 5 7
Lastly we see that we can subset based on a vector of indices.
All of the above are subsetting a vector using a vector of indexes. (Remember a
single number is still a vector.) We could instead use a vector of logical values.
z = c(TRUE, TRUE, FALSE, TRUE, TRUE, FALSE)
z

## [1]  TRUE  TRUE FALSE  TRUE  TRUE FALSE

x[z]

## [1] 1 3 7 8
3.2.2 Vectorization

One of the biggest strengths of R is its use of vectorized operations; an operation applied to a vector is applied to every element of that vector.

x = 1:10
x + 1

## [1]  2  3  4  5  6  7  8  9 10 11

2 * x

## [1]  2  4  6  8 10 12 14 16 18 20

2 ^ x

## [1]    2    4    8   16   32   64  128  256  512 1024

Logical comparisons are vectorized as well, and a logical vector can be used to subset. Returning to the shorter vector from before:

x = c(1, 3, 5, 7, 8, 9)
x[x > 3]

## [1] 5 7 8 9

x[x != 3]

## [1] 1 5 7 8 9
• TODO: coercion
sum(x > 3)
## [1] 4
as.numeric(x > 3)
## [1] 0 0 1 1 1 1
Here we see that using the sum() function on a vector of logical TRUE and FALSE values (the result of x > 3) produces a numeric result. R is first automatically coercing the logicals to numeric, where TRUE is 1 and FALSE is 0. This coercion from logical to numeric happens for most mathematical operations.
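One handy consequence of this coercion, shown here as a quick illustration, is that mean() applied to a logical vector returns the proportion of TRUE values.

mean(x > 3)  # proportion of elements of x greater than 3

## [1] 0.6666667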
which(x > 3)
## [1] 3 4 5 6
x[which(x > 3)]
## [1] 5 7 8 9
max(x)
## [1] 9
which(x == max(x))
## [1] 6
which.max(x)
## [1] 6
x = c(1, 3, 5, 7, 8, 9)
y = 1:100

x + 2

## [1]  3  5  7  9 10 11

x + rep(2, 6)

## [1]  3  5  7  9 10 11

x > 3

## [1] FALSE FALSE  TRUE  TRUE  TRUE  TRUE

When vectors of different lengths are combined, R recycles the shorter vector. Because the length of y is not a multiple of the length of x, operations such as x + y or (x + y) - y still work, but R issues a warning about the mismatched lengths.

length(x)

## [1] 6

length(y)

## [1] 100

length(y) / length(x)

## [1] 16.66667

If instead the longer length is an exact multiple of the shorter, the recycling happens silently.

y = 1:60
x + y

## [1]  2  5  8 11 13 15  8 11 14 17 19 21 14 17 20 23 25 27 20 23 26 29 31
## [24] 33 26 29 32 35 37 39 32 35 38 41 43 45 38 41 44 47 49 51 44 47 50 53
## [47] 55 57 50 53 56 59 61 63 56 59 62 65 67 69

length(y) / length(x)

## [1] 10

rep(x, 10) + y

## [1]  2  5  8 11 13 15  8 11 14 17 19 21 14 17 20 23 25 27 20 23 26 29 31
## [24] 33 26 29 32 35 37 39 32 35 38 41 43 45 38 41 44 47 49 51 44 47 50 53
## [47] 55 57 50 53 56 59 61 63 56 59 62 65 67 69
all(x + y == rep(x, 10) + y)
## [1] TRUE
identical(x + y, rep(x, 10) + y)
## [1] TRUE
# ?any
# ?all.equal
3.2.5 Matrices
R can also be used for matrix calculations. Matrices have rows and columns
containing a single data type. In a matrix, the order of rows and columns is
important. (This is not true of data frames, which we will see later.)
Matrices can be created using the matrix function.
x = 1:9
x
## [1] 1 2 3 4 5 6 7 8 9
X = matrix(x, nrow = 3, ncol = 3)
X

##      [,1] [,2] [,3]
## [1,]    1    4    7
## [2,]    2    5    8
## [3,]    3    6    9

Matrices are subset using square brackets with both a row and a column index.

X[1, 2]

## [1] 4
Here we accessed the element in the first row and the second column. We could
also subset an entire row or column.
X[1, ]
## [1] 1 4 7
X[, 2]
## [1] 4 5 6
We can also use vectors to subset more than one row or column at a time. Here
we subset to the first and third column of the second row.
X[2, c(1, 3)]
## [1] 2 8
Matrices can also be created by combining vectors as columns, using cbind, or
combining vectors as rows, using rbind.
x = 1:9
rev(x)
## [1] 9 8 7 6 5 4 3 2 1
rep(1, 9)
## [1] 1 1 1 1 1 1 1 1 1
rbind(x, rev(x), rep(1, 9))

##   [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
## x    1    2    3    4    5    6    7    8    9
##      9    8    7    6    5    4    3    2    1
##      1    1    1    1    1    1    1    1    1
cbind(col_1 = x, col_2 = rev(x), col_3 = rep(1, 9))
Arithmetic operators such as * act element-wise on matrices, so X * Y multiplies matrices element by element; the %*% operator performs matrix multiplication, and solve() computes the inverse of a square matrix. To verify that solve(Z) really is the inverse of a square matrix Z, we can check that solve(Z) %*% Z returns the identity matrix. Because of floating-point arithmetic we should not test for exact equality, but instead use the all.equal() function which checks for equality, with some small tolerance which accounts for some computational issues. The identical() function is used to check for exact equality.

all.equal(solve(Z) %*% Z, diag(nrow(Z)))

## [1] TRUE
R has a number of matrix specific functions for obtaining dimension and sum-
mary information.
X = matrix(1:6, 2, 3)
X

##      [,1] [,2] [,3]
## [1,]    1    3    5
## [2,]    2    4    6

dim(X)

## [1] 2 3

rowSums(X)

## [1]  9 12

colSums(X)

## [1]  3  7 11

rowMeans(X)

## [1] 3 4

colMeans(X)

## [1] 1.5 3.5 5.5
The diag() function, when given a matrix, extracts its diagonal. Applied to the square matrix Z used above,

diag(Z)

## [1]  9  4 16
Or, lastly, create a square matrix of a certain dimension with 1 for every element
of the diagonal and 0 for the off-diagonals.
diag(5)
Certain operations in R, for example %*% have different behavior on vectors and
matrices. To illustrate this, we will first create two vectors.
a_vec = c(1, 2, 3)
b_vec = c(2, 2, 2)
Note that these are indeed vectors. They are not matrices.
c(is.vector(a_vec), is.vector(b_vec))

## [1] TRUE TRUE
When this is the case, the %*% operator is used to calculate the dot product, also known as the inner product of the two vectors.
$$ a \cdot b = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n. $$
a_vec %*% b_vec # inner product

##      [,1]
## [1,]   12

a_vec %o% b_vec # outer product

##      [,1] [,2] [,3]
## [1,]    2    2    2
## [2,]    4    4    4
## [3,]    6    6    6

When a vector is coerced to a matrix, it becomes a column vector.

as.matrix(a_vec)

##      [,1]
## [1,]    1
## [2,]    2
## [3,]    3
If we use the %*% operator on matrices, %*% again performs the expected matrix
multiplication. So you might expect the following to produce an error, because
the dimensions are incorrect.
as.matrix(a_vec) %*% b_vec
Another way to calculate a dot product is with the crossprod() function. Given
two vectors, the crossprod() function calculates their dot product. The func-
tion has a rather misleading name.
crossprod(a_vec, b_vec) # inner product
## [,1]
## [1,] 12
tcrossprod(a_vec, b_vec) # outer product
When given two matrices, crossprod(X, Y) calculates $X^\top Y$. This comes up often in statistics, for example when a quantity such as $X^\top X$ is used repeatedly.
C_mat = matrix(c(1, 2, 3, 4, 5, 6), 2, 3)
D_mat = matrix(c(2, 2, 2, 2, 2, 2), 2, 3)
This is useful both as a shortcut for a frequent calculation and as a more efficient
implementation than using t() and %*%.
crossprod(C_mat, D_mat)

all.equal(crossprod(C_mat, D_mat), t(C_mat) %*% D_mat)

## [1] TRUE
crossprod(C_mat, C_mat)

all.equal(crossprod(C_mat, C_mat), t(C_mat) %*% C_mat)

## [1] TRUE
3.2.6 Lists
A list is a one-dimensional, heterogeneous data structure; each element can hold a different type. For example:

list(42, "Hello", TRUE)

## [[1]]
## [1] 42
##
## [[2]]
## [1] "Hello"
##
## [[3]]
## [1] TRUE
ex_list = list(
a = c(1, 2, 3, 4),
b = TRUE,
c = "Hello!",
d = function(arg = 42) {print("Hello World!")},
e = diag(5)
)
Lists can be subset using two syntaxes, the $ operator, and square brackets [].
The $ operator returns a named element of a list. The [] syntax returns a
list, while the [[]] returns an element of a list.
ex_list[1:2]

## $a
## [1] 1 2 3 4
##
## $b
## [1] TRUE
ex_list[1]
## $a
## [1] 1 2 3 4
ex_list[[1]]
## [1] 1 2 3 4
ex_list[c("e", "a")]
## $e
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 0 0 0 0
## [2,] 0 1 0 0 0
## [3,] 0 0 1 0 0
## [4,] 0 0 0 1 0
## [5,] 0 0 0 0 1
##
## $a
## [1] 1 2 3 4
ex_list["e"]
## $e
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 0 0 0 0
## [2,] 0 1 0 0 0
## [3,] 0 0 1 0 0
## [4,] 0 0 0 1 0
## [5,] 0 0 0 0 1
ex_list[["e"]]
We have previously seen vectors and matrices for storing data as we introduced
R. We will now introduce a data frame which will be the most common way
that we store and interact with data in this course.
example_data = data.frame(x = c(1, 3, 5, 7, 9, 1, 3, 5, 7, 9),
y = c(rep("Hello", 9), "Goodbye"),
z = rep(c(TRUE, FALSE), 5))
Unlike a matrix, which can be thought of as a vector rearranged into rows and
columns, a data frame is not required to have the same data type for each
element. A data frame is a list of vectors. So, each vector must contain the
same data type, but the different vectors can store different data types.
example_data
## x y z
## 1 1 Hello TRUE
## 2 3 Hello FALSE
## 3 5 Hello TRUE
## 4 7 Hello FALSE
## 5 9 Hello TRUE
## 6 1 Hello FALSE
## 7 3 Hello TRUE
## 8 5 Hello FALSE
## 9 7 Hello TRUE
## 10 9 Goodbye FALSE
Unlike a list which has more flexibility, the elements of a data frame must all
be vectors, and have the same length.
example_data$x
## [1] 1 3 5 7 9 1 3 5 7 9
all.equal(length(example_data$x),
length(example_data$y),
length(example_data$z))
## [1] TRUE
str(example_data)

nrow(example_data)

## [1] 10
ncol(example_data)
## [1] 3
dim(example_data)
## [1] 10 3
The data.frame() function above is one way to create a data frame. We can also import data from various file types into R, as well as use data stored in packages.
The example data above can also be found here as a .csv file. To read this data
into R, we would use the read_csv() function from the readr package. Note
that R has a built in function read.csv() that operates very similarly. The
readr function read_csv() has a number of advantages. For example, it is
much faster reading larger data. It also uses the tibble package to read the
data as a tibble.
library(readr)
example_data_from_csv = read_csv("data/example-data.csv")
This particular line of code assumes that the file example-data.csv exists in a folder called data in your current working directory.
example_data_from_csv
## # A tibble: 10 x 3
## x y z
## <dbl> <chr> <lgl>
## 1 1 Hello TRUE
## 2 3 Hello FALSE
## 3 5 Hello TRUE
## 4 7 Hello FALSE
## 5 9 Hello TRUE
## 6 1 Hello FALSE
## 7 3 Hello TRUE
## 8 5 Hello FALSE
## 9 7 Hello TRUE
## 10 9 Goodbye FALSE
A tibble is simply a data frame that prints with sanity. Notice in the output
above that we are given additional information such as dimension and variable
type.
## # A tibble: 10 x 3
## x y z
## <dbl> <fct> <lgl>
## 1 1 Hello TRUE
## 2 3 Hello FALSE
## 3 5 Hello TRUE
## 4 7 Hello FALSE
## 5 9 Hello TRUE
## 6 1 Hello FALSE
## 7 3 Hello TRUE
## 8 5 Hello FALSE
## 9 7 Hello TRUE
## 10 9 Goodbye FALSE
library(ggplot2)
Inside the ggplot2 package is a dataset called mpg. By loading the package
using the library() function, we can now access mpg.
When using data from inside a package, there are three things we would generally
like to do:
• Look at the raw data.
• Understand the data. (Where did it come from? What are the variables?
Etc.)
• Visualize the data.
To look at the data, we have two useful commands: head() and str().
head(mpg, n = 10)
## # A tibble: 10 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 audi a4 1.8 1999 4 auto~ f 18 29 p comp~
## 2 audi a4 1.8 1999 4 manu~ f 21 29 p comp~
## 3 audi a4 2 2008 4 manu~ f 20 31 p comp~
## 4 audi a4 2 2008 4 auto~ f 21 30 p comp~
## 5 audi a4 2.8 1999 6 auto~ f 16 26 p comp~
## 6 audi a4 2.8 1999 6 manu~ f 18 26 p comp~
## 7 audi a4 3.1 2008 6 auto~ f 18 27 p comp~
## 8 audi a4 q~ 1.8 1999 4 manu~ 4 18 26 p comp~
## 9 audi a4 q~ 1.8 1999 4 auto~ 4 16 25 p comp~
## 10 audi a4 q~ 2 2008 4 manu~ 4 20 28 p comp~
The function head() will display the first n observations of the data frame. The
head() function was more useful before tibbles. Notice that mpg is a tibble
already, so the output from head() indicates there are only 10 observations.
Note that this applies to head(mpg, n = 10) and not mpg itself. Also note that
tibbles print a limited number of rows and columns by default. The last line of
the printed output indicates which rows and columns were omitted.
mpg
## # A tibble: 234 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 audi a4 1.8 1999 4 auto~ f 18 29 p comp~
## 2 audi a4 1.8 1999 4 manu~ f 21 29 p comp~
## 3 audi a4 2 2008 4 manu~ f 20 31 p comp~
## 4 audi a4 2 2008 4 auto~ f 21 30 p comp~
## 5 audi a4 2.8 1999 6 auto~ f 16 26 p comp~
## 6 audi a4 2.8 1999 6 manu~ f 18 26 p comp~
The function str() will display the “structure” of the data frame. It will display
the number of observations and variables, list the variables, give the type of
each variable, and show some elements of each variable. This information can
also be found in the “Environment” window in RStudio.
str(mpg)
It is important to note that while matrices have rows and columns, data frames
(tibbles) instead have observations and variables. When displayed in the console
or viewer, each row is an observation and each column is a variable. However, generally speaking, their order does not matter; it is simply a side-effect of how the data was entered or stored.
To understand more about the data set, we use the ? operator to pull up the
documentation for the data.
?mpg
R has a number of functions for quickly working with and extracting basic in-
formation from data frames. To quickly obtain a vector of the variable names,
we use the names() function.
names(mpg)

##  [1] "manufacturer" "model"        "displ"        "year"         "cyl"
##  [6] "trans"        "drv"          "cty"          "hwy"          "fl"
## [11] "class"

To access one of the variables as a vector, we use the $ operator. For example, mpg$year extracts the model years.

mpg$year
## [1] 1999 1999 2008 2008 1999 1999 2008 1999 1999 2008 2008 1999 1999 2008
## [15] 2008 1999 2008 2008 2008 2008 2008 1999 2008 1999 1999 2008 2008 2008
## [29] 2008 2008 1999 1999 1999 2008 1999 2008 2008 1999 1999 1999 1999 2008
## [43] 2008 2008 1999 1999 2008 2008 2008 2008 1999 1999 2008 2008 2008 1999
## [57] 1999 1999 2008 2008 2008 1999 2008 1999 2008 2008 2008 2008 2008 2008
## [71] 1999 1999 2008 1999 1999 1999 2008 1999 1999 1999 2008 2008 1999 1999
## [85] 1999 1999 1999 2008 1999 2008 1999 1999 2008 2008 1999 1999 2008 2008
## [99] 2008 1999 1999 1999 1999 1999 2008 2008 2008 2008 1999 1999 2008 2008
## [113] 1999 1999 2008 1999 1999 2008 2008 2008 2008 2008 2008 2008 1999 1999
## [127] 2008 2008 2008 2008 1999 2008 2008 1999 1999 1999 2008 1999 2008 2008
## [141] 1999 1999 1999 2008 2008 2008 2008 1999 1999 2008 1999 1999 2008 2008
## [155] 1999 1999 1999 2008 2008 1999 1999 2008 2008 2008 2008 1999 1999 1999
## [169] 1999 2008 2008 2008 2008 1999 1999 1999 1999 2008 2008 1999 1999 2008
## [183] 2008 1999 1999 2008 1999 1999 2008 2008 1999 1999 2008 1999 1999 1999
## [197] 2008 2008 1999 2008 1999 1999 2008 1999 1999 2008 2008 1999 1999 2008
## [211] 2008 1999 1999 1999 1999 2008 2008 2008 2008 1999 1999 1999 1999 1999
## [225] 1999 2008 2008 1999 1999 2008 2008 1999 1999 2008
mpg$hwy
## [1] 29 29 31 30 26 26 27 26 25 28 27 25 25 25 25 24 25 23 20 15 20 17 17
## [24] 26 23 26 25 24 19 14 15 17 27 30 26 29 26 24 24 22 22 24 24 17 22 21
## [47] 23 23 19 18 17 17 19 19 12 17 15 17 17 12 17 16 18 15 16 12 17 17 16
## [70] 12 15 16 17 15 17 17 18 17 19 17 19 19 17 17 17 16 16 17 15 17 26 25
## [93] 26 24 21 22 23 22 20 33 32 32 29 32 34 36 36 29 26 27 30 31 26 26 28
## [116] 26 29 28 27 24 24 24 22 19 20 17 12 19 18 14 15 18 18 15 17 16 18 17
## [139] 19 19 17 29 27 31 32 27 26 26 25 25 17 17 20 18 26 26 27 28 25 25 24
## [162] 27 25 26 23 26 26 26 26 25 27 25 27 20 20 19 17 20 17 29 27 31 31 26
## [185] 26 28 27 29 31 31 26 26 27 30 33 35 37 35 15 18 20 20 22 17 19 18 20
## [208] 29 26 29 29 24 44 29 26 29 29 29 29 23 24 44 41 29 26 28 29 29 29 28
## [231] 29 26 26 26
We can use the dim(), nrow() and ncol() functions to obtain information
about the dimension of the data frame.
dim(mpg)
## [1] 234 11
nrow(mpg)
## [1] 234
ncol(mpg)
## [1] 11
Here nrow() is also the number of observations, which in most cases is the
sample size.
Subsetting data frames can work much like subsetting matrices using square
brackets, [,]. Here, we find fuel efficient vehicles earning over 35 miles per
gallon and only display manufacturer, model and year.
mpg[mpg$hwy > 35, c("manufacturer", "model", "year")]
## # A tibble: 6 x 3
## manufacturer model year
## <chr> <chr> <int>
## 1 honda civic 2008
## 2 honda civic 2008
## 3 toyota corolla 2008
## 4 volkswagen jetta 1999
## 5 volkswagen new beetle 1999
## 6 volkswagen new beetle 1999
An alternative would be to use the subset() function, which has a much more
readable syntax.
subset(mpg, subset = hwy > 35, select = c("manufacturer", "model", "year"))
Lastly, we could use the filter and select functions from the dplyr package
which introduces the %>% operator from the magrittr package. This is not
necessary for this course, however the dplyr package is something you should
be aware of as it is becoming a popular tool in the R world.
library(dplyr)
mpg %>% filter(hwy > 35) %>% select(manufacturer, model, year)
All three approaches produce the same results. Which you use will be largely
based on a given situation as well as user preference.
When subsetting a data frame, be aware of what is being returned, as sometimes
it may be a vector instead of a data frame. Also note that there are differences
between subsetting a data frame and a tibble. A data frame operates more like
a matrix where it is possible to reduce the subset to a vector. A tibble operates
more like a list where it always subsets to another tibble.
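As a small illustration of that difference (the rows and column chosen here are arbitrary):

mpg[1:3, "hwy"]                 # a tibble subsets to another tibble (3 x 1)
as.data.frame(mpg)[1:3, "hwy"]  # a data frame drops a single column to a vector: 29 29 31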
3.3 Programming Basics

The syntax of an if/else statement in R is:

if (...) {
some R code
} else {
more R code
}
For example,
x = 1
y = 3
if (x > y) {
z = x * y
print("x is larger than y")
} else {
z = x + 5 * y
print("x is less than or equal to y")
}

## [1] "x is less than or equal to y"

z

## [1] 16
R also has a special function ifelse() which is very useful. It returns one of
two specified values based on a conditional statement.
ifelse(4 > 3, 1, 0)
## [1] 1
The real power of ifelse() comes from its ability to be applied to vectors.
fib = c(1, 1, 2, 3, 5, 8, 13, 21)
ifelse(fib > 6, "Foo", "Bar")

## [1] "Bar" "Bar" "Bar" "Bar" "Bar" "Foo" "Foo" "Foo"

Loops are another basic programming construct. For example, we could loop over a vector and double each of its elements, one at a time.

x = 11:15
for (i in 1:5) {
  x[i] = x[i] * 2
}
x

## [1] 22 24 26 28 30

Note that this for loop is very normal in many programming languages, but not in R. In R we would not use a loop; instead, we would simply use a vectorized operation.
x = 11:15
x = x * 2
x
## [1] 22 24 26 28 30
3.3.2 Functions
So far we have been using functions, but haven’t actually discussed some of their
details.
function_name(arg1 = 10, arg2 = 20)
To use a function, you simply type its name, followed by an open parenthesis,
then specify values of its arguments, then finish with a closing parenthesis.
An argument is a variable which is used in the body of the function. Specifying
the values of the arguments is essentially providing the inputs to the function.
We can also write our own functions in R. For example, we often like to “stan-
dardize” variables, that is, subtracting the sample mean, and dividing by the
sample standard deviation.
$$ \frac{x - \bar{x}}{s} $$
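In R, such a function could be written as follows. This is a minimal sketch chosen to match the description below: a single argument x, with the result stored in a variable called result.

standardize = function(x) {
  m = mean(x)          # sample mean
  std = sd(x)          # sample standard deviation
  result = (x - m) / std
  result               # the final line is what the function returns
}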
Here the name of the function is standardize, and the function has a single
argument x which is used in the body of function. Note that the output of
the final line of the body is what is returned by the function. In this case the
function returns the vector stored in the variable result.
To test our function, we will take a random sample of size n = 10 from a normal
distribution with a mean of 2 and a standard deviation of 5.
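A sketch of that step is shown below; since no seed is set here, the particular values will differ from run to run.

test_sample = rnorm(n = 10, mean = 2, sd = 5)
standardize(test_sample)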
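Next, consider a function that raises a number to a power. A definition consistent with the calls that follow (an argument num with no default, and an argument power that defaults to 2) is:

power_of_num = function(num, power = 2) {
  num ^ power
}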
Let’s look at a number of ways that we could run this function to perform the
operation 10^2 resulting in 100.
power_of_num(10)
## [1] 100
power_of_num(10, 2)
## [1] 100
power_of_num(num = 10, power = 2)
## [1] 100
power_of_num(power = 2, num = 10)
## [1] 100
Note that without using the argument names, the order matters. The following
code will not evaluate to the same output as the previous example.
power_of_num(2, 10)
## [1] 1024
Also, the following line of code would produce an error since arguments without
a default value must be specified.
power_of_num(power = 5)
As another example, consider writing a function to calculate the sample variance,

$$ s^2 = \frac{1}{n - 1}\sum_{i=1}^{n}(x_i - \bar{x})^2. $$
It will also have the ability to return the biased estimate (based on maximum
likelihood) which we will call σ̂ 2 .
$$ \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 $$
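One way to write such a function is sketched below; the name get_var and the default biased = FALSE are taken from the calls that follow.

get_var = function(x, biased = FALSE) {
  n = length(x)
  if (biased) {
    sum((x - mean(x)) ^ 2) / n        # maximum likelihood (biased) estimate
  } else {
    sum((x - mean(x)) ^ 2) / (n - 1)  # usual unbiased estimate, matches var()
  }
}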
get_var(test_sample)

## [1] 11.44477
get_var(test_sample, biased = FALSE)
## [1] 11.44477
var(test_sample)
## [1] 11.44477
We see the function is working as expected, and when returning the unbiased
estimate it matches R’s built in function var(). Finally, let’s examine the biased
estimate of σ 2 .
get_var(test_sample, biased = TRUE)
## [1] 10.3003
Chapter 4
Summarizing Data
4.1 Summary Statistics

Central Tendency

| Measure | R               | Result     |
|---------|-----------------|------------|
| Mean    | mean(mpg$cty)   | 16.8589744 |
| Median  | median(mpg$cty) | 17         |

Spread

| Measure            | R              | Result     |
|--------------------|----------------|------------|
| Variance           | var(mpg$cty)   | 18.1130736 |
| Standard Deviation | sd(mpg$cty)    | 4.2559457  |
| IQR                | IQR(mpg$cty)   | 5          |
| Minimum            | min(mpg$cty)   | 9          |
| Maximum            | max(mpg$cty)   | 35         |
| Range              | range(mpg$cty) | 9, 35      |
Categorical
For categorical variables, counts and percentages can be used for summary.
table(mpg$drv)
##
## 4 f r
## 103 106 25
table(mpg$drv) / nrow(mpg)
##
## 4 f r
## 0.4401709 0.4529915 0.1068376
4.2 Plotting
Now that we have some data to work with, and we have learned about the
data at the most basic level, our next task is to visualize the data. Often, a
proper visualization can illuminate features of the data that can inform further
analysis.
We will look at four methods of visualizing data that we will use throughout
the course:
• Histograms
• Barplots
• Boxplots
• Scatterplots
4.2.1 Histograms
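To plot a histogram of the city fuel mileage in the mpg data we use the hist() function; the first plot below comes from a call with all defaults.

hist(mpg$cty)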
[Figure: default histogram of mpg$cty, titled "Histogram of mpg$cty", with Frequency on the vertical axis.]
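The second histogram below adds axis labels, a title, extra breaks, and colors. A call along these lines would produce it; the specific labels, number of breaks, and colors are illustrative.

hist(mpg$cty,
     xlab   = "Miles Per Gallon (City)",
     main   = "Histogram of MPG (City)",
     breaks = 12,
     col    = "dodgerblue",
     border = "darkorange")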
[Figure: histogram of mpg$cty with additional breaks and custom labels.]
Importantly, you should always be sure to label your axes and give the plot a
title. The argument breaks is specific to hist(). Entering an integer will give
a suggestion to R for how many bars to use for the histogram. By default R will
attempt to intelligently guess a good number of breaks, but as we can see here,
it is sometimes useful to modify this yourself.
4.2.2 Barplots
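To display counts of a categorical variable we pass a table to barplot(); the unadorned plot below comes from a call such as:

barplot(table(mpg$drv))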
[Figure: barplot of counts of mpg$drv (4, f, r).]
barplot(table(mpg$drv),
xlab = "Drivetrain (f = FWD, r = RWD, 4 = 4WD)",
ylab = "Frequency",
main = "Drivetrains",
col = "dodgerblue",
border = "darkorange")
[Figure: barplot titled "Drivetrains" showing frequency by drivetrain (4, f, r).]
4.2.3 Boxplots
First note that we can use a single boxplot as an alternative to a histogram for
visualizing a single numerical variable. To do so in R, we use the boxplot()
function.
boxplot(mpg$hwy)
[Figure: boxplot of mpg$hwy.]
However, more often we will use boxplots to compare a numerical variable for
different values of a categorical variable.
boxplot(hwy ~ drv, data = mpg)
[Figure: side-by-side boxplots of hwy by drv.]
[Figure: customized side-by-side boxplots of hwy by drv.]
Again, boxplot() has a number of additional arguments which have the ability
to make our plot more visually appealing.
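A call along the following lines produces such a customized plot; the particular labels and colors are illustrative.

boxplot(hwy ~ drv, data = mpg,
        xlab = "Drivetrain (f = FWD, r = RWD, 4 = 4WD)",
        ylab = "Miles Per Gallon (Highway)",
        main = "MPG (Highway) vs Drivetrain",
        col  = "darkorange",
        border = "dodgerblue")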
4.2.4 Scatterplots
Lastly, to visualize the relationship between two numeric variables we will use a
scatterplot. This can be done with the plot() function and the ~ syntax we
just used with a boxplot. (The function plot() can also be used more generally;
see the documentation for details.)
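For example, to plot highway mileage against engine displacement (the variables that appear in the figures below):

plot(hwy ~ displ, data = mpg)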
[Figure: scatterplot of highway mileage against engine displacement (displ).]
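As with the other plot types, additional arguments control the labels, point style, and color. A call along these lines would produce the labeled version below; the exact labels and colors are illustrative.

plot(hwy ~ displ, data = mpg,
     xlab = "Engine Displacement (in Liters)",
     ylab = "Miles Per Gallon (Highway)",
     main = "MPG (Highway) vs Engine Displacement",
     pch  = 20,
     cex  = 2,
     col  = "dodgerblue")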
[Figure: customized scatterplot, Miles Per Gallon (Highway) versus engine displacement.]
Chapter 5

Probability and Statistics in R

5.1 Probability in R
5.1.1 Distributions
Suppose we are working with a normal distribution with mean 2 and standard deviation 5. In R, the functions dnorm(), pnorm(), qnorm(), and rnorm() evaluate the density, the distribution function, the quantile function, and random generation for this distribution, and each takes the arguments mean and sd. (Note that R parameterizes the normal distribution by its standard deviation, not its variance.)
To calculate the value of the pdf at x = 3, that is, the height of the curve at x
= 3, use:
dnorm(x = 3, mean = 2, sd = 5)
## [1] 0.07820854
To calculate the value of the cdf at x = 3, that is, P (X ≤ 3), the probability
that X is less than or equal to 3, use:
pnorm(q = 3, mean = 2, sd = 5)
## [1] 0.5792597
Or, to calculate the quantile for probability 0.975, use:
qnorm(p = 0.975, mean = 2, sd = 5)
## [1] 11.79982
Lastly, to generate a random sample of size n = 10, use:
rnorm(n = 10, mean = 2, sd = 5)
| Command | Distribution |
|---------|--------------|
| *binom  | Binomial     |
| *t      | t            |
| *pois   | Poisson      |
| *f      | F            |
| *chisq  | Chi-Squared  |
Where * can be d, p, q, and r. Each distribution will have its own set of
parameters which need to be passed to the functions as arguments. For ex-
ample, dbinom() would not have arguments for mean and sd, since those are
not parameters of the distribution. Instead a binomial distribution is usually
parameterized by n and p, however R chooses to call them something else. To
find the names that R uses we would use ?dbinom and see that R instead calls
the arguments size and prob. For example:
dbinom(x = 6, size = 10, prob = 0.75)
## [1] 0.145998
Also note that, when using the dname functions with discrete distributions, they
are the pmf of the distribution. For example, the above command is P (Y = 6)
if Y ∼ b(n = 10, p = 0.75). (The probability of flipping an unfair coin 10 times
and seeing 6 heads, if the probability of heads is 0.75.)
5.2 Hypothesis Tests in R

Recall the one-sample t-test. For a sample of size n from a normal population, the test statistic for a hypothesized mean $\mu_0$ is

$$ t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}} \sim t_{n-1}, $$

where $\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$ and $s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$.

A 100(1 − α)% confidence interval for µ is given by

$$ \bar{x} \pm t_{n-1}(\alpha/2)\frac{s}{\sqrt{n}} $$
where tn−1 (α/2) is the critical value such that P (t > tn−1 (α/2)) = α/2 for n−1
degrees of freedom.
Suppose a grocery store sells "16 ounce" boxes of Captain Crisp cereal. A random sample of 9 boxes was taken and weighed. The weights in ounces are stored in the data frame capt_crisp.

capt_crisp = data.frame(weight = c(15.5, 16.2, 16.1, 15.8, 15.6, 16.0, 15.8, 15.9, 16.2))
The company that makes Captain Crisp cereal claims that the average weight
of a box is at least 16 ounces. We will assume the weight of cereal in a box is
normally distributed and use a 0.05 level of significance to test the company’s
claim.
To test H0 : µ ≥ 16 versus H1 : µ < 16, the test statistic is
$$ t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}} $$
The sample mean x̄ and the sample standard deviation s can be easily computed
using R. We also create variables which store the hypothesized mean and the
sample size.
x_bar = mean(capt_crisp$weight)
s = sd(capt_crisp$weight)
mu_0 = 16
n = 9

t = (x_bar - mu_0) / (s / sqrt(n))
t

## [1] -1.2
Under the null hypothesis, the test statistic has a t distribution with n − 1
degrees of freedom, in this case 8.
To complete the test, we need to obtain the p-value of the test. Since this is a
one-sided test with a less-than alternative, we need the area to the left of -1.2
for a t distribution with 8 degrees of freedom. That is,
pt(t, df = n - 1)
## [1] 0.1322336
We now have the p-value of our test, which is greater than our significance level
(0.05), so we fail to reject the null hypothesis.
Alternatively, this entire process could have been completed using one line of R
code.
t.test(x = capt_crisp$weight, mu = 16, alternative = c("less"), conf.level = 0.95)
##
## One Sample t-test
##
## data: capt_crisp$weight
## t = -1.2, df = 8, p-value = 0.1322
## alternative hypothesis: true mean is less than 16
## 95 percent confidence interval:
## -Inf 16.05496
## sample estimates:
## mean of x
## 15.9
We supply R with the data, the hypothesized value of µ, the alternative, and
the confidence level. R then returns a wealth of information including:
• The value of the test statistic.
• The degrees of freedom of the distribution under the null hypothesis.
• The p-value of the test.
• The confidence interval which corresponds to the test.
• An estimate of µ.
Since the test was one-sided, R returned a one-sided confidence interval. If
instead we wanted a two-sided interval for the mean weight of boxes of Captain
Crisp cereal we could modify our code.
capt_test_results = t.test(capt_crisp$weight, mu = 16,
alternative = c("two.sided"), conf.level = 0.95)
This time we have stored the results. By doing so, we can directly access portions
of the output from t.test(). To see what information is available we use the
names() function.
names(capt_test_results)

The part we are interested in is the confidence interval, which is stored in the conf.int element.

capt_test_results$conf.int

## [1] 15.70783 16.09217
## attr(,"conf.level")
## [1] 0.95
Let’s check this interval “by hand.” The one piece of information we are missing
is the critical value, tn−1 (α/2) = t8 (0.025), which can be calculated in R using
the qt() function.
qt(0.975, df = 8)
## [1] 2.306004
So, the 95% CI for the mean weight of a cereal box is calculated by plugging
into the formula,
$$ \bar{x} \pm t_{n-1}(\alpha/2)\frac{s}{\sqrt{n}} $$
c(mean(capt_crisp$weight) - qt(0.975, df = 8) * sd(capt_crisp$weight) / sqrt(9),
  mean(capt_crisp$weight) + qt(0.975, df = 8) * sd(capt_crisp$weight) / sqrt(9))

## [1] 15.70783 16.09217
For a test comparing the means of two independent normal samples (assuming equal variances), the test statistic is

$$ t = \frac{(\bar{x} - \bar{y}) - \mu_0}{s_p\sqrt{\frac{1}{n} + \frac{1}{m}}} \sim t_{n+m-2}, $$

where $\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$, $\bar{y} = \frac{\sum_{i=1}^{m} y_i}{m}$, and $s_p^2 = \frac{(n-1)s_x^2 + (m-1)s_y^2}{n+m-2}$.

A 100(1 − α)% CI for $\mu_x - \mu_y$ is given by

$$ (\bar{x} - \bar{y}) \pm t_{n+m-2}(\alpha/2)\, s_p\sqrt{\tfrac{1}{n} + \tfrac{1}{m}}, $$

where $t_{n+m-2}(\alpha/2)$ is the critical value such that $P(t > t_{n+m-2}(\alpha/2)) = \alpha/2$.
Assume that the distributions of X and Y are N(µ₁, σ²) and N(µ₂, σ²), respectively, and that we wish to test H₀: µ₁ − µ₂ = 0 against H₁: µ₁ − µ₂ > 0. We are given the n = 6 observations of X and the m = 8 observations of Y; these are the same values stored in the data frame t_test_data below.

x = c(70, 82, 78, 74, 94, 82)
y = c(64, 72, 60, 76, 72, 80, 84, 68)

The pooled standard deviation and test statistic are

$$ s_p = \sqrt{\frac{(n-1)s_x^2 + (m-1)s_y^2}{n+m-2}}, \qquad t = \frac{(\bar{x} - \bar{y}) - \mu_0}{s_p\sqrt{\frac{1}{n} + \frac{1}{m}}}. $$

Computing the statistic in R,

n = length(x)
m = length(y)
s_p = sqrt(((n - 1) * var(x) + (m - 1) * var(y)) / (n + m - 2))
(mean(x) - mean(y)) / (s_p * sqrt(1 / n + 1 / m))
## [1] 1.823369
Note that $t \sim t_{n+m-2} = t_{12}$, so we can calculate the p-value, which is
1 - pt(t, df = n + m - 2)
## [1] 0.04661961
But, then again, we could have simply performed this test in one line of R.
t.test(x, y, alternative = c("greater"), var.equal = TRUE)
##
## Two Sample t-test
##
## data: x and y
## t = 1.8234, df = 12, p-value = 0.04662
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## 0.1802451 Inf
## sample estimates:
## mean of x mean of y
## 80 72
Recall that a two-sample t-test can be done with or without an equal variance
assumption. Here var.equal = TRUE tells R we would like to perform the test
under the equal variance assumption.
Above we carried out the analysis using two vectors x and y. In general, we will
have a preference for using data frames.
t_test_data = data.frame(values = c(x, y),
group = c(rep("A", length(x)), rep("B", length(y))))
We now have the data stored in a single variable (values) and have created a second variable (group) which indicates which "sample" the value belongs to.
t_test_data
## values group
## 1 70 A
## 2 82 A
## 3 78 A
## 4 74 A
## 5 94 A
## 6 82 A
## 7 64 B
## 8 72 B
## 9 60 B
## 10 76 B
## 11 72 B
## 12 80 B
## 13 84 B
## 14 68 B
Now to perform the test, we still use the t.test() function but with the ~
syntax and a data argument.
t.test(values ~ group, data = t_test_data,
alternative = c("greater"), var.equal = TRUE)
##
## Two Sample t-test
##
## data: values by group
## t = 1.8234, df = 12, p-value = 0.04662
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## 0.1802451 Inf
## sample estimates:
## mean in group A mean in group B
## 80 72
5.3 Simulation
Suppose we take independent random samples of size n = 25 from two normal populations, $X_1 \sim N(\mu_1 = 6, \sigma^2 = 4)$ and $X_2 \sim N(\mu_2 = 5, \sigma^2 = 4)$, and consider the sample means

$$ \bar{X}_1 = \frac{1}{n}\sum_{i=1}^{n} X_{1i}, \qquad \bar{X}_2 = \frac{1}{n}\sum_{i=1}^{n} X_{2i}, $$

and their difference

$$ D = \bar{X}_1 - \bar{X}_2. $$
Suppose we would like to calculate P (0 < D < 2). First we will need to obtain
the distribution of D.
Recall,

$$ \bar{X}_1 \sim N\left(\mu_1, \frac{\sigma^2}{n}\right) $$

and

$$ \bar{X}_2 \sim N\left(\mu_2, \frac{\sigma^2}{n}\right). $$

Then,

$$ D = \bar{X}_1 - \bar{X}_2 \sim N\left(\mu_1 - \mu_2, \frac{\sigma^2}{n} + \frac{\sigma^2}{n}\right) = N\left(6 - 5, \frac{4}{25} + \frac{4}{25}\right). $$

So,

$$ D \sim N(\mu = 1, \sigma^2 = 0.32). $$

Thus, we want $P(0 < D < 2)$ for this distribution.
This can then be calculated using R without a need to first standardize, or use
a table.
pnorm(2, mean = 1, sd = sqrt(0.32)) - pnorm(0, mean = 1, sd = sqrt(0.32))
## [1] 0.9229001
An alternative approach would be to simulate a large number of observations of D, then use the empirical distribution to calculate the probability.
Our strategy will be to repeatedly:
• Generate a sample of 25 random observations from N (µ1 = 6, σ 2 = 4).
Call the mean of this sample x̄1s .
• Generate a sample of 25 random observations from N (µ2 = 5, σ² = 4).
Call the mean of this sample x̄2s.
• Calculate the differences of the means, ds = x̄1s − x̄2s .
We will repeat the process a large number of times. Then we will use the
distribution of the simulated observations of ds as an estimate for the true
distribution of D.
set.seed(42)
num_samples = 10000
differences = rep(0, num_samples)
Before starting our for loop to perform the operation, we set a seed for reproducibility, create and set a variable num_samples which will define the number of repetitions, and lastly create a variable differences which will store the simulated values, ds.
By using set.seed() we can reproduce the random results of rnorm() each
time starting from that line.
for (s in 1:num_samples) {
x1 = rnorm(n = 25, mean = 6, sd = 2)
x2 = rnorm(n = 25, mean = 5, sd = 2)
differences[s] = mean(x1) - mean(x2)
}
Once the simulation is complete, we can estimate $P(0 < D < 2)$ by the proportion of simulated differences that fall between 0 and 2.

mean(0 < differences & differences < 2)

## [1] 0.9222
Recall that above we derived the distribution of D to be N(µ = 1, σ² = 0.32). If we look at a histogram of the differences, we find that it looks very much like a normal distribution.
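A call along these lines draws that histogram; the number of breaks and the colors are illustrative, while the titles match the figure.

hist(differences, breaks = 20,
     main = "Empirical Distribution of D",
     xlab = "Simulated Values of D",
     col  = "dodgerblue",
     border = "darkorange")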
[Figure: histogram titled "Empirical Distribution of D", frequency of simulated values of D.]
Also, the sample mean and variance are very close to what we would expect.
mean(differences)
## [1] 1.001423
var(differences)
## [1] 0.3230183
We could have also accomplished this task with a single line of more “idiomatic”
R.
set.seed(42)
diffs = replicate(10000, mean(rnorm(25, 6, 2)) - mean(rnorm(25, 5, 2)))
Use ?replicate to take a look at the documentation for the replicate function
and see if you can understand how this line performs the same operations that
our for loop above executed.
mean(differences == diffs)
## [1] 1
We see that by setting the same seed for the randomization, we actually obtain
identical results!
Recall that if

$$ X \sim \text{Pois}(\mu), $$

then

$$ E[X] = \mu \quad \text{and} \quad \text{Var}[X] = \mu. $$
Also, recall that for a random variable X with finite mean µ and finite variance
σ 2 , the central limit theorem tells us that the mean, X̄ of a random sample of
size n is approximately normal for large values of n. Specifically, as n → ∞,
$$ \bar{X} \overset{d}{\to} N\left(\mu, \frac{\sigma^2}{n}\right). $$
The following verifies this result for a Poisson distribution with µ = 10 and a
sample size of n = 50.
set.seed(1337)
mu = 10
sample_size = 50
samples = 100000
x_bars = rep(0, samples)
for(i in 1:samples){
x_bars[i] = mean(rpois(sample_size, lambda = mu))
}
x_bar_hist = hist(x_bars, breaks = 50,
main = "Histogram of Sample Means",
xlab = "Sample Means")
[Figure: histogram titled "Histogram of Sample Means", frequency of the simulated sample means.]
Now we will compare sample statistics from the empirical distribution with their
known values based on the parent distribution.
c(mean(x_bars), mu)

We can also check what proportion of the simulated sample means fall within two (approximate) standard deviations of µ.

mean(x_bars > mu - 2 * sqrt(mu) / sqrt(sample_size) &
     x_bars < mu + 2 * sqrt(mu) / sqrt(sample_size))

## [1] 0.95429
The last histogram below uses a bit of a trick to approximately shade the bars that are within two standard deviations of the mean.
shading = ifelse(x_bar_hist$breaks > mu - 2 * sqrt(mu) / sqrt(sample_size) &
x_bar_hist$breaks < mu + 2 * sqrt(mu) / sqrt(sample_size),
"darkorange", "dodgerblue")
[Figure: histogram of sample means with the bars within two standard deviations of the mean shaded.]
Chapter 6
R Resources
So far, we have seen a lot of R, and a lot of R quickly. Again, the preceding
chapters were in no way meant to be a complete reference for the R language,
but rather an introduction to many of the concepts we will need in this text.
The following resources are not necessary for the remainder of this text, but you
may find them useful if you would like a deeper understanding of R:
Chapter 7

Simple Linear Regression

7.1 Modeling
Let’s consider a simple example of how the speed of a car affects its stopping distance, that is, how far it travels before it comes to a stop. To examine this relationship, we will use the cars dataset, which is a default R dataset. Thus, we don’t need to load a package first; it is immediately available.
To get a first look at the data you can use the View() function inside RStudio.
View(cars)
We could also take a look at the variable names, the dimension of the data
frame, and some sample observations with str().
str(cars)

## 'data.frame':    50 obs. of  2 variables:
##  $ speed: num  4 4 7 7 8 9 10 10 10 11 ...
##  $ dist : num  2 10 4 22 16 10 18 26 34 17 ...
As we have seen before with data frames, there are a number of additional
functions to access some of this information directly.
dim(cars)
## [1] 50 2
nrow(cars)
## [1] 50
ncol(cars)
## [1] 2
Other than the two variable names and the number of observations, this data
is still just a bunch of numbers, so we should probably obtain some context.
?cars
Reading the documentation we learn that this is data gathered during the 1920s
about the speed of cars and the resulting distance it takes for the car to come
to a stop. The interesting task here is to determine how far a car travels before
stopping, when traveling at a certain speed. So, we will first plot the stopping
distance against the speed.
plot(dist ~ speed, data = cars,
xlab = "Speed (in Miles Per Hour)",
ylab = "Stopping Distance (in Feet)",
main = "Stopping Distance vs Speed",
pch = 20,
cex = 2,
col = "grey")
[Figure: scatterplot titled "Stopping Distance vs Speed", Stopping Distance (in Feet) versus Speed (in Miles Per Hour).]
Let’s now define some terminology. We have pairs of data, (xi, yi), for i = 1, 2, . . . , n, where n is the sample size of the dataset.

We use i as an index, simply for notation. We use xi as the predictor (explanatory) variable. The predictor variable is used to help predict or explain the response (target, outcome) variable, yi.
Other texts may use the term independent variable instead of predictor and
dependent variable in place of response. However, those monikers imply mathematical characteristics that might not be true. While these other terms are
not incorrect, independence is already a strictly defined concept in probability.
For example, when trying to predict a person’s weight given their height, would
it be accurate to say that height is independent of weight? Certainly not, but
that is an unintended implication of saying “independent variable.” We prefer
to stay away from this nomenclature.
In the cars example, we are interested in using the predictor variable speed to
predict and explain the response variable dist.
Broadly speaking, we would like to model the relationship between X and Y
using the form
Y = f (X) + ϵ.
The function f describes the functional relationship between the two variables,
and the ϵ term is used to account for error. This indicates that if we plug in a given value of X as input, our output is a value of Y, up to some random error.
What sort of function should we use for f (X) for the cars data?
We could try to model the data with a horizontal line. That is, the model for y
does not depend on the value of x. (Some function f (X) = c.) In the plot below,
we see this doesn’t seem to do a very good job. Many of the data points are very
far from the orange line representing c. This is an example of underfitting.
The obvious fix is to make the function f (X) actually depend on x.
[Figure: cars data with a horizontal line, an example of underfitting.]
We could also try to model the data with a very “wiggly” function that tries to
go through as many of the data points as possible. This also doesn’t seem to
work very well. The stopping distance for a speed of 5 mph shouldn’t be off the
chart! (Even in 1920.) This is an example of overfitting. (Note that in this
example no function will go through every point, since there are some x values
that have several possible y values in the data.)
[Figure: cars data with a very "wiggly" function through the points, an example of overfitting.]
Lastly, we could try to model the data with a well chosen line rather than one
of the two extremes previously attempted. The line on the plot below seems to
summarize the relationship between stopping distance and speed quite well. As
speed increases, the distance required to come to a stop increases. There is still
some variation about this line, but it seems to capture the overall trend.
[Figure: cars data with a well chosen line.]
With this in mind, we would like to restrict our choice of f (X) to linear functions
of X. We will write our model using β1 for the slope, and β0 for the intercept,
Y = β0 + β1 X + ϵ.
We now define what we will call the simple linear regression model,
Yi = β0 + β1 xi + ϵi
where
ϵi ∼ N (0, σ 2 ).
That is, the ϵi are independent and identically distributed (iid) normal random
variables with mean 0 and variance σ 2 . This model has three parameters to be
estimated: β0 , β1 , and σ 2 , which are fixed, but unknown constants.
We have slightly modified our notation here. We are now using Yi and xi , since
we will be fitting this model to a set of n data points, for i = 1, 2, . . . n.
Recall that we use capital Y to indicate a random variable, and lower case
y to denote a potential value of the random variable. Since we will have n
observations, we have n random variables Yi and their possible values yi .
In the simple linear regression model, the xi are assumed to be fixed, known
constants, and are thus notated with a lower case variable. The response Yi
remains a random variable because of the random behavior of the error vari-
able, ϵi . That is, each response Yi is tied to an observable xi and a random,
unobservable, ϵi .
Essentially, we could explicitly think of the Yi as having a different distribution
for each Xi . In other words, Yi has a conditional distribution dependent on the
value of Xi , written xi . Doing so, we still make no distributional assumptions of
the Xi , since we are only interested in the distribution of the Yi for a particular
value xi .
Yi | Xi ∼ N (β0 + β1 xi , σ 2 )
The random Yi are a function of xi; thus we can write the mean of Yi as a function of xi,

$$ E[Y_i \mid X_i = x_i] = \beta_0 + \beta_1 x_i. $$

Likewise, the variance of Yi does not depend on xi,

$$ \text{Var}[Y_i \mid X_i = x_i] = \sigma^2. $$
This is visually displayed in the image below. We see that for any value x, the expected value of Y is β0 + β1x. At each value of x, Y has the same variance σ².
Often, we directly talk about the assumptions that this model makes. They can
be cleverly shortened to LINE.
• Linear. The relationship between Y and x is linear, of the form β0 + β1 x.
• Independent. The errors ϵ are independent.
• Normal. The errors ϵ are normally distributed. That is, the "error" around the line follows a normal distribution.
• Equal Variance. At each value of x, the variance of Y is the same, σ².
We are also assuming that the values of x are fixed, that is, not random. We do
not make a distributional assumption about the predictor variable.
As a side note, we will often refer to simple linear regression as SLR. Some
explanation of the name SLR:
• Simple refers to the fact that we are using a single predictor variable.
Later we will use multiple predictor variables.
Figure 7.1: Simple Linear Regression Model. (Image from Introductory Statistics (Shafer and Zhang), UC Davis Stat Wiki.)
• Linear tells us that our model for Y is a linear combination of the pre-
dictors X. (In this case just the one.) Right now, this always results in a
model that is a line, but later we will see how this is not always the case.
• Regression simply means that we are attempting to measure the relation-
ship between a response variable and (one or more) predictor variables. In
the case of SLR, both the response and the predictor are numeric variables.
7.2 Least Squares Approach

So SLR models Y as a linear function of X, but how do we actually define a
good line? There are an infinite number of lines we could use, so we will attempt
to find one with “small errors.” That is a line with as many points as close to
it as possible. The questions now becomes, how do we find such a line? There
are many approaches we could take.
We could find the line that has the smallest maximum distance from any of the points to the line. That is,

$$ \underset{\beta_0, \beta_1}{\mathrm{argmin}} \max_{i} |y_i - (\beta_0 + \beta_1 x_i)|. $$

We could find the line that minimizes the sum of all the distances from the points to the line. That is,

$$ \underset{\beta_0, \beta_1}{\mathrm{argmin}} \sum_{i=1}^{n} |y_i - (\beta_0 + \beta_1 x_i)|. $$
We could find the line that minimizes the sum of all the squared distances from
the points to the line. That is,
$$ \underset{\beta_0, \beta_1}{\mathrm{argmin}} \sum_{i=1}^{n} (y_i - (\beta_0 + \beta_1 x_i))^2. $$
This last option is called the method of least squares. It is essentially the
de-facto method for fitting a line to data. (You may have even seen it before
in a linear algebra course.) Its popularity is largely due to the fact that it is
mathematically “easy.” (Which was important historically, as computers are a
modern contraption.) It is also very popular because many relationships are
well approximated by a linear function.
To apply the method of least squares, we minimize the function

$$ f(\beta_0, \beta_1) = \sum_{i=1}^{n} (y_i - (\beta_0 + \beta_1 x_i))^2 = \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2. $$

Taking partial derivatives with respect to β0 and β1 gives
∂f ∑n
= −2 (yi − β0 − β1 xi )
∂β0 i=1
∂f ∑n
= −2 (xi )(yi − β0 − β1 xi )
∂β1 i=1
We then set each of the partial derivatives equal to zero and solve the resulting
system of equations.
\[
\sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i) = 0
\]

\[
\sum_{i=1}^{n} x_i (y_i - \beta_0 - \beta_1 x_i) = 0
\]

\[
n\beta_0 + \beta_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i
\]

\[
\beta_0 \sum_{i=1}^{n} x_i + \beta_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i
\]
Solving this system yields the least squares estimates, which we will write using
some new notation. Note that the quantities defined below have alternative forms
which are much easier to work with. (We won’t prove the equalities here, but
you can try to do so on your own, for “fun.”)
We use the capital letter S to denote “summation” which replaces the capital
letter Σ when we calculate these values based on observed data, (xi , yi ). The
subscripts such as xy denote over which variables the function (z − z̄) is applied.
\[
S_{xy} = \sum_{i=1}^{n} x_i y_i - \frac{\left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)}{n} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})
\]

\[
S_{xx} = \sum_{i=1}^{n} x_i^2 - \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n} = \sum_{i=1}^{n} (x_i - \bar{x})^2
\]

\[
S_{yy} = \sum_{i=1}^{n} y_i^2 - \frac{\left(\sum_{i=1}^{n} y_i\right)^2}{n} = \sum_{i=1}^{n} (y_i - \bar{y})^2
\]
Note that these summations S are not to be confused with sample standard
deviation s.
By using the above alternative expressions for Sxy and Sxx , we arrive at a
cleaner, more useful expression for β̂1 .
\[
\hat{\beta}_1 = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2},
\]

and, from the first normal equation, the intercept estimate is β̂0 = ȳ − β̂1 x̄.
Traditionally we would now calculate β̂0 and β̂1 by hand for the cars dataset.
However because we are living in the 21st century and are intelligent (or lazy
or efficient, depending on your perspective), we will utilize R to do the number
crunching for us.
To keep some notation consistent with above mathematics, we will store the
response variable as y and the predictor variable as x.
x = cars$speed
y = cars$dist
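The slope and intercept estimates, stored below as beta_1_hat and beta_0_hat (names reused later in this chapter), can be computed directly from the summation formulas above. A minimal sketch:

Sxy = sum((x - mean(x)) * (y - mean(y)))
Sxx = sum((x - mean(x)) ^ 2)
beta_1_hat = Sxy / Sxx
beta_0_hat = mean(y) - beta_1_hat * mean(x)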
The fitted regression line is then

ŷ = β̂0 + β̂1 x.

In this case,
ŷ = −17.58 + 3.93x.
We can now use this line to make predictions. First, let’s see the possible x
values in the cars dataset. Since some x values may appear more than once,
we use the unique() function to return each unique value only once.
unique(cars$speed)
## [1] 4 7 8 9 10 11 12 13 14 15 16 17 18 19 20 22 23 24 25
Let’s make a prediction for the stopping distance of a car traveling at 8 miles
per hour.
ŷ = −17.58 + 3.93 × 8
beta_0_hat + beta_1_hat * 8
## [1] 13.88018
This tells us that the estimated mean stopping distance of a car traveling at 8
miles per hour is 13.88.
Now let’s make a prediction for the stopping distance of a car traveling at 21
miles per hour. This is considered interpolation as 21 is not an observed value
of x. (But is in the data range.) We can use the special %in% operator to quickly
verify this in R.
8 %in% unique(cars$speed)
## [1] TRUE
21 %in% unique(cars$speed)
## [1] FALSE
min(cars$speed) < 21 & 21 < max(cars$speed)
## [1] TRUE
ŷ = −17.58 + 3.93 × 21
beta_0_hat + beta_1_hat * 21
## [1] 65.00149
Lastly, we can make a prediction for the stopping distance of a car traveling at
50 miles per hour. This is considered extrapolation as 50 is not an observed
value of x and is outside the data range. We should be less confident in predictions
of this type.
range(cars$speed)
## [1] 4 25
range(cars$speed)[1] < 50 & 50 < range(cars$speed)[2]
## [1] FALSE
ŷ = −17.58 + 3.93 × 50
beta_0_hat + beta_1_hat * 50
## [1] 179.0413
Cars travel 50 miles per hour rather easily today, but not in the 1920s!
This is also an issue we saw when interpreting β̂0 = −17.58, which is equivalent
to making a prediction at x = 0. We should not be confident in the estimated
linear relationship outside of the range of data we have observed.
7.2.2 Residuals
We can think of each observed response as its fitted value plus an error,

y = ŷ + e.
We then define a residual to be the observed value minus the predicted value.
ei = yi − ŷi
Let’s calculate the residual for the prediction we made for a car traveling 8 miles
per hour. First, we need to obtain the observed value of y for this x value.
which(cars$speed == 8)
## [1] 5
cars[5, ]
## speed dist
## 5 8 16
cars[which(cars$speed == 8), ]
## speed dist
## 5 8 16
e = 16 − 13.88 = 2.12
16 - (beta_0_hat + beta_1_hat * 8)
## [1] 2.119825
The positive residual value indicates that the observed stopping distance is
actually 2.12 feet more than what was predicted.
We’ll now use the residuals for each of the points to create an estimate for the
variance, σ 2 .
Recall that,
E[Yi | Xi = xi ] = β0 + β1 xi .
So,
\[
s_e^2 = \frac{1}{n-2} \sum_{i=1}^{n} \left(y_i - (\hat{\beta}_0 + \hat{\beta}_1 x_i)\right)^2
      = \frac{1}{n-2} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
      = \frac{1}{n-2} \sum_{i=1}^{n} e_i^2
\]
This probably seems like a natural estimate, aside from the use of n − 2, which
we will put off explaining until the next chapter. It should actually look rather
similar to something we have seen before.
\[
s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2
\]
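A sketch of the calculation that produces the value below, storing the fitted values as y_hat and the number of observations as n (both names are reused later in the chapter):

y_hat = beta_0_hat + beta_1_hat * x
n = length(y)
s2_e = sum((y - y_hat) ^ 2) / (n - 2)
s2_e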
## [1] 236.5317
Just as with the univariate measure of variance, this value of 236.53 doesn’t
have a practical interpretation in terms of stopping distance. Taking the square
root, however, computes the standard deviation of the residuals, also known as
residual standard error.
s_e = sqrt(s2_e)
s_e
## [1] 15.37959
This tells us that our estimates of mean stopping distance are “typically” off by
15.38 feet.
7.3 Decomposition of Variation

We can re-express the deviation of each observation from the sample mean by
adding and subtracting the fitted value ŷi,

yi − ȳ = (yi − ŷi) + (ŷi − ȳ).

This is the common mathematical trick of “adding zero.” In this case we both
added and subtracted ŷi. Here, yi − ŷi measures the deviation of an observation
from the fitted regression line and ŷi − ȳ measures the deviation of the fitted
regression line from the sample mean.
If we square then sum both sides of the equation above, we can obtain the
following,
\[
\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2.
\]
This should be somewhat alarming or amazing. How is this true? For now we
will leave this question unanswered. (Think about this, and maybe try to prove
it.) We will now define three of the quantities seen in this equation.
\[
\text{SST} = \sum_{i=1}^{n} (y_i - \bar{y})^2
\]
The quantity “Sum of Squares Total,” or SST, represents the total variation
of the observed y values. This should be a familiar looking expression. Note
that,
\[
s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})^2 = \frac{1}{n-1} \text{SST}.
\]
\[
\text{SSReg} = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2
\]

The quantity “Sum of Squares Regression,” SSReg, represents the explained
variation of the observed y values.

\[
\text{SSE} = \text{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\]
The quantity “Sum of Squares Error,” SSE, represents the unexplained vari-
ation of the observed y values. You will often see SSE written as RSS, or
“Residual Sum of Squares.”
SST = sum((y - mean(y)) ^ 2)
SSReg = sum((y_hat - mean(y)) ^ 2)
SSE = sum((y - y_hat) ^ 2)
c(SST = SST, SSReg = SSReg, SSE = SSE)
\[
s_e^2 = \frac{\text{SSE}}{n-2}.
\]
SSE / (n - 2)
## [1] 236.5317
We can use R to verify that this matches our previous calculation of s2e .
s2_e == SSE / (n - 2)
## [1] TRUE
These three measures also do not have an important practical interpretation in-
dividually. But together, they’re about to reveal a new statistic to help measure
the strength of a SLR model.
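That statistic is R², the proportion of the observed variation explained by the regression,

R² = SSReg / SST = 1 − SSE / SST.

A sketch of the calculation that produces the value below (the variable name R2 is only for illustration):

R2 = SSReg / SST
R2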
## [1] 0.6510794
For the cars example, we calculate R2 = 0.65. We then say that 65% of the
observed variability in stopping distance is explained by the linear relationship
with speed.
The following plots visually demonstrate the three “sums of squares” for a simulated
dataset which has R² = 0.92, a somewhat high value. Notice in the final plot
that the orange arrows account for a larger proportion of the total arrow.
[Figure: four panels plotting y against x for the simulated data, illustrating the total, regression, and error sums of squares as arrows.]
The next plots again visually demonstrate the three “sums of squares,” this time
for a simulated dataset which has R2 = 0.19. Notice in the final plot, that now
the blue arrows account for a larger proportion of the total arrow.
[Figure: four panels plotting y against x for the second simulated dataset (R² = 0.19), again illustrating the three sums of squares as arrows.]
7.4 The lm Function

So far we have done regression by deriving the least squares estimates, then
writing simple R commands to perform the necessary calculations. Since this is
such a common task, this is functionality that is built directly into R via the
lm() command.
The lm() command is used to fit linear models which actually account for a
broader class of models than simple linear regression, but we will use SLR as
our first demonstration of lm(). The lm() function will be one of our most
commonly used tools, so you may want to take a look at the documentation by
using ?lm. You’ll notice there is a lot of information there, but we will start
with just the very basics. This is documentation you will want to return to
often.
We’ll continue using the cars data, and essentially use the lm() function to
check the work we had previously done.
stop_dist_model = lm(dist ~ speed, data = cars)
This line of code fits our very first linear model. The syntax should look some-
what familiar. We use the dist ~ speed syntax to tell R we would like to model
the response variable dist as a linear function of the predictor variable speed.
In general, you should think of the syntax as response ~ predictor. The data
= cars argument then tells R that the dist and speed variables are from the
dataset cars. We then store this result in a variable stop_dist_model.
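Printing the stored object, for example by simply typing its name at the console, displays a brief summary of the fitted model:

stop_dist_model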
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Coefficients:
## (Intercept) speed
## -17.579 3.932
We see that it first tells us the formula we input into R, that is lm(formula
= dist ~ speed, data = cars). We also see the coefficients of the model.
We can check that these are what we had calculated previously. (Minus some
rounding that R is doing when displaying the results. They are stored with full
precision.)
c(beta_0_hat, beta_1_hat)
Next, it would be nice to add the fitted line to the scatterplot. To do so we will
use the abline() function.
plot(dist ~ speed, data = cars,
xlab = "Speed (in Miles Per Hour)",
ylab = "Stopping Distance (in Feet)",
main = "Stopping Distance vs Speed",
pch = 20,
cex = 2,
col = "grey")
abline(stop_dist_model, lwd = 3, col = "darkorange")
7.4. THE LM FUNCTION 97
100
80
60
40
20
0
5 10 15 20 25
The abline() function is used to add lines of the form a + bx to a plot. (Hence
abline.) When we give it stop_dist_model as an argument, it automatically
extracts the regression coefficient estimates (β̂0 and β̂1 ) and uses them as the
slope and intercept of the line. Here we also use lwd to modify the width of the
line, as well as col to modify the color of the line.
The “thing” that is returned by the lm() function is actually an object of class
lm which is a list. The exact details of this are unimportant unless you are
seriously interested in the inner-workings of R, but know that we can determine
the names of the elements of the list using the names() command.
names(stop_dist_model)
For example, the estimated coefficients can be extracted with the coef() function.
coef(stop_dist_model)
## (Intercept) speed
## -17.579095 3.932409
resid(stop_dist_model)
## 1 2 3 4 5 6
## 3.849460 11.849460 -5.947766 12.052234 2.119825 -7.812584
## 7 8 9 10 11 12
## -3.744993 4.255007 12.255007 -8.677401 2.322599 -15.609810
## 13 14 15 16 17 18
## -9.609810 -5.609810 -1.609810 -7.542219 0.457781 0.457781
## 19 20 21 22 23 24
## 12.457781 -11.474628 -1.474628 22.525372 42.525372 -21.407036
## 25 26 27 28 29 30
## -15.407036 12.592964 -13.339445 -5.339445 -17.271854 -9.271854
## 31 32 33 34 35 36
## 0.728146 -11.204263 2.795737 22.795737 30.795737 -21.136672
## 37 38 39 40 41 42
## -11.136672 10.863328 -29.069080 -13.069080 -9.069080 -5.069080
## 43 44 45 46 47 48
## 2.930920 -2.933898 -18.866307 -6.798715 15.201285 16.201285
## 49 50
## 43.201285 4.268876
fitted(stop_dist_model)
## 1 2 3 4 5 6 7
## -1.849460 -1.849460 9.947766 9.947766 13.880175 17.812584 21.744993
## 8 9 10 11 12 13 14
## 21.744993 21.744993 25.677401 25.677401 29.609810 29.609810 29.609810
## 15 16 17 18 19 20 21
## 29.609810 33.542219 33.542219 33.542219 33.542219 37.474628 37.474628
## 22 23 24 25 26 27 28
## 37.474628 37.474628 41.407036 41.407036 41.407036 45.339445 45.339445
## 29 30 31 32 33 34 35
## 49.271854 49.271854 49.271854 53.204263 53.204263 53.204263 53.204263
## 36 37 38 39 40 41 42
## 57.136672 57.136672 57.136672 61.069080 61.069080 61.069080 61.069080
## 43 44 45 46 47 48 49
## 61.069080 68.933898 72.866307 76.798715 76.798715 76.798715 76.798715
## 50
## 80.731124
An R function that is useful in many situations is summary(). We see that when
it is called on our model, it returns a good deal of information. By the end
of the course, you will know what every value here is used for. For now, you
should immediately notice the coefficient estimates, and you may recognize the
R2 value we saw earlier.
summary(stop_dist_model)
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
The summary() command also returns a list, and we can again use names() to
learn about the elements of this list.
names(summary(stop_dist_model))
So, for example, if we wanted to directly access the value of R2 , instead of copy
and pasting it out of the printed statement from summary(), we could do so.
summary(stop_dist_model)$r.squared
## [1] 0.6510794
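The residual standard error is also stored in this list; a sketch of extracting it, which produces the value below (sigma is the element name R uses for this quantity):

summary(stop_dist_model)$sigma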
## [1] 15.37959
Note that this is the same result seen earlier as s_e. You may also notice that
this value was displayed above as a result of the summary() command, which R
labeled the “Residual Standard Error.”
\[
s_e = \text{RSE} = \sqrt{\frac{1}{n-2} \sum_{i=1}^{n} e_i^2}
\]
Often it is useful to talk about se (or RSE) instead of s2e because of their units.
The units of se in the cars example are feet, while the units of s2e are feet-squared.
Another useful function, which we will use almost as often as lm() is the
predict() function.
predict(stop_dist_model, newdata = data.frame(speed = 8))
## 1
## 13.88018
The above code reads “predict the stopping distance of a car traveling 8 miles
per hour using the stop_dist_model.” Importantly, the second argument to
predict() is a data frame that we make in place. We do this so that we can
specify that 8 is a value of speed, so that predict knows how to use it with
the model stored in stop_dist_model. We see that this result is what we had
calculated “by hand” previously.
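We can also obtain several predictions at once by supplying a vector of speeds. A sketch that reproduces the three predictions discussed in this chapter (8, 21, and 50 miles per hour), whose output is shown below:

predict(stop_dist_model, newdata = data.frame(speed = c(8, 21, 50)))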
## 1 2 3
## 13.88018 65.00149 179.04134
Or we could calculate the fitted value for each of the original data points. We
can simply supply the original data frame, cars, since it contains a variable
called speed which has the values we would like to predict at.
predict(stop_dist_model, newdata = cars)
## 1 2 3 4 5 6 7
## -1.849460 -1.849460 9.947766 9.947766 13.880175 17.812584 21.744993
## 8 9 10 11 12 13 14
## 21.744993 21.744993 25.677401 25.677401 29.609810 29.609810 29.609810
## 15 16 17 18 19 20 21
## 29.609810 33.542219 33.542219 33.542219 33.542219 37.474628 37.474628
## 22 23 24 25 26 27 28
## 37.474628 37.474628 41.407036 41.407036 41.407036 45.339445 45.339445
## 29 30 31 32 33 34 35
## 49.271854 49.271854 49.271854 53.204263 53.204263 53.204263 53.204263
## 36 37 38 39 40 41 42
## 57.136672 57.136672 57.136672 61.069080 61.069080 61.069080 61.069080
## 43 44 45 46 47 48 49
## 61.069080 68.933898 72.866307 76.798715 76.798715 76.798715 76.798715
## 50
## 80.731124
# predict(stop_dist_model, newdata = data.frame(speed = cars$speed))
7.5 Maximum Likelihood Estimation

An alternative to the least squares approach is to estimate β0, β1, and σ2 by
maximum likelihood. Recall the model,

Yi = β0 + β1 xi + ϵi

where ϵi ∼ N (0, σ 2 ).
Then we can find the mean and variance of each Yi .
E[Yi | Xi = xi ] = β0 + β1 xi
and
Var[Yi | Xi = xi ] = σ 2 .
Yi | Xi ∼ N (β0 + β1 xi , σ 2 )
Recall the density of a normal random variable,

\[
f_X(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right].
\]

Since Yi | Xi ∼ N(β0 + β1 xi, σ2), each yi has density

\[
f_{Y_i}(y_i; x_i, \beta_0, \beta_1, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\left(\frac{y_i - (\beta_0 + \beta_1 x_i)}{\sigma}\right)^2\right].
\]
Given n data points (xi , yi ) we can write the likelihood, which is a function of
the three parameters β0 , β1 , and σ 2 . Since the data have been observed, we use
lower case yi to denote that these values are no longer random.
\[
L(\beta_0, \beta_1, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\left(\frac{y_i - \beta_0 - \beta_1 x_i}{\sigma}\right)^2\right]
\]
Our goal is to find values of β0 , β1 , and σ 2 which maximize this function, which
is a straightforward multivariate calculus problem.
We’ll start by doing a bit of rearranging to make our task easier.
\[
L(\beta_0, \beta_1, \sigma^2) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{n} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2\right]
\]
Then, as is often the case when finding MLEs, for mathematical convenience we
will take the natural logarithm of the likelihood function since log is a monoton-
ically increasing function. Then we will proceed to maximize the log-likelihood,
and the resulting estimates will be the same as if we had not taken the log.
\[
\log L(\beta_0, \beta_1, \sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2
\]
Note that we use log to mean the natural logarithm. We now take a partial
derivative with respect to each of the parameters.
\[
\frac{\partial \log L(\beta_0, \beta_1, \sigma^2)}{\partial \beta_0} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)
\]

\[
\frac{\partial \log L(\beta_0, \beta_1, \sigma^2)}{\partial \beta_1} = \frac{1}{\sigma^2} \sum_{i=1}^{n} x_i (y_i - \beta_0 - \beta_1 x_i)
\]

\[
\frac{\partial \log L(\beta_0, \beta_1, \sigma^2)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2
\]
We then set each of the partial derivatives equal to zero and solve the resulting
system of equations.
\[
\sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i) = 0
\]

\[
\sum_{i=1}^{n} x_i (y_i - \beta_0 - \beta_1 x_i) = 0
\]

\[
-\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2 = 0
\]
You may notice that the first two equations also appear in the least squares
approach. Then, skipping the issue of actually checking if we have found a max-
imum, we then arrive at our estimates. We call these estimates the maximum
likelihood estimates.
\[
\hat{\beta}_1 = \frac{\sum_{i=1}^{n} x_i y_i - \frac{\left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)}{n}}{\sum_{i=1}^{n} x_i^2 - \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n}} = \frac{S_{xy}}{S_{xx}}
\]

\[
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}
\]

\[
\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
\]
Note that β̂0 and β̂1 are the same as the least squares estimates. However
we now have a new estimate of σ 2 , that is σ̂ 2 . So we now have two different
estimates of σ 2 .
\[
s_e^2 = \frac{1}{n-2} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \frac{1}{n-2} \sum_{i=1}^{n} e_i^2 \qquad \text{(Least Squares)}
\]

\[
\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \frac{1}{n} \sum_{i=1}^{n} e_i^2 \qquad \text{(MLE)}
\]
In the next chapter, we will discuss in detail the difference between these two
estimates, which involves the notion of bias.
7.6 Simulating SLR

We will now simulate observations from an SLR model with known parameters,
so that we can check how well the method of least squares recovers them. We
simulate from

Y = 5 − 2x + ϵ,

where ϵ ∼ N(0, σ2 = 9). Equivalently,

Y | X ∼ N (µ = 5 − 2x, σ 2 = 9).
Now, since the xi values in SLR are considered fixed and known, we simply
specify 20 values. Another common practice is to generate them from a uniform
distribution, and then use them for the remainder of the analysis.
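A sketch of storing these choices, using the true parameter values stated above (β0 = 5, β1 = −2, and σ = 3 so that σ2 = 9) and 20 observations; the names num_obs, beta_0, and beta_1 are reused below:

num_obs = 20
beta_0 = 5
beta_1 = -2
sigma = 3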
x_vals = seq(from = 0, to = 10, length.out = num_obs)
# set.seed(1)
# x_vals = runif(num_obs, 0, 10)
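The responses are then simulated according to the model. A minimal sketch (epsilon is an illustrative name; the particular values shown later depend on the random seed the author used):

epsilon = rnorm(n = num_obs, mean = 0, sd = sigma)
y_vals = beta_0 + beta_1 * x_vals + epsilon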
The data, (xi , yi ), represent a possible sample from the true distribution. Now
to check how well the method of least squares works, we use lm() to fit the
model to our simulated data, then take a look at the estimated coefficients.
sim_fit = lm(y_vals ~ x_vals)
coef(sim_fit)
## (Intercept) x_vals
## 4.832639 -1.831401
And look at that, they aren’t too far from the true parameters we specified!
plot(y_vals ~ x_vals)
abline(sim_fit)
[Figure: scatterplot of y_vals against x_vals with the fitted least squares line.]
We should say here, that we’re being sort of lazy, and not the good kind of
lazy that could be considered efficient. Any time you simulate data, you should
consider doing two things: writing a function, and storing the data in a data
frame.
The function below, sim_slr(), can be used for the same task as above, but
is much more flexible. Notice that we provide x to the function, instead of
generating x inside the function. In the SLR model, the xi are considered known
values. That is, they are not random, so we do not assume a distribution for
the xi . Because of this, we will repeatedly use the same x values across all
simulations.
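A minimal sketch of such a function is given here; the default argument values are assumptions, but the returned data frame uses the column names predictor and response that appear in the output below. The particular sim_data values shown below come from the author's simulated data, so they depend on the seed used.

sim_slr = function(x, beta_0 = 10, beta_1 = 5, sigma = 1) {
  n = length(x)
  epsilon = rnorm(n, mean = 0, sd = sigma)
  y = beta_0 + beta_1 * x + epsilon
  data.frame(predictor = x, response = y)
}

sim_data = sim_slr(x = x_vals, beta_0 = 5, beta_1 = -2, sigma = 3)
head(sim_data)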
## predictor response
## 1 0.0 3.1206386
## 2 0.5 4.5509300
## 3 1.0 0.4931142
## 4 1.5 6.7858424
## 5 2.0 1.9885233
## 6 2.5 -2.4614052
Now when we fit the model with lm() we can use a data argument, a very good
practice.
sim_fit = lm(response ~ predictor, data = sim_data)
coef(sim_fit)
## (Intercept) predictor
## 4.832639 -1.831401
And this time, we’ll make the plot look a lot nicer.
plot(response ~ predictor, data = sim_data,
xlab = "Simulated Predictor Variable",
ylab = "Simulated Response Variable",
main = "Simulated Regression Data",
pch = 20,
cex = 2,
col = "grey")
abline(sim_fit, lwd = 3, lty = 1, col = "darkorange")
abline(beta_0, beta_1, lwd = 3, lty = 2, col = "dodgerblue")
legend("topright", c("Estimate", "Truth"), lty = c(1, 2), lwd = 2,
col = c("darkorange", "dodgerblue"))
[Figure: “Simulated Regression Data” — simulated response versus predictor with the estimated line (“Estimate”) and the true line (“Truth”).]
7.7 History
For some brief background on the history of linear regression, see “Galton, Pear-
son, and the Peas: A Brief History of Linear Regression for Statistics Instruc-
tors” from the Journal of Statistics Education as well as the Wikipedia page
on the history of regression analysis and lastly the article for regression to the
mean which details the origins of the term “regression.”
7.8 R Markdown
The R Markdown file for this chapter can be found here:
• slr.Rmd
The file was created using R version 3.6.1.
Chapter 8

Inference for Simple Linear Regression
“There are three types of lies: lies, damn lies, and statistics.”
— Benjamin Disraeli
After reading this chapter you will be able to:
• Understand the distributions of regression estimates.
• Create interval estimates for regression parameters, mean response, and
predictions.
• Test for significance of regression.
Last chapter we defined the simple linear regression model,
Yi = β0 + β1 xi + ϵi
The least squares estimates were obtained by minimizing

\[
f(\beta_0, \beta_1) = \sum_{i=1}^{n} (y_i - (\beta_0 + \beta_1 x_i))^2,
\]

which gave

\[
\hat{\beta}_1 = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}.
\]
\[
s_e = \text{RSE} = \sqrt{\frac{1}{n-2} \sum_{i=1}^{n} e_i^2}
\]
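As a reminder, here is the summary of the model fit to the cars data in the previous chapter:

summary(stop_dist_model)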
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
Last chapter, we only discussed the Estimate, Residual standard error, and
Multiple R-squared values. In this chapter, we will discuss all of the informa-
tion under Coefficients as well as F-statistic.
plot(dist ~ speed, data = cars,
xlab = "Speed (in Miles Per Hour)",
ylab = "Stopping Distance (in Feet)",
main = "Stopping Distance vs Speed",
pch = 20,
cex = 2,
col = "grey")
abline(stop_dist_model, lwd = 5, col = "darkorange")
[Figure: “Stopping Distance vs Speed” scatterplot with the fitted regression line.]
To get started, we’ll note that there is another equivalent expression for Sxy
which we did not see last chapter,
\[
S_{xy} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) = \sum_{i=1}^{n} (x_i - \bar{x}) y_i.
\]
This may be a surprising equivalence. (Maybe try to prove it.) However, it will
be useful for illustrating concepts in this chapter.
Note that, β̂1 is a sample statistic when calculated with observed data as
written above, as is β̂0 .
However, in this chapter it will often be convenient to use both β̂1 and β̂0 as
random variables, that is, we have not yet observed the values for each Yi .
When this is the case, we will use a slightly different notation, substituting in
capital Yi for lower case yi .
\[
\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x}) Y_i}{\sum_{i=1}^{n} (x_i - \bar{x})^2}
\]

\[
\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{x}
\]
Linear
Recall, in the SLR setup that the xi values are considered fixed and known
quantities. Then a linear estimate is one which can be written as a linear
combination of the Yi . In the case of β̂1 we see
\[
\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x}) Y_i}{\sum_{i=1}^{n} (x_i - \bar{x})^2} = \sum_{i=1}^{n} k_i Y_i = k_1 Y_1 + k_2 Y_2 + \cdots + k_n Y_n
\]

where

\[
k_i = \frac{x_i - \bar{x}}{\sum_{i=1}^{n} (x_i - \bar{x})^2}.
\]
In a similar fashion, we could show that β̂0 can be written as a linear combination
of the Yi . Thus both β̂0 and β̂1 are linear estimators.
Unbiased
Now that we know our estimates are linear, how good are these estimates? One
measure of the “goodness” of an estimate is its bias. Specifically, we prefer
estimates that are unbiased, meaning their expected value is the parameter
being estimated.
In the case of the regression estimates, we have,
E[β̂0 ] = β0
E[β̂1 ] = β1 .
This tells us that, when the conditions of the SLR model are met, on average
our estimates will be correct. However, as we saw last chapter when simulating
from the SLR model, that does not mean that each individual estimate will be
correct. Only that, if we repeated the process an infinite number of times, on
average the estimate would be correct.
Best

Finally, among linear and unbiased estimates, we prefer the one with the smallest
variance. Note that a small variance alone is not enough. For example, consider
the constant estimator

θ̂BAD = 5.

Since it never changes, Var[θ̂BAD ] = 0. However since,

E[θ̂BAD ] = 5,

it is biased unless the parameter happens to equal 5, so a small variance by itself
does not make an estimator good. The variances of β̂0 and β̂1, given below with
their sampling distributions, quantify the variability of the estimates due to
random chance during sampling. Are these “the best”? Are these variances as
small as we can possibly get? You’ll just have to take our word for it that they
are, because showing that this is true is beyond the scope of this course.
Now that we have “redefined” the estimates for β̂0 and β̂1 as random variables,
we can discuss their sampling distribution, which is the distribution when a
statistic is considered a random variable.
Since both β̂0 and β̂1 are a linear combination of the Yi and each Yi is normally
distributed, then both β̂0 and β̂1 also follow a normal distribution.
Then, putting all of the above together, we arrive at the distributions of β̂0 and
β̂1 .
For β̂1 we say,
\[
\hat{\beta}_1 = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n} (x_i - \bar{x}) Y_i}{\sum_{i=1}^{n} (x_i - \bar{x})^2} \sim N\left(\beta_1, \frac{\sigma^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2}\right).
\]
Or more succinctly,
\[
\hat{\beta}_1 \sim N\left(\beta_1, \frac{\sigma^2}{S_{xx}}\right).
\]
\[
\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{x} \sim N\left(\beta_0, \frac{\sigma^2 \sum_{i=1}^{n} x_i^2}{n \sum_{i=1}^{n} (x_i - \bar{x})^2}\right).
\]
Or more succinctly,
\[
\hat{\beta}_0 \sim N\left(\beta_0, \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right)\right)
\]
To verify the above results, we will simulate samples of size n = 100 from the
model
Yi = β0 + β1 xi + ϵi

where ϵi ∼ N (0, σ 2 ), using the true parameter values:

• β0 = 3
• β1 = 6
• σ2 = 4
Then, based on the above, we should find that
\[
\hat{\beta}_1 \sim N\left(\beta_1, \frac{\sigma^2}{S_{xx}}\right)
\]

and

\[
\hat{\beta}_0 \sim N\left(\beta_0, \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right)\right).
\]
First we need to decide ahead of time what our x values will be for this sim-
ulation, since the x values in SLR are also considered known quantities. The
choice of x values is arbitrary. Here we also set a seed for randomization, and
calculate Sxx which we will need going forward.
set.seed(42)
sample_size = 100 # this is n
x = seq(-1, 1, length = sample_size)
Sxx = sum((x - mean(x)) ^ 2)
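We also store the true parameter values and compute the theoretical variances derived above; a sketch (sigma = 2 so that σ2 = 4), the last line of which produces the value shown below:

beta_0 = 3
beta_1 = 6
sigma = 2
(var_beta_1_hat = sigma ^ 2 / Sxx)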
## [1] 0.1176238
(var_beta_0_hat = sigma ^ 2 * (1 / sample_size + mean(x) ^ 2 / Sxx))
## [1] 0.04
That is,

E[β̂1 ] = 6
Var[β̂1 ] = 0.1176238

and

E[β̂0 ] = 3
Var[β̂0 ] = 0.04.
We now simulate data from this model 10,000 times. Note this may not be the
most R way of doing the simulation. We perform the simulation in this manner
in an attempt at clarity. For example, we could have used the sim_slr()
function from the previous chapter. We also simply store variables in the global
environment instead of creating a data frame for each new simulated dataset.
num_samples = 10000
beta_0_hats = rep(0, num_samples)
beta_1_hats = rep(0, num_samples)
for (i in 1:num_samples) {
eps = rnorm(sample_size, mean = 0, sd = sigma)
y = beta_0 + beta_1 * x + eps
sim_model = lm(y ~ x)
beta_0_hats[i] = coef(sim_model)[1]
beta_1_hats[i] = coef(sim_model)[2]
}
Each time we simulated the data, we obtained values of the estimated coefficients.
The variables beta_0_hats and beta_1_hats now store 10,000 simulated values
of β̂0 and β̂1 respectively.
We first verify the distribution of β̂1 .
mean(beta_1_hats) # empirical mean
## [1] 6.001998
beta_1 # true mean
## [1] 6
var(beta_1_hats) # empirical variance
## [1] 0.11899
var_beta_1_hat # true variance
## [1] 0.1176238
We see that the empirical and true means and variances are very similar. We also
verify that the empirical distribution is normal. To do so, we plot a histogram
of the beta_1_hats, and add the curve for the true distribution of β̂1 . We use
prob = TRUE to put the histogram on the same scale as the normal curve.
# note need to use prob = TRUE
hist(beta_1_hats, prob = TRUE, breaks = 20,
xlab = expression(hat(beta)[1]), main = "", border = "dodgerblue")
curve(dnorm(x, mean = beta_1, sd = sqrt(var_beta_1_hat)),
col = "darkorange", add = TRUE, lwd = 3)
[Figure: histogram of the simulated values of β̂1 with the true normal density curve overlaid.]
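We verify the distribution of β̂0 in the same way, starting with its empirical mean; a sketch of the call producing the value below:

mean(beta_0_hats) # empirical mean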
## [1] 3.001147
beta_0 # true mean
## [1] 3
var(beta_0_hats) # empirical variance
## [1] 0.04017924
var_beta_0_hat # true variance
## [1] 0.04
hist(beta_0_hats, prob = TRUE, breaks = 25,
xlab = expression(hat(beta)[0]), main = "", border = "dodgerblue")
[Figure: histogram of the simulated values of β̂0.]
[Figure: empirical mean of β̂1 plotted against the number of simulations, settling near the true value 6.]
par(mar = c(5, 5, 1, 1)) # adjusted plot margins, otherwise the "hat" does not display
plot(cumsum(beta_0_hats) / (1:length(beta_0_hats)), type = "l", ylim = c(2.95, 3.05),
xlab = "Number of Simulations",
ylab = expression("Empirical Mean of " ~ hat(beta)[0]),
col = "dodgerblue")
abline(h = 3, col = "darkorange", lwd = 2)
[Figure: empirical mean of β̂0 plotted against the number of simulations, settling near the true value 3.]
Recall that

\[
\hat{\beta}_0 \sim N\left(\beta_0, \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right)\right)
\quad \text{and} \quad
\hat{\beta}_1 \sim N\left(\beta_1, \frac{\sigma^2}{S_{xx}}\right).
\]
\[
\frac{\hat{\beta}_0 - \beta_0}{\text{SD}[\hat{\beta}_0]} \sim N(0, 1)
\]

and

\[
\frac{\hat{\beta}_1 - \beta_1}{\text{SD}[\hat{\beta}_1]} \sim N(0, 1)
\]
where
\[
\text{SD}[\hat{\beta}_0] = \sigma\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}
\]

and

\[
\text{SD}[\hat{\beta}_1] = \frac{\sigma}{\sqrt{S_{xx}}}.
\]
Since σ is unknown, we estimate it with se, which gives the standard errors

\[
\text{SE}[\hat{\beta}_0] = s_e\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}
\qquad \text{and} \qquad
\text{SE}[\hat{\beta}_1] = \frac{s_e}{\sqrt{S_{xx}}}.
\]
Replacing the true standard deviations with these estimated standard errors changes
the distribution from normal to t with n − 2 degrees of freedom:

\[
\frac{\hat{\beta}_0 - \beta_0}{\text{SE}[\hat{\beta}_0]} \sim t_{n-2}
\qquad \text{and} \qquad
\frac{\hat{\beta}_1 - \beta_1}{\text{SE}[\hat{\beta}_1]} \sim t_{n-2}.
\]
This is a consequence of the fact that

\[
\frac{\text{RSS}}{\sigma^2} = \frac{(n-2)s_e^2}{\sigma^2} \sim \chi^2_{n-2},
\]

together with the result that if Z ∼ N(0, 1) and χ²d is a chi-square random variable
with d degrees of freedom, independent of Z, then

\[
T = \frac{Z}{\sqrt{\chi^2_d / d}}
\]

follows a t distribution with d degrees of freedom. We write,

T ∼ td.
\[
\frac{\hat{\beta}_1 - \beta_1}{\text{SE}[\hat{\beta}_1]}
= \frac{\hat{\beta}_1 - \beta_1}{s_e/\sqrt{S_{xx}}}
= \frac{\hat{\beta}_1 - \beta_1}{s_e/\sqrt{S_{xx}}} \cdot \frac{\sigma/\sqrt{S_{xx}}}{\sigma/\sqrt{S_{xx}}}
= \frac{\hat{\beta}_1 - \beta_1}{\sigma/\sqrt{S_{xx}}} \cdot \frac{\sigma/\sqrt{S_{xx}}}{s_e/\sqrt{S_{xx}}}
\]

\[
= \frac{\hat{\beta}_1 - \beta_1}{\sigma/\sqrt{S_{xx}}} \bigg/ \sqrt{\frac{s_e^2}{\sigma^2}}
= \frac{\hat{\beta}_1 - \beta_1}{\text{SD}[\hat{\beta}_1]} \bigg/ \sqrt{\frac{(n-2)s_e^2}{\sigma^2} \cdot \frac{1}{n-2}}
\sim \frac{Z}{\sqrt{\chi^2_{n-2}/(n-2)}} \sim t_{n-2}
\]
[Figure: “Normal vs t Distributions” — density curves for the standard normal and t distributions with df = 1 and df = 10, with a legend titled “Distributions.”]
8.4 Confidence Intervals for Slope and Intercept

Recall that confidence intervals take the general form

EST ± CRIT · SE

or

EST ± MARGIN

where EST is an estimate for the parameter of interest, SE is the standard error
of the estimate, and MARGIN = CRIT · SE.
Then, for β0 and β1 we can create confidence intervals using
\[
\hat{\beta}_0 \pm t_{\alpha/2, n-2} \cdot \text{SE}[\hat{\beta}_0], \quad \text{that is,} \quad \hat{\beta}_0 \pm t_{\alpha/2, n-2} \cdot s_e\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}
\]

and

\[
\hat{\beta}_1 \pm t_{\alpha/2, n-2} \cdot \text{SE}[\hat{\beta}_1], \quad \text{that is,} \quad \hat{\beta}_1 \pm t_{\alpha/2, n-2} \cdot \frac{s_e}{\sqrt{S_{xx}}}
\]
where tα/2,n−2 is the critical value such that P (tn−2 > tα/2,n−2 ) = α/2.
Hypothesis tests also follow a general form. The test statistic is

TS = (EST − HYP) / SE
where EST is an estimate for the parameter of interest, HYP is a hypothesized
value of the parameter, and SE is the standard error of the estimate.
So, to test

H0 : β0 = β00 vs H1 : β0 ≠ β00

we use the test statistic

TS = (β̂0 − β00) / SE[β̂0],

which, under the null hypothesis, follows a t distribution with n − 2 degrees of
freedom, where β00 denotes the hypothesized value of β0. Similarly, to test

H0 : β1 = β10 vs H1 : β1 ≠ β10

we use the test statistic

TS = (β̂1 − β10) / SE[β̂1],

which again, under the null hypothesis, follows a t distribution with n − 2 degrees
of freedom. We now use β10 to denote the hypothesized value of β1.
8.6 Cars Example

We now return to the cars example, and in particular the output of
summary(stop_dist_model) from before.
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
8.6.1 Tests in R
We will now discuss the results displayed under Coefficients. First recall that
we can extract this information directly.
names(summary(stop_dist_model))
In the previous chapter we obtained the estimates

β̂0 = −17.5790949 and β̂1 = 3.9324088.

Focusing on the row labeled speed: the first value, Estimate, is the slope estimate,
β̂1 = 3.9324088. The second value, Std. Error, is the standard error of that
estimate,

\[
\text{SE}[\hat{\beta}_1] = \frac{s_e}{\sqrt{S_{xx}}} = 0.4155128.
\]
The third value, t value, is the value of the test statistic for testing H0 : β1 = 0
vs H1 : β1 ̸= 0,
\[
t = \frac{\hat{\beta}_1 - 0}{\text{SE}[\hat{\beta}_1]} = \frac{\hat{\beta}_1 - 0}{s_e/\sqrt{S_{xx}}} = 9.46399.
\]
We can then verify some equivalent expressions: the t test statistic for β̂1 and
the two-sided p-value associated with that test statistic.
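The quantities beta_1_hat_se, beta_1_hat_t, and beta_1_hat_pval used below can be pulled from the coefficients table of the summary; a sketch (test_stats is only an illustrative name):

test_stats = summary(stop_dist_model)$coefficients
beta_1_hat_se = test_stats["speed", "Std. Error"]
beta_1_hat_t = test_stats["speed", "t value"]
beta_1_hat_pval = test_stats["speed", "Pr(>|t|)"]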
(beta_1_hat - 0) / beta_1_hat_se
## [1] 9.46399
beta_1_hat_t
## [1] 9.46399
2 * pt(abs(beta_1_hat_t), df = length(resid(stop_dist_model)) - 2, lower.tail = FALSE)
## [1] 1.489836e-12
beta_1_hat_pval
## [1] 1.489836e-12
H0 : β 1 = 0 vs H1 : β1 ̸= 0
is used most often. Let’s think about this test in terms of the simple linear
regression model,
Yi = β0 + β1 xi + ϵi .
If we assume the null hypothesis is true, then β1 = 0 and we have the model,
Yi = β0 + ϵi .
In this model, the response does not depend on the predictor. So then we could
think of this test in the following way,
With this extremely low p-value, we would reject the null hypothesis at any rea-
sonable α level, say for example α = 0.01. So we say there is a significant linear
relationship between speed and stopping distance. Notice that we emphasize
linear.
Using R we can very easily obtain the confidence intervals for β0 and β1 .
confint(stop_dist_model, level = 0.99)
## 0.5 % 99.5 %
## (Intercept) -35.706610 0.5484205
## speed 2.817919 5.0468988
This automatically calculates 99% confidence intervals for both β0 and β1 , the
first row for β0 , the second row for β1 .
For the cars example when interpreting these intervals, we say, we are 99%
confident that for an increase in speed of 1 mile per hour, the average increase in
stopping distance is between 2.8179187 and 5.0468988 feet, which is the interval
for β1 .
Note that this 99% confidence interval does not contain the hypothesized value
of 0. Since it does not contain 0, it is equivalent to rejecting the test of H0 :
β1 = 0 vs H1 : β1 ̸= 0 at α = 0.01, which we had seen previously.
You should be somewhat suspicious of the confidence interval for β0 , as it covers
negative values, which correspond to negative stopping distances. Technically
the interpretation would be that we are 99% confident that the average stopping
distance of a car traveling 0 miles per hour is between -35.7066103 and 0.5484205
feet, but we don’t really believe that, since we are actually certain that it would
be non-negative.
Note, we can extract specific values from this output a number of ways. This
code is not run, and instead, you should check how it relates to the output of
the code above.
confint(stop_dist_model, level = 0.99)[1,]
confint(stop_dist_model, level = 0.99)[1, 1]
confint(stop_dist_model, level = 0.99)[1, 2]
confint(stop_dist_model, parm = "(Intercept)", level = 0.99)
confint(stop_dist_model, level = 0.99)[2,]
confint(stop_dist_model, level = 0.99)[2, 1]
confint(stop_dist_model, level = 0.99)[2, 2]
confint(stop_dist_model, parm = "speed", level = 0.99)
We can also verify the calculations that R is performing for the β1 interval.
# store estimate
beta_1_hat = coef(stop_dist_model)[2]
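A sketch of the remaining steps, using the standard error from the summary and the appropriate t critical value, which reproduces the interval shown below:

# store standard error and critical value, then form the 99% interval
beta_1_hat_se = summary(stop_dist_model)$coefficients["speed", "Std. Error"]
crit = qt(0.995, df = length(resid(stop_dist_model)) - 2)
c(beta_1_hat - crit * beta_1_hat_se, beta_1_hat + crit * beta_1_hat_se)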
## speed speed
## 2.817919 5.046899
In this situation we use ŷ(x) as our estimate of E[Y | X = x]. We modify our
notation slightly to make it clear that the predicted value is a function of the x
value.
Recall that,

E[Y | X = x] = β0 + β1 x.

Our estimate ŷ(x) = β̂0 + β̂1 x is unbiased for this mean, that is,

E[ŷ(x)] = β0 + β1 x.
Like the other estimates we have seen, ŷ(x) also follows a normal distribution.
Since β̂0 and β̂1 are linear combinations of normal random variables, ŷ(x) is as
well.
\[
\hat{y}(x) \sim N\left(\beta_0 + \beta_1 x, \sigma^2\left(\frac{1}{n} + \frac{(x - \bar{x})^2}{S_{xx}}\right)\right)
\]
And lastly, since we need to estimate this variance, we arrive at the standard
error of our estimate,
\[
\text{SE}[\hat{y}(x)] = s_e\sqrt{\frac{1}{n} + \frac{(x - \bar{x})^2}{S_{xx}}}.
\]
We can then use this to find the confidence interval for the mean response,
\[
\hat{y}(x) \pm t_{\alpha/2, n-2} \cdot s_e\sqrt{\frac{1}{n} + \frac{(x - \bar{x})^2}{S_{xx}}}
\]
To find confidence intervals for the mean response using R, we use the predict()
function. We give the function our fitted model as well as new data, stored as
a data frame. (This is important, so that R knows the name of the predictor
variable.) Here, we are finding the confidence interval for the mean stopping
distance when a car is travelling 5 miles per hour and when a car is travelling
21 miles per hour.
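A sketch of that calculation (the speeds 5 and 21 are those described above; the confidence level is an assumption):

predict(stop_dist_model, newdata = data.frame(speed = c(5, 21)),
        interval = "confidence", level = 0.99)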
A prediction interval for a new observation at x must also account for the extra
variability of a single observation around its mean, which adds a 1 inside the
square root:

\[
\hat{y}(x) \pm t_{\alpha/2, n-2} \cdot s_e\sqrt{1 + \frac{1}{n} + \frac{(x - \bar{x})^2}{S_{xx}}}.
\]
To calculate this for a set of points in R notice there is only a minor change in
syntax from finding a confidence interval for the mean response.
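A sketch of the corresponding call for prediction intervals, again at 5 and 21 miles per hour (the level is an assumption):

predict(stop_dist_model, newdata = data.frame(speed = c(5, 21)),
        interval = "prediction", level = 0.99)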
Also notice that these two intervals are wider than the corresponding confidence
intervals for the mean response.
8.9 Confidence and Prediction Bands

Often we would like to plot both confidence intervals for the mean response and
prediction intervals for all possible values of x. We call these confidence and
prediction bands.
speed_grid = seq(min(cars$speed), max(cars$speed), by = 0.01)
dist_ci_band = predict(stop_dist_model,
newdata = data.frame(speed = speed_grid),
interval = "confidence", level = 0.99)
dist_pi_band = predict(stop_dist_model,
newdata = data.frame(speed = speed_grid),
interval = "prediction", level = 0.99)
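A sketch of plotting code that produces a figure like the one below; the colors, line types, and axis limits are assumptions chosen to match the description that follows (dashed confidence bands, dotted prediction bands, and a "+" at (x̄, ȳ)):

plot(dist ~ speed, data = cars,
     xlab = "Speed (in Miles Per Hour)",
     ylab = "Stopping Distance (in Feet)",
     main = "Stopping Distance vs Speed",
     pch = 20, cex = 2, col = "grey",
     ylim = c(min(dist_pi_band), max(dist_pi_band)))
abline(stop_dist_model, lwd = 5, col = "darkorange")
lines(speed_grid, dist_ci_band[, "lwr"], col = "dodgerblue", lwd = 3, lty = 2)
lines(speed_grid, dist_ci_band[, "upr"], col = "dodgerblue", lwd = 3, lty = 2)
lines(speed_grid, dist_pi_band[, "lwr"], col = "dodgerblue", lwd = 3, lty = 3)
lines(speed_grid, dist_pi_band[, "upr"], col = "dodgerblue", lwd = 3, lty = 3)
points(mean(cars$speed), mean(cars$dist), pch = "+", cex = 3)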
[Figure: scatterplot of stopping distance versus speed with the fitted line, confidence bands (dashed), prediction bands (dotted), and the point (x̄, ȳ) marked with a plus sign.]
• We use the ylim argument to stretch the y-axis of the plot, since the bands
extend further than the points.
• We add a point at the point (x̄, ȳ).
– This is a point that the regression line will always pass through.
(Think about why.)
– This is the point where both the confidence and prediction bands are
the narrowest. Look at the standard errors of both to understand
why.
• The prediction bands (dotted blue) are less curved than the confidence
bands (dashed blue). This is a result of the extra factor of σ 2 added to
the variance at any value of x.
8.10 Significance of Regression, F-test

In the case of simple linear regression, the t test for the significance of the
regression is equivalent to another test, the F test for the significance of the
regression. This equivalence will only be true for simple linear regression, and in
the next chapter we will only use the F test for the significance of the regression.
Recall from last chapter the decomposition of variance we saw before calculating
R2 ,
\[
\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2,
\]
or, in short,

SST = SSE + SSReg.

To develop the F test for the significance of the regression, we arrange this
decomposition into an ANOVA table.

Source       Sum of Squares      Degrees of Freedom   Mean Square     F
Regression   Σ (ŷi − ȳ)²         1                    SSReg / 1       MSReg / MSE
Error        Σ (yi − ŷi)²        n − 2                SSE / (n − 2)
Total        Σ (yi − ȳ)²         n − 1
The F statistic for this test is

\[
F = \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 / 1}{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 / (n-2)} \sim F_{1, n-2}.
\]
It tests

H0 : β1 = 0,  under which the model is Yi = β0 + ϵi
H1 : β1 ≠ 0,  under which the model is Yi = β0 + β1 xi + ϵi

using the statistic above, which follows an F distribution with 1 and n − 2 degrees
of freedom under the null hypothesis.
In particular, we will reject the null when the F statistic is large, that is, when
there is a low probability that the observations could have come from the null
model by chance. We will let R calculate the p-value for us.
To perform the F test in R you can look at the last row of the output from
summary() called F-statistic which gives the value of the test statistic, the
relevant degrees of freedom, as well as the p-value of the test.
summary(stop_dist_model)
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
Additionally, you can use the anova() function to display the information in an
ANOVA table.
anova(stop_dist_model)
8.11 R Markdown
The R Markdown file for this chapter can be found here:
• slr-inf.Rmd
The file was created using R version 3.6.1.
Chapter 9

Multiple Linear Regression

So far we have only modeled a response using a single predictor, with the simple
linear regression model

Yi = β0 + β1 xi + ϵi

where ϵi ∼ N (0, σ 2 ).
However, it is rarely the case that a dataset will have a single predictor variable.
It is also rarely the case that a response variable will only depend on a single
variable. So in this chapter, we will extend our current linear model to allow a
response to depend on multiple predictors.
# read the data from the web
autompg = read.table(
"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data",
quote = "\"",
comment.char = "",
stringsAsFactors = FALSE)
# give the dataframe headers
colnames(autompg) = c("mpg", "cyl", "disp", "hp", "wt", "acc", "year", "origin", "name")
# remove missing data, which is stored as "?"
autompg = subset(autompg, autompg$hp != "?")
# remove the plymouth reliant, as it causes some issues
autompg = subset(autompg, autompg$name != "plymouth reliant")
# give the dataset row names, based on the engine, year and name
rownames(autompg) = paste(autompg$cyl, "cylinder", autompg$year, autompg$name)
# remove the variable for name, as well as origin
autompg = subset(autompg, select = c("mpg", "cyl", "disp", "hp", "wt", "acc", "year"))
# change horsepower from character to numeric
autompg$hp = as.numeric(autompg$hp)
# check final structure of data
str(autompg)
If we fit the two-predictor model

Yi = β0 + β1 xi1 + β2 xi2 + ϵi ,  i = 1, 2, . . . , n

to the data, we will fit a plane. (We’ll soon move to higher dimensions, so this
will be the last example that is easy to visualize and think about this way.)
[Figure: three-dimensional scatterplot of mpg against wt and year for the autompg data.]
How do we find such a plane? Well, we would like a plane that is as close as
possible to the data points. That is, we would like it to minimize the errors it
is making. How will we define these errors? Squared distance of course! So, we
would like to minimize
\[
f(\beta_0, \beta_1, \beta_2) = \sum_{i=1}^{n} \left(y_i - (\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2})\right)^2
\]
To do so, we set each of the partial derivatives equal to zero,

\[
\frac{\partial f}{\partial \beta_0} = 0, \qquad \frac{\partial f}{\partial \beta_1} = 0, \qquad \frac{\partial f}{\partial \beta_2} = 0.
\]
After doing so, we will once again obtain the normal equations.
\[
n\beta_0 + \beta_1 \sum_{i=1}^{n} x_{i1} + \beta_2 \sum_{i=1}^{n} x_{i2} = \sum_{i=1}^{n} y_i
\]

\[
\beta_0 \sum_{i=1}^{n} x_{i1} + \beta_1 \sum_{i=1}^{n} x_{i1}^2 + \beta_2 \sum_{i=1}^{n} x_{i1} x_{i2} = \sum_{i=1}^{n} x_{i1} y_i
\]

\[
\beta_0 \sum_{i=1}^{n} x_{i2} + \beta_1 \sum_{i=1}^{n} x_{i1} x_{i2} + \beta_2 \sum_{i=1}^{n} x_{i2}^2 = \sum_{i=1}^{n} x_{i2} y_i
\]
We now have three equations and three variables, which we could solve, or we
could simply let R solve for us.
mpg_model = lm(mpg ~ wt + year, data = autompg)
coef(mpg_model)
## (Intercept) wt year
## -14.637641945 -0.006634876 0.761401955
Here we have once again fit our model using lm(), however we have introduced
a new syntactical element. The formula mpg ~ wt + year now reads: “model
the response variable mpg as a linear function of wt and year”. That is, it will
estimate an intercept, as well as slope coefficients for wt and year. We then
extract these as we have done before using coef().
In the multiple linear regression setting, some of the interpretations of the coef-
ficients change slightly.
Here, β̂0 = −14.6376419 is our estimate for β0 , the mean miles per gallon for
a car that weighs 0 pounds and was built in 1900. We see our estimate here is
negative, which is a physical impossibility. However, this isn’t unexpected, as
we shouldn’t expect our model to be accurate for cars from 1900 which weigh 0
pounds. (Because they never existed!) This isn’t much of a change from SLR.
That is, β0 is still simply the mean when all of the predictors are 0.
The interpretation of the coefficients in front of our predictors are slightly dif-
ferent than before. For example β̂1 = −0.0066349 is our estimate for β1 , the
average change in miles per gallon for an increase in weight (x1 ) of one-pound
for a car of a certain model year, that is, for a fixed value of x2 . Note that
this coefficient is actually the same for any given value of x2 . Later, we will
look at models that allow for a different change in mean response for different
values of x2 . Also note that this estimate is negative, which we would expect
since, in general, fuel efficiency decreases for larger vehicles. Recall that in the
multiple linear regression setting, this interpretation is dependent on a fixed
value for x2 , that is, “for a car of a certain model year.” It is possible that the
indirect relationship between fuel efficiency and weight does not hold when an
additional factor, say year, is included, and thus we could have the sign of our
coefficient flipped.
Lastly, β̂2 = 0.761402 is our estimate for β2 , the average change in miles per
gallon for a one-year increase in model year (x2 ) for a car of a certain weight, that
is, for a fixed value of x1 . It is not surprising that the estimate is positive. We
expect that as time passes and the years march on, technology would improve
so that a car of a specific weight would get better mileage now as compared to
their predecessors. And yet, the coefficient could have been negative because
we are also including weight as variable, and not strictly as a fixed value.
9.1 Matrix Approach to Regression

In our above example we used two predictor variables, but it will only take a
little more work to allow for an arbitrary number of predictor variables and
derive their coefficient estimates. We can consider the model,

Yi = β0 + β1 xi1 + β2 xi2 + · · · + βp−1 xi(p−1) + ϵi ,  i = 1, 2, . . . , n

where ϵi ∼ N (0, σ 2 ). If we were to stack together the n linear equations that
represent each Yi into a column vector, we get the following.
\[
\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}
=
\begin{bmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1(p-1)} \\
1 & x_{21} & x_{22} & \cdots & x_{2(p-1)} \\
\vdots & \vdots & \vdots &  & \vdots \\
1 & x_{n1} & x_{n2} & \cdots & x_{n(p-1)}
\end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_{p-1} \end{bmatrix}
+
\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}
\]

We can write this more compactly as

Y = Xβ + ϵ

where

\[
Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}, \quad
X = \begin{bmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1(p-1)} \\
1 & x_{21} & x_{22} & \cdots & x_{2(p-1)} \\
\vdots & \vdots & \vdots &  & \vdots \\
1 & x_{n1} & x_{n2} & \cdots & x_{n(p-1)}
\end{bmatrix}, \quad
\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_{p-1} \end{bmatrix}, \quad
\epsilon = \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}.
\]
\[
f(\beta_0, \beta_1, \beta_2, \cdots, \beta_{p-1}) = \sum_{i=1}^{n} \left(y_i - (\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_{p-1} x_{i(p-1)})\right)^2,
\]
\[
\begin{bmatrix}
n & \sum x_{i1} & \sum x_{i2} & \cdots & \sum x_{i(p-1)} \\
\sum x_{i1} & \sum x_{i1}^2 & \sum x_{i1} x_{i2} & \cdots & \sum x_{i1} x_{i(p-1)} \\
\vdots & \vdots & \vdots &  & \vdots \\
\sum x_{i(p-1)} & \sum x_{i(p-1)} x_{i1} & \sum x_{i(p-1)} x_{i2} & \cdots & \sum x_{i(p-1)}^2
\end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum x_{i1} y_i \\ \vdots \\ \sum x_{i(p-1)} y_i \end{bmatrix}
\]

(all sums taken over i = 1, 2, . . . , n)
The normal equations can be written much more succinctly in matrix notation,
X ⊤ Xβ = X ⊤ y.
We can then solve this expression by multiplying both sides by the inverse of
X ⊤ X, which exists, provided the columns of X are linearly independent. Then
as always, we denote our solution with a hat.
\[
\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y
\]
To verify that this is what R has done for us in the case of two predictors, we
create an X matrix. Note that the first column is all 1s, and the remaining
columns contain the data.
n = nrow(autompg)
p = length(coef(mpg_model))
X = cbind(rep(1, n), autompg$wt, autompg$year)
y = autompg$mpg
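A sketch of solving the normal equations directly, which produces the output below; beta_hat is reused later in the chapter:

(beta_hat = solve(t(X) %*% X) %*% t(X) %*% y)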
## [,1]
## [1,] -14.637641945
## [2,] -0.006634876
## [3,] 0.761401955
coef(mpg_model)
## (Intercept) wt year
## -14.637641945 -0.006634876 0.761401955
\[
\hat{\beta} = \begin{bmatrix} -14.6376419 \\ -0.0066349 \\ 0.761402 \end{bmatrix}
\]
ŷ = X β̂.
\[
\hat{y} = \begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_n \end{bmatrix}
\qquad
e = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}
= \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
- \begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_n \end{bmatrix}.
\]
\[
s_e^2 = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n - p} = \frac{e^\top e}{n - p}
\]

\[
E[s_e^2] = \sigma^2
\]
Note that the change from the SLR estimate to now is in the denominator.
Specifically we now divide by n − p instead of n − 2. Or actually, we should note
that in the case of SLR, there are two β parameters and thus p = 2.
Also note that if we fit the model Yi = β + ϵi , then ŷi = ȳ, p = 1, and s2e would
become

\[
s_e^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n - 1},
\]
which is likely the very first sample variance you saw in a mathematical
statistics class. The reasoning for the n − 1 is the same: we estimated one
parameter, so we lose one degree of freedom. Now, in general, we are estimating
p parameters, the β parameters, so we lose p degrees of freedom.
Also, recall that most often we will be interested in se , the residual standard
error as R calls it,
\[
s_e = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n - p}}.
\]
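For the autompg model this value can be pulled from the summary; a sketch producing the value below:

summary(mpg_model)$sigma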
## [1] 3.431367
And we can now verify that our math above is indeed calculating the same
quantities.
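A sketch of that verification, first computing the fitted values and residuals with the matrix quantities from above (y_hat and e are reused immediately below):

y_hat = X %*% beta_hat
e = y - y_hat
sqrt(t(e) %*% e / (n - p))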
## [,1]
## [1,] 3.431367
sqrt(sum((y - y_hat) ^ 2) / (n - p))
## [1] 3.431367
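These same quantities appear in the full summary of the fitted model; for reference, the output of summary(mpg_model) is shown below.

summary(mpg_model)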
##
## Call:
## lm(formula = mpg ~ wt + year, data = autompg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.852 -2.292 -0.100 2.039 14.325
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.464e+01 4.023e+00 -3.638 0.000312 ***
## wt -6.635e-03 2.149e-04 -30.881 < 2e-16 ***
## year 7.614e-01 4.973e-02 15.312 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.431 on 387 degrees of freedom
## Multiple R-squared: 0.8082, Adjusted R-squared: 0.8072
## F-statistic: 815.6 on 2 and 387 DF, p-value: < 2.2e-16
To understand these differences in detail, we will need to first obtain the sam-
pling distribution of β̂.
The derivation of the sampling distribution of β̂ involves the multivariate normal
distribution. These brief notes from semesters past give a basic overview. These
are simply for your information, as we will not present the derivation in full here.
Our goal now is to obtain the distribution of the β̂ vector,
\[
\hat{\beta} = \begin{bmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \hat{\beta}_2 \\ \vdots \\ \hat{\beta}_{p-1} \end{bmatrix}
\]
Recall from last time that when discussing sampling distributions, we now con-
sider β̂ to be a random vector, thus we use Y instead of the data vector y.
\[
\hat{\beta} = \left(X^\top X\right)^{-1} X^\top Y
\]

\[
\hat{\beta} \sim N\left(\beta, \sigma^2 \left(X^\top X\right)^{-1}\right).
\]
We then have
E[β̂] = β
E[β̂j ] = βj .
We also have
\[
\text{Var}[\hat{\beta}] = \sigma^2 \left(X^\top X\right)^{-1}
\qquad
\text{Var}[\hat{\beta}_j] = \sigma^2 C_{jj}
\]
where
\[
C = \left(X^\top X\right)^{-1} =
\begin{bmatrix}
C_{00} & C_{01} & C_{02} & \cdots & C_{0(p-1)} \\
C_{10} & C_{11} & C_{12} & \cdots & C_{1(p-1)} \\
C_{20} & C_{21} & C_{22} & \cdots & C_{2(p-1)} \\
\vdots & \vdots & \vdots &  & \vdots \\
C_{(p-1)0} & C_{(p-1)1} & C_{(p-1)2} & \cdots & C_{(p-1)(p-1)}
\end{bmatrix}
\]
The standard errors are obtained by using se in place of σ. In particular, for each
coefficient,

\[
\text{SE}[\hat{\beta}_j] = s_e \sqrt{C_{jj}}.
\]
\[
\hat{\beta}_j \sim N\left(\beta_j, \sigma^2 C_{jj}\right),
\]

thus

\[
\frac{\hat{\beta}_j - \beta_j}{s_e \sqrt{C_{jj}}} \sim t_{n-p}.
\]
Now that we have the necessary distributional results, we can move on to per-
form tests and make interval estimates.
To test

H0 : βj = 0 vs H1 : βj ≠ 0,

we again use a test statistic of the form

TS = (EST − HYP) / SE.

In particular,

\[
t = \frac{\hat{\beta}_j - \beta_j}{\text{SE}[\hat{\beta}_j]} = \frac{\hat{\beta}_j - 0}{s_e \sqrt{C_{jj}}},
\]

which, under the null hypothesis, follows a t distribution with n − p degrees of
freedom.
For example, consider again the model for the autompg data,

Yi = β0 + β1 xi1 + β2 xi2 + ϵi ,  i = 1, 2, . . . , n

where ϵi ∼ N (0, σ 2 ), with x1 denoting wt and x2 denoting year. To decide whether
wt belongs in the model, we test

H0 : β1 = 0 vs H1 : β1 ̸= 0
The estimate (Estimate), standard error (Std. Error), test statistic (t value),
and p-value (Pr(>|t|)) for this test are displayed in the second row, labeled wt.
Remember that the p-value given here is specifically for a two-sided test, where
the hypothesized value is 0.
Also note in this case, by hypothesizing that β1 = 0 the null and alternative
essentially specify two different models:
• H0 : Y = β 0 + β 2 x 2 + ϵ
• H1 : Y = β 0 + β 1 x 1 + β 2 x 2 + ϵ
This is important. We are not simply testing whether or not there is a relation-
ship between weight and fuel efficiency. We are testing if there is a relationship
between weight and fuel efficiency, given that a term for year is in the model.
(Note, we dropped some indexing here, for readability.)
Since

E[β̂j ] = βj ,  SE[β̂j ] = se √Cjj ,

and the sampling distribution of β̂j is normal, then we can easily construct
confidence intervals for each of the β̂j ,

\[
\hat{\beta}_j \pm t_{\alpha/2, n-p} \cdot s_e \sqrt{C_{jj}}.
\]
We can find these in R using the same method as before. Now there will simply
be additional rows for the additional β.
confint(mpg_model, level = 0.99)
## 0.5 % 99.5 %
## (Intercept) -25.052563681 -4.222720208
## wt -0.007191036 -0.006078716
## year 0.632680051 0.890123859
As we saw in SLR, we can create confidence intervals for the mean response,
that is, an interval estimate for E[Y | X = x]. In SLR, the mean of Y was only
dependent on a single value x. Now, in multiple regression, E[Y | X = x] is
dependent on the value of each of the predictors, so we define the vector x0 to
be,
\[
x_0 = \begin{bmatrix} 1 \\ x_{01} \\ x_{02} \\ \vdots \\ x_{0(p-1)} \end{bmatrix}
\]
\[
\hat{y}(x_0) = x_0^\top \hat{\beta}
\]

\[
E[\hat{y}(x_0)] = x_0^\top \beta = \beta_0 + \beta_1 x_{01} + \beta_2 x_{02} + \cdots + \beta_{p-1} x_{0(p-1)}
\]
\[
\text{SE}[\hat{y}(x_0)] = s_e \sqrt{x_0^\top \left(X^\top X\right)^{-1} x_0}
\]
Putting it all together, we obtain a confidence interval for the mean response.
\[
\hat{y}(x_0) \pm t_{\alpha/2, n-p} \cdot s_e \sqrt{x_0^\top \left(X^\top X\right)^{-1} x_0}
\]
The math has changed a bit, but the process in R remains almost identical. Here,
we create a data frame for two additional cars. One car that weighs 3500 pounds
produced in 1976, as well as a second car that weighs 5000 pounds which was
produced in 1981.
new_cars = data.frame(wt = c(3500, 5000), year = c(76, 81))
new_cars
## wt year
## 1 3500 76
## 2 5000 81
We can then use the predict() function with interval = "confidence" to
obtain intervals for the mean fuel efficiency for both new cars. Again, it is
important to make the data passed to newdata a data frame, so that R knows
which values are for which variables.
predict(mpg_model, newdata = new_cars, interval = "confidence", level = 0.99)
Note that both of the weights of the new cars are within the range of observed
values.
new_cars$year
## [1] 76 81
range(autompg$year)
## [1] 70 82
As are the years of each of the new cars.
plot(year ~ wt, data = autompg, pch = 20, col = "dodgerblue", cex = 1.5)
points(new_cars, col = "darkorange", cex = 3, pch = "X")
[Figure: scatterplot of year against wt for the autompg data, with the two new cars marked by “X”.]
However, we have to consider weight and year together now. And based on the
above plot, one of the new cars is within the “blob” of observed values, while
the other, the car from 1981 weighing 5000 pounds, is noticeably outside of the
observed values. This is a hidden extrapolation which you should be aware of
when using multiple regression.
Shifting gears back to the new data pair that can be reasonably estimated, we
do a quick verification of some of the mathematics in R.
x0 = c(1, 3500, 76)
x0 %*% beta_hat
## [,1]
## [1,] 20.00684
\[
x_0 = \begin{bmatrix} 1 \\ 3500 \\ 76 \end{bmatrix},
\qquad
\hat{\beta} = \begin{bmatrix} -14.6376419 \\ -0.0066349 \\ 0.761402 \end{bmatrix},
\]

\[
\hat{y}(x_0) = x_0^\top \hat{\beta} = \begin{bmatrix} 1 & 3500 & 76 \end{bmatrix} \begin{bmatrix} -14.6376419 \\ -0.0066349 \\ 0.761402 \end{bmatrix} = 20.0068411.
\]
Also note that, using a particular value for x0 , we can essentially extract certain
β̂j values.
beta_hat
## [,1]
## [1,] -14.637641945
## [2,] -0.006634876
## [3,] 0.761401955
x0 = c(0, 0, 1)
x0 %*% beta_hat
## [,1]
## [1,] 0.761402
With this in mind, confidence intervals for the individual β̂j are actually a special
case of a confidence interval for mean response.
As with SLR, creating prediction intervals involves one slight change to the
standard error to account for the fact that we are now considering an observation,
instead of a mean.
Here we use ŷ(x0 ) to estimate Y0 , a new observation of Y at the predictor vector
x0 .
\[
\hat{y}(x_0) = x_0^\top \hat{\beta}
\]

\[
E[\hat{y}(x_0)] = x_0^\top \beta = \beta_0 + \beta_1 x_{01} + \beta_2 x_{02} + \cdots + \beta_{p-1} x_{0(p-1)}
\]

\[
\text{SE}[\hat{y}(x_0) + \epsilon] = s_e \sqrt{1 + x_0^\top \left(X^\top X\right)^{-1} x_0}
\]
\[
\hat{y}(x_0) \pm t_{\alpha/2, n-p} \cdot s_e \sqrt{1 + x_0^\top \left(X^\top X\right)^{-1} x_0}
\]
new_cars
## wt year
## 1 3500 76
## 2 5000 81
predict(mpg_model, newdata = new_cars, interval = "prediction", level = 0.99)
The decomposition of variation that we had seen in SLR still holds for MLR.
\[
\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2
\]
That is,

SST = SSE + SSReg.

This means that we can still calculate R2 in the same manner as before, which
R continues to do automatically.
summary(mpg_model)$r.squared
## [1] 0.8082355
9.3 Significance of Regression

The next test of interest is the test for the significance of the regression, which
considers the hypothesis

H0 : β1 = β2 = · · · = βp−1 = 0.
Here, we see that the null hypothesis sets all of the βj equal to 0, except the
intercept, β0 . We could then say that the null model, or “model under the null
hypothesis” is
Yi = β0 + ϵi .
The fitted values for this null model are then simply ŷ0i = ȳ.
The alternative hypothesis here is that at least one of the βj from the null
hypothesis is not 0.
H1 : At least one of βj ̸= 0, j = 1, 2, · · · , (p − 1)
We could then say that the full model, or “model under the alternative hypoth-
esis” is
This is a model where the regression is significant. At least one of the predic-
tors has a significant linear relationship with the response. There is some linear
relationship between y and the predictors, x1 , x2 , . . . , xp−1 .
We will denote the fitted values of this model as ŷ1i .
To develop the F test for the significance of the regression, we will arrange the
variance decomposition into an ANOVA table.
Source       Sum of Squares    Degrees of Freedom   Mean Square      F
Regression   ∑(ŷ1i − ȳ)²       p − 1                SSReg/(p − 1)    MSReg/MSE
Error        ∑(yi − ŷ1i)²      n − p                SSE/(n − p)
Total        ∑(yi − ȳ)²        n − 1

(sums taken over i = 1, . . . , n)
The F statistic for this test is

$$F = \frac{\sum_{i=1}^{n} (\hat{y}_{1i} - \bar{y})^2 / (p - 1)}{\sum_{i=1}^{n} (y_i - \hat{y}_{1i})^2 / (n - p)},$$

and the p-value is calculated as P (Fp−1,n−p > F ),
since we reject for large values of F . A large value of the statistic corresponds to
a large portion of the variance being explained by the regression. Here Fp−1,n−p
represents a random variable which follows an F distribution with p − 1 and
n − p degrees of freedom.
To perform this test in R, we first explicitly specify the two models in R and
save the results in different variables. We then use anova() to compare the
two models, giving anova() the null model first and the alternative (full) model
second. (Specifying the full model first will result in the same p-value, but some
nonsensical intermediate values.)
In this case,
• H0 : Yi = β0 + ϵi
• H1 : Yi = β0 + β1 xi1 + β2 xi2 + ϵi
That is, in the null model, we use neither of the predictors, whereas in the full
(alternative) model, at least one of the predictors is useful.
null_mpg_model = lm(mpg ~ 1, data = autompg)
full_mpg_model = lm(mpg ~ wt + year, data = autompg)
anova(null_mpg_model, full_mpg_model)
summary(mpg_model)
##
## Call:
## lm(formula = mpg ~ wt + year, data = autompg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.852 -2.292 -0.100 2.039 14.325
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.464e+01 4.023e+00 -3.638 0.000312 ***
## wt -6.635e-03 2.149e-04 -30.881 < 2e-16 ***
## year 7.614e-01 4.973e-02 15.312 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.431 on 387 degrees of freedom
## Multiple R-squared: 0.8082, Adjusted R-squared: 0.8072
## F-statistic: 815.6 on 2 and 387 DF, p-value: < 2.2e-16
Notice that the value reported in the row for F-statistic is indeed the F test
statistic for the significance of regression test, and additionally it reports the
two relevant degrees of freedom.
Also, note that none of the individual t-tests are equivalent to the F -test as
they were in SLR. This equivalence only holds for SLR because the individual
test for β1 is the same as testing for all non-intercept parameters, since there is
only one.
We can also verify the sums of squares and degrees of freedom directly in R. You
should match these to the table from R and use this to match R’s output to the
written table above.
# SSReg
sum((fitted(full_mpg_model) - fitted(null_mpg_model)) ^ 2)
## [1] 19205.03
# SSE
sum(resid(full_mpg_model) ^ 2)
## [1] 4556.646
# SST
sum(resid(null_mpg_model) ^ 2)
## [1] 23761.67
# Degrees of Freedom: Regression
length(coef(full_mpg_model)) - length(coef(null_mpg_model))
## [1] 2
# Degrees of Freedom: Error
length(resid(full_mpg_model)) - length(coef(full_mpg_model))
## [1] 387
# Degrees of Freedom: Total
length(resid(null_mpg_model)) - length(coef(null_mpg_model))
## [1] 389
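Putting these pieces together, a short sketch of how the F statistic and its p-value could be assembled by hand; the result should match the F-statistic row of the summary() output above.

ss_reg = sum((fitted(full_mpg_model) - fitted(null_mpg_model)) ^ 2)
ss_err = sum(resid(full_mpg_model) ^ 2)
df_reg = length(coef(full_mpg_model)) - length(coef(null_mpg_model))
df_err = length(resid(full_mpg_model)) - length(coef(full_mpg_model))
(f_stat = (ss_reg / df_reg) / (ss_err / df_err))        # mean square regression / mean square error
pf(f_stat, df1 = df_reg, df2 = df_err, lower.tail = FALSE)  # p-value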
9.4 Nested Models

The significance of regression test is a special case of comparing two nested models, where the predictors of one model (the null model) are a subset of the predictors of another (the full model). The null hypothesis is that the extra β-parameters of the full model are all zero,

H0 : βq = βq+1 = · · · = βp−1 = 0.
Specifically, the β-parameters from the full model that are not in the null model
are zero. The resulting model, which is nested, is the null model.
We can then perform this test using an F -test, which is the result of the following
ANOVA table.
Source   Sum of Squares     Degrees of Freedom   Mean Square     F
Diff     ∑(ŷ1i − ŷ0i)²      p − q                SSD/(p − q)     MSD/MSE
Full     ∑(yi − ŷ1i)²       n − p                SSE/(n − p)
Null     ∑(yi − ŷ0i)²       n − q

(sums taken over i = 1, . . . , n)

$$F = \frac{\sum_{i=1}^{n} (\hat{y}_{1i} - \hat{y}_{0i})^2 / (p - q)}{\sum_{i=1}^{n} (y_i - \hat{y}_{1i})^2 / (n - p)}.$$
Notice that the row for “Diff” compares the sum of the squared differences of
the fitted values. The degrees of freedom is then the difference of the number
of β-parameters estimated between the two models.
For example, the autompg dataset has a number of additional variables that we
have yet to use.
names(autompg)
The null hypothesis here is that the coefficients of the additional predictors (cyl, disp, hp, and acc) are all zero; the alternative is simply that at least one of them is not 0.
To perform this test in R, we first define both models, then give them to the anova() function.
null_mpg_model = lm(mpg ~ wt + year, data = autompg)
#full_mpg_model = lm(mpg ~ wt + year + cyl + disp + hp + acc, data = autompg)
full_mpg_model = lm(mpg ~ ., data = autompg)
anova(null_mpg_model, full_mpg_model)
Here we have used the formula mpg ~ . to define the full model. This is the
same as the commented out line. Specifically, this is a common shortcut in R
which reads, “model mpg as the response with each of the remaining variables
in the data frame as predictors.”
Here we see that the value of the F statistic is 0.553, and the p-value is very
large, so we fail to reject the null hypothesis at any reasonable α and say that
none of cyl, disp, hp, and acc are significant with wt and year already in the
model.
Again, we verify the sums of squares and degrees of freedom directly in R. You
should match these to the table from R, and use this to match R’s output to the
written table above.
# SSDiff
sum((fitted(full_mpg_model) - fitted(null_mpg_model)) ^ 2)
## [1] 26.17981
# SSE (For Full)
sum(resid(full_mpg_model) ^ 2)
## [1] 4530.466
# SSE (For Null)
sum(resid(null_mpg_model) ^ 2)
## [1] 4556.646
# Degrees of Freedom: Diff
length(coef(full_mpg_model)) - length(coef(null_mpg_model))
## [1] 4
# Degrees of Freedom: Full
length(resid(full_mpg_model)) - length(coef(full_mpg_model))
## [1] 383
# Degrees of Freedom: Null
length(resid(null_mpg_model)) - length(coef(null_mpg_model))
## [1] 387
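The same calculation carries over to the nested models test; a quick sketch, which should reproduce the F value of 0.553 noted above.

ss_diff = sum((fitted(full_mpg_model) - fitted(null_mpg_model)) ^ 2)
ss_err  = sum(resid(full_mpg_model) ^ 2)
df_diff = length(coef(full_mpg_model)) - length(coef(null_mpg_model))
df_err  = length(resid(full_mpg_model)) - length(coef(full_mpg_model))
(f_stat = (ss_diff / df_diff) / (ss_err / df_err))
pf(f_stat, df1 = df_diff, df2 = df_err, lower.tail = FALSE)  # p-value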
9.5 Simulation
Since we ignored the derivation of certain results, we will again use simulation to
convince ourselves of some of the above results. In particular, we will simulate
samples of size n = 100 from the model
Yi = 5 − 2xi1 + 6xi2 + ϵi , i = 1, 2, . . . , n

where ϵi ∼ N (0, σ 2 = 16).
n = 100  # sample size, as stated above
p = 3    # number of beta parameters; used later when estimating sigma
beta_0 = 5
beta_1 = -2
beta_2 = 6
sigma = 4
As is the norm with regression, the x values are considered fixed and known
quantities, so we will simulate those first, and they remain the same for the rest
of the simulation study. Also note we create an x0 which is all 1, which we need
to create our X matrix. If you look at the matrix formulation of regression, this
unit vector of all 1s is a “predictor” that puts the intercept into the model. We
also calculate the C matrix for later use.
x0 = rep(1, n)
x1 = sample(seq(1, 10, length = n))
x2 = sample(seq(1, 10, length = n))
X = cbind(x0, x1, x2)
C = solve(t(X) %*% X)
We then simulate the response according to the model above. Lastly, we place the
two predictors and response into a data frame. Note that we do not place x0
in the data frame. This is a result of R adding an intercept by default.
eps = rnorm(n, mean = 0, sd = sigma)
y = beta_0 + beta_1 * x1 + beta_2 * x2 + eps
sim_data = data.frame(x1, x2, y)
Plotting this data and fitting the regression produces the following plot.
[Figure: three-dimensional plot of the simulated response y against the predictors x1 and x2, with the fitted regression.]
We then calculate
$$\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y.$$
beta_hat = solve(t(X) %*% X) %*% t(X) %*% y
beta_hat
## [,1]
## x0 7.290735
## x1 -2.282176
## x2 5.843424
Notice that these values are the same as the coefficients found using lm() in R.
coef(lm(y ~ x1 + x2, data = sim_data))
## (Intercept) x1 x2
## 7.290735 -2.282176 5.843424
These estimates are reasonably close to the true values of the parameters.
c(beta_0, beta_1, beta_2)
## [1] 5 -2 6
We then calculate the fitted values in order to obtain se , which we see is the
same as the sigma which is returned by summary().
y_hat = X %*% beta_hat
(s_e = sqrt(sum((y - y_hat) ^ 2) / (n - p)))
## [1] 4.294307
summary(lm(y ~ x1 + x2, data = sim_data))$sigma
## [1] 4.294307
So far so good. Everything checks out. Now we will finally simulate from this
model repeatedly in order to obtain an empirical distribution of β̂2 .
We expect β̂2 to follow a normal distribution,
$$\hat{\beta}_2 \sim N\left(\beta_2, \ \sigma^2 C_{22}\right).$$
In this case,
$$\hat{\beta}_2 \sim N\left(\mu = 6, \ \sigma^2 = 16 \times 0.0014534 = 0.0232549\right).$$
Note that C22 corresponds to the element in the third row and third column
since β2 is the third parameter in the model and because R is indexed starting
at 1. However, we index the C matrix starting at 0 to match the diagonal
elements to the corresponding βj .
C[3, 3]
## [1] 0.00145343
C[2 + 1, 2 + 1]
## [1] 0.00145343
sigma ^ 2 * C[2 + 1, 2 + 1]
## [1] 0.02325487
We now perform the simulation a large number of times. Each time, we update
the y variable in the data frame, leaving the x variables the same. We then fit
a model, and store β̂2 .
num_sims = 10000
beta_hat_2 = rep(0, num_sims)
for(i in 1:num_sims) {
eps = rnorm(n, mean = 0 , sd = sigma)
sim_data$y = beta_0 * x0 + beta_1 * x1 + beta_2 * x2 + eps
fit = lm(y ~ x1 + x2, data = sim_data)
beta_hat_2[i] = coef(fit)[3]
}
We then see that the mean of the simulated values is close to the true value of
β2 .
mean(beta_hat_2)
## [1] 5.999723
beta_2
## [1] 6
We also see that the variance of the simulated values is close to the true variance
of β̂2 .
var(beta_hat_2)
## [1] 0.02343408
sigma ^ 2 * C[2 + 1, 2 + 1]
## [1] 0.02325487
The standard deviations found from the simulated data and the parent popula-
tion are also very close.
sd(beta_hat_2)
## [1] 0.1530819
sqrt(sigma ^ 2 * C[2 + 1, 2 + 1])
## [1] 0.1524955
Lastly, we plot a histogram of the simulated values, and overlay the true distri-
bution.
hist(beta_hat_2, prob = TRUE, breaks = 20,
xlab = expression(hat(beta)[2]), main = "", border = "dodgerblue")
curve(dnorm(x, mean = beta_2, sd = sqrt(sigma ^ 2 * C[2 + 1, 2 + 1])),
col = "darkorange", add = TRUE, lwd = 3)
[Figure: histogram of the simulated values of β̂2 (density scale) with the true normal density overlaid.]
We can also check how often the simulated values fall within one, two, and three standard deviations of the true mean, which should roughly match the 68–95–99.7 rule for a normal distribution.
sd_bh2 = sqrt(sigma ^ 2 * C[2 + 1, 2 + 1])
mean(beta_2 - 1 * sd_bh2 < beta_hat_2 & beta_hat_2 < beta_2 + 1 * sd_bh2)
## [1] 0.6807
mean(beta_2 - 2 * sd_bh2 < beta_hat_2 & beta_hat_2 < beta_2 + 2 * sd_bh2)
## [1] 0.9529
mean(beta_2 - 3 * sd_bh2 < beta_hat_2 & beta_hat_2 < beta_2 + 3 * sd_bh2)
## [1] 0.9967
9.6 R Markdown
Chapter 10

Model Building
“Statisticians, like artists, have the bad habit of falling in love with
their models.”
— George Box
Let’s take a step back and consider the process of finding a model for data at a
higher level. We are attempting to find a model for a response variable y based
on a number of predictors x1 , x2 , x3 , . . . , xp−1 .
Essentially, we are trying to discover the functional relationship between y and
the predictors. In the previous chapter we were fitting models for a car’s fuel
efficiency (mpg) as a function of its attributes (wt, year, cyl, disp, hp, acc). We
also consider y to be a function of some noise. Rarely if ever do we expect there
to be an exact functional relationship between the predictors and the response.
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ
We could consider all sorts of complicated functions for f . You will likely en-
counter several ways of doing this in future machine learning courses. So far in
this course we have focused on (multiple) linear regression. That is
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ
= β0 + β1 x1 + β2 x2 + · · · + βp−1 xp−1 + ϵ
In the big picture of possible models that we could fit to this data, this is a
rather restrictive model. What do we mean by a restrictive model?
10.1 Family, Form, and Fit

When modeling data, there are a number of choices that need to be made: what family of models to consider, what form the model takes within that family, and how the model is fit. We consider each of these, starting with how a model is fit.
10.1.1 Fit
Consider one of the simplest models we could fit to data, simple linear regression.
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ = β0 + β1 x1 + ϵ
So here, despite having multiple predictors, we chose to use only one. How is
this model fit? We will almost exclusively use the method of least squares, but
recall, we had seen alternative methods of fitting this model.
$$\underset{\beta_0, \beta_1}{\operatorname{argmin}} \sum_{i=1}^{n} \left| y_i - (\beta_0 + \beta_1 x_i) \right|$$

$$\underset{\beta_0, \beta_1}{\operatorname{argmin}} \sum_{i=1}^{n} \left( y_i - (\beta_0 + \beta_1 x_i) \right)^2$$
Any of these methods (we will always use the last, least squares) will obtain
estimates of the unknown parameters β0 and β1 . Since those are the only
unknowns of the specified model, we have then fit the model. The fitted model
is then

ŷ = β̂0 + β̂1 x1 .

Note that we have now dropped the term for the noise. We don’t make any effort to model the noise, only the signal.
10.1.2 Form
What are the different forms a model could take? Currently, for the linear
models we have considered, the only method for altering the form of the model
is to control the predictors used. For example, one form of the multiple linear
regression model is simple linear regression.
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ = β0 + β1 x1 + ϵ
We could also consider a SLR model with a different predictor, thus altering the
form of the model.
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ = β0 + β2 x2 + ϵ
Often, we’ll use multiple predictors in our model. Very often, we will at least
try a model with all possible predictors.
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ
= β0 + β1 x1 + β2 x2 + · · · + βp−1 xp−1 + ϵ
y = f (x1 , x2 , x3 , . . . , xp−1 ) + ϵ
= β0 + β1 x 1 + β3 x 3 + β5 x 5 + ϵ
These forms are restrictive in two senses. First, they only allow for linear rela-
tionships between the response and the predictors. This seems like an obvious
restriction of linear models, but in fact, we will soon see how to use linear mod-
els for non-linear relationships. (It will involve transforming variables.) Second,
how one variable affects the response is the same for any values of the other
predictors. Soon we will see how to create models where the effect of x1 can be
different for different values of x2 . We will discuss the concept of interaction.
10.1.3 Family
In linear regression, we specified models with parameters, βj and fit the model
by finding the best values of these parameters. This is a parametric approach. A
non-parametric approach skips the step of specifying a model with parameters,
and is often described as more of an algorithm. Non-parametric models are
often used in machine learning.
[Figure: two panels of Response versus Predictor, illustrating the parametric and non-parametric approaches.]
When searching for a model, we often need to make assumptions. These as-
sumptions are codified in the family and form of the model. For example
y = β0 + β1 x 1 + β 3 x 3 + β5 x 5 + ϵ
Suppose we then fit this model to some data and obtain the fitted model. For
example, in R we would use
fit = lm(y ~ x1 + x3 + x5, data = some_data)
This is R’s way of saying the family is linear and specifying the form from above.
An additive model with the specified predictors as well as an intercept. We then
obtain
ŷ = β̂0 + β̂1 x1 + β̂3 x3 + β̂5 x5
for the assumed family and form. Fitting a model only gives us the best fit
for the family and form that we specify. So the natural question is; how do we
choose the correct family and form? We’ll focus on form since we are focusing
on the family of linear models.
10.2 Explanation versus Prediction

What makes a model good depends on the goal of the analysis. Broadly, models are used either to explain relationships or to predict observations; we consider each goal in turn.

10.2.1 Explanation
If the goal of a model is to explain the relationship between the response and
the predictors, we are looking for a model that is small and interpretable,
but still fits the data well. When discussing linear models, the size of a model
is essentially the number of β parameters used.
Suppose we would like to find a model that explains fuel efficiency (mpg) based
on a car’s attributes (wt, year, cyl, disp, hp, acc). Perhaps we are a car
manufacturer trying to engineer a fuel efficient vehicle. If this is the case, we
are interested in both which predictor variables are useful for explaining the
car’s fuel efficiency, as well as how those variables affect fuel efficiency. To answer such questions we rely on the inference tools we have developed, which require a model with an assumption about the errors, for example

y = β0 + β1 x1 + β3 x3 + β5 x5 + ϵ, ϵ ∼ N (0, σ 2 ).
This assumption, that the errors are normally distributed with some common
variance is the key to all of the inference we have done so far. We will discuss
this in great detail later.
So with our inference tools (ANOVA and t-test) we have two potential strategies.
Start with a very small model (no predictors) and attempt to add predictors.
Or, start with a big model (all predictors) and attempt to remove predictors.
A word of caution when using a model to explain a relationship. There are two
terms often used to describe a relationship between two variables: causation
and correlation. Correlation is often also referred to as association.
Just because two variables are correlated does not necessarily mean that one
causes the other. For example, consider modeling mpg as only a function of hp.
plot(mpg ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
[Figure: scatterplot of mpg versus hp for the autompg data.]
10.2.2 Prediction
If the goal of a model is to predict the response, then the only consideration is
how well the model fits the data. For this, we will need a metric. In regression
problems, this is most often RMSE.
$$\text{RMSE}(\text{model, data}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$
where
• yi are the actual values of the response for the given data
• ŷi are the predicted values using the fitted model and the predictors from
the data
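As a concrete sketch, this formula translates directly into a small helper function; the name rmse here is our own choice, not something defined earlier in the text.

rmse = function(actual, predicted) {
  sqrt(mean((actual - predicted) ^ 2))
}

# for example, assuming the mpg_model fit from the previous chapter is still available
rmse(autompg$mpg, fitted(mpg_model))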
Correlation and causation are not an issue here. If a predictor is correlated
with the response, it is useful for prediction. For example, in elementary school
aged children their shoe size certainly doesn’t cause them to read at a higher
level, however we could very easily use shoe size to make a prediction about a
child’s reading ability. The larger their shoe size, the better they read. There’s
a lurking variable here though, their age! (Don’t send your kids to school with
size 14 shoes, it won’t make them read better!)
Also, since we are not performing inference, the extra assumption about the
errors is not needed. The only thing we care about is how close the fitted model
is to the data. Least squares is least squares. For a specified model, it will
find the values of the parameters which will minimize the squared error loss.
Your results might be largely uninterpretable and useless for inference, but for
prediction none of that matters.
Suppose instead of the manufacturer who would like to build a car, we are a
consumer who wishes to purchase a new car. However this particular car is so
new, it has not been rigorously tested, so we are unsure of what fuel efficiency to
expect. (And, as skeptics, we don’t trust what the manufacturer is telling us.)
In this case, we would like to use the model to help predict the fuel efficiency
of this car based on its attributes, which are the predictors of the model. The
smaller the errors the model makes, the more confident we are in its prediction.
The trouble with using RMSE to identify how well a model fits data, is that
RMSE is always (equal or) lower for a larger model. This would suggest that
we should always use the largest model possible when looking for a model that
predicts well. The problem with this is the potential to overfit to the data. So,
we want a model that fits well, but does not overfit. To understand overfitting,
we need to think about applying a model to seen and unseen data.
Suppose we fit a model using all data available and we evaluate RMSE on this
fitted model and all of the seen data. We will call this data the training data,
and this RMSE the train RMSE.
Now, suppose we magically encounter some additional data. To truly assess how well the model predicts, we should evaluate how well it predicts the response of this new data. We will call this data the test data and this RMSE the test RMSE.
• Train RMSE: model fit on seen data, evaluated on seen data
• Test RMSE: model fit on seen data, evaluated on unseen data

Below, a “simple” model and a “complex” model are fit to the same simulated data and then applied to additional data simulated in the same manner.

[Figure: two panels of Response versus Predictor; the left panel shows the data used to fit the two models, the right panel shows the additional simulated data, each with both fitted models drawn.]
The left panel shows the data that was used to fit the two models. Clearly the
“complex” model fits the data much better. The right panel shows additional
data that was simulated in the same manner as the original data. Here we see
that the “simple” model fits much better. The dashed orange line almost seems
random.
The more “complex”, wiggly, model fits the training data much better as it has
a much lower train RMSE. However, we see that the “simple” model fits the test
data much better, with a much lower test RMSE. This means that the complex
model has overfit the data, and we prefer the simple model. When choosing a
model for prediction, we prefer a model that predicts unseen data well.
In practice, you can’t simply generate more data to evaluate your models. In-
stead we split existing data into data used to fit the model (train) and data
used to evaluate the model (test). Never fit a model with test data.
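A minimal sketch of what such a split might look like for the autompg data; the seed, split proportion, and model are arbitrary choices for illustration.

set.seed(42)
trn_idx  = sample(nrow(autompg), size = round(0.8 * nrow(autompg)))
auto_trn = autompg[trn_idx, ]   # data used to fit the model
auto_tst = autompg[-trn_idx, ]  # data used only to evaluate the model
fit = lm(mpg ~ wt + year, data = auto_trn)
sqrt(mean((auto_trn$mpg - predict(fit, auto_trn)) ^ 2))  # train RMSE
sqrt(mean((auto_tst$mpg - predict(fit, auto_tst)) ^ 2))  # test RMSE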
10.3 Summary
Models can be used to explain relationships and predict observations.
10.4 R Markdown
The R Markdown file for this chapter can be found here:
• model-building.Rmd
The file was created using R version 3.6.1.
Chapter 11

Categorical Predictors and Interactions
— John Tukey
So far in each of our analyses, we have only used numeric variables as predictors.
We have also only used additive models, meaning the effect any predictor had on
the response was not dependent on the other predictors. In this chapter, we will
remove both of these restrictions. We will fit models with categorical predictors,
and use models that allow predictors to interact. The mathematics of multiple
regression will remain largely unchanged; however, we will pay close attention to interpretation, as well as some differences in R usage.
11.1 Dummy Variables

For this chapter, we will briefly use the built-in dataset mtcars before returning to our autompg dataset that we created in the last chapter. The mtcars dataset is somewhat smaller, so we’ll quickly take a look at the entire dataset.
mtcars
As we often do, we will start by plotting the data. We are interested in mpg as
the response variable, and hp as a predictor.
plot(mpg ~ hp, data = mtcars, cex = 2)
[Figure: scatterplot of mpg versus hp for the mtcars data.]
Since we are also interested in the transmission type, we could also label the
points accordingly.
plot(mpg ~ hp, data = mtcars, col = am + 1, pch = am + 1, cex = 2)
legend("topright", c("Automatic", "Manual"), col = c(1, 2), pch = c(1, 2))
[Figure: scatterplot of mpg versus hp for mtcars, with points colored and shaped by transmission type and a legend for Automatic and Manual.]
We used a common R “trick” when plotting this data. The am variable takes two
possible values; 0 for automatic transmission, and 1 for manual transmissions. R
can use numbers to represent colors, however the color for 0 is white. So we take
the am vector and add 1 to it. Then observations with automatic transmissions
are now represented by 1, which is black in R, and manual transmissions are represented by 2, which is red in R. (Note, we are only adding 1 inside the call
to plot(), we are not actually modifying the values stored in am.)
We first fit a simple linear regression model,

Y = β0 + β1 x1 + ϵ,
where Y is mpg and x1 is hp. For notational brevity, we drop the index i for
observations.
mpg_hp_slr = lm(mpg ~ hp, data = mtcars)
We then re-plot the data and add the fitted line to the plot.
plot(mpg ~ hp, data = mtcars, col = am + 1, pch = am + 1, cex = 2)
abline(mpg_hp_slr, lwd = 3, col = "grey")
legend("topright", c("Automatic", "Manual"), col = c(1, 2), pch = c(1, 2))
[Figure: mpg versus hp for mtcars with the single fitted SLR line added, points marked by transmission type.]
We should notice a pattern here. The red, manual observations largely fall above
the line, while the black, automatic observations are mostly below the line. This
means our model underestimates the fuel efficiency of manual transmissions, and
overestimates the fuel efficiency of automatic transmissions. To correct for this,
we will add a predictor to our model, namely, am as x2 .
Our new model is
Y = β0 + β1 x1 + β2 x2 + ϵ,
where x2 is the am variable, a dummy variable taking the value 1 for manual transmissions and 0 for automatic transmissions. To fit the above model, we proceed like any other multiple regression model we have seen before.
mpg_hp_add = lm(mpg ~ hp + am, data = mtcars)
Briefly checking the output, we see that R has estimated the three β parameters.
mpg_hp_add
##
## Call:
## lm(formula = mpg ~ hp + am, data = mtcars)
##
## Coefficients:
## (Intercept) hp am
## 26.58491 -0.05889 5.27709
Since x2 can only take values 0 and 1, we can effectively write two different
models, one for manual and one for automatic transmissions.
For automatic transmissions, that is x2 = 0, we have,
Y = β0 + β1 x1 + ϵ.
For manual transmissions, that is x2 = 1, we have,

Y = (β0 + β2 ) + β1 x1 + ϵ.
Notice that these models share the same slope, β1 , but have different intercepts,
differing by β2 . So the change in mpg is the same for both models, but on average
mpg differs by β2 between the two transmission types.
We’ll now calculate the estimated slope and intercept of these two models so
that we can add them to a plot. Note that:
• β̂0 = coef(mpg_hp_add)[1] = 26.5849137
• β̂1 = coef(mpg_hp_add)[2] = -0.0588878
• β̂2 = coef(mpg_hp_add)[3] = 5.2770853
We can then combine these to calculate the estimated slope and intercepts.
int_auto = coef(mpg_hp_add)[1]
int_manu = coef(mpg_hp_add)[1] + coef(mpg_hp_add)[3]
slope_auto = coef(mpg_hp_add)[2]
slope_manu = coef(mpg_hp_add)[2]
Re-plotting the data, we use these slopes and intercepts to add the “two” fitted
models to the plot.
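The plotting code itself is not shown here; a sketch of how the two lines could be added, mirroring the plotting used elsewhere in this chapter:

plot(mpg ~ hp, data = mtcars, col = am + 1, pch = am + 1, cex = 2)
abline(int_auto, slope_auto, col = 1, lty = 1, lwd = 2)  # line for automatic transmissions
abline(int_manu, slope_manu, col = 2, lty = 2, lwd = 2)  # line for manual transmissions
legend("topright", c("Automatic", "Manual"), col = c(1, 2), pch = c(1, 2))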
[Figure: mpg versus hp for mtcars with the two fitted parallel lines, one for automatic and one for manual transmissions.]
We notice right away that the points are no longer systematically incorrect. The
red, manual observations vary about the red line in no particular pattern without
underestimating the observations as before. The black, automatic points vary
about the black line, also without an obvious pattern.
To numerically verify that the two-line model is an improvement, we can test whether the coefficient of the dummy variable is zero,

H0 : β2 = 0 vs H1 : β2 ̸= 0.
This is nothing new. Again, the math is the same as the multiple regression
analyses we have seen before. We could perform either a t or F test here. The
only difference is a slight change in interpretation. We could think of this as
testing a model with a single line (H0 ) against a model that allows two lines
(H1 ).
To obtain the test statistic and p-value for the t-test, we would use
summary(mpg_hp_add)$coefficients["am",]
Looking back at the model,

Y = β0 + β1 x1 + β2 x2 + ϵ,
in general, we can think of a dummy variable as creating “two models,” one for
each category of a binary categorical variable.
11.2 Interactions
We return to the autompg dataset, which has been modified slightly for the remainder of the chapter.

str(autompg)
We’ve removed cars with 3 and 5 cylinders, as well as created a new variable domestic which indicates whether or not a car was built in the United States. Removing the 3 and 5 cylinder cars is simply for ease of demonstration later in the
chapter and would not be done in practice. The new variable domestic takes
the value 1 if the car was built in the United States, and 0 otherwise, which
we will refer to as “foreign.” (We are arbitrarily using the United States as the
reference point here.) We have also made cyl and origin into factor variables,
which we will discuss later.
We’ll now be concerned with three variables: mpg, disp, and domestic. We will
use mpg as the response. We can fit a model,
Y = β0 + β1 x1 + β2 x2 + ϵ,
where
$$x_2 = \begin{cases} 1 & \text{Domestic} \\ 0 & \text{Foreign} \end{cases}$$
We will fit this model, extract the slope and intercept for the “two lines,” plot
the data and add the lines.
mpg_disp_add = lm(mpg ~ disp + domestic, data = autompg)
int_for = coef(mpg_disp_add)[1]
int_dom = coef(mpg_disp_add)[1] + coef(mpg_disp_add)[3]
slope_for = coef(mpg_disp_add)[2]
slope_dom = coef(mpg_disp_add)[2]
[Figure: mpg versus disp for the autompg data with the two fitted parallel lines and a legend for Foreign and Domestic.]
This is a model that allows for two parallel lines, meaning the mpg can be different
on average between foreign and domestic cars of the same engine displacement,
but the change in average mpg for an increase in displacement is the same for
both. We can see this model isn’t doing very well here. The red line fits the red
points fairly well, but the black line isn’t doing very well for the black points,
it should clearly have a more negative slope. Essentially, we would like a model
that allows for two different slopes.
Consider the following model,
Y = β0 + β1 x1 + β2 x2 + β3 x1 x2 + ϵ,
where x1 , x2 , and Y are the same as before, but we have added a new inter-
action term x1 x2 which multiplies x1 and x2 , so we also have an additional β
parameter β3 .
This model essentially creates two slopes and two intercepts, β2 being the dif-
ference in intercepts and β3 being the difference in slopes. To see this, we will
break down the model into the two “sub-models” for foreign and domestic cars.
For foreign cars, that is x2 = 0, we have
Y = β0 + β1 x1 + ϵ.

For domestic cars, that is x2 = 1, we have

Y = (β0 + β2 ) + (β1 + β3 )x1 + ϵ.

These two lines have different slopes as well as different intercepts. One way to fit such a model would be to add a new column to the data that stores the product x1 x2 and use it as a predictor. You should only do this as a last resort. We greatly prefer not to have to modify
our data simply to fit a model. Instead, we can tell R we would like to use the
existing data with an interaction term, which it will create automatically when
we use the : operator.
mpg_disp_int = lm(mpg ~ disp + domestic + disp:domestic, data = autompg)
An alternative method, which will fit the exact same model as above would be
to use the * operator. This method automatically creates the interaction term,
as well as any “lower order terms,” which in this case are the first order terms
for disp and domestic
mpg_disp_int2 = lm(mpg ~ disp * domestic, data = autompg)
We can quickly verify that these are doing the same thing.
coef(mpg_disp_int)
coef(mpg_disp_int2)
summary(mpg_disp_int)
##
## Call:
## lm(formula = mpg ~ disp + domestic + disp:domestic, data = autompg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.8332 -2.8956 -0.8332 2.2828 18.7749
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 46.05484 1.80582 25.504 < 2e-16 ***
## disp -0.15692 0.01668 -9.407 < 2e-16 ***
## domestic -12.57547 1.95644 -6.428 3.90e-10 ***
## disp:domestic 0.10252 0.01692 6.060 3.29e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.308 on 379 degrees of freedom
## Multiple R-squared: 0.7011, Adjusted R-squared: 0.6987
## F-statistic: 296.3 on 3 and 379 DF, p-value: < 2.2e-16
We see that using summary() gives the usual output for a multiple regression
model. We pay close attention to the row for disp:domestic which tests,
H0 : β3 = 0.
In this case, testing for β3 = 0 is testing for two lines with parallel slopes
versus two lines with possibly different slopes. The disp:domestic line in the
summary() output uses a t-test to perform the test.
We could also use an ANOVA F -test. The additive model, without interaction
is our null model, and the interaction model is the alternative.
anova(mpg_disp_add, mpg_disp_int)
Again we see this test has the same p-value as the t-test. Also the p-value is
extremely low, so between the two, we choose the interaction model.
int_for = coef(mpg_disp_int)[1]
int_dom = coef(mpg_disp_int)[1] + coef(mpg_disp_int)[3]
slope_for = coef(mpg_disp_int)[2]
slope_dom = coef(mpg_disp_int)[2] + coef(mpg_disp_int)[4]
Here we again calculate the slope and intercepts for the two lines for use in
plotting.
plot(mpg ~ disp, data = autompg, col = domestic + 1, pch = domestic + 1)
abline(int_for, slope_for, col = 1, lty = 1, lwd = 2) # line for foreign cars
abline(int_dom, slope_dom, col = 2, lty = 2, lwd = 2) # line for domestic cars
legend("topright", c("Foreign", "Domestic"), pch = c(1, 2), col = c(1, 2))
[Figure: mpg versus disp with the two fitted lines from the interaction model and a legend for Foreign and Domestic.]
We see that these lines fit the data much better, which matches the result of
our tests.
We can also use an interaction between two numeric variables. Consider the model

Y = β0 + β1 x1 + β2 x2 + β3 x1 x2 + ϵ,
where
• Y is mpg, the fuel efficiency in miles per gallon,
• x1 is disp, the displacement in cubic inches,
• x2 is hp, the horsepower, in foot-pounds per second.
How does mpg change based on disp in this model? We can rearrange some
terms to see how.
Y = β0 + (β1 + β3 x2 )x1 + β2 x2 + ϵ
So, for a one unit increase in x1 (disp), the mean of Y (mpg) increases β1 +β3 x2 ,
which is a different value depending on the value of x2 (hp)!
Since we’re now working in three dimensions, this model can’t be easily justified
via visualizations like the previous example. Instead, we will have to rely on a
test.
mpg_disp_add_hp = lm(mpg ~ disp + hp, data = autompg)
mpg_disp_int_hp = lm(mpg ~ disp * hp, data = autompg)
summary(mpg_disp_int_hp)
##
## Call:
## lm(formula = mpg ~ disp * hp, data = autompg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.7849 -2.3104 -0.5699 2.1453 17.9211
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.241e+01 1.523e+00 34.42 <2e-16 ***
## disp -1.002e-01 6.638e-03 -15.09 <2e-16 ***
## hp -2.198e-01 1.987e-02 -11.06 <2e-16 ***
## disp:hp 5.658e-04 5.165e-05 10.96 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.896 on 379 degrees of freedom
## Multiple R-squared: 0.7554, Adjusted R-squared: 0.7535
## F-statistic: 390.2 on 3 and 379 DF, p-value: < 2.2e-16
Using summary() we focus on the row for disp:hp which tests,
H0 : β3 = 0.
Again, we see a very low p-value so we reject the null (additive model) in favor
of the interaction model. Again, there is an equivalent F -test.
anova(mpg_disp_add_hp, mpg_disp_int_hp)
Recall the rearranged model,

Y = β0 + (β1 + β3 x2 )x1 + β2 x2 + ϵ.

This says that, for an increase of one disp we see an estimated change in average mpg of −0.1001738 + 5.658269 × 10−4 x2 . So how disp and mpg are related depends on the hp of the car.
So for a car with 50 hp, the estimated change in average mpg for an increase of one disp is

−0.1001738 + 5.658269 × 10−4 × 50 = −0.0718825.

And for a car with 350 hp, the estimated change in average mpg for an increase of one disp is

−0.1001738 + 5.658269 × 10−4 × 350 = 0.0978656.
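These quantities can also be computed directly from the fitted coefficients; a quick sketch:

b = coef(mpg_disp_int_hp)
b["disp"] + b["disp:hp"] * c(50, 350)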
11.3 Factor Variables

is.factor(autompg$domestic)
## [1] FALSE
Earlier when we used the domestic variable, it was not a factor variable. It was
simply a numerical variable that only took two possible values, 1 for domestic,
and 0 for foreign. Let’s create a new variable origin that stores the same
information, but in a different way.
autompg$origin[autompg$domestic == 1] = "domestic"
autompg$origin[autompg$domestic == 0] = "foreign"
head(autompg$origin)
is.factor(autompg$origin)
## [1] FALSE
However, this is simply a vector of character values. A vector of car models is
a character variable in R. A vector of Vehicle Identification Numbers (VINs) is
a character variable as well. But those don’t represent a short list of levels that
might influence a response variable. We will want to coerce this origin variable
to be something more: a factor variable.
autompg$origin = as.factor(autompg$origin)
Now when we check the structure of the autompg dataset, we see that origin
is a factor variable.
str(autompg)
Y = β0 + β1 x1 + β2 x2 + β3 x1 x2 + ϵ,
where
• Y is mpg, the fuel efficiency in miles per gallon,
• x1 is disp, the displacement in cubic inches,
• x2 is domestic a dummy variable where 1 indicates a domestic car.
(mod_dummy = lm(mpg ~ disp * domestic, data = autompg))
##
## Call:
## lm(formula = mpg ~ disp * domestic, data = autompg)
##
## Coefficients:
## (Intercept) disp domestic disp:domestic
## 46.0548 -0.1569 -12.5755 0.1025
So here we see the same estimated coefficients as the interaction model fit earlier in this chapter. Now let’s fit the same model, but this time use the origin factor variable in place of the domestic dummy variable.

lm(mpg ~ disp * origin, data = autompg)
##
## Call:
## lm(formula = mpg ~ disp * origin, data = autompg)
##
## Coefficients:
## (Intercept) disp originforeign
## 33.47937 -0.05441 12.57547
## disp:originforeign
## -0.10252
It seems that it doesn’t produce the same results. Right away we notice that
the intercept is different, as is the coefficient in front of disp. We also notice
that the remaining two coefficients are of the same magnitude as their respective
counterparts using the domestic variable, but with a different sign. Why is this
happening?
It turns out, that by using a factor variable, R is automatically creating a dummy
variable for us. However, it is not the dummy variable that we had originally
used ourselves.
R is fitting the model
Y = β0 + β1 x1 + β2 x2 + β3 x1 x2 + ϵ,
where
• Y is mpg, the fuel efficiency in miles per gallon,
• x1 is disp, the displacement in cubic inches,
• x2 is a dummy variable created by R. It uses 1 to represent a foreign
car.
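One way to see exactly what R has done is to look at the design matrix it builds from the formula; the column originforeign below is the 0/1 dummy variable R created.

head(model.matrix(mpg ~ disp * origin, data = autompg))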
So now,
β̂0 = 33.4793709
is the estimated average mpg for a domestic car with 0 disp, which is indeed the same as before, since in the dummy-variable model this quantity was β̂0 + β̂2 = 46.0548 − 12.5755 = 33.4793.
When R created x2 , the dummy variable, it used domestic cars as the reference
level, that is the default value of the factor variable. So when the dummy
variable is 0, the model represents this reference level, which is domestic. (R
makes this choice because domestic comes before foreign alphabetically.)
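If we would rather use foreign as the reference level, relevel() can change it. A sketch, working on a temporary copy so that autompg itself is unchanged; with foreign as the reference, the coefficients match the domestic dummy model above.

tmp = autompg
tmp$origin = relevel(tmp$origin, ref = "foreign")
coef(lm(mpg ~ disp * origin, data = tmp))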
So the two models have different estimated coefficients, but due to the different
model representations, they are actually the same model.
Let’s now consider a factor variable with more than two levels. In this dataset,
cyl is an example.
is.factor(autompg$cyl)
## [1] TRUE
levels(autompg$cyl)
The levels of cyl are 4, 6, and 8. To use a factor with three levels in a regression, we could imagine three dummy variables,

$$v_1 = \begin{cases} 1 & \text{4 cylinder} \\ 0 & \text{not 4 cylinder} \end{cases}$$

$$v_2 = \begin{cases} 1 & \text{6 cylinder} \\ 0 & \text{not 6 cylinder} \end{cases}$$

$$v_3 = \begin{cases} 1 & \text{8 cylinder} \\ 0 & \text{not 8 cylinder} \end{cases}$$
Now, let’s fit an additive model in R, using mpg as the response, and disp and
cyl as predictors. This should be a model that uses “three regression lines” to
model mpg, one for each of the possible cyl levels. They will all have the same
slope (since it is an additive model), but each will have its own intercept.
(mpg_disp_add_cyl = lm(mpg ~ disp + cyl, data = autompg))
##
## Call:
## lm(formula = mpg ~ disp + cyl, data = autompg)
##
## Coefficients:
## (Intercept) disp cyl6 cyl8
## 34.99929 -0.05217 -3.63325 -2.03603
The question is, what is the model that R has fit here? It has chosen to use the
model
Y = β0 + β1 x + β2 v2 + β3 v3 + ϵ,
where
• Y is mpg, the fuel efficiency in miles per gallon,
• x is disp, the displacement in cubic inches,
• v2 and v3 are the dummy variables defined above.
Why doesn’t R use v1 ? Essentially because it doesn’t need to. To create three
lines, it only needs two dummy variables since it is using a reference level, which
in this case is a 4 cylinder car. The three “sub models” are then:
• 4 Cylinder: Y = β0 + β1 x + ϵ
• 6 Cylinder: Y = (β0 + β2 ) + β1 x + ϵ
• 8 Cylinder: Y = (β0 + β3 ) + β1 x + ϵ
Notice that they all have the same slope. However, using the two dummy
variables, we achieve the three intercepts.
• β0 is the average mpg for a 4 cylinder car with 0 disp.
• β0 + β2 is the average mpg for a 6 cylinder car with 0 disp.
• β0 + β3 is the average mpg for an 8 cylinder car with 0 disp.
So because 4 cylinder is the reference level, β0 is specific to 4 cylinders, but β2
and β3 are used to represent quantities relative to 4 cylinders.
As we have done before, we can extract these intercepts and slopes for the three
lines, and plot them accordingly.
int_4cyl = coef(mpg_disp_add_cyl)[1]
int_6cyl = coef(mpg_disp_add_cyl)[1] + coef(mpg_disp_add_cyl)[3]
int_8cyl = coef(mpg_disp_add_cyl)[1] + coef(mpg_disp_add_cyl)[4]
slope_all_cyl = coef(mpg_disp_add_cyl)[2]
[Figure: mpg versus disp with three parallel fitted lines and a legend for 4, 6, and 8 cylinder cars.]
We also fit a model with an interaction between disp and cyl, allowing each cylinder level its own slope.

(mpg_disp_int_cyl = lm(mpg ~ disp * cyl, data = autompg))
##
## Call:
## lm(formula = mpg ~ disp * cyl, data = autompg)
##
## Coefficients:
## (Intercept) disp cyl6 cyl8 disp:cyl6
## 43.59052 -0.13069 -13.20026 -20.85706 0.08299
## disp:cyl8
## 0.10817
# could also use mpg ~ disp + cyl + disp:cyl
R has again chosen to use 4 cylinder cars as the reference level, but this also
now has an effect on the interaction terms. R has fit the model.
Y = β0 + β1 x + β2 v2 + β3 v3 + γ2 xv2 + γ3 xv3 + ϵ
We’re using γ like a β parameter for simplicity, so that, for example β2 and γ2
are both associated with v2 .
Now, the three “sub models” are:
• 4 Cylinder: Y = β0 + β1 x + ϵ.
• 6 Cylinder: Y = (β0 + β2 ) + (β1 + γ2 )x + ϵ.
• 8 Cylinder: Y = (β0 + β3 ) + (β1 + γ3 )x + ϵ.
Interpreting some parameters and coefficients then:
• (β0 + β2 ) is the average mpg of a 6 cylinder car with 0 disp
• (β̂1 + γ̂3 ) = −0.1306935+0.1081714 = −0.0225221 is the estimated change
in average mpg for an increase of one disp, for an 8 cylinder car.
So, as we have seen before β2 and β3 change the intercepts for 6 and 8 cylinder
cars relative to the reference level of β0 for 4 cylinder cars.
Now, similarly γ2 and γ3 change the slopes for 6 and 8 cylinder cars relative to
the reference level of β1 for 4 cylinder cars.
Once again, we extract the coefficients and plot the results.
int_4cyl = coef(mpg_disp_int_cyl)[1]
int_6cyl = coef(mpg_disp_int_cyl)[1] + coef(mpg_disp_int_cyl)[3]
int_8cyl = coef(mpg_disp_int_cyl)[1] + coef(mpg_disp_int_cyl)[4]
slope_4cyl = coef(mpg_disp_int_cyl)[2]
slope_6cyl = coef(mpg_disp_int_cyl)[2] + coef(mpg_disp_int_cyl)[5]
slope_8cyl = coef(mpg_disp_int_cyl)[2] + coef(mpg_disp_int_cyl)[6]
[Figure: mpg versus disp with three fitted lines, each with its own slope, and a legend for 4, 6, and 8 cylinder cars.]
This looks much better! We can see that for medium displacement cars, 6
cylinder cars now perform better than 8 cylinder cars, which seems much more
reasonable than before.
To completely justify the interaction model (i.e., a unique slope for each cyl
level) compared to the additive model (single slope), we can perform an F -test.
Notice first, that there is no t-test that will be able to do this since the difference
between the two models is not a single parameter.
We will test,
H0 : γ2 = γ3 = 0.

Under this null hypothesis, the model reduces to the additive model,

Y = β0 + β1 x + β2 v2 + β3 v3 + ϵ.
anova(mpg_disp_add_cyl, mpg_disp_int_cyl)
length(coef(mpg_disp_int_cyl)) - length(coef(mpg_disp_add_cyl))
## [1] 2
We see there is a difference of two parameters, which is also displayed in the
resulting ANOVA table from R. Notice that the following two values also appear
on the ANOVA table.
nrow(autompg) - length(coef(mpg_disp_int_cyl))
## [1] 377
nrow(autompg) - length(coef(mpg_disp_add_cyl))
## [1] 379
11.4 Parameterization
So far we have been simply letting R decide how to create the dummy variables,
and thus R has been deciding the parameterization of the models. To illustrate
the ability to use alternative parameterizations, we will recreate the data, but
directly creating the dummy variables ourselves.
new_param_data = data.frame(
y = autompg$mpg,
x = autompg$disp,
v1 = 1 * as.numeric(autompg$cyl == 4),
v2 = 1 * as.numeric(autompg$cyl == 6),
v3 = 1 * as.numeric(autompg$cyl == 8))
head(new_param_data, 20)
## y x v1 v2 v3
## 1 18 307 0 0 1
## 2 15 350 0 0 1
## 3 18 318 0 0 1
## 4 16 304 0 0 1
## 5 17 302 0 0 1
## 6 15 429 0 0 1
## 7 14 454 0 0 1
## 8 14 440 0 0 1
## 9 14 455 0 0 1
## 10 15 390 0 0 1
## 11 15 383 0 0 1
## 12 14 340 0 0 1
## 13 15 400 0 0 1
## 14 14 455 0 0 1
## 15 24 113 1 0 0
## 16 22 198 0 1 0
## 17 18 199 0 1 0
## 18 21 200 0 1 0
## 19 27 97 1 0 0
## 20 26 97 1 0 0
Now,
• y is mpg
• x is disp, the displacement in cubic inches,
• v1, v2, and v3 are dummy variables as defined above.
First let’s try to fit an additive model using x as well as the three dummy
variables.
lm(y ~ x + v1 + v2 + v3, data = new_param_data)
##
## Call:
## lm(formula = y ~ x + v1 + v2 + v3, data = new_param_data)
##
## Coefficients:
## (Intercept) x v1 v2 v3
## 32.96326 -0.05217 2.03603 -1.59722 NA
What is happening here? Notice that R is essentially ignoring v3, but why?
Well, because R uses an intercept, it cannot also use v3. This is because
1 = v1 + v2 + v3
which means that 1, v1 , v2 , and v3 are linearly dependent. This would make
the X ⊤ X matrix singular, but we need to be able to invert it to solve the
normal equations and obtain β̂. With the intercept, v1, and v2, R can make the
necessary “three intercepts”. So, in this case v3 is the reference level.
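A quick sketch verifying this linear dependence, and the resulting rank deficiency, directly:

X = cbind(1, new_param_data$v1, new_param_data$v2, new_param_data$v3)
all(X[, 1] == X[, 2] + X[, 3] + X[, 4])  # TRUE: the intercept column equals v1 + v2 + v3
qr(X)$rank                               # rank 3, even though X has 4 columns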
If we remove the intercept, then we can directly obtain all “three intercepts”
without a reference level.
lm(y ~ 0 + x + v1 + v2 + v3, data = new_param_data)
##
## Call:
## lm(formula = y ~ 0 + x + v1 + v2 + v3, data = new_param_data)
##
## Coefficients:
## x v1 v2 v3
## -0.05217 34.99929 31.36604 32.96326
Here, we are fitting the model
Y = µ1 v1 + µ2 v2 + µ3 v3 + βx + ϵ.
Thus we have:
• 4 Cylinder: Y = µ1 + βx + ϵ
• 6 Cylinder: Y = µ2 + βx + ϵ
• 8 Cylinder: Y = µ3 + βx + ϵ
We could also do something similar with the interaction model, and give each
line an intercept and slope, without the need for a reference level.
lm(y ~ 0 + v1 + v2 + v3 + x:v1 + x:v2 + x:v3, data = new_param_data)
##
## Call:
## lm(formula = y ~ 0 + v1 + v2 + v3 + x:v1 + x:v2 + x:v3, data = new_param_data)
##
## Coefficients:
## v1 v2 v3 v1:x v2:x v3:x
## 43.59052 30.39026 22.73346 -0.13069 -0.04770 -0.02252
Here we have fit the model

Y = µ1 v1 + µ2 v2 + µ3 v3 + β1 xv1 + β2 xv2 + β3 xv3 + ϵ,

which gives each cylinder level its own intercept and its own slope:

• 4 Cylinder: Y = µ1 + β1 x + ϵ
• 6 Cylinder: Y = µ2 + β2 x + ϵ
• 8 Cylinder: Y = µ3 + β3 x + ϵ
Using the original data, we have (at least) three equivalent ways to specify the
interaction model with R.
lm(mpg ~ disp * cyl, data = autompg)
##
## Call:
## lm(formula = mpg ~ disp * cyl, data = autompg)
##
## Coefficients:
## (Intercept) disp cyl6 cyl8 disp:cyl6
## 43.59052 -0.13069 -13.20026 -20.85706 0.08299
## disp:cyl8
## 0.10817
lm(mpg ~ 0 + cyl + disp : cyl, data = autompg)
##
## Call:
## lm(formula = mpg ~ 0 + cyl + disp:cyl, data = autompg)
##
## Coefficients:
## cyl4 cyl6 cyl8 cyl4:disp cyl6:disp cyl8:disp
## 43.59052 30.39026 22.73346 -0.13069 -0.04770 -0.02252
lm(mpg ~ 0 + disp + cyl + disp : cyl, data = autompg)
##
## Call:
## lm(formula = mpg ~ 0 + disp + cyl + disp:cyl, data = autompg)
##
## Coefficients:
## disp cyl4 cyl6 cyl8 disp:cyl6 disp:cyl8
## -0.13069 43.59052 30.39026 22.73346 0.08299 0.10817
They all fit the same model, importantly each using six parameters, but the
coefficients mean slightly different things in each. However, once they are in-
terpreted as slopes and intercepts for the “three lines” they will have the same
result.
Use ?all.equal to learn about the all.equal() function, and think about how
the following code verifies that the residuals of the two models are the same.
all.equal(fitted(lm(mpg ~ disp * cyl, data = autompg)),
fitted(lm(mpg ~ 0 + cyl + disp : cyl, data = autompg)))
## [1] TRUE
11.5 Building Larger Models

Now consider a model that uses disp, hp, and domestic together, along with all of their interactions,
Y = β0 + β1 x1 + β2 x2 + β3 x3 + β4 x1 x2 + β5 x1 x3 + β6 x2 x3 + β7 x1 x2 x3 + ϵ.
Here,
• Y is mpg.
• x1 is disp.
• x2 is hp.
• x3 is domestic, which is a dummy variable we defined, where 1 is a do-
mestic vehicle.
The first thing to note here is that we have included a new term x1 x2 x3 which is a three-
way interaction. Interaction terms can be larger and larger, up to the number
of predictors in the model.
Since we are using the three-way interaction term, we also use all possible two-
way interactions, as well as each of the first order (main effect) terms. This is
the concept of a hierarchy. Any time a “higher-order” term is in a model, the
related “lower-order” terms should also be included. Mathematically their inclu-
sion or exclusion is sometimes irrelevant, but from an interpretation standpoint,
it is best to follow the hierarchy rules.
Let’s do some rearrangement to obtain a “coefficient” in front of x1 .
Y = β0 + β2 x2 + β3 x3 + β6 x2 x3 + (β1 + β4 x2 + β5 x3 + β7 x2 x3 )x1 + ϵ.
(β1 + β4 x2 + β5 x3 + β7 x2 x3 ).
Let’s discuss this “coefficient” to help us understand the idea of the flexibility
of a model. Recall that,
• β1 is the coefficient for a first order term,
• β4 and β5 are coefficients for two-way interactions,
• β7 is the coefficient for the three-way interaction.
If the two- and three-way interactions were not in the model, the whole “coefficient” would simply be

β1 .

With the two-way interactions, but without the three-way interaction, the “coefficient” becomes

(β1 + β4 x2 + β5 x3 ).

With the three-way interaction, the “coefficient” is

(β1 + β4 x2 + β5 x3 + β7 x2 x3 )
which is even more flexible. Now changing x1 (disp) has a different effect on Y
(mpg), depending on the values of x2 and x3 , but in a more flexible way which
we can see with some more rearrangement. Now the “coefficient” in front of x3
in this “coefficient” is dependent on x2 .
big_model = lm(mpg ~ disp * hp * domestic, data = autompg)
summary(big_model)
##
## Call:
## lm(formula = mpg ~ disp * hp * domestic, data = autompg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.9410 -2.2147 -0.4008 1.9430 18.4094
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.065e+01 6.600e+00 9.189 < 2e-16 ***
## disp -1.416e-01 6.344e-02 -2.232 0.0262 *
## hp -3.545e-01 8.123e-02 -4.364 1.65e-05 ***
We first test whether the three-way interaction term is needed,

H0 : β7 = 0.
So,
• Full Model: Y = β0 + β1 x1 + β2 x2 + β3 x3 + β4 x1 x2 + β5 x1 x3 + β6 x2 x3 +
β7 x 1 x 2 x 3 + ϵ
• Null Model: Y = β0 + β1 x1 + β2 x2 + β3 x3 + β4 x1 x2 + β5 x1 x3 + β6 x2 x3 + ϵ
We fit the null model in R as two_way_int_mod, then use anova() to perform
an F -test as usual.
two_way_int_mod = lm(mpg ~ disp * hp + disp * domestic + hp * domestic, data = autompg)
#two_way_int_mod = lm(mpg ~ (disp + hp + domestic) ^ 2, data = autompg)
anova(two_way_int_mod, big_model)
This test does not reject the null hypothesis, so we prefer the smaller model without the three-way interaction. We can also compare the average squared residuals of the two models; as expected, the larger model has the smaller value.
mean(resid(big_model) ^ 2)
## [1] 14.74053
mean(resid(two_way_int_mod) ^ 2)
## [1] 14.81259
However, it is not much smaller. We could even say that, the difference is
insignificant. This is an idea we will return to later in greater detail.
Now that we have chosen the model without the three-way interaction, can we
go further? Do we need the two-way interactions? Let’s test
H0 : β4 = β5 = β6 = 0.
Here the full model is the two-way interaction model,

Y = β0 + β1 x1 + β2 x2 + β3 x3 + β4 x1 x2 + β5 x1 x3 + β6 x2 x3 + ϵ,

and the null model removes the three interaction terms.
11.6 R Markdown
The R Markdown file for this chapter can be found here:
• cat-int.Rmd
The file was created using R version 3.6.1.
Chapter 12
Analysis of Variance
Chapter Status: This chapter should be considered optional for a first reading
of this text. Its inclusion is mostly for the benefit of some courses that use the
text. Additionally, this chapter is currently somewhat underdeveloped compared
to the rest of the text. If you are interested in contributing, you can find several
lines marked “TODO” in the source. Pull requests encouraged!
“To find out what happens when you change something, it is neces-
sary to change it.”
— Box, Hunter, and Hunter, Statistics for Experimenters (1978)
Thus far, we have built models for numeric responses, when the predictors are all
numeric. We’ll take a minor detour to go back and consider models which only
have categorical predictors. A categorical predictor is a variable which takes
only a finite number of values, which are not ordered. For example a variable
which takes possible values red, blue, green is categorical. In the context
of using a categorical variable as a predictor, it would place observations into
different groups (categories).
We’ve also mostly been dealing with observational data. The methods in this
section are most useful in experimental settings, but still work with observational
data. (However, for determining causation, we require experiments.)
12.1 Experiments
The biggest difference between an observational study and an experiment is how
the predictor data is obtained. Is the experimenter in control?
• In an observational study, both response and predictor data are obtained
via observation.
• In an experiment, the predictor data are determined by the experimenter, who assigns subjects to groups; only the response is then observed.

Consider the simplest such setting: two groups, where the observations in each group are assumed to come from a normal distribution with a group-specific mean and a common variance,
yij ∼ N (µi , σ 2 )
y1j ∼ N (µ1 , σ 2 )
y2j ∼ N (µ2 , σ 2 )
Later, we will investigate the normal and equal variance assumptions. For now,
we will continue to assume they are reasonable.
The natural question to ask: Is there a difference between the two groups? The
specific question we’ll answer: Are the means of the two groups different?
Mathematically, that is
H0 : µ1 = µ2 vs H1 : µ1 ̸= µ2
For the stated model and assuming the null hypothesis is true, the t test statistic
would follow a t distribution with degrees of freedom n1 + n2 − 2.
As an example, consider the melatonin data below, which records hours of sleep for subjects assigned to either a control group or a treatment group.
melatonin
## sleep group
## 1 8.145150 control
## 2 7.522362 treatment
## 3 6.935754 control
## 4 8.959435 treatment
## 5 6.985122 control
## 6 8.072651 treatment
## 7 8.313826 control
## 8 8.086409 treatment
## 9 8.922108 control
## 10 8.124743 treatment
## 11 8.065844 control
## 12 10.943974 treatment
## 13 4.833367 control
## 14 7.865453 treatment
## 15 6.340014 control
## 16 8.963140 treatment
## 17 6.158896 control
## 18 5.012253 treatment
## 19 3.571440 control
## 20 9.784136 treatment
To test

H0 : µC = µT vs H1 : µC ̸= µT

we use a two-sample t-test, assuming equal variances.
t.test(sleep ~ group, data = melatonin, var.equal = TRUE)
##
## Two Sample t-test
##
## data: sleep by group
## t = -2.0854, df = 18, p-value = 0.05154
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.02378261 0.01117547
## sample estimates:
## mean in group control mean in group treatment
## 6.827152 8.333456
Here we see that the subjects in the melatonin group sleep an average of about
1.5 hours longer than the control group. An hour and a half of sleep is certainly
important!
With a big enough sample size, we could make an effect size of say, four minutes
statistically significant. Is it worth taking a pill every night to get an extra four
minutes of sleep? (Probably not.)
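As a check on the mechanics, a sketch of the pooled two-sample t statistic computed by hand from the melatonin data; it should match the statistic reported by t.test() above.

xbar = tapply(melatonin$sleep, melatonin$group, mean)
s2   = tapply(melatonin$sleep, melatonin$group, var)
n    = tapply(melatonin$sleep, melatonin$group, length)
sp2  = sum((n - 1) * s2) / (sum(n) - 2)   # pooled variance
(xbar["control"] - xbar["treatment"]) / sqrt(sp2 * sum(1 / n))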
boxplot(sleep ~ group, data = melatonin, col = 5:6)
[Figure: boxplots of sleep by group (control, treatment) for the melatonin data.]
12.3 One-Way ANOVA

What if there are more than two groups? Consider the model
yij = µ + αi + eij .
where
∑ αi = 0
and
eij ∼ N (0, σ 2 ).
Here,
• i = 1, 2, . . . g where g is the number of groups.
• j = 1, 2, . . . ni where ni is the number of observations in group i.
Then the total sample size is
$$N = \sum_{i=1}^{g} n_i.$$
Under this model the observations in group i follow

yij ∼ N (µi , σ 2 ), where µi = µ + αi .
Here αi measures the effect of group i. It is the difference between the overall
mean and the mean of group i.
Essentially, the assumptions here are the same as the two sample case, however
now, we simply have more groups.
Much like the two-sample case, we would again like to test if the means of the groups are equal,

H0 : µ1 = µ2 = · · · = µg vs H1 : Not all µi are equal.

Notice that the alternative simply indicates that some of the means are not equal, not specifically which are not equal. More on that later.
Alternatively, we could write the hypotheses in terms of the group effects,

H0 : α1 = α2 = · · · = αg = 0 vs H1 : Not all αi are 0.
We’ll then decompose the variance, as we’ve seen before in regression. The
total variation measures how much the observations vary about the overall
sample mean, ignoring the groups.
$$\text{SST} = \sum_{i=1}^{g} \sum_{j=1}^{n_i} (y_{ij} - \bar{y})^2$$
The variation between groups looks at how far the individual sample means
are from the overall sample mean.
$$\text{SSB} = \sum_{i=1}^{g} \sum_{j=1}^{n_i} (\bar{y}_i - \bar{y})^2 = \sum_{i=1}^{g} n_i (\bar{y}_i - \bar{y})^2$$
Lastly, the within group variation measures how far observations are from the
sample mean of its group.
$$\text{SSW} = \sum_{i=1}^{g} \sum_{j=1}^{n_i} (y_{ij} - \bar{y}_i)^2 = \sum_{i=1}^{g} (n_i - 1) s_i^2$$
This could also be thought of as the error sum of squares, where yij is an
observation and ȳi is its fitted (predicted) value from the model.
To develop the test statistic for ANOVA, we place this information into an ANOVA table.

Source    Sum of Squares   Degrees of Freedom   Mean Square     F
Between   SSB              g − 1                SSB/(g − 1)     MSB/MSW
Within    SSW              N − g                SSW/(N − g)
We reject the null (equal means) when the F statistic is large. This occurs when
the variation between groups is large compared to the variation within groups.
Under the null hypothesis, the distribution of the test statistic is F with degrees
of freedom g − 1 and N − g.
Let’s see what this looks like in a few situations. In each of the following exam-
ples, we’ll consider sampling 20 observations (ni = 20) from three populations
(groups).
[Figure: left panel shows the three population densities with the sampled observations marked along the x-axis; right panel shows boxplots of the sampled responses by group (A, B, C).]
The left panel shows the three normal distributions we are sampling from. The
ticks along the x-axis show the randomly sampled observations. The right panel,
re-displays only the sampled values in a boxplot. Note that the mid-line of the
boxes is usually the sample median. These boxplots have been modified to use
the sample mean.
Here the sample means vary a lot around the overall sample mean, which is the
solid grey line on the right panel. Within the groups there is variability, but it
is still obvious that the sample means are very different.
As a result, we obtain a large test statistic, thus a small p-value.
• F = 374.4469511
• p-value = 1.6349862 × 10−33
Now consider µA = 0, µB = 0, µC = 0 with σ = 1. That is, equal means for the
groups.
[Figure: population densities and boxplots of the sampled responses by group for the equal-means setting.]
Here the sample means vary only a tiny bit around the overall sample mean.
Within the groups there is variability, this time much larger than the variability
of the sample means.
• F = 2.667892
• p-value = 0.0780579
The next two examples show different means, with different levels of noise. No-
tice how these affect the test statistic and p-value.
• µA = −1, µB = 0, µC = 1, σ = 1
[Figure: population densities and boxplots of the sampled responses by group for µA = −1, µB = 0, µC = 1, σ = 1.]
• F = 16.4879492
• p-value = 2.2378806 × 10−6
Above, there isn’t obvious separation between the groups like the first example,
but it is still obvious the means are different. Below, there is more noise. Visu-
ally it is somewhat hard to tell, but the test still suggests a difference of means.
(At an α of 0.05.)
• µA = −1, µB = 0, µC = 1, σ = 2
• ni = 20 for each group.
[Figure: population densities and boxplots of the sampled responses by group for µA = −1, µB = 0, µC = 1, σ = 2.]
• F = 4.6256472
• p-value = 0.0137529
Let’s consider an example with real data. We’ll use the coagulation dataset
from the faraway package. Here four different diets (A, B, C, D) were adminis-
tered to a random sample of 24 animals. The subjects were randomly assigned
to one of the four diets. For each, their blood coagulation time was measured
in seconds.
H0 : µ A = µ B = µ C = µ D
where, for example, µA is the mean blood coagulation time for an animal that
ate diet A.
library(faraway)
names(coagulation)
[Figure: boxplots of coag (blood coagulation time, in seconds) by diet (A, B, C, D).]
We first load the data and create the relevant boxplot. The plot alone suggests
a difference of means. The aov() function is used to obtain the relevant sums
of squares. Using the summary() function on the output from aov() creates the
desired ANOVA table. (Without the unneeded row for total.)
coag_aov = aov(coag ~ diet, data = coagulation)
coag_aov
## Call:
## aov(formula = coag ~ diet, data = coagulation)
##
## Terms:
## diet Residuals
## Sum of Squares 228 112
## Deg. of Freedom 3 20
##
## Residual standard error: 2.366432
## Estimated effects may be unbalanced
summary(coag_aov)
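The per-diet sample means printed below can be recovered by predicting on a data frame with one row per diet; a minimal sketch (the data frame name is illustrative):
diets = data.frame(diet = unique(coagulation$diet))
data.frame(diet = diets$diet, coag = predict(coag_aov, newdata = diets))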
## diet coag
## 1 A 61
## 2 B 66
## 3 C 68
## 4 D 61
Here, we’ve created a dataframe with a row for each diet. By predicting on this
dataframe, we obtain the sample means of each diet (group).
## [1] FALSE
is.factor(good$group) # 1, 2, and 3 are labels.
## [1] TRUE
Here we verify the distribution of the test statistic under the null hypothesis. We
simulate from a null model (equal means) to obtain an empirical distribution
of the F statistic. We add the curve for the expected distribution.
library(broom)
sim_anova = function(n = 10, mu_a = 0, mu_b = 0, mu_c = 0, mu_d = 0, sigma = 1, stat = TRUE) {
  # create data from one-way ANOVA model with four groups of equal size
  # response simulated from normal with group mean, shared variance
  # group variable indicates group A, B, C or D
  sim_data = data.frame(
    response = c(rnorm(n = n, mean = mu_a, sd = sigma),
                 rnorm(n = n, mean = mu_b, sd = sigma),
                 rnorm(n = n, mean = mu_c, sd = sigma),
                 rnorm(n = n, mean = mu_d, sd = sigma)),
    group = c(rep("A", times = n), rep("B", times = n),
              rep("C", times = n), rep("D", times = n))
  )
  # fit the one-way ANOVA, then return either the F statistic or the p-value
  aov_table = summary(aov(response ~ group, data = sim_data))[[1]]
  if (stat) {
    aov_table[1, "F value"]
  } else {
    aov_table[1, "Pr(>F)"]
  }
}
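A sketch of generating the empirical distribution shown below and overlaying the true F(3, 36) density (the number of simulations is an assumption):
f_stats = replicate(n = 5000, sim_anova(stat = TRUE))
hist(f_stats, probability = TRUE, breaks = 100, xlab = "f_stats",
     main = "Empirical Distribution of F", border = "dodgerblue")
curve(df(x, df1 = 4 - 1, df2 = 4 * 10 - 4),
      col = "darkorange", add = TRUE, lwd = 2)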
[Figure: histogram of the simulated F statistics, "Empirical Distribution of F", with the true F density overlaid.]
12.3.3 Power
Now that we’re performing experiments, getting more data means finding more
test subjects, running more lab tests, etc. In other words, it will cost more time
and money.
We’d like to design our experiment so that we have a good chance of detecting
an interesting effect size, without spending too much money. There’s no point in
running an experiment if there’s only a very low chance that it has a significant
result that you care about. (Remember, not all statistically significant results
have practical value.)
We’d like the ANOVA test to have high power for an alternative hypothesis
with a minimum desired effect size.
That is, for a true difference of means that we deem interesting, we want the
test to reject with high probability. Several quantities affect the power of the test:
• Noise level σ. The less noise, the easier it is to detect signal (effect). We
don’t have much ability to control this, except maybe to measure more
accurately.
• Significance level α. A lower significance level makes rejecting more difficult. (But it also allows for fewer false positives.)
• Sample size. Larger samples make it easier to detect effects.
• Balanced design. An equal number of observations per group leads to
higher power.
The following simulations look at the effect of significance level, effect size, and
noise level on the power of an ANOVA F -test. Homework will look into sample
size and balance.
p_vals = replicate(n = 1000, sim_anova(mu_a = -1, mu_b = 0, mu_c = 0, mu_d = 1,
sigma = 1.5, stat = FALSE))
mean(p_vals < 0.05)
## [1] 0.663
mean(p_vals < 0.01)
## [1] 0.39
p_vals = replicate(n = 1000, sim_anova(mu_a = -1, mu_b = 0, mu_c = 0, mu_d = 1,
sigma = 2.0, stat = FALSE))
mean(p_vals < 0.05)
## [1] 0.408
mean(p_vals < 0.01)
## [1] 0.179
p_vals = replicate(n = 1000, sim_anova(mu_a = -2, mu_b = 0, mu_c = 0, mu_d = 2,
sigma = 2.0, stat = FALSE))
mean(p_vals < 0.05)
## [1] 0.964
mean(p_vals < 0.01)
## [1] 0.855
12.4 Post Hoc Testing
After an ANOVA test suggests that not all means are equal, we can examine which pairs of means differ. Unadjusted pairwise t tests give the following:
##
## Pairwise comparisons using t tests with pooled SD
##
## data: coag and diet
##
## A B C
## B 0.00380 - -
## C 0.00018 0.15878 -
## D 1.00000 0.00086 2.3e-05
##
## P value adjustment method: none
# pairwise.t.test(coagulation$coag, coagulation$diet, p.adj = "none")
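The Bonferroni-adjusted comparisons that follow are obtained by changing the adjustment method:
pairwise.t.test(coagulation$coag, coagulation$diet, p.adj = "bonferroni")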
##
## Pairwise comparisons using t tests with pooled SD
##
## data: coag and diet
##
## A B C
## B 0.02282 - -
## C 0.00108 0.95266 -
## D 1.00000 0.00518 0.00014
##
## P value adjustment method: bonferroni
We see that these p-values are much higher than the unadjusted p-values, so we
are less likely to reject each test. As a result, the FWER is controlled at 0.05,
instead of allowing an error rate of 0.05 for each individual test.
We can simulate the 100 test scenario to illustrate this point.
get_p_val = function() {
  # simulate a test of a true null hypothesis and return its p-value
  # (the two-sample t-test here is illustrative; any test of a true null works)
  t.test(rnorm(10), rnorm(10))$p.value
}
set.seed(1337)
# FWER with 100 tests, desired rate = 0.05, no adjustment
mean(replicate(1000, any(replicate(100, get_p_val()) < 0.05)))
## [1] 0.994
# FWER with 100 tests, desired rate = 0.05, bonferroni adjustment
mean(replicate(1000, any(p.adjust(replicate(100, get_p_val()), "bonferroni") < 0.05)))
## [1] 0.058
For the specific case of testing all two-way mean differences after an ANOVA
test, there are a number of potential methods for making an adjustment of this
type. The pros and cons of the potential methods are beyond the scope of
this course. We choose a method for its ease of use, and to a lesser extent, its
developer.
Tukey’s Honest Significant Difference can be applied directly to an object which
was created using aov(). It will adjust the p-values of the pairwise comparisons
of the means to control the FWER, in this case, at 0.05. Notice it also gives
confidence intervals for the differences of the means.
TukeyHSD(coag_aov, conf.level = 0.95)
[Figure: plot of the Tukey HSD 95% confidence intervals for the pairwise differences in mean coagulation time.]
The creator of this method, John Tukey, is an important figure in the history of
data science. He essentially predicted the rise of data science over 50 years ago.
For some retrospective thoughts on those 50 years, see this paper from David
Donoho.
\sum_{i} \alpha_i = 0, \quad \sum_{j} \beta_j = 0, \quad \text{and} \quad \sum_{i} (\alpha\beta)_{ij} = \sum_{j} (\alpha\beta)_{ij} = 0
for any i or j.
Here,
• i = 1, 2, . . . I where I is the number of levels of factor A.
• j = 1, 2, . . . J where J is the number of levels of factor B.
• k = 1, 2, . . . K where K is the number of replicates per group.
Here, we can think of a group as a combination of a level from each of the factors.
So for example, one group will receive level 2 of factor A and level 3 of factor
B. The number of replicates is the number of subjects in each group. Here y135
would be the measurement for the fifth member (replicate) of the group for level
1 of factor A and level 3 of factor B.
We call this setup an I × J factorial design with K replicates. (Our current
notation only allows for equal replicates in each group. It isn’t difficult to
allow for different replicates for different groups, but we’ll proceed using equal
replicates per group, which if possible, is desirable.)
• αi measures the effect of level i of factor A. We call these the main
effects of factor A.
• βj measures the effect of level j of factor B. We call these the main
effects of factor B.
The question, then, is which of these models we should use if we have two factors.
The most important question to consider is whether or not we should model the
interaction. Is the effect of Factor A the same for all levels of Factor B? In
the additive model, yes. In the interaction model, no. Both models would use
a different mean for each group, but in a very specific way in both cases.
Let’s discuss these comparisons by looking at some examples. We’ll first look at
the rats data from the faraway package. There are two factors here: poison
and treat. We use the levels() function to extract the levels of a factor
variable.
levels(rats$poison)
[Figure: interaction plots for the rats data; mean of time against poison (grouped by treat) and mean of time against treat (grouped by poison).]
Let’s fit each of the possible models, then investigate their estimates for each of
the group means.
rats_int = aov(time ~ poison * treat, data = rats) # interaction model
rats_add = aov(time ~ poison + treat, data = rats) # additive model
rats_pois = aov(time ~ poison , data = rats) # single factor model
rats_treat = aov(time ~ treat, data = rats) # single factor model
rats_null = aov(time ~ 1, data = rats) # null model
To get the estimates, we’ll create a table which we will predict on.
rats_table = expand.grid(poison = unique(rats$poison), treat = unique(rats$treat))
rats_table
## poison treat
## 1 I A
## 2 II A
## 3 III A
## 4 I B
## 5 II B
## 6 III B
## 7 I C
## 8 II C
## 9 III C
## 10 I D
## 11 II D
## 12 III D
matrix(paste0(rats_table$poison, "-", rats_table$treat) , 4, 3, byrow = TRUE)
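The helper get_est_means() used below is not defined in the text shown; a minimal sketch consistent with its use (predict on the table of factor combinations and arrange the estimated group means in a treat-by-poison matrix, matching the layout just printed):
get_est_means = function(model, table) {
  matrix(predict(model, newdata = table), nrow = 4, ncol = 3, byrow = TRUE,
         dimnames = list(levels(table$treat), levels(table$poison)))
}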
First, we obtain the estimates from the interaction model. Note that each cell
has a different value.
knitr::kable(get_est_means(model = rats_int, table = rats_table))
I II III
A 0.4125 0.3200 0.210
B 0.8800 0.8150 0.335
C 0.5675 0.3750 0.235
D 0.6100 0.6675 0.325
Next, we obtain the estimates from the additive model. Again, each cell has
a different value. We also see that these estimates are somewhat close to those
from the interaction model.
knitr::kable(get_est_means(model = rats_add, table = rats_table))
I II III
A 0.4522917 0.3791667 0.1110417
B 0.8147917 0.7416667 0.4735417
C 0.5306250 0.4575000 0.1893750
D 0.6722917 0.5991667 0.3310417
Using the additive model, we can compute the difference between the effects of treatments A and B for each poison:
additive_means = get_est_means(model = rats_add, table = rats_table)
additive_means["A",] - additive_means["B",]
## I II III
## -0.3625 -0.3625 -0.3625
interaction_means = get_est_means(model = rats_int, table = rats_table)
interaction_means["A",] - interaction_means["B",]
## I II III
## -0.4675 -0.4950 -0.1250
This is the key difference between the interaction and additive models. The
difference between the effect of treatments A and B is the same for each poison
in the additive model. They are different in the interaction model.
The remaining three models are much simpler, having either only row or only
column effects. Or no effects in the case of the null model.
knitr::kable(get_est_means(model = rats_pois, table = rats_table))
I II III
A 0.6175 0.544375 0.27625
B 0.6175 0.544375 0.27625
C 0.6175 0.544375 0.27625
D 0.6175 0.544375 0.27625
knitr::kable(get_est_means(model = rats_treat, table = rats_table))
I II III
A 0.3141667 0.3141667 0.3141667
B 0.6766667 0.6766667 0.6766667
C 0.3925000 0.3925000 0.3925000
D 0.5341667 0.5341667 0.5341667
knitr::kable(get_est_means(model = rats_null, table = rats_table))
I II III
A 0.479375 0.479375 0.479375
B 0.479375 0.479375 0.479375
C 0.479375 0.479375 0.479375
D 0.479375 0.479375 0.479375
To perform the needed tests, we will need to create another ANOVA table.
(We’ll skip the details of the sums of squares calculations and simply let R take
care of them.)
For the row testing the interaction, we reject the null when the F statistic is large. Under the null hypothesis, the distribution of the test statistic is F with degrees of freedom (I − 1)(J − 1) and IJ(K − 1).
For the row testing the main effects of Factor B, we again reject the null when the F statistic is large. Under the null hypothesis, the distribution of the test statistic is F with degrees of freedom J − 1 and IJ(K − 1).
The row for Factor A tests its main effects. Again we reject when the F statistic is large; under the null hypothesis, the distribution of the test statistic is F with degrees of freedom I − 1 and IJ(K − 1).
[Figure: interaction plots for the warpbreaks data; mean of breaks against wool (grouped by tension L, M, H) and mean of breaks against tension (grouped by wool A, B).]
Either plot makes it rather clear that the wool and tension factors interact.
summary(aov(breaks ~ wool * tension, data = warpbreaks))
12.6 R Markdown
The R Markdown file for this chapter can be found here:
• anova.Rmd
The file was created using R version 3.6.1.
Chapter 13
Model Diagnostics
“Your assumptions are your windows on the world. Scrub them off
every once in a while, or the light won’t come in.”
— Isaac Asimov
After reading this chapter you will be able to:
• Understand the assumptions of a regression model.
• Assess regression model assumptions using visualizations and tests.
• Understand leverage, outliers, and influential points.
• Be able to identify unusual observations in regression models.
Using matrix notation, this model can be written much more succinctly as
Y = Xβ + ϵ.
\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y.
E[\hat{\beta}] = \beta,
and variance
Var[\hat{\beta}] = \sigma^2 \left(X^\top X\right)^{-1}.
Recall also that
\hat{\beta}_j \sim N\left(\beta_j, \sigma^2 C_{jj}\right), \quad \text{where } C = \left(X^\top X\right)^{-1},
so that
\frac{\hat{\beta}_j - \beta_j}{s_e \sqrt{C_{jj}}} \sim t_{n-p},
and that the residual sum of squares is
\sum_{i=1}^{n} (y_i - \hat{y}_i)^2.
So, essentially each of these looks at how close the data points are to the model.
However is that all we care about?
• It could be that the errors are made in a systematic way, which means
that our model is misspecified. We may need additional interaction terms,
or polynomial terms which we will see later.
• It is also possible that at a particular set of predictor values, the errors are
very small, but at a different set of predictor values, the errors are large.
• Perhaps most of the errors are very small, but some are very large. This
would suggest that the errors do not follow a normal distribution.
Are these issues that we care about? If all we would like to do is predict, possibly
not, since we would only care about the size of our errors. However, if we would
like to perform inference, for example to determine if a particular predictor is
important, we care a great deal. All of the distributional results, such as a t-test
for a single predictor, are derived under the assumptions of our model.
Technically, the assumptions of the model are encoded directly in a model statement such as
Y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_{p-1} x_{i(p-1)} + \epsilon_i,
where \epsilon_i \sim N(0, \sigma^2).
Often, the assumptions of linear regression are stated as:
• Linearity: the response can be written as a linear combination of the
predictors. (With noise about this true linear relationship.)
• Independence: the errors are independent.
• Normality: the distribution of the errors should follow a normal distribu-
tion.
• Equal Variance: the error variance is the same at any set of predictor
values.
The linearity assumption is encoded as
\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_{p-1} x_{i(p-1)},
while the remaining three assumptions are all encoded in
\epsilon_i \sim N(0, \sigma^2),
since the ϵi are independent and identically distributed normal random variables with constant variance.
If these assumptions are met, great! We can perform inference, and it is valid.
If these assumptions are not met, we can still “perform” a t-test using R, but
the results are not valid. The distributions of the parameter estimates will
not be what we expect. Hypothesis tests will then accept or reject incorrectly.
Essentially, garbage in, garbage out.
Model 1: Y = 3 + 5x + ϵ, ϵ ∼ N (0, 1)
Model 2: Y = 3 + 5x + ϵ, ϵ ∼ N (0, x2 )
Probably our most useful tool will be a Fitted versus Residuals Plot. It will
be useful for checking both the linearity and constant variance assumptions.
Data generated from Model 1 above should not show any signs of violating
assumptions, so we’ll use this to see what a good fitted versus residuals plot
should look like. First, we’ll simulate observations from this model.
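The helper sim_1() used next is not defined in the text shown; a minimal sketch that simulates from Model 1 (the sample size of 500 is an assumption):
sim_1 = function(sample_size = 500) {
  x = runif(n = sample_size) * 5
  y = 3 + 5 * x + rnorm(n = sample_size, mean = 0, sd = 1)
  data.frame(x, y)
}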
set.seed(42)
sim_data_1 = sim_1()
head(sim_data_1)
## x y
## 1 4.574030 24.773995
## 2 4.685377 26.475936
## 3 1.430698 8.954993
## 4 4.152238 23.951210
## 5 3.208728 20.341344
## 6 2.595480 14.943525
We then fit the model and add the fitted line to a scatterplot.
plot(y ~ x, data = sim_data_1, col = "grey", pch = 20,
main = "Data from Model 1")
fit_1 = lm(y ~ x, data = sim_data_1)
abline(fit_1, col = "darkorange", lwd = 3)
[Figure: scatterplot of the data from Model 1 with the fitted regression line.]
We now plot a fitted versus residuals plot. Note, this is residuals on the y-axis
despite the ordering in the name. Sometimes you will see this called a residuals
versus fitted, or residuals versus predicted plot.
plot(fitted(fit_1), resid(fit_1), col = "grey", pch = 20,
xlab = "Fitted", ylab = "Residuals", main = "Data from Model 1")
abline(h = 0, col = "darkorange", lwd = 2)
[Figure: fitted versus residuals plot for the data from Model 1.]
• At any fitted value, the mean of the residuals should be roughly 0. If this
is the case, the linearity assumption is valid. For this reason, we generally
add a horizontal line at y = 0 to emphasize this point.
• At every fitted value, the spread of the residuals should be roughly the
same. If this is the case, the constant variance assumption is valid.
To get a better idea of how a fitted versus residuals plot can be useful, we will
simulate from models with violated assumptions.
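The helper sim_2() and the fit fit_2 used below are not defined in the text shown; a sketch consistent with Model 2 (the sample size and plotting details are assumptions):
sim_2 = function(sample_size = 500) {
  x = runif(n = sample_size) * 5
  # error standard deviation grows with x, violating constant variance
  y = 3 + 5 * x + rnorm(n = sample_size, mean = 0, sd = x)
  data.frame(x, y)
}
set.seed(42)
sim_data_2 = sim_2()
fit_2 = lm(y ~ x, data = sim_data_2)
plot(y ~ x, data = sim_data_2, col = "grey", pch = 20, main = "Data from Model 2")
abline(fit_2, col = "darkorange", lwd = 3)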
[Figure: scatterplot of the data from Model 2 with the fitted regression line.]
This actually is rather easy to see here by adding the fitted line to a scatterplot.
This is because we are only performing simple linear regression. With multiple
regression, a fitted versus residuals plot is a necessity, since adding a fitted
regression to a scatterplot isn’t exactly possible.
plot(fitted(fit_2), resid(fit_2), col = "grey", pch = 20,
xlab = "Fitted", ylab = "Residuals", main = "Data from Model 2")
abline(h = 0, col = "darkorange", lwd = 2)
[Figure: fitted versus residuals plot for the data from Model 2.]
On the fitted versus residuals plot, we see two things very clearly. For any fitted
value, the residuals seem roughly centered at 0. This is good! The linearity
assumption is not violated. However, we also see very clearly, that for larger
fitted values, the spread of the residuals is larger. This is bad! The constant
variance assumption is violated here.
Now we will demonstrate a model which does not meet the linearity assumption.
Model 3 is an example of a model where Y is not a linear combination of the
predictors. In this case the predictor is x, but the model uses x². (We’ll see later
that this is something that a “linear” model can deal with. The fix is simple:
just make x² a predictor!)
set.seed(42)
sim_data_3 = sim_3()
fit_3 = lm(y ~ x, data = sim_data_3)
plot(y ~ x, data = sim_data_3, col = "grey", pch = 20,
main = "Data from Model 3")
abline(fit_3, col = "darkorange", lwd = 3)
[Figure: scatterplot of the data from Model 3 with the fitted regression line.]
Again, this is rather clear on the scatterplot, but again, we wouldn’t be able to
check this plot for multiple regression.
plot(fitted(fit_3), resid(fit_3), col = "grey", pch = 20,
xlab = "Fitted", ylab = "Residuals", main = "Data from Model 3")
abline(h = 0, col = "darkorange", lwd = 2)
[Figure: fitted versus residuals plot for the data from Model 3.]
This time on the fitted versus residuals plot, for any fitted value, the spread of
the residuals is about the same. However, they are not even close to centered
at zero! At small and large fitted values the model is underestimating, while at
medium fitted values, the model is overestimating. These are systematic errors,
not random noise. So the constant variance assumption is met, but the linearity
assumption is violated. The form of our model is simply wrong. We’re trying
to fit a line to a curve!
While a fitted versus residuals plot can give us an idea about homoscedasticity,
sometimes we would prefer a more formal test. There are many tests for constant
variance, but here we will present one, the Breusch-Pagan Test. The exact
details of the test will be omitted here, but importantly the null and alternative
can be considered to be:
H0: Homoscedasticity. The errors have constant variance about the true model.
H1: Heteroscedasticity. The errors have non-constant variance about the true model.
Isn’t that convenient? A test that will specifically test the constant variance
assumption. In R, the test is available as bptest() in the lmtest package.
library(lmtest)
bptest(fit_1)
##
## studentized Breusch-Pagan test
##
## data: fit_1
## BP = 1.0234, df = 1, p-value = 0.3117
For fit_1 we see a large p-value, so we do not reject the null of homoscedasticity,
which is what we would expect.
bptest(fit_2)
##
## studentized Breusch-Pagan test
##
## data: fit_2
## BP = 76.693, df = 1, p-value < 2.2e-16
For fit_2 we see a small p-value, so we reject the null of homoscedasticity. The
constant variance assumption is violated here.
bptest(fit_3)
##
## studentized Breusch-Pagan test
##
## data: fit_3
## BP = 0.33466, df = 1, p-value = 0.5629
Lastly, for fit_3 we again see a large p-value, so we do not reject the null of
homoscedasticity, which matches our findings with a fitted versus residuals plot.
13.2.3 Histograms
We have a number of tools for assessing the normality assumption. The most
obvious would be to make a histogram of the residuals. If it appears roughly
normal, then we’ll believe the errors could truly be normal.
par(mfrow = c(1, 3))
hist(resid(fit_1),
xlab = "Residuals",
main = "Histogram of Residuals, fit_1",
col = "darkorange",
border = "dodgerblue",
breaks = 20)
hist(resid(fit_2),
xlab = "Residuals",
main = "Histogram of Residuals, fit_2",
col = "darkorange",
border = "dodgerblue",
breaks = 20)
hist(resid(fit_3),
xlab = "Residuals",
main = "Histogram of Residuals, fit_3",
col = "darkorange",
border = "dodgerblue",
breaks = 20)
[Figure: histograms of the residuals for fit_1, fit_2, and fit_3.]
Above are histograms for each of the three regressions we have been considering.
Notice that the first, for fit_1, appears very normal. The third, for fit_3,
appears to be very non-normal. However fit_2 is not as clear. It does have
a rough bell shape, however, it also has a very sharp peak. For this reason we
will usually use more powerful tools such as Q-Q plots and the Shapiro-Wilk
test for assessing the normality of errors.
Another visual method for assessing the normality of errors, which is more
powerful than a histogram, is a normal quantile-quantile plot, or Q-Q plot for
short.
In R these are very easy to make. The qqnorm() function plots the points, and
the qqline() function adds the necessary line. We create a Q-Q plot for the
residuals of fit_1 to check if the errors could truly be normally distributed.
qqnorm(resid(fit_1), main = "Normal Q-Q Plot, fit_1", col = "darkgrey")
qqline(resid(fit_1), col = "dodgerblue", lwd = 2)
[Figure: Normal Q-Q Plot, fit_1.]
In short, if the points of the plot do not closely follow a straight line, this would
suggest that the data do not come from a normal distribution.
The calculations required to create the plot vary depending on the implementa-
tion, but essentially the y-axis is the sorted data (observed, or sample quantiles),
and the x-axis is the values we would expect if the data did come from a normal
distribution (theoretical quantiles).
The Wikipedia page for Normal probability plots gives details on how this is
implemented in R if you are interested.
Also, to get a better idea of how Q-Q plots work, here is a quick function which
creates a Q-Q plot:
qq_plot = function(e) {
  n = length(e)
  normal_quantiles = qnorm(((1:n - 0.5) / n))
  # normal_quantiles = qnorm(((1:n) / (n + 1)))
  # plot sorted values against the theoretical quantiles, then add a reference line
  plot(normal_quantiles, sort(e), xlab = "Theoretical Quantiles",
       ylab = "Sample Quantiles", col = "darkgrey")
  qqline(e, col = "dodgerblue", lwd = 2)
}
[Figure: two normal Q-Q plots shown side by side.]
To get a better idea of what “close to the line” means, we perform a number of
simulations, and create Q-Q plots.
First we simulate data from a normal distribution with different sample sizes,
and each time create a Q-Q plot.
par(mfrow = c(1, 3))
set.seed(420)
qq_plot(rnorm(10))
qq_plot(rnorm(25))
qq_plot(rnorm(100))
[Figure: Q-Q plots for normal samples of size 10, 25, and 100.]
Since this data is sampled from a normal distribution, these are all, by definition,
good Q-Q plots. The points are “close to the line” and we would conclude that
this data could have been sampled from a normal distribution. Notice in the first
plot, one point is somewhat far from the line, but just one point, in combination
with the small sample size, is not enough to make us worried. We see with the
large sample size, all of the points are rather close to the line.
Next, we simulate data from a t distribution with a small degrees of freedom,
for different sample sizes.
par(mfrow = c(1, 3))
set.seed(420)
qq_plot(rt(10, df = 4))
qq_plot(rt(25, df = 4))
qq_plot(rt(100, df = 4))
[Figure: Q-Q plots for t (df = 4) samples of size 10, 25, and 100.]
Recall, that as the degrees of freedom for a t distribution become larger, the
distribution becomes more and more similar to a normal. Here, using 4 degrees
of freedom, we have a distribution that is somewhat normal, it is symmetrical
and roughly bell-shaped, however it has “fat tails.” This presents itself clearly in
the third panel. While many of the points are close to the line, at the edges, there
are large discrepancies. This indicates that the values are too small (negative) or
too large (positive) compared to what we would expect for a normal distribution.
So for the sample size of 100, we would conclude that the normality assumption
is violated. (If these were residuals of a model.) For sample sizes of 10 and 25
we may be suspicious, but not entirely confident. Reading Q-Q plots is a bit of
an art, not completely a science.
Next, we simulate data from an exponential distribution.
par(mfrow = c(1, 3))
set.seed(420)
qq_plot(rexp(10))
qq_plot(rexp(25))
qq_plot(rexp(100))
[Figure: Q-Q plots for exponential samples of size 10, 25, and 100.]
This is a distribution that is not very similar to a normal, so in all three cases
we see points that are far from the lines, and we would conclude that the normality
assumption is violated.
For a better understanding of which Q-Q plots are “good,” repeat the simulations
above a number of times (without setting the seed) and pay attention to
the differences between those that are simulated from a normal distribution and those that
are not. Also consider different sample sizes and distribution parameters.
Returning to our three regressions, recall,
• fit_1 had no violation of assumptions,
• fit_2 violated the constant variance assumption, but not linearity,
• fit_3 violated linearity, but not constant variance.
We’ll now create a Q-Q plot for each to assess normality of errors.
qqnorm(resid(fit_1), main = "Normal Q-Q Plot, fit_1", col = "darkgrey")
qqline(resid(fit_1), col = "dodgerblue", lwd = 2)
[Figure: Normal Q-Q Plot, fit_1.]
For fit_1, we have a near perfect Q-Q plot. We would believe the errors follow
a normal distribution.
qqnorm(resid(fit_2), main = "Normal Q-Q Plot, fit_2", col = "darkgrey")
qqline(resid(fit_2), col = "dodgerblue", lwd = 2)
[Figure: Normal Q-Q Plot, fit_2.]
For fit_2, we have a suspect Q-Q plot. We would probably not believe the
errors follow a normal distribution.
qqnorm(resid(fit_3), main = "Normal Q-Q Plot, fit_3", col = "darkgrey")
qqline(resid(fit_3), col = "dodgerblue", lwd = 2)
[Figure: Normal Q-Q Plot, fit_3.]
Lastly, for fit_3, we again have a suspect Q-Q plot. We would probably not
believe the errors follow a normal distribution.
Histograms and Q-Q Plots give a nice visual representation of the residuals
distribution, however if we are interested in formal testing, there are a number
of options available. A commonly used test is the Shapiro–Wilk test, which
is implemented in R.
set.seed(42)
shapiro.test(rnorm(25))
##
## Shapiro-Wilk normality test
##
## data: rnorm(25)
## W = 0.9499, p-value = 0.2495
shapiro.test(rexp(25))
##
## Shapiro-Wilk normality test
##
## data: rexp(25)
## W = 0.71164, p-value = 1.05e-05
This gives us the value of the test statistic and its p-value. The null hypothesis
assumes the data were sampled from a normal distribution, thus a small p-value
provides evidence against the claim that the data could have been sampled from
a normal distribution.
In the above examples, we see that we fail to reject for the data sampled from a
normal distribution, and reject for the non-normal data, for any reasonable α.
Returning again to fit_1, fit_2 and fit_3, running shapiro.test() on the
residuals of each returns a result that matches the decision we made based on
the Q-Q plots.
shapiro.test(resid(fit_1))
##
## Shapiro-Wilk normality test
##
## data: resid(fit_1)
## W = 0.99858, p-value = 0.9622
shapiro.test(resid(fit_2))
##
## Shapiro-Wilk normality test
##
## data: resid(fit_2)
## W = 0.93697, p-value = 1.056e-13
shapiro.test(resid(fit_3))
##
## Shapiro-Wilk normality test
##
## data: resid(fit_3)
## W = 0.97643, p-value = 3.231e-07
13.3 Unusual Observations
[Figure: three regressions, each with one added (circled) point; panels titled “Low Leverage, Large Residual, Small Influence”, “High Leverage, Small Residual, Small Influence”, and “High Leverage, Large Residual, Large Influence”.]
The blue solid line in each plot is a regression fit to the 10 original data points
stored in ex_data. The dashed orange line in each plot is the result of adding a
single point to the original data in ex_data. This additional point is indicated
by the circled point.
The slope of the regression for the original ten points, the solid blue line, is
given by:
coef(ex_model)[2]
## x
## -0.9696033
The added point in the first plot has a small effect on the slope, which becomes:
coef(model_1)[2]
## x
## -0.9749534
We will say that this point has low leverage, is an outlier due to its large residual,
but has small influence.
The added point in the second plot also has a small effect on the slope, which
is:
coef(model_2)[2]
## x
## -0.9507397
We will say that this point has high leverage, is not an outlier due to its small
residual, and has a very small influence.
Lastly, the added point in the third plot has a large effect on the slope, which
is now:
coef(model_3)[2]
## x
## -0.5892241
This added point is influential. It both has high leverage, and is an outlier due
to its large residual.
We’ve now mentioned three new concepts: leverage, outliers, and influential
points, each of which we will discuss in detail.
13.3.1 Leverage
A data point with high leverage, is a data point that could have a large influence
when fitting the model.
Recall that,
\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y.
Thus,
\hat{y} = X\hat{\beta} = X\left(X^\top X\right)^{-1} X^\top y
Now we define,
H = X\left(X^\top X\right)^{-1} X^\top
which we will refer to as the hat matrix. The hat matrix is used to project
onto the subspace spanned by the columns of X. It is also simply known as a
projection matrix.
The hat matrix, is a matrix that takes the original y values, and adds a hat!
ŷ = Hy
The diagonal elements of the hat matrix,
H_{ii} = h_i,
are called the leverages. Here p, the number of βs, is also the trace (and rank) of the hat matrix, so
\sum_{i=1}^{n} h_i = p.
A common heuristic is to consider a point to have large leverage when
h_i > 2\bar{h},
where
\bar{h} = \frac{\sum_{i=1}^{n} h_i}{n} = \frac{p}{n}.
For simple linear regression, the leverage for each point is given by
h_i = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{S_{xx}}.
This expression should be familiar. (Think back to inference for SLR.) It sug-
gests that the large leverages occur when x values are far from their mean.
Recall that the regression goes through the point (x̄, ȳ).
[Figure: scatterplot of x2 against x1 for the lev_ex data, with the point of means marked in red.]
Here we’ve created some multivariate data. Notice that we have plotted the x
values, not the y values. The red point is (7, 3) which is the mean of x1 and the
mean of x2 respectively.
We could calculate the leverages using the expressions defined above. We first
create the X matrix, then calculate H as defined, and extract the diagonal
elements.
X = cbind(rep(1, 8), lev_ex$x1, lev_ex$x2)
H = X %*% solve(t(X) %*% X) %*% t(X)
diag(H)
## [1] 0.6000 0.3750 0.2875 0.1250 0.4000 0.2125 0.5875 0.4125
sum(diag(H)) # the trace of H equals p, the number of betas
## [1] 3
Alternatively, the method we will use more often, is to simply fit a regression,
then use the hatvalues() function, which returns the leverages.
lev_fit = lm(y ~ ., data = lev_ex)
hatvalues(lev_fit)
## 1 2 3 4 5 6 7 8
## 0.6000 0.3750 0.2875 0.1250 0.4000 0.2125 0.5875 0.4125
Again, note that here we have “used” the y values to fit the regression, but R
still ignores them when calculating the leverages, as leverages only depend on
the x values.
coef(lev_fit)
## (Intercept) x1 x2
## 3.7 -0.7 4.4
Let’s see what happens to these coefficients when we modify the y value of the
point with the highest leverage.
which.max(hatvalues(lev_fit))
## 1
## 1
lev_ex[which.max(hatvalues(lev_fit)),]
## x1 x2 y
## 1 0 1 11
We see that the original y value is 11. We’ll create a copy of the data, and
modify this point to have a y value of 20.
lev_ex_1 = lev_ex
lev_ex_1$y[1] = 20
lm(y ~ ., data = lev_ex_1)
##
## Call:
## lm(formula = y ~ ., data = lev_ex_1)
##
## Coefficients:
## (Intercept) x1 x2
## 8.875 -1.375 4.625
Notice the large changes in the coefficients. Also notice that each of the coef-
ficients has changed in some way. Note that the leverages of the points would
not have changed, as we have not modified any of the x values.
Now let’s see what happens to these coefficients when we modify the y value of
the point with the lowest leverage.
which.min(hatvalues(lev_fit))
## 4
## 4
lev_ex[which.min(hatvalues(lev_fit)),]
## x1 x2 y
## 4 7 3 14
We see that the original y value is 14. We’ll again create a copy of the data,
and modify this point to have a y value of 30.
lev_ex_2 = lev_ex
lev_ex_2$y[4] = 30
lm(y ~ ., data = lev_ex_2)
##
## Call:
## lm(formula = y ~ ., data = lev_ex_2)
##
## Coefficients:
## (Intercept) x1 x2
## 5.7 -0.7 4.4
This time, despite a large change in the y value, there is only a small change in
the coefficients. Also, only the intercept has changed!
mean(lev_ex$x1)
## [1] 7
mean(lev_ex$x2)
## [1] 3
lev_ex[4,]
## x1 x2 y
## 4 7 3 14
Notice that this point is at the mean of both of the predictors.
Returning to our three plots, each with an added point, we can calculate the
leverages for each. Note that the 11th data point each time is the added data
point.
hatvalues(model_1)
## 1 2 3 4 5 6
## 0.33534597 0.23860732 0.16610842 0.11784927 0.09382988 0.09405024
## 7 8 9 10 11
## 0.11851036 0.16721022 0.24014985 0.33732922 0.09100926
hatvalues(model_2)
## 1 2 3 4 5 6
## 0.23238866 0.18663968 0.14979757 0.12186235 0.10283401 0.09271255
## 7 8 9 10 11
## 0.09149798 0.09919028 0.11578947 0.14129555 0.66599190
hatvalues(model_3)
## 1 2 3 4 5 6
## 0.27852761 0.21411043 0.16319018 0.12576687 0.10184049 0.09141104
## 7 8 9 10 11
## 0.09447853 0.11104294 0.14110429 0.18466258 0.49386503
Are any of these large?
hatvalues(model_1) > 2 * mean(hatvalues(model_1))
## 1 2 3 4 5 6 7 8 9 10 11
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
hatvalues(model_2) > 2 * mean(hatvalues(model_2))
## 1 2 3 4 5 6 7 8 9 10 11
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
hatvalues(model_3) > 2 * mean(hatvalues(model_3))
## 1 2 3 4 5 6 7 8 9 10 11
## FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
We see that in the second and third plots, the added point is a point of high
leverage. Recall that only in the third plot did that point have an influence on the
regression. To understand why, we’ll need to discuss outliers.
13.3.2 Outliers
Outliers are points which do not fit the model well. They may or may not have
a large effect on the model. To identify outliers, we will look for observations
with large residuals.
Note,
e = y − ŷ = Iy − Hy = (I − H)y
Var(ei ) = (1 − hi )σ 2
SE[e_i] = s_e \sqrt{1 - h_i}.
r_i = \frac{e_i}{s_e \sqrt{1 - h_i}} \ \overset{\text{approx}}{\sim}\ N(\mu = 0, \sigma^2 = 1)
when n is large.
We can use this fact to identify “large” residuals. For example, standardized
residuals greater than 2 in magnitude should only happen approximately 5 per-
cent of the time.
Returning again to our three plots, each with an added point, we can calculate
the residuals and standardized residuals for each. Standardized residuals can
be obtained in R by using rstandard() where we would normally use resid().
resid(model_1)
## 1 2 3 4 5 6
## 0.4949887 -1.4657145 -0.5629345 -0.3182468 -0.5718877 -1.1073271
## 7 8 9 10 11
## 0.4852728 -1.1459548 0.9420814 -1.1641029 4.4138254
rstandard(model_1)
## 1 2 3 4 5 6
## 0.3464701 -0.9585470 -0.3517802 -0.1933575 -0.3428264 -0.6638841
## 7 8 9 10 11
## 0.2949482 -0.7165857 0.6167268 -0.8160389 2.6418234
rstandard(model_1)[abs(rstandard(model_1)) > 2]
## 11
## 2.641823
In the first plot, we see that the 11th point, the added point, has a large standardized
residual.
resid(model_2)
## 1 2 3 4 5 6
## 1.03288292 -0.95203397 -0.07346766 0.14700626 -0.13084829 -0.69050140
## 7 8 9 10 11
## 0.87788484 -0.77755647 1.28626601 -0.84413207 0.12449986
rstandard(model_2)
## 1 2 3 4 5 6
## 1.41447023 -1.26655590 -0.09559792 0.18822094 -0.16574677 -0.86977220
## 7 8 9 10 11
## 1.10506546 -0.98294409 1.64121833 -1.09295417 0.25846620
rstandard(model_2)[abs(rstandard(model_2)) > 2]
## named numeric(0)
In the second plot, we see that there are no points with large standardized
residuals.
resid(model_3)
## 1 2 3 4 5 6
## 2.30296166 -0.04347087 0.47357980 0.33253808 -0.30683212 -1.22800087
## 7 8 9 10 11
## -0.02113027 -2.03808722 -0.33578039 -2.82769411 3.69191633
rstandard(model_3)
## 1 2 3 4 5 6
## 1.41302755 -0.02555591 0.26980722 0.18535382 -0.16873216 -0.67141143
## 7 8 9 10 11
## -0.01157256 -1.12656475 -0.18882474 -1.63206526 2.70453408
rstandard(model_3)[abs(rstandard(model_3)) > 2]
## 11
## 2.704534
In the last plot, we see that the 11th point, the added point, has a large standardized
residual.
Recall that the added point in plots two and three were both high leverage, but
now only the point in plot three has a large residual. We will now combine this
information and discuss influence.
13.3.3 Influence
As we have now seen in the three plots, some outliers only change the regression
a small amount (plot one) and some outliers have a large effect on the regression
(plot three). Observations that fall into the latter category, points with (some
combination of) high leverage and large residual, we will call influential.
A common measure of influence is Cook’s Distance, which is defined as
D_i = \frac{1}{p}\, r_i^2\, \frac{h_i}{1 - h_i}.
A Cook's distance is often considered large if
D_i > \frac{4}{n},
and an observation with a large Cook's distance is often considered influential.
The Cook’s distance for each point of a regression can be calculated using
cooks.distance() which is a default function in R. Let’s look for influential
points in the three plots we had been considering.
[Figure: the same three panels as before: “Low Leverage, Large Residual, Small Influence”, “High Leverage, Small Residual, Small Influence”, “High Leverage, Large Residual, Large Influence”.]
Recall that the circled points in each plot have different characteristics. We can check whether each added point (the 11th observation) is influential by comparing its Cook's distance to 4/n.
cooks.distance(model_1)[11] > 4 / length(cooks.distance(model_1))
## 11
## FALSE
cooks.distance(model_2)[11] > 4 / length(cooks.distance(model_2))
## 11
## FALSE
cooks.distance(model_3)[11] > 4 / length(cooks.distance(model_3))
## 11
## TRUE
And, as expected, the added point in the third plot, with high leverage and a
large residual is considered influential!
13.4 Data Analysis Examples
Last chapter we fit an additive regression to the mtcars data with mpg as the
response and hp and am as predictors. Let’s perform some diagnostics on this
model.
First, fit the model as we did last chapter.
mpg_hp_add = lm(mpg ~ hp + am, data = mtcars)
[Figure: fitted versus residuals plot for mpg_hp_add.]
The fitted versus residuals plot looks good. We don’t see any obvious pattern,
and the variance looks roughly constant. (Maybe a little larger for large fitted
values, but not enough to worry about.)
bptest(mpg_hp_add)
##
## studentized Breusch-Pagan test
##
## data: mpg_hp_add
[Figure: normal Q-Q plot of the mpg_hp_add residuals.]
The Q-Q plot looks extremely good and the Shapiro-Wilk test agrees.
shapiro.test(resid(mpg_hp_add))
##
## Shapiro-Wilk normality test
##
## data: resid(mpg_hp_add)
## W = 0.96485, p-value = 0.3706
sum(hatvalues(mpg_hp_add) > 2 * mean(hatvalues(mpg_hp_add)))
## [1] 2
We see that there are two points of large leverage.
sum(abs(rstandard(mpg_hp_add)) > 2)
## [1] 1
There is also one point with a large residual. Do these result in any points that
are considered influential?
cd_mpg_hp_add = cooks.distance(mpg_hp_add)
sum(cd_mpg_hp_add > 4 / length(cd_mpg_hp_add))
## [1] 2
large_cd_mpg = cd_mpg_hp_add > 4 / length(cd_mpg_hp_add)
cd_mpg_hp_add[large_cd_mpg]
We find two influential points. Interestingly, they are very different cars.
coef(mpg_hp_add)
## (Intercept) hp am
## 26.5849137 -0.0588878 5.2770853
Since the diagnostics looked good, there isn’t much need to worry about these
two points, but let’s see how much the coefficients change if we remove them.
mpg_hp_add_fix = lm(mpg ~ hp + am,
data = mtcars,
subset = cd_mpg_hp_add <= 4 / length(cd_mpg_hp_add))
coef(mpg_hp_add_fix)
## (Intercept) hp am
## 27.22190933 -0.06286249 4.29765867
[Figure: the four default diagnostic plots produced by plot() on the fitted model, with Toyota Corolla, Fiat 128, Lotus Europa, and Maserati Bora labeled as notable observations.]
Notice that, calling plot() on a variable which stores an object created by lm()
outputs four diagnostic plots by default. Use ?plot.lm to learn more. The first
two should already be familiar.
Let’s consider the model big_model from last chapter which was fit to the
autompg dataset. It used mpg as the response, and considered many interaction
terms between the predictors disp, hp, and domestic.
str(autompg)
## $ hp : num 130 165 150 150 140 198 220 215 225 190 ...
## $ wt : num 3504 3693 3436 3433 3449 ...
## $ acc : num 12 11.5 11 12 10.5 10 9 8.5 10 8.5 ...
## $ year : int 70 70 70 70 70 70 70 70 70 70 ...
## $ origin : int 1 1 1 1 1 1 1 1 1 1 ...
## $ domestic: num 1 1 1 1 1 1 1 1 1 1 ...
big_model = lm(mpg ~ disp * hp * domestic, data = autompg)
[Figure: normal Q-Q plot of the big_model residuals.]
shapiro.test(resid(big_model))
##
## Shapiro-Wilk normality test
##
## data: resid(big_model)
## W = 0.96161, p-value = 1.824e-08
Here both the Q-Q plot, and the Shapiro-Wilk test suggest that the normality
assumption is violated.
big_mod_cd = cooks.distance(big_model)
sum(big_mod_cd > 4 / length(big_mod_cd))
## [1] 31
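The refit big_model_fix used below is presumably obtained by excluding these influential observations, mirroring the earlier mtcars example:
big_model_fix = lm(mpg ~ disp * hp * domestic,
                   data = autompg,
                   subset = big_mod_cd <= 4 / length(big_mod_cd))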
[Figure: normal Q-Q plot of the big_model_fix residuals.]
shapiro.test(resid(big_model_fix))
##
## Shapiro-Wilk normality test
##
## data: resid(big_model_fix)
## W = 0.99035, p-value = 0.02068
Removing these points results in a much better Q-Q plot, and now Shapiro-Wilk
fails to reject for a low α.
We’ve now seen that sometimes modifying the data can fix issues with regression.
However, next chapter, instead of modifying the data, we will modify the model
via transformations.
13.5 R Markdown
The R Markdown file for this chapter can be found here:
• diagnostics.Rmd
The file was created using R version 3.6.1.
Chapter 14
Transformations
Let’s look at some (fictional) salary data from the (fictional) company Initech.
We will try to model salary as a function of years of experience. The data
can be found in initech.csv.
initech = read.csv("data/initech.csv")
plot(salary ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
[Figure: scatterplot of salary against years, "Salaries at Initech, By Seniority".]
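The summary that follows is for the simple linear regression of salary on years:
initech_fit = lm(salary ~ years, data = initech)
summary(initech_fit)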
##
## Call:
## lm(formula = salary ~ years, data = initech)
##
## Residuals:
## Min 1Q Median 3Q Max
## -57225 -18104 241 15589 91332
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5302 5750 0.922 0.359
## years 8637 389 22.200 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 27360 on 98 degrees of freedom
This model appears significant, but does it meet the model assumptions?
plot(salary ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
abline(initech_fit, col = "darkorange", lwd = 2)
[Figure: the same scatterplot of salary against years with the fitted regression line added.]
Adding the fitted line to the plot, we see that the linear relationship appears
correct.
par(mfrow = c(1, 2))
[Figure: fitted versus residuals plot and normal Q-Q plot for initech_fit.]
However, from the fitted versus residuals plot it appears there is non-constant
variance. Specifically, the variance increases as the fitted value increases.
Recall the fitted value is our estimate of the mean at a particular value of x.
Under our usual assumptions,
ϵ ∼ N (0, σ 2 )
and thus
Var[Y |X = x] = σ 2
When the variance is not constant, we would instead like to find some transformation g of the response such that
Var[g(Y ) | X = x] = c
where c is a constant that does not depend on the mean, E[Y | X = x]. A
transformation that accomplishes this is called a variance stabilizing transformation.
A reminder that, for our purposes, log and ln both mean the natural log. R uses
log to mean the natural log, unless a different base is specified.
We will now use a model with a log transformed response for the Initech data,
log(Yi ) = β0 + β1 xi + ϵi .
Note, if we re-scale the model from a log scale back to the original scale of the
data, we now have
Yi = exp(β0 + β1 xi ) · exp(ϵi )
Fitting this model in R requires only a minor modification to our formula speci-
fication.
initech_fit_log = lm(log(salary) ~ years, data = initech)
Note that while log(y) is considered the new response variable, we do not
actually create a new variable in R, but simply transform the variable inside the
model formula.
plot(log(salary) ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
abline(initech_fit_log, col = "darkorange", lwd = 2)
[Figure: scatterplot of log(salary) against years with the fitted line from initech_fit_log.]
Plotting the data on the transformed log scale and adding the fitted line, the
relationship again appears linear, and we can already see that the variation
about the fitted line looks constant.
plot(salary ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
curve(exp(initech_fit_log$coef[1] + initech_fit_log$coef[2] * x),
from = 0, to = 30, add = TRUE, col = "darkorange", lwd = 2)
[Figure: salary against years on the original scale with the back-transformed (exponential) fitted curve.]
By plotting the data on the original scale, and adding the fitted regression, we
see an exponential relationship. However, this is still a linear model, since the
new transformed response, log(y), is still a linear combination of the predictors.
par(mfrow = c(1, 2))
[Figure: fitted versus residuals plot and normal Q-Q plot for initech_fit_log.]
The fitted versus residuals plot looks much better. It appears the constant
variance assumption is no longer violated.
Comparing the RMSE using the original and transformed response, we also
see that the log transformed model simply fits better, with a smaller average
squared error.
sqrt(mean(resid(initech_fit) ^ 2))
## [1] 27080.16
sqrt(mean(resid(initech_fit_log) ^ 2))
## [1] 0.1934907
But wait, that isn’t fair, this difference is simply due to the different scales being
used.
sqrt(mean((initech$salary - fitted(initech_fit)) ^ 2))
## [1] 27080.16
sqrt(mean((initech$salary - exp(fitted(initech_fit_log))) ^ 2))
## [1] 24280.36
Transforming the fitted values of the log model back to the data scale, we do
indeed see that it fits better!
summary(initech_fit_log)
##
## Call:
## lm(formula = log(salary) ~ years, data = initech)
##
## Residuals:
But now, if we re-scale the fitted model from the log scale back to the original scale of
the data, we have
\hat{y}(x) = \exp(\hat{\beta}_0) \exp(\hat{\beta}_1 x).
We see that for every one additional year of experience, average salary increases
exp(0.079) = 1.0822 times. We are now multiplying, not adding.
While using a log transform is possibly the most common response variable trans-
formation, many others exist. We will now consider a family of transformations
and choose the best from among them, which includes the log transform.
The Box-Cox method considers response transformations of the form
g_\lambda(y) = \begin{cases} \dfrac{y^\lambda - 1}{\lambda} & \lambda \neq 0 \\ \log(y) & \lambda = 0 \end{cases}
and chooses λ to maximize the (profile) log-likelihood
L(\lambda) = -\frac{n}{2} \log(\mathrm{RSS}_\lambda / n) + (\lambda - 1) \sum_{i=1}^{n} \log(y_i).
A 100(1 − α)% confidence interval for λ is
\left\{ \lambda : L(\lambda) > L(\hat{\lambda}) - \frac{1}{2} \chi^2_{1,\alpha} \right\}
which R will plot for us to help quickly select an appropriate λ value. We often
choose a “nice” value from within the confidence interval, instead of the value
of λ that truly maximizes the likelihood.
library(MASS)
library(faraway)
Here we need the MASS package for the boxcox() function, and we will consider
a couple of datasets from the faraway package.
First we will use the savings dataset as an example of using the Box-Cox
method to justify the use of no transformation. We fit an additive multiple
regression model with sr as the response and each of the other variables as
predictors.
savings_model = lm(sr ~ ., data = savings)
We then use the boxcox() function to find the best transformation of the form
considered by the Box-Cox method.
boxcox(savings_model, plotit = TRUE)
[Figure: Box-Cox log-likelihood plot for savings_model over λ from −2 to 2, with the 95% confidence interval marked.]
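The zoomed-in plot that follows presumably re-runs boxcox() over a narrower grid of λ values, for example:
boxcox(savings_model, plotit = TRUE, lambda = seq(0.5, 1.5, by = 0.1))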
[Figure: the same Box-Cox log-likelihood plot zoomed in near the maximum, with the 95% interval marked.]
Note that we can specify a range of λ values to consider and thus be plotted.
We often specify a range that is more visually interesting. Here we see that
λ = 1 is both in the confidence interval, and is extremely close to the maximum.
This suggests a transformation of the form
\frac{y^\lambda - 1}{\lambda} = \frac{y^1 - 1}{1} = y - 1.
[Figure: fitted versus residuals plot for savings_model.]
Looking at a fitted versus residuals plot verifies that there likely are not any
issue with the assumptions of this model, which Breusch-Pagan and Shapiro-
Wilk tests verify.
library(lmtest)
bptest(savings_model)
##
## studentized Breusch-Pagan test
##
## data: savings_model
## BP = 4.9852, df = 4, p-value = 0.2888
shapiro.test(resid(savings_model))
##
## Shapiro-Wilk normality test
##
## data: resid(savings_model)
## W = 0.98698, p-value = 0.8524
Now we will use the gala dataset as an example of using the Box-Cox method
to justify a transformation other than log. We fit an additive multiple regres-
sion model with Species as the response and most of the other variables as
predictors.
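The exact fit is not shown in the text above; based on the description, it is assumed to use all predictors except Endemics:
gala_model = lm(Species ~ Area + Elevation + Nearest + Scruz + Adjacent, data = gala)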
[Figure: fitted versus residuals plot for gala_model.]
Even though there is not a lot of data for large fitted values, it still seems very
clear that the constant variance assumption is violated.
boxcox(gala_model, lambda = seq(-0.25, 0.75, by = 0.05), plotit = TRUE)
[Figure: Box-Cox log-likelihood plot for gala_model over λ from −0.25 to 0.75, with the 95% confidence interval marked.]
Using the Box-Cox method, we see that λ = 0.3 is both in the confidence interval,
and is extremely close to the maximum, which suggests a transformation of the
form
\frac{y^\lambda - 1}{\lambda} = \frac{y^{0.3} - 1}{0.3}.
[Figure: fitted versus residuals plot after applying the λ = 0.3 response transformation.]
Lastly, we return to the initech data, and the initech_fit model we had
used earlier. Recall, that this was the untransformed model, that we used a log
transform to fix.
boxcox(initech_fit)
[Figure: Box-Cox log-likelihood plot for initech_fit over λ from −2 to 2, with the 95% confidence interval around λ = 0.]
Using the Box-Cox method, we see that λ = 0 is both in the interval, and
extremely close to the maximum, which suggests a transformation of the form
log(y).
14.2 Predictor Transformation
Recall the autompg dataset from the previous chapter. Here we will attempt to
model mpg as a function of hp.
par(mfrow = c(1, 2))
plot(mpg ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
mpg_hp = lm(mpg ~ hp, data = autompg)
abline(mpg_hp, col = "darkorange", lwd = 2)
plot(fitted(mpg_hp), resid(mpg_hp), col = "dodgerblue",
pch = 20, cex = 1.5, xlab = "Fitted", ylab = "Residuals")
abline(h = 0, lty = 2, col = "darkorange", lwd = 2)
[Figure: mpg against hp with the SLR fit (left) and the corresponding fitted versus residuals plot (right).]
We first attempt SLR, but we see a rather obvious pattern in the fitted versus
residuals plot, which includes increasing variance, so we attempt a log transform
of the response.
par(mfrow = c(1, 2))
plot(log(mpg) ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
mpg_hp_log = lm(log(mpg) ~ hp, data = autompg)
abline(mpg_hp_log, col = "darkorange", lwd = 2)
plot(fitted(mpg_hp_log), resid(mpg_hp_log), col = "dodgerblue",
pch = 20, cex = 1.5, xlab = "Fitted", ylab = "Residuals")
abline(h = 0, lty = 2, col = "darkorange", lwd = 2)
[Figure: log(mpg) against hp with the fitted line (left) and the corresponding fitted versus residuals plot (right).]
After performing the log transform of the response, we still have some of the
same issues with the fitted versus residuals plot. Now, we will try also log transforming
the predictor.
par(mfrow = c(1, 2))
plot(log(mpg) ~ log(hp), data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
mpg_hp_loglog = lm(log(mpg) ~ log(hp), data = autompg)
abline(mpg_hp_loglog, col = "darkorange", lwd = 2)
plot(fitted(mpg_hp_loglog), resid(mpg_hp_loglog), col = "dodgerblue",
pch = 20, cex = 1.5, xlab = "Fitted", ylab = "Residuals")
abline(h = 0, lty = 2, col = "darkorange", lwd = 2)
[Figure: log(mpg) against log(hp) with the fitted line (left) and the corresponding fitted versus residuals plot (right).]
14.2.1 Polynomials
It should come as no surprise that sales of a product are related to the advertising
budget for the product, but there are diminishing returns. A company cannot
always expect linear returns based on an increased advertising budget.
[Figure: scatterplot of sales against advertising spending for the marketing data.]
Y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i
The response y is now a linear function of “two” variables which now allows
y to be a non-linear function of the original single predictor x. We consider
this a transformation, although we have actually in some sense added another
predictor.
Thus, our X matrix is,
X = \begin{bmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 \end{bmatrix}
We can then proceed to fit the model as we have in the past for multiple linear
regression.
\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y.
Our estimates will have the usual properties. The mean is still
E[β̂] = β,
and variance
Var[\hat{\beta}] = \sigma^2 \left(X^\top X\right)^{-1}.
\hat{\beta}_j \sim N\left(\beta_j, \sigma^2 C_{jj}\right).
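The summary that follows is for the simple linear model mark_mod fit to the marketing data; a sketch of obtaining it (the data file path is an assumption):
marketing = read.csv("data/marketing.csv")
mark_mod = lm(sales ~ advert, data = marketing)
summary(mark_mod)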
##
## Call:
## lm(formula = sales ~ advert, data = marketing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.7845 -1.4762 -0.5103 1.2361 3.1869
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
While the SLR model is significant, the fitted versus residuals plot would have
a very clear pattern.
mark_mod_poly2 = lm(sales ~ advert + I(advert ^ 2), data = marketing)
summary(mark_mod_poly2)
##
## Call:
## lm(formula = sales ~ advert + I(advert^2), data = marketing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.9175 -0.8333 -0.1948 0.9292 2.1385
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.76161 0.67219 10.059 8.16e-09 ***
## advert 2.46231 0.24830 9.917 1.02e-08 ***
## I(advert^2) -0.08745 0.01658 -5.275 5.14e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.228 on 18 degrees of freedom
## Multiple R-squared: 0.9587, Adjusted R-squared: 0.9541
## F-statistic: 209 on 2 and 18 DF, p-value: 3.486e-13
To add the second order term we need to use the I() function in the model
specification around our newly created predictor. We see that with the first
order term in the model, the quadratic term is also significant.
n = length(marketing$advert)
X = cbind(rep(1, n), marketing$advert, marketing$advert ^ 2)
solve(t(X) %*% X) %*% t(X) %*% marketing$sales
## [,1]
## [1,] 6.76161045
## [2,] 2.46230964
## [3,] -0.08745394
Here we verify the parameter estimates were found as we would expect.
We could also add higher order terms, such as a third degree predictor. This is
easy to do. Our X matrix simply becomes larger again.
Y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3 + \epsilon_i
X = \begin{bmatrix} 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 & x_n^3 \end{bmatrix}
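The summary that follows is for the cubic model, referred to later as mark_mod_poly3:
mark_mod_poly3 = lm(sales ~ advert + I(advert ^ 2) + I(advert ^ 3), data = marketing)
summary(mark_mod_poly3)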
##
## Call:
## lm(formula = sales ~ advert + I(advert^2) + I(advert^3), data = marketing)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.44322 -0.61310 -0.01527 0.68131 1.22517
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.890070 0.761956 5.105 8.79e-05 ***
## advert 4.681864 0.501032 9.344 4.14e-08 ***
## I(advert^2) -0.455152 0.078977 -5.763 2.30e-05 ***
## I(advert^3) 0.016131 0.003429 4.704 0.000205 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8329 on 17 degrees of freedom
## Multiple R-squared: 0.9821, Adjusted R-squared: 0.9789
## F-statistic: 310.2 on 3 and 17 DF, p-value: 4.892e-15
Now we see that with the first and second order terms in the model, the third
order term is also significant. But does this make sense practically? The following
plot should give hints as to why it doesn’t. (The model with the third
order term doesn’t have diminishing returns!)
plot(sales ~ advert, data = marketing,
xlab = "Advert Spending (in $100,00)", ylab = "Sales (in $100,00)",
pch = 20, cex = 2)
abline(mark_mod, lty = 2, col = "green", lwd = 2)
xplot = seq(0, 16, by = 0.01)
lines(xplot, predict(mark_mod_poly2, newdata = data.frame(advert = xplot)),
col = "blue", lwd = 2)
lines(xplot, predict(mark_mod_poly3, newdata = data.frame(advert = xplot)),
col = "red", lty = 3, lwd = 3)
[Figure: sales against advertising spending with the linear (dashed green), quadratic (blue), and cubic (dotted red) fits.]
The previous plot was made using base graphics in R. The next plot was made
using the package ggplot2, an increasingly popular plotting method in R.
library(ggplot2)
ggplot(data = marketing, aes(x = advert, y = sales)) +
stat_smooth(method = "lm", se = FALSE, color = "green", formula = y ~ x) +
stat_smooth(method = "lm", se = FALSE, color = "blue", formula = y ~ x + I(x ^ 2)) +
stat_smooth(method = "lm", se = FALSE, color = "red", formula = y ~ x + I(x ^ 2)+ I(x ^ 3)) +
geom_point(colour = "black", size = 3)
[Figure: the same three fits drawn with ggplot2.]
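The summary that follows is for a degree-10 polynomial, fit_perf, fit to 11 simulated points whose true relationship is quadratic. A sketch of such a setup, where the coefficients, noise level, and seed are illustrative assumptions:
set.seed(1234)
x = seq(0, 10)                                   # 11 x values
y = 3 + x + 4 * x ^ 2 + rnorm(n = 11, sd = 20)   # truth is quadratic plus noise
fit = lm(y ~ x + I(x ^ 2))                       # quadratic fit
fit_perf = lm(y ~ x + I(x ^ 2) + I(x ^ 3) + I(x ^ 4) + I(x ^ 5) + I(x ^ 6) +
                I(x ^ 7) + I(x ^ 8) + I(x ^ 9) + I(x ^ 10))
plot(x, y, pch = 20, cex = 2)                    # scatterplot the fitted curves are added to
summary(fit_perf)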
##
## Call:
## lm(formula = y ~ x + I(x^2) + I(x^3) + I(x^4) + I(x^5) + I(x^6) +
## I(x^7) + I(x^8) + I(x^9) + I(x^10))
##
## Residuals:
## ALL 11 residuals are 0: no residual degrees of freedom!
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.114e+01 NA NA NA
## x -1.918e+03 NA NA NA
## I(x^2) 4.969e+03 NA NA NA
## I(x^3) -4.932e+03 NA NA NA
## I(x^4) 2.581e+03 NA NA NA
## I(x^5) -8.035e+02 NA NA NA
## I(x^6) 1.570e+02 NA NA NA
## I(x^7) -1.947e+01 NA NA NA
## I(x^8) 1.490e+00 NA NA NA
## I(x^9) -6.424e-02 NA NA NA
## I(x^10) 1.195e-03 NA NA NA
##
## Residual standard error: NaN on 0 degrees of freedom
## Multiple R-squared: 1, Adjusted R-squared: NaN
## F-statistic: NaN on 10 and 0 DF, p-value: NA
xplot = seq(0, 10, by = 0.1)
lines(xplot, predict(fit, newdata = data.frame(x = xplot)),
col = "dodgerblue", lwd = 2, lty = 1)
lines(xplot, predict(fit_perf, newdata = data.frame(x = xplot)),
col = "darkorange", lwd = 2, lty = 2)
[Figure: the simulated data with the curve from fit (solid) and the curve from fit_perf (dashed) drawn over the range of x.]
Notice in the summary that R could not calculate standard errors. This is a result
of having used all 11 observations to estimate 11 parameters, which leaves no
residual degrees of freedom.
In this example, the true relationship is quadratic, but the order 10 polynomial's fit is "perfect". In the next chapter we will focus on the trade-off between goodness of fit (minimizing errors) and model complexity.
Suppose you work for an automobile manufacturer which makes a large luxury
sedan. You would like to know how the car performs from a fuel efficiency
standpoint when it is driven at various speeds. Instead of testing the car at
every conceivable speed (which would be impossible) you create an experiment
where the car is driven at speeds of interest in increments of 5 miles per hour.
Our goal then, is to fit a model to this data in order to be able to predict fuel
efficiency when driving at certain speeds. The data from this example can be
found in fuel_econ.csv.
econ = read.csv("data/fuel_econ.csv")
In this example, we will frequently be looking at the fitted versus residuals plot, so we should write a function to make our life easier. Writing it is left as a homework exercise, but a sketch is given below.
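One possible version of such a helper (the name plot_econ_curve matches the calls in the code below; the axis labels and the plotting range are assumptions):

plot_econ_curve = function(model) {
  # scatterplot of the fuel economy data
  plot(mpg ~ mph, data = econ, xlab = "Speed (Miles per Hour)",
       ylab = "Fuel Efficiency (Miles per Gallon)",
       col = "dodgerblue", pch = 20, cex = 2)
  # overlay the fitted curve for the supplied model
  xplot = seq(min(econ$mph), max(econ$mph), by = 0.1)
  lines(xplot, predict(model, newdata = data.frame(mph = xplot)),
        col = "darkorange", lwd = 2, lty = 1)
}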
[Figure: fitted curve for an initial simple linear fit of mpg on mph (left) and its fitted versus residuals plot (right).]
Pretty clearly we can do better. Yes, fuel efficiency does increase as speed increases, but only up to a certain point. We will now add polynomial terms until we find a suitable fit.
fit2 = lm(mpg ~ mph + I(mph ^ 2), data = econ)
summary(fit2)
##
## Call:
## lm(formula = mpg ~ mph + I(mph^2), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.8411 -0.9694 0.0017 1.0181 3.3900
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.4444505 1.4241091 1.716 0.0984 .
## mph 1.2716937 0.0757321 16.792 3.99e-15 ***
## I(mph^2) -0.0145014 0.0008719 -16.633 4.97e-15 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.663 on 25 degrees of freedom
## Multiple R-squared: 0.9188, Adjusted R-squared: 0.9123
## F-statistic: 141.5 on 2 and 25 DF, p-value: 2.338e-14
par(mfrow = c(1, 2))
plot_econ_curve(fit2)
plot(fitted(fit2), resid(fit2), xlab = "Fitted", ylab = "Residuals",
     col = "dodgerblue", pch = 20, cex = 2)
[Figure: fitted curve for fit2 (left) and its fitted versus residuals plot (right).]
While this model clearly fits much better, and the second order term is signif-
icant, we still see a pattern in the fitted versus residuals plot which suggests
higher order terms will help. Also, we would expect the curve to flatten as
speed increases or decreases, not go sharply downward as we see here.
fit3 = lm(mpg ~ mph + I(mph ^ 2) + I(mph ^ 3), data = econ)
summary(fit3)
##
## Call:
## lm(formula = mpg ~ mph + I(mph^2) + I(mph^3), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.8112 -0.9677 0.0264 1.0345 3.3827
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.258e+00 2.768e+00 0.816 0.4227
## mph 1.291e+00 2.529e-01 5.103 3.2e-05 ***
## I(mph^2) -1.502e-02 6.604e-03 -2.274 0.0322 *
## I(mph^3) 4.066e-06 5.132e-05 0.079 0.9375
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.697 on 24 degrees of freedom
## Multiple R-squared: 0.9188, Adjusted R-squared: 0.9087
## F-statistic: 90.56 on 3 and 24 DF, p-value: 3.17e-13
[Figure: fitted curve for fit3 (left) and its fitted versus residuals plot (right).]
Adding the third order term doesn’t seem to help at all. The fitted curve hardly
changes. This makes sense, since what we would like is for the curve to flatten
at the extremes. For this we will need an even degree polynomial term.
fit4 = lm(mpg ~ mph + I(mph ^ 2) + I(mph ^ 3) + I(mph ^ 4), data = econ)
summary(fit4)
##
## Call:
## lm(formula = mpg ~ mph + I(mph^2) + I(mph^3) + I(mph^4), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.57410 -0.60308 0.04236 0.74481 1.93038
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.146e+01 2.965e+00 7.238 2.28e-07 ***
## mph -1.468e+00 3.913e-01 -3.751 0.00104 **
## I(mph^2) 1.081e-01 1.673e-02 6.463 1.35e-06 ***
## I(mph^3) -2.130e-03 2.844e-04 -7.488 1.31e-07 ***
## I(mph^4) 1.255e-05 1.665e-06 7.539 1.17e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
306 CHAPTER 14. TRANSFORMATIONS
1.5
30
Residuals
0.5
25
-0.5
20
-1.5
15
10 20 30 40 50 60 70 15 20 25 30
Now we are making progress. The fourth order term is significant with the other terms in the model. Also we are starting to see what we expected for low and high speed. However, there still seems to be a bit of a pattern in the residuals, so we will again try adding higher order terms. We will add the fifth and sixth degree terms together, since adding only the fifth would behave much like adding the third.
fit6 = lm(mpg ~ mph + I(mph ^ 2) + I(mph ^ 3) + I(mph ^ 4) + I(mph ^ 5) + I(mph ^ 6),
          data = econ)
summary(fit6)
##
## Call:
## lm(formula = mpg ~ mph + I(mph^2) + I(mph^3) + I(mph^4) + I(mph^5) +
## I(mph^6), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.1129 -0.5717 -0.1707 0.5026 1.5288
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -4.206e+00 1.204e+01 -0.349 0.7304
## mph 4.203e+00 2.553e+00 1.646 0.1146
[Figure: fitted curve for fit6 (left) and its fitted versus residuals plot (right).]
Again the sixth order term is significant with the other terms in the model and
here we see less pattern in the residuals plot. Let’s now test for which of the
previous two models we prefer. We will test
$$H_0: \beta_5 = \beta_6 = 0.$$
anova(fit4, fit6)
## Analysis of Variance Table
##
## Model 1: mpg ~ mph + I(mph^2) + I(mph^3) + I(mph^4)
## Model 2: mpg ~ mph + I(mph^2) + I(mph^3) + I(mph^4) + I(mph^5) + I(mph^6)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     23 19.922
## 2     21 15.739  2    4.1828 2.7905 0.0842 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
So, this test does not reject the null hypothesis at a significance level of α = 0.05; however, the p-value is still rather small, and the fitted versus residuals plot is much better for the model with the sixth order term. This makes the sixth order model a good choice. We could repeat this process one more time.
fit8 = lm(mpg ~ mph + I(mph ^ 2) + I(mph ^ 3) + I(mph ^ 4) + I(mph ^ 5)
+ I(mph ^ 6) + I(mph ^ 7) + I(mph ^ 8), data = econ)
summary(fit8)
##
## Call:
## lm(formula = mpg ~ mph + I(mph^2) + I(mph^3) + I(mph^4) + I(mph^5) +
## I(mph^6) + I(mph^7) + I(mph^8), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.21938 -0.50464 -0.09105 0.49029 1.45440
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.202e+01 7.045e+01 -0.171 0.866
## mph 6.021e+00 2.014e+01 0.299 0.768
## I(mph^2) -5.037e-01 2.313e+00 -0.218 0.830
## I(mph^3) 2.121e-02 1.408e-01 0.151 0.882
## I(mph^4) -4.008e-04 5.017e-03 -0.080 0.937
## I(mph^5) 1.789e-06 1.080e-04 0.017 0.987
## I(mph^6) 4.486e-08 1.381e-06 0.032 0.974
## I(mph^7) -6.456e-10 9.649e-09 -0.067 0.947
## I(mph^8) 2.530e-12 2.835e-11 0.089 0.930
##
## Residual standard error: 0.9034 on 19 degrees of freedom
## Multiple R-squared: 0.9818, Adjusted R-squared: 0.9741
## F-statistic: 128.1 on 8 and 19 DF, p-value: 7.074e-15
par(mfrow = c(1, 2))
plot_econ_curve(fit8)
plot(fitted(fit8), resid(fit8), xlab = "Fitted", ylab = "Residuals",
     col = "dodgerblue", pch = 20, cex = 2)
abline(h = 0, col = "darkorange", lwd = 2)
[Figure: fitted curve for fit8, Fuel Efficiency (Miles per Gallon) versus speed (left), and its fitted versus residuals plot (right).]
anova(fit6, fit8)
The seventh and eighth order terms are not significant (see the summary above), so we stick with the sixth order model. An alternative way to specify polynomial terms is the poly() function, which by default uses orthogonal polynomials rather than the raw powers supplied through I(). We first verify that this method produces the same fitted values as fit6 (a comparison with all.equal() returns TRUE). However, the estimated coefficients are different.
summary(fit6)
##
## Call:
## lm(formula = mpg ~ mph + I(mph^2) + I(mph^3) + I(mph^4) + I(mph^5) +
## I(mph^6), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.1129 -0.5717 -0.1707 0.5026 1.5288
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -4.206e+00 1.204e+01 -0.349 0.7304
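The summary below corresponds to the same degree 6 model fit with poly(). A sketch of the call that would produce it (the formula is taken from the Call line; the object name fit6_alt is an assumption):

fit6_alt = lm(mpg ~ poly(mph, 6), data = econ)
summary(fit6_alt)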
##
## Call:
## lm(formula = mpg ~ poly(mph, 6), data = econ)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.1129 -0.5717 -0.1707 0.5026 1.5288
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 24.40714 0.16360 149.184 < 2e-16 ***
## poly(mph, 6)1 4.16770 0.86571 4.814 9.31e-05 ***
## poly(mph, 6)2 -27.66686 0.86571 -31.958 < 2e-16 ***
## poly(mph, 6)3 0.13447 0.86571 0.155 0.878
## poly(mph, 6)4 7.01671 0.86571 8.105 6.68e-08 ***
## poly(mph, 6)5 0.09289 0.86571 0.107 0.916
## poly(mph, 6)6 -2.04308 0.86571 -2.360 0.028 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.8657 on 21 degrees of freedom
## Multiple R-squared: 0.9815, Adjusted R-squared: 0.9762
## F-statistic: 186 on 6 and 21 DF, p-value: < 2.2e-16
Notice though that the p-value for testing the degree 6 term is the same. Because
of this, for the most part we can use these interchangeably.
To use poly() to obtain the same results as using I() repeatedly, we would
need to set raw = TRUE.
fit6_alt2 = lm(mpg ~ poly(mph, 6, raw = TRUE), data = econ)
coef(fit6_alt2)
We've now seen how to transform predictor and response variables. In this chapter we have mostly focused on using these transformations in the context of fixing SLR models. However, these concepts can easily be used together with categorical variables and interactions to build larger, more flexible models. In the next chapter, we will discuss how to choose a good model from a collection of possible models.
Material below here is currently being merged into the content above.
Response Transformations
initech = read.csv("data/initech.csv")
plot(salary ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
[Figure: Salaries at Initech, By Seniority; salary versus years.]
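The summary below is for a simple linear regression of salary on years. A sketch of the call (the formula is taken from the Call line; the object name initech_fit matches the abline() call further down):

initech_fit = lm(salary ~ years, data = initech)
summary(initech_fit)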
##
## Call:
## lm(formula = salary ~ years, data = initech)
##
## Residuals:
## Min 1Q Median 3Q Max
## -57225 -18104 241 15589 91332
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5302 5750 0.922 0.359
## years 8637 389 22.200 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 27360 on 98 degrees of freedom
## Multiple R-squared: 0.8341, Adjusted R-squared: 0.8324
## F-statistic: 492.8 on 1 and 98 DF, p-value: < 2.2e-16
plot(salary ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
abline(initech_fit, col = "darkorange", lwd = 2)
[Figure: Salaries at Initech, By Seniority, with the fitted line from initech_fit added.]
[Figure: fitted versus residuals plot and normal Q-Q plot for initech_fit.]
$$\log(Y_i) = \beta_0 + \beta_1 x_i + \epsilon_i$$
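The plot below adds the fitted line from this log-response model. A sketch of the fit (the object name initech_fit_log is taken from the code that follows):

initech_fit_log = lm(log(salary) ~ years, data = initech)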
plot(log(salary) ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
abline(initech_fit_log, col = "darkorange", lwd = 2)
[Figure: log(salary) versus years with the fitted line from initech_fit_log.]
$$Y_i = \exp(\beta_0 + \beta_1 x_i) \cdot \exp(\epsilon_i)$$
plot(salary ~ years, data = initech, col = "grey", pch = 20, cex = 1.5,
main = "Salaries at Initech, By Seniority")
curve(exp(initech_fit_log$coef[1] + initech_fit_log$coef[2] * x),
from = 0, to = 30, add = TRUE, col = "darkorange", lwd = 2)
[Figure: salary versus years with the back-transformed (exponentiated) fitted curve from the log model.]
[Figure: fitted versus residuals plot and normal Q-Q plot for the log-transformed salary model.]
To compare the two models on the original scale of the response, we compute the RMSE of each; for the log model we must exponentiate its fitted values before comparing them to salary.
sqrt(mean(resid(initech_fit) ^ 2))
## [1] 27080.16
sqrt(mean(resid(initech_fit_log) ^ 2))
## [1] 0.1934907
sqrt(mean((initech$salary - fitted(initech_fit)) ^ 2))
## [1] 27080.16
sqrt(mean((initech$salary - exp(fitted(initech_fit_log))) ^ 2))
## [1] 24280.36
Predictor Transformations
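The summary below is for a simple linear fit to simulated quadratic data. A sketch of the call (the object name lin_fit is taken from the plotting code that follows; the simulation code for quad_data is not shown in this extract):

lin_fit = lm(y ~ x, data = quad_data)
summary(lin_fit)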
##
## Call:
## lm(formula = y ~ x, data = quad_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -20.363 -7.550 -3.416 8.472 26.181
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -18.3271 1.5494 -11.83 <2e-16 ***
## x 24.8716 0.5343 46.55 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 10.79 on 198 degrees of freedom
## Multiple R-squared: 0.9163, Adjusted R-squared: 0.9158
## F-statistic: 2167 on 1 and 198 DF, p-value: < 2.2e-16
plot(y ~ x, data = quad_data, col = "grey", pch = 20, cex = 1.5,
main = "Simulated Quadratic Data")
abline(lin_fit, col = "darkorange", lwd = 2)
[Figure: Simulated Quadratic Data with the fitted line from lin_fit.]
[Figure: fitted versus residuals plot and normal Q-Q plot for lin_fit.]
$$Y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$
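The summary below is for the quadratic fit to the same data. A sketch of the call (the formula is taken from the Call line; the object name quad_fit matches the curve() call further down):

quad_fit = lm(y ~ x + I(x ^ 2), data = quad_data)
summary(quad_fit)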
##
## Call:
## lm(formula = y ~ x + I(x^2), data = quad_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.4167 -3.0581 0.2297 3.1024 12.1256
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.0649 0.9577 3.200 0.0016 **
## x -0.5108 0.8637 -0.591 0.5549
## I(x^2) 5.0740 0.1667 30.433 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.531 on 197 degrees of freedom
## Multiple R-squared: 0.9853, Adjusted R-squared: 0.9852
## F-statistic: 6608 on 2 and 197 DF, p-value: < 2.2e-16
plot(y ~ x, data = quad_data, col = "grey", pch = 20, cex = 1.5,
main = "Simulated Quadratic Data")
curve(quad_fit$coef[1] + quad_fit$coef[2] * x + quad_fit$coef[3] * x ^ 2,
from = -5, to = 30, add = TRUE, col = "darkorange", lwd = 2)
[Figure: Simulated Quadratic Data with the fitted curve from quad_fit.]
[Figure: fitted versus residuals plot and normal Q-Q plot for quad_fit.]
sim_for_perf = function() {
x = seq(0, 10)
y = 3 + x - 4 * x ^ 2 + rnorm(n = 11, mean = 0, sd = 25)
data.frame(x, y)
}
set.seed(1234)
data_for_perf = sim_for_perf()
[Figure: the data simulated by sim_for_perf(), y versus x.]
$$Y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$
set.seed(42)
data_higher = sim_higher()
[Figure: the data simulated by sim_higher(), plotted in two panels with y versus x.]
[Figure: fitted versus residuals plots and normal Q-Q plots for two of the polynomial fits to the sim_higher() data.]
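The comparisons below reference nested polynomial fits whose definitions are not shown in this extract. A sketch consistent with the later calls (the fit_6 call below uses poly(), so fit_2 and fit_4 are assumed to as well):

fit_2 = lm(y ~ poly(x, 2), data = data_higher)
fit_4 = lm(y ~ poly(x, 4), data = data_higher)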
anova(fit_2, fit_4)
## 1 247 2334.1
## 2 245 1912.6 2 421.51 26.997 2.536e-11 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fit_6 = lm(y ~ poly(x, 6), data = data_higher)
anova(fit_4, fit_6)
coef(fit_4a)
unname(coef(fit_4b))
## [1] TRUE
all.equal(resid(fit_4a),
resid(fit_4b))
## [1] TRUE
summary(fit_4a)
##
## Call:
## lm(formula = y ~ poly(x, degree = 4), data = data_higher)
##
## Residuals:
## Min 1Q Median 3Q Max
## -7.6982 -2.0334 0.0042 1.9532 7.4626
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.9800 0.1767 -11.205 < 2e-16 ***
## poly(x, degree = 4)1 -2.0539 2.7940 -0.735 0.463
## poly(x, degree = 4)2 -49.3448 2.7940 -17.661 < 2e-16 ***
## poly(x, degree = 4)3 0.6699 2.7940 0.240 0.811
## poly(x, degree = 4)4 20.5198 2.7940 7.344 3.06e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.794 on 245 degrees of freedom
## Multiple R-squared: 0.5993, Adjusted R-squared: 0.5928
## F-statistic: 91.61 on 4 and 245 DF, p-value: < 2.2e-16
summary(fit_4c)
##
## Call:
## lm(formula = y ~ x + I(x^2) + I(x^3) + I(x^4), data = data_higher)
##
## Residuals:
The coefficient comparisons below illustrate R's inhibit function, I(). Inside a model formula, operators such as ^, *, and + have special formula meanings, so arithmetic like squaring a predictor must be wrapped in I() (or supplied via poly()) to be treated as arithmetic.
coef(lm(y ~ x, data = quad_data))
## (Intercept)           x
##   -18.32715    24.87163
coef(lm(y ~ x + I(x ^ 2), data = quad_data))
## (Intercept) x I(x^2)
## 3.0649446 -0.5108131 5.0739805
coef(lm(y ~ x + x:x, data = quad_data))
## (Intercept) x
## -18.32715 24.87163
coef(lm(y ~ x * x, data = quad_data))
## (Intercept) x
## -18.32715 24.87163
coef(lm(y ~ x ^ 2, data = quad_data))
## (Intercept) x
## -18.32715 24.87163
coef(lm(y ~ x + x ^ 2, data = quad_data))
## (Intercept) x
## -18.32715 24.87163
coef(lm(y ~ I(x + x), data = quad_data))
## (Intercept)    I(x + x)
##   -18.32715    12.43582
coef(lm(y ~ x + x, data = quad_data))
## (Intercept) x
## -18.32715 24.87163
pairs(autompg)
[Figure: pairs plot of the autompg data (mpg, cyl, disp, hp, wt, acc, year, origin, domestic).]
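The plot below adds the line from a simple linear fit of mpg on hp. A sketch of that fit (the object name mpg_hp is taken from the abline() call):

mpg_hp = lm(mpg ~ hp, data = autompg)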
plot(mpg ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
abline(mpg_hp, col = "darkorange", lwd = 2)
[Figure: mpg versus hp with the fitted line from mpg_hp (left) and the corresponding fitted versus residuals plot (right).]
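The next plots use a model with a log-transformed response, as suggested by the log(mpg) plot further down. A sketch of that fit (the formula is an assumption inferred from the name mpg_hp_log and the later log(mpg) plot):

mpg_hp_log = lm(log(mpg) ~ hp, data = autompg)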
plot(mpg ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
xplot = seq(min(autompg$hp), max(autompg$hp), by = 0.1)
lines(xplot, exp(predict(mpg_hp_log, newdata = data.frame(hp = xplot))),
col = "darkorange", lwd = 2, lty = 1)
[Figure: mpg versus hp with the fitted curve (left) and a fitted versus residuals plot (right).]
plot(log(mpg) ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
xplot = seq(min(autompg$hp), max(autompg$hp), by = 0.1)
lines(xplot, predict(mpg_hp_log, newdata = data.frame(hp = xplot)),
col = "darkorange", lwd = 2, lty = 1)
[Figure: log(mpg) versus hp with the fitted line (left) and a fitted versus residuals plot (right).]
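The final variant transforms both the response and the predictor. A sketch of that fit (the formula matches the plot and abline() call below; the object name mpg_hp_loglog is taken from that call):

mpg_hp_loglog = lm(log(mpg) ~ log(hp), data = autompg)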
plot(log(mpg) ~ log(hp), data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
abline(mpg_hp_loglog, col = "darkorange", lwd = 2)
[Figure: log(mpg) versus log(hp) with the fitted line (left) and a fitted versus residuals plot (right).]
[Figure: normal Q-Q plot of residuals.]
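The summary below is for a larger model with interactions and quadratic terms. A sketch of the call (the formula is taken from the Call line; the object name bigger_model matches the qqnorm() call that follows):

bigger_model = lm(log(mpg) ~ disp * hp * domestic + I(disp ^ 2) + I(hp ^ 2),
                  data = autompg)
summary(bigger_model)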
##
## Call:
## lm(formula = log(mpg) ~ disp * hp * domestic + I(disp^2) + I(hp^2),
## data = autompg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.40381 -0.08635 -0.01040 0.09995 0.71365
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.276e+00 2.564e-01 16.674 <2e-16 ***
## disp -5.289e-03 2.565e-03 -2.062 0.0399 *
## hp -7.386e-03 3.309e-03 -2.232 0.0262 *
## domestic -2.496e-01 2.787e-01 -0.896 0.3710
## I(disp^2) 8.552e-06 4.141e-06 2.065 0.0396 *
## I(hp^2) -1.565e-05 1.679e-05 -0.932 0.3519
## disp:hp 2.685e-05 3.082e-05 0.871 0.3842
## disp:domestic -1.101e-03 2.526e-03 -0.436 0.6631
## hp:domestic 7.560e-03 3.689e-03 2.049 0.0411 *
## disp:hp:domestic -2.311e-05 2.662e-05 -0.868 0.3859
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1507 on 373 degrees of freedom
## Multiple R-squared: 0.8107, Adjusted R-squared: 0.8062
## F-statistic: 177.5 on 9 and 373 DF, p-value: < 2.2e-16
qqnorm(resid(bigger_model), col = "darkgrey")
qqline(resid(bigger_model), col = "dodgerblue", lwd = 2)
[Figure: normal Q-Q plot of the residuals of bigger_model.]
14.3 R Markdown
The R Markdown file for this chapter can be found here:
• transformations.Rmd
The file was created using R version 3.6.1.
Chapter 15
Collinearity
Notice that the way we are generating this data, the response y only really
depends on x1 and x2 .
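A sketch of the (not shown) data-generating function. The function name and the sample size of 100 match the call and the residual degrees of freedom below, and the exact linear dependence x3 = 3 + 2 * x1 + 4 * x2 can be recovered from the coefficients of fit2 and fit3 later in this section; the distributions of x1 and x2 and the noise level are assumptions.

gen_exact_collin_data = function(num_samples = 100) {
  x1 = rnorm(n = num_samples, mean = 80, sd = 10)  # assumed distribution
  x2 = rnorm(n = num_samples, mean = 70, sd = 5)   # assumed distribution
  x3 = 3 + 2 * x1 + 4 * x2                         # exact linear combination of x1 and x2
  y  = 3 + x1 + x2 + rnorm(n = num_samples)        # y depends only on x1 and x2
  data.frame(y, x1, x2, x3)
}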
set.seed(42)
exact_collin_data = gen_exact_collin_data()
head(exact_collin_data)
## y x1 x2 x3
## 1 170.7135 93.70958 76.00483 494.4385
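The summary below comes from regressing y on all three predictors. A sketch of the call (the formula is taken from the Call line; the object name exact_collin_fit is an assumption):

exact_collin_fit = lm(y ~ x1 + x2 + x3, data = exact_collin_data)
summary(exact_collin_fit)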
##
## Call:
## lm(formula = y ~ x1 + x2 + x3, data = exact_collin_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.57662 -0.66188 -0.08253 0.63706 2.52057
##
## Coefficients: (1 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.957336 1.735165 1.704 0.0915 .
## x1 0.985629 0.009788 100.702 <2e-16 ***
## x2 1.017059 0.022545 45.112 <2e-16 ***
## x3 NA NA NA NA
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.014 on 97 degrees of freedom
## Multiple R-squared: 0.9923, Adjusted R-squared: 0.9921
## F-statistic: 6236 on 2 and 97 DF, p-value: < 2.2e-16
We see that R simply decides to exclude a variable. Why is this happening?
X = cbind(1, as.matrix(exact_collin_data[,-1]))
solve(t(X) %*% X)
If we attempt to find $\hat{\beta}$ using $\left(X^\top X\right)^{-1}$, we see that this is not possible, due to the fact that the columns of $X$ are linearly dependent. The previous lines of code were not run, because they produce an error!
When this happens, we say there is exact collinearity in the dataset.
As a result of this issue, R essentially chose to fit the model y ~ x1 + x2. How-
ever notice that two other models would accomplish exactly the same fit.
fit1 = lm(y ~ x1 + x2, data = exact_collin_data)
fit2 = lm(y ~ x1 + x3, data = exact_collin_data)
fit3 = lm(y ~ x2 + x3, data = exact_collin_data)
We see that the fitted values for each of the three models are exactly the same. This is a result of x3 containing all of the information from x1 and x2. As long as one of x1 or x2 is included in the model, x3 can be used to recover the information from the variable not included.
all.equal(fitted(fit1), fitted(fit2))
## [1] TRUE
all.equal(fitted(fit2), fitted(fit3))
## [1] TRUE
While their fitted values are all the same, their estimated coefficients are wildly different; the estimated sign of x2 even switches between fit1 and fit3! So while only fit1 properly explains the relationship between the variables, fit2 and fit3 still predict as well as fit1, despite their coefficients having little to no meaning, a concept we will return to later.
coef(fit1)
## (Intercept) x1 x2
## 2.9573357 0.9856291 1.0170586
coef(fit2)
## (Intercept) x1 x3
## 2.1945418 0.4770998 0.2542647
coef(fit3)
## (Intercept) x2 x3
## 1.4788921 -0.9541995 0.4928145
15.2 Collinearity
The seatpos data from the faraway package records body measurements for a sample of drivers, along with hipcenter, the location of the driver's hip center when seated in the car. This is potentially useful information for car manufacturers considering comfort and safety when designing vehicles.

We will attempt to fit a model that predicts hipcenter. Two predictor variables are immediately interesting to us: HtShoes and Ht. We certainly expect a person's height to be highly correlated with their height when wearing shoes. We'll pay special attention to these two variables when fitting models.
library(faraway)
pairs(seatpos, col = "dodgerblue")
[Figure: pairs plot of the seatpos data (Age, Weight, HtShoes, Ht, Seated, Arm, Thigh, Leg, hipcenter).]
round(cor(seatpos), 2)
##             Age Weight HtShoes   Ht Seated  Arm Thigh  Leg hipcenter
## Seated    -0.17   0.78    0.93 0.93   1.00 0.63  0.61 0.81     -0.73
## Arm 0.36 0.70 0.75 0.75 0.63 1.00 0.67 0.75 -0.59
## Thigh 0.09 0.57 0.72 0.73 0.61 0.67 1.00 0.65 -0.59
## Leg -0.04 0.78 0.91 0.91 0.81 0.75 0.65 1.00 -0.79
## hipcenter 0.21 -0.64 -0.80 -0.80 -0.73 -0.59 -0.59 -0.79 1.00
We can also examine these relationships numerically with the cor() function, which, when applied to a dataset, returns all pairwise correlations. Notice this is a symmetric matrix. Recall that correlation measures the strength and direction of the linear relationship between two variables. The correlation between Ht and HtShoes is extremely high. So high that, rounded to two decimal places, it appears to be 1!
Unlike exact collinearity, here we can still fit a model with all of the predictors,
but what effect does this have?
hip_model = lm(hipcenter ~ ., data = seatpos)
summary(hip_model)
##
## Call:
## lm(formula = hipcenter ~ ., data = seatpos)
##
## Residuals:
## Min 1Q Median 3Q Max
## -73.827 -22.833 -3.678 25.017 62.337
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 436.43213 166.57162 2.620 0.0138 *
## Age 0.77572 0.57033 1.360 0.1843
## Weight 0.02631 0.33097 0.080 0.9372
## HtShoes -2.69241 9.75304 -0.276 0.7845
## Ht 0.60134 10.12987 0.059 0.9531
## Seated 0.53375 3.76189 0.142 0.8882
## Arm -1.32807 3.90020 -0.341 0.7359
## Thigh -1.14312 2.66002 -0.430 0.6706
## Leg -6.43905 4.71386 -1.366 0.1824
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 37.72 on 29 degrees of freedom
## Multiple R-squared: 0.6866, Adjusted R-squared: 0.6001
## F-statistic: 7.94 on 8 and 29 DF, p-value: 1.306e-05
One of the first things we should notice is that the F -test for the regression tells
us that the regression is significant, however each individual predictor is not.
Another interesting result is the opposite signs of the coefficients for Ht and
HtShoes. This should seem rather counter-intuitive. Increasing Ht increases
hipcenter, but increasing HtShoes decreases hipcenter?
This happens as a result of the predictors being highly correlated. For example, the HtShoes variable explains a large amount of the variation in Ht. When they are both in the model, their effects on the response are lessened individually, but together they still explain a large portion of the variation of hipcenter.
We define $R_j^2$ to be the proportion of observed variation in the $j$-th predictor explained by the other predictors. In other words, $R_j^2$ is the multiple R-squared for the regression of $x_j$ on each of the other predictors.
ht_shoes_model = lm(HtShoes ~ . - hipcenter, data = seatpos)
summary(ht_shoes_model)$r.squared
## [1] 0.9967472
Here we see that the other predictors explain 99.67% of the variation in HtShoes. When fitting this model, we removed hipcenter since it is not a predictor.
$$\text{Var}(\hat{\beta}_j) = \sigma^2 C_{jj} = \sigma^2 \left(\frac{1}{1 - R_j^2}\right) \frac{1}{S_{x_j x_j}},$$

where

$$S_{x_j x_j} = \sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2.$$
This gives us a way to understand how collinearity affects our regression estimates. We will call

$$\frac{1}{1 - R_j^2}$$

the variance inflation factor. The variance inflation factor quantifies the effect of collinearity on the variance of our regression estimates. When $R_j^2$ is large, that is, close to 1, $x_j$ is well explained by the other predictors. With a large $R_j^2$ the variance inflation factor becomes large. This tells us that when $x_j$ is highly correlated with other predictors, our estimate of $\beta_j$ is highly variable.
The vif function from the faraway package calculates the VIFs for each of the
predictors of a model.
vif(hip_model)
Adding the noise had such a large effect that the sign of the coefficient for Ht changed.
coef(hip_model)
[Figure: predicted values from the model with added noise (Predicted, With Noise) plotted against the original predictions.]
We see that by plotting the predicted values using both models against each
other, they are actually rather similar.
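The summary below is for a much smaller model using only Age, Arm, and Ht. A sketch of the call (the formula is taken from the Call line; the object name hip_model_small matches the code later in this section):

hip_model_small = lm(hipcenter ~ Age + Arm + Ht, data = seatpos)
summary(hip_model_small)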
##
## Call:
## lm(formula = hipcenter ~ Age + Arm + Ht, data = seatpos)
##
## Residuals:
## Min 1Q Median 3Q Max
## -82.347 -24.745 -0.094 23.555 58.314
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 493.2491 101.0724 4.880 2.46e-05 ***
## Age 0.7988 0.5111 1.563 0.12735
## Arm -2.9385 3.5210 -0.835 0.40979
## Ht -3.4991 0.9954 -3.515 0.00127 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 36.12 on 34 degrees of freedom
vif(hip_model_small)
##      Age      Arm       Ht
## 1.749943 3.996766 3.508693
Also notice that, using an F-test to compare the two models, we would prefer the smaller model; a sketch of that comparison follows.
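A sketch of that F-test, assuming the two models being compared are the full hip_model and the smaller hip_model_small fit above:

anova(hip_model_small, hip_model)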
We now investigate the effect of adding another variable to this smaller model.
Specifically we want to look at adding the variable HtShoes. So now our possible
predictors are HtShoes, Age, Arm, and Ht. Our response is still hipcenter.
To quantify this effect we will look at a variable added plot and a partial correlation coefficient. For both of these, we will look at the residuals of two models: hip_model_small, which regresses hipcenter on Age, Arm, and Ht, and a model that regresses the candidate predictor HtShoes on those same three predictors (sketched below).
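A sketch of the second fit (the object name ht_shoes_model_small is taken from the cor() call below; the formula follows from the description above):

ht_shoes_model_small = lm(HtShoes ~ Age + Arm + Ht, data = seatpos)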
The correlation of these two residuals gives us the partial correlation coeffi-
cient of HtShoes and hipcenter with the effects of Age, Arm, and Ht removed.
cor(resid(ht_shoes_model_small), resid(hip_model_small))
## [1] -0.03311061
Since this value is small, close to zero, it means that the variation of hipcenter
that is unexplained by Age, Arm, and Ht shows very little correlation with the
variation of HtShoes that is not explained by Age, Arm, and Ht. Thus adding
HtShoes to the model would likely be of little benefit.
Similarly a variable added plot visualizes these residuals against each other.
It is also helpful to regress the residuals of the response against the residuals of
the predictor and add the regression line to the plot.
plot(resid(hip_model_small) ~ resid(ht_shoes_model_small),
col = "dodgerblue", pch = 20,
xlab = "Residuals, Added Predictor",
ylab = "Residuals, Original Model")
abline(h = 0, lty = 2)
abline(v = 0, lty = 2)
abline(lm(resid(hip_model_small) ~ resid(ht_shoes_model_small)),
col = "darkorange", lwd = 2)
[Figure: variable added plot; residuals of the original model against residuals of the added predictor, with the fitted regression line.]
Here the variable added plot shows almost no linear relationship. This tells us
that adding HtShoes to the model would probably not be worthwhile. Since
its variation is largely explained by the other predictors, adding it to the model
will not do much to improve the model. However it will increase the variation
of the estimates and make the model much harder to interpret.
Had there been a strong linear relationship here, thus a large partial correlation
coefficient, it would likely have been useful to add the additional predictor to
the model.
This trade-off holds quite generally. As a model gets more predictors, its errors will get smaller and its predictions will be better, but it will be harder to interpret. This is why, if we are interested in explaining the relationship between the predictors and the response, we often want a model that fits well but uses a small number of predictors with little correlation.

In the next chapter we will learn about methods to find models that both fit well and have a small number of predictors. We will also discuss overfitting: although adding additional predictors will always make errors smaller, sometimes we will be "fitting the noise" and such a model will not generalize well to additional observations.
15.3 Simulation
Here we simulate example data with and without collinearity. We will note the difference in the distribution of the estimates of the $\beta$ parameters, in particular their variance. However, we will also notice the similarity in their MSE.

We will use the model

$$Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon.$$
We will use a sample size of 10, and 2500 simulations for both situations.
sample_size = 10
num_sim = 2500
We’ll first consider the situation with a collinearity issue, so we manually create
the two predictor variables.
x1 = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
x2 = c(1, 2, 3, 4, 5, 7, 6, 10, 9, 8)
c(sd(x1), sd(x2))
## [1] 0.9393939
We perform the simulation 2500 times, each time fitting a regression model, and
storing the estimated coefficients and the MSE.
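The loop below references several objects whose definitions are not shown in this extract. A minimal sketch of the assumed setup: the true values β1 = 3 and β2 = 4 are taken from the simulation summaries later in this section, while β0 and σ are illustrative assumptions (σ = 5 is roughly consistent with the average MSE of about 17.7 reported at the end of the section).

beta_0 = 7   # assumed; not shown in the text
beta_1 = 3   # true value referenced later in this section
beta_2 = 4   # true value consistent with the simulation means shown later
sigma  = 5   # assumed noise standard deviation

true_line_bad = beta_0 + beta_1 * x1 + beta_2 * x2
beta_hat_bad  = matrix(0, num_sim, 2)
mse_bad       = rep(0, num_sim)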
for (s in 1:num_sim) {
y = true_line_bad + rnorm(n = sample_size, mean = 0, sd = sigma)
reg_out = lm(y ~ x1 + x2)
beta_hat_bad[s, ] = coef(reg_out)[-1]
mse_bad[s] = mean(resid(reg_out) ^ 2)
}
Notice that the standard deviations of each are the same as before, however,
now the correlation is extremely close to 0.
c(sd(z1), sd(z2))
## [1] 0.03030303
true_line_good = beta_0 + beta_1 * z1 + beta_2 * z2
beta_hat_good = matrix(0, num_sim, 2)
mse_good = rep(0, num_sim)
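The corresponding simulation for the low-collinearity predictors mirrors the loop above. A sketch (z1 and z2 are the predictors whose definitions are not shown in this extract):

for (s in 1:num_sim) {
  y = true_line_good + rnorm(n = sample_size, mean = 0, sd = sigma)
  reg_out = lm(y ~ z1 + z2)
  beta_hat_good[s, ] = coef(reg_out)[-1]
  mse_good[s] = mean(resid(reg_out) ^ 2)
}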
[Figure: Histogram of β̂1 with Collinearity (left) and Histogram of β̂1 without Collinearity (right).]
First, for β1 , which has a true value of 3, we see that both with and without
collinearity, the simulated values are centered near 3.
mean(beta_hat_bad[, 1])
## [1] 2.963325
mean(beta_hat_good[, 1])
## [1] 3.013414
The way the predictors were created, the $S_{x_j x_j}$ portion of the variance is the same for the predictors in both cases, but the variance is still much larger in the simulations performed with collinearity. The variance is so large in the collinear case that sometimes the estimated coefficient for $\beta_1$ is negative!
sd(beta_hat_bad[, 1])
## [1] 1.633294
sd(beta_hat_good[, 1])
## [1] 0.5484684
[Figure: Histogram of β̂2 with Collinearity (left) and Histogram of β̂2 without Collinearity (right).]
We see the same issues with β2 . On average the estimates are correct, but the
variance is again much larger with collinearity.
mean(beta_hat_bad[, 2])
## [1] 4.025059
mean(beta_hat_good[, 2])
## [1] 4.004913
sd(beta_hat_bad[, 2])
## [1] 1.642592
sd(beta_hat_good[, 2])
## [1] 0.5470381
[Figure: histograms of the simulated MSE values with collinearity (left) and without collinearity (right).]
Interestingly, in both cases, the MSE is roughly the same on average. Again, this is because collinearity affects a model's ability to explain, but not to predict.
mean(mse_bad)
## [1] 17.7186
mean(mse_good)
## [1] 17.70513
15.4 R Markdown
The R Markdown file for this chapter can be found here:
• collinearity.Rmd
The file was created using R version 3.6.1.
Chapter 16
Variable Selection and Model Building
So far, we have seen criteria such as $R^2$ and RMSE for assessing quality of fit. However, both of these have a fatal flaw. By increasing the size of a model, that is, adding predictors, these metrics can at worst stay the same. It is impossible to add a predictor to a model and make $R^2$ or RMSE worse. That means, if we were to use either of these to choose between models, we would always simply choose the larger model. Eventually we would simply be fitting to noise.
This suggests that we need a quality criterion that takes into account the size of the model, since our preference is for small models that still fit well. We are willing to sacrifice a small amount of "goodness-of-fit" to obtain a smaller model. (Here we use "goodness-of-fit" to simply mean how far the data is from the model; the smaller the errors, the better. Often in statistics, goodness-of-fit can have a more precise meaning.) We will look at three criteria that do this explicitly: AIC, BIC, and Adjusted $R^2$. We will also look at one, Cross-Validated RMSE, which implicitly considers the size of the model.
The first criteria we will discuss is the Akaike Information Criterion, or AIC for
short. (Note that, when Akaike first introduced this metric, it was simply called
An Information Criterion. The A has changed meaning over the years.)
Recall, the maximized log-likelihood of a regression model can be written as

$$\log L(\hat{\beta}, \hat{\sigma}^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\left(\frac{\text{RSS}}{n}\right) - \frac{n}{2},$$

where $\text{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$ and $\hat{\beta}$ and $\hat{\sigma}^2$ were chosen to maximize the likelihood.
Then we can define AIC as

$$\text{AIC} = -2\log L(\hat{\beta}, \hat{\sigma}^2) + 2p = n + n\log(2\pi) + n\log\left(\frac{\text{RSS}}{n}\right) + 2p,$$
which is a measure of quality of the model. The smaller the AIC, the better.
To see why, let’s talk about the two main components of AIC, the likelihood
(which measures “goodness-of-fit”) and the penalty (which is a function of the
size of the model).
The likelihood portion of AIC is given by

$$-2\log L(\hat{\beta}, \hat{\sigma}^2) = n + n\log(2\pi) + n\log\left(\frac{\text{RSS}}{n}\right).$$
For the sake of comparing models, the only term here that will change is $n\log\left(\frac{\text{RSS}}{n}\right)$, which is a function of RSS. The $n + n\log(2\pi)$ terms will be constant across all models applied to the same data. So, when a model fits well, that is, has a low RSS, this likelihood component will be small.
Similarly, we can discuss the penalty component of AIC, which is $2p$, where $p$ is the number of parameters in the model; the larger the model, the larger the penalty. Thus, for comparing models fit to the same data,

$$\text{AIC} = n\log\left(\frac{\text{RSS}}{n}\right) + 2p$$

is a sufficient expression, as $n + n\log(2\pi)$ is the same across all models for any particular dataset.
The Bayesian Information Criterion, or BIC, is similar to AIC, but has a larger
penalty. BIC also quantifies the trade-off between a model which fits well and
the number of model parameters, however for a reasonable sample size, generally
picks a smaller model than AIC. Again, for model selection use the model with
the smallest BIC.
$$\text{BIC} = -2\log L(\hat{\beta}, \hat{\sigma}^2) + \log(n)\,p = n + n\log(2\pi) + n\log\left(\frac{\text{RSS}}{n}\right) + \log(n)\,p.$$
Notice that the AIC penalty was $2p$, whereas the BIC penalty is $\log(n)\,p$. So, for any dataset where $\log(n) > 2$ (that is, $n > e^2 \approx 7.4$), the BIC penalty will be larger than the AIC penalty, and thus BIC will likely prefer a smaller model.
Note that sometimes the penalty is considered in the general form $k \cdot p$; AIC corresponds to $k = 2$ and BIC to $k = \log(n)$.
Recall,

$$R^2 = 1 - \frac{\text{SSE}}{\text{SST}} = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}.$$
We now define the Adjusted $R^2$,

$$R_a^2 = 1 - \frac{\text{SSE}/(n-p)}{\text{SST}/(n-1)} = 1 - \left(\frac{n-1}{n-p}\right)(1 - R^2).$$
Each of the previous three metrics explicitly used p, the number of parameters,
in their calculations. Thus, they all explicitly limit the size of models chosen
when used to compare models.
We’ll now briefly introduce overfitting and cross-validation.
make_poly_data = function(sample_size = 11) {
x = seq(0, 10)
y = 3 + x + 4 * x ^ 2 + rnorm(n = sample_size, mean = 0, sd = 20)
data.frame(x, y)
}
set.seed(1234)
poly_data = make_poly_data()
The simulated data therefore follow the quadratic model $Y = 3 + x + 4x^2 + \epsilon$.
We’ll now fit two models to this data, one which has the correct form, quadratic,
and one that is large, which includes terms up to and including an eighth degree.
fit_quad = lm(y ~ poly(x, degree = 2), data = poly_data)
fit_big = lm(y ~ poly(x, degree = 8), data = poly_data)
We then plot the data and the results of the two models.
plot(y ~ x, data = poly_data, ylim = c(-100, 400), cex = 2, pch = 20)
xplot = seq(0, 10, by = 0.1)
lines(xplot, predict(fit_quad, newdata = data.frame(x = xplot)),
col = "dodgerblue", lwd = 2, lty = 1)
lines(xplot, predict(fit_big, newdata = data.frame(x = xplot)),
col = "darkorange", lwd = 2, lty = 2)
[Figure: the simulated data with the quadratic fit (solid, dodgerblue) and the degree 8 fit (dashed, darkorange).]
We can see that the solid blue curve models this data rather nicely. The dashed orange curve fits the points better, making smaller errors; however, it is unlikely that it is correctly modeling the true relationship between x and y, and is instead fitting the random noise. Comparing the training RMSE of the two fits confirms that the larger model makes smaller errors on the observed data:
sqrt(mean(resid(fit_quad) ^ 2))
## [1] 17.61812
sqrt(mean(resid(fit_big) ^ 2))
## [1] 10.4197
To correct for this, we will introduce cross-validation. We define the leave-one-out cross-validated RMSE to be

$$\text{RMSE}_{\text{LOOCV}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_{[i]}^2}.$$
The $e_{[i]}$ are the residuals for the $i$th observation when that observation is not used to fit the model,

$$e_{[i]} = y_i - \hat{y}_{[i]}, \qquad \hat{y}_{[i]} = x_i^\top \hat{\beta}_{[i]},$$

where $\hat{\beta}_{[i]}$ are the estimated coefficients when the $i$th observation is removed from the dataset.
In general, to perform this calculation, we would be required to fit the model $n$ times, once with each possible observation removed. However, for leave-one-out cross-validation and linear models, the equation can be rewritten as

$$\text{RMSE}_{\text{LOOCV}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(\frac{e_i}{1 - h_i}\right)^2},$$
where hi are the leverages and ei are the usual residuals. This is great, because
now we can obtain the LOOCV RMSE by fitting only one model! In practice 5
or 10 fold cross-validation are much more popular. For example, in 5-fold cross-
validation, the model is fit 5 times, each time leaving out a fifth of the data, then
predicting on those values. We’ll leave in-depth examination of cross-validation
to a machine learning course, and simply use LOOCV here.
Let’s calculate LOOCV RMSE for both models, then discuss why we want to
do so. We first write a function which calculates the LOOCV RMSE as defined
using the shortcut formula for linear models.
calc_loocv_rmse = function(model) {
sqrt(mean((resid(model) / (1 - hatvalues(model))) ^ 2))
}
calc_loocv_rmse(fit_quad)
## [1] 23.57189
calc_loocv_rmse(fit_big)
## [1] 1334.357
Now we see that the quadratic model has a much smaller LOOCV RMSE, so we
would prefer this quadratic model. This is because the large model has severely
over-fit the data. By leaving a single data point out and fitting the large model,
the resulting fit is much different than the fit using all of the data. For example,
let’s leave out the third data point and fit both models, then plot the result.
fit_quad_removed = lm(y ~ poly(x, degree = 2), data = poly_data[-3, ])
fit_big_removed = lm(y ~ poly(x, degree = 8), data = poly_data[-3, ])
[Figure: the data with both models refit after removing the third observation.]
We see that, on average, the solid blue line for the quadratic model has similar errors as before; it has changed very slightly. However, the dashed orange line for the large model has a huge error at the point that was removed and is much different than the previous fit.

This is the purpose of cross-validation. By assessing how the model fits points that were not used to perform the regression, we get an idea of how well the model will work for future observations. It assesses how well the model works in general, not simply on the observed data.
We’ve now seen a number of model quality criteria, but now we need to address
which models to consider. Model selection involves both a quality criterion, plus
a search procedure.
library(faraway)
hipcenter_mod = lm(hipcenter ~ ., data = seatpos)
coef(hipcenter_mod)
Let’s return to the seatpos data from the faraway package. Now, let’s consider
only models with first order terms, thus no interactions and no polynomials.
There are eight predictors in this model. So if we consider all possible models,
ranging from using 0 predictors, to all eight predictors, there are
$$\sum_{k=0}^{p-1} \binom{p-1}{k} = 2^{p-1} = 2^8 = 256$$
possible models.
If we had 10 or more predictors, we would already be considering over 1000
models! For this reason, we often search through possible models in an intelligent
way, bypassing some models that are unlikely to be considered good. We will
consider three search procedures: backwards, forwards, and stepwise.
Backward selection procedures start with all possible predictors in the model, then consider how deleting a single predictor will affect a chosen metric. Let's try this on the seatpos data. We will use the step() function in R, which by default uses AIC as its metric of choice.
hipcenter_mod_back_aic = step(hipcenter_mod, direction = "backward")
## Start: AIC=283.62
## hipcenter ~ Age + Weight + HtShoes + Ht + Seated + Arm + Thigh +
## Leg
##
## Df Sum of Sq RSS AIC
## - Ht 1 5.01 41267 281.63
## - Weight 1 8.99 41271 281.63
## - Seated 1 28.64 41290 281.65
## - HtShoes 1 108.43 41370 281.72
## - Arm 1 164.97 41427 281.78
## - Thigh 1 262.76 41525 281.87
## <none> 41262 283.62
## - Age 1 2632.12 43894 283.97
## - Leg 1 2654.85 43917 283.99
##
## Step: AIC=281.63
## hipcenter ~ Age + Weight + HtShoes + Seated + Arm + Thigh + Leg
##
## Df Sum of Sq RSS AIC
## - Weight 1 11.10 41278 279.64
## - Seated 1 30.52 41297 279.66
R is computing

$$\text{AIC} = n\log\left(\frac{\text{RSS}}{n}\right) + 2p,$$

where $p$ is the number of parameters. For the full model $p = 9$ (the intercept plus eight predictors), and we can verify the starting AIC reported above directly (the definitions of n and p here are implied by the output and the later code):
n = length(resid(hipcenter_mod))
(p = length(coef(hipcenter_mod)))
## [1] 9
n * log(mean(resid(hipcenter_mod) ^ 2)) + 2 * p
## [1] 283.624
Returning to the first step, R gives us a table showing the effect of deleting each of the current predictors. The - sign at the beginning of each row indicates we are considering removing that predictor. There is also a row with <none>, which is the row for keeping the current model. Notice that this row has the smallest RSS, as it is the largest model.
We see that every row above <none> has a smaller AIC than the row for <none>
with the one at the top, Ht, giving the lowest AIC. Thus we remove Ht from
the model, and continue the process.
Notice, in the second step, we start with the model hipcenter ~ Age + Weight
+ HtShoes + Seated + Arm + Thigh + Leg and the variable Ht is no longer
considered.
We continue the process until we reach the model hipcenter ~ Age + HtShoes + Leg. At this step, the row for <none> tops the list, as removing any additional variable will not improve the AIC. This is the model which is stored in hipcenter_mod_back_aic.
coef(hipcenter_mod_back_aic)
We could also search through the possible models in a backwards fashion using
BIC. To do so, we again use the step() function, but now specify k = log(n),
where n stores the number of observations in the data.
n = length(resid(hipcenter_mod))
hipcenter_mod_back_bic = step(hipcenter_mod, direction = "backward", k = log(n))
## Start: AIC=298.36
## hipcenter ~ Age + Weight + HtShoes + Ht + Seated + Arm + Thigh +
## Leg
##
## Df Sum of Sq RSS AIC
## - Ht 1 5.01 41267 294.73
## - Weight 1 8.99 41271 294.73
## - Seated 1 28.64 41290 294.75
## - HtShoes 1 108.43 41370 294.82
## - Arm 1 164.97 41427 294.88
## - Thigh 1 262.76 41525 294.97
## - Age 1 2632.12 43894 297.07
## - Leg 1 2654.85 43917 297.09
## <none> 41262 298.36
##
## Step: AIC=294.73
## hipcenter ~ Age + Weight + HtShoes + Seated + Arm + Thigh + Leg
##
## Df Sum of Sq RSS AIC
## - Weight 1 11.10 41278 291.10
## - Seated 1 30.52 41297 291.12
## - Arm 1 160.50 41427 291.24
## - Thigh 1 269.08 41536 291.34
## - HtShoes 1 971.84 42239 291.98
## - Leg 1 2664.65 43931 293.47
## - Age 1 2808.52 44075 293.59
## <none> 41267 294.73
##
## Step: AIC=291.1
## hipcenter ~ Age + HtShoes + Seated + Arm + Thigh + Leg
##
## Df Sum of Sq RSS AIC
## Step: AIC=278.73
## hipcenter ~ HtShoes
##
## Df Sum of Sq RSS AIC
## <none> 48105 278.73
## - HtShoes 1 83534 131639 313.35
The procedure is exactly the same, except at each step we look to improve the
BIC, which R still labels AIC in the output.
The variable hipcenter_mod_back_bic stores the model chosen by this proce-
dure.
coef(hipcenter_mod_back_bic)
## (Intercept) HtShoes
## 565.592659 -4.262091
We note that this model is smaller (it has fewer predictors) than the model chosen by AIC, which is what we would expect. Also note that while the two models are different, neither uses both Ht and HtShoes, which are extremely correlated.
We can use information from the summary() function to compare their Adjusted $R^2$ values. Note that either selected model performs better than the original full model.
summary(hipcenter_mod)$adj.r.squared
## [1] 0.6000855
summary(hipcenter_mod_back_aic)$adj.r.squared
## [1] 0.6531427
summary(hipcenter_mod_back_bic)$adj.r.squared
## [1] 0.6244149
We can also calculate the LOOCV RMSE for both selected models, as well as
the full model.
calc_loocv_rmse(hipcenter_mod)
## [1] 44.44564
calc_loocv_rmse(hipcenter_mod_back_aic)
## [1] 37.58473
calc_loocv_rmse(hipcenter_mod_back_bic)
## [1] 37.40564
We see that we would prefer the model chosen via BIC if using LOOCV RMSE
as our metric.
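Forward selection works in the opposite direction: it starts from a model with no predictors and considers adding one predictor at a time. The output below was produced by such a search using AIC; a sketch of the calls (the names hipcenter_mod_start and hipcenter_mod_forw_aic are taken from later code and output in this section, and the scope mirrors the BIC call shown afterwards):

hipcenter_mod_start = lm(hipcenter ~ 1, data = seatpos)
hipcenter_mod_forw_aic = step(
  hipcenter_mod_start,
  scope = hipcenter ~ Age + Weight + HtShoes + Ht + Seated + Arm + Thigh + Leg,
  direction = "forward")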
## Start: AIC=311.71
## hipcenter ~ 1
##
## Df Sum of Sq RSS AIC
## + Ht 1 84023 47616 275.07
## + HtShoes 1 83534 48105 275.45
## + Leg 1 81568 50071 276.98
## + Seated 1 70392 61247 284.63
## + Weight 1 53975 77664 293.66
## + Thigh 1 46010 85629 297.37
## + Arm 1 45065 86574 297.78
## <none> 131639 311.71
## + Age 1 5541 126098 312.07
##
## Step: AIC=275.07
## hipcenter ~ Ht
##
## Df Sum of Sq RSS AIC
## + Leg 1 2781.10 44835 274.78
## <none> 47616 275.07
## + Age 1 2353.51 45262 275.14
## + Weight 1 195.86 47420 276.91
## + Seated 1 101.56 47514 276.99
## + Arm 1 75.78 47540 277.01
## + HtShoes 1 25.76 47590 277.05
## + Thigh 1 4.63 47611 277.06
##
## Step: AIC=274.78
## hipcenter ~ Ht + Leg
##
## Df Sum of Sq RSS AIC
## + Age 1 2896.60 41938 274.24
## <none> 44835 274.78
## + Arm 1 522.72 44312 276.33
## + Weight 1 445.10 44390 276.40
## + HtShoes 1 34.11 44801 276.75
## + Thigh 1 32.96 44802 276.75
## + Seated 1 1.12 44834 276.78
##
## Step: AIC=274.24
## hipcenter ~ Ht + Leg + Age
##
## Df Sum of Sq RSS AIC
## <none> 41938 274.24
## + Thigh 1 372.71 41565 275.90
## + Arm 1 257.09 41681 276.01
## + Seated 1 121.26 41817 276.13
## + Weight 1 46.83 41891 276.20
## + HtShoes 1 13.38 41925 276.23
Again, by default R uses AIC as its quality metric when using the step() func-
tion. Also note that now the rows begin with a + which indicates addition of
predictors to the current model from any step.
hipcenter_mod_forw_bic = step(
hipcenter_mod_start,
scope = hipcenter ~ Age + Weight + HtShoes + Ht + Seated + Arm + Thigh + Leg,
direction = "forward", k = log(n))
## Start: AIC=313.35
## hipcenter ~ 1
##
## Df Sum of Sq RSS AIC
## + Ht 1 84023 47616 278.34
## + HtShoes 1 83534 48105 278.73
## + Leg 1 81568 50071 280.25
## + Seated 1 70392 61247 287.91
## + Weight 1 53975 77664 296.93
## + Thigh 1 46010 85629 300.64
## + Arm 1 45065 86574 301.06
## <none> 131639 313.35
## + Age 1 5541 126098 315.35
##
## Step: AIC=278.34
## hipcenter ~ Ht
##
summary(hipcenter_mod)$adj.r.squared
## [1] 0.6000855
summary(hipcenter_mod_forw_aic)$adj.r.squared
## [1] 0.6533055
summary(hipcenter_mod_forw_bic)$adj.r.squared
## [1] 0.6282374
We can compare the two selected models' Adjusted $R^2$ as well as their LOOCV RMSE. The results are very similar to those using backwards selection, although the models are not exactly the same.
calc_loocv_rmse(hipcenter_mod)
## [1] 44.44564
calc_loocv_rmse(hipcenter_mod_forw_aic)
## [1] 37.62516
calc_loocv_rmse(hipcenter_mod_forw_bic)
## [1] 37.2511
Stepwise search checks going both backwards and forwards at every step. It
considers the addition of any variable not currently in the model, as well as the
removal of any variable currently in the model.
Here we perform stepwise search using AIC as our metric. We start with
the model hipcenter ~ 1 and search up to hipcenter ~ Age + Weight +
HtShoes + Ht + Seated + Arm + Thigh + Leg. Notice that at many of the
steps, some rows begin with -, while others begin with +.
hipcenter_mod_both_aic = step(
hipcenter_mod_start,
scope = hipcenter ~ Age + Weight + HtShoes + Ht + Seated + Arm + Thigh + Leg,
direction = "both")
## Start: AIC=311.71
## hipcenter ~ 1
##
## Df Sum of Sq RSS AIC
## + Ht 1 84023 47616 275.07
## + HtShoes 1 83534 48105 275.45
## + Leg 1 81568 50071 276.98
## + Seated 1 70392 61247 284.63
## + Weight 1 53975 77664 293.66
## + Thigh 1 46010 85629 297.37
## + Arm 1 45065 86574 297.78
## <none> 131639 311.71
## + Age 1 5541 126098 312.07
##
## Step: AIC=275.07
## hipcenter ~ Ht
##
## Df Sum of Sq RSS AIC
## + Leg 1 2781 44835 274.78
## <none> 47616 275.07
## + Age 1 2354 45262 275.14
## + Weight 1 196 47420 276.91
## + Seated 1 102 47514 276.99
## + Arm 1 76 47540 277.01
## + HtShoes 1 26 47590 277.05
## + Thigh 1 5 47611 277.06
## - Ht 1 84023 131639 311.71
##
## Step: AIC=274.78
## hipcenter ~ Ht + Leg
##
## Df Sum of Sq RSS AIC
## + Age 1 2896.6 41938 274.24
## <none> 44835 274.78
## - Leg 1 2781.1 47616 275.07
## + Arm 1 522.7 44312 276.33
## + Weight 1 445.1 44390 276.40
## + HtShoes 1 34.1 44801 276.75
## + Thigh 1 33.0 44802 276.75
## + Seated 1 1.1 44834 276.78
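The next run repeats the stepwise search using the BIC penalty. A sketch of the call that produces the output below (the object name hipcenter_mod_both_bic is taken from the comparisons that follow, and the form mirrors the AIC call above):

hipcenter_mod_both_bic = step(
  hipcenter_mod_start,
  scope = hipcenter ~ Age + Weight + HtShoes + Ht + Seated + Arm + Thigh + Leg,
  direction = "both", k = log(n))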
## Start: AIC=313.35
## hipcenter ~ 1
##
## Df Sum of Sq RSS AIC
## + Ht 1 84023 47616 278.34
## + HtShoes 1 83534 48105 278.73
## + Leg 1 81568 50071 280.25
## + Seated 1 70392 61247 287.91
## + Weight 1 53975 77664 296.93
## + Thigh 1 46010 85629 300.64
## + Arm 1 45065 86574 301.06
## <none> 131639 313.35
## + Age 1 5541 126098 315.35
##
## Step: AIC=278.34
## hipcenter ~ Ht
##
## Df Sum of Sq RSS AIC
## <none> 47616 278.34
## + Leg 1 2781 44835 279.69
## + Age 1 2354 45262 280.05
## + Weight 1 196 47420 281.82
## + Seated 1 102 47514 281.90
## + Arm 1 76 47540 281.92
summary(hipcenter_mod)$adj.r.squared
## [1] 0.6000855
summary(hipcenter_mod_both_aic)$adj.r.squared
## [1] 0.6533055
summary(hipcenter_mod_both_bic)$adj.r.squared
## [1] 0.6282374
calc_loocv_rmse(hipcenter_mod)
## [1] 44.44564
calc_loocv_rmse(hipcenter_mod_both_aic)
## [1] 37.62516
calc_loocv_rmse(hipcenter_mod_both_bic)
## [1] 37.2511
Backward, forward, and stepwise search are all useful, but do have an obvious
issue. By not checking every possible model, sometimes they will miss the best
possible model. With an extremely large number of predictors, sometimes this is
necessary since checking every possible model would be rather time consuming,
even with current computers.
However, with a reasonably sized dataset, it isn’t too difficult to check all possi-
ble models. To do so, we will use the regsubsets() function in the R package
leaps.
library(leaps)
all_hipcenter_mod = summary(regsubsets(hipcenter ~ ., data = seatpos))
A few points about this line of code. First, note that we immediately
use summary() and store those results. That is simply the intended use
of regsubsets(). Second, inside of regsubsets() we specify the model
hipcenter ~ .. This will be the largest model considered, that is the model
using all first-order predictors, and R will check all possible subsets.
Using $which gives us the best model, according to RSS, for a model of each
possible size, in this case ranging from one to eight predictors. For example the
best model with four predictors (p = 5) would use Age, HtShoes, Thigh, and
Leg.
all_hipcenter_mod$rss
We can obtain the RSS for each of these models using $rss. Notice that these
are decreasing since the models range from small to large.
Now that we have the RSS for each of these models, it is rather easy to obtain AIC, BIC, and Adjusted $R^2$, since they are all functions of RSS. Also, since we have the models with the best RSS for each size, they will result in the models with the best AIC, BIC, and Adjusted $R^2$ for each size. Then by picking from those, we can find the overall best AIC, BIC, and Adjusted $R^2$.
To find which model has the highest Adjusted R2 we can use the which.max()
function.
(best_r2_ind = which.max(all_hipcenter_mod$adjr2))
## [1] 3
all_hipcenter_mod$which[best_r2_ind, ]
We'll use the form of AIC which leaves out the constant term that is equal across all models:

$$\text{AIC} = n\log\left(\frac{\text{RSS}}{n}\right) + 2p.$$
Since we have the RSS of each model stored, this is easy to calculate.
hipcenter_mod_aic = n * log(all_hipcenter_mod$rss / n) + 2 * (2:p)
We can then extract the predictors of the model with the best AIC.
best_aic_ind = which.min(hipcenter_mod_aic)
all_hipcenter_mod$which[best_aic_ind,]
The extractAIC() function will calculate the AIC defined above for a fitted
model.
extractAIC(hipcenter_mod_best_aic)
[Figure: AIC plotted against p, the number of parameters, for the best model of each size.]
$$\text{BIC} = n\log\left(\frac{\text{RSS}}{n}\right) + \log(n)\,p.$$
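The which.min() call below assumes a vector of BIC values computed in the same way as the AIC values above; a sketch (the object name hipcenter_mod_bic is taken from that call):

hipcenter_mod_bic = n * log(all_hipcenter_mod$rss / n) + log(n) * (2:p)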
which.min(hipcenter_mod_bic)
## [1] 1
all_hipcenter_mod$which[1,]
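The extractAIC() call below references a fitted model for the BIC-best subset. Since the best model has a single predictor and the single-predictor model with the lowest RSS uses Ht (see the search output above), a sketch of that fit (the object name hipcenter_mod_best_bic is taken from the call) is:

hipcenter_mod_best_bic = lm(hipcenter ~ Ht, data = seatpos)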
extractAIC(hipcenter_mod_best_bic, k = log(n))
So far we have only allowed first-order terms in our models. Let’s return to the
autompg dataset to explore higher-order terms.
autompg = read.table(
"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data",
quote = "\"",
comment.char = "",
stringsAsFactors = FALSE)
colnames(autompg) = c("mpg", "cyl", "disp", "hp", "wt", "acc",
"year", "origin", "name")
autompg = subset(autompg, autompg$hp != "?")
autompg = subset(autompg, autompg$name != "plymouth reliant")
rownames(autompg) = paste(autompg$cyl, "cylinder", autompg$year, autompg$name)
autompg$hp = as.numeric(autompg$hp)
autompg$domestic = as.numeric(autompg$origin == 1)
autompg = autompg[autompg$cyl != 5,]
autompg = autompg[autompg$cyl != 3,]
autompg$cyl = as.factor(autompg$cyl)
autompg$domestic = as.factor(autompg$domestic)
autompg = subset(autompg, select = c("mpg", "cyl", "disp", "hp",
"wt", "acc", "year", "domestic"))
str(autompg)
Recall that we have two factor variables, cyl and domestic. The cyl variable
has three levels, while the domestic variable has only two. Thus the cyl variable
will be coded using two dummy variables, while the domestic variable will only
need one. We will pay attention to this later.
pairs(autompg, col = "dodgerblue")
[Figure: pairs plot of the processed autompg data (mpg, cyl, disp, hp, wt, acc, year, domestic).]
We'll use the pairs() plot to determine which variables may benefit from a quadratic relationship with the response. We'll also consider all possible two-way interactions. We won't consider any three-way or higher-order interactions. For example, we won't consider the interaction between first-order terms and the added quadratic terms.
So now, we’ll fit this rather large model. We’ll use a log-transformed response.
Notice that log(mpg) ~ . ^ 2 will automatically consider all first-order terms,
as well as all two-way interactions. We use I(var_name ^ 2) to add quadratic
terms for some variables. This generally works better than using poly() when
performing variable selection.
autompg_big_mod = lm(
log(mpg) ~ . ^ 2 + I(disp ^ 2) + I(hp ^ 2) + I(wt ^ 2) + I(acc ^ 2),
data = autompg)
We think it is rather unlikely that we truly need all of these terms. There are
quite a few!
length(coef(autompg_big_mod))
## [1] 40
We’ll try backwards search with both AIC and BIC to attempt to find a smaller,
more reasonable model.
autompg_mod_back_aic = step(autompg_big_mod, direction = "backward", trace = 0)
Notice that we used trace = 0 in the function call. This suppresses the output for each step, and simply stores the chosen model. This is useful, as this code would otherwise create a large amount of output. If we had viewed the output, which you can try on your own by removing trace = 0, we would see that R only considers the cyl variable as a single variable, despite the fact that it is coded using two dummy variables. So removing cyl would actually remove two parameters from the resulting model.
You should also notice that R respects hierarchy when attempting to remove
variables. That is, for example, R will not consider removing hp if hp:disp or
I(hp ^ 2) are currently in the model.
We also use BIC.
n = length(resid(autompg_big_mod))
autompg_mod_back_bic = step(autompg_big_mod, direction = "backward",
k = log(n), trace = 0)
Looking at the coefficients of the two chosen models, we see they are still rather
large.
coef(autompg_mod_back_aic)
However, they are much smaller than the original full model. Also notice that
the resulting models respect hierarchy.
length(coef(autompg_big_mod))
## [1] 40
length(coef(autompg_mod_back_aic))
## [1] 19
length(coef(autompg_mod_back_bic))
## [1] 15
Calculating the LOOCV RMSE for each, we see that the model chosen using BIC performs the best. That means it is not only the best model for prediction, since it achieves the lowest LOOCV RMSE, but also the best model for explanation, as it is the smallest.
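As a reminder, calc_loocv_rmse() was defined in an earlier chapter; a minimal sketch of such a helper, assuming the usual leverage-based shortcut for linear models, is:
calc_loocv_rmse = function(model) {
  # LOOCV RMSE without refitting: scale each residual by (1 - leverage)
  sqrt(mean((resid(model) / (1 - hatvalues(model))) ^ 2))
}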
calc_loocv_rmse(autompg_big_mod)
## [1] 0.1112024
calc_loocv_rmse(autompg_mod_back_aic)
## [1] 0.1032888
calc_loocv_rmse(autompg_mod_back_bic)
## [1] 0.103134
##
## Call:
## lm(formula = log(mpg) ~ cyl + disp + hp + wt + acc + year + domestic +
## cyl:acc + disp:wt + hp:acc + acc:year + acc:domestic, data = autompg)
##
## Coefficients:
## (Intercept) cyl6 cyl8 disp hp
## 4.658e+00 -1.086e-01 -7.612e-01 -1.609e-03 2.621e-03
## wt acc year domestic1 cyl6:acc
## -2.636e-04 -1.671e-01 -1.046e-02 3.342e-01 4.315e-03
## cyl8:acc disp:wt hp:acc acc:year acc:domestic1
## 4.610e-02 4.103e-07 -3.386e-04 2.500e-03 -2.193e-02
16.4.1 Explanation
Suppose we would like to use this model for explanation. Perhaps we are a car manufacturer trying to engineer a fuel efficient vehicle. If this is the case, we are interested both in which predictor variables are useful for explaining the car's fuel efficiency, and in how those variables affect fuel efficiency. By understanding this relationship, we can use this knowledge to our advantage when designing a car.
Note that linear models are rather interpretable to begin with. Later in your
data analysis careers, you will see more complicated models that may fit data
better, but are much harder, if not impossible to interpret. These models aren’t
very useful for explaining a relationship.
To find small and interpretable models, we would use selection criteria that explicitly penalize larger models, such as AIC and BIC. In this case we still obtained a somewhat large model, but one much smaller than the model we used to start the selection process.
A word of caution when using a model to explain a relationship. There are two
terms often used to describe a relationship between two variables: causation
and correlation. Correlation is often also referred to as association.
Just because two variables are correlated does not necessarily mean that one causes the other. For example, consider modeling mpg as only a function of hp.
plot(mpg ~ hp, data = autompg, col = "dodgerblue", pch = 20, cex = 1.5)
[Figure: scatterplot of mpg versus hp for the autompg data.]
16.4.2 Prediction
Suppose now that instead of the manufacturer who would like to build a car, we are a consumer who wishes to purchase a new car. However, this particular car is so new that it has not been rigorously tested, so we are unsure of what fuel efficiency to expect. (And, as skeptics, we don't trust what the manufacturer is telling us.)
In this case, we would like to use the model to help predict the fuel efficiency
of this car based on its attributes, which are the predictors of the model. The
smaller the errors the model makes, the more confident we are in its prediction.
Thus, to find models for prediction, we would use selection criteria that implicitly penalize larger models, such as LOOCV RMSE. So long as the model does not over-fit, we do not actually care how large the model becomes. Explaining the relationship between the variables is not our goal here; we simply want to know what kind of fuel efficiency we should expect!
If we only care about prediction, we don’t need to worry about correlation vs
causation, and we don’t need to worry about model assumptions.
If a variable is correlated with the response, it doesn't actually matter whether it causes an effect on the response; it can still be useful for prediction. For example, among elementary-school-aged children, shoe size certainly doesn't cause them to read at a higher level, yet we could very easily use shoe size to make a prediction about a child's reading ability. The larger their shoe size, the better they read. There's a lurking variable here though: their age! (Don't send your kids to school with size 14 shoes, it won't make them read better!)
We also don’t care about model assumptions. Least squares is least squares. For
a specified model, it will find the values of the parameters which will minimize
the squared error loss. Your results might be largely uninterpretable and useless
for inference, but for prediction none of that matters.
16.5 R Markdown
The R Markdown file for this chapter can be found here:
• selection.Rmd
The file was created using R version 3.6.1.
Chapter 17
Logistic Regression
Note to current readers: This chapter is slightly less tested than previous
chapters. Please do not hesitate to report any errors, or suggest sections that
need better explanation! Also, as a result, this material is more likely to receive
edits.
After reading this chapter you will be able to:
• Understand how generalized linear models are a generalization of ordinary
linear models.
• Use logistic regression to model a binary response.
• Apply concepts learned for ordinary linear models to logistic regression.
• Use logistic regression to perform classification.
So far we have only considered models for numeric response variables. What
about response variables that only take integer values? What about a response
variable that is categorical? Can we use linear models in these situations? Yes!
The model that we have been using, which we will call ordinary linear regression,
is actually a specific case of the more general, generalized linear model. (Aren’t
statisticians great at naming things?)
So far, we've had response variables that, conditioned on the predictors, were modeled using a normal distribution with a mean that is some linear combination of the predictors. This linear combination is what made a linear model "linear."
Now we’ll allow for two modifications of this situation, which will let us use linear
models in many more situations. Instead of using a normal distribution for the
response conditioned on the predictors, we’ll allow for other distributions. Also,
instead of the conditional mean being a linear combination of the predictors, it
can be some function of a linear combination of the predictors.
In general, a generalized linear model has three parts:
• A distribution of the response conditioned on the predictors. (Technically this distribution needs to be from the exponential family of distributions.)
• A linear combination of the $p - 1$ predictors, $\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_{p-1} x_{p-1}$, which we write as $\eta(x)$. That is,
$$\eta(x) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_{p-1} x_{p-1}$$
• A link function, $g()$, that connects $\eta(x)$ to the mean of the response conditioned on the predictors; that is, $\eta(x) = g(\mathrm{E}[Y \mid X = x])$.
Like ordinary linear regression, we will seek to “fit” the model by estimating
the β parameters. To do so, we will use the method of maximum likelihood.
Note that a Bernoulli distribution is a specific case of a binomial distribution where the n parameter of a binomial is 1. Binomial regression is also possible, but here we focus on the Bernoulli (binary) case.
$$\mathrm{E}[Y \mid X = x] = g^{-1}(\eta(x)).$$
p(x) = P [Y = 1 | X = x]
With a binary (Bernoulli) response, we’ll mostly focus on the case when Y = 1,
since with only two possibilities, it is trivial to obtain probabilities when Y = 0.
P [Y = 0 | X = x] + P [Y = 1 | X = x] = 1
P [Y = 0 | X = x] = 1 − p(x)
The left hand side is called the log odds, which is the log of the odds. The
odds are the probability for a positive event (Y = 1) divided by the probability
of a negative event (Y = 0). So when the odds are 1, the two events have equal
probability. Odds greater than 1 favor a positive event. The opposite is true
when the odds are less than 1.
$$\frac{p(x)}{1 - p(x)} = \frac{P[Y = 1 \mid X = x]}{P[Y = 0 \mid X = x]}$$
Essentially, the log odds are the logit transform applied to p(x).
$$\text{logit}(\xi) = \log\left(\frac{\xi}{1 - \xi}\right)$$
It will also be useful to define the inverse logit, otherwise known as the “logistic”
or sigmoid function.
$$\text{logit}^{-1}(\xi) = \frac{e^{\xi}}{1 + e^{\xi}} = \frac{1}{1 + e^{-\xi}}$$
Note that for any input in $(-\infty, \infty)$, this function outputs values between 0 and 1.
Students often ask: where is the error term? The answer is that it's something specific to the normal model. First notice that the model with the error term,
Y = β0 + β1 x1 + . . . + βq xq + ϵ, ϵ ∼ N (0, σ 2 )
can instead be written as
Y | X = x ∼ N (β0 + β1 x1 + . . . + βq xq , σ 2 ).
Note that applying the inverse logit transformation allows us to obtain an expression for p(x).
With n observations, we write the model indexed with i to note that it is being
applied to each observation.
$$\log\left(\frac{p(x_i)}{1 - p(x_i)}\right) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_{p-1} x_{i(p-1)}$$
$$1 - p(x_i) = P[Y_i = 0 \mid X = x_i] = \frac{1}{1 + e^{\beta_0 + \beta_1 x_{i1} + \cdots + \beta_{p-1} x_{i(p-1)}}}$$
To “fit” this model, that is, to estimate the β parameters, we will use maximum likelihood.
$$\boldsymbol{\beta} = [\beta_0, \beta_1, \beta_2, \beta_3, \ldots, \beta_{p-1}]$$
$$L(\boldsymbol{\beta}) = \prod_{i = 1}^{n} P[Y_i = y_i \mid X_i = x_i]$$
$$L(\boldsymbol{\beta}) = \prod_{i = 1}^{n} p(x_i)^{y_i} \left(1 - p(x_i)\right)^{(1 - y_i)}$$
$$L(\boldsymbol{\beta}) = \prod_{i : y_i = 1} p(x_i) \prod_{j : y_j = 0} \left(1 - p(x_j)\right)$$
$$x_i^\top \beta^* > 0 \implies y_i = 1$$
and
$$x_i^\top \beta^* < 0 \implies y_i = 0$$
for all observations, then the MLE is not unique. Such data is said to be
separable.
This, and similar numeric issues related to estimated probabilities near 0 or 1,
will return a warning in R:
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
When this happens, the model is still "fit," but there are consequences; namely, the estimated coefficients are highly suspect. This is an issue when trying to interpret the model. When this happens, the model will often still be useful for creating a classifier, which will be discussed later. However, it is still subject to the usual evaluations for classifiers to determine how well it is performing.
For details, see Modern Applied Statistics with S-PLUS, Chapter 7.
Throughout this section we'll simulate binary response data using a helper function. The definition below is a plausible sketch: the default values (sample_size = 25, beta_0 = -2, beta_1 = 3) are assumptions, chosen to match the model simulated below and the calls made later in the chapter.
sim_logistic_data = function(sample_size = 25, beta_0 = -2, beta_1 = 3) {
  # simulate a predictor, compute the true success probability via the
  # inverse logit, then draw a Bernoulli response for each observation
  x = rnorm(n = sample_size)
  eta = beta_0 + beta_1 * x
  p = 1 / (1 + exp(-eta))
  y = rbinom(n = sample_size, size = 1, prob = p)
  data.frame(y, x)
}
You might think, why not simply use ordinary linear regression? Even with a
binary response, our goal is still to model (some function of) E[Y | X = x].
However, with a binary response coded as 0 and 1, E[Y | X = x] = P [Y = 1 |
X = x] since
E[Y | X = x] = 1 · P [Y = 1 | X = x] + 0 · P [Y = 0 | X = x]
= P [Y = 1 | X = x]
Then why can’t we just use ordinary linear regression to estimate E[Y | X = x],
and thus P [Y = 1 | X = x]?
To investigate, let’s simulate data from the following model:
$$\log\left(\frac{p(x)}{1 - p(x)}\right) = -2 + 3x$$
Another way to write this, which better matches the function we’re using to
simulate the data:
$$Y_i \mid X_i = x_i \sim \text{Bern}(p_i)$$
$$p_i = p(x_i) = \frac{1}{1 + e^{-\eta(x_i)}}$$
$$\eta(x_i) = -2 + 3 x_i$$
set.seed(1)
example_data = sim_logistic_data()
head(example_data)
## y x
## 1 0 -0.6264538
## 2 1 0.1836433
## 3 0 -0.8356286
## 4 1 1.5952808
## 5 0 0.3295078
## 6 0 -0.8204684
After simulating a dataset, we'll then fit both ordinary linear regression and logistic regression. Notice that currently the response variable y is a numeric variable that only takes values 0 and 1. Later we'll see that we can also fit logistic regression when the response is a factor variable with only two levels. (Generally, having a factor response is preferred, but having a dummy response allows us to make the comparison to using ordinary linear regression.)
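The two model objects referenced below, fit_lm and fit_glm, were presumably created with calls along these lines (a sketch; only the object names are taken from their later use):
fit_lm  = lm(y ~ x, data = example_data)
fit_glm = glm(y ~ x, data = example_data, family = binomial)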
Fitting with glm() but without specifying a family would actually fit the ordinary linear regression that we have seen in the past: by default, glm() uses the family = gaussian argument. That is, we would be fitting a GLM with a normally distributed response and the identity function as the link.
The family argument to glm() actually specifies both the distribution and the link function. If not made explicit, the link function is chosen to be the canonical link function, which is essentially the most mathematically convenient link function. See ?glm and ?family for details. For example, the following code explicitly specifies the link function which was previously used by default.
# more detailed call to glm for logistic regression
fit_glm = glm(y ~ x, data = example_data, family = binomial(link = "logit"))
Making predictions with an object of type glm is slightly different than making
predictions after fitting with lm(). In the case of logistic regression, with family
= binomial, we have:
type               | Returned
"link" [default]   | $\hat{\eta}(x) = \log\left(\frac{\hat{p}(x)}{1 - \hat{p}(x)}\right)$
"response"         | $\hat{p}(x) = \frac{e^{\hat{\eta}(x)}}{1 + e^{\hat{\eta}(x)}} = \frac{1}{1 + e^{-\hat{\eta}(x)}}$
That is, type = "link" will get you the log odds, while type = "response" will return the estimated mean, in this case, P [Y = 1 | X = x] for each observation.
plot(y ~ x, data = example_data,
pch = 20, ylab = "Estimated Probability",
main = "Ordinary vs Logistic Regression")
grid()
abline(fit_lm, col = "darkorange")
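The dashed logistic curve and the legend described below were presumably added with something like the following (a sketch; the colors and line types are taken from the figure description):
curve(predict(fit_glm, data.frame(x = x), type = "response"),
      add = TRUE, col = "dodgerblue", lty = 2)
legend("topleft", c("Ordinary", "Logistic", "Data"),
       lty = c(1, 2, 0), pch = c(NA, NA, 20),
       col = c("darkorange", "dodgerblue", "black"))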
[Figure: "Ordinary vs Logistic Regression" — the simulated data (points), the fitted ordinary linear regression (solid orange line), and the fitted logistic regression (dashed blue curve), with estimated probability on the vertical axis.]
Since we only have a single predictor variable, we are able to show this situation graphically. First, note that the data is plotted using black dots. The response y only takes values 0 and 1.
Next, we need to discuss the two added lines to the plot. The first, the solid
orange line, is the fitted ordinary linear regression.
The dashed blue curve is the estimated logistic regression. It is helpful to realize
that we are not plotting an estimate of Y for either. (Sometimes it might seem
that way with ordinary linear regression, but that isn’t what is happening.) For
both, we are plotting Ê[Y | X = x], the estimated mean, which for a binary
response happens to be an estimate of P [Y = 1 | X = x].
We immediately see why ordinary linear regression is not a good idea. While it
is estimating the mean, we see that it produces estimates that are less than 0!
(And in other situations could produce estimates greater than 1!) If the mean
is a probability, we don’t want probabilities less than 0 or greater than 1.
Enter logistic regression. Since the output of the inverse logit function is restricted to be between 0 and 1, our estimates make much more sense as probabilities. Let's look at our estimated coefficients. (With a lot of rounding, for simplicity.)
round(coef(fit_glm), 1)
## (Intercept) x
## -2.3 3.7
Our estimated model is then:
$$\log\left(\frac{\hat{p}(x)}{1 - \hat{p}(x)}\right) = -2.3 + 3.7x$$
Because we’re not directly estimating the mean, but instead a function of the
mean, we need to be careful with our interpretation of β̂1 = 3.7. This means
that, for a one unit increase in x, the log odds change (in this case increase) by
3.7. Also, since β̂1 is positive, as we increase x we also increase p̂(x). To see
how much, we have to consider the inverse logistic function.
For example, we have:
$$\hat{P}[Y = 1 \mid X = -0.5] = \frac{e^{-2.3 + 3.7 \cdot (-0.5)}}{1 + e^{-2.3 + 3.7 \cdot (-0.5)}} \approx 0.016$$
$$\hat{P}[Y = 1 \mid X = 0] = \frac{e^{-2.3 + 3.7 \cdot (0)}}{1 + e^{-2.3 + 3.7 \cdot (0)}} \approx 0.09112296$$
$$\hat{P}[Y = 1 \mid X = 1] = \frac{e^{-2.3 + 3.7 \cdot (1)}}{1 + e^{-2.3 + 3.7 \cdot (1)}} \approx 0.8021839$$
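These quantities could also be obtained directly from the fitted model (up to the rounding of the coefficients above) using predict(); a quick check, for illustration:
predict(fit_glm, newdata = data.frame(x = c(-0.5, 0, 1)), type = "response")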
Now that we know we should use logistic regression, and not ordinary linear
regression, let’s consider another example. This time, let’s consider the model
$$\log\left(\frac{p(x)}{1 - p(x)}\right) = 1 + (-4)x.$$
Again, we could re-write this to better match the function we’re using to simulate
the data:
$$Y_i \mid X_i = x_i \sim \text{Bern}(p_i)$$
$$p_i = p(x_i) = \frac{1}{1 + e^{-\eta(x_i)}}$$
$$\eta(x_i) = 1 + (-4) x_i$$
set.seed(1)
example_data = sim_logistic_data(sample_size = 50, beta_0 = 1, beta_1 = -4)
We again simulate some observations from this model, then fit logistic regression.
fit_glm = glm(y ~ x, data = example_data, family = binomial)
[Figure: the simulated data along with the true probability and estimated probability curves for this model.]
Next, we'll simulate data where the log odds are a quadratic function of the predictor, using the helper below. Its definition is a plausible reconstruction: eta matches the model written out next, while the default sample_size is an assumption.
sim_quadratic_logistic_data = function(sample_size = 25) {
  # quadratic relationship between the predictor and the log odds
  x = rnorm(n = sample_size)
  eta = -1.5 + 0.5 * x + x ^ 2
  p = 1 / (1 + exp(-eta))
  y = rbinom(n = sample_size, size = 1, prob = p)
  data.frame(y, x)
}
$$\log\left(\frac{p(x)}{1 - p(x)}\right) = -1.5 + 0.5x + x^2.$$
Again, we could re-write this to better match the function we’re using to simulate
the data:
$$Y_i \mid X_i = x_i \sim \text{Bern}(p_i)$$
$$p_i = p(x_i) = \frac{1}{1 + e^{-\eta(x_i)}}$$
$$\eta(x_i) = -1.5 + 0.5 x_i + x_i^2$$
set.seed(42)
example_data = sim_quadratic_logistic_data(sample_size = 50)
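The figure below compares the true and estimated probability curves; the fit it shows was presumably obtained by including a quadratic term, for example:
fit_glm = glm(y ~ x + I(x ^ 2), data = example_data, family = binomial)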
[Figure: the simulated data along with the true probability and estimated probability curves for the quadratic logistic model.]
While the logistic regression model isn't exactly the same as the ordinary linear regression model, because they both use a linear combination of the predictors, working with logistic regression is very similar. Many of the things we did with ordinary linear regression can be done with logistic regression in a very similar fashion. For example,
After some introduction to the new tests, we’ll demonstrate each of these using
an example.
$$H_0: \beta_j = 0 \quad \text{vs} \quad H_1: \beta_j \neq 0$$
using a t-test.
For the logistic regression model,
$$\log\left(\frac{p(x)}{1 - p(x)}\right) = \beta_0 + \beta_1 x_1 + \ldots + \beta_{p-1} x_{p-1}$$
we can again consider testing
$$H_0: \beta_j = 0 \quad \text{vs} \quad H_1: \beta_j \neq 0$$
however, the test statistic and its distribution are no longer t. We see that the
test statistic takes the same form
$$z = \frac{\hat{\beta}_j - \beta_j}{\text{SE}[\hat{\beta}_j]} \overset{\text{approx}}{\sim} N(0, 1)$$
Suppose we have the full model
$$\log\left(\frac{p(x_i)}{1 - p(x_i)}\right) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_{(p-1)} x_{i(p-1)}$$
and a null (reduced) model
$$\log\left(\frac{p(x_i)}{1 - p(x_i)}\right) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_{(q-1)} x_{i(q-1)}$$
obtained under the null hypothesis
$$H_0: \beta_q = \beta_{q+1} = \cdots = \beta_{p-1} = 0.$$
This implies that the reduced model is nested inside the full model.
We then define a test statistic, D,
$$D = -2 \log\left(\frac{L(\hat{\beta}_{\text{Null}})}{L(\hat{\beta}_{\text{Full}})}\right) = 2 \log\left(\frac{L(\hat{\beta}_{\text{Full}})}{L(\hat{\beta}_{\text{Null}})}\right) = 2\left(\ell(\hat{\beta}_{\text{Full}}) - \ell(\hat{\beta}_{\text{Null}})\right)$$
which approximately follows a chi-square distribution,
$$D \overset{\text{approx}}{\sim} \chi^2_k,$$
where k is the difference in the number of parameters between the full and null models.
To illustrate the use of logistic regression, we will use the SAheart dataset from
the ElemStatLearn package.
# install.packages("ElemStatLearn")
library(ElemStatLearn)
data("SAheart")
sbp tobacco ldl adiposity famhist typea obesity alcohol age chd
160 12.00 5.73 23.11 Present 49 25.30 97.20 52 1
144 0.01 4.41 28.61 Absent 55 28.87 2.06 63 1
118 0.08 3.48 32.28 Present 52 29.14 3.81 46 0
170 7.50 6.41 38.03 Present 51 31.99 24.26 58 1
134 13.60 3.50 27.78 Present 60 25.99 57.34 49 1
132 6.20 6.47 36.21 Present 62 30.77 14.14 45 0
The predictors are various measurements for each individual, many related to heart health. For example, sbp is systolic blood pressure and ldl is low-density lipoprotein cholesterol. For full details, use ?SAheart.
$$\log\left(\frac{P[\text{chd} = 1]}{1 - P[\text{chd} = 1]}\right) = \beta_0 + \beta_{\text{ldl}} x_{\text{ldl}}$$
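The model plotted below, stored as chd_mod_ldl, was presumably fit with a call along these lines (the object name is taken from its later use):
chd_mod_ldl = glm(chd ~ ldl, data = SAheart, family = binomial)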
[Figure: Probability of CHD versus ldl, showing the jittered data and the estimated logistic regression curve.]
As before, we plot the data in addition to the estimated probabilities. Note that
we have “jittered” the data to make it easier to visualize, but the data do only
take values 0 and 1.
As we would expect, this plot indicates that as ldl increases, so does the probability of chd.
To test
$$H_0: \beta_{\text{ldl}} = 0$$
we use the summary() function as we have done so many times before.
coef(summary(chd_mod_ldl))
Like the t-test for ordinary linear regression, this returns the estimate of the parameter, its standard error, the relevant test statistic (z), and its p-value. Here we have an incredibly low p-value, so we reject the null hypothesis. The ldl variable appears to be a significant predictor.
When fitting logistic regression, we can use the same formula syntax as ordinary
linear regression. So, to fit an additive model using all available predictors, we
use:
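(Presumably the following call, given the object name used below:)
chd_mod_additive = glm(chd ~ ., data = SAheart, family = binomial)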
We can then use the likelihood-ratio test to compare the two models. Specifically, we are testing the null hypothesis that the coefficients of all predictors added in chd_mod_additive, beyond ldl, are zero.
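The test statistic reported next was presumably computed as twice the difference in log-likelihoods, for example:
-2 * as.numeric(logLik(chd_mod_ldl) - logLik(chd_mod_additive))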
## [1] 92.13879
Or we could utilize the anova() function. By specifying test = "LRT", R will
use the likelihood-ratio test to compare the two models.
anova(chd_mod_ldl, chd_mod_additive, test = "LRT")
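The model chd_mod_selected used below was presumably obtained by variable selection starting from the additive model; a sketch consistent with the predictors that appear later (alcohol, ldl, famhist, typea, age) is a backward AIC search:
chd_mod_selected = step(chd_mod_additive, trace = 0)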
We can create confidence intervals for the β parameters using the confint()
function as we did with ordinary linear regression.
confint(chd_mod_selected, level = 0.99)
Confidence intervals for the mean response require some additional thought. With a "large enough" sample, we have
$$\frac{\hat{\eta}(x) - \eta(x)}{\text{SE}[\hat{\eta}(x)]} \overset{\text{approx}}{\sim} N(0, 1).$$
Then we can create an approximate 100(1 − α)% confidence interval for η(x) using
$$\hat{\eta}(x) \pm z_{\alpha/2} \cdot \text{SE}[\hat{\eta}(x)]$$
where $z_{\alpha/2}$ is the critical value such that $P(Z > z_{\alpha/2}) = \alpha/2$.
This isn’t a particularly interesting interval. Instead, what we really want is an
interval for the mean response, p(x). To obtain an interval for p(x), we simply
apply the inverse logit transform to the endpoints of the interval for η.
$$\left(\text{logit}^{-1}\!\big(\hat{\eta}(x) - z_{\alpha/2} \cdot \text{SE}[\hat{\eta}(x)]\big),\ \text{logit}^{-1}\!\big(\hat{\eta}(x) + z_{\alpha/2} \cdot \text{SE}[\hat{\eta}(x)]\big)\right)$$
First, we'll use the predict() function to obtain η̂(x) for this observation, where new_obs is a one-row data frame holding the predictor values of interest.
eta_hat = predict(chd_mod_selected, new_obs, se.fit = TRUE, type = "link")
eta_hat
## $fit
## 1
## 1.579545
##
## $se.fit
## [1] 0.4114796
##
## $residual.scale
## [1] 1
By setting se.fit = TRUE, R also computes SE[η̂(x)]. Note that we used type
= "link", but this is actually a default value. We added it here to stress that
the output from predict() will be the value of the link function.
z_crit = round(qnorm(0.975), 2)
round(z_crit, 2)
## [1] 1.96
After obtaining the correct critical value, we can easily create a 95% confidence
interval for η(x).
eta_hat$fit + c(-1, 1) * z_crit * eta_hat$se.fit
Without really thinking about it, we’ve been using our previous knowledge of
R’s model formula syntax to fit logistic regression.
17.3.7.1 Interactions
Let’s add an interaction between LDL and family history for the model we
selected.
chd_mod_interaction = glm(chd ~ alcohol + ldl + famhist + typea + age + ldl:famhist,
data = SAheart, family = binomial)
summary(chd_mod_interaction)
##
## Call:
## glm(formula = chd ~ alcohol + ldl + famhist + typea + age + ldl:famhist,
## family = binomial, data = SAheart)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.9082 -0.8308 -0.4550 0.9286 2.5152
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -6.043472 0.937186 -6.449 1.13e-10 ***
## alcohol 0.003800 0.004332 0.877 0.38033
## ldl 0.035593 0.071448 0.498 0.61837
## famhistPresent -0.733836 0.618131 -1.187 0.23515
## typea 0.036253 0.012172 2.978 0.00290 **
## age 0.062416 0.009723 6.419 1.37e-10 ***
## ldl:famhistPresent 0.314311 0.114922 2.735 0.00624 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 596.11 on 461 degrees of freedom
## Residual deviance: 477.46 on 455 degrees of freedom
## AIC: 491.46
##
## Number of Fisher Scoring iterations: 5
Based on the z-test seen in the above summary, this interaction is significant.
The effect of LDL on the probability of CHD is different depending on family
history.
Let's take the previous model, and now add a polynomial term.
chd_mod_int_quad = glm(chd ~ alcohol + ldl + famhist + typea + age + ldl:famhist + I(ldl ^ 2),
                       data = SAheart, family = binomial)
summary(chd_mod_int_quad)
##
## Call:
## glm(formula = chd ~ alcohol + ldl + famhist + typea + age + ldl:famhist +
## I(ldl^2), family = binomial, data = SAheart)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.8953 -0.8311 -0.4556 0.9276 2.5204
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -6.096747 1.065952 -5.720 1.07e-08 ***
## alcohol 0.003842 0.004350 0.883 0.37716
## ldl 0.056876 0.214420 0.265 0.79081
## famhistPresent -0.723769 0.625167 -1.158 0.24698
## typea 0.036248 0.012171 2.978 0.00290 **
## age 0.062299 0.009788 6.365 1.95e-10 ***
## I(ldl^2) -0.001587 0.015076 -0.105 0.91617
## ldl:famhistPresent 0.311615 0.117559 2.651 0.00803 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 596.11 on 461 degrees of freedom
## Residual deviance: 477.45 on 454 degrees of freedom
## AIC: 493.45
##
## Number of Fisher Scoring iterations: 5
Unsurprisingly, since this additional transformed variable wasn't intelligently chosen, it is not significant. However, this does allow us to stress the fact that the formula syntax we had been using with lm() works essentially the same way for glm(); the difference is that we now understand it as specifying the linear combination of predictors, η(x).
That is, the above fits the model
$$\log\left(\frac{p(x)}{1 - p(x)}\right) = \beta_0 + \beta_1 x_{\text{alcohol}} + \beta_2 x_{\text{ldl}} + \beta_3 x_{\text{famhist}} + \beta_4 x_{\text{typea}} + \beta_5 x_{\text{age}} + \beta_6 x_{\text{ldl}} x_{\text{famhist}} + \beta_7 x_{\text{ldl}}^2$$
You may have realized this before we actually explicitly wrote it down!
17.3.8 Deviance
You have probably noticed that the output from summary() is also very similar to that of ordinary linear regression. One difference is the "deviance" being reported. The Null deviance is the deviance for the null model, that is, a model with no predictors. The Residual deviance is the deviance for the model that was fit.
Deviance compares the model to a saturated model. (Without repeated observations, a saturated model is a model that fits perfectly, using a parameter for each observation.)
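The three deviances reported below decrease as the models grow; the first value presumably corresponds to the single-predictor model:
deviance(chd_mod_ldl)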
## [1] 564.2788
deviance(chd_mod_selected)
## [1] 475.6856
deviance(chd_mod_additive)
## [1] 472.14
Note that these are nested, and we see that deviance does decrease as the model
size becomes larger. So while a lower deviance is better, if the model becomes
too big, it may be overfitting. Note that R also outputs AIC in the summary,
which will penalize according to model size, to prevent overfitting.
17.4 Classification
So far we’ve mostly used logistic regression to estimate class probabilities. The
somewhat obvious next step is to use these probabilities to make “predictions,”
which in this context, we would call classifications. Based on the values of the
predictors, should an observation be classified as Y = 1 or as Y = 0?
Suppose we didn’t need to estimate probabilities from data, and instead, we
actually knew both
p(x) = P [Y = 1 | X = x]
and
1 − p(x) = P [Y = 0 | X = x].
With this information, classifying observations based on the values of the predictors is actually extremely easy. Simply classify an observation to the class (0 or 1) with the larger probability. In general, this result is called the Bayes Classifier,
$$\hat{C}(x) = \begin{cases} 1 & p(x) > 0.5 \\ 0 & p(x) \le 0.5 \end{cases}$$
Simply put, the Bayes classifier (not to be confused with the Naive Bayes Classifier) minimizes the probability of misclassification by classifying each observation to the class with the highest probability. Unfortunately, in practice, we won't know the necessary probabilities to directly use the Bayes classifier. Instead we'll have to use estimated probabilities. So to create a classifier that seeks to minimize misclassifications, we would use,
$$\hat{C}(x) = \begin{cases} 1 & \hat{p}(x) > 0.5 \\ 0 & \hat{p}(x) \le 0.5 \end{cases}$$
Using this simple classification rule, we can turn logistic regression into a clas-
sifier. To use logistic regression for classification, we first use logistic regression
to obtain estimated probabilities, p̂(x), then use these in conjunction with the
above classification rule.
Logistic regression is just one of many ways that these probabilities could be estimated. In a course completely focused on machine learning, you'll learn many additional ways to do this, as well as methods to directly make classifications without needing to first estimate probabilities. But since we had already introduced logistic regression, it makes sense to discuss it in the context of classification.
To illustrate the use of logistic regression as a classifier, we will use the spam
dataset from the kernlab package.
# install.packages("kernlab")
library(kernlab)
data("spam")
tibble::as.tibble(spam)
## Warning: `as.tibble()` is deprecated, use `as_tibble()` (but mind the new semantics).
## This warning is displayed once per session.
## # A tibble: 4,601 x 58
## make address all num3d our over remove internet order mail
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0 0.64 0.64 0 0.32 0 0 0 0 0
## 2 0.21 0.28 0.5 0 0.14 0.28 0.21 0.07 0 0.94
## 3 0.06 0 0.71 0 1.23 0.19 0.19 0.12 0.64 0.25
## 4 0 0 0 0 0.63 0 0.31 0.63 0.31 0.63
## 5 0 0 0 0 0.63 0 0.31 0.63 0.31 0.63
## 6 0 0 0 0 1.85 0 0 1.85 0 0
## 7 0 0 0 0 1.92 0 0 0 0 0.64
## 8 0 0 0 0 1.88 0 0 1.88 0 0
## 9 0.15 0 0.46 0 0.61 0 0.3 0 0.92 0.76
## 10 0.06 0.12 0.77 0 0.19 0.32 0.38 0 0.06 0
## # ... with 4,591 more rows, and 48 more variables: receive <dbl>,
## # will <dbl>, people <dbl>, report <dbl>, addresses <dbl>, free <dbl>,
## # business <dbl>, email <dbl>, you <dbl>, credit <dbl>, your <dbl>,
## # font <dbl>, num000 <dbl>, money <dbl>, hp <dbl>, hpl <dbl>,
## # george <dbl>, num650 <dbl>, lab <dbl>, labs <dbl>, telnet <dbl>,
## # num857 <dbl>, data <dbl>, num415 <dbl>, num85 <dbl>, technology <dbl>,
## # num1999 <dbl>, parts <dbl>, pm <dbl>, direct <dbl>, cs <dbl>,
## # meeting <dbl>, original <dbl>, project <dbl>, re <dbl>, edu <dbl>,
## # table <dbl>, conference <dbl>, charSemicolon <dbl>,
## # charRoundbracket <dbl>, charSquarebracket <dbl>,
## # charExclamation <dbl>, charDollar <dbl>, charHash <dbl>,
## # capitalAve <dbl>, capitalLong <dbl>, capitalTotal <dbl>, type <fct>
This dataset, created in the late 1990s at Hewlett-Packard Labs, contains 4601 emails, of which 1813 are considered spam. The remaining are not spam. (Which, for simplicity, we might call ham.) Additional details can be obtained by using ?spam or by visiting the UCI Machine Learning Repository.
The response variable, type, is a factor with levels that label each email as spam
or nonspam. When fitting models, nonspam will be the reference level, Y = 0,
as it comes first alphabetically.
is.factor(spam$type)
## [1] TRUE
levels(spam$type)
Many of the predictors (often called features in machine learning) are engineered
based on the emails. For example, charDollar is the number of times an email
contains the $ character. Some variables are highly specific to this dataset, for
example george and num650. (The name and area code for one of the researchers
whose emails were used.) We should keep in mind that this dataset was created based on emails sent to academic researchers in the 1990s. Any results we derive probably won't generalize to modern emails for the general public.
We’ve used a somewhat small train set relative to the total size of the dataset.
In practice it should likely be larger, but this is simply to keep training time
low for illustration and rendering of this document.
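The training and test sets, spam_trn and spam_tst, could be created along the following lines; the split size is inferred from the later confusion matrices (1000 training emails, 3601 test emails), while the seed and indexing are assumptions:
set.seed(42)
spam_idx = sample(nrow(spam), 1000)
spam_trn = spam[spam_idx, ]
spam_tst = spam[-spam_idx, ]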
fit_caps = glm(type ~ capitalTotal,
data = spam_trn, family = binomial)
fit_selected = glm(type ~ edu + money + capitalTotal + charDollar,
data = spam_trn, family = binomial)
fit_additive = glm(type ~ .,
data = spam_trn, family = binomial)
fit_over = glm(type ~ capitalTotal * (.),
data = spam_trn, family = binomial, maxit = 50)
We’ll fit four logistic regressions, each more complex than the previous. Note
that we’re suppressing two warnings. The first we briefly mentioned previously.
Note that, when we receive this warning, we should be highly suspicious of the
parameter estimates.
coef(fit_selected)
However, the model can still be used to create a classifier, and we will evaluate
that classifier on its own merits.
In reality, we didn't suppress the second warning (about convergence), but instead changed maxit to 50 when fitting the model fit_over. This was enough additional iterations to allow the iteratively reweighted least squares algorithm to converge when fitting the model.
The metric we'll be most interested in for evaluating the overall performance of a classifier is the misclassification rate. (Sometimes accuracy is reported instead, which is the proportion of correct classifications, so both metrics serve the same purpose.)
$$\text{Misclass}(\hat{C}, \text{Data}) = \frac{1}{n}\sum_{i = 1}^{n} I(y_i \neq \hat{C}(x_i))$$
$$I(y_i \neq \hat{C}(x_i)) = \begin{cases} 0 & y_i = \hat{C}(x_i) \\ 1 & y_i \neq \hat{C}(x_i) \end{cases}$$
When using this metric on the training data, it will have the same issues as RSS
did for ordinary linear regression, that is, it will only go down.
# training misclassification rate
mean(ifelse(predict(fit_caps) > 0, "spam", "nonspam") != spam_trn$type)
## [1] 0.339
mean(ifelse(predict(fit_selected) > 0, "spam", "nonspam") != spam_trn$type)
## [1] 0.224
mean(ifelse(predict(fit_additive) > 0, "spam", "nonspam") != spam_trn$type)
## [1] 0.066
mean(ifelse(predict(fit_over) > 0, "spam", "nonspam") != spam_trn$type)
## [1] 0.136
Because of this, training data isn’t useful for evaluating, as it would suggest that
we should always use the largest possible model, when in reality, that model is
likely overfitting. Recall, a model that is too complex will overfit. A model that
is too simple will underfit. (We’re looking for something in the middle.)
To overcome this, we'll use cross-validation as we did with ordinary linear regression, but this time we'll cross-validate the misclassification rate. To do so, we'll use the cv.glm() function from the boot library. It takes arguments for the data (in this case training), a model fit via glm(), and K, the number of folds. See ?cv.glm for details.
Previously, for cross-validating RMSE in ordinary linear regression, we used LOOCV. We certainly could do that here. However, with logistic regression, we no longer have the clever trick that would allow us to obtain a LOOCV metric without needing to fit the model n times. So instead, we'll use 5-fold cross-validation.
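The first cross-validated error reported below presumably corresponds to fit_caps, completing the pattern of the calls that follow:
library(boot)
cv.glm(spam_trn, fit_caps, K = 5)$delta[1]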
## [1] 0.2166961
cv.glm(spam_trn, fit_selected, K = 5)$delta[1]
## [1] 0.1587043
cv.glm(spam_trn, fit_additive, K = 5)$delta[1]
## [1] 0.08684467
cv.glm(spam_trn, fit_over, K = 5)$delta[1]
## [1] 0.135
Note that we're suppressing warnings again here. (Now there would be a lot more, since we're fitting a total of 20 models.)
Based on these results, fit_caps and fit_selected are underfitting relative
to fit_additive. Similarly, fit_over is overfitting relative to fit_additive.
Thus, based on these results, we prefer the classifier created based on the logistic
regression fit and stored in fit_additive.
Going forward, to evaluate and report on the efficacy of this classifier, we’ll use
the test dataset. We’re going to take the position that the test data set should
never be used in training, which is why we used cross-validation within the
training dataset to select a model. Even though cross-validation uses hold-out
sets to generate metrics, at some point all of the data is used for training.
To quickly summarize how well this classifier works, we’ll create a confusion
matrix.
It further breaks down the classification errors into false positives and false
negatives.
Let’s explicitly store the predicted values of our classifier on the test dataset.
spam_tst_pred = ifelse(predict(fit_additive, spam_tst) > 0,
"spam",
"nonspam")
spam_tst_pred = ifelse(predict(fit_additive, spam_tst, type = "response") > 0.5,
"spam",
"nonspam")
The previous two lines of code produce the same output, that is, the same predictions, since η̂(x) > 0 exactly when p̂(x) > 0.5.
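The confusion matrix shown next, referenced later as conf_mat_50, was presumably created with table(); the argument names match the row and column labels in the output:
conf_mat_50 = table(predicted = spam_tst_pred, actual = spam_tst$type)
conf_mat_50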
## actual
## predicted nonspam spam
## nonspam 2057 157
## spam 127 1260
$$\text{Prev} = \frac{P}{\text{Total Obs}} = \frac{TP + FN}{\text{Total Obs}}$$
table(spam_tst$type) / nrow(spam_tst)
##
## nonspam spam
## 0.6064982 0.3935018
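The accuracy reported next was presumably computed as the proportion of correct classifications on the test set:
mean(spam_tst_pred == spam_tst$type)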
## [1] 0.921133
In other words, the test misclassification rate is
mean(spam_tst_pred != spam_tst$type)
## [1] 0.07886698
This seems like a decent classifier…
However, are all errors created equal? In this case, absolutely not. The 127 non-spam emails that were marked as spam (false positives) are a problem. We can't allow important information, say, a job offer, to miss our inbox and get sent to the spam folder. On the other hand, the 157 spam emails that would make it to an inbox (false negatives) are easily dealt with: just delete them.
Instead of simply evaluating a classifier based on its misclassification rate (or accuracy), we'll define two additional metrics: sensitivity and specificity. Note that these are simply two of many more metrics that can be considered. The Wikipedia page for sensitivity and specificity details a large number of metrics that can be derived from a confusion matrix.
Sensitivity is essentially the true positive rate. So when sensitivity is high, the
number of false negatives is low.
$$\text{Sens} = \text{True Positive Rate} = \frac{TP}{P} = \frac{TP}{TP + FN}$$
Specificity is essentially the true negative rate. So when specificity is high, the
number of false positives is low.
$$\text{Spec} = \text{True Negative Rate} = \frac{TN}{N} = \frac{TN}{TN + FP}$$
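Here, spam is treated as the positive class. The helper get_sens() used below was presumably defined analogously to get_spec(); the following sketch is consistent with the values reported later:
get_sens = function(conf_mat) {
  # true positives (predicted spam, actually spam) over all actual positives
  conf_mat[2, 2] / sum(conf_mat[, 2])
}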
get_spec = function(conf_mat) {
conf_mat[1, 1] / sum(conf_mat[, 1])
}
We calculate both based on the confusion matrix we had created for our classifier.
get_sens(conf_mat_50)
## [1] 0.8892025
get_spec(conf_mat_50)
## [1] 0.9418498
Recall that we had created this classifier using a probability of 0.5 as a “cutoff”
for how observations should be classified. Now we’ll modify this cutoff. We’ll
see that by modifying the cutoff, c, we can improve sensitivity or specificity at
the expense of the overall accuracy (misclassification rate).
$$\hat{C}(x) = \begin{cases} 1 & \hat{p}(x) > c \\ 0 & \hat{p}(x) \le c \end{cases}$$
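The confusion matrix that follows uses a cutoff of c = 0.1; it was presumably produced along these lines (object names follow the pattern used later):
spam_tst_pred_10 = ifelse(predict(fit_additive, spam_tst, type = "response") > 0.1,
                          "spam", "nonspam")
conf_mat_10 = table(predicted = spam_tst_pred_10, actual = spam_tst$type)
conf_mat_10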
## actual
## predicted nonspam spam
## nonspam 1583 29
## spam 601 1388
Unfortunately, while this does greatly reduce false negatives, false positives have more than quadrupled. We see this reflected in the sensitivity and specificity.
get_sens(conf_mat_10)
## [1] 0.9795342
get_spec(conf_mat_10)
## [1] 0.7248168
This classifier, using 0.1 instead of 0.5, has a higher sensitivity, but a much lower specificity. Clearly, we should have moved the cutoff in the other direction. Let's try 0.9.
spam_tst_pred_90 = ifelse(predict(fit_additive, spam_tst, type = "response") > 0.9,
"spam",
"nonspam")
## actual
## predicted nonspam spam
## nonspam 2136 537
## spam 48 880
This is the result we’re looking for. We have far fewer false positives. While
sensitivity is greatly reduced, specificity has gone up.
get_sens(conf_mat_90)
## [1] 0.6210303
get_spec(conf_mat_90)
## [1] 0.978022
While this is far fewer false positives, is it acceptable? Still probably not. Also, don't forget, this would actually be a terrible spam detector today since it is based on data from a very different era of the internet, for a very specific set of people. Spam has changed a lot since the 90s! (Ironically, machine learning is probably partially to blame.)
This chapter has provided a rather quick introduction to classification, and thus, machine learning. For a more complete coverage of machine learning, An Introduction to Statistical Learning is a highly recommended resource. Additionally, R for Statistical Learning has been written as a supplement which provides additional detail on how to perform these methods using R. The classification and logistic regression chapters might be useful.
We should note that the code to perform classification using logistic regression is presented in a way that illustrates the concepts to the reader. In practice, you may prefer to use a more general machine learning pipeline such as caret in R.
This will streamline processes for creating predictions and generating evaluation
metrics.
17.5 R Markdown
The R Markdown file for this chapter can be found here:
• logistic.Rmd
The file was created using R version 3.6.1.
Chapter 18
Beyond
18.2 RStudio
RStudio has recently released version 1.0! This is exciting for a number of reasons, especially the release of R Notebooks. R Notebooks combine the RMarkdown you have already learned with the ability to work interactively.
18.3 Tidy Data
This is not always the case! Many packages have been developed to deal with
data, and force it into a nice format, which is called tidy data, that we can then
use for modeling. Often during analysis, this is where a large portion of your
time will be spent.
The R community has started to call this collection of packages the Tidyverse.
It was once called the Hadleyverse, as Hadley Wickham has authored so many
of the packages. Hadley is writing a book called R for Data Science which
describes the use of many of these packages. (And also how to use some to
make the modeling process better!) This book is a great starting point for
diving deeper into the R community. The two main packages are dplyr and tidyr, both of which are used internally in RStudio.
18.4 Visualization
In this course, we have mostly used the base plotting methods in R. When working with tidy data, many users prefer to use the ggplot2 package, also developed by Hadley Wickham. RStudio provides a rather detailed "cheat sheet" for working with ggplot2. The community maintains a graph gallery of examples.
Use of the manipulate package with RStudio gives the ability to quickly change
a static graphic to become interactive.
18.5 Web Applications
RStudio has made it incredibly easy to create data products through the use of Shiny, which allows for the creation of web applications with R. RStudio maintains an ever-growing tutorial and gallery of examples.
18.7 Machine Learning
Using models for prediction is the key focus of machine learning. There are many methods, each with its own package, however R has a wonderful package called caret, Classification And REgression Training, which provides a unified interface to training these models. It also contains various utilities for data processing and visualization that are useful for predictive modeling.
Applied Predictive Modeling by Max Kuhn, the author of the caret package, is a good general resource for predictive modeling, which obviously utilizes R. An Introduction to Statistical Learning by James, Witten, Hastie, and Tibshirani is a gentle introduction to machine learning from a statistical perspective which uses R and picks up right where this course stops. This is based on the often-referenced The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman. Both are freely available online.
While it probably isn't the best tool for the job, R now has the ability to train deep neural networks via TensorFlow.
18.8 Time Series
In this class we have only considered independent data. What if data is dependent? Time Series is the area of statistics which deals with this issue, and could easily span multiple courses.
The primary textbook for STAT 429: Time Series Analysis at the University of Illinois, which is freely available, is:
• Time Series Analysis and Its Applications: With R Examples by Shumway
and Stoffer
Some tutorials:
• Little Book of R for Time Series
• Quick R: Time Series and Forecasting
• TSA: Start to Finish Examples
When performing time series analysis in R you should be aware of the many packages that are useful for analysis. It would be hard to avoid the forecast and zoo packages. Often the most difficult part will be dealing with time and date data. Make sure you are utilizing one of the many packages that help with this.
18.9 Bayesianism
In this class, we have worked within the frequentist view of statistics. There is
an entire alternative universe of Bayesian statistics.
Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan by John
Kruschke is a great introduction to the topic. It introduces the world of prob-
abilistic programming, in particular Stan, which can be used in both R and
Python.
Appendix