Naïve Bayes Classifier
Lecturer: Xiaojin Zhu [email protected]
We are given a set of documents x_1, ..., x_n, with associated class labels y_1, ..., y_n. We want to learn a model that will predict the label y for any future document x. This task is known as classification. Naive Bayes is one classification method.
Let each document be represented by x = (c_1, ..., c_v), the word count vector, otherwise known as the bag-of-words representation. We assume that within each class y, the probability of a document follows a multinomial distribution with parameter θ_y:
p(x|y) \propto \prod_{w=1}^{v} \theta_{yw}^{c_w}.    (1)
The log likelihood is

\log p(x|y) = x^\top \log \theta_y + \mathrm{const}.    (2)

Note that different classes have different θ_y's. Also note that the multinomial distribution assumes conditional independence of the feature dimensions 1, ..., v given the class y. We know this is not true in reality, and more sophisticated models would assume otherwise. For this reason, this assumption of independent features is known as the naïve Bayes assumption. If we know p(x|y) and p(y) for all classes, classification is done via the Bayes rule:

\hat{y} = \arg\max_y p(y|x)    (3)
= \arg\max_y \frac{p(x|y)\, p(y)}{p(x)}    (4)
= \arg\max_y p(x|y)\, p(y)    (5)
= \arg\max_y \left( x^\top \log \theta_y + \log p(y) \right).    (6)
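To make the decision rule (6) concrete, here is a minimal Python/NumPy sketch (not part of the original notes; the array names log_theta and log_prior are assumptions) that scores every class for one count vector x and returns the arg max:

    import numpy as np

    def nb_predict(x, log_theta, log_prior):
        """Eq. (6): return argmax_y  x^T log(theta_y) + log p(y).

        x         : (V,) word count vector for one document
        log_theta : (C, V) matrix whose row j is log theta_j
        log_prior : (C,)  vector whose entry j is log pi_j
        """
        scores = log_theta @ x + log_prior   # one score per class
        return int(np.argmax(scores))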
The process of computing the conditional distribution p(y|x) of the unknown variable (y) given the observed variables (x) is called inference. Making classification predictions given p(x|y), p(y), and x is thus an inference problem. Where do we get p(x|y) and p(y)? These are the parameters of the model, and we learn them from the training set. Given a training set {(x_1, y_1), ..., (x_n, y_n)}, training or parameter learning involves finding the best parameters Θ = {π, θ_1, ..., θ_C}. Our complete model is p(y = j) = π_j and p(x|y = j) = Mult(x; θ_j) ∝ ∏_{w=1}^{V} θ_{jw}^{x_w}. For simplicity we use the MLE here, but MAP is common too. We maximize the joint (log) likelihood of the training data:
\max_\Theta \log \prod_{i=1}^{n} p(x_i, y_i)    (7)
= \max_\Theta \sum_{i=1}^{n} \log p(x_i, y_i)    (8)
= \max_\Theta \sum_{i=1}^{n} \log p(y_i) + \log p(x_i \mid y_i)    (9)
= \max_\Theta \sum_{i=1}^{n} \log \pi_{y_i} + \log p(x_i \mid y_i)    (10)
= \max_\Theta \sum_{i=1}^{n} \log \pi_{y_i} + x_i^\top \log \theta_{y_i}    (11)

s.t. \sum_{j=1}^{C} \pi_j = 1    (12)
\sum_{w=1}^{V} \theta_{jw} = 1, \quad j = 1, \ldots, C.    (13)

The MLE solutions are

\pi_j = \frac{\sum_{i=1}^{n} [y_i = j]}{n}    (14)
\theta_{jw} = \frac{\sum_{i: y_i = j} x_{iw}}{\sum_{i: y_i = j} \sum_{u=1}^{V} x_{iu}}.    (15)
These MLEs are intuitive: π_j is the frequency of class j in the training set, and θ_jw is the frequency of word w within class j. Note that the concepts of inference and parameter learning described above are fairly general. The only special thing is the naïve Bayes assumption (i.e., a unigram language model for p(x|y)), which assumes conditional independence of the features. This makes it a Naïve Bayes classifier.
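As an illustrative sketch of (14)-(15) (the function and variable names are assumptions, not from the notes), the MLE can be computed directly from a count matrix X and label vector y; in practice one often adds pseudocounts (the MAP estimate mentioned above) so that no θ_jw is exactly zero:

    import numpy as np

    def nb_mle(X, y, C):
        """Eqs. (14)-(15): class frequencies pi and per-class word frequencies theta.

        X : (n, V) matrix of word counts, row i is document x_i
        y : (n,)   integer labels in {0, ..., C-1}
        """
        n, V = X.shape
        pi = np.zeros(C)
        theta = np.zeros((C, V))
        for j in range(C):
            members = (y == j)
            pi[j] = members.sum() / n            # eq. (14): class frequency
            counts = X[members].sum(axis=0)      # word totals within class j
            theta[j] = counts / counts.sum()     # eq. (15): word frequency in class j
        return pi, theta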
1.1 Linear decision boundary
Consider binary classification where y = 0 or 1. Our classification rule with arg max can equivalently be expressed with the log odds ratio

f(x) \equiv \log \frac{p(y=1 \mid x)}{p(y=0 \mid x)}    (16)
= \log p(y=1 \mid x) - \log p(y=0 \mid x)    (17)
= (\log \theta_1 - \log \theta_0)^\top x + (\log p(y=1) - \log p(y=0)).    (18)
The decision rule is to classify x as y = 1 if f(x) > 0, and y = 0 otherwise. Note that for given parameters, this is a linear function of x. That is to say, the Naive Bayes classifier induces a linear decision boundary in the feature space X. The boundary takes the form of a hyperplane, defined by f(x) = 0.
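A small sketch (assumed parameter names, not from the notes) that extracts the hyperplane of eq. (18) from a fitted binary model, so that x is classified as y = 1 exactly when w^T x + b > 0:

    import numpy as np

    def nb_linear_boundary(theta0, theta1, prior0, prior1):
        """Weight vector and bias of the hyperplane f(x) = w^T x + b in eq. (18)."""
        w = np.log(theta1) - np.log(theta0)      # log theta_1 - log theta_0
        b = np.log(prior1) - np.log(prior0)      # log p(y=1) - log p(y=0)
        return w, b

    # Decision rule: predict y = 1 iff w @ x + b > 0, else y = 0.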
1.2 Generative vs. discriminative models
A generative model is a probabilistic model which describes the full generation process of the data, i.e. the joint probability p(x, y). Our Naive Bayes model consists of p(y) and p(x|y), and does just that: one can generate data (x, y) by first sampling y ~ p(y), and then sampling word counts from the multinomial p(x|y). There is another family of models known as discriminative models, which do not model p(x, y). Instead, they directly model the conditional p(y|x), which is directly related to classification. We will see our first discriminative model when we discuss logistic regression.
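The generative story can be run directly; here is a hedged sketch (the fixed document length N and the function name are assumptions, since the multinomial needs a total token count to draw from):

    import numpy as np

    def nb_generate(pi, theta, N, rng=np.random.default_rng()):
        """Sample one (x, y) pair: y ~ p(y), then x ~ Mult(N, theta_y).

        pi    : (C,)   class prior
        theta : (C, V) per-class multinomial parameters
        N     : number of word tokens in the generated document
        """
        y = rng.choice(len(pi), p=pi)           # sample the class label
        x = rng.multinomial(N, theta[y])        # sample the word count vector
        return x, y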
1.3 Naive Bayes as a Bayes network
A Bayes Network is a directed graph that represents a family of probability distributions. This is covered in detail in [cB] Chapter 8.1, 8.2. Outline:

nodes: each node is a random variable. We have one y node, and v x_w nodes.

directed edges: no directed cycles are allowed, i.e. the graph must be a DAG. For naive Bayes, the edges go from y to each x_w.

meaning: the joint probability over all nodes s_{1:K} factorizes in a particular form
p(s) = \prod_{i=1}^{K} p(s_i \mid \mathrm{pa}(s_i)),

where pa(s_i) are the parents of s_i. For naive Bayes,

p(x_{1:v}, y) = p(y) \prod_{w=1}^{v} p(x_w \mid y).

observed nodes: nodes with known values, e.g. x_{1:v}. Shaded.
plate: a lazy way to duplicate a node (and its associated edges) multiple times. Our x_{1:v} nodes can be condensed into a plate.
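As a toy sketch of the factorization above (the numbers and names are assumptions, not from the notes), the joint probability of a class label and a sequence of word tokens can be evaluated node by node, one factor p(s_i | pa(s_i)) per node, following the unigram view of p(x|y):

    import numpy as np

    pi = np.array([0.6, 0.4])            # p(y): toy prior over two classes
    theta = np.array([[0.5, 0.3, 0.2],   # p(word | y = 0) over a 3-word vocabulary
                      [0.1, 0.2, 0.7]])  # p(word | y = 1)

    def joint_prob(word_ids, y):
        """p(y, w_1, ..., w_K) = p(y) * prod_k p(w_k | y): one factor per node of the DAG."""
        p = pi[y]                        # root node y has no parents
        for w in word_ids:               # each word node's only parent is y
            p *= theta[y, w]
        return p

    print(joint_prob([0, 0, 2], 1))      # e.g. the token sequence (0, 0, 2) with class y = 1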