Classifier
Introduction
Bag-of-words models
Visual words with spatial location
Part-based models
Discriminative methods
Segmentation and recognition
Recognition-based image retrieval
Datasets & Conclusions
[Figure: a decision boundary separating computer screen examples from background examples.]
Discriminative methods
Nearest neighbor and neural networks are two classical families of discriminative classifiers; nearest-neighbor methods typically rely on very large training sets (~10^6 examples).
Nearest Neighbors
Difficult due to the high intrinsic dimensionality of images:
- lots of data needed (on the order of 10^6 examples)
- slow neighbor lookup
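The lookup cost is easy to see in code. Below is a minimal sketch (not from the course) of brute-force 1-NN classification on synthetic descriptors; every query scans all N stored examples, which is what makes naive lookup slow at the ~10^6-example scale quoted above.

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, x_query):
    """Brute-force 1-NN: O(N * D) work per query, the bottleneck
    when N approaches 10^6 stored examples."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # distance to every example
    return y_train[np.argmin(dists)]                   # label of the closest one

# Synthetic stand-in for image descriptors: N examples in D dimensions.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 512))
y_train = rng.integers(0, 2, size=10_000)
print(nearest_neighbor_predict(X_train, y_train, rng.normal(size=512)))
```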
Neural networks
Multi-layer Hubel-Wiesel architectures, biologically inspired. More in the Segmentation section.
Boosting
A simple algorithm for learning robust classifiers:
- Freund & Schapire, 1995
- Friedman, Hastie, Tibshirani, 1998
Matlab code for gentle boosting and for an object detector using a part-based model:
http://people.csail.mit.edu/torralba/iccv2005/
Boosting
Boosting fits the additive model

F(x) = f_1(x) + f_2(x) + f_3(x) + ...

by minimizing the exponential loss over the training samples (x_i, y_i), with y_i in {-1, +1}:

J = sum_i exp(-y_i F(x_i))

The exponential loss is a differentiable upper bound to the misclassification error: whenever F misclassifies sample i we have y_i F(x_i) <= 0, so exp(-y_i F(x_i)) >= 1.
Weak classifiers
The input is a set of weighted training samples (x, y, w). Regression stumps are simple but commonly used in object detection:

f_m(x) = a [x_k < theta] + b [x_k > theta]

Four parameters (k, theta, a, b): the feature index k, the threshold theta, and the two weighted means

a = E_w(y | x_k < theta)
b = E_w(y | x_k > theta)
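As a concrete illustration, here is a minimal sketch (in Python rather than the course's Matlab) of gentle boosting with exactly these stumps: each round fits (k, theta, a, b) by weighted least squares, adds the stump to F, and reweights the samples with the exponential loss. The function names and synthetic usage are illustrative.

```python
import numpy as np

def fit_stump(X, y, w):
    """Fit a regression stump f(x) = a*[x_k < theta] + b*[x_k > theta]
    by minimizing the weighted squared error E_w[(y - f(x))^2]."""
    best = None
    for k in range(X.shape[1]):                      # feature index k
        for theta in np.unique(X[:, k])[:-1]:        # candidate thresholds
            above = X[:, k] > theta
            wa, wb = w[~above].sum(), w[above].sum()
            if wa == 0 or wb == 0:
                continue
            a = (w[~above] * y[~above]).sum() / wa   # a = E_w(y | x_k < theta)
            b = (w[above] * y[above]).sum() / wb     # b = E_w(y | x_k > theta)
            pred = np.where(above, b, a)
            err = (w * (y - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, k, theta, a, b)
    return best[1:]                                  # (k, theta, a, b)

def gentle_boost(X, y, rounds=10):
    """y in {-1, +1}. Builds F(x) = sum_m f_m(x) round by round."""
    w = np.ones(len(y)) / len(y)
    stumps = []
    for _ in range(rounds):
        k, theta, a, b = fit_stump(X, y, w)
        f = np.where(X[:, k] > theta, b, a)
        w *= np.exp(-y * f)                          # exponential-loss reweighting
        w /= w.sum()
        stumps.append((k, theta, a, b))
    return stumps
```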
Weak detectors
- Textures of textures: Tieu and Viola, CVPR 2000
- Haar wavelets; Haar filters and the integral image: Viola and Jones, ICCV 2001
- Haar wavelets with a polynomial SVM: Papageorgiou & Poggio, 2000
- Edge fragments: Opelt, Pinz, Zisserman, ECCV 2006
- Shape context: Belongie, Malik, Puzicha, NIPS 2000
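The integral image is what makes the Haar filters above cheap: after a single cumulative-sum pass, the sum over any rectangle costs four array lookups regardless of its size. A minimal sketch (illustrative names, not any particular library's API):

```python
import numpy as np

def integral_image(img):
    """Zero-padded cumulative sums: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four corner lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar feature: left half minus right half."""
    return box_sum(ii, r, c, r + h, c + w // 2) - box_sum(ii, r, c + w // 2, r + h, c + w)

img = np.random.rand(24, 24)          # a typical detection-window size
print(haar_two_rect(integral_image(img), 0, 0, 24, 24))
```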
Weak detectors
Part based: similar to part-based generative models, we create weak detectors by using parts and voting for the object center location.
[Figure: learned parts and their vote maps for a car model and a screen model.]
These features are used for the detector on the course web site.
Weak detectors
First we collect a set of part templates from a set of training objects.
Vidal-Naquet, Ullman, Nature Neuroscience 2003
Weak detectors
We now define a family of weak detectors as the normalized correlation of the image with a part template, combined with a vote for the expected object-center location.
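A minimal sketch of one such weak detector, assuming template matching by normalized correlation (scipy's correlate2d) and a fixed learned offset from part to object center; the threshold and normalization here are illustrative:

```python
import numpy as np
from scipy.signal import correlate2d

def weak_detector_votes(image, template, offset, threshold=0.7):
    """Correlate a part template with the image; wherever the response is
    strong, cast a vote for the object center at (part location + offset)."""
    t = (template - template.mean()) / (template.std() + 1e-8)
    response = correlate2d(image, t, mode="same")
    response /= np.abs(response).max() + 1e-8        # crude normalization
    votes = np.zeros_like(response)
    dr, dc = offset                                  # learned part-to-center offset
    for r, c in zip(*np.nonzero(response > threshold)):
        rr, cc = r + dr, c + dc                      # predicted center location
        if 0 <= rr < votes.shape[0] and 0 <= cc < votes.shape[1]:
            votes[rr, cc] += response[r, c]
    return votes
```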
Weak detectors
We can do a better job by running the same templates on filtered versions of the image.
[Figure: the thresholded output of a single weak detector produces many false alarms.]
Strong classifier
[Figure sequence: the strong classifier at iterations 1, 2, 10, and 200. At each iteration the output of one more feature is added to the strong classifier; the thresholded final classification improves steadily as features are added.]
Cascade of classifiers
[Figure: precision-recall curves for strong classifiers with 3, 30, and 100 features.]
We want the complexity of the 3-feature classifier with the performance of the 100-feature classifier: chain the classifiers into a cascade, so that the cheap early stages reject most candidate windows and only the survivors reach the expensive later stages.
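A minimal sketch of that control flow; the stage classifiers here are random stand-ins (in practice they would be boosted strong classifiers of increasing size, with thresholds tuned for very high recall):

```python
import random

def cascade_classify(window, stages):
    """stages: list of (classifier, threshold), ordered cheap -> expensive.
    A window must pass every stage; most are rejected early, so the
    average cost stays close to that of the first, smallest classifier."""
    for classifier, threshold in stages:
        if classifier(window) < threshold:
            return False                  # early reject: later stages never run
    return True

def make_stage(n_features):
    """Stand-in for an n-feature strong classifier returning a score."""
    return lambda window: random.random()

stages = [(make_stage(3), 0.1), (make_stage(30), 0.3), (make_stage(100), 0.5)]
print(cascade_classify("some window", stages))
```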
Shared features
Is learning the 1000th object class easier than learning the first?
Shared features
Independent binary classifiers: the screen, car, and face detectors are each trained with their own features.
Binary classifiers that share features: the screen, car, and face detectors are built from a common pool of features.
Torralba, Murphy, Freeman. CVPR 2004; PAMI 2007.
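A heavily simplified sketch of the sharing idea, in the spirit of (but not reproducing) the joint boosting of Torralba, Murphy & Freeman: a regression stump on feature k with threshold theta is shared by a subset of classes, each keeping its own (a, b), while non-sharing classes get a per-class constant; the subset is grown greedily instead of searching all 2^C subsets. Here X is N x D, and Y and W are N x C with labels in {-1, +1}.

```python
import numpy as np

def shared_stump_error(X, Y, W, k, theta, classes):
    """Weighted squared error when the classes in `classes` share the stump
    on feature k / threshold theta; other classes get a constant prediction."""
    above = X[:, k] > theta
    err = 0.0
    for c in range(Y.shape[1]):
        w, y = W[:, c], Y[:, c]
        if c in classes:
            for mask in (above, ~above):             # per-class a and b
                if w[mask].sum() > 0:
                    v = (w[mask] * y[mask]).sum() / w[mask].sum()
                    err += (w[mask] * (y[mask] - v) ** 2).sum()
        else:
            kc = (w * y).sum() / w.sum()             # per-class constant
            err += (w * (y - kc) ** 2).sum()
    return err

def grow_sharing_set(X, Y, W, k, theta):
    """Greedy forward selection of the sharing subset for this stump."""
    chosen, remaining = set(), set(range(Y.shape[1]))
    best = shared_stump_error(X, Y, W, k, theta, chosen)
    while remaining:
        errs = {c: shared_stump_error(X, Y, W, k, theta, chosen | {c})
                for c in remaining}
        c, e = min(errs.items(), key=lambda kv: kv[1])
        if e >= best:
            break                                    # no class improves the fit
        chosen.add(c); remaining.discard(c); best = e
    return chosen, best
```

The full algorithm would repeat this search over all (k, theta) pairs each boosting round and keep the stump and subset with the lowest error.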
[Figure: results with 50 training samples/class, 29 object classes, and 2000 entries in the feature dictionary, comparing class-specific features against shared features; generalization as a function of object similarities, shown across 12 viewpoints (K = 2.1 vs. K = 4.8).]
Sharing patches
Bart and Ullman, 2004
For a new class, use only features similar to the features that were good for other classes:
[Figure: proposed features for the Dog class.]
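A minimal sketch of that selection step, assuming patches are compared by normalized correlation (the similarity measure and top_k are illustrative):

```python
import numpy as np

def candidate_features_for_new_class(new_patches, good_patches, top_k=50):
    """Rank a new class's candidate patches by their best similarity to
    patches that proved useful for previously learned classes, keeping
    only the top ones. All patches are assumed to have the same shape."""
    def ncc(a, b):                                   # normalized correlation
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return (a * b).mean()
    scores = [max(ncc(p, g) for g in good_patches) for p in new_patches]
    order = np.argsort(scores)[::-1]                 # most similar first
    return [new_patches[i] for i in order[:top_k]]
```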
Sharing transformations
Miller, E., Matsakis, N., and Viola, P. (2000). Learning from one example through shared densities on transforms. In IEEE Computer Vision and Pattern Recognition.
Caruana, 1997
Schapire & Singer, 2000
Thrun & Pratt, 1997
Krempp, Geman & Amit, 2002
E. L. Miller, Matsakis & Viola, 2000
Mahamud, Hebert & Lafferty, 2001
Fink, 2004
LeCun, Huang & Bottou, 2004
Holub, Welling & Perona, 2005