Beyond traditional kernels: Classification in two dissimilarity-based representation spaces
E Pekalska, RPW Duin - IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2008 - ieeexplore.ieee.org
Proximity captures the degree of similarity between examples and is thereby fundamental in learning. Learning from pairwise proximity data usually relies either on kernel methods with specifically designed kernels or on the nearest neighbor (NN) rule. Kernel methods are powerful, but often cannot handle arbitrary proximities without corrections. The NN rule can work well in such cases, but suffers from making purely local decisions. The aim of this paper is to explain and provide insight into two simple yet powerful alternatives for cases in which neither conventional kernel methods nor the NN rule performs best. These strategies use two proximity-based representation spaces (RSs) in which accurate classifiers are trained on all training objects while requiring comparisons to only a small set of prototypes. They can handle all meaningful dissimilarity measures, including non-Euclidean and nonmetric ones. Practical examples illustrate that these RSs can be highly advantageous in supervised learning: simple classifiers built in them tend to outperform the NN rule, and their computational complexity can be controlled. Consequently, these approaches offer an appealing way to learn from proximity data when kernel methods cannot be applied directly, are too costly, or are impractical, and when the NN rule yields noisy results.
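To make the general idea concrete, the following is a minimal sketch of classification in a dissimilarity-based representation space: each object is represented by its vector of dissimilarities to a small prototype set, and an ordinary linear classifier is trained on all training objects in that space. The dataset, random prototype selection, Euclidean distance, and logistic regression below are illustrative assumptions only, not the authors' experimental setup; the paper studies richer dissimilarity measures, prototype-selection strategies, and classifiers.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data and split (any data with a meaningful dissimilarity works)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Pick a small set of prototypes; random selection is used here for
# simplicity, whereas the paper discusses more refined strategies.
rng = np.random.default_rng(0)
protos = X_tr[rng.choice(len(X_tr), size=30, replace=False)]

# Dissimilarity-space representation: each object becomes the vector of
# its dissimilarities to the prototypes. Euclidean distance stands in
# here; non-Euclidean or nonmetric measures can be dropped in unchanged.
D_tr = pairwise_distances(X_tr, protos)
D_te = pairwise_distances(X_te, protos)

# Train a simple linear classifier on ALL training objects in the
# dissimilarity space; test objects need only 30 comparisons each.
clf = LogisticRegression(max_iter=1000).fit(D_tr, y_tr)
print("dissimilarity space:", accuracy_score(y_te, clf.predict(D_te)))

# Baseline: the 1-NN rule, which compares against every training object
# and decides locally.
nn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
print("1-NN:", accuracy_score(y_te, nn.predict(X_te)))
```

Because the dissimilarity values are simply treated as features, nothing in this scheme requires the measure to be Euclidean or even metric, which is exactly the regime where standard kernel methods need corrections.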