Authors:
Nathan Formentin Garcia ¹; Frederico Tiggeman ¹; Eduardo N. Borges ¹; Giancarlo Lucca ¹; Helida Santos ¹ and Graçaliz Dimuro ¹,²
Affiliations:
¹ Centro de Ciências Computacionais, Universidade Federal do Rio Grande, Av. Itália, km 8, 96203-900, Rio Grande, Brazil
² Departamento de Estadística, Informática y Matemáticas, Universidad Pública de Navarra, Pamplona, Spain
Keyword(s):
Machine Learning Ensembles, Complexity Measures, Diversity Measures.
Abstract:
Several classification techniques have been proposed in recent years. Each approach is best suited to a particular classification problem, i.e., a given classification algorithm may not effectively or efficiently recognize some patterns in complex data, and selecting the best-tuned solution may be prohibitive. Methods for combining classifiers have also been proposed, aiming at improving generalization ability and classification results. In this paper, we analyze geometrical features of the data class distribution and the diversity of the base classifiers to better understand the performance of an ensemble approach based on stacking. The experimental evaluation was conducted using 32 real datasets, twelve data complexity measures, five diversity measures, and five heterogeneous classification algorithms. The results show that stacked generalization outperforms the best individual base classifier when a combination of complex and imbalanced data coincides with diverse predictions among weak learners.
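To illustrate the ensemble scheme the abstract describes, the following is a minimal sketch of stacked generalization using scikit-learn. The base learners, meta-learner, and synthetic imbalanced dataset below are illustrative assumptions; they are not the five algorithms or 32 datasets evaluated in the paper.

```python
# Sketch of stacked generalization with heterogeneous base learners.
# NOTE: the learners and data here are assumptions for illustration only,
# not the configuration used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic imbalanced data (80/20 class weights) stands in for real datasets.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Heterogeneous weak learners whose cross-validated predictions feed
# a logistic-regression meta-learner (the "stacked" level).
base_learners = [
    ("tree", DecisionTreeClassifier(random_state=42)),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print(f"stacking accuracy: {accuracy_score(y_te, stack.predict(X_te)):.3f}")
```

Diversity among the base learners is what the meta-learner exploits: when their errors are uncorrelated, their combined out-of-fold predictions give the second level more signal than any single model's output.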