Robust benchmarking for multi-objective optimization

T. Eftimov, P. Korošec - Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2021 - dl.acm.org
The performance assessment of multi-objective optimization algorithms is a crucial task for investigating their behaviour. However, the quality indicators and statistical techniques selected for a comparison study can have a huge impact on its results. A quality indicator transforms high-dimensional data (an approximation set) into one-dimensional data (a quality indicator value), which can entail a loss of high-dimensional information during the transformation. Comparison approaches typically involve a single quality indicator or an ensemble of quality indicators to address more quality criteria, which are predefined by the user. To provide more robust benchmarking for multi-objective optimization, we extended the DSCTool with three approaches that are ensembles of quality indicators and one novel approach that compares the high-dimensional distributions of the approximation sets and reduces the reliance on the user's preferences in the selection of quality indicators. The approaches are provided as web services for robust ranking and hypothesis testing, including a proper selection of an omnibus statistical test and post-hoc tests if needed.
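To illustrate the many-to-one transformation the abstract describes, the sketch below computes the generational distance, one common quality indicator that collapses an approximation set into a single scalar. This is only an assumed illustrative example using NumPy; it is not part of the paper's DSCTool web services, the specific p-norm variant is one of several in the literature, and the toy approximation sets and reference front are hypothetical.

```python
import numpy as np

def generational_distance(approx_set: np.ndarray, reference_front: np.ndarray, p: float = 2.0) -> float:
    """Map an approximation set (points in objective space) to one scalar:
    a p-norm average of each point's distance to its nearest neighbour on
    the reference front. This many-to-one step is where high-dimensional
    information about the set can be lost."""
    # Pairwise Euclidean distances between approximation points and reference points.
    diffs = approx_set[:, None, :] - reference_front[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    nearest = dists.min(axis=1)                   # distance of each point to the front
    return float(np.mean(nearest ** p) ** (1.0 / p))

# Toy bi-objective example (hypothetical data): two algorithms' approximation
# sets compared against a sampled linear Pareto front.
reference = np.column_stack([np.linspace(0, 1, 50), 1 - np.linspace(0, 1, 50)])
algo_a = np.array([[0.1, 0.95], [0.5, 0.55], [0.9, 0.15]])
algo_b = np.array([[0.2, 0.85], [0.6, 0.45], [0.95, 0.10]])
print(generational_distance(algo_a, reference), generational_distance(algo_b, reference))
```

Ranking two algorithms by this single value keeps only closeness to the front and discards, for example, how the points are spread along it, which is why the paper argues for ensembles of indicators or for comparing the full distributions of the approximation sets.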