Unsupervised learning for robust texture segmentation
JM Buhmann, J Puzicha - Performance Characterization in Computer Vision, 2000 - Springer
Robustness of computer vision algorithms requires stability of the computed results against variations in the input data caused by noise or modeling uncertainty. In unsupervised image processing tasks like texture segmentation, the extracted image partition should provide reliable model estimates of the different texture types. These texture models represent typical properties of textures and should not depend on the specific texture data available to the algorithm. Instead, the performance of the algorithms should be invariant to within-class texture fluctuations and sample fluctuations, which are omnipresent in noisy images. Segmentation solutions have to generalize from the given texture samples to new instances of the same texture types.

The evaluation methodology underlying this chapter is a two-step process: first, computer vision algorithms are characterized by an appropriate optimization principle for the computer vision task; second, the noise sensitivity of these algorithms is measured by a large deviation analysis as known from Statistical Learning Theory. We demonstrate in an exemplary fashion how this methodology provides a systematic evaluation of algorithms. The unsupervised segmentation of texture images using histogram clustering is employed as a prototypical application. Histogram clustering is described in an abstract way in section 2. A central question for all data analysis procedures concerns the significance of the extracted structures, i.e., the robustness of the induced image partition against within-class variability and noise. In the language of statistics, the expectation value with respect to the feature distribution of a given cluster measure is called the expected risk, and its empirical mean evaluated on the given sample is the empirical risk.
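The quantities mentioned above can be made concrete with a small sketch. The following is a minimal, illustrative histogram-clustering loop, assuming that each image site is described by a normalized histogram of local texture features and that sites are assigned to the prototype with minimal KL divergence; the function names and parameters are hypothetical, and this is not the authors' exact algorithm. The averaged in-cluster divergence on the given sample plays the role of the empirical risk; its expectation under the true feature distribution would be the expected risk discussed above.

```python
import numpy as np

def histogram_clustering(histograms, n_clusters, n_iter=50, seed=None):
    """Illustrative sketch: cluster per-site feature histograms into texture classes.

    Each site histogram is assigned to the prototype with minimal KL divergence,
    and prototypes are re-estimated as the mean histogram of their assigned sites.
    Hypothetical names and parameters; not the chapter's exact formulation.
    """
    rng = np.random.default_rng(seed)
    n_sites, n_bins = histograms.shape
    eps = 1e-12

    # Normalize rows so each site histogram is a probability distribution.
    p = histograms / histograms.sum(axis=1, keepdims=True)

    # Initialize prototypes from randomly chosen site histograms.
    prototypes = p[rng.choice(n_sites, n_clusters, replace=False)]

    for _ in range(n_iter):
        # Assignment step: KL(p_site || prototype) for every site/prototype pair.
        kl = (p[:, None, :] * (np.log(p[:, None, :] + eps)
                               - np.log(prototypes[None, :, :] + eps))).sum(axis=-1)
        labels = kl.argmin(axis=1)

        # Update step: prototype = mean histogram of its assigned sites.
        for k in range(n_clusters):
            members = p[labels == k]
            if len(members):
                prototypes[k] = members.mean(axis=0)

    # Empirical risk: average in-cluster divergence on the given sample.
    empirical_risk = kl[np.arange(n_sites), labels].mean()
    return labels, prototypes, empirical_risk
```

In this reading, robustness asks how much the empirical risk computed on one texture sample deviates from its expectation over the feature distribution, which is exactly the kind of question a large deviation analysis addresses.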