Abstract: The co-training algorithm can be applied when a dataset admits a representation in two different feature sets (two views).
The analysis shows that if hypothesis a) does not hold, the co-training algorithm is unable to converge to the optimal Bayesian classifier. This happens ...
This work addresses the case where condition a) does not hold and co-training is unable to converge to the optimal Bayesian classifier, because samples ...
These results help to better understand the behavior of the co-training algorithm when the classes are only 'statistically' separable.
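For orientation, here is a minimal sketch of the standard two-view co-training loop in the spirit of Blum and Mitchell (1998); the Gaussian naive Bayes base learner, the number of rounds, and the per-round selection count are illustrative assumptions, not parameters taken from the papers cited above.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, U1, U2, rounds=10, per_round=5):
    """Two-view co-training sketch: each view's classifier pseudo-labels the
    unlabeled examples it is most confident about; those examples are added
    to the shared labeled pool and removed from the unlabeled pool."""
    X1, X2, y = list(X1), list(X2), list(y)
    pool = list(range(len(U1)))            # indices of still-unlabeled examples
    clf1, clf2 = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        if not pool:
            break
        clf1.fit(np.asarray(X1), y)
        clf2.fit(np.asarray(X2), y)
        newly_labeled = set()
        for clf, view in ((clf1, U1), (clf2, U2)):
            proba = clf.predict_proba(view[pool])
            conf = proba.max(axis=1)
            # take the per_round most confident still-unlabeled examples
            for rank in np.argsort(conf)[-per_round:]:
                idx = pool[rank]
                if idx in newly_labeled:
                    continue               # already claimed by the other view
                X1.append(U1[idx])
                X2.append(U2[idx])
                y.append(clf.classes_[proba[rank].argmax()])
                newly_labeled.add(idx)
        pool = [i for i in pool if i not in newly_labeled]
    return clf1, clf2
```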
In this paper, we present a theoretical analysis of co-training with insufficient views ... [Algorithm 2: Adaptive margin-based co-training]
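The snippet cuts off in the middle of the pseudocode listing, so Algorithm 2 itself is not reproduced here. As a loose, hypothetical illustration of margin-based selection with an adaptive threshold (the schedule below is an assumption, not the paper's rule):

```python
import numpy as np

def margin_select(proba, threshold):
    """Return indices of unlabeled examples whose prediction margin (gap
    between the two largest class probabilities) meets the threshold,
    together with their pseudo-labels (column indices of `proba`)."""
    sorted_p = np.sort(proba, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]
    selected = np.where(margins >= threshold)[0]
    return selected, proba[selected].argmax(axis=1)

def adaptive_threshold(round_idx, start=0.9, decay=0.05, floor=0.5):
    """Hypothetical schedule: start strict and relax the margin
    requirement as co-training rounds proceed."""
    return max(start - decay * round_idx, floor)
```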
In this paper, we address this issue empirically, testing the algorithm on 24 real datasets artificially split into two views, using two different base ...
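A common way to produce artificial views from a single-view dataset is a random partition of the feature columns; a minimal sketch under that assumption (the snippet does not show the paper's exact splitting procedure):

```python
import numpy as np

def random_view_split(X, seed=0):
    """Artificially split a single feature matrix into two disjoint views
    by randomly partitioning its columns in half."""
    rng = np.random.default_rng(seed)
    cols = rng.permutation(X.shape[1])
    half = X.shape[1] // 2
    return X[:, np.sort(cols[:half])], X[:, np.sort(cols[half:])]
```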
We start from an undirected graphical model for single-view learning with Gaussian processes, and then present Bayesian co-training, which is a new undirected ...
This paper presents a study in the semi-supervised learning paradigm and proposes changes to the co-training algorithm in order to provide a confidence value ...
In this paper, we present a new analysis of co-training, a representative paradigm of disagreement-based semi-supervised learning methods.
This paper addresses an issue that has been overlooked so far in the literature, namely, how co-training performance is affected by the size of the initial ...