Cost-efficient Crowdsourcing for Span-based Sequence Labeling: Worker Selection and Data Augmentation
arXiv preprint arXiv:2305.06683, 2023
This paper introduces a novel crowdsourcing worker selection algorithm that enhances annotation quality while reducing costs. Unlike previous studies targeting simpler tasks, this study contends with the complexities of label interdependencies in sequence labeling. The proposed algorithm uses a Combinatorial Multi-Armed Bandit (CMAB) approach for worker selection together with a cost-effective human feedback mechanism. The challenge posed by imbalanced and small-scale datasets, which hinders offline simulation of worker selection, is tackled with a data augmentation method termed shifting, expanding, and shrinking (SES). Rigorous testing on the CoNLL 2003 NER and Chinese OEI datasets showcased the algorithm's efficiency, with F1 scores reaching up to 100.04% of the expert-only baseline alongside cost savings of up to 65.97%. The paper also includes a dataset-independent test that emulates annotation evaluation through a Bernoulli distribution, which still achieved 97.56% of the expert baseline's F1 score with 59.88% cost savings. Furthermore, the proposed approach can be seamlessly integrated into Reinforcement Learning from Human Feedback (RLHF) systems, offering a cost-effective way to obtain human feedback.
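The abstract does not spell out implementation details, but its two core ideas can be illustrated with short sketches. Below is a minimal sketch of CMAB-style worker selection using a CUCB-like upper-confidence rule, with each worker's annotation quality simulated as a Bernoulli variable, in the spirit of the dataset-independent test mentioned above. The worker pool size, quality values, and function names are assumptions for illustration, not taken from the paper.

```python
import math
import random

# Hypothetical setup: 10 crowd workers with unknown annotation quality.
# Each round we select K workers (a "super arm") and observe a Bernoulli
# reward per selected worker (1 = annotation judged acceptable, 0 = not).
NUM_WORKERS = 10
K = 3
TRUE_QUALITY = [random.uniform(0.4, 0.95) for _ in range(NUM_WORKERS)]  # unknown to the algorithm

counts = [0] * NUM_WORKERS         # times each worker was selected
reward_sums = [0.0] * NUM_WORKERS  # accumulated rewards per worker

def select_workers(t):
    """CUCB-style selection: pick the K workers with the highest upper confidence bound."""
    ucb = []
    for i in range(NUM_WORKERS):
        if counts[i] == 0:
            ucb.append(float("inf"))  # force initial exploration of every worker
        else:
            mean = reward_sums[i] / counts[i]
            bonus = math.sqrt(1.5 * math.log(t) / counts[i])
            ucb.append(mean + bonus)
    return sorted(range(NUM_WORKERS), key=lambda i: ucb[i], reverse=True)[:K]

for t in range(1, 2001):
    for i in select_workers(t):
        reward = 1.0 if random.random() < TRUE_QUALITY[i] else 0.0  # simulated feedback
        counts[i] += 1
        reward_sums[i] += reward

best = sorted(range(NUM_WORKERS), key=lambda i: TRUE_QUALITY[i], reverse=True)[:K]
most_selected = sorted(range(NUM_WORKERS), key=lambda i: counts[i], reverse=True)[:K]
print("true best workers:", sorted(best))
print("most selected:    ", sorted(most_selected))
```

A rough sketch of what shifting, expanding, and shrinking a labeled span could look like for data augmentation follows; the paper's exact SES perturbation rules may differ, so the offsets and helper name here should be read as assumptions.

```python
import random

def ses_augment(span, sentence_len, max_offset=1):
    """Perturb a (start, end) token span by shifting, expanding, or shrinking its boundaries.

    Illustrative guess at the SES idea, not the paper's exact procedure.
    Spans are half-open [start, end) token indices.
    """
    start, end = span
    op = random.choice(["shift", "expand", "shrink"])
    d = random.randint(1, max_offset)
    if op == "shift":
        d = random.choice([-d, d])
        start, end = start + d, end + d
    elif op == "expand":
        start, end = start - d, end + d
    else:  # shrink
        start, end = start + d, end - d
    # Clamp to the sentence and keep at least one token in the span.
    start = max(0, start)
    end = min(sentence_len, end)
    if start >= end:
        return span  # perturbation would destroy the span; keep the original
    return (start, end)

print(ses_augment((3, 6), sentence_len=12))
```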