A Novel Label Aggregation with Attenuated Scores for Ground-Truth Identification of Dataset Annotation with Crowdsourcing

Ratchainant THAMMASUDJARIT, Anon PLANGPRASOPCHOK, Charnyote PLUEMPITIWIRIYAWEJ

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E100-D, No.4, pp.750-757
Publication Date: 2017/04/01
Online ISSN: 1745-1361
Type of Manuscript: Special Section PAPER (Special Section on Data Engineering and Information Management)
Keyword: ground-truth identification, crowdsourcing, label aggregation, attenuation scoring

Summary: 
Ground-truth identification, the process of inferring the most probable labels for a dataset from crowdsourced annotations, is a crucial task for making the dataset usable, e.g., in a supervised learning problem. Nevertheless, the process is challenging because annotations from multiple annotators are inconsistent and noisy. Existing methods require a set of data samples with corresponding ground-truth labels to precisely estimate annotator performance, but such samples are difficult to obtain in practice. Moreover, the process requires a post-editing step to validate indefinite labels, which are generally unidentifiable without thoroughly inspecting the whole annotated dataset. To address these challenges, this paper introduces: 1) the attenuated score (A-score), an indicator that locally measures annotator performance over segments of annotation sequences, and 2) a label aggregation method that applies the A-score to ground-truth identification. The experimental results demonstrate that A-score label aggregation outperforms majority voting on all datasets by accurately recovering more labels. It also achieves higher F1 scores than the strong baselines on all multi-class data. Additionally, the results suggest that the A-score is a promising indicator that helps identify indefinite labels for the post-editing procedure.
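
To illustrate the general idea of aggregating crowd labels with locally estimated, attenuated annotator weights, the following Python sketch is provided. It is not the paper's exact A-score formulation: the sliding window, the exponential decay factor, and the use of agreement with a provisional majority vote as a proxy for annotator correctness are assumptions made for this example only.

import numpy as np

def attenuated_weighted_vote(labels, window=5, decay=0.8, n_classes=None):
    """Illustrative weighted label aggregation (not the paper's exact A-score).

    labels: array of shape (n_annotators, n_items); -1 marks a missing label.
    Each annotator's weight for an item is estimated locally from how often
    they agree with a provisional majority vote in a window around that item,
    attenuated by `decay` so more distant items contribute less.
    """
    labels = np.asarray(labels)
    n_annotators, n_items = labels.shape
    if n_classes is None:
        n_classes = int(labels.max()) + 1

    # Plain majority vote as an initial reference labeling.
    majority = np.empty(n_items, dtype=int)
    for j in range(n_items):
        votes = labels[:, j]
        votes = votes[votes >= 0]
        majority[j] = np.bincount(votes, minlength=n_classes).argmax()

    aggregated = np.empty(n_items, dtype=int)
    for j in range(n_items):
        scores = np.zeros(n_classes)
        for a in range(n_annotators):
            if labels[a, j] < 0:
                continue
            # Local, attenuated agreement score for annotator a around item j.
            weight = 0.0
            for k in range(max(0, j - window), min(n_items, j + window + 1)):
                if labels[a, k] < 0:
                    continue
                weight += (decay ** abs(j - k)) * (labels[a, k] == majority[k])
            scores[labels[a, j]] += weight
        aggregated[j] = scores.argmax()
    return aggregated

# Example usage: 3 annotators, 4 items, binary labels (-1 = missing).
labels = [[1, 0, 1, 1],
          [1, 0, 0, 1],
          [0, 0, 1, -1]]
print(attenuated_weighted_vote(labels, window=2, decay=0.5))

In this sketch, low local weights play a role analogous to the indefinite labels discussed in the summary: items whose winning class receives only weak, attenuated support are natural candidates for manual post-editing.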