Abstract
In this paper, we propose a three-stage approach called CLabel for enforcing collaborative web-resource labeling in the form of a crowdsourcing process. In CLabel, the results of both crowdsourced and automated tasks are combined into a coherent process flow. CLabel leverages crowd preferences and consensus to capture the different interpretations that can be associated with a given web resource, in the form of candidate labels, and to select the most agreed-upon candidate(s) as the final result. CLabel is particularly appropriate for labeling problems and scenarios where human feelings and preferences are decisive in selecting the answers (i.e., labels) supported by the majority of the crowd. Moreover, CLabel provides label variety when multiple labels are required for a suitable resource annotation, thus avoiding duplicate or repetitive labels.
A real case study of collaborative web-resource labeling in the music domain is presented, in which we discuss the task/consensus configuration and the obtained labels, as well as the results of two specific tests, devoted respectively to the analysis of label variety and to the comparison of CLabel results against a reference classification system in which music resources are labeled using predefined categories based on a mix of social-based and expert-based recommendations.