Part of Advances in Neural Information Processing Systems 29 (NIPS 2016)
Jacob Steinhardt, Gregory Valiant, Moses Charikar
We consider a crowdsourcing model in which $n$ workers are asked to rate the quality of $n$ items previously generated by other workers. An unknown set of $\alpha n$ workers generate reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. The manager of the experiment can also manually evaluate the quality of a small number of items, and wishes to curate together almost all of the high-quality items while admitting at most an $\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that this is possible with an amount of work required of the manager, and of each worker, that does not scale with $n$: the dataset can be curated with $\tilde{O}(1/\beta\alpha\epsilon^4)$ ratings per worker, and $\tilde{O}(1/\beta\epsilon^2)$ ratings by the manager, where $\beta$ is the fraction of high-quality items. Our results extend to the more general setting of peer prediction, including peer grading in online classrooms.
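As a rough illustration of the "does not scale with $n$" claim (the parameter values below are hypothetical, and the constants and logarithmic factors hidden by $\tilde{O}$ are ignored), taking $\alpha = 0.3$, $\beta = 0.5$, and $\epsilon = 0.2$ gives
$$\frac{1}{\beta\alpha\epsilon^4} = \frac{1}{0.5 \cdot 0.3 \cdot 0.2^4} \approx 4167 \text{ ratings per worker}, \qquad \frac{1}{\beta\epsilon^2} = \frac{1}{0.5 \cdot 0.2^2} = 50 \text{ ratings by the manager},$$
neither of which depends on the number of workers or items $n$.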