Statistical Methods for Auditing the Quality of Manual Content Reviews
Abstract
Large technology firms face the problem of moderating content on their platforms for compliance with laws and policies. At the scale of billions of pieces of content per day, a combination of human and machine review is necessary to label content. However, human error and subjective measurement are inherent in many audit procedures. This paper introduces statistical methods to identify, quantify, and minimize these sources of risk. We show that these methods reduce reviewer bias.