ICLR 2023 (to appear)
Abstract
Large technology firms face the problem of moderating content on their platforms for compliance with laws and policies. At the scale of billions of pieces of content per day, a combination of human and machine review is necessary to label content. However, human error and subjective measurement methods are inherent in many audit procedures. This paper introduces statistical analysis methods and mathematical techniques to identify, quantify, and minimize these sources of risk, and shows that applying them reduces reviewer bias.
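As an illustration of what "quantifying" reviewer-level risk can mean, a standard starting point is a chance-corrected inter-rater agreement statistic such as Cohen's kappa; the sketch below is a generic example, not the paper's specific methodology, and all names in it are hypothetical.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two reviewers, corrected for chance."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both reviewers labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each reviewer labeled independently, estimated
    # from each reviewer's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical moderation decisions from two human reviewers.
reviewer_1 = ["ok", "ok", "violation", "ok", "violation", "ok"]
reviewer_2 = ["ok", "violation", "violation", "ok", "violation", "violation"]
print(cohen_kappa(reviewer_1, reviewer_2))  # → 0.4 (moderate agreement)
```

Kappa near 1 indicates reviewers agree far beyond chance; values near 0 suggest the audit labels carry substantial reviewer-dependent noise worth investigating.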