A Metrological Framework for Evaluating Crowd-powered Instruments

Praveen Kumar Paritosh
HCOMP 2019: AAAI Conference on Human Computation and Crowdsourcing

Abstract

In this paper we present the first steps towards hardening the science of measuring AI systems by adopting metrology, the science of measurement and its application, and applying it to human (crowd) powered evaluations. We begin with the intuitive observation that evaluating the performance of an AI system is a form of measurement. In all other science and engineering disciplines, the devices used to measure are called instruments, and all measurements are recorded with respect to the characteristics of the instruments used. One does not report the mass, speed, or length of a studied object, for example, without disclosing the precision (measurement variance) and resolution (smallest detectable change) of the instrument used. It is extremely common in the AI literature to compare the performance of two systems by using a crowd-sourced dataset as an instrument, while failing to report whether the performance difference lies within the capability of that instrument to measure. To further the discussion, we focus on a single crowd-sourced dataset, the so-called WS-353, a venerable and often-used gold standard for word similarity, and propose a set of metrological characteristics for it as an instrument. We then analyze several previously published experiments that use the WS-353 instrument and show that, in light of these proposed characteristics, the differences in performance of these systems cannot be measured with this instrument.
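To make the idea of instrument precision concrete, here is a minimal sketch, not taken from the paper, of one way such a characteristic could be estimated for a word-similarity gold standard used via Spearman correlation: bootstrap-resample the individual annotator ratings, recompute the difference in correlation between two systems against each resampled gold standard, and compare the observed difference with the spread of that difference. All data, names, and numbers below are hypothetical placeholders; only numpy and scipy.stats.spearmanr are assumed.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-pair ratings from k crowd annotators (as in the
# raw WS-353 data) and similarity scores from two systems A and B.
n_pairs, n_raters = 353, 13
ratings = rng.uniform(0, 10, size=(n_pairs, n_raters))
system_a = rng.uniform(0, 1, size=n_pairs)
system_b = system_a + rng.normal(0, 0.05, size=n_pairs)

def bootstrap_diff(scores_a, scores_b, ratings, n_boot=1000):
    """Difference in Spearman rho of two systems against resampled gold standards."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample annotators with replacement and rebuild the gold standard
        # as the mean rating per word pair under that resample.
        idx = rng.integers(0, ratings.shape[1], size=ratings.shape[1])
        gold = ratings[:, idx].mean(axis=1)
        rho_a, _ = spearmanr(scores_a, gold)
        rho_b, _ = spearmanr(scores_b, gold)
        diffs[i] = rho_a - rho_b
    return diffs

# Observed difference against the usual (mean-rating) gold standard.
gold_mean = ratings.mean(axis=1)
rho_a, _ = spearmanr(system_a, gold_mean)
rho_b, _ = spearmanr(system_b, gold_mean)
observed_diff = rho_a - rho_b

# Spread of the difference under annotator resampling: one candidate
# operationalisation of the instrument's precision for this comparison.
precision = bootstrap_diff(system_a, system_b, ratings).std()

print(f"observed difference in Spearman rho: {observed_diff:.4f}")
print(f"spread of the difference under resampling: {precision:.4f}")
if abs(observed_diff) < 2 * precision:
    print("the difference lies within the instrument's measurement variance")

This is only one possible operationalisation (resampling annotators rather than word pairs, and using the standard deviation of the bootstrapped difference as the precision); the paper's proposed metrological characteristics may be defined differently.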
