- Andres Ferraro
- Fernando Diaz
Offline evaluation of information retrieval and recommendation has traditionally focused on distilling the quality of a ranking into a scalar metric such as average precision or normalized discounted cumulative gain, which lets us compare multiple systems’ performance on the same query or user. Although evaluation metrics provide a convenient summary of system performance, they can also obscure subtle behavior in the original ranking and can carry assumptions about user behavior and utility that do not hold across retrieval scenarios. We propose recall-paired preference (RPP), a metric-free evaluation method based on directly comparing ranked lists. RPP simulates multiple user subpopulations per query and compares systems across these pseudo-populations. Our results across multiple search and recommendation tasks demonstrate that RPP substantially improves discriminative power over existing metrics while being robust to missing data and correlating well with those metrics.
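One plausible reading of the method sketched above: each recall level of a query (the first relevant item, the second, and so on) stands in for a user subpopulation, and two rankings are compared by which one reaches each recall level at an earlier rank. The sketch below illustrates that idea; the function names and the handling of unretrieved relevant items are illustrative assumptions, not the authors' reference implementation.

```python
def ranks_at_recall_levels(ranking, relevant):
    """1-based rank at which each successive relevant item appears."""
    return [rank for rank, doc in enumerate(ranking, start=1) if doc in relevant]

def rpp(ranking_a, ranking_b, relevant):
    """Illustrative recall-paired preference in [-1, 1]; positive favors ranking_a.

    For each recall level c (one simulated subpopulation), the ranking that
    retrieves its c-th relevant item earlier wins that level's preference;
    the result is the mean of the per-level signs.
    """
    ra = ranks_at_recall_levels(ranking_a, relevant)
    rb = ranks_at_recall_levels(ranking_b, relevant)
    m = len(relevant)
    if m == 0:
        return 0.0
    # Assumption: relevant items never retrieved are treated as appearing
    # just beyond the end of the longer list.
    pad = max(len(ranking_a), len(ranking_b)) + 1
    ra += [pad] * (m - len(ra))
    rb += [pad] * (m - len(rb))
    # (pb > pa) - (pa > pb) is the sign of pb - pa: +1 if a is earlier,
    # -1 if b is earlier, 0 on ties.
    return sum((pb > pa) - (pa > pb) for pa, pb in zip(ra, rb)) / m

relevant = {"d1", "d2", "d3"}
run_a = ["d1", "x", "d2", "d3", "y"]
run_b = ["x", "d1", "y", "d2", "d3"]
print(rpp(run_a, run_b, relevant))  # run_a reaches every recall level earlier: 1.0
```

Because the comparison is a sign at each recall level rather than a weighted sum over ranks, no discount function or user model needs to be assumed, which is what makes the approach metric-free.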