RADAR: Benchmarking Language Models on Imperfect Tabular Data

Ken Gu
Kumar Ayush
Hong Yu
Zhihan Zhang
Yuzhe Yang
Shwetak Patel
Max Xu
Mark Malhotra
Orson Xu
Evelyn Zhang
Tim Althoff
2025

Abstract

Language models (LMs) are increasingly being deployed to perform autonomous data analyses, yet their data awareness, the ability to recognize, reason over, and appropriately handle data artifacts such as missing values, outliers, and logical inconsistencies, remains under-explored. These artifacts are common in real-world tabular data and, if mishandled, can significantly compromise the validity of analytical conclusions. To address this gap, we present RADAR, a benchmark for systematically evaluating data awareness on tabular data. RADAR introduces programmatic perturbations for each unique query-table pair, enabling targeted evaluation of model behavior. RADAR comprises 2,500 data analysis queries across 55 datasets spanning 20 domains and 5 data awareness dimensions. In addition to evaluating artifact handling, RADAR systematically varies table size to study how reasoning performance scales with input length. In our evaluation, we identify fundamental gaps in LMs' ability to perform reliable, data-aware analyses. Designed to be flexible and extensible, RADAR supports diverse perturbation types and controllable table sizes, offering a valuable resource for advancing tabular reasoning.