ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots

Yo Hsiao
Maria Wang
arXiv (2024)


We present ScreenQA, a new task and dataset for screen content understanding via question answering. Existing screen datasets focus either on structure and component-level understanding or on much higher-level composite tasks such as navigation and task completion. We attempt to bridge the gap between the two by annotating 86K question-answer pairs over the RICO dataset, in the hope of benchmarking screen reading comprehension capability.