Software prototype for the ensemble of automated accessibility evaluation tools

  • Peter Johnson
  • Mariana Lilley
Springer, pp. 532-539


Web accessibility evaluation is concerned with assessing the extent to which web content meets accessibility guidelines; it is typically conducted through manual inspection, user testing and automated testing. Automating aspects of accessibility evaluation is of interest to practitioners because manual evaluations require substantial time and effort [1]. The use of multiple evaluation tools is recommended [9][14]; however, aggregating and summarising the results from multiple tools can be challenging [1]. This paper presents a Python software prototype for the automatic ensemble of web accessibility evaluation tools. The prototype evaluates website accessibility against the WCAG 2.1 AA guidelines by combining four free and commercial evaluation tools, aggregating their results and presenting them in a single report. The tool enables practitioners to benefit from a coherent report of the findings of different accessibility conformance testing tools without having to run each tool separately and manually combine the results. Thus, it is envisaged that the tool would provide practitioners with reliable data about unmet accessibility guidelines in an efficient manner.
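The aggregation step described above can be sketched in Python. The paper does not disclose its internal data model or which four tools it wraps, so the tool names, the per-criterion finding counts, and the `aggregate_findings` helper below are all hypothetical; the sketch only illustrates the general idea of merging per-tool results keyed by WCAG success criterion and ranking criteria by cross-tool agreement.

```python
from collections import defaultdict

def aggregate_findings(tool_results):
    """Merge findings from several evaluation tools into one summary.

    tool_results maps a (hypothetical) tool name to a dict of
    {WCAG success criterion: number of flagged instances}.
    Returns rows of (criterion, sorted tool names, total instances),
    ordered by how many tools agree, then by instance count.
    """
    merged = defaultdict(lambda: {"tools": set(), "instances": 0})
    for tool_name, findings in tool_results.items():
        for criterion, count in findings.items():
            entry = merged[criterion]
            entry["tools"].add(tool_name)
            entry["instances"] += count
    return sorted(
        ((c, sorted(v["tools"]), v["instances"]) for c, v in merged.items()),
        key=lambda row: (-len(row[1]), -row[2]),
    )

# Illustrative (invented) per-tool output for one evaluated page.
results = {
    "tool_a": {"1.1.1": 3, "1.4.3": 2},
    "tool_b": {"1.1.1": 4},
    "tool_c": {"1.4.3": 1, "4.1.2": 5},
}
for criterion, tools, n in aggregate_findings(results):
    print(f"WCAG {criterion}: {n} instance(s) flagged by {len(tools)} tool(s)")
```

Ranking criteria by the number of tools that flag them reflects the motivation for ensembling: a violation reported by several independent checkers is less likely to be a false positive from any single tool.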
