
Don’t rely on automated tests for solid accessibility

It is important to check for accessibility issues manually, because automated testing will not find all of them

Automated testing tools provide a good basis for finding common accessibility issues on websites. These tools can be integrated into the development process, where they run frequently and give developers immediate feedback. Most tools are even free to use. However, automated testing tools typically only report the most common accessibility violations. Manual testing and usability testing remain the gold standard for finding all accessibility issues.
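To give an idea of what that integration can look like, here is a minimal sketch of an automated check running inside a unit test. It assumes a Jest setup with the jest-axe package, and the form markup is invented for the example.

```ts
import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

it('has no automatically detectable accessibility violations', async () => {
  // Hypothetical markup under test; in a real project you would render a component.
  document.body.innerHTML = `
    <form>
      <label for="email">Email address</label>
      <input id="email" type="email" />
      <button type="submit">Subscribe</button>
    </form>
  `;

  expect(await axe(document.body)).toHaveNoViolations();
});
```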

Automated testing is useful, but far from perfect

There are many automated testing tools out there, for example axe, WAVE, the Accessibility Developer Tools and Lighthouse for Chrome. Almost all of them test your website against the Web Content Accessibility Guidelines (WCAG). These tools do a great job testing requirements that can be computed. For example, Success Criterion 1.4.3 Contrast (Minimum) states that text must have a contrast ratio of at least 4.5:1 with its background color. A tool can simply calculate this ratio and check whether it meets the threshold.
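As an illustration, the sketch below follows the relative luminance and contrast ratio formulas that WCAG defines for this check; the example colors are made up.

```ts
// Relative luminance of an sRGB color, with channels given as 0-255.
function relativeLuminance(r: number, g: number, b: number): number {
  const linear = (channel: number): number => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Grey #767676 on white comes out at roughly 4.54:1, just above the 4.5:1 minimum.
console.log(contrastRatio([0x76, 0x76, 0x76], [0xff, 0xff, 0xff]).toFixed(2));
```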

Things get more complex when looking at criteria that require interpretation or contextual understanding. Success Criterion 1.4.1 Use of Color states that you cannot rely on color alone to convey information to the user. Let’s take a form validation error message as an example. You can indicate the error by making a form input label red, but that alone doesn’t conform to this criterion. You can add an icon next to the label, which can bring it closer to conformance. But tooling cannot know whether an image next to the label is purely decorative or indicates an error; that is something you need to check manually.
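As a sketch of what conforming can look like in code: the snippet below marks a hypothetical email field as invalid, adds a text error message linked to the field, and treats the icon as decorative because the text already carries the meaning. Whether that judgement is right is exactly what a tool cannot decide for you.

```ts
// Rough sketch: an error state conveyed by more than color alone.
// The #email and #email-error ids are hypothetical.
function showEmailError(message: string): void {
  const input = document.querySelector<HTMLInputElement>('#email');
  const error = document.querySelector<HTMLElement>('#email-error');
  if (!input || !error) return;

  // Red styling can stay, but the error is also expressed as text
  // and linked to the field for assistive technology.
  input.setAttribute('aria-invalid', 'true');
  input.setAttribute('aria-describedby', 'email-error');
  error.textContent = `Error: ${message}`;

  // The icon is marked decorative here because the text already says "Error".
  const icon = document.createElement('img');
  icon.src = '/icons/error.svg';
  icon.alt = '';
  error.prepend(icon);
}

showEmailError('Please enter a valid email address.');
```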

Things get even more complex with criteria like 1.3.2 Meaningful Sequence or 2.4.6 Headings and Labels. These state that content must be presented in a meaningful sequence and that headings and labels must describe the topic or purpose. Whether a sequence, heading or label is meaningful can only be determined by a human. What if I put the text ‘dog’ in the `alt` attribute of a cat image? Automated testing tools will probably say that the image passes Success Criterion 1.1.1, but I didn’t actually provide “a text alternative that serves the equivalent purpose”: the text ‘dog’ doesn’t describe the cat in the image.

A few years ago, GOV.UK tested a number of automated testing tools to see how they performed. They set up a test website filled with accessibility flaws and recorded which tools found which flaws. GOV.UK noted that the percentage of flaws found varied a lot between tools: a single tool found between 19% and 41% of them. But more importantly, all ten tools combined still didn’t pick up on 29% of all accessibility flaws.

Manual testing is important

To find the accessibility flaws that automated testing tools cannot recognise, manual testing is essential. As mentioned, many accessibility criteria require human interpretation. The W3C has created Understanding and Techniques documents to help developers with this. Below every success criterion in the WCAG Quick Reference Guide there is a list of Sufficient Techniques and Failures to help developers meet the requirements.

For example, if you want to make the image of a cat accessible, you can follow Situation A in the Sufficient Techniques: a short description can serve the same purpose as the non-text content (the image). The technique mentioned there is G94, and its page explains how to apply the technique, including some examples.
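In practice this comes down to writing a short text alternative that conveys the same thing as the image. A minimal sketch, with an invented file name and description:

```ts
// Technique G94: a short text alternative that serves the same purpose as the image.
// The file name and description below are hypothetical.
const cat = document.createElement('img');
cat.src = '/images/cat.jpg';
// Only a human can judge whether this description really matches the image:
cat.alt = 'A grey tabby cat sleeping on a windowsill';
document.querySelector('main')?.append(cat);
```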

User testing is vital

Despite all our best efforts, sometimes we build something that is still not completely accessible. The only way to find out if we made something truly accessible is by user testing. You can compare this to testing a UX or visual design: people sometimes interpret things differently than the designer intended.

We did a user test for the new corporate website we built for Geldmaat, and during this test we identified additional accessibility issues. For example, one user used magnification software to zoom in on her screen. This left very little screen real estate for the actual content, as the header and sticky footer took up almost all of the screen. The same user had high contrast mode enabled in her OS, and as a result some SVG icons were no longer visible. These issues, among others, we would never have found without user testing.

The website of Geldmaat as it looks using high contrast mode and 175% zoom

And it’s not only people with a disability who make use of accessibility features. For example, if you use your phone outside in the sunlight, a website with a good contrast ratio will be easier to read. Closed captions are really nice if I’ve forgotten my headphones and don’t want to bother people on the train but still want to watch the news. And on an old laptop you might gain a bit of performance by enabling the reduced motion setting.

Accessibility testing is a continuous process

In an evolving project, use automated testing to make sure your code doesn’t regress. A smart combination of automated testing and manual testing will ensure you catch most flaws. However, it is important to plan user tests every once in a while to find the issues that your testing setup won’t.
