“Could you give us a checklist for accessibility, please?” is a question accessibility consultants hear frequently. Checklists, while convenient, reduce WCAG to a subset. To maximise accessibility, we likely need a superset. This post goes into how both subsets and supersets can be helpful in their own ways.
In this post, I’ll consider WCAG the baseline, as many governments and organisations do. Accessibility standards are by no means perfect, but they are essential. To get their web accessibility right at scale, organisations need a solid definition of what “success” means when building accessibly. Something to measure. Such definitions require a lot of input and perspectives, so they necessarily take a long time to create. Standards like WCAG are the closest thing we have to that, and yes, they have gotten a wide range of input and perspectives. In other words, full WCAG reports are a great way for organisations to monitor their accessibility.
We can’t be doing full audits all the time, if only because those are best done by external auditors, outside the project team. On the other end of the spectrum, just performing WCAG audits isn’t enough. To maximise accessibility, our organisation should test with users with disabilities and include best practices beyond WCAG.
Using subsets of WCAG, more team members can work on accessibility more often. More team members, because checklists often require less expertise, and more often, because doing a few checklists requires no planning, unlike conducting a full WCAG audit.
Why settle for less?
Accessibility standards can be daunting to use. If we commit to WCAG 2.1 Level A + AA conformance, there are 50 Success Criteria that we should check against, for every aspect (content, UI design, development, etc.) and for every component. I often hear teams say that this is too much of a burden. If we want to decide what applies when and to which team members, we’ll need to be well-versed in WCAG, or hire a consultant who is. Regardless of whether we do that (please do, regular full audits are essential), it makes sense to have some checks that anyone can perform.
Sidenote: of course, in real projects, it takes less effort to evaluate against the full set of WCAG Success Criteria, as not everything always applies. For instance, for a component that doesn’t include any “time-based media”, we can assume the four Level A/AA criteria that relate to such media don’t apply. And responsibilities per Success Criterion differ too (for more background: the ARRM project works on a matrix mapping Success Criteria to roles and responsibilities).
Have checks that anyone can perform
Checks that anyone can perform don’t require special software or specialist accessibility knowledge. Examples:
- if you zoom in your browser to 400%, does the interface still work?
- can you get to all the clickable things with just the Tab key, use them and see where you are? (this one may require some setup if you’re on a Mac)
- when you click on form labels, does the input gain focus?
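The label check can even be partly automated. Here’s a minimal sketch of my own (not an official tool from the post): a small script that lists form fields no `<label for="…">` points to, which is the association that makes clicking a label focus its input. The helper function is pure so it works anywhere; in a real page, you would feed it values gathered from the DOM, as in the guarded section below.

```javascript
// Hypothetical quick check (a sketch, not an official tool): list form
// fields that no <label for="…"> points to. Note: labels that wrap their
// field ("implicit" labels) also associate correctly and are not covered
// by this simple sketch.
function unlabeledFieldIds(labelForValues, fieldIds) {
  const labeled = new Set(labelForValues);
  return fieldIds.filter((id) => !labeled.has(id));
}

// In a browser, gather the real values from the DOM (e.g. paste this
// into the developer console):
if (typeof document !== 'undefined') {
  const labels = [...document.querySelectorAll('label[for]')].map((l) => l.htmlFor);
  const fields = [...document.querySelectorAll('input, select, textarea')]
    .map((f) => f.id)
    .filter(Boolean); // ignore fields without an id
  console.log('Fields without a <label for>:', unlabeledFieldIds(labels, fields));
}

// Example with plain arrays: the "email" field has no matching label.
console.log(unlabeledFieldIds(['name'], ['name', 'email'])); // → [ 'email' ]
```

A check like this is deliberately crude: it catches missing associations, not whether the label text is actually useful, which is exactly why the manual click-the-label check still matters.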
So that’s one subset of WCAG. Ideally, we would pick our own subset for our organisation or project, based on the types of websites or content (will we have lots of forms? lots of data visualisation? mostly content? super interactive? etc). Pick checks that many people can perform often.
I’ve seen that this approach can be a powerful part of an accessibility strategy. You know, Conway’s Game of Life only has four rules, yet you can use it to build spaceships, gliders, Boolean logic and finite state machines… sometimes there’s power in simple rules and checks.
With a superset of WCAG, our website can become more accessible.
It’s no secret that WCAG doesn’t cover everything, and that only makes sense. Creating standards takes a lot of time, the web industry is in continuous evolution, and some barriers can be put into testable criteria more easily than others. The Accessibility Guidelines Working Group (AGWG) in W3C does fantastic work on WCAG (sorry, I am biased), including work to cover more and different user needs and to take the ever-changing web into account. I mean, WCAG 2.* is from 2008 and the basic principles still stand after all those years.
Test with people
One of the most effective ways to find out if our work is accessible is to test with users with disabilities, either by including them in our regular user tests or in separate user tests.
User testing with people with disabilities is mostly similar to ‘regular’ user testing, but some things are different. In Things to consider when doing usability testing with disabled people, Peter van Grieken shares tips for recruiting participants, timing, interpretation and accommodation.
The Accessibility Project also has a list of organisations that can help with testing with users with disabilities.
Guidance beyond WCAG
There are also lots of accessibility best practices beyond WCAG, some provided by the W3C as non-normative guidance, some provided by others.
For instance, see:
- Making Content Usable for People with Cognitive and Learning Disabilities, a document filled with UX recommendations, specifically related to people with cognitive and learning disabilities, but useful for all
- XR Accessibility User Requirements for if you’re building anything “extended reality”-like, such as virtual reality and augmented reality
- Accessibility Requirements for People with Low Vision on how to make web content accessible to people with low vision
- GOV.UK accessibility blog, where the folks behind GOV.UK share stories from their accessibility practice, tests they’ve done and more
- Scott O’Hara’s Accessible Components
Many use WCAG as a baseline to ensure web accessibility. This matters a lot: it is important to have regular WCAG audits done (e.g. yearly). In this post, we looked at what we can do beyond that, using subsets and supersets of the standard. Subsets can help anyone test stuff anytime, which is good for continually catching low-hanging fruit. Supersets are useful to ensure you’re really building something accessible, by testing with users and by embedding guidance and best practices beyond WCAG.
Thanks to Eric Bailey, Paul van Buuren and Marjon Bakker for feedback on earlier drafts (thanks do not imply endorsement)