Subsets and supersets of WCAG

“Could you give us a checklist for accessibility, please?” is a question accessibility consultants get asked a lot. Checklists, while convenient, reduce WCAG to a subset. To maximise accessibility, we likely need a superset. This post goes into how both subsets and supersets can be helpful in their own ways.

Why WCAG

In this post, I’ll consider WCAG the baseline, as many governments and organisations do. Accessibility standards are by no means perfect, but they are essential. To get their web accessibility right at scale, organisations need a solid definition of what “success” means when building accessibly. Something to measure. Such definitions require a lot of input and perspectives, and they necessarily take a long time to develop. Standards like WCAG are the closest we have to that, and yes, they have gotten a wide range of input and perspectives. In other words, full WCAG reports are a great way for organisations to monitor their accessibility.

We can’t be doing full audits all the time, if only because those are best done by external auditors, outside the project team. On the other end of the spectrum, performing WCAG audits alone isn’t enough. To maximise accessibility, our organisation should test with users with disabilities and include best practices beyond WCAG.

Subsets

Using subsets of WCAG, more team members can work on accessibility more often. More team members, because checklists often require less expertise; more often, because running a few checks requires no planning, unlike conducting a full WCAG audit.

Why settle for less?

Accessibility standards can be daunting to use. If we commit to WCAG 2.1 Level A + AA conformance, there are 50 Success Criteria that we should check against. For every aspect (content, UI design, development, etc.), for every component. I often hear teams say that this is too much of a burden. If we want to decide what applies when and to which team members, we’ll need to be well-versed in WCAG, or hire a consultant who is. Regardless of whether we do that (please do, regular full audits are essential), it makes sense to have some checks that anyone can perform.

Sidenote: of course, in real projects, it takes less effort to evaluate against the full set of WCAG Success Criteria, as not everything always applies. For instance, a component that doesn’t include any “time-based media” can assume the Level A/AA criteria that relate to such media don’t apply. And responsibilities per Success Criterion differ too (for more background: the ARRM project works on a matrix mapping Success Criteria to roles and responsibilities).
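To make the sidenote concrete, here’s a rough sketch, in TypeScript, of how a team might pre-filter a checklist per component. The `Criterion` shape, the feature tags and the convention that an empty `appliesTo` means “always applies” are all made up for illustration; real WCAG scoping is more nuanced than a list of tags.

```typescript
// Hypothetical, simplified model of a Success Criterion: each one lists
// the content features it is relevant to. An empty appliesTo list means
// the criterion applies to every component.
interface Criterion {
  id: string;          // e.g. "1.2.2"
  name: string;        // e.g. "Captions (Prerecorded)"
  appliesTo: string[]; // feature tags, e.g. "time-based-media", "forms"
}

// Returns only the criteria that are relevant given a component's features.
function applicableCriteria(
  all: Criterion[],
  features: Set<string>
): Criterion[] {
  return all.filter(
    (c) => c.appliesTo.length === 0 || c.appliesTo.some((f) => features.has(f))
  );
}
```

A component tagged only with, say, `forms` would then skip the time-based media criteria, exactly as the sidenote describes.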

Have checks that anyone can perform

Checks that anyone can perform don’t require special software or specialist accessibility knowledge. Examples:

  • if you zoom in your browser to 400%, does the interface still work?
  • can you get to all the clickable things with just the TAB key, use them and see where you are? (this one may require some setup if you’re on a Mac)
  • when you click on form labels, does the input gain focus?

So that’s one subset of WCAG. Ideally, we would pick our own for our organisation or project, based on the types of websites or content we have (will we have lots of forms? lots of data visualisation? mostly content? something super interactive?). Pick checks that many people can perform often.
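The label check from the list above can even be approximated in code. The TypeScript sketch below assumes we have already collected the `id` attributes of inputs and the `for` attributes of labels from a page; the function name and the flat-array model are my own, simplified for the example.

```typescript
// An input is focusable by clicking its label text only if some
// <label for="..."> targets the input's id. Given the ids of all inputs
// and the for values of all labels, return the inputs nobody labels.
function unlabelledInputIds(
  inputIds: string[],
  labelForValues: string[]
): string[] {
  const targeted = new Set(labelForValues);
  return inputIds.filter((id) => !targeted.has(id));
}
```

An input whose `id` appears in no label’s `for` attribute is one where clicking the label text won’t move focus, which is what the manual check looks for.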

I’ve seen that this approach can be a powerful part of an accessibility strategy. You know, Conway’s Game of Life only has four rules, yet you can use it to build spaceships, gliders, Boolean logic and finite state machines… sometimes there’s power in simple rules and checks.
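As an aside, those four rules fit in a handful of lines. This TypeScript sketch (the cell encoding and function names are mine) computes one generation of the Game of Life:

```typescript
// Live cells are stored as "x,y" strings in a Set.
type Cell = string;

// The eight neighbouring coordinates of a cell.
function neighbours(cell: Cell): Cell[] {
  const [x, y] = cell.split(",").map(Number);
  const out: Cell[] = [];
  for (let dx = -1; dx <= 1; dx++)
    for (let dy = -1; dy <= 1; dy++)
      if (dx !== 0 || dy !== 0) out.push(`${x + dx},${y + dy}`);
  return out;
}

// One generation: a cell is alive next turn if it has exactly 3 live
// neighbours, or if it is alive now and has exactly 2 live neighbours.
function step(live: Set<Cell>): Set<Cell> {
  const counts = new Map<Cell, number>();
  for (const c of live)
    for (const n of neighbours(c)) counts.set(n, (counts.get(n) ?? 0) + 1);
  const next = new Set<Cell>();
  for (const [c, n] of counts)
    if (n === 3 || (n === 2 && live.has(c))) next.add(c);
  return next;
}
```

Run it on a “blinker” (three live cells in a row) and it flips between horizontal and vertical forever: simple rules, rich behaviour.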

Supersets

With a superset of WCAG, our website can become more accessible.

It’s no secret that WCAG doesn’t cover everything, and that only makes sense. Creating standards takes a lot of time, the web industry evolves continuously and some barriers can be put into testable criteria more easily than others. The Accessibility Guidelines Working Group (AGWG) in W3C does fantastic work on WCAG (sorry, I am biased), including work to cover more and different user needs and to take the ever-changing web into account. I mean, WCAG 2.* is from 2008 and the basic principles still stand after all those years.

Test with people

One of the most effective ways to find out if our work is accessible, is to test with users with disabilities, either by including them in our regular user tests, or as separate user tests.

User testing with people with disabilities is mostly similar to ‘regular’ user testing, but some things are different. In Things to consider when doing usability testing with disabled people, Peter van Grieken shares tips for recruiting participants, timing, interpretation and accommodation.

The Accessibility Project also has a list of organisations that can help with testing with users with disabilities.

Guidance beyond WCAG

There are also lots of accessibility best practices beyond WCAG, some provided by the W3C as non-normative guidance, some provided by others.

Summing up

Many use WCAG as a baseline to ensure web accessibility. This matters a lot; it is important to have regular WCAG audits done (e.g. yearly). In this post, we looked at what we can do beyond that, when we use subsets and supersets of the standard. Subsets can help anyone test anytime, which is good for continually catching low-hanging fruit. Supersets are useful to ensure we’re really building something accessible, through user testing and by embedding guidance and best practices beyond WCAG.

Thanks to Eric Bailey, Paul van Buuren and Marjon Bakker for feedback on earlier drafts (thanks do not imply endorsement).

Hidde de Vries (@hdv@front-end.social) wrote on 30 June 2022:

“We're 100% accessible”, some digital products claim. “That solution is inaccessible”, an accessibility specialist might say. These sorts of statements almost suggest that web accessibility is a binary thing. Is it, though?

In this post, I'll talk about why it's most helpful to see a website's accessibility as a continuum (or, you know, multiple continua). Even then, in some contexts, it makes sense to pretend it is binary.

If you find this interesting, but want actionable advice, see also Adrian Roselli's post Things to Do Before Asking “Is This Accessible?”

It is a spectrum

The accessibility of a website is a spectrum: in terms of the different disabilities that exist, in terms of timing and in terms of how objective any claim about it can be.

The goal of web accessibility is that people with disabilities can use the web. In other words, it is about people and maximising the portion of people who can use our UI well. There are people who can't move their arms, who use their screen zoomed in, whose vision is blurry, who control their computer with their voice, and so forth. Accessibility is about people with a wide range of disabilities, that sometimes overlap, too. Our UI could be accessible to most or all of these people, or to some, or to none. Most products are accessible to at least some people with disabilities. Many have specific barriers for users from specific groups. Some do really well at continuously identifying barriers and removing them. Some don't.

Second, it is about timing: today all podcasts on our site may be transcribed, tomorrow we may upload an episode without a transcript. Or launch a new campaign made by an agency that didn't take all accessibility requirements into account. It happens. Accessibility can't be solved once and then shipped; it's a continuous process of tracking potential barriers and removing them. “That site is accessible” is a statement that changes over time on websites where content changes.

Third, accessibility conformance testing is subjective to some extent. This isn't a bug in accessibility standards; it's more like the most reasonable trade-off. If we tried hard, we could invent success criteria that can be evaluated with 100% certainty, but then we would need many more of them, they might go out of date fast and they might end up a lot less technology-agnostic. The subjectivity serves a purpose, but it's there, and again, it is a reason that a claim like “this is accessible” is hard to make.

So, basically, there is a degree of subjectivity in determining whether something is accessible, because it matters to which user(s), when it is checked and what is checked. For that reason, a statement like “this is accessible” or “this is not accessible” is best taken with a pinch of salt.

Why pretend it's not

A claim of “great accessibility” is subjective, a bit like “great user experience” and “great design”. Especially when you view “meeting WCAG” as a minimum and aim much higher by doing regular user testing and following best practices beyond WCAG (see my other post about using a superset of WCAG). But sometimes it makes sense to try and make formal and objective-like claims about the accessibility of a website or set of websites. To publish reports that say things like “60% of webshops in Germany are inaccessible” or “Only 10% of online banking is accessible”. Those are usually based on automated tests and/or accessibility conformance reports that refer to standards like WCAG.

One example of when it makes sense to pretend accessibility is objective, is the effectiveness of policy. National governments and organisations like the European Commission want to have a more accessible web, they have this as a policy goal. For that to be more than dreams or empty statements, they need to make it practical and tangible. Their method of measuring success is, roughly speaking, to gather accessibility statements and conformance reports. On the one hand, this reduces the experiences of people with disabilities to checking boxes in a standard, on the other hand, this provides insights at scale, while maintaining a reasonably good representation of individual experiences.

WCAG is used as a way to make statements about websites. Combined with a method like WCAG-EM, a detailed process for evaluating conformance published by the W3C, governments can get some level of certainty.

As an example, the Dutch government has a register of over 3500 accessibility conformance statements. Each is about a specific website, to which a rating between A (“fully meets WCAG”) and D (“does not meet WCAG”) is assigned; rating E means “statement is missing”. Other efforts include AllAble's accessibility statements research, looking at accessibility statements from public sector bodies in the United Kingdom.
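To illustrate how such a register yields the percentages quoted in reports, here is a trivial TypeScript tally over the A–E scale described above. The data and function name are invented for the example; the real register is, of course, more than an array of letters.

```typescript
// Ratings as used in the register described above:
// A ("fully meets WCAG") through D ("does not meet WCAG"),
// plus E ("statement is missing").
type Rating = "A" | "B" | "C" | "D" | "E";

// Count how many statements received each rating.
function tallyRatings(ratings: Rating[]): Record<Rating, number> {
  const counts: Record<Rating, number> = { A: 0, B: 0, C: 0, D: 0, E: 0 };
  for (const r of ratings) counts[r] += 1;
  return counts;
}
```

From counts like these, a government can report, say, the share of websites rated A or B, and track how it changes year over year.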

Of course, this approach is not perfect. Individual organisations might be using an auditing agency that is biased or not very good at evaluating WCAG. Some might self-evaluate (generally not a good idea). Or a website could have serious accessibility issues that happen not to be captured by WCAG (it happens). But even with some of those caveats in mind, even if the data is not 100% objective (is data ever?), collecting information from a large set of websites is the best a government can do. Regular formal WCAG/ATAG audits are a great thing for companies and organisations to do too, though ideally that strategy is supplemented by regular user tests and review of best practices.

Wrapping up

In summary: yes, measuring web accessibility is somewhat subjective and claims like “X is accessible” or “X is inaccessible” are tricky. If someone makes such statements, grab a pinch of salt! But, having said that, it can be helpful to talk about accessibility as “meets the criteria in this standard” and “does not meet the criteria in this standard”. Governments do this to measure the success of their policies and companies can do it to have some measurement of their own success. That's mostly useful, even though user tests and best practices based on them are even more meaningful.

List of updates
  • 15 August 2024: Added link to Adrian's post
Thanks to Job, Kilian, Eric and Ronny for feedback on an earlier draft of this post.