The most accessible JavaScript framework

17th July 2020 · 1,937 words

In my post about why I chose Eleventy over Gatsby when building my new website, I cited a statistic from the WebAIM million survey: home pages using React had 5.7% more accessibility errors than the average page. At the time of writing that post, WebAIM hadn’t published the results for Gatsby because its usage share fell under the 0.5% threshold for reporting. I pointed to the value for React to imply that sites built with Gatsby could have a similar rate of issues.

Gatsby have since published their results from the WebAIM million survey, and they are pretty surprising: home pages built with Gatsby had 34.3 detectable errors on average, 44% fewer than the average page[1]. The article includes the following quote from WebAIM:

“Gatsby corresponded with the fewest detectable errors of any common technology or page characteristic analyzed! In fact, Gatsby home pages had about half as many detectable errors as home pages built with React alone.”

This rather unexpected outcome prompted me to explore two subjects further: firstly, the possible reasons for Gatsby doing so much better than other technologies; and secondly, whether automated testing is accurate enough to determine conclusively that a website is accessible.

Enforcing accessibility through developer ‘guide rails’

If I had to choose one reason for Gatsby pages doing so much better than other pages using React, I’d point to the developer ‘guide rails’ Gatsby puts in place to prevent certain common mistakes.

The second most frequently occurring error in the WebAIM million results is img elements missing alternative text. If an image lacks alt text, assistive technologies such as screen readers have no way (apart from the filename) to communicate its content. In cases where an image is purely decorative, the alt attribute is still necessary, but should be empty. Of the following examples, the second would be flagged by WAVE as an accessibility error, but the third would not be:

<!-- 1. GOOD: alt text describes the image -->
<img
  src="/IMG_20200222_101247.jpg"
  alt="A tropical beach with palm trees and a sailboat in the distance."
/>

<!-- 2. BAD: no alt attribute -->
<img src="/IMG_20200222_101247.jpg" />

<!-- 3. POTENTIALLY OK, but only if this image is purely decorative -->
<img src="/IMG_20200222_101247.jpg" alt="" />

The popular first-party plugin for Gatsby, gatsby-image, aims to prevent missing alt attributes by outputting an empty alt attribute (alt="") when the alt text prop is not defined.

You can emulate this behaviour in React using the PropTypes package. In the following example, I’ve created a basic component that outputs an img element. The alt prop is not required, but will be set to an empty string if not defined:

import React from 'react';
import PropTypes from 'prop-types';

// A basic component that outputs an img element
const Image = ({ src, alt }) => <img src={src} alt={alt} />;

Image.propTypes = {
  src: PropTypes.string.isRequired
};

// alt falls back to an empty string when not defined
Image.defaultProps = {
  alt: ''
};

While this is definitely better than leaving the alt attribute off entirely, a developer who doesn’t understand the need for alt text in the first place is unlikely to learn anything from it: the automated test stops complaining, but empty alt text on an image that conveys important information is still an accessibility violation.

Brad Frost suggests a less forgiving approach in his article Enforcing Accessibility Best Practices with Component PropTypes: make the alt prop required. If you forget it, you’ll see a warning in the browser console.

Image.propTypes = {
  src: PropTypes.string.isRequired,
  alt: PropTypes.string.isRequired
};

In my opinion, this is a far better solution, as it prompts action from the developer and reduces the chance they’ll repeat the same mistake. The gatsby-image approach makes sites that use the plugin more accessible, but Brad Frost’s approach could teach developers a valuable lesson they can apply to any site, even one that doesn’t use React.
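You could even combine the two ideas with a custom PropTypes validator. Here’s a minimal sketch of my own (not from either article): the alt prop must be set explicitly, but an empty string is still accepted so that decorative images can deliberately opt out.

import React from 'react';
import PropTypes from 'prop-types';

const Image = ({ src, alt }) => <img src={src} alt={alt} />;

Image.propTypes = {
  src: PropTypes.string.isRequired,
  // Custom validator: any string (including '') passes, but a
  // missing alt prop logs an error explaining what to do
  alt: (props, propName, componentName) => {
    if (typeof props[propName] !== 'string') {
      return new Error(
        `${componentName}: the alt prop is required. ` +
        'Use alt="" only if the image is purely decorative.'
      );
    }
    return null;
  }
};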

Teaching accessibility by example

I’d also like to point out the work the Gatsby team have done to encourage accessibility through their documentation, live streams and events. Unlike some frameworks, whose documentation regularly includes inaccessible examples, Gatsby’s documentation is consistently written with accessibility in mind.

This tutorial on using images in Gatsby contains the following code snippet, showing how to use the gatsby-image component:

<Img
  fluid={data.file.childImageSharp.fluid}
  alt="A corgi smiling happily"
/>

Note the nice descriptive alt prop on this image. Compare it with an example from the docs for Craft, a popular PHP-based CMS, where the alt text is missing entirely, a serious violation of accessibility guidelines:

<img src="{{ asset.getUrl(thumb) }}" width="{{ asset.getWidth(thumb) }}" height="{{ asset.getHeight(thumb) }}">

Adding a tiny bit more markup to make your examples more accessible isn’t overcomplicating things; it’s encouraging best practices. Developers, especially beginners, copy and paste code from documentation when they need to get something working, a phenomenon which even has its own Wikipedia page. Why not make that code accessible?

The limits of automated accessibility testing

The results of the WebAIM million analysis come with some pretty large caveats. Accessibility errors were detected using WAVE, an automated accessibility testing tool. WebAIM, the developers of WAVE and organisers of the survey, acknowledge that, like all automated tools, it can detect only 25% to 35% of possible WCAG[2] conformance failures. The remaining 65% to 75% leaves more than enough room to build an incredibly inaccessible site that still passes automated accessibility audits, as brilliantly demonstrated by Manuel Matuzovic when he built a site to trick Google’s automated testing suite, Lighthouse.
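If you want to see exactly what those automated checks cover, you can run them yourself. Here’s a minimal sketch using axe-core, the open source engine whose tests power Lighthouse’s accessibility audit (see the final footnote); an empty violations list only means nothing machine-detectable was found.

import axe from 'axe-core';

// Audit the current document and list any machine-detectable
// violations. A clean result does NOT mean the page is accessible;
// it means none of the automated checks failed.
axe.run(document).then((results) => {
  results.violations.forEach(({ id, help, nodes }) => {
    console.log(`${id}: ${help} (${nodes.length} instances)`);
  });
});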

Performance is accessibility

While front end performance testing has come on in leaps and bounds in the last few years, with tools like Lighthouse becoming widely used, automated accessibility tests still have huge gaps in what they can detect. In a recent article, Scott Jehl points to the accessibility tree as one area currently overlooked by automated tests. The accessibility tree, or Accessibility Object Model (AOM), is a subset of the Document Object Model (DOM) generated by the browser to communicate with assistive technology such as screen readers. Both the initial DOM and any updates triggered by JavaScript need to be accurately reflected in the accessibility tree, and it can’t be used until it’s fully in sync[3].

[Image: the Lighthouse scores for this website, showing separate 100/100 scores for Performance, Accessibility, Best Practices, and SEO]

Current DOM-based tests don’t measure whether front end JavaScript slows the time taken to build the accessibility tree. As the Lighthouse report for this website above shows, performance is measured separately from accessibility, when in reality it has a real impact on users if downloading and parsing a large JavaScript bundle delays the point at which a site becomes usable. I have concerns that Gatsby’s method of sending static HTML to the browser and then "hydrating"[4] it with React components could have a noticeable negative effect on the browsing experience for those using assistive technology. If you want a better idea of the impact of performance on assistive tech, I’d encourage you to watch this talk by Léonie Watson, in which she explores in depth the APIs screen readers use to communicate with the browser.
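For context, here’s roughly what that hydration step looks like in 2020-era React (a sketch only; App and the root element ID are placeholders, not Gatsby’s actual internals):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

// The static HTML already contains the rendered markup. hydrate()
// attaches React's event handlers and state to it instead of
// rendering from scratch; until that work finishes, parts of the
// page may not respond and the accessibility tree may still be
// catching up.
ReactDOM.hydrate(<App />, document.getElementById('root'));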

Manual accessibility testing

While there’s no way to get a quantitative measure of this ‘time-to-a-usable-accessibility-tree’ (a term I just made up, the equivalent of time-to-interactive but for assistive tech) using current automated tests, you can gain a real feeling for whether there’s an issue by manually testing with assistive tech. I’d encourage you to put yourself in the position of the people actually using these programs in their day-to-day lives. While you won’t get a definitive measurement of how accessible your site is, you’re bound to learn something and maybe gain a little empathy in the process.

I’d start by unplugging your mouse and forcing yourself to use the keyboard to navigate your website instead: do all interactive elements have an obvious focus state? Do you need to tab through fifty links before you reach the content?

Next, install a screen reader: if you use a Mac, you already have VoiceOver; if you use Windows, NVDA is free. Get familiar with the basic controls: do all of your images have meaningful alt text? How long does it take before you can start to interact with the page? Is it faster if you switch JavaScript off? If you want to really understand accessibility, instead of treating it as a checklist of chores, try using the same tech as your users.

Conclusion

The fact that an average of 34.3 detectable accessibility errors per page is considered good is a damning indictment of the state of the web in 2020. Poor accessibility is a problem that seems to be deeply ingrained in the web industry as a whole: either developers are not aware of how to make a website accessible; they genuinely don’t care; or they’re not given the time, support or permission by their employer to make their work accessible. Gatsby’s attempts to make things accessible by default appear to be paying off. However, I think that hiding this away behind layers of components, queries and client-side JavaScript dilutes the educational value. No frontend framework is truly accessible as long as the developers using it have the freedom to write inaccessible code.

Perhaps, then, we shouldn’t be looking to frameworks to fix accessibility, but to companies such as Google and Facebook, which have the resources to improve automated accessibility testing. Instead, much of this work seems to be left to specialist accessibility consultancies, non-profits and volunteers[5]. The fact that accessibility testing lags so far behind performance testing demonstrates the priorities of the large corporations who control much of the web. Improving performance has been shown to increase conversions, ultimately making more money for shareholders, but for many companies the only motivation to improve accessibility is the threat of lawsuits. This narrow-minded view completely ignores the fact that people with disabilities also spend money online. Gatsby remain rare in the world of VC-funded tech startups in that they genuinely seem to care about accessibility because it’s the right thing to do, not just for monetary reasons.

Until web accessibility is something that can be fully automated, developers will need to take it upon themselves to understand it. I’d like to see screen readers up there with browsers and text editors as core tools of front end development. If we want to make the web more accessible for everyone, automated test results aren’t going to change people’s attitudes. Ultimately it’s empathy for users that should be the foundation of any accessible website.


  1. If you, like me, were wondering where Eleventy might sit in these results, it’s unlikely ever to show up in this survey, even if its usage passes the 0.5% threshold to appear in the results. Why? Because there’s no way to tell that a site is using Eleventy by looking at the HTML: it’s a tool for generating pages and leaves no trace in the front end code. ↩︎

  2. Web Content Accessibility Guidelines, the global standard for web content accessibility ↩︎

  3. An archived email conversation between accessibility professionals (GitHub gist) ↩︎

  4. A process whereby React attempts to match elements in the existing HTML markup to its own components and state, effectively re-running the logic which rendered the static HTML file at build time, but in the client. ↩︎

  5. Google’s Lighthouse uses a subset of tests from axe, a testing framework built by Deque, an accessibility software and development company. WebAIM is a non-profit organisation. ↩︎