This week, a product launched that claimed to generate “production ready” code. But it also generates code with accessibility problems, which contradicts “production ready”. When someone called this out publicly, the community showed its worst side. What can we learn?
To be clear: I wrote this to share lessons from the community responses to a concern about accessibility issues, because these kinds of replies are common and it's useful to have context. I don't want to add fuel to the fire, which is why I left out links to individual tweets and people.
I do want to call out Vercel, a business with a large voice in the developer community, which I do at the end of the post.
The “production ready” claim
I’ll start by elaborating on my first point. “Has accessibility issues” contradicts “production ready”, for three reasons:
- equal access to information is a human right
- most organisations have legal requirements to make accessible products
- it can cost companies money if people can’t use their products (you wouldn’t stop every fifth customer from entering your shop).
I will note it was an “alpha” launch, but the words “production ready” were used without nuance (neither in the marketing nor in the actual tool; a warning banner could go a long way). Fair enough, maybe they want to look at accessibility later (a personal pet peeve, though: I recommend shifting left instead, as doing accessibility earlier is easier).
The company could have made different choices. If it is known that accessibility is problematic, maybe the product could have come with a checklist that helps people avoid some of the most common issues? Or some kind of warning or banner that nuances the “production ready” claim? These are choices to be made, balancing what makes the product look less polished against what harms end users.
Ableism
Many of the responses were ableist: they discriminate or contain social prejudice against people with physical or mental disabilities. A key point to make here is: don't feel offended if you or your comment is called ableist. Instead, listen and learn (seriously, it's an opportunity). The system is ableist, and on top of that system individuals make comments that can be called ableist. We (as a people) need to identify and break down that system, but people can also learn individually: everyone has a degree of ableism (like they have some degree of sexism and racism). I know I do. I've been learning about accessibility for about 15 years and still learn new things all the time (the same goes for sexism, racism, etc; these all need regular introspection, and they are related, see also intersectionality).
Learning from the responses
Below, I'll list some responses I found problematic and try to explain why. I'm hoping this is helpful for people who want to understand better why accessibility is critical, and why accessibility specialists point this out.
- “can’t expect them to make everything perfect, especially in the alpha release” - I think it's fair to have expectations of a company that has a large voice in the web development community
- “Here's a better framing”, “Why the agressive tone?”, “Why are these people so insufferable? (…)” - this shifts the question about accessibility to one about how the person asking for equality phrases their feedback (this is tone policing, a common derailment tactic)
- “🤮 being an insufferable dick to well-meaning, well-intentioned people is not going to work for your cause, no matter how good of a cause it is” - this seems to suggest that inaccessibility is ok as long as the intentions are good (that is ableist; equal access cannot be bought off by good intentions only. It is actual equality that is required)
- “Paying a six figure engineer to add features only 1% of your user base needs only makes profit sense after, idk, 100K active users? [screenshot of ChatGPT to prove the number]” - it’s not only about profit sense, it’s also about ethical sense and legal sense. And if you want to focus on profit only: about 20% of people have a disability (according to the WHO), and almost everyone will develop some form of disability as they age.
- ‘You are complaining about a WYSIWYG editor not being accessible to the blind---do you make similar complains about sunsets and VR headsets?’ and ‘I don't think anyone using this site needs accessibility’- this is a fundamental misunderstanding of how people with disabilities use the web. Yes, blind people use WYSIWYG editors (and so do people with other disabilities, which is why creators of these tools care, see the accessibility initiatives for tools like TinyMCE). See also Apple's videos on Sady Paulson, who uses Switch Control to edit videos or on how people use tools like Door Detection and Voice Control.
- ‘Then what is the argument for accessibility, if not screen readers or search engine crawlers?’ - again, there are many more ways people with disabilities use the web, and beyond permanent disabilities (as mentioned, about 20% of people), there are people with temporary impairments (like a broken arm) and situational impairments
Some responses were particularly hostile and personal. “I'm shocked that you're unemployed ..🤯🤯😅”, “Okay, Karen”, “(…) She wants attention”, “No matter how much you shame Vercel, they don't want you. They never will”, “Go accessibility pimp else where (sic) and pretend that others give a shit”, “[you are] being an insufferable dick”. These are all unacceptable personal attacks.
If you work at Vercel (this all relates to their v0 product), please consider speaking up (silence speaks too) and/or talking with your community about how accessibility is viewed and how people in the community interact. The quotes in this post are all real quotes, from people defending Vercel. To his credit, the CEO set the right example with his response (“Thanks for the feedback”).
Wrapping up
So, in summary: the “production ready” claim and lack of nuance about what that means is problematic. Pointing it out got responses I'd call ableist, plus a few responses that were plain hostile. All of this reflects badly on the community.
It's not new that accessibility advocates get hostile responses to reasonable requests (or when doing their job). But it's been a while since I've seen so many of those responses, so I wanted to take the opportunity to write down some common misunderstandings.
I wasn't particularly enthused by the alpha release of Vercel's v0 UI generation tool this week. Provided with a prompt, it produces React code with Tailwind to style it. It's being marketed as a prototyping tool, but was described by Vercel's CEO on Twitter as "production-grade". Well, which one is it?
To be clear, my issue isn't with the concept of large language models (LLMs) in general: I use Copilot in my everyday work, and it's saved me a lot of time when writing tests! My concern is with the source material used to train it, and what people do with the output.
Hidde de Vries wrote a great post on the incredibly problematic dev community response after the accessibility of the UI was called out, so I won't go into that here.
Stop putting inaccessible prototypes into production
Prototyping is one thing. Noodling around with UI ideas? Fantastic. Please don't put it into production.
A pattern I've noticed with a huge number of tech/SaaS startups is that inaccessible prototype code makes it into production. Inevitably, people will copy existing patterns in the codebase, adding more stuff on top, and time passes. At this point, you have a completely inaccessible SaaS product, and the work needed to go back and fix all the problems is so extensive, and so costly, that it's never going to be a "business priority". So many of the B2B SaaS tools I've used are terribly inaccessible; companies seem to forget that staff may have access needs as well as customers, and that those access needs may extend beyond being blind or partially sighted.
In fact, a 2020 study by RNIB (docx) found that "50% of employers thought that there may be additional health and safety risks in the workplace [employing someone blind or partially sighted]; 33% of employers thought that they may not be able to operate a computer/laptop; 33% of employers thought that they may not be able to operate the necessary equipment, excluding computers/laptops". Isn't that just terribly fucking depressing?
I truly wish the founders of SaaS companies worldwide would learn some proper semantic HTML before they build these UIs that are going to form the basis of their entire product for years to come. I'd love to see SaaS accessibility viewed as a competitive advantage rather than an expensive afterthought.
As Ashlee Boyer puts it:
Before v0 was announced I was planning on writing a blog post about the inaccessibility of SaaS software, and how it starts at prototypes making it to production, so I guess this is that post. The problem has existed for years, long before generative LLMs came along. LLMs are just making it worse, and faster.
People write inaccessible code, and LLMs copy people
Tools like v0 are destined to become the new "copying and pasting from open-source UI libraries". In the wider community, ChatGPT has already overtaken StackOverflow as the first port of call for coding questions. Developers shouldn't unreservedly trust the output from code-generating LLMs, though. I double-triple-check everything that Copilot produces for me, and generally end up tweaking it about half of the time – and I rarely let it write blocks of code unless I'm writing loads of repetitive tests. It's great for autocompleting variables, repeating things I've already written and finishing lines, but I have found it has a tendency to make up functions that don't exist.
Likewise, ChatGPT and v0 may produce decent enough code, or they might give you <div>s that should be links. At least v0 seems to be adding labels to form inputs, which is more than I can say for a lot of developers on the web. But that's kind of the issue: people don't write accessible code, so why should we be trusting algorithms trained on other people's inaccessible code to write UI code for us? Vercel say it was trained on code written by their team, but considering the UI for v0 wasn't accessible itself, I don't have a huge amount of confidence that the input source material for the LLM was.
This isn't purely a Vercel problem, of course; it's an everyone problem. But we as developers can make it better by learning how to write semantic HTML and proper CSS, and then training the models on that.
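To make the "divs that should be links" point concrete, here's a hand-written sketch (not actual v0 output) contrasting the inaccessible patterns with their semantic equivalents. The semantic versions get keyboard focus, activation and assistive-technology announcements for free:

```html
<!-- Inaccessible: a clickable div has no link role, no keyboard focus,
     and screen readers won't announce it as a link -->
<div class="link" onclick="location.href='/pricing'">Pricing</div>

<!-- Accessible: a real link, focusable and announced correctly by default -->
<a href="/pricing">Pricing</a>

<!-- Inaccessible: a placeholder is not a label; it disappears on input
     and isn't reliably announced -->
<input type="email" placeholder="Email address">

<!-- Accessible: an explicit, programmatically associated label -->
<label for="email">Email address</label>
<input type="email" id="email" name="email" autocomplete="email">
```

None of this is exotic; it's the baseline semantic HTML that any generated UI code should be checked against before it ships.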
I do believe that LLM-backed tools can speed up the development process and make us more productive, but not through developers indiscriminately pasting whatever they generate into our codebases and trusting that it's all fine. Our users deserve a little more diligence than that.