Links
Posts about what I read elsewhere. Subscribe with RSS
-
Screenreader only component
Donny D'Amato on making a design system component for content that is meant for screenreaders:
there has been one concept that I’ve struggled to put into this component-driven ecosystem; screenreader only as it has traditionally existed as a class (e.g., .sr-only) added to an otherwise benign element
(From: Screenreader only)
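For context, this is roughly the traditional utility class the quote refers to: a minimal sketch of the common visually-hidden pattern (the class name and the example markup are just for illustration, this is not Donny's component):

```html
<style>
  /* Visually hide content while keeping it available to screenreaders */
  .sr-only {
    position: absolute;
    width: 1px;
    height: 1px;
    padding: 0;
    margin: -1px;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    white-space: nowrap;
    border: 0;
  }
</style>

<!-- The extra context is only exposed to assistive technologies -->
<button>
  Delete
  <span class="sr-only">the file report.pdf</span>
</button>
```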
-
Reducing complexity
Tim Paul on how complexity can increase unexpectedly if we automate:
handling complexity isn't the same as reducing it.
In fact, by getting better at handling complexity we're increasing our tolerance for it. And if we become more tolerant of it we're likely to see it grow, not shrink.
-
Popover in Baseline
With Firefox 125 shipping the feature, good news on popover:
This web feature is now available in all three major browser engines, and becomes Baseline Newly Available as of April 16, 2024.
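A minimal example of what's now Baseline: the popover attribute plus a button that invokes it, no JavaScript required (the id and content here are made up for illustration):

```html
<!-- The invoker: toggles the popover it points at via popovertarget -->
<button popovertarget="more-info">More info</button>

<!-- Hidden by default; the browser handles toggling, light dismiss and the top layer -->
<div id="more-info" popover>
  <p>Extra details that appear in the top layer.</p>
</div>
```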
-
Attributes and properties
Attributes and properties are fundamentally different things.
(From: HTML attributes vs DOM properties - JakeArchibald.com)
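A quick sketch of the kind of difference Jake describes, using an input's value (the element and values are just for illustration): the attribute holds the default, while the property reflects the current state.

```html
<input id="name" type="text" value="Jane">

<script>
  const input = document.getElementById('name');

  // The attribute is what's in the HTML: the *default* value
  console.log(input.getAttribute('value')); // "Jane"

  // The property is the *current* value, which changes independently
  input.value = 'Alex';
  console.log(input.value);                 // "Alex"
  console.log(input.getAttribute('value')); // still "Jane"
</script>
```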
-
Opening
To convince a reader or conference attendee that your content is something to pay attention to, try opening strong.
I don't think I'm very good at this, so I loved Maggie Appleton's latest piece. It's full of useful advice:
For your writing to be worth reading, you need to be exploring something of consequence for someone. You have to have some kind of problem that matters.
(…)
Once you know you have a consequential problem for a community and some sense of a solution, you get to play with narrative details. This is the fun storytelling part.
-
Statistical illusion
Baldur Bjarnason, author of the excellent “The intelligence illusion”, on business risks of Generative AI (recommended!):
Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.
In his post, Baldur warns us once again not to imagine functionality that doesn't exist; he says it's all a ‘statistical illusion’.
-
AI, accessibility and fiction
This week, once again, someone suggested that “AI” could replace (paraphrasing) normative guidelines (ref: mailing list post of AGWG, the group that produces WCAG).
Eric Eggert explains why this seems unnecessary:
The simple fact is that we already have all the technology to make wide-spread accessibility a reality. Today. We have guidelines that, while not covering 100% of the disability spectrum, cover a lot of the user needs. User needs that fundamentally do not change.
(From: “AI” won’t solve accessibility · Eric Eggert)
I can't help but disagree with Vanderheiden and Nielsen. They suggest (again, paraphrasing) that we can stop making accessibility requirements, because those somehow “failed” (they didn't; WCAG is successful in many ways) and because generative AI exists.
Of course, I'm happy and cautiously optimistic that there are technological advancements. They can meet user needs well, like how LLMs “effectively made any image on the Web accessible to blind people”, as Léonie Watson describes in her thoughtful comment. If people want to use tools to meet their needs, great.
But it seems utterly irresponsible to have innovation reduce websites' legal obligations to provide basic accessibility. Especially while there are many unresolved problems with LLMs, like hallucinations (that some say are inevitable), environmental cost, bias, copyright and social issues (including the working conditions of people categorising stuff).
-
What ARIA attributes do
Kitty explains the difference between disabled and aria-disabled:
[disabled and the aria-disabled attribute] are both meaningful attributes with their own pros and cons
(From: On disabled and aria-disabled attributes | Kitty Giraudel)
There's a lesson in here that applies more generally: ARIA attributes merely set ‘accessibility semantics’; they don't have side effects like affecting discoverability. That also means that when you use them and want the behaviours associated with those semantics, you need to add those behaviours yourself. So if you add a button role, the element won't behave like a button just because the attribute is there: you need to add click and keyboard handlers (and more) yourself.
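To make that last point concrete, here's a minimal sketch (not from Kitty's post; the id and handler are made up): the role attribute only changes what assistive technologies are told, the behaviour has to be wired up by hand.

```html
<!-- role="button" only sets semantics; it adds no behaviour by itself -->
<div id="fake-button" role="button" tabindex="0">Save</div>

<script>
  const button = document.getElementById('fake-button');

  function save() {
    console.log('Saving…');
  }

  // Click and keyboard activation need to be added manually;
  // a real <button> element would give us all of this for free
  button.addEventListener('click', save);
  button.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      save();
    }
  });
</script>
```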
-
WebAIM Million 2024
The WebAIM Million 2024 report is out! More errors were detected on average, but the share of pages with detectable errors also improved slightly.
If this inspired you to go fix low-hanging fruit in your projects, I previously wrote about ways to fix common accessibility issues, and a part 2 with more issues to fix. Making websites perfectly accessible can be hard, but reducing fruit that is both low-hanging and very common is not.
-
AI uses too much energy
If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year — the amount consumed by about 1.5 million European Union residents.
(From: AI already uses as much energy as a small country. It’s only the beginning. - Vox)
This is from an interview with Sasha Luccioni, climate researcher at Hugging Face. In it, she explains what the power and water consumption of AI, specifically LLMs, looks like today. It's bad: the amount of energy required is enormous. One example in the post is that a query to an LLM costs almost 10 times as much energy as a query to a regular search engine. That's unsustainable, even if we managed to use 100% renewable energy and water that we really didn't need for anything else.
Once again, this raises the question of whether we really need all the AI applications companies are rushing into their products. Often, they're completely unnecessary.
It reminds me of eating animals. With all we know about animal welfare and climate impact, we've got to consider if (regularly) eating animals has benefits that outweigh those downsides.
Everyone can choose to do whatever they want with the information they have available to them. As a person or as a company. But if you're deciding for a company, the impact is larger: it's the decision times the number of users. For me it's increasingly clear that I don't want to use these “AI” solutions in my personal workflows or suggest in my talks that we might as well use them, let alone push for integrating them into the products I work on.