Links
Posts about what I read elsewhere. Subscribe with RSS
-
Attributes and properties
Attributes and properties are fundamentally different things.
(From: HTML attributes vs DOM properties - JakeArchibald.com)
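To make the distinction concrete, here's a minimal sketch of my own (not from Jake's article); the values are just examples, but they show how an input's attribute and property can diverge:

```ts
// Sketch: the attribute and the property can hold different values for the same input.
const input = document.createElement('input');

input.setAttribute('value', 'initial'); // sets the HTML attribute
input.value = 'typed by a user';        // sets the DOM property

console.log(input.getAttribute('value')); // "initial" — the attribute keeps the default
console.log(input.value);                 // "typed by a user" — the property holds the current value
```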
-
Opening
To convince a reader or conference attendee that your content is something to pay attention to, try opening strong.
I don't think I'm very good at this, so I loved Maggie Appleton's latest piece. It's full of useful advice:
For your writing to be worth reading, you need to be exploring something of consequence for someone. You have to have some kind of problem that matters.
(…)
Once you know you have a consequential problem for a community and some sense of a solution, you get to play with narrative details. This is the fun storytelling part.
-
Statistical illusion
Baldur Bjarnason, author of the excellent “The intelligence illusion”, on the business risks of Generative AI (recommended!):
Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.
In his post, Baldur warns us once again not to imagine functionality that doesn't exist; he says it's all a ‘statistical illusion’.
-
AI, accessibility and fiction
This week, once again, someone suggested that “AI” could replace (paraphrasing) normative guidelines (ref: mailing list post of AGWG, the group that produces WCAG).
Eric Eggert explains why this seems unnecessary:
The simple fact is that we already have all the technology to make wide-spread accessibility a reality. Today. We have guidelines that, while not covering 100% of the disability spectrum, cover a lot of the user needs. User needs that fundamentally do not change.
(From: “AI” won’t solve accessibility · Eric Eggert)
I cannot but disagree with Vanderheiden and Nielsen. They suggest (again, paraphrasing) that we can stop making accessibility requirements, because those somehow “failed” (they didn't; WCAG is successful in many ways) and because generative AI exists.
Of course, I'm happy and cautiously optimistic that there are technological advancements. They can meet user needs well, like how LLMs “effectively made any image on the Web accessible to blind people”, as Léonie Watson describes in her thoughtful comment. If people want to use tools to meet their needs, great.
But it seems utterly irresponsible to have innovation reduce websites' legal obligations to provide basic accessibility. Especially while there are many unresolved problems with LLMs, like hallucinations (that some say are inevitable), environmental cost, bias, copyright and social issues (including the working conditions of people categorising stuff).
-
What ARIA attributes do
Kitty explains the difference between disabled and aria-disabled:
[disabled and the aria-disabled attribute] are both meaningful attributes with their own pros and cons
(From: On disabled and aria-disabled attributes | Kitty Giraudel)
There's a lesson in here that applies more generally: ARIA attributes always merely set ‘accessibility semantics’; they don't have side effects like affecting discoverability. It also means that when you use them and want the behaviours associated with the attributes, you need to add those yourself. So if you add a button role, the element won't behave like a button just because of that attribute; you need to add click and keyboard handlers (and more) yourself.
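As a sketch of that lesson (my own example, not from Kitty's post; the .fake-button selector is hypothetical), this is roughly what you'd have to add yourself when putting role="button" on a non-button element:

```ts
// role="button" only changes what assistive technology announces; it adds no behaviour.
const fakeButton = document.querySelector<HTMLElement>('.fake-button');

if (fakeButton) {
  fakeButton.setAttribute('role', 'button'); // semantics only
  fakeButton.tabIndex = 0;                   // keyboard focusability: not provided by the role

  const activate = () => console.log('Activated!');

  fakeButton.addEventListener('click', activate);
  fakeButton.addEventListener('keydown', (event) => {
    // Native buttons activate on Enter and Space; with a bare role we re-implement that ourselves.
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      activate();
    }
  });
}
```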
-
WebAIM Million 2024
The WebAIM Million 2024 report is out! More errors were detected, but pages with fewer errors generally got better.
If this inspired you to go fix low-hanging fruit in your projects, I previously wrote about ways to fix common accessibility issues, and a part 2 with more issues to fix. Making websites perfectly accessible can be hard, but reducing fruit that is both low-hanging and very common is not.
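For what it's worth, a lot of that common fruit (missing document language, images without alt attributes, links without an accessible name, unlabelled form fields) can be spotted with a few lines in the browser console. A rough sketch of my own, not an official WebAIM tool, and deliberately simplistic:

```ts
// Quick-and-dirty check for a few of the most common WebAIM Million error types.
// This only looks at simple patterns; it's no substitute for real testing.
const issues: string[] = [];

if (!document.documentElement.lang) {
  issues.push('Missing lang attribute on <html>');
}

document.querySelectorAll('img:not([alt])').forEach((img) => {
  issues.push(`Image without alt attribute: ${(img as HTMLImageElement).src}`);
});

document.querySelectorAll('a[href]').forEach((link) => {
  if (!link.textContent?.trim() && !link.getAttribute('aria-label') && !link.querySelector('img[alt]')) {
    issues.push('Link without an accessible name');
  }
});

document.querySelectorAll<HTMLElement>('input:not([type="hidden"]), select, textarea').forEach((field) => {
  const labelled =
    (field.id && document.querySelector(`label[for="${field.id}"]`)) ||
    field.closest('label') ||
    field.getAttribute('aria-label') ||
    field.getAttribute('aria-labelledby');
  if (!labelled) issues.push('Form field without a label');
});

console.log(issues.length ? issues : 'None of these particular issues found');
```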
-
AI uses too much energy
If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year — the amount consumed by about 1.5 million European Union residents.
(From: AI already uses as much energy as a small country. It’s only the beginning. - Vox)
This is from an interview with Sasha Luccioni, climate researcher at Hugging Face. In it, she explains what the power and water consumption of AI, specifically LLMs, looks like today. It's bad; the amount of energy required is enormous. One example in the post is that a query to an LLM costs almost 10 times as much energy as a query to a regular search engine. That's unsustainable, even if we managed to use 100% renewable energy and water that we didn't really need for anything else.
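As a back-of-envelope check of those numbers (my arithmetic, not from the article; the ~0.3 Wh estimate for a conventional search is a commonly cited outside figure, not something the interview states):

```ts
// The IEA figure quoted above, spread over the queries it applies to.
const searchesPerDay = 9e9;   // "9 billion searches done each day"
const addedWhPerYear = 10e12; // 10 terawatt-hours ≈ 10 × 10^12 Wh per year

const addedWhPerQuery = addedWhPerYear / (searchesPerDay * 365);
console.log(addedWhPerQuery.toFixed(1)); // ≈ 3.0 Wh of extra energy per query

// Assuming roughly 0.3 Wh for a conventional search (an outside estimate),
// that lines up with the "almost 10 times as much energy" claim.
console.log((addedWhPerQuery / 0.3).toFixed(0)); // ≈ 10
```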
Once again, this raises the question of whether we really need all the AI applications companies are rushing into their products. They're often completely unnecessary.
It reminds me of eating animals. With all we know about animal welfare and climate impact, we've got to consider if (regularly) eating animals has benefits that outweigh those downsides.
Everyone can choose to do whatever they want with the information they have available to them, as a person or as a company. But if you're deciding for a company, the impact is larger: it's the decision times the number of users. For me it's increasingly clear that I don't want to use these “AI” solutions in my personal workflows, or suggest we might as well use them when I give talks, let alone push for integrating them into the products I work on.
-
Content that's worth our time
Cory Dransfeldt explains that while we are developing technology that can generate ever larger amounts of content, the real problem is the quality of that content:
I'm more and more concerned that we're heading to a place that will make it ever more difficult to find anything that's actually worth our time.
(From: We have a content quality problem, not a content quantity problem // Cory Dransfeldt)
-
Alt texts as metadata and the need for context
The idea of including alt text for images as metadata into image files pops up every now and then.
Eric Bailey explains some of the many reasons why this isn't as good an idea as it seems:
The largest thing to grapple with is that images are contextual. Choosing to select and share one is a highly intentional act, and oftentimes requires knowing the larger context of how it will be viewed.
(From: Thoughts on embedding alternative text metadata into images – Eric Bailey)
He explains that describing images is a human-to-human thing, not a “problem” that just needs some tech thrown at it, even if some of that tech can in some ways be helpful and powerful.
-
Touchscreen accessibility
Touch screens and buttonless designs on devices have become the norm, not a definition of the ultra-modern any more. Which means, as a blind individual, that finding accessible household appliances has become increasingly challenging.