Links
Posts about what I read elsewhere. Subscribe with RSS
-
WCAG 2.2 in GOV.UK design system
Design systems can be a super effective way to propagate a lot of accessibility at once, across many services. Not just through components that have good defaults, but also, maybe especially, through written documentation that helps people better understand what to do.
The GOV.UK Design System was updated to add an accessibility section and specific guidance on meeting WCAG 2.2 across the different component pages.
-
More unnecessary AI
One of the things that I keep circling back to when reading about ‘AI’ is the kind of problems people are trying to solve with it, so many of which are completely futile.
Chris Person on the ‘rabbit’:
What’s most annoying about all of this is the sheer repeated imposition of this horseshit. I’m sick of being forced to think about generative AI, large language models and large action models. I’m tired of these adult toddlers who need an AI to tie their shoes and make bad Pixar characters for them. Microsoft and Google keep shoving AI features into their software, and I absolutely should not have to worry about this garbage from Firefox of all places.
(from: Why Would I Buy This Useless, Evil Thing? - Aftermath)
-
Inspecting a scam site
Hui Jing had fun inspecting a site:
I just got a scam SMS and thought it’d be fun to inspect how the phishing website works
Hope you recover soon, my friend!
(From: Let's inspect a phishing site)
-
Design systems and the promise of solving inconsistency
Design systems aren't a silver bullet to align teams and magically make digital products more consistent. In Through a design system, darkly, Ethan Marcotte describes two issues:
- Design systems haven’t “solved” inconsistency. Rather, they’ve shifted how and when it manifests.
- Many design systems have introduced another, deeper issue: a problem of visibility.
-
Losing the imitation game
Jennifer Moore on what LLMs can and cannot do:
The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.
-
How engineers see the web
In Weird things engineers believe about Web development, Brian Birtles talks about the different assumptions held by developers of websites and of web browsers:
it’s easy to assume our experience of the Web is representative of Web development in general
Yup, checks out.
-
AI terminology
I've been using “AI”, with quotes, a bunch on this website. I feel the industry is calling things artificially intelligent way beyond the scope of what that (admittedly hard to define) phrase actually means. That makes critical analysis harder… good for marketeers, not so much for others.
Simon Willison agrees that “spicy autocomplete” is a good analogy for how LLMs work today, but argues that, at the same time, it's ok to call it artificial intelligence:
We need an agreed term for this class of technology, in order to have conversations about it. I think it’s time to accept that “AI” is good enough, and is already widely understood.
-
Encapsulating components
Nolan Lawson on the problem of component encapsulation:
Overall, what I would love to see is a thorough synopsis of the various groups involved in the web component ecosystem, how the existing solutions have worked in practice, what’s been tried and what hasn’t, and what needs to change to move forward.
-
Algorithmic Thatcherism
Dan McQuillan says AI is algorithmic Thatcherism:
“Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn't provide insights as it's just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences.”
(via Ethan Marcotte)
-
Filler text no one wants to read or write
Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’:
Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing filler text that no one necessarily wants to read or write