Links
-
Statistical illusion
Baldur Bjarnason, author of the excellent “The intelligence illusion”, on business risks of Generative AI (recommended!):
Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.
In his post, Baldur warns us once again not to imagine functionality that doesn't exist; he says it's all a ‘statistical illusion’.
-
AI, accessibility and fiction
This week, once again, someone suggested that “AI” could replace (paraphrasing) normative guidelines (ref: mailing list post of AGWG, the group that produces WCAG).
Eric Eggert explains why this seems unnecessary:
The simple fact is that we already have all the technology to make wide-spread accessibility a reality. Today. We have guidelines that, while not covering 100% of the disability spectrum, cover a lot of the user needs. User needs that fundamentally do not change.
(From: “AI” won’t solve accessibility · Eric Eggert)
I can only disagree with Vanderheiden and Nielsen. They suggest (again, paraphrasing) that we can stop making accessibility requirements, because those somehow “failed” (they didn't; WCAG is successful in many ways) and because generative AI exists.
Of course, I'm happy and cautiously optimistic that there are technological advancements. They can meet user needs well, like how LLMs “effectively made any image on the Web accessible to blind people”, as Léonie Watson describes in her thoughtful comment. If people want to use tools to meet their needs, great.
But it seems utterly irresponsible to have innovation reduce websites' legal obligations to provide basic accessibility. Especially while there are many unresolved problems with LLMs, like hallucinations (that some say are inevitable), environmental cost, bias, copyright and social issues (including the working conditions of people categorising stuff).
-
AI uses too much energy
If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year — the amount consumed by about 1.5 million European Union residents.
(From: AI already uses as much energy as a small country. It’s only the beginning. - Vox)
This is from an interview with Sasha Luccioni, climate researcher at Hugging Face. In it, she explains what the power and water consumption of AI, specifically LLMs, looks like today. It's bad: the amount of energy required is enormous. One example in the post is that a query to an LLM costs almost 10 times as much energy as a query to a regular search engine. That's unsustainable, even if we managed to run it all on 100% renewable energy and water we truly didn't need for anything else.
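A quick back-of-the-envelope check of those numbers (a sketch: the only inputs are the IEA estimate and the 9 billion daily searches quoted above; the ~0.3 Wh figure for a regular search is a commonly cited estimate, not from the article):

```python
# Back-of-the-envelope: what the IEA's 10 TWh/year estimate implies per query,
# assuming 9 billion searches per day (as quoted in the Vox article).

SEARCHES_PER_DAY = 9e9
EXTRA_TWH_PER_YEAR = 10  # IEA estimate if ChatGPT handled every search

searches_per_year = SEARCHES_PER_DAY * 365
extra_wh_per_year = EXTRA_TWH_PER_YEAR * 1e12  # 1 TWh = 10^12 Wh

extra_wh_per_query = extra_wh_per_year / searches_per_year
print(f"{extra_wh_per_query:.1f} Wh extra per query")  # ≈ 3.0 Wh

# A regular search engine query is often estimated at ~0.3 Wh, which lines up
# with the "almost 10 times as much energy" comparison in the interview.
print(f"ratio vs a ~0.3 Wh search: {extra_wh_per_query / 0.3:.0f}x")  # ≈ 10x
```

So the headline figures are internally consistent: roughly 3 Wh of extra energy per query, about an order of magnitude more than a conventional search.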
Once again, this raises the question of whether we really need all the AI applications companies are rushing into their products. It's often completely unnecessary.
It reminds me of eating animals. With all we know about animal welfare and climate impact, we've got to consider if (regularly) eating animals has benefits that outweigh those downsides.
Everyone can choose to do whatever they want with the information available to them, as a person or as a company. But if you're deciding for a company, the impact is larger: it's the decision times the number of users. For me it's increasingly clear that I don't want to use these “AI” solutions in my personal workflows, or suggest we might as well use them when I give talks, let alone push for integrating them into the products I work on.
-
Content that's worth our time
Cory Dransfeldt explains that while we are developing technology that can generate and produce a larger amount of content, the real problem is the quality of that content:
I'm more and more concerned that we're heading to a place that will make it ever more difficult to find anything that's actually worth our time.
(From: We have a content quality problem, not a content quantity problem // Cory Dransfeldt)
-
MEPs adopt new and first AI law
On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.
(…)
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.
(From: Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament)
-
W3C and AI
The W3C established that artificial intelligence is having a “systemic impact on the web” and looked at how standardisation, guidelines and interoperability can help manage that:
Machine Learning models support a new generation of AI systems. These models are often trained on a large amount of Web content, deployed at scale through web interfaces, and can be used to generate plausible content at unprecedented speed and cost.
Given the scope and scale of these intersections, this wave of AI systems is having potential systemic impact on the Web and some of the equilibriums on which its ecosystem had grown.
This document reviews these intersections through their ethical, societal and technical impacts and highlights a number of areas where standardization, guidelines and interoperability could help manage these changes.
(From: AI & the Web: Understanding and managing the impact of Machine Learning models on the Web)
-
Jakob Nielsen's problematic claims about accessibility
Jakob Nielsen wrote a post in which he states “the accessibility movement has been a miserable failure” (his words) and claims that generative “AI” can somehow magically remove the need for accessibility research and testing.
Note, there's currently no evidence that what he proposes is desirable (by users) or possible (with the tech). It is, however, clear that testing with users and meeting WCAG is desirable and possible.
Léonie explains Nielsen needs to think again:
Nielsen thinks accessibility has failed.
Nielsen thinks that generative AI will make my experience better. Nielsen apparently doesn't realise that generative AI barely understands accessibility, never mind how to make accessible experiences for humans.
I think Nielsen needs to think again.
Matt May said we need to talk about Jakob:
This part of the post isn’t so much an argument on the merits of disabled access as it is a projection of himself in the shoes of a blind user, and how utterly miserable he thinks it would be. At no point in any of this—again, classic Jakob Nielsen style—does he cite an actual blind user, much less any blind assistive technology researchers or developers
Per Axbom wrote:
the published post is misleading, self-contradictory and underhanded. I'll walk you through the whole of it and provide my commentary and reasoning.
-
Hallucination is inevitable
Researchers show that hallucination is inevitable:
LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs.
(From: [2401.11817] Hallucination is Inevitable: An Innate Limitation of Large Language Models)
-
Stitching together
Brian Merchant explains in Let's not do this again, please that OpenAI's new image generating thingy is mostly a “promotional narrative” to try and seek more investment money (OpenAI's server spend, the article says, is over 1 million USD per day).
The tech stitches together imagery, rather than creating new imagery, Brian says:
It’s not that Sora is generating new and amazing scenes based on the words you’re typing — it’s automating the act of stitching together renderings of extant video and images.
-
Opportunities for AI in accessibility
Aaron Gustafson:
AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
(From: Opportunities for AI in Accessibility – A List Apart)
In this post, Aaron shares some examples of where ‘AI’ could be used to make content more broadly accessible. This is a controversial subject, because there are many automated ‘solutions’ that don't actually remove barriers, so caution is warranted. Such solutions often focus on people who want to comply with accessibility instead of people with disabilities. And accessibility is about people with disabilities, period. Aaron acknowledges this in the post, and calls for including people with disabilities.
What if, he suggests, users could ask specific questions about complex charts? As Aaron acknowledges, hallucinations exist, but there could still be a use, especially with more diverse training data. Other examples of where ‘AI’ could remove barriers in his post: voice preservation, voice recognition and text transformation.
I'm still sceptical, because I've seen too many claims from automated tools that don't work that well in practice, but I understand it's worth at least exploring different options, and weighing them against the reality of today's web. For the voice and text tools I am actually somewhat optimistic.
-
The LLM search engine
Ben Werdmuller tried Arc's new “AI”-based search and shares his concerns in Stripping the web of its humanity.
Like all these tools, it outputs falsehoods. But that isn't the worst issue, he explains. Without attribution, the tool gives a false sense of objectivity and hides away bias:
If I search for “who should I follow in AI?” I get the usual AI influencers, with no mention of Timnit Gebru or Joy Buolamwini (who would be my first choices). If I ask who to follow in tech, I get Elon Musk. It undoubtedly has a lens through which it sees the world.
It's a particular kind of bubble where Elon Musk is worth following and Timnit Gebru is not suggested (would very much recommend following her instead).
Ben also notes that when bots consume content instead of humans, that threatens the ecosystem of content and writing:
If we strip [payments or donations to writers] away, there’s no writing, information, or expression for the app to summarize.
Who's going to make the input these tools grab in order to generate their output? Google faced various legal issues around displaying excerpts of news outlets on their news website. But they did at least quote and attribute them, while linking to the original. The automated processing basically strips away any opportunity for writers to be paid (or known) for their work.
-
Billions for “AGI” and “metaverse”
“Artificial general intelligence” is a phrase different people assign different meanings to. Few think it is actually within the realm of possibility. Yet, Zuckerberg talked to The Verge to announce Meta's new focus on trying to find out:
While he doesn’t have (…) an exact definition for it, he wants to build it.
(From: Mark Zuckerberg’s new goal is creating artificial general intelligence at The Verge)
In the same interview he also wanted to “unequivocally state” that they're still focused on “the metaverse” and will spend more than 15 billion dollars per year on that. Imagine that sort of budget going to solving some of the world's more clearly defined problems.
-
AI images look cheap and easy
iA write excellent posts that put “AI” into context. In their latest, they compare these images, which ‘often miss realness, depth, and originality’, to stock photos. This comes with a business risk: your content looks cheaper, less valuable:
using AI images makes all of your content feel ordinary. Good images enrich your article, bad images devalue it. Your audience thinks: “If they use AI for images, they probably use it for content, too.”
(From: AI Art is The New Stock Image)
Unless there's already a load of generated images so good that we can't recognise them as generated, I think iA are right: they're super obvious to spot and already look old.
Further down in the post, they predict the lack of creativity in machines may spark more human creativity:
Photography has made us question traditional art. Similarly, AI can make us question empty off-the-shelf communication. Ironically, machine-generated content might catalyze a fresh wave of humane creativity and hand-crafted innovation in verbal and visual storytelling.
I sure hope so. If we are to create things worth having around, we've got to make our choices and intentions matter.
-
More unnecessary AI
One of the things that I keep circling back to when reading about ‘AI’ is the kind of problems people are trying to solve with it, so many of which are completely futile.
Chris Person on the ‘rabbit’:
What’s most annoying about all of this is the sheer repeated imposition of this horseshit. I’m sick of being forced to think about generative AI, large language models and large action models. I’m tired of these adult toddlers who need an AI to tie their shoes and make bad Pixar characters for them. Microsoft and Google keep shoving AI features into their software, and I absolutely should not have to worry about this garbage from Firefox of all places.
(from: Why Would I Buy This Useless, Evil Thing? - Aftermath)
-
Losing the imitation game
Jennifer Moore on what LLMs can and cannot do:
The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.
-
Algorithmic thatcherism
Dan McQuillan says AI is algorithmic Thatcherism:
“Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn't provide insights as it's just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences.”
(via Ethan Marcotte)
-
Filler text no one wants to read or write
Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’:
Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing filler text that no one necessarily wants to read or write.