Links
All links in: ai
-
Laundry and dishes
Writer Joanna Maciejewska on Threads (29 March 2024):
You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
(From: Threads)
-
Hitting a wall
Just as I argued here in April 2024, LLMs have reached a point of diminishing returns.
(…)
The economics are likely to be grim. Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence. As I have always warned, that’s just a fantasy.
(From: CONFIRMED: LLMs have indeed reached a point of diminishing returns)
-
We learn
Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
(From: Can computers think? No. They can’t actually do anything | Aeon Essays)
-
LLMs also hallucinate in medical contexts
This shouldn't surprise anyone, but it turns out LLMs also make up stuff when used by doctors:
[Professors Allison Koenecke and Mona Sloane] determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.
(From: Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News)
The article lists some examples: the tools made up violence, racial details and medication out of thin air.
-
Unlicensed use of creative works
The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted
(From: Statement on AI training)
-
Overrate
Iris van Rooij on Ada Lovelace:
Two centuries later, as we are living through yet another AI summer where AI hype and promises of artificial general intelligence (AGI) abound, Ada’s wise words remain relevant as ever. When writing about the “AI” of her time, called the Analytical Engine, she wrote: “It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of [AI]. In considering any new subject, there is frequently a tendency [...] to overrate what we find to be already interesting or remarkable”.
(From: Editorial AI Inside: Celebrating Ada and Women in AI | Radboud University)
-
Who requested this feature?
This is creepy, dull and useless. I wish they didn't:
If you think avoiding AI-generated images is difficult as it is, Facebook and Instagram are now going to put them directly into your feeds. At the Meta Connect event on Wednesday, the company announced that it’s testing a new feature that creates AI-generated content for you “based on your interests or current trends” — including some that incorporate your face.
(From: Meta’s going to put AI-generated images in your Facebook and Instagram feeds - The Verge)
-
The temporality of making
But in eliminating the effort, in refusing the temporality of making, the outcome of an “AI”-driven creative process is a phantasm, an unsubstantiality, something that passes through the world without leaving any trace. A root that twists back upon itself and tries to suck the water from its own desiccated veins.
(From: Coming home | A Working Library)
-
No personal data harvesting in Europe
Glad the AI Act seems to effectively protect my rights:
Anyone living in the EU, EEA or Switzerland will not have their data harvested. LinkedIn has not yet confirmed why it has spared the citizens of Europe, but it may be due to rules introduced under the EU AI Act.
(From: LinkedIn trains GenAI models on personal data by default)
-
Knowledge and context
Artificial intelligence companies deeply underestimate how perfect the things around us are, and how deeply we base our understanding and acceptance of the world on knowledge and context. People generally have four fingers and a thumb on each hand, hammers have a handle made of wood and a head made of metal, and monkeys have two legs and two arms. The text on the sign of a store generally has a name and a series of words that describe it, or perhaps its address and phone number.
These are simple concepts that we learn from the people and places we see as we grow up, and what's very, very important to remember is that these are not concepts that artificial intelligence models are aware of.
(From: Subprime Intelligence)
-
Real things by real people
yup:
I want real things by real people. I don’t want more things averaged out by a language model that can only make likely sentences. I don’t want more creepy images directly sourced from thousands of copyrighted works. I want you to put yourself on the page.
(From: A short note on AI – Me, Robin)
-
Blandness vs absurdity
Have been nodding along to this post, which touches on a lot of the themes I plan to bring to Beyond Tellerrand in November:
as AI gets better at mimicking human communication, the pressure on human creators to be weirder, more original, and more authentically human will only increase.
(From: @Westenberg | Shitposting Our Way Through the Singularity)
-
Dimensions to meanings
I think there are multiple implicit dimensions to the meanings of behaviour words. That compounds questions about where to draw boundaries, and it can lead to discussion at cross purposes and confusion.
(From: Do LLMs REALLY reason, understand, think, summarise...? — UlrikeHahn)
-
What users think vs what corporations think
The corporate branding, the new “AI-powered developer platform” slogan, makes it clear that what I think of as “GitHub”—the traditional website, what are to me the core features—simply isn’t Microsoft’s priority at this point in time.
(From: "GitHub" Is Starting to Feel Like Legacy Software - The Future Is Now)
-
The long-closed site that got revitalised as a zombie AI version
TUAW (“The Unofficial Apple Weblog”) was shut down by AOL in 2015, but this past year, a new owner scooped up the domain and began posting articles under the bylines of former writers who haven’t worked there for over a decade.
(From: Early Apple tech bloggers are shocked to find their name and work have been AI-zombified - The Verge)
The content on the relaunched site was LLM-generated, including the author pictures, but the bylines carried the real names of people who used to work at the site. Very uncanny.
After one of the former TUAW writers posted about what happened and threatened legal action, the names have now been changed.
-
The problem is with energy
The problem isn’t that AI is using "too much" power from our current grid; it’s that our current grid still overwhelmingly runs on fossil fuels in the first place.
-
Mediocre and derivative input
Scott Riley:
Figma’s AI shit will suffer from the same problems every other company’s GenAI shit suffers from: the average input to its dataset is, almost by definition, mediocre and derivative. Especially when you consider the state we’re in by and large as an industry.
(From: On AI and the commoditisation of design – Scott Riley)
-
Philosophically bullshit
LLMs don't hallucinate or lie, they ‘bullshit’ in the sense the late philosopher Harry Frankfurt gave the word, explain Glasgow researchers in their recent paper:
The problem here isn't that large language models hallucinate, lie, or misrepresent the world in some way. It's that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.
(From: ChatGPT is bullshit)
The paper explains Frankfurt's interesting distinction between ‘soft bullshit’ and ‘hard bullshit’, reasoning that ChatGPT is definitely the former and in some cases arguably the latter.
It's crucial to replace words like ‘hallucinate’ or ‘lie’ with a word like ‘bullshit’, not to try and be witty, but because these words shape how investors, policymakers and the general public think of these tools. Which in turn impacts the decisions they make about using them.
-
Synergy Greg
This post is a bit violent at times, but it makes some very, very good points from someone who actually knows the technology and sees through the “AI” hype:
You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now.
(From: I Will Fucking Piledrive You If You Mention AI Again — Ludicity)
-
The opposite of human creativity
Apple's ethos has always been about building tools to empower users to make art, to create, to be original. I don't know what it is, but it sure as hell isn't human creativity.
-
Friends and AI
Neven Mrgan received an email:
my friend had a question to ask me, and the email asked it over the course of a few paragraphs. It then disclosed that, oh by the way, I used AI to write this
(From: How it feels to get an AI email from a friend)
In his post he talks about what it feels like to be on the receiving end of AI-generated content, in this case in a context where you'd hope these tools wouldn't be used: an email from a friend. Not for grammar checks, but for the actual message. It felt off:
It felt like getting a birthday card with only the prewritten message inside, and no added well-wishes from the wisher’s own pen.
-
Alt text generation in Firefox
Firefox experiments with automatic text alternative generation, using a local and therefore privacy-preserving (?) machine learning model:
Until recently it’s not been feasible for the browser to infer reasonably high quality alt text for images, without sending potentially sensitive data to a remote server. However, latest developments in AI have enabled this type of image analysis to happen efficiently, even on a CPU.
We are adding a feature within the PDF editor in Firefox Nightly to validate this approach. As we develop it further and learn from the deployment, our goal is to offer it for users who’d like to use it when browsing to help them better understand images which would otherwise be inaccessible.
This is good to see, as so many websites lack text alternatives, and this may be the first feature of its kind made by a company that didn't take part in large-scale user privacy violations.
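To make the approach a bit more concrete, here is a minimal sketch of local, on-device image captioning using the open-source Hugging Face transformers library and the BLIP captioning model. It only illustrates the general technique the quote describes (image analysis without sending data to a remote server); it is not Firefox's actual implementation, and the file name is made up:

```python
# Minimal sketch of local image captioning (not Firefox's actual code).
# The model runs on the CPU and the image never leaves the machine.
from transformers import pipeline

captioner = pipeline(
    "image-to-text",
    model="Salesforce/blip-image-captioning-base",
    device=-1,  # -1 = run on the CPU
)

# "figure.png" is a hypothetical local image file.
result = captioner("figure.png")
print(result[0]["generated_text"])  # a one-sentence draft alt text to review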
-
Legitimate
Jeremy noticed that an Instagram notification said:
we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI
(From: Adactio: Journal—InstAI)
That's not, by any reasonable reading, what the word legitimate means, is it?
It's unfortunate that many interesting people and businesses can mostly or only be followed on Instagram; that's pretty much why I still keep an account. This feels like the social media equivalent of being held hostage.
-
For idea guys
Rachel Smith on makers vs idea guys:
Generative AI is like the ultimate idea guy’s idea! Imagine… if all they needed to create a business, software or art was their great idea, and a computer. No need to engage (or pay) any of those annoying makers who keep talking about limitations, scope, standards, artistic integrity etc. etc.
-
Statistical illusion
Baldur Bjarnason, author of the excellent “The intelligence illusion”, on business risks of Generative AI (recommended!):
Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.
In his post, Baldur warns us once again not to imagine functionality that doesn't exist; he says it's all a ‘statistical illusion’.
-
AI, accessibility and fiction
This week, once again, someone suggested (paraphrasing) that “AI” could replace normative guidelines (ref: a mailing list post of AGWG, the group that produces WCAG).
Eric Eggert explains why this seems unnecessary:
The simple fact is that we already have all the technology to make wide-spread accessibility a reality. Today. We have guidelines that, while not covering 100% of the disability spectrum, cover a lot of the user needs. User needs that fundamentally do not change.
(From: “AI” won’t solve accessibility · Eric Eggert)
I cannot but disagree with Vanderheiden and Nielsen. They suggest (again, paraphrasing) that we can stop making accessibility requirements, because those somehow “failed” (they didn't; WCAG is successful in many ways) and because generative AI exists.
Of course, I'm happy and cautiously optimistic that there are technological advancements. They can meet user needs well, like how LLMs “effectively made any image on the Web accessible to blind people”, as Léonie Watson describes in her thoughtful comment. If people want to use tools to meet their needs, great.
But it seems utterly irresponsible to have innovation reduce websites' legal obligations to provide basic accessibility. Especially while there are many unresolved problems with LLMs, like hallucinations (that some say are inevitable), environmental cost, bias, copyright and social issues (including the working conditions of people categorising stuff).
-
AI uses too much energy
If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year — the amount consumed by about 1.5 million European Union residents.
(From: AI already uses as much energy as a small country. It’s only the beginning. - Vox)
This is from an interview with Sasha Luccioni, climate researcher at Hugging Face. In it, she explains what the power and water consumption of AI, specifically LLMs, looks like today. It's bad: the amount of energy required is enormous. One example in the post is that a query to an LLM costs almost 10 times as much energy as a query to a regular search engine. That's unsustainable, even if we managed to run it all on 100% renewable energy and on water we didn't really need for anything else.
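As a rough back-of-the-envelope check on those figures, here's a small Python sketch. It derives the extra energy per query implied by the IEA estimate quoted above (10 TWh per year spread over 9 billion searches a day) and compares it to an assumed ~0.3 Wh for a regular search query; that baseline is a commonly cited ballpark, not a figure from the article itself:

```python
# Back-of-the-envelope estimate based on the IEA figure quoted above.
# Assumption: ~0.3 Wh per regular search query (a commonly cited ballpark,
# not a figure taken from the article itself).

searches_per_day = 9e9        # searches per day, from the quote
extra_twh_per_year = 10       # extra TWh per year, the IEA estimate

extra_wh_per_year = extra_twh_per_year * 1e12   # TWh -> Wh
queries_per_year = searches_per_day * 365

extra_wh_per_query = extra_wh_per_year / queries_per_year
print(f"Extra energy per LLM-assisted query: {extra_wh_per_query:.1f} Wh")

baseline_wh_per_query = 0.3   # assumed energy of a regular search query
ratio = extra_wh_per_query / baseline_wh_per_query
print(f"Roughly {ratio:.0f}x the assumed regular search query")
```

That works out to roughly 3 Wh of extra energy per query, about ten times the assumed baseline, which lines up with the ten-fold comparison mentioned in the interview.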
Once again, this raises the question of whether we really need all the AI applications companies are rushing into their products. They're often completely unnecessary.
It reminds me of eating animals. With all we know about animal welfare and climate impact, we've got to consider if (regularly) eating animals has benefits that outweigh those downsides.
Everyone can choose to do whatever they want with the information available to them, as a person or as a company. But if you're deciding for a company, the impact is larger: it's the decision times the number of users. For me it's increasingly clear that I don't want to use these “AI” solutions in my personal workflows, suggest them when I give talks, let alone push for integrating them into the products I work on.
-
Content that's worth our time
Cory Dransfeldt explains that while we are developing technology that can generate ever larger amounts of content, the real problem is the quality of that content:
I'm more and more concerned that we're heading to a place that will make it ever more difficult to find anything that's actually worth our time.
(From: We have a content quality problem, not a content quantity problem // Cory Dransfeldt)
-
MEPs adopt new and first AI law
On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.
(…)
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.
(From: Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament)
-
W3C and AI
The W3C established that artificial intelligence is having a “systemic impact on the web” and looked at how standardisation, guidelines and interoperability can help manage that:
Machine Learning models support a new generation of AI systems. These models are often trained on a large amount of Web content, deployed at scale through web interfaces, and can be used to generate plausible content at unprecedented speed and cost.
Given the scope and scale of these intersections, this wave of AI systems is having potential systemic impact on the Web and some of the equilibriums on which its ecosystem had grown.
This document reviews these intersections through their ethical, societal and technical impacts and highlights a number of areas where standardization, guidelines and interoperability could help manage these changes
(From: AI & the Web: Understanding and managing the impact of Machine Learning models on the Web)
-
Jakob Nielsen's problematic claims about accessibility
Jakob Nielsen wrote a post in which he states “the accessibility movement has been a miserable failure” (his words) and claims that generative “AI” can somehow magically remove the need for accessibility research and testing.
Note, there's currently no evidence that what he proposes is desirable (by users) or possible (with the tech). It is, however, clear that testing with users and meeting WCAG is desirable and possible.
Léonie explains Nielsen needs to think again:
Nielsen thinks accessibility has failed.
Nielsen thinks that generative AI will make my experience better. Nielsen apparently doesn't realise that generative AI barely understands accessibility, never mind how to make accessible experiences for humans.
I think Nielsen needs to think again.
Matt May said we need to talk about Jakob:
This part of the post isn’t so much an argument on the merits of disabled access as it is a projection of himself in the shoes of a blind user, and how utterly miserable he thinks it would be. At no point in any of this—again, classic Jakob Nielsen style—does he cite an actual blind user, much less any blind assistive technology researchers or developers
Per Axbom wrote:
the published post is misleading, self-contradictory and underhanded. I'll walk you through the whole of it and provide my commentary and reasoning.
-
Hallucination is inevitable
Researchers show that hallucination is inevitable:
LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs.
(From: [2401.11817] Hallucination is Inevitable: An Innate Limitation of Large Language Models)
-
Stitching together
Brian Merchant explains in Let's not do this again, please that OpenAI's new video-generating thingy is mostly a “promotional narrative” meant to attract more investment money (OpenAI's server spend, the article says, is over 1 million USD per day).
The tech stitches together existing imagery rather than creating new imagery, Brian says:
It’s not that Sora is generating new and amazing scenes based on the words you’re typing — it’s automating the act of stitching together renderings of extant video and images.
-
Opportunities for AI in accessibility
Aaron Gustafson:
AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
(From: Opportunities for AI in Accessibility – A List Apart)
In this post, Aaron shares some examples of where ‘AI’ could be used to make content more broadly accessible. This is a controversial subject, because there are many automated ‘solutions’ that don't actually remove barriers, so caution is warranted. Such solutions often focus on people who want to comply with accessibility instead of people with disabilities. And accessibility is about people with disabilities, period. Aaron acknowledges this in the post, and calls for including people with disabilities.
What if, he suggests, users could ask specific questions about complex charts? As Aaron acknowledges, hallucinations exist, but there could still be a use, especially with more diverse training data. Other examples of where ‘AI’ could remove barriers in his post: voice preservation, voice recognition and text transformation.
I'm still sceptical, because I've seen too many claims from automated tools that don't work that well in practice, but I understand it's worth at least exploring different options and weighing them against the reality of today's web. For the voice and text tools I am actually somewhat optimistic.
-
The LLM search engine
Ben Werdmuller tried Arc's new “AI”-based search and shares his concerns in Stripping the web of its humanity.
Like all these tools, it outputs falsehoods. But that isn't the worst issue, he explains. Without attribution, the tool gives a false sense of objectivity and hides away bias:
If I search for “who should I follow in AI?” I get the usual AI influencers, with no mention of Timnit Gebru or Joy Buolamwini (who would be my first choices). If I ask who to follow in tech, I get Elon Musk. It undoubtedly has a lens through which it sees the world.
It's a particular kind of bubble where Elon Musk is worth following and Timnit Gebru is not suggested (would very much recommend following her instead).
Ben also notes that when bots consume content instead of humans, that threatens the ecosystem of content and writing:
If we strip [payments or donations to writers] away, there’s no writing, information, or expression for the app to summarize.
Who's going to make the input these tools grab in order to generate their output? Google faced various legal issues around displaying excerpts of news outlets on their news website. But they did at least quote and attribute them, while linking to the original. The automated processing basically strips away any opportunity for writers to be paid (or known) for their work.
-
Billions for “AGI” and “metaverse”
“Artificial general intelligence” is a phrase different people assign different meanings to. Few think it is actually within the realm of possibility. Yet, Zuckerberg talked to The Verge to announce Meta's new focus on trying to build it:
While he doesn’t have (…) an exact definition for it, he wants to build it.
(From: Mark Zuckerberg’s new goal is creating artificial general intelligence at The Verge)
In the same interview he also wanted to “unequivocally state” that they're still focused on “the metaverse” and will spend more than 15 billion dollars per year on that. Imagine that sort of budget going to solving some of the world's more clearly defined problems.
-
AI images look cheap and easy
iA write excellent posts that put “AI” into context. In their latest, they compare these images, which ‘often miss realness, depth, and originality’, to stock photos. This comes with a business risk: your content looks cheaper and of less value:
using AI images makes all of your content feel ordinary. Good images enrich your article, bad images devalue it. Your audience thinks: “If they use AI for images, they probably use it for content, too.”
(From: AI Art is The New Stock Image)
Unless there's a load of generated images so good that we can't recognise them (and we simply don't realise it), I think iA are right: they're super obvious to spot and already look old.
Further down in the post, they predict the lack of creativity in machines may spark more human creativity:
Photography has made us question traditional art. Similarly, AI can make us question empty off-the-shelf communication. Ironically, machine-generated content might catalyze a fresh wave of humane creativity and hand-crafted innovation in verbal and visual storytelling.
I sure hope so. If we are to create things worth having around, we've got to make our choices and intentions matter.
-
More unnecessary AI
One of the things that I keep circling back to when reading about ‘AI’ is the kind of problems people are trying to solve with it, so many of which are completely futile.
Chris Person on the ‘rabbit’:
What’s most annoying about all of this is the sheer repeated imposition of this horseshit. I’m sick of being forced to think about generative AI, large language models and large action models. I’m tired of these adult toddlers who need an AI to tie their shoes and make bad Pixar characters for them. Microsoft and Google keep shoving AI features into their software, and I absolutely should not have to worry about this garbage from Firefox of all places.
(From: Why Would I Buy This Useless, Evil Thing? - Aftermath)
-
Losing the imitation game
Jennifer Moore on what LLMs can and cannot do:
The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.
-
Algorithmic thatcherism
Dan McQuillan says AI is algorithmic Thatcherism:
“Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn't provide insights as it's just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences.”
(via Ethan Marcotte)
-
Filler text no one wants to read or write
Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’:
Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing filler text that no one necessarily wants to read or write