Links
Posts about what I read elsewhere. Subscribe with RSS
-
Localising icons
Creating effective, trustworthy communication with language communities means doing the work to make sure your content meets them where they are.
A big part of this is learning about, and incorporating, cultural norms in your efforts.
-
Hitting a wall
Just as I argued here in April 2024, LLMs have reached a point of diminishing returns.
The economics are likely to be grim. Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence. As I have always warned, that’s just a fantasy.
(From: CONFIRMED: LLMs have indeed reached a point of diminishing returns)
-
We learn
Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
(From: Can computers think? No. They can’t actually do anything | Aeon Essays)
-
LLMs also hallucinate in medical contexts
This shouldn't surprise anyone, but it turns out LLMs also make up stuff when used by doctors:
[Professors Allison Koenecke and Mona Sloane] determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.
(From: Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News)
The article lists some examples: the tool invented violent rhetoric, racial commentary and medications out of thin air.
-
Behind the facade
If the pursuit of an easier, slower and more pleasant life comes at the expense of others, is staying where you are and suffering the right thing to do? Maybe.
(From: How Digital Nomads Are Exploiting the World - Thrillist)
-
Unlicensed use of creative works
The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.
(From: Statement on AI training)
-
Gladwell
Gladwell writes like someone who doesn't care about being correct because he doesn't care about being correct! His spitballs are truly spitballs, and he doesn't care where they land.
(From: Forget Gladwell)
-
The selected option
Jake Archibald presents some options for how <selectedoption> would work: what if the selected
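For a rough idea of the shape of the proposal (a minimal sketch of my own, based on the customizable select work at the time; element names and details were still in flux), the idea is roughly that a <selectedoption> inside the select's button mirrors the content of whichever <option> is currently selected:

```html
<select>
  <button>
    <!-- sketch: mirrors the content of the currently selected <option> -->
    <selectedoption></selectedoption>
  </button>
  <option><img src="apple.svg" alt="" /> Apple</option>
  <option><img src="pear.svg" alt="" /> Pear</option>
</select>
```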
-
Overrate
Iris van Rooij on Ada Lovelace:
Two centuries later, as we are living through yet another AI summer where AI hype and promises of artificial general intelligence (AGI) abound, Ada’s wise words remain relevant as ever. When writing about the “AI” of her time, called the Analytical Engine, she wrote: “It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of [AI]. In considering any new subject, there is frequently a tendency [...] to overrate what we find to be already interesting or remarkable”.
(From: Editorial AI Inside: Celebrating Ada and Women in AI | Radboud University)
-
Human rights and the next 30 years of the web
Last week I saw Nick Doty from the Center for Democracy and Technology give an excellent short talk at the W3C's 30th birthday event, in which he said:
We need to consider human rights in all the work that we do at W3C in the next 30 years.
(From: Happy 30th Birthday, W3C - Center for Democracy and Technology)
He just published the full text.