Links
Posts about what I read elsewhere. Subscribe with RSS
-
The human input is more interesting
Computer scientist Clayton Ramsey shares reasons why people use generative AI to write: they don't care enough, they believe LLM results are better, or the writing was never meant for human consumption anyway.
Focusing mostly on the first two, he concludes he would rather read the prompt than the result:
The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother?
(From: I'd rather read the prompt)
This was one of the conclusions in my talk Creativity cannot be computed, too. The point of a lot of art is that some human wanted, intended, decided to do something… not so much the artifact they created.
-
World wide web fonts
on the web we could simply start our font stacks with Verdana, pick a couple of reasonable fallbacks, and get IKEA branding effectively for free. Everyone wins.
Or at least that was the plan, but there turned out to be a problem that developed over time.
(From: IKEA’s web fonts - Robin Whittleton)
IKEA happily used Verdana on the web, until it expanded its business across Asia and the Middle East and found that Verdana's language support was lacking.
In this post, Robin Whittleton explains how this situation led to Noto IKEA.
-
Standardising AI crawler consent
The IETF is working on building blocks to let websites declare whether crawlers can take their content for training:
Right now, AI vendors use a confusing array of non-standard signals in the robots.txt file (defined by RFC 9309) and elsewhere to guide their crawling and training decisions. As a result, authors and publishers lose confidence that their preferences will be adhered to, and resort to measures like blocking their IP addresses.
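To make that concrete: each vendor's crawler has its own user-agent token that a site has to list in robots.txt by hand, and tools have to check those tokens one by one. A minimal sketch of what that checking looks like, using Python's standard urllib.robotparser; the tokens GPTBot and CCBot are examples of such vendor-specific names, and example.com is just a placeholder:

```python
# Sketch: check which known AI crawler tokens a site's robots.txt allows.
# GPTBot and CCBot are illustrative vendor-specific user agents; there is
# currently no single standard signal that covers "AI training" in general.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # fetch and parse the site's robots.txt

for agent in ("GPTBot", "CCBot", "*"):
    allowed = robots.can_fetch(agent, "https://example.com/some-article")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```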
-
Europe and US tech
Fascinating read on the weakening position of US big tech firms in Europe:
Technology companies such as Alphabet, Meta, and OpenAI need to wake up to an unpleasant reality. By getting close to U.S. President Donald Trump, they risk losing access to one of their biggest markets: Europe.
(From: The Brewing Transatlantic Tech War | Foreign Affairs)
-
Display of power
Tante thinks that OpenAI didn't just steal Studio Ghibli's art to show they're still relevant; they did it to move the goalposts and stretch what people will accept as behaviour:
It’s not that they just picked something cute and accidentally the co-founder of that studio hates their whole approach from the bottom of its heart. OpenAI picked Studio Ghibli because Miyazaki hates their approach.
It is a display of power: You as an artist, an animator, an illustrator, a writer, any creative person are powerless. We will take what we want and do what we want. Because we can.
(From: Vulgar Display of Power)
-
Careless People: courageous but incomplete?
Sabhanaz Rashid Diya, who was Meta's head of public policy for Bangladesh, reviewed Sarah Wynn-Williams' memoir Careless People, which I'm currently reading.
She says it is incomplete:
the author glosses over her own indifference to repeated warnings from policymakers, civil society, and internal teams outside the U.S. that ultimately led to serious harm to communities.
She explains how the people at headquarters were detached:
Every visit to a country or a high-profile meeting at the World Economic Forum in Davos or the U.N. was the product of weeks of intense coordination across regional policy, legal, security, business, and operations teams. When they left after a few days, teams on the ground like my own had to spend months cleaning up the mess they left behind. That included frequently expending local policy and diplomatic relationships built over a decade, and chasing promises made to policymakers and civil society for more resources that rarely got approved.
She does call the book brave and interesting:
Despite telling an incomplete story, Careless People is a book that took enormous courage to write. This is Wynn-Williams’ story to tell, and it is an important one. It goes to show that we need many stories — especially from those who still can’t be heard — if we are to meaningfully piece together the complex puzzle of one of the world’s most powerful technology companies.
-
Pick your battles, green software edition
Thomas Broyer on what battles are most worth picking when you want to make software more sustainable:
So, what have we learned so far?
- It's important that end users keep their devices longer,
- we can't do much about networks,
- the location (geographic region and datacenter) of servers matter a lot, more so than how and how much we use them.
(From: Climate-friendly software: don't fight the wrong battle)
-
Sysadmins and LLM crawlers
The crawlers that collect LLM training data cost sysadmins a lot of time, writes Drew DeVault:
instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale
(From: Please stop externalizing your costs directly into my face)
-
Tech bros misunderstand stuff
Aaron Ross Powell explains that he isn't an AI skeptic and that he finds LLMs “powerful tools with real world use cases”, but argues that the idea that AGI is near, or that art can be made with these tools, comes down to a misunderstanding on the part of tech bros:
What’s going on is a confluence of two features of Silicon Valley tech bro culture. First, Silicon Valley tech bros believe that they aren’t just skilled at computer programming, but that they are geniuses to a degree that cuts across all disciplines and realms of accomplishment. (…) The second feature is a basic lack of taste.
-
More ethics of AI
Richard wrote about a number of different aspects of AI, including salespeople complaining they don't sell, erosion of copyright, design tools and mediocrity, AI as a trick to sack humans, and bias:
I read a couple of posts about AI recently, which seemed to hold opposing ideas, but I agreed with them both to some extent. (It’s a radical idea, I know).
(From: Another uncalled-for blog post about the ethics of using AI | Clagnut by Richard Rutter)
Good post; I am glad practitioners continue to share their thoughts beyond the hype.