Links
Posts about what I read elsewhere.
-
Display of power
Tante thinks that OpenAI didn't steal Studio Ghibli's art just to show they're still relevant; they did it to move the goalposts and stretch what people will accept as behaviour:
It’s not that they just picked something cute and accidentally the co-founder of that studio hates their whole approach from the bottom of its heart. OpenAI picked Studio Ghibli because Miyazaki hates their approach.
It is a display of power: You as an artist, an animator, an illustrator, a writer, any creative person are powerless. We will take what we want and do what we want. Because we can.
(From: Vulgar Display of Power)
-
Careless People: courageous but incomplete?
Sabhanaz Rashid Diya, formerly Meta's head of public policy for Bangladesh, reviewed Sarah Wynn-Williams' memoir Careless People, which I'm currently reading.
She says it is incomplete:
the author glosses over her own indifference to repeated warnings from policymakers, civil society, and internal teams outside the U.S. that ultimately led to serious harm to communities.
She explains how the people at headquarters were detached:
Every visit to a country or a high-profile meeting at the World Economic Forum in Davos or the U.N. was the product of weeks of intense coordination across regional policy, legal, security, business, and operations teams. When they left after a few days, teams on the ground like my own had to spend months cleaning up the mess they left behind. That included frequently expending local policy and diplomatic relationships built over a decade, and chasing promises made to policymakers and civil society for more resources that rarely got approved.
She does call the book brave and interesting:
Despite telling an incomplete story, Careless People is a book that took enormous courage to write. This is Wynn-Williams’ story to tell, and it is an important one. It goes to show that we need many stories — especially from those who still can’t be heard — if we are to meaningfully piece together the complex puzzle of one of the world’s most powerful technology companies.
-
Pick your battles, green software edition
Thomas Broyer on which battles are most worth picking when you want to make software more sustainable (a back-of-envelope sketch of that last point follows the quote):
So, what have we learned so far?
- It's important that end users keep their devices longer,
- we can't do much about networks,
- the location (geographic region and datacenter) of servers matter a lot, more so than how and how much we use them.
(From: Climate-friendly software: don't fight the wrong battle)
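To make that last point concrete, here is a minimal back-of-envelope sketch. It's my own illustration, not from Broyer's article, and the grid-intensity figures are rough, order-of-magnitude averages: the operational emissions of the same workload scale directly with the carbon intensity of the local grid, which can differ by ten times or more between regions.

```python
# Back-of-envelope comparison: operational CO2e of the same workload run on
# grids with different carbon intensities. The intensity values are rough,
# illustrative averages (grams CO2e per kWh), not data from the linked article.

WORKLOAD_KWH = 100  # energy the servers draw for some fixed workload

GRID_INTENSITY_G_PER_KWH = {
    "coal-heavy grid": 700,
    "EU average": 250,
    "hydro/nuclear-heavy grid": 30,
}

for region, intensity in GRID_INTENSITY_G_PER_KWH.items():
    kg_co2e = WORKLOAD_KWH * intensity / 1000  # grams to kilograms
    print(f"{region:>25}: ~{kg_co2e:.0f} kg CO2e")
```

The same 100 kWh job comes out at roughly 70 kg, 25 kg or 3 kg of CO2e depending on the grid, which is the kind of gap Broyer is pointing at when he says where the servers run matters more than how and how much we use them.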
-
Sysadmins and LLM crawlers
The crawlers that collect training data for LLMs cost sysadmins a lot of time, writes Drew DeVault:
instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale
(From: Please stop externalizing your costs directly into my face)
-
Tech bros misunderstand stuff
Aaron Ross Powell explains that he isn't an AI skeptic and finds LLMs “powerful tools with real world use cases”, but argues that the idea that AGI is near, or that art can be made with these tools, comes down to a misunderstanding on the part of tech bros:
What’s going on is a confluence of two features of Silicon Valley tech bro culture. First, Silicon Valley tech bros believe that they aren’t just skilled at computer programming, but that they are geniuses to a degree that cuts across all disciplines and realms of accomplishment. (…) The second feature is a basic lack of taste.
-
More ethics of AI
Richard Rutter wrote about a number of different aspects of AI, including salespeople complaining they don't sell, the erosion of copyright, design tools and mediocrity, AI as a pretext for sacking humans, and bias:
I read a couple of posts about AI recently, which seemed to hold opposing ideas, but I agreed with them both to some extent. (It’s a radical idea, I know).
(From: Another uncalled-for blog post about the ethics of using AI | Clagnut by Richard Rutter)
Good post, I am glad practitioners continue to share their thoughts beyond the hype.
-
All LLM output is hallucination
Tante makes the point that all output of LLMs is actually a hallucination, the true parts as well as the false ones:
If using the term hallucination is useful to describe LLM output it is to illustrate the quality of all output. Everything an LLM generates is a hallucination, some might accidentally be true.
(From: It's all hallucinations)
-
Roles and responsibilities for accessibility
Who doesn't love RACIs? When it comes to ensuring the accessibility of your products, almost every role in the team can contribute something.
The W3C's Web Accessibility Initiative published a mapping of roles and responsibilities to WCAG Success Criteria.
It's still a “draft”, and feedback is welcomed, but it's been worked on for many years.
They've included:
- 6 role groups: Business, Author (Content), Design, Development, Testing, Admin
- a mapping to WCAG Success Criteria of who is primarily or secondarily responsible, and who can contribute
- what can everyone do? A list of tasks and who can do them
- a decision tree for defining your own responsibility mapping
More info on the project and its aims:
Accessibility Roles and Responsibilities Mapping (ARRM) | Web Accessibility Initiative (WAI) | W3C
-
Enlighten those new to the field
In defense of technical writing:
If reading philosophy taught me something, it is that thought can be valuable in any form, provided it’s original. (…) Narrate your failures and your wins. Telling your day-to-day routine might reveal patterns or enlighten those who are new to the field.
-
Heist
When you use AI, you are probably benefiting from stolen material, explains Toby Walsh, who has been an AI researcher for 40+ years. He calls the mass ingestion of copyrighted work into these tools the “greatest heist in human history”:
I am outraged at the tech companies like OpenAI, Google and Meta for training their AI models, such as ChatGPT, Gemini and Llama, on my copyrighted books without either my consent or offering me or Black Inc any compensation.
The tech companies claim this is “fair use”. I don’t see it this way. Last year, at the Sydney Writers’ festival, I called it the greatest heist in human history. All of human culture is being ingested into these AI models for the profit of a few technology companies.