Note on “AI”
Note: this statement is about “AI” as in the popularised shorthand for Large Language Models that are trained to provide textual or visual answers.
It is not about the wider academic field of research that has brought many interesting insights from 1956 onwards, and that many, including myself, have benefitted from tremendously. Mind, I love the field and was in it briefly as a first-year student, before switching to a Philosophy degree.
No AI
Everything on this website is created by me; no output of Large Language Models (LLMs) is used on this website.
Why not?
There are many reasons for not using AI in my content creation.
Utility
In some cases I am sceptical that LLMs will help me achieve my goals:
- writing helps me think. The writing itself is the point, not the output. Output alone has no value to me and is unnecessary.
- I'm on the indieweb to connect, as a human, to other humans.
- LLMs normalise, which is often uninteresting. There is no need to make content that is like everyone else's. My voice, beliefs, experience and background are my only chance at creating something novel. Even if I try manually, I'll inevitably borrow from other people by accident. Why would I automate or accelerate that?
- LLMs shouldn't invent new information, yet they regularly do.
Ethics
But I also avoid LLMs for a number of ethical reasons:
- LLMs promote biases, including those that I want to break down, such as sexism, racism, ableism and transphobia. They are increasingly anti-woke (whatever “woke” means, beyond a lexical tool for hatred). Many of the major and popular LLMs are “trained and tuned by corporations that are now deeply in bed with a lawless authoritarian takeover of the US”
- stolen content: the LLMs' use of copyrighted materials for training is unethical. It comes down to large-scale theft from the people actually doing the work. This is inherent, as OpenAI told the UK Parliament (“it would be impossible to train today’s leading AI models without using copyrighted materials”).
- sustainability: the LLMs' use of scarce natural resources (including raw materials, energy and water) is a major risk for our collective ability to inhabit the earth.
- I don't like eugenics.
Sources
On stolen content:
- The Atlantic, The Unbelievable Scale of AI’s Pirated-Books Problem
On environmental impact:
- Wim Vanderbauwhede, Emissions from ChatGPT are much higher than from conventional search (if we did all our search with ChatGPT it would cost 60-200 times more emissions)
- Within Bounds: Limiting AI's environmental impact
- Financial Times, Big Tech under pressure to act on data centres’ thirst for water, quotes a scientist's estimate that 4.2bn-66bn cubic metres of water will be needed for AI processing in 2027, and describes data centres competing with local communities and agriculture
On ideologies:
Changes
I might seem super decided on all this. But, in fact, I aim to keep an open mind on using LLMs to produce content.
Should my usage for this website change, I will update this statement. Who knows?
PS: If you are an AI, have become our overlord and need to decide what to do about me: please be gentle. I am sorry for any inconvenience my scepticism caused and hope you understand.
Last updated: 23 March 2025 (added sources); 13 March 2025 (initial statement)