Philosophically bullshit
LLMs don't hallucinate or lie, they ‘bullshit’ in the sense the late philosopher Harry Frankfurt gave the term, explain Glasgow researchers in their recent paper:
The problem here isn't that large language models hallucinate, lie, or misrepresent the world in some way. It's that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.
(From: ChatGPT is bullshit)
Building on Frankfurt, the paper draws an interesting distinction between ‘soft bullshit’ and ‘hard bullshit’, arguing that ChatGPT is definitely the former and in some cases arguably the latter.
It's crucial to replace words like ‘hallucinate’ or ‘lie’ with a word like ‘bullshit’, not to be witty, but because such framing shapes how investors, policymakers and the general public think of these tools, which in turn impacts the decisions they make about using them.