“AI” content and user centered design

Large language models (LLMs), like ChatGPT and Bard, can be used to generate sentences based on statistical likelihood. While the results of these tools can look very impressive (they're designed to), I can't think of cases where the use of LLM-generated content actually improves an end user's experience. Not all of the time, but often, LLM output is nonsensical, false, unclear and boring. Hence, when organisations force LLM output on users instead of paying people to create their content, they don't center users.

User centered design means we make the user our main concern when we design. When I recently told a friend about this concept, explaining my new job is at a government department focused on centering users, they laughed in surprise. “This is a thing?”, they asked. “What else would you make the main concern when you design?” It made little sense to them that users had to be specifically centered.

If you work in tech, you've probably seen projects center things other than users: business needs, the profit margin, search engines, that one designer's personal preference, the desire to look as cool as a tech brand you love… and so on. Sadly, projects center these instead of users all the time. Most arguments I've heard for using LLMs in the content production process quoted at least one of these non-user-centric reasons.

Organisations are starting to use, or at least experiment with, LLMs to create content for web projects. The hype is real, and I worry that, by increasing nonsense, falsehoods and boredom, LLM-generated content is going to worsen user experiences across the board. Why force this content on users? And the impact of LLM-generated content goes beyond individual websites and user experiences: it's also going to pollute the web as a whole and make search worse (as well as future LLMs, which will end up training on their own output).

None of this is new; we've had robot-like interactions long before LLMs. When the tax office sends a letter that means you need to pay or receive money, that information is often buried in civil servant speak. When Silicon Valley startup founders announce their company was acquired, they will mention their “incredible journey”. When lawyers describe employment, or customer service phone lines pronounce “your call is important to us” (a great read, BTW)… this is all to say that, even without LLMs, we're used to people who sound more robotic and less human. They speak a lingo.

Lingo gets in the way of clarity. Not only does it feel impersonal and boring, it is also made up, however brilliantly our prompts are ‘engineered’. Yes, even if it's sourced—or stolen, in many cases—from original content. That makes it like the lingo humans produce, but much worse. Sure, LLM-generated content could give users clarity, but only in a way that helps if the user already knows a lot about the thing being clarified (so that they can spot falsehoods). This is the crux, and why the practical applicability of LLMs isn't nearly as wide as their makers claim.

I can see how a doctor's practice / government department / bank / school could save money and time by putting a chatbot between themselves and the people they serve. There are benefits to one-click content creation for organisations. But I don't see how end users could benefit, at all. Who would prefer reading convincing-but-potentially-false chatbot advice to a conversation with their doctor (or force the bot on others)? Zooming out from specific use cases to the wider ecosystem… aren't even those who shrug at ideals like centering humans worried that LLM-generated content wipes out the very “value” capitalists want to extract from the web (through enshittification)? I certainly hope so.

Addendum: I didn't know, when writing this post, that OpenAI's CEO Sam Altman had literally written that he looked forward to “AI medical advisors for people who can't afford care”. From his thread on 19 February 2023:

the adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly; the benefits (and fun!) have too much upside.

these tools will help us be more productive (can't wait to spend less time doing email!), healthier (AI medical advisors for people who can’t afford care), smarter (students using ChatGPT to learn), and more entertained (AI memes lolol).


we think showing these tools to the world early, while still somewhat broken, is critical if we are going to have sufficient input and repeated efforts to get it right. the level of individual empowerment coming is wonderful, but not without serious challenges.

He talks about “individual empowerment [that] is wonderful”; I think it's incredibly dystopian.

List of updates
  • 31 July 2023: Added addendum

Comments, likes & shares (28)

@hdv Someone on the World of Warcraft Reddit made a post saying how excited they were about new (nonexistent) content in the game. Bots picked this up and generated articles off this one Reddit post: https://arstechnica.com/gaming/2023/07/redditors-prank-ai-powered-news-mill-with-glorbo-in-world-of-warcraft/ (“Redditors prank AI-powered news mill with ‘Glorbo’ in World of Warcraft”)
Added a quick addendum to this post as I found out that Sam Altman actually said the dystopian doctor scenario is a solution for those who can't afford care, calling the future of “individual empowerment” nothing less than “wonderful”: https://hidde.blog/llms-user-centered/#addendum
My main question here is: would Sam himself rely on ChatGPT instead of doctors? Or is he building an alternative for those who can't afford what he can? If so, it needs more Golden Rule: https://en.wikipedia.org/wiki/Golden_Rule