Re: AI for content creation

Morten described a possible future, maybe even a present reality, in which AI-generated content is rampant. But when we start to employ machine learning for content creation, we start to regard content as a means more than an end. In that process, won't we lose what's worth caring about?

In his post, Morten explains that he sees three types of AI-generated content emerging. The first two are already a thing: AI-curated content (AI assembles content and serves you what it thinks is most relevant) and AI-assisted content creation (AI contributes to the creation process). The third, AI-synthesised content, will likely become a thing in the future. Morten's post gives a great overview of what to expect.

It reminded me of a project I did in university about automating the arts. My conclusion: we can write code to generate creative works, but code or models can't capture intentions, experiences or beliefs. Those require human input; therefore, my reasoning went, creating art (or content) requires human input too. There are nuanced differences between AI, machine learning, big data and bots, but I won't go into them in this post.

When I want to find a recipe for pizza dough on the web, I consider myself lucky if I can get ahold of a blog post from someone who cares passionately about the right kind of dough, who maybe ran an artisan pizza kitchen in Naples for the past 30 years or has a background in baking. ‘Dream on’, you think. Well, these people exist on the web, and the web is awesome for being an open platform that anyone with a passion can write on. I don't want to find text produced just because someone saw “pizza dough” is a common search phrase with top-result ad money to be extracted. If the passion that drives such writers is ad revenue rather than pizza dough, that's fine, but it makes their content less relevant to me. Similarly, I also don't want to find text generated by a machine learning model. It can't possibly bring the knowledge and experience I'm hoping for.

When I write an email or a reply, I try to put what I want to convey into words that I choose. I might include an inside joke that the recipient and I share, fit in an appropriate cultural reference, be extremely polite, or terribly rude. My intentions and attitude are in that interaction. I don't want Google or LinkedIn or others to suggest what to reply, echoing the historical content they trained their machine learning models on. It dehumanises my conversation, and its suggestion may or may not align with my intentions.

When I listen to music, I can be touched by the experiences and world views that inspired the artist. Whether you're into the Eagles, Eels or Ella Fitzgerald, their songs capture things that machine learning systems can't because the artists have attitudes. Robots don't love and they don't have opinions. Maybe they can come up with interesting rhythms and melodies, or utter sentences like “I love you”, but the association of their work with intentions and memories needs humans.

When I read a newspaper, the order of pages, the focus a layout provides and the choice of photography… they are decided by human beings who have a lot of experience. People who work as journalists after being professional sports players for decades. People who have followed politics for decades and therefore understand which scandal is worth extra attention. People who can make bold choices based on their world views. Bots don't have world views. Algorithmic prioritisation of content isn't as good as prioritisation by humans, even if it gets close. See also algorithmic timelines on social media versus human-curated lists and contextualisation.

When I have a consumer issue, I want to talk to a human representative of the company. Someone who has the authority to make decisions. Who can take us on the shortest path to a mutually satisfactory solution. Did you ever see a chat bot provide more than a rehash of the data it has been fed? Did you see a chat bot make enquiries with empathy? Lack of empathy isn't a bug in bots that we just haven't fixed yet: it arguably isn't empathy if it isn't human-to-human (ok, maybe animals can be part of this equation).

All these examples lead me to think: the human side of data isn't measurable or computable. The human side of art, content or communication is not just a different side of the same coin, it's a bigger coin. There is more to reality than data can capture, like lived experiences from actual people and intentions and beliefs. Propositional attitudes that robots can only pretend to have.

Basically, I'm worried about overestimating how many human capacities machine learning can take over. At the same time, I don't think machine learning is useless. Quite the opposite, it is fantastic. I love that computers are getting better at automated captions, translation or even generating images based on prompts. The latter may create a whole new level of art where artists use it as a material for their creations (see also “AI is your new design material” by Josh Clark). Medical applications where machine learning notices abnormalities that a human might miss. Audio recognition engines that will tell you what song is playing. Email spam filters that save us all a lot of time. It's all super valuable and genuinely impressive. And a lot of these use cases don't need lived experiences, intentions or beliefs to be incredibly useful.

Comments, likes & shares (28)

Hidde de Vries (@hdv@front-end.social) is a web enthusiast and accessibility specialist from Rotterdam (The Netherlands). He currently works on web standards for the Dutch government and is a participant in the Open UI Community Group. Previously, he worked for W3C (WAI), Mozilla, the Dutch government and others as a freelancer. Hidde is also a public speaker; he has given 73 talks, most recently in Virtual. In his free time, he works on a coffee table book covering the video conferencing apps of our decade. He wrote on 10 September 2022:

The last dConstruct is a wrap! Jeremy did a great job curating a day that was (loosely) themed around “design transformation”. Here's my writeup of the day.

Jeremy in front of dConstruct opening slide with all of the speaker photos and names

How does content survive 100 years?

When designing Flickr, George Oates tried to design a “context for interaction, not just an application”. It worked. Flickr allowed people to post content and connect it through tags, comments and more. Today, the site has 50 billion pictures posted by millions of people, making it, in Jason Scott's words, “a world heritage site”. Archivists may have kilometres of underground storage where they keep historical records, but a site like Flickr is unique, as so many people contributed to it. For future generations, the sheer amount of visual data could give away a lot about life today. But Flickr isn't a given. Having changed owners a few times, the site was almost killed and all its content deleted. Now, at the Flickr Foundation, George thinks about keeping this content for the future. And by future, she means the next 100 years. Long-term preservation most likely needs selection, George clarified, maybe by letting users mark specific photos of theirs as keepworthy. Maybe it needs printing after all, as we are not sure if JPGs or PDFs will still be readable in 100 years. And how do we preserve a picture that is part of a network, if we can only preserve part of that network? How does this wealth of content survive economic forces and corporate whimsy?

Whiteboard with lots of post-its answering which things have survived over 100 years, and what should and should not survive 100 years (George's team mapping out 100 years)

These questions made me worry about the content I create online: blog posts, tweets, videos… It's on my personal website that I'm most sure there won't be corporate whimsy, but it's also unlikely to survive when I'm not around to pay the hosting bills. Should I update my will?

The fun part of writing is the research

Lauren Beukes is a best-selling author. She travels a lot and said she actually enjoys all this research more than the actual writing. On these trips, Lauren talks to a lot of people, from detectives in Detroit to teenage theatre geeks. She learns from their perspectives, takes in their sometimes horrifying stories and learns how they are treated by the system. Part of what a novelist does, she explained, is asking “what if?”.

Transformation through type

Type designer and calligrapher Seb Lester showed how a typeface he started designing on a train came out 8 months later and started getting used. It appeared on washing powder, skyscrapers and Olympic Games branding. “When a typeface goes out in the world”, Seb explained, “a little bit of you goes out in the world”. The font was everywhere, but hardly anyone had heard of him (he said). Until he started publishing letter forms and calligraphy on social media. Seb's calligraphy videos and a cheeky comment in an interview in Salon got him jobs designing visuals for rocket scientists and Apple. His videos of lettering in progress are extremely soothing to watch, transforming what seems like a few scribbles into beautiful works of art. Seb's stories were a reminder that success is very much a combination of two things: “work hard”, “find your passion” and “believe in yourself” on the one hand, and be lucky and get noticed by the right people at the right time on the other. That last part is out of your hands, so you can only really try to do the first.

Seb in front of a slide that shows an email: “Contact request from redacted. Reason for contact: NASA mission logo. Hello Mr. Lester. I work for a NASA mission called SWOT (swot.jpl.nasa.gov) and have been asked by the project to contact you to assess your interest in working with us to produce a mission logo (per your comment in http://www.salon.com/2013/01/21/seblester the man behind your favorite fonts/). If so, we are interested what ideas you have and if there is a match with our needs. Warm regards, redacted.” (TFW NASA takes note)

Design to make the world better

Daniel Burka has been a designer for a long time. He worked on the Firefox brand for Mozilla, which was interesting because of open source and their mission. He worked on Digg, which was interesting because of their scale. He worked on a game called Glitch, which was interesting because the creators of Flickr were involved (and Glitch became Slack). And then he worked at Google Ventures, where he met a number of companies working on life sciences related products. There he came to realise that while designers in Silicon Valley are often in a very comfortable position, a lot of the world isn't well designed. This resonated with me: some of our largest design budgets are used to solve trivial problems, like yet another food delivery service. Education, healthcare, financial services… they are long-term and hard problems. Highly regulated, too, and not very used to having designers on their teams. Daniel ended up working on Resolve to Save Lives, where he makes software that makes it easier to register data about high blood pressure patients. This sort of data saves lives by making it easier to get patients to return regularly. But clinicians want to spend time on patients, not data entry, so the technological layer needs to be very light to be effective.

Beware of tech utopians

In technology we trust. So much so, that the Paris climate agreement—essential to humanity's survival on earth—is based on the assumption that technological inventions will be available. Techno-utopianism is not new, Sarah Angliss explained. She told us about Muriel Howorth, an amateur nuclear scientist who reckoned that if radiation could be used for atomic bombs, it could also be used for atomic gardening. Howorth wanted to use “atom-blasted” seeds to grow a giant vegetable. Sarah also discussed an experiment in which fluoride, which is toxic in high quantities, was added to a city's water supply, with high expectations of improving citizens' dental health, and “saxton spanglish”, a phonetic and somewhat controversial method of teaching children English. It reminded me of pinyin, which is sometimes used to teach non-native speakers Mandarin Chinese and is also controversial for oversimplifying and making proper learning harder. They are all attempts to transform through design that are a little too utopian. Maybe another modern day equivalent, besides climate tech optimism, is that once so popular social network that frantically tries to make the ‘metaverse’ work, or our tendency to sprinkle machine learning on everything, even where it doesn't necessarily make sense (see AI for content creation). Yes, it could work, but it might not.

Computers need to get better at togetherness

Matt Webb talked about reinventing the workplace. One major reinvention was when the personal computer made its debut (see also: the setup). Here's some utopianism that did end up well. Screens, documents, text processing, the mouse: the mission to put a computer on every desk (and Douglas Engelbart's “mother of all demos”) completely reshaped what offices looked like. Speaking of design transformation! Matt noted how, fascinatingly, much of the history of computing is about furniture design. I didn't know Herman Miller worked with Doug Engelbart's team to invent furniture to go along with their machines. Is the office done? Not really, as in 2022 we increasingly find ourselves working together from different locations. The current state of “togetherness” is lacking, Matt explained. We can be in virtual rooms together, but it isn't as good as it could be. For instance, they don't have a window out—you can't see who's approaching the space. There is little of the serendipity you might find in a physical office. With Sparkle, Matt works on a Zoom replacement, a tool that aims to facilitate togetherness better. As a remote worker, and as someone who isn't sure Big Tech has the solution for us, I will be following this work.

Matt with slide that says: so much for tools for thought… what about tools for togetherness?

Moonlander with applause-controlled lasers

Lasers: they are fascinating. Seb Lee-Delisle showed us how he took animations outside of the projected screen to kick off Smashing Conference in Oxford, displayed love for the NHS onto their Brighton building during the pandemic and displayed laser fireworks that people could interact with. It's impressive, really, how much even one laser can do, let alone the over 30 in different strengths that he currently owns. Seb had the arcade game Moonlander (I had to look it up) projected on the wall with lasers, so that we could all try and safely land a moon lander by affecting the vehicle's thrust with our applause.

Everyone perceives differently

Anil Seth concluded the day with a talk about perception. Questions about how we perceive the world outside and our own bodies have puzzled philosophers for millennia. Today, neuroscientists try to confirm through experiments that we don't perceive the world directly, showing some of these philosophers were correct. Sensory input, Anil explained, is constantly processed by our brain—it fills in the gaps based on what it remembers of the past and predicts of the future. Seb's lasers had just demonstrated that: when he made his laser move in a circle, we perceived a static circle, not the movement. Just like a film is really a rapid sequence of still images, which our brain causes us to perceive as continuous motion. Anil drew an interesting parallel between perception (“controlled hallucination”) and hallucination (“uncontrolled perception”). This constant interpretation of input by individual brains implies what Anil calls “perceptual diversity”: everyone perceives differently. In an art installation called Dream Machine, participants' perception is triggered and then recorded in the form of drawings afterwards.

Wrapping up

There was a bit of design transformation in each of these talks. We were rightly warned about tech utopianism. Design and technology can't transform everything, yet some speakers shared their plans to transform. They (re)defined what it means to perceive ourselves, collaborate on screens or keep content available over a long stretch of time. Some speakers were right in the process of transforming. They talked about turning a series of penstrokes into art, lasers into fireworks, human experiences into novels and patient data collection into a minimal effort task.

A lot of our work in web design and technology has a power to transform, and that is wonderful, especially when we manage to be intentional about the how and why. With that, I'll conclude this writeup of the last dConstruct. If it piqued your curiosity, word is that audio of the full event was recorded. It will be added to the dConstruct Archive in due time. For now, thanks Clearleft for another excellent event.

@hdv @baldur

The minor issue I have with "in principle never" here is that we just don't have a good understanding of how "real" intelligence works. What makes the human passionate about the best pizza crust? If the answer is, "the human has all this lived experience", then I don't think we can automatically say that the "lived experience" of a hypothetical AI is necessarily qualitatively different. If the answer is metaphysics and a soul or whatever, then sure.

This week, many Dutch families write each other poems, which can be tongue in cheek. While you can generate rhyming words, you can't generate the banter potential between people who've known each other since they were born.

@hdv Why do they write each other poems? It sounds beautiful

@alenanik11 it can be brutal, but it's definitely fun, it's a Sinterklaas tradition https://en.m.wikipedia.org/wiki/Sinterklaas

Sinterklaas - Wikipedia

@hdv @alenanik11 People write each other poems for Sinterklaas in the Netherlands? I can't recall us doing that in Belgium 🧐