Generative AI features have a large climate impact and consume a lot of water. We can weigh that impact against those features' benefits, but what if they are left unused? What if lots of people don't in fact use the thing? That seems like a lot of avoidable waste. And it matters: we're in a climate emergency, and we're dangerously far from the 1.5 degree target.
I know, we all want people to use the features we build, but it's safe to assume they often don't. For my business, I use a lot of very beautiful self-service portals that I only ever log in to in order to download a PDF my accountant needs. The beautifully considered UI, fancy spinners and clever copywriting are all there, but if I'm honest, I mostly ignore them (sorry).
Is that ok? A button in your app that your user doesn't press wastes little energy. But if your app automatically generates summaries, captions or suggestions, and the user didn't want or use that functionality, a lot of energy and water was wasted while serving no purpose. It's that combination of waste and purposelessness that we should avoid at all times.
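To make that contrast concrete, here's a minimal sketch in TypeScript. All the names in it are made up for illustration (`generateSummary` stands in for whatever expensive LLM call a product makes): the eager version spends model energy on every upload, the lazy one only when a user actually asks.

```typescript
// Hypothetical sketch: every name here is made up for illustration.
// `generateSummary` stands in for an expensive LLM-backed call.
async function generateSummary(videoId: string): Promise<string> {
  // Imagine a model inference happening here, using energy and water.
  return `A summary of video ${videoId}`;
}

// Eager: runs for every single upload, wanted or not.
async function onUploadEager(videoId: string): Promise<void> {
  const summary = await generateSummary(videoId); // cost paid up front, always
  console.log(summary);
}

// On demand: the cost is only paid when a user explicitly requests it.
function onUploadLazy(videoId: string, summarizeButton: HTMLButtonElement): void {
  summarizeButton.addEventListener('click', async () => {
    const summary = await generateSummary(videoId); // cost paid only on request
    console.log(summary);
  });
}
```

The difference is not in the model call itself, but in what triggers it: an upload event fires for everyone, a button click only for the people who wanted the output.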
Wait, that's absurd, you say. Does this really happen? Yeah, I come across it all the time, and it's not just because I'm somewhat of a luddite myself.
Features I didn't use
Some examples of AI features that ran on my behalf just in the past week, but that I didn't use:
- Loom's transcripts and automated titles and descriptions. They show up almost instantly after upload. I always remove them, because they fail to get the point across, and I want to do that pointedly to save colleagues time when they review a video.
- Parabol's automated summary of team retrospectives: it emailed us key points, some of them incorrect, even though we had already written them down correctly ourselves.
- Notion's AI assistance that shows up whenever you press ‘Space’. Ok, granted, it only runs once you've actually typed a prompt, but it's a good example for this post, as it's one of those features I hear many people want to turn off, and you can only do that “on Enterprise”, according to this Reddit topic dedicated to turning the feature off.
Of course, these features are not redundant if users benefit from them. But let's be real: oftentimes users didn't want to generate anything. It happened anyway, unsolicited, and they will probably discard the output. In those cases, the energy-intensive feature was redundant. And that's an issue, as we don't have redundant energy.
Meanwhile, most major tech companies have announced they are letting go of their net-zero goals. Some have turned to nuclear power plants to cater for their energy needs (see Microsoft's deal to restart the Three Mile Island plant). This confirms to me that we don't have abundant energy. Maybe one day, but not today.
An ethical web
Is this ethical? The W3C's Ethical Web Principles have a specific principle that applies here: 2.9 The web is an environmentally sustainable platform.
It suggests new technologies should not harm the environment:
We will endeavor not to do further harm to the environment when we introduce new technologies to the web (…)
and recognises people who benefit are not always those who are harmed:
and keep in mind that people most affected by the environmental consequences of new technologies may not be those who benefit from the features introduced.
If a feature is useful for some, but indirectly drives up energy or water bills for others, we're breaking this Ethical Web Principle.
Conclusion
So, I'm just a guy sitting behind a keyboard, begging anyone who builds generative AI features into their products: only put them to work when users indicate they want that to happen. That's also going to turn out cheaper when OpenAI increases its rates, which is likely, as investors are going to want returns. And why not consider leaving out that new LLM-powered feature in the first place? Not everything needs to be “artificially intelligent”; sometimes a bunch of if statements makes a killer feature, as in the sketch below. (Dude, that's so paternalistic and you're oversimplifying the realities of software engineering, you say… yeah, sorry, I'm trying to react to the overcomplexification that also happens.)
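Here's a hedged sketch of what I mean by "a bunch of if statements", again with entirely made-up names (`Recording`, `suggestTitle`): a title suggestion built from plain conditionals instead of a model call.

```typescript
// Hypothetical sketch: suggest a title for a screen recording without an LLM.
// All names and fields are invented for illustration.
interface Recording {
  filename: string;
  projectName?: string;
  durationSeconds: number;
}

function suggestTitle(rec: Recording): string {
  // Prefer an explicit project name if the user set one.
  if (rec.projectName) {
    return `${rec.projectName} walkthrough`;
  }
  // Otherwise, fall back to a cleaned-up filename.
  if (rec.filename) {
    return rec.filename.replace(/\.\w+$/, '').replace(/[-_]+/g, ' ');
  }
  // Last resort: a generic, duration-based title.
  return `Recording (${Math.round(rec.durationSeconds / 60)} min)`;
}
```

It won't write poetry, but it runs in microseconds, costs nothing per call, and the user can still edit the result, just like they'd edit (or delete) a generated one.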
Do you have other examples of software that forced LLM generated content on you? Let me know and I'll add them to the post.
Further reading
- Thinking about using AI? Here’s what you can and (probably) can’t change about its environmental impact by the Green Web Foundation
- AI’s Growing Carbon Footprint (cites data centres account for 2.5-3.7% of global greenhouse gas emissions, exceeding aviation)
- We’re getting a better idea of AI’s true carbon footprint
- Is generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins