LLM vendors promise huge time savings, while many people find that these tools cost them time. Can both be true at the same time? Yes: when the tools save time for some people, but cost time for others.
(I'll use “AI” and “LLMs” interchangeably in this post, but I am mostly talking about LLMs marketed as AI)
They overpromise
Microsoft says Copilot can “boost productivity to achieve more in work and life”, Google says Gemini can “supercharge your creativity and productivity” and ChatGPT's homepage boasts you can “condense hours of work into minutes”.
If you ask the companies selling AI tools, they supercharge all of your productivity, constantly.
I understand that these claims sound very attractive, and that business owners want the thing that can deliver it all. But whenever someone uses an LLM to generate text, images or code in seconds, their time saving (if real at all) could cost someone else hours.
Not only is that annoying for the “someone else”, it negates the assumed benefits and is another reason to adopt less AI, rather than more. It is a serious threat to organisational productivity.
Real-life examples
Note: this post is not about one specific situation (don't worry if we've worked together); I've combined a large number of my own and my friends' experiences.
Meet Bob and Alice. They work at a large e-commerce firm, tasked with optimising product descriptions. One day, Bob uses AI to write all of the content for their new airfryer overview page. It seems genius… he spent just under 2 minutes, and can use the rest of his day for other tasks. Now Alice, his manager, only needs to review it. During the review, she finds that lots of details are subtly off, and it costs her almost a full day to document them all. The text lists the wrong voltage, claims conformance with the wrong technical standard and hallucinates two features. The tone of voice also feels much more generic than their usual style, and there are lots of unnecessary words. She's worried the errors will affect sales.
Interestingly, “vibe code cleanup specialist” is now a job. The cleanup work can also remain invisible, done without anyone realising it, and sometimes it leads to conflict. This is so common in the workplace that there is a word for it: workslop.
There are lots of real-life problems that one person's AI use can cause for another:
- person 1 uses AI-generated meeting minutes in Zoom or Teams; person 2 is now quoted as saying something they didn't say.
- person 1 shares an AI summary; person 2 ends up with incomplete information, wrong assumptions or simply false information.
- person 1 uses AI to monitor workers' productivity; person 2 is a female employee, part of a group we know is less likely to be included in such summaries because of algorithmic bias.
- person 1 creates a strategy document with AI; person 2, as reviewer, has to wade through synthetic text that nobody put thought into, which wastes their time.
- person 1 automatically generates a website of fun things to do in New York that is full of slop; person 2 is visiting New York and now has yet another search result that is essentially useless.
(I could go on…)
In all of these cases, AI “works great” for person 1 and is a burden to person 2. Person 2 could see their time wasted, be forced to account for words they never used, or risk being fired or worse; there may even be legal consequences. They could also feel dehumanised, repeatedly having to process computer output instead of taking part in real, human collaboration.
Given the choice, I reckon nobody wants to be on the receiving end of LLM output. Not when it replaces purposefully put-together minutes, summaries, strategy documents or tourist websites.
Yes, there can be shitty behaviour without AI, but why would we make shittiness easier through automation? That isn't a sensible use of technology by any measure.
How we can do better
Below are four examples of policies we could put in place, at a minimum, to work together more peacefully.
Mandatory disclosure
When sharing content, code or an image that is essentially from an LLM, disclose that. For instance: “I got this from AI:”. That way, anyone who needs to do something with the content can make an informed decision about how much time to spend on it.
Check thoroughly before sharing
Before posting a comment, asking for a review, deploying a documentation website or emailing someone with the help of an LLM, take responsibility. Manually check all words, lines of code or images very thoroughly. Don't leave any of that to the next person.
And yes, that means turning off SaaS features that share LLM output automatically, like meeting recaps: “automatically” means no human checked it, which means nobody took responsibility.
No profiling
With all we know about algorithmic bias, and without any sound evidence of a system that avoids it, the best (only?) policy around profiling people with AI is to not do it. It cannot be done fairly, even if you put a human in the loop.
Voluntary use
Make your employees' AI use voluntary, and avoid basing performance reviews on how much AI they use (some companies do this). Make it easy to opt out, and fully respect the choices of employees who choose not to use it.
Summing up
In this post, I've discussed a phenomenon I feel needs mitigation: AI usage that benefits one person while burdening another. To me, this is another reason to be very cautious about using LLMs in the workplace.