Today I was at an event on sustainability reporting. It was hosted by Digital Catapult and excellently moderated by Jo Lindsay Walton and Chanell Daniels.
I was glad I was able to join, given my interest in AI ethics (at work and at the W3C), a greener future (as a contributor to the WSG), and reporting methodologies (though for accessibility, as an editor of WCAG-EM). I don't know a lot about sustainability reporting, so I was keen to learn.
The event had a keynote, two panels and two interactive sessions. In this post, I'll share notes and links that I picked up from the day. And yes, this post too adheres to my AI statement.
The event was backdropped by a view over most of London's St Pancras Station.
Keynote: why reporting matters
1.5 degree promise
In his keynote, Jo Lindsay Walton positioned the event as being about climate change more than AI, reminding us that we promised one another to stay “well below 2 degrees Celsius” compared to pre-industrial levels, and recently “effectively missed the 1.5 degrees target”. This is why carbon removal matters, now more than ever.
The bath tub
Using John Sterman's bathtub metaphor, he said we're collectively pouring more water into the tub than we're allowing to drain. “We need to equalise the drain with the tap going into the bath tub”, Jo explained, “doing more removals than emissions”. This applies to AI in the sense that AI is the main reason tech companies are currently increasing (rather than decreasing) their energy use: they're pouring more into the tub than they are letting out. The assumption may be that technology could eventually help us reach carbon reduction goals faster, but the required scale and pace have yet to materialise. At the same time, AI can divert our focus from greenhouse gas removals.
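The bathtub metaphor is essentially a stock-and-flow model: the water level (atmospheric CO₂) keeps rising as long as the inflow (emissions) exceeds the outflow (removals), even if emissions shrink. A minimal sketch, with made-up illustrative numbers rather than real climate data:

```python
def co2_stock(initial: float, emissions: float, removals: float, years: int) -> float:
    """Water level in the tub after `years`: the stock changes each year
    by the difference between inflow (emissions) and outflow (removals)."""
    stock = initial
    for _ in range(years):
        stock += emissions - removals
    return stock

# Illustrative only: with emissions at 40 units/year and removals at
# only 2 units/year, the tub keeps filling even if the inflow is stable.
print(co2_stock(initial=3200, emissions=40, removals=2, years=10))
```

The point of the model is that the level only stops rising when removals equal emissions, which is why Jo framed removals as mattering now more than ever.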
Policy implications
The report “Big Tech’s climate performance and policy implications for the UK”, which Jo mentioned, notes “there is a real risk that emissions from the AI build-out will outstrip any climate gains as tech companies abandon net zero goals and pursue huge AI-driven profits”.
Net benefit/detriment framing
“Is AI a net benefit or net detriment to sustainability?”, people often ask. Jo explained this is the wrong question—it's misleading as it conflates many different kinds of AI systems that have different infrastructural requirements. It puts things like ChatGPT in the same bucket as other, more traditional systems. Just as, when we talk about transport, we don't want to conflate planes with bicycles. Jo and colleagues reject this framing in their paper “Modelling diverse futures of AI and the climate”, recommending an approach that includes open data, being explicit about what's uncertain, and inclusion of a broad range of stakeholders.
How you measure matters
Measuring sustainability is essential, and there is a risk of divergence between scores (more on that below). Without good measurements, claims could become greenwashing (a practice EU Directive 2024/825 effectively prohibits). We mention greenwashing in the introduction of the Web Sustainability Guidelines, and in the group making WCAG, the accessibility standard, something similar has our attention: it's important that people can't “abuse” the system.
On greenwashing, Jo mentioned “The AI Climate Hoax: Behind the Curtain of How Big Tech Greenwashes Impacts” (PDF), that talks about how being vague about the meaning of the word “AI” helps corporations avoid responsibility.
Differences in methodologies
Lastly, Jo discussed how results can differ quite a bit between ways to measure carbon, even between methods that are considered reasonable. A (preprint) paper explaining this effect is “Beyond Counting Carbon: AI Environmental Assessments Struggle to Inform Net Impact Decisions”.
Panel 1: Is AI sustainable?
After Jo's talk, he moderated a panel on risks, trade-offs, and futures.
The supply-driven nature
Loïc Lannelongue said it's problematic that AI is almost entirely supply-driven, rather than driven by demand to solve problems. Copilot, for example, is introduced absolutely everywhere, and it's like we're waiting for someone to raise their hand with a reason to use it… that's terrible from a sustainability perspective.
The full lifecycle
Melissa Gregg talked about how helpful it was, when she worked at Intel, to bring the company's sustainability folk together with engineers, to close exactly those gaps. She also explained that it was important to move from spend-based carbon accounting to a more holistic type of accounting that includes embodied carbon and the full lifecycle. Especially as companies release devices that contain more and more chips (like smart glasses).
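To illustrate the difference between the two approaches Melissa described: spend-based accounting multiplies money spent by an industry-average emission factor, while a lifecycle estimate adds a device's embodied (manufacturing) carbon to its use-phase emissions. A rough sketch, with all figures and factor values made up for illustration, not taken from the talk:

```python
def spend_based_kgco2e(spend_gbp: float, factor_kg_per_gbp: float) -> float:
    """Spend-based estimate: money spent times an industry-average factor."""
    return spend_gbp * factor_kg_per_gbp

def lifecycle_kgco2e(embodied_kg: float, annual_use_kg: float, years: float) -> float:
    """Lifecycle estimate: embodied (manufacturing) carbon plus use phase."""
    return embodied_kg + annual_use_kg * years

# A hypothetical £1000 laptop (all numbers illustrative):
print(spend_based_kgco2e(1000, 0.25))                                # 250.0
print(lifecycle_kgco2e(embodied_kg=300, annual_use_kg=30, years=4))  # 420.0
```

The two estimates can land far apart, because a spend-based factor says nothing about how a specific device was manufactured or how long it stays in use—which is exactly why embodied carbon matters for chip-heavy devices.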
Accuracy
The panel discussed whether accuracy in reporting even matters, as even inaccurate reporting can already help move things in the right direction. Besides, there is also sustainability impact (concerning the S of ESG) that cannot easily be captured by metrics, like the effects of increasing digitalisation on working conditions, human rights, and communities.
Policy and five 9's
Policy and regulation were also discussed. Loïc mentioned that it's common for data centres to guarantee uptime with “five 9's”, meaning they guarantee to be available 99.999% of the time. This sounds great and very reliable indeed, but in terms of energy it means that those data centres run diesel generators to achieve it. We could introduce policy and regulation for such metrics, and decide when it really matters: a cat video on YouTube could be unavailable for a couple of minutes, while for a hospital, the possible life and death consequences of downtime could justify lots of 9s. If more services strove for fewer 9s, might we avoid a lot of carbon emissions?
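The arithmetic behind the nines makes the trade-off concrete: each extra nine cuts the allowed downtime by a factor of ten. A quick sketch:

```python
def annual_downtime_minutes(nines: int) -> float:
    """Allowed downtime per year for an availability of n nines
    (e.g. 3 nines = 99.9%, 5 nines = 99.999%)."""
    unavailability = 10 ** -nines  # fraction of the year allowed down
    minutes_per_year = 365 * 24 * 60
    return unavailability * minutes_per_year

# Three nines already allows under 9 hours of downtime a year;
# five nines shrinks that to roughly five minutes.
for n in (3, 4, 5):
    print(n, round(annual_downtime_minutes(n), 2))
```

Going from five nines to three nines relaxes the requirement a hundredfold, which is the kind of headroom that could make diesel backup unnecessary for services where brief downtime is harmless.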
Panel 2: AI tools in sustainability reporting
The second panel was about using AI in the process of sustainability reporting.
I was surprised to hear that many on the panel advocated for using AI in all sorts of ways, including for analysis. Given what we know about hallucination and the inner workings of LLMs (and that's before I even get to the energy use), I struggle to understand this cost/benefit analysis. But admittedly, I don't write sustainability reports, so I can't really speak to whether my choice deprives me of productivity wins that would have enabled me to do more carbon removal.
AI helps do sustainability work faster, was a common theme in this panel. One of the panelists explained his agentic AI setup as “having cognitive slaves (sic) work as assistants for you”. I struggled to identify with the desire to “have” “slaves” (I could not separate it from my immediate connotation with a historical wrong), and felt attributing “cognition” to agents anthropomorphises them. There is way too much that philosophers, psychologists, and neuroscientists don't yet understand about cognition to back that up.
In this panel, it was interesting to hear how the panelists worked with AI, and get their insights on how AI benefits their sustainability work, as well as how it may end up impacting the industry.
Activity: the future
In the last session we worked in groups to imagine the year 2040, when all the problems with AI have disappeared and all is peachy. Jo acted as a time machine that took us from 2040 to 2035 to 2030, having us figure out what changes to make to get to this future.
This was a fun thought process, and it had me considering what it would be like to have public-interest, non-problematic AI only. One group proposed to bring critical thinking to the classroom, as early as possible. As someone who was lucky enough to have dedicated philosophy classes in my high school curriculum, that I still often think about decades later, I wholeheartedly agreed.
Summing up
I had a great time at the “AI and the future of sustainability reporting” event, and am grateful to the organisers for putting it on, and Digital Catapult for hosting it in their beautiful location.