This week I attended a symposium called “The Politics of AI: governance, resistance, alternatives” at Goldsmiths in London. In this post, I'll share some takeaways.
The Goldsmiths building
The event was organised by Fieke Jansen, Dan McQuillan, Jo Lindsay Walton and Pat Brodie, academics working in the fields of critical AI studies, technology infrastructure and sustainability. The overlap between these fields is probably obvious to most, and the closer you look, the more apparent it becomes.
Why critical AI? Tech companies continue to tout AI as the solution for all of our problems. Meanwhile, many researchers, bloggers and non-profits warn about the wide range of issues with how current AI technology is made, marketed and used (I wrote about ethical issues earlier). Those issues don't disappear by themselves.
If we want change, we can protest, and that sometimes works. But at The Politics of AI, the vibe leaned more towards coming up with alternatives. Clearly there are problems with the status quo; what should we do instead?
AI today
Many of the speakers discussed today's problems with AI, most notably regarding sustainability, necropolitics and labour conditions.
Sustainability
Ana Valdivia said we are now in the “Anthropocene”, a geological epoch in which humans actively alter nature, and encouraged us to look at the physical side of AI: the impact of its infrastructure on the environment.
Joseph Lane talked about how agriculture is shifting to be more data-driven; this approach, he explained, is too centralised, rigid and abstract, and ignores ecological complexities.
Becky Kazansky discussed greenwashing, and how reading corporate sustainability reports as an expert can be a “gaslighty experience”. Reports can sound like companies do a lot of amazing work, but often describe solutions that are unlikely to help. “Market-based carbon accounting” (which basically allows companies to subtract purchased energy certificates and credits from their reported emissions) creates a huge disparity between what they report and reality, effectively “mortgaging the future”. There are also solutions that may never work but still count towards credits, such as solar geoengineering (attempts to reduce how much sunlight reaches the earth); see this call for non-use.
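To make that disparity concrete, here is a minimal sketch of the two accounting methods. All numbers and the emission factor are entirely hypothetical, made up for illustration: location-based accounting reflects the grid that actually powered the consumption, while market-based accounting lets purchased certificates zero out matched consumption.

```python
# Illustrative sketch of the accounting gap described above.
# All figures are hypothetical; "grid_intensity" is an assumed
# average emission factor, not a real number for any company.

def location_based(mwh_used: float, grid_intensity: float) -> float:
    """Emissions from the actual grid mix powering the consumption (tCO2e)."""
    return mwh_used * grid_intensity

def market_based(mwh_used: float, mwh_credits: float, grid_intensity: float) -> float:
    """Emissions after subtracting purchased certificates/credits (tCO2e)."""
    covered = min(mwh_credits, mwh_used)  # credits zero out matched consumption
    return (mwh_used - covered) * grid_intensity

usage = 1_000_000    # MWh consumed in a year (hypothetical)
credits = 950_000    # MWh of purchased certificates (hypothetical)
intensity = 0.4      # tCO2e per MWh, assumed grid average

print(location_based(usage, intensity))         # 400000.0 tCO2e actually emitted
print(market_based(usage, credits, intensity))  # 20000.0 tCO2e reported
```

In this made-up scenario the company reports 5% of the emissions its consumption actually caused, which is the kind of gap between report and reality that Kazansky pointed to.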
Predatory Delay and Other Myths of “Sustainable AI”, co-written by Fieke Jansen and Michelle Thorne, has a useful overview of various myths around sustainability and AI.
Necropolitics
Sarah Fathallah showed us how AI is actively used as a mass assassination factory by the Israeli army. This is a technology that can generate kill lists based on characteristics given specific thresholds, where army officers can modify those thresholds if they want longer lists, with responsibility outsourced to a black box. AI technology is used for “necropolitics”: data of living Palestinians trains the system that decides on Palestinian deaths.
System of extraction
Berhan Taye explained how the pipeline through which modern AI is made is a system of extraction, including its data labeling practices. “Without data, machine learning and automated decision-making systems are empty vessels”, she explained, noting that a lot of the work (65%) is data cleaning, labeling and augmentation. Labeling is a billion-dollar industry in 2025, one with unfair compensation (below living wage and without contracts) that exploits global inequalities, while it has made the owner of one of the largest data-labeling companies a billionaire.
The event was clearly signposted
Possible futures
There was also, as was the premise of the event, plenty of discussion of what different futures could look like, and what we would need to get there.
AIs that humans want
Isadora Cruxên talked about her Data Against Feminicide project, which has been running for over 5 years. The project is involved with data production, but starkly contrasts with the extractive data annotation industry mentioned above: it reimagines data work as a “site of co-constructed knowledge, community care and resistance against exploitative work practices”. By collecting data, they help the public understand the problem as systemic and support the search for justice. Their AI is feminist in the sense that it carefully considers questions of power. Isadora explained that involving the community leads to AI that humans actually want. Corporate-led AI may tell you it understands what people want, but when you ask people in workshops, it turns out they have their own ideas about this. A better future of AI involves users much more widely.
Scarcity approach to AI
Fieke Jansen advocated for a “scarcity” approach to AI infrastructure. She explained that AI is positioned as “we need to build more of it”, with hyperscalers like Google and Microsoft rapidly building new data centers to facilitate growth that is clearly outside of planetary boundaries. Designing from “scarcity”, Fieke explained, can be a tool to focus on governance and need-based priorities instead of growth. Using the scarcity approach in workshops, she saw participants do things like categorise and prioritise AI uses (some may be worth the damage more than others) and bring public interest into the discussion (in a power outage, should we prioritise a hospital or a data centre for the Metaverse?). See also Fieke's recent Branch article.
A different pipeline
Berhan Taye proposed alternatives to the current extractive pipeline (see also the AI Commons paper (pdf)): smaller models with smaller sets of training data, infrastructure that is shared instead of privatised, more community oversight and careful analysis of “should this AI even exist?”
More democracy
Kars Alfrink talked to us about how to get closer to “technological self-determination”: giving communities more ownership over AI infrastructure, rather than leaving all control to Big Tech. He proposes three shifts to that end: from specific applications to infrastructures (design should make invisible AI infrastructure more visible), from individuals to collectives, and from idealism to realism (starting from actual power relations instead of abstract ethics: who does what to whom, for whose benefit?).
Better input to regulation
Mateus Correia de Carvalho, a lawyer, discussed the promise of EU AI regulation, then critiqued it and proposed how to rethink it. The promise is that harms and negative effects are included in regulation (and sometimes prohibited, as in Article 5 of the AI Act). And they are, but, Mateus explained, only a narrow subset of European society can contribute, and regulators use civil society organisations as evidence-providers rather than engaging in genuine open dialogue. Some participation is performative, he said. To make this better, Mateus argued we need to rethink AI governance along three lines: redistribution of material resources, recognition of articulated concerns and visions, and representation of more communities.
Summing up
As my work involves considering AI governance and its relationship to standards, and I have a keen interest in (web) sustainability, this symposium piqued my interest. I am actively searching for similar events (recommendations welcomed!), and will definitely try to make it to the next one from this group.