Dear Mozilla, I don't want an “AI kill switch”, I want a more responsible approach for all

My concern is that Mozilla is too excited about a technology that has inherent downsides and ethical problems, and I would like to see better defaults and clearer risk mitigations.

In this post, I want to offer one more perspective on the recent backlash about AI functionality in Firefox, because I like Mozilla (full disclosure: I used to work there) and, despite everything, I still have some belief in their ability to do things differently. I'd rather be too naive than too cynical.

Vegetarians and vegans like having the option to skip meat in restaurants, but we'll tell our friends about the ones that genuinely make an effort to center plants in the menu.

The issue isn't the switch or the features

When Mozilla announced Firefox would become an “AI browser”, there was wide backlash. I'll admit I am already on an AI-less fork, and sighed audibly when I read the announcement, because of the things I associate with “AI” (see next section).

But reading the accompanying blog posts, the features themselves strike me as fairly sensible. And they are actually identifiably “the Mozilla way”, centering human experience.

Automatic alt text generation seems useful in cases where authors didn't provide it, and comes with disclosure so users know when it happened. Page translation is helpful, and has the option to translate just part of a page (more sustainable), and even automated tab groupings and group names seem like a useful way to use AI. I'm less convinced by chat and summarisation, but maybe I'm not the audience; they are likely useful to others.

And still, I welcome an “AI kill switch”, a way to turn it all off (despite the name of this post). I think it's going to make it easier for the fork I use to stay in sync. However, I think there's more to the backlash than people personally wanting to turn it off.

The larger issue is this: AI functionality is forced on people, while it undeniably does a lot of harm (see below) and doesn't always work as advertised. I'm aware others do it too: see also Google's on-by-default AI summary on the search results page, Meta's impossible-to-turn-off AI button in WhatsApp and Microsoft's eager Copilot in pretty much every product they make.

On top of that, “AI” is associated with very questionable practices, such that many of us struggle to trust companies who embrace it excitedly.

Why trust in “AI” is (justifiably) low

There exists genuinely useful generative AI; plenty of people have written about their experiences with it.

Yet, low trust in AI is justifiable. Low trust does not indicate someone misunderstands the technology or holds it wrong. It's not “haters gonna hate” either. Ethical AI is still somewhat of an oxymoron; a lot of that is inherent to the technology.

AI vendors:

  • have stolen work from artists to train LLMs (from literature to music to illustration), and laughed it off; think of Altman using a Studio Ghibli rip-off as his profile picture (and “haha, servers are melting”), and posting “her” after ripping off Scarlett Johansson's voice when she refused to work with them.
  • tried to make tools that embrace specific ideologies, including far-right ones, like the “anti-woke” MechaHitler we've probably all seen.
  • refuse to share verifiable data about the energy use of their operations with researchers and the concerned public. There's no way they don't have that data, as it's directly associated with business costs; they choose not to make it available.

All three suck (pardon my French). You could call the specific actors bad apples, but this behaviour is very widespread.

AI products have:

  • made an SEO-optimised web even harder to browse due to easily produced synthetic content.
  • shifted product development away from other functionality. Tech companies have made AI part of performance reviews, and it shows directly in the products, with AI functionality nobody ever asked for.
  • normalised bias-increasing business practices, like auto-scanning resumes, automated minute taking and more.
  • contributed to the devaluation of human critical thinking. This is very subtle, but the effects it could have really worry me.

Again, not all AI-branded products are bad, and Mozilla has delivered great, forward-thinking AI projects in the past (like Common Voice). Yet, I don't think it's hard to see why people have negative sentiments.

Of course, none of this is Mozilla's fault; other tech companies (including browser makers) have jumped on the bandwagon faster and less cautiously. It just hurts more with Mozilla, which, with its foundation's manifesto, has been one of the last bastions of centering the common good.

Harms that “AI by default” could propagate

I'm sure many at Mozilla are aware of the possible harms of AI use. The Foundation has funded lots of work in this direction over many years, long before the hype.

For others, these are some effects that could still occur even if I personally used the AI kill switch:

  • fellow citizens whose radicalisation is amplified by biased AI summaries and propaganda poisoning.
  • a generation of students that cannot take in long-form information unaided, as they question and summarise everything with AI.
  • an energy grid that prioritises data centres (including those for AI) over arguably more important things, like hospitals (this happened during last year's power outage in Spain, and it is what AI evangelists have recently lobbied the Dutch government for).

My hope for AI in Firefox

I think it's good that Mozilla experiments, even while I'm personally pretty sceptical. I could be convinced there are great AI uses that Mozilla could implement more responsibly and ethically than others, and that these would benefit users, even if they still harm non-users.

To me, responsible implementation would look something like this:

  • not “everything AI”: make lots of space for non-AI features, as most features users care about don't need AI; most of us don't live in Silicon Valley or own NVIDIA stock, and regular people don't ask for AI everywhere.
  • respect my agency: clearly let me opt in, and don't ever undo my opt-out or re-enable features, even if that means I'm missing a feature the team thinks is amazing; undoing user choices erodes trust.
  • respect societal effects: roll out carefully and with safeguards, and communicate about those safeguards if they involve things users need to do.
  • honestly tell users about the risks.
  • prioritise sustainability; if that affects quality, give users the choice to be less sustainable, but offer sustainable defaults.
  • recognise potential harms, including indirect ones, and try to mitigate them where possible.
  • avoid hype: we already hear plenty of excitement and hype from other technology companies; going hypeless is a market differentiator at this point.

A more responsible implementation would make it easier for me to recommend Firefox to friends and to organisations I work for; a less responsible one would make that really challenging.

More harm mitigation, please

I understand the backlash Mozilla faced after announcing more AI in the browser; scepticism about the product direction is warranted for many reasons. At the same time, I see how some of the features could be useful, and that other big tech companies are less responsible about it. Mozilla tries to be responsible, and my hope is for more of that; it's the reason I and many others choose Mozilla products.

I hope Mozilla succeeds in their aim to do AI “right”, adhering to the manifesto, and inspires others to do the same. It's very much needed too, as at this rate our industry is on its way to beating big tobacco and big oil at breaking things.
