[Image: AI safety activists protesting outside an AI company's headquarters, holding signs reading "ETHICS OVER PROFIT" and "AI NEEDS REGULATION," with overlay text: "10 Days Without Food To Stop AGI?"]

Anti-AI Activists on Hunger Strike at Anthropic and DeepMind

Two anti-AI activists are currently more than 10 days into hunger strikes outside major AI company headquarters, refusing to eat until their demands about artificial intelligence development are heard. Guido Reichstadter has been camped outside Anthropic’s offices while Michael Trazzi protests at DeepMind, both arguing that the path toward artificial general intelligence poses existential threats to humanity. They say that AI companies are pushing toward dangerous territory that could harm families and children worldwide.

The protests represent a growing but still small movement of people who believe current AI development is reckless and potentially catastrophic. Reichstadter, a father of two who previously spent 28 hours protesting on a bridge in Washington DC, claims that companies like Anthropic are putting his children’s lives at risk through their research. Other activist groups like StopGenAI have emerged with related concerns, though they focus more on economic impacts such as job displacement than on apocalyptic scenarios.

The question many are asking is whether hunger strikes are an appropriate or effective way to address complex technological policy issues. While AI safety is a legitimate area of research and debate, the decision to refuse food over these concerns raises questions about proportionality and the best ways to influence how AI technology develops.

What’s Actually Happening

Guido Reichstadter has been refusing food outside Anthropic’s headquarters for over 10 days, while Michael Trazzi conducts a similar protest at DeepMind’s offices. Both men are part of activist groups called StopAI and “For Humanity” that believe artificial intelligence development has become reckless. Reichstadter, who has two children, claims that Anthropic’s research into artificial general intelligence puts his family’s lives at risk.

The protests follow a familiar pattern for Reichstadter, who previously spent 28 hours on top of the Frederick Douglass Memorial Bridge in Washington DC protesting the Supreme Court’s decision overturning Roe v. Wade. This time, his demands focus on getting AI companies to stop their research into AGI – artificial general intelligence systems that could potentially match or exceed human cognitive abilities across all domains.

Their central argument revolves around what they call an “AI race” between major tech companies. According to the activists, companies like Anthropic, OpenAI, and others are rushing toward increasingly powerful AI systems without adequate safety measures.

The protesters have tried to frame their actions as emergency responses to an emergency situation. In statements posted online, they argue that:

  • Current AI development is already causing societal harm
  • The race toward AGI threatens the lives and wellbeing of families worldwide
  • AI companies prioritize profits over safety considerations
  • Immediate action is needed to prevent catastrophic outcomes

The Activists’ Arguments and Broader AI Safety Concerns

Reichstadter argues that AI companies are “driving us to a point of no return” where advanced AI systems could potentially harm or eliminate human life.

The activists’ economic arguments focus on job displacement and what they see as deliberate elimination of human labor. StopGenAI’s Kim Crawley argues that tech billionaires want to “eliminate paying humans for our thinking labor” entirely. She points to statements from tech CEOs like Nvidia’s Jensen Huang and OpenAI’s Sam Altman who have expressed enthusiasm about AI replacing human jobs across various sectors.

The activists claim current AI systems are already causing measurable harm to society, though they tend to be vague about specific examples. Their argument suggests that problems with today’s AI – from misinformation to job losses – will only accelerate as systems become more powerful. They frame this as an emergency requiring immediate action rather than gradual policy responses.

These concerns do overlap with legitimate research areas in AI safety and ethics. Academic researchers, policy experts, and some technologists do study questions about:

  • How to ensure AI systems remain aligned with human values
  • Economic disruption from automation and potential policy responses
  • Risks from increasingly powerful AI capabilities
  • Governance frameworks for AI development

A Realistic View on the Situation

Looking at the actual AI releases this year, the gap between apocalyptic warnings and current reality is clear. GPT-5 faced significant user backlash for often falling short on relatively basic tasks, let alone posing an existential threat to humanity. Most recent AI updates have been incremental improvements – faster processing, broader knowledge bases, better benchmark scores – rather than the revolutionary breakthroughs that would justify emergency-level concerns.

The current model of AI development through scaling and training optimization has clear limitations that make the activists’ catastrophic scenarios seem unlikely without fundamental changes to how these systems are built.

While AI companies are indeed investing heavily in research – with Meta reportedly offering $100 million packages to top researchers – whether breakthrough approaches to AI development will emerge remains uncertain in both timeline and feasibility. We don’t know what’s happening inside these companies, and it’s possible they’ve made more progress than publicly known. However, based on observable outputs and expert assessments, claims that we’re facing an immediate emergency requiring hunger strikes appear overstated.

The choice of extreme protest methods like refusing food also seems disproportionate to what is fundamentally a complex policy and research challenge. While raising questions about AI safety and governance is legitimate and important, the framing of an imminent existential crisis requiring emergency intervention doesn’t align with the measured, research-based approaches that actually influence technology policy. There are established channels for AI safety research, regulatory input, and public advocacy that don’t involve putting one’s health at serious risk through prolonged food refusal.

Conclusion

The hunger strikes outside Anthropic and DeepMind represent a dramatic response to concerns about AI development, with activists claiming that artificial general intelligence (AGI) poses near-term existential threats. While AI safety and governance deserve serious attention, the activists’ framing of an emergency requiring extreme protest methods appears disproportionate to current AI capabilities, which show incremental improvements rather than breakthrough advances.

Recent AI releases have often struggled with relatively basic tasks, creating a significant gap between today’s systems and the world-ending scenarios described by protesters. The situation highlights the challenge of balancing genuine policy concerns with proportionate responses – hunger strikes involve serious health risks and may not effectively influence complex technology policy decisions.

Whether these protests will meaningfully impact AI company practices or policy discussions remains unclear. However, there are established channels for AI safety research and advocacy that don’t require putting one’s health at risk through prolonged food refusal.
