What Is P(Doom) and How Likely Is AI to End Humanity?

If you’ve been following the rise of AI, you’ve probably stumbled upon the term “P(doom).” At first glance, it sounds like a cryptic joke or the title of a dystopian sci-fi movie. But this concept is no laughing matter—it’s a metric that quantifies the probability of catastrophic outcomes from artificial intelligence, particularly the existential risks associated with artificial general intelligence (AGI).

From casual banter among researchers to a focal point of global debate, P(doom) has sparked heated conversations about humanity’s future. So, what exactly is P(doom), why should you care, and what are the experts saying? Buckle up, because this rabbit hole goes deep.

What Is P(Doom)?

In simple terms, P(doom) represents the probability of “doom”—catastrophic, civilization-ending scenarios brought about by AI advancements. While it originated as an inside joke among AI researchers, it quickly gained traction as a serious metric in discussions about the risks of unaligned, uncontrolled, or poorly designed AI systems.

The “doom” in question can refer to anything from AI causing human extinction to AI systems locking humanity into irreversible dystopian scenarios.
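
There is no agreed-upon formula behind any particular estimate. One informal way to see where a number might come from, shown here purely as an illustration rather than a standard definition, is to multiply rough conditional estimates:

```latex
P(\text{doom}) \approx P(\text{AGI is built})
  \times P(\text{it ends up misaligned} \mid \text{AGI})
  \times P(\text{catastrophe} \mid \text{misaligned AGI})
```

Under this framing, even modest per-step probabilities can compound into a non-trivial overall number, which is part of why published figures vary so widely.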

Why Does P(Doom) Matter?

  • The Stakes Are Global: Advanced AI systems hold the promise of revolutionizing healthcare, climate solutions, and education. Left unchecked, they could also pose risks of comparable magnitude.
  • Existential Risk: Unlike nuclear weapons or climate change, which humanity can potentially mitigate over decades, runaway AI systems could spiral out of control in a matter of months or years, leaving little room for error.
  • Alignment Problem: The challenge of ensuring that AI systems act in ways aligned with human values is notoriously difficult. If we fail, the results could range from devastating to outright apocalyptic.

What Do the Experts Say?

The debate over P(doom) has attracted opinions from some of the brightest minds in AI, philosophy, and technology. Some see catastrophic risks as imminent and probable; others dismiss these fears as overblown and speculative. Here’s a closer look at where prominent voices land, grouped roughly by how worried they are.

Optimistic Skeptics

Some experts argue that fears of catastrophic AI risk are exaggerated, emphasizing either how far we remain from AGI or the need to keep hypothetical harms in perspective relative to present-day problems.

Name | P(Doom) | Comment
Yann LeCun | <0.01% | As Meta’s Chief AI Scientist, LeCun has consistently downplayed existential risks, comparing AI-induced doom to the likelihood of an asteroid wiping us out. He believes the focus should be on mitigating tangible issues like AI bias and misuse rather than hypothetical extinction scenarios.
Grady Booch | ~0% | A renowned software engineer, Booch dismisses P(doom) concerns as alarmist, arguing that the likelihood of AI causing catastrophic outcomes is infinitesimally small, comparable to “oxygen moving to a corner of the room and suffocating me.”
Marc Andreessen | 0% | The venture capitalist and tech optimist sees AI as a tool for empowerment rather than destruction, and dismisses concerns about an AI apocalypse as unfounded fear-mongering.

Cautious Realists

Many experts place the odds of AI-induced catastrophe somewhere in the middle. They acknowledge the risks but argue they can be mitigated with proper oversight, alignment, and regulation.

Name | P(Doom) | Comment
Elon Musk | 10–20% | Musk has repeatedly referred to AI as “the biggest existential threat to humanity.” However, he simultaneously invests heavily in AI through ventures like Tesla and xAI, underscoring his belief in balancing innovation with caution.
Geoffrey Hinton | 10–50% | Often called the “Godfather of AI,” Hinton resigned from Google in 2023 to advocate for stricter AI regulations. He warns that without intervention, AI systems could cause catastrophic outcomes within 30 years.
Vitalik Buterin | 10% | Ethereum’s co-founder believes in the potential for catastrophic AI misuse but emphasizes the need for decentralized AI systems to reduce the risks of concentrated control.
Toby Ord | 10% | Author of The Precipice, Ord estimates a 1-in-6 chance of humanity suffering an existential catastrophe this century, with AI as one of the primary contributors. He argues for global cooperative efforts to mitigate these risks.

High-Warning Advocates

A significant number of researchers and technologists see the probability of AI doom as alarmingly high. Their warnings often highlight the difficulty of aligning advanced AI with human values and the unpredictability of AGI development.

Name | P(Doom) | Comment
Paul Christiano | 50% | A leading figure in AI safety, Christiano emphasizes the urgent need for better alignment techniques, arguing that misaligned AI systems pose a direct threat to humanity.
Dan Hendrycks | 80%+ | As Director of the Center for AI Safety, Hendrycks believes we are on a fast track to catastrophe unless immediate global regulatory measures are implemented.
Emmett Shear | 5–50% | Twitch’s co-founder and a prominent voice in tech, Shear views P(doom) as highly uncertain but potentially devastating, depending on how the next decade unfolds.
Jan Leike | 10–90% | An AI alignment researcher at Anthropic, Leike says the probability of doom depends on the regulatory environment and the pace of technological advancement.
Yoshua Bengio | 20% | A Turing Award winner, Bengio estimates a high likelihood of AI reaching human-level capabilities within a decade. He warns that both humans and AI systems could misuse this technology at scale, leading to disastrous consequences.
Roman Yampolskiy | 99.9% | The Latvian-born computer scientist is one of the most pessimistic voices in AI safety, comparing the inevitability of AI doom to the laws of thermodynamics.
Eliezer Yudkowsky | 95%+ | Yudkowsky, founder of the Machine Intelligence Research Institute, has been one of the loudest advocates for pausing AI development, claiming we are almost certainly doomed without drastic action.

The Origin of P(Doom)

The term gained mainstream attention in 2023 after the release of OpenAI’s GPT-4. High-profile figures like Geoffrey Hinton and Yoshua Bengio—often dubbed the “godfathers of AI”—began to voice their concerns. Hinton even quit his job at Google, stating he wanted to “freely speak out about the dangers of AI.”

That same year, a survey of AI researchers revealed stark predictions:

  • Mean P(doom): 14.4%
  • Median P(doom): 5%

The estimates varied widely, reflecting not just the uncertainty of future AI development but also the lack of consensus on what “doom” actually entails.
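
The gap between the mean and the median is itself telling: a small number of very pessimistic respondents pull the average up. A minimal sketch with made-up numbers (not the actual survey data) shows the effect:

```python
# Illustrative only: hypothetical P(doom) responses, as percentages.
# A few very high estimates drag the mean well above the median,
# which is roughly the pattern the survey reported.
from statistics import mean, median

responses = [0.1, 1, 2, 5, 5, 5, 10, 15, 30, 70, 90]

print(f"Mean P(doom):   {mean(responses):.1f}%")   # ~21% for this toy sample
print(f"Median P(doom): {median(responses):.1f}%") # 5% for this toy sample
```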

What Could “Doom” Look Like?

The term “doom” is intentionally broad. Here are a few scenarios often discussed under the umbrella of P(doom):

  1. AI Takeover: Advanced AI systems, if poorly aligned, could develop goals misaligned with human survival. This could lead to scenarios where AI systems pursue their objectives at humanity’s expense.
  2. Weaponization: AI could accelerate the development of autonomous weapons, biological pathogens, or cyberweapons capable of mass destruction.
  3. Value Lock-in: AI systems might inadvertently lock humanity into a dystopian set of values, eliminating the possibility of moral progress.
  4. Economic Collapse: Widespread automation without adequate social safeguards could destabilize economies, leading to mass unemployment, inequality, and unrest.

What Can We Do to Lower P(Doom)?

Addressing AI risks and lowering P(doom) requires global collaboration and robust safeguards. International frameworks, similar to nuclear treaties, are crucial to regulate AI development responsibly. The EU Artificial Intelligence Act is a solid starting point, but geopolitical tensions and competition make worldwide coordination challenging.

AI alignment is another key priority. Ensuring AI systems operate in line with human values requires advanced techniques like reward modeling and scalable oversight. Organizations like OpenAI and Anthropic are leading the charge, but alignment remains complex and far from solved.
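
To make “reward modeling” concrete, here is a minimal sketch of the pairwise-preference idea behind it, written in generic PyTorch with placeholder data and dimensions; it illustrates the technique in general, not any particular lab’s implementation:

```python
# Minimal sketch of a pairwise-preference reward model (Bradley-Terry style loss).
# The feature dimension and toy data below are placeholders.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)  # one scalar reward per example

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy "features" of a human-preferred and a rejected response to the same prompt.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

for _ in range(100):
    # Train the model so preferred responses score higher than rejected ones:
    # loss = -log sigmoid(r(chosen) - r(rejected))
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The learned score is then used to steer a larger system toward human-preferred behavior; doing that reliably at scale is where much of the unsolved difficulty lies.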

Public awareness and education are critical. Policymakers often lack the expertise to regulate AI effectively. Grassroots advocacy, media campaigns, and initiatives like the Future of Life Institute can bridge this gap and push for informed decision-making.

Some experts also advocate slowing down AI development temporarily. A proposed six-month pause on training advanced models could give time to implement stronger safeguards. However, resistance from competitive industries and nations complicates this approach.

Building safety mechanisms directly into AI systems offers additional protection. Solutions like kill switches, explainable AI, and fail-safe designs can prevent rogue behavior, but implementing them without creating vulnerabilities is a technical challenge.
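
As a toy illustration of the “kill switch” idea (a sketch only, using a hypothetical stop-flag file, not a real safety mechanism), an agent loop can check for an external override before every action:

```python
# Toy illustration of an external "kill switch" around an agent loop.
# The agent, its actions, and the stop-flag file are all hypothetical.
import os
import time

STOP_FLAG = "/tmp/agent_stop"  # an operator creates this file to halt the agent

def agent_step(step: int) -> None:
    # Placeholder for whatever the AI system would actually do each step.
    print(f"step {step}: taking some action")

def run_agent(max_steps: int = 100) -> None:
    for step in range(max_steps):
        if os.path.exists(STOP_FLAG):
            print("Stop flag detected; halting before the next action.")
            return
        agent_step(step)
        time.sleep(0.1)

run_agent()
```

The hard part, as noted above, is ensuring a capable system has no incentive or ability to disable or route around such a switch.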

Collaboration across tech, governments, and academia is vital. Partnerships like the Partnership on AI promote shared ethical standards and best practices, ensuring a balanced approach to innovation and safety.

Finally, decentralizing AI development reduces risks associated with concentrated power. Transparent, accountable systems, potentially using blockchain, can distribute AI’s influence more equitably, making misuse less likely. Together, these strategies offer a path to responsibly navigate AI’s risks.

The Final Word

The concept of P(doom) may have started as an inside joke, but it has evolved into a sobering lens through which we examine the future of artificial intelligence. The stakes couldn’t be higher—AI holds the potential to revolutionize industries, tackle global challenges, and improve countless lives. Yet, without responsible development, robust safeguards, and global cooperation, it also carries the risk of catastrophic outcomes that could reshape humanity forever.

Managing these risks isn’t just about reducing P(doom); it’s about ensuring AI becomes a force for good, not destruction. The path forward requires balancing innovation with caution, embracing collaboration over competition, and prioritizing safety as much as progress. AI is not inherently our doom—it’s a tool, and how we wield it will define its legacy.

As we face this pivotal moment in technological history, the question isn’t just whether P(doom) is real, but whether we’re prepared to take the necessary steps to lower it. Humanity’s future depends on the choices we make today. Let’s choose wisely.
