DeepMind Co-Founder Predicts AGI by 2028 and Says Humanity Is at Risk

Artificial General Intelligence (AGI) is often described as a game-changer for humanity—a technology capable of thinking, learning, and solving problems like humans. It promises breakthroughs in medicine, climate solutions, and beyond. But alongside these possibilities lie dire warnings about the risks it poses, including the potential to threaten humanity’s very existence.

Shane Legg, co-founder of DeepMind, is one of the experts raising the alarm. He predicts that AGI could arrive sooner than most people think and warns it carries a significant chance of catastrophic consequences. Just like Elon Musk, another vocal critic of AI, Legg highlights the dangers of unregulated development and the potential for devastating outcomes.

In this article, we’ll dive into these predictions, explore why experts are sounding the alarm, and unpack the unsettling probabilities of AGI leading to human extinction.

How Close Are We to Human-Level AI?

Shane Legg has been predicting the arrival of human-level AI for over a decade. Back in 2011, he estimated the chances of AGI arriving by certain dates:

  • 10% chance by 2018.
  • 50% chance by 2028.
  • 90% chance by 2050.

Fast forward to today, and Legg still believes there’s a 50% chance of AGI by 2028. That’s just three years away. This isn’t a far-off future—experts like Legg believe we are rapidly approaching a turning point where machines could think and learn on their own.

AGI isn’t just another step in technology. It’s a leap forward, with the potential to change everything, from medicine to climate change. But with such power comes risk.

The Risk of Human Extinction

Legg’s predictions don’t stop at when AGI might arrive. He also warns about the dangers it could bring, estimating a 5-50% chance that AGI could lead to human extinction within a year of its creation. This aligns with what some AI experts call P(doom), or the probability that AI development could end in catastrophe. While the exact numbers vary, they all paint a worrying picture of humanity gambling with its future.

Why such a grim outlook? Once AGI surpasses human intelligence, it could take actions we don’t understand or control. Unlike current AI, which follows strict programming, AGI could learn, adapt, and make decisions in ways we can’t predict.

Other experts share similar concerns. Geoffrey Hinton, often called the “Godfather of AI,” believes there’s a 10-20% chance AI could wipe out humanity within the next 30 years. Meanwhile, Elon Musk has repeatedly warned about the dangers of AI, estimating a similar 10-20% risk of existential disaster. He has even called it humanity’s greatest risk, comparing it to “summoning the demon.”

A 2023 survey of 2,778 AI researchers found that, on average, they estimate a 16% chance AI could cause human extinction. These aren’t random guesses—they come from people deeply involved in creating and studying AI systems.

The Future Is in Tech CEOs’ Hands

One of the biggest concerns is who decides how AGI is developed. Right now, those decisions are mostly in the hands of a small group of tech CEOs. These leaders are racing to develop more advanced AI systems, often without broad input or oversight.

Stuart Russell, a well-known AI expert, has compared this to playing Russian roulette with humanity’s future. He argues that building superhuman AI is too important to be decided by a few people, especially when their main motivation might be profit or competition.

“This isn’t just another technology,” Russell says. “This could be the biggest event in human history.”

Instead, he believes decisions about AGI should involve everyone, not just a handful of CEOs. After all, if AGI affects the entire human race, shouldn’t we all have a say in how it’s developed?

How Do Regular People Feel?

It’s not just scientists and tech leaders who are worried about AGI. Surveys show that many regular people are uneasy about how fast AI is advancing. In one poll, 72% of Americans said they want to stop or slow down AI development until we better understand its risks.

This growing public concern highlights an important point: the rush to create powerful AI isn’t just happening in labs. It’s something that could impact all of us, and people are starting to demand a say in how it’s handled.

Can We Manage the Risks?

Some researchers are working on ways to evaluate and manage AGI as it develops. A recent paper introduced a framework for assessing AGI’s capabilities and risks. It includes principles like testing AI in real-world scenarios and focusing on its potential dangers, not just its current applications.

This framework is a good start, but frameworks alone won’t be enough. Companies, governments, and researchers need to agree on safety standards—and enforce them. Otherwise, the risks could outweigh the rewards.

Why This Matters Now

Shane Legg and others have made it clear: AGI is closer than we think, and its risks are too big to ignore. While AGI could solve huge problems, like curing diseases or reversing climate change, it also has the potential to cause problems we can’t fix. A single misstep could lead to consequences that impact not just us, but every generation that follows.

The future of AGI depends on the decisions we make today. Will we approach it with the caution it demands, creating frameworks for safety and oversight? Or will we rush ahead, driven by competition and profit, hoping we can control something we might not fully understand? It’s a critical crossroads, and the choices made now could define the next chapter of human history.

The clock is ticking, and the stakes couldn’t be higher. The world’s brightest minds are divided, and the window for collective action is shrinking. 

So the question is: Are we ready for AGI, or are we gambling with humanity’s future?
