Safe Superintelligence (SSI), a newly founded AI company led by former OpenAI Chief Scientist Ilya Sutskever, has raised $1 billion in a record-setting funding round to accelerate its mission of developing safe artificial intelligence systems.
The financing comes just three months after SSI’s founding and values the company at $5 billion, according to sources close to the matter.
A Focus on Safe Superintelligence
The company plans to use the funds to acquire significant computing power and attract top-tier talent, focusing on foundational research in artificial intelligence safety. With a current team of just 10 employees, SSI aims to build a small, highly skilled, and trusted group of engineers and researchers.
The company will operate out of Palo Alto, California, and Tel Aviv, Israel, with an emphasis on AI systems that surpass human capabilities while remaining aligned with human values.
A major focus is on developing artificial general intelligence (AGI), a system capable of performing any intellectual task a human can, while ensuring it stays safe and aligned with human goals. Unlike narrow AI systems, AGI represents a far more powerful, versatile, and potentially risky technology, and SSI’s mission is to build it in a way that prioritizes safety.
Sutskever, an AI veteran and co-founder of OpenAI, previously co-led OpenAI’s Superalignment team, which worked to ensure AI systems remained aligned with human values. Following a leadership dispute at OpenAI, which briefly saw CEO Sam Altman ousted, Sutskever left the company in May 2024.
According to Sutskever, SSI’s approach will differ from OpenAI’s, taking a slower, more deliberate path to scaling AI systems with a core emphasis on safely advancing toward AGI.
Big Names Back SSI
The funding round attracted a list of high-profile investors, including venture capital giants Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment partnership co-led by Nat Friedman and SSI CEO Daniel Gross, also participated in the round.
This massive investment underscores the confidence these investors have in the talent behind SSI, despite a broader slowdown in venture capital funding for AI startups.
Gross emphasized the importance of having investors who fully support SSI’s mission to make a “straight shot to superintelligence,” focusing on years of research and development before bringing any products to market. He noted that the company is committed to an ethical, safety-first approach to AI, in stark contrast to the rapid product releases from other companies in the industry.
The AI Safety Debate
SSI’s focus on safety comes at a time when the AI industry is deeply divided on how to handle its growing power. A proposed bill in California, aimed at regulating AI safety, has split major players. OpenAI and Google have come out against it, while others, including Elon Musk’s xAI, support the move.
SSI’s leadership team seems intent on making safety the centerpiece of its mission, a decision that could differentiate it in a crowded field where companies often race to market without fully addressing the risks.
What’s Next for SSI?
Next on SSI’s agenda is securing partners for its computing needs, likely involving cloud providers and chipmakers. Many AI startups work with companies like Microsoft and Nvidia to handle the enormous computing power required for training advanced AI models.
While SSI hasn’t yet announced its partnerships, it’s clear the newly raised funds will go toward building the infrastructure needed for the long research phase ahead.