On September 22, 2025, NVIDIA and OpenAI made headlines with a bold, unprecedented move: a strategic partnership involving up to $100 billion in planned investment and a commitment to deploy 10 gigawatts of AI data centers. The deal is a long-term, structured alliance between the world’s top chipmaker and the AI lab behind ChatGPT.
The partnership goes far beyond buying chips. NVIDIA will invest in OpenAI progressively as new data centers go live, while OpenAI pays NVIDIA in cash for millions of GPUs. The two companies will also co-design future hardware and software, effectively locking in NVIDIA as OpenAI’s preferred supplier and strategic partner for building the next generation of AI, including systems aimed at artificial general intelligence (AGI).
We're proud to announce a landmark partnership with @OpenAI to build new gigascale AI factories using millions of NVIDIA GPUs. 🤝
This partnership will supply 10 gigawatts of GPUs to fuel @OpenAI's data center growth. pic.twitter.com/CYEB2PdfWY
— NVIDIA (@nvidia) September 22, 2025
Let’s take a look at how the deal works, what it means for NVIDIA, OpenAI, and the entire AI industry, and why 10 gigawatts of compute power could change everything.
What exactly is in the deal?
The partnership between NVIDIA and OpenAI is centered on a bold infrastructure goal: building out at least 10 gigawatts of AI data centers powered by NVIDIA systems. This is an enormous amount of computing power, enough to support the training and deployment of next-generation AI models, including successors to ChatGPT.
To make that happen, NVIDIA has committed up to $100 billion, to be invested progressively as each phase of infrastructure comes online. The first stage of the rollout, 1 gigawatt of systems, is scheduled to go live in the second half of 2026. This initial deployment will use NVIDIA’s new Vera Rubin platform, an upgrade expected to significantly boost training speed and inference performance thanks to advances like HBM4 memory.
It’s important to understand the financial structure of this partnership. OpenAI will be purchasing NVIDIA hardware directly, paying in cash for the systems it needs. At the same time, NVIDIA will invest in OpenAI by taking non-controlling equity stakes — meaning it won’t have decision-making power, but will be financially tied to OpenAI’s long-term growth.
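For intuition, here is a minimal sketch, in Python, of how milestone-gated funding like this could work, assuming purely for illustration that the commitment is released in even $10 billion tranches per gigawatt deployed; neither company has published an exact disbursement schedule.

```python
# Toy model of milestone-gated investment: funds unlock as capacity deploys.
# The even $10B-per-gigawatt tranche is an assumption for illustration,
# not a disclosed schedule.

TOTAL_COMMITMENT_B = 100  # NVIDIA's stated commitment, in billions of USD
TOTAL_CAPACITY_GW = 10    # planned build-out, in gigawatts

TRANCHE_PER_GW_B = TOTAL_COMMITMENT_B / TOTAL_CAPACITY_GW  # 10.0 ($B per GW)

def funds_released(gw_deployed: float) -> float:
    """Investment unlocked (in $B) once `gw_deployed` gigawatts are online."""
    gw = min(max(gw_deployed, 0.0), TOTAL_CAPACITY_GW)
    return gw * TRANCHE_PER_GW_B

# The first 1 GW stage, slated for H2 2026, would unlock ~$10B in this model.
print(f"After 1 GW:  ${funds_released(1):.0f}B released")
print(f"After 10 GW: ${funds_released(10):.0f}B released")
```

The property the sketch captures is that capital and capacity are coupled: if deployment stalls, so does the remaining funding, which is exactly the risk for OpenAI flagged later in this piece.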
This deal doesn’t exist in a vacuum. OpenAI is already closely linked with other major players like Microsoft, Oracle, and SoftBank. This partnership with NVIDIA adds another layer to OpenAI’s expanding global infrastructure network. But the 10-gigawatt figure alone puts this deal in a league of its own. It signals OpenAI’s ambition to scale fast and far — and NVIDIA’s willingness to back that vision with both hardware and capital.
10 Gigawatts of Compute Power
Ten gigawatts is a huge number — but what does it mean in practical terms?
To put it simply, one gigawatt is roughly the output of a full-sized nuclear power reactor. So when OpenAI and NVIDIA say they’re deploying ten gigawatts of compute infrastructure, they’re talking about building data centers that together will draw as much power as ten nuclear plants. That’s enough electricity to power over 8 million U.S. homes.
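As a rough sanity check on that comparison, here is the back-of-envelope math, assuming an average U.S. household uses about 10,800 kWh per year (an EIA-style ballpark; actual usage varies widely by region):

```python
# Back-of-envelope: how many average U.S. homes does 10 GW correspond to?
# The ~10,800 kWh/year household figure is an assumed national average.

datacenter_draw_w = 10e9        # 10 GW of continuous power draw
kwh_per_home_per_year = 10_800  # assumed average household consumption
hours_per_year = 8_760

avg_home_draw_w = kwh_per_home_per_year * 1_000 / hours_per_year  # ~1,233 W
homes_equivalent = datacenter_draw_w / avg_home_draw_w

print(f"Average home draw: {avg_home_draw_w:.0f} W")
print(f"10 GW ≈ {homes_equivalent / 1e6:.1f} million homes")  # ~8.1 million
```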
These won’t be ordinary data centers. They’ll be AI factories, filled with millions of high-end NVIDIA GPUs built on the company’s newest hardware, the Vera Rubin platform. A single Vera Rubin NVL144 rack is expected to deliver up to 3.6 exaflops of inference compute, and the top-tier Rubin Ultra configuration targets 5 exaflops for training, backed by hundreds of terabytes of ultra-fast HBM4e memory. That’s exascale computing: the same scale used in supercomputers designed for weather modeling, physics simulations, and now large-scale AI.
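Where does “millions of GPUs” come from? A crude estimate: if each GPU accounts for roughly 1.5 to 2 kW of total facility power once cooling, networking, and host systems are included (an assumed range; no per-GPU figure has been published for this build-out), the 10 GW power budget alone implies a count in the millions.

```python
# Crude estimate: GPU count implied by a 10 GW facility power budget.
# The 1.5-2 kW of facility power per GPU (chip + cooling + networking +
# host overhead) is an assumption, not a published figure.

total_power_w = 10e9

for per_gpu_w in (1_500, 2_000):
    gpus = total_power_w / per_gpu_w
    print(f"At {per_gpu_w / 1_000:.1f} kW per GPU: ~{gpus / 1e6:.1f} million GPUs")
# Prints ~6.7M and ~5.0M, consistent with the "millions of GPUs" framing.
```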
This kind of compute is essential for training future versions of models like ChatGPT, as well as more advanced systems that can see, hear, code, and reason. It also opens the door for OpenAI to move closer toward its long-term goal of building artificial general intelligence (AGI) — systems that can perform a wide range of tasks at human or superhuman levels.
The Future of AI
This partnership doesn’t just signal more money flowing into AI — it sets the stage for what’s coming next in the industry. The most immediate outcome is more headroom for bigger models. With access to 10 gigawatts of compute, OpenAI will be able to train and deploy much larger systems: models with longer context windows, better memory, and more advanced reasoning or tool-using capabilities. As OpenAI’s user base climbs past 700 million weekly active users, scaling serving infrastructure becomes just as critical as training.
At the same time, this deal tightens the connection between hardware and software design. OpenAI and NVIDIA have committed to co-optimizing their roadmaps — meaning future models will likely be designed with specific hardware in mind. From chip architecture and training frameworks to networking and compiler stacks, we’ll see tighter integration that squeezes out more performance per watt and shortens the time it takes to bring new capabilities online.
Facilities at 10-gigawatt scale will require massive power supplies, cooling systems, renewable energy contracts, and favorable local policies. Expect growing regulatory scrutiny on everything from environmental sustainability to market dominance and fair competition. The deal also signals a shift into what many are now calling the Age of AI Factories: an era in which compute, not data or algorithms, becomes the main bottleneck. As NVIDIA CEO Jensen Huang put it:
“This investment and infrastructure partnership marks the next leap forward — deploying 10 gigawatts to power the next era of intelligence.”
And for OpenAI, this isn’t just about capacity — it’s about ambition. Sam Altman has made it clear:
“Compute infrastructure will be the basis for the economy of the future.”
That future may include artificial general intelligence (AGI) — and this partnership gives OpenAI the infrastructure to pursue it more aggressively. With NVIDIA’s chips and capital, OpenAI is positioned to train larger multimodal models, potentially capable of human-level reasoning or better in some domains.
We’re already seeing early signs of this shift. Just days before the announcement, OpenAI CEO Sam Altman tweeted that the company is preparing to launch several “compute-intensive offerings” over the coming weeks. Some of these features will be limited to Pro subscribers or come with additional fees — not because of artificial paywalls, but because of the real costs of running today’s large models at scale.
Over the next few weeks, we are launching some new compute-intensive offerings. Because of the associated costs, some features will initially only be available to Pro subscribers, and some new products will have additional fees. Our intention remains to drive the cost of…
— Sam Altman (@sama) September 21, 2025
This speaks directly to the need for expanded infrastructure. OpenAI doesn’t just want to scale up; it wants to experiment freely, without being constrained by GPU limits or cost ceilings. The partnership with NVIDIA gives it a path to do that and opens the door to pricing experiments, premium features, and entirely new AI products.
So what exactly does it mean for each side of the deal?
NVIDIA
For NVIDIA, this deal reinforces its position as the default compute supplier for AI’s biggest players. It secures long-term chip demand, gives NVIDIA a financial stake in the growth of the ecosystem, and locks OpenAI into its hardware roadmap. With over 80% market share in AI chips, NVIDIA already dominates the space — and this deal pushes that dominance further.
Markets reacted instantly: NVIDIA stock jumped over 4%, adding more than $200 billion in market cap within hours of the announcement.
But that strength comes with regulatory risk. As NVIDIA deepens ties with OpenAI and Microsoft — already two of the most influential players in AI — expect antitrust scrutiny to intensify. Concerns around vertical integration and market concentration are already on the radar of U.S. and global regulators.
OpenAI
For OpenAI, the benefit is clear: it gets the compute capacity it needs to stay competitive in a space where training costs are rising exponentially. The deal helps solve a critical bottleneck — but not without risks.
Building multi-gigawatt data centers across several regions means navigating complex challenges: real estate, power procurement, cooling infrastructure, supply chain delays, and local regulations. And while this deal aligns well with OpenAI’s partnerships with Microsoft, Oracle, and SoftBank, it also deepens reliance on NVIDIA, which could backfire if pricing shifts or supply disruptions emerge.
Another hidden risk: the money isn’t delivered up front. NVIDIA’s $100 billion investment is conditional — funds are released only as each new gigawatt is deployed. If OpenAI hits delays or fails to scale fast enough, parts of the funding could stall.
Still, OpenAI’s valuation is already climbing. Analysts suggest it could reach $500 billion, driven by growth in enterprise usage and long-term bets on AGI.
Wider AI Ecosystem
For the broader AI ecosystem, this is a signal that the infrastructure arms race is accelerating. Competitors like AMD and Google’s TPUs remain relevant, but NVIDIA just made a massive move to dominate the next wave of AI compute.
Startups and smaller research labs may find themselves priced out of the high-end compute market, making it harder to compete at the frontier. Access to top-tier hardware is no longer just a technical issue — it’s becoming a financial one.
At the same time, the scale of this buildout introduces new pressure on public policy. A 10-gigawatt deployment raises real questions around energy consumption, carbon footprint, and resource allocation. Policymakers will need to grapple with how to balance innovation with sustainability — and how to ensure that AI progress remains accessible and beneficial to more than just a handful of powerful players.
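To put a number on the energy question: run continuously at full power, a 10-gigawatt fleet would draw on the order of 88 TWh per year, comparable to the annual electricity consumption of a mid-sized European country. The sketch below assumes 24/7 full-power operation, so it is an upper bound; real utilization fluctuates.

```python
# Annual energy implied by 10 GW of continuous draw. Assumes 24/7 operation
# at full power, which makes this an upper bound on real consumption.

power_gw = 10
hours_per_year = 8_760
annual_twh = power_gw * hours_per_year / 1_000  # GWh -> TWh

print(f"{annual_twh:.1f} TWh/year")  # 87.6 TWh/year
```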
Conclusion
The NVIDIA–OpenAI partnership marks a turning point. Not long ago, a $100 billion bet on compute would’ve sounded like a moonshot. Now it’s the baseline for staying competitive. The scale, speed, and ambition here reflect how far artificial intelligence has moved from research labs into the core of global industry.
This deal shows that raw computing power is becoming the most important resource in AI. Whoever controls it can shape the pace of innovation, define capabilities, and set the direction of the market. Chips are no longer just components — they’re strategic leverage.
But with this scale come hard questions. Powering AI at 10 gigawatts raises concerns around energy use, environmental cost, and global inequality in access to technology. Regulators, governments, and the public will need to catch up fast — because the infrastructure is already being built.
AI is now a global force. The decisions being made today, by companies like NVIDIA and OpenAI, will define how intelligence is developed, deployed, and distributed in the future. Whether that future feels empowering or extractive will depend on what happens next.