OpenAI’s next frontier model is no longer a distant rumor. Codenamed “Spud,” the model finished pretraining on March 24, 2026, as confirmed by Sam Altman himself. Greg Brockman described it as “two years of research” with a “big model feel.” The question is no longer if it’s coming, but when it drops and whether OpenAI will call it GPT-5.5 or GPT-6.
Prediction markets are pricing in a launch within weeks. Polymarket traders assign 78% probability of release by April 30 and over 95% by June 30, 2026. Early benchmark leaks suggest Spud could score in the high 70s on SWE-bench Pro, up from GPT-5.4’s 57.70%, which would put it in striking distance of Claude Mythos Preview. If you have been waiting for the next major leap in AI capability, the wait is almost over.
The Key Takeaways
- Pretraining Done: Sam Altman confirmed that OpenAI’s next frontier model (codenamed “Spud”) finished pretraining on March 24, 2026, and is now “a few weeks” from release.
- Naming Uncertain: OpenAI has not decided whether to brand it GPT-5.5 or GPT-6. The final name depends on how large the performance gap is compared to GPT-5.4.
- Unified Super App: Rumors point to a new product combining ChatGPT, Codex, and a new “Atlas” browser into a single app, potentially launching alongside the model.
- Hardware is Real: OpenAI has confirmed multi-billion dollar deals with AMD (6 GW) and Broadcom (10 GW) for next-generation AI infrastructure, with Spud trained at the Stargate facility in Abilene, Texas on over 100,000 H100 GPUs.
What We Officially Know About GPT-6
In the fast-moving world of artificial intelligence, it can be tough to separate solid facts from speculative hype. But with GPT-6 (or GPT-5.5, depending on the final branding), OpenAI has been less secretive than usual. Several key facts have been confirmed directly by the company’s leadership, giving us a much clearer picture than we had even a few weeks ago.
Here is a breakdown of what is actually confirmed as of April 2026:
- Pretraining Complete: Sam Altman confirmed on March 24, 2026 that pretraining for the next frontier model is finished. He described the timeline to release as “a few weeks” and called it “a model that could meaningfully accelerate the economy.”
- Two Years in the Making: Greg Brockman provided additional context, describing Spud as representing “two years of research” with a “big model feel with no incremental framing.” This signals OpenAI views this as a generational leap, not a minor update.
- A Promise of a Major Leap: In a widely cited interview with WIRED, OpenAI CEO Sam Altman made a direct promise: “GPT-6 will be significantly better than GPT-5… and GPT-7 will be significantly better than GPT-6.”
- Scientific Ambitions: Altman has also hinted that “by 6 or 7 we’ll see more meaningful scientific capability,” pointing to AI moving beyond conversation and content creation into complex problem-solving and research.
- Sora Sacrificed for Spud: OpenAI discontinued its Sora video generation tool to redirect GPU resources toward Spud’s training. The Sora team was reassigned to world simulation research for robotics applications.
These confirmed details paint a picture of a model that OpenAI internally views as a landmark release, not a routine upgrade. The infrastructure investment, the two-year development cycle, and the willingness to shut down a high-profile product all point to something genuinely significant.
ChatGPT 6 Will Have Long-Term Memory and Personalization
While official specs remain limited, the most persistent rumor surrounding GPT-6 features is a massive upgrade in its ability to remember. This isn’t just about recalling the last few lines of a conversation. We are talking about true long-term memory, a system that would allow the AI to retain key information about you, your preferences, and your projects across all your conversations, potentially for weeks or months.
The goal of this would be a revolutionary level of personalization. Imagine an AI that remembers you are a software developer who prefers Python, that you’re planning a vacation to Italy next spring, or that you prefer a formal tone in business emails. It would stop asking repetitive questions and start anticipating your needs, making every interaction feel uniquely tailored to you.
GPT-5.4 already introduced improved memory features, but GPT-6 is expected to take this significantly further. An unverified leak suggests a 2 million token context window, double that of the current model, which would allow the AI to process and retain vastly more information in a single session. Combined with persistent memory across sessions, this could fundamentally shift the user experience from interacting with a powerful but forgetful tool to collaborating with a true digital assistant.
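At the application layer, session-spanning memory is a straightforward pattern, whatever OpenAI's actual implementation looks like. The toy sketch below (every name here is hypothetical, not OpenAI's API) shows the basic idea: facts extracted from conversations are written to durable storage, then rendered back into the prompt of the next session.

```python
import json
from pathlib import Path

class UserMemory:
    """Toy persistent memory: facts about a user survive across sessions
    by being written to disk and prepended to future prompts."""

    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))  # persist immediately

    def as_context(self):
        # Rendered as a system-prompt preamble for the next session.
        return "\n".join(f"- {k}: {v}" for k, v in sorted(self.facts.items()))

# Demo: a fact stored in one "session" is available in the next.
Path("/tmp/demo_memory.json").unlink(missing_ok=True)  # start clean
memory = UserMemory("/tmp/demo_memory.json")
memory.remember("preferred_language", "Python")
memory.remember("travel_plans", "Italy, next spring")
print(memory.as_context())
```

The interesting engineering problems are upstream of this sketch: deciding which facts are worth keeping, resolving contradictions over time, and fitting the accumulated memory into the context window, which is where the rumored 2 million token limit would matter.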
Will ChatGPT 6 Be “Self-Learning”?
Perhaps the most groundbreaking of all the GPT-6 leaks is the theory that the next model will be capable of continuous learning. This idea, which has spread rapidly online, is largely fueled by a real research paper from MIT titled Self-Adapting Language Models (SEAL).
The concept suggests a radical departure from how current AIs work. Instead of being a static tool that is trained and then released, a self-adapting model could theoretically learn and improve on its own after deployment, getting progressively smarter through its daily interactions. So, what would this actually look like in practice?
The Core Ideas Behind These Self-Adapting LLMs
- Learning from Experience: The AI would be able to analyze its own performance on tasks. When it generates a suboptimal or incorrect answer, it could identify the error and create its own training data to correct the mistake in the future.
- Making Permanent Changes: This is the most critical part. Through a process of generating self-edits, the model could use reinforcement learning to make persistent weight updates. In simple terms, this means it wouldn’t just learn for a single session. It would permanently modify its own internal neural network to embed the new knowledge.
- Adapting to New Information: A self-learning model could theoretically keep up with a changing world. If a new scientific discovery is made, it could integrate this new information without needing a full-scale retraining by its developers.
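The steps above can be made concrete with a toy Python sketch of the SEAL-style cycle. Everything in it is a stand-in: the “model” is a lookup table and the “fine-tune” step is a table write, standing in for a real gradient-based weight update.

```python
def self_adapting_loop(model, tasks, evaluate, fine_tune):
    """Toy sketch of a SEAL-style cycle: the model critiques its own
    failures, writes corrective training data ("self-edits"), and
    commits them as persistent updates."""
    for task in tasks:
        answer = model(task)
        if evaluate(task, answer):
            continue                          # answer was fine; nothing to learn
        # Generate a corrective training example from the failure.
        self_edit = {"prompt": task, "target": model(f"Fix this answer: {answer}")}
        # Persist the correction beyond this session (the "weight update").
        model = fine_tune(model, [self_edit])
    return model

# Stand-ins: the "model" is a lookup table, "fine-tuning" is a table write.
knowledge = {"2+2": "5"}                      # starts out wrong

def model(prompt):
    if prompt.startswith("Fix this answer:"):
        return "4"                            # oracle correction, demo only
    return knowledge.get(prompt, "unknown")

def evaluate(task, answer):
    return answer == "4"

def fine_tune(m, edits):
    for e in edits:
        knowledge[e["prompt"]] = e["target"]
    return m

self_adapting_loop(model, ["2+2"], evaluate, fine_tune)
print(model("2+2"))  # the wrong fact has been permanently overwritten
```

The hard part in a real system is exactly what this demo waves away: producing a trustworthy correction without an oracle, and updating weights without degrading unrelated capabilities.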
It is crucial to understand that OpenAI has never confirmed a link between the SEAL paper and GPT-6. The connection is pure speculation, largely driven by the fact that one of the paper’s authors now works at OpenAI. Still, OpenAI employees have hinted at “a capability that is very different from what we’ve seen before” in Spud, which has only added fuel to this theory.
GPT-6 Agents and Agentic AI
Building on improved memory and reasoning, many experts believe GPT-6 will significantly advance the field of agentic AI. GPT-5.4 already introduced computer use capabilities, scoring 75% on the OSWorld benchmark for desktop tasks. Spud is expected to push this much further, with the model potentially serving as the backbone for a unified product suite integrating ChatGPT, Codex, and autonomous agent workflows.
Imagine telling your AI to “find the best flights for a business trip and book the one with the best balance of cost and convenience.” With GPT-5.4 already scoring 75% on OSWorld’s desktop tasks, GPT-6 could make this level of autonomous, reliable digital assistance feel routine rather than experimental. The enhanced power of GPT-6 agents paired with a rumored “Atlas” browser could finally deliver truly autonomous web-based task completion.
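Under the hood, most agentic systems, whatever model powers them, reduce to a plan-act-observe loop. The sketch below illustrates that generic loop with toy stand-ins for the flight example; it is not anything OpenAI has published.

```python
def agent_loop(goal, plan, act, observe, done, max_steps=10):
    """Generic plan-act-observe loop behind most agentic AI systems."""
    history = []
    for _ in range(max_steps):
        action = plan(goal, history)          # model chooses the next step
        result = act(action)                  # environment executes it (click, API call, ...)
        history.append((action, observe(result)))
        if done(goal, history):               # goal reached, stop early
            break
    return history

# Toy stand-ins for the flight-booking example.
def plan(goal, history):
    return "search_flights" if not history else "book_cheapest"

def act(action):
    fares = {"search_flights": ["$420", "$310"], "book_cheapest": "booked $310"}
    return fares[action]

history = agent_loop(
    goal="book a business trip flight",
    plan=plan,
    act=act,
    observe=lambda result: result,
    done=lambda goal, h: h[-1][0] == "book_cheapest",
)
print(history[-1][1])  # → booked $310
```

What separates a demo like this from a product is reliability: a real agent must recover from failed actions, verify outcomes before committing (e.g. before spending money), and know when to hand control back to the user.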
The Massive AI Infrastructure for GPT-6, 7 & 8
While the software features of GPT-6 remain partly speculative, the hardware being built to power it is very real. Spud was trained at OpenAI’s Stargate facility in Abilene, Texas, using over 100,000 H100 GPUs. But the infrastructure story doesn’t end there. OpenAI has confirmed massive partnerships to create next-generation capacity for the models that come after Spud.
The OpenAI AMD Deal
One of the cornerstones of this infrastructure is a major partnership with chipmaker AMD. OpenAI has signed a deal to acquire a massive fleet of new GPUs, committing to an enormous 6 GW of computing power. The rollout for these new AMD Instinct GPUs is confirmed to begin in the second half of 2026.
Custom AI Chips with Broadcom
Going beyond standard hardware, OpenAI is also co-developing custom AI chips with Broadcom. This long-term partnership will create highly specialized data center accelerators designed specifically for OpenAI’s unique workloads. The scale of this project is even larger than the AMD deal, targeting an eventual 10 GW of accelerator capacity, with deployments scheduled to run from the second half of 2026 through the end of 2029.
The Timeline for the GPT-6 Release Date
The GPT-6 release date picture has become much clearer since pretraining finished on March 24, 2026. While OpenAI still hasn’t announced a specific date, the evidence is converging on an imminent launch.
Here’s what we know about the timing:
- Altman’s Own Timeline: On the same day pretraining completed, Altman described the launch as “a few weeks” away. That puts the earliest window at mid-April 2026.
- Prediction Markets: Polymarket traders put the odds of a release by April 30 at 78%, rising above 95% by June 30, 2026. The market has priced in a spring launch with high confidence.
- Unverified Leak: An anonymous source claimed April 14-16, 2026 as the launch date, alongside a unified “super app” combining ChatGPT, Codex, and a new “Atlas” browser. The source provides no credentials and the claims remain unverified.
- Naming Decision Pending: OpenAI has not decided whether to brand it GPT-5.5 or GPT-6. If benchmark scores land in the high 70s on SWE-bench Pro (up from GPT-5.4’s 57.70%), the GPT-6 name is more likely. A smaller gap would favor GPT-5.5.
- Expected Rollout Order: Based on previous launches, expect ChatGPT Plus and Pro subscribers first, followed by the free tier (2-4 weeks later), then Enterprise API access (2-4 weeks after that).
The signals are all pointing toward an April or May reveal. OpenAI is also retiring GPT-5.2-Codex and GPT-5.1-Codex-Mini from service around April 14, which many interpret as clearing the deck for the new model.
🚨 BREAKING: GPT-6 is likely arriving before the end of this year and it’s built on the same reasoning core that just won gold medals at the International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI) without tools or internet, pure reasoning.
— VraserX e/acc (@VraserX) October 17, 2025
The Bottom Line
The picture around GPT-6 has shifted dramatically in the last few weeks. What was once a collection of vague promises and fan theories now has a concrete anchor: pretraining is done, the model is in safety evaluation, and Sam Altman himself says it’s weeks away. The infrastructure at Stargate is built. The prediction markets are pricing in an imminent launch. And the early benchmark leaks, if accurate, suggest a model that closes the gap with Claude Mythos Preview and redefines what a ChatGPT model can do.
The biggest remaining unknowns are the final name (GPT-5.5 or GPT-6), whether the unified “super app” with Atlas browser launches alongside it, and how the rumored self-learning capabilities will manifest in practice. For now, the smartest move is to keep watching OpenAI’s official channels and be ready to test it the moment it drops.