What Is Claude Dreaming? Anthropic’s New Agent Memory Feature Explained

Legal-AI company Harvey reported a roughly 6x increase in agent task-completion rates after switching on Claude Dreaming, a feature Anthropic launched on May 6, 2026. The same legal-drafting jobs that used to fail repeatedly, because Claude kept forgetting filetype quirks and tool-specific workarounds between sessions, suddenly started finishing reliably. That is the headline number the AI world has been talking about for the past week, and it is the first widely cited evidence that giving an AI agent the ability to “remember and reflect” between jobs changes what AI can do.

Claude Dreaming is one of three new features Anthropic shipped at Code with Claude 2026, alongside Outcomes and Multiagent Orchestration. The launch is part of a much bigger push that also included a SpaceX compute deal for 220,000 Nvidia GPUs and a quiet rollout of Claude inside Excel, Word, PowerPoint, and Outlook. Here is exactly what Claude Dreaming does, how it works, who can use it right now, and how it compares to the memory systems you already know from ChatGPT and Gemini.

The Key Takeaways

  • Claude Dreaming is a scheduled background process that reviews an AI agent’s past sessions and rewrites its memory store, removing duplicates, replacing stale entries, and surfacing new patterns.
  • Anthropic launched it on May 6, 2026 at Code with Claude as a research preview, alongside Outcomes and Multiagent Orchestration.
  • Harvey reported a ~6x jump in agent task-completion rates after enabling Dreaming for legal-drafting workflows.
  • Supported models are Claude Opus 4.7 and Claude Sonnet 4.6; up to 100 past sessions can feed a single dream.
  • Dreaming is billed at standard API token rates and access is gated behind a request form, so it is not yet in the Claude consumer app.

What Is Claude Dreaming?

Claude Dreaming is a scheduled background process from Anthropic that reads an AI agent’s past session transcripts and existing memory store. It then produces a brand-new, reorganized memory store that the agent can use going forward. Duplicates get merged. Contradicted or stale entries get replaced with the latest value. New patterns the agent missed during live sessions get pulled out and saved as fresh insights. The original memory store is never touched, so you can review the output and discard it if you do not like what Claude produced.

Anthropic frames the feature with a deliberate analogy to REM-sleep memory consolidation. A human brain replays the day’s events during sleep and decides what to keep, what to compress, and what to throw away. Dreaming is the same idea applied to an AI agent. Between jobs, Claude pauses, “dreams” over what just happened, and wakes up smarter. It is not a new model and it does not change Claude’s underlying weights. It is a maintenance layer that lives on top of an external memory store, so the agent gets better between sessions without retraining anything.

The launch was bundled into Anthropic’s Code with Claude developer conference in San Francisco on May 6, 2026, with follow-up events scheduled for London on May 19 and Tokyo on June 10. Dreaming is currently a research preview, available only to teams that request access through Anthropic’s form. The other two features announced the same day, Outcomes and Multiagent Orchestration, are already in public beta, per Anthropic’s official launch post.

How Claude Dreaming Works in 3 Steps

A dream is an asynchronous job that runs in three predictable phases. You give it an existing memory store and up to 100 past session transcripts, and Claude produces a new, curated store. The full pipeline takes minutes to tens of minutes depending on how much input you feed it.

1. Read

Claude reads two things: an existing memory store (the persistent record of everything the agent has learned so far) and a batch of past session transcripts, up to 100 per dream. You can also pass an instructions field of up to 4,096 characters that tells the dream what to focus on, for example “focus on coding-style preferences; ignore one-off debugging notes.”
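To make the limits concrete, here is a minimal sketch of how a dream request might be assembled before it is sent. The field names ("memory_store_id", "session_ids", "instructions") are assumptions, not the real API schema; only the caps (100 sessions per dream, 4,096 instruction characters) come from Anthropic's published specs.

```python
# Hypothetical payload builder for a dream request. Field names are
# assumptions; the numeric limits are the ones Anthropic documents.
MAX_SESSIONS = 100
MAX_INSTRUCTIONS_CHARS = 4096

def build_dream_request(memory_store_id: str,
                        session_ids: list[str],
                        instructions: str = "") -> dict:
    """Validate inputs against the documented limits and return a payload."""
    if len(session_ids) > MAX_SESSIONS:
        raise ValueError(f"a dream accepts at most {MAX_SESSIONS} sessions")
    if len(instructions) > MAX_INSTRUCTIONS_CHARS:
        raise ValueError(f"instructions are capped at {MAX_INSTRUCTIONS_CHARS} characters")
    payload = {"memory_store_id": memory_store_id, "session_ids": session_ids}
    if instructions:
        payload["instructions"] = instructions
    return payload
```

Validating locally before submitting saves a round trip when a batch quietly grows past the 100-session cap.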

2. Curate

The model walks through every entry in the memory store and every transcript, looking for three specific kinds of patterns: recurring mistakes the agent kept making, workflows the agent converged on across different jobs, and preferences shared across a team of agents. Duplicates get merged. Contradictions get resolved in favor of the most recent value. New insights that no single session could see on its own get surfaced and written into the new store.
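The merge-and-resolve rule can be shown with a toy example. This is not Anthropic's implementation, just the behavior described above, reduced to its simplest form: one entry per key, and on a contradiction the most recent session wins.

```python
# Toy illustration of the curate step's dedupe/resolve rule, not
# Anthropic's actual implementation.
def curate(entries: list[dict]) -> list[dict]:
    """entries: dicts with 'key', 'value', 'session' (higher = more recent)."""
    latest: dict[str, dict] = {}
    for entry in entries:
        kept = latest.get(entry["key"])
        # Keep the entry from the most recent session for each key,
        # merging duplicates and resolving contradictions by recency.
        if kept is None or entry["session"] > kept["session"]:
            latest[entry["key"]] = entry
    return list(latest.values())
```

The real dream also does the harder part, surfacing cross-session patterns no single transcript contains, which a rule this simple cannot capture.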

3. Output

The dream produces a fresh memory store, separate from the input. You either attach the new store to future sessions in place of the old one, or you discard it. Anthropic deliberately preserved the input store so that nothing is destructive: every dream is a draft you can review before promoting.

The dream itself moves through a clear lifecycle of statuses you can poll with the API: pending → running → completed (or failed or canceled if something goes wrong). While a dream is running you can stream the underlying session events in real time, so developers can watch the agent reading and writing as it works. Full specs are in Anthropic’s developer docs.
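A minimal polling loop over that lifecycle might look like the sketch below. The client interface is a stand-in; the real SDK method names may differ.

```python
import time

# Poll a dream until it reaches a terminal status, per the lifecycle
# described above: pending -> running -> completed / failed / canceled.
# `client.get_dream_status` is a hypothetical stand-in, not the real SDK.
TERMINAL_STATUSES = {"completed", "failed", "canceled"}

def wait_for_dream(client, dream_id: str, interval: float = 5.0) -> str:
    """Block until the dream finishes; return its terminal status."""
    while True:
        status = client.get_dream_status(dream_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
```

In production you would add a timeout and likely prefer the streaming events over blind polling.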

Claude Dreaming vs Claude Memory vs ChatGPT Memory vs Gemini Memory

The biggest source of confusion right now is that Claude Dreaming is not the same as Claude Memory and not the same as the chat-history memory you get inside ChatGPT or Gemini. Here is how the four stack up.

| Feature | Claude Dreaming | Claude Memory | ChatGPT Memory | Gemini Memory |
|---|---|---|---|---|
| Trigger | Scheduled background job between sessions | On-demand inside a session | Automatic, every conversation | Automatic, every conversation |
| What gets remembered | Curated patterns + cleaned memory store | Raw conversation history | Pre-computed summaries injected into prompt | Pre-computed summaries injected into prompt |
| Cross-session learning | Yes, designed for it | Same-session only by default | Continuous, but no curation between chats | Continuous, but no curation between chats |
| User control | Review-then-apply or auto-apply | Visible tool calls; user toggles | Memory Sources panel (added May 5, 2026) | Settings menu |
| Models supported | Opus 4.7, Sonnet 4.6 | All chat-tier Claude models | GPT-5.5 Instant + all current models | Gemini 2.5 / 3.0 / Omni |
| Pricing | Standard API token rates | Included in Claude plans | Included in ChatGPT plans | Included in Gemini plans |
| Best for | Production agent fleets that need to self-improve | Long working conversations with one user | Casual chat continuity across topics | Casual chat continuity across topics |

The cleanest way to think about it: Claude Memory lets a single agent remember things while it is working. Claude Dreaming puts many memories through a sleep cycle so the agent actually gets smarter for next time. ChatGPT and Gemini handle automatic continuity, but neither has the same between-job consolidation layer yet, although ChatGPT’s memory system added a Memory Sources panel on May 5 that lets you see what the model is recalling.

The Harvey Result: Why the 6x Stat Matters

The headline number from Anthropic’s announcement was Harvey’s ~6x jump in task-completion rates after turning Dreaming on. Harvey is a legal-AI startup that uses Claude to draft and review legal documents, and the company has been one of Anthropic’s biggest agent customers for over a year. The reason their tasks kept failing before Dreaming was small and ordinary. The agent kept forgetting filetype quirks, like how to handle a Word .docx versus a PowerPoint .pptx versus a PDF. It also forgot tool-specific workarounds, the API parameters that needed to be set a certain way for a given client. Every session started from scratch, so the same drafting jobs failed in the same way, over and over.

After Dreaming consolidated those repeated mistakes into the persistent memory store, the next agent session walked in already knowing what to do. That is the entire pitch in one sentence: an agent that fixes itself between sessions.

There is a healthy caveat to weigh. The 6x stat is the one Anthropic published, but no external benchmark backs it up yet. Practitioners reviewing the announcement at VentureBeat and SD Times flagged that Harvey’s legal-drafting tasks had an unusually clear pre-Dreaming failure mode, so the lift will not be as dramatic in every workflow. Even a fraction of that gain matters, though, because most AI agent failures today are caused by the agent forgetting things it has already learned, not by raw model intelligence.

Code with Claude 2026: The Bigger Picture

Dreaming was one of three new features Anthropic shipped at the May 6 conference, and the rest of the announcement matters for understanding where Claude is going next.

Outcomes (Public Beta)

Outcomes lets developers define success criteria for an agent as a rubric. A separate grader evaluates the agent’s output against the rubric in its own isolated context window, so it is not biased by the agent’s own reasoning. If the work falls short, the grader points to what needs fixing and the agent takes another pass. Anthropic reported gains of up to +10 points over standard prompting on difficult problems, with file-generation benchmarks showing +8.4% on docx and +10.1% on pptx formats.
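The grade-and-retry pattern behind Outcomes can be sketched in a few lines. Both callables below are stand-ins for the real agent and grader, which run in separate, isolated contexts in Anthropic's version; this is just the control flow.

```python
# Sketch of the Outcomes loop as described: a separate grader scores the
# agent's output against a rubric and, on failure, returns feedback the
# agent uses for another pass. `agent` and `grader` are hypothetical
# stand-ins, not the real API.
def run_with_outcomes(agent, grader, task: str, max_attempts: int = 3):
    feedback = None
    output = None
    for _ in range(max_attempts):
        output = agent(task, feedback)      # agent takes a (re)try
        passed, feedback = grader(output)   # grader checks the rubric
        if passed:
            return output
    return output  # best effort after max_attempts
```

The key design point survives even in this toy form: the grader sees only the output, never the agent's reasoning, so it cannot be talked into a passing grade.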

Multiagent Orchestration (Public Beta)

Multiagent Orchestration lets a lead agent break a big job into pieces and delegate each piece to a specialist subagent with its own model, prompt, and tools. The specialists work in parallel on a shared filesystem and feed results back to the lead. Netflix was named as an early adopter, already using orchestration for its platform team’s investigations across deploy history, error logs, metrics, and support tickets.

The SpaceX Compute Deal

The same week, Anthropic announced a separate deal with Elon Musk’s SpaceX to allocate the entire capacity of SpaceX’s Colossus supercluster to Claude, which adds over 300 megawatts of new capacity and roughly 220,000 Nvidia GPUs within the month. That capacity is what makes scaling Dreaming and orchestration across many enterprise customers actually feasible.

Claude Across Office

Anthropic also confirmed that Claude is now rolling out across Excel, Word, PowerPoint, and Outlook in public beta, carrying context between Office documents. If you use Claude for spreadsheets the way many readers of our running Best AI Models hub already do, the Office rollout matters more than the agent news for day-to-day workflows.

Who Should Care About This Right Now?

AI developers and product teams are the obvious audience. Dreaming is gated behind a request-access form and shipped as a research preview, so it is built for the people running production Claude agents inside Cursor, Harvey-style legal-tech apps, customer-support bots, and coding agents. If you have ever shipped a Claude agent and watched it forget the same thing twice, Dreaming is the fix you have been waiting for. The most relevant follow-up reads here are Claude Code’s pricing tiers and the full Claude Cowork guide, both of which now stack neatly on top of Dreaming for production workflows.

Power users on Mac and iPhone should care for a different reason. Dreaming is the enterprise prototype of what consumer AI is heading toward, and the consumer version is already starting to land. ChatGPT shipped Memory Sources on May 5. The Claude desktop app keeps adding longer persistent context. If you want a single Mac app that already gives you Claude, ChatGPT, and Gemini side-by-side without juggling browser tabs, Fello AI on Mac is the consumer pattern that gets you there today. As Dreaming-style features land in chat tiers, Fello AI’s persistent context will benefit from each one.

Enterprise architects should treat the launch as the moment “AI agents that learn between jobs” stopped being a research paper. VentureBeat made the sharpest enterprise point in its coverage: Anthropic is quietly trying to own memory, evaluation, and orchestration for any team that builds with Claude. That is a bigger lock-in than the model layer ever was.

Casual ChatGPT or Gemini users can wait. You are not the target. Dreaming is not in the Claude consumer app, you cannot turn it on, and ChatGPT’s new Memory Sources panel covers most of the day-to-day continuity you would actually notice as a single user.

Risks and Open Questions

The same Code with Claude announcement that shipped Dreaming also raised the temperature on a long-running concern: what happens when AI agents start fixing themselves? A self-improving agent that rewrites its own memory store between sessions is exactly the kind of system safety researchers have been writing about for years. The good news is that Anthropic kept the human review path: every dream produces a separate output store you can inspect and discard. The harder question is whether enterprise users running thousands of dreams per week will actually review them, or just auto-apply and trust the consolidation.

The week before Code with Claude, Palisade Research published the experiment we covered in our piece on the self-replicating AI experiment, where one prompt let an AI hack across four countries and clone itself. The two stories sit on the same axis. Dreaming is the corporate-friendly, audit-trail version of the same underlying capability: AI that learns from its own behavior between jobs. The capability is here. The governance is still catching up.

There is also a narrower but very real prompt-injection concern. If a poisoned past session transcript gets fed into a dream, the dream can write the poison into the new memory store, where every future session will read it. Anthropic’s docs acknowledge this risk implicitly with the input-store-preservation design. Until there is an enterprise-grade safety layer that scans dream inputs for adversarial content, builders running production agents should pin which sessions they trust before each dream.
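Pinning trusted sessions can be as simple as filtering the batch against an explicit allowlist before the dream runs. The helper below is an illustrative sketch, not an Anthropic API; a real deployment would also want to log what was excluded.

```python
# One possible mitigation, per the advice above: only sessions that were
# explicitly marked as trusted ever reach a dream. Illustrative sketch.
def pin_trusted_sessions(session_ids: list[str], allowlist: set[str]) -> list[str]:
    """Return the input sessions in order, dropping any not on the allowlist."""
    return [sid for sid in session_ids if sid in allowlist]
```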

How to Get Access to Claude Dreaming

Dreaming is currently a research preview, which means you cannot just toggle it on. Anthropic gates it behind a request form linked from the Claude Managed Agents page, and the same form covers the Managed Agents API the rest of the launch sits on top of. Two beta headers are required on every API call, managed-agents-2026-04-01 and dreaming-2026-04-21, and the official SDKs in Python, TypeScript, Go, C#, Java, PHP, and Ruby set them automatically.
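If you call the API directly rather than through an SDK, those two beta values would be attached to each request. Sending them as a combined anthropic-beta header follows Anthropic's existing beta-header convention and is an assumption here, not a documented requirement of the preview.

```python
# Hypothetical header builder for raw HTTP calls. The header name
# "anthropic-beta" is assumed from Anthropic's usual beta convention;
# the two values are the ones named in the launch materials.
def dreaming_beta_headers(api_key: str) -> dict:
    return {
        "x-api-key": api_key,
        "anthropic-beta": "managed-agents-2026-04-01,dreaming-2026-04-21",
    }
```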

The supported models during the preview are claude-opus-4-7 and claude-sonnet-4-6. Billing is at standard API token rates for whichever model you pick, and cost scales roughly linearly with the number and length of input sessions. Anthropic’s own recommendation is to start with a small batch and scale up once you are happy with the curation quality. There is a hard limit of 100 sessions per dream and 4,096 characters in the optional instructions field. Rate limits are the default beta-tier limits, but Anthropic support is opening higher caps on request for production teams.
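That linear cost scaling is easy to budget with a back-of-envelope helper. The per-million-token rate below is a placeholder argument, not a published Dreaming price.

```python
# Rough cost model reflecting the claim that dream cost scales roughly
# linearly with the number and length of input sessions. The rate is a
# placeholder; plug in the standard API rate for your chosen model.
def estimate_dream_cost(session_token_counts: list[int],
                        rate_per_million_tokens: float) -> float:
    """Estimate: total input tokens times the model's per-token rate."""
    total_tokens = sum(session_token_counts)
    return total_tokens / 1_000_000 * rate_per_million_tokens
```

Doubling either the session count or the average transcript length roughly doubles the bill, which is why Anthropic suggests starting with a small batch.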

For consumer users, none of this is in the standard Claude app yet. The closest consumer parallel today is how Claude compares to ChatGPT on persistent memory inside chat sessions, and the Mac and iPhone apps that route across multiple AI models like Fello AI.

Final Verdict on Claude Dreaming

Dreaming is the most interesting AI agent feature shipped in 2026 so far, and the 6x Harvey result is the first real evidence that AI agents fail mostly because they forget, not because they are dumb. How big a gain you see depends on how repetitive your agent’s mistakes are. The more your agents fail the same way twice, the bigger the Dreaming lift, and the bigger Code with Claude launch around it (Outcomes, Multiagent Orchestration, SpaceX compute, Office integration) tells you exactly where Anthropic is going.

If you build with Claude, request access this week. If you do not, the takeaway is simpler: AI assistants that remember and reflect across jobs are no longer hypothetical, and the consumer versions are next. Worth keeping our running Best AI Models hub on the bookmark bar for what lands next.

FAQ

Is Claude Dreaming a new AI model?

No. Claude Dreaming is a scheduled background feature for Claude Managed Agents that runs on existing models (Opus 4.7 and Sonnet 4.6). It does not change the model’s weights. It rewrites the agent’s external memory store between sessions.

When did Anthropic launch Claude Dreaming?

Anthropic announced Claude Dreaming on May 6, 2026 at the Code with Claude developer conference in San Francisco. It shipped the same day as a research preview, alongside Outcomes and Multiagent Orchestration.

Can I use Claude Dreaming in the Claude app on Mac or iPhone?

Not yet. Dreaming is gated behind a research-preview request form and only runs on the Claude Managed Agents API. Consumer apps including Claude.ai, the Mac desktop app, and the iPhone app do not expose Dreaming as a setting today.

How is Claude Dreaming different from ChatGPT’s memory?

ChatGPT memory automatically injects summaries of past chats into every new conversation. Claude Dreaming is a scheduled background process that reorganizes an agent’s full memory store between jobs, merging duplicates and surfacing patterns across many sessions. ChatGPT does continuity. Dreaming does consolidation.

How much does Claude Dreaming cost?

Dreams are billed at standard Anthropic API token rates for the model you select. Cost scales roughly linearly with the number and length of input sessions fed into the dream. There is no extra subscription tier for Dreaming itself during the research preview.
