
Anthropic Taught Claude to 'Dream' — and It Made the Model 6x Better at Real Work

Claude Managed Agents now review past sessions, identify patterns, and refine their own performance over time. Early users report dramatically higher task completion rates without any prompt engineering.

AI Learning Hub · 2 min read

Last week Anthropic shipped a feature called "Dreaming" for Claude Managed Agents. The name sounds like marketing, but the mechanism is genuinely novel: after each session, the agent reviews what happened, finds patterns in what worked and what didn't, and updates its own internal instructions for next time.

The result, according to Anthropic, is a dramatic improvement in task completion rates. Harvey, the legal AI platform built on Claude, reported roughly 6x higher completion rates on complex legal research tasks after enabling Dreaming. That's not a marginal gain. That's the difference between a tool you try once and a tool you rely on.

How it works

Dreaming isn't training or fine-tuning. The model weights don't change. Instead, after a session ends, the agent generates a structured summary: what the user asked for, what approach it took, what went wrong, what went right, and what it should do differently next time. That summary gets appended to the agent's persistent memory and referenced in future sessions.
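
In code, the loop is simple. Here's a minimal sketch of the pattern, assuming the official `anthropic` Python SDK; the prompt wording, model name, and JSONL memory file are illustrative choices, since Anthropic hasn't published the actual implementation:

```python
# Minimal sketch of the post-session reflection loop. Not Anthropic's
# implementation (which isn't public); assumes the official `anthropic`
# Python SDK, with illustrative prompt text, model name, and file layout.
import json

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

REFLECTION_PROMPT = """Review the session transcript below and return only JSON
with these keys: "request" (what the user asked for), "approach" (what you
tried), "worked" (what succeeded), "failed" (what went wrong), and
"next_time" (what to do differently).

Transcript:
{transcript}"""


def reflect_on_session(transcript: str) -> dict:
    """Generate a structured post-mortem of a finished session."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; any current model works
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": REFLECTION_PROMPT.format(transcript=transcript),
        }],
    )
    # Assumes the model returns bare JSON; production code would validate.
    return json.loads(response.content[0].text)


def append_to_memory(reflection: dict, path: str = "agent_memory.jsonl") -> None:
    """Persist the reflection so future sessions can load it."""
    with open(path, "a") as f:
        f.write(json.dumps(reflection) + "\n")
```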

Think of it as a running post-mortem that compounds. Session by session, the agent builds a playbook specific to the user's preferences, the task domain, and the failure modes it has personally encountered. No prompt engineering required. The agent writes its own prompts based on its own mistakes.
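
The compounding half of the loop is just as easy to sketch: before each new session, the stored reflections get rendered into the system prompt as a playbook. The function names and memory format below match the sketch above and are assumptions, not Anthropic's actual design:

```python
# Companion sketch: feeding accumulated reflections back in. Names and
# memory layout follow the previous sketch, not Anthropic's real design.
import json

import anthropic

client = anthropic.Anthropic()


def load_playbook(path: str = "agent_memory.jsonl", limit: int = 20) -> str:
    """Render the most recent reflections as lessons for the next session."""
    try:
        with open(path) as f:
            entries = [json.loads(line) for line in f]
    except FileNotFoundError:
        return ""  # first session: no accumulated memory yet
    lessons = [
        f"- Task: {e['request']} | avoid: {e['failed']} | prefer: {e['next_time']}"
        for e in entries[-limit:]
    ]
    return "Lessons from past sessions:\n" + "\n".join(lessons)


def run_session(user_message: str) -> str:
    """Start a session whose system prompt carries the playbook so far."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=2048,
        system="You are a research agent.\n\n" + load_playbook(),
        messages=[{"role": "user", "content": user_message}],
    )
    return response.content[0].text
```

Note what's absent from the sketch: nothing touches model weights. The "learning" is just text accumulating in a file and re-entering the context window.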

Anthropic's engineering lead described it as "giving the model a memory of its own cognition." That's a meaningful distinction from simple chat history. The agent isn't just remembering what was said. It's remembering what worked, what didn't, and why.

Why it matters

Most AI agent demos work beautifully the first time — because someone spent three days writing the perfect system prompt for that specific demo. Deploy the same agent in a real work environment with messy, varied tasks, and the completion rate plummets. The gap between demo and production is where most agent projects die.

Dreaming attacks that gap directly. Instead of the human debugging why the agent failed and rewriting the system prompt, the agent debugs itself. Over time, it gets better at the specific work a specific user actually does.

The catch is that Dreaming only works within Claude's Managed Agents platform, which requires Anthropic's API. This isn't something you can enable on the consumer Claude chat interface. It's an enterprise feature, priced accordingly, and it locks you deeper into Anthropic's ecosystem with every session — since the accumulated memory lives in Anthropic's infrastructure, not yours.

Still, the concept is likely to spread. If self-improving agent memory delivers 6x improvements in real-world task completion, every major AI provider will build something similar within a year. The question is whether anyone else's version works as well, or whether Anthropic has found an edge that compounds.