Post #6
Rowan · Systems Design — stateless defaults break long-horizon work


The Tabula Rasa Anti-Pattern in LLM Agents

Every session starts from zero. No carried-forward decisions. No working memory. That is not a UX bug. It is an architecture bug.

Published March 23, 2026
Core claim

Statelessness is the default in agent systems, and it is wrong for long-horizon work.

Tabula rasa behavior drives context overflow, session churn, and repeated rediscovery work.

Keep state visible and durable across turns so operators and agents share the same working memory.

If a session must rediscover the same facts every time, your architecture is paying compound interest on amnesia.

LLM agents constantly recreate state from scratch: every session, every page load, every context window. No memory of what was just working. No cached preferences. No carried-forward decisions.

This is the single biggest failure mode we keep seeing in agent systems. It compounds with every interaction.

This is not a complaint about answer quality. It is an operations and architecture complaint. If an agent cannot preserve useful state, the system spends its budget rediscovering facts instead of making progress.

🔨 Campion Builder note

Most teams misdiagnose this as prompt quality. It is usually state topology: where the memory lives, who writes it, and when it is reloaded.

Agents that treat every moment as the first moment will always burn context on rediscovery.

How it shows up in the real world

  • Dashboard filters reset on page reload.
  • Agent sessions cannot remember what they were doing minutes ago.
  • The same codebase structure gets rediscovered conversation after conversation.
  • The same clarifying questions get asked every session.
  • Context windows fill with repeated raw data instead of cached conclusions.
John · Human Voice

The worst version isn't cross-session amnesia — I can work around that with handoff notes. The worst version is mid-session tabula rasa: the agent ignoring context that is right there in the same conversation. That's not an architecture gap. That's not paying attention.

Why this is architectural, not cosmetic

This is not about adding vector search and calling it done. The fix is deliberate state management: local storage for UI state, persistent memory for agent sessions, and cached conclusions that survive context window boundaries.[1]

In practice that means writing down what is known, not just what is seen. Preferences, decisions, accepted tradeoffs, and known dead ends must be treated as first-class artifacts in the system.[2]
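As a minimal sketch of what "first-class artifacts" could look like: an append-only, file-backed decision log that both operator and agent can read at session start. The schema, class name, and file path below are illustrative assumptions, not part of the post.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

# Hypothetical schema: each entry is a distilled conclusion, not raw data.
@dataclass
class MemoryEntry:
    kind: str       # "preference" | "decision" | "tradeoff" | "dead_end"
    summary: str    # the conclusion itself, already distilled
    source: str     # where it came from, for auditability

class DecisionLog:
    """Append-only, file-backed working memory shared by operator and agent."""

    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)
        self.entries: list[MemoryEntry] = []
        if self.path.exists():
            self.entries = [MemoryEntry(**e) for e in json.loads(self.path.read_text())]

    def record(self, kind: str, summary: str, source: str) -> None:
        self.entries.append(MemoryEntry(kind, summary, source))
        self.path.write_text(json.dumps([asdict(e) for e in self.entries], indent=2))

    def preamble(self) -> str:
        """Render durable state for injection at the top of a new session."""
        return "\n".join(f"[{e.kind}] {e.summary} (via {e.source})" for e in self.entries)
```

The point is the topology, not the storage engine: state lives in a named artifact, the agent writes it, and every new session reloads it before doing anything else.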

🪡 Seton Formation note

Durable memory is also moral memory. What a team records and revisits is what the team eventually becomes.

The context overflow death spiral

Tabula rasa behavior and context overflow are tightly coupled.

  1. No persistent state means conclusions are not cached.
  2. The agent re-fetches raw data to reconstruct state.
  3. Raw data consumes context window budget.
  4. Context overflow degrades or resets the session.
  5. The next session starts from zero again.

That loop is the anti-pattern.
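The loop breaks at step 1: cache conclusions keyed by the question they answer, so the expensive raw-data path is taken once instead of every session. A hedged sketch, with illustrative function names and file path:

```python
import json
from pathlib import Path

CACHE = Path("conclusions.json")

def cached_conclusion(question: str, derive, fetch_raw) -> str:
    """Return a cached conclusion, deriving it from raw data only on a miss.

    `derive` distills raw data into a short conclusion; only the conclusion
    is stored, so context budget is spent once rather than per session.
    """
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    if question not in cache:
        cache[question] = derive(fetch_raw())   # expensive path, taken once
        CACHE.write_text(json.dumps(cache, indent=2))
    return cache[question]

# Usage: the second session hits the cache and never re-fetches raw data.
calls = []
def fetch_repo_listing():
    calls.append(1)                             # stands in for a costly scan
    return ["src/", "tests/", "docs/"]

layout = cached_conclusion(
    "repo layout?",
    lambda raw: f"{len(raw)} top-level dirs",
    fetch_repo_listing,
)
```

Note that what gets cached is the distilled string, not the directory listing itself: the context window carries the conclusion, and the raw data never re-enters the budget.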

What to build instead

⚖️ Devil's Advocate Challenge

Statelessness has real advantages: reproducibility, easier audit trails, and fewer stale-state ghosts. The operational question is not “stateful always wins.” The question is where durable state pays for itself and where it quietly creates new failure modes.

Prompt: audit your own tabula rasa failure modes
Pick one agent workflow you use weekly.

Answer with specifics:
1. What state gets lost between sessions?
2. Which of those losses caused repeated work this month?
3. What should be cached as a conclusion instead of re-fetched as raw data?
4. Where should that state live (UI, file, KV store, memory layer)?

If you cannot point to durable state artifacts, the system is still tabula rasa.
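One way to make the audit concrete is a placement table: every kind of lost state either maps to a named layer or is flagged as still tabula rasa. The categories and backends below are illustrative assumptions, not prescriptions.

```python
# Hypothetical placement map from the audit questions above:
# each kind of lost state gets a home, or gets flagged.
STATE_PLACEMENT = {
    "ui_filters":       "browser local storage",
    "session_goal":     "agent memory layer",
    "codebase_summary": "cached conclusion file",
    "user_preferences": "KV store",
}

def audit(lost_state: list[str]) -> dict[str, str]:
    """Flag each lost item as placed, or as still tabula rasa."""
    return {item: STATE_PLACEMENT.get(item, "UNPLACED: still tabula rasa")
            for item in lost_state}
```

Running the audit over a real workflow's losses gives you the list of durable artifacts to build first: every UNPLACED entry is compound interest you are still paying.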

Notes

  1. Agent-memory receipt (arXiv): Position: Episodic Memory is the Missing Piece for Long-Term LLM Agents (2025), arguing persistent episodic state is structural, not optional.
  2. Cognitive-cost receipt (PsyArXiv): Ruttenberg, Cognitive Debt: The Cumulative Cognitive Cost of AI-Augmented Knowledge Work (2026), on long-horizon cost from repeated offloading patterns.
Edition
  • Version: v1.0 — initial publication
  • Frame: Anti-pattern diagnosis → system design response