
Memory Segmentation

How memories.sh separates session, semantic, episodic, and procedural memory so context survives resets and compaction.

Memory segmentation is a second axis on top of Memory Types. Types describe the kind of memory (rule, decision, fact, note, skill). Segmentation describes where memory lives across the lifecycle (session, semantic, episodic, procedural).

Why Segmentation Exists

Agent context fails when all history is treated as one blob. memories.sh segments memory so each store has one job:

  • Session memory for active work
  • Semantic memory for durable truths
  • Episodic memory for chronological history
  • Procedural memory for repeatable workflows
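The separation above is the opposite of one flat list: each segment gets its own store with its own access pattern. A conceptual sketch in Python (the names are illustrative, not the memories.sh data model):

```python
from dataclasses import dataclass, field

@dataclass
class SegmentedMemory:
    """Conceptual sketch of the four stores; not the memories.sh schema."""
    session: list = field(default_factory=list)     # active working context
    semantic: dict = field(default_factory=dict)    # durable truths, keyed by subject
    episodic: list = field(default_factory=list)    # append-only chronology
    procedural: dict = field(default_factory=dict)  # reusable workflows by name

mem = SegmentedMemory()
mem.semantic["default-branch"] = "main"                        # truth: keyed, overwritable
mem.episodic.append(("2024-01-15", "rollout plan approved"))   # history: appended, never edited
```

The key design point is that semantic memory is keyed (so updates replace, keeping one current truth) while episodic memory is append-only (so the timeline is never rewritten).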

The Segmented Stores

1) Session Memory (working context)

Use explicit sessions for long-running tasks:

memories session start --title "Auth refactor" --client codex
memories session checkpoint <session-id> "User approved rollout plan"
memories session status <session-id>

Session memory tracks active conversation state and supports checkpoints and snapshots before boundaries like reset or compaction.
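Conceptually, a checkpoint records a durable milestone inside the session so that a later reset discards only the volatile working context. A minimal sketch (hypothetical structure, not the real session format):

```python
import time

class Session:
    """Sketch of session memory: volatile turns plus durable checkpoints."""
    def __init__(self, title):
        self.title = title
        self.turns = []        # working context, lost on reset
        self.checkpoints = []  # milestones that survive reset/compaction

    def checkpoint(self, note):
        self.checkpoints.append((time.time(), note))

    def reset(self):
        self.turns.clear()     # working context discarded; checkpoints remain

s = Session("Auth refactor")
s.turns.append("discussed rollout options")
s.checkpoint("User approved rollout plan")
s.reset()
```

After `reset()`, the approved-plan checkpoint is still available to seed the next context window.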

2) Long-term Semantic Memory (stable truths)

Store durable facts, preferences, and rules in semantic memory:

  • CLI/local database memory records
  • OpenClaw memory.md for deterministic file-mode semantic context

Semantic memory should stay concise and current. Use consolidation/edits to avoid contradictory duplicates.
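The "avoid contradictory duplicates" rule amounts to upserting by subject rather than appending. A trivial sketch of the idea:

```python
semantic = {}  # key = subject, value = the single current truth

def consolidate(store, subject, value):
    """Overwrite rather than append, so each subject has one current truth."""
    store[subject] = value

consolidate(semantic, "default-branch", "main")
consolidate(semantic, "default-branch", "trunk")  # later edit wins; no stale duplicate
```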

3) Long-term Episodic Memory (history)

Store chronological events in append-friendly logs:

  • OpenClaw daily logs: memory/daily/YYYY-MM-DD.md
  • Session snapshots: memory/snapshots/YYYY-MM-DD/<slug>.md

Episodic memory is where timeline and fidelity live. Use it when you need what happened, not just the final truth.
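The two path conventions above are date-addressed, which is what makes the logs append-friendly: a day's events always land in a predictable file. A small sketch of the path layout:

```python
from datetime import date

def daily_log_path(day: date) -> str:
    """Append-friendly daily log: memory/daily/YYYY-MM-DD.md"""
    return f"memory/daily/{day.isoformat()}.md"

def snapshot_path(day: date, slug: str) -> str:
    """Session snapshot: memory/snapshots/YYYY-MM-DD/<slug>.md"""
    return f"memory/snapshots/{day.isoformat()}/{slug}.md"

print(daily_log_path(date(2024, 1, 15)))   # memory/daily/2024-01-15.md
```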

4) Procedural Memory (how-to patterns)

Procedural memory captures reusable workflows and successful operating patterns:

  • Skills and workflow artifacts
  • Retrieval signals for intent-matched workflow recall

Use procedural memory for repeatable tasks like release checklists, incident response, or migration runbooks.
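Intent-matched recall can be as simple as scoring keyword overlap between the current task and each workflow's signals. A hypothetical sketch (memories.sh's actual retrieval is not specified here):

```python
workflows = {
    "release-checklist": {"release", "tag", "changelog"},
    "incident-response": {"incident", "outage", "rollback"},
}

def recall(query: str):
    """Return the workflow whose signal keywords best overlap the query, or None."""
    words = set(query.lower().split())
    best = max(workflows, key=lambda name: len(workflows[name] & words))
    return best if workflows[best] & words else None

print(recall("prepare the release changelog"))
```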

Compaction and Lifecycle Triggers

Compaction is context compression with checkpoints to avoid losing important state.

| Trigger     | When it fires                          | Typical mechanism              |
|-------------|----------------------------------------|--------------------------------|
| Count-based | Token/turn budget is near limit        | Session checkpoint + snapshot  |
| Time-based  | Session inactive past threshold        | memories compact run           |
| Event-based | Task boundary (/new, /reset, handoff)  | Snapshot with explicit trigger |

Examples:

memories compact run --inactivity-minutes 60
memories session snapshot <session-id> --trigger auto_compaction
memories session snapshot <session-id> --trigger reset
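The three triggers reduce to a simple decision function. A conceptual sketch (the thresholds are illustrative, not memories.sh defaults):

```python
def compaction_trigger(tokens_used, token_budget, idle_minutes, event=None):
    """Return which lifecycle trigger (if any) should fire, checked in priority order."""
    if event in ("/new", "/reset", "handoff"):
        return "event"                      # task boundary: snapshot with explicit trigger
    if tokens_used >= 0.9 * token_budget:
        return "count"                      # near token budget: checkpoint + snapshot
    if idle_minutes >= 60:
        return "time"                       # inactive past threshold: compact run
    return None

print(compaction_trigger(95_000, 100_000, idle_minutes=5))   # count
```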

OpenClaw File-Mode Flow

Use deterministic file operations around session boundaries:

# 1) Read semantic + recent episodic context
memories openclaw memory bootstrap

# 2) Flush meaningful events before compaction/reset
memories openclaw memory flush <session-id>

# 3) Write DB snapshot + file snapshot
memories openclaw memory snapshot <session-id> --trigger reset

# 4) Keep DB and files aligned
memories openclaw memory sync --direction both

What Goes Where

| Memory content               | Best store                    | Why                                    |
|------------------------------|-------------------------------|----------------------------------------|
| Durable coding standards     | Semantic (rule)               | Must always be injected                |
| Stable project constraints   | Semantic (fact/decision)      | High-value truth over time             |
| Conversation milestones      | Session checkpoints           | Keeps active task coherent             |
| End-of-task transcript slice | Snapshot (episodic)           | Preserves high-fidelity boundary state |
| Daily work chronology        | Daily logs (episodic)         | Append-only timeline for recall        |
| Repeatable runbook/process   | Procedural (skills/workflows) | Reuse successful patterns              |
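Read as code, the routing table is a pure lookup from content kind to store. A trivial sketch with hypothetical kind labels:

```python
ROUTING = {
    "coding-standard":        "semantic",
    "project-constraint":     "semantic",
    "conversation-milestone": "session-checkpoint",
    "transcript-slice":       "episodic-snapshot",
    "daily-chronology":       "episodic-daily-log",
    "runbook":                "procedural",
}

def route(kind: str) -> str:
    """Map a memory-content kind to its best store; unknown kinds should fail loudly."""
    return ROUTING[kind]
```

Failing loudly on unknown kinds (rather than defaulting to one store) is the point: ambiguous content is what produces the flat-list anti-pattern below.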

Anti-patterns

  • Storing full raw transcripts in semantic memory
  • Treating all memory as one flat list
  • Letting conflicting semantic entries stack without consolidation
  • Skipping pre-compaction/session-boundary checkpoints
