memories.sh

Getting Started

Install memories.sh, add your first memories, and generate config files for your AI coding tools.

Installation

Install the CLI globally:

pnpm add -g @memories.sh/cli

Verify the installation:

memories --version

The semantic search model (~100MB) downloads on first use and runs entirely locally — no API calls, no data leaves your machine.

Initialize

Navigate to a git repository and initialize memories:

cd your-project
memories setup

This guide covers the CLI/local flow (global and git project scopes). For the SaaS SDK scope, use tenantId as the security/database boundary, userId as the end-user scope, and projectId as an optional repo-context filter. See AI SDK Projects.

The setup command does several things automatically:

  1. Creates the local database at ~/.config/memories/local.db
  2. Detects installed AI tools (Cursor, Claude Code, Windsurf, VS Code)
  3. Configures MCP for each detected tool
  4. Generates instruction files with your existing memories

Example output:

[1/4] Setting up local storage...
  Database: ~/.config/memories/local.db
[2/4] Detecting scope...
✓ Project scope detected
  Project: github.com/your-org/your-project
[3/4] Detecting AI coding tools...
  Cursor ✓ MCP ○ Rules
  Claude Code ○ MCP ✓ Rules
  
✓ Cursor: MCP already configured
✓ Claude Code: MCP configured → .mcp.json

✓ Cursor: Generated .cursor/rules/memories.mdc
[4/4] Finalizing...

Setup Options

# Skip automatic MCP configuration
memories setup --skip-mcp

# Skip generating instruction files  
memories setup --skip-generate

# Auto-confirm all prompts
memories setup -y

# Minimal local mode (no cloud/workspace dependency)
memories setup --minimal-local -y

# Initialize with starter rules
memories setup --rule "Use TypeScript strict mode" --rule "Prefer pnpm"

# Initialize global memories (apply to all projects)
memories setup --global

10-Minute Local Happy Path

A quick smoke test of the local onboarding flow:

memories setup --minimal-local -y
memories doctor --local-only
memories add --rule "Local setup smoke test"
memories search "Local setup smoke test"

Add Your First Memory

Memories come in four types: rules, decisions, facts, and notes.

Rules

Rules are always-active coding standards that should be followed:

memories add --rule "Always use early returns to reduce nesting"
memories add --rule "Use pnpm as the package manager"
memories add --rule "Prefer named exports over default exports"

Decisions

Decisions capture the "why" behind architectural choices:

memories add --decision "Chose Tailwind CSS over styled-components for utility-first approach and smaller bundle size"
memories add --decision "Using Supabase for auth because it has built-in RLS and a generous free tier"

Facts

Facts store project-specific knowledge:

memories add --fact "API rate limit is 100 requests per minute per user"
memories add --fact "The main database is PostgreSQL 15 hosted on Supabase"

Notes

Notes are general-purpose memories (the default type):

memories add "The legacy API will be deprecated in Q3 2026"

Tag Your Memories

Tags help organize and filter memories:

memories add --rule "Use React Server Components by default" --tags "react,architecture"
memories add --fact "Stripe webhook secret is in STRIPE_WEBHOOK_SECRET env var" --tags "stripe,config"

Generate Config Files

Generate native rule files for your AI tools:

# Generate for a specific tool
memories generate cursor
memories generate claude
memories generate copilot

# Generate for all supported tools at once
memories generate all

Supported targets: cursor, claude, agents, copilot, windsurf, cline, roo, gemini.

Each target writes to its standard location:

Target      Output Path
cursor      .cursor/rules/memories.mdc
claude      CLAUDE.md
agents      .agents/
copilot     .github/copilot-instructions.md
windsurf    .windsurf/rules/memories.md
cline       .clinerules/memories.md
roo         .roo/rules/memories.md
gemini      GEMINI.md

Search Your Memories

memories.sh supports both keyword and semantic search:

# Keyword search (FTS5 with BM25 ranking)
memories search "authentication"

# Semantic search (AI-powered, finds related concepts)
memories search "how to handle user login" --semantic
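Under the hood, keyword search uses SQLite's FTS5 with BM25 ranking. A minimal standalone sketch of how that ranking behaves (the table name and schema here are illustrative, not the CLI's actual storage):

```shell
# Rank three sample memories against the query 'rate' using FTS5 + bm25().
# In SQLite, lower bm25() scores are better, so ascending order returns the
# best match first.
sqlite3 <<'EOF'
CREATE VIRTUAL TABLE m USING fts5(content);
INSERT INTO m(content) VALUES
  ('Always use early returns to reduce nesting'),
  ('API rate limit is 100 requests per minute'),
  ('Chose Tailwind CSS over styled-components');
SELECT content FROM m WHERE m MATCH 'rate' ORDER BY bm25(m) LIMIT 1;
EOF
```

The query prints the rate-limit memory, since it is the only row matching the term.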

First-time semantic search downloads the embedding model (~100MB) to ~/.cache/memories/models/. The download happens once, and search runs entirely locally.

To generate embeddings for existing memories:

memories embed

Session Lifecycle + Compaction

For longer-running agent tasks, use explicit sessions and compaction-aware checkpoints:

memories session start --title "Auth migration" --client codex
memories session checkpoint <session-id> "Migration plan approved"
memories session snapshot <session-id> --trigger auto_compaction
memories compact run --inactivity-minutes 60

If you use OpenClaw file mode, pair session lifecycle with deterministic files:

memories openclaw memory bootstrap
memories openclaw memory flush <session-id>
memories openclaw memory snapshot <session-id> --trigger reset

See Memory Segmentation for the full model (session, semantic, episodic, procedural).

MCP Server (Fallback)

The primary workflow is memories generate, which writes config files in each tool's native format. For browser-based agents (v0, bolt.new, Lovable) or any MCP client where the CLI can't run, the built-in MCP server provides real-time access instead.

If you ran memories setup (or memories init), MCP is already configured for your detected tools. To start the server manually:

memories serve

MCP gives agents live access to search, add, and manage memories directly — useful when static configs aren't enough.

See the MCP Server guide for details.
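If your client wasn't auto-detected by setup, you can register the server by hand. A sketch, assuming your client uses the common mcpServers JSON convention (verify the exact file name and shape against your client's docs):

```shell
# Hypothetical manual registration of the memories MCP server in .mcp.json.
# The client launches `memories serve` and talks to it over MCP.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "memories": {
      "command": "memories",
      "args": ["serve"]
    }
  }
}
EOF
```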

Sync Config Files Across Machines

Beyond memories, you can sync your AI tool configuration files:

# Import global configs (skills, commands, rules)
memories files ingest

# See what would be imported
memories files ingest --dry-run

# Apply synced files to a new machine
memories files apply --global --force

This syncs files from .agents/, .claude/, .cursor/, .codex/, .windsurf/, and other tool directories. See Files Sync for details.
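Before the first ingest, it can help to see which of these directories exist in the current repo. A small sketch using only the directory names listed above:

```shell
# List which synced tool directories are present locally, as a preview
# before running `memories files ingest --dry-run`.
for d in .agents .claude .cursor .codex .windsurf; do
  if [ -d "$d" ]; then echo "would ingest: $d"; fi
done
```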

GitHub Capture Queue

Capture GitHub activity automatically and review before insertion:

# 1) Configure webhook secret in your web app env
GITHUB_WEBHOOK_SECRET=your-secret

# 2) Point GitHub webhook to:
#    POST /api/github/webhook
#    Events: pull_request, issues, push, release
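The secret is used to verify payloads: GitHub signs the raw request body with HMAC-SHA256 and sends the result in the X-Hub-Signature-256 header. A sketch of the computation with illustrative values (your handler should compare against the header using a constant-time comparison):

```shell
# Compute the signature GitHub would send for this body and secret.
SECRET='your-secret'
BODY='{"action":"opened"}'
SIG="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"
echo "$SIG"
```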

Captured items appear in the dashboard's GitHub Capture Queue. Approve an item to write it into workspace memory (with graph sync), or reject it to discard.

The queue API supports richer review filters:

# pending release items for a repo
/api/github/capture/queue?status=pending&event=release&repo=webrenew/memories

# text search across title/content/repo/source-id
/api/github/capture/queue?status=all&q=release+notes
