# Getting Started
Install memories.sh, add your first memories, and generate config files for your AI coding tools.
## Installation

Install the CLI globally:

```bash
pnpm add -g @memories.sh/cli
```

Verify the installation:

```bash
memories --version
```

The semantic search model (~100MB) downloads on first use and runs entirely locally — no API calls, no data leaves your machine.
## Initialize

Navigate to a git repository and initialize memories:

```bash
cd your-project
memories setup
```

This guide covers the CLI/local flow (global + git project scope). For SaaS SDK scope, use `tenantId` as the security/database boundary, `userId` as the end-user scope, and `projectId` as an optional repo context filter. See AI SDK Projects.
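The three SaaS scope identifiers can be pictured as a plain scope object. This is an illustrative sketch only — the interface name and example values are hypothetical, not the real SDK API:

```typescript
// Hypothetical shape for SaaS SDK scoping (illustrative, not the real SDK API).
interface MemoryScope {
  tenantId: string;    // security/database boundary (one per customer)
  userId: string;      // end-user scope within the tenant
  projectId?: string;  // optional repo context filter
}

// Example: scope memories to one tenant's end user, filtered to a repo.
const scope: MemoryScope = {
  tenantId: "acme-corp",
  userId: "user_123",
  projectId: "github.com/acme-corp/web-app",
};
```

Here `tenantId` isolates data per customer, `userId` narrows it to one end user, and the optional `projectId` filters by repo context.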
The setup command does several things automatically:

- Creates the local database at `~/.config/memories/local.db`
- Detects installed AI tools (Cursor, Claude Code, Windsurf, VS Code)
- Configures MCP for each detected tool
- Generates instruction files with your existing memories
Example output:

```text
[1/4] Setting up local storage...
      Database: ~/.config/memories/local.db
[2/4] Detecting scope...
      ✓ Project scope detected
      Project: github.com/your-org/your-project
[3/4] Detecting AI coding tools...
      Cursor       ✓ MCP  ○ Rules
      Claude Code  ○ MCP  ✓ Rules
      ✓ Cursor: MCP already configured
      ✓ Claude Code: MCP configured → .mcp.json
      ✓ Cursor: Generated .cursor/rules/memories.mdc
[4/4] Finalizing...
```

### Init Options
```bash
# Skip automatic MCP configuration
memories setup --skip-mcp

# Skip generating instruction files
memories setup --skip-generate

# Auto-confirm all prompts
memories setup -y

# Minimal local mode (no cloud/workspace dependency)
memories setup --minimal-local -y

# Initialize with starter rules
memories setup --rule "Use TypeScript strict mode" --rule "Prefer pnpm"

# Initialize global memories (apply to all projects)
memories setup --global
```

### 10-minute local happy path
```bash
memories setup --minimal-local -y
memories doctor --local-only
memories add --rule "Local setup smoke test"
memories search "Local setup smoke test"
```

## Add Your First Memory
Memories come in four types: rules, decisions, facts, and notes.

### Rules

Rules are always-active coding standards that should be followed:

```bash
memories add --rule "Always use early returns to reduce nesting"
memories add --rule "Use pnpm as the package manager"
memories add --rule "Prefer named exports over default exports"
```

### Decisions
Decisions capture the "why" behind architectural choices:

```bash
memories add --decision "Chose Tailwind CSS over styled-components for utility-first approach and smaller bundle size"
memories add --decision "Using Supabase for auth because it has built-in RLS and a generous free tier"
```

### Facts
Facts store project-specific knowledge:

```bash
memories add --fact "API rate limit is 100 requests per minute per user"
memories add --fact "The main database is PostgreSQL 15 hosted on Supabase"
```

### Notes
Notes are general-purpose memories (the default type):

```bash
memories add "The legacy API will be deprecated in Q3 2026"
```

## Tag Your Memories
Tags help organize and filter memories:

```bash
memories add --rule "Use React Server Components by default" --tags "react,architecture"
memories add --fact "Stripe webhook secret is in STRIPE_WEBHOOK_SECRET env var" --tags "stripe,config"
```

## Generate Config Files
Generate native rule files for your AI tools:

```bash
# Generate for a specific tool
memories generate cursor
memories generate claude
memories generate copilot

# Generate for all supported tools at once
memories generate all
```

Supported targets: `cursor`, `claude`, `agents`, `copilot`, `windsurf`, `cline`, `roo`, `gemini`.
Each target writes to its standard location:

| Target | Output Path |
|---|---|
| cursor | `.cursor/rules/memories.mdc` |
| claude | `CLAUDE.md` |
| agents | `.agents/` |
| copilot | `.github/copilot-instructions.md` |
| windsurf | `.windsurf/rules/memories.md` |
| cline | `.clinerules/memories.md` |
| roo | `.roo/rules/memories.md` |
| gemini | `GEMINI.md` |
## Search Your Memories

memories.sh supports both keyword and semantic search:

```bash
# Keyword search (FTS5 with BM25 ranking)
memories search "authentication"

# Semantic search (AI-powered, finds related concepts)
memories search "how to handle user login" --semantic
```

First-time semantic search downloads the embedding model (~100MB) to `~/.cache/memories/models/`. This happens once and runs entirely locally — no API calls, no data leaves your machine.
To generate embeddings for existing memories:

```bash
memories embed
```

## Session Lifecycle + Compaction
For longer-running agent tasks, use explicit sessions and compaction-aware checkpoints:

```bash
memories session start --title "Auth migration" --client codex
memories session checkpoint <session-id> "Migration plan approved"
memories session snapshot <session-id> --trigger auto_compaction
memories compact run --inactivity-minutes 60
```

If you use OpenClaw file mode, pair the session lifecycle with deterministic files:
```bash
memories openclaw memory bootstrap
memories openclaw memory flush <session-id>
memories openclaw memory snapshot <session-id> --trigger reset
```

See Memory Segmentation for the full model (session, semantic, episodic, procedural).
## MCP Server (Fallback)

The primary workflow is `memories generate` — it writes config files that each tool reads natively. For browser-based agents (v0, bolt.new, Lovable) or any MCP client where the CLI can't run, the built-in MCP server provides real-time access.
If you ran `memories setup` (or `memories init`), MCP is already configured for your detected tools. To start the server manually:

```bash
memories serve
```

MCP gives agents live access to search, add, and manage memories directly — useful when static configs aren't enough.
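For reference, the entry that setup writes for Claude Code's `.mcp.json` might look roughly like this. This is a hypothetical sketch following the common MCP client config shape; the exact command and fields depend on your install:

```json
{
  "mcpServers": {
    "memories": {
      "command": "memories",
      "args": ["serve"]
    }
  }
}
```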
See the MCP Server guide for details.
## Sync Config Files Across Machines

Beyond memories, you can sync your AI tool configuration files:

```bash
# Import global configs (skills, commands, rules)
memories files ingest

# See what would be imported
memories files ingest --dry-run

# Apply synced files to a new machine
memories files apply --global --force
```

This syncs files from `.agents/`, `.claude/`, `.cursor/`, `.codex/`, `.windsurf/`, and other tool directories. See Files Sync for details.
## GitHub Capture Queue

Capture GitHub activity automatically and review it before insertion:

```bash
# 1) Configure the webhook secret in your web app env
GITHUB_WEBHOOK_SECRET=your-secret

# 2) Point the GitHub webhook to:
#    POST /api/github/webhook
#    Events: pull_request, issues, push, release
```

Captured items appear in the dashboard's GitHub Capture Queue. Approve to write them into workspace memory (with graph sync), or reject to discard.
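The secret from step 1 lets your endpoint validate the `X-Hub-Signature-256` header GitHub attaches to each delivery. A minimal sketch of that check, assuming a Node.js handler — this is illustrative, not memories.sh's actual webhook code:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Validate GitHub's X-Hub-Signature-256 header against the shared secret.
function verifyGithubSignature(secret: string, payload: string, header: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(header);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}

// Simulate a delivery signed with the same secret.
const secret = "your-secret";
const payload = JSON.stringify({ action: "opened" });
const header = "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");

console.log(verifyGithubSignature(secret, payload, header));                      // true
console.log(verifyGithubSignature(secret, payload, "sha256=" + "0".repeat(64)));  // false
```

Using `timingSafeEqual` rather than `===` avoids leaking signature prefixes through timing differences.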
The queue API supports richer review filters:
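The filters in the examples below are ordinary query parameters, so a client can assemble them with `URLSearchParams`. An illustrative sketch (the endpoint path is taken from the examples; note that `URLSearchParams` percent-encodes the `/` in the repo name, which servers decode transparently):

```typescript
// Build a review-filter URL for the capture queue endpoint.
const params = new URLSearchParams({
  status: "pending",
  event: "release",
  repo: "webrenew/memories",
});
const url = `/api/github/capture/queue?${params.toString()}`;

console.log(url);
```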
```text
# pending release items for a repo
/api/github/capture/queue?status=pending&event=release&repo=webrenew/memories

# text search across title/content/repo/source-id
/api/github/capture/queue?status=all&q=release+notes
```

## Next Steps
- Starter Apps Quickstart — Next.js, Express, and Python templates
- CLI Reference — Complete command documentation
- Memory Types and Scopes — Understanding the type system
- Memory Segmentation — Session + long-term lifecycle model
- MCP Server — Fallback for real-time agent access
- Files Sync — Sync config files across machines
- Cloud Sync — Multi-device sync with Pro