
TypeScript SDK

Wire persistent memory into AI apps with two packages.

The memories.sh SDK gives AI applications persistent memory through two packages:

Package              | What it does
@memories.sh/ai-sdk  | Middleware + tools for the AI SDK
@memories.sh/core    | Standalone client for any LLM framework

Transport Model (Important)

  • SDK default: HTTP API via /api/sdk/v1/*
  • Optional: MCP transport via JSON-RPC at /api/mcp
  • Not used by SDK runtime: CLI commands (memories ...) are separate tooling

MemoriesClient resolves transport: "auto" to the HTTP API unless baseUrl points at /api/mcp.
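The resolution rule can be sketched as a standalone function. This is an illustration of the rule above, not the SDK's internal code; the transport name "sdk_http" follows the transport: "sdk_http" option shown later on this page:

```typescript
type Transport = "sdk_http" | "mcp"

// Illustration only: transport "auto" resolves to the HTTP API
// unless baseUrl points at the MCP JSON-RPC endpoint.
function resolveTransport(
  transport: "auto" | Transport,
  baseUrl?: string,
): Transport {
  if (transport !== "auto") return transport
  return baseUrl !== undefined && baseUrl.endsWith("/api/mcp")
    ? "mcp"
    : "sdk_http"
}
```

An explicitly chosen transport always wins; only "auto" inspects baseUrl.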

Three Tiers of Integration

Tier       | Pattern                                | DX        | When to use
Middleware | Auto-inject context into every prompt  | 2 lines   | Default ("it just works")
Tools      | LLM decides when to read/write         | 3-5 lines | Agent loops that manage their own memory
Client     | Dev manually fetches context           | 10+ lines | Custom integrations, non-AI-SDK apps

Scoping Model

  • tenantId: AI SDK Project (security/database boundary)
  • userId: end-user scope inside tenantId
  • projectId: optional git/repository context filter (not an auth boundary)

When tenant auto-provision is enabled on your server, first requests for a new tenantId can provision its Turso database automatically.

tenantId and projectId are intentionally different:

  • tenantId chooses the memory database (security boundary)
  • projectId narrows retrieval inside that database (context boundary)

Start with AI SDK Projects if you want the dashboard-first flow.
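The split can be illustrated with a small type. This is a hypothetical helper mirroring the scoping model above, not part of the SDK:

```typescript
// Hypothetical shape mirroring the scoping model described above.
interface MemoryScope {
  tenantId: string   // selects the memory database (security boundary)
  userId?: string    // end-user scope inside that tenant
  projectId?: string // narrows retrieval only; not an auth boundary
}

// Two scopes in the same tenant share one database; projectId only
// changes which memories are retrieved from it.
function sameDatabase(a: MemoryScope, b: MemoryScope): boolean {
  return a.tenantId === b.tenantId
}
```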

Quick Start

  1. Open Dashboard → AI SDK Projects.
  2. Generate a mem_... API key.
  3. Use trusted backend-derived tenantId values. Databases auto-provision on first use when enabled.
  4. Configure tenant overrides only if you need explicit DB attachment/provisioning control.
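Step 3 is a security point: derive tenantId on your backend from your own auth state, never from client input. A minimal sketch, where the session shape and org-based naming scheme are assumptions for illustration:

```typescript
// Assumed session shape produced by your own auth layer.
interface AuthedSession {
  orgId: string
}

// Derive the tenantId server-side from the authenticated org, so a
// client can never point the SDK at another tenant's database.
function tenantIdFor(session: AuthedSession): string {
  return `org-${session.orgId}`
}
```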

Install

pnpm add @memories.sh/ai-sdk

Set your API key:

export MEMORIES_API_KEY=mem_xxx

Two lines to add memory to any model:

import { generateText, wrapLanguageModel } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesMiddleware } from "@memories.sh/ai-sdk"

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: memoriesMiddleware({ tenantId: "acme-prod" }),
})

const { text } = await generateText({
  model,
  prompt: "How should I handle auth in this project?",
})
// Model automatically sees relevant rules + memories in its system prompt

Tools (for agent loops)

When the LLM should manage its own memory:

import { generateText, stepCountIs } from "ai"
import { memoriesTools, memoriesSystemPrompt } from "@memories.sh/ai-sdk"

const { text } = await generateText({
  model: openai("gpt-4o"),
  system: memoriesSystemPrompt(),
  tools: memoriesTools({ tenantId: "acme-prod" }),
  stopWhen: stepCountIs(5),
  prompt: userMessage,
})

Core Client (no AI SDK)

Use with any LLM SDK:

import { MemoriesClient } from "@memories.sh/core"

const client = new MemoriesClient({ apiKey: "mem_xxx", tenantId: "acme-prod" })
const { rules, memories } = await client.context.get({
  query: "deployment process",
  userId: "user_123",
  projectId: "github.com/acme/platform",
  mode: "all",
})

const response = await anthropic.messages.create({
  model: "claude-sonnet-4-5-20250929",
  system: client.buildSystemPrompt({ rules, memories }),
  messages: [{ role: "user", content: userMessage }],
})

Lifecycle Workflow (Session + Consolidation)

MemoriesClient already supports budget/session-aware retrieval through context.get(). The session lifecycle write endpoints (/sessions/*) and consolidation (/memories/consolidate) are not yet wrapped by the client, so the example below calls them directly via the SDK HTTP routes.

When you pass lifecycle hints (sessionId, token/turn budgets, inactivity fields), context.get() can return a session block with compactionRequired and triggerHint so your app can checkpoint before context boundaries.

import { MemoriesClient } from "@memories.sh/core"

const apiKey = process.env.MEMORIES_API_KEY!
const baseUrl = "https://memories.sh"
const scope = {
  tenantId: "acme-prod",
  userId: "user_123",
  projectId: "github.com/acme/platform",
}

const client = new MemoriesClient({
  apiKey,
  tenantId: scope.tenantId,
  userId: scope.userId,
})

async function sdkPost(path: string, body: Record<string, unknown>) {
  const res = await fetch(`${baseUrl}${path}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  })
  if (!res.ok) throw new Error(`SDK request failed: ${res.status}`)
  return res.json()
}

const started = await sdkPost("/api/sdk/v1/sessions/start", {
  title: "Checkout timeout investigation",
  client: "my-agent",
  scope,
})
const sessionId = started.data.sessionId as string

const context = await client.context.get({
  query: "Find likely timeout root causes",
  projectId: scope.projectId,
  sessionId,
  budgetTokens: 6000,
  turnCount: 4,
  turnBudget: 24,
  lastActivityAt: new Date().toISOString(),
  inactivityThresholdMinutes: 45,
})

if (context.session?.compactionRequired) {
  await sdkPost("/api/sdk/v1/sessions/checkpoint", {
    sessionId,
    content: `Pre-compaction checkpoint (${context.session.triggerHint ?? "unspecified"} trigger)`,
    kind: "summary",
    scope,
  })
} else {
  await sdkPost("/api/sdk/v1/sessions/checkpoint", {
    sessionId,
    content: `Captured ${context.memories.length} relevant memories`,
    kind: "summary",
    scope,
  })
}

await sdkPost("/api/sdk/v1/memories/consolidate", {
  types: ["rule", "decision", "fact"],
  dryRun: true,
  scope,
})

await sdkPost("/api/sdk/v1/sessions/end", {
  sessionId,
  status: "closed",
  scope,
})

If you need explicit raw snapshot creation in the same flow, use local CLI/MCP lifecycle tools (memories session snapshot or snapshot_session) and then consume /api/sdk/v1/sessions/{sessionId}/snapshot.
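Consuming the snapshot endpoint is a plain authenticated GET, mirroring the bearer-token pattern of sdkPost above. A sketch (encodeURIComponent is added defensively and is an assumption, as are the helper names):

```typescript
// Build the snapshot URL for a session, per the route above.
function snapshotUrl(baseUrl: string, sessionId: string): string {
  return `${baseUrl}/api/sdk/v1/sessions/${encodeURIComponent(sessionId)}/snapshot`
}

// Fetch it with the same bearer-token auth used elsewhere on this page.
async function getSnapshot(baseUrl: string, apiKey: string, sessionId: string) {
  const res = await fetch(snapshotUrl(baseUrl, sessionId), {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  if (!res.ok) throw new Error(`snapshot fetch failed: ${res.status}`)
  return res.json()
}
```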

Recommended lifecycle order:

  1. Start session (/sessions/start)
  2. Repeated context.get() with lifecycle hints
  3. Checkpoint before compaction-boundary turns
  4. Optional raw snapshot on reset/handoff boundary
  5. End session (/sessions/end)

Management APIs (Copy-Paste)

MemoriesClient.management.*

import { MemoriesClient } from "@memories.sh/core"

const client = new MemoriesClient({
  apiKey: process.env.MEMORIES_API_KEY!,
  baseUrl: "https://memories.sh",
  transport: "sdk_http",
})

const keyStatus = await client.management.keys.get()
const rotatedKey = await client.management.keys.create({
  expiresAt: "2027-01-01T00:00:00.000Z",
})
const revoked = await client.management.keys.revoke()

const tenantMappings = await client.management.tenants.list()
const upsertedTenant = await client.management.tenants.upsert({
  tenantId: "acme-prod",
  mode: "provision",
})
const disabledTenant = await client.management.tenants.disable("acme-prod")

void [keyStatus, rotatedKey, revoked, tenantMappings, upsertedTenant, disabledTenant]

memoriesManagement()

import { memoriesManagement } from "@memories.sh/ai-sdk"

const management = memoriesManagement({
  apiKey: process.env.MEMORIES_API_KEY!,
  baseUrl: "https://memories.sh",
})

const keyStatus = await management.keys.get()
const rotatedKey = await management.keys.create({
  expiresAt: "2027-01-01T00:00:00.000Z",
})
const revoked = await management.keys.revoke()

const tenantMappings = await management.tenants.list()
const upsertedTenant = await management.tenants.upsert({
  tenantId: "acme-prod",
  mode: "provision",
})
const disabledTenant = await management.tenants.disable("acme-prod")

void [keyStatus, rotatedKey, revoked, tenantMappings, upsertedTenant, disabledTenant]

Availability

The SDK is available on the Enterprise plan. Contact us to get started.
