Tools
`memoriesTools()` gives LLMs direct access to read and write memory.
For agent loops where the LLM should actively manage its own memory, use `memoriesTools()`. This gives the model tools to get context, store memories, search, edit, list, and forget.
`memoriesTools()` currently focuses on memory and skill-file CRUD. Session lifecycle endpoints (`/sessions/*`) and consolidation (`/memories/consolidate`) are called directly via SDK HTTP routes from your backend.
Tool Bundle
```typescript
import { generateText, stepCountIs } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesTools } from "@memories.sh/ai-sdk"

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools: memoriesTools({ tenantId: "acme-prod" }),
  stopWhen: stepCountIs(5),
  system: "You have persistent memory. Use getContext at conversation start.",
  prompt: userMessage,
})
```

`memoriesTools()` returns all tools as a single object you can spread into your `tools` config.
Scope Model
- `tenantId` = AI SDK Project (security/database boundary)
- `userId` = end-user scope inside `tenantId`
- `projectId` = optional repo context filter (not an auth boundary)
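As a sketch, the three scope fields above can be modeled like this (the type name `MemoriesScope` is illustrative, not an SDK export; the values match the examples used elsewhere on this page):

```typescript
// Illustrative shape for the scope fields described above.
type MemoriesScope = {
  tenantId: string // AI SDK Project: the security/database boundary
  userId?: string // end-user scope inside the tenant
  projectId?: string // optional repo context filter, not an auth boundary
}

const scope: MemoriesScope = {
  tenantId: "acme-prod",
  userId: "user_123",
  projectId: "github.com/acme/platform",
}
```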
Management Helper
Use `memoriesManagement()` when your app needs to rotate API keys or manage AI SDK Projects (`tenantId` mappings) without calling raw HTTP endpoints.
```typescript
import { memoriesManagement } from "@memories.sh/ai-sdk"

const management = memoriesManagement({
  apiKey: process.env.MEMORIES_API_KEY,
  baseUrl: "https://memories.sh",
})

const keyStatus = await management.keys.get()
const rotatedKey = await management.keys.create({
  expiresAt: "2027-01-01T00:00:00.000Z",
})
const revoked = await management.keys.revoke()

const sdkProjects = await management.tenants.list()
const upsertedProject = await management.tenants.upsert({
  tenantId: "acme-prod",
  mode: "provision",
})
const disabledProject = await management.tenants.disable("acme-prod")

void [keyStatus, rotatedKey, revoked, sdkProjects, upsertedProject, disabledProject]
```

Individual Tools
For fine-grained control, import tools individually:
```typescript
import { generateText } from "ai"
import { openai } from "@ai-sdk/openai"
import {
  getContext,
  storeMemory,
  searchMemories,
  editMemory,
  forgetMemory,
  listMemories,
  bulkForgetMemories,
  vacuumMemories,
} from "@memories.sh/ai-sdk"

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools: {
    recall: getContext({ tenantId: "acme-prod" }),
    remember: storeMemory({ tenantId: "acme-prod" }),
    search: searchMemories({ tenantId: "acme-prod" }),
    edit: editMemory({ tenantId: "acme-prod" }),
    forget: forgetMemory({ tenantId: "acme-prod" }),
    list: listMemories({ tenantId: "acme-prod" }),
    bulkForget: bulkForgetMemories({ tenantId: "acme-prod" }),
    vacuum: vacuumMemories({ tenantId: "acme-prod" }),
  },
  prompt: userMessage,
})
```

getContext(config?)
Fetches rules and relevant memories for a query. The primary "read" tool.
You can pass lifecycle/compaction hints through the same tool input:
```typescript
const ctx = await getContext({ tenantId: "acme-prod" })({
  query: "summarize checkout timeout findings",
  projectId: "github.com/acme/platform",
  sessionId: "sess_abc123",
  budgetTokens: 6000,
  turnCount: 5,
  turnBudget: 24,
  lastActivityAt: new Date().toISOString(),
  inactivityThresholdMinutes: 45,
  taskCompleted: false,
})
```

storeMemory(config?)
Stores a new memory. Accepts content, type, tags, and paths.
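As a sketch, an input for this tool built from the fields listed above might look like the following; the values are illustrative and the exact schema may differ:

```typescript
// Illustrative storeMemory input using the fields named above
// (content, type, tags, paths).
const storeInput = {
  content: "Checkout requests time out after 30s under load",
  type: "fact",
  tags: ["checkout", "performance"],
  paths: ["services/checkout"],
}
```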
searchMemories(config?)
Full-text search across all memories. Returns ranked results.
editMemory(config?)
Updates an existing memory by ID.
forgetMemory(config?)
Soft-deletes a memory by ID.
listMemories(config?)
Lists memories with optional filters for type, tags, and scope.
bulkForgetMemories(config?)
Bulk soft-delete memories matching filters. Accepts `types`, `tags`, `olderThanDays`, `pattern`, `projectId`, `all`, and `dryRun`.
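Since this tool can delete broadly, a reasonable pattern is to preview first. A sketch of an input using the filters listed above (values are illustrative):

```typescript
// Illustrative bulkForgetMemories input. dryRun: true previews what would be
// forgotten without deleting anything.
const bulkForgetInput = {
  types: ["fact", "decision"],
  tags: ["checkout"],
  olderThanDays: 90,
  projectId: "github.com/acme/platform",
  dryRun: true,
}
```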
vacuumMemories(config?)
Permanently purges all soft-deleted memories to reclaim storage space.
Lifecycle Writes via SDK HTTP
Use direct SDK endpoint calls for session lifecycle and consolidation flows:
```typescript
const apiKey = process.env.MEMORIES_API_KEY!
const baseUrl = "https://memories.sh"

const scope = {
  tenantId: "acme-prod",
  userId: "user_123",
  projectId: "github.com/acme/platform",
}

async function sdkPost(path: string, body: Record<string, unknown>) {
  const res = await fetch(`${baseUrl}${path}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  })
  if (!res.ok) throw new Error(`SDK request failed: ${res.status}`)
  return res.json()
}

const started = await sdkPost("/api/sdk/v1/sessions/start", {
  title: "Investigate timeout",
  client: "agent-loop",
  scope,
})

await sdkPost("/api/sdk/v1/sessions/checkpoint", {
  sessionId: started.data.sessionId,
  content: "Captured checkpoint from tool loop",
  kind: "summary",
  scope,
})

await sdkPost("/api/sdk/v1/memories/consolidate", {
  types: ["rule", "decision", "fact"],
  dryRun: true,
  scope,
})

await sdkPost("/api/sdk/v1/sessions/end", {
  sessionId: started.data.sessionId,
  status: "closed",
  scope,
})
```

For explicit snapshot creation in local/dev agents, use the CLI/MCP lifecycle tools (`memories session snapshot` or `snapshot_session`), then read snapshots through `/api/sdk/v1/sessions/{sessionId}/snapshot`.
System Prompt Helper
Use `memoriesSystemPrompt()` to generate optimized instructions for tool usage:

```typescript
import { memoriesSystemPrompt } from "@memories.sh/ai-sdk"

const system = memoriesSystemPrompt({
  includeInstructions: true,
  persona: "coding assistant",
  rules: preloadedRules, // optional: inject rules directly
})
```

This generates a system prompt that tells the model when and how to use memory tools effectively.
Auto-Store Callback
Use `createMemoriesOnFinish()` to automatically store learnings after each response:
```typescript
import { streamText } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesTools, createMemoriesOnFinish } from "@memories.sh/ai-sdk"

const result = streamText({
  model: openai("gpt-4o"),
  tools: memoriesTools({ tenantId: "acme-prod" }),
  prompt: userMessage,
  onFinish: createMemoriesOnFinish({
    mode: "tool-calls-only", // or "auto-extract"
    tenantId: "acme-prod",
    userId: "user_123",
  }),
})
```

Modes
- `tool-calls-only`: only stores memories when the model explicitly calls `storeMemory`. Explicit and predictable.
- `auto-extract`: calls your `extractMemories(payload)` function and stores whatever it returns. No built-in extraction runs unless you provide that function.
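A minimal sketch of an `extractMemories` callback for `auto-extract` mode; the payload and return shapes here are assumptions, so adapt them to the types the SDK actually passes:

```typescript
// Assumed payload/return shapes for an auto-extract callback.
type ExtractPayload = { text: string }
type ExtractedMemory = { content: string; type: string; tags?: string[] }

function extractMemories(payload: ExtractPayload): ExtractedMemory[] {
  // Naive heuristic: store lines prefixed with "Decision:" as decision memories.
  return payload.text
    .split("\n")
    .filter((line) => line.startsWith("Decision:"))
    .map((line) => ({
      content: line.slice("Decision:".length).trim(),
      type: "decision",
      tags: ["auto-extracted"],
    }))
}
```

In practice you would likely replace the string heuristic with a cheap LLM call that summarizes the turn into storable facts.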
Combining Middleware + Tools
Use middleware for automatic context injection and tools for writes:
```typescript
import { generateText, wrapLanguageModel, stepCountIs } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesMiddleware, storeMemory, forgetMemory } from "@memories.sh/ai-sdk"

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: memoriesMiddleware({ tenantId: "acme-prod" }),
})

const { text } = await generateText({
  model,
  tools: {
    store: storeMemory({ tenantId: "acme-prod" }),
    forget: forgetMemory({ tenantId: "acme-prod" }),
  },
  stopWhen: stepCountIs(3),
  prompt: "Remember that we decided to use Supabase for auth",
})
```

This gives you the best of both worlds: automatic reads via middleware, explicit writes via tools.