Tools
memoriesTools() gives LLMs direct access to read and write memory.
For agent loops where the LLM should actively manage its own memory, use memoriesTools(). It gives the model tools to get context, store, search, edit, list, and forget memories.
Tool Bundle
import { generateText, stepCountIs } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesTools } from "@memories.sh/ai-sdk"
const { text } = await generateText({
model: openai("gpt-4o"),
tools: memoriesTools({ tenantId: "acme-prod" }),
stopWhen: stepCountIs(5),
system: "You have persistent memory. Use getContext at conversation start.",
prompt: userMessage,
})
memoriesTools() returns all six tools as a single object you can spread into your tools config.
Scope Model
tenantId = AI SDK Project (security/database boundary)
userId = end-user scope inside tenantId
projectId = optional repo context filter (not an auth boundary)
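The hierarchy above can be sketched as a plain type. Note that MemoryScope and resolveScope are illustrative only, not exports of @memories.sh/ai-sdk; they just show how the three identifiers relate:

```typescript
// Hypothetical helper illustrating the scope hierarchy.
// tenantId is the only hard boundary; userId and projectId narrow within it.
type MemoryScope = {
  tenantId: string // security/database boundary (required)
  userId?: string // end-user partition inside the tenant
  projectId?: string // optional repo context filter, not an auth boundary
}

function resolveScope(scope: MemoryScope): MemoryScope {
  if (!scope.tenantId) {
    // Everything except tenantId is an optional filter
    throw new Error("tenantId is required")
  }
  return { userId: undefined, projectId: undefined, ...scope }
}
```

Passing only a tenantId yields tenant-wide memory; adding userId scopes reads and writes to one end user.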
Management Helper
Use memoriesManagement() when your app needs to rotate API keys or manage AI SDK Projects (tenantId mappings) without calling raw HTTP endpoints.
import { memoriesManagement } from "@memories.sh/ai-sdk"
const management = memoriesManagement({
apiKey: process.env.MEMORIES_API_KEY,
baseUrl: "https://memories.sh",
})
const keyStatus = await management.keys.get()
const rotatedKey = await management.keys.create({
expiresAt: "2027-01-01T00:00:00.000Z",
})
const revoked = await management.keys.revoke()
const sdkProjects = await management.tenants.list()
const upsertedProject = await management.tenants.upsert({
tenantId: "acme-prod",
mode: "provision",
})
const disabledProject = await management.tenants.disable("acme-prod")
void [keyStatus, rotatedKey, revoked, sdkProjects, upsertedProject, disabledProject]
Individual Tools
For fine-grained control, import tools individually:
import {
getContext,
storeMemory,
searchMemories,
editMemory,
forgetMemory,
listMemories,
} from "@memories.sh/ai-sdk"
import { openai } from "@ai-sdk/openai"
const { text } = await generateText({
model: openai("gpt-4o"),
tools: {
recall: getContext({ tenantId: "acme-prod" }),
remember: storeMemory({ tenantId: "acme-prod" }),
search: searchMemories({ tenantId: "acme-prod" }),
edit: editMemory({ tenantId: "acme-prod" }),
forget: forgetMemory({ tenantId: "acme-prod" }),
list: listMemories({ tenantId: "acme-prod" }),
},
prompt: userMessage,
})
getContext(config?)
Fetches rules and relevant memories for a query. The primary "read" tool.
storeMemory(config?)
Stores a new memory. Accepts content, type, tags, and paths.
searchMemories(config?)
Full-text search across all memories. Returns ranked results.
editMemory(config?)
Updates an existing memory by ID.
forgetMemory(config?)
Soft-deletes a memory by ID.
listMemories(config?)
Lists memories with optional filters for type, tags, and scope.
System Prompt Helper
Use memoriesSystemPrompt() to generate optimized instructions for tool usage:
import { memoriesSystemPrompt } from "@memories.sh/ai-sdk"
const system = memoriesSystemPrompt({
includeInstructions: true,
persona: "coding assistant",
rules: preloadedRules, // optional: inject rules directly
})
This generates a system prompt that tells the model when and how to use memory tools effectively.
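To make the options concrete, here is a rough sketch of the kind of assembly those options imply. This is not the library's implementation, and its real output will differ; buildSystemPrompt is a hypothetical stand-in:

```typescript
// Illustrative only: shows how persona, includeInstructions, and rules
// might combine into a single system prompt string.
function buildSystemPrompt(opts: {
  persona?: string
  includeInstructions?: boolean
  rules?: string[]
}): string {
  const parts: string[] = []
  if (opts.persona) {
    parts.push(`You are a ${opts.persona} with persistent memory.`)
  }
  if (opts.includeInstructions) {
    parts.push(
      "Call getContext at the start of a conversation; use storeMemory for durable facts.",
    )
  }
  if (opts.rules?.length) {
    // Preloaded rules are injected verbatim so the model sees them without a tool call
    parts.push(`Rules:\n${opts.rules.join("\n")}`)
  }
  return parts.join("\n\n")
}
```

Injecting rules directly (the `rules` option) saves the model a getContext round-trip when you have already fetched them server-side.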
Auto-Store Callback
Use createMemoriesOnFinish() to automatically store learnings after each response:
import { streamText } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesTools, createMemoriesOnFinish } from "@memories.sh/ai-sdk"
const result = streamText({
model: openai("gpt-4o"),
tools: memoriesTools({ tenantId: "acme-prod" }),
prompt: userMessage,
onFinish: createMemoriesOnFinish({
mode: "tool-calls-only", // or "auto-extract"
tenantId: "acme-prod",
userId: "user_123",
}),
})
Modes
tool-calls-only: only stores memories when the model explicitly calls storeMemory. Explicit and predictable.
auto-extract: calls your extractMemories(payload) function and stores whatever it returns. No built-in extraction runs unless you provide that function.
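For auto-extract mode, a minimal extractMemories sketch might look like the following. The payload shape and candidate-memory shape here are assumptions for illustration, not the documented contract of createMemoriesOnFinish:

```typescript
// Hypothetical extractMemories for "auto-extract" mode. We assume the
// payload carries the final response text and that returned objects are
// stored as memories.
type ExtractPayload = { text: string }
type CandidateMemory = { content: string; type: "fact" }

function extractMemories(payload: ExtractPayload): CandidateMemory[] {
  // Naive heuristic: treat lines beginning with "Decision:" as durable facts
  return payload.text
    .split("\n")
    .filter((line) => line.startsWith("Decision:"))
    .map((line) => ({
      content: line.replace(/^Decision:\s*/, ""),
      type: "fact",
    }))
}
```

In practice you would likely use a cheap model call here instead of string matching; the key property is that nothing is stored unless your function returns it.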
Combining Middleware + Tools
Use middleware for automatic context injection and tools for writes:
import { generateText, wrapLanguageModel, stepCountIs } from "ai"
import { openai } from "@ai-sdk/openai"
import { memoriesMiddleware, storeMemory, forgetMemory } from "@memories.sh/ai-sdk"
const model = wrapLanguageModel({
model: openai("gpt-4o"),
middleware: memoriesMiddleware({ tenantId: "acme-prod" }),
})
const { text } = await generateText({
model,
tools: {
store: storeMemory({ tenantId: "acme-prod" }),
forget: forgetMemory({ tenantId: "acme-prod" }),
},
stopWhen: stepCountIs(3),
prompt: "Remember that we decided to use Supabase for auth",
})
This gives you the best of both worlds: automatic reads via middleware, explicit writes via tools.