Tutorial: Memory-Enabled Chatbot
Build a chatbot that remembers users across sessions using RetainDB and OpenAI.
Applies to: All Users
Build a chatbot that remembers each user's preferences, past conversations, and context — automatically, across every session.
What you'll build
A chat handler where:
- Memories accumulate with every conversation
- Preferences and facts carry forward without repetition
- The more a user chats, the better the responses
Prerequisites
- Node.js 18+
- RetainDB API key (`RETAINDB_KEY`)
- OpenAI API key (`OPENAI_API_KEY`)
Setup
```shell
npm init -y
npm install @retaindb/sdk openai

export RETAINDB_KEY="rdb_..."
export OPENAI_API_KEY="sk-..."
```
The complete implementation
```typescript
import { RetainDB } from "@retaindb/sdk";
import OpenAI from "openai";

const db = new RetainDB({ apiKey: process.env.RETAINDB_KEY });
const openai = new OpenAI();

export async function chat(
  userId: string,
  messages: Array<{ role: "user" | "assistant" | "system"; content: string }>
) {
  const { response } = await db.user(userId).runTurn({
    messages,
    generate: (ctx) =>
      openai.chat.completions.create({
        model: "gpt-4o",
        messages: ctx.messages, // user's memory injected automatically
      }),
  });
  return response.choices[0].message.content;
}
```
That's the full chatbot. `runTurn` retrieves relevant memories, injects them into the system prompt, calls OpenAI, and stores the conversation, all in one call.
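To build intuition for what `runTurn` hands to your `generate` callback, here is a simplified, standalone model of the injection step. The `Message` type and `injectMemories` function are illustrative names of my own, not part of the RetainDB SDK; the SDK's real retrieval and ranking are internal.

```typescript
// Illustrative only: a simplified model of what runTurn does before
// calling your generate callback. Not the SDK's actual internals.
type Message = { role: "user" | "assistant" | "system"; content: string };

function injectMemories(memories: string[], messages: Message[]): Message[] {
  if (memories.length === 0) return messages;
  // Retrieved memories become a single system message, so the model
  // sees the user's context before the conversation itself.
  const context: Message = {
    role: "system",
    content: `User context:\n${memories.map((m) => `- ${m}`).join("\n")}`,
  };
  return [context, ...messages];
}

const enriched = injectMemories(
  ["Prefers TypeScript over JavaScript"],
  [{ role: "user", content: "How should I structure this project?" }]
);
// enriched[0] is the injected system message; the user turn follows it.
```

This is why `ctx.messages` in the handler above can be passed straight to OpenAI: the memory context arrives as an ordinary system message at the front of the array.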
With session isolation
If you want memories scoped to individual conversations:
```typescript
export async function chat(
  userId: string,
  sessionId: string,
  messages: Array<{ role: "user" | "assistant" | "system"; content: string }>
) {
  const { response } = await db.user(userId).session(sessionId).runTurn({
    messages,
    generate: (ctx) =>
      openai.chat.completions.create({
        model: "gpt-4o",
        messages: ctx.messages,
      }),
  });
  return response.choices[0].message.content;
}
```
Storing explicit facts
When a user states a preference or fact, store it directly:
```typescript
// User said "I prefer TypeScript over JavaScript"
await db.user(userId).remember("Prefers TypeScript over JavaScript");
```
It will surface in future `getContext` calls automatically.
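If you want to trigger `remember` automatically rather than by hand, one approach is a lightweight filter over incoming user turns. The sketch below is a deliberately crude keyword heuristic of my own, not an SDK feature; production systems often use an LLM call or structured extraction to decide what is worth persisting.

```typescript
// A minimal heuristic sketch (not part of the RetainDB SDK) for deciding
// whether a user turn states a durable preference or fact.
function looksLikeFact(utterance: string): boolean {
  // First-person statements of preference or identity are good candidates.
  return /\b(I prefer|I like|I work|I am|my name is)\b/i.test(utterance);
}

looksLikeFact("I prefer TypeScript over JavaScript"); // true
looksLikeFact("What's the weather like?"); // false
```

In a handler, you would call `await db.user(userId).remember(utterance)` only when the filter returns true, keeping noise out of the memory store.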
Manually retrieving context
When you want control over how context is used:
```typescript
const { context } = await db.user(userId).getContext("How to answer this user?");

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: `User context:\n${context}` },
    ...messages,
  ],
});
```
What happens over time
After a few conversations:
```text
User: "I work at Acme as a senior engineer"
→ stored as factual memory

User: "I prefer concise, technical answers"
→ stored as preference memory

User: "We just deployed the new auth system"
→ stored as event memory
```

Next session, when the user asks anything, `getContext` returns the relevant subset and the system prompt is enriched automatically.
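The factual/preference/event split above can be sketched as a classifier. How RetainDB actually types memories is internal to the service; the keyword rules below are a hypothetical stand-in purely to make the three categories concrete.

```typescript
// Hypothetical sketch of the classification illustrated above; the real
// memory typing happens inside RetainDB, not in your application code.
type MemoryKind = "preference" | "event" | "factual";

function classify(utterance: string): MemoryKind {
  if (/\b(prefer|like|favorite|rather)\b/i.test(utterance)) return "preference";
  if (/\b(deployed|shipped|launched|finished|just)\b/i.test(utterance)) return "event";
  return "factual"; // default: durable statements about the user
}

classify("I prefer concise, technical answers"); // "preference"
classify("We just deployed the new auth system"); // "event"
classify("I work at Acme as a senior engineer"); // "factual"
```

The practical takeaway is that different kinds of memories age differently: preferences and facts stay relevant across sessions, while events matter most in the conversations that follow them.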