
Memory & Commands

James opened WhatsApp the next morning and sent: "What formatting preferences do I have for summaries?"

The agent responded with a generic list of formatting best practices. Nothing about the bullet-point format James had asked for yesterday. Nothing about the 100-word limit.

"It forgot everything I told it."

Emma set her coffee down. "Session memory fades when the conversation ends. The workspace files you edited in Lesson 4 persist because they are files on disk. Anything you said in conversation is gone when the session closes."

"So I have to re-explain my preferences every morning?"

"No. You tell it to save them to memory." She stood up. "Send it this: 'Save in memory: I prefer all summaries under 100 words as bullet points.' Then close the conversation, start a new one, and ask if it remembers. When I get back, show me WHERE it stored the memory and PROVE it persisted." She picked up her coffee and left.


You are doing exactly what James is doing. Your agent's session memory fades, and you need preferences that survive across conversations.

Store a Preference

Send this to your agent on WhatsApp:

Save in memory: I prefer all summaries under 100 words as bullet points.
Never use numbered lists for summaries.

Start your message with "Save in memory:" to trigger an actual file write. If you use "Remember this:" instead, the agent may respond with "Got it, noted!" without actually writing anything to disk. The verbal confirmation is not proof.

Verify on Disk

Open a terminal and read the file:

cat ~/.openclaw/workspace/MEMORY.md

You should see your preference stored on disk. The output will look something like this:

# Preferences
- Summary format: Always under 100 words, use bullet points, never numbered lists.

If the file does not exist or does not contain your preference, the agent acknowledged your request without actually saving it. Send the request again with "Save in memory:" at the start.
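The check can be scripted. This is a sketch only: the path `~/.openclaw/workspace/MEMORY.md` comes from the lesson, and the demo below uses a temporary file in its place so it runs anywhere; swap in the real path on your machine.

```shell
# Demo of the disk check. A temp file stands in for
# ~/.openclaw/workspace/MEMORY.md so the snippet is self-contained.
memfile=$(mktemp)
printf '# Preferences\n- Summary format: under 100 words, bullet points.\n' > "$memfile"

# grep exits 0 only if the preference string is actually on disk,
# so this distinguishes a real write from a verbal "Got it".
if grep -qi 'bullet points' "$memfile"; then
  echo "saved: preference found on disk"
else
  echo "NOT saved: re-send with 'Save in memory:'"
fi
```

The exit status of `grep` is the whole test: a zero exit means the text is in the file, which no chat response can fake.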

Verify in the Dashboard

Open the OpenClaw dashboard and look at your recent chat. You will see a tool badge next to the agent's response indicating a file write operation. This badge is visual proof the agent wrote to disk.

Verbal Confirmation Is Not Enough

If your agent says "Got it" or "I've noted that" but you do not see a tool badge for a file write, the preference was NOT saved to disk. The agent confirmed your intent without acting on it. Send the request again with "Save in memory:" at the start and check for the tool badge.

Test Across Sessions

Close the conversation. Start a new one. Then ask:

What are my preferences for summaries?

The agent recalls your bullet-point preference without you repeating it. It did not re-read your old conversation. It read from the memory file that loaded when the session started.

How the Memory System Works

Now that you have experienced it, here is what is happening under the hood. Your agent has two memory locations in ~/.openclaw/workspace/:

| Location | What It Stores | When It Loads |
|---|---|---|
| MEMORY.md | Curated long-term memory: preferences, facts, key decisions | Every session start |
| memory/YYYY-MM-DD.md | Daily logs: session notes, conversation summaries | Today + yesterday at session start |

MEMORY.md: The Curated Notebook

You just wrote to this file. It loads at the start of every session, so the agent always has access to your stored preferences.

memory/ Directory: The Daily Journal

The memory/ directory holds daily logs named by date. The agent writes session notes and observations here automatically. Check yours:

ls ~/.openclaw/workspace/memory/

You will see one file per day your agent has been active.

memory_search: Finding Old Notes

Today's log and yesterday's log load automatically. Everything older stays on disk but does not load at session start. When the agent needs something from an older log, it uses memory_search: hybrid retrieval that combines vector similarity (matching meaning) with keyword matching (matching exact terms). This finds relevant notes even when the wording in the stored note differs from your question.
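You can approximate one half of this yourself. The sketch below shows only the keyword side of the retrieval, using `grep` over a simulated memory/ directory; the file names and contents are invented for the demo, and the vector-similarity half has no shell equivalent.

```shell
# Keyword-matching half of memory_search only, as a sketch.
# A temp dir simulates ~/.openclaw/workspace/memory/ with two
# made-up daily logs.
mem=$(mktemp -d)
echo "Drafted the invoice template with Emma" > "$mem/2025-01-10.md"
echo "Configured the WhatsApp webhook"        > "$mem/2025-01-11.md"

# -r recurse, -i case-insensitive, -l print matching file names only
grep -ril 'invoice' "$mem"
```

Only the log mentioning "invoice" is returned. The real tool goes further: vector similarity would also surface a note that says "billing document" even though the exact word "invoice" never appears in it.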

The Loading Summary

Session starts:
├── MEMORY.md → always loads (curated, long-term)
├── memory/today.md → always loads (today's journal)
├── memory/yesterday.md → always loads (yesterday's journal)
└── memory/older/*.md → available via memory_search only

The design keeps context small. Loading every daily log since installation would burn tokens on irrelevant history. The agent loads recent context automatically and searches older context on demand.
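The load rule can be expressed in a few lines. This is a sketch of the rule, not the agent's actual loader (which is internal); it assumes GNU `date`, with a BSD fallback for the yesterday calculation.

```shell
# Sketch of the session-start load rule: MEMORY.md plus the daily
# logs for today and yesterday. Everything older is search-only.
today=$(date +%F)
yesterday=$(date -d yesterday +%F 2>/dev/null || date -v-1d +%F)
echo "loads: MEMORY.md memory/$today.md memory/$yesterday.md"
```

Everything not printed by that line stays on disk and is reachable only through memory_search.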

Why This Matters: Compaction

When a conversation runs long, the gateway compacts older turns into a summary to free context space. Before compacting, the agent is automatically reminded to save anything important to memory files. This is why persistent memory matters: session context can be summarized away at any time, but MEMORY.md and daily logs survive on disk. You can also trigger compaction manually with /compact if your conversation feels sluggish.

Slash Commands

Your agent responds to natural language, but it also accepts direct commands. Send this:

/help

Notice the response: instant, no "thinking" indicator, no tool badge. The gateway intercepted /help and returned the result directly. The model never processed your message. No tokens spent, no inference time.

Now send:

/status

You see diagnostics: your current model, session state, and quota information. Again, instant. The gateway knows this information without asking the model.

Compare with Natural Language

Ask the same thing in natural language:

What model are you currently using?

The response is slower. You may see a thinking indicator or tool badge. The model had to interpret your question, reason about it, and compose a response. The speed difference you just experienced is the proof: slash commands bypass the model entirely.

Useful Commands

| Command | What It Does |
|---|---|
| /help | Lists available commands |
| /status | Shows model, session state, diagnostics |
| /model \<name\> | Switches model mid-conversation |
| /reset | Fresh session (clears conversation, keeps memory) |
| /compact | Summarizes older turns to free context space |
| /commands | Lists all available slash commands |

The /commands list grows as you enable plugins. After you install your first plugin in Lesson 6, run /commands again to see what it added.

Try With AI

Exercise 1: Build a Memory Profile

Tell your agent to save five things about you. Use "Save in memory:" for each one: your role, your preferred communication style, your timezone, a current project, and one thing the agent should never do. Close the conversation and start a new one. Ask "What do you know about me?"

Then verify with:

cat ~/.openclaw/workspace/MEMORY.md

Compare what the agent recalls versus what is actually on disk. Are they the same?

What you are learning: Persistent memory is how you train an agent over time. Each "Save in memory:" adds to MEMORY.md, building a profile that loads on every session start. The terminal check confirms the agent is not fabricating recall.
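One way to make the comparison concrete is to count what is on disk. The snippet below is a sketch: a temp file with invented entries stands in for `~/.openclaw/workspace/MEMORY.md`, and the `^- ` pattern assumes your entries are markdown bullets as shown earlier in the lesson.

```shell
# Audit sketch for Exercise 1: count the bullet entries actually on
# disk, then compare the number with what the agent recites.
memfile=$(mktemp)
printf -- '- Role: ops lead\n- Timezone: UTC+1\n- Style: terse bullets\n- Project: Q3 audit\n- Never: numbered lists\n' > "$memfile"

grep -c '^- ' "$memfile"   # prints the entry count: 5
```

If the agent recites four preferences but the count is five, it dropped one; if it recites six, it invented one.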

Exercise 2: Dashboard Detective

Open the dashboard and scroll through your recent chat history. Count the tool badges. Which of your messages triggered file writes? Which got verbal acknowledgment only?

Send a message that starts with "Remember this: my favorite color is blue." Then send another: "Save in memory: my preferred meeting length is 25 minutes." Check the dashboard for tool badges on each. Did both write to disk, or did only one?

What you are learning: The dashboard is your verification layer. Tool badges are evidence of action. Verbal confirmation without a tool badge means the agent understood your intent but may not have acted on it.

Exercise 3: Gateway vs Model

Send /status. Then ask "What is your current status?" in natural language. Time both responses. Which was faster? Which showed a tool badge or thinking indicator?

For a bonus, try switching models mid-conversation:

/model gemini-2.5-flash

Ask a question. Then switch back:

/model gemini-2.5-pro

What you are learning: Slash commands are gateway-intercepted. They cost zero tokens and respond instantly. Natural language goes through the full model pipeline. The /model command lets you switch inference providers without restarting your session.


When Emma came back, James had the terminal and the dashboard open side by side. He pointed at the terminal first.

"MEMORY.md has six entries. My name, timezone, summary format, report style, and two things it decided to remember on its own from yesterday's conversation." He tapped the dashboard. "Every entry has a matching tool badge in the chat. That is how I know the writes actually happened."

"And the commands?"

James switched to WhatsApp. "I sent /status and then asked the same question in plain English. /status came back in under a second. The natural language version took three seconds and showed a thinking indicator." He paused. "At my old warehouse job, we had two systems: the employee handbook that everyone got on day one, and the daily shift log the floor manager kept. The handbook had the big picture rules. The shift log was 'what happened today.' MEMORY.md is the handbook. The daily logs are the shift log. And if you needed to find something from a shift three months ago, you searched the archive. You did not read every log since January."

Emma tilted her head. "That actually maps. The handbook loads every session. The shift log loads for today and yesterday. Everything older goes through search."

"And slash commands?"

"The walkie-talkie," James said. "Direct channel. No interpretation needed. You do not ask the walkie-talkie to think about your request."

Emma nodded. Then she looked at MEMORY.md again. "One thing. I once crammed about two hundred entries into MEMORY.md for a project. Preferences, facts, project notes, meeting summaries. The agent started ignoring entries near the bottom because the file was so long it pushed past the useful part of the context window. I had to go back and curate it down to the thirty entries that actually mattered." She closed the terminal. "Keep MEMORY.md short. It is a curated notebook, not a dump file."

James opened MEMORY.md in his editor. "Six entries. Noted."

"What if I outgrow this setup? Hundreds of daily logs, and the search gets slow, or I want the agent to build a profile of me automatically instead of me saying 'save in memory' every time?"

Emma pulled up the memory docs. "The built-in engine you are using now is SQLite with vector and keyword search. It handles most personal setups. When you outgrow it, there are two alternatives worth reading about: QMD is a local search sidecar that adds reranking and can index entire project directories, not just your workspace. Honcho is an AI-native memory service that builds user profiles automatically from conversations and works across multiple agents." She closed the tab. "You do not need either today. But when you hit the limits of the built-in engine, those are the two upgrades. Read the docs when you are ready."

"Your agent remembers now," Emma said. "Next question is whether it knows enough. You might want it to know financial modeling or legal review. That is what skills and the ecosystem are for."
