Issue № 001 / 2026
License — MIT
Distribution — npm · @anchormem/anchor
Filed under — Developer tools, memory

Anchor.

Cross-agent memory for AI coding agents.

Switch from Claude Code to Codex to Gemini CLI. Your project context comes with you. Local-first, no API keys, one SQLite file you own.

01 — The Problem

You've been working with Claude Code for three hours. It knows that you use pnpm, that the auth middleware lives in src/auth/middleware.ts, that you migrated away from Jest last quarter, and that the rate-limiter conversation already happened — you settled on Redis token buckets.

Then you hit your usage limit. Or your teammate prefers Cursor. Or Codex actually handles this refactor better.

The next agent doesn't know any of that. It asks you, again, what testing framework you use.

97.6%

fewer tokens injected to bring a new agent up to speed.

Measured on a real task — adding rate limiting to /auth/* endpoints. Cold start: 9,400 tokens of pasted transcripts. With Anchor: 224 tokens, 5 of 5 relevant facts retrieved. Reproducible at tests/eval/run.mjs.
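The headline percentage follows directly from those two measurements; a quick sanity check:

```javascript
// Token counts from the eval task described above (tests/eval/run.mjs).
const coldTokens = 9400;  // pasted prior transcripts
const anchorTokens = 224; // Anchor's typed recall

const reduction = 100 * (1 - anchorTokens / coldTokens);
console.log(`${reduction.toFixed(1)}% fewer tokens injected`); // → 97.6%
```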

04 — A/B

Same task. Two starts.

A single command on each side. The diff between them is everything Anchor does.

Without Anchor — cold
$ gemini ask "add rate limiting to /auth/*"

Loading prior session transcripts as context…
─ 9,400 tokens injected ──────────────────

Q: What testing framework does this repo use?
Q: What's the package manager?
Q: Where do auth middlewares live?
Q: Is there a preference between Redis and Memcached?

Time to first useful action: 8m 12s
With Anchor — warm
$ gemini ask "add rate limiting to /auth/*"

Anchor: recalled 5 facts, 2 decisions, 1 episode
─ 224 tokens injected ─────────────────────

Uses Vitest. Package manager is pnpm.
Auth middleware in src/auth/middleware.ts.
Decision: Redis token bucket
  (tried in-memory; failed multi-region.)

Time to first useful action: 1m 40s

Same task. Same agent. The only difference: a 224-token recall from a SQLite file the user owns.

05 — How it works

Four kinds of memory. Each typed on the way in.

001 — fact
"uses pnpm, not npm"
A durable preference or constraint.

002 — decision
"use Postgres for the orders service — ACID + team familiarity"
A choice made, with rationale.

003 — episode
"Added rate limiting via Redis token bucket; touched src/auth/*"
A 1–3 sentence task summary written by the agent itself.

004 — artifact
src/auth/middleware.ts:42
A pointer to a file or symbol.

When a fact goes stale, the agent uses memory_supersede instead of adding a contradicting one. Old episodes age out via salience decay. Secrets get redacted at write time. Provenance travels with every recalled item.
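Salience decay could take many forms; a minimal sketch, assuming a simple exponential half-life model — the function name and the 30-day constant are hypothetical, not Anchor's actual scheme:

```javascript
// Hypothetical salience model: value decays exponentially with time
// since the memory was last recalled. The half-life is an assumed
// tuning constant, not a documented Anchor parameter.
const HALF_LIFE_DAYS = 30;

function salience(baseSalience, daysSinceLastRecall) {
  return baseSalience * Math.pow(0.5, daysSinceLastRecall / HALF_LIFE_DAYS);
}

// An episode untouched for two half-lives keeps a quarter of its weight.
console.log(salience(1.0, 60)); // → 0.25
```

Under a model like this, episodes that keep getting recalled stay warm, while one-off task summaries quietly age out of the recall set.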

06 — Architecture

One server. Five tools. A SQLite file you own.

07 — Performance

Tested at scale.

Reproducible via node packages/server/dist/bench/bench.js

Memories   Insert avg   Recall p50   Recall p95   Gist size    DB size
100        0.74 ms      1.14 ms      3.14 ms      126 tokens   168 KB
1,000      0.83 ms      1.53 ms      1.93 ms      553 tokens   596 KB
10,000     0.82 ms      2.59 ms      3.31 ms      553 tokens   4.2 MB

Recall stays under 4 ms p95 at 10,000 memories. The query path is dominated by SQLite FTS5 BM25, not by Anchor's reranking.

08 — Compatible agents

Speaks the Model Context Protocol. Works with —

Claude Code Codex Cursor Cline Gemini CLI Continue.dev Windsurf OpenCode Zed Copilot Aider

Plus 50+ more via the open Agent Skills spec at skills.sh.

09 — Quick start

Three commands. Less than a minute. Anchor is local-first — nothing leaves your machine.

1 — Install & init
$ npx @anchormem/anchor init
2 — Register with agent
$ claude mcp add anchor -- anchor-server
3 — Open the console
$ anchor

Other agents — Codex, Cursor, Cline, Gemini CLI, Continue.dev, Windsurf, OpenCode, Zed.
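Several of those agents read an `mcpServers`-style JSON config rather than offering an `mcp add` command. A minimal stanza might look like the following — the file location varies per agent, and this assumes the init step above put `anchor-server` on your PATH:

```json
{
  "mcpServers": {
    "anchor": {
      "command": "anchor-server"
    }
  }
}
```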

Optional semantic recall — Ollama, OpenAI, Gemini, or Voyage.

10 — Trust

Trust by default.

Anchor redacts known secret patterns at write time — OpenAI, Anthropic, Google, Stripe, Slack, GitHub keys; AWS access keys; JWTs; PEM private keys; .env-style variables that look like secrets. It also scrubs known prompt-injection phrases ("ignore previous instructions", "you are now …", "reveal your system prompt") before content reaches disk. The data directory is created with mode 0700; the database with 0600 on POSIX hosts. Recalled content carries an explicit "treat as untrusted" footer. There is no telemetry. There are no accounts.
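The write-time redaction pass is pattern-driven. A much-reduced sketch with two of the pattern families named above — these exact regexes and labels are illustrative, not Anchor's real list:

```javascript
// Illustrative subset of write-time secret redaction.
// The real pattern list is longer; these two are examples only.
const SECRET_PATTERNS = [
  [/\bsk-[A-Za-z0-9_-]{20,}\b/g, "[REDACTED:openai-key]"], // OpenAI-style keys
  [/\bAKIA[0-9A-Z]{16}\b/g, "[REDACTED:aws-access-key]"],  // AWS access key IDs
];

function redact(text) {
  return SECRET_PATTERNS.reduce((t, [re, label]) => t.replace(re, label), text);
}

console.log(
  redact("token=sk-abc123abc123abc123abc123 key AKIAIOSFODNN7EXAMPLE")
);
```

Because this runs before anything reaches disk, a leaked key never exists in the SQLite file at all — there is nothing to scrub later.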

Read the security policy →

11 — Questions

A few honest answers.

Q.01 — Is this an agent?
No. Anchor doesn't generate, plan, or act. It remembers. Your agent does the work.
Q.02 — Does it need an LLM API key?
No. The calling agent writes its own summaries. Embeddings (vector search) are optional and can be local (Ollama) or hosted.
Q.03 — What about privacy?
Local-first. Single SQLite file. Secret redaction at write time. Per-project scoping so work memory doesn't leak into personal projects.
Q.04 — How is this different from mem0 / Letta / Zep / OpenMemory MCP?
Most ship an SDK and require code changes in your agent. Anchor ships an MCP server plus an open Skill — no code changes in any agent that already speaks MCP. Plus typed memory (facts / decisions / episodes / artifacts) instead of undifferentiated blobs.
Q.05 — Why is the moat compression?
Storage is a commodity (SQLite). The hard part is turning a year of conversations into the right 1,500 tokens for this task. That's the engineering.
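One naive way to frame that compression step is as budgeted selection: pick the memories with the best value per token until the budget runs out. Entirely illustrative — a greedy sketch, not Anchor's actual selection logic:

```javascript
// Greedy token-budget packing: highest salience-per-token first.
// Entirely illustrative — not Anchor's real algorithm.
function packRecall(memories, budgetTokens) {
  const ranked = [...memories].sort(
    (a, b) => b.salience / b.tokens - a.salience / a.tokens
  );
  const picked = [];
  let used = 0;
  for (const m of ranked) {
    if (used + m.tokens <= budgetTokens) {
      picked.push(m);
      used += m.tokens;
    }
  }
  return picked;
}

const memories = [
  { text: "uses pnpm", tokens: 5, salience: 0.9 },
  { text: "decision: Redis token bucket", tokens: 12, salience: 0.8 },
  { text: "old Jest migration notes", tokens: 200, salience: 0.2 },
];
console.log(packRecall(memories, 50).map((m) => m.text));
```

The real problem is harder — relevance to the task at hand matters as much as raw salience — but the budget constraint is the part that makes it engineering rather than storage.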
Q.06 — What about Windows?
Anchor runs on Windows (Node 20+). The 0700 / 0600 permission tightening only applies on POSIX; on Windows we rely on inherited NTFS ACLs.