Anchor.
Cross-agent memory for AI coding agents.
Switch from Claude Code to Codex to Gemini CLI. Your project context comes with you. Local-first, no API keys, one SQLite file you own.
You've been working with Claude Code for three hours. It knows that you use pnpm, that the auth middleware lives in src/auth/middleware.ts, that you migrated away from Jest last quarter, and that the rate-limiter conversation already happened — you settled on Redis token buckets.
Then you hit your usage limit. Or your teammate prefers Cursor. Or Codex actually handles this refactor better.
The next agent doesn't know any of that. It asks you, again, what testing framework you use.
97.6%
fewer tokens injected to bring a new agent up to speed.
Measured on a real task — adding rate limiting to /auth/* endpoints. Cold start: 9,400 tokens of pasted transcripts. With Anchor: 224 tokens, 5 of 5 relevant facts retrieved. Reproducible at tests/eval/run.mjs.
04 — A/B
Same task. Two starts.
A single command on each side. The diff between them is everything Anchor does.
```
$ gemini ask "add rate limiting to /auth/*"
Loading prior session transcripts as context…
─ 9,400 tokens injected ──────────────────
Q: What testing framework does this repo use?
Q: What's the package manager?
Q: Where do auth middlewares live?
Q: Is there a preference between Redis and Memcached?
Time to first useful action: 8m 12s
```
```
$ gemini ask "add rate limiting to /auth/*"
Anchor: recalled 5 facts, 2 decisions, 1 episode
─ 224 tokens injected ─────────────────────
Uses Vitest.
Package manager is pnpm.
Auth middleware in src/auth/middleware.ts.
Decision: Redis token bucket (tried in-memory; failed multi-region)
Time to first useful action: 1m 40s
```
Same task. Same agent. The only difference: a 224-token recall from a SQLite file the user owns.
05 — How it works
Four kinds of memory. Each typed on the way in.
- "use Postgres for the orders service — ACID + team familiarity"
- "Added rate limiting via Redis token bucket; touched src/auth/*"
- src/auth/middleware.ts:42
When a fact goes stale, the agent uses memory_supersede instead of adding a contradicting one. Old episodes age out via salience decay. Secrets get redacted at write time. Provenance travels with every recalled item.
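The supersede flow can be pictured as a single tool call. Only the tool name `memory_supersede` comes from the text above; the argument names and values below are hypothetical, sketched to show the shape of the operation (replace, don't contradict):

```json
{
  "tool": "memory_supersede",
  "arguments": {
    "old_memory_id": "mem_123",
    "new_content": "tests run with Vitest (migrated from Jest)",
    "reason": "stale fact: repo no longer uses Jest"
  }
}
```

The old fact stays in the database with its provenance, but recall prefers the superseding entry.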
06 — Architecture
One server. Five tools. A SQLite file you own.
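"One server" means one MCP server entry in each agent's client config, all pointing at the same SQLite file. The registration below is a sketch in the common `mcpServers` config shape; the command, package name, and env variable are placeholders, not Anchor's published install target:

```json
{
  "mcpServers": {
    "anchor": {
      "command": "npx",
      "args": ["anchor-mcp"],
      "env": { "ANCHOR_DB": "~/.anchor/memory.db" }
    }
  }
}
```

Point every agent at the same database path and they share the same memory.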
07 — Performance
Tested at scale.
Reproducible via node packages/server/dist/bench/bench.js
| Memories | Insert avg | Recall p50 | Recall p95 | Gist size | DB size |
|---|---|---|---|---|---|
| 100 | 0.74 ms | 1.14 ms | 3.14 ms | 126 tokens | 168 KB |
| 1,000 | 0.83 ms | 1.53 ms | 1.93 ms | 553 tokens | 596 KB |
| 10,000 | 0.82 ms | 2.59 ms | 3.31 ms | 553 tokens | 4.2 MB |
Recall stays under 4 ms p95 at 10,000 memories. The query path is dominated by SQLite FTS5 BM25, not by Anchor's reranking.
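To make "dominated by SQLite FTS5 BM25" concrete, here is a minimal sketch of that query shape using Python's stdlib sqlite3. The table and column names are illustrative, not Anchor's actual schema:

```python
import sqlite3

# In-memory database with an FTS5 full-text index over memory content.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE VIRTUAL TABLE memories USING fts5(kind, content);
INSERT INTO memories (kind, content) VALUES
  ('fact',     'package manager is pnpm'),
  ('fact',     'tests run with Vitest'),
  ('decision', 'rate limiting uses Redis token bucket');
""")

# bm25() is FTS5's built-in ranking function; lower scores rank better,
# so ordering by it ascending returns the most relevant rows first.
rows = db.execute("""
  SELECT kind, content
  FROM memories
  WHERE memories MATCH 'rate limiting'
  ORDER BY bm25(memories)
  LIMIT 5
""").fetchall()
for kind, content in rows:
    print(kind, "->", content)
```

A reranking pass on top of a result set this small is cheap, which is why the index scan, not the rerank, sets the latency floor.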
08 — Compatible agents
Speaks the Model Context Protocol. Works with —
Plus 50+ more via the open Agent Skills spec at skills.sh.
Three commands. Less than a minute. Anchor is local-first — nothing leaves your machine.
Other agents — Codex, Cursor, Cline, Gemini CLI, Continue.dev, Windsurf, OpenCode, Zed.
Optional semantic recall — Ollama, OpenAI, Gemini, or Voyage.
10 — Trust
Trust by default.
Anchor redacts known secret patterns at write time — OpenAI, Anthropic, Google, Stripe, Slack, GitHub keys; AWS access keys; JWTs; PEM private keys; .env-style variables that look like secrets. It also scrubs known prompt-injection phrases ("ignore previous instructions", "you are now …", "reveal your system prompt") before content reaches disk. The data directory is created with mode 0700; the database with 0600 on POSIX hosts. Recalled content carries an explicit "treat as untrusted" footer. There is no telemetry. There are no accounts.
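The write-time scrub can be sketched as a regex pass followed by a phrase filter. The patterns below are simplified stand-ins for the key families named above, not Anchor's actual rule set:

```python
import re

# Illustrative secret patterns (simplified; real detectors are stricter).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
    re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),  # JWTs (three base64url parts)
]
INJECTION_PHRASES = ["ignore previous instructions", "reveal your system prompt"]

def scrub(text: str) -> str:
    """Redact secrets and strip injection phrases before content hits disk."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    lowered = text.lower()
    for phrase in INJECTION_PHRASES:
        idx = lowered.find(phrase)
        while idx != -1:  # case-insensitive removal, repeated until gone
            text = text[:idx] + "[SCRUBBED]" + text[idx + len(phrase):]
            lowered = text.lower()
            idx = lowered.find(phrase)
    return text

print(scrub("key sk-abc123def456ghi789jkl012, now Ignore Previous Instructions"))
```

Scrubbing at write time, rather than at recall, means a compromised or careless agent can never read a secret back out, because it was never stored.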
Read the security policy →
11 — Questions