Smriti is a memory engine, not a memory model. The LLM is your agent's reasoning layer, not Smriti's: swap the LLM and the engine is unchanged. Swap your vector store for Smriti and your agent gains confidence verdicts, causal trajectory replay, salience-aware decay, and federated sync — primitives no vector store can offer.
Memory that has to be private, portable, and cognitive enough to abstain when it doesn't know. The same Rust core runs in the browser, on edge servers, and on small devices — no embedding model required.
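The abstain-when-unsure behaviour can be pictured with a small sketch. The `Verdict` enum name and the score thresholds below are hypothetical illustrations, not Smriti's actual API: the idea is simply that a recall is graded, and below a floor the engine refuses to answer rather than guess.

```rust
// Hypothetical sketch: enum name and thresholds are illustrative,
// not Smriti's internals.
#[derive(Debug, PartialEq)]
enum Verdict {
    Confident,
    Ambiguous,
    Abstained,
}

/// Grade a recall by its best match score in 0.0..=1.0.
fn grade(best_score: f32) -> Verdict {
    if best_score >= 0.75 {
        Verdict::Confident
    } else if best_score >= 0.40 {
        Verdict::Ambiguous
    } else {
        Verdict::Abstained // refuse to answer rather than guess
    }
}

fn main() {
    assert_eq!(grade(0.9), Verdict::Confident);
    assert_eq!(grade(0.5), Verdict::Ambiguous);
    assert_eq!(grade(0.1), Verdict::Abstained);
}
```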
All numbers below are reproducible from a clean clone with one
cargo command. Methodology and per-category breakdowns live in
benchmarks/results/REAL_DATASETS_REPORT.md.
| Metric | Zero-ML | + fastembed (optional) |
|---|---|---|
| Intrinsic Hit % (engine retrieved gold) | 95.7% | 95.7% |
| Top-1 % (gold ranked #1) | 63.8% | 70.2% |
| Shipped Hit % (after Confident=2 truncation) | 78.7% | 89.4% |
| Adversarial abstention (12 queries with no gold) | 91.7% | 91.7% |
| Avg tokens / 500 budget | 79 | 81 |
| Recall p95 latency | 1.6 ms | 3.7 ms |
Reproduce: cargo run --release --bin smriti-bench-500 (zero-ML)
or SMRITI_BENCH_EMBEDDINGS=1 cargo run --release --bin smriti-bench-500 --features embeddings (with embeddings).
| Dataset / category | Substring eval | LLM judge |
|---|---|---|
| LongMemEval-S · single-session-user (n=8) | 100.0% | 100.0% |
| LongMemEval-S · knowledge-update (validates supersedes) | 62.5% | 62.5% |
| LongMemEval-S · stratified across 6 categories (n=48) | 50.0% | 47.9% |
| LOCOMO · mixed categories (n=80) | 2.5% | 21.9% |
The LOCOMO substring 2.5% → judge 21.9% (+19.4pp lift) confirms Smriti
retrieves real signal that string matchers cannot credit. Datasets:
xiaowu0162/longmemeval-cleaned (HuggingFace),
snap-research/locomo (GitHub). Judge harness:
benchmarks/judge_results.py. Methodology, per-entry verdicts,
and reasons in benchmarks/results/REAL_DATASETS_REPORT.md.
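The judge lift is easy to see with a toy string matcher. This is a sketch in the spirit of a substring eval, not code from benchmarks/judge_results.py, and `substring_hit` is an illustrative name: a paraphrased gold answer scores zero even when the retrieved context plainly contains the evidence.

```rust
// Illustrative sketch of a substring eval: credit the engine only if the
// gold answer appears verbatim (case-insensitively) in the shipped context.
// Not code from benchmarks/judge_results.py.
fn substring_hit(context: &str, gold: &str) -> bool {
    context.to_lowercase().contains(&gold.to_lowercase())
}

fn main() {
    let ctx = "User moved to Lisbon in March 2023.";
    // A paraphrased gold answer gets no credit from the string matcher,
    // even though an LLM judge would accept the retrieved evidence.
    assert!(!substring_hit(ctx, "relocated to Lisbon"));
    assert!(substring_hit(ctx, "moved to Lisbon"));
}
```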
| Capability | Vector store | Smriti |
|---|---|---|
| Confidence verdicts (Confident / Ambiguous / Abstained) | — | ✅ 91.7% adversarial abstention |
| Causal trajectory replay (typed-edge BFS) | — | ✅ recall_trajectory() |
| Salience-aware decay bypass (auto-PPR-seed) | — | ✅ Salience::Critical |
| Goal-pinned persistent priming | — | ✅ MemoryKind::Goal |
| Supersede chains with audit trail | re-embed (history lost) | ✅ chain preserved, recall hides old |
| Federated sync (LWW, P2P) | server-coupled | ✅ export_sync_state / import_sync_state |
| WASM / browser deployment | — | ✅ 127 KB gzipped |
| Zero-ML mode | — | ✅ 95.7% intrinsic hit, no embeddings |
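The last-writer-wins half of federated sync can be sketched in a few lines of Rust. The types and field names below are illustrative assumptions, not the export_sync_state / import_sync_state wire format: per key, whichever peer wrote most recently wins the merge.

```rust
use std::collections::HashMap;

// Conceptual sketch of last-writer-wins (LWW) merge, the idea behind
// federated sync. Types and field names are illustrative assumptions.
#[derive(Clone, Debug, PartialEq)]
struct Entry {
    value: String,
    updated_at: u64, // logical timestamp, e.g. a Lamport clock
}

/// Merge a peer's exported state into ours: per key, the newer write wins.
fn lww_merge(local: &mut HashMap<String, Entry>, remote: HashMap<String, Entry>) {
    for (key, theirs) in remote {
        let keep_ours = local
            .get(&key)
            .map_or(false, |ours| ours.updated_at >= theirs.updated_at);
        if !keep_ours {
            local.insert(key, theirs);
        }
    }
}

fn main() {
    let mut a = HashMap::from([(
        "lead".to_string(),
        Entry { value: "Bob".into(), updated_at: 1 },
    )]);
    let b = HashMap::from([(
        "lead".to_string(),
        Entry { value: "Alice".into(), updated_at: 2 },
    )]);
    lww_merge(&mut a, b);
    assert_eq!(a["lead"].value, "Alice"); // newer write won
}
```

Because LWW merge is commutative for any pair of timestamped states, peers can exchange states in any order and converge, which is what makes it a good fit for P2P sync.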
A head-to-head LLM-judged comparison against Mem0, Letta, and Zep on
LongMemEval and LOCOMO is on the v0.3 roadmap — the harness already
runs against any system that exposes (corpus, query) → context.
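The (corpus, query) → context contract the harness targets is small. As a sketch (the trait and method names here are assumptions for illustration, not the harness's actual code), any system slots in by implementing one method:

```rust
// Sketch of the (corpus, query) -> context contract; trait and method
// names are assumptions for illustration, not the harness's actual code.
trait ContextProvider {
    /// Given a memory corpus and a query, return the context pack to ship.
    fn context(&self, corpus: &[&str], query: &str) -> String;
}

// Trivial baseline provider: ship every corpus line sharing a word with
// the query. A real memory system implements the same trait instead.
struct WordOverlap;

impl ContextProvider for WordOverlap {
    fn context(&self, corpus: &[&str], query: &str) -> String {
        corpus
            .iter()
            .filter(|line| query.split_whitespace().any(|w| line.contains(w)))
            .copied()
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let corpus = ["Bob leads engineering", "Lunch is at noon"];
    let ctx = WordOverlap.context(&corpus, "who leads engineering");
    assert_eq!(ctx, "Bob leads engineering");
}
```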
This is the real Rust engine, compiled to a 127 KB gzipped WebAssembly module and running entirely in your browser tab. No backend, no embedding model, no network call after the first load. The same binary ships natively at 95.7% intrinsic recall on bench-500.
🔒 Your data never leaves this tab. Every visitor runs their own private WASM instance — we cannot see what you type, even if we wanted to. There is no shared store, no server-side session, no telemetry on the demo. Hit Reset any time to wipe this tab's memories.
$ smriti remember "Bob is the lead engineer"
$ smriti recall "who leads engineering?"

Everything listed here works without an LLM in the loop. Smriti performs the graph algebra natively, keeping your agent fast and deterministic.
Smriti speaks the Model Context Protocol natively. Drop the server into your agent's MCP config and the engine's primitives become first-class tools. No wrapper, no glue code, no embedding pipeline.
// ~/.config/claude-code/mcp.json
{
  "smriti": {
    "command": "smriti-http",
    "args": ["--mcp", "--db", "~/.smriti/global.db"]
  }
}
Restart your agent. smriti_* tools appear in the
tool palette immediately.
- smriti_remember — store with attributes + tags
- smriti_recall — verdict + confidence-graded pack
- smriti_supersede — graceful contradiction
- smriti_reconsolidate — usage-driven plasticity
- smriti_link — typed edges (CausedBy / Before / …)
- smriti_recall_trajectory — narrative replay
- smriti_clear_activation — topic-switch reset
- smriti_suggest_clusters + smriti_merge — sleep summarization
- smriti_consolidate, smriti_vacuum, smriti_stats, smriti_forget

None require an LLM. The agent provides structured input; Smriti does the graph algebra.
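Causal trajectory replay is, at heart, a breadth-first search that follows only edges of the requested type. A conceptual sketch (the graph shape and names below are assumptions for illustration, not recall_trajectory()'s actual implementation):

```rust
use std::collections::{HashMap, VecDeque};

// Conceptual sketch of causal trajectory replay as a BFS over typed edges.
// Edge variants and graph shape are illustrative assumptions.
#[derive(Clone, Copy, PartialEq)]
enum Edge {
    CausedBy,
    Before,
}

/// Follow only CausedBy edges from `start`, returning the chain in BFS order.
fn trajectory<'a>(
    graph: &HashMap<&'a str, Vec<(Edge, &'a str)>>,
    start: &'a str,
) -> Vec<&'a str> {
    let mut seen = vec![start];
    let mut queue = VecDeque::from([start]);
    while let Some(node) = queue.pop_front() {
        for &(edge, next) in graph.get(node).into_iter().flatten() {
            if edge == Edge::CausedBy && !seen.contains(&next) {
                seen.push(next);
                queue.push_back(next);
            }
        }
    }
    seen
}

fn main() {
    let graph = HashMap::from([
        ("outage", vec![(Edge::CausedBy, "bad deploy"), (Edge::Before, "postmortem")]),
        ("bad deploy", vec![(Edge::CausedBy, "missing test")]),
    ]);
    // The Before edge is ignored; only the causal chain is replayed.
    let chain = trajectory(&graph, "outage");
    assert_eq!(chain, ["outage", "bad deploy", "missing test"]);
}
```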
cargo install smriti --features http
smriti remember "JWT RS256 with 1h expiry" \
--tags auth,security \
--kind fact
smriti recall "how does auth work" \
--budget 500