
Introduction

AtomicMemory is an open-source memory engine for AI applications — semantic retrieval, AUDN mutation (Add / Update / Delete / No-op), and contradiction-safe claim versioning, delivered as an HTTP service you can run with a single docker compose up. It is pluggable at every seam: swap the embedding provider, the LLM, the storage backend, or the scope model without forking. The engine ships as a standardized platform layer — not a framework, not a SaaS — so your agents, assistants, and products can compose the memory stack they need.
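To make AUDN concrete: each incoming fact resolves to one of four operations against stored claims, and updates bump a version rather than overwrite history. A minimal TypeScript sketch of that decision shape — the type and field names here are illustrative, not the engine's actual contract:

```typescript
// Illustrative AUDN (Add / Update / Delete / No-op) decision shape.
// Names are hypothetical, not AtomicMemory's wire format.
type Claim = { id: string; text: string; version: number };

type AudnDecision =
  | { op: "add"; text: string }
  | { op: "update"; targetId: string; text: string } // supersedes, keeps lineage
  | { op: "delete"; targetId: string }               // retracts a claim
  | { op: "noop" };                                  // already known; do nothing

// Apply a decision to an in-memory claim list, bumping the version on
// update so older versions stay addressable (contradiction-safe versioning).
function applyDecision(
  claims: Claim[],
  d: AudnDecision,
  nextId: () => string
): Claim[] {
  switch (d.op) {
    case "add":
      return [...claims, { id: nextId(), text: d.text, version: 1 }];
    case "update":
      return claims.map((c) =>
        c.id === d.targetId ? { ...c, text: d.text, version: c.version + 1 } : c
      );
    case "delete":
      return claims.filter((c) => c.id !== d.targetId);
    case "noop":
      return claims;
  }
}
```

In the real engine this decision is made by the configured LLM during ingest; the sketch only shows why the four-way split keeps mutation explicit instead of silently overwriting memory.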

Why AtomicMemory

AI memory is becoming a platform concern, not a product feature. Most existing options force a hosted runtime, a specific agent framework, or a proprietary query language. AtomicMemory is designed around the opposite defaults.

| Dimension | AtomicMemory | mem0 | letta | zep |
| --- | --- | --- | --- | --- |
| License | MIT | Apache 2.0 (OSS + hosted) | Apache 2.0 | Apache 2.0 (OSS + hosted) |
| Primary language | TypeScript (Node, ESM) | Python | Python | Go |
| Deployment | Self-host, Docker Compose, HTTP-first | Self-host OSS + hosted platform | Self-host | Self-host OSS + hosted platform |
| Storage backend | Postgres + pgvector (swappable store interfaces) | Pluggable (multiple vector DBs) | Built-in agent state + vector | Internal store (self-managed) |
| Embedding providers | openai, openai-compatible, ollama, transformers (local WASM) | Multi-provider | Multi-provider | Multi-provider |
| LLM providers | openai, openai-compatible, ollama, anthropic, google, groq | Multi-provider | Multi-provider | Multi-provider |
| Scope model | Explicit user / workspace / agent scopes at the request boundary | User + agent memory | Agent-centric | Session / user centric |
| Observability | Stable observability envelope on search responses (retrieval / packaging / assembly) | Logs + integrations | Framework-dependent | Hosted dashboards |
| API surface | HTTP (snake_case) + TypeScript SDK (coming soon) | Python + Node SDKs, REST | Python + TypeScript SDKs, REST | Python + TypeScript + Go SDKs, REST |

Sources for third-party claims: Mem0 OSS overview, Letta intro, Zep overview.

The pitch is not "we do more." It is: the seams are explicit, the contracts are stable, and you compose your own stack.

Platform at a glance

  • Pluggable storage — five domain-facing store interfaces so ingest, search, CRUD, lifecycle, and trust each see only the contract they need (stores)
  • Pluggable providers — embeddings via openai, openai-compatible, ollama, or transformers (local WASM); LLM via openai, openai-compatible, ollama, anthropic, google, or groq (providers)
  • Explicit composition — a single composition root wires the runtime container; no hidden singletons, no global state (composition)
  • First-class scope — user, workspace, and agent scopes dispatched at the request boundary, not bolted on after (scope)
  • Observability as contract — every search response carries a stable trace schema so dashboards and evals never break on a refactor (observability)
  • Domain separation — Ingest, Search, CRUD, Lifecycle, and Trust are independent domains with their own routes and services (architecture)
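"First-class scope" means the caller names exactly one scope when it builds the request, and the snake_case wire format reflects that. A sketch of what resolving scope at the request boundary can look like — the field names are assumptions for illustration; only the snake_case convention and the user / workspace / agent scope kinds come from the docs above:

```typescript
// Hypothetical request-building helper. Field names (user_id, workspace_id,
// agent_id) are illustrative — check the API reference for the real contract.
type Scope =
  | { kind: "user"; userId: string }
  | { kind: "workspace"; workspaceId: string }
  | { kind: "agent"; agentId: string };

// Resolve the scope into a snake_case JSON body at the request boundary,
// so exactly one scope field is ever present — never "bolted on after".
function buildSearchBody(query: string, scope: Scope): Record<string, unknown> {
  const scopeField =
    scope.kind === "user"
      ? { user_id: scope.userId }
      : scope.kind === "workspace"
        ? { workspace_id: scope.workspaceId }
        : { agent_id: scope.agentId };
  return { query, ...scopeField };
}
```

The design point is that scope is part of the request's type, so a search can never accidentally run unscoped or doubly scoped.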

Try it in 2 minutes

The fastest path is the Quickstart: clone the core repo, set an API key, docker compose up, and run your first ingest and search with two curl commands.

Core is HTTP-first, so any language works today. A dedicated TypeScript client (atomicmemory-sdk) is coming soon for typed request and response shapes, richer ergonomics, and scope-aware helpers — but nothing about core requires it.
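Until the SDK ships, a plain fetch client is all you need. A hedged sketch of calling the HTTP API from Node — the route path and body fields below are assumptions, so check the Quickstart for the actual endpoints:

```typescript
// Minimal HTTP client sketch. The /v1/search route and user_id field are
// hypothetical placeholders; only "HTTP-first, snake_case JSON" is given.
const BASE_URL = "http://localhost:8080"; // adjust to your deployment

async function searchMemory(query: string, userId: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/v1/search`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query, user_id: userId }), // snake_case wire format
  });
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  // The response also carries the stable observability envelope
  // (retrieval / packaging / assembly) described above.
  return res.json();
}
```

Because the surface is plain HTTP plus JSON, the same call works from Python, Go, or a shell script today; the SDK will only add typed shapes and scope-aware helpers on top.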

AtomicMemory is MIT-licensed. Self-host it, fork it, run it behind your own gateway — the platform is yours.