Browser primitives

note

This guide bypasses AtomicMemorySDK. You get storage, local embeddings, and semantic search — but no provider, no extensions, no gating or consent layer, no remote sync. If you want a backend-agnostic client, start at Using the atomicmemory backend instead.

The /storage, /embedding, and /search subpath exports work on their own. You can build client-side semantic search over your app's data without ever constructing AtomicMemorySDK, and without a memory backend of any kind.

This is the right path when:

  • The data is local to the browser and should not leave
  • You want to prototype without standing up a server
  • You already have application state and just want similarity search over it
  • You're embedding a small, bounded corpus (bundled docs, site content)

The pipeline

import {
  StorageManager,
  IndexedDBStorageAdapter,
} from '@atomicmemory/atomicmemory-sdk/storage';
import { EmbeddingGenerator } from '@atomicmemory/atomicmemory-sdk/embedding';
import { SemanticSearch } from '@atomicmemory/atomicmemory-sdk/search';

// 1. Storage
const adapter = new IndexedDBStorageAdapter();
await adapter.initialize({ dbName: 'my-app' });
const storage = new StorageManager([adapter]);
await storage.initialize();

// 2. Embeddings
const generator = new EmbeddingGenerator({
  model: 'Xenova/all-MiniLM-L6-v2',
  dimensions: 384,
  provider: 'transformers',
});

// 3. Index some documents
const docs = [
  { id: '1', text: 'Prefer gRPC for internal service calls.' },
  { id: '2', text: 'Dark mode uses prefers-color-scheme.' },
  { id: '3', text: 'Postgres pgvector supports cosine distance.' },
];

for (const doc of docs) {
  const { embedding } = await generator.generateEmbedding(doc.text);
  await storage.set(`doc:${doc.id}`, {
    ...doc,
    embedding: Array.from(embedding),
  });
}

// 4. Search
const { embedding: queryVec } = await generator.generateEmbedding(
  'service communication',
);

type Doc = { id: string; text: string; embedding: number[] };

const keys = await storage.keys();
const entries = await Promise.all(
  keys
    .filter((k) => k.startsWith('doc:'))
    .map((k) => storage.get<Doc>(k)),
);

const search = new SemanticSearch();
// Type predicate so TypeScript narrows out the nulls from missing keys.
const results = search.searchSimilar(
  queryVec,
  entries.filter((e): e is Doc => e !== null),
);

for (const hit of results.slice(0, 3)) {
  console.log(hit.score.toFixed(3), hit.item.text);
}

The exact SemanticSearch API differs slightly by version — import the types from @atomicmemory/atomicmemory-sdk/search and read the shape. The pattern is always: vectorize the query, score it against stored vectors, rank.
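If the version you have ships a different SemanticSearch shape, the score-and-rank step is small enough to hand-roll. A minimal sketch in plain TypeScript with no SDK imports — `cosineSimilarity` and `rankBySimilarity` are names invented here for illustration, not SDK API:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

// Score every stored item against the query vector, rank descending.
function rankBySimilarity<T extends { embedding: number[] }>(
  queryVec: number[],
  items: T[],
): Array<{ score: number; item: T }> {
  return items
    .map((item) => ({ score: cosineSimilarity(queryVec, item.embedding), item }))
    .sort((a, b) => b.score - a.score);
}
```

This is linear scan over the corpus, which is exactly right for the small, bounded corpora this guide targets; anything large enough to need an index is a reason to graduate to a backend.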

When to graduate to AtomicMemorySDK

The moment you need any of these, switch to the top-level SDK:

  • Remote sync to atomicmemory-core or Mem0
  • Multi-device memory
  • Server-side embedding for larger corpora
  • The AUDN mutation model (add / update / delete / no-op) for contradiction-safe writes
  • Context packaging, temporal search, versioning

At that point, the backend owns storage and embedding; the subpath primitives drop out of your data path.
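For orientation, the AUDN model resolves every candidate write to one of four mutations before it touches storage, so contradictory facts replace earlier ones instead of piling up. A hypothetical sketch of that shape — the `AudnDecision` type and `applyDecision` function are illustrative names, not the SDK's actual API:

```typescript
// Hypothetical AUDN decision shape: add / update / delete / no-op.
type AudnDecision<T> =
  | { op: 'add'; value: T }
  | { op: 'update'; id: string; value: T }
  | { op: 'delete'; id: string }
  | { op: 'noop' };

// Apply one decision to an in-memory map of memories (illustration only).
function applyDecision<T>(
  store: Map<string, T>,
  decision: AudnDecision<T>,
  newId: () => string,
): void {
  switch (decision.op) {
    case 'add':
      store.set(newId(), decision.value);
      break;
    case 'update':
      store.set(decision.id, decision.value);
      break;
    case 'delete':
      store.delete(decision.id);
      break;
    case 'noop':
      break;
  }
}
```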

Trade-offs

| You get | You don't get |
| --- | --- |
| Zero-backend prototyping | Remote sync |
| Data stays on the device | Cross-session persistence unless you use IndexedDB |
| Full control of the indexing strategy | AUDN, versioning, observability |
| No identity / consent layer | No gating or capture policy |

Both modes are legitimate. Pick the one that matches the constraint.

Next

  • Embeddings — the default model and caching behaviour
  • Storage adapters — the adapter interface if IndexedDBStorageAdapter doesn't fit