# Engram
Engram is a persistent memory store built for AI agents. It provides hybrid search — combining vector similarity with keyword matching — so your agents can find relevant context from past conversations even when the exact wording differs.
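To make the hybrid-search idea concrete, here is an illustrative sketch (not Engram's actual implementation) of blending vector cosine similarity with keyword overlap into a single relevance score. The `hybridScore`, `cosine`, and `keywordOverlap` functions and the `alpha` weight are all hypothetical names chosen for this example:

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Fraction of query terms that appear verbatim in the document.
function keywordOverlap(query: string, doc: string): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const words = new Set(doc.toLowerCase().split(/\s+/));
  const hits = terms.filter((t) => words.has(t)).length;
  return terms.length ? hits / terms.length : 0;
}

// Weighted blend: alpha favors semantic similarity, (1 - alpha) favors
// exact keyword matches, so results rank well even when wording differs.
function hybridScore(
  queryVec: number[], docVec: number[],
  queryText: string, docText: string,
  alpha = 0.7,
): number {
  return alpha * cosine(queryVec, docVec) +
    (1 - alpha) * keywordOverlap(queryText, docText);
}
```

The blend weight is the usual tuning knob in schemes like this: a higher `alpha` rewards paraphrases, a lower one rewards exact identifiers and rare terms.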
## What it solves
LLMs have no memory between sessions. Engram gives your agents access to everything they’ve seen before, retrieved semantically.
## How it works
- After each conversation turn, Meridian stores the message and its embedding in Engram
- At the start of each turn, Meridian queries Engram for related past context
- Relevant memories are injected into the system prompt
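The store/query/inject loop above can be sketched in miniature. This is a self-contained toy, not Engram's actual interface: `MemoryStore`, `embed`, and `buildSystemPrompt` are hypothetical stand-ins (a real deployment would call an embedding model and Engram's HTTP API instead):

```typescript
type Memory = { text: string; embedding: number[] };

// Toy deterministic "embedding" so the sketch runs standalone.
// Real systems call an embedding model here.
function embed(text: string): number[] {
  const v = [0, 0, 0];
  for (let i = 0; i < text.length; i++) v[i % 3] += text.charCodeAt(i);
  const n = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return v.map((x) => x / n);
}

class MemoryStore {
  private memories: Memory[] = [];

  // Step 1: after each turn, store the message and its embedding.
  store(text: string): void {
    this.memories.push({ text, embedding: embed(text) });
  }

  // Step 2: at the start of a turn, retrieve the k closest memories
  // by dot product (vectors are unit-normalized, so this is cosine).
  query(text: string, k = 3): string[] {
    const q = embed(text);
    return [...this.memories]
      .map((m) => ({
        m,
        score: m.embedding.reduce((s, x, i) => s + x * q[i], 0),
      }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map(({ m }) => m.text);
  }
}

// Step 3: inject retrieved memories into the system prompt.
function buildSystemPrompt(base: string, memories: string[]): string {
  return memories.length
    ? `${base}\n\nRelevant past context:\n${memories.map((m) => `- ${m}`).join("\n")}`
    : base;
}
```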
## Quickstart

Engram runs as a standalone service:

```shell
docker run -p 8080:8080 \
  -e DATABASE_URL=postgres://... \
  ghcr.io/aiconnai/engram:latest
```

Then connect from Meridian:
```typescript
const agent = new MeridianAgent({
  memory: {
    provider: "engram",
    url: "http://localhost:8080",
  },
});
```