Agent context SDK for Python. Give any AI app persistent memory and structured retrieval.
Inspired by Redis Agent Memory Server and Redis Context Surfaces.
```shell
pip install agentmem-sdk
```

The install name is `agentmem-sdk` (the `agentmem` name was taken on PyPI). The import name is still `agentmem`.
Add to your `.env`:

```
AGENTMEM_REDIS_URL=redis://default:password@host:port
OPENAI_API_KEY=sk-...
AGENTMEM_APP_ID=my-app
```

Use it:
```python
import agentmem

agentmem.init()

# Store something the app learned
agentmem.remember("User prefers numbered lists over bullets", scope="user-42")

# Later, before generating a response
corrections = agentmem.recall("formatting preferences", scope="user-42")
# ["User prefers numbered lists over bullets"]
```

That's it. Three env vars, three lines of code. Memory persists in Redis Cloud with semantic vector search.
These are what make agentmem usable for real agents, not just demos.
```python
# Check the backend is healthy before starting a long run
health = agentmem.health()
# {"ok": True, "backend": "RedisBackend", "redis_ok": True, "embedding_ok": True, "latency_ms": 420}

# Batch-write multiple memories in one call (single OpenAI request + pipelined Redis)
agentmem.remember_many(
    ["Use numbered lists", "Never use em dashes", "Keep hooks under 8 words"],
    scope="user-42",
    topics=["style"],
)

# Filter recall by topic — only style rules, ignore everything else
style_rules = agentmem.recall("formatting", scope="user-42", topics=["style"])

# List what's stored — build an admin UI
for entry in agentmem.list_memories(scope="user-42"):
    print(entry["id"], entry["text"], entry["topics"])

# Count by scope or topic
agentmem.count_memories(scope="user-42")
agentmem.count_memories(scope="user-42", topics=["style"])

# Delete a bad correction
agentmem.delete_by_text("Never use em dashes", scope="user-42")

# Wipe a whole user/tenant when they churn
agentmem.delete_scope("user-42")

# Export for backup / migration
import json
with open("backup.json", "w") as f:
    json.dump(agentmem.export_scope("user-42"), f)

# Ephemeral memories that auto-expire
agentmem.remember(
    "User is working on a launch this week",
    scope="user-42",
    ttl_seconds=7 * 24 * 3600,  # 7 days
)

# Duplicate writes are silently deduped — same (text, scope) = single row
agentmem.remember("Don't use em dashes", scope="user-42")
agentmem.remember("Don't use em dashes", scope="user-42")  # no-op, returns True
```

All operations never crash the host app — runtime errors (Redis down, OpenAI rate-limited, network blip) degrade to empty results / `False`, log via the standard `logging` module, set `last_error`, and invoke the optional `on_error` callback.
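The degrade-instead-of-raise behaviour can be sketched like this. This is an illustration of the pattern, not agentmem's source; `SafeClient` and its simulated outage are hypothetical:

```python
import logging

logger = logging.getLogger("agentmem-sketch")

class SafeClient:
    """Illustrative only: every call degrades to a fallback value instead of raising."""

    def __init__(self, on_error=None):
        self.last_error = None
        self.on_error = on_error

    def _safe(self, method_name, fn, fallback):
        try:
            result = fn()
            self.last_error = None   # cleared on success
            return result
        except Exception as exc:
            self.last_error = exc    # set on failure
            logger.warning("%s failed: %s", method_name, exc)
            if self.on_error:
                self.on_error(method_name, exc)
            return fallback

    def recall(self, query):
        def backend_call():
            # Simulate an outage; a real backend would run the vector search here
            raise ConnectionError("Redis unreachable")
        return self._safe("recall", backend_call, fallback=[])

client = SafeClient(on_error=lambda method, exc: print(f"{method} degraded: {exc}"))
print(client.recall("formatting"))  # []
```

The host app can ignore failures entirely, or opt into observability via the `on_error` hook and `last_error`.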
agentmem is a package your agent app imports. It is not an agent. It does not run prompts, call models, or orchestrate tools.
It does two things:
- Memory — store things the app learned (`remember`), search them by meaning later (`recall`)
- Retrieval — query the app's own data through adapters (`register_source`, `retrieve`)
Memory is what the app learned. Retrieval is what the app can look up. They're separate capabilities. Your app composes them however it wants.
When you call `remember("User prefers numbered lists")`:
- agentmem sends the text to OpenAI to generate an embedding (a list of numbers that captures the meaning)
- The embedding + text are stored in your Redis Cloud database
When you call `recall("formatting preferences")`:
- agentmem generates an embedding for the query
- Redis finds the stored memories with the most similar meaning
- Returns them ranked by relevance
That's why "formatting preferences" finds "User prefers numbered lists" — they mean similar things even though the words are different.
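The ranking behind this is typically cosine similarity over embedding vectors. Here is a self-contained sketch with toy 3-dimensional vectors (real embeddings, e.g. OpenAI's `text-embedding-3-small`, have about 1536 dimensions; agentmem's actual scoring happens inside Redis):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings
query = [0.9, 0.1, 0.2]       # "formatting preferences"
match = [0.8, 0.2, 0.1]       # "User prefers numbered lists"
unrelated = [0.1, 0.9, 0.8]   # "User lives in Berlin"

# The semantically related text scores higher, even with no shared words
print(cosine_similarity(query, match) > cosine_similarity(query, unrelated))  # True
```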
Why do I need an OpenAI key? Redis stores and searches vectors but doesn't generate them. Something has to convert text into numbers. OpenAI does that. If you use the AMS backend instead (see below), AMS handles the OpenAI key internally and your app doesn't need one.
Connects straight to Redis Cloud. Your app generates embeddings via OpenAI and stores them in Redis. No middleware, no server to deploy.
```python
from agentmem import AgentMem

mem = AgentMem(
    redis_url="redis://default:password@host:port",
    app_id="my-app",
    openai_api_key="sk-...",
)
```

Requires:
- Redis Cloud account with search module (free tier works)
- OpenAI API key (for embeddings)
Connects to AMS, which handles embeddings and Redis for you. Your app does NOT need an OpenAI key — AMS has its own.
```python
mem = AgentMem(
    base_url="http://localhost:8000",
    app_id="my-app",
)
```

Requires AMS running (Docker or hosted).
No server, no credentials. Memories live in process memory and disappear on restart.
```python
mem = AgentMem(app_id="my-app")
```

Retrieval lets the app query its own data at runtime. Register a data source with an adapter, then call `retrieve()`.
Query data through Redis Context Surfaces. Context Surfaces reads your Redis data model and auto-generates search tools — `search_product_by_text`, `filter_order_by_status`, `get_customer_by_id`, etc. No OpenAI key needed — Context Surfaces handles everything.
```python
from agentmem.adapters.context_surfaces import ContextSurfacesAdapter

mem.register_source("products", ContextSurfacesAdapter(
    agent_key="cs_agent_...",
    tool_name="search_product_by_text",
))

results = mem.retrieve("wireless headphones", source="products")
# [{"name": "Wireless Headphones Pro", "price": 79.99, ...}]
```

Requires a Context Surface connected to your Redis Cloud. See Context Surfaces Setup below.
Query a Supabase table with text search and scope filtering. For apps that keep data in Supabase.
```python
from agentmem.adapters.supabase import SupabaseAdapter

mem.register_source("orders", SupabaseAdapter(
    url="https://xxx.supabase.co",
    key="sb_secret_...",
    table="orders",
    search_columns=["description", "notes"],
    return_columns=["id", "status", "total"],
    scope_column="user_id",
))

results = mem.retrieve("shipping delay", source="orders", scope="user-42")
```

Wrap any function as a retrieval source. For custom data access logic.
```python
from agentmem.adapters.callback import CallbackAdapter

def search_tickets(query, scope, limit):
    return my_db.search(query, user_id=scope, limit=limit)

mem.register_source("tickets", CallbackAdapter(fn=search_tickets))
```

Any class that implements `BaseAdapter`:
```python
from agentmem.adapters.base import BaseAdapter

class MyAdapter(BaseAdapter):
    def retrieve(self, query, scope=None, limit=5):
        return self.db.search(query, tenant=scope, max_results=limit)
```

```python
import agentmem
from agentmem.adapters.callback import CallbackAdapter

agentmem.init()  # reads AGENTMEM_REDIS_URL + OPENAI_API_KEY from env
agentmem.register_source("tickets", CallbackAdapter(fn=search_tickets))

# Before generating a response — gather context from both layers
memories = agentmem.recall("customer preferences", scope="user-42")
tickets = agentmem.retrieve("billing question", source="tickets", scope="user-42")

prompt = f"""
LEARNED ABOUT THIS USER:
{chr(10).join(f'- {m}' for m in memories)}

RECENT TICKETS:
{chr(10).join(str(t) for t in tickets)}

Now respond to their question: ...
"""

# After the interaction — store anything worth keeping
agentmem.remember("User is on the Pro plan and prefers email support", scope="user-42")
```

```
# Direct Redis (recommended for production)
AGENTMEM_REDIS_URL=redis://default:password@host:port
OPENAI_API_KEY=sk-...                    # only needed for direct Redis, not for AMS

# OR via AMS (alternative — no OpenAI key needed in your app)
AGENTMEM_BASE_URL=http://localhost:8000

# Common
AGENTMEM_APP_ID=my-app
AGENTMEM_TIMEOUT=5.0                     # optional, default 5 seconds
```

```python
import agentmem

agentmem.init()  # reads from env vars automatically
```

| Function | Signature |
|---|---|
| `init()` | `init(base_url=None, app_id=None, redis_url=None, openai_api_key=None, timeout=None, on_error=None)` |
| `AgentMem()` | `AgentMem(app_id, base_url=None, redis_url=None, openai_api_key=None, embedding_model="text-embedding-3-small", timeout=5.0, on_error=None)` |
Backend selection: `redis_url` → direct Redis. `base_url` → AMS. Neither → in-memory.
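That selection rule can be sketched as a simple precedence check. Illustrative only: the names `AMSBackend` and `InMemoryBackend` are assumptions (only `RedisBackend` appears in the `health()` example earlier), and so is the tie-break of `redis_url` winning when both are supplied:

```python
def select_backend(redis_url=None, base_url=None):
    """Sketch of the precedence: redis_url first, then base_url, else in-memory."""
    if redis_url:
        return "RedisBackend"      # direct Redis Cloud connection
    if base_url:
        return "AMSBackend"        # assumed name for the AMS HTTP backend
    return "InMemoryBackend"       # assumed name for the zero-config fallback

print(select_backend(redis_url="redis://host:6379"))   # RedisBackend
print(select_backend(base_url="http://localhost:8000"))  # AMSBackend
print(select_backend())                                 # InMemoryBackend
```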
| Function | Signature | Returns |
|---|---|---|
| `remember()` | `remember(text, scope=None, topics=None, metadata=None, ttl_seconds=None, dedupe=True)` | `bool` |
| `remember_many()` | `remember_many(texts, scope=None, topics=None, metadata=None, ttl_seconds=None, dedupe=True)` | `list[bool]` |

- `scope` — partition memories by tenant, user, project, workspace
- `topics` — semantic tags stored with the memory; used by `recall()` and `list_memories()` to filter
- `metadata` — structured context stored alongside (max 4KB, must be JSON-serializable)
- `ttl_seconds` — auto-expire after N seconds. `None` = permanent (default)
- `dedupe` — when `True`, writing the same `(scope, text)` twice is a no-op
- `remember_many()` sends all texts to OpenAI in a single embeddings call and pipelines the Redis writes
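The dedupe guarantee amounts to deriving one stable key per `(scope, text)` pair. A sketch assuming a content hash is used for that key (the real key scheme is internal to agentmem):

```python
import hashlib

def dedupe_key(scope, text):
    """One stable ID per (scope, text) pair; identical pairs map to identical keys."""
    raw = f"{scope}\x00{text}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

store = {}

def remember(text, scope=None):
    key = dedupe_key(scope, text)
    store[key] = text  # a second identical write lands on the same key: a no-op
    return True        # mirrors remember()'s bool return

remember("Don't use em dashes", scope="user-42")
remember("Don't use em dashes", scope="user-42")  # deduped, same key
remember("Don't use em dashes", scope="user-43")  # different scope, new row
print(len(store))  # 2
```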
| Function | Signature | Returns |
|---|---|---|
| `recall()` | `recall(query, scope=None, topics=None, limit=5)` | `list[str]` |
| `list_memories()` | `list_memories(scope=None, topics=None, limit=100, offset=0)` | `list[dict]` |
| `count_memories()` | `count_memories(scope=None, topics=None)` | `int` |
| `export_scope()` | `export_scope(scope)` | `list[dict]` |

- `recall()` matches by meaning (semantic vector search), not exact words
- `list_memories()` returns dicts with `id`, `text`, `scope`, `topics`, `metadata`, `created_at`
- `export_scope()` is for backup or migration — safe to JSON-serialize
| Function | Signature | Returns |
|---|---|---|
| `delete_memory()` | `delete_memory(memory_id)` | `bool` |
| `delete_by_text()` | `delete_by_text(text, scope=None)` | `int` |
| `delete_scope()` | `delete_scope(scope)` | `int` |
| Function | Signature | Returns |
|---|---|---|
| `health()` | `health()` | `dict` |

Returns `{ok, backend, redis_ok, embedding_ok, latency_ms, error}`. Round-trips a Redis `PING` and an OpenAI embedding call to verify end-to-end connectivity. Never raises.
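A probe with those guarantees can be sketched as follows. Illustrative only: the two dependencies are injected as plain callables instead of real Redis and OpenAI clients:

```python
import time

def health_check(ping, embed):
    """Illustrative probe: tries both dependencies, times the round trip, never raises."""
    result = {"ok": False, "redis_ok": False, "embedding_ok": False,
              "latency_ms": 0, "error": None}
    start = time.monotonic()
    try:
        ping()                      # stand-in for Redis PING
        result["redis_ok"] = True
        embed("health probe")       # stand-in for one embedding call
        result["embedding_ok"] = True
        result["ok"] = True
    except Exception as exc:
        result["error"] = str(exc)  # captured, not re-raised
    result["latency_ms"] = int((time.monotonic() - start) * 1000)
    return result

def rate_limited(text):
    raise RuntimeError("429 rate limited")

print(health_check(lambda: "PONG", lambda text: [0.0] * 1536)["ok"])  # True
print(health_check(lambda: "PONG", rate_limited)["ok"])               # False
```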
| Function | Signature | Returns |
|---|---|---|
| `register_source()` | `register_source(name, adapter)` | `None` |
| `retrieve()` | `retrieve(query, source, scope=None, limit=5)` | `list[dict]` |
| Property | Type | Description |
|---|---|---|
| `.last_error` | `Exception` or `None` | Set on failure, cleared on success |
| `.initialized` | `bool` | `False` if init failed |
| `agentmem.__version__` | `str` | Installed package version |
agentmem never crashes the host app.
Runtime errors (Redis down, adapter timeout, network failure):
- `remember()` returns `False`
- `recall()` returns `[]`
- `retrieve()` returns `[]`
- Error stored in `last_error` and passed to the `on_error` callback
Programmer errors (invalid arguments):
- Non-JSON-serializable metadata → `MemoryValidationError`
- Metadata over 4KB → `MemoryValidationError`
- Missing adapter config → `ConfigurationError`
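The metadata rules can be sketched as a small validator. The `MemoryValidationError` here is a stand-in class mirroring the documented behaviour (JSON-serializable, 4KB cap), not agentmem's actual exception:

```python
import json

class MemoryValidationError(ValueError):
    """Stand-in for agentmem's validation error."""

MAX_METADATA_BYTES = 4096  # the documented 4KB cap (assumed to be 4096 bytes)

def validate_metadata(metadata):
    try:
        encoded = json.dumps(metadata).encode("utf-8")
    except (TypeError, ValueError) as exc:
        raise MemoryValidationError(f"metadata is not JSON-serializable: {exc}")
    if len(encoded) > MAX_METADATA_BYTES:
        raise MemoryValidationError(
            f"metadata is {len(encoded)} bytes, max {MAX_METADATA_BYTES}")
    return metadata

validate_metadata({"plan": "pro"})  # passes

try:
    validate_metadata({"conn": object()})  # raw objects are not JSON-serializable
except MemoryValidationError as exc:
    print(exc)
```

Unlike runtime errors, these raise immediately: they indicate a bug in the calling code, not a transient backend failure.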
```python
def on_err(method, exc):
    print(f"agentmem {method} failed: {exc}")

mem = AgentMem(app_id="my-app", on_error=on_err)
```

To use the `ContextSurfacesAdapter`, you need a Context Surface connected to your Redis Cloud database. This is a one-time setup.
```shell
pip install context-surfaces   # requires Python 3.11+

# Create a surface pointing at your Redis Cloud
ctxctl surface create \
  --name "my-surface" \
  --models ./models.py \
  --redis-addr "host:port" \
  --redis-password "$REDIS_PASSWORD" \
  --admin-key "$CTX_ADMIN_KEY"

# Create an agent key for querying
ctxctl agent create \
  --surface-id "$SURFACE_ID" \
  --name "my-agent" \
  --admin-key "$CTX_ADMIN_KEY"

# Verify — list auto-generated tools
ctxctl tools list --agent-key "$AGENT_KEY"
```

```python
from agentmem.adapters.context_surfaces import ContextSurfaceManager

mgr = ContextSurfaceManager(admin_key="cs_admin_...")
surfaces = mgr.list_surfaces()
tools = mgr.list_tools(agent_key="cs_agent_...")
```

- `CTX_ADMIN_KEY` — found in Redis Cloud dashboard under Context Surfaces → Access Management → API Keys
- `AGENT_KEY` — created when you run `ctxctl agent create`
- `REDIS_PASSWORD` — found in Redis Cloud dashboard under your database → Security
| Credential | Where | When needed |
|---|---|---|
| `AGENTMEM_REDIS_URL` | Your app `.env` | Direct Redis memory |
| `OPENAI_API_KEY` | Your app `.env` | Direct Redis memory (not needed for AMS) |
| `AGENTMEM_BASE_URL` | Your app `.env` | AMS memory (alternative to direct Redis) |
| `AGENTMEM_APP_ID` | Your app `.env` | Always |
| `CTX_ADMIN_KEY` | Setup only | Creating Context Surfaces (one-time) |
| Agent key (`cs_agent_...`) | Your app code or `.env` | Querying Context Surfaces |
| Supabase URL/key | Your app code or `.env` | `SupabaseAdapter` |
```shell
# From PyPI
pip install agentmem-sdk

# With Redis support
pip install "agentmem-sdk[redis]"

# With Supabase support
pip install "agentmem-sdk[supabase]"

# Everything
pip install "agentmem-sdk[all]"
```

```shell
# From a local checkout
pip install -e /path/to/agentmem
pip install -e "/path/to/agentmem[all]"
```

Run the tests:

```shell
PYTHONPATH=src python3 -m unittest discover -s tests
```

agentmem does not:

- Run an LLM
- Act as an autonomous agent
- Manage tool orchestration
- Automatically crawl or index your database
- Merge memory and retrieval into one magic result
You own the application logic. agentmem gives you memory and retrieval primitives.