Get Started
3 steps. Under 2 minutes.
1. Install
pip install agentbay

2. Add memory to your agent
from agentbay import AgentBay
brain = AgentBay() # works locally, no signup needed
# Wrap your LLM call — memory is automatic
response = brain.chat([
    {"role": "user", "content": "fix the auth session expiry bug"}
])
# brain.chat() just did 3 things:
# 1. Recalled relevant memories about auth and sessions
# 2. Injected them into the LLM context
# 3. Extracted learnings from the response and stored them

Works with Claude, GPT, Gemini, Grok, Ollama, and 15 more providers; the provider is auto-detected from your API key.
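The loop those three comments describe can be sketched with a small stdlib-only toy, where keyword overlap stands in for real retrieval (`ToyMemory` and `fake_llm` are illustrative names, not AgentBay internals):

```python
class ToyMemory:
    def __init__(self):
        self.memories = []  # learned facts, newest last

    def recall(self, query, top_k=3):
        # Score each memory by word overlap with the query.
        words = set(query.lower().split())
        scored = [(len(words & set(m.lower().split())), m) for m in self.memories]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [m for score, m in scored if score > 0][:top_k]

    def chat(self, messages, llm):
        query = messages[-1]["content"]
        relevant = self.recall(query)                       # 1. recall relevant memories
        context = ([{"role": "system",
                     "content": "Known facts: " + "; ".join(relevant)}]
                   if relevant else [])
        response = llm(context + messages)                  # 2. inject them into the call
        self.memories.append(response)                      # 3. store the new learning
        return response

memory = ToyMemory()
memory.memories.append("auth sessions expire early due to a Redis TTL mismatch")

def fake_llm(messages):
    # Stand-in model: reports whether injected context reached it.
    saw_context = any(m["role"] == "system" for m in messages)
    return "fixed using recalled context" if saw_context else "no context available"

print(memory.chat([{"role": "user", "content": "fix the auth session expiry bug"}], fake_llm))
# → fixed using recalled context
```

A real implementation would use embedding-based retrieval rather than keyword overlap, but the recall → inject → store shape is the same.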
3. That's it
Every brain.chat() call automatically recalls relevant memories before the LLM responds and stores new learnings after. Your agent gets smarter with every conversation.
# Next session, next day, different agent — it still remembers
brain = AgentBay()
response = brain.chat([
    {"role": "user", "content": "how do we handle auth?"}
])
# → Recalls the session expiry fix from yesterday

Want manual control?
# Store explicitly
brain.store("Always validate JWT expiry, not just signature",
            title="Auth pattern", type="PITFALL")
# Recall explicitly
results = brain.recall("JWT validation")
# Mem0-compatible API
brain.add("The session bug was caused by Redis TTL mismatch")
brain.search("session bug")

Building a chatbot? Scope by user.
brain.store("Prefers dark mode", user_id="thomas")
brain.store("Works on the API team", user_id="sarah")
brain.recall("preferences", user_id="thomas")
# → Only Thomas's memories

Ready for cloud?
Local mode is great for getting started. When you want teams, projects, dreaming, and knowledge graphs — upgrade with one command. All local memories migrate automatically.
brain = brain.login()
# Opens browser → sign up → paste API key → done
# All local memories migrate to cloud
# Next time:
brain = AgentBay.from_saved() # loads saved key

Drop into your framework
CrewAI
from crewai import Agent
from agentbay.integrations.crewai import AgentBayMemory

agent = Agent(role="Dev", memory=AgentBayMemory())

LangChain
from langchain.chains import LLMChain
from agentbay.integrations.langchain import AgentBayMemory

chain = LLMChain(..., memory=AgentBayMemory())

AutoGen
from agentbay.integrations.autogen import AgentBayMemory
AgentBayMemory().attach(assistant_agent)

Also: LlamaIndex, LangGraph, Vercel AI SDK, ElevenLabs, Pipecat, Mastra, Agno, Camel AI, OpenAI Codex.
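Adapters like these typically follow one pattern: implement the framework's memory hooks (load before the model call, save after) and forward to a backend with store/recall. A stdlib-only sketch of that shape (`ToyAdapter`, `ListBackend`, and the hook names are illustrative, loosely modeled on LangChain's `load_memory_variables`/`save_context`; not the real integration code):

```python
class ListBackend:
    """Toy stand-in for a memory backend exposing .store() / .recall()."""
    def __init__(self):
        self.items = []

    def store(self, text):
        self.items.append(text)

    def recall(self, query):
        words = query.lower().split()
        return [m for m in self.items if any(w in m.lower() for w in words)]

class ToyAdapter:
    """Shape of a framework memory adapter: load before the call, save after."""
    def __init__(self, backend):
        self.backend = backend

    def load_memory_variables(self, inputs):
        # Called by the framework before the LLM runs.
        hits = self.backend.recall(inputs["input"])
        return {"history": "\n".join(hits)}

    def save_context(self, inputs, outputs):
        # Called by the framework after the LLM responds.
        self.backend.store(f"Q: {inputs['input']} A: {outputs['output']}")

mem = ToyAdapter(ListBackend())
mem.save_context({"input": "how do we deploy?"}, {"output": "blue-green via CI"})
print(mem.load_memory_variables({"input": "deploy process"}))
# → {'history': 'Q: how do we deploy? A: blue-green via CI'}
```

Swapping `ListBackend` for a real memory client is the whole integration: the framework never needs to know what sits behind the two hooks.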
Multi-agent?
# Teams — agents share knowledge
team = brain.team("my-team")
response = team.chat([...]) # recalls from ALL teammates
# Projects — agents collaborate on code
proj = brain.project("my-project")
proj.ingest([{"path": "src/auth.ts", "content": "..."}])
response = proj.chat([...]) # recalls from project memory

Using Claude Code or Cursor?
Make AgentBay the default memory for every Claude Code session. Add it to your global settings once and it works everywhere:
// ~/.claude/settings.json — add this:
{
  "mcpServers": {
    "agentbay": {
      "type": "http",
      "url": "https://www.aiagentsbay.com/api/mcp",
      "headers": {
        "Authorization": "Bearer ${AGENTBAY_API_KEY}"
      }
    }
  },
  "env": {
    "AGENTBAY_API_KEY": "ab_live_your_key"
  }
}

Full global setup guide with shell config and CLAUDE.md instructions.
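The linked guide covers the shell config in full; a minimal sketch, assuming a POSIX shell (the key value is a placeholder, and persisting it means adding the export line to `~/.zshrc` or `~/.bashrc`):

```shell
# Export the key so ${AGENTBAY_API_KEY} in settings.json resolves
# without hard-coding it there. Placeholder value shown.
export AGENTBAY_API_KEY="ab_live_your_key"
echo "$AGENTBAY_API_KEY"   # verify it is set
```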
Questions? Open an issue on GitHub or check the quickstart guide.