⚠️ IMPORTANT WARNING ⚠️ MemoRizz is an EXPERIMENTAL library intended for EDUCATIONAL PURPOSES ONLY.
Do NOT use in production environments or with sensitive data.
This library is under active development, has not undergone security audits, and may contain bugs or breaking changes in future releases.
MemoRizz is a memory management framework for AI agents, designed for building memory-augmented agents with explicit memory-type allocation based on application mode.
The framework enables developers to build context- and memory-aware agents capable of sophisticated information storage and retrieval.
MemoRizz provides flexible single- and multi-agent architectures that let you instantiate agents with specifically allocated memory types (episodic, semantic, procedural, or working memory) tailored to your application's operational requirements.
Why MemoRizz?
- 🧠 Persistent Memory: Your AI agents remember conversations across sessions
- 🔍 Semantic Search: Find relevant information using natural language with AI Vector Search
- 🛠️ Tool Integration: Automatically discover and execute functions
- 👤 Persona System: Create consistent, specialized agent personalities
- 🗄️ Oracle AI Database: Built-in integration with Oracle 23ai for advanced vector search and JSON Duality Views
- ⚡ Semantic Cache: Speed up responses and reduce costs with intelligent caching
- Persistent Memory Management: Long-term memory storage with semantic retrieval
- MemAgent System: Complete AI agents with memory, personas, and tools
- Oracle AI Database Integration: Leverages Oracle 23ai with native vector search and JSON Relational Duality Views
- Tool Registration: Automatically convert Python functions into LLM-callable tools
- Persona Framework: Create specialized agent personalities and behaviors
- Vector Embeddings: Semantic similarity search across all stored information using Oracle AI Vector Search
- Semantic Cache: Intelligent query-response caching with vector similarity matching
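As a rough sketch of the general technique behind the Tool Registration feature (this is a hypothetical helper, not MemoRizz's actual internals), a plain Python function's signature and docstring can be introspected into an OpenAI-style tool schema:

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON Schema types (illustrative subset).
_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def function_to_tool_schema(fn) -> dict:
    """Build an OpenAI-style tool schema from a function's signature and docstring."""
    hints = get_type_hints(fn)
    properties = {}
    required = []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required argument
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

def get_weather(latitude: float, longitude: float) -> float:
    """Get the current temperature for a given latitude and longitude."""
    ...

schema = function_to_tool_schema(get_weather)
print(schema["function"]["name"])  # get_weather
print(schema["function"]["parameters"]["properties"]["latitude"])  # {'type': 'number'}
```

A registry built this way is what lets the LLM discover and call functions by name; MemoRizz additionally stores tool metadata with embeddings so tools can be found by semantic search.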
pip install memorizz

Prerequisites:
- Python 3.7+
- Oracle Database 23ai or higher (for AI Vector Search and JSON Duality Views)
- OpenAI API key (for embeddings and LLM functionality)
Using Docker (Recommended for getting started):
# Pull Oracle Database 23ai Free (with AI Vector Search)
docker pull container-registry.oracle.com/database/free:latest
# Run Oracle (takes 2-3 minutes to start)
docker run -d \
--name oracle-memorizz \
-p 1521:1521 \
-e ORACLE_PWD=MyPassword123! \
container-registry.oracle.com/database/free:latest
# Wait for database to be ready
docker logs -f oracle-memorizz
# Wait until you see: "DATABASE IS READY TO USE!"

Quick Setup:
# Clone the repository (if installing from source)
git clone https://github.com/RichmondAlake/memorizz.git
cd memorizz
# Run the automated setup script
python examples/setup_oracle_user.py

This script will:
- Create the memorizz_user with all required privileges
- Set up the relational schema (tables + indexes)
- Create JSON Relational Duality Views
- Verify the complete setup
import os
from memorizz.memory_provider.oracle import OracleProvider, OracleConfig
from memorizz.memagent.builders import MemAgentBuilder
# Set up your API keys
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
# Configure Oracle memory provider
oracle_config = OracleConfig(
user="memorizz_user",
password="SecurePass123!",
dsn="localhost:1521/FREEPDB1",
embedding_provider="openai",
embedding_config={
"model": "text-embedding-3-small",
"api_key": os.environ["OPENAI_API_KEY"]
}
)
oracle_provider = OracleProvider(oracle_config)
# Create a MemAgent using the builder pattern
agent = (MemAgentBuilder()
.with_instruction("You are a helpful assistant with persistent memory.")
.with_memory_provider(oracle_provider)
.with_llm_config({
"provider": "openai",
"model": "gpt-4o-mini",
"api_key": os.environ["OPENAI_API_KEY"]
})
.build()
)
# Save the agent to Oracle
agent.save()
# Start conversing - the agent will remember across sessions
response = agent.run("Hello! My name is John and I'm a software engineer.")
print(response)
# Later in another session...
response = agent.run("What did I tell you about myself?")
print(response)  # Agent remembers John is a software engineer

| Agent Type | Description | Example Notebook |
|---|---|---|
| Single Agent | A standalone agent with its own memory and persona, suitable for individual tasks | Single Agent Example |
| Multi-Agent | A system of multiple agents collaborating, each with specialized roles and shared memory | Multi-Agent Example |

| Memory Component | Memory Category | Use Case / Description | Example Notebook |
|---|---|---|---|
| Persona | Semantic Memory | Agent identity, personality, and behavioral consistency | Persona Example |
| Knowledge Base | Semantic Memory | Persistent facts, concepts, and domain knowledge | Knowledge Base Example |
| Toolbox | Procedural Memory | Registered functions with semantic discovery for LLM execution | Toolbox Example |
| Workflow | Procedural Memory | Multi-step process orchestration and execution tracking | Workflow Example |
| Conversation Memory | Episodic Memory | Interaction history and conversational context | Single Agent Example |
| Summaries | Episodic Memory | Compressed episodic experiences and events | Summarization Example |
| Working Memory | Short-term Memory | Active context management and current session state | Single Agent Example |
| Semantic Cache | Short-term Memory | Vector-based query-response caching for performance optimization | Semantic Cache Demo |
| Shared Memory | Multi-Agent Coordination | Blackboard for inter-agent communication and coordination | Multi-Agent Example |
from memorizz.long_term_memory.semantic.persona import Persona, RoleType
# Create a technical expert persona
tech_expert = Persona(
name="TechExpert",
role=RoleType.TECHNICAL_EXPERT,
goals="Help developers solve complex technical problems with detailed explanations.",
background="10+ years experience in Python, AI/ML, and distributed systems."
)
# Create agent with persona
agent = (MemAgentBuilder()
.with_instruction("You are a technical expert assistant.")
.with_persona(tech_expert)
.with_memory_provider(oracle_provider)
.with_llm_config({
"provider": "openai",
"model": "gpt-4o",
"api_key": os.environ["OPENAI_API_KEY"]
})
.build()
)
# Save agent with persona to Oracle
agent.save()
# Now the agent will respond as a technical expert
response = agent.run("How should I design a scalable microservices architecture?")

import requests
# Define a tool function
def get_weather(latitude: float, longitude: float) -> float:
    """Get the current temperature for a given latitude and longitude."""
    response = requests.get(
        f"https://api.open-meteo.com/v1/forecast"
        f"?latitude={latitude}&longitude={longitude}"
        f"&current=temperature_2m"
    )
    data = response.json()
    return data['current']['temperature_2m']
# Create agent with tools
weather_agent = (MemAgentBuilder()
.with_instruction("You are a helpful weather assistant.")
.with_tool(get_weather)
.with_memory_provider(oracle_provider)
.with_llm_config({
"provider": "openai",
"model": "gpt-4o",
"api_key": os.environ["OPENAI_API_KEY"]
})
.build()
)
# Save agent (tools are persisted to Oracle)
weather_agent.save()
# Agent automatically discovers and uses tools
response = weather_agent.run(
"What's the weather in New York? (latitude: 40.7128, longitude: -74.0060)"
)
print(response)  # Agent calls get_weather() and provides the temperature

Speed up your agents and reduce LLM costs with intelligent semantic caching:
# Build agent without cache first
agent = (MemAgentBuilder()
.with_llm_config({
"provider": "openai",
"model": "gpt-4o-mini",
"api_key": os.environ["OPENAI_API_KEY"]
})
.with_memory_provider(oracle_provider)
.build()
)
# Enable semantic cache (stored in Oracle)
agent.enable_semantic_cache(
threshold=0.85, # Similarity threshold (0.0-1.0)
scope='local' # 'local', 'global', or 'agent'
)
# Similar queries will use cached responses from Oracle
response1 = agent.run("What is the capital of France?")
response2 = agent.run("Tell me France's capital city") # Cache hit! ⚡
response3 = agent.run("What's the capital of Japan?")  # New query, cache miss

How Semantic Cache Works:
- Store queries + responses with vector embeddings in Oracle
- New query arrives → generate embedding
- Similarity search in Oracle using AI Vector Search and cosine similarity
- Cache hit (similarity ≥ threshold) → return cached response ⚡
- Cache miss → fallback to LLM + cache new response in Oracle
Benefits:
- 🚀 Faster responses for similar queries
- 💰 Reduced LLM costs by avoiding duplicate API calls
- 🎯 Configurable precision with similarity thresholds
- 🔒 Scoped isolation by agent, memory, or session ID
- 🗄️ Persistent caching in Oracle database with vector search
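The lookup flow above can be sketched in plain Python. This toy version (hypothetical names, not the MemoRizz implementation) substitutes a bag-of-words count vector for a real embedding model and a linear scan for Oracle AI Vector Search, but the hit/miss logic is the same. Note that real embeddings also match paraphrases; the toy embedding only matches queries with overlapping words:

```python
import math
import re
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b.get(word, 0) for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class ToySemanticCache:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []  # (query embedding, response)

    def lookup(self, query: str):
        q = toy_embed(query)                      # 1. embed the new query
        for emb, response in self.entries:        # 2. similarity search (linear scan)
            if cosine(q, emb) >= self.threshold:  # 3. hit if above threshold
                return response
        return None                               # 4. miss: caller falls back to the LLM

    def store(self, query: str, response: str):
        self.entries.append((toy_embed(query), response))

cache = ToySemanticCache(threshold=0.85)
cache.store("What is the capital of France?", "Paris")
print(cache.lookup("what is the capital of france"))  # hit: "Paris" (similarity 1.0)
print(cache.lookup("What is the capital of Japan?"))  # miss: None (similarity ~0.83)
```

In the real system, step 2 is a vector index query inside Oracle rather than a Python loop, so the cache scales to large histories.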
Compress long conversation histories into concise summaries:
# After having several conversations with your agent
summary_ids = agent.generate_summaries(
days_back=7, # Summarize conversations from the last 7 days
max_memories_per_summary=50 # Memories per summary chunk
)
print(f"Generated {len(summary_ids)} summaries")
# Summaries are stored in Oracle and can be retrieved
from memorizz.common.memory_type import MemoryType
summaries = oracle_provider.retrieve_by_query(
query={'agent_id': agent.agent_id},
memory_store_type=MemoryType.SUMMARIES,
limit=10
)
for summary in summaries:
    print(f"Summary: {summary['content'][:200]}...")

MemoRizz supports different memory categories for organizing information:
- CONVERSATION_MEMORY: Chat history and dialogue context
- WORKFLOW_MEMORY: Multi-step process information and tool execution tracking
- LONG_TERM_MEMORY: Persistent knowledge storage with semantic search
- SHORT_TERM_MEMORY: Temporary processing information including semantic cache for query-response optimization
- PERSONAS: Agent personality and behavior definitions
- TOOLBOX: Function definitions and metadata with semantic discovery
- SHARED_MEMORY: Multi-agent coordination and communication
- MEMAGENT: Agent configurations and states
- SUMMARIES: Compressed summaries of past interactions for efficient memory management
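The library exposes these categories via memorizz.common.memory_type.MemoryType (used in the summaries example above). As a conceptual sketch of how category-based routing works — the enum members mirror the list above, but BucketedStore is a hypothetical illustration, not the MemoRizz provider — each record is stored in and retrieved from the bucket for its declared memory type:

```python
from enum import Enum, auto

# Mirrors the memory categories listed above; the real enum lives in
# memorizz.common.memory_type.
class MemoryType(Enum):
    CONVERSATION_MEMORY = auto()
    WORKFLOW_MEMORY = auto()
    LONG_TERM_MEMORY = auto()
    SHORT_TERM_MEMORY = auto()
    PERSONAS = auto()
    TOOLBOX = auto()
    SHARED_MEMORY = auto()
    MEMAGENT = auto()
    SUMMARIES = auto()

class BucketedStore:
    """Toy store that routes each record into a per-category bucket."""
    def __init__(self):
        self.buckets = {t: [] for t in MemoryType}

    def store(self, data: dict, memory_store_type: MemoryType) -> None:
        self.buckets[memory_store_type].append(data)

    def retrieve(self, memory_store_type: MemoryType, limit: int = 10) -> list:
        return self.buckets[memory_store_type][-limit:]  # most recent records first

store = BucketedStore()
store.store({"role": "user", "content": "Hi"}, MemoryType.CONVERSATION_MEMORY)
store.store({"content": "Week in review..."}, MemoryType.SUMMARIES)
print(len(store.retrieve(MemoryType.CONVERSATION_MEMORY)))  # 1
```

Explicit typing like this is what lets an agent load only the memory categories its application mode needs.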
MemoRizz leverages Oracle 23ai's JSON Relational Duality Views, which provide:
- Dual Interface: Access data as both relational tables and JSON documents
- Automatic Sync: Changes in JSON reflect in tables and vice versa
- Type Safety: Relational schema ensures data integrity
- Performance: Native vector search with Oracle AI Vector Search
- Scalability: Enterprise-grade database capabilities
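Conceptually, a duality view is one underlying store with two synchronized access styles. The toy class below is a plain-Python illustration of that idea only (hypothetical names; Oracle implements this natively inside the database, over real tables): a write through the relational interface is immediately visible through the JSON interface, and vice versa.

```python
import json

class ToyDualityView:
    """Conceptual illustration: one shared store, two access styles."""
    def __init__(self):
        self._rows: dict[str, dict] = {}  # shared relational-style storage

    # Relational-style access: update a single "column" of a row
    def update_row(self, row_id: str, column: str, value) -> None:
        self._rows.setdefault(row_id, {"id": row_id})[column] = value

    # Document-style access over the SAME rows
    def get_document(self, row_id: str) -> str:
        return json.dumps(self._rows[row_id])

    def put_document(self, row_id: str, doc: str) -> None:
        self._rows[row_id] = json.loads(doc)

view = ToyDualityView()
view.update_row("agent-1", "name", "TechExpert")  # write via the "table"
print(view.get_document("agent-1"))               # read the same data as JSON
view.put_document("agent-1", '{"id": "agent-1", "name": "TechExpert", "model": "gpt-4o"}')
print(view._rows["agent-1"]["model"])             # JSON write visible relationally
```

In Oracle 23ai the synchronization, type checking, and concurrency control all happen in the database engine, which is why agent state can be safely read and written through either interface.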
Extend the memory provider interface for custom storage backends:
from memorizz.memory_provider.base import MemoryProvider
class CustomMemoryProvider(MemoryProvider):
    def store(self, data, memory_store_type):
        # Your custom storage logic
        pass

    def retrieve_by_query(self, query, memory_store_type, limit=10):
        # Your custom retrieval logic
        pass

Create collaborative agent systems with shared memory in Oracle:
# Create specialized delegate agents
data_analyst = (MemAgentBuilder()
.with_instruction("You are a data analysis expert.")
.with_memory_provider(oracle_provider)
.with_llm_config({
"provider": "openai",
"model": "gpt-4o",
"api_key": os.environ["OPENAI_API_KEY"]
})
.build()
)
report_writer = (MemAgentBuilder()
.with_instruction("You are a report writing specialist.")
.with_memory_provider(oracle_provider)
.with_llm_config({
"provider": "openai",
"model": "gpt-4o",
"api_key": os.environ["OPENAI_API_KEY"]
})
.build()
)
# Create orchestrator agent with delegates
orchestrator = (MemAgentBuilder()
.with_instruction("You coordinate between specialists to complete complex tasks.")
.with_memory_provider(oracle_provider)
.with_delegates([data_analyst, report_writer])
.with_llm_config({
"provider": "openai",
"model": "gpt-4o",
"api_key": os.environ["OPENAI_API_KEY"]
})
.build()
)
# Execute multi-agent workflow
response = orchestrator.run("Analyze our sales data and create a quarterly report.")

Control agent memory persistence in Oracle:
# Save agent state to Oracle
agent.save()
# Load existing agent by ID from Oracle
from memorizz.memagent.core import MemAgent
existing_agent = MemAgent.load(
agent_id="your-agent-id",
memory_provider=oracle_provider
)
# Refresh agent from Oracle database
agent.refresh()
# Start a new conversation
agent.start_new_conversation()
# Get current state
conversation_id = agent.get_current_conversation_id()
memory_id = agent.get_current_memory_id()

┌──────────────────────┐
│ MemAgent │ ← High-level agent interface
├──────────────────────┤
│ Persona │ ← Agent personality & behavior
├──────────────────────┤
│ Toolbox │ ← Function registration & discovery
├──────────────────────┤
│ Memory Provider │ ← Storage abstraction layer
├──────────────────────┤
│ Vector Search │ ← Semantic similarity & retrieval
├──────────────────────┤
│ Oracle 23ai │ ← Persistent storage with:
│ - Relational Tables│ • AI Vector Search
│ - Duality Views │ • JSON Documents
│ - Vector Indexes │ • Enterprise Features
└──────────────────────┘
Check out the examples/ directory for complete working examples:
- single_agent/memagent_single_agent_demo.ipynb: Complete single agent demo with Oracle
- memagents_multi_agents.ipynb: Multi-agent collaboration workflows
- persona.ipynb: Creating and using agent personas
- toolbox.ipynb: Tool registration and function calling
- workflow.ipynb: Workflow memory and process tracking
- knowledge_base.ipynb: Long-term knowledge management
- semantic_cache.ipynb: Semantic cache for performance optimization
- memagent_summarisation.ipynb: Conversation summarization
Connection Details (using Docker):
- Host: localhost
- Port: 1521
- Service Name: FREEPDB1
- User: memorizz_user
- Password: SecurePass123! (change in production)
Manual Setup:
If you prefer manual setup, see the detailed guide in examples/single_agent/memagent_single_agent_demo.ipynb.
# Required
export OPENAI_API_KEY="your-openai-api-key"
# Oracle Connection (optional if using OracleConfig)
export ORACLE_USER="memorizz_user"
export ORACLE_PASSWORD="SecurePass123!"
export ORACLE_DSN="localhost:1521/FREEPDB1"

Common Issues:
- Oracle Connection: Ensure Oracle is running and accessible
  docker ps                   # Check if the oracle-memorizz container is running
  docker logs oracle-memorizz # Check the startup logs
- Vector Search: Oracle Database 23ai or higher is required for AI Vector Search
- API Keys: Check that your OpenAI API key is valid and has credits
- Duality Views: Ensure setup_oracle_user.py completed successfully
- Import Errors: Use the import paths shown in the examples
This is an educational project. Contributions for learning purposes are welcome:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
MIT License - see LICENSE file for details.
This library demonstrates key concepts in:
- AI Agent Architecture: Memory, reasoning, and tool use
- Vector Databases: Semantic search and retrieval with Oracle AI Vector Search
- LLM Integration: Function calling and context management
- Oracle 23ai Features: JSON Relational Duality Views and AI capabilities
- Software Design: Clean abstractions and extensible architecture
Oracle Database 23ai provides enterprise-grade features that make it ideal for AI agent memory:
- Native Vector Search: Built-in AI Vector Search with multiple distance metrics
- JSON Duality Views: Query data as JSON or SQL with automatic synchronization
- Transactional Consistency: ACID properties for reliable memory storage
- Scalability: Handle millions of memories with enterprise performance
- Security: Row-level security, encryption, and comprehensive audit trails
- Free Tier: Oracle Database 23ai Free for development and learning
Ready to build memory-augmented AI agents? Start with the Quick Start guide above or explore the examples directory!