139 changes: 139 additions & 0 deletions examples/hipposync-ai-chat/README.md
@@ -0,0 +1,139 @@
# HippoSync - AI Chat with Persistent Memory

A production-ready AI chat application demonstrating MemMachine V2's persistent memory capabilities.

## Overview

HippoSync showcases how to build an AI assistant that remembers user context across multiple conversations using MemMachine's advanced memory architecture.

## Features

- 🧠 **Cross-Chat Memory**: Conversations persist across sessions
- 🤖 **Multi-Provider AI**: OpenAI GPT-4, Anthropic Claude, Google Gemini
- 👥 **Team Collaboration**: Shared project workspaces
- 📄 **Document Processing**: Upload and discuss files
- 🔐 **Secure Authentication**: JWT-based user management

## Architecture
```
React Frontend ←→ FastAPI Backend ←→ MemMachine V2
                                       ├── PostgreSQL (vectors)
                                       └── Neo4j (graph)
```

## MemMachine Integration

### Storing Episodic Memory
```python
import requests

def add_memory(user_email: str, content: str):
    """Store an episodic memory for the given user."""
    response = requests.post(
        "http://localhost:8080/api/v2/memories",
        json={
            "org_id": f"user-{user_email}",
            "project_id": "personal",
            "agent_id": "web-assistant",
            "content": content,
        },
    )
    return response.json()
```

### Storing Semantic Facts
```python
def add_semantic_memory(user_email: str, fact: str):
    """Store a durable fact (semantic memory) for the given user."""
    response = requests.post(
        "http://localhost:8080/api/v2/memories/semantic/add",
        json={
            "org_id": f"user-{user_email}",
            "project_id": "personal",
            "content": fact,
            "memory_type": "semantic",
        },
    )
    return response.json()
```

### Searching Across Conversations
```python
def search_memories(user_email: str, query: str):
    """Search episodic and semantic memories across all of a user's conversations."""
    response = requests.post(
        "http://localhost:8080/api/v2/memories/search",
        json={
            "org_id": f"user-{user_email}",
            "project_id": "personal",
            "query": query,
            "top_k": 20,
            "search_episodic": True,
            "search_semantic": True,
        },
    )
    return response.json()
```
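
Taken together, these three calls are enough for a memory-aware chat turn: retrieve relevant context, fold it into the prompt, generate a reply, then write the exchange back. The sketch below is illustrative rather than a copy of the app's `chat.py`; `call_llm` is a hypothetical stand-in for whichever provider client is configured, and the response shape of `search_memories` is assumed.

```python
def chat_turn(user_email: str, user_message: str) -> str:
    """One memory-aware chat turn: search, prompt, respond, store."""
    # Pull relevant context from prior conversations.
    results = search_memories(user_email, user_message)
    # The "memories" key is an assumption about the search response shape.
    context = "\n".join(m.get("content", "") for m in results.get("memories", []))

    # call_llm is a hypothetical stand-in for the configured provider
    # (OpenAI, Claude, or Gemini).
    prompt = (
        f"Known context about the user:\n{context}\n\n"
        f"User: {user_message}\nAssistant:"
    )
    reply = call_llm(prompt)

    # Persist the exchange so future sessions can recall it.
    add_memory(user_email, f"User: {user_message}")
    add_memory(user_email, f"Assistant: {reply}")
    return reply
```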


## Key Implementation Files

In the full repository:
- `backend/app/memmachine_client.py` - MemMachine V2 client
- `backend/app/routes/chat.py` - Chat with memory integration
- `backend/app/utils/memory.py` - Fact extraction utilities
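
`backend/app/utils/memory.py` is not reproduced in this README. A deliberately simplified, purely illustrative version of fact extraction might look like the following; a real implementation would more likely ask an LLM which statements are worth remembering.

```python
# Naive markers for statements that tend to be durable facts (illustrative only).
DURABLE_MARKERS = ("my name is", "i am a", "i'm a", "i work", "i live in")

def extract_and_store_facts(user_email: str, message: str) -> None:
    """Store sentences that look like lasting facts as semantic memories."""
    for sentence in message.split("."):
        if any(marker in sentence.lower() for marker in DURABLE_MARKERS):
            add_semantic_memory(user_email, sentence.strip())
```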

## Memory Organization
```
user-{email}/
├── personal/              # Personal conversations
│   ├── thread-1/          # Individual chat threads
│   ├── thread-2/
│   └── ...
└── project-{id}/          # Team projects
    └── shared memory
```
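
In API terms, each branch of this tree maps onto the `org_id`/`project_id` pair passed to MemMachine. The helpers below are hypothetical, but they follow the naming convention used in the snippets above:

```python
def personal_scope(user_email: str) -> dict:
    """IDs for a user's private conversations (the personal/ branch)."""
    return {"org_id": f"user-{user_email}", "project_id": "personal"}

def project_scope(user_email: str, project_id: str) -> dict:
    """IDs for a shared team workspace (the project-{id}/ branch)."""
    return {"org_id": f"user-{user_email}", "project_id": f"project-{project_id}"}
```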

## Example Usage

**First conversation:**
```
User: "My name is Sarah and I'm a software engineer"
Assistant: "Nice to meet you, Sarah! How can I help you today?"
[Memory stored: User's name is Sarah, occupation is software engineer]
```

**New conversation (different day):**
```
User: "What projects would be good for someone in my field?"
Assistant: "As a software engineer, I'd recommend..."
[Retrieved memory: User is a software engineer]
```

## Tech Stack

**Backend:**
- FastAPI
- SQLAlchemy
- MemMachine V2 API

**Frontend:**
- React 18
- Vite
- Tailwind CSS

**Infrastructure:**
- Docker
- PostgreSQL (via MemMachine)
- Neo4j (via MemMachine)

## Deployment Environments

- ✅ Local Development (Docker Compose)
- ✅ AWS EC2 (Production)
- ✅ Docker Swarm (Scalable)


## Author

**Viranshu Paruparla**
- GitHub: [@Viranshu-30](https://github.com/Viranshu-30)


23 changes: 23 additions & 0 deletions examples/hipposync-ai-chat/backend/.env.example
@@ -0,0 +1,23 @@
# Application settings
SECRET_KEY=your-secret-key-here-change-this-in-production
DATABASE_URL=sqlite:///./app.db

# OpenAI API Key (default/fallback)
OPENAI_API_KEY=sk-your-openai-api-key-here

# Tavily API Key (for web search)
# Get your free API key at: https://tavily.com
TAVILY_API_KEY=tvly-your-tavily-api-key-here

# MemMachine configuration
MEMMACHINE_BASE_URL=http://localhost:8080
MEMMACHINE_GROUP_PREFIX=group
MEMMACHINE_AGENT_ID=web-assistant

# CORS origins (comma-separated for multiple origins)
CORS_ORIGINS=http://localhost:5173,http://localhost:3000

# Server configuration
API_HOST=0.0.0.0
API_PORT=8000
ACCESS_TOKEN_EXPIRE_MINUTES=10080
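
The backend presumably loads these values at startup. A minimal sketch with the standard library is shown below; the variable names mirror `.env.example`, but how the real app consumes them (e.g. via pydantic settings) is an assumption.

```python
import os

# Defaults mirror .env.example; these module-level constant names are assumptions.
MEMMACHINE_BASE_URL = os.getenv("MEMMACHINE_BASE_URL", "http://localhost:8080")
MEMMACHINE_AGENT_ID = os.getenv("MEMMACHINE_AGENT_ID", "web-assistant")
CORS_ORIGINS = os.getenv("CORS_ORIGINS", "http://localhost:5173").split(",")
ACCESS_TOKEN_EXPIRE_MINUTES = int(os.getenv("ACCESS_TOKEN_EXPIRE_MINUTES", "10080"))
```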
18 changes: 18 additions & 0 deletions examples/hipposync-ai-chat/backend/Dockerfile
@@ -0,0 +1,18 @@
FROM python:3.12-slim

# Install build dependencies for bcrypt
RUN apt-get update && apt-get install -y \
    gcc \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
12 changes: 12 additions & 0 deletions examples/hipposync-ai-chat/backend/Dockerfile.txt
@@ -0,0 +1,12 @@
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Empty file.