Rememberall is an open-source project designed to enhance long-term memory storage capabilities for large language models (LLMs). It provides a secure and efficient way for AI developers and GPT Store builders to integrate persistent memory across conversations in custom GPT configurations. This tool is particularly useful for applications requiring the retention and retrieval of information over time, improving the continuity and contextual awareness of interactions.
Rememberall operates through a straightforward three-step process: the custom GPT references @rememberall in its instructions, relevant memories are retrieved through the API and injected into the conversation context, and new information is stored back as memories for later recall.
In a typical use case, when a user asks about a past conversation, the system uses Rememberall to recall the context:
User: "What did we discuss about authentication last week?"
Assistant: "Let me check @rememberall. According to our previous discussion, we implemented JWT-based auth..."
To get started with Rememberall, developers can deploy the system using Docker Compose:
git clone https://github.com/yourusername/rememberall.git
cd rememberall/deploy
docker-compose up -d
Once deployed, the service exposes a minimal REST API for retrieving and creating memories:
GET /memories?search=query&limit=10&offset=0
POST /memory { "memory": "Your memory text here" }
All API endpoints require Bearer token authentication. Tokens must be included in requests as follows:
Authorization: Bearer your-jwt-token
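As an illustration, here is one way to attach that header when querying the API from Python using the requests library. The base URL and the environment variable holding the token are assumptions for the sketch; substitute the values from your own deployment:

import os
import requests

BASE_URL = "http://localhost:8000"          # hypothetical; use your deployment's URL
TOKEN = os.environ["REMEMBERALL_TOKEN"]     # hypothetical env var holding your JWT

resp = requests.get(
    f"{BASE_URL}/memories",
    params={"search": "authentication", "limit": 10, "offset": 0},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())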
Each endpoint returns a JSON payload. Get Memories:
{
  "success": true,
  "memories": [
    {
      "id": "mem_123",
      "memory": "Discussion about authentication systems"
    }
  ]
}
Create Memory:
{
  "success": true,
  "memory": {
    "id": "mem_124",
    "memory": "New project requirements discussion"
  }
}
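For completeness, a short sketch of creating a memory and reading the fields shown above, using the same hypothetical base URL and token variable as in the earlier example:

import os
import requests

BASE_URL = "http://localhost:8000"          # hypothetical; use your deployment's URL
TOKEN = os.environ["REMEMBERALL_TOKEN"]     # hypothetical env var holding your JWT

resp = requests.post(
    f"{BASE_URL}/memory",
    json={"memory": "New project requirements discussion"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
data = resp.json()
if data.get("success"):
    print("Stored memory", data["memory"]["id"])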
Rememberall is built with a focus on security and efficient data handling, with all API access gated behind Bearer token authentication.
Rememberall is a practical solution for developers looking to enhance their LLMs with long-term memory, enabling more dynamic and context-aware applications.