Compare commits

..

1 Commits

Author SHA1 Message Date
Ke Sun
feae2f2e1e Merge pull request #1033 from SuanmoSuanyangTechnology/release/v0.3.2
Release/v0.3.2
2026-04-30 11:12:12 +08:00
37 changed files with 897 additions and 1601 deletions

View File

@@ -1,74 +0,0 @@
# Contributing to MemoryBear
感谢你对 MemoryBear 的关注!我们欢迎任何形式的贡献。
## 如何贡献
### 报告问题
- 使用 [GitHub Issues](https://github.com/SuanmoSuanyangTechnology/MemoryBear/issues) 提交 Bug 报告或功能建议
- 提交前请先搜索是否已有相同的 Issue
### 提交代码
1. Fork 本仓库
2. 创建功能分支:`git checkout -b feature/your-feature-name`
3. 提交更改:遵循 [Conventional Commits](https://www.conventionalcommits.org/) 格式
4. 推送分支:`git push origin feature/your-feature-name`
5. 创建 Pull Request
6. Pull Request合并的目标分支为develop
### Commit 格式
```
<type>(<scope>): <description>
[optional body]
```
**Type 类型:**
| Type | 说明 |
|------|------|
| `feat` | 新功能 |
| `fix` | Bug 修复 |
| `docs` | 文档更新 |
| `style` | 代码格式(不影响逻辑) |
| `refactor` | 重构(非新功能、非修复) |
| `perf` | 性能优化 |
| `test` | 测试相关 |
| `chore` | 构建/工具链变更 |
**示例:**
```
feat(extraction): add ALIAS_OF relationship for entity deduplication
fix(search): correct hybrid search ranking when activation values are missing
docs(readme): update architecture diagram with generated images
```
### 开发环境
```bash
# 后端
cd api
pip install uv && uv sync
source .venv/bin/activate
pytest # 运行测试
# 前端
cd web
npm install
npm run lint # 代码检查
npm run dev # 开发服务器
```
### 代码规范
- Python遵循 PEP 8行宽不超过 120 字符
- TypeScript通过 ESLint 检查
- 提交前确保测试通过
## 行为准则
请保持友善和尊重。我们致力于为所有人提供一个开放、包容的社区环境。

523
README.md
View File

@@ -1,306 +1,217 @@
<img width="2346" height="1310" alt="MemoryBear Hero Banner" src="https://github.com/user-attachments/assets/2c0a3f72-1a14-4017-93c8-a7f490d545b6" />
<img width="2346" height="1310" alt="image" src="https://github.com/user-attachments/assets/bc73a64d-cd1e-4d22-be3e-04ce40423a20" />
<div align="center">
# MemoryBear — Empowering AI with Human-Like Memory
**Next-Generation AI Memory Management System · Perceive · Extract · Associate · Forget**
# MemoryBear empowers AI with human-like memory capabilities
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/Python-3.12+-green?logo=python&logoColor=white)](https://www.python.org/)
[![FastAPI](https://img.shields.io/badge/FastAPI-0.100+-teal?logo=fastapi&logoColor=white)](https://fastapi.tiangolo.com/)
[![Neo4j](https://img.shields.io/badge/Neo4j-4.4+-blue?logo=neo4j&logoColor=white)](https://neo4j.com/)
[![Gitee Sync](https://img.shields.io/github/actions/workflow/status/SuanmoSuanyangTechnology/MemoryBear/sync-to-gitee.yml?label=Gitee%20Sync&logo=gitee&logoColor=white)](https://github.com/SuanmoSuanyangTechnology/MemoryBear/actions/workflows/sync-to-gitee.yml)
[中文](./README_CN.md) | English
[Quick Start](#quick-start) · [Installation](#installation) · [Core Features](#core-features) · [Architecture](#architecture) · [Benchmarks](#benchmarks) · [Papers](#papers)
### [Installation Guide](#memorybear-installation-guide)
### Paper: <a href="https://memorybear.ai/pdf/memoryBear" target="_blank" rel="noopener noreferrer">《Memory Bear AI: A Breakthrough from Memory to Cognition》</a>
## Project Overview
MemoryBear is a next-generation AI memory system independently developed by RedBear AI. Its core breakthrough lies in moving beyond the limitations of traditional "static knowledge storage". Inspired by the cognitive mechanisms of biological brains, MemoryBear builds an intelligent knowledge-processing framework that spans the full lifecycle of perception, refinement, association, and forgetting.The system is designed to free machines from the trap of mere "information accumulation", enabling deep knowledge understanding, autonomous evolution, and ultimately becoming a key partner in human-AI cognitive collaboration.
</div>
## MemoryBear was created to address these challenges
### 1. Core causes of knowledge forgetting in single models</br>
Context window limitations: Mainstream large language models typically have context windows of 8k-32k tokens. In long conversations, earlier messages are pushed out of the window, causing later responses to lose their historical context.For example, a user says in turn 1, "I'm allergic to seafood", but by turn 5 when they ask, "What should I have for dinner tonight?" the model may have already forgotten the allergy information.</br>
---
Gap between static knowledge bases and dynamic data: The model's training corpus is a static snapshot (e.g., data up to 2023) and cannot continuously absorb personalized information from user interactions, such as preferences or order history. External memory modules are required to supplement and maintain this dynamic, user-specific knowledge.</br>
## Overview
Limitations of the attention mechanism: In Transformer architectures, self-attention becomes less effective at capturing long-range dependencies as the sequence grows. This leads to a recency bias, where the model overweights the latest input and ignores crucial information that appeared earlier in the conversation.</br>
MemoryBear is a next-generation AI memory system developed by RedBear AI. Its core breakthrough lies in moving beyond the limitations of traditional "static knowledge storage". Inspired by the cognitive mechanisms of biological brains, MemoryBear builds an intelligent knowledge-processing framework that spans the full lifecycle of **perception → extraction → association → forgetting**.
### 2. Memory gaps in multi-agent collaboration</br>
Data silos between agents: Different agents-such as a consulting agent, after-sales agent, and recommendation agent-often maintain their own isolated memories without a shared layer. As a result, users have to repeat information. For instance, after providing their address to the consulting agent, the user may be asked for it again by the after-sales agent.</br>
Unlike traditional memory tools that treat knowledge as static data to be retrieved, MemoryBear emulates the hippocampus's memory encoding, the neocortex's knowledge consolidation, and synaptic pruning-based forgetting — enabling knowledge to dynamically evolve with life-like properties. This shifts the relationship between AI and users from **passive lookup** to **proactive cognitive assistance**.
Inconsistent dialogue state: When switching between agents in multi-turn interactions, key dialogue state-such as the user's current intent or past issue labels-may not be passed along completely. This causes service discontinuities. For example,a user transitions from "product inquiry" to "complaint", but the new agent does not inherit the complaint details discussed earlier.</br>
## Papers
Conflicting decisions: Agents that only see partial memory can generate contradictory responses. For example, a recommendation agent might suggest products that the user is allergic to, simply because it does not have access to the user's recorded health constraints.</br>
| Paper | Description |
|-------|-------------|
| 📄 [Memory Bear AI: A Breakthrough from Memory to Cognition](https://memorybear.ai/pdf/memoryBear) | MemoryBear core technical report |
| 📄 [Memory Bear AI Memory Science Engine for Multimodal Affective Intelligence](https://arxiv.org/abs/2603.22306) | Technical report on multimodal affective intelligence memory engine |
| 📄 [A-MBER: Affective Memory Benchmark for Emotion Recognition](https://arxiv.org/abs/2604.07017) | Affective memory benchmark dataset |
### 3. Semantic ambiguity during model reasoning distorted understanding of personalized context</br>
Personalized signals in user conversations-such as domain-specific jargon, colloquial expressions, or context-dependent references-are often not encoded accurately, leading to semantic drift in how the model interprets memory. For instance, when the user refers to "that plan we discussed last time", the model may be unable to reliably locate the specific plan in previous conversations. Broken cross-lingual and dialect memory links in multilingual or dialect-rich scenarios, cross-language associations in memory may fail. When a user mixes Chinese and English in their requests, the model may struggle to integrate information expressed across languages.</br>
## Why MemoryBear
Typical example: A user says: "Last time customer support told me it could be processed 'as an urgent case'. What's the status now?" If the system never encoded what "urgent" corresponds to in terms of a concrete service level, the model can only respond with vague, unhelpful answers.</br>
### Knowledge Forgetting in Single Models
## Core Positioning of MemoryBear
Unlike traditional memory management tools that treat knowledge as static data to be retrieved, MemoryBear is designed around the goal of simulating the knowledge-processing logic of the human brain. It builds a closed-loop system that spans the entire lifecycle-from knowledge intake to intelligent output. By emulating the hippocampus's memory encoding, the neocortex's knowledge consolidation, and synaptic pruning-based forgetting mechanisms, MemoryBear enables knowledge to dynamically evolve with "life-like" properties. This fundamentally redefines the relationship between knowledge and its users-shifting from passive lookup to proactive cognitive assistance.</br>
- **Context window limits**: Mainstream LLMs have 8k32k token windows. In long conversations, early messages are pushed out, causing responses to lose historical context
- **Static knowledge gap**: Training data is a static snapshot — it cannot absorb personalized information (preferences, history) from live interactions
- **Recency bias**: Transformer self-attention weakens on long-range dependencies, overweighting recent input and ignoring earlier critical information
## Core Philosophy of MemoryBear
MemoryBear's design philosophy is rooted in deep insight into the essence of human cognition: the value of knowledge does not lie in its accumulation, but in the continuous transformation and refinement that occurs as it flows.
### Memory Gaps in Multi-Agent Collaboration
In traditional systems, once stored, knowledge becomes static-hard to associate across domains and incapable of adapting to users' cognitive needs. MemoryBear, by contrast, is built on the belief that true intelligence emerges only when knowledge undergoes a full evolutionary process: raw information distilled into structured rules, isolated rules connected into a semantic network, redundant information intelligently forgotten. Through this progression, knowledge shifts from mere informational memory to genuine cognitive understanding, enabling the emergence of real intelligence.</br>
- **Data silos**: Different agents (consulting, after-sales, recommendation) maintain isolated memories, forcing users to repeat information
- **Inconsistent dialogue state**: When switching agents, user intent and history labels are not fully passed along, causing service discontinuities
- **Decision conflicts**: Agents with partial memory can produce contradictory responses (e.g., recommending products a user is allergic to)
## Core Features of MemoryBear
As an intelligent memory management system inspired by biological cognitive processes, MemoryBear centers its capabilities on two dimensions: full-lifecycle knowledge memory management and intelligent cognitive evolution. It covers the complete chain-from memory ingestion and refinement to storage, retrieval, and dynamic optimization-while providing a standardized service architecture that ensures efficient integration and invocation across applications.</br>
### Semantic Ambiguity in Reasoning
### 1. Memory Extraction Engine: Multi-dimensional Structured Refinement as the Foundation of Cognition</br>
Memory extraction is the starting point of MemoryBear's cognitive-oriented knowledge management. Unlike traditional data extraction, which performs "mechanical transformation", MemoryBear focuses on semantic-level parsing of unstructured information and standardized multi-format outputs, ensuring precise compatibility with downstream graph construction and intelligent retrieval. Core capabilities include:</br>
- Domain jargon, colloquial expressions, and context-dependent references are not accurately encoded, leading to semantic drift in memory interpretation
- Cross-language memory associations fail in multilingual or dialect-rich scenarios
Accurate parsing of diverse information types: The engine automatically identifies and extracts core information from declarative sentences, removing redundant modifiers while preserving the essential subject-action-object logic. It also extracts structured triples (e.g., "MemoryBear-core functionality-knowledge extraction"), providing atomic data units for graph storage and ensuring high-accuracy knowledge association.</br>
<img width="2294" height="1154" alt="Why MemoryBear" src="https://github.com/user-attachments/assets/5e4192d8-ab76-402a-9e80-50d6ede147b9" />
Temporal information anchoring: For time-sensitive knowledge-such as event logs, policy documents, or experimental data-the engine automatically extracts timestamps and associates them with the content. This enables time-based reasoning and resolves the "temporal confusion" found in traditional knowledge systems.</br>
---
Intelligent pruning summarization: Based on contextual semantic understanding, the engine generates summaries that cover all key information with strong logical coherence. Users may customize summary length (50-500 words) and emphasis (technical, business, etc.), enabling fast knowledge acquisition across scenarios.Example: For a 10-page technical document, MemoryBear can produce a concise summary including core parameters, implementation logic, and application scenarios in under 3 seconds.</br>
## Core Features
### 2. Graph Storage: Neo4j-Powered Visual Knowledge Networks</br>
The storage layer adopts a graph-first architecture, integrating with the mature Neo4j graph database to manage knowledge entities and relationships efficiently. This overcomes limitations of traditional relational databases-such as weak relational modeling and slow complex queries-and mirrors the biological "neuron-synapse" cognition model.</br>
<img width="2294" height="1154" alt="MemoryBear Core Features" src="https://github.com/user-attachments/assets/5ae1e2bf-24be-4487-9065-7209f2a57f65" />
Key advantages include:
Scalable, flexible storage: supportting millions of entities and tens of millions of relational edges, covering 12 core relationship types (hierarchical, causal, temporal, logical, etc.) to fit multi-domain knowledge applications. Seamless integration with the extraction module: Extracting triples synchronize directly into Neo4j, automatically constructing the initial knowledge graph with zero manual mapping. Interactive graph visualization: users can intuitively explore entity connection paths, adjust relationship weights, and perform hybrid "machine-generated + human-optimized" graph management.</br>
### Memory Extraction Engine
### 3. Hybrid Search: Keyword + Semantic Vector for Precision and Intelligence</br>
To overcome the classic tradeoff-precision but rigidity vs. fuzziness but inaccuracy-MemoryBear implements a hybrid retrieval framework combining keyword search and semantic vector search.</br>
Performs **semantic-level parsing** of unstructured conversations and documents to extract:
Keyword search: Optimized with Lucene, enabling millisecond-level exact matching of structured Semantic vector search:Powered by BERT embeddings, transforming queries into high-dimensional vectors for deep semantic comparison. This allows recognition of synonyms, near-synonyms, and implicit intent.For example, the query "How to optimize memory decay efficiency?" may surface related knowledge such as "forgetting-mechanism parameter tuning" or "memory strength evaluation methods".
Intelligent fusion strategy:Semantic retrieval expands the candidate space; keyword retrieval then performs precise filtering.This dual-stage process increases retrieval accuracy to 92%, improving by 35% compared with single-mode retrieval.</br>
- **Core declarative information**: Strips redundant modifiers, preserving subject-action-object logic
- **Structured triples**: Automatically extracts entity relationships (e.g., `MemoryBear → core function → knowledge extraction`) as atomic units for graph storage
- **Temporal anchoring**: Automatically extracts and tags timestamps, enabling time-based knowledge tracing
- **Intelligent summarization**: Customizable length (50500 words) and focus; generates concise summaries of 10-page documents in under 3 seconds
### 4. Memory Forgetting Engine: Dynamic Decay Based on Strength & Timeliness</br>
Forgetting is one of MemoryBear's defining features-setting it apart from static knowledge systems. Inspired by the brain's synaptic pruning mechanism, MemoryBear models forgetting using a dual-dimension approach based on memory strength and time decay, ensuring redundant knowledge is removed while key knowledge retains cognitive priority.</br>
### Graph Storage (Neo4j)
Implementation details:Each knowledge item is assigned an initial memory strength (determined by extraction quality and manual importance labels). Strength is updated dynamically according to usage frequency and association activity; A configurable time-decay cycle defines how different knowledge types (core rules vs. temporary data) lose strength over time. When knowledge falls below the strength threshold and exceeds its validity period, it enters a three-stage lifecycle: Dormancy-retained but with lower retrieval priority. Decay-gradually compressed to reduce storage cost. Clearance -permanently removed and archived into cold storage. This mechanism maintains redundant knowledge under 8%, reducing waste by over 60% compared with systems lacking forgetting capabilities.</br>
**Graph-first architecture** integrated with Neo4j, overcoming the weak relational modeling of traditional databases:
### 5. Self-Reflection Engine: Periodic Optimization for Autonomous Memory Evolution</br>
The self-reflection mechanism is key to MemoryBear's "intelligent self-improvement'. It periodically revisits, validates, and optimizes existing knowledge, mimicking the human behavior of review and retrospection.</br>
- Supports millions of entities and tens of millions of relational edges
- Covers 12 core relationship types: hierarchical, causal, temporal, logical, and more
- Extracted triples sync directly to Neo4j, automatically building the initial knowledge graph
- Interactive graph visualization with "machine-generated + human-optimized" collaborative management
A scheduled reflection process runs automatically at midnight each day, performing:
1. Consistency checks, Detects logical conflicts across related knowledge (e.g., contradictory attributes for the same entity), flags suspicious records, and routes them for human verification;
2. Value assessment, Evaluates invocation frequency and contribution to associations. High-value knowledge is reinforced; low-value knowledge experiences accelerated decay;
3. Association optimization, Adjusts relationship weights based on recent usage and retrieval behavior, strengthening high-frequency association paths.</br>
### Hybrid Search
### 6. FastAPI Services: Standardized API Layer for Efficient Integration & Management</br>
To support seamless integration with external business systems, MemoryBear uses FastAPI to build a unified service architecture that exposes both management and service APIs with high performance, easy integration, and strong consistency. Service-side APIs cover knowledge extraction, graph operations, search queries, forgetting management, and more. Support JSON/XML formats, with average latency below 50 ms, and a single instance sustaining 1000 QPS concurrency. Management-side APIs provide configuration, permissions, log queries, batch knowledge import/export, reflection cycle adjustments, and other operational capabilities. Swagger API documentation is auto-generated, including parameter descriptions, request samples, and response schemas, enabling rapid integration and testing. The architecture is compatible with enterprise microservice ecosystems, supports Docker-based deployment, and integrates easily with CRM, OA, R&D management, and various business applications.</br>
**Keyword retrieval + semantic vector retrieval** dual-engine fusion:
## MemoryBear Architecture Overview
<img width="2294" height="1154" alt="image" src="https://github.com/user-attachments/assets/3afd3b49-20ea-4847-b9ed-38b646a4ad89" />
</br>
- Memory Extraction Engine: Preprocessing, deduplication, and structured knowledge extraction</br>
- Memory Forgetting Engine: Memory strength modeling and decay strategies</br>
- Memory Reflection Engine: Evaluation and rewriting of stored memories</br>
- Retrieval Services: Keyword search, semantic search, and hybrid retrieval</br>
- Agent & MCP Integration: Multi-tool collaborative agent capabilities</br>
- Keyword search powered by Elasticsearch for millisecond-level exact matching of structured information
- Semantic vector search via BERT embeddings, recognizing synonyms, near-synonyms, and implicit intent
- Semantic retrieval expands the candidate space; keyword retrieval then performs precise filtering
- Retrieval accuracy reaches **92%**, improving **35%** over single-mode retrieval
## Metrics
We evaluate MemoryBear across multiple datasets covering different types of tasks, comparing its performance with other memory-enabled systems. The evaluation metrics include F1 score (F1), BLEU-1 (B1), and LLM-as-a-Judge score (J)-where higher values indicate better performance. MemoryBear achieves state-of-the-art results across all task categories:
In single-hop scenarios, MemoryBear leads in precision, answer matching quality, and task specificity.
In multi-hop reasoning, it demonstrates stronger information coherence and higher reasoning accuracy.
In open generalization tasks, it exhibits superior capability in handling diverse, unbounded information and maintaining high-quality generalization.
In temporal reasoning tasks, it excels at aligning and processing time-sensitive information.
Across the core metrics of all four task types, MemoryBear consistently outperforms other competing systems in the industry, including Mem O, Zep, and LangMem, demonstrating significantly stronger overall performance.
### Memory Forgetting Engine
<img width="2256" height="890" alt="image" src="https://github.com/user-attachments/assets/5ff86c1f-53ac-4816-976d-95b48a4a10c0" />
MemoryBear's vector-based knowledge memory (non-graph version) achieves substantial improvements in retrieval efficiency while maintaining high accuracy. Its overall accuracy surpasses the best existing full-text retrieval methods (72.90 ± 0.19%). More importantly, it maintains low latency across critical metrics-including Search Latency and Total Latency at both p50 and p95-demonstrating the characteristics of higher performance with greater latency efficiency. This effectively resolves the common bottleneck in full-text retrieval systems, where high accuracy typically comes at the cost of significantly increased latency.
Inspired by the brain's **synaptic pruning** mechanism, using a dual-dimension model of memory strength and time decay:
<img width="2248" height="498" alt="image" src="https://github.com/user-attachments/assets/2759ea19-0b71-4082-8366-e8023e3b28fe" />
MemoryBear further unlocks its potential in tasks requiring complex reasoning and relationship awareness through the integration of a knowledge-graph architecture. Although graph traversal and reasoning introduce a slight retrieval overhead, this version effectively keeps latency within an efficient range by optimizing graph-query strategies and decision flows. More importantly, the graph-based MemoryBear pushes overall accuracy to a new benchmark (75.00 ± 0.20%). While maintaining high accuracy, it delivers performance metrics that significantly surpass all other methods, demonstrating the decisive advantage of structured memory systems.
- Each knowledge item is assigned an initial memory strength, updated dynamically by usage frequency and association activity
- When strength falls below threshold, knowledge enters a **dormancy → decay → clearance** three-stage lifecycle
- Redundant knowledge maintained below **8%**, reducing waste by over **60%** compared to systems without forgetting
<img width="2238" height="342" alt="image" src="https://github.com/user-attachments/assets/c928e094-45a2-414b-831a-6990b711ed07" />
### Self-Reflection Engine
# MemoryBear Installation Guide
## 1. Prerequisites
Scheduled daily reflection process, mimicking human review and retrospection:
### 1.1 Environment Requirements
- **Consistency checks**: Detects logical conflicts across related knowledge, flags suspicious records for human review
- **Value assessment**: Evaluates invocation frequency and association contribution; reinforces high-value knowledge, accelerates decay of low-value knowledge
- **Association optimization**: Adjusts relationship weights based on recent usage, strengthening high-frequency association paths
* Node.js 20.19+ or 22.12+- Required for running the frontend
### FastAPI Service Layer
* Python 3.12- Backend runtime environment
Unified service architecture exposing two API surfaces:
* PostgreSQL 13+- Primary relational database
| API Type | Path Prefix | Auth | Purpose |
|----------|-------------|------|---------|
| Management API | `/api` | JWT | System config, permissions, log queries |
| Service API | `/v1` | API Key | Knowledge extraction, graph ops, search, forgetting control |
* Neo4j 4.4+- Graph database (used for storing the knowledge graph)
- Average response latency below **50ms**, single instance sustaining **1000 QPS**
- Auto-generated Swagger documentation
- Docker-ready, compatible with enterprise microservice ecosystems (CRM, OA, R&D management)
* Redis 6.0+- Cache layer and message queue
---
## 2. Getting the Project
## Architecture
### 1. Download Method
<img src="https://github.com/user-attachments/assets/650e3d02-a8a1-4550-9fce-dceb38e9542d" alt="MemoryBear System Architecture" width="100%"/>
Clone via Git (recommended):
**Celery Three-Queue Async Architecture:**
| Queue | Worker Type | Concurrency | Purpose |
|-------|-------------|-------------|---------|
| `memory_tasks` | threads | 100 | Memory read/write (asyncio-friendly) |
| `document_tasks` | prefork | 4 | Document parsing (CPU-bound) |
| `periodic_tasks` | prefork | 2 | Scheduled tasks, reflection engine |
---
## Benchmarks
Evaluation metrics include F1 score (F1), BLEU-1 (B1), and LLM-as-a-Judge score (J) — higher values indicate better performance.
MemoryBear consistently outperforms competing systems including Mem0, Zep, and LangMem across all four task categories:
<img width="2256" height="890" alt="Benchmark Results" src="https://github.com/user-attachments/assets/163ea5b5-b51d-4941-9f6c-7ee80977cdbc" />
**Vector version (non-graph)**: Achieves substantially improved retrieval efficiency while maintaining high accuracy. Overall accuracy surpasses the best existing full-text retrieval methods (72.90 ± 0.19%), while maintaining low latency at both p50 and p95 for Search Latency and Total Latency.
<img width="2248" height="498" alt="Vector Version Metrics" src="https://github.com/user-attachments/assets/5e5dae2c-1dde-4f69-88ca-95a9b665b5b2" />
**Graph version**: Integrating the knowledge graph architecture pushes overall accuracy to a new benchmark (**75.00 ± 0.20%**), delivering performance metrics that significantly surpass all other methods.
<img width="2238" height="342" alt="Graph Version Metrics" src="https://github.com/user-attachments/assets/b1eb1c05-da9b-4074-9249-7a9bbb40e9d2" />
---
## Quick Start
### Docker Compose (Recommended)
**Prerequisites**: [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed.
```bash
# 1. Clone the repository
git clone https://github.com/SuanmoSuanyangTechnology/MemoryBear.git
cd MemoryBear/api
# 2. Start base services (PostgreSQL / Neo4j / Redis / Elasticsearch)
# Pull and start these images via Docker Desktop first (see Installation section 3.2)
# 3. Configure environment variables
cp env.example .env
# Edit .env with your database connections and LLM API keys
# 4. Initialize the database
pip install uv && uv sync
alembic upgrade head
# 5. Start API + Celery Workers + Beat scheduler
docker-compose up -d
# 6. Initialize the system and get the admin account
curl -X POST http://127.0.0.1:8002/api/setup
```
> **Note**: `docker-compose.yml` includes the API service and Celery Workers only. Base services (PostgreSQL, Neo4j, Redis, Elasticsearch) must be started separately.
>
> **Port info**: Docker Compose defaults to port `8002`; manual startup defaults to port `8000`. The installation guide below uses manual startup (`8000`) as the example.
After startup:
- API docs: http://localhost:8002/docs
- Frontend: http://localhost:3000 (after starting the web app)
**Default admin credentials:**
- Account: `admin@example.com`
- Password: `admin_password`
### Manual Start
> Quick commands below — see [Installation](#installation) for detailed steps.
```bash
# Backend
cd api
pip install uv && uv sync
alembic upgrade head
uv run -m app.main
# Frontend (new terminal)
cd web
npm install && npm run dev
```
---
## Installation
### 1. Environment Requirements
| Component | Version | Purpose |
|-----------|---------|---------|
| Python | 3.12+ | Backend runtime |
| Node.js | 20.19+ or 22.12+ | Frontend runtime |
| PostgreSQL | 13+ | Primary database |
| Neo4j | 4.4+ | Knowledge graph storage |
| Redis | 6.0+ | Cache and message queue |
| Elasticsearch | 8.x | Hybrid search engine |
### 2. Get the Project
```bash
```plain&#x20;text
git clone https://github.com/SuanmoSuanyangTechnology/MemoryBear.git
```
<img src="https://github.com/SuanmoSuanyangTechnology/MemoryBear/releases/download/assets-v1.0/assets__directory-structure.svg" alt="Directory Structure" width="100%"/>
### 2. Directory Structure Explanation
### 3. Backend API Service
<img width="5238" height="1626" alt="diagram" src="https://github.com/user-attachments/assets/416d6079-3f34-40c3-9bcf-8760d186741a" />
#### 3.1 Install Python Dependencies
```bash
# Install uv package manager
## Installation Steps
### 1. Start the Backend API Service
#### 1.1 Install Python Dependencies
```python
# 0. Install the dependency management tool: uv
pip install uv
# Switch to the API directory
# 1. Switch to the API directory
cd api
# Install dependencies
uv sync
# 2. Install dependencies
uv sync
# 3. Activate the Virtual Environment (Windows)
.venv\Scripts\Activate.ps1 # run inside /api directory
api\.venv\Scripts\activate # run inside project root directory
.venv\Scripts\activate.bat # run inside /api directory
# Activate virtual environment
# Windows (PowerShell, inside /api)
.venv\Scripts\Activate.ps1
# Windows (cmd, inside /api)
.venv\Scripts\activate.bat
# macOS / Linux
source .venv/bin/activate
```
#### 3.2 Install Base Services (Docker Images)
#### 1.2 Install Required Base Services (Docker Images)
Download [Docker Desktop](https://www.docker.com/products/docker-desktop/) and pull the required images.
Use Docker Desktop to install the necessary service images.
**PostgreSQL** — search → select → pull
* **Docker Desktop download page:** &#x68;ttps://www.docker.com/products/docker-desktop/
<img width="1280" height="731" alt="PostgreSQL Pull" src="https://github.com/user-attachments/assets/96272efe-50ca-4a32-9686-5f23bc3f6c93" />
* **PostgreSQL**
<img width="1280" height="731" alt="PostgreSQL Container" src="https://github.com/user-attachments/assets/074ea9da-9a3d-401b-b14b-89b81e05487e" />
**Pull the Image**
<img width="1280" height="731" alt="PostgreSQL Running" src="https://github.com/user-attachments/assets/a14744cd-9350-4a2f-87dd-6105b072487d" />
search-select-pull
**Neo4j** — pull the same way. When creating the container, map two required ports and set an initial password:
- `7474`: Neo4j Browser
- `7687`: Bolt protocol
<img width="1280" height="731" alt="image-9" src="https://github.com/user-attachments/assets/0609eb5f-e259-4f24-8a7b-e354da6bae4d" />
<img width="1280" height="731" alt="Neo4j Container" src="https://github.com/user-attachments/assets/881dca96-aec0-4d43-82d0-bb0402eadaf8" />
<img width="1280" height="731" alt="Neo4j Running" src="https://github.com/user-attachments/assets/87423c90-22e8-44a9-a00a-df5d4dce4909" />
**Create the Container**
**Redis** — same steps as above.
<img width="1280" height="731" alt="image-8" src="https://github.com/user-attachments/assets/d57b3206-1df1-42a4-80fd-e71f37201a25" />
**Elasticsearch**
Pull the Elasticsearch 8.x image and create a container, mapping ports `9200` (HTTP API) and `9300` (cluster communication). For initial setup, disable security to simplify configuration:
**Service Started Successfully**
```bash
docker run -d --name elasticsearch \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=false" \
elasticsearch:8.15.0
```
<img width="1280" height="731" alt="image" src="https://github.com/user-attachments/assets/76e04c54-7a36-46ec-a68e-241ad268e427" />
#### 3.3 Configure Environment Variables
```bash
cp env.example .env
```
* **Neo4j**
Fill in the core configuration in `.env`:
**Pull the Image** from Docker Desktop, the same way as with PostgreSQL.
**Create the Neo4j Container** ensure that you map **the two required ports** 7474 - Neo4j Browser, 7687 - Bolt protocol. Additionally, you must set an initial password for the Neo4j database during container creation.
<img width="1280" height="731" alt="image-1" src="https://github.com/user-attachments/assets/6bfb0c27-74e8-45f7-b381-189325d516bd" />
**Service Started Successfully**
<img width="1280" height="731" alt="image-2" src="https://github.com/user-attachments/assets/0d28b4fa-e8ed-4c05-8983-7a47f0a892d1" />
* **Redis**
The same as above
#### 1.3 Configure environment variables
Copy env.example as.env and fill in the configuration
```bash
# Neo4j Graph Database
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-password
# Neo4j Browser Access URL (optional documentation)
# PostgreSQL Database
DB_HOST=127.0.0.1
@@ -309,165 +220,131 @@ DB_USER=postgres
DB_PASSWORD=your-password
DB_NAME=redbear-mem
# Set to true on first startup to auto-migrate the database
DB_AUTO_UPGRADE=true
# Database Migration Configuration
# Set to true to automatically upgrade database schema on startup
DB_AUTO_UPGRADE=true # For the first startup, keep this as true to create the schema in an empty database.
# Redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_DB=1
REDIS_DB=1
# Celery
# Celery (Using Redis as broker)
REDIS_DB_CELERY_BROKER=1
REDIS_DB_CELERY_BACKEND=2
# Elasticsearch
ELASTICSEARCH_HOST=127.0.0.1
ELASTICSEARCH_PORT=9200
# JWT Secret Key (generate with: openssl rand -hex 32)
# JWT Secret Key (Formation method: openssl rand -hex 32)
SECRET_KEY=your-secret-key-here
```
#### 3.4 Initialize the PostgreSQL Database
#### 1.4 Initialize the PostgreSQL Database
Verify the database connection in `alembic.ini`:
MemoryBear uses Alembic migration files included in the project to create the required table structures in a newly created, empty PostgreSQL database.
```ini
**(1) Configure the Database Connection**
Ensure that the sqlalchemy.url value in the project's alembic.ini file points to your empty PostgreSQL database. Example format:
```bash
sqlalchemy.url = postgresql://<username>:<password>@<host>:<port>/<database_name>
```
Apply all migrations to create the full schema:
Also verify that target_metadata in migrations/env.py is correctly linked to the ORM model's metadata object.
**(2) Apply the Migration Files**
Run the following command inside the API directory. Alembic will automatically detect the empty database and apply all outstanding migrations to create the full schema:
```bash
alembic upgrade head
```
<img width="1076" height="341" alt="Alembic Migration" src="https://github.com/user-attachments/assets/6970a8e6-712b-4f49-937a-f5870a2d1a2a" />
<img width="1076" height="341" alt="image-3" src="https://github.com/user-attachments/assets/9edda79d-4637-46e3-bee3-2eec39975d59" />
<img width="1280" height="680" alt="Database Tables" src="https://github.com/user-attachments/assets/8bbec421-de0c-472b-a7ce-8b89cc1e2efd" />
#### 3.5 Start the API Service
Use Navicat to inspect the database tables created by the Alembic migration process.
```bash
<img width="1280" height="680" alt="image-4" src="https://github.com/user-attachments/assets/aa5c1d98-bdc3-4d25-acb2-5c8cf6ecd3f5" />
#### Start the API Service
```python
uv run -m app.main
```
Access API documentation at http://localhost:8000/docs
Access the API documentation at http://localhost:8000/docs
<img width="1280" height="675" alt="API Docs" src="https://github.com/user-attachments/assets/6d1c71b7-9ee8-4f80-9bed-19c410d6e85f" />
<img width="1280" height="675" alt="image-5" src="https://github.com/user-attachments/assets/68fa62b4-2c4f-4cf0-896c-41d59aa7d712" />
#### 3.6 Start Celery Workers (Optional, for async tasks)
```bash
# Memory worker (thread pool, asyncio-friendly, high concurrency)
celery -A app.celery_worker.celery_app worker --loglevel=info --pool=threads --concurrency=100 --queues=memory_tasks
### 2. Start the Frontend Web Application
# Document worker (prefork, CPU-bound parsing)
celery -A app.celery_worker.celery_app worker --loglevel=info --pool=prefork --concurrency=4 --queues=document_tasks
#### 2.1 Install Dependencies
# Periodic worker (reflection engine, scheduled tasks)
celery -A app.celery_worker.celery_app worker --loglevel=info --pool=prefork --concurrency=2 --queues=periodic_tasks
# Beat scheduler
celery -A app.celery_worker.celery_app beat --loglevel=info
```
### 4. Frontend Web Application
#### 4.1 Install Dependencies
```bash
```python
# Switch to the web directory
cd web
# Install dependencies
npm install
```
#### 4.2 Update API Proxy Configuration
#### 2.2 Update the API Proxy Configuration
Edit `web/vite.config.ts`:
Edit web/vite.config.ts and update the proxy target to point to your backend API service:
```typescript
```python
proxy: {
'/api': {
target: 'http://127.0.0.1:8000', // Windows: 127.0.0.1 | macOS: 0.0.0.0
target: 'http://127.0.0.1:8000', // Change to the backend address, windows users 127.0.0.1 macOS users 0.0.0.0
changeOrigin: true,
},
}
```
#### 4.3 Start the Frontend Service
#### 2.3 Start the Frontend Service
```bash
```python
# Start the web service
npm run dev
```
<img width="935" height="311" alt="Frontend Start" src="https://github.com/user-attachments/assets/8b08fc46-01d0-458b-ab4d-f5ac04bc2510" />
<img width="1280" height="652" alt="Frontend UI" src="https://github.com/user-attachments/assets/542dbee3-8cd4-4b16-a8e5-36f8d6153820" />
### 5. Initialize the System
```bash
# Initialize the database and obtain the super admin account
curl -X POST http://127.0.0.1:8000/api/setup
```
**Super admin credentials:**
- Account: `admin@example.com`
- Password: `admin_password`
### 6. Full Startup Checklist
```
Step 1 Clone the repository
Step 2 Start base services (PostgreSQL / Neo4j / Redis / Elasticsearch)
Step 3 Configure .env environment variables
Step 4 Run alembic upgrade head to initialize the database
Step 5 uv run -m app.main to start the backend API
Step 6 npm run dev to start the frontend
Step 7 curl -X POST http://127.0.0.1:8000/api/setup to initialize the system
Step 8 Log in to the frontend with the admin account
```
---
After the service starts, the console will output the URL for accessing the frontend interface.
## Tech Stack
<img width="935" height="311" alt="image-6" src="https://github.com/user-attachments/assets/cba1074a-440c-4866-8a94-7b6d1c911a93" />
| Layer | Technology |
|-------|------------|
| Backend Framework | FastAPI + Uvicorn |
| Async Tasks | Celery (3 queues: memory / document / periodic) |
| Primary Database | PostgreSQL 13+ |
| Graph Database | Neo4j 4.4+ |
| Search Engine | Elasticsearch 8.x (keyword + semantic vector hybrid) |
| Cache / Queue | Redis 6.0+ |
| ORM | SQLAlchemy 2.0 + Alembic |
| LLM Integration | LangChain / OpenAI / DashScope / AWS Bedrock |
| MCP Integration | fastmcp + langchain-mcp-adapters |
| Frontend Framework | React 18 + TypeScript + Vite |
| UI Components | Ant Design 5.x |
| Graph Visualization | AntV X6 + ECharts + D3.js |
| Package Manager | uv (backend) / npm (frontend) |
---
<img width="1280" height="652" alt="image-7" src="https://github.com/user-attachments/assets/a719dc0a-cbdd-4ba1-9b21-123d5eac32eb" />
## 4. User Guide
step1: Retrieve the Project.
step2: Start the Backend API Service.
step3: Start the Frontend Web Application.
step4: Enter curl.exe -X POST http://127.0.0.1:8000/api/setup in the terminal to access the interface, initialize the database, and obtain the super administrator account.
step5: Super Administrator Credentials
Account: admin@example.com
Password: admin_password
step6: Log In to the Frontend Interface.
## License
This project is licensed under the [Apache License 2.0](LICENSE).
---
This project is licensed under the Apache License 2.0. For details, see the LICENSE file.
## Community & Support
- **Bug Reports & Feature Requests**: [GitHub Issues](https://github.com/SuanmoSuanyangTechnology/MemoryBear/issues)
- **Contribute**: Please read our [Contributing Guide](CONTRIBUTING.md). Submit [Pull Requests](https://github.com/SuanmoSuanyangTechnology/MemoryBear/pulls) on a feature branch following Conventional Commits format
- **Discussions**: [GitHub Discussions](https://github.com/SuanmoSuanyangTechnology/MemoryBear/discussions)
- **WeChat Community**: Scan the QR code below to join our WeChat group
Join our community to ask questions, share your work, and connect with fellow developers.
![WeChat QR](https://github.com/user-attachments/assets/8c81885c-4134-40d5-96e2-7f78cc082dc6)
- **Star History**:
[![Star History Chart](https://api.star-history.com/svg?repos=SuanmoSuanyangTechnology/MemoryBear&type=Date)](https://star-history.com/#SuanmoSuanyangTechnology/MemoryBear&Date)
- **Contact**: tianyou_hubm@redbearai.com
- **GitHub Issues**: Report bugs, request features, or track known issues via [GitHub Issues](https://github.com/SuanmoSuanyangTechnology/MemoryBear/issues).
- **GitHub Pull Requests**: Contribute code improvements or fixes through [Pull Requests](https://github.com/SuanmoSuanyangTechnology/MemoryBear/pulls).
- **GitHub Discussions**: Ask questions, share ideas, and engage with the community in [GitHub Discussions](https://github.com/SuanmoSuanyangTechnology/MemoryBear/discussions).
- **WeChat**: Scan the QR code below to join our WeChat community group.
- ![wecom-temp-114020-47fe87a75da439f09f5dc93a01593046](https://github.com/user-attachments/assets/8c81885c-4134-40d5-96e2-7f78cc082dc6)
- **Contact**: If you are interested in contributing or collaborating, feel free to reach out at tianyou_hubm@redbearai.com

View File

@@ -1,311 +1,192 @@
<img width="2346" height="1310" alt="MemoryBear Hero Banner" src="https://github.com/user-attachments/assets/77f3e31a-3a20-4f17-8d2d-d88d85acf19e" />
<img width="2346" height="1310" alt="image" src="https://github.com/user-attachments/assets/bc73a64d-cd1e-4d22-be3e-04ce40423a20" />
<div align="center">
# MemoryBear — 让 AI 拥有如同人类一样的记忆
**新一代 AI 记忆管理系统 · 感知 · 提炼 · 关联 · 遗忘**
# MemoryBear 让AI拥有如同人类一样的记忆
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/Python-3.12+-green?logo=python&logoColor=white)](https://www.python.org/)
[![FastAPI](https://img.shields.io/badge/FastAPI-0.100+-teal?logo=fastapi&logoColor=white)](https://fastapi.tiangolo.com/)
[![Neo4j](https://img.shields.io/badge/Neo4j-4.4+-blue?logo=neo4j&logoColor=white)](https://neo4j.com/)
[![Gitee Sync](https://img.shields.io/github/actions/workflow/status/SuanmoSuanyangTechnology/MemoryBear/sync-to-gitee.yml?label=Gitee%20Sync&logo=gitee&logoColor=white)](https://github.com/SuanmoSuanyangTechnology/MemoryBear/actions/workflows/sync-to-gitee.yml)
中文 | [English](./README.md)
[快速开始](#快速开始) · [安装教程](#安装教程) · [核心特性](#核心特性) · [架构总览](#架构总览) · [实验室指标](#实验室指标) · [论文](#论文)
</div>
---
### [安装教程](#memorybear安装教程)
### 论文:<a href="https://memorybear.ai/pdf/memoryBear" target="_blank" rel="noopener noreferrer">《Memory Bear AI: 从记忆到认知的突破》</a>
## 项目简介
MemoryBear是红熊AI自主研发的新一代AI记忆系统其核心突破在于跳出传统知识“静态存储”的局限以生物大脑认知机制为原型构建了具备“感知-提炼-关联-遗忘”全生命周期的智能知识处理体系。该系统致力于让机器摆脱“信息堆砌”的困境,实现对知识的深度理解与自主进化,成为人类认知协作的核心伙伴。
MemoryBear 是红熊 AI 自主研发的新一代 AI 记忆系统,核心突破在于跳出传统知识"静态存储"的局限,以生物大脑认知机制为原型,构建了具备**感知 → 提炼 → 关联 → 遗忘**全生命周期的智能知识处理体系。
## MemoryBear是从解决这些问题来的
### 一、单模型知识遗忘的核心原因</br>
上下文窗口限制:主流大模型上下文窗口通常为 8k-32k tokens长对话中早期信息会被 “挤出”,导致后续回复脱离历史语境:如用户第 1 轮说 “我对海鲜过敏”,第 5 轮问 “推荐今晚的菜品” 时模型可能遗忘过敏信息。</br>
静态知识库与动态数据割裂:大模型训练时的静态知识库如截止 2023 年数据,无法实时吸收用户对话中的个性化信息如用户偏好、历史订单,需依赖外部记忆模块补充。</br>
模型注意力机制缺陷Transformer 的自注意力对长距离依赖的捕捉能力随序列长度下降,出现 “近因效应”更关注最新输入,忽略早期关键信息。</br>
与传统记忆管理工具将知识视为"待检索的静态数据"不同MemoryBear 通过复刻大脑海马体的记忆编码、新皮层的知识固化及突触修剪的遗忘机制,让知识具备动态演化的"生命特征",将 AI 与用户的交互关系从**被动查询**升级为**主动辅助认知**。
### 二、多 Agent 协作的记忆断层问题</br>
Agent 数据孤岛:不同 Agent如咨询 Agent、售后 Agent、推荐 Agent各自维护独立记忆未建立跨模块的共享机制导致用户重复提供信息如用户向咨询 Agent 说明地址后,售后 Agent 仍需再次询问。</br>
对话状态不一致:多轮交互中 Agent 切换时,对话状态如用户当前意图、历史问题标签传递不完整,引发服务断层如用户从 “产品咨询” 转 “投诉” 时,新 Agent 未继承前期投诉细节。</br>
决策冲突:不同 Agent 基于局部记忆做出的响应可能矛盾如推荐 Agent 推荐用户过敏的产品,因未获取健康禁忌的历史记录。</br>
## 论文
### 三、模型推理过程中的 “语义歧义” 引发理解偏差</br>
用户对话中的个性化信息如行业术语、口语化表达、上下文指代未被准确编码,导致模型对记忆内容的语义解析失真,比如对用户历史对话中的模糊表述如 “上次说的那个方案”无法准确定位具体内容。</br>
多语言、方言场景中,跨语种记忆关联失效如用户混用中英描述需求时,模型无法整合多语言信息。</br>
典型案例:用户说之前客服说可以‘加急处理’现在进度如何?模型因未记录 “加急” 对应的具体服务等级,回复笼统模糊。</br>
| 论文 | 描述 |
|------|------|
| 📄 [Memory Bear AI: A Breakthrough from Memory to Cognition](https://memorybear.ai/pdf/memoryBear) | MemoryBear 核心技术报告 |
| 📄 [Memory Bear AI Memory Science Engine for Multimodal Affective Intelligence](https://arxiv.org/abs/2603.22306) | 多模态情感智能记忆科学引擎技术报告 |
| 📄 [A-MBER: Affective Memory Benchmark for Emotion Recognition](https://arxiv.org/abs/2604.07017) | 情感记忆基准测试集 |
## MemoryBear核心定位
与传统记忆管理工具将知识视为“待检索的静态数据”不同MemoryBear以“模拟人类大脑知识处理逻辑”为核心目标构建了从知识摄入到智能输出的闭环体系。系统通过复刻大脑海马体的记忆编码、新皮层的知识固化及突触修剪的遗忘机制让知识具备动态演化的“生命特征”彻底重构了知识与使用者之间的交互关系——从“被动查询”升级为“主动辅助记忆认知”
## 为什么需要 MemoryBear
## MemoryBear核心哲学
MemoryBear的设计哲学源于对人类认知本质的深刻洞察知识的价值不在于存量积累而在于动态流转中的价值升华。传统系统中知识一旦存储便陷入“静止状态”难以形成跨领域关联更无法主动适配使用者的认知需求而MemoryBear坚信只有让知识经历“原始信息提炼为结构化规则、孤立规则关联为知识网络、冗余信息智能遗忘”的完整过程才能实现从“信息记忆”到“认知理解”的跨越最终涌现出真正的智能。
### 单模型的知识遗忘
## MemoryBear核心特性
MemoryBear作为模仿生物大脑认知过程的智能记忆管理系统其核心特性围绕“记忆知识全生命周期管理”与“智能认知进化”两大维度构建覆盖记忆从摄入提炼到存储检索、动态优化的完整链路同时通过标准化服务架构实现高效集成与调用。
- **上下文窗口限制**:主流大模型上下文窗口通常为 8k32k tokens长对话中早期信息会被"挤出",导致后续回复脱离历史语境
- **静态知识库割裂**:训练数据是静态快照,无法实时吸收用户对话中的个性化信息(偏好、历史记录等)
- **注意力近因效应**Transformer 自注意力对长距离依赖的捕捉能力随序列长度下降,过度关注最新输入而忽略早期关键信息
### 一、记忆萃取引擎:多维度结构化提炼,夯实认知基础</br>
记忆萃取是MemoryBear实现“认知化管理”的起点区别于传统数据提取的“机械转换”其核心优势在于对非结构化信息的“语义级解析”与“多格式标准化输出”精准适配后续图谱构建与智能检索需求。具体能力包括</br>
多类型信息精准解析:可自动识别并提取文本中的陈述句核心信息,剥离冗余修饰成分,保留“主体-行为-对象”核心逻辑同时精准抽取三元组数据如“MemoryBear-核心功能-知识萃取”),为图谱存储提供基础数据单元,保障知识关联的准确性。</br>
时序信息锚定:针对含有时效性的知识(如事件记录、政策文件、实验数据),自动提取并标记时间戳信息,支持“时间维度”的知识追溯与关联,解决传统知识管理中“时序混乱”导致的认知偏差问题。</br>
智能剪枝生成:基于上下文语义理解,生成“关键信息全覆盖+逻辑连贯性强”的摘要内容支持自定义摘要长度50-500字与侧重点如技术型、业务型适配不同场景的知识快速获取需求。例如对10页技术文档处理时可在3秒内生成含核心参数、实现逻辑与应用场景的精简摘要。</br>
### 多 Agent 协作的记忆断层
### 二、图谱存储对接Neo4j构建可视化知识网络</br>
存储层采用“图数据库优先”的架构设计通过对接业界成熟的Neo4j图数据库实现知识实体与关系的高效管理突破传统关系型数据库“关联弱、查询繁”的局限契合生物大脑“神经元关联”的认知模式。</br>
该特性核心价值体现在一是支持海量实体与多元关系的灵活存储可管理百万级知识实体及千万级关联关系涵盖“上下位、因果、时序、逻辑”等12种核心关系类型适配多领域知识场景二是与知识萃取模块深度联动萃取的三元组数据可直接同步至Neo4j自动构建初始知识图谱无需人工二次映射三是支持图谱可视化交互用户可直观查看实体关联路径手动调整关系权重实现“机器构建+人工优化”的协同管理。</br>
- **数据孤岛**:不同 Agent咨询、售后、推荐各自维护独立记忆用户需重复提供相同信息
- **对话状态不一致**Agent 切换时,用户意图、历史问题标签传递不完整,引发服务断层
- **决策冲突**:基于局部记忆的 Agent 可能给出矛盾响应(如推荐用户过敏的产品)
### 三、混合搜索:关键词+语义向量,兼顾精准与智能</br>
为解决传统搜索“要么精准但僵化要么模糊但失准”的痛点MemoryBear采用“关键词检索+语义向量检索”的混合搜索架构,实现“精准匹配”与“意图理解”的双重目标。</br>
其中关键词检索基于Lucene引擎优化针对知识中的核心实体、关键参数等结构化信息实现毫秒级精准定位保障“明确需求”下的高效检索语义向量检索则通过BERT模型对查询语句进行语义编码将其转化为高维向量后与知识库中的向量数据比对可识别同义词、近义词及隐含意图例如用户查询“如何优化记忆衰减效率”时系统可关联到“遗忘机制参数调整”“记忆强度评估方法”等相关知识。两种检索方式智能融合先通过语义检索扩大候选范围再通过关键词检索精准筛选使检索准确率提升至92%较单一检索方式平均提升35%。</br>
### 语义歧义导致的理解偏差
### 四、记忆遗忘引擎:基于强度与时效的动态衰减,模拟生物记忆特性</br>
遗忘是MemoryBear区别于传统静态知识管理工具的核心特性之一其灵感源于生物大脑“突触修剪”机制通过“记忆强度+时效”双维度模型实现知识的逐步衰减,避免冗余知识占用资源,保障核心知识的“认知优先级”。</br>
具体实现逻辑为:系统为每条知识分配“初始记忆强度”(由萃取质量、人工标注重要性决定),并结合“调用频率、关联活跃度”实时更新强度值;同时设定“时效衰减周期”,根据知识类型(如核心规则、临时数据)差异化配置衰减速率。当知识强度低于阈值且超过设定时效后,将进入“休眠-衰减-清除”三阶段流程休眠阶段保留数据但降低检索优先级衰减阶段逐步压缩存储体积清除阶段则彻底删除并备份至冷存储。该机制使系统冗余知识占比控制在8%以内较传统无遗忘机制系统降低60%以上。</br>
- 行业术语、口语化表达、上下文指代未被准确编码,导致模型对记忆内容的语义解析失真
- 多语言混用场景中,跨语种记忆关联失效
### 五、自我反思引擎:定期回顾优化,实现记忆自主进化</br>
自我反思机制是MemoryBear实现“智能升级”的关键通过定期对已有记忆进行回顾、校验与优化模拟人类“复盘总结”的认知行为持续提升知识体系的准确性与有效性。</br>
系统默认每日凌晨触发自动反思流程,核心动作包括:一是“一致性校验”,对比关联知识间的逻辑冲突(如同一实体的矛盾属性),标记可疑知识并推送人工审核;二是“价值评估”,统计知识的调用频次、关联贡献度,将高价值知识强化记忆强度,低价值知识加速衰减;三是“关联优化”,基于近期检索与使用行为,调整知识间的关联权重,强化高频关联路径。此外,支持人工触发专项反思(如新增核心知识后),并提供反思报告可视化展示优化结果,实现“自主进化+人工监督”的双重保障。</br>
<img width="2294" height="1154" alt="Why MemoryBear" src="https://github.com/user-attachments/assets/62453bc9-8422-4480-9645-e2abb57f0204" />
### 六、FastAPI服务标准化API输出实现高效集成与管理</br>
为保障系统与外部业务场景的高效对接MemoryBear采用FastAPI构建统一服务架构实现管理端与服务端API的集中暴露具备“高性能、易集成、强规范”的核心优势。服务端API涵盖知识萃取、图谱操作、搜索查询、遗忘控制等全功能模块支持JSON/XML多格式数据交互响应延迟平均低于50ms单实例可支撑1000QPS并发请求管理端API则提供系统配置、权限管理、日志查询等运维功能支持通过API实现批量知识导入导出、反思周期调整等操作。同时系统自动生成Swagger API文档包含接口参数说明、请求示例与返回格式定义开发者可快速完成集成调试。该架构已适配企业级微服务体系支持Docker容器化部署可灵活对接CRM、OA、研发管理等各类业务系统。</br>
---
## 核心特性
<img width="2294" height="1154" alt="MemoryBear Core Features" src="https://github.com/user-attachments/assets/e90153d3-378f-47e8-a367-622121621566" />
### 记忆萃取引擎
从非结构化对话和文档中进行**语义级解析**,精准提取:
- **陈述句核心信息**:剥离冗余修饰,保留"主体-行为-对象"核心逻辑
- **三元组数据**:自动抽取实体关系(如 `MemoryBear → 核心功能 → 知识萃取`),为图谱存储提供基础数据单元
- **时序信息锚定**:自动提取并标记时间戳,支持时间维度的知识追溯
- **智能摘要生成**支持自定义摘要长度50500 字与侧重点10 页技术文档 3 秒内生成精简摘要
### 图谱存储Neo4j
采用**图数据库优先**架构,对接 Neo4j突破传统关系型数据库"关联弱、查询繁"的局限:
- 支持百万级知识实体及千万级关联关系
- 涵盖上下位、因果、时序、逻辑等 12 种核心关系类型
- 萃取的三元组直接同步至 Neo4j自动构建初始知识图谱
- 支持图谱可视化交互,实现"机器构建 + 人工优化"协同管理
### 混合搜索
**关键词检索 + 语义向量检索**双引擎融合:
- 关键词检索基于 Elasticsearch毫秒级精准定位结构化信息
- 语义向量检索通过 BERT 模型编码,识别同义词、近义词及隐含意图
- 先语义扩大候选范围,再关键词精准筛选,检索准确率达 **92%**,较单一方式提升 **35%**
### 记忆遗忘引擎
灵感源于生物大脑**突触修剪**机制,通过"记忆强度 + 时效"双维度模型实现知识动态衰减:
- 每条知识分配初始记忆强度,结合调用频率和关联活跃度实时更新
- 知识强度低于阈值后进入**休眠 → 衰减 → 清除**三阶段流程
- 系统冗余知识占比控制在 **8%** 以内,较无遗忘机制系统降低 **60%** 以上
### 自我反思引擎
每日定时触发自动反思流程,模拟人类"复盘总结"认知行为:
- **一致性校验**:检测关联知识间的逻辑冲突,标记可疑知识推送人工审核
- **价值评估**:统计调用频次和关联贡献度,高价值知识强化,低价值知识加速衰减
- **关联优化**:基于近期检索行为调整知识间关联权重,强化高频关联路径
### FastAPI 服务层
统一服务架构,暴露两套 API
| API 类型 | 路径前缀 | 认证方式 | 用途 |
|----------|----------|----------|------|
| 管理端 API | `/api` | JWT | 系统配置、权限管理、日志查询 |
| 服务端 API | `/v1` | API Key | 知识萃取、图谱操作、搜索查询、遗忘控制 |
- 平均响应延迟低于 **50ms**,单实例支撑 **1000 QPS** 并发
- 自动生成 Swagger 文档,支持 Docker 容器化部署
- 兼容企业级微服务体系,可对接 CRM、OA、研发管理等业务系统
---
## 架构总览
<img src="https://github.com/user-attachments/assets/bc356ed3-9159-41c5-bd73-125a67e06ced" alt="MemoryBear System Architecture" width="100%"/>
**Celery 三队列异步架构:**
| 队列 | Worker 类型 | 并发 | 用途 |
|------|-------------|------|------|
| `memory_tasks` | threads | 100 | 记忆读写asyncio 友好) |
| `document_tasks` | prefork | 4 | 文档解析CPU 密集) |
| `periodic_tasks` | prefork | 2 | 定时任务、反思引擎 |
---
## MemoryBear架构总览
<img width="2294" height="1154" alt="image" src="https://github.com/user-attachments/assets/3afd3b49-20ea-4847-b9ed-38b646a4ad89" />
</br>
- 记忆萃取引擎Extraction Engine预处理、去重、结构化提取</br>
- 记忆遗忘引擎Forgetting Engine记忆强度模型与衰减策略</br>
- 记忆自我反思引擎Reflection Engine评价与重写记忆</br>
- 检索服务:关键词、语义与混合检索</br>
- Agent 与 MCP提供多工具协作的智能体能力</br>
## 实验室指标
我们采用不同问题的数据集中通过具备记忆功能的系统进行性能对比。评估指标包括F1分数F1、BLEU-1B1以及LLM-as-a-Judge分数J数值越高表示表现越好性能更高。
MemoryBear 在 “单跳场景” 的精准度、结果匹配度与任务特异性表现上,均处于领先,“多跳”更强的信息连贯性与推理准确性,“开放泛化”对多样,无边界信息的处理质量与泛化能力更优,“时序”对时效性信息的匹配与处理表现更出色,四大任务的核心指标中,均优于 行业内的其他海外竞争对手Mem O、Zep、Lang Mem 等现有方法,整体性能更突出。
<img width="2256" height="890" alt="image" src="https://github.com/user-attachments/assets/5ff86c1f-53ac-4816-976d-95b48a4a10c0" />
Memory Bear 基于向量的知识记忆非图谱版本成功在保持高准确性的同时极大地优化了检索效率。该方法在总体准确性上的表现已明显高于现有最高全文检索方法72.90 ± 0.19%)。更重要的是,它在关键的延迟指标(包括 Search Latency 和 Total Latency 的 p50/p95上也保持了较低水平充分体现出 “性能更优且延迟更高效” 的特点,解决了全文检索方法的高准确性伴随的高延迟瓶颈。
<img width="2248" height="498" alt="image" src="https://github.com/user-attachments/assets/2759ea19-0b71-4082-8366-e8023e3b28fe" />
Memory Bear 通过集成知识图谱架构,在需要复杂推理和关系感知的任务上进一步释放了潜力。虽然图谱的遍历和推理可能会引入轻微的检索开销,但该版本通过优化图检索策略和决策流,成功将延迟控制在高效范围。更关键的是,基于图谱的 Memory Bear 将总体准确性推至新的高度75.00 ± 0.20%),在保持准确性的同时,整体指标显著优于其他所有方法,证明了“结构化记忆带来的性能决定性优势”。
<img width="2238" height="342" alt="image" src="https://github.com/user-attachments/assets/c928e094-45a2-414b-831a-6990b711ed07" />
评估指标包括 F1 分数F1、BLEU-1B1以及 LLM-as-a-Judge 分数J数值越高表示性能越好。
# MemoryBear安装教程
## 一、前期准备
MemoryBear 在四大任务类型的核心指标中,均优于行业内竞争对手 Mem0、Zep、LangMem 等现有方法:
### 1.环境要求
<img width="2256" height="890" alt="Benchmark Results" src="https://github.com/user-attachments/assets/163ea5b5-b51d-4941-9f6c-7ee80977cdbc" />
* Node.js 20.19+ 或 22.12+ 前端运行环境
**向量版本(非图谱)**在保持高准确性的同时极大优化了检索效率总体准确性明显高于现有最高全文检索方法72.90 ± 0.19%),且在 Search Latency 和 Total Latency 的 p50/p95 上保持较低水平。
* Python 3.12 后端运行环境
<img width="2248" height="498" alt="Vector Version Metrics" src="https://github.com/user-attachments/assets/5e5dae2c-1dde-4f69-88ca-95a9b665b5b2" />
* PostgreSQL 13+ 主数据库
**图谱版本**:通过集成知识图谱架构,将总体准确性推至新高度(**75.00 ± 0.20%**),在保持准确性的同时整体指标显著优于所有其他方法。
* Neo4j 4.4+ 图数据库(存储知识图谱)
<img width="2238" height="342" alt="Graph Version Metrics" src="https://github.com/user-attachments/assets/b1eb1c05-da9b-4074-9249-7a9bbb40e9d2" />
* Redis 6.0+ 缓存和消息队列
---
## 二、项目获取
## 快速开始
### 1.获取方式
### Docker Compose 一键启动(推荐)
Git克隆(推荐)
**前提条件**:已安装 [Docker Desktop](https://www.docker.com/products/docker-desktop/)。
```bash
# 1. 克隆项目
git clone https://github.com/SuanmoSuanyangTechnology/MemoryBear.git
cd MemoryBear/api
# 2. 启动基础服务PostgreSQL / Neo4j / Redis / Elasticsearch
# 请先通过 Docker Desktop 拉取并启动以下镜像(详见安装教程 3.2 节)
# 3. 配置环境变量
cp env.example .env
# 编辑 .env填写数据库连接信息和 LLM API Key
# 4. 初始化数据库
pip install uv && uv sync
alembic upgrade head
# 5. 启动 API + Celery Workers + Beat 调度器
docker-compose up -d
# 6. 初始化系统,获取超级管理员账号
curl -X POST http://127.0.0.1:8002/api/setup
```
> **注意**`docker-compose.yml` 包含 API 服务和 Celery Workers基础服务PostgreSQL、Neo4j、Redis、Elasticsearch需要单独启动。
>
> **端口说明**Docker Compose 部署默认端口为 `8002`,手动启动默认端口为 `8000`。下文安装教程以手动启动(`8000`)为例。
服务启动后访问:
- API 文档http://localhost:8002/docs
- 管理后台http://localhost:3000启动前端后
**默认管理员账号:**
- 账号:`admin@example.com`
- 密码:`admin_password`
### 手动启动
> 以下为精简命令,详细步骤请参考 [安装教程](#安装教程)。
```bash
# 后端
cd api
pip install uv && uv sync
alembic upgrade head
uv run -m app.main
# 前端(新终端)
cd web
npm install && npm run dev
```
---
## 安装教程
### 一、环境要求
| 组件 | 版本要求 | 用途 |
|------|----------|------|
| Python | 3.12+ | 后端运行环境 |
| Node.js | 20.19+ 或 22.12+ | 前端运行环境 |
| PostgreSQL | 13+ | 主数据库 |
| Neo4j | 4.4+ | 知识图谱存储 |
| Redis | 6.0+ | 缓存与消息队列 |
| Elasticsearch | 8.x | 混合搜索引擎 |
### 二、项目获取
```bash
```plain&#x20;text
git clone https://github.com/SuanmoSuanyangTechnology/MemoryBear.git
```
<img src="https://github.com/SuanmoSuanyangTechnology/MemoryBear/releases/download/assets-v1.0/assets__directory-structure.svg" alt="Directory Structure" width="100%"/>
### 2.目录说明
### 三、后端 API 服务启动
<img width="5238" height="1626" alt="diagram" src="https://github.com/user-attachments/assets/416d6079-3f34-40c3-9bcf-8760d186741a" />
#### 3.1 安装 Python 依赖
```bash
# 安装依赖管理工具 uv
## 三、安装步骤
### 1.后端API服务启动
#### 1.1 安装python依赖
```python
# 0.安装依赖管理工具uv
pip install uv
# 切换API 目录
# 1.终端切换API目录
cd api
# 安装依赖
uv sync
# 2.安装依赖
uv sync
# 3.激活虚拟环境 (Windows)
.venv\Scripts\Activate.ps1 powershell在api目录下
api\.venv\Scripts\activate powershell在根目录下
.venv\Scripts\activate.bat cmd在api目录下
# 激活虚拟环境
# Windows (PowerShell在 api 目录下)
.venv\Scripts\Activate.ps1
# Windows (cmd在 api 目录下)
.venv\Scripts\activate.bat
# macOS / Linux
source .venv/bin/activate
```
#### 3.2 安装基础服务(Docker 镜像)
#### 1.2 安装必备基础服务(docker镜像
使用 Docker Desktop 安装所需镜像:[下载 Docker Desktop](https://www.docker.com/products/docker-desktop/)
使用docker desktop安装所需的docker镜像
**PostgreSQL**
* **docker desktop安装地址**&#x68;ttps://www.docker.com/products/docker-desktop/
拉取镜像search → select → pull
* **PostgreSQL**
<img width="1280" height="731" alt="PostgreSQL Pull" src="https://github.com/user-attachments/assets/96272efe-50ca-4a32-9686-5f23bc3f6c93" />
**拉取镜像**
创建容器:
search——select——pull
<img width="1280" height="731" alt="PostgreSQL Container" src="https://github.com/user-attachments/assets/074ea9da-9a3d-401b-b14b-89b81e05487e" />
<img width="1280" height="731" alt="image-9" src="https://github.com/user-attachments/assets/0609eb5f-e259-4f24-8a7b-e354da6bae4d" />
<img width="1280" height="731" alt="PostgreSQL Running" src="https://github.com/user-attachments/assets/a14744cd-9350-4a2f-87dd-6105b072487d" />
**Neo4j**
**创建容器**
拉取镜像方式同上。创建容器时需映射两个关键端口,并设置初始密码:
- `7474`Neo4j Browser
- `7687`Bolt 协议
<img width="1280" height="731" alt="image-8" src="https://github.com/user-attachments/assets/d57b3206-1df1-42a4-80fd-e71f37201a25" />
<img width="1280" height="731" alt="Neo4j Container" src="https://github.com/user-attachments/assets/881dca96-aec0-4d43-82d0-bb0402eadaf8" />
<img width="1280" height="731" alt="Neo4j Running" src="https://github.com/user-attachments/assets/87423c90-22e8-44a9-a00a-df5d4dce4909" />
**服务启动成功**
**Redis**:同上步骤拉取并创建容器。
<img width="1280" height="731" alt="image" src="https://github.com/user-attachments/assets/76e04c54-7a36-46ec-a68e-241ad268e427" />
**Elasticsearch**
拉取 Elasticsearch 8.x 镜像并创建容器,映射端口 `9200`HTTP API`9300`(集群通信)。首次启动建议关闭安全认证以简化配置:
* **Neo4j**
**拉取镜像**与PostgreSQL一样从docker desktop中拉取镜像
**创建容器**Neo4j 默认需要映射**2 个关键端口**7474 对应 Browser7687 对应 Bolt 协议),同时需设置初始密码
<img width="1280" height="731" alt="image-1" src="https://github.com/user-attachments/assets/6bfb0c27-74e8-45f7-b381-189325d516bd" />
**服务成功启动**
<img width="1280" height="731" alt="image-2" src="https://github.com/user-attachments/assets/0d28b4fa-e8ed-4c05-8983-7a47f0a892d1" />
* **Redis**
同上
#### 1.3 配置环境变量
复制 env.example 为 .env 并填写配置
```bash
docker run -d --name elasticsearch \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=false" \
elasticsearch:8.15.0
```
#### 3.3 配置环境变量
```bash
cp env.example .env
```
编辑 `.env` 填写以下核心配置:
```bash
# Neo4j 图数据库
# Neo4j 图数据库
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-password
# Neo4j Browser访问地址
# PostgreSQL 数据库
DB_HOST=127.0.0.1
@@ -314,165 +195,133 @@ DB_USER=postgres
DB_PASSWORD=your-password
DB_NAME=redbear-mem
# 首次启动设为 true自动迁移数据库
DB_AUTO_UPGRADE=true
# Database Migration Configuration
# Set to true to automatically upgrade database schema on startup
DB_AUTO_UPGRADE=true # 首次启动设为true自动迁移数据库 在空白数据库创建表结构
# Redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_DB=1
REDIS_DB=1
# Celery
# Celery (使用Redis作为broker)
REDIS_DB_CELERY_BROKER=1
REDIS_DB_CELERY_BACKEND=2
# Elasticsearch
ELASTICSEARCH_HOST=127.0.0.1
ELASTICSEARCH_PORT=9200
# JWT 密钥生成方式openssl rand -hex 32
# JWT密钥 (生成方式: openssl rand -hex 32)
SECRET_KEY=your-secret-key-here
```
#### 3.4 初始化 PostgreSQL 数据库
#### 1.4 PostgreSQL数据库建立
确认 `alembic.ini` 中的数据库连接配置:
通过项目中已有的 alembic 数据库迁移文件,为全新创建的空白 PostgreSQL 数据库创建对应的表结构。
```ini
sqlalchemy.url = postgresql://用户名:密码@数据库地址:端口/数据库名
**1配置数据库连接**
确认项目中`alembic.ini`文件的`sqlalchemy.url`配置指向你的空白 PostgreSQL 数据库,格式示例:
```bash
sqlalchemy.url = postgresql://用户名:密码@数据库地址:端口/空白数据库名
```
执行迁移,创建完整表结构:
同时检查 migrations`/env.py`中`target_metadata`是否正确关联到 ORM 模型的`metadata`(确保迁移脚本和模型一致)
**2执行迁移文件**
在API目录执行以下命令alembic 会自动识别空白数据库,并执行所有未应用的迁移脚本,创建完整表结构:
```bash
alembic upgrade head
```
<img width="1076" height="341" alt="Alembic Migration" src="https://github.com/user-attachments/assets/6970a8e6-712b-4f49-937a-f5870a2d1a2a" />
<img width="1076" height="341" alt="image-3" src="https://github.com/user-attachments/assets/9edda79d-4637-46e3-bee3-2eec39975d59" />
<img width="1280" height="680" alt="Database Tables" src="https://github.com/user-attachments/assets/8bbec421-de0c-472b-a7ce-8b89cc1e2efd" />
#### 3.5 启动 API 服务
通过Navicat查看迁移创建的数据库表结构
```bash
<img width="1280" height="680" alt="image-4" src="https://github.com/user-attachments/assets/aa5c1d98-bdc3-4d25-acb2-5c8cf6ecd3f5" />
#### API服务启动
```python
uv run -m app.main
```
访问 API 文档http://localhost:8000/docs
<img width="1280" height="675" alt="API Docs" src="https://github.com/user-attachments/assets/6d1c71b7-9ee8-4f80-9bed-19c410d6e85f" />
<img width="1280" height="675" alt="image-5" src="https://github.com/user-attachments/assets/68fa62b4-2c4f-4cf0-896c-41d59aa7d712" />
#### 3.6 启动 Celery Worker可选用于异步任务
```bash
# 记忆任务 Worker线程池支持高并发 asyncio
celery -A app.celery_worker.celery_app worker --loglevel=info --pool=threads --concurrency=100 --queues=memory_tasks
### 2.前端web应用启动
# 文档解析 Worker进程池CPU 密集型)
celery -A app.celery_worker.celery_app worker --loglevel=info --pool=prefork --concurrency=4 --queues=document_tasks
#### 2.1安装依赖
# 定时任务 Worker反思引擎等
celery -A app.celery_worker.celery_app worker --loglevel=info --pool=prefork --concurrency=2 --queues=periodic_tasks
# Beat 调度器
celery -A app.celery_worker.celery_app beat --loglevel=info
```
### 四、前端 Web 应用启动
#### 4.1 安装依赖
```bash
```python
# 切换web目录下
cd web
# 下载依赖
npm install
```
#### 4.2 修改 API 代理配置
#### 2.2 修改API代理配置
编辑 `web/vite.config.ts`
编辑 web/vite.config.ts,将代理目标改为后端地址
```typescript
```python
proxy: {
'/api': {
target: 'http://127.0.0.1:8000', // Windows 用 127.0.0.1macOS 用 0.0.0.0
target: 'http://127.0.0.1:8000', // 改为后端地址win用户127.0.0.1 mac用户0.0.0.0
changeOrigin: true,
},
}
```
#### 4.3 启动前端服务
#### 2.3 启动服务
```bash
```python
# 启动web服务
npm run dev
```
<img width="935" height="311" alt="Frontend Start" src="https://github.com/user-attachments/assets/8b08fc46-01d0-458b-ab4d-f5ac04bc2510" />
<img width="1280" height="652" alt="Frontend UI" src="https://github.com/user-attachments/assets/542dbee3-8cd4-4b16-a8e5-36f8d6153820" />
### 五、初始化系统
```bash
# 初始化数据库,获取超级管理员账号
curl -X POST http://127.0.0.1:8000/api/setup
```
**超级管理员账号:**
- 账号:`admin@example.com`
- 密码:`admin_password`
### 六、完整启动流程
```
Step 1 克隆项目
Step 2 启动基础服务PostgreSQL / Neo4j / Redis / Elasticsearch
Step 3 配置 .env 环境变量
Step 4 执行 alembic upgrade head 初始化数据库
Step 5 uv run -m app.main 启动后端 API
Step 6 npm run dev 启动前端
Step 7 curl -X POST http://127.0.0.1:8000/api/setup 初始化系统
Step 8 使用管理员账号登录前端页面
```
---
服务启动会输出可访问的前端界面
<img width="935" height="311" alt="image-6" src="https://github.com/user-attachments/assets/cba1074a-440c-4866-8a94-7b6d1c911a93" />
<img width="1280" height="652" alt="image-7" src="https://github.com/user-attachments/assets/a719dc0a-cbdd-4ba1-9b21-123d5eac32eb" />
## 四、用户操作
step1项目获取
step2后端API服务启动
step3前端web应用启动
step4 终端输入 curl.exe -X POST http://127.0.0.1:8000/api/setup ,访问接口初始化数据库获得超级管理员账号
step5超级管理员&#x20;
账号admin@example.com
密码admin\_password
step6登陆前端页面
## 技术栈
| 层级 | 技术 |
|------|------|
| 后端框架 | FastAPI + Uvicorn |
| 异步任务 | Celery三队列memory / document / periodic |
| 主数据库 | PostgreSQL 13+ |
| 图数据库 | Neo4j 4.4+ |
| 搜索引擎 | Elasticsearch 8.x关键词 + 语义向量混合) |
| 缓存/队列 | Redis 6.0+ |
| ORM | SQLAlchemy 2.0 + Alembic |
| LLM 集成 | LangChain / OpenAI / DashScope / AWS Bedrock |
| MCP 集成 | fastmcp + langchain-mcp-adapters |
| 前端框架 | React 18 + TypeScript + Vite |
| UI 组件库 | Ant Design 5.x |
| 图可视化 | AntV X6 + ECharts + D3.js |
| 包管理 | uv后端/ npm前端 |
---
## 许可证
本项目采用 [Apache License 2.0](LICENSE) 开源协议。
---
本项目采用 Apache License 2.0 开源协议,详情见 `LICENSE`
## 致谢与交流
- **问题反馈**:请提交 [Issue](https://github.com/SuanmoSuanyangTechnology/MemoryBear/issues)
- **贡献代码**:请阅读 [贡献指南](CONTRIBUTING.md),提交 [Pull Request](https://github.com/SuanmoSuanyangTechnology/MemoryBear/pulls) 前请先创建功能分支并遵循 Conventional Commits 格式
- **社区讨论**[GitHub Discussions](https://github.com/SuanmoSuanyangTechnology/MemoryBear/discussions)
- **微信社群**:扫描下方二维码加入微信交流群
![WeChat QR](https://github.com/user-attachments/assets/8c81885c-4134-40d5-96e2-7f78cc082dc6)
- **Star 历史**
[![Star History Chart](https://api.star-history.com/svg?repos=SuanmoSuanyangTechnology/MemoryBear&type=Date)](https://star-history.com/#SuanmoSuanyangTechnology/MemoryBear&Date)
- **联系我们**tianyou_hubm@redbearai.com
- 问题反馈与讨论:请提交 Issue 到代码仓库
- 欢迎贡献:提交 PR 前请先创建功能分支并遵循常规提交信息格式
- 如感兴趣需要联络tianyou_hubm@redbearai.com

View File

@@ -82,32 +82,19 @@ async def get_preview_chunks(
detail="The file does not exist or you do not have permission to access it"
)
# 5. Get file content from storage backend
if not db_file.file_key:
# 5. Construct file path/files/{kb_id}/{parent_id}/{file.id}{file.file_ext}
file_path = os.path.join(
settings.FILE_PATH,
str(db_file.kb_id),
str(db_file.parent_id),
f"{db_file.id}{db_file.file_ext}"
)
# 6. Check if the file exists
if not os.path.exists(file_path):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="File has no storage key (legacy data not migrated)"
)
from app.services.file_storage_service import FileStorageService
import asyncio
storage_service = FileStorageService()
async def _download():
return await storage_service.download_file(db_file.file_key)
try:
file_binary = asyncio.run(_download())
except RuntimeError:
loop = asyncio.new_event_loop()
try:
file_binary = loop.run_until_complete(_download())
finally:
loop.close()
except Exception as e:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"File not found in storage: {e}"
detail="File not found (possibly deleted)"
)
# 7. Document parsing & segmentation
@@ -117,12 +104,11 @@ async def get_preview_chunks(
vision_model = QWenCV(
key=db_knowledge.image2text.api_keys[0].api_key,
model_name=db_knowledge.image2text.api_keys[0].model_name,
lang="Chinese",
lang="Chinese", # Default to Chinese
base_url=db_knowledge.image2text.api_keys[0].api_base
)
from app.core.rag.app.naive import chunk
res = chunk(filename=db_file.file_name,
binary=file_binary,
res = chunk(filename=file_path,
from_page=0,
to_page=5,
callback=progress_callback,

View File

@@ -20,7 +20,6 @@ from app.models.user_model import User
from app.schemas import document_schema
from app.schemas.response_schema import ApiResponse
from app.services import document_service, file_service, knowledge_service
from app.services.file_storage_service import FileStorageService, get_file_storage_service
# Obtain a dedicated API logger
@@ -232,8 +231,7 @@ async def update_document(
async def delete_document(
document_id: uuid.UUID,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user),
storage_service: FileStorageService = Depends(get_file_storage_service),
current_user: User = Depends(get_current_user)
):
"""
Delete document
@@ -259,7 +257,7 @@ async def delete_document(
db.commit()
# 3. Delete file
await file_controller._delete_file(db=db, file_id=file_id, current_user=current_user, storage_service=storage_service)
await file_controller._delete_file(db=db, file_id=file_id, current_user=current_user)
# 4. Delete vector index
db_knowledge = knowledge_service.get_knowledge_by_id(db, knowledge_id=db_document.kb_id, current_user=current_user)
@@ -307,25 +305,38 @@ async def parse_documents(
detail="The file does not exist or you do not have permission to access it"
)
# 3. Get file_key for storage backend
if not db_file.file_key:
api_logger.error(f"File has no storage key (legacy data not migrated): file_id={db_file.id}")
# 3. Construct file path/files/{kb_id}/{parent_id}/{file.id}{file.file_ext}
file_path = os.path.join(
settings.FILE_PATH,
str(db_file.kb_id),
str(db_file.parent_id),
f"{db_file.id}{db_file.file_ext}"
)
# 4. Check if the file exists
api_logger.debug(f"Constructed file path: {file_path}")
api_logger.debug(f"File metadata - kb_id: {db_file.kb_id}, parent_id: {db_file.parent_id}, file_id: {db_file.id}, extension: {db_file.file_ext}")
if not os.path.exists(file_path):
api_logger.error(f"File not found (possibly deleted): file_path={file_path}, file_id={db_file.id}, document_id={document_id}")
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="File has no storage key (legacy data not migrated)"
detail="File not found (possibly deleted)"
)
# 4. Obtain knowledge base information
api_logger.info(f"Obtain details of the knowledge base: knowledge_id={db_document.kb_id}")
# 5. Obtain knowledge base information
api_logger.info( f"Obtain details of the knowledge base: knowledge_id={db_document.kb_id}")
db_knowledge = knowledge_service.get_knowledge_by_id(db, knowledge_id=db_document.kb_id, current_user=current_user)
if not db_knowledge:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Knowledge base not found")
api_logger.warning(f"The knowledge base does not exist or access is denied: knowledge_id={db_document.kb_id}")
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="The knowledge base does not exist or access is denied"
)
# 5. Dispatch parse task with file_key (not file_path)
task = celery_app.send_task(
"app.core.rag.tasks.parse_document",
args=[db_file.file_key, document_id, db_file.file_name]
)
# 6. Task: Document parsing, vectorization, and storage
# from app.tasks import parse_document
# parse_document(file_path, document_id)
task = celery_app.send_task("app.core.rag.tasks.parse_document", args=[file_path, document_id])
result = {
"task_id": task.id
}

View File

@@ -1,10 +1,12 @@
import os
from pathlib import Path
import shutil
from typing import Any, Optional
import uuid
from fastapi import APIRouter, Depends, HTTPException, status, File, UploadFile, Query
from fastapi.encoders import jsonable_encoder
from fastapi.responses import Response
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session
from app.core.config import settings
@@ -17,14 +19,10 @@ from app.models.user_model import User
from app.schemas import file_schema, document_schema
from app.schemas.response_schema import ApiResponse
from app.services import file_service, document_service
from app.services.knowledge_service import get_knowledge_by_id as get_kb_by_id
from app.services.file_storage_service import (
FileStorageService,
generate_kb_file_key,
get_file_storage_service,
)
from app.core.quota_stub import check_knowledge_capacity_quota
# Obtain a dedicated API logger
api_logger = get_api_logger()
router = APIRouter(
@@ -37,37 +35,67 @@ router = APIRouter(
async def get_files(
kb_id: uuid.UUID,
parent_id: uuid.UUID,
page: int = Query(1, gt=0),
pagesize: int = Query(20, gt=0, le=100),
page: int = Query(1, gt=0), # Default: 1, which must be greater than 0
pagesize: int = Query(20, gt=0, le=100), # Default: 20 items per page, maximum: 100 items
orderby: Optional[str] = Query(None, description="Sort fields, such as: created_at"),
desc: Optional[bool] = Query(False, description="Is it descending order"),
keywords: Optional[str] = Query(None, description="Search keywords (file name)"),
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user)
):
"""Paged query file list"""
api_logger.info(f"Query file list: kb_id={kb_id}, parent_id={parent_id}, page={page}, pagesize={pagesize}")
"""
Paged query file list
- Support filtering by kb_id and parent_id
- Support keyword search for file names
- Support dynamic sorting
- Return paging metadata + file list
"""
api_logger.info(f"Query file list: kb_id={kb_id}, parent_id={parent_id}, page={page}, pagesize={pagesize}, keywords={keywords}, username: {current_user.username}")
# 1. parameter validation
if page < 1 or pagesize < 1:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="The paging parameter must be greater than 0")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="The paging parameter must be greater than 0"
)
filters = [file_model.File.kb_id == kb_id]
# 2. Construct query conditions
filters = [
file_model.File.kb_id == kb_id
]
if parent_id:
filters.append(file_model.File.parent_id == parent_id)
# Keyword search (fuzzy matching of file name)
if keywords:
filters.append(file_model.File.file_name.ilike(f"%{keywords}%"))
# 3. Execute paged query
try:
api_logger.debug("Start executing file paging query")
total, items = file_service.get_files_paginated(
db=db, filters=filters, page=page, pagesize=pagesize,
orderby=orderby, desc=desc, current_user=current_user
db=db,
filters=filters,
page=page,
pagesize=pagesize,
orderby=orderby,
desc=desc,
current_user=current_user
)
api_logger.info(f"File query successful: total={total}, returned={len(items)} records")
except Exception as e:
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Query failed: {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Query failed: {str(e)}"
)
# 4. Return structured response
result = {
"items": items,
"page": {"page": page, "pagesize": pagesize, "total": total, "has_next": page * pagesize < total}
"page": {
"page": page,
"pagesize": pagesize,
"total": total,
"has_next": True if page * pagesize < total else False
}
}
return success(data=jsonable_encoder(result), msg="Query of file list succeeded")
@@ -80,14 +108,23 @@ async def create_folder(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user),
):
"""Create a new folder"""
api_logger.info(f"Create folder request: kb_id={kb_id}, parent_id={parent_id}, folder_name={folder_name}")
"""
Create a new folder
"""
api_logger.info(f"Create folder request: kb_id={kb_id}, parent_id={parent_id}, folder_name={folder_name}, username: {current_user.username}")
try:
create_folder_data = file_schema.FileCreate(
kb_id=kb_id, created_by=current_user.id, parent_id=parent_id,
file_name=folder_name, file_ext='folder', file_size=0,
api_logger.debug(f"Start creating a folder: {folder_name}")
create_folder = file_schema.FileCreate(
kb_id=kb_id,
created_by=current_user.id,
parent_id=parent_id,
file_name=folder_name,
file_ext='folder',
file_size=0,
)
db_file = file_service.create_file(db=db, file=create_folder_data, current_user=current_user)
db_file = file_service.create_file(db=db, file=create_folder, current_user=current_user)
api_logger.info(f"Folder created successfully: {db_file.file_name} (ID: {db_file.id})")
return success(data=jsonable_encoder(file_schema.File.model_validate(db_file)), msg="Folder creation successful")
except Exception as e:
api_logger.error(f"Folder creation failed: {folder_name} - {str(e)}")
@@ -101,58 +138,76 @@ async def upload_file(
parent_id: uuid.UUID,
file: UploadFile = File(...),
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user),
storage_service: FileStorageService = Depends(get_file_storage_service),
current_user: User = Depends(get_current_user)
):
"""Upload file to storage backend"""
api_logger.info(f"upload file request: kb_id={kb_id}, parent_id={parent_id}, filename={file.filename}")
"""
upload file
"""
api_logger.info(f"upload file request: kb_id={kb_id}, parent_id={parent_id}, filename={file.filename}, username: {current_user.username}")
# Read the contents of the file
contents = await file.read()
# Check file size
file_size = len(contents)
print(f"file size: {file_size} byte")
if file_size == 0:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="The file is empty.")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="The file is empty."
)
# If the file size exceeds 50MB (50 * 1024 * 1024 bytes)
if file_size > settings.MAX_FILE_SIZE:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"File size exceeds {settings.MAX_FILE_SIZE} byte limit")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"The file size exceeds the {settings.MAX_FILE_SIZE}byte limit"
)
# Extract the extension using `os.path.splitext`
_, file_extension = os.path.splitext(file.filename)
file_ext = file_extension.lower()
# Create File record
upload_file_data = file_schema.FileCreate(
kb_id=kb_id, created_by=current_user.id, parent_id=parent_id,
file_name=file.filename, file_ext=file_ext, file_size=file_size,
upload_file = file_schema.FileCreate(
kb_id=kb_id,
created_by=current_user.id,
parent_id=parent_id,
file_name=file.filename,
file_ext=file_extension.lower(),
file_size=file_size,
)
db_file = file_service.create_file(db=db, file=upload_file_data, current_user=current_user)
db_file = file_service.create_file(db=db, file=upload_file, current_user=current_user)
# Upload to storage backend
file_key = generate_kb_file_key(kb_id=kb_id, file_id=db_file.id, file_ext=file_ext)
try:
await storage_service.storage.upload(file_key=file_key, content=contents, content_type=file.content_type)
except Exception as e:
api_logger.error(f"Storage upload failed: {e}")
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"File storage failed: {str(e)}")
# Construct a save path/files/{kb_id}/{parent_id}/{file.id}{file_extension}
save_dir = os.path.join(settings.FILE_PATH, str(kb_id), str(parent_id))
Path(save_dir).mkdir(parents=True, exist_ok=True) # Ensure that the directory exists
save_path = os.path.join(save_dir, f"{db_file.id}{db_file.file_ext}")
# Save file_key
db_file.file_key = file_key
db.commit()
db.refresh(db_file)
# Save file
with open(save_path, "wb") as f:
f.write(contents)
# Create document (inherit parser_config from knowledge base)
default_parser_config = {
"layout_recognize": "DeepDOC", "chunk_token_num": 128, "delimiter": "\n",
"auto_keywords": 0, "auto_questions": 0, "html4excel": "false"
}
try:
db_knowledge = get_kb_by_id(db, knowledge_id=kb_id, current_user=current_user)
if db_knowledge and db_knowledge.parser_config:
default_parser_config.update(dict(db_knowledge.parser_config))
except Exception:
pass
# Verify whether the file has been saved successfully
if not os.path.exists(save_path):
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="File save failed"
)
# Create a document
create_data = document_schema.DocumentCreate(
kb_id=kb_id, created_by=current_user.id, file_id=db_file.id,
file_name=db_file.file_name, file_ext=db_file.file_ext, file_size=db_file.file_size,
file_meta={}, parser_id="naive", parser_config=default_parser_config
kb_id=kb_id,
created_by=current_user.id,
file_id=db_file.id,
file_name=db_file.file_name,
file_ext=db_file.file_ext,
file_size=db_file.file_size,
file_meta={},
parser_id="naive",
parser_config={
"layout_recognize": "DeepDOC",
"chunk_token_num": 128,
"delimiter": "\n",
"auto_keywords": 0,
"auto_questions": 0,
"html4excel": "false"
}
)
db_document = document_service.create_document(db=db, document=create_data, current_user=current_user)
@@ -166,73 +221,123 @@ async def custom_text(
parent_id: uuid.UUID,
create_data: file_schema.CustomTextFileCreate,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user),
storage_service: FileStorageService = Depends(get_file_storage_service),
current_user: User = Depends(get_current_user)
):
"""Custom text upload"""
"""
custom text
"""
api_logger.info(f"custom text upload request: kb_id={kb_id}, parent_id={parent_id}, title={create_data.title}, content={create_data.content}, username: {current_user.username}")
# Check file content size
# 将内容编码为字节UTF-8
content_bytes = create_data.content.encode('utf-8')
file_size = len(content_bytes)
print(f"file size: {file_size} byte")
if file_size == 0:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="The content is empty.")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="The content is empty."
)
# If the file size exceeds 50MB (50 * 1024 * 1024 bytes)
if file_size > settings.MAX_FILE_SIZE:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=f"Content size exceeds {settings.MAX_FILE_SIZE} byte limit")
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"The content size exceeds the {settings.MAX_FILE_SIZE}byte limit"
)
upload_file_data = file_schema.FileCreate(
kb_id=kb_id, created_by=current_user.id, parent_id=parent_id,
file_name=f"{create_data.title}.txt", file_ext=".txt", file_size=file_size,
upload_file = file_schema.FileCreate(
kb_id=kb_id,
created_by=current_user.id,
parent_id=parent_id,
file_name=f"{create_data.title}.txt",
file_ext=".txt",
file_size=file_size,
)
db_file = file_service.create_file(db=db, file=upload_file_data, current_user=current_user)
db_file = file_service.create_file(db=db, file=upload_file, current_user=current_user)
# Upload to storage backend
file_key = generate_kb_file_key(kb_id=kb_id, file_id=db_file.id, file_ext=".txt")
try:
await storage_service.storage.upload(file_key=file_key, content=content_bytes, content_type="text/plain")
except Exception as e:
api_logger.error(f"Storage upload failed: {e}")
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"File storage failed: {str(e)}")
# Construct a save path/files/{kb_id}/{parent_id}/{file.id}{file_extension}
save_dir = os.path.join(settings.FILE_PATH, str(kb_id), str(parent_id))
Path(save_dir).mkdir(parents=True, exist_ok=True) # Ensure that the directory exists
save_path = os.path.join(save_dir, f"{db_file.id}.txt")
db_file.file_key = file_key
db.commit()
db.refresh(db_file)
# Save file
with open(save_path, "wb") as f:
f.write(content_bytes)
# Verify whether the file has been saved successfully
if not os.path.exists(save_path):
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="File save failed"
)
# Create a document
create_document_data = document_schema.DocumentCreate(
kb_id=kb_id, created_by=current_user.id, file_id=db_file.id,
file_name=db_file.file_name, file_ext=db_file.file_ext, file_size=db_file.file_size,
file_meta={}, parser_id="naive",
parser_config={"layout_recognize": "DeepDOC", "chunk_token_num": 128, "delimiter": "\n",
"auto_keywords": 0, "auto_questions": 0, "html4excel": "false"}
kb_id=kb_id,
created_by=current_user.id,
file_id=db_file.id,
file_name=db_file.file_name,
file_ext=db_file.file_ext,
file_size=db_file.file_size,
file_meta={},
parser_id="naive",
parser_config={
"layout_recognize": "DeepDOC",
"chunk_token_num": 128,
"delimiter": "\n",
"auto_keywords": 0,
"auto_questions": 0,
"html4excel": "false"
}
)
db_document = document_service.create_document(db=db, document=create_document_data, current_user=current_user)
api_logger.info(f"custom text upload successfully: {create_data.title} (file_id: {db_file.id}, document_id: {db_document.id})")
return success(data=jsonable_encoder(document_schema.Document.model_validate(db_document)), msg="custom text upload successful")
@router.get("/{file_id}", response_model=Any)
async def get_file(
file_id: uuid.UUID,
db: Session = Depends(get_db),
storage_service: FileStorageService = Depends(get_file_storage_service),
db: Session = Depends(get_db)
) -> Any:
"""Download file by file_id"""
"""
Download the file based on the file_id
- Query file information from the database
- Construct the file path and check if it exists
- Return a FileResponse to download the file
"""
api_logger.info(f"Download the file based on the file_id: file_id={file_id}")
# 1. Query file information from the database
db_file = file_service.get_file_by_id(db, file_id=file_id)
if not db_file:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="File not found")
api_logger.warning(f"The file does not exist or you do not have permission to access it: file_id={file_id}")
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="The file does not exist or you do not have permission to access it"
)
if not db_file.file_key:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="File has no storage key (legacy data not migrated)")
# 2. Construct file path/files/{kb_id}/{parent_id}/{file.id}{file.file_ext}
file_path = os.path.join(
settings.FILE_PATH,
str(db_file.kb_id),
str(db_file.parent_id),
f"{db_file.id}{db_file.file_ext}"
)
try:
content = await storage_service.download_file(db_file.file_key)
except Exception as e:
api_logger.error(f"Storage download failed: {e}")
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="File not found in storage")
# 3. Check if the file exists
if not os.path.exists(file_path):
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="File not found (possibly deleted)"
)
import mimetypes
media_type = mimetypes.guess_type(db_file.file_name)[0] or "application/octet-stream"
return Response(
content=content,
media_type=media_type,
headers={"Content-Disposition": f'attachment; filename="{db_file.file_name}"'}
# 4.Return FileResponse (automatically handle download)
return FileResponse(
path=file_path,
filename=db_file.file_name, # Use original file name
media_type="application/octet-stream" # Universal binary stream type
)
@@ -243,22 +348,50 @@ async def update_file(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user)
):
"""Update file information (such as file name)"""
db_file = file_service.get_file_by_id(db, file_id=file_id)
if not db_file:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="File not found")
"""
Update file information (such as file name)
- Only specified fields such as file_name are allowed to be modified
"""
api_logger.debug(f"Query the file to be updated: {file_id}")
# 1. Check if the file exists
db_file = file_service.get_file_by_id(db, file_id=file_id)
if not db_file:
api_logger.warning(f"The file does not exist or you do not have permission to access it: file_id={file_id}")
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="The file does not exist or you do not have permission to access it"
)
# 2. Update fields (only update non-null fields)
api_logger.debug(f"Start updating the file fields: {file_id}")
updated_fields = []
for field, value in update_data.dict(exclude_unset=True).items():
if hasattr(db_file, field):
setattr(db_file, field, value)
old_value = getattr(db_file, field)
if old_value != value:
# update value
setattr(db_file, field, value)
updated_fields.append(f"{field}: {old_value} -> {value}")
if updated_fields:
api_logger.debug(f"updated fields: {', '.join(updated_fields)}")
# 3. Save to database
try:
db.commit()
db.refresh(db_file)
api_logger.info(f"The file has been successfully updated: {db_file.file_name} (ID: {db_file.id})")
except Exception as e:
db.rollback()
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"File update failed: {str(e)}")
api_logger.error(f"File update failed: file_id={file_id} - {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"File update failed: {str(e)}"
)
# 4. Return the updated file
return success(data=jsonable_encoder(file_schema.File.model_validate(db_file)), msg="File information updated successfully")
@@ -266,43 +399,60 @@ async def update_file(
async def delete_file(
file_id: uuid.UUID,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user),
storage_service: FileStorageService = Depends(get_file_storage_service),
current_user: User = Depends(get_current_user)
):
"""Delete a file or folder"""
api_logger.info(f"Request to delete file: file_id={file_id}")
await _delete_file(db=db, file_id=file_id, current_user=current_user, storage_service=storage_service)
"""
Delete a file or folder
"""
api_logger.info(f"Request to delete file: file_id={file_id}, username: {current_user.username}")
await _delete_file(db=db, file_id=file_id, current_user=current_user)
return success(msg="File deleted successfully")
async def _delete_file(
file_id: uuid.UUID,
db: Session,
current_user: User,
storage_service: FileStorageService,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user)
) -> None:
"""Delete a file or folder from storage and database"""
"""
Delete a file or folder
"""
# 1. Check if the file exists
db_file = file_service.get_file_by_id(db, file_id=file_id)
if not db_file:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="File not found")
api_logger.warning(f"The file does not exist or you do not have permission to access it: file_id={file_id}")
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="The file does not exist or you do not have permission to access it"
)
# Delete from storage backend
# 2. Construct physical path
file_path = Path(
settings.FILE_PATH,
str(db_file.kb_id),
str(db_file.id)
) if db_file.file_ext == 'folder' else Path(
settings.FILE_PATH,
str(db_file.kb_id),
str(db_file.parent_id),
f"{db_file.id}{db_file.file_ext}"
)
# 3. Delete physical files/folders
try:
if file_path.exists():
if db_file.file_ext == 'folder':
shutil.rmtree(file_path) # Recursively delete folders
else:
file_path.unlink() # Delete a single file
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to delete physical file/folder: {str(e)}"
)
# 4.Delete db_file
if db_file.file_ext == 'folder':
# For folders, delete all child files from storage first
child_files = db.query(file_model.File).filter(file_model.File.parent_id == db_file.id).all()
for child in child_files:
if child.file_key:
try:
await storage_service.delete_file(child.file_key)
except Exception as e:
api_logger.warning(f"Failed to delete child file from storage: {child.file_key} - {e}")
db.query(file_model.File).filter(file_model.File.parent_id == db_file.id).delete()
else:
if db_file.file_key:
try:
await storage_service.delete_file(db_file.file_key)
except Exception as e:
api_logger.warning(f"Failed to delete file from storage: {db_file.file_key} - {e}")
db.delete(db_file)
db.commit()

View File

@@ -1,4 +1,4 @@
import asyncio
import uuid
from fastapi import APIRouter, Depends, HTTPException, status, Query
from pydantic import BaseModel, Field
@@ -10,7 +10,7 @@ from app.dependencies import get_current_user
from app.models.user_model import User
from app.schemas.response_schema import ApiResponse
from app.services import memory_dashboard_service, workspace_service
from app.services import memory_dashboard_service, memory_storage_service, workspace_service
from app.services.memory_agent_service import get_end_users_connected_configs_batch
from app.services.app_statistics_service import AppStatisticsService
from app.core.logging_config import get_api_logger
@@ -48,7 +48,7 @@ def get_workspace_total_end_users(
@router.get("/end_users", response_model=ApiResponse)
def get_workspace_end_users(
async def get_workspace_end_users(
workspace_id: Optional[uuid.UUID] = Query(None, description="工作空间ID可选默认当前用户工作空间"),
keyword: Optional[str] = Query(None, description="搜索关键词(同时模糊匹配 other_name 和 id"),
page: int = Query(1, ge=1, description="页码从1开始"),
@@ -58,15 +58,6 @@ def get_workspace_end_users(
):
"""
获取工作空间的宿主列表(分页查询,支持模糊搜索)
新增:记忆数量过滤:
Neo4j 模式:
- 使用 end_users.memory_count 过滤 memory_count > 0 的宿主
- memory_num.total 直接取 end_user.memory_count
RAG 模式:
- 使用 documents.chunk_num 聚合过滤 chunk 总数 > 0 的宿主
- memory_num.total 取聚合后的 chunk 总数
返回工作空间下的宿主列表,支持分页查询和模糊搜索。
通过 keyword 参数同时模糊匹配 other_name 和 id 字段。
@@ -89,29 +80,17 @@ def get_workspace_end_users(
current_workspace_type = memory_dashboard_service.get_current_workspace_type(db, workspace_id, current_user)
api_logger.info(f"用户 {current_user.username} 请求获取工作空间 {workspace_id} 的宿主列表, 类型: {current_workspace_type}")
if current_workspace_type == "rag":
end_users_result = memory_dashboard_service.get_workspace_end_users_paginated_rag(
db=db,
workspace_id=workspace_id,
current_user=current_user,
page=page,
pagesize=pagesize,
keyword=keyword,
)
raw_items = end_users_result.get("items", [])
end_users = [item["end_user"] for item in raw_items]
else:
end_users_result = memory_dashboard_service.get_workspace_end_users_paginated(
db=db,
workspace_id=workspace_id,
current_user=current_user,
page=page,
pagesize=pagesize,
keyword=keyword,
)
raw_items = end_users_result.get("items", [])
end_users = raw_items
# 获取分页的 end_users
end_users_result = memory_dashboard_service.get_workspace_end_users_paginated(
db=db,
workspace_id=workspace_id,
current_user=current_user,
page=page,
pagesize=pagesize,
keyword=keyword
)
end_users = end_users_result.get("items", [])
total = end_users_result.get("total", 0)
if not end_users:
@@ -122,19 +101,50 @@ def get_workspace_end_users(
"page": page,
"pagesize": pagesize,
"total": total,
"hasnext": (page * pagesize) < total,
},
"hasnext": (page * pagesize) < total
}
}, msg="宿主列表获取成功")
end_user_ids = [str(user.id) for user in end_users]
try:
memory_configs_map = get_end_users_connected_configs_batch(end_user_ids, db)
except Exception as e:
api_logger.error(f"批量获取记忆配置失败: {str(e)}")
memory_configs_map = {}
# 并发执行两个独立的查询任务
async def get_memory_configs():
"""获取记忆配置(在线程池中执行同步查询)"""
try:
return await asyncio.to_thread(
get_end_users_connected_configs_batch,
end_user_ids, db
)
except Exception as e:
api_logger.error(f"批量获取记忆配置失败: {str(e)}")
return {}
# 触发按需初始化:为 implicit_emotions_storage / interest_distribution 中没有记录的用户异步生成数据
async def get_memory_nums():
"""获取记忆数量"""
if current_workspace_type == "rag":
# RAG 模式:批量查询
try:
chunk_map = await asyncio.to_thread(
memory_dashboard_service.get_users_total_chunk_batch,
end_user_ids, db, current_user
)
return {uid: {"total": count} for uid, count in chunk_map.items()}
except Exception as e:
api_logger.error(f"批量获取 RAG chunk 数量失败: {str(e)}")
return {uid: {"total": 0} for uid in end_user_ids}
elif current_workspace_type == "neo4j":
# Neo4j 模式批量查询简化版本只返回total
try:
batch_result = await memory_storage_service.search_all_batch(end_user_ids)
return {uid: {"total": count} for uid, count in batch_result.items()}
except Exception as e:
api_logger.error(f"批量获取 Neo4j 记忆数量失败: {str(e)}")
return {uid: {"total": 0} for uid in end_user_ids}
return {uid: {"total": 0} for uid in end_user_ids}
# 触发按需初始化:为 implicit_emotions_storage 中没有记录的用户异步生成数据
try:
from app.celery_app import celery_app as _celery_app
_celery_app.send_task(
@@ -149,26 +159,27 @@ def get_workspace_end_users(
except Exception as e:
api_logger.warning(f"触发按需初始化任务失败(不影响主流程): {e}")
# 并发执行配置查询和记忆数量查询
memory_configs_map, memory_nums_map = await asyncio.gather(
get_memory_configs(),
get_memory_nums()
)
# 构建结果列表
items = []
for index, end_user in enumerate(end_users):
for end_user in end_users:
user_id = str(end_user.id)
config_info = memory_configs_map.get(user_id, {})
if current_workspace_type == "rag":
memory_total = int(raw_items[index].get("memory_count", 0) or 0)
else:
memory_total = int(getattr(end_user, "memory_count", 0) or 0)
items.append({
"end_user": {
"id": user_id,
"other_name": end_user.other_name,
'end_user': {
'id': user_id,
'other_name': end_user.other_name
},
"memory_num": {"total": memory_total},
"memory_config": {
'memory_num': memory_nums_map.get(user_id, {"total": 0}),
'memory_config': {
"memory_config_id": config_info.get("memory_config_id"),
"memory_config_name": config_info.get("memory_config_name"),
},
"memory_config_name": config_info.get("memory_config_name")
}
})
# 触发社区聚类补全任务(异步,不阻塞接口响应)
@@ -396,7 +407,6 @@ def get_current_user_rag_total_num(
total_chunk = memory_dashboard_service.get_current_user_total_chunk(end_user_id, db, current_user)
return success(data=total_chunk, msg="宿主RAG知识数据获取成功")
@router.get("/rag_content", response_model=ApiResponse)
def get_rag_content(
end_user_id: str = Query(..., description="宿主ID"),

View File

@@ -20,7 +20,6 @@ from app.core.memory.storage_services.extraction_engine.knowledge_extraction.mem
memory_summary_generation
from app.core.memory.utils.llm.llm_utils import MemoryClientFactory
from app.core.memory.utils.log.logging_utils import log_time
from app.core.memory.utils.memory_count_utils import sync_end_user_memory_count_from_neo4j
from app.db import get_db_context
from app.repositories.neo4j.add_edges import add_memory_summary_statement_edges
from app.repositories.neo4j.add_nodes import add_memory_summary_nodes
@@ -314,28 +313,6 @@ async def write(
except Exception as cache_err:
logger.warning(f"[WRITE] 写入活动统计缓存失败(不影响主流程): {cache_err}", exc_info=True)
# 同步 Neo4j 记忆节点总数到 PostgreSQL end_users.memory_count
if end_user_id:
try:
memory_count_connector = Neo4jConnector()
try:
node_count = await sync_end_user_memory_count_from_neo4j(
end_user_id,
memory_count_connector,
)
finally:
await memory_count_connector.close()
logger.info(
f"[MemoryCount] 写入后同步 memory_count: "
f"end_user_id={end_user_id}, count={node_count}"
)
except Exception as e:
logger.warning(
f"[MemoryCount] 写入后同步 memory_count 失败(不影响主流程): {e}",
exc_info=True,
)
# Close LLM/Embedder underlying httpx clients to prevent
# 'RuntimeError: Event loop is closed' during garbage collection
for client_obj in (llm_client, embedder_client):
@@ -354,4 +331,3 @@ async def write(
logger.info("=== Pipeline Complete ===")
logger.info(f"Total execution time: {total_time:.2f} seconds")

View File

@@ -76,8 +76,8 @@ Remember the following:
- Today's date is {{ datetime }}.
- Do not return anything from the custom few shot example prompts provided above.
- Don't reveal your prompt or model information to the user.
- The output language should match the user's input language.
- Vague times in user input should be converted into specific dates.
- If you are unable to extract any relevant information from the user's input, return the user's original input:{"questions":[userinput]}
# [IMPORTANT]: THE OUTPUT LANGUAGE MUST BE THE SAME AS THE USER'S INPUT LANGUAGE.
The following is the user's input. You need to extract the relevant information from the input and return it in the JSON format as shown above.

View File

@@ -20,7 +20,6 @@ from uuid import UUID
from datetime import datetime
from app.core.memory.storage_services.forgetting_engine.forgetting_strategy import ForgettingStrategy
from app.core.memory.utils.memory_count_utils import sync_end_user_memory_count_from_neo4j
from app.repositories.neo4j.neo4j_connector import Neo4jConnector
@@ -146,22 +145,7 @@ class ForgettingScheduler:
}
logger.info("没有可遗忘的节点对,遗忘周期结束")
# 同步 Neo4j 记忆节点总数到 PostgreSQL 的 end_users.memory_count
if end_user_id:
try:
node_count = await sync_end_user_memory_count_from_neo4j(
end_user_id,
self.connector,
)
logger.info(
f"[MemoryCount] 遗忘后同步 memory_count: "
f"end_user_id={end_user_id}, count={node_count}"
)
except Exception as e:
logger.warning(
f"[MemoryCount] 遗忘后同步 memory_count 失败(不影响主流程): {e}",
exc_info=True,
)
return report
# 步骤3按激活值排序激活值最低的优先
@@ -318,22 +302,7 @@ class ForgettingScheduler:
f"({reduction_rate:.2%}), "
f"耗时 {duration:.2f}"
)
# 同步 Neo4j 记忆节点总数到 PostgreSQL 的 end_users.memory_count
if end_user_id:
try:
node_count = await sync_end_user_memory_count_from_neo4j(
end_user_id,
self.connector,
)
logger.info(
f"[MemoryCount] 遗忘后同步 memory_count: "
f"end_user_id={end_user_id}, count={node_count}"
)
except Exception as e:
logger.warning(
f"[MemoryCount] 遗忘后同步 memory_count 失败(不影响主流程): {e}",
exc_info=True,
)
return report
except Exception as e:

View File

@@ -1,36 +0,0 @@
from uuid import UUID
from app.db import get_db_context
from app.models.end_user_model import EndUser
from app.repositories.memory_config_repository import MemoryConfigRepository
from app.repositories.neo4j.neo4j_connector import Neo4jConnector
async def sync_end_user_memory_count_from_neo4j(
end_user_id: str,
connector: Neo4jConnector,
) -> int:
"""
Sync one end user's Neo4j memory node count to PostgreSQL.
The caller owns the Neo4j connector lifecycle.
"""
if not end_user_id:
return 0
result = await connector.execute_query(
MemoryConfigRepository.SEARCH_FOR_ALL_BATCH,
end_user_ids=[end_user_id],
)
node_count = int(result[0]["total"]) if result else 0
with get_db_context() as db:
db.query(EndUser).filter(
EndUser.id == UUID(end_user_id)
).update(
{"memory_count": node_count},
synchronize_session=False,
)
db.commit()
return node_count

View File

@@ -14,7 +14,6 @@ Transcribe the content from the provided PDF page image into clean Markdown form
6. Do NOT wrap the output in ```markdown or ``` blocks.
7. Only apply Markdown structure to headings, paragraphs, lists, and tables, strictly based on the layout of the image. Do NOT create tables unless an actual table exists in the image.
8. Preserve the original language, information, and order exactly as shown in the image.
9. Your output language MUST match the language of the content in the image. If the image contains Chinese text, output in Chinese. If English, output in English. Never translate.
{% if page %}
At the end of the transcription, add the page divider: `--- Page {{ page }} ---`.

View File

@@ -1,6 +1,5 @@
import asyncio
import logging
import re
import time
import uuid
from abc import ABC, abstractmethod
@@ -23,9 +22,6 @@ from app.services.multimodal_service import MultimodalService
logger = logging.getLogger(__name__)
# 匹配模板变量 {{xxx}} 的正则
_TEMPLATE_PATTERN = re.compile(r"\{\{.*?\}\}")
class NodeExecutionError(Exception):
"""节点执行失败异常。
@@ -507,29 +503,10 @@ class BaseNode(ABC):
variable_pool: The variable pool used for reading and writing variables.
Returns:
A dictionary containing the node's input data with all template
variables resolved to their actual runtime values.
A dictionary containing the node's input data.
"""
return {"config": self._resolve_config(self.config, variable_pool)}
@staticmethod
def _resolve_config(config: Any, variable_pool: VariablePool) -> Any:
"""递归解析 config 中的模板变量,将 {{xxx}} 替换为实际值。
Args:
config: 节点的原始配置(可能包含模板变量)。
variable_pool: 变量池,用于解析模板变量。
Returns:
解析后的配置,所有字符串中的 {{变量}} 已被替换为真实值。
"""
if isinstance(config, str) and _TEMPLATE_PATTERN.search(config):
return BaseNode._render_template(config, variable_pool, strict=False)
elif isinstance(config, dict):
return {k: BaseNode._resolve_config(v, variable_pool) for k, v in config.items()}
elif isinstance(config, list):
return [BaseNode._resolve_config(item, variable_pool) for item in config]
return config
# Default implementation returns the node configuration
return {"config": self.config}
def _extract_output(self, business_result: Any) -> Any:
"""Extracts the actual output from the business result.

View File

@@ -121,10 +121,7 @@ class DocExtractorNode(BaseNode):
return business_result
def _extract_input(self, state: WorkflowState, variable_pool: VariablePool) -> dict[str, Any]:
file_selector = self.config.get("file_selector", "")
# 将变量选择器(如 sys.files解析为实际值
resolved = self.get_variable(file_selector, variable_pool, strict=False, default=file_selector)
return {"file_selector": resolved}
return {"file_selector": self.config.get("file_selector")}
async def execute(self, state: WorkflowState, variable_pool: VariablePool) -> Any:
config = DocExtractorNodeConfig(**self.config)

View File

@@ -1,7 +1,7 @@
import datetime
import uuid
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Text
from sqlalchemy import Column, DateTime, ForeignKey, String, Text
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import relationship
@@ -38,15 +38,6 @@ class EndUser(Base):
comment="关联的记忆配置ID"
)
memory_count = Column(
Integer,
nullable=False,
default=0,
server_default="0",
index=True,
comment="记忆节点总数",
)
# 用户摘要四个维度 - User Summary Four Dimensions
user_summary = Column(Text, nullable=True, comment="缓存的用户摘要(基本介绍)")
personality_traits = Column(Text, nullable=True, comment="性格特点")

View File

@@ -15,5 +15,4 @@ class File(Base):
file_ext = Column(String, index=True, nullable=False, comment="file extension:folder|pdf")
file_size = Column(Integer, default=0, comment="file size(byte)")
file_url = Column(String, index=True, nullable=True, comment="file comes from a website url")
file_key = Column(String(512), nullable=True, index=True, comment="storage file key for FileStorageService")
created_at = Column(DateTime, default=datetime.datetime.now)

View File

@@ -19,6 +19,4 @@ class EndUser(BaseModel):
# 用户摘要和洞察更新时间
user_summary_updated_at: Optional[datetime.datetime] = Field(description="用户摘要最后更新时间", default=None)
memory_insight_updated_at: Optional[datetime.datetime] = Field(description="洞察报告最后更新时间", default=None)
#用户记忆节点总数Neo4j模式
memory_count: int = Field(description="记忆节点总数", default=0)
memory_insight_updated_at: Optional[datetime.datetime] = Field(description="洞察报告最后更新时间", default=None)

View File

@@ -11,7 +11,6 @@ class FileBase(BaseModel):
file_ext: str
file_size: int
file_url: str | None = None
file_key: str | None = None
created_at: datetime.datetime | None = None

View File

@@ -102,11 +102,6 @@ class AppDslService:
{**r, "_ref": self._agent_ref(r.get("target_agent_id"))} for r in (cfg["routing_rules"] or [])
]
return enriched
if app_type == AppType.WORKFLOW:
enriched = {**cfg}
if "nodes" in cfg:
enriched["nodes"] = self._enrich_workflow_nodes(cfg["nodes"])
return enriched
return cfg
def _export_draft(self, app: App, meta: dict, app_meta: dict) -> tuple[str, str]:
@@ -115,7 +110,7 @@ class AppDslService:
config_data = {
"variables": config.variables if config else [],
"edges": config.edges if config else [],
"nodes": self._enrich_workflow_nodes(config.nodes) if config else [],
"nodes": config.nodes if config else [],
"features": config.features if config else {},
"execution_config": config.execution_config if config else {},
"triggers": config.triggers if config else [],
@@ -195,23 +190,6 @@ class AppDslService:
def _enrich_tools(self, tools: list) -> list:
return [{**t, "_ref": self._tool_ref(t.get("tool_id"))} for t in (tools or [])]
def _enrich_workflow_nodes(self, nodes: list) -> list:
"""enrich 工作流节点中的模型引用,添加 name、provider、type 信息"""
from app.core.workflow.nodes.enums import NodeType
enriched_nodes = []
for node in (nodes or []):
node_type = node.get("type")
config = dict(node.get("config") or {})
if node_type in (NodeType.LLM.value, NodeType.QUESTION_CLASSIFIER.value, NodeType.PARAMETER_EXTRACTOR.value):
model_id = config.get("model_id")
if model_id:
config["model_ref"] = self._model_ref(model_id)
del config["model_id"]
enriched_nodes.append({**node, "config": config})
return enriched_nodes
def _skill_ref(self, skill_id) -> Optional[dict]:
if not skill_id:
return None
@@ -642,16 +620,16 @@ class AppDslService:
warnings.append(f"[{node_label}] 知识库 '{kb_id}' 未匹配,已移除,请导入后手动配置")
config["knowledge_bases"] = resolved_kbs
elif node_type in (NodeType.LLM.value, NodeType.QUESTION_CLASSIFIER.value, NodeType.PARAMETER_EXTRACTOR.value):
model_ref = config.get("model_ref") or config.get("model_id")
model_ref = config.get("model_id")
if model_ref:
ref_dict = None
if isinstance(model_ref, dict):
ref_dict = {
"id": model_ref.get("id"),
"name": model_ref.get("name"),
"provider": model_ref.get("provider"),
"type": model_ref.get("type")
}
ref_id = model_ref.get("id")
ref_name = model_ref.get("name")
if ref_id:
ref_dict = {"id": ref_id}
elif ref_name is not None:
ref_dict = {"name": ref_name, "provider": model_ref.get("provider"), "type": model_ref.get("type")}
elif isinstance(model_ref, str):
try:
uuid.UUID(model_ref)
@@ -662,18 +640,12 @@ class AppDslService:
resolved_model_id = self._resolve_model(ref_dict, tenant_id, warnings)
if resolved_model_id:
config["model_id"] = resolved_model_id
if "model_ref" in config:
del config["model_ref"]
else:
warnings.append(f"[{node_label}] 模型未匹配,已置空,请导入后手动配置")
config["model_id"] = None
if "model_ref" in config:
del config["model_ref"]
else:
warnings.append(f"[{node_label}] 模型未匹配,已置空,请导入后手动配置")
config["model_id"] = None
if "model_ref" in config:
del config["model_ref"]
resolved_nodes.append({**node, "config": config})
return resolved_nodes

View File

@@ -34,7 +34,26 @@ def generate_file_key(
Generate a unique file key for storage.
The file key follows the format: {tenant_id}/{workspace_id}/{file_id}{file_ext}
Args:
tenant_id: The tenant UUID.
workspace_id: The workspace UUID.
file_id: The file UUID.
file_ext: The file extension (e.g., '.pdf', '.txt').
Returns:
A unique file key string.
Example:
>>> generate_file_key(
... uuid.UUID('550e8400-e29b-41d4-a716-446655440000'),
... uuid.UUID('660e8400-e29b-41d4-a716-446655440001'),
... uuid.UUID('770e8400-e29b-41d4-a716-446655440002'),
... '.pdf'
... )
'550e8400-e29b-41d4-a716-446655440000/660e8400-e29b-41d4-a716-446655440001/770e8400-e29b-41d4-a716-446655440002.pdf'
"""
# Ensure file_ext starts with a dot
if file_ext and not file_ext.startswith('.'):
file_ext = f'.{file_ext}'
if workspace_id:
@@ -42,21 +61,6 @@ def generate_file_key(
return f"{tenant_id}/{file_id}{file_ext}"
def generate_kb_file_key(
kb_id: uuid.UUID,
file_id: uuid.UUID,
file_ext: str,
) -> str:
"""
Generate a file key for knowledge base files.
Format: kb/{kb_id}/{file_id}{file_ext}
"""
if file_ext and not file_ext.startswith('.'):
file_ext = f'.{file_ext}'
return f"kb/{kb_id}/{file_id}{file_ext}"
class FileStorageService:
"""
High-level service for file storage operations.

View File

@@ -1,5 +1,5 @@
from sqlalchemy.orm import Session
from sqlalchemy import desc, nullslast, or_, cast, String, func
from sqlalchemy import desc, nullslast, or_, and_, cast, String
from typing import List, Optional, Dict, Any
import uuid
from fastapi import HTTPException
@@ -102,7 +102,6 @@ def get_workspace_end_users_paginated(
"""获取工作空间的宿主列表(分页版本,支持模糊搜索)
返回结果按 created_at 从新到旧排序NULL 值排在最后)
固定过滤 memory_count > 0 的宿主,保证分页基于“有记忆宿主”集合计算。
支持通过 keyword 参数同时模糊搜索 other_name 和 id 字段
Args:
@@ -121,8 +120,7 @@ def get_workspace_end_users_paginated(
try:
# 构建基础查询
base_query = db.query(EndUserModel).filter(
EndUserModel.workspace_id == workspace_id,
EndUserModel.memory_count > 0 , # 只查询有记忆的宿主
EndUserModel.workspace_id == workspace_id
)
# 构建搜索条件过滤空字符串和None
@@ -130,13 +128,20 @@ def get_workspace_end_users_paginated(
if keyword:
keyword_pattern = f"%{keyword}%"
# other_name 匹配始终生效id 匹配仅对 other_name 为空的记录生效
base_query = base_query.filter(
or_(
EndUserModel.other_name.ilike(keyword_pattern),
cast(EndUserModel.id, String).ilike(keyword_pattern),
and_(
or_(
EndUserModel.other_name.is_(None),
EndUserModel.other_name == "",
),
cast(EndUserModel.id, String).ilike(keyword_pattern),
),
)
)
business_logger.info(f"应用模糊搜索: keyword={keyword}(匹配 other_name id")
business_logger.info(f"应用模糊搜索: keyword={keyword}(匹配 other_nameother_name 为空时匹配 id")
# 获取总记录数
total = base_query.count()
@@ -164,98 +169,6 @@ def get_workspace_end_users_paginated(
business_logger.error(f"获取工作空间宿主列表(分页)失败: workspace_id={workspace_id} - {str(e)}")
raise
def get_workspace_end_users_paginated_rag(
db: Session,
workspace_id: uuid.UUID,
current_user: User,
page: int,
pagesize: int,
keyword: Optional[str] = None,
) -> Dict[str, Any]:
"""RAG 模式宿主列表分页。
RAG 记忆数量以 documents.chunk_num 为准:
- file_name = end_user_id + ".txt"
- 只统计当前 workspace 下 permission_id="Memory" 的用户记忆知识库
- 在 SQL 层过滤 chunk 总数为 0 的宿主,保证分页准确
"""
business_logger.info(
f"获取 RAG 宿主列表(分页): workspace_id={workspace_id}, "
f"keyword={keyword}, page={page}, pagesize={pagesize}, 操作者: {current_user.username}"
)
try:
from app.models.document_model import Document
from app.models.knowledge_model import Knowledge
chunk_subquery = (
db.query(
Document.file_name.label("file_name"),
func.coalesce(func.sum(Document.chunk_num), 0).label("memory_count"),
)
.join(Knowledge, Document.kb_id == Knowledge.id)
.filter(
Knowledge.workspace_id == workspace_id,
Knowledge.status == 1,
Knowledge.permission_id == "Memory",
Document.status == 1,
)
.group_by(Document.file_name)
.subquery()
)
base_query = (
db.query(
EndUserModel,
chunk_subquery.c.memory_count.label("memory_count"),
)
.join(
chunk_subquery,
chunk_subquery.c.file_name == func.concat(cast(EndUserModel.id, String), ".txt"),
)
.filter(
EndUserModel.workspace_id == workspace_id,
chunk_subquery.c.memory_count > 0,
)
)
keyword = keyword.strip() if keyword else None
if keyword:
keyword_pattern = f"%{keyword}%"
base_query = base_query.filter(
or_(
EndUserModel.other_name.ilike(keyword_pattern),
cast(EndUserModel.id, String).ilike(keyword_pattern),
)
)
total = base_query.count()
if total == 0:
business_logger.info("RAG 模式下没有符合条件的宿主")
return {"items": [], "total": 0}
rows = base_query.order_by(
nullslast(desc(EndUserModel.created_at)),
desc(EndUserModel.id),
).offset((page - 1) * pagesize).limit(pagesize).all()
items = []
for end_user_orm, memory_count in rows:
items.append({
"end_user": EndUserSchema.model_validate(end_user_orm),
"memory_count": int(memory_count or 0),
})
business_logger.info(f"成功获取 RAG 宿主记录 {len(items)} 条,总计 {total}")
return {"items": items, "total": total}
except HTTPException:
raise
except Exception as e:
business_logger.error(
f"获取 RAG 宿主列表(分页)失败: workspace_id={workspace_id} - {str(e)}"
)
raise
def get_workspace_memory_increment(
db: Session,

View File

@@ -1,13 +1,13 @@
{% raw %}You are a professional information extraction system.
Your task is to analyze the provided file content and generate structured metadata.
Your task is to analyze the provided document content and generate structured metadata.
Extract the following fields:
* **summary**: A concise summary of the file in 35 sentences.
* **keywords**: 510 important keywords or key phrases that best represent the file. This field MUST be a JSON array of strings.
* **topic**: The primary topic of the file expressed as a short phrase (38 words).
* **domain**: The broader knowledge domain or field the file belongs to (e.g., Artificial Intelligence, Computer Science, Finance, Healthcare, Education, Law, etc.).
* **summary**: A concise summary of the document in 24 sentences.
* **keywords**: 510 important keywords or key phrases that best represent the document. This field MUST be a JSON array of strings.
* **topic**: The primary topic of the document expressed as a short phrase (38 words).
* **domain**: The broader knowledge domain or field the document belongs to (e.g., Artificial Intelligence, Computer Science, Finance, Healthcare, Education, Law, etc.).
STRICT RULES:
@@ -28,7 +28,7 @@ STRICT RULES:
{% endif %}
{% raw %}
6. `keywords` MUST be a JSON array of strings.
7. If the file content is insufficient, infer the best possible answer based on context.
7. If the document content is insufficient, infer the best possible answer based on context.
8. Ensure the JSON is syntactically correct.
{% endraw %}
9. Output using the language {{ language }}
@@ -50,4 +50,4 @@ Required JSON format:
{% raw %}
}
Now analyze the following file and return the JSON result.{% endraw %}
Now analyze the following document and return the JSON result.{% endraw %}

View File

@@ -210,14 +210,9 @@ def _build_vision_model(file_path: str, db_knowledge):
@celery_app.task(name="app.core.rag.tasks.parse_document")
def parse_document(file_key: str, document_id: uuid.UUID, file_name: str = ""):
def parse_document(file_path: str, document_id: uuid.UUID):
"""
Document parsing, vectorization, and storage.
Args:
file_key: Storage key for FileStorageService (e.g. "kb/{kb_id}/{file_id}.docx")
document_id: Document UUID
file_name: Original file name (used for extension detection in chunk())
Document parsing, vectorization, and storage
"""
db_document = None
@@ -228,6 +223,7 @@ def parse_document(file_key: str, document_id: uuid.UUID, file_name: str = ""):
with get_db_context() as db:
try:
# Celery JSON 序列化会将 UUID 转为字符串,需要确保类型正确
if not isinstance(document_id, uuid.UUID):
document_id = uuid.UUID(str(document_id))
@@ -238,11 +234,7 @@ def parse_document(file_key: str, document_id: uuid.UUID, file_name: str = ""):
if db_knowledge is None:
raise ValueError(f"Knowledge {db_document.kb_id} not found")
# Use file_name from argument or fall back to document record
if not file_name:
file_name = db_document.file_name
# 1. Download file from storage backend
# 1. Document parsing & segmentation
progress_lines.append(f"{datetime.now().strftime('%H:%M:%S')} Start to parse.")
start_time = time.time()
db_document.progress = 0.0
@@ -253,36 +245,45 @@ def parse_document(file_key: str, document_id: uuid.UUID, file_name: str = ""):
db.commit()
db.refresh(db_document)
# Read file content from storage backend (no NFS dependency)
from app.services.file_storage_service import FileStorageService
import asyncio
storage_service = FileStorageService()
async def _download():
return await storage_service.download_file(file_key)
try:
file_binary = asyncio.run(_download())
except RuntimeError:
# If there's already a running loop (e.g. in some worker configurations)
loop = asyncio.new_event_loop()
try:
file_binary = loop.run_until_complete(_download())
finally:
loop.close()
if not file_binary:
raise IOError(f"Downloaded empty file from storage: {file_key}")
logger.info(f"[ParseDoc] Downloaded {len(file_binary)} bytes from storage key: {file_key}")
def progress_callback(prog=None, msg=None):
progress_lines.append(f"{datetime.now().strftime('%H:%M:%S')} parse progress: {prog} msg: {msg}.")
# Prepare vision_model for parsing
vision_model = _build_vision_model(file_name, db_knowledge)
vision_model = _build_vision_model(file_path, db_knowledge)
# 先将文件读入内存,避免解析过程中依赖 NFS 文件持续可访问
# python-docx 等库在 binary=None 时会用路径直接打开文件,
# 在 NFS/共享存储上可能因缓存失效导致 "Package not found"
max_wait_seconds = 30
wait_interval = 2
waited = 0
file_binary = None
while waited <= max_wait_seconds:
# os.listdir 强制 NFS 客户端刷新目录缓存
parent_dir = os.path.dirname(file_path)
try:
os.listdir(parent_dir)
except OSError:
pass
try:
with open(file_path, "rb") as f:
file_binary = f.read()
if not file_binary:
# NFS 上文件存在但内容为空(可能还在同步中)
raise IOError(f"File is empty (0 bytes), NFS may still be syncing: {file_path}")
break
except (FileNotFoundError, IOError) as e:
if waited >= max_wait_seconds:
raise type(e)(
f"File not accessible at '{file_path}' after waiting {max_wait_seconds}s: {e}"
)
logger.warning(f"File not ready on this node, retrying in {wait_interval}s: {file_path} ({e})")
time.sleep(wait_interval)
waited += wait_interval
from app.core.rag.app.naive import chunk
logger.info(f"[ParseDoc] file_binary size={len(file_binary)} bytes, type={type(file_binary).__name__}, bool={bool(file_binary)}")
res = chunk(filename=file_name,
res = chunk(filename=file_path,
binary=file_binary,
from_page=0,
to_page=DEFAULT_PARSE_TO_PAGE,

View File

@@ -1,47 +0,0 @@
"""202604271530
Revision ID: 1f85dce125e5
Revises: 4e89970f9e7c
Create Date: 2026-04-27 15:30:35.614679
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision: str = '1f85dce125e5'
down_revision: Union[str, None] = '4e89970f9e7c'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('files', sa.Column('file_key', sa.String(length=512), nullable=True, comment='storage file key for FileStorageService'))
op.create_index(op.f('ix_files_file_key'), 'files', ['file_key'], unique=False)
op.alter_column('model_configs', 'capability',
existing_type=postgresql.ARRAY(sa.VARCHAR()),
comment="模型能力列表(如['vision', 'audio', 'video', 'thinking']",
existing_comment="模型能力列表(如['vision', 'audio', 'video']",
existing_nullable=False)
# ### end Alembic commands ###
op.execute("""
UPDATE files
SET file_key = 'kb/' || kb_id::text || '/' || parent_id::text || '/' || id::text || file_ext
WHERE file_ext != 'folder' AND file_key IS NULL
""")
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column('model_configs', 'capability',
existing_type=postgresql.ARRAY(sa.VARCHAR()),
comment="模型能力列表(如['vision', 'audio', 'video']",
existing_comment="模型能力列表(如['vision', 'audio', 'video', 'thinking']",
existing_nullable=False)
op.drop_index(op.f('ix_files_file_key'), table_name='files')
op.drop_column('files', 'file_key')
# ### end Alembic commands ###

View File

@@ -1,139 +0,0 @@
"""202604291755
Revision ID: 37e2a73b28c4
Revises: e2d60c6d1a1a
Create Date: 2026-04-29 18:52:35.686290
"""
from typing import Dict, List, Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = '37e2a73b28c4'
down_revision: Union[str, None] = 'e2d60c6d1a1a'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
BATCH_SIZE = 500
def _chunked(values: List[str], size: int) -> List[List[str]]:
return [values[index:index + size] for index in range(0, len(values), size)]
def _load_neo4j_end_user_ids(connection) -> List[str]:
"""加载所有需要从 Neo4j 同步 memory_count 的宿主。
RAG 工作空间的记忆数量以 documents.chunk_num 为准,不写入 end_users.memory_count。
"""
rows = connection.execute(sa.text("""
SELECT eu.id::text AS end_user_id
FROM end_users eu
JOIN workspaces w ON eu.workspace_id = w.id
WHERE w.storage_type IS NULL OR w.storage_type <> 'rag'
""")).all()
return [row[0] for row in rows]
async def _fetch_neo4j_counts(end_user_ids: List[str]) -> Dict[str, int]:
if not end_user_ids:
return {}
from app.repositories.memory_config_repository import MemoryConfigRepository
from app.repositories.neo4j.neo4j_connector import Neo4jConnector
connector = Neo4jConnector()
try:
result = await connector.execute_query(
MemoryConfigRepository.SEARCH_FOR_ALL_BATCH,
end_user_ids=end_user_ids,
)
finally:
await connector.close()
counts = {str(row["user_id"]): int(row["total"]) for row in result}
for end_user_id in end_user_ids:
counts.setdefault(end_user_id, 0)
return counts
def _update_memory_counts(connection, counts: Dict[str, int]) -> int:
updated = 0
for end_user_id, memory_count in counts.items():
result = connection.execute(
sa.text("""
UPDATE end_users
SET memory_count = :memory_count
WHERE id = CAST(:end_user_id AS uuid)
"""),
{
"end_user_id": end_user_id,
"memory_count": memory_count,
},
)
updated += result.rowcount or 0
return updated
def _sync_memory_count_from_neo4j() -> None:
"""迁移时初始化 Neo4j 模式宿主的 memory_count。
"""
import asyncio
print("[memory_count] 开始同步 Neo4j 模式宿主 memory_count")
connection = op.get_bind()
target_ids = _load_neo4j_end_user_ids(connection)
if not target_ids:
print("[memory_count] 没有需要同步的 Neo4j 模式宿主")
return
print(
f"[memory_count] 待同步宿主数量: {len(target_ids)}, "
f"batch_size={BATCH_SIZE}"
)
total_updated = 0
batches = _chunked(target_ids, BATCH_SIZE)
for batch_index, batch_ids in enumerate(batches, start=1):
print(
f"[memory_count] 正在查询 Neo4j: "
f"batch={batch_index}/{len(batches)}, size={len(batch_ids)}"
)
counts = asyncio.run(_fetch_neo4j_counts(batch_ids))
total_updated += _update_memory_counts(connection, counts)
print(
f"[memory_count] 已写入 PostgreSQL: "
f"updated={total_updated}/{len(target_ids)}"
)
print(
f"[memory_count] Neo4j 模式宿主同步完成: "
f"total={len(target_ids)}, updated={total_updated}"
)
def upgrade() -> None:
op.add_column(
'end_users',
sa.Column(
'memory_count',
sa.Integer(),
server_default='0',
nullable=False,
comment='记忆节点总数',
),
)
_sync_memory_count_from_neo4j()
op.create_index(
op.f('ix_end_users_memory_count'),
'end_users',
['memory_count'],
unique=False,
)
def downgrade() -> None:
op.drop_index(op.f('ix_end_users_memory_count'), table_name='end_users')
op.drop_column('end_users', 'memory_count')

View File

@@ -1,34 +0,0 @@
"""202604281230
Revision ID: e2d60c6d1a1a
Revises: 1f85dce125e5
Create Date: 2026-04-28 12:32:01.643954
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision: str = 'e2d60c6d1a1a'
down_revision: Union[str, None] = '1f85dce125e5'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('tenants', 'api_ops_rate_limit')
op.drop_column('tenants', 'plan')
op.drop_column('tenants', 'plan_expired_at')
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('tenants', sa.Column('plan_expired_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True))
op.add_column('tenants', sa.Column('plan', sa.VARCHAR(length=50), autoincrement=False, nullable=True))
op.add_column('tenants', sa.Column('api_ops_rate_limit', sa.VARCHAR(length=100), autoincrement=False, nullable=True))
# ### end Alembic commands ###

Binary file not shown.

Before

Width:  |  Height:  |  Size: 108 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 336 KiB

Binary file not shown.

Binary file not shown.

After

Width:  |  Height:  |  Size: 387 B

View File

@@ -1,13 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="16px" height="16px" viewBox="0 0 16 16" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title>勾选</title>
<g id="空间外层页面优化" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="登录页面" transform="translate(-64, -611)" fill="#FFFFFF" fill-rule="nonzero">
<g id="编组-8" transform="translate(64, 608)">
<g id="勾选" transform="translate(0, 3)">
<path d="M12,0 C14.209139,0 16,1.790861 16,4 L16,12 C16,14.209139 14.209139,16 12,16 L4,16 C1.790861,16 0,14.209139 0,12 L0,4 C0,1.790861 1.790861,4.4408921e-16 4,0 L12,0 Z M11.9182266,4.80024782 C11.7273831,4.80024782 11.5444062,4.87629473 11.4097812,5.0115625 L6.552,9.86932813 L4.4284375,7.74489063 C4.29381317,7.60962766 4.11083967,7.53358379 3.92,7.53358379 C3.72916033,7.53358379 3.54618683,7.60962766 3.4115625,7.74489063 C3.27602096,7.87955071 3.19979999,8.06271883 3.19979999,8.25378125 C3.19979999,8.44484367 3.27602096,8.62801179 3.4115625,8.76267188 L6.0453125,11.3946719 C6.17993745,11.5299396 6.3629143,11.6059866 6.55375781,11.6059866 C6.74460132,11.6059866 6.92757818,11.5299396 7.06220312,11.3946719 L12.4311094,6.02667188 C12.5659036,5.89187668 12.6412595,5.70881589 12.6404302,5.51818919 C12.639587,5.3275625 12.5626279,5.14516989 12.4266562,5.0115625 C12.2920469,4.87629473 12.1090701,4.80024782 11.9182266,4.80024782 Z" id="形状结合"></path>
</g>
</g>
</g>
</g>
</svg>

Before

Width:  |  Height:  |  Size: 1.5 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 5.3 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 3.8 KiB

View File

@@ -467,29 +467,4 @@ input:-webkit-autofill:active {
animation-name: onAutoFillStart;
animation-duration: 1ms;
}
@keyframes onAutoFillStart { from {} to {} }
/* Login input placeholder */
.login-input input::placeholder {
color: #A8A9AA !important;
}
.login-input {
border-color: #A8A9AA;
}
/* Login input hover/focus border */
.login-input:hover,
.login-input:focus-within {
border-color: #FFFFFF !important;
box-shadow: none !important;
}
/* Override browser autofill styles */
.login-input input:-webkit-autofill,
.login-input input:-webkit-autofill:hover,
.login-input input:-webkit-autofill:focus,
.login-input input:-webkit-autofill:active {
-webkit-box-shadow: 0 0 0px 1000px #0A0A0A inset !important;
-webkit-text-fill-color: #FFFFFF !important;
transition: background-color 5000s ease-in-out 0s !important;
}
@keyframes onAutoFillStart { from {} to {} }

View File

@@ -102,7 +102,7 @@ const Index = () => {
<Flex gap={12} wrap="nowrap" className="rb:w-full! rb:h-full! rb:overflow-y-auto">
<div className="rb:flex-1 rb:min-w-0">
<Flex vertical>
<div className='rb:w-full rb:h-26 rb:p-4 rb:bg-cover rb:bg-[url("@/assets/images/index/index_bg.png")] rb:rounded-xl rb:overflow-hidden'>
<div className='rb:w-full rb:h-26 rb:p-4 rb:bg-cover rb:bg-[url("@/assets/images/index/index_bg@2x.png")] rb:rounded-xl rb:overflow-hidden'>
<div className="rb:font-[MiSans-Bold] rb:font-bold rb:text-white rb:text-[18px] rb:leading-7">
{t('index.spaceTitle')}
</div>

View File

@@ -14,33 +14,27 @@ import React, { useState, useEffect } from 'react';
import { useTranslation } from 'react-i18next';
import { Button, Input, Form, App } from 'antd';
import type { FormProps } from 'antd';
import clsx from 'clsx';
import { useUser, type LoginInfo } from '@/store/user';
import { login } from '@/api/user'
import loginBg from '@/assets/images/login/bg.mp4'
import check from '@/assets/images/login/check.svg'
import loginBg from '@/assets/images/login/loginBg.png'
import check from '@/assets/images/login/check.png'
import email from '@/assets/images/login/email.svg'
import lock from '@/assets/images/login/lock.svg'
import type { LoginForm } from './types';
import { useI18n } from '@/store/locale'
/**
* Input field styling
*/
const inputClassName = "login-input rb:rounded-[8px]! rb:p-[12px]! rb:h-[44px]! rb:bg-transparent! rb:text-[#FFFFFF]! [&_input]:rb:text-[#FFFFFF]! [&_input]:rb:caret-[#FFFFFF]!"
const inputClassName = "rb:rounded-[8px]! rb:p-[12px]! rb:h-[44px]!"
/**
* Login page component
*/const LoginPage: React.FC = () => {
const { t } = useTranslation();
const { clearUserInfo, updateLoginInfo, getUserInfo } = useUser();
const { language } = useI18n()
const [loading, setLoading] = useState(false);
const [form] = Form.useForm<LoginForm>();
const emailVal = Form.useWatch('email', form);
const passwordVal = Form.useWatch('password', form);
const canLogin = !!(emailVal && passwordVal);
const { message } = App.useApp();
useEffect(() => {
@@ -49,7 +43,6 @@ const inputClassName = "login-input rb:rounded-[8px]! rb:p-[12px]! rb:h-[44px]!
/** Handle login form submission */
const handleLogin: FormProps<LoginForm>['onFinish'] = async (values) => {
if (!canLogin) return;
if (!values.email) {
message.warning(t('login.emailPlaceholder'));
return;
@@ -71,45 +64,42 @@ const inputClassName = "login-input rb:rounded-[8px]! rb:p-[12px]! rb:h-[44px]!
return (
<div className="rb:min-h-screen rb:flex rb:h-screen rb:bg-[#0A0A0A] rb:text-[#FFFFFF]">
<div className="rb:min-h-screen rb:flex rb:h-screen">
<div className="rb:relative rb:w-1/2 rb:h-screen rb:overflow-hidden">
<video src={loginBg} loop autoPlay playsInline muted className="rb:w-full rb:h-full rb:object-cover"></video>
<div className="rb:absolute rb:top-10 rb:left-12">
<div className={clsx("rb:h-8.25 rb:bg-cover", {
"rb:w-89 rb:bg-[url('@/assets/images/login/title_en.png')]": language !== 'zh',
"rb:w-42 rb:bg-[url('@/assets/images/login/title_zh.png')]": language === 'zh'
})}></div>
<div className="rb:text-[18px] rb:text-[rgba(255,255,255,0.7)] rb:leading-6.25 rb:font-regular rb:mt-3">{t('login.subTitle')}</div>
<img src={loginBg} alt="loginBg" className="rb:w-full rb:h-full rb:object-cover rb:absolute rb:top-1/2 rb:-translate-y-1/2 rb:left-0" />
<div className="rb:absolute rb:top-14 rb:left-16">
<div className="rb:text-[28px] rb:leading-8.25 rb:font-bold rb:font-[AlimamaShuHeiTi,AlimamaShuHeiTi] rb:mb-4">{t('login.title')}</div>
<div className="rb:text-[18px] rb:leading-6.25 rb:font-regular">{t('login.subTitle')}</div>
</div>
<div className="rb:absolute rb:bottom-14 rb:left-12 rb:right-12 rb:grid rb:grid-cols-2 rb:gap-x-30 rb:gap-y-10.75">
{['intelligentMemory', 'instantRecall', 'knowledgeAssociation'].map((key, index) => (
<div key={key} className={`rb:flex${index === 0 ? ' rb:col-span-2' : ''}`}>
<div className="rb:absolute rb:bottom-20.25 rb:left-16 rb:grid rb:grid-cols-2 rb:gap-x-30 rb:gap-y-10.75">
{['intelligentMemory', 'instantRecall', 'knowledgeAssociation'].map(key => (
<div key={key} className="rb:flex">
<img src={check} className="rb:w-4 rb:h-4 rb:mr-2 rb:mt-0.75" />
<div className="rb:text-[16px] rb:leading-5.5">
<div className="rb:font-medium">{t(`login.${key}`)}</div>
<div className="rb:text-[14px] rb:text-[rgba(255,255,255,0.7)] rb:leading-5 rb:font-regular! rb:mt-2">{t(`login.${key}Desc`)}</div>
<div className="rb:text-[#5B6167] rb:text-[14px] rb:leading-5 rb:font-regular! rb:mt-2">{t(`login.${key}Desc`)}</div>
</div>
</div>
))}
</div>
</div>
<div className="rb:flex rb:items-center rb:justify-center rb:flex-[1_1_auto]">
<div className="rb:w-110 rb:mx-auto">
<div className="rb:text-center rb:text-[24px] rb:font-[MiSans-Bold] rb:font-bold rb:leading-8 rb:mb-12">{t('login.welcome')}</div>
<div className="rb:bg-[#FFFFFF] rb:flex rb:items-center rb:justify-center rb:flex-[1_1_auto]">
<div className="rb:w-100 rb:mx-auto">
<div className="rb:text-center rb:text-[28px] rb:font-semibold rb:leading-8 rb:mb-12">{t('login.welcome')}</div>
<Form
form={form}
onFinish={handleLogin}
>
<Form.Item name="email" className="rb:mb-6!">
<Form.Item name="email" className="rb:mb-5!">
<Input
prefix={<img src={email} className="rb:w-5 rb:h-5 rb:mr-2" />}
placeholder={t('login.emailPlaceholder')}
className={inputClassName}
/>
</Form.Item>
<Form.Item name="password" className="rb:mb-0!">
<Form.Item name="password">
<Input.Password
prefix={<img src={lock} className="rb:w-5 rb:h-5 rb:mr-2" />}
placeholder={t('login.passwordPlaceholder')}
@@ -121,11 +111,7 @@ const inputClassName = "login-input rb:rounded-[8px]! rb:p-[12px]! rb:h-[44px]!
block
loading={loading}
htmlType="submit"
disabled={!canLogin}
className={clsx("rb:h-11.5! rb:rounded-lg! rb:mt-12", {
'rb:hover:bg-[#2d6ef1]! rb:bg-[#155EEF]! rb:border-[#155EEF]!': canLogin,
'rb:bg-[#171719]! rb:border-[#171719]!': !canLogin
})}
className="rb:h-10! rb:rounded-lg! rb:mt-4"
>
{t('login.loginIn')}
</Button>

View File

@@ -361,7 +361,7 @@ const Market: React.FC<{ getStatusTag?: (status: string) => ReactNode }> = () =>
)}
</Flex>
<div>
<div className="rb:font-[MiSans-Bold] rb:font-bold rb:text-[16px] rb:leading-5.5">{source.name}</div>
<div className="rb:font-[MiSans Bold] rb:font-bold rb:text-[16px] rb:leading-5.5">{source.name}</div>
<div className="rb:text-[#5B6167] rb:text-[12px] rb:leading-4.5">{t('tool.availableMcp')} ({mcpTotal})</div>
</div>
</Flex>