* feat(web): add PageEmpty component
* feat(web): add PageTabs component
* feat(web): add PageEmpty component
* feat(web): add PageTabs component
* feat(prompt): add history tracking for prompt releases
* feat(web): add prompt menu
* refactor: The PageScrollList component supports two generic parameters
* feat(web): BodyWrapper component updates PageLoading
* feat(web): add Ontology menu
* feat(web): memory management add scene
* feat(tasks): add celery task configuration for periodic jobs
- Add ignore_result=True to prevent storing results for periodic tasks
- Set max_retries=0 to skip failed periodic tasks without retry attempts
- Configure acks_late=False for immediate acknowledgment in beat tasks
- Add time_limit and soft_time_limit to regenerate_memory_cache task (3600s/3300s)
- Add time_limit and soft_time_limit to workspace_reflection_task (300s/240s)
- Add time_limit and soft_time_limit to run_forgetting_cycle_task (7200s/7000s)
- Improve task reliability and resource management for scheduled jobs
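Taken together, the bullets above amount to one shared option set plus per-task time limits. A minimal sketch of how they combine (values and task names are from the bullets; the option dicts and the merging helper are illustrative, not the project's actual code):

```python
# Hedged sketch: options each periodic Celery task is assumed to be registered with.
# The soft limit fires first (SoftTimeLimitExceeded); the hard limit kills the worker child.
PERIODIC_DEFAULTS = {
    "ignore_result": True,  # results of periodic jobs are never read back
    "max_retries": 0,       # a failed beat run is skipped, not retried
    "acks_late": False,     # acknowledge on receipt so beat keeps its schedule
}

TIME_LIMITS = {
    "regenerate_memory_cache": (3600, 3300),   # (time_limit, soft_time_limit)
    "workspace_reflection_task": (300, 240),
    "run_forgetting_cycle_task": (7200, 7000),
}


def task_options(name: str) -> dict:
    """Merge the shared periodic defaults with a task's time limits."""
    hard, soft = TIME_LIMITS[name]
    return {**PERIODIC_DEFAULTS, "time_limit": hard, "soft_time_limit": soft}
```

In real code these options would be passed to the `@app.task(...)` decorator; keeping them in one place makes it easy to verify that every soft limit is below its hard limit.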
* feat(sandbox): add Node.js code execution support to sandbox
* Release/v0.2.2 (#260)
* [modify] migration script
* [add] migration script
* fix(web): change form message
* fix(web): make the memoryContent field compatible with both numbers and strings
* feat(web): hide the code node
* fix(model):
1. Create a basic model check for duplicate name and provider.
2. The result showed erroneous models because the provider created API Keys for all matching models.
---------
Co-authored-by: Mark <zhuwenhui5566@163.com>
Co-authored-by: zhaoying <yzhao96@best-inc.com>
Co-authored-by: yingzhao <zhaoyingyz@126.com>
Co-authored-by: Timebomb2018 <18868801967@163.com>
* Feature/ontology class clean (#249)
* [add] Complete ontology engineering feature implementation
* [add] Add ontology feature integration and validation utilities
* [add] Add OWL validator and validation utilities
* [fix] Add missing render_ontology_extraction_prompt function
* [fix] Add dependencies, fix functionality
* [add] migration script
* feat(celery): add dedicated periodic tasks worker and queue (#261)
* fix(web): conflict resolve
* Fix/v022 bug (#263)
* [fix] Fix inconsistent language between explicit and episodic memory.
* [fix] Fix inconsistent language between explicit and episodic memory.
* [add] Add scene_id
* [fix] Fix the code based on the AI review
* Fix/develop memory reflex (#265)
* Add missing historical mappings
* Add missing historical mappings
* Handle reflection backend errors
* [add] migration script
* fix: chat conversation_id add node_start
* feat(web): show code node
* fix(web): restructure the CustomSelect component and fix the interface being called multiple times when the form updates
* feat(web): RadioGroupCard support block mode
* feat(web): create space add icon
* feat(app and model): token consumption statistics
* Add/develop memory (#264)
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add missing historical mappings
* Add long-term memory feature
* Add long-term memory feature
* Add long-term memory feature
* Redundant fields in knowledge base retrieval
* Long-term
* feat(app and model): token consumption statistics of the cluster
* memory_BUG_fix
* fix(web): prompt history remove pageLoading
* fix(prompt): remove hard-coded import of prompt file paths (#279)
* Fix/develop memory bug (#274)
* Add missing historical mappings
* Add missing historical mappings
* fix_timeline_memories
* fix(web): update retrieve_type key
* Fix/develop memory bug (#276)
* Add missing historical mappings
* Add missing historical mappings
* fix_timeline_memories
* fix_timeline_memories
* write_gragp/bug_fix
* write_gragp/bug_fix
* write_gragp/bug_fix
* chore(celery): disable periodic task scheduling
* fix(prompt): remove hard-coded import of prompt file paths
---------
Co-authored-by: lixinyue11 <94037597+lixinyue11@users.noreply.github.com>
Co-authored-by: zhaoying <yzhao96@best-inc.com>
Co-authored-by: yingzhao <zhaoyingyz@126.com>
Co-authored-by: Ke Sun <kesun5@illinois.edu>
* fix(web): remove delete confirm content
* refactor(workflow): relocate template directory into workflow
* feat(memory): add long-term storage task routing and batching
* fix(web): PageScrollList loading update
* fix(web): PageScrollList loading update
* Ontology v1 bug (#291)
* [changes] Add 'id' as the secondary sorting key; 'scene_id' now returns a UUID object
* [fix] Fix the "end_user" return to be sorted by update time.
* [fix] Set the default values of the memory configuration model based on the spatial model.
* [fix] Remove the entity extraction check combination model, read the configuration list, and add the return of scene_id
* [fix] Fix the "end_user" return to be sorted by update time.
* [fix]
* fix(memory): add Redis session validation
- Add macOS fork() safety configuration in celery_app.py to prevent initialization issues
- Add null/False checks for Redis session queries in term_memory_save to handle missing sessions gracefully
- Add null/False checks in memory_long_term_storage to prevent processing empty Redis results
- Add null/False checks in aggregate_judgment before format_parsing to avoid errors on missing data
- Initialize redis_messages variable in window_dialogue for consistency
- Add debug logging when no existing session found in Redis for better troubleshooting
- Add TODO comments for magic numbers (scope=6, time=5) to be extracted as constants
- Improve error handling when Redis returns False or empty results instead of crashing
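The null/False checks described above boil down to one rule: treat any falsy Redis reply as "no session" rather than crashing. A hedged sketch of that normalization (function name and payload shape are assumed, not the project's actual code):

```python
import json
from typing import Any, List


def load_session_messages(redis_result: Any) -> List[dict]:
    """Normalize a Redis reply: None, False, b"", or "" all mean 'no session'."""
    if not redis_result:
        # A missing session is an expected state, not an error.
        return []
    try:
        messages = json.loads(redis_result)
    except (TypeError, json.JSONDecodeError):
        # Unparseable cache content is treated the same as no session.
        return []
    # Guard against a cached value that is not a list of messages.
    return messages if isinstance(messages, list) else []
```

Centralizing the check means `term_memory_save`, `memory_long_term_storage`, and `aggregate_judgment` can all consume the same safe shape instead of repeating `if result is None or result is False` at every call site.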
* fix(web): PageScrollList style update
* fix(workflow): fix argument passing in code execution nodes
* fix(web): prompt add disabled
* fix(web): space icon required
* feat(app): modify the key of the token
* fix(app): fix the key of the app's token
* fix(workflow): switch code input encoding to base64+URL encoding
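Base64 with the URL-safe alphabet lets arbitrary source code travel inside JSON or query-string payloads without escaping issues, since `+` and `/` are replaced by `-` and `_`. A minimal sketch of the round trip (helper names are illustrative):

```python
import base64


def encode_code_input(source: str) -> str:
    """Encode source code as URL-safe base64 for transport in a request payload."""
    return base64.urlsafe_b64encode(source.encode("utf-8")).decode("ascii")


def decode_code_input(blob: str) -> str:
    """Decode the URL-safe base64 payload back into source code."""
    return base64.urlsafe_b64decode(blob.encode("ascii")).decode("utf-8")
```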
* [add] The main project adds multi-API-Key load balancing.
* [changes] Safe attribute access, safe numeric conversion, unified use of local variables
* fix(web): save add session update
* fix(web): language editor support paste
* [changes] Active-status filtering logic, API Key selection strategy
* memory_BUG
* memory_BUG_long_term
* [changes]
* memory_BUG_long_term
* memory_BUG_long_term
* Fix/release memory bug (#306)
* memory_BUG_fix
* memory_BUG
* memory_BUG_long_term
* memory_BUG_long_term
* memory_BUG_long_term
* knowledge_retrieval/bug/fix
* knowledge_retrieval/bug/fix
* knowledge_retrieval/bug/fix
* [fix] 1. The "read_all_config" interface returns "scene_name"; 2. Memory configuration performs a lightweight query of ontology scenes
* fix(web): replace code editor
* [changes] Modify the description of the time for the recent event
* [changes] Modify the code based on the AI review
* feat(web): update memory config ontology api
* fix(web): ui update
* knowledge_retrieval/bug/fix
* knowledge_retrieval/bug/fix
* knowledge_retrieval/bug/fix
* feat(workflow): add token usage statistics for question classifier and parameter extraction
* feat(web): move prompt menu
* Merge multiple independent transactions into a single transaction
* Merge multiple independent transactions into a single transaction
* Merge multiple independent transactions into a single transaction
* Merge multiple independent transactions into a single transaction
* Write Missing None (#321)
* Write Missing None
* Write Missing None
* Write Missing None
* Apply suggestion from @sourcery-ai[bot]
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Write Missing None
---------
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Fix/release memory bug (#324)
* Write Missing None
* Write Missing None
* Write Missing None
* Apply suggestion from @sourcery-ai[bot]
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Write Missing None
* redis update
* redis update
* redis update
* redis update
---------
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Fix/writer memory bug (#326)
* [fix] Fix the bug
* [fix] Fix the bug
* [fix] Correct the direction indication.
* fix(web): markdown table ui update
* Fix/release memory bug (#332)
* Write Missing None
* Write Missing None
* Write Missing None
* Apply suggestion from @sourcery-ai[bot]
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Write Missing None
* redis update
* redis update
* redis update
* redis update
* writer_dup_bug/fix
---------
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Fix/fact summary (#333)
* [fix] Disable the contents related to fact_summary
* [fix] Disable the contents related to fact_summary
* [fix] Modify the code based on the AI review
* Fix/release memory bug (#335)
* Write Missing None
* Write Missing None
* Write Missing None
* Apply suggestion from @sourcery-ai[bot]
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Write Missing None
* redis update
* redis update
* redis update
* redis update
* writer_dup_bug/fix
* writer_graph_bug/fix
* writer_graph_bug/fix
---------
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Revert "feat(web): move prompt menu"
This reverts commit 9e6e8f50f8.
* fix(web): ui update
* fix(web): update text
* fix(web): ui update
* fix(model): change the "vl" model type of dashscope to "chat"
* fix(model): change the "vl" model type of dashscope to "chat"
---------
Co-authored-by: zhaoying <yzhao96@best-inc.com>
Co-authored-by: Eternity <1533512157@qq.com>
Co-authored-by: Mark <zhuwenhui5566@163.com>
Co-authored-by: yingzhao <zhaoyingyz@126.com>
Co-authored-by: Timebomb2018 <18868801967@163.com>
Co-authored-by: 乐力齐 <162269739+lanceyq@users.noreply.github.com>
Co-authored-by: lixinyue11 <94037597+lixinyue11@users.noreply.github.com>
Co-authored-by: lixinyue <2569494688@qq.com>
Co-authored-by: Eternity <61316157+myhMARS@users.noreply.github.com>
Co-authored-by: lanceyq <1982376970@qq.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
532 lines
20 KiB
Python
import os
from typing import Optional
from uuid import UUID

from app.core.error_codes import BizCode
from app.core.logging_config import get_api_logger
from app.core.response_utils import fail, success
from app.db import get_db
from app.dependencies import get_current_user
from app.models.user_model import User
from app.schemas.memory_storage_schema import (
    ConfigKey,
    ConfigParamsCreate,
    ConfigParamsDelete,
    ConfigPilotRun,
    ConfigUpdate,
    ConfigUpdateExtracted,
)
from app.schemas.response_schema import ApiResponse
from app.services.memory_storage_service import (
    DataConfigService,
    MemoryStorageService,
    analytics_hot_memory_tags,
    analytics_recent_activity_stats,
    kb_type_distribution,
    search_all,
    search_chunk,
    search_detials,
    search_dialogue,
    search_edges,
    search_entity,
    search_statement,
)
from fastapi import APIRouter, Depends
from fastapi.responses import StreamingResponse
from sqlalchemy.orm import Session

from app.utils.config_utils import resolve_config_id

# Get API logger
api_logger = get_api_logger()

# Initialize service
memory_storage_service = MemoryStorageService()

router = APIRouter(
    prefix="/memory-storage",
    tags=["Memory Storage"],
)


@router.get("/info", response_model=ApiResponse)
async def get_storage_info(
    storage_id: str,
    current_user: User = Depends(get_current_user)
):
    """
    Example wrapper endpoint - retrieves storage information

    Args:
        storage_id: Storage identifier

    Returns:
        Storage information
    """
    api_logger.info("Storage info requested")
    try:
        result = await memory_storage_service.get_storage_info()
        return success(data=result)
    except Exception as e:
        api_logger.error(f"Storage info retrieval failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "存储信息获取失败", str(e))


# --- DB connection dependency ---
_CONN: Optional[object] = None

# PostgreSQL connection creation and management (using psycopg2).
# This could be moved elsewhere; an equivalent may already exist.
# PostgreSQL database connection
def _make_pgsql_conn() -> Optional[object]:  # Create a PostgreSQL database connection
    host = os.getenv("DB_HOST")
    user = os.getenv("DB_USER")
    password = os.getenv("DB_PASSWORD")
    database = os.getenv("DB_NAME")
    port_str = os.getenv("DB_PORT")
    try:
        import psycopg2  # type: ignore
        port = int(port_str) if port_str else 5432
        conn = psycopg2.connect(
            host=host or "localhost",
            port=port,
            user=user,
            password=password,
            dbname=database,
        )
        # Enable autocommit to avoid explicit transaction management
        conn.autocommit = True
        # Set the session time zone to China Standard Time (Asia/Shanghai)
        # so timestamps can be shown directly in local time
        try:
            cur = conn.cursor()
            cur.execute("SET TIME ZONE 'Asia/Shanghai'")
            cur.close()
        except Exception:
            # A failed time zone setup does not break the connection; ignore it
            pass
        return conn
    except Exception as e:
        try:
            print(f"[PostgreSQL] 连接失败: {e}")
        except Exception:
            pass
        return None


def get_db_conn() -> Optional[object]:  # Get the PostgreSQL database connection
    global _CONN
    if _CONN is None:
        _CONN = _make_pgsql_conn()
    return _CONN


def reset_db_conn() -> bool:  # Reset the PostgreSQL database connection
    """Close and recreate the global DB connection."""
    global _CONN
    try:
        if _CONN:
            try:
                _CONN.close()
            except Exception:
                pass
        _CONN = _make_pgsql_conn()
        return _CONN is not None
    except Exception:
        _CONN = None
        return False


@router.post("/create_config", response_model=ApiResponse)  # Create a config record; other parameters use defaults
def create_config(
    payload: ConfigParamsCreate,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> dict:
    workspace_id = current_user.current_workspace_id
    # Check that the user has selected a workspace
    if workspace_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试创建配置但未选择工作空间")
        return fail(BizCode.INVALID_PARAMETER, "请先切换到一个工作空间", "current_workspace_id is None")

    api_logger.info(f"用户 {current_user.username} 在工作空间 {workspace_id} 请求创建配置: {payload.config_name}")
    try:
        # Inject workspace_id into the payload (kept as a UUID)
        payload.workspace_id = workspace_id
        svc = DataConfigService(db)
        result = svc.create(payload)
        return success(data=result, msg="创建成功")
    except Exception as e:
        api_logger.error(f"Create config failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "创建配置失败", str(e))


@router.delete("/delete_config", response_model=ApiResponse)  # Delete a config record from the database (by config id)
def delete_config(
    config_id: UUID | int,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> dict:
    workspace_id = current_user.current_workspace_id
    config_id = resolve_config_id(config_id, db)
    # Check that the user has selected a workspace
    if workspace_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试删除配置但未选择工作空间")
        return fail(BizCode.INVALID_PARAMETER, "请先切换到一个工作空间", "current_workspace_id is None")

    api_logger.info(f"用户 {current_user.username} 在工作空间 {workspace_id} 请求删除配置: {config_id}")
    try:
        svc = DataConfigService(db)
        result = svc.delete(ConfigParamsDelete(config_id=config_id))
        return success(data=result, msg="删除成功")
    except Exception as e:
        api_logger.error(f"Delete config failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "删除配置失败", str(e))


@router.post("/update_config", response_model=ApiResponse)  # Update the name and desc of a config record
def update_config(
    payload: ConfigUpdate,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> dict:
    workspace_id = current_user.current_workspace_id
    payload.config_id = resolve_config_id(payload.config_id, db)
    # Check that the user has selected a workspace
    if workspace_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试更新配置但未选择工作空间")
        return fail(BizCode.INVALID_PARAMETER, "请先切换到一个工作空间", "current_workspace_id is None")

    # Require at least one field to update
    if payload.config_name is None and payload.config_desc is None and payload.scene_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试更新配置但未提供任何更新字段")
        return fail(BizCode.INVALID_PARAMETER, "请至少提供一个需要更新的字段", "config_name, config_desc, scene_id 均为空")

    api_logger.info(f"用户 {current_user.username} 在工作空间 {workspace_id} 请求更新配置: {payload.config_id}")
    try:
        svc = DataConfigService(db)
        result = svc.update(payload)
        return success(data=result, msg="更新成功")
    except Exception as e:
        api_logger.error(f"Update config failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "更新配置失败", str(e))


@router.post("/update_config_extracted", response_model=ApiResponse)  # Partially update a record; all business fields are optional
def update_config_extracted(
    payload: ConfigUpdateExtracted,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> dict:
    workspace_id = current_user.current_workspace_id
    payload.config_id = resolve_config_id(payload.config_id, db)
    # Check that the user has selected a workspace
    if workspace_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试更新提取配置但未选择工作空间")
        return fail(BizCode.INVALID_PARAMETER, "请先切换到一个工作空间", "current_workspace_id is None")

    api_logger.info(f"用户 {current_user.username} 在工作空间 {workspace_id} 请求更新提取配置: {payload.config_id}")
    try:
        svc = DataConfigService(db)
        result = svc.update_extracted(payload)
        return success(data=result, msg="更新成功")
    except Exception as e:
        api_logger.error(f"Update config extracted failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "更新配置失败", str(e))


# --- Forget config params ---
# The forgetting-engine config endpoints have moved to memory_forget_controller.py.
# Use the new endpoints: /api/memory/forget/read_config and /api/memory/forget/update_config

@router.get("/read_config_extracted", response_model=ApiResponse)  # Read one config via query parameters (fixed path); remove if it turns out to be useless
def read_config_extracted(
    config_id: UUID | int,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> dict:
    workspace_id = current_user.current_workspace_id
    config_id = resolve_config_id(config_id, db)
    # Check that the user has selected a workspace
    if workspace_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试读取提取配置但未选择工作空间")
        return fail(BizCode.INVALID_PARAMETER, "请先切换到一个工作空间", "current_workspace_id is None")

    api_logger.info(f"用户 {current_user.username} 在工作空间 {workspace_id} 请求读取提取配置: {config_id}")
    try:
        svc = DataConfigService(db)
        result = svc.get_extracted(ConfigKey(config_id=config_id))
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Read config extracted failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "查询配置失败", str(e))


@router.get("/read_all_config", response_model=ApiResponse)  # List all configs
def read_all_config(
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> dict:
    workspace_id = current_user.current_workspace_id

    # Check that the user has selected a workspace
    if workspace_id is None:
        api_logger.warning(f"用户 {current_user.username} 尝试查询配置但未选择工作空间")
        return fail(BizCode.INVALID_PARAMETER, "请先切换到一个工作空间", "current_workspace_id is None")

    api_logger.info(f"用户 {current_user.username} 在工作空间 {workspace_id} 请求读取所有配置")
    try:
        svc = DataConfigService(db)
        # Pass workspace_id for filtering (kept as a UUID)
        result = svc.get_all(workspace_id=workspace_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Read all config failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "查询所有配置失败", str(e))


@router.post("/pilot_run", response_model=None)
async def pilot_run(
    payload: ConfigPilotRun,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
) -> StreamingResponse:
    api_logger.info(
        f"Pilot run requested: config_id={payload.config_id}, "
        f"dialogue_text_length={len(payload.dialogue_text)}"
    )
    payload.config_id = resolve_config_id(payload.config_id, db)
    svc = DataConfigService(db)
    return StreamingResponse(
        svc.pilot_run_stream(payload),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
            "X-Accel-Buffering": "no",
        },
    )


# The search and analytics endpoints below are mounted on the same router
# and all respond with ApiResponse.

@router.get("/search/kb_type_distribution", response_model=ApiResponse)
async def get_kb_type_distribution(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"KB type distribution requested for end_user_id: {end_user_id}")
    try:
        result = await kb_type_distribution(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"KB type distribution failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "知识库类型分布查询失败", str(e))


@router.get("/search/dialogue", response_model=ApiResponse)
async def search_dialogues_num(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search dialogue requested for end_user_id: {end_user_id}")
    try:
        result = await search_dialogue(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search dialogue failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "对话查询失败", str(e))


@router.get("/search/chunk", response_model=ApiResponse)
async def search_chunks_num(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search chunk requested for end_user_id: {end_user_id}")
    try:
        result = await search_chunk(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search chunk failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "分块查询失败", str(e))


@router.get("/search/statement", response_model=ApiResponse)
async def search_statements_num(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search statement requested for end_user_id: {end_user_id}")
    try:
        result = await search_statement(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search statement failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "语句查询失败", str(e))


@router.get("/search/entity", response_model=ApiResponse)
async def search_entities_num(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search entity requested for end_user_id: {end_user_id}")
    try:
        result = await search_entity(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search entity failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "实体查询失败", str(e))


@router.get("/search", response_model=ApiResponse)
async def search_all_num(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search all requested for end_user_id: {end_user_id}")
    try:
        result = await search_all(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search all failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "全部查询失败", str(e))


# NOTE: "detials" (sic) matches the imported service function and the public path.
@router.get("/search/detials", response_model=ApiResponse)
async def search_entities_detials(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search details requested for end_user_id: {end_user_id}")
    try:
        result = await search_detials(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search details failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "详情查询失败", str(e))


@router.get("/search/edges", response_model=ApiResponse)
async def search_entity_edges(
    end_user_id: Optional[str] = None,
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info(f"Search edges requested for end_user_id: {end_user_id}")
    try:
        result = await search_edges(end_user_id)
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Search edges failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "边查询失败", str(e))


@router.get("/analytics/hot_memory_tags", response_model=ApiResponse)
async def get_hot_memory_tags_api(
    limit: int = 10,
    db: Session = Depends(get_db),
    current_user: User = Depends(get_current_user),
) -> dict:
    """
    Get hot memory tags (with Redis caching).

    Caching strategy:
    - Cache key: workspace_id + limit
    - TTL: 5 minutes (300 seconds)
    - Cache hit: ~50ms
    - Cache miss: ~600-800ms (depends on LLM speed)
    """
    workspace_id = current_user.current_workspace_id

    # Build the cache key
    cache_key = f"hot_memory_tags:{workspace_id}:{limit}"

    api_logger.info(f"Hot memory tags requested for workspace: {workspace_id}, limit: {limit}")

    try:
        # Try the Redis cache first
        import json

        from app.aioRedis import aio_redis_get, aio_redis_set

        cached_result = await aio_redis_get(cache_key)
        if cached_result:
            api_logger.info(f"Cache hit for key: {cache_key}")
            try:
                data = json.loads(cached_result)
                return success(data=data, msg="查询成功(缓存)")
            except json.JSONDecodeError:
                api_logger.warning("Failed to parse cached data, will refresh")

        # Cache miss: run the query
        api_logger.info(f"Cache miss for key: {cache_key}, executing query")
        result = await analytics_hot_memory_tags(db, current_user, limit)

        # Write to the cache (TTL: 5 minutes)
        # Note: result is a list and must be serialized to a JSON string
        try:
            cache_data = json.dumps(result, ensure_ascii=False)
            await aio_redis_set(cache_key, cache_data, expire=300)
            api_logger.info(f"Cached result for key: {cache_key}")
        except Exception as cache_error:
            # A cache write failure must not break the main flow
            api_logger.warning(f"Failed to cache result: {str(cache_error)}")

        return success(data=result, msg="查询成功")

    except Exception as e:
        api_logger.error(f"Hot memory tags failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "热门标签查询失败", str(e))


@router.delete("/analytics/hot_memory_tags/cache", response_model=ApiResponse)
async def clear_hot_memory_tags_cache(
    current_user: User = Depends(get_current_user),
) -> dict:
    """
    Clear the hot memory tags cache.

    Used for:
    - Manually refreshing data
    - Debugging and testing
    - Making data updates take effect immediately
    """
    workspace_id = current_user.current_workspace_id

    api_logger.info(f"Clear hot memory tags cache requested for workspace: {workspace_id}")

    try:
        from app.aioRedis import aio_redis_delete

        # Clear the cache entries for the common limit values
        cleared_count = 0
        for limit in [5, 10, 15, 20, 30, 50]:
            cache_key = f"hot_memory_tags:{workspace_id}:{limit}"
            result = await aio_redis_delete(cache_key)
            if result:
                cleared_count += 1
                api_logger.info(f"Cleared cache for key: {cache_key}")

        return success(
            data={"cleared_count": cleared_count},
            msg=f"成功清除 {cleared_count} 个缓存"
        )

    except Exception as e:
        api_logger.error(f"Clear cache failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "清除缓存失败", str(e))


@router.get("/analytics/recent_activity_stats", response_model=ApiResponse)
async def get_recent_activity_stats_api(
    current_user: User = Depends(get_current_user),
) -> dict:
    api_logger.info("Recent activity stats requested")
    try:
        result = await analytics_recent_activity_stats()
        return success(data=result, msg="查询成功")
    except Exception as e:
        api_logger.error(f"Recent activity stats failed: {str(e)}")
        return fail(BizCode.INTERNAL_ERROR, "最近活动统计失败", str(e))