Refs: AC-AISVC-101, AC-AISVC-102, AC-AISVC-103, AC-AISVC-104, AC-AISVC-105, AC-AISVC-106, AC-AISVC-107
Refs: AC-ASA-59, AC-ASA-60, AC-ASA-61, AC-ASA-62, AC-ASA-63, AC-ASA-64
Backend changes:
- New: ai-service/app/services/flow/tester.py (ScriptFlowTester)
- New: ai-service/app/services/guardrail/tester.py (GuardrailTester)
- New: ai-service/app/services/monitoring/flow_monitor.py (FlowMonitor)
- New: ai-service/app/services/monitoring/guardrail_monitor.py (GuardrailMonitor)
- Modified: ai-service/app/api/admin/script_flows.py (add POST /{flowId}/simulate)
- Modified: ai-service/app/api/admin/guardrails.py (add POST /test)
- Modified: ai-service/app/api/admin/monitoring.py (add flow/guardrail stats endpoints)
Frontend changes:
- New: SimulateDialog.vue (flow simulation dialog)
- New: TestDialog.vue (guardrail test dialog)
- New: ScriptFlows.vue (flow monitoring page)
- New: Guardrails.vue (guardrail monitoring page)
- Extended: API services (monitoring.ts, script-flow.ts, guardrail.ts)
- Updated: Router with new monitoring routes
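As a rough sketch of the kind of per-flow statistics the new monitoring endpoints might aggregate (all names here are illustrative — the real `FlowMonitor` in `app/services/monitoring/flow_monitor.py` may be structured quite differently), a minimal in-memory counter could look like:

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class FlowStats:
    """Hypothetical in-memory stats for flow simulations.

    Illustrative only; not the actual FlowMonitor API.
    """
    outcomes: Counter = field(default_factory=Counter)

    def record(self, flow_id: str, passed: bool) -> None:
        # Key by (flow id, outcome) so pass/fail counts stay separate.
        self.outcomes[(flow_id, passed)] += 1

    def summary(self, flow_id: str) -> dict:
        passed = self.outcomes[(flow_id, True)]
        failed = self.outcomes[(flow_id, False)]
        total = passed + failed
        return {
            "flowId": flow_id,
            "total": total,
            "passRate": passed / total if total else 0.0,
        }


stats = FlowStats()
stats.record("flow-1", True)
stats.record("flow-1", False)
print(stats.summary("flow-1"))  # total 2, passRate 0.5
```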
Project layout:

- `app/`
- `scripts/`
- `tests/`
- `.dockerignore`
- `Dockerfile`
- `README.md`
- `pyproject.toml`
# AI Service

Python AI service for intelligent chat with RAG support.
## Features

- Multi-tenant isolation via the `X-Tenant-Id` header
- SSE streaming via `Accept: text/event-stream`
- RAG-powered responses with confidence scoring
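Because `POST /ai/chat` streams over SSE when the client sends `Accept: text/event-stream`, a client has to split the response body into `data:` payloads. A minimal, transport-agnostic parser for that wire format (a client-side sketch, not the service's own code):

```python
from typing import Iterable, Iterator


def iter_sse_data(lines: Iterable[str]) -> Iterator[str]:
    """Yield the payload of each `data:` line in an SSE stream.

    Blank lines delimit events; other fields (event:, id:, comments)
    are ignored. Client-side sketch only.
    """
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            yield line[len("data:"):].lstrip()


chunks = list(iter_sse_data([
    "event: message\n",
    "data: Hello\n",
    "\n",
    "data: world\n",
    "\n",
]))
print(chunks)  # ['Hello', 'world']
```

In practice the `lines` iterable would come from the streaming HTTP response of `POST /ai/chat`.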
## Prerequisites

- Python 3.10+
- PostgreSQL 12+
- Qdrant vector database
## Installation

```bash
pip install -e ".[dev]"
```
## Database Initialization

### Option 1: Python script (recommended)

```bash
# Create the database and tables
python scripts/init_db.py --create-db

# Or create tables only (the database must already exist)
python scripts/init_db.py
```

### Option 2: SQL script

```bash
# Connect to PostgreSQL and run the script
psql -U postgres -f scripts/init_db.sql
```
## Configuration

Create a `.env` file in the project root:

```env
AI_SERVICE_DATABASE_URL=postgresql+asyncpg://postgres:password@localhost:5432/ai_service
AI_SERVICE_QDRANT_URL=http://localhost:6333
AI_SERVICE_LLM_API_KEY=your-api-key
AI_SERVICE_LLM_BASE_URL=https://api.openai.com/v1
AI_SERVICE_LLM_MODEL=gpt-4o-mini
AI_SERVICE_DEBUG=true
```
## Running

```bash
uvicorn app.main:app --host 0.0.0.0 --port 8000
```
## API Endpoints

### Chat API

- `POST /ai/chat` - Generate an AI reply (supports SSE streaming)
- `GET /ai/health` - Health check

### Admin API

- `GET /admin/kb/documents` - List documents
- `POST /admin/kb/documents` - Upload a document
- `GET /admin/kb/index/jobs/{jobId}` - Get indexing job status
- `DELETE /admin/kb/documents/{docId}` - Delete a document
- `POST /admin/rag/experiments/run` - Run a RAG experiment
- `GET /admin/sessions` - List chat sessions
- `GET /admin/sessions/{sessionId}` - Get session details
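After uploading a document, a client would typically poll `GET /admin/kb/index/jobs/{jobId}` until indexing finishes. A transport-agnostic sketch of that loop — the fetcher is injected, and the terminal `status` values assumed here (`completed`/`failed`) are illustrative, not confirmed by the API:

```python
import time
from typing import Callable


def wait_for_index_job(
    fetch_job: Callable[[str], dict],
    job_id: str,
    poll_interval: float = 1.0,
    timeout: float = 60.0,
) -> dict:
    """Poll until the indexing job reaches a terminal status.

    `fetch_job` should perform GET /admin/kb/index/jobs/{jobId} and
    return the decoded JSON body; a `status` field is assumed.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_job(job_id)
        if job.get("status") in {"completed", "failed"}:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"indexing job {job_id} did not finish in {timeout}s")
        time.sleep(poll_interval)


# Example with a stubbed fetcher that completes on the second poll:
responses = iter([{"status": "running"}, {"status": "completed"}])
result = wait_for_index_job(lambda _id: next(responses), "job-123", poll_interval=0.0)
print(result)  # {'status': 'completed'}
```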