Optimize Orchestrator streaming output with LLM client integration

- Implement `_stream_from_llm()` to wrap LLM chunks as message events
- Implement `_stream_mock_response()` for demo/testing
- Add SSE event format tests (message/final/error)
- Fix `by_alias=True` for final event JSON output
- 79 tests passing
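The commit describes wrapping LLM chunks as SSE `message` events followed by a `final` event. A minimal sketch of that shape is below; the function bodies, field names (`delta`, `content`), and the mock chunk list are assumptions for illustration, not the repository's actual implementation (which also integrates a real LLM client and serializes pydantic models with `by_alias=True`).

```python
import json
from typing import Iterable, Iterator


def sse_event(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: an 'event:' line plus a JSON 'data:' line."""
    return f"event: {event}\ndata: {json.dumps(data, ensure_ascii=False)}\n\n"


def stream_from_llm(chunks: Iterable[str]) -> Iterator[str]:
    """Hypothetical analogue of _stream_from_llm(): wrap each raw LLM text chunk
    as a 'message' event, then emit one 'final' event with the full text."""
    parts = []
    for chunk in chunks:
        parts.append(chunk)
        yield sse_event("message", {"delta": chunk})
    yield sse_event("final", {"content": "".join(parts)})


def stream_mock_response() -> Iterator[str]:
    """Hypothetical analogue of _stream_mock_response(): canned chunks for demo/testing."""
    yield from stream_from_llm(["Hello", ", ", "world"])
```

An error case would follow the same pattern with an `error` event carrying a reason payload instead of the `final` event.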
Repository contents:

- .gitea/workflows
- ai-service
- docs
- java
- scripts
- spec
- .gitignore
- README.md
- agents.md
README.md

ai-robot-core

Capability support for AI middle-platform business.