---
sidebar_position: 14
title: "API Server"
description: "Expose hermes-agent as an OpenAI-compatible API for any frontend"
lang: ru
---
# API Server

The API server exposes your Hermes agent as an OpenAI-compatible HTTP endpoint. Any frontend that speaks the OpenAI format (Open WebUI, LobeChat, LibreChat, NextChat, ChatBox, and hundreds of others) can connect to hermes-agent and use it as a backend.

Your agent handles requests with its full toolset (terminal, file operations, web search, memory, skills) and returns the final answer. When streaming, tool progress indicators are emitted inline, so frontends can show what the agent is doing.
## Quick Start

### 1. Enable the API server

Add to `~/.hermes/.env`:

```bash
API_SERVER_ENABLED=true
API_SERVER_KEY=change-me-local-dev
# Optional: only if a browser must call Hermes directly
# API_SERVER_CORS_ORIGINS=http://localhost:3000
```
### 2. Start the gateway

```bash
hermes gateway
```

You should see:

```
[API Server] API server listening on http://127.0.0.1:8642
```
### 3. Connect a frontend

Point any OpenAI-compatible client at `http://localhost:8642/v1`:

```bash
# Test with curl
curl http://localhost:8642/v1/chat/completions \
  -H "Authorization: Bearer change-me-local-dev" \
  -H "Content-Type: application/json" \
  -d '{"model": "hermes-agent", "messages": [{"role": "user", "content": "Hello!"}]}'
```
Or connect Open WebUI, LobeChat, or any other frontend — see the Open WebUI integration guide for step-by-step instructions.
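The same request can be issued from Python with nothing but the standard library. A minimal sketch, assuming the default port (8642) and the example key from step 1; the helper name is ours:

```python
import json
import urllib.request

API_URL = "http://localhost:8642/v1/chat/completions"  # default port from step 2
API_KEY = "change-me-local-dev"                        # your API_SERVER_KEY value

def build_chat_request(messages: list[dict], stream: bool = False) -> urllib.request.Request:
    """Build an OpenAI-style chat.completions request for hermes-agent."""
    body = json.dumps({"model": "hermes-agent", "messages": messages, "stream": stream})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )

req = build_chat_request([{"role": "user", "content": "Hello!"}])
# urllib.request.urlopen(req) would send it once the gateway is running.
```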
## Endpoints

### `POST /v1/chat/completions`
Standard OpenAI Chat Completions format. Stateless — the full conversation is included in each request via the messages array.
Request:

```json
{
  "model": "hermes-agent",
  "messages": [
    {"role": "system", "content": "You are a Python expert."},
    {"role": "user", "content": "Write a fibonacci function"}
  ],
  "stream": false
}
```
Response:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "hermes-agent",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Here's a fibonacci function..."},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 50, "completion_tokens": 200, "total_tokens": 250}
}
```
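For non-streaming calls the assistant text always lives at the same path in the response body. A small helper (the function name is ours, not part of the API) to pull it out:

```python
def extract_reply(completion: dict) -> str:
    """Return the assistant text from a chat.completion response body."""
    return completion["choices"][0]["message"]["content"]

# Trimmed version of the response shown above:
sample = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Here's a fibonacci function..."},
        "finish_reason": "stop",
    }],
}
```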
Inline image input: user messages may send `content` as an array of `text` and `image_url` parts. Both remote `http(s)` URLs and `data:image/...` URLs are supported:
```json
{
  "model": "hermes-agent",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png", "detail": "high"}}
      ]
    }
  ]
}
```
Uploaded files (`file` / `input_file` / `file_id`) and non-image `data:` URLs return `400 unsupported_content_type`.
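Local images can be sent through the `data:` URL form described above. A sketch that wraps raw bytes as an `image_url` part (the helper name is ours, not part of the API):

```python
import base64

def image_part(image_bytes: bytes, mime: str = "image/png", detail: str = "auto") -> dict:
    """Encode raw image bytes as an image_url content part with a data: URL."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:{mime};base64,{b64}", "detail": detail}}

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        image_part(b"\x89PNG\r\n\x1a\n"),  # use real file bytes in practice
    ],
}
```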
Streaming (`"stream": true`): returns Server-Sent Events (SSE) with token-by-token response chunks. For Chat Completions, the stream uses standard `chat.completion.chunk` events plus Hermes' custom `hermes.tool.progress` event for tool-start UX. For Responses, the stream uses OpenAI Responses event types such as `response.created`, `response.output_text.delta`, `response.output_item.added`, `response.output_item.done`, and `response.completed`.
Tool progress in streams:
- Chat Completions: Hermes emits event: hermes.tool.progress for tool-start visibility without polluting persisted assistant text.
- Responses: Hermes emits spec-native function_call and function_call_output output items during the SSE stream, so clients can render structured tool UI in real time.
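A client consuming the Chat Completions stream therefore has to separate plain chunk frames from the custom progress event. A minimal SSE frame parser, assuming standard `event:`/`data:` framing (the sample payloads are illustrative):

```python
import json

def parse_sse(stream_text: str) -> list[tuple[str, object]]:
    """Split an SSE stream into (event_name, payload) pairs.

    Frames without an explicit `event:` line default to "message",
    which is how plain chat.completion.chunk frames arrive.
    """
    events = []
    name, data_lines = "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:  # blank line terminates a frame
            payload = "\n".join(data_lines)
            events.append((name, payload if payload == "[DONE]" else json.loads(payload)))
            name, data_lines = "message", []
    return events

sample = (
    'event: hermes.tool.progress\n'
    'data: {"tool": "terminal", "status": "started"}\n'
    '\n'
    'data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": "Hi"}}]}\n'
    '\n'
    'data: [DONE]\n'
    '\n'
)
```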
### `POST /v1/responses`
OpenAI Responses API format. Supports server-side conversation state via previous_response_id — the server stores full conversation history (including tool calls and results) so multi-turn context is preserved without the client managing it.
Request:

```json
{
  "model": "hermes-agent",
  "input": "What files are in my project?",
  "instructions": "You are a helpful coding assistant.",
  "store": true
}
```
Response:

```json
{
  "id": "resp_abc123",
  "object": "response",
  "status": "completed",
  "model": "hermes-agent",
  "output": [
    {"type": "function_call", "name": "terminal", "arguments": "{\"command\": \"ls\"}", "call_id": "call_1"},
    {"type": "function_call_output", "call_id": "call_1", "output": "README.md src/ tests/"},
    {"type": "message", "role": "assistant", "content": [{"type": "output_text", "text": "Your project has..."}]}
  ],
  "usage": {"input_tokens": 50, "output_tokens": 200, "total_tokens": 250}
}
```
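Because the `output` array interleaves tool activity with the final message, clients usually walk it once to split the two. A sketch (the function name is ours):

```python
def summarize_output(response: dict) -> tuple[list[str], str]:
    """Collect tool names and the final assistant text from a Responses body."""
    tool_calls, text_parts = [], []
    for item in response["output"]:
        if item["type"] == "function_call":
            tool_calls.append(item["name"])
        elif item["type"] == "message":
            text_parts += [p["text"] for p in item["content"] if p["type"] == "output_text"]
    return tool_calls, "".join(text_parts)

# Trimmed version of the response shown above:
sample = {"output": [
    {"type": "function_call", "name": "terminal",
     "arguments": "{\"command\": \"ls\"}", "call_id": "call_1"},
    {"type": "function_call_output", "call_id": "call_1", "output": "README.md src/ tests/"},
    {"type": "message", "role": "assistant",
     "content": [{"type": "output_text", "text": "Your project has..."}]},
]}
```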
Inline image input: `input[].content` may be an array of `input_text` and `input_image` parts. Both remote URLs and `data:image/...` URLs are supported:
```json
{
  "model": "hermes-agent",
  "input": [
    {
      "role": "user",
      "content": [
        {"type": "input_text", "text": "Describe this screenshot."},
        {"type": "input_image", "image_url": "data:image/png;base64,iVBORw0K..."}
      ]
    }
  ]
}
```
Uploaded files (`input_file` / `file_id`) and non-image `data:` URLs return `400 unsupported_content_type`.
#### Multi-turn with `previous_response_id`
Chain responses to maintain full context (including tool calls) across turns:
```json
{
  "input": "Now show me the README",
  "previous_response_id": "resp_abc123"
}
```
The server reconstructs the full conversation from the stored response chain — all previous tool calls and results are preserved. Chained requests also share the same session, so multi-turn conversations appear as a single entry in the dashboard and session history.
#### Named conversations

Use the `conversation` parameter instead of tracking response IDs:

```json
{"input": "Hello", "conversation": "my-project"}
{"input": "What's in src/?", "conversation": "my-project"}
{"input": "Run the tests", "conversation": "my-project"}
```
The server automatically chains to the latest response in that conversation, similar to naming gateway sessions with the `/title` command.
### `GET /v1/responses/{id}`
Retrieve a previously stored response by ID.
### `DELETE /v1/responses/{id}`
Delete a stored response.
### `GET /v1/models`
Lists the agent as an available model. The advertised model name defaults to the profile name (or hermes-agent for the default profile). Required by most frontends for model discovery.
### `GET /v1/capabilities`
Returns a machine-readable description of the API server's stable surface for external UIs, orchestrators, and plugin bridges.
```json
{
  "object": "hermes.api_server.capabilities",
  "platform": "hermes-agent",
  "model": "hermes-agent",
  "auth": {"type": "bearer", "required": true},
  "features": {
    "chat_completions": true,
    "responses_api": true,
    "run_submission": true,
    "run_status": true,
    "run_events_sse": true,
    "run_stop": true
  }
}
```
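A consumer can gate optional UI on these flags instead of probing endpoints. A sketch of such a feature check (the required-feature list below is just an example):

```python
def missing_features(capabilities: dict, required: tuple[str, ...]) -> list[str]:
    """Return required feature flags the server does not advertise as true."""
    features = capabilities.get("features", {})
    return [name for name in required if not features.get(name, False)]

# Example capabilities payload for a server without run cancellation:
caps = {"features": {"chat_completions": True, "responses_api": True,
                     "run_events_sse": True, "run_stop": False}}
```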
Use this endpoint when integrating dashboards, browser UIs, or control planes so they can discover whether the running Hermes version supports runs, streaming, cancellation, and session continuity without depending on private Python internals.
### `GET /health`

Health check. Returns `{"status": "ok"}`. Also available at `GET /v1/health` for OpenAI-compatible clients that expect the `/v1/` prefix.
### `GET /health/detailed`
Extended health check that also reports active sessions, running agents, and resource usage. Useful for monitoring/observability tooling.
## Runs API (streaming-friendly alternative)

In addition to `/v1/chat/completions` and `/v1/responses`, the server exposes a runs API for long-form sessions where the client wants to subscribe to progress events instead of managing a streaming response itself.
### `POST /v1/runs`
Create a new agent run. Returns a run_id that can be used to subscribe to progress events.
```json
{
  "run_id": "run_abc123",
  "status": "started"
}
```
Runs accept a simple `input` string and optional `session_id`, `instructions`, `conversation_history`, or `previous_response_id`. When `session_id` is provided, Hermes surfaces it in the run status so external UIs can correlate runs with their own conversation IDs.
### `GET /v1/runs/{run_id}`
Poll the current run state. This is useful for dashboards that need status without holding an SSE connection open, or for UIs that reconnect after navigation.
```json
{
  "object": "hermes.run",
  "run_id": "run_abc123",
  "status": "completed",
  "session_id": "space-session",
  "model": "hermes-agent",
  "output": "Done.",
  "usage": {"input_tokens": 50, "output_tokens": 200, "total_tokens": 250}
}
```
Statuses are retained briefly after terminal states (completed, failed, or cancelled) for polling and UI reconciliation.
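A dashboard that polls rather than streams only needs the terminal-state set above. A sketch with the HTTP call injected as a callable, so the loop itself stays testable without a live server:

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_run(fetch_status, run_id: str, interval: float = 0.5,
                 max_polls: int = 120) -> dict:
    """Poll run status (via the injected fetch_status callable) until terminal."""
    for _ in range(max_polls):
        run = fetch_status(run_id)
        if run["status"] in TERMINAL:
            return run
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} still not terminal after {max_polls} polls")

# Stub fetcher standing in for a real GET /v1/runs/{run_id} call:
_states = iter([{"status": "queued"}, {"status": "running"},
                {"status": "completed", "output": "Done."}])
result = wait_for_run(lambda _rid: next(_states), "run_abc123", interval=0)
```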
### `GET /v1/runs/{run_id}/events`
Server-Sent Events stream of the run's tool-call progress, token deltas, and lifecycle events. Designed for dashboards and thick clients that want to attach/detach without losing state.
### `POST /v1/runs/{run_id}/stop`

Interrupt a running agent turn. The endpoint returns immediately with `{"status": "stopping"}` while Hermes asks the active agent to stop at the next safe interruption point.
## Jobs API (background scheduled work)
The server exposes a lightweight jobs CRUD surface for managing scheduled / background agent runs from a remote client. All endpoints are gated behind the same bearer auth.
### `GET /api/jobs`

List all scheduled jobs.

### `POST /api/jobs`

Create a new scheduled job. The body accepts the same shape as `hermes cron` — prompt, schedule, skills, provider override, delivery target.

### `GET /api/jobs/{job_id}`

Fetch a single job's definition and last-run state.

### `PATCH /api/jobs/{job_id}`

Update fields on an existing job (prompt, schedule, etc.). Partial updates are merged.

### `DELETE /api/jobs/{job_id}`

Remove a job. Also cancels any in-flight run.

### `POST /api/jobs/{job_id}/pause`

Pause a job without deleting it. Next-scheduled-run timestamps are suspended until resumed.

### `POST /api/jobs/{job_id}/resume`

Resume a previously paused job.

### `POST /api/jobs/{job_id}/run`

Trigger the job to run immediately, out of schedule.
## System Prompt Handling
When a frontend sends a system message (Chat Completions) or instructions field (Responses API), hermes-agent layers it on top of its core system prompt. Your agent keeps all its tools, memory, and skills — the frontend's system prompt adds extra instructions.
This means you can customize behavior per frontend without losing capabilities:

- Open WebUI system prompt: "You are a Python expert. Always include type hints."
- The agent still has terminal, file tools, web search, memory, etc.
## Authentication

Bearer token auth via the `Authorization` header:

```
Authorization: Bearer ***
```
Configure the key via the `API_SERVER_KEY` env var. If you need a browser to call Hermes directly, also set `API_SERVER_CORS_ORIGINS`. When binding to `0.0.0.0`, `API_SERVER_KEY` is required. Also keep `API_SERVER_CORS_ORIGINS` narrow to control browser access.
The default bind address (127.0.0.1) is for local-only use. Browser access is disabled by default; enable it only for explicit trusted origins.