From 8e6d86dbe3590addf766119d3a6b656a9514720f Mon Sep 17 00:00:00 2001
From: Bruce MacDonald
Date: Fri, 10 Apr 2026 13:13:36 -0700
Subject: [PATCH] docs: add hermes agent integration guide (#15488)

Update cloud and local model recommendations to match current models.go:
add qwen3.5:cloud and glm-5.1:cloud, replace glm-4.7-flash with gemma4
and qwen3.5 as local options.

Add documentation for Hermes Agent by Nous Research, covering
installation, Ollama setup via custom endpoint, messaging configuration,
and recommended models.
---
 docs/docs.json                 |   3 +-
 docs/integrations/hermes.mdx   | 111 +++++++++++++++++++++++++++++++++
 docs/integrations/index.mdx    |   1 +
 docs/integrations/openclaw.mdx |   6 +-
 4 files changed, 118 insertions(+), 3 deletions(-)
 create mode 100644 docs/integrations/hermes.mdx

diff --git a/docs/docs.json b/docs/docs.json
index 921992495..3b2e651ff 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -110,7 +110,8 @@
       "group": "Assistants",
       "expanded": true,
       "pages": [
-        "/integrations/openclaw"
+        "/integrations/openclaw",
+        "/integrations/hermes"
       ]
     },
     {
diff --git a/docs/integrations/hermes.mdx b/docs/integrations/hermes.mdx
new file mode 100644
index 000000000..590f8ec65
--- /dev/null
+++ b/docs/integrations/hermes.mdx
@@ -0,0 +1,111 @@
+---
+title: Hermes Agent
+---
+
+Hermes Agent is a self-improving AI agent built by Nous Research. It features automatic skill creation and cross-session memory, and connects messaging platforms (Telegram, Discord, Slack, WhatsApp, Signal, Email) to models through a unified gateway.
+
+## Quick start
+
+### Pull a model
+
+Before running the setup wizard, make sure you have a model available. Hermes will auto-detect models downloaded through Ollama.
+
+```bash
+ollama pull kimi-k2.5:cloud
+```
+
+See [Recommended models](#recommended-models) for more options.
+
+### Install
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
+```
+
+### Set up
+
+After installation, Hermes launches the setup wizard automatically. Choose **Quick setup**:
+
+```
+How would you like to set up Hermes?
+
+  → Quick setup — provider, model & messaging (recommended)
+    Full setup — configure everything
+```
+
+### Connect to Ollama
+
+1. Select **More providers...**
+2. Select **Custom endpoint (enter URL manually)**
+3. Set the API base URL to the Ollama OpenAI-compatible endpoint:
+
+   ```
+   API base URL [e.g. https://api.example.com/v1]: http://127.0.0.1:11434/v1
+   ```
+
+4. Leave the API key blank (not required for local Ollama):
+
+   ```
+   API key [optional]:
+   ```
+
+5. Hermes auto-detects downloaded models; confirm the one you want:
+
+   ```
+   Verified endpoint via http://127.0.0.1:11434/v1/models (1 model(s) visible)
+   Detected model: kimi-k2.5:cloud
+   Use this model? [Y/n]:
+   ```
+
+6. Leave context length blank to auto-detect:
+
+   ```
+   Context length in tokens [leave blank for auto-detect]:
+   ```
+
+### Connect messaging
+
+Optionally connect a messaging platform during setup:
+
+```
+Connect a messaging platform? (Telegram, Discord, etc.)
+
+  → Set up messaging now (recommended)
+    Skip — set up later with 'hermes setup gateway'
+```
+
+### Launch
+
+```
+Launch hermes chat now? [Y/n]: Y
+```
+
+## Recommended models
+
+**Cloud models**:
+
+- `kimi-k2.5:cloud` — Multimodal reasoning with subagents
+- `qwen3.5:cloud` — Reasoning, coding, and agentic tool use with vision
+- `glm-5.1:cloud` — Reasoning and code generation
+- `minimax-m2.7:cloud` — Fast, efficient coding and real-world productivity
+
+**Local models:**
+
+- `gemma4` — Reasoning and code generation locally (~16 GB VRAM)
+- `qwen3.5` — Reasoning, coding, and visual understanding locally (~11 GB VRAM)
+
+More models at [ollama.com/search](https://ollama.com/search).
+
+## Configure later
+
+Re-run the setup wizard at any time:
+
+```bash
+hermes setup
+```
+
+To configure just messaging:
+
+```bash
+hermes setup gateway
+```
diff --git a/docs/integrations/index.mdx b/docs/integrations/index.mdx
index 5ae2fe670..2703fc0e2 100644
--- a/docs/integrations/index.mdx
+++ b/docs/integrations/index.mdx
@@ -20,6 +20,7 @@ Coding assistants that can read, modify, and execute code in your projects.
 
 AI assistants that help with everyday tasks.
 
 - [OpenClaw](/integrations/openclaw)
+- [Hermes Agent](/integrations/hermes)
 
 ## IDEs & Editors
diff --git a/docs/integrations/openclaw.mdx b/docs/integrations/openclaw.mdx
index 5e24cc3ea..10df4a1c1 100644
--- a/docs/integrations/openclaw.mdx
+++ b/docs/integrations/openclaw.mdx
@@ -59,12 +59,14 @@ If the gateway is already running, it restarts automatically to pick up the new
 
 **Cloud models**:
 
 - `kimi-k2.5:cloud` — Multimodal reasoning with subagents
+- `qwen3.5:cloud` — Reasoning, coding, and agentic tool use with vision
+- `glm-5.1:cloud` — Reasoning and code generation
 - `minimax-m2.7:cloud` — Fast, efficient coding and real-world productivity
-- `glm-5:cloud` — Reasoning and code generation
 
 **Local models:**
 
-- `glm-4.7-flash` — Reasoning and code generation locally (~25 GB VRAM)
+- `gemma4` — Reasoning and code generation locally (~16 GB VRAM)
+- `qwen3.5` — Reasoning, coding, and visual understanding locally (~11 GB VRAM)
 
 More models at [ollama.com/search](https://ollama.com/search?c=cloud).
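The new guide points Hermes at Ollama's OpenAI-compatible API and relies on the `/v1/models` route for model auto-detection. That route can be sanity-checked outside the setup wizard; a minimal sketch, where `BASE` assumes Ollama's default local address from the guide:

```shell
# Probe the same endpoint Hermes verifies during setup.
# BASE is an assumption: Ollama's default local OpenAI-compatible URL.
BASE="http://127.0.0.1:11434/v1"

# -s silences progress output; -f turns HTTP errors into a nonzero exit status.
if curl -sf "$BASE/models"; then
  echo "endpoint OK: $BASE"
else
  echo "no Ollama reachable at $BASE (is the Ollama server running?)"
fi
```

A healthy endpoint should return an OpenAI-style JSON list whose `data` array names the pulled models, consistent with the wizard's "Verified endpoint ... (1 model(s) visible)" message.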