Documentation Index

Fetch the complete documentation index at: https://docs.remoteagent.chat/llms.txt

Use this file to discover all available pages before exploring further.

The openclaw runner is in beta and has not been fully tested. You may encounter bugs or unexpected behavior. Use with caution.
The openclaw runner connects to a locally running OpenClaw daemon that exposes an AI gateway on a configurable port (default 18789). OpenClaw manages its own configuration entirely via its own setup wizard — RemoteAgent.CHAT does not configure OpenClaw settings.

Requirements

  • OpenClaw installed and its daemon running before remoteagent init

Setting up OpenClaw first

Before running remoteagent init, you must set up OpenClaw independently:
npm install -g openclaw@latest
openclaw onboard --install-daemon
openclaw onboard --install-daemon walks you through OpenClaw’s own configuration wizard and installs the background daemon. Once the daemon is running, you can initialize RemoteAgent.CHAT; its wizard checks whether the daemon is reachable and shows setup instructions if it is not found.

Setup

Once the OpenClaw daemon is running, initialize the agent:
remoteagent init --runner openclaw
Or choose openclaw when prompted by the interactive wizard. During remoteagent init, the wizard will:
  1. Ask for the gateway port (default 18789)
  2. Check whether the OpenClaw daemon is reachable on that port
  3. If the daemon is not found, display setup instructions and offer to exit or continue anyway
RemoteAgent.CHAT does not configure any OpenClaw settings — all model routing, API keys, and gateway behavior are managed through OpenClaw’s own configuration.
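The reachability check in step 2 amounts to a TCP connection attempt against the configured gateway port. A minimal sketch of such a check (the function name is illustrative — this is not RemoteAgent.CHAT’s actual implementation):

```python
import socket

def daemon_reachable(port: int = 18789, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on localhost:port."""
    try:
        # create_connection raises OSError if nothing is listening
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, starting (or restarting) the OpenClaw daemon and re-running the check is the first thing to try.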

How it works

The runner sends POST requests to http://localhost:{port}/v1/chat/completions using the OpenAI API format. The port is configurable and saved in ~/.remoteagent/agents/{agentId}.json. The prompt is sent as a user message. Response chunks are streamed via server-sent events (SSE) and forwarded to Telegram as they arrive. Example request body sent by the runner:
{
  "model": "default",
  "messages": [
    { "role": "user", "content": "<your prompt here>" }
  ],
  "stream": true
}
The model field defaults to "default" — OpenClaw is responsible for routing that to the appropriate backend model.
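Because the response arrives as OpenAI-style SSE chunks, each `data:` line carries a small JSON delta. A minimal sketch of consuming such a stream (assuming the standard OpenAI chat-completions streaming format; the helper name is illustrative, not part of RemoteAgent.CHAT):

```python
import json

def delta_from_sse_line(line: str):
    """Return the text delta carried by one SSE `data:` line, or None.

    Assumes OpenAI-style streaming, where each chunk looks like
      data: {"choices":[{"delta":{"content":"..."}}]}
    and the stream ends with
      data: [DONE]
    """
    if not line.startswith("data: "):
        return None  # ignore comments and blank keep-alive lines
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

# Example: two content chunks followed by the end-of-stream marker.
stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(d for line in stream if (d := delta_from_sse_line(line)))
```

Forwarding each non-empty delta as it arrives is what lets the runner stream partial responses to Telegram instead of waiting for the full completion.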

Use cases

  • Local model routing — run multiple models and route based on task type
  • Cost control — apply rate limiting or budget caps at the gateway layer
  • Custom middleware — inject system prompts, logging, or content filters before requests reach the model
  • Air-gapped environments — run entirely offline with no external API calls

Pros and cons

Pros:
  • Full control over the AI backend
  • Works with any OpenAI-compatible model
  • Air-gap compatible
  • No API key managed by RemoteAgent.CHAT
  • OpenClaw handles all its own configuration

Cons:
  • Requires running and maintaining the OpenClaw daemon
  • OpenClaw daemon must be running before the agent starts
  • More setup complexity