A runner is the AI engine that processes your prompts. RemoteAgent.CHAT acts as the relay — the runner is what actually reads your code, reasons about it, and produces output. You choose a runner once during remoteagent init, and you can change it at any time by re-running init.
Comparison table
| Runner | Type | Status | Requires | Best for |
|---|---|---|---|---|
| claude-code | CLI binary | Stable | claude CLI + auth (OAuth token, claude login, or ANTHROPIC_API_KEY) | Default choice — full Claude capability, no API key needed with a subscription |
| claude-sdk | In-process SDK | Stable | Anthropic API key | Best latency with Anthropic models |
| opencode | CLI binary | Stable | Nothing (or optional API key) | No API key needed, 75+ providers |
| codex | CLI binary | Stable | ChatGPT Plus/Pro/Business/Edu/Enterprise or CODEX_API_KEY | OpenAI users, GPT-5 models |
| gemini | CLI binary | Beta | Google account (free tier available) | No-cost option for Gemini models |
| aider | CLI binary | Beta | Python, pip, model API key | Model selection + architect mode via wizard |
| openclaw | HTTP gateway | Stable | openclaw daemon running (configure with openclaw's own wizard first) | Self-hosted AI gateway |
| custom | CLI binary | Stable | Your binary on PATH | Any AI tool with a stdin/stdout interface |
How to select a runner
Run remoteagent init and the wizard will show an interactive list — use the arrow keys to pick a runner, then press Enter.
To skip the interactive list, pass the runner name directly with --runner:
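For example, assuming --runner accepts one of the runner names from the table above (a sketch, not verified against the CLI's help output):

```shell
# Interactive: the wizard lists all available runners
remoteagent init

# Non-interactive: select a runner up front by name
remoteagent init --runner claude-code
```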
Switching runners
To change the runner for an existing agent, run remoteagent init again in the same project directory. The wizard will detect your existing configuration and ask what you want to do — choose “Update runner — keep existing pairing” to change only the runner without re-pairing.
After the wizard completes, start the agent:
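A minimal sketch, assuming the CLI exposes a start subcommand (check remoteagent --help for the exact command):

```shell
# Re-run the wizard, keep the pairing, then bring the agent back up
remoteagent init
remoteagent start
```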
Runner architecture
All runners implement the same interface internally. When the agent receives a command, it calls the runner with the prompt and streams chunks back as they are produced. The runner is responsible for:
- Receiving the prompt string
- Invoking the AI model (in-process, via subprocess, or via HTTP)
- Yielding output chunks as they arrive
- Signaling completion or error
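As a sketch of that contract — with hypothetical names, since the real interface is internal to remoteagent — a runner is anything that yields output chunks for a prompt and raises on failure:

```python
from typing import Iterator, Protocol


class Runner(Protocol):
    """Hypothetical model of the shared runner contract."""

    def run(self, prompt: str) -> Iterator[str]:
        """Yield output chunks as they arrive; raise to signal an error."""
        ...


class EchoRunner:
    """Toy stand-in: a real runner would invoke a model
    in-process, via subprocess, or over HTTP."""

    def run(self, prompt: str) -> Iterator[str]:
        for word in prompt.split():
            yield word + " "  # stream one chunk per word


# The agent consumes chunks as they are produced:
chunks = list(EchoRunner().run("explain this function"))
```

This shape is why CLI binaries, in-process SDKs, and HTTP gateways can all plug in interchangeably: the agent only ever sees a stream of chunks followed by completion or an error.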
Runners marked Beta are functional but may have edge cases in output
parsing or error handling. If you encounter issues, please open an issue on
the GitHub repository.