Documentation Index

Fetch the complete documentation index at: https://docs.remoteagent.chat/llms.txt

Use this file to discover all available pages before exploring further.

A runner is the AI engine that processes your prompts. RemoteAgent.CHAT acts as the relay — the runner is what actually reads your code, reasons about it, and produces output. You choose a runner once during remoteagent init, and you can change it at any time by re-running init.

Comparison table

| Runner | Type | Status | Requires | Best for |
|---|---|---|---|---|
| claude-code | CLI binary | Stable | claude CLI + auth (OAuth token, claude login, or ANTHROPIC_API_KEY) | Default choice — full Claude capability, no API key needed with a subscription |
| claude-sdk | In-process SDK | Stable | Anthropic API key | Best latency with Anthropic models |
| opencode | CLI binary | Stable | Nothing (or optional API key) | No API key needed, 75+ providers |
| codex | CLI binary | Stable | ChatGPT Plus/Pro/Business/Edu/Enterprise or CODEX_API_KEY | OpenAI users, GPT-5 models |
| gemini | CLI binary | Beta | Google account (free tier available) | No-cost option for Gemini models |
| aider | CLI binary | Beta | Python, pip, model API key | Model selection + architect mode via wizard |
| openclaw | HTTP gateway | Stable | openclaw daemon running (configure with openclaw’s own wizard first) | Self-hosted AI gateway |
| custom | CLI binary | Stable | Your binary on PATH | Any AI tool with stdin/stdout interface |
If you have ANTHROPIC_API_KEY set in your shell environment, the claude binary will use it regardless of any other authentication — it overrides both the OAuth token and claude login, even with an active Pro/Max subscription. Unset it if you want to authenticate via subscription:

```shell
unset ANTHROPIC_API_KEY
```

See claude-code authentication for the full details.
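Before running remoteagent init, you can check which authentication path the claude binary will take. This is a plain POSIX environment check, nothing RemoteAgent-specific:

```shell
# If ANTHROPIC_API_KEY is set, the claude binary bills against the API key;
# otherwise it falls back to OAuth / `claude login` subscription auth.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  echo "ANTHROPIC_API_KEY is set: claude will use API-key auth"
else
  echo "ANTHROPIC_API_KEY is not set: claude will use subscription auth"
fi
```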

How to select a runner

Run remoteagent init and the wizard will show an interactive list — use the arrow keys to pick a runner, then press Enter:

```text
AI runner:
❯ claude-code  — claude CLI binary
  claude-sdk   — Claude Code SDK
  gemini       — Gemini CLI binary
  opencode     — OpenCode AI (no API key required)
  codex        — Codex CLI by OpenAI
  aider        — Aider CLI
  openclaw     — OpenClaw (local gateway, port 18789) [beta]
  custom       — Custom binary
```
If you already know which runner you want, you can skip the prompt by passing --runner:

```shell
remoteagent init --runner opencode
remoteagent init --runner codex
remoteagent init --runner custom --runner-bin /path/to/binary
```

Switching runners

To change the runner for an existing agent, run remoteagent init again in the same project directory. The wizard will detect your existing configuration and ask what you want to do — choose “Update runner — keep existing pairing” to change only the runner without re-pairing. After the wizard completes, start the agent:
```shell
remoteagent start
```

Runner architecture

All runners implement the same interface internally. When the agent receives a command, it calls the runner with the prompt and streams chunks back as they are produced. The runner is responsible for:
  1. Receiving the prompt string
  2. Invoking the AI model (in-process, via subprocess, or via HTTP)
  3. Yielding output chunks as they arrive
  4. Signaling completion or error
This uniform interface means the Telegram experience is identical regardless of which runner you use — output always streams in real time.
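The four responsibilities above can be sketched as a shell function that satisfies the stdin/stdout contract a custom runner needs. This is a hypothetical shape under stated assumptions (prompt arrives on stdin, chunks go to stdout, a non-zero exit signals an error); the real protocol may carry more detail, so treat it as illustration, not a spec:

```shell
# Sketch of a custom runner's contract (hypothetical, not the official spec).
run_prompt() {
  prompt=$(cat)   # 1. receive the prompt string on stdin
  # 4. error path: signal failure on stderr and via the return code
  [ -n "$prompt" ] || { echo "empty prompt" >&2; return 1; }
  # 2./3. invoke the model and stream chunks as they arrive; echoing the
  # prompt back line by line stands in for a real model here.
  printf '%s\n' "$prompt" | while IFS= read -r line; do
    printf 'chunk: %s\n' "$line"
  done
}
```

A real custom runner would replace the echo loop with a call to your AI tool, forwarding its output unbuffered so chunks reach Telegram as they are produced.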
Runners marked Beta are functional but may have edge cases in output parsing or error handling. If you encounter issues, please open an issue on the GitHub repository.