A custom runner lets you connect RemoteAgent to any command-line tool that follows a simple contract: read a prompt from stdin, write the response to stdout. This covers shell scripts, Python scripts, local model wrappers, internal tools, or any binary you build yourself.
Requirements
- Your binary must be executable and on the machine running the agent.
- It must read the prompt from stdin and write the response to stdout.
- Exit code `0` indicates success; any non-zero exit code is treated as an error.
- Node.js 20 or later.
Setup
Pass the path to your binary with the `--runner-bin` flag. The path is stored in `~/.remoteagent/agents/{agentId}.json` and used each time the agent receives a command.
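For example (the binary path is illustrative, and whether the flag attaches to `start` or to agent registration may differ in your setup):

```shell
remoteagent start --runner-bin ~/bin/my-runner.sh
```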
How it works
When the agent receives a command, the runner:

- Spawns the binary as a child process
- Writes the prompt string to the process’s stdin
- Closes the stdin stream (signals EOF to the binary)
- Reads stdout progressively and publishes chunks to the Redis output channel
- Waits for the process to exit, then publishes the `done` event
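The loop above can be sketched in Python (illustrative only; the actual agent is a Node.js process, and it publishes chunks to a Redis channel rather than calling a local function):

```python
import subprocess

def run_binary(binary_path, prompt, publish):
    """Illustrative runner loop. `publish` stands in for the Redis
    output channel; the return value is the binary's exit code."""
    proc = subprocess.Popen(
        [binary_path],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    proc.stdin.write(prompt.encode())   # write the prompt to the binary's stdin
    proc.stdin.close()                  # close stdin -> signals EOF
    # read stdout progressively and publish each chunk
    for chunk in iter(lambda: proc.stdout.read(4096), b""):
        publish(chunk.decode())
    return proc.wait()                  # non-zero exit code is treated as an error
```

Running it against `cat` as the binary simply echoes the prompt back through the publish callback and returns exit code `0`.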
Example — bash script
A minimal bash script that wraps a local `llm` command:
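A possible version, assuming an `llm` CLI is installed and on `PATH` (the tool name is taken from the text above; any equivalent command works):

```shell
#!/usr/bin/env bash
# Read the entire prompt from stdin, then hand it to the llm CLI.
# llm writes its response to stdout, which the agent streams onward.
set -euo pipefail
prompt="$(cat)"
exec llm "$prompt"   # exec propagates llm's exit code to the agent
```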
Example — Python script
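A Python script that calls the OpenAI API directly also satisfies the contract. A hedged sketch, assuming the v1 `openai` package is installed and `OPENAI_API_KEY` is set in the environment (the model name is illustrative):

```python
#!/usr/bin/env python3
"""Runner sketch: read the prompt from stdin, print the reply to stdout."""
import sys
from openai import OpenAI

prompt = sys.stdin.read()
client = OpenAI()  # picks up OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content, flush=True)  # flush so output streams
sys.exit(0)  # exit code 0 signals success to the agent
```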
Example — Ollama wrapper
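A sketch that forwards the prompt to a local Ollama server, assuming Ollama is listening on its default port (11434) and the model below has already been pulled (both illustrative):

```python
#!/usr/bin/env python3
"""Runner sketch: send the stdin prompt to Ollama, print the response."""
import json
import sys
import urllib.request

prompt = sys.stdin.read()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(
        {"model": "llama3", "prompt": prompt, "stream": False}  # model name illustrative
    ).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body.get("response", ""), flush=True)
```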
Passing environment variables
If your binary needs environment variables (API keys, config paths), set them in the shell environment where you run `remoteagent start`. They are inherited by child processes:
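For example (the variable name and key value are illustrative):

```shell
export OPENAI_API_KEY="sk-..."   # inherited by the spawned runner binary
remoteagent start
```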
Pros and cons
| Pros | Cons |
|---|---|
| Works with any AI tool | No tool-use or file I/O capability (text only) |
| Full control over the AI stack | You maintain the binary and its dependencies |
| Zero vendor lock-in | Output quality depends entirely on your binary |
| Air-gap compatible | Streaming requires explicit flush in your binary |