Routerly Is Now on Docker Hub

Carlo Satta · 3 min read

The official pre-built image for Routerly is now available on Docker Hub at hub.docker.com/r/inebrio/routerly.

Until now, running Routerly from Docker required cloning the repository and building locally. The published image eliminates that step entirely.

Quick start

docker run -d \
  --name routerly \
  -p 3000:3000 \
  -v routerly_data:/data \
  -e ROUTERLY_HOME=/data \
  --restart unless-stopped \
  inebrio/routerly:latest

Open http://localhost:3000/dashboard to access the web interface.

The -v routerly_data:/data flag mounts a named volume at /data inside the container, where all configuration and usage data are stored. Settings, model definitions, project tokens, and usage records persist across container restarts and image updates without any manual backup.
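Because all state lives in the named volume, the container itself is disposable. A sketch of an in-place image upgrade, reusing the same flags as the quick start:

```shell
# Replace the container with a newer image; the routerly_data volume
# (and everything under /data) is untouched by stop/rm.
docker pull inebrio/routerly:latest
docker stop routerly
docker rm routerly
docker run -d \
  --name routerly \
  -p 3000:3000 \
  -v routerly_data:/data \
  -e ROUTERLY_HOME=/data \
  --restart unless-stopped \
  inebrio/routerly:latest
```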

With Docker Compose

For teams that prefer a declarative setup:

services:
  routerly:
    image: inebrio/routerly:latest
    ports:
      - "3000:3000"
    volumes:
      - routerly_data:/data
    environment:
      - ROUTERLY_HOME=/data
    restart: unless-stopped

volumes:
  routerly_data:

Bring the stack up with:

docker compose up -d

Environment variables

Variable        Default     Description
ROUTERLY_HOME   /data       Data directory inside the container
ROUTERLY_PORT   3000        HTTP port
NODE_ENV        production  Node environment
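As a sketch of overriding a variable: if ROUTERLY_PORT changes the port the service listens on inside the container (per the table above), the -p mapping must be updated to match:

```shell
# Run Routerly on port 8080 instead of the default 3000.
docker run -d \
  --name routerly \
  -p 8080:8080 \
  -v routerly_data:/data \
  -e ROUTERLY_HOME=/data \
  -e ROUTERLY_PORT=8080 \
  --restart unless-stopped \
  inebrio/routerly:latest
```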

Supported platforms

The image is built for two architectures:

  • linux/amd64: standard servers and desktops
  • linux/arm64: Apple Silicon (M1/M2/M3), AWS Graviton, Raspberry Pi

Docker automatically pulls the correct variant.
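To confirm which variant Docker selected on your machine, you can inspect the pulled image:

```shell
# Prints e.g. linux/amd64 or linux/arm64 for the local image.
docker image inspect inebrio/routerly:latest \
  --format '{{.Os}}/{{.Architecture}}'
```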

Tags

Tag     Meaning
latest  Latest stable release
v0.1.5  Specific version (use this for reproducible deployments)

We recommend pinning to a specific version tag in production environments and updating deliberately rather than relying on latest.
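In practice, pinning just means referencing the version tag everywhere you reference the image; an upgrade then becomes an explicit, reviewable change:

```shell
# Deploy a pinned release; upgrading later means editing this tag deliberately.
docker pull inebrio/routerly:v0.1.5
docker run -d \
  --name routerly \
  -p 3000:3000 \
  -v routerly_data:/data \
  -e ROUTERLY_HOME=/data \
  --restart unless-stopped \
  inebrio/routerly:v0.1.5
```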

Why Docker?

The one-line installer (curl ... | bash) is the fastest path to running Routerly on a single machine. Docker is the better choice when:

  • You want a reproducible, isolated process with no Node.js installation on the host
  • You are deploying to a container orchestrator (Kubernetes, Docker Swarm, Fly.io, Railway)
  • You need to enforce consistent versions across multiple machines
  • You want container-level resource limits (--memory, --cpus)

Both installation methods share the same runtime: the Docker image is simply the same Node.js service packaged with all of its dependencies, so no build step is required.

Integrating with an existing stack

Because Routerly is a wire-compatible OpenAI and Anthropic proxy, no change is required in your application code. Point base_url (or api_base) at the container, use a project token as the API key, and the gateway handles the rest.

from openai import OpenAI

# Point the standard OpenAI client at the Routerly container.
client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="your-project-token"
)
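The same holds outside Python. A quick smoke test with curl against the OpenAI-compatible endpoint; the model name and token below are placeholders, substitute whatever models and project tokens you have configured in Routerly:

```shell
curl http://localhost:3000/v1/chat/completions \
  -H "Authorization: Bearer your-project-token" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```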
