LLM routing, explained
Practical guides on AI cost optimisation, model routing, and building production-grade LLM infrastructure with Routerly.
- security supply-chain litellm
LiteLLM v1.59.8 Supply Chain Attack: Routerly Is Not Affected
A backdoor was discovered in the litellm PyPI package version 1.59.8, designed to exfiltrate LLM API keys. Routerly has no dependency on litellm or any Python package. Here is what we found and fixed during the security audit this triggered.
Carlo Satta · 4 min readRead - benchmarks routing performance
Measuring Routerly: MMLU, HumanEval, and BIRD Benchmarks
We published routerly-benchmark, an open suite that measures the impact of intelligent routing on quality, cost, and latency across three standard AI evaluation tasks. Here is how it works and what we found.
Carlo Satta · 4 min read
- docker deployment self-hosted
Routerly Is Now on Docker Hub
The official inebrio/routerly image is live on Docker Hub. Run the complete gateway, including the web dashboard and CLI, with a single docker run command and no build step.
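A minimal sketch of that one-line start, assuming defaults for illustration: only the image name inebrio/routerly comes from the post; the container name and port mapping here are assumptions, not documented defaults.

```shell
# Pull the image and start the gateway in one step; no local build required.
# NOTE: the container name and the 8080 port mapping are assumptions for
# illustration, not values taken from the Routerly documentation.
docker run -d --name routerly -p 8080:8080 inebrio/routerly
```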
Carlo Satta · 3 min read
- release v0.1.5 routing
Routerly v0.1.5: First Public Release
The first tagged release of Routerly ships nine routing policies, multi-tenant project isolation, a built-in web dashboard, a full admin CLI, and a one-line installer for macOS, Linux, and Windows.
Carlo Satta · 4 min read
- announcement open-source llm-gateway
Introducing Routerly: One Gateway for Every AI Model
Today we are open-sourcing Routerly, a self-hosted LLM gateway that routes requests across AI providers using intelligent multi-policy scoring, with no external database required.
Carlo Satta · 4 min read