manifest

Smart LLM Routing for OpenClaw. Cut Costs by up to 70% 🦞🦚

4.2k stars · +293 stars/month · 10 releases (last 6 months)

Star Growth: +47 (1.1%) from Mar 27 to Apr 1

Overview

Manifest is an intelligent model routing system designed for the OpenClaw ecosystem that automatically optimizes LLM costs by directing requests to the most cost-effective models. It acts as a middleware layer between your agent and various LLM providers, analyzing each request to determine the appropriate model based on complexity and requirements. Simple queries are routed to fast, inexpensive models while complex problems are handled by more capable but expensive models. The system includes automatic fallback mechanisms to ensure reliability when models fail, and provides usage monitoring with customizable alerts when spending thresholds are exceeded. With over 4,000 GitHub stars, Manifest offers both cloud-hosted and self-hosted deployment options, making it accessible for different infrastructure preferences and privacy requirements.

Deep Analysis

Key Differentiator

Free, open-source, local-first LLM router with transparent scoring, vs. OpenRouter, a cloud proxy with a 5% fee and no routing transparency

⚡ Capabilities

  • Smart LLM request routing based on a 23-dimension scoring algorithm
  • Automatic model selection across 300+ models and 14+ providers
  • Cost optimization with up to 70% savings
  • Automatic fallbacks when a model fails
  • Budget limits and spending controls
  • Full routing transparency and cost dashboard
  • Local-first architecture: requests go directly to providers
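
The routing idea described above (cheapest capable model first, with ordered fallbacks) can be sketched in TypeScript. This is a minimal illustration under stated assumptions: the types, model names, prices, and the toy `complexity` heuristic are all hypothetical stand-ins, not Manifest's actual 23-dimension scoring algorithm or API.

```typescript
// Illustrative sketch only: hypothetical types and names, not Manifest's real API.
type Model = { id: string; costPerMTok: number; capability: number };

const models: Model[] = [
  { id: "small-fast", costPerMTok: 0.15, capability: 2 },
  { id: "mid-tier", costPerMTok: 1.0, capability: 5 },
  { id: "frontier", costPerMTok: 15.0, capability: 9 },
];

// Stand-in for Manifest's 23-dimension scoring: estimate request
// complexity from prompt length plus a few task keywords.
function complexity(prompt: string): number {
  let score = Math.min(prompt.length / 500, 5);
  if (/\b(prove|refactor|architecture|debug)\b/i.test(prompt)) score += 3;
  return score;
}

// Return candidate models cheapest-first; the tail of the list serves
// as the automatic fallback chain if an earlier model fails.
function route(prompt: string): Model[] {
  const needed = complexity(prompt);
  const capable = models
    .filter((m) => m.capability >= needed)
    .sort((a, b) => a.costPerMTok - b.costPerMTok);
  // If nothing qualifies, fall back to the most capable model.
  return capable.length > 0 ? capable : [models[models.length - 1]];
}

console.log(route("What is 2 + 2?")[0].id); // → "small-fast"
console.log(route("Refactor this architecture for scale")[0].id); // → "mid-tier"
```

A simple query falls through to the cheapest model, while keyword-heavy or long prompts are escalated, which is the behavior the capability list describes at a high level.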

🔗 Integrations

OpenAI · Anthropic · Google Gemini · DeepSeek · xAI · Mistral AI · Qwen · MiniMax · Kimi · Amazon Nova · Z.ai (Zhipu) · OpenRouter · Ollama

✓ Best For

  ✓ Reducing LLM API costs by routing to the cheapest capable model
  ✓ Teams using multiple LLM providers who want automatic failover

✗ Not Ideal For

  ✗ Single-model deployments
  ✗ Projects requiring deterministic model selection

Languages

TypeScript

Deployment

Cloud (app.manifest.build) · Local (OpenClaw plugin) · Docker · Self-hosted

Pricing Detail

Free: Fully free and open source (MIT)
Paid: N/A; no fees on API calls

⚠ Known Limitations

  ⚠ Beta status: still evolving
  ⚠ Requires OpenClaw for local plugin mode
  ⚠ Routing algorithm is opinionated and may not match all use cases

Pros

  + Significant cost reduction potential of up to 70% through intelligent model routing based on request complexity
  + Automatic failover system ensures high reliability by seamlessly switching to alternative models when primary ones fail
  + Flexible deployment options with both cloud-managed service and local self-hosted installation available

Cons

  - Limited to the OpenClaw ecosystem, which may restrict compatibility with other AI agent frameworks
  - Requires additional infrastructure setup and configuration compared to direct LLM provider integration

Use Cases

  • Cost optimization for high-volume AI applications that process both simple and complex queries with varying computational requirements
  • Production AI systems requiring high availability through automatic model fallbacks and redundancy
  • Organizations with strict budget controls needing usage monitoring and spending alerts for LLM consumption
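
The budget-control use case above can be sketched as a small threshold check. This is a hypothetical illustration, assuming a simple policy shape; it is not Manifest's actual configuration schema or alerting API.

```typescript
// Hypothetical budget-guard sketch; names and schema are illustrative only.
interface BudgetPolicy {
  monthlyLimitUsd: number;
  alertThresholds: number[]; // fractions of the limit that trigger alerts
}

// Return a human-readable alert for each threshold the current spend has crossed.
function checkSpend(policy: BudgetPolicy, spentUsd: number): string[] {
  const alerts: string[] = [];
  for (const t of policy.alertThresholds) {
    if (spentUsd >= policy.monthlyLimitUsd * t) {
      alerts.push(`Spend has reached ${Math.round(t * 100)}% of the monthly limit`);
    }
  }
  return alerts;
}

const policy: BudgetPolicy = { monthlyLimitUsd: 200, alertThresholds: [0.5, 0.8, 1.0] };
checkSpend(policy, 170); // → alerts for the 50% and 80% thresholds
```

In a real deployment this check would run per request or on a schedule, feeding the usage monitoring and spending alerts the listing describes.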

Getting Started

1. Install the plugin: 'openclaw plugins install manifest-provider'
2. Configure authentication with your API key from app.manifest.build: 'openclaw providers setup manifest-provider'
3. Restart the gateway to enable the 'manifest/auto' model: 'openclaw gateway restart'
