llm.ts

Call any LLM with a single API. Zero dependencies.

213 stars · +8 stars/month · 0 releases (last 6 months)


Overview

llm.ts is a unified JavaScript/TypeScript library that provides a single API interface for calling over 30 different large language models from multiple providers, including OpenAI, Cohere, and HuggingFace. The library's key innovation is its ability to send multiple prompts to multiple models simultaneously and receive all results in a single response, making it ideal for A/B testing and model comparison workflows. At under 10kB minified with zero dependencies, it's designed to be lightweight and portable across environments including Node.js, Deno, and browsers.

The library abstracts away the complexity of different provider APIs while maintaining flexibility through model-specific configurations. Users can specify models by name (like 'text-ada-001'), by provider prefix ('cohere/command-nightly'), or using built-in enums to avoid typos. The response format is standardized across all providers and includes metadata like creation timestamps, model names, and prompt indices for easy result mapping. This makes llm.ts particularly valuable for developers building applications that need provider flexibility, researchers conducting model comparisons, and teams prototyping with multiple AI models without vendor lock-in.
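The prompt-index metadata mentioned above makes it easy to map each completion back to the prompt that produced it. The field names below (`text`, `index`, `model`, `created`) are assumptions inferred from the description, not a verified llm.ts type definition; the grouping helper is an illustrative sketch, not part of the library.

```typescript
// Assumed shape of one standardized completion choice; field names
// are inferred from the README's description, not verified.
interface Choice {
  text: string;    // model output
  index: number;   // index of the prompt this choice answers
  model: string;   // which model produced it
  created: number; // creation timestamp
}

// Group choices back to the prompt they answer, using the prompt
// index carried in each choice.
function groupByPrompt(choices: Choice[]): Map<number, Choice[]> {
  const byPrompt = new Map<number, Choice[]>();
  for (const c of choices) {
    const bucket = byPrompt.get(c.index) ?? [];
    bucket.push(c);
    byPrompt.set(c.index, bucket);
  }
  return byPrompt;
}
```

With results grouped this way, comparing how different models answered the same prompt is a single map lookup.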

Deep Analysis

Key Differentiator

vs Vercel AI SDK / LangChain.js: zero-dependency TypeScript library under 10kB that sends prompts to 30+ models from 3 providers in a single call — optimized for lightweight multi-model comparison

Capabilities

  • Unified TypeScript API for 30+ language models across multiple providers
  • Send multiple prompts to multiple models in a single request
  • Zero dependencies — under 10kB minified
  • Works in Node.js, Deno, and browser environments
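The "multiple prompts to multiple models" capability implies a fan-out: every prompt is paired with every model. A minimal sketch of that cross product — the helper name is illustrative, not an llm.ts internal:

```typescript
// Pair every prompt with every model: N prompts x M models
// produces N*M (prompt, model) requests.
function crossProduct(
  prompts: string[],
  models: string[]
): Array<{ prompt: string; model: string }> {
  return prompts.flatMap(prompt =>
    models.map(model => ({ prompt, model }))
  );
}
```

Two prompts against three models therefore yields six completions in the single batched response.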

🔗 Integrations

OpenAI · Cohere · HuggingFace

Best For

  • Comparing outputs across multiple LLMs simultaneously in TypeScript
  • Lightweight multi-model evaluation without vendor lock-in
  • Browser-based LLM applications needing minimal bundle size

Not Ideal For

  • Production applications needing robust error handling
  • Python-first teams
  • Streaming or real-time response use cases

Languages

TypeScript

Deployment

Node.js · Deno · browser

Known Limitations

  • Only three hosting providers supported
  • No streaming API support
  • No built-in error handling or rate limiting
  • Requires users to provide their own API keys
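Because the library ships no error handling or rate limiting, production callers may want to wrap requests themselves. A generic retry-with-exponential-backoff sketch — not part of llm.ts, and the parameter defaults are arbitrary:

```typescript
// Retry an async operation with exponential backoff.
// attempts and baseDelayMs are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Backoff doubles each attempt: 500ms, 1000ms, 2000ms, ...
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Any llm.ts call could then be passed as the `fn` closure, e.g. `withRetry(() => llm.completion(...))`.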

Pros

  • + Unified API that abstracts complexity across 30+ models from multiple providers (OpenAI, Cohere, HuggingFace)
  • + Extremely lightweight with zero dependencies and under 10kB minified size, suitable for any environment
  • + Batch processing capability to send multiple prompts to multiple models in a single request with standardized response format

Cons

  • - Requires managing API keys for each provider separately, increasing configuration complexity
  • - Limited to older generation models with no apparent support for newer models like GPT-4 or Claude 3
  • - No streaming support mentioned, which may limit real-time applications

Use Cases

  • A/B testing and benchmarking different LLMs with identical prompts to compare output quality and characteristics
  • Building LLM comparison tools or research platforms that need to evaluate multiple models simultaneously
  • Prototyping applications that require provider flexibility without committing to a single LLM vendor

Getting Started

1. Install the package with `npm install llm.ts` or `yarn add llm.ts`.
2. Set up API keys for your desired providers (OpenAI, Cohere, HuggingFace) in environment variables.
3. Import the `LLM` class and call `completion()` with arrays of prompts and models to get standardized results from multiple LLMs in one response.
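The steps above can be sketched end to end. The `LLM` class and `completion()` option names below mirror the description on this page but are not verified against the real llm.ts signatures; the class here is a self-contained stand-in (the real library performs HTTP calls to each provider using your API keys), and `'YOUR_OPENAI_KEY'` is a placeholder.

```typescript
// Assumed request/response shapes, inferred from the README.
interface CompletionRequest { prompt: string[]; model: string[]; }
interface CompletionChoice {
  text: string; index: number; model: string; created: number;
}

// Stand-in for the real llm.ts LLM class, so this sketch runs
// without network access or real API keys.
class LLM {
  constructor(private apiKeys: { openAI?: string; cohere?: string }) {}

  async completion(req: CompletionRequest): Promise<{ choices: CompletionChoice[] }> {
    // Stub: emit one choice per (prompt, model) pair, tagged with the
    // prompt index so results can be mapped back to their prompt.
    const choices: CompletionChoice[] = [];
    req.prompt.forEach((p, i) =>
      req.model.forEach(m =>
        choices.push({ text: `[${m}] reply to "${p}"`, index: i, model: m, created: Date.now() })
      )
    );
    return { choices };
  }
}

const llm = new LLM({ openAI: 'YOUR_OPENAI_KEY' }); // placeholder key
llm.completion({
  prompt: ['tell me a joke', 'what is the meaning of life?'],
  model: ['text-ada-001', 'cohere/command-nightly'],
}).then(res => {
  for (const c of res.choices) console.log(c.index, c.model, c.text);
});
```

Two prompts against two models come back as four choices in one response, each carrying its prompt index and model name.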
