langchainrb

Build LLM-powered applications in Ruby

Tags: open-source, memory, knowledge

Stats: 2.0k stars · +165 stars/month · 0 releases in the last 6 months

Overview

Langchainrb is a Ruby gem that provides a unified interface for building LLM-powered applications. With 1,974 GitHub stars, it offers a Ruby-native way to integrate multiple Large Language Model providers through a consistent API. The library supports more than 10 major providers, including OpenAI, Anthropic, Google Gemini, and AWS Bedrock, so developers can switch backends without changing application code.

Key features include Retrieval Augmented Generation (RAG), vector search, prompt management, output parsers, and evaluation tools. The gem focuses on two primary use cases: building RAG systems for enhanced information retrieval and creating AI assistants or chatbots. It abstracts the differences between LLM APIs, exposing common methods for generating embeddings, prompt completions, and chat completions across all supported providers.

For Rails developers, a separate langchainrb_rails gem offers deeper framework integration.
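A minimal sketch of that provider-agnostic interface, using langchainrb's documented `Langchain::LLM::*` classes and the shared `#chat`/`#embed` methods. The guard clauses are only there so the snippet loads cleanly without the gem or an API key; exact response accessors may vary between gem versions, so treat this as illustrative rather than definitive:

```ruby
# Sketch of the unified interface: the same calls work against any
# supported backend; switching providers is a one-line change.
begin
  require "langchain" # gem install langchainrb
rescue LoadError
  warn "langchainrb not installed; skipping live calls"
end

messages = [{ role: "user", content: "Summarize Ruby in one sentence." }]

if defined?(Langchain) && ENV["OPENAI_API_KEY"]
  # Swap in e.g. Langchain::LLM::Anthropic or Langchain::LLM::GoogleGemini
  # here without touching the rest of the code.
  llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

  puts llm.chat(messages: messages).chat_completion    # chat completion
  puts llm.embed(text: "Hello, world!").embedding.size # embedding vector length
end
```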

Pros

  • + Unified interface across 10+ major LLM providers (OpenAI, Anthropic, Google, AWS Bedrock, etc.) enabling easy provider switching
  • + Ruby-native solution with strong community adoption (1,974 GitHub stars) and dedicated Rails integration
  • + Comprehensive feature set including RAG, vector search, prompt management, and evaluation tools

Cons

  • - Requires additional gems that aren't included by default, potentially increasing dependency complexity
  • - Needs separate API keys and configuration for each LLM provider you want to use

Use Cases

  • Retrieval Augmented Generation (RAG): ground LLM responses in your own documents using embeddings and vector search
  • AI assistants and chatbots: multi-turn conversations through a consistent chat interface, regardless of the backend provider

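As a sketch of the RAG use case highlighted in the overview: langchainrb's vector-search wrappers (here `Langchain::Vectorsearch::Qdrant`, one of several supported stores) pair an LLM with a vector database, where `add_texts` indexes documents and `ask` retrieves relevant chunks before answering. Class and method names follow the gem's documentation, but a running Qdrant instance, the qdrant-ruby gem, and API keys are assumed, so the live calls are guarded:

```ruby
# Illustrative RAG sketch; requires langchainrb plus a vector store client.
begin
  require "langchain" # gem install langchainrb (and qdrant-ruby for this store)
rescue LoadError
  warn "langchainrb not installed; skipping live calls"
end

documents = [
  "Langchainrb provides a unified interface to 10+ LLM providers.",
  "RAG grounds model answers in retrieved documents."
]

if defined?(Langchain) && ENV["OPENAI_API_KEY"] && ENV["QDRANT_URL"]
  client = Langchain::Vectorsearch::Qdrant.new(
    url: ENV["QDRANT_URL"],
    api_key: ENV["QDRANT_API_KEY"],
    index_name: "docs",
    llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  )
  client.create_default_schema       # one-time index setup
  client.add_texts(texts: documents) # embed and store the documents
  puts client.ask(question: "What does langchainrb unify?") # retrieve + answer
end
```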
Getting Started

1. Install the gem:

   bundle add langchainrb
   # or
   gem install langchainrb

2. Initialize your chosen LLM provider with an API key:

   llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

3. Use the unified interface for embeddings, completions, or chat:

   response = llm.embed(text: "Hello, world!")
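Building on step 3, here is a hedged sketch of a multi-turn chat through the same unified interface. The `#chat` method and `chat_completion` accessor are from the gem's documented API; the guard keeps the snippet loadable without the gem or credentials:

```ruby
begin
  require "langchain" # gem install langchainrb
rescue LoadError
  warn "langchainrb not installed; skipping live calls"
end

# A running conversation is just an array of role/content hashes.
history = [
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: "What is a Ruby gem?" }
]

if defined?(Langchain) && ENV["OPENAI_API_KEY"]
  llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  reply = llm.chat(messages: history).chat_completion
  # Append the assistant turn so the next call keeps full context.
  history << { role: "assistant", content: reply }
end
```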