MiniChain

A tiny library for coding with large language models.

Tags: open-source, agent-frameworks
Stats: 1.2k stars · +103 stars/month · 0 releases in the last 6 months

Overview

MiniChain is a lightweight Python library for building and orchestrating large language model workflows through function composition. It uses a decorator-based approach: developers annotate Python functions with @prompt to create reusable LLM components that can be chained together. The library builds a computational graph, similar to PyTorch's, enabling visualization and debugging of complex prompt chains.

MiniChain separates prompt templates from code using Jinja templating, making prompts more maintainable and reusable. It supports multiple backends, including OpenAI, Hugging Face, Google Search, Python execution, and Bash commands, so developers can combine different AI services and tools in a single workflow. It also ships implementations of popular LLM techniques such as Retrieval-Augmented Generation, Chain-of-Thought reasoning, and Program-Aided Language models.

With its focus on simplicity and modularity, MiniChain enables rapid prototyping of complex AI applications while keeping code clear and debuggable through its graph visualization capabilities.
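The decorator-and-chaining pattern described above can be sketched in plain Python without installing anything. Note that the `prompt` decorator, `FakeModel`, and `call_graph` below are hypothetical stand-ins to illustrate the pattern, not MiniChain's actual API.

```python
from functools import wraps

# Hypothetical stand-in for MiniChain's computational graph:
# a simple trace of every decorated call.
call_graph = []

class FakeModel:
    """Stand-in for an LLM backend; echoes the prompt it receives."""
    def __call__(self, text):
        return f"<response to: {text}>"

def prompt(model):
    """Decorator that wires a function to a model and records each call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args):
            result = fn(model, *args)
            call_graph.append((fn.__name__, args, result))
            return result
        return wrapper
    return decorator

@prompt(FakeModel())
def summarize(model, text):
    return model(f"Summarize: {text}")

@prompt(FakeModel())
def translate(model, text):
    return model(f"Translate to French: {text}")

# Chaining is ordinary function composition:
# the output of one prompt feeds the next.
out = translate(summarize("A long article about decorators."))
print(len(call_graph))  # two nodes recorded, one per decorated call
```

Because every call is recorded as a node, a `show()`-style visualizer only has to walk the trace, which is the same idea that lets MiniChain display its execution graph.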

Pros

  • Simple decorator-based API that makes LLM chaining intuitive and Pythonic
  • Built-in visualization and debugging through computational graph tracking
  • Clean separation of concerns with external Jinja template files for prompts

Cons

  • Limited to basic chaining functionality compared to more comprehensive frameworks
  • Requires manual setup and configuration for each backend service
  • Small community and ecosystem with fewer pre-built components

Use Cases

  • Prototyping Retrieval-Augmented Generation pipelines over custom documents
  • Building Chain-of-Thought reasoning workflows from composable prompt functions
  • Program-Aided Language model setups that mix LLM calls with Python or Bash execution

Getting Started

1. Install with 'pip install minichain' and set your OPENAI_API_KEY environment variable.
2. Create a Python function decorated with @prompt that specifies a model and template.
3. Chain functions together by calling them sequentially, and use show() to visualize the execution graph.
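Step 2's separation of the prompt template from the calling code can be illustrated with the standard library's string.Template as a stand-in for Jinja. In MiniChain the template would live in its own template file next to the code; the inline template and `render_prompt` helper here are hypothetical, kept in one file so the sketch is self-contained.

```python
from string import Template

# Stand-in for an external Jinja template file:
# the prompt text lives apart from the logic that fills it.
MATH_TEMPLATE = Template(
    "Question: $question\n"
    "Think step by step, then give the final answer."
)

def render_prompt(question):
    """Fill the template; code only supplies variables, not prompt wording."""
    return MATH_TEMPLATE.substitute(question=question)

print(render_prompt("What is 12 * 7?"))
```

Keeping the template external means prompt wording can be revised, reviewed, or reused without touching the chaining code, which is the maintainability benefit the Overview describes.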