quivr

Opinionated RAG for integrating GenAI in your apps 🧠 Focus on your product rather than the RAG. Easy integration in existing products with customisation! Any LLM: GPT4, Groq, Llama. Any Vectorstore:

Stars: 39.1k
Stars/month: +68
Releases (6m): 0

Star Growth

+13 (0.0%) between Mar 27 and Apr 1

Overview

Quivr is an opinionated RAG (Retrieval Augmented Generation) framework designed to help developers integrate GenAI capabilities into their applications quickly. Positioned as a 'second brain' powered by generative AI, Quivr abstracts the complexity of building RAG systems so developers can focus on their core product features. The framework is LLM-agnostic, supporting providers like OpenAI, Anthropic, Mistral, and Gemma, while handling multiple file formats including PDF, TXT, and Markdown. With over 39,000 GitHub stars, Quivr offers a customizable RAG pipeline that can be extended with internet search, custom tools, and integrations like Megaparse for advanced document processing. The system emphasizes simplicity with a 5-line code setup while maintaining the flexibility needed for production applications. Quivr serves as the core engine behind Quivr.com and represents an opinionated approach to RAG implementation that prioritizes speed and efficiency over configuration complexity.

Deep Analysis

Key Differentiator

YC-backed 'second brain' RAG framework that prioritizes simplicity (5 lines of code to start) and opinionated defaults over flexibility; same project as quivr

⚡ Capabilities

  • Opinionated RAG framework for knowledge-based Q&A
  • Multi-LLM support (OpenAI, Anthropic, Mistral, Ollama)
  • Multi-format document ingestion (PDF, TXT, Markdown)
  • Customizable RAG workflows with YAML configuration
  • Reranking support via Cohere
  • Integration with Megaparse for advanced document parsing
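
The YAML-driven workflow configuration mentioned above can be sketched roughly as follows. This is an illustrative example only: the node names, `workflow_config`/`retrieval_config` keys, and field names are assumptions and should be checked against the quivr-core documentation:

```yaml
# Hypothetical sketch of a quivr-core RAG workflow config.
# Keys and node names are illustrative, not a verified schema.
workflow_config:
  name: "standard RAG"
  nodes:
    - name: "START"
      edges: ["rewrite"]
    - name: "rewrite"          # reformulate the user question
      edges: ["retrieve"]
    - name: "retrieve"         # fetch top-k chunks from the vector store
      edges: ["generate_rag"]
    - name: "generate_rag"     # answer with the chosen LLM
      edges: ["END"]

retrieval_config:
  top_k: 5                     # number of chunks passed to the LLM
  llm_config:
    model: "gpt-4o"            # any supported provider/model
```

The appeal of this style is that swapping the LLM, tweaking retrieval depth, or inserting an extra step (e.g., a Cohere reranking node) is a config change rather than a code change.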

🔗 Integrations

OpenAI, Anthropic, Mistral, Ollama, Cohere, Megaparse

✓ Best For

  ✓ Building personal knowledge assistants with minimal code
  ✓ Teams wanting a quick RAG setup over their documents

✗ Not Ideal For

  ✗ Teams needing fully custom RAG architectures
  ✗ Use cases requiring non-text data (video, structured databases)

Languages

Python

Deployment

Local, Self-hosted, Cloud (quivr.com)

⚠ Known Limitations

  ⚠ This is the old 'quiver' repo; it likely redirects to quivr
  ⚠ Opinionated design may limit customization for advanced RAG pipelines
  ⚠ Requires external API keys for LLM providers

Pros

  + LLM-agnostic design supporting multiple providers (OpenAI, Anthropic, Mistral, Gemma) with unified API
  + Extremely simple setup requiring only 5 lines of code to create a working RAG system
  + Flexible file format support with extensible parsers for PDF, TXT, Markdown and custom document types

Cons

  - Python-only implementation limiting cross-platform development options
  - Requires Python 3.10 or newer, excluding older Python environments
  - Still actively developing core features, indicating potential API instability

Use Cases

  • Integrating document Q&A capabilities into existing Python applications without building RAG from scratch
  • Building personal knowledge management systems that can query across multiple document formats
  • Creating AI-powered customer support tools that can answer questions from company documentation

Getting Started

Install the package with 'pip install quivr-core', set your LLM provider API keys as environment variables (e.g., OPENAI_API_KEY), create a Brain instance from your documents, and start asking questions with the ask() method.
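
A minimal sketch of that flow, assuming the Brain API described above (Brain.from_files and ask). The file path and question are placeholders; run `pip install quivr-core` and export OPENAI_API_KEY before trying it:

```python
# Sketch of the quick-start flow described above. Assumes
# `pip install quivr-core` and OPENAI_API_KEY exported in the
# environment; the file path and question are placeholders.

def ask_my_docs(question: str) -> str:
    from quivr_core import Brain  # lazy import: requires `pip install quivr-core`

    # Build a "Brain" over local documents, then query it.
    brain = Brain.from_files(name="my_docs", file_paths=["./handbook.pdf"])
    answer = brain.ask(question)
    return answer.answer


if __name__ == "__main__":
    print(ask_my_docs("What topics does the handbook cover?"))
```

This matches the 5-line setup the project advertises: ingestion, embedding, and retrieval all happen inside Brain.from_files and ask, with no pipeline wiring on the caller's side.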
