ragapp

The easiest way to use Agentic RAG in any enterprise

Visit Website · View on GitHub
Stars: 4.4k (+368/month) · Releases (last 6 months): 0

Overview

RAGapp is an enterprise-ready Retrieval-Augmented Generation platform that makes deploying AI-powered document search and chat systems as simple as configuring OpenAI's custom GPTs, but with full control over your infrastructure. Built on LlamaIndex, it provides a complete stack including an admin interface for configuration, a chat UI for end users, and REST APIs for integration. The platform supports both cloud-hosted models (OpenAI, Gemini) and local deployment with Ollama, making it suitable for organizations with varying privacy and compliance requirements. With 4,410 GitHub stars, it has proven popular for teams needing production-grade RAG without the complexity of building from scratch.

Pros

  • + Zero-config Docker deployment with comprehensive UI stack (admin, chat, API) included out of the box
  • + Enterprise-grade architecture supporting both cloud and on-premises models with built-in vector database integration
  • + Production-ready with pre-built Docker Compose templates for common scenarios like Ollama + Qdrant deployment
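
The Ollama + Qdrant scenario mentioned above can be sketched as a minimal Compose file. This is a hypothetical illustration, not RAGapp's shipped template: only the `ragapp/ragapp` image and port 8000 come from this page, while the `ollama/ollama` and `qdrant/qdrant` images are the public upstream ones, and the service wiring is an assumption.

```yaml
# Hypothetical sketch of a fully local RAG stack; not RAGapp's official template.
services:
  ragapp:
    image: ragapp/ragapp
    ports:
      - "8000:8000"   # admin UI, chat UI, and REST API (port from this page)
    depends_on:
      - ollama
      - qdrant
  ollama:
    image: ollama/ollama   # local model serving
  qdrant:
    image: qdrant/qdrant   # vector database
```

The actual templates in the RAGapp repository may differ in service names, environment variables, and volumes; treat this only as a shape of the deployment.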

Cons

  • - No built-in authentication layer; an external API gateway or reverse proxy is required for user management
  • - Limited customization of UI components compared to building a custom solution
  • - Token-based authorization for per-user access control is still in development
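
Since authentication must come from outside, one common pattern is to put a reverse proxy with HTTP basic auth in front of the app. A minimal sketch using an nginx `server` block (placed inside the `http` context), assuming RAGapp is reachable as `ragapp:8000` — the hostname is hypothetical; only the port comes from this page:

```nginx
# Hypothetical nginx front door for RAGapp; hostname and paths are assumptions.
server {
    listen 80;

    location / {
        auth_basic "RAGapp";                          # prompt for credentials
        auth_basic_user_file /etc/nginx/.htpasswd;    # created with htpasswd
        proxy_pass http://ragapp:8000;                # forward to the container
    }
}
```

Any equivalent gateway (Traefik, Caddy, an OIDC-aware proxy) would serve the same role; the point is that access control sits in front of RAGapp, not inside it.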

Getting Started

1. Run `docker run -p 8000:8000 ragapp/ragapp` to start the container.
2. Open the admin interface at [link] to configure your AI model (OpenAI/Gemini API keys or local Ollama).
3. Upload your documents and start chatting at [link].