promptfoo

Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.

Stars: 18.9k
Stars/month: +1,740
Releases (6m): 10

Star Growth (Mar 27–Apr 1): +341 (1.8%)

Overview

Promptfoo is a CLI and library for evaluating and red-teaming LLM applications. It addresses the need for systematic testing and security assessment of AI systems, moving teams away from trial-and-error approaches toward reliable, secure AI development. The tool runs automated evaluations of prompts, agents, and RAG pipelines across multiple LLM providers, including OpenAI, Anthropic, Azure, Bedrock, and Ollama.

Beyond basic testing, promptfoo offers red-teaming capabilities for vulnerability scanning and security assessment of LLM applications, and its side-by-side model comparisons help teams make informed decisions about model selection and performance. The platform integrates into CI/CD pipelines, enabling automated checks and pull-request reviews for LLM-related security and compliance issues.

With its declarative configuration approach and web-based results viewer, promptfoo makes it easy to share findings across teams and track improvements over time. Recently acquired by OpenAI while keeping its open-source MIT license, the tool has proven its value with over 18,000 GitHub stars and an active community.
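The declarative workflow centers on a single YAML config. A minimal sketch of a promptfooconfig.yaml, with illustrative model IDs and test values (not a verified list):

    # promptfooconfig.yaml: compare two providers on the same prompt
    prompts:
      - "Summarize in one sentence: {{article}}"

    providers:
      - openai:gpt-4o-mini
      - anthropic:messages:claude-3-5-sonnet-20241022

    tests:
      - vars:
          article: "Promptfoo is a CLI for evaluating and red-teaming LLM apps."
        assert:
          - type: icontains        # simple string check
            value: "promptfoo"
          - type: llm-rubric       # model-graded assertion
            value: "Is a single, accurate sentence"

Running promptfoo eval against a file like this grades each provider's output with the listed assertions and renders the side-by-side comparison in the web viewer.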

Deep Analysis

Key Differentiator

Unlike LangSmith (production observability) or Langfuse (logging), promptfoo is the only open-source tool that combines evaluation, red teaming, and CI/CD code scanning in a single package. It is now backed by OpenAI while remaining fully MIT-licensed.

Capabilities

  • Automated LLM evaluation with side-by-side model comparison across providers
  • Red teaming and vulnerability scanning for LLM application security
  • CI/CD integration for automated prompt regression testing
  • Code scanning for LLM-related security and compliance issues in PRs
  • Web UI for visualizing eval results with assertion-based grading
  • 100% local execution — prompts never leave your machine

🔗 Integrations

OpenAI · Anthropic · Azure · AWS Bedrock · Ollama · GitHub Actions · Any OpenAI-compatible API
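In the config, each integration is addressed by a prefixed provider ID. The identifiers below are illustrative rather than a verified list; check the provider docs for current model names:

    providers:
      - openai:gpt-4o-mini                             # OpenAI
      - anthropic:messages:claude-3-5-sonnet-20241022  # Anthropic
      - ollama:chat:llama3                             # local Ollama
      # Azure, Bedrock, and any OpenAI-compatible endpoint use
      # similarly prefixed IDs; see the promptfoo provider docs.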

Best For

  • Teams hardening LLM apps against prompt injection and jailbreaks with automated red teaming
  • Engineering teams adding LLM eval regression tests to CI/CD pipelines
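For the CI/CD case, the simplest wiring is to run the CLI directly in a workflow step so a failed assertion fails the build. A sketch for GitHub Actions, where the workflow name, file path, and config filename are assumptions:

    # .github/workflows/llm-eval.yml
    name: LLM eval
    on: [pull_request]

    jobs:
      eval:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          - name: Run promptfoo eval
            run: npx promptfoo@latest eval -c promptfooconfig.yaml
            env:
              OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}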

Not Ideal For

  • Building or deploying AI agents — use LangGraph or CrewAI for agent development
  • Real-time production monitoring — use LangSmith or Langfuse for observability

Languages

TypeScript · JavaScript · Python

Deployment

npm global install · Homebrew · pip install · npx (no install)

Pricing Detail

Free: Fully free and open source (MIT license)
Paid: N/A (acquired by OpenAI, remains MIT)

Known Limitations

  • Evaluation-only — does not deploy or serve LLM applications
  • Complex eval configurations require YAML expertise
  • Red teaming effectiveness depends on attack strategy coverage
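On that last point, attack coverage is at least explicit in the config, so it can be inspected and extended. A sketch of a red-team config, with plugin and strategy names given as examples rather than a verified list:

    # red-team section of promptfooconfig.yaml
    targets:
      - openai:gpt-4o-mini      # the model or app under test

    redteam:
      plugins:                  # what to probe for
        - pii
        - harmful
      strategies:               # how attack payloads are mutated
        - jailbreak
        - prompt-injection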

Pros

  • + Comprehensive testing suite covering both performance evaluation and security red teaming in a single tool
  • + Multi-provider support with easy comparison between GPT, Claude, Gemini, Llama, and dozens of other models
  • + Strong CI/CD integration with automated pull request scanning and code review capabilities for production deployments

Cons

  • - Requires API keys and credits for multiple LLM providers, which can become expensive for extensive testing
  • - Command-line focused interface may have a learning curve for teams preferring GUI-based tools
  • - Limited to evaluation and testing; does not provide actual LLM application development capabilities

Use Cases

  • Automated testing and evaluation of prompt performance across different models before production deployment
  • Security vulnerability scanning and red teaming of LLM applications to identify potential risks and compliance issues
  • Systematic comparison of model performance and cost-effectiveness to optimize AI application architecture

Getting Started

Install promptfoo globally with npm install -g promptfoo, set your LLM provider API key as an environment variable (export OPENAI_API_KEY=sk-abc123), then initialize and run your first evaluation: promptfoo init --example getting-started, followed by promptfoo eval and promptfoo view.
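The same steps as commands (the API key shown is a placeholder):

    npm install -g promptfoo                  # or Homebrew / pip / npx, per Deployment above
    export OPENAI_API_KEY=sk-abc123           # replace with your real key

    promptfoo init --example getting-started  # scaffold an example project
    promptfoo eval                            # run the evaluation
    promptfoo view                            # open results in the local web viewer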
