promptfoo
Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.
Overview
Promptfoo is a CLI and library for evaluating and red-teaming LLM applications. It replaces trial-and-error prompt development with systematic testing and security assessment, supporting automated evaluations of prompts, agents, and RAG pipelines across providers including OpenAI, Anthropic, Azure, Bedrock, and Ollama.

Beyond performance testing, promptfoo offers red-teaming capabilities for vulnerability scanning and security assessment of LLM applications. Side-by-side model comparisons help teams make informed decisions about model selection and cost. The tool integrates into CI/CD pipelines, enabling automated checks and pull request reviews for LLM-related security and compliance issues, and its declarative configuration and web-based results viewer make it easy to share findings across teams and track improvements over time. The project is open source under the MIT license and has an active community on GitHub.
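The declarative workflow centers on a `promptfooconfig.yaml` file. A minimal sketch comparing two providers on the same prompt might look like the following (model IDs, test values, and thresholds are illustrative; check the promptfoo docs for the current schema):

```yaml
# promptfooconfig.yaml — minimal evaluation sketch.
# Provider IDs and assertion types follow promptfoo's config format,
# but the specific models and values here are examples only.
prompts:
  - "Summarize the following in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini
  - anthropic:messages:claude-3-5-sonnet-20241022

tests:
  - vars:
      text: "Promptfoo evaluates prompts across multiple LLM providers."
    assert:
      - type: contains
        value: "providers"
      - type: latency
        threshold: 5000   # milliseconds
```

Running `promptfoo eval` executes the matrix of prompts, providers, and tests, and `promptfoo view` opens the web-based viewer with side-by-side results.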
Pros
- + Comprehensive testing suite covering both performance evaluation and security red teaming in a single tool
- + Multi-provider support with easy comparison across GPT, Claude, Gemini, Llama, and dozens of other models
- + Strong CI/CD integration with automated pull request scanning and code review capabilities for production deployments
Cons
- - Requires API keys and credits for multiple LLM providers, which can become expensive for extensive testing
- - Command-line focused interface may have a learning curve for teams preferring GUI-based tools
- - Limited to evaluation and testing; it does not provide LLM application development capabilities
Use Cases
- • Automated testing and evaluation of prompt performance across different models before production deployment
- • Security vulnerability scanning and red teaming of LLM applications to identify potential risks and compliance issues
- • Systematic comparison of model performance and cost-effectiveness to optimize AI application architecture
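For the CI/CD use case, promptfoo publishes a GitHub Action that can evaluate prompts on each pull request. The sketch below is a hedged example: the action name is real, but the inputs shown are abbreviated and should be checked against the action's README.

```yaml
# .github/workflows/prompt-eval.yml — illustrative workflow sketch.
# Action inputs and secret names are assumptions; verify against
# the promptfoo-action documentation before use.
name: LLM prompt evaluation
on: pull_request
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: promptfoo/promptfoo-action@v1
        with:
          config: promptfooconfig.yaml
          github-token: ${{ secrets.GITHUB_TOKEN }}
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
```

Gating merges on evaluation results turns prompt changes into reviewable, testable diffs, the same way unit tests gate code changes.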