agentic-radar
A security scanner for your LLM agentic workflows
Star Growth
Overview
Agentic Radar is a specialized security scanner designed to analyze and assess the security posture of LLM-powered agentic workflows. As AI agents become increasingly prevalent in production systems, they introduce unique security challenges including prompt injection vulnerabilities, unauthorized data access, and unpredictable behavior patterns. This tool provides comprehensive security scanning capabilities specifically tailored for agentic systems, helping developers and security teams identify potential vulnerabilities before deployment. The tool features an integrated visualizer component that provides clear insights into detected security issues and workflow analysis. With over 935 GitHub stars, Agentic Radar has gained traction in the AI security community as organizations seek to safely deploy autonomous AI systems. The scanner integrates with popular agentic frameworks like CrewAI, making it accessible for teams already building with these platforms. Available as a Python package through PyPI, it offers both command-line and programmatic interfaces for security assessment. The tool is backed by SPLX.ai and maintains active community support through Discord and Slack channels, ensuring users have access to ongoing updates and security research.
Deep Analysis
The first dedicated security scanner specifically designed for agentic AI workflows, combining static analysis with runtime adversarial testing and automatic prompt hardening — no other tool maps agent vulnerabilities to OWASP AI security frameworks
⚡ Capabilities
- • Security scanner for agentic AI workflows with graph-based architecture visualization
- • Tool and MCP server detection across agent systems
- • Vulnerability mapping aligned with OWASP Top 10 for LLM Applications and Agentic AI
- • Prompt hardening that transforms agent instructions into security-hardened structured prompts
- • Runtime adversarial testing simulating prompt injection, PII leakage, and harmful content
🔗 Integrations
✓ Best For
- ✓ Security teams auditing agentic AI systems before production deployment
- ✓ DevOps teams integrating AI security scanning into CI/CD pipelines
✗ Not Ideal For
- ✗ Real-time production threat monitoring — use runtime guardrails like NeMo Guardrails instead
- ✗ Non-agentic LLM applications — use standard OWASP tools for traditional web app security
Languages
Deployment
Pricing Detail
⚠ Known Limitations
- ⚠ Prompt hardening only supports OpenAI Agents, CrewAI, and Autogen
- ⚠ Runtime testing currently limited to OpenAI Agents only
- ⚠ Requires OpenAI API key for advanced features
- ⚠ CrewAI extras restricted to Python 3.10-3.12
Pros
- + Specialized focus on LLM agentic workflow security vulnerabilities that traditional scanners miss
- + Includes built-in visualization tools for clear security assessment reporting and analysis
- + Integrates with popular frameworks like CrewAI and provides easy PyPI installation
Cons
- - Appears to be a relatively new tool with limited documentation visibility from the provided materials
- - May require specialized knowledge of agentic systems to effectively interpret and act on scan results
Use Cases
- • Security assessment of autonomous AI agent systems before production deployment
- • Compliance auditing for organizations using LLM-powered workflows in regulated industries
- • Continuous security monitoring of agentic systems to detect emerging vulnerabilities