Curated resources for AI red teaming, LLM security testing, and securing AI/ML systems
A comprehensive collection of resources for security professionals working with AI systems. Use these guides to understand AI attack vectors and tools to test AI/ML implementations.
Comprehensive guide from OWASP for red teaming generative AI applications including LLMs.
genai.owasp.org/resource/genai-red-teaming-guide/Microsoft's best practices and learnings for AI red teaming operations.
learn.microsoft.com/en-us/security/ai-red-team/Comprehensive repository covering AI/ML penetration testing methodologies and techniques.
github.com/Mr-Infect/AI-penetration-testingCurated list of LLM security resources, papers, tools, and research.
github.com/corca-ai/awesome-llm-securityResearch and resources on LLM security vulnerabilities and attack vectors.
github.com/greshake/llm-securityCollection of prompt injection attacks and jailbreaks for testing LLM security.
github.com/mik0w/pallmsStep-by-step guide for red teaming large language models from Confident AI.
confident-ai.com/blog/red-teaming-llms-a-step-by-step-guidePrompt injection taxonomy and classification from Arcanum Security.
github.com/Arcanum-Sec/arc_pi_taxonomyCollection of prompt examples for understanding LLM behavior and capabilities.
prompts.chat/promptsA a comprehensive list of GPT agents focused on cybersecurity (offensive and defensive), created by the community.
github.com/fr0gger/Awesome-GPT-AgentsIt is focused heavily on attacks that have code you can use to perform the attacks right away, rather than a database of research papers.
wiki.offsecml.com/Welcome+to+the+Offensive+ML+PlaybookComprehensive guide for conducting AI red team exercises.
github.com/requie/AI-Red-Teaming-GuideCovering cutting-edge AI agent technologies and their applications in cybersecurity operations
github.com/santosomar/AI-agents-for-cybersecurityEfficiently use language models for a wide variety of applications and research topics
www.promptingguide.aiPython Risk Identification Tool for generative AI. Automates AI red teaming tasks.
github.com/Azure/PyRITOpen-source testing framework for ML models. Detect bias, security issues, and performance problems.
github.com/Giskard-AI/giskard-osspromptmap2 is a an automated prompt injection scanner for custom LLM applications
github.com/utkusen/promptmapFuzzing framework for AI/ML models from CyberArk. Test model robustness and find edge cases.
github.com/cyberark/FuzzyAISecurity testing toolkit for AI applications from Reversec Labs.
github.com/ReversecLabs/spikeeGraphical penetration testing platform with AI integration capabilities.
github.com/FunnyWolf/ViperA vulnerability scanner for Agent Workflows and Large Language Models.
github.com/msoedov/agentic_securityRed Teaming python-framework for testing chatbots and GenAI systems
github.com/LLAMATOR-Core/llamatorAI systems introduce unique security challenges that traditional security testing may not address. Key areas of concern include:
Use the resources above to learn about these attack vectors and test your AI implementations for vulnerabilities.