← Back to Home

AI Security Resources

Curated resources for AI red teaming, LLM security testing, and securing AI/ML systems

A comprehensive collection of resources for security professionals working with AI systems. Use these guides to understand AI attack vectors and tools to test AI/ML implementations.

📖

Red Teaming Guides & Resources

OWASP GenAI Red Teaming Guide

Comprehensive guide from OWASP for red teaming generative AI applications including LLMs.

genai.owasp.org/resource/genai-red-teaming-guide/

Microsoft AI Red Team

Microsoft's best practices and learnings for AI red teaming operations.

learn.microsoft.com/en-us/security/ai-red-team/

AI Penetration Testing Guide

Comprehensive repository covering AI/ML penetration testing methodologies and techniques.

github.com/Mr-Infect/AI-penetration-testing

Awesome LLM Security

Curated list of LLM security resources, papers, tools, and research.

github.com/corca-ai/awesome-llm-security

LLM Security Research

Research and resources on LLM security vulnerabilities and attack vectors.

github.com/greshake/llm-security

PALLMS - Prompt Attack Library for LLMs

Collection of prompt injection attacks and jailbreaks for testing LLM security.

github.com/mik0w/pallms

Red Teaming LLMs Guide

Step-by-step guide for red teaming large language models from Confident AI.

confident-ai.com/blog/red-teaming-llms-a-step-by-step-guide

ARC PI Taxonomy

Prompt injection taxonomy and classification from Arcanum Security.

github.com/Arcanum-Sec/arc_pi_taxonomy

Awesome ChatGPT Prompts

Collection of prompt examples for understanding LLM behavior and capabilities.

prompts.chat/prompts

Awesome GPTs Agents

A a comprehensive list of GPT agents focused on cybersecurity (offensive and defensive), created by the community.

github.com/fr0gger/Awesome-GPT-Agents

Offensive ML Playbook

It is focused heavily on attacks that have code you can use to perform the attacks right away, rather than a database of research papers.

wiki.offsecml.com/Welcome+to+the+Offensive+ML+Playbook

AI Red Teaming Guide

Comprehensive guide for conducting AI red team exercises.

github.com/requie/AI-Red-Teaming-Guide

AI Agents for Cybersecurity

Covering cutting-edge AI agent technologies and their applications in cybersecurity operations

github.com/santosomar/AI-agents-for-cybersecurity

Prompt Engineering Guide

Efficiently use language models for a wide variety of applications and research topics

www.promptingguide.ai
🔧

AI Security Tools

PyRIT

Python Risk Identification Tool for generative AI. Automates AI red teaming tasks.

github.com/Azure/PyRIT

Giskard

Open-source testing framework for ML models. Detect bias, security issues, and performance problems.

github.com/Giskard-AI/giskard-oss

PromptMap

promptmap2 is a an automated prompt injection scanner for custom LLM applications

github.com/utkusen/promptmap

FuzzyAI

Fuzzing framework for AI/ML models from CyberArk. Test model robustness and find edge cases.

github.com/cyberark/FuzzyAI

Spikee

Security testing toolkit for AI applications from Reversec Labs.

github.com/ReversecLabs/spikee

Viper

Graphical penetration testing platform with AI integration capabilities.

github.com/FunnyWolf/Viper

Garak

Generative AI Red-teaming & Assessment Kit

github.com/NVIDIA/garak

Agentic Security

A vulnerability scanner for Agent Workflows and Large Language Models.

github.com/msoedov/agentic_security

Llamator

Red Teaming python-framework for testing chatbots and GenAI systems

github.com/LLAMATOR-Core/llamator

CAI

Cybersecurity AI (CAI), the framework for AI Security

github.com/aliasrobotics/cai

HexStrike AI

AI-Powered MCP Cybersecurity Automation Platform

github.com/0x4m4/hexstrike-ai

About AI Security Testing

AI systems introduce unique security challenges that traditional security testing may not address. Key areas of concern include:

Use the resources above to learn about these attack vectors and test your AI implementations for vulnerabilities.