I have published a comprehensive repository for conducting AI/LLM red team assessments across LLMs, AI agents, RAG pipelines, and enterprise AI applications.
The repo includes:
- AI/LLM Red Team Field Manual — operational guidance, attack prompts, tooling references, and OWASP/MITRE mappings.
- AI/LLM Red Team Consultant’s Handbook — full methodology, scoping, RoE/SOW templates, threat modeling, and structured delivery workflows.
Designed for penetration testers, red team operators, and security engineers delivering or evaluating AI security engagements.
📁 Includes:
Structured manuals (MD/PDF/DOCX), attack categories, tooling matrices, reporting guidance, and a growing roadmap of automation tools and test environments.
🔗 Repository: https://github.com/shiva108/ai-llm-red-team-handbook
If you work with AI security, this provides a ready-to-use operational and consultative reference for assessments, training, and client delivery. Contributions are welcome.
[link] [comments]
from hacking: security in practice https://ift.tt/nZzaA0F
Comments
Post a Comment