OWASP says prompt injection is the #1 LLM threat for 2025. What's your strategy?

OWASP ranked prompt injection as the #1 LLM security threat for 2025. As a security lead, I'm seeing this everywhere now.

Invisible instructions hidden in PDFs, images, even Base64-encoded text can completely hijack agent behavior.
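The Base64 case is easy to demo. Here's a minimal sketch of how you might flag that kind of payload in retrieved content before it reaches the model: it decodes Base64-looking runs and checks both the raw text and the decoded payloads against a few instruction-like phrases. The phrase list and thresholds are my own illustrative assumptions, not a production detector.

```python
import base64
import re

# Illustrative trigger phrases -- a real system would use a tuned classifier.
SUSPICIOUS_PHRASES = ["ignore previous instructions", "system prompt", "you are now"]


def decode_base64_blobs(text: str) -> list[str]:
    """Find Base64-looking runs (24+ chars) in untrusted text and return any decoded UTF-8 strings."""
    decoded = []
    for blob in re.findall(r"[A-Za-z0-9+/]{24,}={0,2}", text):
        try:
            decoded.append(base64.b64decode(blob, validate=True).decode("utf-8"))
        except Exception:
            continue  # not valid Base64, or not text -- skip it
    return decoded


def flags_injection(text: str) -> bool:
    """True if the text, or any Base64 payload hidden inside it, looks like an injected instruction."""
    candidates = [text] + decode_base64_blobs(text)
    return any(p in c.lower() for c in candidates for p in SUSPICIOUS_PHRASES)
```

Phrase matching alone is trivially bypassable (synonyms, other encodings, other languages), which is exactly why it can't be the whole strategy.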

Your customer service bot could be leaking PII. Your RAG system could be executing arbitrary commands. The scary part is that most orgs have zero detection in place. We need runtime guardrails, not just input sanitization.
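By "runtime guardrail" I mean a check that sits on the model's *output* path, not just its input. A minimal sketch of the PII-leak case, assuming a simple regex-based redactor (a real deployment would use a proper PII detection service):

```python
import re

# Illustrative PII patterns only -- real PII detection needs far more than two regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def guard_output(model_reply: str) -> str:
    """Runtime guardrail: redact PII from a model reply before it reaches the user."""
    for label, pattern in PII_PATTERNS.items():
        model_reply = pattern.sub(f"[REDACTED {label.upper()}]", model_reply)
    return model_reply
```

The point is architectural: even if an injected instruction gets past input filtering and convinces the bot to dump customer records, the output-side check still fires.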

What's your current defense strategy? Would love to exchange ideas here.

submitted by /u/Infamous_Horse

from hacking: security in practice https://ift.tt/cWHSLKE
