Check out my post explaining how an LLM can hide attacker-to-victim commands in completely natural language.
tl;dr:
By hiding information in natural language, i.e. in the positioning and frequency of certain words, an attacker could send a benign-looking email, text, etc. to their victim and have it decoded on the victim's machine to perform actions. Neither YARA rules nor classic defense tools can flag this behavior, and, if done well, the technique could even bypass human observers doing manual checks.
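To make the idea concrete, here is a minimal toy sketch of word-choice steganography (my own illustration, not the scheme from the post): each bit of a short hidden payload picks one word from a synonym pair, so the carrier message still reads as an ordinary request, and the receiving side recovers the bits by noting which synonym appears. The synonym table, template, and payload below are all hypothetical.

# Toy linguistic-steganography sketch (illustrative only, not the author's
# actual scheme): each bit of a short hidden payload is encoded by picking one
# of two interchangeable words, so the carrier message still reads like an
# ordinary email. The synonym table, template, and payload are made up.

SYNONYM_PAIRS = [
    ("quick", "brief"),      # slot 0
    ("check", "review"),     # slot 1
    ("report", "summary"),   # slot 2
    ("send", "forward"),     # slot 3
    ("today", "now"),        # slot 4
    ("thanks", "cheers"),    # slot 5
]

# Carrier template: {n} marks a slot filled from SYNONYM_PAIRS[n].
TEMPLATE = "Could you do a {0} {1} of the {2} and {3} it to me {4}? {5}!"

def encode(bits):
    """Fill each template slot with the synonym selected by the matching bit."""
    words = [pair[bit] for pair, bit in zip(SYNONYM_PAIRS, bits)]
    return TEMPLATE.format(*words)

def decode(text):
    """Recover the bits by checking which synonym of each pair appears."""
    tokens = text.lower().replace("?", " ").replace("!", " ").split()
    bits = []
    for first, second in SYNONYM_PAIRS:
        if first in tokens:
            bits.append(0)
        elif second in tokens:
            bits.append(1)
    return bits

if __name__ == "__main__":
    payload = [1, 0, 1, 1, 0, 0]  # e.g. an index into a pre-agreed command list
    message = encode(payload)
    print(message)          # reads as a normal request for a report
    print(decode(message))  # the receiving side recovers [1, 0, 1, 1, 0, 0]

A real attack in the spirit of the post would presumably have an LLM generate the carrier text so the word choices look statistically natural, rather than relying on a fixed template; the point of the sketch is only that the signal lives in which words are used, not in any payload a scanner can match on.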
from hacking: security in practice https://ift.tt/kJ6V2sA