Attackers are now leveraging voice cloning, AI-generated video, and synthetic personas to build trust with their targets.
Imagine getting a call from a parent, relative, or close friend asking for an urgent wire transfer because of an emergency.
I'm curious: have you personally encountered or investigated cases where generative AI was used maliciously, whether in scams, pentests, or training exercises?
How did you identify it? Which countermeasures do you think worked best?
from hacking: security in practice https://ift.tt/XSIbt7K