somebody’s letting ai write malware now?

been lurking and noticed a crazy trend lately. ai is writing malware these days, like reading cves, crafting exploits, even cracking passwords. feels like the pentesting playground just grew a lot more chaotic.

i’ve been messing with ai tools. prompt chaining, sandboxed payload tests, RAG models but damn, the worst part is how easily they can get tricked into doing bad things with minimal code. it’s not ultra-sophisticated, just cleverly prompted.

i’ve tried a few courses to help keep my setup legit. haxorplus had some modules teaching you to use ai for ethical research and pentesting workflows, HTB too (a classic) and tryhackme. low-key helpful for getting the mindset before going full wild west.

any of you fighting this trend? prompts that spin harmlessly vs ones that go haywire? share your fails, your wild chain exploits, or whatever you’re seeing, i feel like we’re collectively figuring out how to police the next-gen hackers, and i’m curious how you're handling it.

submitted by /u/c1nnamonapple
[link] [comments]

from hacking: security in practice https://ift.tt/cOKYvhG

Comments