I'm currently looking for a way to change the "mind" of a machine learning algorithm. It's something I'm studying and trying to achieve in order to show how insecure ML algorithms are. In fact, if you think about it, if an ML algorithm is trained to distinguish between male and female on a dataset whose labels have been switched, it will systematically return the wrong class.
Now I want to prove that a bot detection system can be induced into error, making it effectively useless. Does anyone have an idea of how this can be done?
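The label-switching attack described above can be sketched in a few lines. This is a minimal illustration, not a real attack on any deployed system: it assumes scikit-learn, and the synthetic dataset and logistic-regression classifier are stand-ins for whatever model a bot detector might actually use.

```python
# Label-flipping (data poisoning) sketch: train the same classifier on
# clean labels and on flipped labels, then compare accuracy on a clean
# test set. With every training label inverted, the model learns the
# opposite decision rule.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary dataset standing in for a "bot vs. human"
# (or "male vs. female") classification task.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_train, y_train)
clean_acc = clean_model.score(X_test, y_test)

# Poison the training set: flip every label (0 <-> 1).
poisoned_model = LogisticRegression().fit(X_train, 1 - y_train)
poisoned_acc = poisoned_model.score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

In practice an attacker usually can't flip every label; partial poisoning (flipping only a fraction of the training data) degrades accuracy more gradually, and evasion attacks (perturbing inputs at test time) work without touching the training set at all.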
from hacking: security in practice https://ift.tt/2WvJ2jZ