Hi guys, I’ve been looking at some data and trying to make sense of it. My company has a server running Windows Server 2012 R2 that acts as a gateway: it processes a bunch of TCP messages and forwards them on to other systems. I’m trying to understand some of the latency behavior, and I noticed that packets with different window sizes / window scale factors get treated differently.
I know that on Linux the receive-buffer memory is only allocated as it’s actually used, but on Windows I’m not sure how the memory and NUMA nodes work. Could sending a big window size force the OS to manage more memory, so that the packet takes longer to process, and could that also impact other incoming packets by “stealing” thread processing time?
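For context on what I mean by a “big window size”, here’s a quick back-of-the-envelope check (just illustrative Python, nothing specific to our setup): the effective receive window is the 16-bit window field shifted left by the scale factor negotiated in the SYN, so the maximum scale of 14 (per RFC 7323) lets a peer advertise roughly 1 GiB.

```python
# Effective receive window = 16-bit window field << negotiated window scale.
# Scale 14 is the maximum allowed by RFC 7323.
for wscale in (0, 7, 14):
    effective = 0xFFFF << wscale            # largest advertisable window at this scale
    print(f"scale {wscale:2d} -> up to {effective / 2**20:8.2f} MiB advertised")
```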
Does that make any sense? I’ve been trying to reproduce this behavior in some tests, but with no success. I don’t know if the volume of traffic is a variable, because in production we can get more than 500 million requests a day and I can’t generate that many. Are there any other variables/configurations I should look at when it comes to denial-of-service attacks?
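This is roughly what my test attempts look like (a sketch using Scapy in Python; the target address and port below are placeholders, not our real gateway): I send SYNs advertising different window scale factors and time how long the SYN/ACK takes, to see whether the bigger advertised windows are handled any slower.

```python
# Sketch: send SYNs with different window scale factors and compare how long
# the gateway takes to answer with a SYN/ACK. TARGET_IP/TARGET_PORT are placeholders.
import time
from scapy.all import IP, TCP, sr1, RandShort

TARGET_IP = "192.0.2.10"   # placeholder gateway address
TARGET_PORT = 443          # placeholder service port

for wscale in (0, 4, 8, 14):
    syn = IP(dst=TARGET_IP) / TCP(
        sport=RandShort(),
        dport=TARGET_PORT,
        flags="S",
        window=65535,                          # advertised 16-bit receive window
        options=[("MSS", 1460), ("WScale", wscale)],
    )
    t0 = time.time()
    reply = sr1(syn, timeout=2, verbose=False)  # wait for the SYN/ACK (or time out)
    rtt = (time.time() - t0) if reply is not None else None
    print(f"wscale={wscale:2d} handshake_rtt={rtt}")
```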