I have an open source project that can be helpful for investigating any environment at runtime.
In my case, it gave me a way to investigate request handling across a network of microservices:
I worked on a SIP server. A simple SIP call was routed through approximately 20 microservices, and from time to time I needed to track down bugs in SIP calls.
The problem was how to aggregate dumps, logs, and configurations from 20 microservices during a call. We had ELK, but it didn't contain all the information I needed (for example, tcp dumps).
I wrote the first version of Daggy, which gave me the following solution:
1. I created a sources config describing all the microservices, with the data streams and configs I was interested in (see the sketch after this list)
2. I ran Daggy with this config
3. I made the SIP call
4. I stopped Daggy
5. I had all the information about the call on my localhost - each stream was saved to a separate file
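
Here is a simplified sketch of such a sources config. The host name, address, login, and key path are placeholders, and only one of the ~20 hosts is shown:

    sources:
        sip_proxy:                # placeholder name for one microservice host
            host: 192.168.1.10    # placeholder address
            type: ssh2            # Daggy connects over ssh and runs the commands remotely
            connection:
                login: user
                key: /home/user/.ssh/id_rsa
            commands:
                entireNetwork:    # each command's output stream is saved to its own file
                    exec: tcpdump -i any -s 0 port not 22 -w -
                    extension: pcap
                journaldLog:
                    exec: journalctl -f
                    extension: log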
But I also have an idea: building a common database of data aggregation snippets.
For example, this snippet dumps the network traffic:
    entireNetwork:
        exec: tcpdump -i any -s 0 port not 22 -w -
        extension: pcap
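
Here `-w -` makes tcpdump write the raw pcap stream to stdout, which is what lets Daggy capture and save it; `-s 0` captures full packets, and `port not 22` keeps the ssh session that transports the dump out of the capture itself.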
And this snippet aggregates the log from journalctl:
    journaldLog:
        exec: journalctl -f
        extension: log
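
A session with snippets like these would look roughly as follows (assuming Daggy is invoked with the path to the config; the output file names are illustrative):

    $ daggy sources.yaml
    # make the SIP call, then stop daggy (Ctrl+C);
    # each stream is saved to a separate file, e.g. entireNetwork.pcap and journaldLog.log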
What do you think about this idea?
Do you have any data aggregation snippets that you could share with the community?