Hi, I am getting familiar with the overall Suricata architecture, so I installed it on one of my Linux systems. I would like to feed packets in through shared memory instead of the existing capture mechanisms, so I wrote a new packet acquisition module (PAQ) that reads packets from shared memory and feeds them to the engine. So far so good: I see Suricata identify the malicious traffic, detect it, and log it. But performance suffers badly with concurrent HTTP sessions (1000 HTTP sessions) when libhtp is enabled. If I disable libhtp, overall performance is good; with libhtp enabled, performance drops by up to 80%. Any input on why libhtp is hurting performance to this extent? Any remedy to overcome this? Should we enable libhtp, or is it not mandatory to use it? I am using Suricata 6.0.1 with libhtp 0.5.37.
libhtp is used for the application-layer parsing of HTTP traffic.
It makes a lot of sense to me that it would be a resource hog if Suricata is fed a lot of HTTP sessions.
I would not disable libhtp if I needed HTTP logging or the HTTP alert keywords.
libhtp also handles decompression of compressed HTTP data (e.g. gzip/deflate). I would profile with oprofile/perf/Intel VTune to get data on where within libhtp the memory-access or CPU-cycle overhead is.
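As a rough sketch, profiling a running Suricata with perf could look like the following (assuming perf is installed, Suricata is already running as a single process, and `pidof` is just one way to find its PID):

```shell
# Sample call stacks of the running Suricata process for 30 seconds.
# -g records call graphs, so libhtp frames appear with their callers.
perf record -g -p "$(pidof suricata)" -- sleep 30

# Summarize hot functions; filtering on "htp" highlights where HTTP
# parsing spends its cycles (htp_* symbols, inflate for gzip, etc.).
perf report --stdio | grep -i htp
```

This should make it clear whether the time goes into parsing itself, body decompression, or memory management inside libhtp.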
Thanks for the quick reply, appreciate it. I also observed that request-body-limit and response-body-limit play a major role in such tests. The suricata.yaml from git has a value of 100kb, while the documentation says 3072. With 3072 the performance is better, which is expected since the parser works on less data. So what is the expected value?
I would go with the suggested value in the yaml. Everything in production deployments of Suricata is a balance between performance and accuracy/coverage. As you have seen in your testing, adjusting that libhtp value has a performance impact; lower it too much and you might miss important alert events. It all depends on the type and purpose of the tests, too.
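For reference, those limits live under the libhtp section of suricata.yaml; a minimal sketch with the values discussed above (not a tuning recommendation):

```yaml
app-layer:
  protocols:
    http:
      enabled: yes
      libhtp:
        default-config:
          # Shipped yaml uses 100kb; lowering (e.g. to 3072) speeds up
          # parsing but inspects less body data, so alerts may be missed.
          request-body-limit: 100kb
          response-body-limit: 100kb
```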
As another suggestion, if you are using Suricata 6, always try the latest version available; at the moment that is 6.0.3.