I’m running Suricata 6.x in IPS mode with nfqueue and noticing weird latency and timeouts. Has anyone else experienced this?
I’m using these iptables rules:
iptables -A INPUT -j NFQUEUE --queue-num 0 --queue-bypass
iptables -A OUTPUT -j NFQUEUE --queue-num 0 --queue-bypass
and this config for nfq:
SSH connections to the server occasionally become unresponsive for about 20–60 seconds, and other connections to the server behave the same way (freezing for 20–60 seconds before finally completing, or sometimes just timing out). I’d estimate it happens roughly 2% of the time, but it’s hard to judge. When I shut down Suricata and remove the nfqueue rules, the issue goes away.
Memory and CPU seem fine, and the traffic to the server is very low.
There are no alerts at all in the logs related to these connections, only alerts from other IP addresses about unrelated issues (botnets scanning for VoIP vulnerabilities, etc.).
Ah, I may have messed up and had two programs attached to the same nfqueue at once.
Looks like it’s fine.
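For anyone hitting the same thing: you can see which queues are currently bound (and spot a second program holding one) by reading the kernel’s nfnetlink_queue table. A quick diagnostic sketch, assuming the nfnetlink_queue module is loaded:

```
# One line per bound queue; columns are:
# queue_num  peer_portid  queue_total  copy_mode  copy_range
# queue_dropped  user_dropped  id_sequence
cat /proc/net/netfilter/nfnetlink_queue
```

A non-zero queue_dropped or user_dropped counter here is also a useful hint that packets are being lost in the queue rather than verdicted, which shows up as exactly this kind of stall.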
In the docs it says:

“…in case of certain IPS setups (like NFQ), autofp is used.”

So autofp is the default. But with the batchcount option enabled I get this:
<Error> - [ERRCODE: SC_ERR_INVALID_ARGUMENT(13)] - nfq.batchcount is only valid in workers runmode.
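For reference, batchcount lives under the nfq section of suricata.yaml, and per that error it only takes effect once the runmode is switched to workers. A minimal sketch (the batchcount value here is just an illustrative assumption, not a recommendation):

```yaml
# suricata.yaml (excerpt)
runmode: workers

nfq:
  mode: accept      # default verdict mode
  batchcount: 20    # batch verdicts; only valid in workers runmode
```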
Is runmode: workers preferable to runmode: autofp for NFQ?
Also, if I’m using nfqueue on both the INPUT and OUTPUT chains, is it preferable to use one queue for each, e.g.:
iptables -A INPUT -j NFQUEUE --queue-num 1 --queue-bypass
iptables -A OUTPUT -j NFQUEUE --queue-num 2 --queue-bypass
and specify two queues in the run options:
suricata -q 1 -q 2
You might see better performance with the workers run mode; I’d recommend trying it.
And yes, you could use two queues, or even more, to get some load balancing.
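As a variant, iptables can also fan a single rule out across a range of queues with --queue-balance, and Suricata can then listen on all of them. A sketch assuming four queues (the queue numbers are arbitrary):

```
# Balance packets across queues 0-3; in recent kernels the balancing
# is per-flow, so a given connection sticks to one queue.
# --queue-bypass keeps traffic flowing if no program is listening.
iptables -A INPUT  -j NFQUEUE --queue-balance 0:3 --queue-bypass
iptables -A OUTPUT -j NFQUEUE --queue-balance 0:3 --queue-bypass

# Have Suricata read all four queues:
suricata -q 0 -q 1 -q 2 -q 3
```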
Would using multiple queues only give a performance improvement in workers run mode?
No, that could benefit the other mode as well, though likely with less impact.