Recommendations for sizing nf_queue

Suricata version 7.0.10 RELEASE
openSUSE Leap 15.6
Linux 6.4.0

Today when inspecting the system log, a large number of these entries were present:

2025-06-17T09:31:14-0700 sma-server3 kernel: net_ratelimit: 24 callbacks suppressed
2025-06-17T09:31:14-0700 sma-server3 kernel: nfnetlink_queue: nf_queue: full at 4096 entries, dropping packets(s)
2025-06-17T09:31:14-0700 sma-server3 kernel: nfnetlink_queue: nf_queue: full at 4096 entries, dropping packets(s)

This essentially blocked network access, and the messages persisted until I terminated Suricata.

Any recommendations for increasing the size of nf_queue?
Or other measures to prevent this happening again?
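
As far as I know, the 4096-entry limit is the queue length that the listening program (Suricata) requests when it binds the queue, not a sysctl you can raise directly. Two knobs that do exist at the iptables level are `--queue-bypass` and `--queue-balance`; a sketch only, with the chain and queue numbers as placeholders for whatever your ruleset actually uses:

```
# --queue-bypass: accept packets instead of dropping them when no program
# is listening on the queue (e.g. after Suricata exits or crashes).
iptables -I FORWARD -j NFQUEUE --queue-num 0 --queue-bypass

# --queue-balance: fan packets out across several queues so the load is
# shared (Suricata would then need to listen on each: -q 0 -q 1 -q 2 -q 3).
iptables -I FORWARD -j NFQUEUE --queue-balance 0:3 --queue-bypass
```

Note that `--queue-bypass` only covers the no-listener case; for the queue-full case there is the fail-open mechanism, which the listener has to enable.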

Hello,

could you share the Suricata log and/or stats log for that same period? I couldn't find those messages in Suricata's own outputs…

It is common for a fair amount of time to pass between entries in fast.log. Up to an hour is not uncommon. However:

06/17/2025-00:20:50.544024  [Drop] [**] [1:2402000:7400] ET DROP Dshield Block Listed Source group 1 [**] [Classification: Misc Attack] [Priority: 2] {TCP} 64.62.197.64:43332 -> 192.168.69.246:25
06/17/2025-14:25:56.081646  [Drop] [**] [1:2402000:7400] ET DROP Dshield Block Listed Source group 1 [**] [Classification: Misc Attack] [Priority: 2] {UDP} 176.65.148.139:36715 -> 192.168.69.246:53

A gap of over 12 hours has never occurred before.
I am not clear on which part of the stats you would want.

Indeed, fast.log won’t have a lot of info, as it only contains alerts.

Does suricata.log also look empty?

Since I was trying to correlate those entries with what Suricata was seeing: the whole stats event (if using EVE logging), or the stats.log file, if possible.

But if that’s not possible or desired, one thing that could offer some relief (as I’m unfortunately not able to suggest a value for the nf_queue size) is the fail-open option in the suricata.yaml file – but maybe you’ve already checked that section?
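
For reference, a minimal sketch of that section in suricata.yaml (fail-open is off by default and, as far as I remember, needs a reasonably recent kernel; when enabled, the kernel accepts packets uninspected instead of dropping them once the queue fills up):

```
nfq:
  mode: accept
  fail-open: yes  # pass packets through uninspected when nf_queue is full
```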

I have been studying the EVE file for the time period where the network became sketchy. I terminated Suricata at 9:31:30, yet the EVE file shows no sign that this happened; it continued to log network events without pause, and there was no discernible change in logging when Suricata restarted.

For instance:
At 9:29:24 (before shutdown) the stat "uptime":253509.
At 9:57:54 (after restart) the stat "uptime":255189.
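
Doing the arithmetic on those two stats entries:

```python
from datetime import datetime

# Timestamps of the two stats records (same day)
before = datetime(2025, 6, 17, 9, 29, 24)
after = datetime(2025, 6, 17, 9, 57, 54)

wall_delta = (after - before).total_seconds()  # wall-clock seconds between records
uptime_delta = 255189 - 253509                 # growth of the "uptime" counter

print(wall_delta)    # 1710.0
print(uptime_delta)  # 1680
```

The counter grew by roughly the entire wall-clock interval, as if the process had never stopped running.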

Is this expected?

Sorry for the delay. Any chance there are two Suricata instances running? (one system-run, another user-run?)

Possibly, but not likely: because this has happened more than once, I watch for multiple instances of Suricata.

The issue has not arisen since I posted the request.

I had changed a backup script, which caused a lot of network traffic for a long time. Occasionally the remote host loses its mind and needs a reboot, so I did that and rebooted the Suricata host as well.

Since then, no problem with data transfer, not even with the large backup that took 30 hours to complete. (I am looking into that; the backup should have finished much faster.)