Suricata consuming high memory

Please include the following information with your help request:

  • Suricata version : 7.0.7
  • Operating system and/or Linux distribution : Linux 5.15.158-yocto-standard
  • How you installed Suricata (from source, packages, something else): source

We were using Suricata 6.0.x and later updated it to 7.0.7.
Now, without any traffic being sent to Suricata, below is its memory consumption:
linux:/home/admin# ps_mem | grep Surica
238.0 MiB + 12.5 KiB = 238.0 MiB Suricata-Main

Can we optimize this memory usage?
Attaching suricata.yaml for reference. Kindly let us know if there is anything we can comment out in the YAML file so that Suricata uses a reasonable amount of physical memory.
suricata.yaml (70.8 KB)

Additional info:
top - 06:21:48 up 28 min, 1 user, load average: 0.60, 1.04, 1.04
Tasks: 297 total, 3 running, 294 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.5 us, 15.5 sy, 0.0 ni, 69.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 7603.5 total, 2083.5 free, 1929.9 used, 3590.2 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 5540.9 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14229 root 20 0 647796 222888 4732 S 0.0 2.9 0:09.67 Suricata-Main

Info:
:/etc/suricata# ps -L -p 14229
PID LWP TTY TIME CMD
14229 14229 ? 00:00:05 Suricata-Main
14229 14276 ? 00:00:00 W#01-ids
14229 14277 ? 00:00:00 W#02-ids
14229 14278 ? 00:00:00 W#03-ids
14229 14279 ? 00:00:00 W#04-ids
14229 14280 ? 00:00:12 FM#01
14229 14281 ? 00:00:00 FR#01

Please post the suricata.log as well so we can see the startup log. How many rules do you use?
That said, with 4 worker threads and 2 management threads, the amount of memory looks quite okay.

@Andreas_Herz , Thank you for your reply.
suricata.log (7.9 KB)
We use 9243 rules as of now.
Kindly let us know if there is a possibility to reduce memory consumption.

@Andreas_Herz , is there anything we can disable in the YAML file to reduce memory consumption?

You can reduce the values of the memcap configuration settings and try again. Note that memcap values limit the amount of memory used by Suricata while processing packets.

There are memuse stats that correspond to the memcap values; if the memuse value for a specific memcap is less than the memcap value, you can consider reducing the memcap value to be closer to the memuse value.
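
For reference, here is a minimal sketch of the memcap settings that are usually compared against their memuse counters; the values shown are just typical defaults from a stock suricata.yaml, not a recommendation for your setup:

defrag:
  memcap: 32mb          # compare with the defrag.memuse stat
flow:
  memcap: 128mb         # compare with the flow.memuse stat
stream:
  memcap: 64mb          # compare with the tcp.memuse stat
  reassembly:
    memcap: 256mb       # compare with the tcp.reassembly_memuse stat

The corresponding counters show up in stats.log (or via the unix socket dump-counters command).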

Is this an 8GB VM?

@Jeff_Lucovsky , thank you for your reply.
As part of the memcap investigation, I first looked at the defrag: section.
Kindly suggest what the best values would be for the following:

defrag:
  memcap: 32mb
  hash-size: 65536
  trackers: 65535        # number of defragmented flows to follow
  max-frags: 65535       # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

We have 7603.5 MiB of RAM.

Compare defrag.memcap with the stat value defrag.memuse. The stat indicates how much memory is actually used for defragmentation, while the memcap value limits the amount of memory that can be used for it.

If the memcap exceeds the memuse value, then you might be able to decrease the memcap value to limit the amount of memory that could be used.

Note that if the memuse values are already lower than the corresponding memcap value, lowering the memcap value won’t have any effect on Suricata’s memory usage. Memcaps are limits; memuse values indicate how much is used relative to the limit.
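
To make that concrete with a sketch (the numbers below are invented for illustration, not measured on your system): if the stats showed defrag.memuse staying around 1mb under normal load, the defrag memcap could be tightened along these lines:

defrag:
  memcap: 8mb           # illustrative only: down from 32mb; reasonable only while defrag.memuse stays well below it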

@Jeff_Lucovsky ,

defrag.memuse : the memory currently used.
defrag.memcap : the maximum memory that can be used.

If I am not wrong, as long as memuse has not reached memcap, we can reduce the memcap values to make sure that Suricata consumes less memory. I hope I am correct?

Reducing memcap values will limit how much memory is used for that area (e.g., defragmentation handling memory use is bounded by defrag.memcap).

So yes, that’s correct.
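
As a further sketch along the same lines (again with illustrative numbers, and assuming your flow.memuse counter also stays low), the flow engine can be capped the same way:

flow:
  memcap: 32mb          # illustrative only: default is 128mb; lower it only if flow.memuse stays well below it
  hash-size: 65536
  prealloc: 10000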

@Jeff_Lucovsky , thank you. I could see some improvement with respect to defrag: and flow:.
I want to ask for your suggestion regarding the number of worker and management threads.
What is the minimum number for both?
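
For context, here is a rough sketch of where these are set in our config, assuming af-packet capture (the interface name is a placeholder, and the managers/recyclers lines are the commented-out defaults from a stock suricata.yaml):

af-packet:
  - interface: eth0     # placeholder interface; threads sets the number of worker threads
    threads: 4
flow:
  #managers: 1          # number of flow manager (FM) threads
  #recyclers: 1         # number of flow recycler (FR) threads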

I’d be happy to chime in, but please open a new discussion topic so others can search and participate more easily.

Thank you @Jeff_Lucovsky ,
Here is the new topic.