Suricata memory allocation

Hi All,

I’m interested in testing Suricata in a resource-constrained environment.

What’s the best way for me to limit the amount of RAM available to Suricata?

Thanks,

You can limit the memory available to Suricata with ulimit -v. The -v option sets the virtual memory limit:
virtual memory (kbytes, -v) unlimited
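For example, something like the following (the 2 GB value and the interface name are just illustrative); ulimit applies to the shell and the processes it starts, so set it in the shell that launches Suricata:

# cap virtual memory at 2 GB (the value is in kbytes, as shown above)
ulimit -v 2097152
suricata -c /etc/suricata/suricata.yaml -i eth0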

OK thanks,

Does this have to be configured before Suricata starts, or will it apply to a currently running instance?

Apologies, I’m fairly new to Suricata.

Thanks,

I think so, yes. I’m no expert on ulimit either, so you may need to experiment a bit.

Btw, if you want to simulate out of memory conditions during packet processing, make sure to set the limit high enough that the initial startup can succeed.
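If you do need to adjust a process that is already running, prlimit (from util-linux) can change resource limits on an existing PID. A rough sketch, where the 2 GB value and the pidof lookup are just illustrative:

# set RLIMIT_AS (in bytes) on the running Suricata process
prlimit --as=2147483648 --pid $(pidof suricata)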

Great, thanks for your help

You can also use Linux “control groups” to limit the amount of memory used. See https://en.wikipedia.org/wiki/Cgroups for an overview.
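As a sketch, assuming cgroup v2 and systemd are available (the 1G limit and the command line are just examples), you could start Suricata in a scope with a memory cap:

systemd-run --scope -p MemoryMax=1G suricata -c /etc/suricata/suricata.yaml -i eth0

Unlike RLIMIT_AS, this caps the memory usage of the whole group rather than the per-process virtual address space.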

Unfortunately, using ulimit or prlimit might lead to a Suricata crash. The libraries on which Suricata depends do not always handle memory allocation failures gracefully. In the example below, the Rust allocation error handler aborts on a memory allocation failure after hitting the ulimit/prlimit (RLIMIT_AS) limit:

(gdb) bt
#0  0x00007f9f1dc2a18b in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007f9f1dc09859 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x000055d3e280b487 in std::sys::unix::abort_internal () at library/std/src/sys/unix/mod.rs:167
#3  0x000055d3e2837476 in std::process::abort () at library/std/src/process.rs:1623
#4  0x000055d3e2829c0e in std::alloc::rust_oom (layout=...) at library/std/src/alloc.rs:310
#5  0x000055d3e2866c07 in alloc::alloc::handle_alloc_error (layout=...) at library/alloc/src/alloc.rs:322
#6  0x000055d3e2893fb4 in alloc::raw_vec::RawVec<T,A>::reserve (self=0x7f9f0c383f78, len=<optimized out>, additional=<optimized out>)
    at /build/rustc-n7HJ8w/rustc-1.47.0+dfsg1+llvm/library/alloc/src/raw_vec.rs:310
#7  0x000055d3e27aa5cc in alloc::vec::Vec<T>::reserve (self=0x7f9f0c383f78, additional=1460)
    at /build/rustc-n7HJ8w/rustc-1.47.0+dfsg1+llvm/library/alloc/src/vec.rs:497
#8  <alloc::vec::Vec<T> as alloc::vec::SpecExtend<&T,core::slice::Iter<T>>>::spec_extend (self=0x7f9f0c383f78, iterator=...)
    at /build/rustc-n7HJ8w/rustc-1.47.0+dfsg1+llvm/library/alloc/src/vec.rs:2223
#9  <alloc::vec::Vec<T> as core::iter::traits::collect::Extend<&T>>::extend (self=0x7f9f0c383f78, iter=...)
    at /build/rustc-n7HJ8w/rustc-1.47.0+dfsg1+llvm/library/alloc/src/vec.rs:2368
#10 suricata::filetracker::FileTransferTracker::update (self=0x7f9f0437d5c0, files=<optimized out>, flags=<optimized out>, data=...,
    gap_size=0) at src/filetracker.rs:315
#11 0x000055d3e27b6ce5 in suricata::smb::files::<impl suricata::smb::smb::SMBState>::filetracker_update (self=<optimized out>,
    direction=<optimized out>, data=..., gap_size=<optimized out>) at src/smb/files.rs:214
#12 0x000055d3e27b3238 in suricata::smb::smb::SMBState::parse_tcp_data_tc (self=0x7f9f04343860, i=...) at src/smb/smb.rs:1630
#13 suricata::smb::smb::rs_smb_parse_response_tcp (flow=<optimized out>, state=0x7f9f04343860, _pstate=<optimized out>, input=<optimized out>,
    input_len=1460, _data=<optimized out>, flags=8) at src/smb/smb.rs:1901
#14 0x000055d3e246cd5f in RustSMBTCPParseResponse (f=0x55d3e5b80950, state=0x7f9f04343860, pstate=0x7f9f0c346f90,
    input=0x7f9f0ff02944 

Could you provide some details on system memory availability and whether control groups (or similar) are in use?

In our case the system has 8GB of RAM. We noticed that with a cgroup memory constraint on the Suricata container (using the docker --memory option), the container would be killed by the kernel OOM killer on hitting the memory limit, without any visibility into the memory usage. We also tried dropping the Docker memory limit and instead setting ulimit -v on the Suricata process. With RLIMIT_AS set, Suricata crashed while doing an SMB transfer. Looking at the gdb backtrace, it was apparent that it is not safe to set RLIMIT_AS on Suricata, since the dependent libraries do not always handle out-of-memory conditions gracefully.
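For reference, the container limit was applied roughly like this (the 2g value and the image name are placeholders, not our exact setup):

docker run --memory=2g <suricata-image>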

Can you try limiting the SMB inspection depth?

In suricata.yaml, set the stream-depth to 20mb to see what effect this has.

app-layer:
  protocols:

    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445

      # Stream reassembly size for SMB streams. By default track it completely.
      stream-depth: 20mb

@Jeff_Lucovsky Yes, this setting does help to limit memory for SMB flows. Based on my understanding, this affects the TCP reassembly depth for the SMB flow (please correct me as needed). If Suricata is working in IPS mode, what does this stream-depth limit translate to? Does it mean the scan window size is limited to 20MB for the entire flow, or does it mean only the first 20MB of an SMB flow is scanned? What impact does it have on security/rules? Was there any reason why stream-depth was set to 0 by default for SMB (SMB_CONFIG_DEFAULT_STREAM_DEPTH)?

The depth is either 0 (track and inspect the stream completely) or a value. If it is not 0, the value specifies how much of the flow is scanned/inspected.
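In suricata.yaml terms, a sketch of the two forms (comments reflect the description above):

app-layer:
  protocols:
    smb:
      # 0 = track the stream completely (the SMB default)
      stream-depth: 0
      # a non-zero value, e.g. 20mb, limits how much of the flow is scanned/inspected
      # stream-depth: 20mb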

This has implications, of course, and it depends on a few things:

  1. Rules used
  2. Threat likelihood
  3. Environment/deployment

Thanks for the update, Jeff. I am trying to understand how this translates to a sample use case. What would the behavior be when a client transfers, say, 100 x 30MB files to the SMB server? With a 20MB stream-depth setting, would the engine scan each of the 100 files to a depth of 20MB, or would it only scan the first 20MB of the flow between the client and server, irrespective of the number of files transferred?