Guidance on understanding increased memory usage in Suricata based on heap analysis

Hi,
We have Suricata 4.1.10 running in IPS mode on a memory-constrained device. At times the memory consumption of Suricata exceeds 2 GB. When it exceeded 1.6 GB, we triggered core dumps, and analyzing the non-instrumented cores with VMware's chap tool gives the following (two separate cores):

chap> count allocations
1663049 allocations use 0xa3ce6e58 (2,748,214,872) bytes.
chap> count leaked
3 allocations use 0x8058 (32,856) bytes.
chap> count unreferenced
1 allocations use 0x18 (24) bytes.
chap>
chap> summarize allocations /sortby bytes
Unrecognized allocations have 1633046 instances taking 0xa3282fd0(2,737,319,888) bytes.
   Unrecognized allocations of size 0x8008 have 40448 instances taking 0x4f04f000(1,325,723,648) bytes. 0x8000 = 32768
   Unrecognized allocations of size 0x10008 have 1400 instances taking 0x5782bc0(91,761,600) bytes.     0x10000 = 65536
   Unrecognized allocations of size 0x100008 have 85 instances taking 0x55002a8(89,129,640) bytes.      0x100000 = 1048576
   Unrecognized allocations of size 0x40008 have 199 instances taking 0x31c0638(52,168,248) bytes.


chap> count allocations
1482042 allocations use 0x8f1c47d0 (2,400,995,280) bytes.
chap> count leaked
0 allocations use 0x0 (0) bytes.
chap> count unreferenced
0 allocations use 0x0 (0) bytes.
chap> summarize allocations /sortby bytes
Unrecognized allocations have 1456327 instances taking 0x8eee7438(2,397,991,992) bytes.
   Unrecognized allocations of size 0x100008 have 889 instances taking 0x37901bc8(932,191,176) bytes. 0x100000 = 1048576
   Unrecognized allocations of size 0x8008 have 4378 instances taking 0x88d88d0(143,493,328) bytes.
   Unrecognized allocations of size 0x40008 have 540 instances taking 0x87010e0(141,562,080) bytes.
   Unrecognized allocations of size 0x3ffefb8 have 1 instances taking 0x3ffefb8(67,104,696) bytes.
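
(For anyone wanting to reproduce this kind of heap summary: a core of the running process can be taken and opened in chap roughly as follows; the PID and file names below are placeholders.)

gcore -o suricata-core $(pidof suricata)    # snapshot the running process (gcore ships with gdb)
chap suricata-core.<pid>
chap> count allocations
chap> summarize allocations /sortby bytes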

The Suricata config has an HTTP memcap set, and file-store is not enabled:
    http:
      enabled: yes
      memcap: 128mb
  - file-store:
      version: 2
      enabled: no

stats.log excerpt:
app_layer.flow.http                           | Total                     | 90258
app_layer.tx.http                             | Total                     | 393695
app_layer.flow.ftp                            | Total                     | 22
app_layer.flow.smtp                           | Total                     | 77
app_layer.tx.smtp                             | Total                     | 101
app_layer.flow.tls                            | Total                     | 1987124
app_layer.flow.ssh                            | Total                     | 1139
app_layer.flow.smb                            | Total                     | 22416
app_layer.tx.smb                              | Total                     | 2081210
app_layer.flow.dcerpc_tcp                     | Total                     | 29399
app_layer.flow.dns_tcp                        | Total                     | 1057
app_layer.tx.dns_tcp                          | Total                     | 2115
app_layer.flow.ntp                            | Total                     | 333779
app_layer.flow.ftp-data                       | Total                     | 105
app_layer.flow.ikev2                          | Total                     | 327
app_layer.flow.krb5_tcp                       | Total                     | 32182
app_layer.tx.krb5_tcp                         | Total                     | 32179
app_layer.flow.failed_tcp                     | Total                     | 200243
app_layer.flow.dns_udp                        | Total                     | 2154715
app_layer.tx.dns_udp                          | Total                     | 4345501
app_layer.tx.ntp                              | Total                     | 1309445
app_layer.tx.ikev2                            | Total                     | 1177
app_layer.flow.failed_udp                     | Total                     | 374450
flow_mgr.closed_pruned                        | Total                     | 2421797
flow_mgr.new_pruned                           | Total                     | 1143676
flow_mgr.est_pruned                           | Total                     | 2887080
flow.spare                                    | Total                     | 10235
flow.tcp_reuse                                | Total                     | 754
flow_mgr.flows_checked                        | Total                     | 1135
flow_mgr.flows_notimeout                      | Total                     | 783
flow_mgr.flows_timeout                        | Total                     | 352
flow_mgr.flows_timeout_inuse                  | Total                     | 108
flow_mgr.flows_removed                        | Total                     | 244
flow_mgr.rows_checked                         | Total                     | 65536
flow_mgr.rows_skipped                         | Total                     | 64444
flow_mgr.rows_empty                           | Total                     | 200
flow_mgr.rows_maxlen                          | Total                     | 4
tcp.memuse                                    | Total                     | 3552472
tcp.reassembly_memuse                         | Total                     | 111287360
http.memuse                                   | Total                     | 1108932
ftp.memuse                                    | Total                     | 562
flow.memuse                                   | Total                     | 13005744



Our suspects for the 32768-byte allocations include inflate() in htp_gzip_decompressor_decompress(). We are still trying to reproduce this internally, but with no luck so far. We are also trying to gather packet captures. We would appreciate any guidance from the developers regarding the allocation sizes of 1048576, 32768, and 65536 bytes in Suricata.
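
To make the suspicion concrete, here is a minimal standalone sketch (not actual Suricata/libhtp code) of per-stream gzip inflation with zlib. The 32 KiB output buffer per stream and the windowBits value are our assumptions; zlib additionally allocates a sliding window of roughly the same size internally, so every flow holding such a decompressor open could account for chunks of 0x8008 bytes (32768 plus glibc malloc bookkeeping) in the chap summary.

/* Minimal sketch (not libhtp code): per-stream gzip inflation with zlib.
 * Assumptions: one 32 KiB output buffer per stream, windowBits = 15 + 32
 * (gzip/zlib auto-detect), for which zlib allocates a ~32 KiB sliding
 * window internally. Each live stream pins that memory until
 * inflateEnd()/free() run. */
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define OUT_CHUNK 32768   /* same order as the 0x8008 chunks seen in chap */

typedef struct {
    z_stream strm;
    unsigned char *out;   /* 32 KiB heap buffer, one per tracked stream */
} gzip_stream_ctx;

static int gzip_stream_init(gzip_stream_ctx *ctx)
{
    memset(ctx, 0, sizeof(*ctx));
    ctx->out = malloc(OUT_CHUNK);
    if (ctx->out == NULL)
        return -1;
    /* 15 + 32: maximum window size, auto-detect gzip/zlib headers. */
    if (inflateInit2(&ctx->strm, 15 + 32) != Z_OK) {
        free(ctx->out);
        return -1;
    }
    return 0;
}

/* Feed one chunk of compressed data; decompressed output lands in ctx->out. */
static int gzip_stream_feed(gzip_stream_ctx *ctx, const unsigned char *in, size_t in_len)
{
    ctx->strm.next_in = (unsigned char *)in;
    ctx->strm.avail_in = (uInt)in_len;
    int ret = Z_OK;
    while (ctx->strm.avail_in > 0 && ret == Z_OK) {
        ctx->strm.next_out = ctx->out;
        ctx->strm.avail_out = OUT_CHUNK;
        ret = inflate(&ctx->strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END)
            return -1;
        /* ctx->out[0 .. OUT_CHUNK - ctx->strm.avail_out) holds decompressed data */
    }
    return 0;
}

static void gzip_stream_free(gzip_stream_ctx *ctx)
{
    inflateEnd(&ctx->strm);   /* releases zlib's internal window */
    free(ctx->out);           /* skipping this would leak 32 KiB per stream */
}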

Our suspect for the 1048576-byte allocations is HTP_COMPRESSION_BOMB_LIMIT, but we don't see the corresponding decoder events in stats.log.
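
As a back-of-the-envelope check, assuming (unverified) that each 0x100008 chunk is a single 1 MiB buffer plus glibc malloc bookkeeping, the per-size totals chap reports follow directly from the chunk counts, and roughly 800 more such buffers are live in the second core than in the first:

/* Rough check using the figures from the chap summaries above.
 * Assumption (unverified): each 0x100008 chunk is one 1048576-byte buffer
 * plus glibc malloc bookkeeping. */
#include <stdio.h>

int main(void)
{
    const unsigned long chunk = 1048576UL + 8;          /* chap-reported size 0x100008 */
    printf("core 1: %lu bytes\n",  85UL * chunk);       /* 89,129,640  (~85 MiB)  */
    printf("core 2: %lu bytes\n", 889UL * chunk);       /* 932,191,176 (~889 MiB) */
    return 0;
}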

Please update to a supported version of Suricata (5, or even better, 6), since 4.1.10 is EOL. If you can reproduce this with a more recent version, feel free to report it!