Suricata high capture.kernel_drops count. I am using PF_RING ZC mode

This is my NIC eth2 info; it carries roughly 5 Gbit/s of traffic.

# I have run PF_RING's ./load_driver.sh; zero-copy mode on the eth2 NIC works normally.
[root@sec-audit-lljx-027093 ~]# cat /proc/net/pf_ring/dev/eth2/info
Name:         eth2
Index:        13
Address:      B4:96:91:B2:A9:E8
Polling Mode: NAPI/ZC
Promisc:      Enabled
Type:         Ethernet
Family:       Intel ice
TX Queues:    10
RX Queues:    10
Num RX Slots: 4096
Num TX Slots: 4096
RX Slot Size: 3072
TX Slot Size: 3072
[root@sec-audit-lljx-027093 ~]# ethtool -S eth2
NIC statistics:
     rx_unicast: 32676664
     tx_unicast: 0
     rx_multicast: 0
     tx_multicast: 0
     rx_broadcast: 0
     tx_broadcast: 0
     rx_bytes: 15342755273
     tx_bytes: 0
     rx_dropped: 29371142
     rx_unknown_protocol: 0
     rx_alloc_fail: 0
     rx_pg_alloc_fail: 0
     tx_errors: 3
     tx_linearized: 0
     tx_busy: 0
     tx_restart: 0
     tx_queue_0_packets: 0
     tx_queue_0_bytes: 0
     tx_queue_1_packets: 0
     tx_queue_1_bytes: 0
     tx_queue_2_packets: 0
     tx_queue_2_bytes: 0
     tx_queue_3_packets: 0
     tx_queue_3_bytes: 0
     tx_queue_4_packets: 0
     tx_queue_4_bytes: 0
     tx_queue_5_packets: 0
     tx_queue_5_bytes: 0
     tx_queue_6_packets: 0
     tx_queue_6_bytes: 0
     tx_queue_7_packets: 0
     tx_queue_7_bytes: 0
     tx_queue_8_packets: 3
     tx_queue_8_bytes: 735
     tx_queue_9_packets: 0
     tx_queue_9_bytes: 0
     rx_queue_0_packets: 0
     rx_queue_0_bytes: 0
     rx_queue_1_packets: 0
     rx_queue_1_bytes: 0
     rx_queue_2_packets: 0
     rx_queue_2_bytes: 0
     rx_queue_3_packets: 0
     rx_queue_3_bytes: 0
     rx_queue_4_packets: 0
     rx_queue_4_bytes: 0
     rx_queue_5_packets: 0
     rx_queue_5_bytes: 0
     rx_queue_6_packets: 0
     rx_queue_6_bytes: 0
     rx_queue_7_packets: 0
     rx_queue_7_bytes: 0
     rx_queue_8_packets: 0
     rx_queue_8_bytes: 0
     rx_queue_9_packets: 0
     rx_queue_9_bytes: 0
     rx_bytes.nic: 71196867757
     tx_bytes.nic: 0
     rx_unicast.nic: 147547917
     tx_unicast.nic: 0
     rx_multicast.nic: 0
     tx_multicast.nic: 0
     rx_broadcast.nic: 0
     tx_broadcast.nic: 0
     tx_errors.nic: 0
     tx_timeout.nic: 0
     rx_size_64.nic: 0
     tx_size_64.nic: 0
     rx_size_127.nic: 83903501
     tx_size_127.nic: 0
     rx_size_255.nic: 9129792
     tx_size_255.nic: 0
     rx_size_511.nic: 5136782
     tx_size_511.nic: 0
     rx_size_1023.nic: 14885476
     tx_size_1023.nic: 0
     rx_size_1522.nic: 34492377
     tx_size_1522.nic: 0
     rx_size_big.nic: 0
     tx_size_big.nic: 0
     link_xon_rx.nic: 0
     link_xon_tx.nic: 0
     link_xoff_rx.nic: 0
     link_xoff_tx.nic: 0
     tx_dropped_link_down.nic: 0
     rx_undersize.nic: 0
     rx_fragments.nic: 0
     rx_oversize.nic: 0
     rx_jabber.nic: 0
     rx_csum_bad.nic: 0
     rx_length_errors.nic: 0
     rx_dropped.nic: 0
     rx_crc_errors.nic: 0
     illegal_bytes.nic: 0
     mac_local_faults.nic: 0
     mac_remote_faults.nic: 0
     fdir_sb_match.nic: 0
     fdir_sb_status.nic: 0
     chnl_inline_fd_match: 0
     tx_priority_0_xon.nic: 0
     tx_priority_0_xoff.nic: 0
     tx_priority_1_xon.nic: 0
     tx_priority_1_xoff.nic: 0
     tx_priority_2_xon.nic: 0
     tx_priority_2_xoff.nic: 0
     tx_priority_3_xon.nic: 0
     tx_priority_3_xoff.nic: 0
     tx_priority_4_xon.nic: 0
     tx_priority_4_xoff.nic: 0
     tx_priority_5_xon.nic: 0
     tx_priority_5_xoff.nic: 0
     tx_priority_6_xon.nic: 0
     tx_priority_6_xoff.nic: 0
     tx_priority_7_xon.nic: 0
     tx_priority_7_xoff.nic: 0
     rx_priority_0_xon.nic: 0
     rx_priority_0_xoff.nic: 0
     rx_priority_1_xon.nic: 0
     rx_priority_1_xoff.nic: 0
     rx_priority_2_xon.nic: 0
     rx_priority_2_xoff.nic: 0
     rx_priority_3_xon.nic: 0
     rx_priority_3_xoff.nic: 0
     rx_priority_4_xon.nic: 0
     rx_priority_4_xoff.nic: 0
     rx_priority_5_xon.nic: 0
     rx_priority_5_xoff.nic: 0
     rx_priority_6_xon.nic: 0
     rx_priority_6_xoff.nic: 0
     rx_priority_7_xon.nic: 0
     rx_priority_7_xoff.nic: 0

Suricata always reports a lot of capture.kernel_drops when I run the command suricata --pfring -c /etc/suricata/suricata.yaml -vvv.
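For context, the pfring capture section in suricata.yaml is along these lines (a sketch based on the stock example config; my exact values are in the attached suricata_backup.yaml):

pfring:
  - interface: eth2
    # Number of receive threads; "auto" uses one per core
    threads: auto
    # Default cluster id; PF_RING load balances packets per flow
    cluster-id: 99
    # cluster_flow sends all packets of a flow to the same socket
    cluster-type: cluster_flow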

[root@sec-audit-lljx-027093 ~]# suricata --build-info
This is Suricata version 6.0.2 RELEASE
Features: PCAP_SET_BUFF PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUA HAVE_LUAJIT HAVE_LIBJANSSON PROFILING TLS TLS_GNU MAGIC RUST 
SIMD support: SSE_4_2 SSE_4_1 SSE_3 
Atomic intrinsics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.8.5 20150623 (Red Hat 4.8.5-28), C version 199901
compiled with _FORTIFY_SOURCE=0
L1 cache line size (CLS)=64
thread local storage method: __thread
compiled with LibHTP v0.5.37, linked against LibHTP v0.5.37

Suricata Configuration:
  AF_PACKET support:                       yes
  eBPF support:                            no
  XDP support:                             no
  PF_RING support:                         yes
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no 
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no

  Unix socket enabled:                     yes
  Detection enabled:                       yes

  Libmagic support:                        yes
  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  hiredis support:                         no
  hiredis async with libevent:             no
  Prelude support:                         no
  PCRE jit:                                yes
  LUA support:                             yes, through luajit
  libluajit:                               yes
  GeoIP2 support:                          yes
  Non-bundled htp:                         no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          no

  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /root/.cargo/bin/rustc
  Rust compiler version:                   rustc 1.52.0 (88f19c6da 2021-05-03)
  Cargo path:                              /root/.cargo/bin/cargo
  Cargo version:                           cargo 1.52.0 (69767412a 2021-04-21)
  Cargo vendor:                            yes

  Python support:                          yes
  Python path:                             /bin/python2.7
  Python distutils                         yes
  Python yaml                              yes
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes

  Profiling enabled:                       yes
  Profiling locks enabled:                 no

  Plugin support (experimental):           yes

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /var/log/suricata/

  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /var
  --datarootdir                            /usr/share

  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / g++ (real)
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no
  Position Independent Executable enabled: no
  CFLAGS                                   -g -O2 -std=gnu99 -march=native -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS                               
  SECCFLAGS 

Here are some statistics.

Date: 8/31/2022 -- 11:12:25 (uptime: 0d, 00h 07m 41s)
------------------------------------------------------------------------------------
Counter                                       | TM Name                   | Value
------------------------------------------------------------------------------------
capture.kernel_packets                        | Total                     | 192741316
capture.kernel_drops                          | Total                     | 447129395
decoder.pkts                                  | Total                     | 193214619
decoder.bytes                                 | Total                     | 89468210279
decoder.invalid                               | Total                     | 28384561
decoder.ipv4                                  | Total                     | 193214619
decoder.ethernet                              | Total                     | 193214619
decoder.tcp                                   | Total                     | 164076264
decoder.udp                                   | Total                     | 3229
decoder.icmpv4                                | Total                     | 750565
decoder.vlan                                  | Total                     | 193214619
decoder.vlan_qinq                             | Total                     | 803
decoder.avg_pkt_size                          | Total                     | 463
decoder.max_pkt_size                          | Total                     | 1514
flow.tcp                                      | Total                     | 22981615
flow.udp                                      | Total                     | 2326
flow.icmpv4                                   | Total                     | 434926
flow.tcp_reuse                                | Total                     | 952626
flow.wrk.spare_sync_avg                       | Total                     | 100
flow.wrk.spare_sync                           | Total                     | 201857
decoder.event.ipv4.trunc_pkt                  | Total                     | 28384561
flow.wrk.flows_evicted_needs_work             | Total                     | 11630626
flow.wrk.flows_evicted_pkt_inject             | Total                     | 23236225
flow.wrk.flows_evicted                        | Total                     | 2609824
flow.wrk.flows_injected                       | Total                     | 10455348
tcp.sessions                                  | Total                     | 15815102
tcp.syn                                       | Total                     | 11109650
tcp.synack                                    | Total                     | 11053848
tcp.rst                                       | Total                     | 105159
tcp.midstream_pickups                         | Total                     | 7232899
tcp.pkt_on_wrong_thread                       | Total                     | 26118156
tcp.segment_memcap_drop                       | Total                     | 19167
tcp.reassembly_gap                            | Total                     | 954952
tcp.overlap                                   | Total                     | 644
tcp.insert_data_normal_fail                   | Total                     | 13997720
detect.alert                                  | Total                     | 305
app_layer.flow.http                           | Total                     | 255298
app_layer.tx.http                             | Total                     | 2553701
app_layer.flow.ntp                            | Total                     | 69
app_layer.tx.ntp                              | Total                     | 96
app_layer.flow.failed_tcp                     | Total                     | 18792
app_layer.flow.failed_udp                     | Total                     | 2257
flow.mgr.full_hash_pass                       | Total                     | 56
flow.spare                                    | Total                     | 1343379
flow.mgr.rows_maxlen                          | Total                     | 66
flow.mgr.flows_checked                        | Total                     | 2213294
flow.mgr.flows_notimeout                      | Total                     | 2009630
flow.mgr.flows_timeout                        | Total                     | 203664
flow.mgr.flows_evicted                        | Total                     | 19040798
flow.mgr.flows_evicted_needs_work             | Total                     | 10457152
tcp.memuse                                    | Total                     | 502942240
tcp.reassembly_memuse                         | Total                     | 42937695396
http.memuse                                   | Total                     | 4169660227
flow.memuse                                   | Total                     | 1008572224
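
(These counters can also be pulled live over the unix socket, which the build info shows is enabled, with something like suricatasc:)

suricatasc -c dump-counters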

CPU and memory resources are sufficient. I would really appreciate your help understanding why this happens. Looking forward to your reply. Thank you very much.
suricata_backup.yaml (75.4 KB)

First of all, I would update to the most recent version; 6.0.2 is a bit old and we have released several updates since, including security fixes.

I would also test AF_PACKET instead of PF_RING and see if it works better. What hardware specs do you have? What does the htop output look like? You could also look at the perf top output for Suricata.
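For example, something along these lines (assuming a single Suricata process is running):

htop
perf top -p $(pidof suricata)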

I have updated Suricata to v6.0.6. The htop output shows that CPU and memory resources are sufficient, but the problem still exists.

[root@sec-audit-lljx-027093 src]# suricata --build-info
This is Suricata version 6.0.6 RELEASE
Features: PCAP_SET_BUFF PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUA HAVE_LUAJIT HAVE_LIBJANSSON PROFILING TLS TLS_GNU MAGIC RUST 
SIMD support: SSE_4_2 SSE_4_1 SSE_3 
Atomic intrinsics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.8.5 20150623 (Red Hat 4.8.5-28), C version 199901
compiled with _FORTIFY_SOURCE=0
L1 cache line size (CLS)=64
thread local storage method: __thread
compiled with LibHTP v0.5.40, linked against LibHTP v0.5.40

Suricata Configuration:
  AF_PACKET support:                       yes
  eBPF support:                            no
  XDP support:                             no
  PF_RING support:                         yes
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no 
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no

  Unix socket enabled:                     yes
  Detection enabled:                       yes

  Libmagic support:                        yes
  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  hiredis support:                         no
  hiredis async with libevent:             no
  Prelude support:                         no
  PCRE jit:                                yes
  LUA support:                             yes, through luajit
  libluajit:                               yes
  GeoIP2 support:                          yes
  Non-bundled htp:                         no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          no
  HTTP2 decompression:                     no

  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /root/.cargo/bin/rustc
  Rust compiler version:                   rustc 1.52.0 (88f19c6da 2021-05-03)
  Cargo path:                              /root/.cargo/bin/cargo
  Cargo version:                           cargo 1.52.0 (69767412a 2021-04-21)
  Cargo vendor:                            yes

  Python support:                          yes
  Python path:                             /bin/python2.7
  Python distutils                         yes
  Python yaml                              yes
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes

  Profiling enabled:                       yes
  Profiling locks enabled:                 no

  Plugin support (experimental):           yes

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /var/log/suricata/

  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /var
  --datarootdir                            /usr/share

  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / g++ (real)
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no
  Position Independent Executable enabled: no
  CFLAGS                                   -g -O2 -std=gnu99 -march=native -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS                               
  SECCFLAGS 

The following is the htop output when I am using workers mode.

[root@sec-audit-lljx-027093 src]# ethtool -i eth2
driver: ice
version: 1.9.11
firmware-version: 2.15 0x800049c3 1.2789.0
expansion-rom-version: 
bus-info: 0000:5e:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

[root@sec-audit-lljx-027093 src]# ethtool eth2
Settings for eth2:
	Supported ports: [ FIBRE ]
	Supported link modes:   25000baseCR/Full 
	                        25000baseKR/Full 
	                        25000baseSR/Full 
	                        50000baseCR2/Full 
	                        100000baseSR4/Full 
	                        100000baseCR4/Full 
	                        100000baseLR4_ER4/Full 
	                        50000baseSR2/Full 
	Supported pause frame use: Symmetric
	Supports auto-negotiation: No
	Supported FEC modes: None
	Advertised link modes:  25000baseSR/Full 
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Advertised FEC modes: None BaseR RS
	Speed: 100000Mb/s
	Duplex: Full
	Port: FIBRE
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	Supports Wake-on: d
	Wake-on: d
	Current message level: 0x00000007 (7)
			       drv probe link
	Link detected: yes

Looking forward to your reply. Thanks again.

The cores are peaking at 100%, so this does look bad. Can you also run perf top -p $(pidof suricata)? And how many rules are enabled?

I would also still give AF_PACKET a try, which is the best supported mode IMHO.

The cores are peaking at about 80%-92%. When I run Suricata in autofp mode, the cores peak at about 60%-80%, but the ratio of capture.kernel_packets to capture.kernel_drops is about 1:3. So I don't think it is a CPU problem.


Following your advice, I ran Suricata in AF_PACKET mode: suricata --af-packet -c /etc/suricata/suricata.yaml -vvvv

af-packet:
  - interface: eth2
    # Number of receive threads. "auto" uses the number of cores
    #threads: auto
    # Default clusterid. AF_PACKET will load balance packets based on flow.
    cluster-id: 99
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible value are:
    #  * cluster_flow: all packets of a given flow are sent to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are sent to the same socket
    #  * cluster_qm: all packets linked by network card to a RSS queue are sent to the same
    #  socket. Requires at least Linux 3.14.
    #  * cluster_ebpf: eBPF file load balancing. See doc/userguide/capture-hardware/ebpf-xdp.rst for
    #  more info.
    # Recommended modes are cluster_flow on most boxes and cluster_cpu or cluster_qm on system
    # with capture card using RSS (requires cpu affinity tuning and system IRQ tuning)
    cluster-type: cluster_flow
    # In some fragmentation cases, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: yes
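
(The other performance-related af-packet options from the stock suricata.yaml are sketched below for reference; these values are illustrative and have not been tuned for this box.)

    # Use tpacket_v3 with memory-mapped rings to cut copies and syscalls
    use-mmap: yes
    tpacket-v3: yes
    # Per-thread ring size in packets; raise it if kernel drops keep occurring
    ring-size: 200000
    # Block size used by tpacket_v3; typically a power of two and a multiple of the page size
    block-size: 1048576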


31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:32 - <Perf> - (RX#43-eth2) Kernel: Packets 2795988, dropped 2491963
31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:32 - <Perf> - (RX#44-eth2) Kernel: Packets 2761698, dropped 2444301
31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:32 - <Perf> - (RX#45-eth2) Kernel: Packets 2980594, dropped 2662848
31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:32 - <Perf> - (RX#46-eth2) Kernel: Packets 2804426, dropped 2488253
31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:32 - <Perf> - (RX#47-eth2) Kernel: Packets 2941815, dropped 2621890
31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:32 - <Perf> - (RX#48-eth2) Kernel: Packets 3613353, dropped 3290188
31/8/2022 -- 18:35:32 - <Perf> - AutoFP - Total flow handler queues - 48
31/8/2022 -- 18:35:38 - <Info> - Alerts: 73
31/8/2022 -- 18:35:38 - <Perf> - ippair memory usage: 414144 bytes, maximum: 16777216
^C31/8/2022 -- 18:35:39 - <Perf> - Done dumping profiling data.
31/8/2022 -- 18:35:39 - <Perf> - host memory usage: 39814400 bytes, maximum: 21474836480
31/8/2022 -- 18:35:39 - <Perf> - Dumping profiling data for 1 rules.
31/8/2022 -- 18:35:39 - <Perf> - Done dumping profiling data.
31/8/2022 -- 18:35:39 - <Perf> - Done dumping keyword profiling data.
31/8/2022 -- 18:35:39 - <Perf> - Done dumping rulegroup profiling data.
31/8/2022 -- 18:35:39 - <Perf> - Done dumping prefilter profiling data.
31/8/2022 -- 18:35:39 - <Info> - cleaning up signature grouping structure... complete
31/8/2022 -- 18:35:39 - <Notice> - Stats for 'eth2':  pkts: 135537041, drop: 120452507 (88.87%), invalid chksum: 0
31/8/2022 -- 18:35:39 - <Perf> - Cleaning up Hyperscan global scratch
31/8/2022 -- 18:35:39 - <Perf> - Clearing Hyperscan database cache

I did overlook one important thing:

  Profiling enabled:                       yes

Don't run with profiling enabled unless you want to debug something code-specific. Profiling is very resource intensive. Rebuild Suricata without this enabled and try again.
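Roughly like this (the flags are illustrative; keep the options you already configure with, just drop --enable-profiling, and --enable-profiling-locks if set):

./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var \
            --enable-pfring --enable-luajit --enable-geoip
make && make install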

And with the new build, run perf top again to see if handle_mm_fault is still there.

Your advice is very helpful, but there are still a lot of dropped packets. It drops a large number of packets roughly every ten minutes, which really confuses me.



The CPU utilization is very low right now. But when I disabled the file-store option, it went into this state and no longer dropped packets.


File-store is also quite resource intensive, but the perf top output also suggests that PF_RING may need to be improved. The overhead of ring_is_not_empty would be worth a look.
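One generic way to drill into it (plain perf with call graphs, nothing PF_RING-specific):

# Sample Suricata for 60 seconds with call graphs, then look at where
# ring_is_not_empty is spending time and what calls it
perf record -g -p $(pidof suricata) -- sleep 60
perf report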

Thanks for your reply. Could you tell me more about how to investigate ring_is_not_empty in detail? The CPU utilization is so low; I really want to fix this properly.

This sounds pf_ring specific. What NIC are you using?

This is the NIC I am using.

[root@sec-audit-lljx-027093 ~]#  lspci | grep -i eth
19:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
19:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
5e:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
5e:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
[root@sec-audit-lljx-027093 ~]# ifconfig eth2
eth2: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet6 fe80::b696:91ff:feb2:a9e8  prefixlen 64  scopeid 0x20<link>
        ether b4:96:91:b2:a9:e8  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 374546577  overruns 0  frame 0
        TX packets 4  bytes 980 (980.0 B)
        TX errors 4  dropped 0 overruns 0  carrier 0  collisions 0

Well, there are two NICs, the Mellanox and the Intel one, so which one is used in that case? :)

The NIC is the Mellanox. When the file-store option is disabled, the cores peak at 40%-50% and everything is OK. When the file-store option is enabled, the cores peak at 20%-30% and start to drop a lot of packets. That is very strange; I don't know why it happens.

 - file-store:
      version: 2
      enabled: yes

      # Set the directory for the filestore. Relative pathnames
      # are contained within the "default-log-dir".
      dir: filestore

      # Write out a fileinfo record for each occurrence of a file.
      # Disabled by default as each occurrence is already logged
      # as a fileinfo record to the main eve-log.
      write-fileinfo: yes

      # Force storing of all files. Default: no.
      #force-filestore: yes

      # Override the global stream-depth for sessions in which we want
      # to perform file extraction. Set to 0 for unlimited; otherwise,
      # must be greater than the global stream-depth value to be used.
      stream-depth: 200mb

      # Uncomment the following variable to define how many files can
      # remain open for filestore by Suricata. Default value is 0 which
      # means files get closed after each write to the file.
      max-open-files: 10000

      # Force logging of checksums: available hash functions are md5,
      # sha1 and sha256. Note that SHA256 is automatically forced by
      # the use of this output module as it uses the SHA256 as the
      # file naming scheme.
      #force-hash: [sha1, md5]
      # NOTE: X-Forwarded configuration is ignored if write-fileinfo is disabled
      # HTTP X-Forwarded-For support by adding an extra field or overwriting
      # the source or destination IP address (depending on flow direction)
      # with the one reported in the X-Forwarded-For HTTP header. This is
      # helpful when reviewing alerts for traffic that is being reverse
      # or forward proxied.
      xff:
        enabled: no
        # Two operation modes are available, "extra-data" and "overwrite".
        mode: extra-data
        # Two proxy deployments are supported, "reverse" and "forward". In
        # a "reverse" deployment the IP address used is the last one, in a
        # "forward" deployment the first IP address is used.
        deployment: reverse
        # Header name where the actual IP address will be reported. If more
        # than one IP address is present, the last IP address will be the
        # one taken into consideration.
        header: X-Forwarded-For

stream:
  memcap: 20gb
  checksum-validation: yes      # reject incorrect csums
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 2kb                 # reassemble 2kb into a stream
    toserver-chunk-size:  25600
    toclient-chunk-size: 25600
    randomize-chunk-size: no
    #randomize-chunk-range: 10
    #raw: yes
    #segment-prealloc: 20000
    #check-overlap-different-data: true

Where do you log the files and what hardware is this?
Can you check the I/O stats as well?
This could be an I/O issue, especially since the CPU is more idle during that time.
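For example with iostat from the sysstat package, captured while the drops are happening:

# Extended per-device statistics once per second, skipping idle devices
iostat -xz 1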

I only log a few files, and the I/O stats are always low.

[root@sec-audit-lljx-027093 nvme0n1]#  smartctl --all /dev/nvme0n1 
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-862.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       P5516DS0160T00
Serial Number:                      SH211903989
Firmware Version:                   224005A0
PCI Vendor/Subsystem ID:            0x1c5f
IEEE OUI Identifier:                0x00e0cf
Total NVM Capacity:                 1,600,321,314,816 [1.60 TB]
Unallocated NVM Capacity:           0
Controller ID:                      1
Number of Namespaces:               1
Namespace 1 Size/Capacity:          1,600,321,314,816 [1.60 TB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            38b19e 734c0f9501
Local Time is:                      Tue Sep 20 10:42:58 2022 CST
Firmware Updates (0x07):            3 Slots, Slot 1 R/O
Optional Admin Commands (0x000e):   Format Frmw_DL NS_Mngmt
Optional NVM Commands (0x0014):     DS_Mngmt Sav/Sel_Feat
Maximum Data Transfer Size:         32 Pages
Warning  Comp. Temp. Threshold:     70 Celsius
Critical Comp. Temp. Threshold:     80 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +    14.00W       -        -    0  0  0  0        0       0
 1 +    13.00W       -        -    1  1  1  1        0       0
 2 +    12.00W       -        -    2  2  2  2        0       0
 3 +    11.00W       -        -    3  3  3  3        0       0
 4 +    10.00W       -        -    4  4  4  4        0       0
 5 +     9.00W       -        -    5  5  5  5        0       0
 6 +     8.00W       -        -    6  6  6  6        0       0
 7 +     7.00W       -        -    7  7  7  7        0       0

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         2
 1 -    4096       0         0
 2 -     512       0         2
 3 -    4096       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        37 Celsius
Available Spare:                    100%
Available Spare Threshold:          5%
Percentage Used:                    0%
Data Units Read:                    2,007,404 [1.02 TB]
Data Units Written:                 1,038,711 [531 GB]
Host Read Commands:                 7,861,706
Host Write Commands:                4,069,800
Controller Busy Time:               12
Power Cycles:                       10
Power On Hours:                     10,134
Unsafe Shutdowns:                   2
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               32 Celsius
Temperature Sensor 2:               27 Celsius

Error Information (NVMe Log 0x01, max 63 entries)
No Errors Logged


Can you post your whole suricata.yaml (remove confidential settings)?
What rulesets do you use and are there also custom rules?
Also, the whole suricata.log would help to see if there is something off.
33MB doesn’t sound like a lot of files to be stored.

I have only one custom rule:

alert http any any <> any any (msg:"FILE info all"; http.content_type; content:"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"; filestore; sid:130003; rev:1;)

The following is my configuration file.
suricata.yaml (74.2 KB)

suricata.log

And is suricata.rules the ET ruleset?

What happens if you just remove the custom rule?

And do you see those syscall errors on all interfaces? I’m not familiar with PF_RING but I would investigate those errors as well.

The file suricata.rules is empty.
Those errors don't seem to matter, so I am confused.