High number of kernel_drops


I’m a Suricata newbie and I need help with a high number of kernel_drops. I went through the docs, but couldn’t follow https://suricata.readthedocs.io/en/suricata-5.0.3/performance/high-performance-config.html, because Suricata is running on Hyper-V, which doesn’t support most of the ethtool changes needed for performance tuning.

On the image below you can see rising numbers of kernel_packets and kernel_drops over the last 7 days, until Suricata got restarted. At the peak the count is 107 947 240 for kernel_drops and 673 030 883 for kernel_packets, which comes to about 16% dropped packets, which seems pretty high to me.
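As a sanity check, the 16% figure can be recomputed from the two peak counters:

```shell
# percentage of dropped packets = kernel_drops / kernel_packets * 100
awk 'BEGIN { printf "%.1f%%\n", 107947240 / 673030883 * 100 }'
# prints 16.0%
```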

The virtual machine is running with 8 CPUs/cores/threads, Ubuntu 18.04.3 LTS, kernel version 4.15.0-112-generic. The af-packet configuration is the default (everything else is commented out):

 # Linux high speed capture support
  - interface: eth1
    # Number of receive threads. "auto" uses the number of cores
    threads: auto
    # Default clusterid. AF_PACKET will load balance packets based on flow.
    cluster-id: 99
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible values are:
    #  * cluster_flow: all packets of a given flow are sent to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are sent to the same socket
    #  * cluster_qm: all packets linked by network card to a RSS queue are sent to the same
    #  socket. Requires at least Linux 3.14.
    #  * cluster_ebpf: eBPF file load balancing. See doc/userguide/capture-hardware/ebpf-xdp.rst for
    #  more info.
    # Recommended modes are cluster_flow on most boxes and cluster_cpu or cluster_qm on systems
    # with a capture card using RSS (requires cpu affinity tuning and system irq tuning)
    cluster-type: cluster_flow
    #cluster-type: cluster_qm
    # In some fragmentation cases, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: yes

At the moment of writing this post, after restarting Suricata, there are some drops on eth1, where Suricata should be sniffing, but not that many:

inet6 fe80::215:5dff:fe19:8f04 prefixlen 64 scopeid 0x20
ether 00:15:5d:19:8f:04 txqueuelen 1000 (Ethernet)
RX packets 1067046 bytes 793598619 (793.5 MB)
RX errors 0 dropped 873 overruns 0 frame 0
TX packets 17 bytes 1362 (1.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

And this is the latest output of stats.log:

Date: 8/19/2020 -- 14:40:28 (uptime: 0d, 00h 16m 51s)
Counter                                       | TM Name                   | Value
capture.kernel_packets                        | Total                     | 1148861
capture.kernel_drops                          | Total                     | 131999
decoder.pkts                                  | Total                     | 1017118
decoder.bytes                                 | Total                     | 733718905
decoder.invalid                               | Total                     | 88
decoder.ipv4                                  | Total                     | 1008659
decoder.ipv6                                  | Total                     | 1896
decoder.ethernet                              | Total                     | 1017118
decoder.tcp                                   | Total                     | 852114
decoder.udp                                   | Total                     | 155427
decoder.icmpv4                                | Total                     | 1524
decoder.icmpv6                                | Total                     | 294
decoder.vlan                                  | Total                     | 1017096
decoder.avg_pkt_size                          | Total                     | 721
decoder.max_pkt_size                          | Total                     | 1534
flow.tcp                                      | Total                     | 11278
flow.udp                                      | Total                     | 3939
flow.icmpv4                                   | Total                     | 37
flow.icmpv6                                   | Total                     | 48
decoder.event.ipv4.trunc_pkt                  | Total                     | 88
decoder.event.ipv4.opt_pad_required           | Total                     | 482
decoder.event.ipv6.zero_len_padn              | Total                     | 178
tcp.sessions                                  | Total                     | 10410
tcp.syn                                       | Total                     | 12063
tcp.synack                                    | Total                     | 9391
tcp.rst                                       | Total                     | 19809
tcp.pkt_on_wrong_thread                       | Total                     | 17521
tcp.stream_depth_reached                      | Total                     | 27
tcp.reassembly_gap                            | Total                     | 432
tcp.overlap                                   | Total                     | 78461
detect.alert                                  | Total                     | 5
app_layer.flow.http                           | Total                     | 111
app_layer.tx.http                             | Total                     | 403
app_layer.flow.tls                            | Total                     | 8528
app_layer.flow.smb                            | Total                     | 2
app_layer.tx.smb                              | Total                     | 11
app_layer.flow.dcerpc_tcp                     | Total                     | 17
app_layer.flow.ntp                            | Total                     | 23
app_layer.tx.ntp                              | Total                     | 24
app_layer.flow.krb5_tcp                       | Total                     | 1
app_layer.tx.krb5_tcp                         | Total                     | 1
app_layer.flow.dhcp                           | Total                     | 45
app_layer.tx.dhcp                             | Total                     | 118
app_layer.flow.snmp                           | Total                     | 171
app_layer.tx.snmp                             | Total                     | 342
app_layer.flow.failed_tcp                     | Total                     | 147
app_layer.flow.dns_udp                        | Total                     | 2090
app_layer.tx.dns_udp                          | Total                     | 4552
app_layer.flow.krb5_udp                       | Total                     | 28
app_layer.tx.krb5_udp                         | Total                     | 24
app_layer.flow.failed_udp                     | Total                     | 1582
flow_mgr.closed_pruned                        | Total                     | 8802
flow_mgr.new_pruned                           | Total                     | 2544
flow_mgr.est_pruned                           | Total                     | 1301
flow.spare                                    | Total                     | 10000
flow.tcp_reuse                                | Total                     | 406
flow_mgr.flows_checked                        | Total                     | 29
flow_mgr.flows_notimeout                      | Total                     | 25
flow_mgr.flows_timeout                        | Total                     | 4
flow_mgr.flows_timeout_inuse                  | Total                     | 3
flow_mgr.flows_removed                        | Total                     | 1
flow_mgr.rows_checked                         | Total                     | 65536
flow_mgr.rows_skipped                         | Total                     | 65493
flow_mgr.rows_empty                           | Total                     | 15
flow_mgr.rows_maxlen                          | Total                     | 2
tcp.memuse                                    | Total                     | 4587760
tcp.reassembly_memuse                         | Total                     | 7337608
http.memuse                                   | Total                     | 1248
flow.memuse                                   | Total                     | 8347112

If there is any other information I could give to help resolve this issue, please let me know.


What is the traffic rate and how many cores/threads are assigned for Suricata?
How many rules did you enable?

This is the traffic rate from the same day:

Suricata is left with the default values, so it’s using all 8 available cores/threads. This is its utilization:

I think my predecessor, who set up Suricata, left almost all rules enabled (just a few are disabled), so there should be about 52 000 rules, which is what I saw when I last ran suricata-update.
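For reference, the enabled-rule count can be checked directly from the suricata-update output file. A sketch, assuming the default suricata-update path:

```shell
# count enabled rules in the merged file written by suricata-update
# (path is the suricata-update default; adjust if yours differs)
RULES=${RULES:-/var/lib/suricata/rules/suricata.rules}
if [ -r "$RULES" ]; then
    # enabled rules start with an action keyword; disabled ones start with '#'
    grep -c -E '^(alert|drop|pass|reject)' "$RULES"
else
    echo "rules file not found: $RULES" >&2
fi
```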

Ah, a 35 Mbit/s peak is nothing. So the drops are rather strange unless the CPU cores are very slow. Do you have more specs?
Could you paste the suricata.log as well?
What NIC does Ubuntu show you via ethtool, and especially which driver is used? Maybe there’s another NIC to choose within Hyper-V.

Currently I would bet that’s an issue related to Hyper-V and Linux/Suricata.

You could play around with the capture settings for af_packet as well.

Suricata log: suricata.log (27.2 KB)

cat /proc/cpuinfo

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
stepping        : 1
microcode       : 0xffffffff
cpu MHz         : 3196.302
cache size      : 25600 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 8
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 20
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt flush_l1d
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips        : 6392.60
clflush size    : 64
cache_alignment : 64
address sizes   : 44 bits physical, 48 bits virtual
power management:


Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               79
Model name:          Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
Stepping:            1
CPU MHz:             3196.302
BogoMIPS:            6392.60
Hypervisor vendor:   Microsoft
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            25600K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt flush_l1d

ethtool -i eth1

driver: hv_netvsc
firmware-version: N/A
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

Does any of the info I pasted above show something new or important? Should I look at the Hyper-V settings for this VM? I currently don’t have access to the Hyper-V host and would need to ask someone more privileged than me, so I want to be sure there’s nothing more I can do to improve performance on the machine running Suricata itself. Could some other af_packet capture settings help with this issue?

Nothing strange there, I’m a bit confused why you end up with such low performance.

I would ask if there is another NIC available to test instead of hv_netvsc. It also seems that the interface eth1 is sometimes down when I look into the suricata.log. Also, Suricata cannot set some settings:

[ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to set feature via ioctl for 'eth1': Operation not supported (95)

Also, 4 rules fail to load, although that shouldn’t affect the performance.

You could try to disable the offloading settings.

In addition to that, you could check perf top while Suricata is running (make sure debug symbols are installed as well).
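A sketch of those two suggestions as commands (eth1 and the feature list are assumptions, and as the ioctl error above suggests, hv_netvsc may reject some of the changes):

```shell
IFACE=${IFACE:-eth1}

# try to switch off the usual offloads on the capture interface
if command -v ethtool >/dev/null 2>&1 && ip link show "$IFACE" >/dev/null 2>&1; then
    for feat in gro gso tso lro sg rx tx; do
        sudo ethtool -K "$IFACE" "$feat" off 2>/dev/null || echo "could not disable $feat"
    done
    # confirm the result
    ethtool -k "$IFACE" | grep -E 'offload|checksum'
fi

# profile only the Suricata process (interactive; needs debug symbols for
# readable output):
#   sudo perf top -p "$(pidof suricata)"
```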

Regarding offloading, these settings were already disabled:

Features for eth1:
rx-checksumming: on
tx-checksumming: off
        tx-checksum-ipv4: off
        tx-checksum-ip-generic: off [fixed]
        tx-checksum-ipv6: off
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on [fixed]
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
        tx-tcp-segmentation: off
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp-mangleid-segmentation: off
        tx-tcp6-segmentation: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off [fixed]
rx-vlan-offload: on [fixed]
tx-vlan-offload: on [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]

Just a sample from perf top:

Samples: 19K of event 'cpu-clock', Event count (approx.): 4334839951
Overhead  Shared Object               Symbol
  24.67%  suricata                    [.] DetectRun.part.16
   9.19%  suricata                    [.] DetectAddressMatchIPv4
   6.12%  libc-2.27.so                [.] 0x00000000000422ea
   3.11%  [kernel]                    [k] _raw_spin_unlock_irqrestore
   2.55%  suricata                    [.] DetectRunTxSortHelper
   2.30%  [kernel]                    [k] __softirqentry_text_start
   1.99%  suricata                    [.] DetectRunStoreStateTx
   1.76%  suricata                    [.] DetectEngineInspectGenericList
   1.40%  suricata                    [.] DetectTlsFingerprintMatch
   1.26%  perf                        [.] __symbols__insert
   1.09%  [kernel]                    [k] finish_task_switch
   1.02%  libc-2.27.so                [.] 0x00000000000bae23
   0.99%  libc-2.27.so                [.] 0x000000000018f01a
   0.80%  libc-2.27.so                [.] 0x000000000018f021
   0.79%  libc-2.27.so                [.] 0x0000000000042085
   0.75%  libc-2.27.so                [.] 0x000000000018eebc
   0.74%  libc-2.27.so                [.] 0x00000000000422e8
   0.53%  [kernel]                    [k] __do_page_fault
   0.49%  libc-2.27.so                [.] 0x000000000018f02f
   0.46%  libc-2.27.so                [.] 0x000000000018f0a6
   0.43%  libpthread-2.27.so          [.] __pthread_mutex_unlock
   0.43%  libc-2.27.so                [.] 0x0000000000041eff
   0.42%  libc-2.27.so                [.] 0x00000000000420cf
   0.41%  libc-2.27.so                [.] 0x0000000000042094
   0.41%  libpthread-2.27.so          [.] __pthread_mutex_trylock
   0.40%  perf                        [.] rb_next
   0.40%  suricata                    [.] DetectEngineInspectRulePacketMatches        

Back to the rule count: are 52 000 enabled rules too many, or is that adequate? Just to be sure.

I’ll also try to check the NIC available on Hyper-V and I’ll let you know.

It’s quite a lot, but not too much. Especially with that low traffic rate it should be no issue at all. Even small 4-core machines can handle 100 Mbit/s of traffic.

Another idea, is hyperscan enabled?

If you mean suricata-hyperscan, then I think it’s not installed:

apt list --installed | grep hyperscan

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libhyperscan4/bionic,now 4.7.0-1 amd64 [installed,automatic]
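Note that libhyperscan4 is just the runtime library; whether Suricata actually uses Hyperscan depends on the build and on the mpm-algo setting in suricata.yaml ("auto" prefers Hyperscan when the build supports it, "hs" forces it). A quick check, assuming the default config path:

```shell
# does this Suricata build have Hyperscan compiled in?
if command -v suricata >/dev/null 2>&1; then
    suricata --build-info | grep -i hyperscan
fi

# which multi-pattern matcher is configured?
grep -n 'mpm-algo' /etc/suricata/suricata.yaml 2>/dev/null || echo "mpm-algo not found"
```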

How did you install/build Suricata? Can you also post suricata --build-info?

Output of suricata --build-info:

This is Suricata version 5.0.3 RELEASE
SIMD support: none
Atomic intrinsics: 1 2 4 8 byte(s)
64-bits, Little-endian architecture
GCC version 7.5.0, C version 199901
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
thread local storage method: __thread
compiled with LibHTP v0.5.33, linked against LibHTP v0.5.33

Suricata Configuration:
  AF_PACKET support:                       yes
  eBPF support:                            no
  XDP support:                             no
  PF_RING support:                         no
  NFQueue support:                         yes
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no

  Unix socket enabled:                     yes
  Detection enabled:                       yes

  Libmagic support:                        yes
  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  hiredis support:                         yes
  hiredis async with libevent:             yes
  Prelude support:                         no
  PCRE jit:                                yes
  LUA support:                             yes, through luajit
  libluajit:                               yes
  GeoIP2 support:                          yes
  Non-bundled htp:                         yes
  Old barnyard2 support:                   no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          yes

  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /usr/bin/rustc
  Rust compiler version:                   rustc 1.41.0
  Cargo path:                              /usr/bin/cargo
  Cargo version:                           cargo 1.41.0
  Cargo vendor:                            yes

  Python support:                          yes
  Python path:                             /usr/bin/python3
  Python distutils                         yes
  Python yaml                              yes
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes

  Profiling enabled:                       no
  Profiling locks enabled:                 no

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /var/log/suricata/

  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /var
  --datarootdir                            /usr/share

  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / gcc (real)
  GCC Protect enabled:                     yes
  GCC march native enabled:                no
  GCC Profile enabled:                     no
  Position Independent Executable enabled: yes
  CFLAGS                                   -g -O2 -fdebug-prefix-map=/build/suricata-Detg4V/suricata-5.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -I${srcdir}/../rust/gen/c-headers
  PCAP_CFLAGS                               -I/usr/include
  SECCFLAGS                                -fstack-protector -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security

Suricata was installed like this:

sudo add-apt-repository ppa:oisf/suricata-stable
sudo apt-get update
sudo apt-get install suricata

In case you have not solved your problem yet: this helped me with a similar problem.

In short:
I had to uncomment mmap-locked and tpacket-v3, and also change some memory settings so that Suricata was able to reserve enough memory.
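For anyone landing here later, a sketch of what that could look like in the af-packet section of suricata.yaml (the ring-size value is just a starting point to tune per box, and mmap-locked additionally needs a sufficient memlock limit, e.g. LimitMEMLOCK=infinity in the systemd unit):

```yaml
af-packet:
  - interface: eth1
    threads: auto
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes        # packet ring shared with the kernel via mmap
    mmap-locked: yes     # lock the ring in RAM; requires a raised memlock limit
    tpacket-v3: yes      # TPACKET_V3 block mode, usually fewer kernel_drops in IDS mode
    ring-size: 200000    # per-thread ring size; raise if drops keep climbing
```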

Thanks! Edited the settings and I’ll let you know the results :slightly_smiling_face:

Hi @AlphaRaven, did you manage to have any success on your end?