Suricata 7.0.0 IPS AF_Packet+RSS huge drop in performance

Hi, I am currently running Suricata 7.0.0 with AF_PACKET capture + RSS in IPS mode.
I have modified a few parameters in the YAML file and pinned cores to worker threads.
I see huge Tx drops, which result in a performance drop compared to Suricata 6.0.3:

input: 4 Gbps traffic / 16K CPS / avg. pkt size: 580 bytes

Suricata 6.0.3: Rx drops 0, Tx drops 4.9%

Suricata 7.0.0: Rx drops 0, Tx drops 33.3%

Please find attached the YAML file that we used for both iterations.

Suricata 6.0.3 logs:

^C9/8/2023 -- 15:30:47 - - Signal Received. Stopping engine.
9/8/2023 -- 15:30:47 - - 0 new flows, 0 established flows were timed out, 0 flows in closed state
9/8/2023 -- 15:30:47 - - time elapsed 544.902s
9/8/2023 -- 15:30:48 - - 1038319 flows processed
9/8/2023 -- 15:30:48 - - (W#01-enp1s0f0) Kernel: Packets 12138193, dropped 0
9/8/2023 -- 15:30:48 - - (W#02-enp1s0f0) Kernel: Packets 12098146, dropped 0
9/8/2023 -- 15:30:48 - - (W#03-enp1s0f0) Kernel: Packets 12163974, dropped 0
9/8/2023 -- 15:30:48 - - (W#04-enp1s0f0) Kernel: Packets 12229230, dropped 0
9/8/2023 -- 15:30:48 - - (W#01-enp1s0f1) Kernel: Packets 9312569, dropped 0
9/8/2023 -- 15:30:48 - - (W#02-enp1s0f1) Kernel: Packets 9338674, dropped 0
9/8/2023 -- 15:30:48 - - (W#03-enp1s0f1) Kernel: Packets 9364308, dropped 0
9/8/2023 -- 15:30:48 - - (W#04-enp1s0f1) Kernel: Packets 9423072, dropped 0
9/8/2023 -- 15:30:48 - - Alerts: 0
9/8/2023 -- 15:30:49 - - ippair memory usage: 414144 bytes, maximum: 16777216
9/8/2023 -- 15:30:49 - - host memory usage: 398144 bytes, maximum: 33554432
9/8/2023 -- 15:30:50 - - cleaning up signature grouping structure... complete
9/8/2023 -- 15:30:50 - - Stats for 'enp1s0f0': pkts: 48629543, drop: 0 (0.00%), invalid chksum: 0
9/8/2023 -- 15:30:50 - - Stats for 'enp1s0f1': pkts: 37438623, drop: 0 (0.00%), invalid chksum: 0
9/8/2023 -- 15:30:50 - - Cleaning up Hyperscan global scratch
9/8/2023 -- 15:30:50 - - Clearing Hyperscan database cache

No kernel drops or memcap issues are logged in the stats file.

Suricata-7.0.0 logs:

^CNotice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2825]
Info: suricata: time elapsed 529.902s [SCPrintElapsedTime:suricata.c:1173]
Perf: flow-manager: 1853950 flows processed [FlowRecycler:flow-manager.c:1131]
Perf: af-packet: enp1s0f0: (W#01-enp1s0f0) kernel: Packets 12138193, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f0: (W#02-enp1s0f0) kernel: Packets 12098146, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f0: (W#03-enp1s0f0) kernel: Packets 12163965, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f0: (W#04-enp1s0f0) kernel: Packets 12229230, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#01-enp1s0f1) kernel: Packets 9312569, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#02-enp1s0f1) kernel: Packets 9338670, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#03-enp1s0f1) kernel: Packets 9364304, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#04-enp1s0f1) kernel: Packets 9423072, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Info: counters: Alerts: 0 [StatsLogSummary:counters.c:871]
Perf: ippair: ippair memory usage: 414144 bytes, maximum: 16777216 [IPPairPrintStats:ippair.c:296]
Perf: host: host memory usage: 398144 bytes, maximum: 33554432 [HostPrintStats:host.c:299]
Notice: device: enp1s0f0: packets: 48629534, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]
Notice: device: enp1s0f1: packets: 37438615, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]

No kernel drops or memcap issues are logged in the stats file.

Please let me know if I am missing any inputs to the Suricata engine.
suricata_af_ft_v1.yaml (79.0 KB)

Hi!
Your configuration file seems to be from 7.0.0-beta1; it would be good to sync it with 7.0.0.

Also, could you please check if your issue could be related to the newly added exception policies?

If this is the case, it could be caused by the old config: the setting may be defaulting to auto, which could be a reason for your drops.
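If you want to rule this out explicitly, the master switch can be set in suricata.yaml; this is only a sketch, and the right value depends on your deployment:

```yaml
# Master switch for the exception policies introduced in Suricata 7.
# "auto" applies drop-flow in IPS mode when exceptional conditions
# (e.g. memcap hits) occur; "ignore" keeps the pre-7.0 behavior of
# not acting on such conditions.
exception-policy: ignore
```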


Hi Shivani,
Thanks for the reply.
I have synced the YAML file with the 7.0.0 version.
After that, I still see kernel drops.
Please find attached the updated YAML file, and let me know if I am missing any inputs.

The modified params in the YAML are as below:

af-packet:
  - interface: enp1s0f0
    threads: 4
    defrag: no
    use-mmap: yes
    mmap-locked: yes
    cluster-type: cluster_qm
    cluster-id: 98
    copy-mode: ips
    copy-iface: enp1s0f1
    ring-size: 100000

  - interface: enp1s0f1
    threads: 4
    defrag: no #bsr
    use-mmap: yes
    mmap-locked: yes
    cluster-type: cluster_qm
    cluster-id: 98
    copy-mode: ips
    copy-iface: enp1s0f0
    ring-size: 100000

tls:
  encryption-handling: bypass

http:
  memcap: 12gb

max-pending-packets: 32768

runmode: workers

flow:
  memcap: 4gb
  hash-size: 256072
  prealloc: 300000
  emergency-recovery: 30

flow-timeouts:
  default:
    new: 15
    established: 30
    closed: 0
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-closed: 0
    emergency-bypassed: 10
  tcp:
    new: 15
    established: 60
    closed: 0
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-closed: 0
    emergency-bypassed: 10
  udp:
    new: 15
    established: 30
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-bypassed: 10
  icmp:
    new: 15
    established: 30
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-bypassed: 10

stream:
  memcap: 12gb
  checksum-validation: no
  inline: auto
  reassembly:
    memcap: 14gb
    depth: 1mb
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    segment-prealloc: 200000

mpm-algo: hs

spm-algo: hs

cpu-affinity:
  - management-cpu-set:
      cpu: [ 0 ]  # include only these CPUs in affinity settings
  - receive-cpu-set:
      cpu: [ 0 ]  # include only these CPUs in affinity settings
  - worker-cpu-set:
      #cpu: [ "all" ]
      cpu: [ "2-9" ]  # include only these CPUs in affinity settings
      mode: "exclusive"
      # Use explicitly 3 threads and don't compute number by using
      # detect-thread-ratio variable:
      # threads: 3
      prio:
        #low: [ 0 ]
        #medium: [ "1-2" ]
        #high: [ 3 ]
        #default: "medium"
        default: "high"
  #- verdict-cpu-set:
  #    cpu: [ 0 ]
  #    prio:
  #      default: "high"

Thanks
-B Suresh Reddy
suricata_af_ft_7.0.0.yaml (84.0 KB)

Thank you!
Could you please also share your stats.log from when the drops happen? As I understand it, kernel drops can have several causes.

Hi Shivani,
Please find attached the stats and Suricata log files.
Thanks
-B Suresh Reddy
stats_081423.log (209.4 KB)
suricata_081423.log (40.3 KB)

@sbhardwaj

Hi Shivani,
Did you get a chance to look at the logs? Please let me know if I am missing any inputs in the YAML file.
Thanks
-B Suresh Reddy

Hi, Suresh!
Thank you for sharing the stats. Following is what I could come up with:

  1. Since you're using the cluster_qm cluster type, please make sure that:
    a. RSS symmetric hashing is enabled
    b. NIC offloading is disabled, except rx/tx checksums
    c. proper affinity is set
    d. any other optimizations you can think of are applied (check the ones mentioned by Andreas: Suricata high capture.kernel_drops count - #8 by Andreas_Herz)
  2. Try increasing your ring-size.
  3. AF_PACKET v2 seems to be recommended with IPS, so make sure you're using that.

Indeed, drops shouldn't be as high as they currently are. Let's see if any of the above helps.
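For point 1, a typical setup looks roughly like the sketch below. The interface name is taken from this thread; the low-entropy hash key and which ethtool options are supported vary by NIC/driver (e.g. the key is 52 bytes on i40e instead of 40), so verify against your hardware's documentation:

```shell
#!/bin/sh
# Sketch: prepare a NIC for cluster_qm with 4 Suricata workers.
IFACE=enp1s0f0

# Disable offloads that interfere with Suricata's view of the traffic,
# keeping rx/tx checksum offload enabled.
ethtool -K "$IFACE" gro off lro off tso off gso off sg off

# Match the number of RSS queues to the number of worker threads.
ethtool -L "$IFACE" combined 4

# Low-entropy key commonly used to get symmetric RSS hashing, so both
# directions of a flow land on the same queue (40-byte key shown here).
ethtool -X "$IFACE" hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 4

# Hash on src/dst IP and ports for TCP/UDP.
ethtool -N "$IFACE" rx-flow-hash tcp4 sdfn
ethtool -N "$IFACE" rx-flow-hash udp4 sdfn
```

For point 3, AF_PACKET v2 can be forced with `tpacket-v3: no` in the af-packet interface section of suricata.yaml.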

suricata_af_ft_7.0.0_v1.yaml (84.0 KB)
@sbhardwaj
Hi Shivani,
With cluster_qm, I have applied the RSS settings with a script and have also set the IRQ affinity settings.
I have now tried increasing the ring-size to 200000 and am still facing the same issue:
the drop counters are high.

input traffic rate: 4 Gbps, avg packet size: 584 bytes
4 worker threads per interface, 8 worker threads in total.
Each worker thread is mapped to its own core:
cores 2,3,4,5 to the worker threads on intf1
cores 6,7,8,9 to the worker threads on intf2
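The IRQ affinity script does roughly the following (a simplified sketch, not the exact script from the attachment; IRQ naming in /proc/interrupts is driver-specific, so the match pattern is an assumption):

```shell
#!/bin/sh
# Sketch: pin each RX-queue IRQ of an interface to one of the cores
# its Suricata workers run on, rotating through the core list.
IFACE=enp1s0f0
set -- 2 3 4 5   # cores assigned to this interface's worker threads

# Find IRQ numbers whose /proc/interrupts line mentions the interface.
for irq in $(awk -v ifc="$IFACE" '$0 ~ ifc { gsub(":", "", $1); print $1 }' /proc/interrupts); do
  core=$1
  echo "$core" > "/proc/irq/$irq/smp_affinity_list"
  shift; set -- "$@" "$core"   # rotate to the next core
done
```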


Date: 8/24/2023 -- 12:11:15 (uptime: 0d, 00h 04m 14s)

Counter | TM Name | Value

capture.kernel_packets | Total | 86068149
capture.kernel_drops | Total | 27443404
capture.afpacket.polls | Total | 769632
capture.afpacket.poll_timeout | Total | 7038
capture.afpacket.poll_data | Total | 762586

Please find the log files attached.

Thanks
-B Suresh Reddy
stats_082423.log (127.0 KB)
suricata_082423.log (40.1 KB)
suricata_interface_rss_script.log (1.5 KB)

@pevma

@sbhardwaj

Hi Shivani,
Could you please let us know if we need to tweak a few more params in the YAML for the kernel drops mentioned in the stats file?

capture.kernel_packets | Total | 86068149
capture.kernel_drops | Total | 27443404

Thanks
-B Suresh Reddy

@sbhardwaj
Hi Shivani,
Thanks for your support. Commenting out "mmap-locked: yes" in the af-packet section of the YAML increased the performance.
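For anyone landing here later, the change amounts to roughly this in the af-packet section (abridged to the first interface, based on the config quoted earlier in this thread):

```yaml
af-packet:
  - interface: enp1s0f0
    threads: 4
    defrag: no
    use-mmap: yes
    # mmap-locked: yes   # leaving the ring unlocked resolved the drops here
    cluster-type: cluster_qm
    cluster-id: 98
    copy-mode: ips
    copy-iface: enp1s0f1
    ring-size: 200000
```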

input traffic rate: 4Gbps avg_packet_size: 584
4 worker threads per interface: total 8 worker threads
mapped each worker thread to core.
mapped 2,3,4,5 cores to worker threads on intf1
mapped 6,7,8,9 cores to worker threads on intf2

Suricata processed 1861394 flows; test duration: 100 sec.
Notice: threads: Threads created -> W: 8 FM: 1 FR: 1 Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1888]
^CNotice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2825]
Info: suricata: time elapsed 711.128s [SCPrintElapsedTime:suricata.c:1173]
Perf: flow-manager: 1861394 flows processed [FlowRecycler:flow-manager.c:1131]
Perf: af-packet: enp1s0f0: (W#01-enp1s0f0) kernel: Packets 12138194, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f0: (W#02-enp1s0f0) kernel: Packets 12098150, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f0: (W#03-enp1s0f0) kernel: Packets 12163964, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f0: (W#04-enp1s0f0) kernel: Packets 12229233, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#01-enp1s0f1) kernel: Packets 9312570, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#02-enp1s0f1) kernel: Packets 9338676, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#03-enp1s0f1) kernel: Packets 9364303, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Perf: af-packet: enp1s0f1: (W#04-enp1s0f1) kernel: Packets 9423073, dropped 0 [ReceiveAFPThreadExitStats:source-af-packet.c:2626]
Info: counters: Alerts: 696094 [StatsLogSummary:counters.c:871]
Perf: ippair: ippair memory usage: 414144 bytes, maximum: 16777216 [IPPairPrintStats:ippair.c:296]
Perf: host: host memory usage: 398144 bytes, maximum: 33554432 [HostPrintStats:host.c:299]
Notice: device: enp1s0f0: packets: 48629541, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]
Notice: device: enp1s0f1: packets: 37438622, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]

suricata_af_ft_7.0.0_v1_final.yaml (84.0 KB)

Thanks
-B Suresh Reddy