Suresh
(Suresh)
August 3, 2023, 3:39pm
1
Hi, I am currently running Suricata version “7.0.0-beta1 RELEASE” with DPDK capture in IPS mode.
I have modified a few parameters in the YAML file: pinned cores to the worker threads and bumped up the Rx/Tx descriptors. With 4 Gbps of input traffic I see packet drops.
intf1 worker threads are pinned to cores 2,3,4,5
intf2 worker threads are pinned to cores 6,7,8,9
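For reference, a rough sketch of the relevant suricata.yaml pieces for this setup (the PCI addresses, IPS pairing and core ranges follow the description above; the remaining values are placeholders, not the exact ones from the attached file):

dpdk:
  eal-params:
    proc-type: primary
  interfaces:
    - interface: 0000:01:00.0     # intf1
      threads: 4                  # workers pinned to cores 2-5
      copy-mode: ips              # inline pair with the other port
      copy-iface: 0000:01:00.1
      rx-descriptors: 4096        # bumped up (example value)
      tx-descriptors: 4096
      mempool-size: 65535
      mempool-cache-size: 257
    - interface: 0000:01:00.1     # intf2
      threads: 4                  # workers pinned to cores 6-9
      copy-mode: ips
      copy-iface: 0000:01:00.0
      rx-descriptors: 4096
      tx-descriptors: 4096
      mempool-size: 65535
      mempool-cache-size: 257

threading:
  cpu-affinity:
    - worker-cpu-set:
        cpu: [ "2-9" ]
        mode: "exclusive"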
3/8/2023 -- 21:00:50 - - Stats for '0000:01:00.1': pkts: 37438615, drop: 849331 (2.27%), invalid chksum: 0
3/8/2023 -- 21:00:50 - - Closing device 0000:01:00.1
3/8/2023 -- 21:00:50 - - Stats for '0000:01:00.0': pkts: 48629535, drop: 0 (0.00%), invalid chksum: 0
3/8/2023 -- 21:00:50 - - Closing device 0000:01:00.0
logs:
Network devices using DPDK-compatible driver
0000:01:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=vfio-pci unused=i40e
0000:01:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=vfio-pci unused=i40e
sudo dpdk-hugepages.py -s
Node Pages Size Total
0 4 1Gb 4Gb
stats.log:
Date: 8/3/2023 -- 21:00:49 (uptime: 0d, 00h 03m 33s)
Counter | TM Name | Value
capture.packets | Total | 86068150
capture.rx_errors | Total | 849331
capture.dpdk.imissed | Total | 849331
decoder.pkts | Total | 85218819
decoder.bytes | Total | 49614803584
Please find the attached YAML file and Suricata log file for reference.
Suresh
(Suresh)
August 3, 2023, 3:42pm
2
vjulien
(Victor Julien)
August 4, 2023, 5:26am
3
Please don't report issues against 7.0.0-beta1 now that 7.0.0 is out.
Suresh
(Suresh)
August 7, 2023, 6:06pm
4
Hi Victor,
After upgrading to 7.0.0 I still see the dpdk.imissed drop counters incrementing.
One more observation: throughput was reduced to half compared to the beta version.
Please find the attached YAML file and let me know if I am missing any inputs to the Suricata engine.
logs:
[1700] Notice: threads: Threads created -> W: 8 FM: 1 FR: 1 Engine started.
^C[1700] Notice: suricata: Signal Received. Stopping engine.
[1700] Info: suricata: time elapsed 354.677s
[1718] Perf: flow-manager: 2559690 flows processed
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_good_packets: 46537611
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_good_packets: 25352628
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_good_bytes: 14676244159
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_good_bytes: 12440792159
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_missed_errors: 263078
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_unicast_packets: 46800687
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_multicast_packets: 2
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_unknown_protocol_packets: 46800704
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_unicast_packets: 25352628
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_link_down_dropped: 2
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_size_64_packets: 16071974
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_size_65_to_127_packets: 15952123
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_size_128_to_255_packets: 5389373
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_size_256_to_511_packets: 1269895
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_size_512_to_1023_packets: 464829
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - rx_size_1024_to_1522_packets: 7652510
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_size_64_packets: 2028088
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_size_65_to_127_packets: 11509528
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_size_128_to_255_packets: 811380
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_size_256_to_511_packets: 2211352
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_size_512_to_1023_packets: 86484
[1709] Perf: dpdk: Port 1 (0000:01:00.1) - tx_size_1024_to_1522_packets: 8705812
[1709] Perf: dpdk: 0000:01:00.1: total RX stats: packets 46537611 bytes: 14676244159 missed: 263078 errors: 0 nombufs: 0
[1709] Perf: dpdk: 0000:01:00.1: total TX stats: packets 25352628 bytes: 12440792159 errors: 0
[1709] Perf: dpdk: (W#01-01:00.1) received packets 11680165
[1710] Perf: dpdk: (W#02-01:00.1) received packets 11617732
[1711] Perf: dpdk: (W#03-01:00.1) received packets 11610280
[1712] Perf: dpdk: (W#04-01:00.1) received packets 11629434
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_good_packets: 60781573
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_good_packets: 45956396
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_good_bytes: 47664923717
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_good_bytes: 14530212686
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_unicast_packets: 60781571
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_multicast_packets: 2
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_unknown_protocol_packets: 60781588
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_unicast_packets: 45956394
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_multicast_packets: 2
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - mac_local_errors: 1
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_size_64_packets: 5420353
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_size_65_to_127_packets: 13871774
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_size_128_to_255_packets: 5672861
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_size_256_to_511_packets: 4829673
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_size_512_to_1023_packets: 545335
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - rx_size_1024_to_1522_packets: 30441592
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_size_64_packets: 15486883
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_size_65_to_127_packets: 15835936
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_size_128_to_255_packets: 5321528
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_size_256_to_511_packets: 1255928
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_size_512_to_1023_packets: 459951
[1713] Perf: dpdk: Port 0 (0000:01:00.0) - tx_size_1024_to_1522_packets: 7596186
[1713] Perf: dpdk: 0000:01:00.0: total RX stats: packets 60781573 bytes: 47664923717 missed: 0 errors: 0 nombufs: 0
[1713] Perf: dpdk: 0000:01:00.0: total TX stats: packets 45956396 bytes: 14530212686 errors: 0
[1713] Perf: dpdk: (W#01-01:00.0) received packets 15232658
[1714] Perf: dpdk: (W#02-01:00.0) received packets 15220202
[1715] Perf: dpdk: (W#03-01:00.0) received packets 15227420
[1716] Perf: dpdk: (W#04-01:00.0) received packets 15101293
[1700] Info: counters: Alerts: 0
[1700] Perf: ippair: ippair memory usage: 414144 bytes, maximum: 16777216
[1700] Perf: host: host memory usage: 398144 bytes, maximum: 33554432
[1700] Notice: device: 0000:01:00.1: packets: 46800689, drops: 263078 (0.56%), invalid chksum: 0
[1700] Perf: dpdk: 0000:01:00.1: closing device
[1700] Notice: device: 0000:01:00.0: packets: 60781573, drops: 0 (0.00%), invalid chksum: 0
[1700] Perf: dpdk: 0000:01:00.0: closing device
Counter | TM Name | Value
capture.packets | Total | 107582262
capture.rx_errors | Total | 263078
capture.dpdk.imissed | Total | 263078
33% of traffic was dropped with the 7.0.0 release image, whereas 7.0.0-beta1 had only a 2% drop.
With 5 Gbps of input traffic I still see dpdk.imissed.
suricata_dpdk_ft_7.0.yaml (79.8 KB)
lukashino
(Lukas Sismis)
August 8, 2023, 7:05am
5
Hi Suresh,
Your DPDK config seems more or less correct; the only thing you could try is bumping up the mempool size, to something like:
mempool-size: 262143
mempool-cache-size: 511
rx-descriptors: 16384 # -> maybe try this with 32768 descriptors but that should be it
tx-descriptors: 16384
I believe your mempool size should be at least the number of descriptors multiplied by the number of threads, which is why I suggest increasing it.
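As a rough sanity check of that rule of thumb with the numbers above (assuming 4 worker threads per interface), the suggested values work out like this:

# per interface: rx-descriptors x worker threads = 16384 x 4 = 65536
# buffers that can sit in the rx rings at once, so the mempool should
# be comfortably larger than that:
mempool-size: 262143        # 2^18 - 1; DPDK mempools are most memory-efficient at a power of two minus one
mempool-cache-size: 511     # 262143 is evenly divisible by 511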
I would also increase the flow memcap a bit more, to something like 4gb.
Suresh
(Suresh)
August 14, 2023, 3:31pm
6
Hi Lukas,
Thanks for the reply. After modifying the Rx/Tx descriptors, performance matches the beta version: with 4 Gbps of traffic I see a 1.9% drop overall.
I still see capture.dpdk.imissed errors.
Date: 8/14/2023 -- 19:56:15 (uptime: 0d, 00h 04m 43s)
Counter | TM Name | Value
capture.packets | Total | 107582262
capture.rx_errors | Total | 899415
capture.dpdk.imissed | Total | 899415
decoder.pkts | Total | 106682847
decoder.bytes | Total | 62031628766
decoder.ipv4 | Total | 106682843
decoder.ipv6 | Total | 4
Please find the attached YAML file and stats.log file, and let me know if I need further tuning in the YAML file.
Thanks
-B Suresh Reddy
stats_dpdk_7_081423.log (168.1 KB)
suricata_dpdk_7_081423.log (55.1 KB)
suricata_dpdk_ft_7.0_v1.yaml (84.2 KB)
lukashino
(Lukas Sismis)
August 15, 2023, 6:41pm
7
Hey Suresh,
great to hear that.
I think having 4 workers per interface facing 4 Gbps of traffic might be too little resource-wise. I would suggest enabling some optimizations such as flow bypass, encrypted-flow bypass, lowering the number of rules, lowering the stream inspection depth, etc. That kind of tuning depends on your security policy, but it can improve Suricata's performance; see the sketch below for two examples.
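For instance, two of those knobs would look roughly like this in suricata.yaml (illustrative values only; what is acceptable depends on your policy):

app-layer:
  protocols:
    tls:
      # stop inspecting flows once they switch to fully encrypted traffic
      encryption-handling: bypass

stream:
  reassembly:
    depth: 1mb        # lower inspection depth = less reassembly work per flow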
Suresh
(Suresh)
August 16, 2023, 8:39am
8
Hi Lukas,
Could you please clarify: do we need to increase the worker threads per interface for 4 Gbps?
Some optimizations are already in place w.r.t. flow/http/stream, as shown below.
Let me know if I need to add more optimizations.
tls:
  encryption-handling: bypass

http:
  memcap: 12gb

max-pending-packets: 32768
runmode: workers

flow:
  memcap: 4gb
  hash-size: 256072
  prealloc: 300000
  emergency-recovery: 30

flow-timeouts:
  default:
    new: 15
    established: 30
    closed: 0
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-closed: 0
    emergency-bypassed: 10
  tcp:
    new: 15
    established: 60
    closed: 0
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-closed: 0
    emergency-bypassed: 10
  udp:
    new: 15
    established: 30
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-bypassed: 10
  icmp:
    new: 15
    established: 30
    bypassed: 15
    emergency-new: 5
    emergency-established: 15
    emergency-bypassed: 10

stream:
  memcap: 12gb
  checksum-validation: no
  inline: auto
  reassembly:
    memcap: 14gb
    depth: 1mb
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    segment-prealloc: 200000

mpm-algo: hs
spm-algo: hs

cpu-affinity:
  - management-cpu-set:
      cpu: [ 0 ]  # include only these CPUs in affinity settings
  - receive-cpu-set:
      cpu: [ 0 ]  # include only these CPUs in affinity settings
  - worker-cpu-set:
      #cpu: [ "all" ]
      cpu: [ "2-9" ]  # include only these CPUs in affinity settings
      mode: "exclusive"
      # Use explicitly 3 threads and don't compute number by using
      # detect-thread-ratio variable:
      # threads: 3
      prio:
        #low: [ 0 ]
        #medium: [ "1-2" ]
        #high: [ 3 ]
        #default: "medium"
        default: "high"
  #- verdict-cpu-set:
  #    cpu: [ 0 ]
  #    prio:
  #      default: "high"
lukashino
(Lukas Sismis)
August 16, 2023, 8:48am
9
Hi
could you please clarify: do we need to increase the worker threads per interface for 4 Gbps?
Yes, increase the worker (thread) count on both interfaces to e.g. 6 or 8 threads; the sketch below shows how the CPU pinning could be extended to match.
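Hypothetically, with 8 workers per interface the exclusive pinning would have to grow to 16 cores as well (the core numbers here are placeholders and depend on what the box actually has free on the NIC's NUMA node):

dpdk:
  interfaces:
    - interface: 0000:01:00.0
      threads: 8
    - interface: 0000:01:00.1
      threads: 8

threading:
  cpu-affinity:
    - worker-cpu-set:
        cpu: [ "2-17" ]   # 16 cores for 16 exclusive workers
        mode: "exclusive"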
stream:
  memcap: 12gb
  checksum-validation: no
  inline: auto
  reassembly:
    memcap: 14gb
With regards to bypass: I believe you are missing bypass: true in the stream: section.
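A minimal sketch of where that would go, mirroring the stream block quoted above:

stream:
  memcap: 12gb
  checksum-validation: no
  inline: auto
  bypass: true          # bypass flows once the reassembly depth is reached
  reassembly:
    memcap: 14gb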
Lukas