7.0.0-beta1 DPDK alert performance problem? When I use a rule to generate alerts (even suppressed alerts), Suricata slows down and drops packets.
Testing 7.0.0-beta1 in DPDK IPS mode (copy-mode: ips):

- No rules, all NSM functions + pcap log:
  ~10 Gbps of mixed traffic works fine, no packet loss, CPU stays at 100%.
- Added 1 rule that generates many alerts and writes them to the event log (for testing only; the goal is ~5000 logs/sec with low network traffic):
  `alert ip any any -> any any (msg:"per_pkt_alert_test"; flow:no_stream; rev:1; sid:1;)`
  ~80 Mbps of traffic, about 30% packet loss; some CPUs drop to about 20%-40% usage (in htop).
- Added 1 rule that generates many alerts but does not log them to disk:
  `alert ip any any -> any any (msg:"per_pkt_alert_test"; flow:no_stream; noalert; rev:1; sid:1;)`
  ~80 Mbps of traffic, 30% packet loss; the same CPUs drop to about 20%-40% usage (in htop).

So it is not about disk speed: whenever there are many alerts, Suricata slows down. But why?
af_packet mode has no such problem.
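For reference, one way to reproduce the "suppressed alerts" case above is a suppress entry in threshold.config (referenced from suricata.yaml via the `threshold-file` option); the gen_id/sig_id values below match the sid:1 test rule, and the behavioural note in the comment is an assumption consistent with the observed results:

```
# threshold.config — suppress every alert from the test rule (gen_id 1, sid 1).
# The signature still matches inside the detect engine; only the output is
# dropped, so suppression alone would not be expected to recover throughput.
suppress gen_id 1, sig_id 1
```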
Here are the configs.

Machine: Intel Xeon with 36 cores, HT disabled + Mellanox MCX512A NIC

Kernel cmdline

```
BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.14.0-70.26.1.el9_0.x86_64 root=UUID=99bb48ce-5342-4094-8ca9-48cf3a2a467f ro hugepagesz=1G hugepages=36 default_hugepagesz=1G transparent_hugepage=never crashkernel=160M nmi_watchdog=0 audit=0 nosoftlockup processor.max_cstate=0 intel_idle.max_cstate=0 hpet=disable mce=ignore_ce tsc=reliable numa_balancing=disable isolcpus=1-35 rcu_nocbs=1-35 nohz_full=1-35
```
Interface

```yaml
dpdk:
  eal-params:
    proc-type: primary
  interfaces:
    - interface: 0000:c3:00.0
      threads: 16
      promisc: true
      multicast: true
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.1
    - interface: 0000:c3:00.1
      threads: 16
      promisc: true
      multicast: true
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.0
```
CPU Affinity

```yaml
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "33-35" ]  # include only these CPUs in affinity settings
    #- receive-cpu-set:
    #    cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "1-32" ]
        mode: "exclusive"
        # Use explicitly 32 threads instead of computing the number from the
        # detect-thread-ratio variable:
        threads: 32
        prio:
          low: [ 0 ]
          medium: [ "1-35" ]
          high: [ ]
          default: "medium"
```
Is this a bug? How can I resolve this problem, or should I just go back to af_packet mode?