7.0.0-beta1 DPDK alert performance problem?

When I use a rule that generates alerts (even suppressed alerts), Suricata slows down and drops packets.

Tests with 7.0.0-beta1 in DPDK mode (IPS mode, copy-mode: ips):

  1. No rules, all NSM functions + pcap log.
    ~10 Gbps mixed traffic, works fine, no packet loss. CPU stays at 100%.

  2. Add one rule that generates many alerts and writes them to the event log.
    For testing only; I just want to get about ~5000 logs/sec with low network traffic.
    alert ip any any -> any any (msg:"per_pkt_alert_test"; flow:no_stream; rev:1; sid:1;)
    ~80 Mbps traffic, about 30% packet loss. Some CPU cores drop to about 20%-40% usage (in htop).

  3. Add one rule that generates many alerts but does not log them to disk.
    alert ip any any -> any any (msg:"per_pkt_alert_test"; flow:no_stream; noalert; rev:1; sid:1;)
    ~80 Mbps traffic, 30% packet loss, some CPU cores drop to about 20%-40% usage (in htop).
    So it is not about disk speed; whenever there are a lot of alerts it slows down.

But why?

af_packet mode has no problem.

Here are the configs.
Machine: Intel Xeon with 36 cores, HT disabled + Mellanox MCX512A NIC
Kernel cmdline

BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.14.0-70.26.1.el9_0.x86_64 root=UUID=99bb48ce-5342-4094-8ca9-48cf3a2a467f ro hugepagesz=1G hugepages=36 default_hugepagesz=1G transparent_hugepage=never crashkernel=160M nmi_watchdog=0 audit=0 nosoftlockup processor.max_cstate=0 intel_idle.max_cstate=0 hpet=disable mce=ignore_ce tsc=reliable numa_balancing=disable isolcpus=1-35 rcu_nocbs=1-35 nohz_full=1-35

Interfaces

dpdk:
  eal-params:
    proc-type: primary
  
  interfaces:
    - interface: 0000:c3:00.0 
      threads: 16
      promisc: true 
      multicast: true 
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.1
	  
    - interface: 0000:c3:00.1
      threads: 16
      promisc: true 
      multicast: true 
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.0

CPU Affinity

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "33-35" ]  # include only these CPUs in affinity settings
    #- receive-cpu-set:
       # cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "1-32" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute number by using
        # detect-thread-ratio variable:
        threads: 32
        prio:
          low: [ 0 ]
          medium: [ "1-35"  ]
          high: [  ]
          default: "medium"

Is it a bug? How can I resolve this problem, or should I just go back to af_packet mode?

Hi @abigyellowdog

Sorry for the delayed response.
Just wanted to check - were your second and third test cases really at 80 Mbps? That seems really low for drops to occur, especially with such settings (32 workers). But you mention your CPU core usage drops to 20-40%. That should never happen with DPDK - CPUs are supposed to keep polling the NIC and stay at 100% usage all the time (even with no packets coming in). That indicates something is really blocking the CPU cores. I did IPS tests for Suricon 2022 (talks available soon) and had no problems with performance there - I had the full ET Open ruleset enabled and got ~600 Mbps per worker (as usual for my CPU), running on 8 workers in total.

Just to be sure - can you try to increase the mempool size, cache size and rx/tx descriptors and see if it helps?
I would try something like this on both interfaces:

mempool-size: 262143
mempool-cache-size: 511
rx-descriptors: 8192
tx-descriptors: 8192
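
For illustration, your first interface block would then look something like this (everything else unchanged - this is just a sketch to rule out buffer exhaustion, the exact values can be tuned further):

    - interface: 0000:c3:00.0
      threads: 16
      promisc: true
      multicast: true
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 262143        # was 65535
      mempool-cache-size: 511     # was 257
      rx-descriptors: 8192        # was 1024
      tx-descriptors: 8192        # was 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.1

with the same change mirrored on 0000:c3:00.1.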

I would try running it on a lower number of cores - say 4 cores per interface.
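
As a rough sketch of what I mean (keeping your management-cpu-set as it is), 4 workers per interface with a matching worker-cpu-set would look roughly like:

dpdk:
  interfaces:
    - interface: 0000:c3:00.0
      threads: 4
      # ... rest of the interface settings as above
    - interface: 0000:c3:00.1
      threads: 4
      # ... rest of the interface settings as above

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "33-35" ]
    - worker-cpu-set:
        cpu: [ "1-8" ]
        mode: "exclusive"
        threads: 8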

Also, could you please attach a perf top log (the output of perf top captured while Suricata is processing traffic)?

Thanks.
Lukas