CPU usage in version 6.0.0

Hello Team.

Did CPU usage change in Suricata 6.0.0 compared to the previous version?

I installed Suricata 6.0.0 and wrote a new suricata.yaml for it. However, even though Suricata was not processing any packets, it kept consuming CPU. I initially took this to be a suricata.yaml configuration problem, but when I ran the same suricata.yaml with version 5.0.4, there was no noticeable CPU usage at idle.

After changing several settings, I suspect the CPU usage is related to the management CPUs. As I increased the number of flow threads (managers/recyclers), CPU usage increased accordingly, and under cpu-affinity, CPU load grew as the management CPU set grew. (The settings I varied are sketched below.)
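Here is a minimal sketch of the two settings in question, assuming the stock suricata.yaml layout; the CPU numbers are placeholders matching the "2 flow threads, 2 management CPUs" case:

```yaml
flow:
  managers: 2        # flow manager threads
  recyclers: 2       # flow recycler threads

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0, 1 ]   # CPUs reserved for management threads
```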

※ root@suricata-perf: guest VM (Fedora 32)
※ root@kvm: KVM host (CentOS 8.2.2004)

  1. (v6.0.0) 2 flow threads, 2 CPUs in management-cpu-set

  2. (v6.0.0) 2 flow threads, 4 CPUs in management-cpu-set

  3. (v5.0.4) 2 flow threads, 4 CPUs in management-cpu-set

I run Suricata in a qemu-kvm guest, with the guest CPU configuration and NIC passthrough set up. The qemu-kvm configuration is still being tuned, but the idle-state CPU usage of 6.0.0 inside the guest VM puts a heavy load on the host, which looks like CPU polling. A 6.0.0 instance with a simple configuration under Hyper-V also showed some idle CPU usage, but not the same polling behavior as under qemu-kvm.

This is not a request for help with qemu-KVM. I just want to know whether version 6.0.0 uses more CPU at idle than version 5.0.

I need help with the above.

suricata.yaml (71.3 KB)

Could you run `perf top -p $(pidof suricata)` in both scenarios? That might give us a hint about where the CPU overhead is coming from.

Both results below were captured 30 seconds after running the command.

5.0.4 (flow managers: 2, affinity: 2)

6.0.0 (flow managers: 2, affinity: 2)

The event count in the 6.0.0 perf result was much higher.

Do both outputs stay the same over a longer period while traffic is being inspected?

The aggregated numbers differed, and the 6.0.0 figures were still higher.
I connected a client and a server and tested with iperf3.

Throughput (client: `iperf3 -c $SERVER_IP -p 443 -T 100 -P 100`, server: `iperf3 -s -p 443`)

  • Client-server loopback: 9.41 Gbit/s
  • 6.0.0: 9.40 ~ 9.41 Gbit/s
  • 5.0.4: 9.34 ~ 9.38 Gbit/s

Common settings

  • flow managers: 2, management CPUs (affinity): 2
  • stats enabled (interval: 10s)
  • rules loaded: zero
  • NIC multi-queue: 10
  • 2 × Intel X540-T2
  • Interface settings (offloads disabled, etc.) followed the High Performance Configuration section of the manual; see the sketch after this list.
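For completeness, a rough sketch of the kind of per-interface tuning applied, following the High Performance Configuration section of the docs; the interface name enp1s0f0 is a placeholder, and the exact feature list may differ from what I actually ran:

```bash
# Hypothetical example of the interface tuning used in these tests.
ethtool -L enp1s0f0 combined 10                      # 10 NIC queues (multi-queue)
ethtool -K enp1s0f0 gro off lro off tso off gso off sg off rx off tx off
ethtool -A enp1s0f0 rx off tx off                    # disable pause frames
```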

5.0.4 perf

5.0.4 stats

6.0.0 perf

6.0.0 stats