In Suricata 6.0.0, is there any change in CPU usage compared to the previous version?
This time I installed Suricata 6.0.0 and configured a new suricata.yaml. However, even though Suricata was not processing any packets, CPU usage kept occurring. I initially took it for a suricata.yaml configuration problem, but when I ran the same suricata.yaml with version 5.0.4, there was no noticeable CPU usage in the idle state.
After changing several settings, I suspect the CPU usage is related to the management CPUs. As I increased the number of flow threads (managers/recyclers), CPU usage increased accordingly, and in cpu-affinity, the load grew as more CPUs were added to the management-cpu-set.
I run Suricata in qemu-kvm, with the CPU configuration and NIC passthrough set up there. The qemu-kvm configuration is still being adjusted, but the idle CPU usage of 6.0.0 installed in the guest VM puts a heavy load on the host PC, looking like CPU polling. A 6.0.0 instance with a simple configuration on Hyper-V also showed some idle CPU usage, but not the same polling situation as with qemu-kvm.
This is not a request for help with qemu-kvm. I just want to know whether version 6.0.0 uses more CPU than version 5.0 when idle.
For more accurate results I modified suricata.yaml: I reduced the flow threads to 1 and assigned the management-cpu-set in cpu-affinity to cpu: [ 0 ] only (sketched below). The test was run without any traffic. Please tell me if further testing is needed.
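For reference, the relevant parts of the adjusted suricata.yaml looked roughly like this (a sketch covering only the settings mentioned above, not the complete file):

flow:
  managers: 1    # reduced to a single flow manager thread
  recyclers: 1   # reduced to a single flow recycler thread

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]   # management threads pinned to CPU 0 only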
In Suricata, the thread on CPU 0 is the one that appears to be polling in the htop output under KVM.
Both results were collected for about 30 seconds after starting each version.
Does this not happen in other KVM environments (a different OS, different library versions, etc.)?
It should show a diff - I would expect at least - as in the previously shared screenshots the busy CPUs were the ones with the flow threads on.
Can you please re-share a screenshot of perf top for the busy CPUs after you have made the config change?
Above, we mainly used 2 threads and CPU affinity for this. Is this what you mean?
flow:
  memcap: 128mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 10
  managers: 2 # default to one flow manager
  recyclers: 2 # default to one flow recycler thread

# Runmode the engine should use. Please check --list-runmodes to get the available
# runmodes for each packet acquisition method. Default depends on selected capture
# method. 'workers' generally gives best performance.
runmode: workers

# Suricata is multi-threaded. Here the threading can be influenced.
threading:
  set-cpu-affinity: yes
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to the all runmodes:
  # management-cpu-set is used for flow timeout handling, counters
  # worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  # receive-cpu-set is used for capture threads
  # verdict-cpu-set is used for IPS verdict threads
  #
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "0", "1" ] # include only these CPUs in affinity settings
With these settings, the result was as shown in the attached picture.
Thank you for the update.
In the screenshots above for 6.0.0, the perf top commands are for CPUs 0 and 1.
Judging by the htop output (the pegged CPUs are 3 and 15 in htop), we actually need perf top -C 2 -g -K and perf top -C 14 -g -K on the root@kvm~ host (not the Suricata VM guest) to get an idea of what the problem might be.
Can you please share that for 6.0.0?
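For convenience, these are the two host-side commands being asked for, run on the KVM host (not inside the Suricata guest) while 6.0.0 is running; the comments map perf's 0-based CPU numbers to htop's 1-based numbering:

perf top -C 2 -g -K     # host CPU 2 (shown as CPU 3 in htop)
perf top -C 14 -g -K    # host CPU 14 (shown as CPU 15 in htop)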
Oh, sorry. With 6.0.0 running, I checked the information on the KVM host again. In all figures, the top terminal is CPU #2 (#3 in htop) and the bottom is CPU #14 (#15 in htop).
The main loops in the flow manager and recycler threads switched from a pthread condition wait to a simpler usleep loop. I wonder if that is what works poorly with KVM.
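To illustrate the difference between the two wait styles, here is a minimal standalone sketch; it is not Suricata's actual flow manager/recycler code, and the 1-second timeout and 100-microsecond interval are arbitrary illustration values. A condition wait blocks in the kernel until it is signalled or times out, while a usleep loop wakes on a timer at a fixed rate, and in a qemu-kvm guest each of those timer wakeups has to be serviced by the hypervisor, which could plausibly show up as constant load on the host.

/* Sketch of the two wait styles; not Suricata code.
 * Build with: gcc -o waitstyles waitstyles.c -lpthread */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t ctrl_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ctrl_cond  = PTHREAD_COND_INITIALIZER;
static volatile bool running = true;

/* Condition-wait style loop: the thread sleeps in the kernel and only
 * wakes when signalled or when the (hypothetical) 1-second timeout expires. */
static void *manager_condwait(void *arg)
{
    (void)arg;
    while (running) {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += 1;
        pthread_mutex_lock(&ctrl_mutex);
        pthread_cond_timedwait(&ctrl_cond, &ctrl_mutex, &ts);
        pthread_mutex_unlock(&ctrl_mutex);
        /* ... periodic housekeeping would run here ... */
    }
    return NULL;
}

/* usleep style loop: sleep for a short fixed interval and re-check.
 * Every usleep() is a timer wakeup; in a qemu-kvm guest each wakeup is
 * serviced by the hypervisor, even when there is no work to do.
 * The 100 us interval is a made-up value for illustration. */
static void *manager_usleep(void *arg)
{
    (void)arg;
    while (running) {
        usleep(100);
        /* ... periodic housekeeping would run here ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, manager_condwait, NULL);
    pthread_create(&t2, NULL, manager_usleep, NULL);
    sleep(5);                            /* let both loops run briefly */
    running = false;
    pthread_cond_broadcast(&ctrl_cond);  /* wake the cond-wait loop */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("done\n");
    return 0;
}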