Suricata 6.0.4 thread CPU affinity behavior wrong on some hardware


I installed Suricata 6.0.4 on a Dell R740 server. It has an unusual CPU core numbering.

lscpu shows the NUMA-node-to-core mapping below; ens1f1 is attached to NUMA node 0 and ens5f1 to NUMA node 1:
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28…
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29…

Some of the relevant config:

runmode: workers

af-packet interfaces:

  • interface: ens1f1
    threads: 4
    cluster-id: 97
    cluster-type: cluster_qm
  • interface: ens5f1
    threads: 4
    cluster-id: 95
    cluster-type: cluster_qm

CPU affinity:

  • worker-cpu-set:
    cpu: [ "2", "4", "6", "8", "3", "5", "7", "9" ]
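
For context, these settings sit in suricata.yaml roughly like this (the management-cpu-set, mode, and prio lines are defaults I am assuming here, not part of what I listed above):

```yaml
runmode: workers

af-packet:
  - interface: ens1f1
    threads: 4
    cluster-id: 97
    cluster-type: cluster_qm
  - interface: ens5f1
    threads: 4
    cluster-id: 95
    cluster-type: cluster_qm

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "0" ]          # assumed: management threads on core 0
    - worker-cpu-set:
        cpu: [ "2", "4", "6", "8", "3", "5", "7", "9" ]
        mode: "exclusive"
        prio:
          default: "high"
```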

But when I run Suricata with -vvvv, the log shows the thread-to-core binding, and I see it assigns:
cores 2, 3, 4, 5 to ens1f1
and
cores 6, 7, 8, 9 to ens5f1

On most servers, where the NUMA layout is something like
NUMA node0: 0-7
NUMA node1: 8-15
this works fine.

But on my server, with its unusual core ID assignment, this puts half of each interface's threads on the other NUMA node, which adds latency.

I cannot change the core-ID-to-NUMA mapping because it is defined in the ACPI tables by the motherboard vendor.

I see there was a bug report about this around 2017-2018, but no solution was given.

If you need any more info, just reply.
Thanks.

That is a bigger issue, or at least a complex one. Some things to consider:

  1. You could check whether there are any NUMA-related settings in the BIOS; for example, NUMA clustering can affect this ordering. It depends highly on the platform.

  2. Since you use cluster_qm, are you also applying the set_irq_affinity settings to pin the interfaces' queues to CPUs?

  3. One thing I am working on is the impact of the order of the interface sections. Everything needs to be aligned, and there are several parts where NUMA plays a role, both on the OS side and on the Suricata side. What seems to happen, at least based on my tests, is that Suricata takes the 4 threads defined in the first interface section without considering the order you gave in the CPU pool.

Maybe point 2 already fixes it for you; at least it is worth a try.

1. I checked and also asked the server vendor's technical support. They said this is fixed in the BIOS. The only NUMA-related option I have in the BIOS is to enable or disable "Node Interleaving", and neither setting changes the CPU core numbering on this server. Searching for more information, the conclusion is that the numbering depends on the motherboard (defined by the vendor and usually cannot be changed).

2. I have already done that correctly: offloads disabled, symmetric RSS set, queue count equal to the threads per interface, and IRQ affinity pinned to the local NUMA node. None of that is a problem, and for this issue it can be ignored; I am only talking about the wrong CPU affinity of the threads.

3. It seems Suricata sorts the CPUs defined in the YAML config and then hands them out per interface in that sorted order. I just want it to follow the order I configured in the YAML instead of sorting from smallest to largest. (If there were a way to do that, it would work with any core numbering.) A concrete illustration follows below.
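
To make that concrete, here is the mapping as I understand it (the per-interface assignment is what the -vvvv output showed):

```yaml
# Configured order (what I want Suricata to honour):
#   cpu: [ "2", "4", "6", "8", "3", "5", "7", "9" ]
#   -> ens1f1: 2,4,6,8  (all NUMA node 0)
#   -> ens5f1: 3,5,7,9  (all NUMA node 1)
#
# What Suricata 6.0.4 actually does (sorts the list first):
#   sorted: 2,3,4,5,6,7,8,9
#   -> ens1f1: 2,3,4,5  (half NUMA 0, half NUMA 1)
#   -> ens5f1: 6,7,8,9  (half NUMA 0, half NUMA 1)
```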

And there is a workaround.
If threads_needed_per_interface < cores_per_cpu / 2:
just assign the first interface to the lower-numbered cores in NUMA node 0
and the second interface to the higher-numbered cores in NUMA node 1 (a sketch follows below).

If instead threads_needed_per_interface >= cores_per_cpu / 2:
enable hyper-threading,
isolate the higher-numbered half of the logical cores on NUMA node 0 (never used, no process will run on them),
isolate the lower-numbered half of the logical cores on NUMA node 1 (never used, no process will run on them),
and set the CPU affinity on the remaining logical cores.
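
For the first case, a sketch of the idea with this box's numbering (the exact core IDs are just an example; the point is that after sorting, the first four still land on NUMA node 0 and the last four on NUMA node 1):

```yaml
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - worker-cpu-set:
        # Low even IDs sit on NUMA node 0, high odd IDs on NUMA node 1,
        # so even after Suricata sorts this list ascending it assigns
        # 2,4,6,8 to the first interface and 9,11,13,15 to the second.
        cpu: [ "2", "4", "6", "8", "9", "11", "13", "15" ]
        mode: "exclusive"
```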

But it is only a workaround.

Hello,
I'm running into the same issue with Suricata 7: 4 interfaces, 3 of them on NUMA node 0 and 1 on NUMA node 1. It's just a mess when Suricata does this instead of following what I told it to do in the YAML.

Sorry about your experience. Could you please create a ticket on our Redmine or create another forum post explaining the issue and what you would expect Suricata to do instead?
That will help us see whether there is a workaround, or a solid reason to implement something like this.

Edit: Just saw you already started a new post. Thank you. :slight_smile:
