CPU affinity with NUMA nodes

I have the following NUMA layout:

root# lscpu | grep NUMA
NUMA node(s):                       2
NUMA node0 CPU(s):                  0,2,4,6,8,10,12,14,16,18
NUMA node1 CPU(s):                  1,3,5,7,9,11,13,15,17,19

In the documentation I see the following CPU sets defined:

threading:
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "1-10" ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ "0-10" ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "18-35", "54-71" ]
        mode: "exclusive"
        prio:
          low: [ 0 ]
          medium: [ "1" ]
          high: [ "18-35", "54-71" ]
          default: "high"

Because my layout is quite different, I configured my affinity setup like this:

  cpu-affinity:
    - management-cpu-set:
        cpu: [ 18,19 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 1,2 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "0,2,4,6,8,10,12,14,16", "1,3,5,7,9,11,13,15,17" ]
        mode: "exclusive"
        # Explicitly use 3 threads instead of computing the number from the
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ 0 ]
          medium: [ "1-10" ]
          high: [ "0,2,4,6,8,10,12,14,16", "1,3,5,7,9,11,13,15,17" ]
          default: "high"

That config is not parsed correctly: a CPU entry only works as a consecutive range such as "18-35", not as a comma-separated list such as "0,2,4,6".

Error: affinity: worker-cpu-set: invalid cpu range (not an integer): "0,2,4,6,8,10,12,14,16"

How exactly should I read this? When I remove the quotes, each number becomes its own list item and the line is parsed, but then the NUMA layout is lost: the set is effectively just "0-17", i.e. plain "0,1,2,3,4,...". For best tuning, the 2 NUMA nodes should be defined separately so that locality is used and things become more efficient.
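If I read the parser behaviour correctly (and I may well be wrong), each list item has to be a single integer or a dash range, so the closest I can get is listing every CPU as its own item. But that still collapses both nodes into one flat set rather than two NUMA groups:

  cpu-affinity:
    - worker-cpu-set:
        # assumption: each item must be a single integer or an "x-y" range;
        # a quoted "0,2,4,..." string is what triggers the "not an integer" error
        cpu: [ 0, 2, 4, 6, 8, 10, 12, 14, 16, 1, 3, 5, 7, 9, 11, 13, 15, 17 ]
        mode: "exclusive"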

Hi and welcome!

I would focus on NUMA locality only if you actually have at least 1 NIC on EACH NUMA node.
Otherwise, I would just specify the CPUs as e.g. 0,1,2,3...17.
If you have multiple NICs spread across the system, then assigning threads to multiple NICs per NUMA node is not supported at the moment.
What you can do is start 2 (or more) Suricata processes, each running on its own NUMA node, sharing the common config and extending it with CPU- and NIC-specific settings.
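To sketch what I mean (file names are just examples, and I am assuming the yaml include: directive here), each instance gets a small config that pulls in the shared settings and adds its own NIC and CPU set:

# /etc/suricata/suricata-node0.yaml  -- example name, one file per NUMA node
include: /etc/suricata/suricata-common.yaml   # the shared part of your current config

af-packet:
  - interface: eno1            # the NIC attached to NUMA node 0

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - worker-cpu-set:
        # CPUs of NUMA node 0, listed one by one
        cpu: [ 0, 2, 4, 6, 8, 10, 12, 14, 16 ]
        mode: "exclusive"

The second instance would do the same with eno2 and the node 1 CPUs (1,3,5,...,17).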


Interesting, the documentation seems to point at this possibility, right?

I have 2 NICs and 2 NUMA nodes, so it would be cool to make use of NUMA locality to increase performance. If the documented per-NIC threading is not really supported or worthwhile, then I'll look into running multiple processes.

Usually this is done with systemd by passing an argument that selects a custom config file for each instance: Manage multiple service instances with systemctl | Opensource.com

I don't see this in the systemd unit that ships with Suricata. Do I have to create a systemd override or template unit myself to load the custom config? That would work too, but it's less pretty. So, something like this then?
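Something like this is what I have in mind, a rough sketch of a template unit (paths and file names are my own guesses, not anything shipped by Suricata):

# /etc/systemd/system/suricata@.service  -- hypothetical template unit
[Unit]
Description=Suricata instance %i
After=network.target

[Service]
# %i is the instance name, e.g. "node0" -> /etc/suricata/suricata-node0.yaml
ExecStart=/usr/bin/suricata -c /etc/suricata/suricata-%i.yaml --pidfile /run/suricata-%i.pid --af-packet
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then systemctl enable --now suricata@node0 suricata@node1 would start one process per NUMA node.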

By the way, I see that my NICs are not each connected to their own NUMA node (both report node 0), so one of them first needs to be moved to a different PCI slot, I suppose.

20:15 ids1:/var/log/suricata
root# cat /sys/class/net/eno1/device/numa_node
0

20:20 ids1:/var/log/suricata
root# cat /sys/class/net/eno2/device/numa_node
0

20:20 ids1:/var/log/suricata
root# cat /sys/devices/system/node/node0/cpulist
0,2,4,6,8,10,12,14,16,18

This guide is also quite nice: SEPTun/SEPTun.pdf at master · pevma/SEPTun · GitHub

But it's a bit unclear how their hardware setup translates to that configuration. Do their NUMA nodes really run from 2-13, for example? From what I gather from their example configs and setup, they don't start multiple Suricata processes. Did you really mean running a second Suricata process?

Based on your output, the eno1 and eno2 ports of the NIC are both on NUMA node 0, so it would be enough to use the 0,2,4,6,8,10,12,14,16,18 set in your yaml file.
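Something along these lines should parse (assuming the entries have to be single integers or dash ranges, since the quoted comma-separated form is exactly what produces the error):

  cpu-affinity:
    - worker-cpu-set:
        cpu: [ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 ]   # the node 0 CPUs, one item each
        mode: "exclusive"
        prio:
          default: "high"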

Correct, I will change the hardware layout. But the question still remains: how can I make use of NUMA affinity? When I quote both NUMA lists I get an error, and if I leave them as they are I make no distinction between these 2 hardware islands and their locality benefits.

Ideally I have both network cards connected to their own NUMA node. How can I configure something like that in this yaml file? I got the response that 2 completely separate processes should be used, and for that I have to modify the systemd unit. This is not documented, but it may work. The shipped systemd unit is also not designed to make this easy; with an @ template unit a different config file could be passed per instance without creating separate unit files. So I'm a bit confused about the exact way forward. The network interfaces will be changed so that each gets its own NUMA node, but how should Suricata be configured to properly make use of these 2 NUMA nodes?

I’ve created a bugreport: Bug #7137: "invalid cpu range" when trying to use CPU affinity - Suricata - Open Information Security Foundation