NUMA pinning & adding a second NIC to the YAML file

Please include the following information with your help request:

  • Suricata version - 6.0.10
  • Operating system and/or Linux distribution - CentOS 7
  • How you installed Suricata - Source

Good afternoon,

I am trying to tune our Suricata nodes for performance and start ingesting on a second 10 Gb/s NIC to load balance and collect more traffic, for a total of 20 Gb/s into the Suricata node. I'm not sure how to properly add another NIC to the suricata.yaml file so it load balances with the current setup. I also want to make sure I'm utilizing the threads/pinning appropriately to improve received traffic. I have a good understanding of how to use Suricata based on alerts, but I still have a lot more to learn about the configuration and architecture of this open source tool. Any input would be greatly appreciated!

Current set up:

2x X710 10 Gb/s NICs; eno1 and eno2 are both in numa_node 0

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Stepping: 4
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 14080K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39

af-packet:
  - interface: eno1
    threads: 18
    cluster-id: 98
    use-mmap: yes
    tpacket-v3: yes
    ring-size: 204800
    block-size: 65536

cpu-affinity:
  - management-cpu-set:
      cpu: [ 0 ]  # include only these CPUs in affinity settings
      mode: "balanced"
      prio:
        default: "low"
  - receive-cpu-set:
      # cpu: [ 0 ]  # include only these CPUs in affinity settings
  - worker-cpu-set:
      cpu: [ 4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38 ]
      mode: "exclusive"
      # Use explicitly 3 threads and don't compute number by using
      # detect-thread-ratio variable:
      # threads: 3
      prio:
        # low: [ 0 ]
        # medium: [ "1-2" ]
        # high: [ 3 ]
        default: "high"

You can add a second interface in the af-packet section

af-packet:
  - interface: eno1
    threads: 18
    cluster-id: 98
    use-mmap: yes
    tpacket-v3: yes
    ring-size: 204800
    block-size: 65536
  - interface: eno2
    ...

The workers will be affined to the cores you've specified (the even cores 4-38), and hence to the NUMA node those cores correspond to.
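
As a rough sketch (not a drop-in config), the eno2 entry could simply mirror the eno1 settings. The main thing to watch is that each af-packet interface gets its own unique cluster-id (the 99 below is just an example value); the per-interface threads values are covered further down:

  - interface: eno2
    threads: 9            # see the note on thread totals further down
    cluster-id: 99        # must differ from eno1's cluster-id; 99 is an arbitrary choice
    use-mmap: yes
    tpacket-v3: yes
    ring-size: 204800
    block-size: 65536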

Thank you @Jeff_Lucovsky

Would this be appropriate pinning, and could we possibly use more threads as "workers"? It looks like we are not using NUMA node 1; I would like to give Suricata a bit more power.

thanks,
Joe

You can track NUMA statistics with numastat.

One thing about core usage: using a core and its hyperthreaded sibling may not be fully performant. You'd have to do some experimentation to see whether using hyperthreads provides a boost or not.

Thank you @Jeff_Lucovsky

For CPU pinning for node 1, would I add 5,7,9,11,13, etc. to the end of the existing brackets, or would I need to make a new bracketed list, i.e. a separate [ 5,7,9,11,13 ] alongside cpu: [ 4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38 ]?

You'd include the CPU cores within one list, so [0, ...., n] would contain the cores from both NUMA nodes.
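
For illustration, a minimal sketch of what that could look like, assuming you bring in the NUMA node 1 cores and leave CPUs 1 and 3 unpinned (mirroring how CPUs 0 and 2 are handled on node 0); which odd cores you reserve is your call:

cpu-affinity:
  - worker-cpu-set:
      # one flat list: even CPUs (NUMA node 0) followed by odd CPUs (NUMA node 1)
      cpu: [ 4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 ]
      mode: "exclusive"
      prio:
        default: "high"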

Does the number of threads have to correlate to the pinned CPUs?

I plan to add a thread/pin every few days after adding the new uplink: [2,3,4] one day, [2,3,4,5] the next, and so on.

I truly appreciate the input with this. Learning the configuration and tuning is a whole other beast!

The threads value in each interface entry of the af-packet section indicates how many worker threads are needed.

Ensure that the total of the threads values across interfaces equals the number of cores in the worker-cpu-set.
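
For example, a sketch that fits your current pinning of 18 worker cores would split the threads evenly across the two NICs; the 9/9 split is just one way to make the totals line up, and you'd adjust it if you later add the NUMA node 1 cores:

af-packet:
  - interface: eno1
    threads: 9        # 9 + 9 = 18, matching the 18 cores in worker-cpu-set
    cluster-id: 98
  - interface: eno2
    threads: 9
    cluster-id: 99    # unique per interface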

It turned out af-packet was set up in the yaml file, but there was a systemd unit file that was passing a specific device name. Once I removed the device name after --af-packet, it started to pull from both interfaces.

The command line overrides the config file when using --af-packet=<device>. Glad things are working.