Hello, I’m running Suricata 7.0 with the command line “suricata --dpdk -vvv”, and I see messages like “(W#17-0000…00.0) received packets 0” for some of the worker threads.
Please see the full messages below:
7/12/2022 -- 00:24:00 - <Info> - time elapsed 153.609s
7/12/2022 -- 00:24:06 - <Perf> - 1740309 flows processed
7/12/2022 -- 00:24:07 - <Perf> - 1790877 flows processed
7/12/2022 -- 00:24:07 - <Perf> - Total RX stats of 0000:3c:00.0: packets 67146697 bytes: 34690689730 missed: 45505062 errors: 0 nombufs: 0
7/12/2022 -- 00:24:07 - <Perf> - (W#01-0000..00.0) received packets 5756428
7/12/2022 -- 00:24:07 - <Perf> - (W#02-0000..00.0) received packets 5675240
7/12/2022 -- 00:24:07 - <Perf> - (W#03-0000..00.0) received packets 5707733
7/12/2022 -- 00:24:07 - <Perf> - (W#04-0000..00.0) received packets 5649414
7/12/2022 -- 00:24:07 - <Perf> - (W#05-0000..00.0) received packets 5693064
7/12/2022 -- 00:24:07 - <Perf> - (W#06-0000..00.0) received packets 5613512
7/12/2022 -- 00:24:07 - <Perf> - (W#07-0000..00.0) received packets 6096739
7/12/2022 -- 00:24:07 - <Perf> - (W#08-0000..00.0) received packets 5711998
7/12/2022 -- 00:24:07 - <Perf> - (W#09-0000..00.0) received packets 2978760
7/12/2022 -- 00:24:07 - <Perf> - (W#10-0000..00.0) received packets 2451203
7/12/2022 -- 00:24:07 - <Perf> - (W#11-0000..00.0) received packets 2443522
7/12/2022 -- 00:24:07 - <Perf> - (W#12-0000..00.0) received packets 2911791
7/12/2022 -- 00:24:07 - <Perf> - (W#13-0000..00.0) received packets 2428270
7/12/2022 -- 00:24:07 - <Perf> - (W#14-0000..00.0) received packets 2826189
7/12/2022 -- 00:24:07 - <Perf> - (W#15-0000..00.0) received packets 2748860
7/12/2022 -- 00:24:07 - <Perf> - (W#16-0000..00.0) received packets 2446266
7/12/2022 -- 00:24:07 - <Perf> - (W#17-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#18-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#19-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#20-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#21-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#22-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#23-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Perf> - (W#24-0000..00.0) received packets 0
7/12/2022 -- 00:24:07 - <Info> - Alerts: 0
7/12/2022 -- 00:24:08 - <Perf> - ippair memory usage: 414144 bytes, maximum: 16777216
7/12/2022 -- 00:24:08 - <Perf> - host memory usage: 398144 bytes, maximum: 33554432
7/12/2022 -- 00:24:08 - <Info> - cleaning up signature grouping structure... complete
7/12/2022 -- 00:24:08 - <Notice> - Stats for '0000:3c:00.0': pkts: 112651759, drop: 45505062 (40.39%), invalid chksum: 0
7/12/2022 -- 00:24:08 - <Info> - Closing device 0000:3c:00.0
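Just to spell out what I’m seeing in the stats above: only workers W#01–W#16 ever received packets, and their counts add up to roughly the interface’s Total RX packet count, while W#17–W#24 stayed at 0 for the whole run. A quick sanity check on the numbers (values copied from the log above):

```python
# Per-worker "received packets" values for W#01..W#16, copied from the log above.
worker_rx = [
    5756428, 5675240, 5707733, 5649414, 5693064, 5613512, 6096739, 5711998,
    2978760, 2451203, 2443522, 2911791, 2428270, 2826189, 2748860, 2446266,
]
total_rx = 67146697  # "Total RX stats of 0000:3c:00.0: packets 67146697"

print(sum(worker_rx))             # 67138989 -- W#17..W#24 contributed nothing
print(total_rx - sum(worker_rx))  # 7708 -- small remainder, presumably still queued at shutdown
```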
Is there anything wrong with my configuration?
Here is the lscpu output:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 16896K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt mba tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local ibpb ibrs stibp dtherm ida arat pln pts pku ospke spec_ctrl intel_stibp arch_capabilities
The dpdk configuration:
dpdk:
  eal-params:
    proc-type: primary

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: 0000:3c:00.0 # PCIe address of the NIC port
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: 24
      promisc: true # promiscuous mode - capture all packets
      multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      # rss-hash-functions: 0x0 # advanced configuration option, use only if you use untested NIC card and experience RSS warnings,
      # For `rss-hash-functions` use hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see output if you use -vvv option during Suri startup)
      # setting auto to rss_hf sets the default RSS hash functions (based on IP addresses)
      #rss-hash-functions: 0x6d5a
      # To approximately calculate required amount of space (in bytes) for interface's mempool: mempool-size * mtu
      # Make sure you have enough allocated hugepages.
      # The optimum size for the packet memory pool (in terms of memory usage) is power of two minus one: n = (2^q - 1)
      mempool-size: 262143 # The number of elements in the mbuf pool
      # Mempool cache size must be lower or equal to:
      # - RTE_MEMPOOL_CACHE_MAX_SIZE (by default 512) and
      # - "mempool-size / 1.5"
      # It is advised to choose cache_size to have "mempool-size modulo cache_size == 0".
      # If this is not the case, some elements will always stay in the pool and will never be used.
      # The cache can be disabled if the cache_size argument is set to 0, can be useful to avoid losing objects in cache
      # If the value is empty or set to "auto", Suricata will attempt to set cache size of the mempool to a value
      # that matches the previously mentioned recommendations
      mempool-cache-size: 511
      rx-descriptors: 8192
      tx-descriptors: 8192
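For what it’s worth, I sized the mempool following the comments above; here is a minimal sketch of the arithmetic as I understand it (the “mempool-size * mtu” approximation and the cache-size constraints come straight from the config comments, not from anything I measured):

```python
# Rough check of the mempool settings against the guidance in the config comments above.
mempool_size = 262143        # "power of two minus one": 2**18 - 1
mempool_cache_size = 511
mtu = 1500

assert mempool_size == 2**18 - 1
assert mempool_cache_size <= 512                  # RTE_MEMPOOL_CACHE_MAX_SIZE (default 512)
assert mempool_cache_size <= mempool_size / 1.5   # "mempool-size / 1.5" constraint
assert mempool_size % mempool_cache_size == 0     # "mempool-size modulo cache_size == 0"

# Approximate mempool memory footprint per the "mempool-size * mtu" comment:
print(mempool_size * mtu / 2**20)  # ~375 MiB of hugepage-backed mbuf memory
```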
And here is the threading configuration:
threading:
  set-cpu-affinity: yes
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to the all runmodes:
  # management-cpu-set is used for flow timeout handling, counters
  # worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  # receive-cpu-set is used for capture threads
  # verdict-cpu-set is used for IPS verdict threads
  #
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 12, 36 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 12 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "0-11", "24-35" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute number by using
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          low: [ ]
          medium: [ ]
          high: [ "0-11", "24-35" ]
          default: "high"
Any help would be greatly appreciated!
Thanks.