Operating system and/or Linux distribution: OpenEuler 24.03 LTS
How you installed Suricata: source
Configuration:
dpdk:
  eal-params:
    proc-type: primary
    allow: ["0000:07:00.0"]

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: "0000:07:00.0"
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: 1
      # interrupt-mode: false # true to switch to interrupt mode
      promisc: true # promiscuous mode - capture all packets
      #multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      vlan-strip-offload: false # if possible enable hardware vlan stripping
      # rss-hash-functions: 0x0 # advanced configuration option, use only if you use untested NIC card and experience RSS warnings,
      # For `rss-hash-functions` use hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see output if you use -vvv option during Suri startup)
threading:
  set-cpu-affinity: yes
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to the all runmodes:
  # management-cpu-set is used for flow timeout handling, counters
  # worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  # receive-cpu-set is used for capture threads
  # verdict-cpu-set is used for IPS verdict threads
  #
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "1" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute number by using
Network devices using DPDK-compatible driver
============================================
0000:07:00.0 'Virtio 1.0 network device 1041' drv=vfio-pci unused=
Network devices using kernel driver
===================================
0000:01:00.0 'Virtio 1.0 network device 1041' if=enp1s0 drv=virtio-pci unused=vfio-pci *Active*
No 'Baseband' devices detected
==============================
No 'Crypto' devices detected
============================
No 'DMA' devices detected
=========================
No 'Eventdev' devices detected
==============================
No 'Mempool' devices detected
=============================
No 'Compress' devices detected
==============================
Misc (rawdev) devices using kernel driver
=========================================
0000:04:00.0 'Virtio 1.0 block device 1042' drv=virtio-pci unused=vfio-pci
No 'Regex' devices detected
===========================
No 'ML' devices detected
========================
DPDK testpmd
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 52:54:00:5D:38:C1
Device name: 0000:07:00.0
Driver name: net_virtio
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: Unknown
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 64
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 32768
Min possible number of RXDs per queue: 32
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 32768
Min possible number of TXDs per queue: 32
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
guest_features: 0x110af8020
vtnet_hdr_size: 12
use_vec: rx-0 tx-0
use_inorder: rx-0 tx-0
intr_lsc: 1
max_mtu: 9698
max_rx_pkt_len: 1530
max_queue_pairs: 1
req_guest_features: 0x8000005f10ef8028
testpmd>
Suricata Logs
/usr/bin/suricata -c /etc/suricata/suricata.yaml --dpdk
Notice: caracal: This is Caracal version v1.0.4 running in SYSTEM mode [LogVersion:caracal.c:1167]
Info: cpu: CPUs/cores online: 4 [UtilCpuPrintSummary:util-cpu.c:149]
Info: caracal: Setting engine mode to IDS mode by default [PostConfLoadedSetup:caracal.c:2687]
Info: exception-policy: master exception-policy set to: auto [ExceptionPolicyMasterParse:util-exception-policy.c:201]
Info: conf: Running in live mode, activating unix socket [ConfUnixSocketIsEnable:util-conf.c:154]
Info: logopenfile: fast output device (regular) initialized: fast.log [SCConfLogOpenGeneric:util-logopenfile.c:616]
Info: logopenfile: eve-log output device (regular) initialized: eve.json [SCConfLogOpenGeneric:util-logopenfile.c:616]
Info: logopenfile: stats output device (regular) initialized: stats.log [SCConfLogOpenGeneric:util-logopenfile.c:616]
Warning: detect: No rule files match the pattern /var/lib/suricata/rules/suricata.rules [ProcessSigFiles:detect-engine-loader.c:239]
Warning: detect: 1 rule files specified, but no rules were loaded! [SigLoadSignatures:detect-engine-loader.c:358]
Info: threshold-config: Threshold config parsed: 0 rule(s) found [SCThresholdConfParseFile:util-threshold-config.c:1015]
Info: detect: 0 signatures processed. 0 are IP-only rules, 0 are inspecting packet payload, 0 inspect application layer, 0 are decoder event only [SigPrepareStage1:detect-engine-build.c:1843]
TELEMETRY: No legacy callbacks, legacy socket not created
Error: dpdk: 0000:07:00.0: interface not found: No such device [ConfigSetIface:runmode-dpdk.c:356]
Hi,
Thanks for reaching out.
One thing that I thought of – is it possible that you have multiple DPDK versions installed? How did you install DPDK?
You can compare whether Suricata uses the same DPDK libs as dpdk-testpmd with ldd. Be sure to run it as the same user you run testpmd with; the exact paths to the binaries can be found using which.
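For example, something like this (paths resolved with which; the binary names are the usual defaults and may differ on your system):

```shell
# List the DPDK (librte_*) shared libraries a binary loads; if the two
# lists differ, the binaries are linked against different DPDK installs.
list_dpdk_libs() {
    ldd "$1" 2>/dev/null | awk '/librte_/ {print $1}' | sort
}

list_dpdk_libs "$(which suricata)"     > /tmp/suricata_rte.txt
list_dpdk_libs "$(which dpdk-testpmd)" > /tmp/testpmd_rte.txt

# No output means both use the same set of DPDK libraries.
diff /tmp/suricata_rte.txt /tmp/testpmd_rte.txt
```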
Thanks for your reply. This is a freshly installed virtual machine, so I don’t think multiple DPDK versions are installed. I used dnf install -y dpdk-devel dpdk-tools to install DPDK, and testpmd appears to be statically linked.
EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
TELEMETRY: No legacy callbacks, legacy socket not created
Number of available ports: 0
I have another question: I found that rte_pktmbuf_pool_create fails to create the mempool. This is Suricata’s log:
Error: dpdk: 0000:07:00.0: rte_pktmbuf_pool_create failed with code 2 (mempool: mempool_0000:07:00.0): No such file or directory [DeviceConfigureQueues:runmode-dpdk.c:1275]
Error: dpdk: 0000:07:00.0: failed to configure [ParseDpdkConfigAndConfigureDevice:runmode-dpdk.c:1620]
Then I installed the debug info with dnf install dpdk-debuginfo and debugged the program. I found the cause: rte_mempool_ops_table.num_ops is 0.
(gdb) s
175 for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
(gdb) s
rte_mempool_create_empty (name=name@entry=0x7fffffffd540 "mempool_0000:07:00.0", n=2048, elt_size=elt_size@entry=2304, cache_size=512, private_data_size=64, private_data_size@entry=8, socket_id=0, flags=64) at ../lib/mempool/rte_mempool.c:930
930 if (ret)
(gdb) p rte_mempool_ops_table.num_ops
$1 = 0
(gdb) n
959 rte_mcfg_mempool_write_unlock();
(gdb) n
960 rte_free(te);
(gdb) n
961 rte_mempool_free(mp);
(gdb) n
962 return NULL;
(gdb) n
833 return NULL;
(gdb) n
rte_pktmbuf_pool_create_by_ops (name=name@entry=0x7fffffffd540 "mempool_0000:07:00.0", n=<optimized out>, cache_size=<optimized out>, priv_size=priv_size@entry=0, data_room_size=data_room_size@entry=2176, socket_id=<optimized out>, ops_name=0x0) at ../lib/mbuf/rte_mbuf.c:243
243 if (mp == NULL)
(gdb) n
261 return NULL;
(gdb) n
266 return mp;
Then I read DPDK’s code and found that the mempool ops are registered by librte_mempool_ring.so, which is not loaded by Suricata, so I used the following command to run Suricata, and it worked!
But I don’t think this is a proper solution; can you give me some advice?
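For reference, EAL itself can load an external driver library at startup with its -d option. If Suricata’s eal-params section passes that option through, a config along these lines might work — an untested sketch; whether eal-params accepts this key, and the library path, are assumptions for this distro:

```yaml
dpdk:
  eal-params:
    proc-type: primary
    # -d is EAL's "load external driver" option; the path below is an
    # example location for OpenEuler/Fedora-style layouts (assumption).
    d: /usr/lib64/librte_mempool_ring.so
    allow: ["0000:07:00.0"]
```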