DPDK: device not found in qemu virtual machine

  • Suricata version: 8.0.0-dev
  • Operating system and/or Linux distribution: OpenEuler 24.03 LTS
  • How you installed Suricata: source

Configuration:

dpdk:
  eal-params:
    proc-type: primary
    allow: ["0000:07:00.0"]
  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: "0000:07:00.0"
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: 1
      # interrupt-mode: false # true to switch to interrupt mode
      promisc: true # promiscuous mode - capture all packets
      #multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      vlan-strip-offload: false # if possible enable hardware vlan stripping
      # rss-hash-functions: 0x0 # advanced configuration option, use only if you use untested NIC card and experience RSS warnings,
      # For `rss-hash-functions` use hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see output if you use -vvv option during Suri startup)
threading:
  set-cpu-affinity: yes
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to all runmodes:
  # management-cpu-set is used for flow timeout handling, counters
  # worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  # receive-cpu-set is used for capture threads
  # verdict-cpu-set is used for IPS verdict threads
  #
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "1" ]
        mode: "exclusive"
        # Use explicitly 3 threads and don't compute number by using
        # detect-thread-ratio variable:
        # threads: 3

DPDK Version

pkg-config --modversion libdpdk
23.11.0

DPDK Configuration

modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
echo 1024 |  tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
dpdk-devbind -b vfio-pci 07:00.0

DPDK dev-bind

Network devices using DPDK-compatible driver
============================================
0000:07:00.0 'Virtio 1.0 network device 1041' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:01:00.0 'Virtio 1.0 network device 1041' if=enp1s0 drv=virtio-pci unused=vfio-pci *Active*

No 'Baseband' devices detected
==============================

No 'Crypto' devices detected
============================

No 'DMA' devices detected
=========================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

Misc (rawdev) devices using kernel driver
=========================================
0000:04:00.0 'Virtio 1.0 block device 1042' drv=virtio-pci unused=vfio-pci

No 'Regex' devices detected
===========================

No 'ML' devices detected
========================

DPDK testpmd

show port info 0

********************* Infos for port 0  *********************
MAC address: 52:54:00:5D:38:C1
Device name: 0000:07:00.0
Driver name: net_virtio
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: Unknown
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 64
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 32768
Min possible number of RXDs per queue: 32
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 32768
Min possible number of TXDs per queue: 32
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
guest_features: 0x110af8020
vtnet_hdr_size: 12
use_vec: rx-0 tx-0
use_inorder: rx-0 tx-0
intr_lsc: 1
max_mtu: 9698
max_rx_pkt_len: 1530
max_queue_pairs: 1
req_guest_features: 0x8000005f10ef8028
testpmd>

Suricata Logs

/usr/bin/suricata -c /etc/suricata/suricata.yaml --dpdk
Notice: caracal: This is Caracal version v1.0.4 running in SYSTEM mode [LogVersion:caracal.c:1167]
Info: cpu: CPUs/cores online: 4 [UtilCpuPrintSummary:util-cpu.c:149]
Info: caracal: Setting engine mode to IDS mode by default [PostConfLoadedSetup:caracal.c:2687]
Info: exception-policy: master exception-policy set to: auto [ExceptionPolicyMasterParse:util-exception-policy.c:201]
Info: conf: Running in live mode, activating unix socket [ConfUnixSocketIsEnable:util-conf.c:154]
Info: logopenfile: fast output device (regular) initialized: fast.log [SCConfLogOpenGeneric:util-logopenfile.c:616]
Info: logopenfile: eve-log output device (regular) initialized: eve.json [SCConfLogOpenGeneric:util-logopenfile.c:616]
Info: logopenfile: stats output device (regular) initialized: stats.log [SCConfLogOpenGeneric:util-logopenfile.c:616]
Warning: detect: No rule files match the pattern /var/lib/suricata/rules/suricata.rules [ProcessSigFiles:detect-engine-loader.c:239]
Warning: detect: 1 rule files specified, but no rules were loaded! [SigLoadSignatures:detect-engine-loader.c:358]
Info: threshold-config: Threshold config parsed: 0 rule(s) found [SCThresholdConfParseFile:util-threshold-config.c:1015]
Info: detect: 0 signatures processed. 0 are IP-only rules, 0 are inspecting packet payload, 0 inspect application layer, 0 are decoder event only [SigPrepareStage1:detect-engine-build.c:1843]
TELEMETRY: No legacy callbacks, legacy socket not created
Error: dpdk: 0000:07:00.0: interface not found: No such device [ConfigSetIface:runmode-dpdk.c:356]

Hi,
Thanks for reaching out.
One thing I thought of: is it possible that you have multiple DPDK versions installed? How did you install DPDK?

You can compare whether Suricata uses the same libraries as dpdk-testpmd with ldd.
Be sure to run it as the same user you run testpmd with. The exact paths to the binaries can be found with which.
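As a sketch, the comparison could look like this (assuming both binaries are on PATH; the `list_rte_libs` helper is just an illustrative name, not an existing tool):

```shell
# Print the librte_* shared objects a binary is dynamically linked against.
list_rte_libs() {
    ldd "$1" | awk '/librte_/ {print $1}' | sort
}

# Compare the DPDK libraries each binary resolves; any difference hints
# at mixed DPDK installations.
diff <(list_rte_libs "$(which dpdk-testpmd)") <(list_rte_libs "$(which suricata)")
```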

btw: +1 for the report quality


Thanks for your reply. This is a freshly installed virtual machine, so I don't think I have multiple DPDK versions. I installed DPDK with dnf install -y dpdk-devel dpdk-tools, and DPDK appears to be statically linked into testpmd (no librte_* libraries show up in ldd).

DPDK testpmd

ldd /usr/bin/dpdk-testpmd
        linux-vdso.so.1 (0x00007ffe3ddec000)
        libm.so.6 => /usr/lib64/libm.so.6 (0x00007fdcb2f24000)
        libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007fdcba777000)
        libpcap.so.1 => /usr/lib64/libpcap.so.1 (0x00007fdcba72d000)
        libmlx5.so.1 => /usr/lib64/libmlx5.so.1 (0x00007fdcba6b7000)
        libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007fdcb2f03000)
        libmana.so.1 => /usr/lib64/libmana.so.1 (0x00007fdcb2efb000)
        libmlx4.so.1 => /usr/lib64/libmlx4.so.1 (0x00007fdcb2eec000)
        libxscale.so.1 => /usr/lib64/libxscale.so.1 (0x00007fdcb2ed2000)
        libc.so.6 => /usr/lib64/libc.so.6 (0x00007fdcb2cfa000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fdcba796000)
        libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007fdcb2c6a000)
        libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007fdcb2c47000)

Suricata

ldd /usr/bin/suricata
        linux-vdso.so.1 (0x00007ffd3ebef000)
        libhtp.so.2 => /usr/lib64/libhtp.so.2 (0x00007fd8c6d6f000)
        libm.so.6 => /usr/lib64/libm.so.6 (0x00007fd8c6c93000)
        libxdp.so.1 => /usr/lib64/libxdp.so.1 (0x00007fd8c6c7d000)
        libbpf.so.1 => /usr/lib64/libbpf.so.1 (0x00007fd8c6c23000)
        libmagic.so.1 => /usr/lib64/libmagic.so.1 (0x00007fd8c6bf8000)
        libcap-ng.so.0 => /usr/lib64/libcap-ng.so.0 (0x00007fd8c6bf0000)
        libjansson.so.4 => /usr/lib64/libjansson.so.4 (0x00007fd8c6bde000)
        libyaml-0.so.2 => /usr/lib64/libyaml-0.so.2 (0x00007fd8c6bbd000)
        libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007fd8c6b21000)
        libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007fd8c6b13000)
        librte_ethdev.so.24 => /usr/lib64/librte_ethdev.so.24 (0x00007fd8c69e9000)
        librte_mbuf.so.24 => /usr/lib64/librte_mbuf.so.24 (0x00007fd8c69d7000)
        librte_mempool.so.24 => /usr/lib64/librte_mempool.so.24 (0x00007fd8c69c9000)
        librte_eal.so.24 => /usr/lib64/librte_eal.so.24 (0x00007fd8c6200000)
        librte_log.so.24 => /usr/lib64/librte_log.so.24 (0x00007fd8c69c3000)
        librte_net_bond.so.24 => /usr/lib64/librte_net_bond.so.24 (0x00007fd8c69a3000)
        libpcap.so.1 => /usr/lib64/libpcap.so.1 (0x00007fd8c6959000)
        librdkafka.so.1 => /usr/lib64/librdkafka.so.1 (0x00007fd8c6026000)
        libz.so.1 => /usr/lib64/libz.so.1 (0x00007fd8c693d000)
        libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007fd8c691d000)
        libc.so.6 => /usr/lib64/libc.so.6 (0x00007fd8c5e4e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fd8c6daa000)
        libelf.so.1 => /usr/lib64/libelf.so.1 (0x00007fd8c6902000)
        librte_kvargs.so.24 => /usr/lib64/librte_kvargs.so.24 (0x00007fd8c68fd000)
        librte_telemetry.so.24 => /usr/lib64/librte_telemetry.so.24 (0x00007fd8c68f0000)
        librte_net.so.24 => /usr/lib64/librte_net.so.24 (0x00007fd8c68e7000)
        librte_ring.so.24 => /usr/lib64/librte_ring.so.24 (0x00007fd8c68e1000)
        librte_meter.so.24 => /usr/lib64/librte_meter.so.24 (0x00007fd8c68dc000)
        librte_bus_pci.so.24 => /usr/lib64/librte_bus_pci.so.24 (0x00007fd8c68cc000)
        librte_pci.so.24 => /usr/lib64/librte_pci.so.24 (0x00007fd8c68c5000)
        librte_bus_vdev.so.24 => /usr/lib64/librte_bus_vdev.so.24 (0x00007fd8c68be000)
        librte_sched.so.24 => /usr/lib64/librte_sched.so.24 (0x00007fd8c68b1000)
        librte_ip_frag.so.24 => /usr/lib64/librte_ip_frag.so.24 (0x00007fd8c68a7000)
        librte_hash.so.24 => /usr/lib64/librte_hash.so.24 (0x00007fd8c6893000)
        librte_rcu.so.24 => /usr/lib64/librte_rcu.so.24 (0x00007fd8c688c000)
        liblz4.so.1 => /usr/lib64/liblz4.so.1 (0x00007fd8c6864000)
        libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007fd8c5d5d000)
        libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007fd8c5d3e000)
        libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007fd8c5c9a000)
        libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007fd8c5800000)
        liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007fd8c5c6a000)
        libbz2.so.1 => /usr/lib64/libbz2.so.1 (0x00007fd8c684f000)
        libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007fd8c5c59000)
        libcrypt.so.1 => /usr/lib64/libcrypt.so.1 (0x00007fd8c57c1000)

I wrote a small DPDK application myself, and it also cannot find the port. I'm confused; should I report this to the DPDK project?

dpdk_port_info.c

#include <stdio.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv) {
    int ret;
    uint16_t port_id;
    uint16_t nb_ports;

    // Initialize the EAL
    ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        rte_panic("Failed to initialize EAL\n");
    }

    argc -= ret;
    argv += ret;

    // Get the number of available ports
    nb_ports = rte_eth_dev_count_avail();
    printf("Number of available ports: %u\n", nb_ports);

    // Enumerate each port
    RTE_ETH_FOREACH_DEV(port_id) {
        struct rte_eth_dev_info dev_info;
        rte_eth_dev_info_get(port_id, &dev_info);
        printf("Port %u: driver name: %s\n", port_id, dev_info.driver_name);
    }

    return 0;
}

Compile

gcc dpdk_port_info.c -o dpdk_port_info -I/usr/include/dpdk -L/usr/lib64 -Wl,-rpath=/usr/lib64 -lrte_eal -lrte_ethdev -lrte_mempool -lrte_ring -lrte_mbuf
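As an alternative sketch for the compile step: linking through pkg-config pulls in the full set of DPDK link flags (including `--no-as-needed` and the PMD libraries), which can make drivers such as virtio visible to the application without loading them at runtime:

```shell
# Build using the flags provided by the libdpdk pkg-config file.
gcc dpdk_port_info.c -o dpdk_port_info $(pkg-config --cflags --libs libdpdk)
```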

Output

EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
TELEMETRY: No legacy callbacks, legacy socket not created
Number of available ports: 0

I have figured it out, thank you.

What was the solution to your problem?

I added the `d` option to eal-params to load the driver:

dpdk:
  eal-params:
    proc-type: primary
    allow: ["0000:07:00.0"]
    d: "/usr/lib64/librte_net_virtio.so"

I have another question: I found that rte_pktmbuf_pool_create fails. This is Suricata's log:

Error: dpdk: 0000:07:00.0: rte_pktmbuf_pool_create failed with code 2 (mempool: mempool_0000:07:00.0): No such file or directory [DeviceConfigureQueues:runmode-dpdk.c:1275]
Error: dpdk: 0000:07:00.0: failed to configure [ParseDpdkConfigAndConfigureDevice:runmode-dpdk.c:1620]

Then I installed debug symbols with dnf install dpdk-debuginfo and debugged the program. I found the cause: rte_mempool_ops_table.num_ops is zero.

(gdb) s
175             for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
(gdb) s
rte_mempool_create_empty (name=name@entry=0x7fffffffd540 "mempool_0000:07:00.0", n=2048, elt_size=elt_size@entry=2304, cache_size=512, private_data_size=64, private_data_size@entry=8, socket_id=0, flags=64) at ../lib/mempool/rte_mempool.c:930
930             if (ret)
(gdb) p rte_mempool_ops_table.num_ops
$1 = 0
(gdb) n
959             rte_mcfg_mempool_write_unlock();
(gdb) n
960             rte_free(te);
(gdb) n
961             rte_mempool_free(mp);
(gdb) n
962             return NULL;
(gdb) n
833                     return NULL;
(gdb) n
rte_pktmbuf_pool_create_by_ops (name=name@entry=0x7fffffffd540 "mempool_0000:07:00.0", n=<optimized out>, cache_size=<optimized out>, priv_size=priv_size@entry=0, data_room_size=data_room_size@entry=2176, socket_id=<optimized out>, ops_name=0x0) at ../lib/mbuf/rte_mbuf.c:243
243             if (mp == NULL)
(gdb) n
261                     return NULL;
(gdb) n
266             return mp;

Then I read DPDK's code and found that the mempool ops are registered by librte_mempool_ring.so, which Suricata does not load. So I used the following command to run Suricata, and it worked!
But I don't think this is a proper solution; can you give me some advice?

LD_PRELOAD=/usr/lib64/librte_mempool_ring.so /usr/bin/suricata -c /etc/suricata/suricata.yaml --dpdk
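A less intrusive sketch might be to load the mempool driver the same way as the net driver, through EAL's `-d` option. Note the exact YAML shape for repeating `d` in Suricata's eal-params is an assumption here; `-d` can also be pointed at a directory containing all PMDs:

```yaml
dpdk:
  eal-params:
    proc-type: primary
    allow: ["0000:07:00.0"]
    # Assumption: a list makes Suricata emit -d once per entry; alternatively
    # point -d at the distribution's PMD directory to load every driver.
    d: ["/usr/lib64/librte_net_virtio.so", "/usr/lib64/librte_mempool_ring.so"]
```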

What is this change about? Did you make any other changes to Suricata besides changing the name and version?

For internal management purposes, I only changed the code related to logging and added a Kafka output.