E: dpdk: Interface "0000:19:00.1": No such device

Suricata 7.0.4, DPDK 21.11.3, CentOS 8.5.2111

Hi, I have a problem with Suricata. The machine has both an Intel and a Mellanox NIC, but I only use the Intel one. DPDK's own testpmd works without problems, but Suricata fails when it hands the port over to DPDK. Please help.

[root@ids01.kj01.zzjg.dxm-int.com /usr/bin]# suricata -c /usr/local/etc/suricata/suricata.yaml --user=root --dpdk
i: suricata: This is Suricata version 7.0.4 RELEASE running in SYSTEM mode
W: detect: No rule files match the pattern /usr/local/var/lib/suricata/rules/suricata.rules
W: detect: 1 rule files specified, but no rules were loaded!
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Invalid NUMA socket, default to 0
mlx5_pci: No kernel/verbs support for VF LAG bonding found.
common_mlx5: Failed to load driver = mlx5_pci.

EAL: Requested device 0000:31:00.0 cannot be used
EAL: Invalid NUMA socket, default to 0
mlx5_pci: No kernel/verbs support for VF LAG bonding found.
common_mlx5: Failed to load driver = mlx5_pci.

EAL: Requested device 0000:31:00.1 cannot be used
E: dpdk: Interface "0000:19:00.1": No such device
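In case it is relevant: as far as I understand, this error usually means the EAL could not claim the port, e.g. because it is bound to a kernel driver instead of a DPDK-capable one. A generic way to compare the driver binding between the two runs (just a sketch, nothing here is specific to my box):

```shell
# List the kernel driver currently bound to each PCI device; for Suricata's
# --dpdk mode the capture port (0000:19:00.1 here) would normally need a
# DPDK-capable driver such as vfio-pci rather than the kernel's i40e.
for dev in /sys/bus/pci/devices/*; do
  name=$(basename "$dev")
  if [ -e "$dev/driver" ]; then
    drv=$(basename "$(readlink -f "$dev/driver")")
  else
    drv="none"
  fi
  echo "$name -> $drv"
done
```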

Meanwhile, DPDK itself works fine:

[root@ids01.kj01.zzjg.dxm-int.com /home/dpdk-stable-21.11.3/build/app]# ./dpdk-testpmd -l 0,2 -a 0000:19:00.1 -- -i --forward-mode=rxonly
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: net_i40e (8086:dda) device: 0000:19:00.1 (socket 0)
Interactive-mode selected
Set rxonly packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 9C:C2:C4:31:90:88
Checking link statuses...
Done
testpmd>
Port 0: link state change event
start
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1875888 RX-dropped: 51 RX-total: 1875939
TX-packets: 0 TX-dropped: 0 TX-total: 0
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1875888 RX-dropped: 51 RX-total: 1875939
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Port 0 is closed
Done

Bye...
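A side note on the "No available 1048576 kB hugepages" line that shows up in both runs: as far as I know it only means no 1 GiB pages are reserved, which is harmless as long as 2 MiB pages are available. A quick generic check (again, nothing specific to my setup):

```shell
# Print the number of reserved hugepages per supported page size, falling
# back to /proc/meminfo if the sysfs hierarchy is not available.
grep -H . /sys/kernel/mm/hugepages/hugepages-*/nr_hugepages 2>/dev/null \
  || grep -i '^HugePages_Total' /proc/meminfo
```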

Yes, the dpdk section of my suricata.yaml is as follows:

dpdk:
  eal-params:
    proc-type: primary

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: 0000:19:00.1 # PCIe address of the NIC port
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # In IPS mode it is required to specify the number of threads and
      # the numbers on both interfaces must match
      threads: auto
      promisc: true # promiscuous mode - capture all packets
      multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
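One more thing I am unsure about: the EAL also probes the Mellanox ports (0000:31:00.0/1) and fails on them before the Intel error appears. If Suricata forwards eal-params keys straight through as EAL options (my assumption from reading the docs), restricting probing to the Intel port might look like this:

```yaml
dpdk:
  eal-params:
    proc-type: primary
    # Assumption: this is forwarded as EAL's --allow option, so only the
    # Intel port is probed and the mlx5 ports are skipped entirely.
    allow: 0000:19:00.1
```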