Suricata with Netronome/Agilio: error "dpdk: Interface: No such device"

Hey All,

I built Suricata 7.0.10 from source on a Debian 12 machine using DPDK v21.11.7 and a Netronome Agilio CX 2x40GbE SmartNIC. When I try to start Suricata with DPDK I get the error:

$ sudo -i suricata -c /etc/suricata/suricata.yaml --dpdk
i: suricata: This is Suricata version 7.0.10 RELEASE running in SYSTEM mode
EAL: No available 1048576 kB hugepages reported
NFP HWINFO header: 0x48490200
TELEMETRY: No legacy callbacks, legacy socket not created
E: dpdk: Interface "0000:c1:00.0": No such device

The DPDK section of my suricata.yaml:

dpdk:
  eal-params:
    proc-type: primary
  interfaces:
    - interface: 0000:c1:00.0 
      threads: auto
      interrupt-mode: false
      promisc: true
      multicast: true 
      checksum-checks: true 
      checksum-checks-offload: true 
      mtu: 1500 
      queues: 4
      mempool-size: auto
      mempool-cache-size: auto
      rx-descriptors: auto
      tx-descriptors: auto

Suricata build info

$ sudo -i suricata --build-info
This is Suricata version 7.0.10 RELEASE
Features: PCAP_SET_BUFF AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HTTP2_DECOMPRESSION HAVE_LUA HAVE_JA3 HAVE_JA4 HAVE_LIBJANSSON TLS TLS_C11 MAGIC RUST POPCNT64 
SIMD support: SSE_4_2 SSE_4_1 SSE_3 SSE_2 
Atomic intrinsics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 12.2.0, C version 201112
compiled with _FORTIFY_SOURCE=0
L1 cache line size (CLS)=64
thread local storage method: _Thread_local
compiled with LibHTP v0.5.50, linked against LibHTP v0.5.50

Suricata Configuration:
  AF_PACKET support:                       yes
  AF_XDP support:                          yes
  DPDK support:                            yes
  eBPF support:                            yes
  XDP support:                             yes
  PF_RING support:                         no
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no 
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no

  Unix socket enabled:                     yes
  Detection enabled:                       yes

  Libmagic support:                        yes
  libjansson support:                      yes
  hiredis support:                         no
  hiredis async with libevent:             no
  PCRE jit:                                yes
  LUA support:                             yes
  libluajit:                               no
  GeoIP2 support:                          yes
  JA3 support:                             yes
  JA4 support:                             yes
  Non-bundled htp:                         no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          yes
  Landlock support:                        yes

  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /usr/bin/rustc
  Rust compiler version:                   rustc 1.63.0
  Cargo path:                              /usr/bin/cargo
  Cargo version:                           cargo 1.65.0

  Python support:                          yes
  Python path:                             /usr/bin/python3
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes

  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Profiling rules enabled:                 no

  Plugin support (experimental):           yes
  DPDK Bond PMD:                           yes

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Fuzz targets enabled:                    no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /srv/var/log/suricata/

  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /srv/var
  --datarootdir                            /usr/share

  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / g++ (real)
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no
  Position Independent Executable enabled: no
  CFLAGS                                   -g -O2 -fPIC -std=c11 -march=native -I/usr/local/include -include rte_config.h -march=corei7 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include  -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS                               -I/usr/include
  SECCFLAGS                                

All tests I have run against the card with DPDK outside of Suricata indicate no issues:

$ sudo -i dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:c1:00.0 'Device 4000' drv=vfio-pci unused=nfp
$ sudo -i dpdk-testpmd -l 0-3 -n 4 -- --port-topology=chained --nb-cores=2 --auto-start
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_nfp_pf (19ee:4000) device: 0000:c1:00.0 (socket 1)
NFP HWINFO header: 0x48490200
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)

Port 0: link state change event

Port 1: link state change event
Port 0: 00:15:4D:13:3B:08
Configuring Port 1 (socket 1)

Port 0: link state change event

Port 1: link state change event
Port 1: 00:15:4D:13:3B:0C
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=32
      RX threshold registers: pthresh=8 hthresh=8  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=32
      RX threshold registers: pthresh=8 hthresh=8  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
Press enter to exit

Port 0: link state change event

Port 1: link state change event

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 3434308        TX-dropped: 0             TX-total: 3434308
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 3433897        RX-dropped: 0             RX-total: 3433897
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 3433897        RX-dropped: 0             RX-total: 3433897
  TX-packets: 3434308        TX-dropped: 0             TX-total: 3434308
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Suricata is also finding and linking the correct DPDK libraries:

$ sudo ldd /usr/bin/suricata | grep librte
	librte_ethdev.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_ethdev.so.22 (0x00007fac5230c000)
	librte_mbuf.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_mbuf.so.22 (0x00007fac52dc7000)
	librte_mempool.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_mempool.so.22 (0x00007fac52dbb000)
	librte_eal.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_eal.so.22 (0x00007fac52201000)
	librte_net_bond.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_net_bond.so.22 (0x00007fac52d9e000)
	librte_kvargs.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_kvargs.so.22 (0x00007fac51f0a000)
	librte_telemetry.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_telemetry.so.22 (0x00007fac51f01000)
	librte_net.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_net.so.22 (0x00007fac51ef5000)
	librte_ring.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_ring.so.22 (0x00007fac51ef0000)
	librte_meter.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_meter.so.22 (0x00007fac51eeb000)
	librte_bus_pci.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_bus_pci.so.22 (0x00007fac51edc000)
	librte_pci.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_pci.so.22 (0x00007fac51ed7000)
	librte_bus_vdev.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_bus_vdev.so.22 (0x00007fac51ece000)
	librte_sched.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_sched.so.22 (0x00007fac51ec0000)
	librte_ip_frag.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_ip_frag.so.22 (0x00007fac51eb5000)
	librte_hash.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_hash.so.22 (0x00007fac51e9d000)
	librte_rcu.so.22 => /usr/local/lib/x86_64-linux-gnu/librte_rcu.so.22 (0x00007fac51e96000)

Any advice would be very much appreciated; hopefully someone else has run into and solved this issue. Please let me know if there is any additional detail I can provide.

Hello tleif,

Sorry for the delayed response, I must have overlooked this one. Thank you for the detailed report, though.
I haven’t worked with Netronome, but:

  • Sometimes NIC vendors create multiple ports from a single PCIe address, and that seems to be the case here: testpmd sees two ports behind the c1:00.0 PCIe address. Run testpmd in interactive mode and enter show port info all; there you should see the port names, which I assume will be something like 0000:c1:00.0_eth0 (see the sketch after this list).
  • A second, less probable, tip: make sure you only have one DPDK installation. You can also run ldd against testpmd to verify that the library paths match there.
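
For reference, a minimal interactive session would look roughly like this (a sketch based on your earlier testpmd invocation; the -i flag opens the interactive prompt). show port info all prints per-port details including the port name, and show port summary all, if your testpmd build includes it, gives a compact one-line-per-port view:

$ sudo -i dpdk-testpmd -l 0-3 -n 4 -- -i
testpmd> show port info all
testpmd> show port summary all
testpmd> quit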

Thank you! That was the issue: the port name was 0000:c1:00.0_port1. I’ve been working with Netronome support for a few weeks and that never came up. Now I get a new error I’m not sure how to resolve:

i: suricata: This is Suricata version 7.0.10 RELEASE running in SYSTEM mode
EAL: No available 1048576 kB hugepages reported
NFP HWINFO header: 0x48490200
TELEMETRY: No legacy callbacks, legacy socket not created
W: dpdk: 0000:c1:00.0_port1: modified RSS hash function based on hardware support: requested:0xa38c, configured:0x104
E: dpdk: 0000:c1:00.0_port1: Allmulticast setting of port (1) can not be configured. Set it to false
mempool/dpaa2: Not a valid dpaa2 buffer pool
E: dpdk: 0000:c1:00.0_port1: rte_pktmbuf_pool_create failed with code 36 (mempool: mempool_0000:c1:00.0_port1) - File name too long
E: dpdk: 0000:c1:00.0_port1: failed to configure
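
For completeness, the only change in my suricata.yaml was the interface name; here is a sketch of the updated stanza, with everything else unchanged from my first post:

dpdk:
  eal-params:
    proc-type: primary
  interfaces:
    - interface: 0000:c1:00.0_port1   # DPDK port name reported by testpmd, not the bare PCIe address
      threads: auto
      # remaining options as in the config posted above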

Any ideas would be very much appreciated.

Since you compiled Suricata yourself, can you try editing line 1216 in src/runmode-dpdk.c from
snprintf(mempool_name, 64, "mempool_%.20s", iconf->iface);

to, e.g.

snprintf(mempool_name, 64, "mp_%.20s", iconf->iface);

DPDK limits mempool names to RTE_MEMPOOL_NAMESIZE bytes, and "mempool_0000:c1:00.0_port1" just exceeds that limit, which is why rte_pktmbuf_pool_create fails with code 36 (ENAMETOOLONG); the shorter "mp_" prefix keeps the generated name under the limit. I saw that in Suricata 8 the name is already shorter, so the problem should not happen there. I can also backport the change.
If possible, I encourage you to also try Suricata 8 :)
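
If you stay on 7.0.10, the rebuild after that one-line edit is just the usual cycle from your existing source tree (a sketch; the directory name is whatever you built from, and your original configure flags still apply):

cd suricata-7.0.10
make -j$(nproc)
sudo make install
sudo -i suricata -c /etc/suricata/suricata.yaml --dpdk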

Built Suricata 8 and we have lift-off. Thank you very, very much for the help!