Getting "*** buffer overflow detected ***: terminated" from suricata with --dpdk secondary process

I’m trying to have a DPDK primary process that sends packets over a DPDK ring to the Suricata process.
I thought I once read in the docs that this isn’t supported yet, but I can’t find it now.
Does Suricata support being a DPDK secondary process?

I’m using

# suricata -V
This is Suricata version 7.0.7 RELEASE

I have Suricata compiled with DPDK support:

# suricata --build-info
This is Suricata version 7.0.7 RELEASE
Features: NFQ PCAP_SET_BUFF AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HTTP2_DECOMPRESSION HAVE_LUA HAVE_JA3 HAVE_JA4 HAVE_LIBJANSSON TLS TLS_C11 MAGIC RUST POPCNT64 
SIMD support: SSE_4_2 SSE_4_1 SSE_3 SSE_2 
Atomic intrinsics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 11.4.1 20231218 (Red Hat 11.4.1-3), C version 201112
compiled with _FORTIFY_SOURCE=0
L1 cache line size (CLS)=64
thread local storage method: _Thread_local
compiled with LibHTP v0.5.49, linked against LibHTP v0.5.49

Suricata Configuration:
  AF_PACKET support:                       yes
  AF_XDP support:                          no
  DPDK support:                            yes
  eBPF support:                            yes
  XDP support:                             yes
  PF_RING support:                         no
  NFQueue support:                         yes
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no 
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no

  Unix socket enabled:                     yes
  Detection enabled:                       yes

  Libmagic support:                        yes
  libjansson support:                      yes
  hiredis support:                         yes
  hiredis async with libevent:             yes
  PCRE jit:                                yes
  LUA support:                             yes
  libluajit:                               no
  GeoIP2 support:                          yes
  JA3 support:                             yes
  JA4 support:                             yes
  Non-bundled htp:                         no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          yes
  Landlock support:                        yes

  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /usr/bin/rustc
  Rust compiler version:                   rustc 1.75.0 (82e1608df 2023-12-21) (Red Hat 1.75.0-1.el9)
  Cargo path:                              /usr/bin/cargo
  Cargo version:                           cargo 1.75.0

  Python support:                          yes
  Python path:                             /usr/bin/python3
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes

  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Profiling rules enabled:                 no

  Plugin support (experimental):           yes
  DPDK Bond PMD:                           no

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Fuzz targets enabled:                    no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /var/log/suricata/

  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /var
  --datarootdir                            /usr/share

  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / g++ (real)
  GCC Protect enabled:                     no
  GCC march native enabled:                no
  GCC Profile enabled:                     no
  Position Independent Executable enabled: no
  CFLAGS                                   -g -O2 -fPIC -std=c11 -I/usr/include/dpdk -include rte_config.h -march=corei7 -mrtm  -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS                               
  SECCFLAGS

I saw this: Task #5560: dpdk: Design a test-case for Suricata running as a secondary process - Suricata - Open Information Security Foundation
which, as I understand it, says this will only be supported in version 8.0.0?

This is my partial suricata.yaml file:

dpdk:
  eal-params:
    proc-type: secondary
    vdev: net_ring0
    file-prefix: test

  interfaces:
    - interface: net_ring0
    ...

I create the --vdev=net_ring0 in my primary process using this example: https://doc.dpdk.org/guides-24.11/nics/pcap_ring.html#usage-examples
I also add it with:

rte_eal_hotplug_add("vdev", "net_ring0", "");
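
For context, the same guide also describes an rte_eth_from_rings() variant, which on the primary-process side would look roughly like this (a minimal sketch only; the ring name "PKT_RING", the sizes and the single-queue setup are assumptions, not what I actually run):

#include <stdlib.h>
#include <rte_ring.h>
#include <rte_eth_ring.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Sketch of the primary-process side: create a named ring and expose it as
 * an ethdev, as in the pcap_ring guide. Name and sizes are placeholders. */
static int setup_ring_port(void)
{
    struct rte_ring *r = rte_ring_create("PKT_RING", 1024, rte_socket_id(),
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)
        rte_exit(EXIT_FAILURE, "rte_ring_create failed\n");

    /* The same ring backs both RX and TX of the new port. */
    int port_id = rte_eth_from_rings("net_ring0", &r, 1, &r, 1, rte_socket_id());
    if (port_id < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_from_rings failed\n");

    return port_id;
}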

Then, when starting Suricata, I get:

# suricata --dpdk
i: suricata: This is Suricata version 7.0.7 RELEASE running in SYSTEM mode
W: dpdk: net_ring0: changing MTU on port 6 is not supported, ignoring the setting
*** buffer overflow detected ***: terminated
i: threads: Threads created -> W: 1 FM: 1 FR: 1   Engine started.
Aborted

gdb points to:

Thread 12 "US" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffd4f73640 (LWP 788)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140736766359104) at ./nptl/pthread_kill.c:44
44	./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140736766359104) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=6, threadid=140736766359104) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=140736766359104, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3  0x00007ffff791c476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4  0x00007ffff79027f3 in __GI_abort () at ./stdlib/abort.c:79
#5  0x00007ffff7963676 in __libc_message (action=action@entry=do_abort, 
    fmt=fmt@entry=0x7ffff7ab592e "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:155
#6  0x00007ffff7a1059a in __GI___fortify_fail (msg=msg@entry=0x7ffff7ab58d4 "buffer overflow detected")
    at ./debug/fortify_fail.c:26
#7  0x00007ffff7a0ef16 in __GI___chk_fail () at ./debug/chk_fail.c:28
#8  0x00007ffff7a104db in __fdelt_chk (d=<optimized out>) at ./debug/fdelt_chk.c:25
#9  0x0000555555685345 in UnixMain (this=<optimized out>) at unix-manager.c:640
#10 0x00005555556859b8 in UnixManager (th_v=0x5555620db4e0, thread_data=<optimized out>) at unix-manager.c:1161
#11 0x0000555555681d37 in TmThreadsManagement (td=0x5555620db4e0) at tm-threads.c:557
#12 0x00007ffff796eac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#13 0x00007ffff7a00850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thank you very much.

@lukashino, what do you think?

Hi,
yes, that’s correct, running as a secondary process is not currently supported, but it will be in 8.0.
You were on the right track, except I don’t think rte_eal_hotplug_add("vdev", "net_ring0", ""); is needed.
I need to resurrect my old branch to get this finished. Feature ETA is in April.
Lukas

@lukashino, from the little I saw in Task #5560: dpdk: Design a test-case for Suricata running as a secondary process - Suricata - Open Information Security Foundation, it looks like:

  1. You’re using a new operation-mode: ring on the dpdk: interface in suricata.yaml.
  2. You’ll probably be using rte_ring_lookup() in Suricata to get the ring and read packets from it (sketched below).

I’m trying to understand whether there is a different approach. I see from https://doc.dpdk.org/guides-24.11/nics/pcap_ring.html#usage-examples that a ring can be treated as an eth device, so if you pass --vdev=net_ring0 on the DPDK command line, you should just get it as a regular port, but I’m not sure about that.
Are you familiar with this approach? That way, no explicit rte_ring_lookup() call would be needed.
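
To illustrate what I mean by option 2, the consumer side could look roughly like this (just a sketch; the ring name "PKT_RING" and the burst size are made up for illustration):

#include <stdio.h>
#include <rte_ring.h>
#include <rte_mbuf.h>

/* Sketch of an rte_ring_lookup()-style consumer: look up a ring created by
 * the primary process and dequeue mbufs from it in bursts. */
#define BURST_SIZE 32

static int consume_from_ring(void)
{
    struct rte_ring *r = rte_ring_lookup("PKT_RING");
    if (r == NULL) {
        printf("ring PKT_RING not found\n");
        return -1;
    }

    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        unsigned int n = rte_ring_dequeue_burst(r, (void **)bufs,
                                                BURST_SIZE, NULL);
        for (unsigned int i = 0; i < n; i++) {
            /* ... hand bufs[i] to the packet pipeline ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}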

Hi there,

I’ve done some experiments around this and I couldn’t make it work with net_ring.

However, I was able to run it with memif.

Here is the testpmd command that I start with:
sudo dpdk-testpmd -v -l 2,4 --no-pci --vdev="net_memif0,role=server" --file-prefix=pmd1 -- --portmask=0x1 --nb-ports 1 --nb-cores 1 --rxq 1 --txq 1 -i

I run Suricata with:

dpdk:
  eal-params:
    proc-type: primary #secondary # primary
    file-prefix: pmd2
    vdev: net_memif0 #,zero-copy=yes

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: net_memif0 #0000:3b:00.0 # PCIe address of the NIC port
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: auto
      # interrupt-mode: false # true to switch to interrupt mode 
      promisc: true # promiscuous mode - capture all packets
      multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      copy-mode: tap
      copy-iface: net_memif0

Note: I set copy-mode / copy-iface to send the packets back to testpmd.

After testpmd is started, I start Suricata, and then in dpdk-testpmd I run the command:
start tx_first

Note 2: For some reason net_memif doesn’t report the basic stats, though you will see the packets in stats.log / eve.json stats events. This can be fixed, though.


Thank you. I was able to pass packets from my primary DPDK process to Suricata’s DPDK capture using net_memif.
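
For anyone trying the same thing, the primary-process side can look roughly like this with net_memif (a minimal sketch; the role=server devargs, pool and queue sizes, and single-queue setup are assumptions for illustration, not taken from the posts above):

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_dev.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Sketch: attach a net_memif server port in the primary process and start it,
 * so a memif client (e.g. Suricata with vdev: net_memif0) can connect.
 * Devargs and sizes are illustrative only. */
int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Equivalent to passing --vdev=net_memif0,role=server on the EAL
     * command line. */
    if (rte_eal_hotplug_add("vdev", "net_memif0", "role=server") < 0)
        rte_exit(EXIT_FAILURE, "failed to attach net_memif0\n");

    uint16_t port_id;
    if (rte_eth_dev_get_port_by_name("net_memif0", &port_id) != 0)
        rte_exit(EXIT_FAILURE, "net_memif0 not found\n");

    struct rte_mempool *mp = rte_pktmbuf_pool_create("memif_pool", 4096, 256, 0,
                                                     RTE_MBUF_DEFAULT_BUF_SIZE,
                                                     rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    struct rte_eth_conf conf = {0};
    if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0 ||
        rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(), NULL, mp) != 0 ||
        rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(), NULL) != 0 ||
        rte_eth_dev_start(port_id) != 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    /* ... allocate mbufs from mp, fill them with packet data, and send them
     * towards Suricata with rte_eth_tx_burst(port_id, 0, &pkt, 1) ... */

    rte_eth_dev_stop(port_id);
    rte_eal_cleanup();
    return 0;
}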