Can Suricata version 7.0.0-rc2 receive packets from memif via DPDK

Hello guys,

I am trying to receive packets from memif via DPDK.

In accordance with: libmemif: add testing application (https://github.com/FDio/vpp/commit/7280e3f)

1)In VPP I create the memif interface:

create interface memif id 0 master
set int state memif0/0 up
set int l2 xconn GigabitEthernet5/0/0 memif0/0
set int l2 xconn memif0/0 GigabitEthernet5/0/0

set interface state GigabitEthernet5/0/0 up

2)This memif interface can be read by the test_app from https://github.com/FDio/vpp/commit/7280e3f

3)This memif interface can be read by dpdk-testpmd:

test@vpp-desk:~$ sudo /home/test/projects/vpp/build-root/build-vpp-native/external/build-dpdk/app/dpdk-testpmd --vdev=net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock -- -i

4)But I can't configure Suricata to read from this memif.

The suricata.yaml fragment is:

dpdk:
  eal-params:
    file-prefix: suri
    proc-type: primary
    vdev: 'net_memif,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock'

  interfaces:
    - interface: net_memif
      checksum-checks: true
      checksum-checks-offload: true
      copy-iface: none
      copy-mode: none
      mempool-cache-size: 257
      mempool-size: 65535
      mtu: 1500
      multicast: true
      promisc: true
      rss-hash-functions: auto
      rx-descriptors: 1024
      socket-id: 0
      threads: 4
      tx-descriptors: 1024

Suricata output:

i: suricata: This is Suricata version 7.0.0-rc2 RELEASE running in SYSTEM mode
W: exception-policy: exception-policy: auto not a valid config in IDS mode. Ignoring it.
W: detect: No rule files match the pattern /usr/local/var/lib/suricata/rules/suricata.rules
W: detect: 1 rule files specified, but no rules were loaded!
EAL: No available 1048576 kB hugepages reported
EAL: Cannot open /dev/vfio/noiommu-0: Device or resource busy
EAL: Failed to open VFIO group 0
EAL: Requested device 0000:05:00.0 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
i: conf: unable to find interface default in DPDK config
E: dpdk: net_memif: invalid socket id (err: Operation not permitted)
E: dpdk: net_memif: failed to configure

The question is: can Suricata 7.0.0-rc2 receive packets from memif via DPDK?
Is there something I am doing wrong?

Thank you for any hint/help :)

Hey Alex,

thanks for reaching out. The memif interface is not officially supported and I haven't tried running it. The issue might lie in the configuration process; sometimes some DPDK interfaces are configured a little differently than others. The invalid socket ID error might lead to somewhere here:

Can you possibly try to debug the output in those places, to see e.g. what the function rte_eth_dev_socket_id returns, or similar things? E.g. that commit was supposed to update the codebase for the new DPDK API (I think 21.11+), but maybe I missed something there…
Also, what version of DPDK are you using? If you are using <22.11, could you try it with 22.11 or above?
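For instance, a small standalone helper like this could dump what rte_eth_dev_socket_id reports for each probed port (a hypothetical debug sketch, not Suricata code; it assumes the EAL has already been initialized with your vdev arguments):

/* Hypothetical debug helper: print the NUMA socket DPDK reports for
 * every probed port, together with rte_errno. */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

static void DumpPortSockets(void)
{
    uint16_t port_id;
    RTE_ETH_FOREACH_DEV(port_id) {
        rte_errno = 0;
        int sid = rte_eth_dev_socket_id(port_id);
        printf("port %u: socket_id=%d rte_errno=%d (%s)\n",
                port_id, sid, rte_errno, rte_strerror(rte_errno));
    }
}

For a vdev such as net_memif I would expect socket_id to come back as SOCKET_ID_ANY (-1), since the device has no PCIe/NUMA locality.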

Thank you.

Hi, Lukas!

Thank you for your answer. I'm sorry to be late with the reply.

I use DPDK version 21.11.3. VPP and Suricata are on the same host.

test@vpp-desk:~$ dpkg -l | grep dpdk-dev
ii dpdk-dev 21.11.3-0ubuntu0.22.04.1 amd64 Data Plane Development Kit (dev tools)
ii libdpdk-dev:amd64 21.11.3-0ubuntu0.22.04.1 amd64 Data Plane Development Kit (basic development files)
test@vpp-desk:~$

As far as I understand, the problem is that DPDK doesn't know which NUMA socket the memif (virtual device) is attached to.
A solution to this problem (borrowed from dpdk/app/testpmd):

  1. At Suricata startup, enumerate all available NUMA sockets.
  2. If DPDK can't determine the socket for the memif, allocate the device's ports to the first socket found (see the sketch below).

It works. The code is attached (my changes are marked with //ADymov comments).
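In essence, the workaround looks like this (a hypothetical sketch modeled on dpdk-testpmd's behavior, not the attached file verbatim):

/* Sketch of the workaround: if DPDK can't tell which NUMA socket a port
 * sits on (typical for vdevs like net_memif), fall back to the first
 * socket found among the enabled lcores. */
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_memory.h>

static int PortSocketIdWithFallback(uint16_t port_id)
{
    int sid = rte_eth_dev_socket_id(port_id);
    if (sid == SOCKET_ID_ANY) {
        unsigned int lcore_id;
        RTE_LCORE_FOREACH(lcore_id) {
            /* the first enabled lcore determines the fallback socket */
            return (int)rte_lcore_to_socket_id(lcore_id);
        }
        sid = 0; /* no enabled lcores found - default to socket 0 */
    }
    return sid;
}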

3)I also tried to set this up via suricata.yaml,

dpdk:
  eal-params:
    socket-mem: 1024
    lcores: '1,2@(0)'

but it didn't solve the problem.

It would be very interesting to hear your opinion on this problem.

Thank you!
runmode-dpdk.c (63.5 KB)

Hey Alex,

I will try to look into this a bit more, but I see the lcores parameter in your suricata.yaml. It should not be there, as CPU affinity should only be set through the threading section of suricata.yaml (management/worker threads).
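For reference, a minimal sketch of that section (the core numbers are illustrative):

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "2-3" ]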

Btw, I remember seeing one version of your comment which mentioned that you were running Suricata as a secondary process. That is not currently supported. I was able to make it work, but the work hasn't been merged yet (and will not be for some time, until things settle a bit with Suri 7.0). I believe there is currently a simple configuration check that prevents Suricata from running as secondary; it was not that hard to make it work.

Hi, Lukas!

Thank you for your answer.

1)Yes, I agree that the lcores setting in the dpdk section is wrong. I've read

https://docs.suricata.io/en/suricata-7.0.0-rc2/configuration/suricata-yaml.html

It says that lcore parameters like -l, -c, and --lcores are specified within the threading section of suricata.yaml to prevent configuration overlap.

I just wanted to show that I was unable to bind the memif to a socket via suricata.yaml without changing the code.

2)Lukas, yes, at first I tried to run Suricata reading the memif as a secondary process, but that was the wrong direction. That's why I removed my comment.

3)To get Suricata to read the memif, I did the following:

3.1)Creating the memif in VPP:

create interface memif id 0 master
set int state memif0/0 up
show memif

3.2)Building Suricata:

cd ~
git clone https://github.com/OISF/suricata.git
cd ~/suricata
git checkout 7.0.0-rc2

update runmode-dpdk.c (attached to my previous post)
cp runmode-dpdk.c src/

./autogen.sh
./configure --enable-dpdk
make
sudo make install-full
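As a quick sanity check after installation (assuming suricata is on the PATH), you can confirm that DPDK support was compiled in:

suricata --build-info | grep -i dpdk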

3.3)Changing the suricata.yaml

The dpdk section should look like this (you can see the socket name and id in the show memif output):

dpdk:
  eal-params:
    proc-type: primary
    vdev: 'net_memif,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock'

  interfaces:
    - interface: net_memif
      checksum-checks: true
      checksum-checks-offload: true
      copy-iface: none
      copy-mode: none
      mempool-cache-size: 257
      mempool-size: 65535
      mtu: 1500
      multicast: true
      promisc: true
      rss-hash-functions: auto
      rx-descriptors: 1024
      socket-id: 0
      threads: 4
      tx-descriptors: 1024

In this configuration, Suricata reads the memif.

Hey Alex,

I’ve tried memif through dpdk-testpmd.

  1. created the memif server:
sudo dpdk-testpmd -l 0,1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=server --no-pci -- -i --rxq=2 --txq=2
  2. edited suricata.yaml (suricata.yaml.memif.2thr) to:
dpdk:
  eal-params:
    proc-type: primary
    file-prefix: pmd2
    vdev: net_memif

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: net_memif #0000:00:08.0 # PCIe address of the NIC port
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: 2

with CPU affinity set for 2 worker threads:

    - worker-cpu-set:
        cpu: [ "2-3" ]
  3. started Suricata:
sudo ./src/suricata -c suricata.yaml.memif.2thr -S /dev/null -l /tmp/ --dpdk -vvvv
  4. in dpdk-testpmd I ran:
    start tx-first

And it works, even with multiple queues - in each Suricata worker I receive 32 packets. I wanted to try it with VPP too, but I am having problems making it run so far - I can't seem to connect even dpdk-testpmd to VPP. I blame it on the master/slave vs. server/client terminology that differs a bit between DPDK and VPP (fd.io).

I've tried it with the guide from https://doc.dpdk.org/guides/nics/memif.html, where the command vpp# create interface memif id 0 server no-zero-copy complained: create interface memif: unknown input '

My versions:
VPP - 23.06, DPDK 23.03

Edit1:
Ok, for VPP I was missing socket-abstract=no in the vdev params. I was able to reproduce the socket error that you reported.
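i.e. the client side needs something along these lines (illustrative; it just mirrors the vdev arguments used earlier in this thread):

sudo dpdk-testpmd -l 0,1 --proc-type=primary --no-pci \
    --vdev=net_memif,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock -- -i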

Hi Alex,

seems like I’ve nailed it down - if possible, try it with the newest master (soon to be 7.0) with DPDK 22.11+.

Alternatively, here is the proposed fix: https://github.com/lukashino/suricata/tree/bug/socket_id_v1
An unknown socket_id used to be treated as an error; it has been reworked so it no longer trips the return-value checks.

Ideally, please confirm that this works for you (it does in my testing with VPP 23.06 and DPDK 21.11 as well).

Hi, Lukas!

Thank you for your reply.

Your Suricata build from https://github.com/lukashino/suricata/tree/bug/socket_id_v1 works fine. Thank you.
I am using VPP 23.02 and DPDK 21.11.3.

This is the Suricata output in IDS mode:

EAL: No available 1048576 kB hugepages reported
EAL: Cannot open /dev/vfio/noiommu-0: Device or resource busy
EAL: Failed to open VFIO group 0
EAL: Requested device 0000:05:00.0 cannot be used
EAL: Cannot open /dev/vfio/noiommu-1: Device or resource busy
EAL: Failed to open VFIO group 1
EAL: Requested device 0000:06:00.0 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
Warning: dpdk: "all" specified in worker CPU cores affinity, excluding management threads [ConfigSetThreads:runmode-dpdk.c:379]
Error: dpdk: net_memif: Allmulticast setting of port (0) can not be configured. Set it to false [DeviceConfigure:runmode-dpdk.c:1431]
Warning: dpdk: net_memif: changing MTU on port 0 is not supported, ignoring the setting [DeviceConfigure:runmode-dpdk.c:1470]
Notice: dpdk: net_memif: unable to determine NIC's NUMA node, degraded performance can be expected [ReceiveDPDKThreadInit:source-dpdk.c:554]
Notice: threads: Threads created -> W: 7 FM: 1 FR: 1 Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1888]
^CNotice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2831]
Notice: device: net_memif: packets: 38681070, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]
test@vpp-desk:~/projects/suricata$

This is the Suricata output in IPS mode:

EAL: No available 1048576 kB hugepages reported
EAL: Cannot open /dev/vfio/noiommu-0: Device or resource busy
EAL: Failed to open VFIO group 0
EAL: Requested device 0000:05:00.0 cannot be used
EAL: Cannot open /dev/vfio/noiommu-1: Device or resource busy
EAL: Failed to open VFIO group 1
EAL: Requested device 0000:06:00.0 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
Warning: dpdk: "all" specified in worker CPU cores affinity, excluding management threads [ConfigSetThreads:runmode-dpdk.c:379]
Error: dpdk: net_memif0: Allmulticast setting of port (0) can not be configured. Set it to false [DeviceConfigure:runmode-dpdk.c:1431]
Warning: dpdk: net_memif0: changing MTU on port 0 is not supported, ignoring the setting [DeviceConfigure:runmode-dpdk.c:1470]
Notice: dpdk: net_memif0: unable to determine NIC's NUMA node, degraded performance can be expected [ReceiveDPDKThreadInit:source-dpdk.c:554]
Error: dpdk: net_memif1: Allmulticast setting of port (1) can not be configured. Set it to false [DeviceConfigure:runmode-dpdk.c:1431]
Warning: dpdk: net_memif1: changing MTU on port 1 is not supported, ignoring the setting [DeviceConfigure:runmode-dpdk.c:1470]
Notice: dpdk: net_memif1: unable to determine NIC's NUMA node, degraded performance can be expected [ReceiveDPDKThreadInit:source-dpdk.c:554]
Notice: threads: Threads created -> W: 2 FM: 1 FR: 1 Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1888]
^CNotice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2831]
Notice: device: net_memif0: packets: 27763661, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]
Notice: device: net_memif1: packets: 0, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:321]

Surprisingly, though, in IPS mode RX gives only 700 Kbps.

Tomorrow I'll try to check it with the newest master.
Could you roughly estimate when we can expect a Suricata release with your fix?

Thank you!

Hey Alex,

First of all, thanks for testing it out.
Is that a typo in your RX results of 7.0.0-rc2 in IPS mode, where you noted RX only gives you 700 Kbps (possibly Mbps)?

I would wait for the results from the master branch (or just check out the changes prior to my commit and apply your changes, in case the master branch contains other new changes). There is a slight difference: your solution sets the socket ID to 0, mine sets SOCKET_ID_ANY, which made more sense to me (as it better represents the state of the NIC) but may affect some internals, e.g. DPDK mempool creation.
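For context, the socket ID eventually ends up in the mbuf pool allocation, roughly like this (a sketch with made-up pool name and sizes, not Suricata's actual code):

/* Illustrative only: with SOCKET_ID_ANY, DPDK may allocate the pool on
 * any NUMA node; with 0, allocation is forced onto node 0, which can
 * penalize workers pinned to another node. */
#include <rte_mbuf.h>

struct rte_mempool *CreateExamplePool(int socket_id)
{
    return rte_pktmbuf_pool_create("mbuf_pool_example",
            65535,                      /* number of mbufs */
            257,                        /* per-lcore cache size */
            0,                          /* private data size */
            RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf */
            socket_id);                 /* NUMA node or SOCKET_ID_ANY */
}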

Generally speaking, I would expect a new minor release relatively quickly, sometime towards the fall of this year, but of course no guarantees.

Hi, Lukas!

I’m sorry to be late with the reply.

> Is that a typo in your RX results of 7.0.0-rc2 in IPS mode, where you noted RX only gives you 700 Kbps (possibly Mbps)?

No, exactly 700 Kbps. And it is related precisely to the fact that I set the socket ID to 0.

> There is a slight difference: your solution sets the socket ID to 0, mine sets SOCKET_ID_ANY, which made more sense to me (as it better represents the state of the NIC) but may affect some internals, e.g. DPDK mempool creation.

You're right. It was enough to add just 3 lines to the function for everything to work correctly:

static int32_t DeviceSetSocketID(uint16_t port_id, int32_t *socket_id)
{
    rte_errno = 0;
    int retval = rte_eth_dev_socket_id(port_id);
    *socket_id = retval;

#if RTE_VERSION >= RTE_VERSION_NUM(22, 11, 0, 0) // DPDK API changed since 22.11
    retval = -rte_errno;
#else
    if (retval == SOCKET_ID_ANY)
        retval = 0; // DPDK couldn't determine socket ID of a port
#endif

    return retval;
}

> seems like I've nailed it down - if possible, try it with the newest master (soon to be 7.0) with DPDK 22.11+.

The newest master (https://github.com/OISF/suricata) with DPDK 23.03.0 works fine (in IDS and IPS modes) without any code changes. In IPS mode TRex shows: Total-Tx: 600 Mbps, Total-Rx: 560 Mbps

So I can use the latest Suricata release with the latest DPDK release.
Lukas, thank you very much!

Hi, Alex

Can you share your YAML config for memif in IPS mode? Thanks

I have tried successfully with this YAML config:

dpdk:
  eal-params:
    proc-type: primary
    file-prefix: suricata
    vdev: ["net_memif1,role=slave,id=1,socket-abstract=no,socket=/run/vpp/memif.sock", "net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock"]

  interfaces:
    - interface: net_memif1
      checksum-checks: true
      checksum-checks-offload: true
      copy-iface: net_memif0
      copy-mode: ips
      mempool-cache-size: 257
      mempool-size: 65535
      mtu: 1500
      multicast: true
      promisc: true
      rss-hash-functions: auto
      rx-descriptors: 1024
      socket-id: 0
      tx-descriptors: 1024
      threads: 3
    - interface: net_memif0
      checksum-checks: true
      checksum-checks-offload: true
      copy-iface: net_memif1
      copy-mode: ips
      mempool-cache-size: 257
      mempool-size: 65535
      mtu: 1500
      multicast: true
      promisc: true
      rss-hash-functions: auto
      rx-descriptors: 1024
      socket-id: 0
      tx-descriptors: 1024
      threads: 3

Hi, andy123

I have the same suricata.yaml as you. This problem appears with DPDK versions below 22.11 (in my case 21.11.3). With DPDK 22.11 or newer, Suricata works fine with memif. What version of DPDK are you using?

Hi Alex and Andy,

when I was doing the fix, my intention was to fix it for DPDK <22.11 as well (at the time of the fix I was using 21.11).

From comment #8 in this thread (by lukashino) we can see it is supposed to work with at least 21.11.

Andy seemed to be successful with his config, but just to be sure, I'd be happy to learn what Andy's Suricata/DPDK versions are and whether everything works as expected.

Lukas

DPDK version: 22.11.2-2~deb12u1 (installed from the Debian package), VPP: 23.02, Suricata: 7.0.1; memif works fine.
I used this VPP config to test: https://github.com/FDio/vpp/commit/7280e3f, and got this result:
vpp# monitor interface memif0/0
rx: 0pps 0bps tx: 25.93Mpps 13.28Gbps
rx: 0pps 0bps tx: 25.98Mpps 13.30Gbps
vpp# monitor interface memif0/1
rx: 2.68Mpps 1.37Gbps tx: 0pps 0bps
rx: 2.68Mpps 1.37Gbps tx: 0pps 0bps

Thanks Lukas and Alex.

Just curious: why use 2 packet-processing frameworks? Is VPP together with DPDK faster or more flexible than Suricata with 'only' DPDK and physical interfaces?
Cheers,
Andre

For me, VPP on top of DPDK has more flexible features, such as L2 and L3 support, than pure DPDK.

Hi, Lukas and ADymov

Several months have passed since my last post :).

My initial goal was to use VPP and Suricata like this:
[IF] → DPDK_VPP ← (memif) → DPDK_Suricata
Between DPDK_VPP and DPDK_Suricata, a memif virtual interface is used to transfer packets.

In my previous tests, I only tested packets generated by the VPP packet generator; DPDK_Suricata received them successfully.

Going further, I want to test the full scenario: [IF] → DPDK_VPP ← (memif) → DPDK_Suricata.
Now I have found a problem: when DPDK_Suricata is launched, DPDK_VPP does not receive any packets from the physical interfaces. After searching the forum, it seems to be the same problem discussed in "Suricata and dpdk in secondary mode", right? Can you give me any advice?

I created a new topic to discuss this…