Several days ago I successfully tried Suricata DPDK IPS mode with libmemif. Now I have a new question: I configured DPDK mode with 3 threads and used the VPP packet generator to send packets to the memif interface.
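For reference, the relevant part of my suricata.yaml looks roughly like the sketch below (a minimal sketch only: the vdev strings, socket path, and buffer sizes are illustrative, and the exact way to pass two vdevs in eal-params may differ between Suricata versions):

dpdk:
  eal-params:
    proc-type: primary
    # two memif vdevs connected to the VPP master side (illustrative values)
    vdev: ["net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock",
           "net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock"]
  interfaces:
    - interface: net_memif0
      threads: 3
      promisc: true
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: net_memif1
    - interface: net_memif1
      threads: 3
      promisc: true
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: net_memif0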
From the Suricata log, only one RX thread received packets while the other threads show 0, as below:
Perf: dpdk: net_memif1: total RX stats: packets 0 bytes: 0 missed: 0 errors: 0 nombufs: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:639]
Perf: dpdk: net_memif1: total TX stats: packets 413836372 bytes: 26485527808 errors: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:644]
Perf: dpdk: (W#01-net_memif1) received packets 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Perf: dpdk: (W#02-net_memif1) received packets 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Perf: dpdk: (W#03-net_memif1) received packets 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Perf: dpdk: net_memif0: total RX stats: packets 0 bytes: 0 missed: 0 errors: 0 nombufs: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:639]
Perf: dpdk: net_memif0: total TX stats: packets 0 bytes: 0 errors: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:644]
Perf: dpdk: (W#01-net_memif0) received packets 413839776 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Perf: dpdk: (W#02-net_memif0) received packets 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Perf: dpdk: (W#03-net_memif0) received packets 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Info: counters: Alerts: 0 [StatsLogSummary:counters.c:878]
The log also shows: Config: dpdk: net_memif0: RSS not supported [DeviceInitPortConf:runmode-dpdk.c:1143]
In my test environment, one RX thread can process 2.39 Mpps, as indicated in the VPP terminal. Can Suricata's DPDK mode with a memif vdev support multiple RSS queues? If so, we could improve the total PPS by configuring more threads.
I haven’t worked much with either VPP or the memif PMD…
But from the docs, it seems like memif in DPDK does not support RSS natively.
In the DPDK docs, testpmd is only shown with one queue - https://doc.dpdk.org/guides/nics/memif.html
It is always a good idea to test whether dpdk-testpmd works with the desired feature (e.g. configuring multiple queues on the memif PMD). The testpmd command would then be something like:
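(Sketch only; the core list, socket path, and queue counts are examples and need to be adapted to your setup:)

dpdk-testpmd -l 3-7 -n 4 \
  --vdev=net_memif,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock \
  -- -i --nb-cores=4 --rxq=2 --txq=2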
It seems that memif supports multiple RX/TX queues, so I configured VPP and testpmd as below, with memif using 2 RX/TX queues.
vpp config:
create interface memif id 0 master rx-queues 2 tx-queues 2
create interface memif id 1 master rx-queues 2 tx-queues 2
set int state memif0/0 up
set int state memif0/1 up
create packet-generator interface pg0
set int state pg0 up
create packet-generator interface pg1
set int state pg1 up
set int l2 xconn pg0 memif0/0
set int l2 xconn memif0/0 pg0
set int l2 xconn pg1 memif0/1
set int l2 xconn memif0/1 pg1
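If the memif queues are to be driven from VPP's packet generator (as in the original Suricata test), a stream definition along these lines can be enabled on pg0 and pg1 (a sketch based on the VPP pg examples; MACs, addresses, rate, limit, and packet size are placeholders, and the pg1 stream differs only in the interface and flow keys):

packet-generator new {
  name stream0
  limit 100000000
  rate 1e7
  size 64-64
  interface pg0
  node ethernet-input
  data {
    IP4: 00:de:ad:00:00:01 -> 00:de:ad:00:00:02
    UDP: 192.168.1.1 -> 192.168.2.1
    UDP: 1024 -> 2048
    incrementing 18
  }
}
packet-generator enable-stream stream0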
root@debian:~# dpdk-testpmd -l 3-7 -n 4 --vdev=net_memif,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock -- --nb-cores=4 --rxq=2 --txq=2 -i
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
memif_set_role(): Role argument "slave" is deprecated, use "client"
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 36:8B:0F:52:34:E5
Checking link statuses...
Done
testpmd> set corelist 4,5,6,7
testpmd> start
io packet forwarding - ports=1 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
Logical Core 5 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
set fwd flowgen # flow packet generator
port stop all
port start all
start
show port stats all
stop
testpmd> stop
Telling cores to stop…
Waiting for lcores to finish…
As far as I understand, a multi-queue setup works with the memif interface (both in other DPDK apps and in Suricata); however, you need to make sure that packets are distributed/generated across the individual queues, because the memif interface itself will not distribute them. Am I correct?
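One way to check whether both queues are actually being fed on the testpmd side (an illustrative check, not taken from the session above) is to look at the per-stream forwarding counters while traffic is running; each forwarding core reports what it received on its own RX queue:

testpmd> show fwd stats all

On the Suricata side, the per-worker "received packets" lines printed at exit (like the ones above) already give the per-queue picture, since each DPDK worker thread polls its own RX queue.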