My test scenario was to use VPP and Suricata like this:
[IF] → DPDK_VPP ← (memif) → DPDK_Suricata
Between DPDK_VPP and DPDK_Suricata, a memif virtual interface is used to transfer packets.
Going further, I want to test the full scenario: [IF] → DPDK_VPP ← (memif) → DPDK_Suricata.
Now I have found a problem: when DPDK_Suricata is launched, DPDK_VPP no longer receives any packets from the physical IFs. After searching the forum, it seems to be the same problem discussed here: Suricata and dpdk in secondary mode, right?
Can you test first with testpmd? You would replace Suricata with dpdk-testpmd using the same settings Suricata has.
You mention that when DPDK_Suricata is launched, DPDK_VPP does not receive any packets from the physical IF. Are you sure DPDK_Suricata uses the correct memif interface and that DPDK_VPP is correctly configured to read from the physical interfaces? Can it read from the PF (Physical Function/interface) before Suricata runs?
First, in VPP, cross-connect the two physical IFs with memif0/0 and memif0/1:
vpp# set int l2 xconn TenGigabitEthernet7/0/0 memif0/0
vpp# set int l2 xconn memif0/0 TenGigabitEthernet7/0/0
vpp# set int l2 xconn TenGigabitEthernet7/0/1 memif0/1
vpp# set int l2 xconn memif0/1 TenGigabitEthernet7/0/1
vpp# set interface state TenGigabitEthernet7/0/0 up
vpp# set interface state TenGigabitEthernet7/0/1 up
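The memif0/0 and memif0/1 interfaces must already exist before the cross-connects above; roughly, they can be created like this (a sketch, assuming VPP acts as the memif server on its default socket /run/vpp/memif.sock):
vpp# create interface memif id 0 master
vpp# create interface memif id 1 master
vpp# set interface state memif0/0 up
vpp# set interface state memif0/1 up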
Ping through TenGigabitEthernet7/0/0 and TenGigabitEthernet7/0/1. We can see the counters of the two physical IFs and the memif vdevs increasing:
vpp# sh int
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
TenGigabitEthernet7/0/0 1 up 9000/0/0/0 rx packets 5
rx bytes 300
drops 5
TenGigabitEthernet7/0/1 2 up 9000/0/0/0 rx packets 5
rx bytes 300
drops 5
local0 0 down 0/0/0/0
memif0/0 4 up 9000/0/0/0 tx-error 5
memif0/1 3 up 9000/0/0/0 tx-error 5
Run testpmd with the two memif vdevs in loopback mode:
dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -- -i
…
Configuring Port 0 (socket 0)
Port 0: 00:90:0B:54:9A:7E ### testpmd unexpectedly attached to TenGigabitEthernet7/0/0
Configuring Port 1 (socket 0)
Port 1: 00:90:0B:54:9A:7F ### testpmd unexpectedly attached to TenGigabitEthernet7/0/1
Configuring Port 2 (socket 0)
Port 2: B6:86:8D:1D:9D:33
Configuring Port 3 (socket 0)
Port 3: FA:3D:F5:34:9D:0E
Checking link statuses…
Done
testpmd>start
Now the ping goes through TenGigabitEthernet7/0/0 and TenGigabitEthernet7/0/1 successfully:
64 bytes from 192.168.100.20: icmp_seq=1832 ttl=64 time=5.77 ms
64 bytes from 192.168.100.20: icmp_seq=1833 ttl=64 time=0.100 ms
64 bytes from 192.168.100.20: icmp_seq=1834 ttl=64 time=4.80 ms
64 bytes from 192.168.100.20: icmp_seq=1835 ttl=64 time=0.165 ms
But the packet counters in VPP have stopped updating:
vpp# sh int
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
TenGigabitEthernet7/0/0 1 up 9000/0/0/0 rx packets 44
rx bytes 2640
drops 44
TenGigabitEthernet7/0/1 2 up 9000/0/0/0 rx packets 44
rx bytes 2640
drops 44
local0 0 down 0/0/0/0
memif0/0 4 up 9000/0/0/0 tx-error 44
memif0/1 3 up 9000/0/0/0 tx-error 44
…
vpp# sh int
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
TenGigabitEthernet7/0/0 1 up 9000/0/0/0 rx packets 44
rx bytes 2640
drops 44
TenGigabitEthernet7/0/1 2 up 9000/0/0/0 rx packets 44
rx bytes 2640
drops 44
local0 0 down 0/0/0/0
memif0/0 4 up 9000/0/0/0 tx-error 44
memif0/1 3 up 9000/0/0/0 tx-error 44
From this we can draw a conclusion: when testpmd is launched, it automatically takes over the two physical IFs and bypasses DPDK_VPP.
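One way to confirm which devices testpmd grabbed is show port info all inside testpmd: ports 0/1 should list the physical NIC's PMD as the driver, while ports 2/3 list net_memif (a suggested check, not output from this run):
testpmd> show port info all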
> Can you test first with testpmd? You would replace Suricata with dpdk-testpmd using the same settings Suricata has.
The memif interfaces and physical IFs are configured correctly; we can use test_app to read from memif, and the ping test is OK.
IFs → memif vdev in VPP ← (memif interface) → test_app (see: libmemif: add testing application · FDio/vpp@7280e3f · GitHub)
When test_app is launched, we can see the rx/tx counters of the IFs, memif0/0, and memif0/1 increasing in VPP, and ping is OK.
So, physical IFs, memif0/0, memif0/1, and test_app are configured correctly.
When DPDK_Suricata is launched, DPDK_VPP does not receive any packets from the physical IFs. Furthermore, if we connect the memif interface in VPP to VPP's embedded packet generator, the generator sends simulated packets to the memif interface and DPDK_Suricata receives those packets from VPP. This indicates that the memif setup between VPP and Suricata is configured correctly.
So I believe that when DPDK_Suricata is launched, it takes over the physical IFs unexpectedly.
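As a further check on the VPP side, show memif lists each memif socket/interface and whether it is connected, and show errors breaks down the tx-error counters shown earlier (suggested checks only, not output from this setup):
vpp# show memif
vpp# show errors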
Now, if you can, run one more test: replace test_app (the one mentioned in the commit) with DPDK testpmd, because test_app is written directly against the memif library. I'm wondering whether there is something in DPDK itself that causes the problem.
I don't see how Suricata would access the physical IF if it only has a configuration for the memif interfaces.
Yes, when DPDK testpmd is launched, it automatically attaches to the physical IFs unexpectedly. Below is the testpmd startup log:
Run testpmd with the two memif vdevs in loopback mode:
dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -- -i
…
Configuring Port 0 (socket 0)
Port 0: 00:90:0B:54:9A:7E ### testpmd unexpectedly attached to TenGigabitEthernet7/0/0
Configuring Port 1 (socket 0)
Port 1: 00:90:0B:54:9A:7F ### testpmd unexpectedly attached to TenGigabitEthernet7/0/1
Configuring Port 2 (socket 0)
Port 2: B6:86:8D:1D:9D:33
Configuring Port 3 (socket 0)
Port 3: FA:3D:F5:34:9D:0E
Checking link statuses…
Done
testpmd>start
OK, so it seems it's DPDK-related rather than Suricata-related. Can you maybe reach out for help on the fd.io/VPP or DPDK users mailing lists?
Alternatively, I've just thought of one more thing to try. When you start up a DPDK application, it probes every device it can. I thought maybe VPP somehow panics when this happens, and probing can be limited with the DPDK EAL parameter --allow/-a. In that case testpmd would only use the vdevs and would not probe the physical interfaces.
Updated testpmd command: dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -a net_memif0 -a net_memif1 -- -i
Yes, I used either the --no-pci option or the --block 0000:07:00.0 --block 0000:07:00.1 options to ignore the physical IFs, and it works OK now: when testpmd is launched, DPDK_VPP still receives packets from the physical IFs. But the -a net_memif0 -a net_memif1 options do not work.
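For reference, the working invocation is just the earlier testpmd command with --no-pci added (the --block variant simply swaps --no-pci for the two --block options):
dpdk-testpmd --no-pci --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -- -i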
Furthermore, I updated the Suricata YAML and the ping test now works: DPDK_Suricata can receive packets from DPDK_VPP through the memif interfaces. Alternatively, no-pci: true also works and the ping test succeeds. The relevant dpdk/eal-params section:
dpdk:
  eal-params:
    proc-type: primary
    file-prefix: suricata
    no-pci: true
    vdev: ["net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock", "net_memif1,role=slave,id=1,socket-abstract=no,socket=/run/vpp/memif.sock"]
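With this in suricata.yaml, Suricata is then started in its DPDK capture mode, for example (config path assumed, adjust to your install):
suricata --dpdk -c /etc/suricata/suricata.yaml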
2. Suricata IPS mode cannot pass HTTP packets, though the ping test is OK:
The dpdk config is the same as shown above.
The ping test is OK:
root@debian:~/waf_performance# ping 192.168.100.20
PING 192.168.100.20 (192.168.100.20) 56(84) bytes of data.
64 bytes from 192.168.100.20: icmp_seq=1 ttl=64 time=0.140 ms
64 bytes from 192.168.100.20: icmp_seq=2 ttl=64 time=0.160 ms
64 bytes from 192.168.100.20: icmp_seq=3 ttl=64 time=0.145 ms
In the stats log I found the following, but I still don't know how to find the root cause:
ips.accepted                     | Total | 940
ips.blocked                      | Total | 154
ips.drop_reason.flow_drop        | Total | 122
ips.drop_reason.stream_midstream | Total | 32
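Looking at the counters, the ips.drop_reason.stream_midstream drops suggest that flows picked up mid-stream are being dropped by the stream engine's exception policy in IPS mode. One thing I could experiment with in suricata.yaml, assuming my Suricata version supports the midstream-policy setting (to be verified against the docs for my version; only a sketch):
stream:
  midstream: true            # accept sessions picked up mid-stream
  midstream-policy: ignore   # don't drop flows that were picked up mid-stream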