DPDK_VPP can't receive packets when DPDK_Suricata is launched: is this a primary-mode conflict?

Hi Lukas and all,

My test scenario was to use VPP and Suricata like this:
[IF] → DPDK_VPP ← (memif) → DPDK_Suricata
Between DPDK_VPP and DPDK_Suricata, a memif virtual interface is used to transfer packets.
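
For context, the memif interfaces on the VPP side can be created roughly like this; this is a sketch assuming VPP acts as the memif server on its default /run/vpp/memif.sock socket, which matches the role=client vdevs used on the DPDK side later in this thread:
    vpp# create interface memif id 0 master
    vpp# create interface memif id 1 master
    vpp# set interface state memif0/0 up
    vpp# set interface state memif0/1 up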

In my previous tests, I only used packets generated by the VPP packet generator, and DPDK_Suricata received them successfully (see: Can Suricata version 7.0.0-rc2 receive packets from memif via DPDK).

Going further, I want to test the full scenario: [IF] → DPDK_VPP ← (memif) → DPDK_Suricata.
Now I have found a problem: when DPDK_Suricata is launched, DPDK_VPP no longer receives any packets from the physical IFs. After searching the forum, it seems to be the same problem discussed here: Suricata and dpdk in secondary mode, right?

Any further advice?

Hi,
I am not too familiar with VPP, so at the moment I cannot advise you directly, but here are some questions and hints.

  • I don't believe you need DPDK secondary-process support in Suricata, as discussed here: Suricata and dpdk in secondary mode - #7 by zappasodi
  • Can you test first with testpmd? You would replace Suricata with dpdk-testpmd with the same settings as Suricata has.
  • You mention that when DPDK_Suricata is launched, DPDK_VPP does not receive any packets from the physical IF. Are you sure DPDK_Suricata uses the correct memif interface and that DPDK_VPP is correctly configured to read from the physical interfaces? Can it read from the PF (Physical Function/interface) before Suricata runs?

Sorry for the late reply. I did the testpmd test as follows:

  1. In VPP, first cross-connect the two physical IFs with memif0/0 and memif0/1
    vpp# set int l2 xconn TenGigabitEthernet7/0/0 memif0/0
    vpp# set int l2 xconn memif0/0 TenGigabitEthernet7/0/0
    vpp# set int l2 xconn TenGigabitEthernet7/0/1 memif0/1
    vpp# set int l2 xconn memif0/1 TenGigabitEthernet7/0/1
    vpp# set interface state TenGigabitEthernet7/0/0 up
    vpp# set interface state TenGigabitEthernet7/0/1 up

  2. Ping through TenGigabitEthernet7/0/0 and TenGigabitEthernet7/0/1. We can see the rx packet counters of the two IFs and the counters of the memif vdevs increasing
    vpp# sh int
    Name                     Idx State  MTU (L3/IP4/IP6/MPLS)   Counter      Count
    TenGigabitEthernet7/0/0  1   up     9000/0/0/0              rx packets   5
                                                                rx bytes     300
                                                                drops        5
    TenGigabitEthernet7/0/1  2   up     9000/0/0/0              rx packets   5
                                                                rx bytes     300
                                                                drops        5
    local0                   0   down   0/0/0/0
    memif0/0                 4   up     9000/0/0/0              tx-error     5
    memif0/1                 3   up     9000/0/0/0              tx-error     5

  3. Run testpmd with the two memif vdevs in loopback mode
    dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -- -i

    Configuring Port 0 (socket 0)
    Port 0: 00:90:0B:54:9A:7E ### testpmd unexpectedly took over TenGigabitEthernet7/0/0.
    Configuring Port 1 (socket 0)
    Port 1: 00:90:0B:54:9A:7F ### testpmd unexpectedly took over TenGigabitEthernet7/0/1.
    Configuring Port 2 (socket 0)
    Port 2: B6:86:8D:1D:9D:33
    Configuring Port 3 (socket 0)
    Port 3: FA:3D:F5:34:9D:0E
    Checking link statuses…
    Done
    testpmd>start

  4. Now we can see that ping succeeds through TenGigabitEthernet7/0/0 and TenGigabitEthernet7/0/1.

64 bytes from 192.168.100.20: icmp_seq=1832 ttl=64 time=5.77 ms
64 bytes from 192.168.100.20: icmp_seq=1833 ttl=64 time=0.100 ms
64 bytes from 192.168.100.20: icmp_seq=1834 ttl=64 time=4.80 ms
64 bytes from 192.168.100.20: icmp_seq=1835 ttl=64 time=0.165 ms

But the packet counters in VPP stopped updating; two consecutive show-interface outputs have identical counters:
vpp# sh int
Name                     Idx State  MTU (L3/IP4/IP6/MPLS)   Counter      Count
TenGigabitEthernet7/0/0  1   up     9000/0/0/0              rx packets   44
                                                            rx bytes     2640
                                                            drops        44
TenGigabitEthernet7/0/1  2   up     9000/0/0/0              rx packets   44
                                                            rx bytes     2640
                                                            drops        44
local0                   0   down   0/0/0/0
memif0/0                 4   up     9000/0/0/0              tx-error     44
memif0/1                 3   up     9000/0/0/0              tx-error     44

vpp# sh int
Name                     Idx State  MTU (L3/IP4/IP6/MPLS)   Counter      Count
TenGigabitEthernet7/0/0  1   up     9000/0/0/0              rx packets   44
                                                            rx bytes     2640
                                                            drops        44
TenGigabitEthernet7/0/1  2   up     9000/0/0/0              rx packets   44
                                                            rx bytes     2640
                                                            drops        44
local0                   0   down   0/0/0/0
memif0/0                 4   up     9000/0/0/0              tx-error     44
memif0/1                 3   up     9000/0/0/0              tx-error     44

Conclusion: when testpmd is launched, it automatically takes over the two physical IFs and bypasses DPDK_VPP.

  • Can you test first with testpmd? You would replace Suricata with dpdk-testpmd with the same settings as Suricata has.
  1. The memif interfaces and physical IFs are configured correctly; we can use test_app to read from memif, and the ping test is OK.
    IFs – memif vdev in VPP ← memif interface → test_app : libmemif: add testing application · FDio/vpp@7280e3f · GitHub
    When test_app is launched, we can see the rx/tx counters of the IFs, memif0/0 and memif0/1 increasing in VPP, and ping is OK.
    So the physical IFs, memif0/0, memif0/1, and test_app are all configured correctly.

  2. When DPDK_Suricata is launched, DPDK_VPP does not receive any packets from the physical IF. Furthermore, if we connect the memif interface in VPP to VPP's embedded packet generator, the packet generator generates simulated packets and sends them to the memif interface, and DPDK_Suricata then receives those packets from VPP. This indicates that the memif setup between VPP and Suricata is configured correctly.

So I believe that when DPDK_Suricata is launched, it unexpectedly takes over the physical IFs.

Could you now test replacing test_app (the one mentioned in that commit) with DPDK testpmd? Because test_app is written directly against the memif library, I'm wondering whether there is something in DPDK itself that causes the problem.

I don't see how Suricata would access the physical IF if it only has a configuration for the memif interfaces.

Yes, when DPDK testpmd is launched, it automatically attaches to the physical IFs unexpectedly. Below is the testpmd startup log:

  1. Run testpmd with the two memif vdevs in loopback mode
    dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -- -i

    Configuring Port 0 (socket 0)
    Port 0: 00:90:0B:54:9A:7E ### testpmd unexpectedly took over TenGigabitEthernet7/0/0.
    Configuring Port 1 (socket 0)
    Port 1: 00:90:0B:54:9A:7F ### testpmd unexpectedly took over TenGigabitEthernet7/0/1.
    Configuring Port 2 (socket 0)
    Port 2: B6:86:8D:1D:9D:33
    Configuring Port 3 (socket 0)
    Port 3: FA:3D:F5:34:9D:0E
    Checking link statuses…
    Done
    testpmd>start

OK, so it seems it's DPDK-related rather than Suricata-related. Could you maybe reach out for help on the fd.io/VPP or DPDK users mailing list?

Alternatively, I've just thought of one more thing to try. When you start up a DPDK application, it probes every device it can. Maybe VPP somehow panics when this happens, and the probing can be limited with the DPDK EAL parameter --allow/-a. In that case testpmd would only use the vdevs and would not probe the physical interfaces.

Updated testpmd cmd:
dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -a net_memif0 -a net_memif1 -- -i

Yes, I used the --no-pci option OR the --block 0000:07:00.1 --block 0000:07:00.0 options to ignore the physical IFs, and it works now: when testpmd is launched, DPDK_VPP can still receive packets from the physical IFs. But the -a net_memif0 -a net_memif1 options don't work (as far as I can tell, -a/--allow expects PCI addresses rather than vdev names).
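
For reference, the testpmd invocation that leaves the physical IFs alone looks roughly like this (a sketch assembled from the options above; the --block variant with the two PCI addresses works the same way):
dpdk-testpmd --no-pci --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -- -i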

Furthermore, I updated the Suricata YAML and now the ping test works: dpdk_suricata can receive packets from dpdk_vpp through the memif interfaces.
dpdk:
  eal-params:
    proc-type: primary
    file-prefix: suricata
    block: ["0000:07:00.1", "0000:07:00.0"]
    vdev: ["net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock", "net_memif1,role=slave,id=1,socket-abstract=no,socket=/run/vpp/memif.sock"]
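
If I understand the Suricata DPDK runmode correctly, these eal-params are simply passed through to DPDK's EAL, so the block above should be roughly equivalent to launching a DPDK application with the following arguments (a sketch, not copied from Suricata's output):
--proc-type primary --file-prefix suricata --block 0000:07:00.1 --block 0000:07:00.0 --vdev net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev net_memif1,role=slave,id=1,socket-abstract=no,socket=/run/vpp/memif.sock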

Alternatively, no-pci: true also works and the ping test is successful.
dpdk:
  eal-params:
    proc-type: primary
    file-prefix: suricata
    no-pci: true
    vdev: ["net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock", "net_memif1,role=slave,id=1,socket-abstract=no,socket=/run/vpp/memif.sock"]
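
For reference, a minimal interfaces: block matching the IPS setup seen in the log below would look roughly like this (a sketch, not the exact configuration used here; per-interface keys not shown are left at their defaults):
dpdk:
  interfaces:
    - interface: net_memif0
      threads: 1
      copy-mode: ips
      copy-iface: net_memif1
    - interface: net_memif1
      threads: 1
      copy-mode: ips
      copy-iface: net_memif0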

The ping tests above are OK. Then I used wget to access a web site and found a new issue: Suricata (IPS mode) fails to forward HTTP packets.

1. testpmd forwards HTTP packets fine.
testpmd command:
dpdk-testpmd --vdev=net_memif0,role=client,id=0,socket-abstract=no,socket=/run/vpp/memif.sock --vdev=net_memif1,role=client,id=1,socket-abstract=no,socket=/run/vpp/memif.sock -b 0000:07:00.1 -b 0000:07:00.0 -- -i
wget:
root@debian:~# wget http://192.168.100.20/
--2024-04-30 06:30:40-- http://192.168.100.20/
Connecting to 192.168.100.20:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 177 [text/html]
Saving to: ‘index.html.7’

index.html.7 100%[=========================================================>] 177 --.-KB/s in 0s

2024-04-30 06:30:40 (21.1 MB/s) - ‘index.html.7’ saved [177/177]

2. Suricata in IPS mode can't forward HTTP packets, but the ping test is OK.
The dpdk config is the same as shown above.

The ping test is OK:
root@debian:~/waf_performance# ping 192.168.100.20
PING 192.168.100.20 (192.168.100.20) 56(84) bytes of data.
64 bytes from 192.168.100.20: icmp_seq=1 ttl=64 time=0.140 ms
64 bytes from 192.168.100.20: icmp_seq=2 ttl=64 time=0.160 ms
64 bytes from 192.168.100.20: icmp_seq=3 ttl=64 time=0.145 ms

wget test failed:
root@debian:~# wget http://192.168.100.20/
--2024-04-30 06:39:43-- http://192.168.100.20/
Connecting to 192.168.100.20:80…

suricata logs:
Config: detect: No rules loaded from /dev/null [SigLoadSignatures:detect-engine-loader.c:335]
Info: detect: No signatures supplied. [SigLoadSignatures:detect-engine-loader.c:345]
TELEMETRY: No legacy callbacks, legacy socket not created
Notice: conf: unable to find interface default in DPDK config [ConfSetIfaceNode:conf.c:968]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:996]
Config: dpdk: RTE_ETH_RX_OFFLOAD_IPV4_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:998]
Config: dpdk: RTE_ETH_RX_OFFLOAD_UDP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1000]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1002]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_LRO - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1004]
Config: dpdk: RTE_ETH_RX_OFFLOAD_QINQ_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1006]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1008]
Config: dpdk: RTE_ETH_RX_OFFLOAD_MACSEC_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1010]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_FILTER - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1016]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_EXTEND - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1018]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCATTER - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1020]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TIMESTAMP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1022]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SECURITY - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1024]
Config: dpdk: RTE_ETH_RX_OFFLOAD_KEEP_CRC - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1026]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCTP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1028]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1030]
Config: dpdk: RTE_ETH_RX_OFFLOAD_RSS_HASH - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1032]
Config: dpdk: RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1035]
Config: dpdk: net_memif0: RSS not supported [DeviceInitPortConf:runmode-dpdk.c:1143]
Config: dpdk: net_memif0: checksum validation disabled [DeviceInitPortConf:runmode-dpdk.c:1147]
Config: dpdk: net_memif0: setting MTU to 1500 [DeviceConfigure:runmode-dpdk.c:1478]
Warning: dpdk: net_memif0: changing MTU on port 0 is not supported, ignoring the setting [DeviceConfigure:runmode-dpdk.c:1481]
Config: dpdk: net_memif0: creating packet mbuf pool mempool_net_memif0 of size 65535, cache size 257, mbuf size 2176 [DeviceConfigureQueues:runmode-dpdk.c:1182]
Config: dpdk: net_memif0: rx queue setup: queue:0 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: net_memif0: tx queue setup: queue:0 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Notice: conf: unable to find interface default in DPDK config [ConfSetIfaceNode:conf.c:968]
Info: dpdk: net_memif0: DPDK IPS mode activated: net_memif0->net_memif1 [DeviceConfigureIPS:runmode-dpdk.c:1316]
Info: runmodes: net_memif0: creating 1 thread [RunModeSetLiveCaptureWorkersForDevice:util-runmodes.c:254]
Perf: threads: Setting prio 0 for thread “W#01-net_memif0” to cpu/core 6, thread id 2331 [TmThreadSetupOptions:tm-threads.c:873]
Notice: dpdk: net_memif0: unable to determine NIC’s NUMA node, degraded performance can be expected [ReceiveDPDKThreadInit:source-dpdk.c:560]
Notice: conf: unable to find interface default in DPDK config [ConfSetIfaceNode:conf.c:968]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:996]
Config: dpdk: RTE_ETH_RX_OFFLOAD_IPV4_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:998]
Config: dpdk: RTE_ETH_RX_OFFLOAD_UDP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1000]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1002]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_LRO - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1004]
Config: dpdk: RTE_ETH_RX_OFFLOAD_QINQ_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1006]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1008]
Config: dpdk: RTE_ETH_RX_OFFLOAD_MACSEC_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1010]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_FILTER - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1016]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_EXTEND - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1018]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCATTER - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1020]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TIMESTAMP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1022]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SECURITY - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1024]
Config: dpdk: RTE_ETH_RX_OFFLOAD_KEEP_CRC - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1026]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCTP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1028]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1030]
Config: dpdk: RTE_ETH_RX_OFFLOAD_RSS_HASH - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1032]
Config: dpdk: RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1035]
Config: dpdk: net_memif1: RSS not supported [DeviceInitPortConf:runmode-dpdk.c:1143]
Config: dpdk: net_memif1: checksum validation disabled [DeviceInitPortConf:runmode-dpdk.c:1147]
Config: dpdk: net_memif1: setting MTU to 1500 [DeviceConfigure:runmode-dpdk.c:1478]
Warning: dpdk: net_memif1: changing MTU on port 1 is not supported, ignoring the setting [DeviceConfigure:runmode-dpdk.c:1481]
Config: dpdk: net_memif1: creating packet mbuf pool mempool_net_memif1 of size 65535, cache size 257, mbuf size 2176 [DeviceConfigureQueues:runmode-dpdk.c:1182]
Config: dpdk: net_memif1: rx queue setup: queue:0 port:1 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: net_memif1: tx queue setup: queue:0 port:1 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Notice: conf: unable to find interface default in DPDK config [ConfSetIfaceNode:conf.c:968]
Info: dpdk: net_memif1: DPDK IPS mode activated: net_memif1->net_memif0 [DeviceConfigureIPS:runmode-dpdk.c:1316]
Info: runmodes: net_memif1: creating 1 thread [RunModeSetLiveCaptureWorkersForDevice:util-runmodes.c:254]
Perf: threads: Setting prio 0 for thread “W#01-net_memif1” to cpu/core 7, thread id 2332 [TmThreadSetupOptions:tm-threads.c:873]
Notice: dpdk: net_memif1: unable to determine NIC’s NUMA node, degraded performance can be expected [ReceiveDPDKThreadInit:source-dpdk.c:560]
Config: flow-manager: using 1 flow manager threads [FlowManagerThreadSpawn:flow-manager.c:948]
Perf: threads: Setting prio 0 for thread “FM#01”, thread id 2333 [TmThreadSetupOptions:tm-threads.c:879]
Config: flow-manager: using 1 flow recycler threads [FlowRecyclerThreadSpawn:flow-manager.c:1154]
Perf: threads: Setting prio 0 for thread “FR#01”, thread id 2334 [TmThreadSetupOptions:tm-threads.c:879]
Perf: threads: Setting prio 0 for thread “CW”, thread id 2335 [TmThreadSetupOptions:tm-threads.c:879]
Perf: threads: Setting prio 0 for thread “CS”, thread id 2336 [TmThreadSetupOptions:tm-threads.c:879]
Info: unix-manager: unix socket ‘/var/run/suricata/suricata-command.socket’ [UnixNew:unix-manager.c:136]
Perf: threads: Setting prio 0 for thread “US”, thread id 2337 [TmThreadSetupOptions:tm-threads.c:879]
Notice: threads: Threads created → W: 2 FM: 1 FR: 1 Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1890]
^CNotice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2815]
Info: suricata: time elapsed 156.371s [SCPrintElapsedTime:suricata.c:1168]
Perf: flow-manager: 26 flows processed [FlowRecycler:flow-manager.c:1123]
Perf: dpdk: Port 0 (net_memif0) - rx_good_packets: 55 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - tx_good_packets: 19 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - rx_good_bytes: 4298 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - tx_good_bytes: 1634 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - rx_q0_packets: 55 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - rx_q0_bytes: 4298 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - tx_q0_packets: 19 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: Port 0 (net_memif0) - tx_q0_bytes: 1634 [PrintDPDKPortXstats:source-dpdk.c:612]
Perf: dpdk: net_memif0: total RX stats: packets 55 bytes: 4298 missed: 0 errors: 0 nombufs: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:639]
Perf: dpdk: net_memif0: total TX stats: packets 19 bytes: 1634 errors: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:644]
Perf: dpdk: (W#01-net_memif0) received packets 55 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Perf: dpdk: net_memif1: total RX stats: packets 0 bytes: 0 missed: 0 errors: 0 nombufs: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:639]
Perf: dpdk: net_memif1: total TX stats: packets 0 bytes: 0 errors: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:644]
Perf: dpdk: (W#01-net_memif1) received packets 79 [ReceiveDPDKThreadExitStats:source-dpdk.c:649]
Info: counters: Alerts: 0 [StatsLogSummary:counters.c:878]
Perf: ippair: ippair memory usage: 414144 bytes, maximum: 16777216 [IPPairPrintStats:ippair.c:296]
Perf: host: host memory usage: 398144 bytes, maximum: 33554432 [HostPrintStats:host.c:299]
Perf: dpdk: net_memif0: closing device [DPDKCloseDevice:util-dpdk.c:51]
Perf: dpdk: net_memif1: closing device [DPDKCloseDevice:util-dpdk.c:51]
Notice: device: net_memif0: packets: 55, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:325]
Notice: device: net_memif1: packets: 0, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:325]

Thanks for the report, Andy.
I’d say it is likely a bug in VPP, possibly in DPDK.

Are you not hitting the IPS exception policy?

From the stats log I found the following, but I still don't know how to figure out the reason:
ips.accepted                     | Total | 940
ips.blocked                      | Total | 154
ips.drop_reason.flow_drop        | Total | 122
ips.drop_reason.stream_midstream | Total | 32

Resolved after changing exception-policy: auto to exception-policy: ignore.
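
For anyone hitting the same issue, the change is a one-line edit in suricata.yaml (a sketch; the exception-policy key sits at the top level of the config):
# suricata.yaml
# was: exception-policy: auto
exception-policy: ignore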

Great to hear that! 🙂

Thanks a lot, Lukas.