Suricata and DPDK: interface(s) shut down after Suricata restart

Hello,

Running DPDK 24.07.0 and Suricata 8 from the main repo (8.0.0-dev, dd71ef0af 2024-11-05) on Red Hat 8 (4.18.0-553.22.1.el8_10.x86_64) for some weeks now. After a restart of Suricata, most of the time one or more 100G span ports go inactive and no link is seen.
Today this worked again to get the span port active:
$ systemctl stop suricata
$ /usr/local/bin/dpdk-devbind.py -b ice 0000:84:00.1
$ /usr/local/bin/dpdk-devbind.py --bind=vfio-pci 0000:84:00.1
$ systemctl start suricata
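
(For reference, the current binding can be checked with the same tool before and after, e.g.:

$ /usr/local/bin/dpdk-devbind.py --status

which lists which devices are bound to vfio-pci and which are still on the kernel ice driver.)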

What to do? Should I go back to DPDK 24.03.0, which had no issues, or do you have some ideas/tips on how to debug this?

Cheers,
Andre

Hello,

We can work on the bug report and then reach out to Intel through the DPDK “users” mailing list.

  1. I guess I would need more info, so how do you restart Suricata? Is it rule reload or stopping it and then starting again?
  2. I guess the spanport information is not crucial here as it is a configuration outside of the current ice port, am I right?
  3. Does it happen right after Suricata startup, or does Suricata usually need to run for a few hours/days before it happens? If you start and stop Suri immediately, can you still lose the link?
  4. Especially if it can be reproduced immediately, can you test it with other DPDK apps as well, e.g. dpdk-testpmd? (A minimal invocation is sketched right after this list.)
  5. I’ve noticed that ice seems to have a delayed startup - the first second or two - Suricata says it started the workers and everything, but the card really only starts to receive packets after those 2 seconds. This can also be observed with the aforementioned testpmd app, which shows something like “Link status changed”. So maybe the startup lag is a bit longer in this version? Did you give it enough time to start before killing it?
  6. So I assume you monitor multiple ice ports and they randomly go inactive. Is that correct, or have you seen one particular port being more prone to this?
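
A minimal check could look like this (PCI address and core list are just examples; --stats-period makes testpmd print the port statistics every second so you can see whether packets arrive at all):

$ sudo dpdk-testpmd -l 0,1 -a 0000:84:00.1 -- --stats-period 1

If the RX counters stay at zero even after the link state change message, the port never came up.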

Thanks for caring about this!
Lukas

Hi Lukas,

  1. Rule reload has no effect; it is indeed a service restart or a stop/start via systemctl
  2. Yes you are right
  3. I need to test that one; it restarts every night due to log rotation and then an interface usually disappears
  4. I need to test that one
  5. You mean Suricata? I should check the logs; sometimes it takes more time because of closing the pcap logging. But nothing changed except the DPDK version and running the Suricata daily.
  6. Last week three ports went down, yesterday and today one (the same port both times)

Thanks!
Andre

I tried to test it with testpmd and it works ok for me.

sudo dpdk-testpmd -a af:00.1 -l 0,1

I tried to run it a dozen times and it worked (but the longest run was about 30 minutes).
But I had to wait about 5 seconds until “Port 0: link state change event” showed up - before that I received 0 packets.

I tried it with Suricata as well and had no problems either - again no long runs, rather starts & stops, but it didn’t occur - tried it with fresh master (278dc24cd).

Maybe your card is acting funky? Maybe something else is intercepting the traffic?
If your testpmd works fine, you could go back to 24.03 and verify that it works 100% all the time.
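
If you want to watch just the link, testpmd’s interactive mode can also be used (a sketch, with your PCI address):

$ sudo dpdk-testpmd -a 0000:84:00.1 -l 0,1 -- -i
testpmd> show port info 0

The output includes a “Link status” line; if it stays “down” across repeated checks, you have reproduced the problem without Suricata.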

It was going well for days (since the post :wink: ), but today the same port was down again after a restart during log rotation. So maybe indeed a funky card, SFP or span config or such. I’ve cut the Suricata rules down by half (70k to under 40k), but to no avail.
Now I’ve modified the logrotate script and replaced the suricata restart with a stop and a start, to see if this has a positive timing effect.
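
Roughly like this in the postrotate part (sketch):

systemctl stop suricata
# (was: systemctl restart suricata)
systemctl start suricata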
Thanks for your time, much appreciated!

What should I be looking for when running this test, Lukas?

]# dpdk-testpmd -a 0000:84:00.1 -l 0,1
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode ‘VA’
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
ice_load_pkg_type(): Active package is: 1.3.36.0, ICE OS Default Package (double VLAN mode)
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=155456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 1)
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
Port 0: B4:96:91:E7:52:99
Checking link statuses…
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) → TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00

io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
Press enter to exit

Port 0: link state change event

Ah OK, when stopping, the stats show up of course:

Waiting for lcores to finish…

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 95289386 RX-dropped: 0 RX-total: 95289386
RX-error: 17
RX-nombufs: 0
TX-packets: 95289369 TX-dropped: 0 TX-total: 95289369
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 95289386 RX-dropped: 0 RX-total: 95289386
TX-packets: 95289369 TX-dropped: 0 TX-total: 95289369
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

3 runs; the last 10-minute run showed zero bytes, so for me it is reproducible.

]# dpdk-testpmd -a 0000:84:00.1 -l 0,1
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode ‘VA’
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
ice_load_pkg_type(): Active package is: 1.3.36.0, ICE OS Default Package (double VLAN mode)
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=155456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 1)
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
Port 0: B4:96:91:E7:52:99
Checking link statuses…
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) → TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00

io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=1024 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
Press enter to exit

Telling cores to stop…
Waiting for lcores to finish…

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0…
Stopping ports…
Done

Shutting down port 0…
Closing ports…
Port 0 is closed
Done

Bye…

A new small 10-minute run after the zero-byte run showed a working link again:
Waiting for lcores to finish…

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 141039456 RX-dropped: 0 RX-total: 141039456
RX-error: 16
RX-nombufs: 0
TX-packets: 141039420 TX-dropped: 0 TX-total: 141039420
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 141039456 RX-dropped: 0 RX-total: 141039456
TX-packets: 141039420 TX-dropped: 0 TX-total: 141039420

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Please advise what to do next. Is this span port flaky? Should I go back to the older DPDK version and run the tests again? Other ideas? Thanks!

Same behaviour with DPDK 24.03.0, so a flaky card/port/SFP/line I guess.
The only clue I see now is in the kernel log:
Nov 11 09:19:19 kernel: ice 0000:84:00.1: IRDMA hardware initialization FAILED init_state=2 status=-12
Nov 11 12:23:50 kernel: ice 0000:84:00.1: IRDMA hardware initialization FAILED init_state=2 status=-12
Nov 11 12:24:15 kernel: ice 0000:84:00.1: IRDMA hardware initialization FAILED init_state=2 status=-12
Nov 11 14:55:56 kernel: ice 0000:84:00.1: IRDMA hardware initialization FAILED init_state=2 status=-12
But that’s from unbinding from DPDK, I guess.
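
(These can be pulled from the kernel log around the restart times with something like:

$ journalctl -k --since "2 days ago" | grep -iE "ice|irdma"

where journalctl -k limits the output to kernel messages.)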

What is your ice driver? Have you recently updated it?
For instance I’ve got this version:

$ modinfo ice
filename:       /lib/modules/4.18.0-477.10.1.el8_8.x86_64/kernel/drivers/net/ethernet/intel/ice/ice.ko.xz
firmware:       intel/ice/ddp/ice.pkg
version:        0.8.2-k
license:        GPL v2
description:    Intel(R) Ethernet Connection E800 Series Linux Driver
author:         Intel Corporation, <linux.nics@intel.com>
rhelversion:    8.8
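
Besides modinfo, when the port is (temporarily) bound back to the kernel ice driver, ethtool can also report the NIC firmware version, which may be worth comparing (the interface name below is just an example):

$ ethtool -i ens1f1

The firmware-version field there is reported by the card itself, independent of the in-kernel driver version.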

Last night again the same problem: startup looks OK, but no data on 84:00.1.

Nov 11 03:35:55 suricata[994583]: Warning: dpdk: 0000:10:00.0: device queue descriptors adjusted (RX: from 32768 to 4096, TX: from 32768 to 4096) [DeviceConfigure:runmode-dpdk.c:1522]
Nov 11 03:35:55 suricata[994583]: Notice: log-pcap: Ring buffer initialized with 99 files. [PcapLogInitRingBuffer:log-pcap.c:987]
Nov 11 03:35:59 suricata[994583]: Warning: dpdk: 0000:84:00.0: device queue descriptors adjusted (RX: from 32768 to 4096, TX: from 32768 to 4096) [DeviceConfigure:runmode-dpdk.c:1522]
Nov 11 03:36:03 suricata[994583]: Warning: dpdk: 0000:10:00.1: device queue descriptors adjusted (RX: from 32768 to 4096, TX: from 32768 to 4096) [DeviceConfigure:runmode-dpdk.c:1522]
Nov 11 03:36:07 suricata[994583]: Warning: dpdk: 0000:0f:00.0: device queue descriptors adjusted (RX: from 32768 to 4096, TX: from 32768 to 4096) [DeviceConfigure:runmode-dpdk.c:1522]
Nov 11 03:36:08 suricata[994583]: Warning: dpdk: 0000:84:00.1: device queue descriptors adjusted (RX: from 32768 to 4096, TX: from 32768 to 4096) [DeviceConfigure:runmode-dpdk.c:1522]
Nov 11 03:36:13 suricata[994583]: Notice: threads: Threads created → W: 102 FM: 5 FR: 5 Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1894]

It is the last interface in the suricata.yaml config file, so the last one to start up.
--> /ethdev/link_status,4
{
  "/ethdev/link_status": {
    "status": "DOWN"
  }
}
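
(For reference, that query goes through DPDK’s in-process telemetry and can be reproduced against the running Suricata with the telemetry client shipped with DPDK, e.g.:

$ /usr/local/bin/dpdk-telemetry.py
--> /ethdev/link_status,4

where port index 4 is presumably 0000:84:00.1, the last configured interface; the script path is an assumption based on where dpdk-devbind.py lives here.)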

On the side of the switch providing the span data, the port is not connected:
Nov 12 02:29:03.776: %LINK-3-UPDOWN: Interface HundredGigE1/1/0/41, changed state to down

The switch logs in UTC, so this matches the Suricata stop time:

Nov 12 03:29:01 systemd[1]: Stopping Suricata Intrusion Detection Service…
Nov 12 03:29:01 suricata[1331550]: Notice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2820]
Nov 12 03:29:22 suricata[1331550]: Notice: device: 0000:10:00.0: packets: 16409584083, drops: 130196087 (0.79%), invalid chksum: 98 [LiveDeviceListClean:util-device.c:331]
Nov 12 03:29:22 suricata[1331550]: Notice: device: 0000:84:00.0: packets: 15175383723, drops: 260275895 (1.72%), invalid chksum: 106 [LiveDeviceListClean:util-device.c:331]
Nov 12 03:29:22 suricata[1331550]: Notice: device: 0000:10:00.1: packets: 0, drops: 0 (0.00%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:331]
Nov 12 03:29:22 suricata[1331550]: Notice: device: 0000:0f:00.0: packets: 11152710338, drops: 32373633 (0.29%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:331]
Nov 12 03:29:22 suricata[1331550]: Notice: device: 0000:84:00.1: packets: 4951591713, drops: 82818 (0.00%), invalid chksum: 2289 [LiveDeviceListClean:util-device.c:331]

Regarding ice:
]# modinfo ice
filename:       /lib/modules/4.18.0-553.27.1.el8_10.x86_64/kernel/drivers/net/ethernet/intel/ice/ice.ko.xz
firmware:       intel/ice/ddp/ice.pkg
version:        0.8.2-k
license:        GPL v2
description:    Intel(R) Ethernet Connection E800 Series Linux Driver
author:         Intel Corporation, <linux.nics@intel.com>
rhelversion:    8.10
srcversion:     48465C2F33B3B14FFD435B8

Cheers

As a workaround I’ve been running a logrotate script which first stops Suricata, changes the DPDK binding back to ice, sleeps some seconds, loads the DPDK driver, sleeps again and then starts Suricata. But even that is not a 100% guarantee it will work, see the attached file.
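
In outline it does something like this (a sketch for a single port; the sleep lengths are placeholders):

systemctl stop suricata
/usr/local/bin/dpdk-devbind.py -b ice 0000:84:00.1
sleep 5
/usr/local/bin/dpdk-devbind.py -b vfio-pci 0000:84:00.1
sleep 5
systemctl start suricata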
Cheers
a.yaml (8.9 KB)

It is a text file, but the txt extension was grayed out while selecting the upload :wink:

Replaced a QSFP today, let’s see this week how things work out.

Hello André, did it help?

Hi Lukas, so far, yes. Seems to be an issue with Solid Optics QSFPs; yesterday we had to replace 2 others (now Cisco) to bring up span ports in another datacenter. Odd stuff.


Well good to hear that it seems fixed!