Elephant flow bypass with eBPF/XDP

I’m running Suricata version 8.0.0-dev, compiled with eBPF/XDP support, in IDS mode on Rocky Linux 9.5. The network interface card (NIC) is an Intel 82599ES using the ixgbe module, which supports eBPF/XDP.
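(For what it's worth, the build does report eBPF/XDP support; I checked with the command below, though the exact wording of the --build-info output may vary between builds.)

[root@yy-xxx-01 ~]# suricata --build-info | grep -iE 'ebpf|xdp'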

I’m encountering an issue where elephant flows, specifically SMB (TCP/445) and an in-house application (TCP/902), cause the worker thread (and its CPU core) handling these flows to hit 100% utilization almost immediately, which results in massive capture.kernel_drops for those flows and for every other flow pinned to that thread.
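To confirm it really is a single worker, I have been watching the per-interface capture counters, e.g. (the stats.log path and counter names are from my setup, adjust as needed):

[root@yy-xxx-01 ~]# suricatasc -c "iface-stat enp130s0f1"
[root@yy-xxx-01 ~]# grep -E 'capture\.kernel_(packets|drops)' /var/log/suricata/stats.log | tail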

As I understand it from the docs and the forum, the config below should bypass flows larger than 1 MiB (the stream reassembly depth) inside the NIC driver via eBPF/XDP, before the packets reach the kernel network stack, but that does not seem to be happening.
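If I understand the counters correctly, bypass activity should show up in the flow_bypassed.* family in the stats output (I’m not 100% sure those are the right counter names for capture/XDP bypass, so please correct me if not); this is what I’m grepping to see whether any bypass is happening at all:

[root@yy-xxx-01 ~]# grep flow_bypassed /var/log/suricata/stats.log | tail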

Any insights or guidance on where to investigate further would be greatly appreciated.

af-packet:
  - interface: enp130s0f1
    threads: 16
    cluster-id: 97
    cluster-type: cluster_qm
    defrag: yes
    use-mmap: yes
    tpacket-v3: yes
    ring-size: 400000
    block-size: 1048576
    bypass: yes
    xdp-mode: driver
    xdp-filter-file: /etc/suricata/ebpf/xdp_filter.bpf
stream:
  memcap: 2 GiB
  checksum-validation: yes     
  inline: auto              
  bypass: yes
  reassembly:
    urgent:
      policy: oob           
      oob-limit-policy: drop
    memcap: 4 GiB
    depth: 1 MiB            
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
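I also dumped the running config to make sure the bypass and stream depth settings above are what Suricata actually loaded (the key names below are just how --dump-config prints them on my box):

[root@yy-xxx-01 ~]# suricata --dump-config | grep -iE 'bypass|xdp-filter|reassembly.depth'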
[root@yy-xxx-01 ~]# xdp-loader status
CURRENT XDP PROGRAM STATUS:

Interface        Prio  Program name      Mode     ID   Tag               Chain actions
--------------------------------------------------------------------------------------
lo                     <No XDP program loaded!>
eno1                   <No XDP program loaded!>
enp130s0f1             xdp_hashfilter    native   71   baa10127dca180f2
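In case it helps, I can also inspect the loaded program and its maps directly with bpftool; I believe the bypass tables in Suricata's xdp_filter.c are named flow_table_v4 and flow_table_v6, but I may have the map names wrong:

[root@yy-xxx-01 ~]# bpftool prog show id 71
[root@yy-xxx-01 ~]# bpftool map show
[root@yy-xxx-01 ~]# bpftool map dump name flow_table_v4 | head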