AF_XDP mode not working as expected with high traffic volume

Hello, I’m doing some performance experiments with Suricata, but AF_XDP is not behaving the way I expect.

I have two 100Gbps NICs connected. On one side I generate 100Gbps of traffic with Pktgen, spread across a significant number of flows so that no single buffer/CPU is saturated. When running with AF_PACKET, all packets are accounted for by Suricata’s capture counters, even if some are reported as dropped.

However, when using AF_XDP, I only get the expected behavior up to around 25Gbps. Beyond that, I can’t shut it down gracefully (CTRL-C), and it returns the following error:

threads: Engine unable to disable detect thread - “W#01-ens3f0np0”

Even though I can’t shut it down gracefully, stats.log continues to report capture.afxdp_packets and capture.kernel_drops. However, as I increase the throughput, their sum falls further and further below what Pktgen reports. Interestingly, at ~20 Gbps, with Suricata working as expected, the percentage of dropped packets was small, less than 1%.

I configured 30 RSS queues with 30 worker threads in the threading settings, and I’ve also set the IRQ affinity. Since I see the desired behavior at lower throughput, this does not appear to be the issue.
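In case it matters, this is roughly how the queues were set up (a sketch: `ens3f0np0` is the interface name from the error above, and the commands are printed rather than executed since they need root and the actual NIC):

```shell
#!/bin/sh
# Interface and queue count from the setup described above.
IFACE=ens3f0np0
QUEUES=30

# Match the number of combined queues to the worker threads, and
# spread the RSS indirection table evenly across all of them.
# (Printed rather than run: both commands need root and the NIC.)
echo "ethtool -L $IFACE combined $QUEUES"
echo "ethtool -X $IFACE equal $QUEUES"
```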

The config for af-xdp is the following:

 threads: 30
 disable-promisc: false
 force-xdp-mode: drv
 force-bind-mode: zero
 mem-unaligned: yes
 enable-busy-poll: yes
 busy-poll-time: 20
 busy-poll-budget: 64
 gro-flush-timeout: 2000000
 napi-defer-hard-irq: 2

I tried playing with busy-poll-budget, gro-flush-timeout and napi-defer-hard-irq, but none of it solved the issue. Maybe there is an optimal configuration, but I could not find it. For max-pending-packets, I tried 35000, 50000 and even larger values, without success.

Finally, I saw the following warnings regarding XDP and eBPF:

libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libxdp: No bpffs found at /sys/fs/bpf
libxdp: Can’t use dispatcher without a working bpffs
libxdp: Falling back to loading single prog without dispatcher
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libxdp: No bpffs found at /sys/fs/bpf

Since everything worked as it should up to 20 Gbps, I do not believe these warnings are the issue, but I’m not sure.
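For completeness, the “No bpffs found” warnings mean libxdp cannot pin its multi-program dispatcher and falls back to loading a single XDP program, which matches the log lines above. Mounting a BPF filesystem at /sys/fs/bpf would silence them; a minimal sketch (the fstab line is my assumption for making it persistent):

```shell
#!/bin/sh
# libxdp wants a BPF filesystem at /sys/fs/bpf to pin its dispatcher;
# without it, it falls back to a single XDP program (as the warnings show).
# To mount it (needs root):
#
#   mount -t bpf bpf /sys/fs/bpf
#
# and to make it persistent, an /etc/fstab entry such as:
#
#   bpf  /sys/fs/bpf  bpf  defaults  0  0
#
# This just reports whether a bpffs is currently mounted there:
if grep -q ' /sys/fs/bpf bpf ' /proc/mounts; then
    echo "bpffs mounted"
else
    echo "bpffs missing"
fi
```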

I expected some packets to be dropped even with AF_XDP, but I hoped it would keep working the way AF_PACKET did, even at 100Gbps. If anyone can shed some light on this, I would be grateful.

Can you share the last full stats.log entry from the failed run?

Since I’m a new user and can’t attach files, I’m only including the first and last reports from stats.log for an experiment at 25 Gbps. I created the table below to summarize the reports from different runs.

For all of these throughputs, Pktgen generated only TCP traffic, and the ruleset was a shortened version of the Emerging Threats ruleset (only the IP, ICMP and TCP rules).

| Throughput (Gbps) | Packets sent | Packets received* | suricata.log | afxdp_packets | kernel_drops |
|---|---|---|---|---|---|
| 1 | 4964832 | 4964832 | packets: 4931569, drops: 0 | 4931569 | 33263 |
| 25 | 123680160 | 86341892 | - | 86211194 | 130698 |
| 50 | 246457440 | 31838315 | - | 31770382 | 67933 |

*Packets received = afxdp_packets + kernel_drops
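To make the gap concrete, here is the fraction of the sent packets that Suricata’s capture counters account for at each rate (numbers copied from the table above):

```python
# Packets sent by Pktgen vs. packets received
# (afxdp_packets + kernel_drops), from the table above.
runs = {
    1:  (4_964_832,   4_964_832),
    25: (123_680_160, 86_341_892),
    50: (246_457_440, 31_838_315),
}
for gbps, (sent, received) in runs.items():
    print(f"{gbps} Gbps: {received / sent:.1%} of sent packets accounted for")
# -> 1 Gbps: 100.0%, 25 Gbps: 69.8%, 50 Gbps: 12.9%
```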

Two results stand out. First, at 1 Gbps suricata.log reports no drops even though there are kernel drops, which feels odd. Second, at 50 Gbps, afxdp_packets reports fewer packets than at 25 Gbps.

Using af-packet in the same 50 Gbps scenario resulted in 20% of packets being dropped, but capture.kernel_packets matched the number of packets reported by Pktgen.

Below are the first and last report of the 25 Gbps in the stats.log file:


Date: 12/13/2025 – 23:29:55 (uptime: 0d, 00h 00m 17s)

Counter | TM Name | Value

capture.afxdp.empty_reads | Total | 12739740
flow.mgr.full_hash_pass | Total | 2
flow.mgr.rows_per_sec | Total | 24248
flow.spare | Total | 10000
memcap.pressure | Total | 37
memcap.pressure_max | Total | 37
defrag.memuse | Total | 33554432
tcp.memuse | Total | 24903680
tcp.reassembly_memuse | Total | 4587520
http.byterange.memuse | Total | 168384
http.byterange.memcap | Total | 104857600
ippair.memuse | Total | 398144
ippair.memcap | Total | 16777216
host.memuse | Total | 382144
host.memcap | Total | 33554432
flow.memuse | Total | 8042304


Date: 12/13/2025 – 23:31:07 (uptime: 0d, 00h 01m 29s)

Counter | TM Name | Value

capture.afxdp_packets | Total | 86211194
capture.kernel_drops | Total | 130698
capture.afxdp.empty_reads | Total | 729040749
decoder.pkts | Total | 86211194
decoder.bytes | Total | 128971946224
decoder.ipv4 | Total | 86211194
decoder.ethernet | Total | 86211194
decoder.tcp | Total | 86211194
decoder.avg_pkt_size | Total | 1496
decoder.max_pkt_size | Total | 1496
flow.total | Total | 133
flow.active | Total | 100
flow.tcp | Total | 133
flow.wrk.spare_sync_avg | Total | 100
flow.wrk.spare_sync | Total | 24
flow.end.state.new | Total | 33
flow.mgr.full_hash_pass | Total | 29
flow.mgr.rows_per_sec | Total | 24248
flow.spare | Total | 10132
flow.mgr.rows_maxlen | Total | 5
flow.mgr.flows_checked | Total | 311
flow.mgr.flows_notimeout | Total | 278
flow.mgr.flows_timeout | Total | 33
flow.mgr.flows_evicted | Total | 33
memcap.pressure | Total | 37
memcap.pressure_max | Total | 37
defrag.memuse | Total | 33554432
flow.recycler.recycled | Total | 33
flow.recycler.queue_max | Total | 12
tcp.memuse | Total | 24903680
tcp.reassembly_memuse | Total | 4587520
http.byterange.memuse | Total | 168384
http.byterange.memcap | Total | 104857600
ippair.memuse | Total | 398144
ippair.memcap | Total | 16777216
host.memuse | Total | 382144
host.memcap | Total | 33554432
flow.memuse | Total | 8042304