Hi everyone,
I’ve been testing Suricata with DPDK against AF_PACKET to compare their performance on my hardware under high traffic volumes. Due to my system’s specific hardware configuration, I’ve observed that AF_PACKET performs better than DPDK when handling high traffic rates.
However, I’ve encountered an unexpected and intriguing result during these tests. As the traffic rate increases:
- The number of dropped packets for both DPDK and AF_PACKET increases, as expected.
- Surprisingly, despite DPDK dropping significantly more packets than AF_PACKET, the number of alerts detected by DPDK is consistently higher at higher traffic rates.
This behavior seems counterintuitive. If DPDK is dropping more packets, why is it detecting more alerts compared to AF_PACKET?
One hypothesis I considered is that DPDK might use a specific packet discard mechanism:
- By saturation: it discards incoming packets once the receive buffer is full, but keeps processing the packets already queued for analysis (a toy illustration of this is sketched below).
- By heuristic: it might prioritize or discard packets based on their structure, type, or other criteria.
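To make the saturation idea concrete, here is a minimal toy model of tail-drop at a fixed-size ring: once the ring is full, new arrivals are discarded, while the worker keeps inspecting whatever is already queued. All numbers and names here are made up for illustration; this is not a model of Suricata’s or DPDK’s actual internals.

```python
# Toy tail-drop model: arrivals that find the ring full are discarded,
# but packets already queued are still fully processed.
from collections import deque

RING_SIZE = 1024            # hypothetical ring/mempool capacity
SERVICE_PER_TICK = 500      # packets the worker can inspect per tick

def simulate(offered_per_tick: int, ticks: int = 1000):
    ring = deque()
    processed = dropped = 0
    for _ in range(ticks):
        # Arrivals: enqueue until the ring is full, then tail-drop the rest.
        for _ in range(offered_per_tick):
            if len(ring) < RING_SIZE:
                ring.append(1)
            else:
                dropped += 1
        # Service: the worker keeps inspecting whatever is already queued.
        for _ in range(min(SERVICE_PER_TICK, len(ring))):
            ring.popleft()
            processed += 1
    return processed, dropped

for rate in (400, 600, 1000, 2000):
    p, d = simulate(rate)
    print(f"offered={rate}/tick  processed={p}  dropped={d}")
```

In this toy model the number of inspected packets plateaus at the worker’s capacity once the ring saturates, while drops keep growing with the offered rate, so a higher drop count does not by itself mean fewer packets reached detection.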
Below, I’ve attached two graphs for reference (a sketch for tallying these counts from eve.json follows the list):
- Dropped Packets: the number of packets dropped by Suricata increases with the traffic rate for both DPDK and AF_PACKET.
- Severity 1 Alerts: the number of severity 1 alerts detected by Suricata with DPDK is higher than with AF_PACKET as the traffic rate increases, even though DPDK drops more packets overall.
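If the graph data comes from eve.json, a quick way to tally severity 1 alerts and the capture drop counters on the same basis is sketched below. The log path is an assumption, and the exact counter names under stats.capture differ between the af-packet and dpdk runmodes, so adjust both to your setup.

```python
import json

# Path to the eve.json produced by one test run; adjust to your log directory.
EVE_PATH = "/var/log/suricata/eve.json"

sev1_alerts = 0
last_capture_stats = {}

with open(EVE_PATH) as fh:
    for line in fh:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partially written lines
        if event.get("event_type") == "alert":
            if event.get("alert", {}).get("severity") == 1:
                sev1_alerts += 1
        elif event.get("event_type") == "stats":
            # Stats records are cumulative; keep the last one of the run.
            last_capture_stats = event.get("stats", {}).get("capture", {})

print("severity 1 alerts:", sev1_alerts)
# Print whatever capture counters the run exposed (e.g. kernel_drops for
# af-packet); the names differ per capture method.
for name, value in last_capture_stats.items():
    print(f"capture.{name}: {value}")
```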
I’d really appreciate any insights into why this might be happening. Does DPDK have different packet handling or prioritization logic compared to AF_PACKET that could explain these results? Is there any internal mechanism in Suricata or DPDK that could be contributing to this?
Additional Notes:
- The traffic content is identical across all tests and executions.
- The same Suricata ruleset is loaded for all trials, managed using suricata-update (a quick way to verify this is sketched after this list). I can share the rule sources if needed.
- Suricata version: 7.0.7 (built from source)
- OS: Ubuntu 22.04
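As a sanity check that every trial really loaded the same ruleset, hashing the merged rules file written by suricata-update is enough; the path below is its default output location and may differ on your system.

```python
import hashlib

# Default merged ruleset written by suricata-update; adjust if you use a
# custom output path.
RULES_PATH = "/var/lib/suricata/rules/suricata.rules"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(RULES_PATH, sha256_of(RULES_PATH))
```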
Thank you in advance for your help!