I'd like to share production data showing the memory impact of enabling tpacket-v3 when using AF_PACKET capture mode in Suricata.
## Test Environment

- Suricata 7.x
- ~48,000 ET Open rules
- AF_PACKET + `use-mmap`
- Multiple distros, identical configs where possible
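A quick way to take memory samples like the ones in the table below is from the process's resident set size; a minimal sketch, assuming a single `suricata` process:

```bash
# Sample Suricata's resident memory (ps reports RSS in KB)
ps -o rss= -C suricata | awk '{printf "%.0f MB\n", $1/1024}'
```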
## Measured Results (Production)
| Server | OS | Before (v2) | After (v3) | Reduction |
|---|---|---|---|---|
| Alma #1 | AlmaLinux 9.7 | 1,828 MB | 454 MB | 75% |
| Alma #2 | AlmaLinux 9.7 | ~900 MB | 168 MB | 81% |
| Alma #3 | AlmaLinux 9.7 | ~900 MB | 167 MB | 81% |
| Ubuntu | 24.04 | n/a | 358 MB | (already v3) |
Ubuntu 24.04 already defaults to tpacket-v3; Alma/RHEL do not.
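To check which value your own build resolves to, Suricata can print its effective configuration (key names can vary slightly between versions):

```bash
# Inspect the effective af-packet settings for the tpacket-v3 key
suricata --dump-config | grep -i tpacket
```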
## Root Cause

- TPACKET_V2 uses fixed-size ring buffers sized for the worst-case MTU
- Memory scales as: ring-size × block-size × threads
- With large rings (e.g. 100k), this alone can exceed 700–900 MB

TPACKET_V3 uses variable-sized blocks, allocating based on actual traffic, not worst-case assumptions.
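A back-of-the-envelope check shows how the v2 math reaches that range; the per-slot size and thread count below are assumed illustrative values, not measurements from the servers above:

```bash
# 100k-slot ring x ~1600 B fixed slot x 5 capture threads (assumed values)
echo $(( 100000 * 1600 * 5 / 1024 / 1024 ))   # prints 762 (MB), before any other allocations
```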
## The Clue in the Logs
Before the fix:

```text
[Warning] - AF_PACKET tpacket-v3 is recommended for non-inline operation
```

This warning disappears after enabling v3.
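To check whether a given box is affected, search the Suricata log for the hint (the path below assumes a common default location; adjust for your setup):

```bash
# Look for the tpacket-v3 recommendation in the startup log
grep -i 'tpacket' /var/log/suricata/suricata.log
```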
## Fix (30 seconds)

```yaml
af-packet:
  - interface: eth0
    use-mmap: yes
    tpacket-v3: yes
```
Or, as a one-liner (the appended line's indentation must match `use-mmap`, four spaces here):

```bash
sed -i '/use-mmap: yes/a\    tpacket-v3: yes' /etc/suricata/suricata.yaml
systemctl restart suricata
```
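If you'd rather validate before bouncing the service, Suricata's built-in configuration test can be run first (default config path assumed):

```bash
# Dry-run the edited configuration; exits non-zero on errors
suricata -T -c /etc/suricata/suricata.yaml -v
```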
## Impact in Practice

Before:

- Suricata consumed 45% of RAM on 4 GB VPS nodes
- Swap activity during `suricata-update`
- OOM risk under load

After:

- Memory ~10% of system RAM
- No swap pressure
- Faster reloads
- More predictable performance
## Recommendation

Always explicitly set `tpacket-v3: yes`, regardless of distro.

It's a zero-cost optimization with massive memory savings on systems that still default to v2.