Running DPDK Suricata as a Regular User

When I first started working with DPDK Suricata, like most people, I ran everything as root. While this is common practice, running Suricata as a regular user enhances security by restricting permissions to only what’s strictly necessary, following the Principle of Least Privilege (PoLP). In this mini blog post, we go over how to set up Suricata with DPDK capture mode to run as a regular user, as we covered in our May webinar.
This guide runs Suricata directly as a non-root user instead of using Suricata’s built-in privilege drop (the run-as YAML node), as this approach proved more achievable with DPDK. Other capture modes generally drop privileges after Suricata startup, as that is easier to set up.

Creating the Suricata User

First, let’s create a dedicated user for Suricata. I use -r to make it a system account and -s /bin/nologin to disable interactive logins, but use a regular shell instead if you want to be able to log in as this user for interactive sessions (e.g. for debugging).

sudo groupadd -r suricata
sudo useradd -r -M -g suricata -s /bin/nologin suricata

Verification:

id suricata

Setting Up Directories

Suricata needs to write logs and read config files. Adjust the folder permissions with:

for d in /var/run/suricata/ /var/log/suricata/ /etc/suricata/; do
  sudo mkdir -p "$d"
  sudo chown suricata:suricata "$d"
  sudo chmod 0755 "$d"
done

Verification:

ls -ld /var/run/suricata/ /var/log/suricata/ /etc/suricata/

Write access to /etc/suricata/ is optional, but read access is, of course, essential.

Note: Adjust paths according to your installation layout.
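
With the directories in place, the log path can be set in suricata.yaml. A minimal sketch (default-log-dir is a standard key; verify against your Suricata version):

```yaml
# suricata.yaml fragment: write logs to the directory we created above
default-log-dir: /var/log/suricata/
```

The PID file can then be placed under /var/run/suricata/ at startup with --pidfile /var/run/suricata/suricata.pid.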

Linux Capabilities

Linux capabilities provide fine-grained privilege control and eliminate the need for full root access. It is always a good idea to use the minimum capabilities required for Suricata to run. Depending on which network card (and its DPDK Poll Mode Driver) you plan to use, specific capabilities may be required due to differing hardware interaction patterns.

Mellanox/NVIDIA (mlx5)

Bifurcated drivers maintain kernel network stack compatibility while providing DPDK acceleration:

sudo setcap "cap_ipc_lock,cap_net_raw,cap_net_admin=eip cap_sys_nice=ep" /usr/bin/suricata

Intel E810 (ice)

VFIO-based drivers operate in userspace with kernel bypass. While this mode requires fewer capabilities, we later need extra steps to allow the user to interact with the vfio-pci driver:

sudo setcap "cap_ipc_lock,cap_sys_nice=ep" /usr/bin/suricata

Verification:

getcap /usr/bin/suricata 

Capability Descriptions

  • CAP_IPC_LOCK: Enables hugepage memory reservation and prevents memory swapping
  • CAP_SYS_NICE: Allows real-time thread priority configuration for packet processing threads
  • CAP_NET_ADMIN: Permits network interface configuration (required for bifurcated drivers) (mlx5 only)
  • CAP_NET_RAW: Enables raw packet socket access for traffic capture (mlx5 only)

Hugepage Setup

DPDK prefers contiguous memory allocated through hugepages, as standard page sizes (4 KB) create excessive TLB overhead. The instructions below show how to make hugepages accessible to our non-root user:

# Reserve 8GB of hugepages
sudo dpdk-hugepages.py --reserve 8G

# Create a mount point owned by our user
sudo mkdir -p /mnt/surihugepages
sudo chown suricata:suricata /mnt/surihugepages

# Mount with user permissions
sudo dpdk-hugepages.py --mount --directory /mnt/surihugepages \
  --user $(id -u suricata) --group $(id -g suricata)
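
As a quick sanity check on the reservation size: with the common 2 MiB hugepage size, 8 GiB corresponds to 4096 pages, which should match HugePages_Total in /proc/meminfo (adjust the math if your system uses 1 GiB pages):

```shell
# 8 GiB expressed as a count of 2 MiB hugepages
echo $(( 8 * 1024 / 2 ))   # prints 4096
```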

Verification:

dpdk-hugepages.py -s # check that `/mnt/surihugepages` shows up
cat /proc/meminfo | grep -i huge
mount | grep hugepage
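
Suricata’s DPDK setup also needs to be pointed at this mount instead of the default hugepage directory. Assuming the eal-params mapping in suricata.yaml, where each key is passed through as an EAL long option (here --huge-dir), this might look like:

```yaml
# suricata.yaml sketch: make DPDK's EAL use our custom hugepage mount
dpdk:
  eal-params:
    proc-type: primary
    huge-dir: /mnt/surihugepages
```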

VFIO Setup (vfio-pci PMD driver only)

If you’re using a NIC that requires vfio-pci (e.g. Intel NICs), you need to set up VFIO. Users of other vendors (e.g. NVIDIA) can skip this section entirely.

Load the VFIO Driver

sudo modprobe vfio-pci

sudo groupadd -f vfio
sudo usermod -aG vfio suricata
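
Note that modprobe does not persist across reboots. On systemd-based distributions, a modules-load.d entry (file name assumed) loads the driver automatically at boot:

```
# /etc/modules-load.d/vfio-pci.conf
vfio-pci
```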

Note: If you run into a problem where you are unable to devbind your NIC to vfio-pci, the commands below can help. Use the hotfix if you can’t adjust kernel boot parameters; otherwise I recommend the permanent solution. Try to bind the NIC as root first to see if it succeeds.

Hotfix for NICs that can’t bind to vfio-pci:

# If binding fails, temporarily disable IOMMU (unsafe as it reduces security)
echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

Permanent solution - add to kernel boot parameters:

# Add these parameters to /etc/default/grub in GRUB_CMDLINE_LINUX: 
# intel_iommu=on iommu=pt

# Then regenerate grub config and reboot:
sudo update-grub  # or grub2-mkconfig -o /boot/grub2/grub.cfg on some systems
sudo reboot
# After reboot you need to modprobe vfio-pci again but now it should bind to the NIC.

Set Up Device Permissions

Create udev rules so our user can access VFIO devices:

sudo tee /etc/udev/rules.d/99-vfio.rules <<'EOF'
SUBSYSTEM=="misc",   KERNEL=="vfio",  MODE="0660", GROUP="vfio"
SUBSYSTEM=="vfio",   MODE="0660", GROUP="vfio"
EOF

sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=vfio

Bind Your NICs

# Check what you have first
dpdk-devbind.py --status

# Bind your Intel NICs (adjust PCI addresses for your setup)
sudo dpdk-devbind.py --bind=vfio-pci 0000:af:00.0 0000:af:00.1

Running Suricata

Since we created a system user, we can’t directly log in as that user. Instead, we use sudo -u to run Suricata as our dedicated user:

# Test that everything works
sudo -u suricata suricata --dpdk -vvv -S /dev/null

If Suricata starts with no DPDK initialization errors and no permission errors, you have successfully completed the whole guide!
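
For unattended operation, a systemd unit can run Suricata as this user. An illustrative sketch (the unit name, paths, and options are assumptions to adapt to your setup; the capabilities mirror the mlx5 setcap example above):

```ini
# /etc/systemd/system/suricata-dpdk.service (hypothetical unit name)
[Unit]
Description=Suricata IDS in DPDK mode (non-root)
After=network.target

[Service]
User=suricata
Group=suricata
# Grant the same capabilities as the setcap example (mlx5 variant)
AmbientCapabilities=CAP_IPC_LOCK CAP_NET_RAW CAP_NET_ADMIN CAP_SYS_NICE
# Hugepage reservations require unlimited locked memory
LimitMEMLOCK=infinity
ExecStart=/usr/bin/suricata --dpdk -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata/suricata.pid

[Install]
WantedBy=multi-user.target
```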