Suricata IPS Mode Not Dropping Packets in af-packet Inline Configuration – Need Help!

Hello Everyone,

After spending more than a week trying to understand IPS mode in Suricata, experimenting with both the Netfilter (NFQ) and af-packet modes, I decided to configure my Suricata installation in af-packet inline/IPS mode on my Ubuntu virtual private server, which has a single network interface and limited resources (1 vCPU, 3 GB RAM). I finally tested it today and it failed! My test was simply to drop ping requests. Here’s a step-by-step account of what I did.

Before implementing the af-packet inline/IPS mode, I had to find out which Suricata rule gets triggered when pings come in. I pinged my VPS and the following alert was logged to ‘/var/log/suricata/fast.log’:

12/31/2024-12:25:14.937611  [**] [1:2100369:7] GPL ICMP PING BayRS Router [**] [Classification: Misc activity] [Priority: 3] {ICMP} 96.32.XXX.XXX:8 -> 23.94.XX.XXX:0

So the relevant rule has a sid (signature ID) of 2100369. I located it in ‘/var/lib/suricata/rules/suricata.rules’, where it looks like this:

alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"GPL ICMP PING BayRS Router"; itype:8; content:"|01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F|"; depth:32; reference:arachnids,438; reference:arachnids,444; classtype:misc-activity; sid:2100369; rev:7; metadata:created_at 2010_09_23, confidence Medium, signature_severity Informational, updated_at 2019_07_26;)

I added the sid (2100369) to ‘/etc/suricata/disable.conf’ to disable the original rule, then copied the rule to my custom rules file at ‘/var/lib/suricata/rules/custom.rules’ and modified it to look like the following:

drop icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"DROPPED: GPL ICMP PING BayRS Router"; itype:8; content:"|01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F|"; depth:32; threshold: type limit, track by_src, seconds 300, count 1; reference:arachnids,438; reference:arachnids,444; classtype:misc-activity; sid:1000003; rev:7; metadata:created_at 2010_09_23, confidence Medium, signature_severity Informational, updated_at 2019_07_26;)

Basically, I replaced ‘alert’ with ‘drop’, inserted ‘DROPPED: ’ right after ‘msg:"’ for a more accurate description of the expected outcome, replaced the original sid with 1000003 (which is within the 1000000–1999999 range reserved for local use, to avoid conflicts), and added threshold settings, ‘threshold: type limit, track by_src, seconds 300, count 1;’, in the options area to make sure I’m not flooded with alerts.
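As a side note, one way to sanity-check a modified rule like this before going inline is to replay a capture of a ping against just the custom rules file. This is only a sketch: the pcap path and output directory below are hypothetical placeholders.

```shell
# -S loads ONLY the given rule file (ignoring the configured rule set),
# -r reads packets from a pcap instead of a live interface,
# -l writes the resulting logs (fast.log, eve.json) to the given directory.
sudo suricata -S /var/lib/suricata/rules/custom.rules \
              -r /tmp/ping-capture.pcap -l /tmp/suri-test/

# If the rule matched, its message should appear in the test fast.log.
grep "DROPPED" /tmp/suri-test/fast.log
```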

Portions of the relevant settings in my main Suricata configuration file ‘/etc/suricata/suricata.yaml’ look like this:

%YAML 1.1
---

vars:
  address-groups:
    HOME_NET: "[23.94.XX.XXX]"
    EXTERNAL_NET: "!$HOME_NET"

af-packet:
  - interface: eth0
    threads: 1
    copy-mode: ips
    bypass: no
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 98
    tpacket-v3: yes
    ring-size: 1024
    buffer-size: 64535
    use-mmap: yes

default-rule-path: /var/lib/suricata/rules

rule-files:
  - custom.rules
  - suricata.rules

I then tested the configuration file for correctness:

$ sudo suricata -T -c /etc/suricata/suricata.yaml -v
Notice: suricata: This is Suricata version 7.0.8 RELEASE running in SYSTEM mode
Info: cpu: CPUs/cores online: 1
Info: suricata: Running suricata under test mode
Info: suricata: Setting engine mode to IDS mode by default
Info: exception-policy: master exception-policy set to: auto
Info: logopenfile: fast output device (regular) initialized: fast.log
Info: logopenfile: eve-log output device (regular) initialized: eve.json
Info: logopenfile: stats output device (regular) initialized: stats.log
Info: detect: 2 rule files processed. 41420 rules successfully loaded, 0 rules failed, 0
Info: threshold-config: Threshold config parsed: 0 rule(s) found
Info: detect: 41423 signatures processed. 1166 are IP-only rules, 4292 are inspecting packet payload, 35746 inspect application layer, 108 are decoder event only
Notice: suricata: Configuration provided was successfully loaded. Exiting.

Seeing ‘Setting engine mode to IDS mode by default’ in the test output was unexpected, since it was supposed to be IPS.

I then executed the following commands to apply the changes.

sudo systemctl restart suricata
sudo suricata-update

From my local computer, I pinged my VPS and here’s the result:

$ ping -c 4 23.94.XX.XXX
PING 23.94.XX.XXX (23.94.XX.XXX): 56 data bytes
64 bytes from 23.94.XX.XXX: icmp_seq=0 ttl=53 time=11.536 ms
64 bytes from 23.94.XX.XXX: icmp_seq=1 ttl=53 time=16.576 ms
64 bytes from 23.94.XX.XXX: icmp_seq=2 ttl=53 time=15.246 ms
64 bytes from 23.94.XX.XXX: icmp_seq=3 ttl=53 time=81.148 ms
--- 23.94.XX.XXX ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 11.536/31.127/81.148/28.939 ms

As you can see, there is 0% packet loss, indicating the custom drop rule did not work.

The ‘/var/log/suricata/fast.log’ recorded this:

12/31/2024-12:36:16.571952  [wDrop] [**] [1:1000003:7] DROPPED: GPL ICMP PING BayRS Router [**] [Classification: Misc activity] [Priority: 3] {ICMP} 96.32.XXX.XXX:8 -> 23.94.XX.XXX:0

So, as you can see, all 4 of my ping packets were received, even though my Suricata installation, supposedly operating in IPS mode, triggered the corresponding drop rule and logged the pings as if it had dropped them, when it actually did not. Please note the bypass setting under af-packet is set to ‘no’ to ensure packets won’t bypass Suricata even when it’s overwhelmed.

This is really frustrating me. If Suricata won’t drop ping packets, how can I be confident it would drop any other kind of dangerous packet out in the internet wilderness? Please help.

For AF_PACKET IPS you need 2 interfaces; it won’t work with 1. It works by creating a layer 2 (i.e. Ethernet) bridge between the 2 interfaces, copying packets from one to the other and applying the drop rules as needed.
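In suricata.yaml, that peering is expressed with `copy-iface` on each interface. A sketch of the documented two-interface layout (the interface names here are just placeholders):

```yaml
af-packet:
  - interface: eth0          # first leg of the bridge
    copy-mode: ips           # forward packets and honor drop rules
    copy-iface: eth1         # peer interface to copy packets to
    defrag: yes
    use-mmap: yes
  - interface: eth1          # second leg
    copy-mode: ips
    copy-iface: eth0         # copy back the other way
    defrag: yes
    use-mmap: yes
```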

If you are trying to protect the host that Suricata is running on with IPS, you will have to use NFQ. I cover this for RedHat-like systems here: Guide: Getting Started on RHEL, CentOS and rebuild Linux Distributions, but I can’t imagine Ubuntu is much different. Essentially you need to add iptables rules on INPUT and OUTPUT to send the packets to NFQUEUE.
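On Ubuntu with iptables, those rules would look something like this sketch (queue number 0 matches Suricata’s default, i.e. `-q 0`):

```shell
# Send host-bound and host-originated traffic to NFQUEUE 0,
# where Suricata (running with -q 0) verdicts each packet.
sudo iptables -I INPUT -j NFQUEUE --queue-num 0
sudo iptables -I OUTPUT -j NFQUEUE --queue-num 0
```

Note that with rules like these, if Suricata is not running (and `--queue-bypass` is not set), queued traffic will be dropped.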

This is also covered in the user guide here: 15. Setting up IPS/inline for Linux — Suricata 8.0.0-dev documentation

Thanks for the swift reply, Jason!

Although I had seen your official documentation on how to set up af-packet IPS mode before, which explains basically what you just said and gives an example for a setup with two network interfaces, I read somewhere that it’s possible with a single network interface: the claim was that Suricata would tap the packets and decide, based on the rules and configuration, whether to let them proceed or drop them. Besides, the reason I decided to stick with af-packet mode is that it tends to offer lower latency and better throughput than NFQ, especially when the system has a high packet rate. af-packet can also be more resource-efficient (in terms of CPU and memory) on systems with adequate hardware because it interacts directly with the kernel’s packet-capture infrastructure.

With NFQ mode, however, packets are handed off to user space and then back to the kernel, so it can introduce more latency than af-packet. It’s not as fast, particularly when you need to drop packets inline, because of the additional communication between the kernel (via Netfilter) and Suricata (in user space). NFQ tends to use more CPU (remember, I only have 1 vCPU) and introduces higher latency due to the extra layers involved (Netfilter, the queue, Suricata’s user-space processing).

Before actually switching to NFQ mode, I was wondering why I couldn’t use the loopback interface “lo” as the second network interface. Also, what about one of my Docker network interfaces, say “docker0”?

I’m not aware of using AF_PACKET IPS with a single interface. It is implemented by having 2 interfaces that peer with each other, copying packets from one to another.

I suppose with virtual interfaces and Linux namespaces you could come up with some network scenario where you run AF_PACKET IPS between the physical interface (no IP address), and a virtual interface which Linux uses as its primary interface. But I think it would be fragile, and not that useful outside of proof of concept and testing scenarios. Just note that it doesn’t fit the AF_PACKET IPS use-case, a dedicated “bump-in-the-wire” IPS machine.

OK, I’ve now resorted to NFQ IPS mode, but it didn’t work either.

Here are the relevant configurations:

In suricata.yaml

af-packet:
  - interface: eth0
    threads: 1
    bypass: no
    defrag: yes
    cluster-type: cluster_flow
    cluster-id: 98
    tpacket-v3: yes
    ring-size: 2048
    buffer-size: 64535
    use-mmap: yes

nfq:
  mode: accept
  repeat-mark: 1
  repeat-mask: 1
  route-queue: 2

The contents of my ‘/etc/default/suricata’ are:

RUN=yes
RUN_AS_USER=suricata
SURCONF=/etc/suricata/suricata.yaml
LISTENMODE=nfqueue
IFACE=eth0
NFQUEUE="-q 0"
PIDFILE=/var/run/suricata.pid

Since my main firewall is nftables, I ran these two commands:

nft> add chain filter IPS { type filter hook forward priority 10;}
nft> add rule filter IPS queue

As a result, I have the following nftables chain inside my table inet filter:

    chain IPS {
            type filter hook forward priority filter + 10; policy accept;
            queue to 0
    }
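Since my pings terminate on the VPS itself rather than being forwarded through it, I suspect I may also need input/output hooks, in line with the earlier advice about INPUT and OUTPUT for iptables. Something like this sketch, perhaps (the chain names are my own):

```shell
# Queue host-terminated and host-originated traffic to NFQUEUE 0 as well;
# the forward hook only sees routed traffic, not traffic to/from the host.
nft add chain inet filter IPS-IN  '{ type filter hook input priority filter + 10 ; }'
nft add chain inet filter IPS-OUT '{ type filter hook output priority filter + 10 ; }'
nft add rule inet filter IPS-IN queue
nft add rule inet filter IPS-OUT queue
```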

Another question: can I put bypass: no under nfq: in suricata.yaml, or should I use fail-open: no instead?

Just so I don’t leave you silent, this is where I step out for now. I’m not familiar with NFT or the /etc/default/suricata file. Hopefully someone can fill that void here.

One thing you could check is to make sure Suricata is running with “-q”, as in:

ps auxw|grep suricata

and make sure the -q parameter is set.

Thanks. Yes, Suricata is running with the “-q” option, as you can see below on the second line of CGroup. I believe that is made possible by the setting in the ‘/etc/default/suricata’ file.

$ sudo systemctl status suricata
● suricata.service - LSB: Next Generation IDS/IPS
Loaded: loaded (/etc/init.d/suricata; generated)
Drop-In: /etc/systemd/system/suricata.service.d
└─override.conf
Active: active (running) since Tue 2024-12-31 22:27:33 UTC; 8min ago
Docs: man:systemd-sysv-generator(8)
Process: 338712 ExecStart=/etc/init.d/suricata start (code=exited, status=0/SUCCESS)
Tasks: 9 (limit: 3425)
Memory: 435.9M
CPU: 52.874s
CGroup: /system.slice/suricata.service
└─338718 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -q 0 -D -vvv --user=suricata

Dec 31 22:27:33 rvps-1 systemd[1]: suricata.service: Consumed 51.821s CPU time.
Dec 31 22:27:33 rvps-1 systemd[1]: Starting LSB: Next Generation IDS/IPS…
Dec 31 22:27:33 rvps-1 suricata[338712]: Starting suricata in IPS (nfqueue) mode… done.
Dec 31 22:27:33 rvps-1 systemd[1]: Started LSB: Next Generation IDS/IPS.

Hi,

Did you get this up and running already?
For what it’s worth: I am using Suricata 7.0.8 on AlmaLinux with nftables.

This is my configuration:
File: /etc/sysconfig/suricata

OPTIONS="-q 0 --user suricata "

File: /etc/suricata/suricata.yaml

af-packet:
  #- interface: enp1s0
  # Number of receive threads. "auto" uses the number of cores
  # threads: auto
  # Default clusterid. AF_PACKET will load balance packets based on flow.
  # cluster-id: 99
  # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
  # This is only supported for Linux kernel > 3.1
  # possible value are:
  #  * cluster_flow: all packets of a given flow are sent to the same socket
  # ...

Under af-packet I have commented everything out, since I am not using af-packet.

nfq:
#  mode: accept
#  repeat-mark: 1
#  repeat-mask: 1
#  bypass-mark: 1
#  bypass-mask: 1
#  route-queue: 2
#  batchcount: 20
#  fail-open: yes

nftables example:
include "/etc/nftables/variables.conf"

table ip filter {
	set blocked_countries {
		type ipv4_addr
		flags interval
	}

	chain input {
		type filter hook input priority filter; policy drop;
		iifname "lo" accept
		ip saddr @blocked_countries drop
	}

	chain forward {
		type filter hook forward priority filter; policy accept;
		ip saddr @blocked_countries drop
	}

	chain output {
		type filter hook output priority filter; policy accept;
		ip daddr @blocked_countries drop
	}

	chain IPS {
		type filter hook forward priority filter + 10; policy accept;
		queue to 0
	}
}
table ip nat {
	chain prerouting {
		type nat hook prerouting priority dstnat; policy accept;
	}

	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
	}
}

Make sure to add chain IPS like this.
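A quick way to confirm that packets are actually reaching the queue, and that Suricata has bound to it, is to inspect the chain and the kernel’s queue statistics:

```shell
# Show the IPS chain as nftables sees it (hook, priority, queue statement)
sudo nft list chain ip filter IPS

# Per-queue statistics: queue number, peer portid, packets queued/dropped.
# A non-zero "queue total" means packets are flowing through Suricata.
sudo cat /proc/net/netfilter/nfnetlink_queue
```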

I still need to tinker with settings and configuration options since I started using Suricata recently, and I am very happy with it. I do not notice any latency or performance loss in this configuration, although this is on a home network, not a business use case.

I am using a dual Xeon E5 CPU setup, DDR4, SSDs, and 2 NICs (WAN/LAN): one goes directly to the modem (ISP-provided) and the LAN one goes to my main switch, of course.
The above configuration is running under QEMU-KVM.

I did it this way mainly for learning purposes, although I am pretty confident that if a business customer wanted to do it, I would implement it with nfqueue.

Regards,
Steven