Linux bridge and af-packet won't drop on rule

Version: 1:7.0.7
OS: Debian 12.7

I have created a bridge using the ip package in /etc/network/interfaces that links two interfaces, enp1s0f0 and enp1s0f1, into br0.
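For reference, the bridge stanza looks roughly like this (a sketch of the usual ifupdown/bridge-utils syntax; my exact file may differ slightly):

auto br0
iface br0 inet manual
    bridge_ports enp1s0f0 enp1s0f1
    bridge_stp off
    bridge_fd 0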

I have set up Suricata with af-packet using the following config:

af-packet:
  - interface: br0
    threads: 8
    cluster-id: 98
    cluster-type: cluster_qm
    defrag: no
    use-mmap: yes
    buffer-size: 262144

I have the following test rule set up (no other conflicting rules):

drop ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2019_07_26;)
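To generate matching traffic, something like this (the usual quick test for this sid) works:

curl http://testmynids.org/uid/index.html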

And I get the following alerts in my eve.json (but it doesn’t block):

 alert: {
         action: "allowed",
         category: "Potentially Bad Traffic",
         gid: 1,
         metadata: {
            created_at: [
               "2010_09_23"
            ],
            updated_at: [
               "2019_07_26"
            ]
         },
         rev: 7,
         rule: "drop ip any any -> any any (msg:\"GPL ATTACK_RESPONSE id check returned root\"; content:\"uid=0|28|root|29|\"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2019_07_26;)",
         severity: 2,
         signature: "GPL ATTACK_RESPONSE id check returned root",
         signature_id: 2100498
      },

It allowed the traffic to go through. I was under the impression that in AF-Packet mode, even without a copy interface configured, it would still be able to block traffic.

I switched to:

af-packet:
  - interface: enp1s0f1
    threads: 8
    cluster-id: 98
    cluster-type: cluster_qm
    defrag: no
    use-mmap: yes
    buffer-size: 262144
    copy-mode: ips
    copy-iface: enp1s0f0
  - interface: enp1s0f0
    threads: 8
    cluster-id: 98
    cluster-type: cluster_qm
    defrag: no
    use-mmap: yes
    buffer-size: 262144
    copy-mode: ips
    copy-iface: enp1s0f1

and it blocks fine, but throughput drops from 6 Gbps (above) to 2 Gbps max.

  alert: {
         action: "blocked",
         category: "Potentially Bad Traffic",
         gid: 1,
         metadata: {
            created_at: [
               "2010_09_23"
            ],
            updated_at: [
               "2019_07_26"
            ]
         },
         rev: 7,
         rule: "drop ip any any -> any any (msg:\"GPL ATTACK_RESPONSE id check returned root\"; content:\"uid=0|28|root|29|\"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2019_07_26;)",
         severity: 2,
         signature: "GPL ATTACK_RESPONSE id check returned root",
         signature_id: 2100498
      },

Hoping there's just a small af-packet configuration issue that, once fixed, will let it run in IPS mode against a bridged interface.

Thanks,
Jake

It's important to note that AF_PACKET IPS works by having Suricata bridge the packets itself; it does not work with the Linux bridge, so be sure to remove that.

First make sure you are not mixing the two. When doing AF_PACKET IPS I suggest configuring the interfaces so that they do not have IP addresses either. Let Suricata bridge them, make sure they are not part of a Linux bridge, and re-check performance without any rules loaded (quick hack: -S /dev/null).
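For example, something like:

suricata --af-packet -c /etc/suricata/suricata.yaml -S /dev/null

(-S loads the given rule file exclusively, so pointing it at /dev/null means no rules are loaded at all.)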

Thanks Jason,

Would it work to use AF-Packet on a single interface, say enp1s0f1 (WAN), and leave the Linux bridge running? I am using this as a transparent inline firewall between my WAN and router.

I will run the bridge test to see how performance is with your suggestions.

That won't work for an IPS; it's fine for pure IDS though.

The way AF_PACKET IPS works is by being the bridge: Suricata reads packets from one interface and chooses whether or not to write (or "bridge") them to the other interface. The Linux bridge works in the kernel at a different layer that Suricata can't control.

Test Results:

I removed any bridge configurations in Debian and restarted the box.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether b4:2e:99:a1:46:c0 brd ff:ff:ff:ff:ff:ff
    altname enp4s0
    inet 192.168.1.114/24 brd 192.168.1.255 scope global dynamic eno1
       valid_lft 85640sec preferred_lft 85640sec
    inet6 fd1b:109f:58b2:f248:b62e:99ff:fea1:xxxx/64 scope global dynamic mngtmpaddr 
       valid_lft 1759sec preferred_lft 1759sec
    inet6 fe80::b62e:99ff:fea1:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
3: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:b7:85:20:5b:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9ab7:85ff:fe20:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
4: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:b7:85:20:5b:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9ab7:85ff:fe20:xxxx/64 scope link 
       valid_lft forever preferred_lft forever

I have not set up any IP addresses on either interface.

I removed the rules file from the suricata.yaml and restarted the service.
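In suricata.yaml that just means the rule-files entries are commented out, roughly like this (a sketch; exact paths depend on the install):

default-rule-path: /etc/suricata/rules
#rule-files:
#  - suricata.rules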

A quick speed test resulted in: Down: 779 Mbps, Up: 2.611 Gbps

Still a far cry from the 6 Gbps I got in bridge mode with IDS only.

htop reports only 2% CPU usage across 8 cores and 629M of 60.7G of RAM used.

Is there a different way to run IPS attached to a Linux bridge?

Slower is expected: even without rules, Suricata is still processing packets, decoding them, tracking flows, and re-assembling streams, none of which the Linux bridge (a pure copy) does. Two things though:

  • make sure cluster-id is unique for each interface; you should be seeing a warning on startup about this
  • unless you’ve done the required extra setup for RSS flow hashing, switch to cluster_flow (see the sketch below)
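Something like this for the af-packet section (a sketch of just the relevant keys):

af-packet:
  - interface: enp1s0f0
    cluster-id: 99
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: enp1s0f1
  - interface: enp1s0f1
    cluster-id: 98
    cluster-type: cluster_flow
    copy-mode: ips
    copy-iface: enp1s0f0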

I switched to FreeBSD with netmap and disabled hw_vlan, checksum offloads, lro, gso, tso, etc. I set up a brand-new instance of Suricata using pkg, configured it to use netmap between ix0 and ix1, installed only the ET/open rules and nothing else, and ran it as a background service.
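The netmap section of suricata.yaml ended up looking roughly like this (a sketch; run with suricata --netmap):

netmap:
  - interface: ix0
    copy-mode: ips
    copy-iface: ix1
  - interface: ix1
    copy-mode: ips
    copy-iface: ix0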

Results are:

Up: 2.621 Gbps, Down: 2.416 Gbps

It's definitely faster than Linux af-packet; unfortunately it's still too big of a hit to run in this environment.

If you think of anything else worth experimenting on, let me know and I will give it a go.

I re-did my box with CachyOS (shout out to them) using the guide found here: https://github.com/pevma/SEPTun/blob/master/SEPTun.rst

I am using AF-Packet in inline IPS mode with 0 rules loaded and I get:

Down: 5.073 Gbps, Up: 4.986 Gbps

Specs of box:

  • AMD 3400G (4 cores / 8 threads, cost < $150)
  • Intel X520
  • 64GB RAM

Only changes to config:

af-packet:
  - interface: default
    threads: auto
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    tpacket-v3: no
    ring-size: 400000
    block-size: 393216
    copy-mode: ips

  - interface: enp1s0f0
    cluster-id: 99
    copy-iface: enp1s0f1

  - interface: enp1s0f1
    cluster-id: 98
    copy-iface: enp1s0f0

threading:
  set-cpu-affinity: yes

I also created this service to run when my system first starts:

[Unit]
Description=Disable NIC Offloads
Before=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K enp1s0f0 tso off
ExecStart=/usr/sbin/ethtool -K enp1s0f0 gso off
ExecStart=/usr/sbin/ethtool -K enp1s0f0 lro off
ExecStart=/usr/sbin/ethtool -K enp1s0f0 gro off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 tso off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 gso off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 lro off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 gro off

ExecStart=/usr/sbin/ethtool -K enp1s0f0 txvlan off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 txvlan off

ExecStart=/usr/sbin/ethtool -K enp1s0f0 rxvlan off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 rxvlan off

ExecStart=/usr/sbin/ethtool -K enp1s0f0 sg off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 sg off

ExecStart=/usr/sbin/ethtool -K enp1s0f0 ntuple off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 ntuple off

ExecStart=/usr/sbin/ethtool -K enp1s0f0 rxhash off
ExecStart=/usr/sbin/ethtool -K enp1s0f1 rxhash off

ExecStart=/usr/sbin/ethtool -L enp1s0f1 combined 1
ExecStart=/usr/sbin/ethtool -L enp1s0f0 combined 1

ExecStart=/usr/sbin/ethtool -G enp1s0f0 rx 512
ExecStart=/usr/sbin/ethtool -G enp1s0f1 rx 512

ExecStart=/usr/sbin/ethtool -A enp1s0f0 rx off tx off
ExecStart=/usr/sbin/ethtool -A enp1s0f1 rx off tx off

ExecStart=/usr/sbin/ethtool -C enp1s0f0 rx-usecs 100
ExecStart=/usr/sbin/ethtool -C enp1s0f1 rx-usecs 100

ExecStart=/sbin/ip link set enp1s0f0 promisc off arp off up
ExecStart=/sbin/ip link set enp1s0f1 promisc off arp off up

[Install]
WantedBy=multi-user.target
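I enable it at boot with (assuming the unit file is saved as /etc/systemd/system/nic-offloads.service; the name is just an example):

systemctl daemon-reload
systemctl enable --now nic-offloads.service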

5G/5G would pass my acceptance criteria. I am now going to add some rules to see what happens once they are enabled.

I ran the same configuration with ET/open rules enabled and received:

Down: 4.99 Gbps, Up: 4.952 Gbps on average. Still close enough to 5 Gbps to be considered acceptable.

Thanks for your help. This was a pass from me.