Can Suricata reassemble a flow from different network interfaces?

Hello,

In a setup where traffic is mirrored from two 10G network ports configured in LACP mode to two 10G network cards on a Suricata sensor, different packets of the same flow can end up on different interfaces on the sensor side. How can I configure Suricata to reassemble such a flow correctly?

I'd appreciate any help!


Hello,

This can be solved by bonding the network interfaces, though it costs a bit of CPU time. I would try it out and see how it performs on your hardware. More information can be found here: https://wiki.debian.org/Bonding
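On the Suricata side, once the bond device exists, the capture would simply point at it instead of the two physical ports. A minimal af-packet sketch (the interface name bond1, the cluster-id and the thread count are assumptions for your setup):

af-packet:
  - interface: bond1
    threads: auto
    cluster-id: 99
    cluster-type: cluster_flow   # hash packets per flow so one worker thread sees the whole flow
    defrag: yes

With cluster_flow, packets are load-balanced per flow across the worker threads, so stream reassembly sees both directions of a session regardless of which physical port the switch mirrored them to.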


Thanks StianB, I’ll give it a try and update this thread once I have news.

The bonding is done, but all I can see with tcpdump on the bond interface is STP packets, whereas the mirrored traffic is clearly visible with tcpdump on the slave interfaces.

# cat /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=static
MTU=9000
ONBOOT=yes
USERCTL=no
BONDING_OPTS="mode=4 miimon=100"

# cat /etc/sysconfig/network-scripts/ifcfg-enp3s0f0
DEVICE=enp3s0f0
BOOTPROTO=none
USERCTL=no
ONBOOT=yes
SLAVE=yes
MASTER=bond1

# cat /etc/sysconfig/network-scripts/ifcfg-enp3s0f1
DEVICE=enp3s0f1
BOOTPROTO=none
USERCTL=no
ONBOOT=yes
SLAVE=yes
MASTER=bond1

# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: ac:1f:6b:a4:4f:90
Active Aggregator Info:
        Aggregator ID: 5
        Number of ports: 2
        Actor Key: 15
        Partner Key: 7
        Partner Mac Address: fe:e1:ba:d0:92:1a

Slave Interface: enp3s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: ac:1f:6b:a4:4f:90
Slave queue ID: 0
Aggregator ID: 5
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: ac:1f:6b:a4:4f:90
    port key: 15
    port priority: 255
    port number: 1
    port state: 13
details partner lacp pdu:
    system priority: 32768
    system mac address: fe:e1:ba:d0:92:1a
    oper key: 7
    port priority: 32768
    port number: 3
    port state: 53

Slave Interface: enp3s0f1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: ac:1f:6b:a4:4f:91
Slave queue ID: 0
Aggregator ID: 5
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: ac:1f:6b:a4:4f:90
    port key: 15
    port priority: 255
    port number: 2
    port state: 13
details partner lacp pdu:
    system priority: 32768
    system mac address: fe:e1:ba:d0:92:1a
    oper key: 7
    port priority: 32768
    port number: 4
    port state: 53
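For completeness, comparing the RX counters on the bond and on its slaves shows where the mirrored frames stop (standard iproute2 commands, interface names as in the configs above):

# RX/TX statistics for the bond and for each slave
ip -s link show bond1
ip -s link show enp3s0f0
ip -s link show enp3s0f1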

Sorry, I probably wasn’t clear enough in describing the setup, so:

  • I have 2 ports in LACP mode on the switch; production traffic passes through them to another network device.
  • I have another 2 ports on the same switch, which are configured to mirror the 2 ports above to the IDS server.

After configuring LACP both on the server side and on the switch side, it worked out.
So, instead of 1-to-1 physical interface mirroring on the switch, both production physical interfaces are now mirrored to a single Eth-Trunk (which in turn consists of the two physical interfaces connected to the server).
On the server side “tcpdump -i bond1” now shows the mirrored traffic.
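With the mirrored traffic visible on bond1, Suricata can simply capture on that device, for example (config path assumed):

# Start Suricata with AF_PACKET capture on the bond device
suricata -c /etc/suricata/suricata.yaml --af-packet=bond1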