I am trying to deploy Suricata in IDS mode in an independent Docker container. This container has a dedicated interface for IDS (let's say eth1). Basically, all the relevant traffic that needs to be inspected by the IDS service is replicated and pushed to the Suricata container via the eth1 interface. Since Suricata is already getting copies of the original packets, it can actually just consume these packets. But as I understand from the documentation, Suricata works on a "copy" of the packet received on the input interface.
Is there a way I can achieve a "sinkhole" kind of functionality, where Suricata just consumes all the packets received on the input interface instead of making copies of them?
In IPS mode, I see that there is AF_PACKET IPS mode where packets on the input interface get copied to the output interface. Is it possible to have something like a NULL output interface where the packets will get discarded post Suricata inspection?
Thanks & Regards,
I’m not quite sure I understand your question.
You want to inspect packets from one interface and then “sinkhole” the packets?
Sounds like you want IDS mode, because the container is not forwarding the packets; it's not used inline on a connection, standing between hosts?
AF_PACKET in IDS mode will copy the packets (or information from the packets) from the socket (or shared kernel memory space) and into Suricata user space. This is required for most inspection use cases because you want to aggregate information from multiple packets in order to have a concept about packet streams (think TCP connections) and protocols (think HTTP).
The packets that are read by Suricata are not kept unless you configure Suricata to store them through pcap logging, so they are “sinkholed”/thrown away after their relevant information has been extracted.
Are you worried about wasting resources?
Thank you for your response.
Yes, I want IDS mode and yes, I am trying to avoid wasting resources. As you said “AF_PACKET in IDS mode will copy the packets from the socket into Suricata user space”. I am trying to see if this copying can be avoided and Suricata can instead just suck in the original packets as is. As I mentioned earlier, these original packets are anyway going to get blackholed/discarded in the container since these are just replicas that were sent for IDS analysis.
Hmm. I’m not on the Suricata team and not too familiar with the code.
Looking at https://github.com/OISF/suricata/blob/master/src/source-af-packet.c#L1069 and surrounding functions it seems like you can get Suricata to populate a packet struct pointer without actually copying the data. Some fields will probably actually get copied, but I don’t see how you can track packet fields over time without keeping said data.
If you use Suricata in IDS mode, Suricata will just discard the packets after it's done with them. In the default case it will use zero copy in AF_PACKET.
I referred to the steps mentioned here:
I tried out the same on version 5.0.4, but it didn’t work for me. I am assuming here that zero copy means that the original packet arriving on the input-interface is consumed by Suricata. I don’t see that happening. However I would like to highlight that I tried this on a tun/tap interface - not sure if that is supported with this configuration.
# ethtool -i host-tap4
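For anyone trying to reproduce the tun/tap attempt, a tap interface like the `host-tap4` mentioned above can be created with something along these lines (a sketch, not from the original post; the commands need root, and the suricata invocation mirrors the one shown later in this thread):

```shell
# create a persistent tap device and bring it up
ip tuntap add dev host-tap4 mode tap
ip link set host-tap4 up

# point Suricata at it in IDS mode
suricata -c ./suricata.yaml -i host-tap4 -D --init-errors-fatal -vvv
```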
How would you observe that “the original packet arriving on the input-interface is consumed by Suricata” and how are you seeing this isn’t happening?
The packet rx'ed on Suricata's input interface is an ICMP echo request whose destination address is the same as the one configured on that interface. In the absence of the Suricata daemon, this ICMP echo request enters the Linux stack and an echo reply is generated and sent back on the same interface. If I run Suricata on this same interface, my expectation was that Suricata would consume the echo request before it enters the Linux network stack, and hence I wouldn't see the ICMP echo reply. But I don't see that happening. The ICMP echo reply goes out of this interface.
Also observed this same behavior with a veth-pair. Some details for the same below:
- interface: eth1
##Suricata run command
suricata -c ./suricata.yaml -i eth1 -D --init-errors-fatal -vvv
##Suricata log snippet
19/10/2020 -- 06:43:35 - - This is Suricata version 5.0.4 RELEASE running in SYSTEM mode
19/10/2020 -- 06:43:35 - - CPUs/cores online: 8
19/10/2020 -- 06:43:39 - - Enabling locked memory for mmap on iface eth1
19/10/2020 -- 06:43:39 - - Enabling tpacket v3 capture on iface eth1
19/10/2020 -- 06:43:39 - - Using flow cluster mode for AF_PACKET (iface eth1)
19/10/2020 -- 06:43:39 - - Using defrag kernel functionality for AF_PACKET (iface eth1)
19/10/2020 -- 06:43:40 - - 8 cores, so using 8 threads
19/10/2020 -- 06:43:40 - - Using 8 AF_PACKET threads for interface eth1
19/10/2020 -- 06:43:40 - - eth1: enabling zero copy mode by using data release call
19/10/2020 -- 06:43:40 - - Going to use 8 thread(s)
##Decoder shows two ICMP packets - one for the echo request and the other likely for the echo reply
Date: 10/19/2020 -- 06:51:00 (uptime: 0d, 00h 07m 25s)
Counter | TM Name | Value
capture.kernel_packets | Total | 6
decoder.pkts | Total | 6
decoder.bytes | Total | 364
decoder.ipv4 | Total | 2
decoder.ethernet | Total | 6
decoder.icmpv4 | Total | 2
decoder.avg_pkt_size | Total | 60
decoder.max_pkt_size | Total | 98
flow.icmpv4 | Total | 1
flow.spare | Total | 10000
flow_mgr.rows_checked | Total | 65536
flow_mgr.rows_skipped | Total | 65536
tcp.memuse | Total | 4587520
tcp.reassembly_memuse | Total | 786432
flow.memuse | Total | 7474632
[suricata]# ethtool -i eth1
##Packet sender container gets the ICMP echo reply back
root@ab0c39ca5df9:/data# ping 172.25.0.2 -c1
PING 172.25.0.2 (172.25.0.2) 56(84) bytes of data.
64 bytes from 172.25.0.2: icmp_seq=1 ttl=64 time=0.052 ms
--- 172.25.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
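For completeness, the veth-pair setup described above can be reproduced with something like the following (a sketch using network namespaces rather than full containers; the `sender` namespace name and the 172.25.0.3 address are assumptions for illustration, while eth1 and 172.25.0.2 come from the output above; requires root):

```shell
# namespace standing in for the packet-sender container
ip netns add sender

# veth pair: veth0 goes to the sender, eth1 stays where Suricata listens
ip link add veth0 type veth peer name eth1
ip link set veth0 netns sender

# Suricata side: address matching the ping target in the thread
ip addr add 172.25.0.2/24 dev eth1
ip link set eth1 up

# sender side
ip netns exec sender ip addr add 172.25.0.3/24 dev veth0
ip netns exec sender ip link set veth0 up

# with Suricata running in IDS mode on eth1, the reply still comes back,
# because the kernel stack also sees (and answers) the echo request
ip netns exec sender ping 172.25.0.2 -c1
```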
When running in IDS mode, Suricata will not take a packet away from the OS. It tries to be as passive and non-intrusive as possible. So the behavior you're describing looks correct. There is an exception with the netmap capture method; there it's a property of netmap that the packet is "consumed".
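Since AF_PACKET IDS mode is passive, the usual way to get the effect described in the original question is to make sure the kernel itself never answers on the sniffing interface. A sketch of that host-side configuration (interface name taken from this thread; needs root, and details vary by distribution):

```shell
# a sniffing interface should carry no IP address; without one, the kernel
# has no address to answer ICMP echo requests on
ip addr flush dev eth1

# keep the interface up and in promiscuous mode so Suricata still sees
# all of the mirrored traffic
ip link set eth1 up promisc on

# optionally silence IPv6 autoconfiguration and ARP on the tap interface
sysctl -w net.ipv6.conf.eth1.disable_ipv6=1
ip link set eth1 arp off
```

With this in place the mirrored packets are inspected by Suricata and then simply dropped by the stack, which is effectively the "sinkhole" behavior asked about.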
Is this behavior in any way related to the af-packet version (v3 vs v4)? I had the tpacket-v3 flag enabled under the af-packet section. Not sure if there is a way to try v4.
When V4 executes like this, we say that it executes in “copy-mode”. Each packet is sent to the Linux stack and a copy of it is sent to user space, so V4 behaves in the same way as V2 and V3.
However, when the new PACKET_ZEROCOPY setsockopt is called, V4 starts to operate in true zero-copy mode. In this mode, the networking HW (or SW driver if it is a virtual driver like veth) DMAs/puts packets straight into the packet buffer that is shared between user space and kernel space.
We’re also suggesting adding a new XDP action, XDP_PASS_TO_KERNEL, to pass copies to the kernel stack instead of the V4 user space queue in zero-copy mode.
Yes, I will also evaluate netmap later.
Suricata's AF_PACKET implementation uses tpacket v2 (default) or v3 (if you set the tpacket-v3 flag in the af-packet section of suricata.yaml).
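For reference, the knobs discussed in this thread live in the af-packet section of suricata.yaml; roughly like this (a sketch matching the startup log shown earlier, with the cluster-id chosen arbitrarily; exact defaults depend on your Suricata version):

```yaml
af-packet:
  - interface: eth1
    # mmap'ed ring shared between kernel and Suricata (zero copy)
    use-mmap: yes
    # use the tpacket v3 ring layout instead of the default v2
    tpacket-v3: yes
    cluster-id: 99
    # flow-based load balancing across capture threads
    cluster-type: cluster_flow
    # let the kernel reassemble IP fragments before capture
    defrag: yes
```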