Does Suricata support cross-packet reassembly with PF_RING cluster_round_robin?

In my production environment, traffic mirroring is done by encapsulating packets in VXLAN and sending them out. Unfortunately, for reasons I cannot change, the agent that collects the mirrored traffic always sends with the same source port, which prevents me from using the cluster_per_flow configuration: since the outer 5-tuple is always the same, Suricata's multithreading cannot achieve load balancing. Running tcpdump confirms that every encapsulated packet shares the same outer 5-tuple.

According to the Suricata manual, Suricata supports another PF_RING mode called cluster_round_robin. As described there: “The cluster_round_robin manner is a way of distributing packets one at a time to each thread (like distributing playing cards to fellow players). The cluster_flow manner is a way of distributing all packets of the same flow to the same thread. The flows itself will be distributed to the threads in a round-robin manner.”
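For reference, the cluster type is selected in the `pfring` section of suricata.yaml; the interface name, thread count, and cluster-id below are placeholders for my setup:

```yaml
pfring:
  - interface: eth0                    # capture interface (placeholder)
    threads: 8                         # worker threads sharing the cluster
    cluster-id: 99                     # arbitrary id, shared by all threads
    # cluster_flow balances on the 5-tuple; with an identical outer
    # VXLAN tuple, every packet hashes to the same thread.
    # cluster_round_robin spreads packets one at a time instead.
    cluster-type: cluster_round_robin
```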

I have a question: in this mode, can Suricata properly parse a long-lived HTTP connection? Suppose an HTTP flow sends 100 packets. PF_RING will distribute those packets one at a time across the threads in round-robin order. Does Suricata support sharing stream data between threads? Can it still correctly reassemble the HTTP request?

I encountered the same problem with Zeek, but Zeek supports more PF_RING cluster_type options than Suricata does; I used the inner_flow_5_tuple mode to solve it there. I am not sure whether Suricata can support this mode, or whether cluster_round_robin can help me.

A per-packet round-robin mode will not work well with Suricata.
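To illustrate why, here is a toy simulation (not Suricata code): each worker thread keeps its own stream-reassembly state, so per-packet round robin hands every worker only an interleaved slice of the TCP stream, while a flow hash keeps the whole stream on one worker. The worker count, segment count, and 5-tuple are made-up values:

```python
# Toy model: distribute 100 segments of a single flow to 8 workers.
NUM_WORKERS = 8
packets = [f"seg{i}" for i in range(100)]  # segments of one HTTP request

# cluster_round_robin: packet i goes to worker i % NUM_WORKERS.
round_robin = {w: [] for w in range(NUM_WORKERS)}
for i, pkt in enumerate(packets):
    round_robin[i % NUM_WORKERS].append(pkt)

# cluster_flow: a hash of the 5-tuple picks one worker for the whole flow.
flow_tuple = ("10.0.0.1", 4789, "10.0.0.2", 4789, "udp")  # constant outer tuple
flow_worker = hash(flow_tuple) % NUM_WORKERS
cluster_flow = {w: [] for w in range(NUM_WORKERS)}
for pkt in packets:
    cluster_flow[flow_worker].append(pkt)

# Round robin: no worker ever sees the full stream, so no worker can
# reassemble the HTTP request from its local state alone.
assert all(len(segs) < len(packets) for segs in round_robin.values())
# Flow hashing: the chosen worker sees every segment, in order.
assert cluster_flow[flow_worker] == packets
```

The same reasoning applies to any stateful parsing (TCP reassembly, HTTP decoding): it requires all segments of a flow to arrive at the thread holding that flow's state.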

It sounds like what is needed is to add support for the inner flow 5 tuple mode. Feel free to open a feature ticket for this.

OK, I will open a new ticket requesting that Suricata support the inner flow 5 tuple mode.

I created a new topic:

Feature request: Suricata support for inner flow 5 tuple mode

Ah sorry for not being more clear. We track tickets here:

Create new feature