I’m confused about what the pkt_on_wrong_thread counter is telling me.
I have a cluster_flow deployment of Suricata where this counter increments regularly, and it appears to track kernel_drops, which I’ve been trying to eliminate (the drop rate is below what the box should be able to handle).
While digging into this, I stumbled on this:
and opted to try configuring cluster_qm with RSS on a separate test system using X710s, to see how it behaves.
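Roughly, this is the kind of setup I mean (the interface name, cluster-id, and queue count here are placeholders, not my exact config):

```yaml
# suricata.yaml, af-packet section -- sketch only
af-packet:
  - interface: ens1f0          # placeholder; the real X710 port name differs
    threads: 4                 # one worker per RSS queue
    cluster-id: 99
    cluster-type: cluster_qm   # bind each RSS queue to one worker thread
    defrag: yes

# NIC side (shell): set the X710 to the matching number of RSS queues
#   ethtool -L ens1f0 combined 4
```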
Prior to this, the system was configured with cluster_flow (much like the production box that was dropping), and pkt_on_wrong_thread was essentially zero.
However, after following these steps:
The box now regularly sees pkt_on_wrong_thread incrementing… I feel like I must’ve done something wrong, but I believe I followed the steps correctly.
I thought that with cluster_qm the load balancing is done by the NIC, so each worker thread reads from its own RSS queue, which the NIC fills in hardware using a hash of the 5-tuple… is that correct?
If so, how could a packet be balanced to the wrong thread!?
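My mental model of the 5-tuple balancing, in toy form (this is just an illustrative CRC-based hash, not the X710’s actual Toeplitz hash, and the addresses and queue count are made up): if the hash isn’t symmetric, the two directions of one flow can hash to different queues, which is the only way I can see a packet ending up on the “wrong” thread.

```python
# Toy sketch of RSS-style queue selection -- NOT the NIC's real Toeplitz
# hash, just an illustration of symmetric vs. asymmetric 5-tuple hashing.
import zlib

N_QUEUES = 4  # assume 4 RSS queues, one worker thread per queue

def asymmetric_queue(src, dst, sport, dport, proto):
    # Order-sensitive: swapping src/dst can change the queue, so the
    # two directions of one flow may land on different worker threads.
    return zlib.crc32(f"{src}:{sport}>{dst}:{dport}/{proto}".encode()) % N_QUEUES

def symmetric_queue(src, dst, sport, dport, proto):
    # Order-insensitive: sort the endpoints first, so both directions
    # of a flow always map to the same queue (and thread).
    a, b = sorted([(src, sport), (dst, dport)])
    return zlib.crc32(f"{a}>{b}/{proto}".encode()) % N_QUEUES

fwd = ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp")
rev = ("10.0.0.2", "10.0.0.1", 80, 12345, "tcp")

print(symmetric_queue(*fwd) == symmetric_queue(*rev))   # True for any flow
print(asymmetric_queue(*fwd), asymmetric_queue(*rev))   # may differ
```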
The first link above raises concerns about tunneled traffic and how it gets balanced, but my traffic mix is entirely IPv4 with TCP and UDP. No tunnels.