Suricata only uses one CPU to process packets

As shown in the figure, my server has eight CPUs, but only one core is used to process packets.


Can you share the suricata.yaml?

suricata.yaml (70.8 KB)

I compiled Suricata with PF_RING and Hyperscan support.

Try this and see:

Hi,

Is it possible that the problem is in the ‘cpu-affinity’ configuration? Try changing the settings. For example:

```
cpu-affinity:
  - management-cpu-set:
      cpu: [ "0", "1" ]
```

or

```
  - management-cpu-set:
      cpu: [ 0, 7 ]  # include only these CPUs in affinity settings
```

etc.
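For completeness, the packet processing threads are controlled by the worker-cpu-set rather than the management set, so that is usually the one to adjust. A minimal sketch of the relevant threading section, assuming the default suricata.yaml layout:

```
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ "all" ]       # let the packet processing threads use every core
        mode: "exclusive"
        prio:
          default: "medium"
```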

On another note, are you running in PF_RING mode? Standard or zero-copy?

I tested it with your settings, but it didn't work. Still only one CPU is processing packets.


I am using AF_PACKET mode; PF_RING mode is not used.

I use tcpreplay to generate test traffic, sending about 2 GB of traffic per second.
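For reference, a tcpreplay invocation along these lines would replay at roughly that rate (the interface name and pcap path here are placeholders):

```
# Replay a capture at ~16000 Mbit/s (about 2 GB/s), looping until stopped
tcpreplay --intf1=eth0 --mbps=16000 --loop=0 /path/to/test.pcap
```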

Hi,

Try narrowing this down by testing with different configurations.

I used PF_RING mode:
suricata --pfring-int=eth0 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow -c /etc/suricata/suricata.yaml -vvv
Suricata still uses only one CPU to process packets.


Confusingly, there are performance problems with this setup as well.

Hello, are you still able to help me solve this problem?

Hi,
What is the output of

top -H -p `pidof suricata` 

Also, can you please share the output of
ethtool -l eth0
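If `ethtool -l` reports only one combined queue in use, raising the queue count can help spread receive load across cores, assuming the NIC supports multiple queues:

```
# Hypothetical example: enable 8 combined RX/TX queues on eth0
ethtool -L eth0 combined 8
```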

I used this command to run Suricata:

suricata  -i eth0   -c /etc/suricata/suricata.yaml -vvv 

and ran:

top -H -p `pidof suricata`


Suricata dropped more than 2 million packets, of which the kernel dropped 600,000.

You should also update the receive-cpu-set configuration, or just disable CPU affinity for testing.
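For example, a minimal way to disable affinity in suricata.yaml while testing:

```
threading:
  set-cpu-affinity: no   # let the OS scheduler place Suricata's threads freely
```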

What type of CPU and NIC is used and which distribution?

Do you have a technical support group? This kind of communication is too time-consuming.

Contact us at info AT oisf.net to discuss commercial support options. Otherwise, this is it.

Since both PF_RING and AF_PACKET behave the same I suspect the issue is in the traffic. Suricata does per flow load balancing, so if the traffic is an elephant flow it will be processed by a single thread. You can check this by looking at your eve.json and then specifically at the flow event types.
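For example, assuming eve.json is at its default location and jq is available, something like this lists the largest flows by byte count; one dominant flow would point at an elephant flow:

```
jq -r 'select(.event_type=="flow")
       | [(.flow.bytes_toserver + .flow.bytes_toclient), .src_ip, .dest_ip, .proto]
       | @tsv' /var/log/suricata/eve.json | sort -rn | head
```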

yep - I second that, was thinking exactly the same.

I found the problem. A certain kind of traffic triggers a rule too many times, which degrades performance. After I deleted the rule, the packet loss rate is very low.

Thank you very much for your help!!!

This may be a bug. Can you tell us a bit more about the rule and the traffic that triggers it?

I use a GRE tunnel to monitor the traffic. I guess that because of decoding the GRE traffic, the alert log is constantly being written and the disk I/O is saturated.
The triggered rule signature_ids are:

2210029
2210045
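As a side note, instead of deleting those rules outright they can be suppressed by sid, for instance in threshold.config (a sketch, assuming the default threshold file is loaded by suricata.yaml):

```
suppress gen_id 1, sig_id 2210029
suppress gen_id 1, sig_id 2210045
```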

test traffic:

http://downloads.digitalcorpora.org/corpora/packets/5gb-tcp-connection.pcap.gz

This pcap contains a single http flow, so it is expected behavior that Suricata processes it using a single thread. In this case GRE is not causing it.

I suspect that minor packet loss is leading to the flood of stream events, which then makes the packet loss worse. I’ll have a look at why it doesn’t stop those events after some time.
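One way to verify that theory is to watch the capture counters in the periodic stats events (assuming stats logging to eve.json is enabled), e.g.:

```
# kernel_drops rising together with the flood of stream events would fit that picture
jq -c 'select(.event_type=="stats") | .stats.capture' /var/log/suricata/eve.json | tail
```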
