Suricata shows a high capture.kernel_drops count. I use PF_RING in ZC (zero-copy) mode.

So you have just one single rule, the custom rule?
How much traffic is currently forwarded?
Have you made sure that the current build doesn't use profiling anymore?
You could try AF_PACKET again to rule out that it's something with PF_RING.
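For reference, switching capture methods is mostly a matter of starting Suricata with `--af-packet` and having an `af-packet` section in suricata.yaml. A minimal sketch (the interface name and cluster-id here are assumptions; adjust to the actual NIC and core count):

```yaml
# suricata.yaml -- minimal af-packet capture section (sketch; interface
# name and cluster-id are assumptions for this setup)
af-packet:
  - interface: eth0
    threads: auto                # one capture thread per core by default
    cluster-id: 99
    cluster-type: cluster_flow   # flow-based load balancing across threads
    defrag: yes
    use-mmap: yes                # memory-mapped ring buffer, fewer copies
```

Then run something like `suricata --af-packet -c /etc/suricata/suricata.yaml` and compare the capture.kernel_drops counters against the PF_RING run.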

About 5 Gbit/s of traffic.
Yes, I am not using profiling anymore.
OK, but AF_PACKET can't deal with that much traffic.

AF_PACKET can deal with that amount of traffic; you have enough cores for that. I've seen 20Gbit/s deployments with fewer cores running AF_PACKET.

Especially when you run just one rule, the load is even lower.

When I use the AF_PACKET engine, the same thing happens, but only when I enable the file-store option.

What does htop look like if you run it with AF_PACKET, and what does perf top show?
Also do a run without that specific rule enabled, and again check htop and perf top.
What is the source of the traffic? Maybe something really strange is in there.

When I run it with AF_PACKET and the file-store option disabled, everything is OK.

But when I enable the file-store option, strange things happen.

The traffic comes from 11 load machines through a splitter device.

When I run it with AF_PACKET and without file.rules, the packet-drop situation still happens.
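For context, the file-store feature being toggled here lives in suricata.yaml. A minimal sketch of what enabling it looks like (the log directory and the reassembly depth value are assumptions, not taken from this deployment):

```yaml
# suricata.yaml -- file extraction/storage (sketch; log-dir and depth are
# assumptions). Writing extracted files to disk adds I/O and memory load,
# which can surface as capture.kernel_drops under heavy traffic.
file-store:
  enabled: yes          # set to "no" to disable file-store
  log-dir: files        # relative to the default log directory
  force-filestore: no   # only store files matched by filestore rules

stream:
  reassembly:
    depth: 1mb          # file extraction only works within this depth
```

Since the drops appear as soon as file-store is enabled, the disk throughput of `log-dir` and the stream reassembly settings are worth checking alongside the capture method.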

What Linux Distribution is this?

In the perf top output it is very strange that the overhead is that high, so I would debug further there.

But also in your perf top output with AF_PACKET and without file-store, I would check why read_tsc and PacketPoolReturnPacket are so high in the overhead section.

This is my Linux distribution:

[root@sec-audit-lljx-027093 ~]# uname -a
Linux 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Thank you very much.

This is just the kernel version; it doesn't tell me whether it's Debian, Ubuntu, CentOS, etc.

But regardless of that, kernel 3.10 is very old (released in 2013, end-of-life in 2017), so I would make sure to run a more current kernel before digging deeper.
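A quick way to answer the distribution question, since `uname -a` only reports the kernel: most modern distributions ship /etc/os-release, and older RHEL/CentOS systems have /etc/redhat-release.

```shell
# Identify the Linux distribution (uname only shows the kernel version)
cat /etc/os-release
# On older RHEL/CentOS systems without os-release:
# cat /etc/redhat-release
```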

I get it; this is my Linux distribution.

That’s great. Thank you very much.