Suricata with DPDK cannot be set up

Hi,

First of all, 7.0.0 RC1 is out, and without more details about your config, setup, the NIC used, etc., it’s hard to help.

1. start

suricata --dpdk -v

2. yaml

dpdk:
  eal-params:
    proc-type: primary
  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: 0000:02:06.0 # PCIe address of the NIC port
    - interface: 0000:02:01.0
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: auto
      promisc: true # promiscuous mode - capture all packets
      multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      # rss-hash-functions: 0x0 # advanced configuration option, use only if you use untested NIC card and experience RSS warnings,
      # For `rss-hash-functions` use hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see output if you use -vvv option during Suri startup)
      # setting auto to rss_hf sets the default RSS hash functions (based on IP addresses)

      # To approximately calculate required amount of space (in bytes) for interface's mempool: mempool-size * mtu
      # Make sure you have enough allocated hugepages.
      # The optimum size for the packet memory pool (in terms of memory usage) is power of two minus one: n = (2^q - 1)
      mempool-size: 65535 # The number of elements in the mbuf pool

      # Mempool cache size must be lower or equal to:
      #     - RTE_MEMPOOL_CACHE_MAX_SIZE (by default 512) and
      #     - "mempool-size / 1.5"
      # It is advised to choose cache_size to have "mempool-size modulo cache_size == 0".
      # If this is not the case, some elements will always stay in the pool and will never be used.
      # The cache can be disabled if the cache_size argument is set to 0, can be useful to avoid losing objects in cache
      # If the value is empty or set to "auto", Suricata will attempt to set cache size of the mempool to a value
      # that matches the previously mentioned recommendations
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      #
      # IPS mode for Suricata works in 3 modes - none, tap, ips
      # - none: IDS mode only - disables IPS functionality (does not further forward packets)
      # - tap: forwards all packets and generates alerts (omits DROP action) This is not DPDK TAP
      # - ips: the same as tap mode but it also drops packets that are flagged by rules to be dropped
      copy-mode: none
      copy-iface: none # or PCIe address of the second interface

    - interface: auto
      threads: auto
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mtu: 1500
      rss-hash-functions: auto
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: none
      copy-iface: none

3. nic

I can’t use ethtool on my NIC:
(screenshot attachment: 捕获5, “Capture 5”)

What type of NIC is this exactly?

Please avoid opening multiple threads on the same topic; we have therefore deleted the newest one. Also keep in mind this is a community forum, so replies are on a best-effort basis.

Judging from the driver and the name of the machine, you are in a virtual environment.
Generally speaking, virtualized environments have not been tested so far, as indicated in the docs:

The work has not been tested neither with the virtual interfaces nor in the virtual environments like VMs, Docker or similar.
12.1. Suricata.yaml — Suricata 8.0.0-dev documentation

Please be more specific about your environment regardless of my assumptions.

Also - I am not sure if that’s intended or a typo, but to run a DPDK application, you need to bind all NICs that you are going to use to drivers compatible with DPDK. In your suricata.yaml configuration, I see two PCIe addresses, of which one is still bound to the e1000 driver - 0000:02:01.0.
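For reference, binding is usually done with the dpdk-devbind.py script that ships with DPDK. A minimal sketch (the vfio-pci driver name is an assumption - use whichever DPDK-compatible driver your setup provides):

# list the current driver bindings of all network devices
dpdk-devbind.py --status

# bind the second port to a DPDK-compatible driver (vfio-pci assumed here)
dpdk-devbind.py --bind=vfio-pci 0000:02:01.0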

Why you can’t use ethtool on the other NIC I am not sure, but unless you have a bifurcated driver (Intel drivers are not bifurcated), ethtool is not used in a DPDK environment.

Are you able to run e.g. the testpmd application?
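For example, something along these lines (dpdk-testpmd is the binary name in recent DPDK releases; the core list is just an illustrative choice):

# run testpmd on 2 cores, restricted to the NIC in question, interactive mode
dpdk-testpmd -l 0-1 -a 0000:02:06.0 -- -i

# at the testpmd> prompt, "show port info 0" reports, among other things,
# how many RX/TX queues the port supports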

Hi, thank you for answering me. I’m able to run the testpmd app. The NIC I use for DPDK is ens38, i.e. 0000:02:06.0. I tried to change its driver from e1000 to vmxnet3 to solve my problem, but I ran into a new problem. Could you please help me?

Hi @jayzhu,

I have recently tried running Suricata with the e1000 driver in a VM.
As the initial error message states, the e1000 driver does not support multiple receive queues. Therefore, it is limited to working with a single Suricata worker (only 1 thread in the dpdk: config). I confirmed this with the testpmd app, which reports support for only a single RX/TX queue.
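In suricata.yaml that translates to something like this (a sketch of the relevant part only, with the PCIe address taken from your earlier post):

dpdk:
  interfaces:
    - interface: 0000:02:06.0
      threads: 1 # e1000 exposes a single RX/TX queue, so only one worker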

Trying out Suricata with vmxnet3 might be interesting, as that is a more high-performance virtual interface that should support multiple queues. It might take some time until I get access to one.

Thanks very much for answering me. You are right. I changed the threads to 1 and it works.


But I have run into a new problem. As you can see in the picture above, when I start Suricata with DPDK there are so many unknown packets that my VM crashes. My settings are as follows.

Hi @jayzhu

Checking in - do you still face the issue?
Do you possibly have enough resources (hugepages) allocated in your DPDK setup? (See the commands below for a quick check.)
What is your virtualization platform? Do you have it set up correctly?
Crashing the VM seems wild!
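For a quick hugepages check, something like the following should do (the page count of 1024 is only an example value - size it according to your mempool settings):

# show current hugepage allocation and usage
grep Huge /proc/meminfo

# reserve 1024 x 2 MB hugepages (example value) for the current boot
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages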

If you were able to solve the issue, please provide the solution. Thanks.
Lukas

Thanks very much for answering me. I have solved the problem by changing the copy-mode in suricata.yaml from “ips” to “none”. But I have run into a new problem: Suricata with DPDK has a high drop rate.
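In suricata.yaml the change looks like this (only the relevant lines shown, PCIe address as in the earlier post):

dpdk:
  interfaces:
    - interface: 0000:02:06.0
      copy-mode: none # was "ips"; "none" disables packet forwarding (IDS only)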


My suricata.yaml is as follows.
suricata.yaml (80.5 KB)

Ok.

With regards to your config: I think it is interesting to see that you can run multiple workers on your (I assume) virtualized interface. Are you using VMXNET3? Is packet distribution working? You can run Suricata with the verbose argument to get more detailed stats (the -vvvv command-line parameter).
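For example (using the --dpdk argument as in your original command):

suricata --dpdk -vvvv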

The other thing I’ve noticed from your config is that it runs 4 workers, but in CPU affinity you only have 3 cores assigned to worker threads. That means that one extra worker thread (the fourth one) will be spawned on the same core as the first worker thread. That is definitely not intended - I would suggest manually setting the appropriate number of threads in your DPDK interface settings (dpdk.interfaces["0000:3b:00.0"].threads), i.e. setting it to 3.
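In YAML terms, roughly like this (a sketch - the exact worker-cpu-set core list is an assumption based on your description of 3 assigned cores):

dpdk:
  interfaces:
    - interface: 0000:3b:00.0
      threads: 3 # match the number of cores assigned to worker threads

threading:
  cpu-affinity:
    - worker-cpu-set:
        cpu: [ 1, 2, 3 ] # example - 3 dedicated cores, one per DPDK worker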

On a side note, yesterday we had a webinar where I presented how Suricata can benefit from increasing the mempool size or the number of descriptors. Maybe that’s also worth watching.
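In config terms that means experimenting with larger values under the interface entry, e.g. (the numbers below are purely illustrative):

      mempool-size: 131071 # 2^17 - 1, a larger packet pool
      rx-descriptors: 4096
      tx-descriptors: 4096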

Apart from that, also watch the other counters in the stats, or check perf usage to identify bottlenecks.
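For example, assuming perf is installed and Suricata is running:

# live view of where the Suricata process spends CPU cycles
sudo perf top -p $(pidof suricata)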

Yes, I am using VMXNET3. I will try what you said. Also, I am not able to watch the video. Could you please send me the video as a file? Thank you very much.

Hi Jayzhu,

I got curious: are you unable to access YouTube as a whole, or just our video?

Not sure we’ll be able to help there, but it would be good to understand if our video/channel is blocked in specific countries…