Hi,
first of all, 7.0.0 RC1 is out, and without more details about your config, setup, the NIC used, etc., it’s hard to help.
1. start
suricata --dpdk -v
2. yaml
dpdk:
  eal-params:
    proc-type: primary

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in a 1:1 ratio
  interfaces:
    - interface: 0000:02:06.0 # PCIe address of the NIC port
    - interface: 0000:02:01.0
      # Threading: possible values are either "auto" or a number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores, and the numbers on both interfaces must match
      threads: auto
      promisc: true # promiscuous mode - capture all packets
      multicast: true # also enables detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible, offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # set MTU of the device in bytes
      # rss-hash-functions: 0x0 # advanced configuration option; use only if you use an untested NIC card and experience RSS warnings
      # For `rss-hash-functions` use the hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see its output if you use the -vvv option during Suricata startup)
      # setting rss-hash-functions to auto sets the default RSS hash functions (based on IP addresses)
      # To approximately calculate the required amount of space (in bytes) for the interface's mempool: mempool-size * mtu
      # Make sure you have enough allocated hugepages.
      # The optimum size for the packet memory pool (in terms of memory usage) is a power of two minus one: n = (2^q - 1)
      mempool-size: 65535 # the number of elements in the mbuf pool
      # The mempool cache size must be lower than or equal to:
      # - RTE_MEMPOOL_CACHE_MAX_SIZE (by default 512) and
      # - "mempool-size / 1.5"
      # It is advised to choose a cache_size such that "mempool-size modulo cache_size == 0".
      # If this is not the case, some elements will always stay in the pool and will never be used.
      # The cache can be disabled by setting cache_size to 0; this can be useful to avoid losing objects in the cache.
      # If the value is empty or set to "auto", Suricata will attempt to set the cache size of the mempool to a value
      # that matches the previously mentioned recommendations.
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      #
      # The copy-mode setting supports 3 values - none, tap, ips
      # - none: IDS mode only - disables IPS functionality (does not forward packets any further)
      # - tap: forwards all packets and generates alerts (omits the DROP action); this is not the DPDK TAP interface
      # - ips: the same as tap mode, but it also drops packets that are flagged by rules to be dropped
      copy-mode: none
      copy-iface: none # or the PCIe address of the second interface
    - interface: auto
      threads: auto
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mtu: 1500
      rss-hash-functions: auto
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: none
      copy-iface: none
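To make the mempool comments above concrete, here is a rough back-of-the-envelope check (my numbers, not part of the original config): with mempool-size: 65535 and mtu: 1500, one interface needs roughly 65535 * 1500 ≈ 94 MB of hugepage-backed memory, plus per-mbuf overhead, so something like 2 GB of hugepages comfortably covers both interfaces:

# reserve 1024 x 2 MB hugepages (~2 GB); adjust the count to your setup
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# verify the reservation
grep Huge /proc/meminfo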
3. nic
my NIC can’t use ethtool
What type of NIC is this exactly?
Please avoid opening multiple threads on the same topic; we therefore deleted the newest one. Also keep in mind this is a community forum, so replies are on a best-effort basis.
Judging from the driver and the name of the machine, you seem to be in a virtual environment.
Generally speaking, virtualized environments have not been tested so far, as indicated in the docs:
The work has not been tested neither with the virtual interfaces nor in the virtual environments like VMs, Docker or similar.
10.1. Suricata.yaml — Suricata 7.0.0-rc2-dev documentation
Please be more specific about your environment regardless of my assumptions.
Also - I am not sure if it’s intended or a typo, but to run a DPDK application you need to bind all NICs that you are going to use to DPDK-compatible drivers. In your suricata.yaml configuration, I see two PCIe addresses, of which one is still bound to the e1000 kernel driver - 0000:02:01.0.
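For reference, a minimal binding sketch using DPDK’s own usertools (the choice of vfio-pci is my assumption; in some VMs you may need igb_uio or vfio’s no-IOMMU mode instead):

dpdk-devbind.py --status                          # shows which driver each port is bound to
sudo modprobe vfio-pci                            # load a DPDK-compatible driver
sudo dpdk-devbind.py --bind=vfio-pci 0000:02:01.0 # rebind the port still on e1000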
Why you can’t use ethtool on the other NIC, I am not sure, but unless you have a bifurcated driver (Intel drivers are not bifurcated), ethtool is not used in a DPDK environment: once a port is bound to a DPDK driver, it disappears from the kernel’s view, so kernel tools such as ethtool no longer see it.
Are you able to run e.g. the testpmd application?
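For example, something like this should bring the port up under DPDK (the core list is a placeholder for your setup; on older DPDK releases the binary is called testpmd and the allow-list option is -w instead of -a):

sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:02:06.0 -- -i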
Hi, thank you for answering me. I’m able to run the testpmd app. The NIC I use for DPDK is ens38, i.e. 0000:02:06.0. I tried to change its driver from e1000 to vmxnet3 to solve my problem, but I ran into a new problem. Could you please help me?
Hi @jayzhu,
I have recently tried running Suricata with the e1000 driver in a VM.
As the initial error message states, the e1000 driver does not support multiple receive queues. It is therefore limited to a single Suricata worker (only 1 thread in the dpdk: config section). I confirmed this with the testpmd app, which reports support for only a single rx/tx queue.
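For anyone who wants to reproduce the check, testpmd’s interactive prompt can show the queue limits (output trimmed; exact fields vary by DPDK version):

testpmd> show port info 0
...
Max possible RX queues: 1
Max possible TX queues: 1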
Trying out Suricata with vmxnet3 might be interesting, as that is a higher-performance virtual interface that should support multiple queues. It might take some time until I get access to one.
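If you get vmxnet3 working before I do, a quick way to verify multi-queue support is to ask testpmd for more than one queue pair (the queue counts here are illustrative); testpmd should complain at port setup if the NIC cannot provide them:

sudo dpdk-testpmd -l 0-4 -a 0000:02:06.0 -- -i --rxq=4 --txq=4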