Hello, I'm a novice. May I ask a question about DPDK and Suricata? Thank you.

Uploading: Screenshot 2022-10-21 180508.png…
Uploading: Screenshot 2022-10-21 180629.png…

Thank you.

Hi,

Looks like something went wrong. What is your actual question? And if you want to post screenshots, please try again.


Hi, this has been bothering me for a long time…

This is version 4.1.4, which has been EOL for a long time. If you want to try out DPDK support in Suricata, you can try the current master branch (GitHub - OISF/suricata: Suricata git repository maintained by the OISF), which will become 7.0.

Thank you very much for your answer. I'll try it.

Hi, I have successfully started it, but now I encounter this problem. How can I fix it?


Thank you.

Please paste your suricata.yaml and how you start it, ideally as a text file, not a screenshot.
Also, what type of hardware, especially which NIC, is used?


Hello, here is some information.

1. Start:

```
suricata --dpdk
```

2. suricata.yaml:

```
dpdk:
  eal-params:
    proc-type: primary

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: 0000:3b:00.1 # PCIe address of the NIC port
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: auto
      promisc: true # promiscuous mode - capture all packets
      multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      # rss-hash-functions: 0x0 # advanced configuration option, use only if you use untested NIC card and experience RSS warnings,
      # For rss-hash-functions use hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see output if you use -vvv option during Suri startup)
      # setting auto to rss_hf sets the default RSS hash functions (based on IP addresses)

      # To approximately calculate required amount of space (in bytes) for interface's mempool: mempool-size * mtu
      # Make sure you have enough allocated hugepages.
      # The optimum size for the packet memory pool (in terms of memory usage) is power of two minus one: n = (2^q - 1)
      mempool-size: 65535 # The number of elements in the mbuf pool

      # Mempool cache size must be lower or equal to:
      #     - RTE_MEMPOOL_CACHE_MAX_SIZE (by default 512) and
      #     - "mempool-size / 1.5"
      # It is advised to choose cache_size to have "mempool-size modulo cache_size == 0".
      # If this is not the case, some elements will always stay in the pool and will never be used.
      # The cache can be disabled if the cache_size argument is set to 0, can be useful to avoid losing objects in cache
      # If the value is empty or set to "auto", Suricata will attempt to set cache size of the mempool to a value
      # that matches the previously mentioned recommendations
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      #
      # IPS mode for Suricata works in 3 modes - none, tap, ips
      # - none: IDS mode only - disables IPS functionality (does not further forward packets)
      # - tap: forwards all packets and generates alerts (omits DROP action) This is not DPDK TAP
      # - ips: the same as tap mode but it also drops packets that are flagged by rules to be dropped
      copy-mode: none
      copy-iface: none # or PCIe address of the second interface

    - interface: 0000:5e:00.1
      threads: auto
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mtu: 1500
      rss-hash-functions: auto
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: none
      copy-iface: none
```
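As a side note, the mempool sizing rules quoted in the config comments can be sanity-checked with quick arithmetic. This is just a back-of-envelope check of the values used above, not Suricata code:

```python
# Check the DPDK mempool settings above against the constraints
# stated in the config comments.
RTE_MEMPOOL_CACHE_MAX_SIZE = 512  # DPDK default, per the comments

mempool_size = 65535        # 2^16 - 1, the recommended "power of two minus one"
mempool_cache_size = 257
mtu = 1500

# Approximate memory needed for one interface's mempool: mempool-size * mtu
approx_bytes = mempool_size * mtu
print(f"approx mempool memory: {approx_bytes / 2**20:.1f} MiB")  # ~93.7 MiB

# Cache-size constraints from the comments
assert mempool_cache_size <= RTE_MEMPOOL_CACHE_MAX_SIZE
assert mempool_cache_size <= mempool_size / 1.5
assert mempool_size % mempool_cache_size == 0   # 65535 == 255 * 257
```

So 65535/257 happens to divide exactly, which satisfies the "mempool-size modulo cache_size == 0" recommendation, and roughly 94 MiB of hugepage-backed memory is needed per interface.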

3. dpdk-devbind.py -s:

```
Network devices using DPDK-compatible driver

0000:3b:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic
0000:3b:00.1 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic

Network devices using kernel driver

0000:18:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=igb_uio,vfio-pci,uio_pci_generic Active

Other Network devices

0000:18:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' unused=tg3,igb_uio,vfio-pci,uio_pci_generic
0000:19:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' unused=tg3,igb_uio,vfio-pci,uio_pci_generic
0000:19:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' unused=tg3,igb_uio,vfio-pci,uio_pci_generic
```

4. lspci | grep -i net:

```
18:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
18:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
19:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
19:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
3b:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)
3b:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)
5e:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
5e:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
```

5. cat /proc/version:

```
Linux version 5.15.0-48-generic (buildd@lcy02-amd64-043) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #54~20.04.1-Ubuntu SMP Thu Sep 1 16:17:26 UTC 2022
```

thank you.

There was a problem with the formatting, so I have marked each command with a number (1, 2, 3…). :rofl:

Hi, is this information useful?

```
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2100.000
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 40 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
```

Hi. Please insert triple backticks (```) before and after text you copy-paste into the forum. That makes the formatting better.

Seems like your tx-descriptors and rx-descriptors values might be too high. Try running `ethtool -l <name of one of the network interfaces here>` and insert the "Combined" value in the config instead of 1024. You could also try setting a low value like 1 just to check if that is the issue.

Hi. I changed the values of rx-descriptors and tx-descriptors in the configuration file to 1, but nothing seems to have changed. Here are my results, please check:
1.

```
  eal-params:
    proc-type: primary

  # DPDK capture support
  # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
  interfaces:
    - interface: default # PCIe address of the NIC port
      # Threading: possible values are either "auto" or number of threads
      # - auto takes all cores
      # in IPS mode it is required to specify the number of cores and the numbers on both interfaces must match
      threads: auto
      promisc: true # promiscuous mode - capture all packets
      multicast: true # enables also detection on multicast packets
      checksum-checks: true # if Suricata should validate checksums
      checksum-checks-offload: true # if possible offload checksum validation to the NIC (saves Suricata resources)
      mtu: 1500 # Set MTU of the device in bytes
      # rss-hash-functions: 0x0 # advanced configuration option, use only if you use untested NIC card and experience RSS warnings,
      # For `rss-hash-functions` use hexadecimal 0x01ab format to specify RSS hash function flags - DumpRssFlags can help (you can see output if you use -vvv option during Suri startup)
      # setting auto to rss_hf sets the default RSS hash functions (based on IP addresses)

      # To approximately calculate required amount of space (in bytes) for interface's mempool: mempool-size * mtu
      # Make sure you have enough allocated hugepages.
      # The optimum size for the packet memory pool (in terms of memory usage) is power of two minus one: n = (2^q - 1)
      mempool-size: 65535 # The number of elements in the mbuf pool

      # Mempool cache size must be lower or equal to:
      #     - RTE_MEMPOOL_CACHE_MAX_SIZE (by default 512) and
      #     - "mempool-size / 1.5"
      # It is advised to choose cache_size to have "mempool-size modulo cache_size == 0".
      # If this is not the case, some elements will always stay in the pool and will never be used.
      # The cache can be disabled if the cache_size argument is set to 0, can be useful to avoid losing objects in cache
      # If the value is empty or set to "auto", Suricata will attempt to set cache size of the mempool to a value
      # that matches the previously mentioned recommendations
      mempool-cache-size: 257
      rx-descriptors: 1
      tx-descriptors: 1
      #
      # IPS mode for Suricata works in 3 modes - none, tap, ips
      # - none: IDS mode only - disables IPS functionality (does not further forward packets)
      # - tap: forwards all packets and generates alerts (omits DROP action) This is not DPDK TAP
      # - ips: the same as tap mode but it also drops packets that are flagged by rules to be dropped
      copy-mode: none
      copy-iface: none # or PCIe address of the second interface

    - interface: 0000:3b:00.1
      threads: auto
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mtu: 1500
      rss-hash-functions: auto
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1
      tx-descriptors: 1
      #
      # IPS mode for Suricata works in 3 modes - none, tap, ips
      # - none: IDS mode only - disables IPS functionality (does not further forward packets)
      # - tap: forwards all packets and generates alerts (omits DROP action) This is not DPDK TAP
      # - ips: the same as tap mode but it also drops packets that are flagged by rules to be dropped
      copy-mode: none
      copy-iface: none # or PCIe address of the second interface

    - interface: 0000:3b:00.1
      threads: auto
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mtu: 1500
      rss-hash-functions: auto
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1
      tx-descriptors: 1
      copy-mode: none
      copy-iface: none
```

2.

```
suricata --dpdk
[3455427] 24/10/2022 -- 17:00:12 - (suricata.c:1091) <Notice> (LogVersion) -- This is Suricata version 7.0.0-dev running in SYSTEM mode
EAL: No available hugepages reported in hugepages-1048576kB
[3455427] 24/10/2022 -- 17:00:15 - (runmode-dpdk.c:1229) <Error> (DeviceConfigure) -- [ERRCODE: SC_ERR_DPDK_INIT(340)] - Number of configured TX queues of 0000:3b:00.1 is higher than maximum allowed (64),(503)
[3455427] 24/10/2022 -- 17:00:15 - (runmode-dpdk.c:1358) <Error> (ParseDpdkConfigAndConfigureDevice) -- [ERRCODE: SC_ERR_DPDK_CONF(343)] - Device 0000:3b:00.1 fails to configure
test 80root@DPDK02:/home/xmzhou/suricata-master# suricata --dpdk
[3455511] 24/10/2022 -- 17:00:23 - (suricata.c:1091) <Notice> (LogVersion) -- This is Suricata version 7.0.0-dev running in SYSTEM mode
EAL: No available hugepages reported in hugepages-1048576kB
[3455511] 24/10/2022 -- 17:00:26 - (runmode-dpdk.c:1229) <Error> (DeviceConfigure) -- [ERRCODE: SC_ERR_DPDK_INIT(340)] - Number of configured TX queues of 0000:3b:00.1 is higher than maximum allowed (64),(382231360)
[3455511] 24/10/2022 -- 17:00:26 - (runmode-dpdk.c:1358) <Error> (ParseDpdkConfigAndConfigureDevice) -- [ERRCODE: SC_ERR_DPDK_CONF(343)] - Device 0000:3b:00.1 fails to configure
test 80root@DPDK02:/home/xmzhou/suricata-master#
```

Here's another message:

```
ethtool -l ens3f1
Channel parameters for ens3f1:
Pre-set maximums:
RX: 0
TX: 0
Other: 1
Combined: 63
Current hardware settings:
RX: 0
TX: 0
Other: 1
Combined: 63
```
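For reference, the failure above is consistent with simple queue math: with `threads: auto`, Suricata requests one RX/TX queue per online CPU, which exceeds what this NIC can provide (the error message reports a maximum of 64; `ethtool -l` reports 63 combined channels). A back-of-envelope check of the numbers from the outputs above, not Suricata code:

```python
# Why "Number of configured TX queues ... higher than maximum allowed" appears:
online_cpus = 80          # from lscpu: CPU(s): 80
nic_max_queues = 63       # from ethtool -l: Combined: 63

requested_queues = online_cpus  # threads: auto -> one queue per core
print(requested_queues > nic_max_queues)  # True -> device fails to configure
```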

Hi, I can run it successfully now. The cause: the default value of the `threads` parameter in the configuration file is `auto`. With `auto`, the function UtilCpuGetNumProcessorsOnline() is called in runmode-dpdk.c, and the resulting thread count is passed on as the number of TX queues. So set an appropriate `threads` value according to `ethtool -l`.
My English is not good, but that is the general idea.
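Based on that, a per-interface setting that stays within the NIC's queue limit might look like the fragment below (a sketch; `32` is only an example value, pick any count at or below the "Combined" maximum reported by `ethtool -l`):

```
dpdk:
  interfaces:
    - interface: 0000:3b:00.1
      # must not exceed the NIC's combined channel maximum (63 here)
      threads: 32
```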

```
suricata --dpdk
[3457529] 24/10/2022 -- 17:45:18 - (suricata.c:1091) <Notice> (LogVersion) -- This is Suricata version 7.0.0-dev running in SYSTEM mode
EAL: No available hugepages reported in hugepages-1048576kB
[3457529] 24/10/2022 -- 17:45:22 - (tm-threads.c:1927) <Notice> (TmThreadWaitOnThreadInit) -- Threads created -> W: 10 FM: 1 FR: 1   Engine started.
^C[3457529] 24/10/2022 -- 17:46:33 - (suricata.c:2719) <Notice> (SuricataMainLoop) -- Signal Received.  Stopping engine.
[3457529] 24/10/2022 -- 17:46:35 - (util-device.c:356) <Notice> (LiveDeviceListClean) -- Stats for '0000:3b:00.1':  pkts: 68, drop: 0 (0.00%), invalid chksum: 0
```

It did not report an error. Did it run successfully?
Next I'm thinking about how to test traffic performance.
Thank you.

Do you have any data about the performance of DPDK mode?
Thank you.

There was a talk from @lukashino at Suricon 2022 (https://suricon.net) about the performance improvements of DPDK, which were around 20%. The recordings and slides will be available within a few days/weeks.