Hi Lukas,
I have set both to 8192, but it looks like it's only able to use 4096 (or I'm reading the output wrong).
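For reference, the descriptor counts are set in the dpdk interface section of suricata.yaml, roughly like this; the values other than the descriptors, threads, MTU and mempool size (which match the log below) are illustrative:

```yaml
dpdk:
  interfaces:
    - interface: 0000:d8:00.0   # same port as in the log below
      threads: 8                # log shows 8 RX queues / 8 worker threads
      rx-descriptors: 8192      # what I set; the log reports rx_desc:4096
      tx-descriptors: 8192      # what I set; the log reports tx_desc:4096
      mempool-size: 1048575     # matches "packet mbuf pool ... of size 1048575"
      mempool-cache-size: 512   # matches "cache size 512"
      mtu: 9200                 # matches "setting MTU to 9200"
```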
Sadly, the results are the same:
Notice: suricata: This is Suricata version 7.0.2 RELEASE running in SYSTEM mode [LogVersion:suricata.c:1148]
Info: cpu: CPUs/cores online: 112 [UtilCpuPrintSummary:util-cpu.c:182]
Config: affinity: Found affinity definition for "management-cpu-set" [AffinitySetupLoadFromConfig:util-affinity.c:201]
Config: affinity: Found affinity definition for "worker-cpu-set" [AffinitySetupLoadFromConfig:util-affinity.c:201]
Config: affinity: Using default prio 'high' for set 'worker-cpu-set' [AffinitySetupLoadFromConfig:util-affinity.c:248]
Config: device: Adding interface 0000:d8:00.0 from config file [LiveBuildDeviceListCustom:util-device.c:294]
Info: suricata: Setting engine mode to IDS mode by default [PostConfLoadedSetup:suricata.c:2689]
Info: exception-policy: master exception-policy set to: auto [ExceptionPolicyMasterParse:util-exception-policy.c:200]
Config: exception-policy: app-layer.error-policy: ignore (defined via 'exception-policy' master switch) [ExceptionPolicyGetDefault:util-exception-policy.c:219]
Config: app-layer-htp: 'default' server has 'request-body-minimal-inspect-size' set to 32820 and 'request-body-inspect-window' set to 3955 after randomization. [HTPConfigSetDefaultsPhase2:app-layer-htp.c:2564]
Config: app-layer-htp: 'default' server has 'response-body-minimal-inspect-size' set to 41719 and 'response-body-inspect-window' set to 15790 after randomization. [HTPConfigSetDefaultsPhase2:app-layer-htp.c:2577]
Config: smb: read: max record size: 16777216, max queued chunks 64, max queued size 67108864 [suricata::smb::smb::rs_smb_register_parser:smb.rs:2428]
Config: smb: write: max record size: 16777216, max queued chunks 64, max queued size 67108864 [suricata::smb::smb::rs_smb_register_parser:smb.rs:2430]
Config: app-layer-enip: Protocol detection and parser disabled for enip protocol. [RegisterENIPUDPParsers:app-layer-enip.c:538]
Config: app-layer-dnp3: Protocol detection and parser disabled for DNP3. [RegisterDNP3Parsers:app-layer-dnp3.c:1565]
Config: host: allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64 [HostInitConfig:host.c:256]
Config: host: preallocated 1000 hosts of size 136 [HostInitConfig:host.c:282]
Config: host: host memory usage: 398144 bytes, maximum: 33554432 [HostInitConfig:host.c:284]
Config: coredump-config: Core dump size set to unlimited. [CoredumpLoadConfig:util-coredump-config.c:155]
Config: exception-policy: defrag.memcap-policy: ignore (defined via 'exception-policy' master switch) [ExceptionPolicyGetDefault:util-exception-policy.c:219]
Config: defrag-hash: allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56 [DefragInitConfig:defrag-hash.c:251]
Config: defrag-hash: preallocated 65535 defrag trackers of size 160 [DefragInitConfig:defrag-hash.c:280]
Config: defrag-hash: defrag memory usage: 14155616 bytes, maximum: 1073741824 [DefragInitConfig:defrag-hash.c:287]
Config: exception-policy: flow.memcap-policy: ignore (defined via 'exception-policy' master switch) [ExceptionPolicyGetDefault:util-exception-policy.c:219]
Config: flow: flow size 296, memcap allows for 58040098 flows. Per hash row in perfect conditions 55 [FlowInitConfig:flow.c:673]
Config: stream-tcp: stream "prealloc-sessions": 2048 (per thread) [StreamTcpInitConfig:stream-tcp.c:392]
Config: stream-tcp: stream "memcap": 15032385536 [StreamTcpInitConfig:stream-tcp.c:412]
Config: stream-tcp: stream "midstream" session pickups: disabled [StreamTcpInitConfig:stream-tcp.c:420]
Config: stream-tcp: stream "async-oneside": disabled [StreamTcpInitConfig:stream-tcp.c:428]
Config: stream-tcp: stream "checksum-validation": disabled [StreamTcpInitConfig:stream-tcp.c:443]
Config: exception-policy: stream.memcap-policy: ignore (defined via 'exception-policy' master switch) [ExceptionPolicyGetDefault:util-exception-policy.c:219]
Config: exception-policy: stream.reassembly.memcap-policy: ignore (defined via 'exception-policy' master switch) [ExceptionPolicyGetDefault:util-exception-policy.c:219]
Config: exception-policy: stream.midstream-policy: ignore (defined via 'exception-policy' master switch) [ExceptionPolicyGetDefault:util-exception-policy.c:219]
Config: stream-tcp: stream."inline": enabled [StreamTcpInitConfig:stream-tcp.c:475]
Config: stream-tcp: stream "bypass": disabled [StreamTcpInitConfig:stream-tcp.c:488]
Config: stream-tcp: stream "max-syn-queued": 10 [StreamTcpInitConfig:stream-tcp.c:512]
Config: stream-tcp: stream "max-synack-queued": 5 [StreamTcpInitConfig:stream-tcp.c:525]
Config: stream-tcp: stream.reassembly "memcap": 21474836480 [StreamTcpInitConfig:stream-tcp.c:546]
Config: stream-tcp: stream.reassembly "depth": 1048576 [StreamTcpInitConfig:stream-tcp.c:565]
Config: stream-tcp: stream.reassembly "toserver-chunk-size": 2621 [StreamTcpInitConfig:stream-tcp.c:637]
Config: stream-tcp: stream.reassembly "toclient-chunk-size": 2453 [StreamTcpInitConfig:stream-tcp.c:639]
Config: stream-tcp: stream.reassembly.raw: enabled [StreamTcpInitConfig:stream-tcp.c:652]
Config: stream-tcp: stream.liberal-timestamps: disabled [StreamTcpInitConfig:stream-tcp.c:661]
Config: stream-tcp-reassemble: stream.reassembly "segment-prealloc": 200000 [StreamTcpReassemblyConfig:stream-tcp-reassemble.c:491]
Config: stream-tcp-reassemble: stream.reassembly "max-regions": 8 [StreamTcpReassemblyConfig:stream-tcp-reassemble.c:514]
Info: conf: Running in live mode, activating unix socket [ConfUnixSocketIsEnable:util-conf.c:154]
Info: logopenfile: fast output device (regular) initialized: fast.log [SCConfLogOpenGeneric:util-logopenfile.c:617]
Info: logopenfile: stats output device (regular) initialized: stats.log [SCConfLogOpenGeneric:util-logopenfile.c:617]
Config: landlock: Landlock is not enabled in configuration [LandlockSandboxing:util-landlock.c:183]
Config: suricata: Delayed detect disabled [SetupDelayedDetect:suricata.c:2406]
Config: detect: pattern matchers: MPM: hs, SPM: hs [DetectEngineCtxInitReal:detect-engine.c:2496]
Config: detect: grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080 [DetectEngineCtxLoadConf:detect-engine.c:2910]
Config: detect: grouping: udp-whitelist (default) 53, 135, 5060 [DetectEngineCtxLoadConf:detect-engine.c:2936]
Config: detect: prefilter engines: MPM and keywords [DetectEngineCtxLoadConf:detect-engine.c:2969]
Config: reputation: IP reputation disabled [SRepInit:reputation.c:612]
Warning: detect: No rule files match the pattern /usr/local/var/lib/suricata/rules/suricata.rules [ProcessSigFiles:detect-engine-loader.c:230]
Config: detect: No rules loaded from suricata.rules. [SigLoadSignatures:detect-engine-loader.c:317]
Warning: detect: No rule files match the pattern /src/ [ProcessSigFiles:detect-engine-loader.c:230]
Config: detect: No rules loaded from /src/ [SigLoadSignatures:detect-engine-loader.c:335]
Warning: detect: 2 rule files specified, but no rules were loaded! [SigLoadSignatures:detect-engine-loader.c:342]
Warning: threshold-config: Error opening file: "/opt/suricata-suricata-7.0.2/treshold.config": No such file or directory [SCThresholdConfInitContext:util-threshold-config.c:177]
Info: detect: 0 signatures processed. 0 are IP-only rules, 0 are inspecting packet payload, 0 inspect application layer, 0 are decoder event only [SigAddressPrepareStage1:detect-engine-build.c:1499]
Config: detect: building signature grouping structure, stage 1: preprocessing rules... complete [SigAddressPrepareStage1:detect-engine-build.c:1505]
Perf: detect: TCP toserver: 0 port groups, 0 unique SGH's, 0 copies [RulesGroupByPorts:detect-engine-build.c:1293]
Perf: detect: TCP toclient: 0 port groups, 0 unique SGH's, 0 copies [RulesGroupByPorts:detect-engine-build.c:1293]
Perf: detect: UDP toserver: 0 port groups, 0 unique SGH's, 0 copies [RulesGroupByPorts:detect-engine-build.c:1293]
Perf: detect: UDP toclient: 0 port groups, 0 unique SGH's, 0 copies [RulesGroupByPorts:detect-engine-build.c:1293]
Perf: detect: OTHER toserver: 0 proto groups, 0 unique SGH's, 0 copies [RulesGroupByProto:detect-engine-build.c:1049]
Perf: detect: OTHER toclient: 0 proto groups, 0 unique SGH's, 0 copies [RulesGroupByProto:detect-engine-build.c:1082]
Perf: detect: Unique rule groups: 0 [SigAddressPrepareStage4:detect-engine-build.c:1858]
Perf: detect: Builtin MPM "toserver TCP packet": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toclient TCP packet": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toserver TCP stream": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toclient TCP stream": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toserver UDP packet": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toclient UDP packet": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "other IP packet": 0 [MpmStoreReportStats:detect-engine-mpm.c:1480]
TELEMETRY: No legacy callbacks, legacy socket not created
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_STRIP - available [DumpRXOffloadCapabilities:runmode-dpdk.c:996]
Config: dpdk: RTE_ETH_RX_OFFLOAD_IPV4_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:998]
Config: dpdk: RTE_ETH_RX_OFFLOAD_UDP_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1000]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1002]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_LRO - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1004]
Config: dpdk: RTE_ETH_RX_OFFLOAD_QINQ_STRIP - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1006]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1008]
Config: dpdk: RTE_ETH_RX_OFFLOAD_MACSEC_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1010]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_FILTER - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1016]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_EXTEND - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1018]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCATTER - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1020]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TIMESTAMP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1022]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SECURITY - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1024]
Config: dpdk: RTE_ETH_RX_OFFLOAD_KEEP_CRC - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1026]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCTP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1028]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1030]
Config: dpdk: RTE_ETH_RX_OFFLOAD_RSS_HASH - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1032]
Config: dpdk: RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1035]
Config: dpdk: 0000:d8:00.0: RSS enabled for 8 queues [DeviceInitPortConf:runmode-dpdk.c:1112]
Config: dpdk: 0000:d8:00.0: IP, TCP and UDP checksum validation offloaded [DeviceInitPortConf:runmode-dpdk.c:1152]
Config: dpdk: 0000:d8:00.0: setting MTU to 9200 [DeviceConfigure:runmode-dpdk.c:1478]
Config: dpdk: 0000:d8:00.0: creating packet mbuf pool mempool_0000:d8:00.0 of size 1048575, cache size 512, mbuf size 10368 [DeviceConfigureQueues:runmode-dpdk.c:1182]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:0 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:1 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:2 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:3 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:4 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:5 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:6 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: rx queue setup: queue:7 port:0 rx_desc:4096 tx_desc:4096 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 14 [DeviceConfigureQueues:runmode-dpdk.c:1202]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:0 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:1 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:2 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:3 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:4 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:5 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:6 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Config: dpdk: 0000:d8:00.0: tx queue setup: queue:7 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1222]
Info: runmodes: 0000:d8:00.0: creating 8 threads [RunModeSetLiveCaptureWorkersForDevice:util-runmodes.c:254]
Perf: threads: Setting prio -2 for thread "W#01-d8:00.0" to cpu/core 32, thread id 6505 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#02-d8:00.0" to cpu/core 33, thread id 6506 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#03-d8:00.0" to cpu/core 34, thread id 6507 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#04-d8:00.0" to cpu/core 35, thread id 6508 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#05-d8:00.0" to cpu/core 36, thread id 6509 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#06-d8:00.0" to cpu/core 37, thread id 6510 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#07-d8:00.0" to cpu/core 38, thread id 6511 [TmThreadSetupOptions:tm-threads.c:876]
Perf: threads: Setting prio -2 for thread "W#08-d8:00.0" to cpu/core 39, thread id 6512 [TmThreadSetupOptions:tm-threads.c:876]
Info: dpdk-i40e: RTE_FLOW queue region created for port 0000:d8:00.0 [i40eDeviceSetRSSFlowQueues:util-dpdk-i40e.c:211]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Info: dpdk-i40e: RTE_FLOW flow rule created for port 0000:d8:00.0 [i40eDeviceCreateRSSFlow:util-dpdk-i40e.c:246]
Config: flow-manager: using 1 flow manager threads [FlowManagerThreadSpawn:flow-manager.c:948]
Perf: threads: Setting prio 0 for thread "FM#01", thread id 6513 [TmThreadSetupOptions:tm-threads.c:882]
Config: flow-manager: using 1 flow recycler threads [FlowRecyclerThreadSpawn:flow-manager.c:1154]
Perf: threads: Setting prio 0 for thread "FR#01", thread id 6514 [TmThreadSetupOptions:tm-threads.c:882]
Perf: threads: Setting prio 0 for thread "CW", thread id 6515 [TmThreadSetupOptions:tm-threads.c:882]
Perf: threads: Setting prio 0 for thread "CS", thread id 6516 [TmThreadSetupOptions:tm-threads.c:882]
Info: unix-manager: unix socket '/usr/local/var/run/suricata/suricata-command.socket' [UnixNew:unix-manager.c:136]
Perf: threads: Setting prio 0 for thread "US", thread id 6517 [TmThreadSetupOptions:tm-threads.c:882]
Notice: threads: Threads created -> W: 8 FM: 1 FR: 1 Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1893]
Info: dpdk: 27463 of 32768 of hugepages are free - number of hugepages can be lowered to e.g. 6101 [MemInfoEvaluateHugepages:util-dpdk.c:137]
^CNotice: suricata: Signal Received. Stopping engine. [SuricataMainLoop:suricata.c:2816]
Info: suricata: time elapsed 124.131s [SCPrintElapsedTime:suricata.c:1168]
Perf: flow-manager: 260651 flows processed [FlowRecycler:flow-manager.c:1123]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_good_packets: 172432797 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_good_bytes: 223002372875 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_missed_errors: 153683842 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_unicast_packets: 326116634 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_multicast_packets: 5 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_unknown_protocol_packets: 326116639 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - mac_local_errors: 1 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_size_64_packets: 29108126 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_size_65_to_127_packets: 62721725 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_size_128_to_255_packets: 45020766 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_size_256_to_511_packets: 21911636 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_size_512_to_1023_packets: 48344790 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: Port 0 (0000:d8:00.0) - rx_size_1024_to_1522_packets: 119009596 [PrintDPDKPortXstats:source-dpdk.c:620]
Perf: dpdk: 0000:d8:00.0: total RX stats: packets 172432797 bytes: 223002372875 missed: 153683842 errors: 0 nombufs: 0 [ReceiveDPDKThreadExitStats:source-dpdk.c:647]
Perf: dpdk: (W#01-d8:00.0) received packets 21668277 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#02-d8:00.0) received packets 21581805 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#03-d8:00.0) received packets 21502171 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#04-d8:00.0) received packets 21459999 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#05-d8:00.0) received packets 21629595 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#06-d8:00.0) received packets 21353246 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#07-d8:00.0) received packets 21622006 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Perf: dpdk: (W#08-d8:00.0) received packets 21615698 [ReceiveDPDKThreadExitStats:source-dpdk.c:657]
Info: counters: Alerts: 0 [StatsLogSummary:counters.c:878]
Perf: ippair: ippair memory usage: 414144 bytes, maximum: 16777216 [IPPairPrintStats:ippair.c:296]
Perf: host: host memory usage: 398144 bytes, maximum: 33554432 [HostPrintStats:host.c:299]
Perf: dpdk: 0000:d8:00.0: closing device [DPDKCloseDevice:util-dpdk.c:52]
Notice: device: 0000:d8:00.0: packets: 326116639, drops: 153683842 (47.13%), invalid chksum: 0 [LiveDeviceListClean:util-device.c:325]
To me it also looks like the cores/threads are fighting over something, but I just can't put my finger on what it is. Setting the MTU from 9200 to 1500 doesn't make a difference.
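For completeness, the CPU pinning in suricata.yaml is set up roughly along these lines; the management core list here is illustrative, while the worker set and priority match the affinity lines and the workers pinned to cores 32-39 in the log above:

```yaml
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0, 1 ]          # illustrative; actual management cores may differ
    - worker-cpu-set:
        cpu: [ "32-39" ]       # matches the 8 workers on cores 32-39 in the log
        mode: "exclusive"
        prio:
          default: "high"      # matches "Using default prio 'high' for set 'worker-cpu-set'"
```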