7.0.0-beta1 dpdk alert performance problem?

When I use a rule to generate alerts (even suppressed alerts), Suricata slows down and loses packets.

Tests with 7.0.0-beta1 in DPDK mode (IPS mode, copy-mode: ips):

  1. No rules, all NSM functions + pcap log.
    ~10 Gbps mixed traffic, works fine, no packet loss. CPU stays at 100%.

  2. Add 1 rule to generate many alerts and write them to the event log.
    For testing only; the goal is roughly ~5000 logs/sec at low network traffic (see the rate-counting sketch after this list).
    alert ip any any -> any any (msg:"per_pkt_alert_test"; flow:no_stream; rev:1; sid:1;)
    ~80 Mbps traffic, about 30% packet loss. Some CPU usage drops to about 20-40% (in htop).

  3. Add 1 rule to generate many alerts (but no logging to disk).
    alert ip any any -> any any (msg:"per_pkt_alert_test"; flow:no_stream; noalert; rev:1; sid:1;)
    ~80 Mbps traffic, 30% packet loss, some CPU usage drops to about 20-40% (in htop).
    So it is not about disk speed; Suricata slows down whenever there are a lot of alerts.
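
As an aside, one way to confirm the achieved alert rate from eve.json might be something like this (a sketch; the log path is an assumption and jq must be installed):

# count alert events per one-second timestamp prefix (sketch; path assumed)
jq -r 'select(.event_type == "alert") | .timestamp[0:19]' /var/log/suricata/eve.json | uniq -c

Each output line is then the number of alert events that fall into the same second.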

But why?

af_packet mode has no such problem.

Here are the configs.
Machine: Intel Xeon with 36 cores, HT disabled, plus a Mellanox MCX512A NIC
Kernel cmdline

BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.14.0-70.26.1.el9_0.x86_64 root=UUID=99bb48ce-5342-4094-8ca9-48cf3a2a467f ro hugepagesz=1G hugepages=36 default_hugepagesz=1G transparent_hugepage=never crashkernel=160M nmi_watchdog=0 audit=0 nosoftlockup processor.max_cstate=0 intel_idle.max_cstate=0 hpet=disable mce=ignore_ce tsc=reliable numa_balancing=disable isolcpus=1-35 rcu_nocbs=1-35 nohz_full=1-35

Interface

dpdk:
  eal-params:
    proc-type: primary
  
  interfaces:
    - interface: 0000:c3:00.0 
      threads: 16
      promisc: true 
      multicast: true 
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.1
	  
    - interface: 0000:c3:00.1
      threads: 16
      promisc: true 
      multicast: true 
      checksum-checks: false
      checksum-checks-offload: true
      mtu: 1500
      mempool-size: 65535
      mempool-cache-size: 257
      rx-descriptors: 1024
      tx-descriptors: 1024
      copy-mode: ips
      copy-iface: 0000:c3:00.0

CPU Affinity

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ "33-35" ]  # include only these CPUs in affinity settings
    #- receive-cpu-set:
       # cpu: [ 0 ]  # include only these CPUs in affinity settings
    - worker-cpu-set:
        cpu: [ "1-32" ]
        mode: "exclusive"
        # Use explicitly 32 threads and don't compute the number from the
        # detect-thread-ratio variable:
        threads: 32
        prio:
          low: [ 0 ]
          medium: [ "1-35"  ]
          high: [  ]
          default: "medium"

Is it a bug? How can I resolve this problem, or should I just go back to af_packet mode?

Hi @abigyellowdog

Sorry for the delayed response.
Just wanted to check: were your second and third test cases really at 80 Mbps? That seems really low for drops to occur, especially with such settings (32 workers)… But you mention your CPU core usage drops to 20-40%. That should never happen with DPDK: the CPUs are supposed to keep polling the NIC and sit at 100% usage all the time (even with no packets coming in), so it indicates something is really blocking the CPU cores. I did IPS tests for Suricon 2022 (talks available soon) and had no problem with the performance there: I had the full ET Open ruleset enabled with a performance of ~600 Mbps per worker (as usual for my CPU), running on 8 workers in total.

Just to be sure: can you try increasing the mempool size, cache size, and RX/TX descriptor counts to see if that helps?
I would try something like this on both interfaces:

mempool-size: 262143
mempool-cache-size: 511
rx-descriptors: 8192
tx-descriptors: 8192
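
For context, a minimal sketch of where these values would sit in the interface section (the interface address and the remaining keys are taken from the config posted above):

dpdk:
  interfaces:
    - interface: 0000:c3:00.0
      threads: 16
      mempool-size: 262143      # 2^18 - 1; evenly divisible by the cache size (511 * 513)
      mempool-cache-size: 511   # per-core cache; DPDK typically caps this at 512 (RTE_MEMPOOL_CACHE_MAX_SIZE)
      rx-descriptors: 8192
      tx-descriptors: 8192
      copy-mode: ips
      copy-iface: 0000:c3:00.1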

I would try running it on a lower number of cores - say 4 cores per interface.

Also, could you please attach a perf top log?
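
For reference, one non-interactive way to capture such a profile might be (a sketch; the sampling duration and output file name are arbitrary):

# sample the running Suricata process for 30 s, then write a text report (sketch)
sudo perf record -g -p "$(pidof suricata)" -- sleep 30
sudo perf report --stdio > perf-top.log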

Thanks.
Lukas

Hi @lukashino
Where can I see the video of that performance talk?

Hi!

I don’t think there is a separate DPDK talk that focuses on performance. Maybe the most relevant is this one:

However, please watch the Suricata social channels; we plan to organize a webinar in April that will be focused on DPDK, and we will try to cover everything from setup to performance evaluation (it is a work in progress).

Hi, the webinar recording is here!

Hi @lukashino ,
I’ve hit the same problem: 10 Gbps traffic, 2,000,000 packets lost every 8 s with pf_ring (32 workers) versus 17,000,000 packets lost every 8 s with DPDK. Same rules, same machine, 16 GB of hugepages. The only difference is that all my worker CPU cores are at 100%.

Hey @antsknows,

I can have a look into that but verifying that will probably take some time. I’ll try to report back.
In the meantime, can you possibly share the extended output from Suricata? (You enable it with the -vvvv command-line option.) I am especially interested in the xstats.
Also: are you on the newest master/7.0.0 release?
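
For reference, one way to capture that output might be (a sketch; the config path is an assumption):

# run with maximum verbosity and keep a copy of the startup/shutdown output (sketch)
sudo suricata --dpdk -c /etc/suricata/suricata.yaml -vvvv 2>&1 | tee suricata-verbose.log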

Lukas

Hi Lukas,
Thanks for your help. Here’s the extended output:

Hey,
the output doesn’t seem to have made it to the forum post. Can you try again?
Thanks.

I tried to reproduce the issue, but without great success so far. If somebody has a more reliable test scenario, I am happy to try it. Here is my progress so far:

I used 2+2 threads in both DPDK and AF_PACKET and tested with these rulesets:

Alert on every packet, generating alerts:

alert ip any any -> any any (msg: "Packet!"; flow: to_server; sid: 999; rev:1;)
alert ip any any -> any any (msg: "Packet!"; flow: to_client; sid: 998; rev:1;)

Alert on every packet, suppressing alerts:

alert ip any any -> any any (msg: "Packet!"; flow: to_server; sid: 999; rev:1; noalert;)
alert ip any any -> any any (msg: "Packet!"; flow: to_client; sid: 998; rev:1; noalert;)

I am attaching my suricata.yaml file.

As PCAPs I used 4SICS-GeekLounge-15102{0,1,2}.pcap from the SCADA / ICS PCAP files from 4SICS.

For the replay I used tcpreplay-edit in various forms such as:

sudo tcpreplay-edit --enet-vlan=add --enet-vlan-tag=14 --enet-vlan-pri=1 --enet-vlan-cfi=0 -i eno2 -L 10000000 --enet-dmac="1e:23:de:52:66:89" -l0 --seed=1344 --mbps 100 4SICS-GeekLounge-151020.pcap

Tests with alerting rules

AF_PACKET performance was better only when I was replaying 4SICS-GeekLounge-151020.pcap:

sudo tcpreplay-edit --enet-vlan=add --enet-vlan-tag=14 --enet-vlan-pri=1 --enet-vlan-cfi=0 -i eno2 -L 10000000 --enet-dmac="1e:23:de:52:66:89" -l0 --seed=1344 --mbps 100 4SICS-GeekLounge-151020.pcap

The replay took around 70 seconds.
DPDK was able to maintain a 0% drop rate at lower speeds (tested at 50 Mbps) but had a ~10% drop rate at 100 Mbps. AF_PACKET had about a ~1.5% drop rate.

But when I was replaying more diverse PCAPs (e.g. all 3 of them), AF_PACKET had worse results than DPDK. When I changed the seed on each replay of 4SICS-GeekLounge-151020.pcap, AF_PACKET's results degraded to a ~6% drop rate but were still better than DPDK's (~9%):

while true
do
 sudo tcpreplay-edit --enet-vlan=add --enet-vlan-tag=14 --enet-vlan-pri=1 --enet-vlan-cfi=0 -i eno2 --enet-dmac="1e:23:de:52:66:89" --seed=$RANDOM --mbps 100 4SICS-GeekLounge-151020.pcap
done

The other test, with 4SICS-GeekLounge-151022.pcap, showed similar but better results for DPDK compared to AF_PACKET (37% vs. 40% drop rates, respectively). When all 3 PCAPs were combined, DPDK was again better than AF_PACKET.

Tests with muted rules (noalert;):
DPDK was better than or the same as AF_PACKET in all tests.

Temporary conclusion:
It seems like some in-kernel logic might do better on traffic that is repeating; with more diverse traffic, DPDK seems to do better.

suricata-one-noalert.rules (615 Bytes)
suricata-one.rules (597 Bytes)
suricata-ips.yaml (88.8 KB)

One reproducible case might be:

A single replay of 4SICS-GeekLounge-151020.pcap from the SCADA / ICS PCAP files from 4SICS.
Replay at ~100 Mbps; Suricata can run in IPS mode (or in IDS mode with the --simulate-ips switch). The difference also occurs in purely IDS mode but is much less visible, a 1-2% difference; the performance difference between DPDK and AF_PACKET is more notable in IPS mode. IPS mode performs better on this PCAP.

With the config/rules/PCAP as in the previous post, one can run Suricata as:

sudo ./src/suricata -c suricata-ips.yaml -l /tmp/ -S ./rules/suricata-one.rules -vvvv --af-packet/--dpdk
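
For the IDS comparison mentioned above, a variant of the same command using the --simulate-ips switch might look like (a sketch):

sudo ./src/suricata -c suricata-ips.yaml -l /tmp/ -S ./rules/suricata-one.rules -vvvv --dpdk --simulate-ips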

Edit1:

OK, I think the solution here is to better fine-tune the RX/TX descriptors so that the cost of file operations is amortized over the size of the buffers while, at the same time, the buffers are not too big…

After fine-tuning Suricata for this extreme case, the performance of DPDK was better than or the same as AF_PACKET's. I tested this on both the single 151020 PCAP and on all 4SICS PCAPs.

For me the sweet spot was 32768 descriptors, so the DPDK settings looked like:

mempool-size: 262143
mempool-cache-size: 511
rx-descriptors: 32768
tx-descriptors: 32768
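
To verify the effect of such tuning while traffic is replayed, one option might be to watch the capture counters over the unix socket (a sketch; suricatasc ships with Suricata, and the exact counter names depend on the capture method):

# dump runtime counters and filter for drop/miss statistics (sketch)
sudo suricatasc -c dump-counters | grep -i -E 'drop|miss'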

My bad, I replied to you via Gmail; I don't know why the extended information got lost when it exists in Gmail :joy:

Here’s the information:

Notice: suricata: This is Suricata version 7.0.0-rc2 RELEASE running in SYSTEM mode [LogVersion:suricata.c:1154]                                                                                                                           
Info: cpu: CPUs/cores online: 40 [UtilCpuPrintSummary:util-cpu.c:182]                                                                                                                                                                      
Config: device: Adding interface 0000:84:00.0 from config file [LiveBuildDeviceListCustom:util-device.c:295]                                                                                                                               
Config: luajit: luajit states preallocated: 256 [LuajitSetupStatesPool:util-luajit.c:99]                                                                                                                                                   
Info: suricata: Setting engine mode to IDS mode by default [PostConfLoadedSetup:suricata.c:2698]                                                                                                                                           
Config: exception-policy: exception-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                                   
Config: exception-policy: app-layer.error-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                             
Config: app-layer-htp: 'default' server has 'request-body-minimal-inspect-size' set to 33593 and 'request-body-inspect-window' set to 4235 after randomization. [HTPConfigSetDefaultsPhase2:app-layer-htp.c:2567]                          
Config: app-layer-htp: 'default' server has 'response-body-minimal-inspect-size' set to 42049 and 'response-body-inspect-window' set to 16715 after randomization. [HTPConfigSetDefaultsPhase2:app-layer-htp.c:2580]                       
Config: smb: read: max record size: 16777216, max queued chunks 64, max queued size 67108864 [suricata::smb::smb::rs_smb_register_parser:smb.rs:2428]                                                                                      
Config: smb: write: max record size: 16777216, max queued chunks 64, max queued size 67108864 [suricata::smb::smb::rs_smb_register_parser:smb.rs:2430]                                                                                     
Info: app-layer-ftp: FTP memcap: 67108864 [FTPParseMemcap:app-layer-ftp.c:129]                                                                                                                                                             
Config: app-layer-enip: Protocol detection and parser disabled for enip protocol. [RegisterENIPUDPParsers:app-layer-enip.c:539]                                                                                                            
Config: app-layer-dnp3: Protocol detection and parser disabled for DNP3. [RegisterDNP3Parsers:app-layer-dnp3.c:1565]                                                                                                                       
Config: host: allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64 [HostInitConfig:host.c:259]                                                                                                                    
Config: host: preallocated 1000 hosts of size 136 [HostInitConfig:host.c:283]                                                                                                                                                              
Config: host: host memory usage: 398144 bytes, maximum: 536870912 [HostInitConfig:host.c:285]                                                                                                                                              
Info: coredump-config: Max dump is 0 [CoredumpLoadConfig:util-coredump-config.c:131]                                                                                                                                                       
Info: coredump-config: Core dump setting attempted is 0 [CoredumpLoadConfig:util-coredump-config.c:201]                                                                                                                                    
Info: coredump-config: Core dump size set to 0 [CoredumpLoadConfig:util-coredump-config.c:213]                                                                                                                                             
Config: exception-policy: defrag.memcap-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                               
Config: defrag-hash: allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56 [DefragInitConfig:defrag-hash.c:254]                                                                                                
Config: defrag-hash: preallocated 65535 defrag trackers of size 160 [DefragInitConfig:defrag-hash.c:281]                                                                                                                                   
Config: defrag-hash: defrag memory usage: 14155616 bytes, maximum: 1073741824 [DefragInitConfig:defrag-hash.c:288]                                                                                                                         
Config: exception-policy: flow.memcap-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                                 
Config: flow: flow size 296, memcap allows for 29020049 flows. Per hash row in perfect conditions 3 [FlowInitConfig:flow.c:673]                                                                                                            
Config: stream-tcp: stream "prealloc-sessions": 1000000 (per thread) [StreamTcpInitConfig:stream-tcp.c:393]                                                                                                                                
Config: stream-tcp: stream "memcap": 12884901888 [StreamTcpInitConfig:stream-tcp.c:412]                                                                                                                                                    
Config: stream-tcp: stream "midstream" session pickups: enabled [StreamTcpInitConfig:stream-tcp.c:420]                                                                                                                                     
Config: stream-tcp: stream "async-oneside": enabled [StreamTcpInitConfig:stream-tcp.c:428]                                                                                                                                                 
Config: stream-tcp: stream "checksum-validation": disabled [StreamTcpInitConfig:stream-tcp.c:445]                                                                                                                                          
Config: exception-policy: stream.memcap-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                               
Config: exception-policy: stream.reassembly.memcap-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                    
Config: exception-policy: stream.midstream-policy: ignore (defined via 'built-in default' for IDS-mode) [ExceptionPolicyGetDefault:util-exception-policy.c:226]                                                                            
Config: stream-tcp: stream."inline": disabled [StreamTcpInitConfig:stream-tcp.c:477]                                                                                                                                                       
Config: stream-tcp: stream "bypass": enabled [StreamTcpInitConfig:stream-tcp.c:490]                                                                                                                                                        
Config: stream-tcp: stream "max-syn-queued": 10 [StreamTcpInitConfig:stream-tcp.c:512]                                                                                                                                                     
Config: stream-tcp: stream "max-synack-queued": 5 [StreamTcpInitConfig:stream-tcp.c:525]                                                                                                                                                   
Config: stream-tcp: stream.reassembly "memcap": 4294967296 [StreamTcpInitConfig:stream-tcp.c:547]                                                                                                                                          
Config: stream-tcp: stream.reassembly "depth": 5242880 [StreamTcpInitConfig:stream-tcp.c:565]                                                                                                                                              
Config: stream-tcp: stream.reassembly "toserver-chunk-size": 5094 [StreamTcpInitConfig:stream-tcp.c:638]                                                                                                                                   
Config: stream-tcp: stream.reassembly "toclient-chunk-size": 5184 [StreamTcpInitConfig:stream-tcp.c:640]                                                                                                                                   
Config: stream-tcp: stream.reassembly.raw: enabled [StreamTcpInitConfig:stream-tcp.c:652]                                                                                                                                                  
Config: stream-tcp: stream.liberal-timestamps: disabled [StreamTcpInitConfig:stream-tcp.c:661]                                                                                                                                             
Config: stream-tcp-reassemble: stream.reassembly "segment-prealloc": 1024000 [StreamTcpReassemblyConfig:stream-tcp-reassemble.c:491]                                                                                                       
Config: stream-tcp-reassemble: stream.reassembly "max-regions": 8 [StreamTcpReassemblyConfig:stream-tcp-reassemble.c:514]                                                                                                                  
Info: conf: Running in live mode, activating unix socket [ConfUnixSocketIsEnable:util-conf.c:163]                                                                                                                                          
u:(null) n:(null)Notice: log-kafka: [thrd:app]: No `bootstrap.servers` configured: client will not be able to connect to Kafka cluster [rd_kafka_logger:util-log-kafka.c:57]                                                               
Config: output-json: Disabling eve metadata logging. [OutputJsonInitCtx:output-json.c:1167]                                                                                                                                                
Config: output-json: Enabling eve community_id logging. [OutputJsonInitCtx:output-json.c:1185]                                                                                                                                             
Config: runmodes: enabling 'eve-log' module 'alert' [RunModeInitializeEveOutput:runmodes.c:706]                                                                                                                                            
Config: runmodes: enabling 'eve-log' module 'anomaly' [RunModeInitializeEveOutput:runmodes.c:706]                                                                                                                                          
Config: runmodes: enabling 'eve-log' module 'http' [RunModeInitializeEveOutput:runmodes.c:706]                                                                                                                                             
Config: runmodes: enabling 'eve-log' module 'dns' [RunModeInitializeEveOutput:runmodes.c:706]                                                                                                                                              
Config: runmodes: enabling 'eve-log' module 'stats' [RunModeInitializeEveOutput:runmodes.c:706]                                                                                                                                            
Info: logopenfile: stats output device (regular) initialized: stats.log [SCConfLogOpenGeneric:util-logopenfile.c:618]                                                                                                                      
Config: suricata: Delayed detect disabled [SetupDelayedDetect:suricata.c:2416]                                                                                                                                                             
Config: detect: pattern matchers: MPM: hs, SPM: hs [DetectEngineCtxInitReal:detect-engine.c:2493]                                                                                                                                          
Config: detect: grouping: tcp-whitelist 53, 80, 443, 445, 8000, 3000, 3389, 22, 6379, 8080 [DetectEngineCtxLoadConf:detect-engine.c:2901]                                                                                                  
Config: detect: grouping: udp-whitelist 53 [DetectEngineCtxLoadConf:detect-engine.c:2927]                                                                                                                                                  
Config: detect: prefilter engines: MPM and keywords [DetectEngineCtxLoadConf:detect-engine.c:2963]                                                                                                                                         
Config: reputation: IP reputation disabled [SRepInit:reputation.c:607]                                                                                                                                                                     
Config: detect: Loading rule file: /etc/suricata/rules/local.rules [ProcessSigFiles:detect-engine-loader.c:260] 

....rule info....

Info: detect: 1 rule files processed. 3185 rules successfully loaded, 14 rules failed [SigLoadSignatures:detect-engine-loader.c:364]
Info: threshold-config: Threshold config parsed: 0 rule(s) found [SCThresholdConfParseFile:util-threshold-config.c:1112]
Config: detect: sid 2019010001: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2019010003: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2019010005: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2000010283: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2000010284: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2000010285: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2000010245: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999710005: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999710006: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999710007: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999720001: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999720005: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999720006: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999730005: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999730006: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999730007: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999730013: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 1999750001: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2009180002: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2009180003: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2009180004: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2009180005: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2012020001: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060002: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060004: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060005: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060006: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060007: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060008: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060009: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060010: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060011: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Config: detect: sid 2034060012: prefilter is on "flow" [SigAddressPrepareStage1:detect-engine-build.c:1479]
Info: detect: 3191 signatures processed. 0 are IP-only rules, 1453 are inspecting packet payload, 1549 inspect application layer, 0 are decoder event only [SigAddressPrepareStage1:detect-engine-build.c:1504]
Config: detect: building signature grouping structure, stage 1: preprocessing rules... complete [SigAddressPrepareStage1:detect-engine-build.c:1507]
Warning: detect-flowbits: flowbit 'file.zip' is checked but not set. Checked in 2008470015 and 3 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'EternalRomance.RaceCondition.Attempt' is checked but not set. Checked in 2000010111 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'SMB.NTTrans.Req' is checked but not set. Checked in 2000010118 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'CVE.2018-15442.Probe' is checked but not set. Checked in 2000010154 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'ET.genericphish' is checked but not set. Checked in 2010030017 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'file.exe' is checked but not set. Checked in 2004020021 and 10 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'FB363347_3' is checked but not set. Checked in 2004020066 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'ET.http.binary' is checked but not set. Checked in 2010010006 and 2 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'et.JavaArchiveOrClass' is checked but not set. Checked in 2008450001 and 3 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'file.xls' is checked but not set. Checked in 2025020007 and 32 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'file.xlsb' is checked but not set. Checked in 2025020009 and 9 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'file.xlsx' is checked but not set. Checked in 2025020114 and 5 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'nsfocus_UTS2' is checked but not set. Checked in 2008770003 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'BSis.vnc.setup' is checked but not set. Checked in 2002500004 and 1 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'ET.armwget' is checked but not set. Checked in 2010020001 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'ET.telnet.busybox' is checked but not set. Checked in 2011690109 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'SMB.Trans2.SubCommand.Unimplemented' is checked but not set. Checked in 2011600011 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Warning: detect-flowbits: flowbit 'cmars.jboss' is checked but not set. Checked in 2008420020 and 0 other sigs [DetectFlowbitsAnalyze:detect-flowbits.c:597]
Perf: detect: TCP toserver: 76 port groups, 54 unique SGH's, 22 copies [RulesGroupByPorts:detect-engine-build.c:1297]
Perf: detect: TCP toclient: 39 port groups, 19 unique SGH's, 20 copies [RulesGroupByPorts:detect-engine-build.c:1297]
Perf: detect: UDP toserver: 22 port groups, 12 unique SGH's, 10 copies [RulesGroupByPorts:detect-engine-build.c:1297]
Perf: detect: UDP toclient: 10 port groups, 6 unique SGH's, 4 copies [RulesGroupByPorts:detect-engine-build.c:1297]
Perf: detect: OTHER toserver: 254 proto groups, 2 unique SGH's, 252 copies [RulesGroupByProto:detect-engine-build.c:1051]
Perf: detect: OTHER toclient: 254 proto groups, 0 unique SGH's, 254 copies [RulesGroupByProto:detect-engine-build.c:1084]
Perf: detect: Unique rule groups: 93 [SigAddressPrepareStage4:detect-engine-build.c:1861]
Perf: detect: Builtin MPM "toserver TCP packet": 31 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toclient TCP packet": 7 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toserver TCP stream": 47 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toclient TCP stream": 18 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toserver UDP packet": 12 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "toclient UDP packet": 6 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: Builtin MPM "other IP packet": 2 [MpmStoreReportStats:detect-engine-mpm.c:1480]
Perf: detect: AppLayer MPM "toserver http_uri (http)": 12 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_uri (http2)": 12 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_raw_uri (http)": 3 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_raw_uri (http2)": 3 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_request_line (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_request_line (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_client_body (http)": 5 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_client_body (http2)": 5 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_header (http)": 14 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_header (http)": 14 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_header (http2)": 14 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_header (http2)": 14 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_header_names (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_header_names (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_header_names (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_header_names (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_accept (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_accept (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_referer (http)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_referer (http2)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_content_type (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_content_type (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_content_type (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_content_type (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http.server (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http.server (http2)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_start (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_start (http)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_raw_header (http)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_raw_header (http)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_raw_header (http2)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_raw_header (http2)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_method (http)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_method (http2)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_cookie (http)": 3 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_cookie (http)": 3 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_cookie (http2)": 3 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_cookie (http2)": 3 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_user_agent (http)": 4 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_user_agent (http2)": 4 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_host (http)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver http_host (http2)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_stat_code (http)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient http_stat_code (http2)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver dns_query (dns)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver tls.sni (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver tls.cert_issuer (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient tls.cert_issuer (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver tls.cert_subject (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient tls.cert_subject (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient tls.cert_serial (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver tls.cert_serial (tls)": 1 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient tls.cert_fingerprint (tls)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver tls.cert_fingerprint (tls)": 2 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient file_data (nfs)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (nfs)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient file_data (smb)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (smb)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient file_data (ftp)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (ftp)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient file_data (ftp-data)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (ftp-data)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient file_data (http)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (http)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toclient file_data (http2)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (http2)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Perf: detect: AppLayer MPM "toserver file_data (smtp)": 40 [MpmStoreReportStats:detect-engine-mpm.c:1488]
Config: affinity: Found affinity definition for "management-cpu-set" [AffinitySetupLoadFromConfig:util-affinity.c:201]
Config: affinity: Found affinity definition for "receive-cpu-set" [AffinitySetupLoadFromConfig:util-affinity.c:201]
Config: affinity: Found affinity definition for "worker-cpu-set" [AffinitySetupLoadFromConfig:util-affinity.c:201]
Config: affinity: Using default prio 'high' for set 'worker-cpu-set' [AffinitySetupLoadFromConfig:util-affinity.c:249]
TELEMETRY: No legacy callbacks, legacy socket not created
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_STRIP - available [DumpRXOffloadCapabilities:runmode-dpdk.c:985]
Config: dpdk: RTE_ETH_RX_OFFLOAD_IPV4_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:987]
Config: dpdk: RTE_ETH_RX_OFFLOAD_UDP_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:989]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_CKSUM - available [DumpRXOffloadCapabilities:runmode-dpdk.c:991]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TCP_LRO - available [DumpRXOffloadCapabilities:runmode-dpdk.c:993]
Config: dpdk: RTE_ETH_RX_OFFLOAD_QINQ_STRIP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:995]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:997]
Config: dpdk: RTE_ETH_RX_OFFLOAD_MACSEC_STRIP - available [DumpRXOffloadCapabilities:runmode-dpdk.c:999]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_FILTER - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1005]
Config: dpdk: RTE_ETH_RX_OFFLOAD_VLAN_EXTEND - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1007]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCATTER - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1009]
Config: dpdk: RTE_ETH_RX_OFFLOAD_TIMESTAMP - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1011]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SECURITY - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1013]
Config: dpdk: RTE_ETH_RX_OFFLOAD_KEEP_CRC - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1015]
Config: dpdk: RTE_ETH_RX_OFFLOAD_SCTP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1017]
Config: dpdk: RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1019]
Config: dpdk: RTE_ETH_RX_OFFLOAD_RSS_HASH - available [DumpRXOffloadCapabilities:runmode-dpdk.c:1021]
Config: dpdk: RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT - NOT available [DumpRXOffloadCapabilities:runmode-dpdk.c:1024]
Config: dpdk: 0000:84:00.0: RSS enabled for 32 queues [DeviceInitPortConf:runmode-dpdk.c:1097]
Config: dpdk: 0000:84:00.0: checksum validation disabled [DeviceInitPortConf:runmode-dpdk.c:1132]
Config: dpdk: 0000:84:00.0: setting MTU to 1500 [DeviceConfigure:runmode-dpdk.c:1463]
Config: dpdk: 0000:84:00.0: creating packet mbuf pool mempool_0000:84:00.0 of size 1452000, cache size 512, mbuf size 2176 [DeviceConfigureQueues:runmode-dpdk.c:1168]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:0 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:1 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:2 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:3 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:4 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:5 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:6 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:7 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:8 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:9 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:10 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:11 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:12 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:13 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:14 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:15 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:16 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:17 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:18 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:19 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:20 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:21 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:22 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:23 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:24 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:25 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:26 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:27 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:28 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:29 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:30 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: rx queue setup: queue:31 port:0 rx_desc:1024 tx_desc:1024 rx: hthresh: 0 pthresh 0 wthresh 0 free_thresh 0 drop_en 0 offloads 0 [DeviceConfigureQueues:runmode-dpdk.c:1191]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:0 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:1 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:2 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:3 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:4 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:5 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:6 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:7 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:8 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:9 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:10 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:11 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:12 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:13 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:14 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:15 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:16 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:17 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:18 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:19 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:20 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:21 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:22 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:23 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:24 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:25 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:26 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:27 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:28 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:29 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:30 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Config: dpdk: 0000:84:00.0: tx queue setup: queue:31 port:0 [DeviceConfigureQueues:runmode-dpdk.c:1207]
Info: runmodes: 0000:84:00.0: creating 32 threads [RunModeSetLiveCaptureWorkersForDevice:util-runmodes.c:255]
Perf: threads: Setting prio -2 for thread "W#01-84:00.0" to cpu/core 8, thread id 6632 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#02-84:00.0" to cpu/core 9, thread id 6634 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#03-84:00.0" to cpu/core 10, thread id 6635 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#04-84:00.0" to cpu/core 11, thread id 6636 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#05-84:00.0" to cpu/core 12, thread id 6637 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#06-84:00.0" to cpu/core 13, thread id 6638 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#07-84:00.0" to cpu/core 14, thread id 6639 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#08-84:00.0" to cpu/core 15, thread id 6640 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#09-84:00.0" to cpu/core 16, thread id 6641 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#10-84:00.0" to cpu/core 17, thread id 6642 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#11-84:00.0" to cpu/core 18, thread id 6643 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#12-84:00.0" to cpu/core 19, thread id 6644 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#13-84:00.0" to cpu/core 20, thread id 6645 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#14-84:00.0" to cpu/core 21, thread id 6646 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#15-84:00.0" to cpu/core 22, thread id 6647 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#16-84:00.0" to cpu/core 23, thread id 6648 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#17-84:00.0" to cpu/core 24, thread id 6649 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#18-84:00.0" to cpu/core 25, thread id 6650 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#19-84:00.0" to cpu/core 26, thread id 6652 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#20-84:00.0" to cpu/core 27, thread id 6653 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#21-84:00.0" to cpu/core 28, thread id 6654 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#22-84:00.0" to cpu/core 29, thread id 6655 [TmThreadSetupOptions:tm-threads.c:874]
Perf: dpdk: 0000:84:00.0: NIC is on NUMA 1, thread on NUMA 0 [ReceiveDPDKThreadInit:source-dpdk.c:523]
Perf: threads: Setting prio -2 for thread "W#23-84:00.0" to cpu/core 30, thread id 6656 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#24-84:00.0" to cpu/core 31, thread id 6657 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#25-84:00.0" to cpu/core 32, thread id 6658 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#26-84:00.0" to cpu/core 33, thread id 6681 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#27-84:00.0" to cpu/core 34, thread id 6682 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#28-84:00.0" to cpu/core 35, thread id 6683 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#29-84:00.0" to cpu/core 36, thread id 6684 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#30-84:00.0" to cpu/core 37, thread id 6685 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#31-84:00.0" to cpu/core 38, thread id 6686 [TmThreadSetupOptions:tm-threads.c:874]
Perf: threads: Setting prio -2 for thread "W#32-84:00.0" to cpu/core 39, thread id 6687 [TmThreadSetupOptions:tm-threads.c:874]
Warning: dpdk: 0000:84:00.0: NIC is on NUMA 1, 12 threads on different NUMA node(s) [ReceiveDPDKThreadInit:source-dpdk.c:552]
Config: flow-manager: using 2 flow manager threads [FlowManagerThreadSpawn:flow-manager.c:956]
Perf: threads: Setting prio 0 for thread "FM#01", thread id 6691 [TmThreadSetupOptions:tm-threads.c:880]
Perf: threads: Setting prio 0 for thread "FM#02", thread id 6692 [TmThreadSetupOptions:tm-threads.c:880]
Config: flow-manager: using 2 flow recycler threads [FlowRecyclerThreadSpawn:flow-manager.c:1162]
Perf: threads: Setting prio 0 for thread "FR#01", thread id 6694 [TmThreadSetupOptions:tm-threads.c:880]
Perf: threads: Setting prio 0 for thread "FR#02", thread id 6695 [TmThreadSetupOptions:tm-threads.c:880]
Perf: threads: Setting prio 0 for thread "CW", thread id 6696 [TmThreadSetupOptions:tm-threads.c:880]
Perf: threads: Setting prio 0 for thread "CS", thread id 6697 [TmThreadSetupOptions:tm-threads.c:880]
Info: unix-manager: unix socket '/var/run/suricata/suricata-command.socket' [UnixNew:unix-manager.c:136]
Perf: threads: Setting prio 0 for thread "US", thread id 6698 [TmThreadSetupOptions:tm-threads.c:880]
Notice: threads: Threads created -> W: 32 FM: 2 FR: 2   Engine started. [TmThreadWaitOnThreadRunning:tm-threads.c:1888]