Workers mode : forced to one thread?

Hello,

I’m running Suricata 5.0.2_1 on FreeBSD 11.3 (pfSense), and no matter what I put in the configuration, workers mode only uses 1 thread for packet processing. In autofp mode the engine honors detect-thread-ratio and I can adjust the number of threads, but in workers mode I get this:

26/5/2020 -- 12:34:23 - <Info> -- CPUs/cores online: 4
26/5/2020 -- 12:36:07 - <Notice> -- all 1 packet processing threads, 4 management threads initialized, engine started.

I also tried enabling cpu-affinity but no luck. Is this a bug, a misconfiguration, or by design?

Can you post the output of the command if you add -vv, plus the actual command line?

On FreeBSD we support 3 capture methods: pcap, ipfw and netmap. Of those, only the last is capable of multiple workers. I think it will create as many workers as there are RSS queues on the capture NIC.
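As a sketch (not taken from your config; the interface name is just an example, following the layout of the Suricata 5.x example suricata.yaml), a netmap capture section that lets workers scale with the NIC's queues could look like:

```yaml
# Illustrative only: with netmap, worker threads are configured per
# interface; 'auto' asks for one worker per available ring/queue.
netmap:
  - interface: ix0
    threads: auto
```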

Thanks for the info. I’m using pcap; RSS is disabled, but there are 4 queues configured on the NIC.

I guess I will stay with autofp, because I’m getting poor throughput with netmap (X550).

As for the command line, pfSense handles the startup of the service; I’m not sure where I can hack this?

/usr/local/bin/suricata -i ix0 -D -c /usr/local/etc/suricata/suricata_2589_ix0/suricata.yaml --pidfi

That is interesting. Netmap is supposed to be faster, and I think it should support that NIC/driver well.

Can’t help with the pfSense service question. Probably best to ask in a pfSense forum/support channel.

Yeah, that’s what I thought. I may give it another try, but I went from 5 Gb/s to 1.5 Gb/s with netmap (10k rules).

So I found the template for the startup script; here is the output:

26/5/2020 -- 22:16:23 - -- This is Suricata version 5.0.2 RELEASE running in SYSTEM mode
26/5/2020 -- 22:16:23 - -- CPUs/cores online: 4
26/5/2020 -- 22:16:23 - -- HTTP memcap: 67108864
26/5/2020 -- 22:16:23 - -- fast output device (regular) initialized: alerts.log
26/5/2020 -- 22:16:23 - -- http-log output device (regular) initialized: http.log
26/5/2020 -- 22:16:23 - -- stats output device (regular) initialized: stats.log
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_uri
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_raw_uri
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_request_line
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_client_body
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_response_line
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_header
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_header
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_header_names
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_header_names
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_accept
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_accept_enc
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_accept_lang
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_referer
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_connection
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_content_len
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_content_len
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_content_type
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_content_type
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http.server
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http.location
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_protocol
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_protocol
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_start
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_start
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_raw_header
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_raw_header
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_method
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_cookie
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_cookie
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.name
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file.magic
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_user_agent
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_host
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_raw_host
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_stat_msg
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for http_stat_code
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dns_query
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dnp3_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dnp3_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tls.sni
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tls.cert_issuer
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tls.cert_subject
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tls.cert_serial
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tls.cert_fingerprint
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tls.certs
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ja3.hash
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ja3.string
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ja3s.hash
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ja3s.string
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dce_stub_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dce_stub_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dce_stub_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for dce_stub_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for smb_named_pipe
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for smb_share
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ssh.proto
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ssh.proto
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ssh_software
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ssh_software
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for file_data
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for krb5_cname
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for krb5_sname
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.method
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.uri
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.protocol
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.protocol
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.method
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.stat_msg
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.request_line
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for sip.response_line
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for snmp.community
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for snmp.community
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for tcp.hdr
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for udp.hdr
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ipv4.hdr
26/5/2020 -- 22:16:23 - -- using unique mpm ctx' for ipv6.hdr
26/5/2020 -- 22:16:26 - -- 2 rule files processed. 10719 rules successfully loaded, 0 rules failed
26/5/2020 -- 22:16:26 - -- Threshold config parsed: 6 rule(s) found
26/5/2020 -- 22:16:26 - -- using unique mpm ctx' for tcp-packet
26/5/2020 -- 22:16:26 - -- using unique mpm ctx' for tcp-stream
26/5/2020 -- 22:16:26 - -- using unique mpm ctx' for udp-packet
26/5/2020 -- 22:16:26 - -- using unique mpm ctx' for other-ip
26/5/2020 -- 22:16:26 - -- 10719 signatures processed. 76 are IP-only rules, 2248 are inspecting packet payload, 8201 inspect application layer, 103 are decoder event only
26/5/2020 -- 22:16:26 - -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.ELFDownload' is checked but not set. Checked in 2019896 and 0 other sigs
26/5/2020 -- 22:16:26 - -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'et.http.PK' is checked but not set. Checked in 2019835 and 1 other sigs
26/5/2020 -- 22:16:26 - -- TCP toserver: 76 port groups, 61 unique SGH's, 15 copies
26/5/2020 -- 22:16:26 - -- TCP toclient: 76 port groups, 33 unique SGH's, 43 copies
26/5/2020 -- 22:16:26 - -- UDP toserver: 57 port groups, 29 unique SGH's, 28 copies
26/5/2020 -- 22:16:26 - -- UDP toclient: 26 port groups, 14 unique SGH's, 12 copies
26/5/2020 -- 22:16:26 - -- OTHER toserver: 254 proto groups, 2 unique SGH's, 252 copies
26/5/2020 -- 22:16:26 - -- OTHER toclient: 254 proto groups, 0 unique SGH's, 254 copies
26/5/2020 -- 22:18:04 - -- Unique rule groups: 139
26/5/2020 -- 22:18:04 - -- Builtin MPM "toserver TCP packet": 32
26/5/2020 -- 22:18:04 - -- Builtin MPM "toclient TCP packet": 22
26/5/2020 -- 22:18:04 - -- Builtin MPM "toserver TCP stream": 32
26/5/2020 -- 22:18:04 - -- Builtin MPM "toclient TCP stream": 27
26/5/2020 -- 22:18:04 - -- Builtin MPM "toserver UDP packet": 29
26/5/2020 -- 22:18:04 - -- Builtin MPM "toclient UDP packet": 14
26/5/2020 -- 22:18:04 - -- Builtin MPM "other IP packet": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_uri (http)": 10
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_raw_uri (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_request_line (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_client_body (http)": 3
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_response_line (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_header (http)": 6
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_header (http)": 6
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_header_names (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_header_names (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_accept (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_referer (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_content_len (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_content_len (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_content_type (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_content_type (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http.server (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http.location (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_start (http)": 3
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_start (http)": 3
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_raw_header (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_raw_header (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_method (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_cookie (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient http_cookie (http)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_user_agent (http)": 4
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver http_host (http)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver dns_query (dns)": 4
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver tls.sni (tls)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient tls.cert_issuer (tls)": 2
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient tls.cert_subject (tls)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient tls.cert_serial (tls)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver ssh.proto (ssh)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient ssh.proto (ssh)": 1
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver file_data (smtp)": 6
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient file_data (http)": 6
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toserver file_data (smb)": 6
26/5/2020 -- 22:18:04 - -- AppLayer MPM "toclient file_data (smb)": 6
26/5/2020 -- 22:18:05 - -- Going to use 1 thread(s)
26/5/2020 -- 22:18:05 - -- using interface ix0
26/5/2020 -- 22:18:05 - -- running in 'auto' checksum mode. Detection of interface state will require 1000ULL packets
26/5/2020 -- 22:18:05 - -- Set snaplen to 1518 for 'ix0'
26/5/2020 -- 22:18:05 - -- RunModeIdsPcapWorkers initialised
26/5/2020 -- 22:18:05 - -- all 1 packet processing threads, 4 management threads initialized, engine started.
26/5/2020 -- 22:18:05 - -- No packets with invalid checksum, assuming checksum offloading is NOT used

Can you share the suricata.yaml?

Sure, here it is:

%YAML 1.1
---

max-pending-packets: 10000

# Runmode the engine should use.
runmode: workers

# If set to auto, the variable is internally switched to 'router' in IPS 
# mode and 'sniffer-only' in IDS mode.
host-mode: router

# Specifies the kind of flow load balancer used by the flow pinned autofp mode.
autofp-scheduler: hash

# Daemon working directory
daemon-directory: /usr/local/etc/suricata/suricata_2589_ix0

default-packet-size: 1514

# The default logging directory.
default-log-dir: /var/log/suricata/suricata_ix02589

# global stats configuration
stats:
  enabled: yes
  interval: 10
  #decoder-events: true
  decoder-events-prefix: "decoder.event"
  #stream-events: false

# Configure the type of alert (and other) logging.
outputs:

  # alert-pf blocking plugin
  - alert-pf:
      enabled: yes
      kill-state: yes
      block-drops-only: no
      pass-list: /usr/local/etc/suricata/suricata_2589_ix0/passlist
      block-ip: BOTH
      pf-table: snort2c

  # a line based alerts log similar to Snort's fast.log
  - fast:
      enabled: yes
      filename: alerts.log
      append: yes
      filetype: regular

  # alert output for use with Barnyard2
  - unified2-alert:
      enabled: no
      filename: unified2.alert
      limit: 32mb
      sensor-id: 0
      xff:
        enabled: no

  - http-log:
      enabled: yes
      filename: http.log
      append: yes
      extended: yes
      filetype: regular

  - pcap-log:
      enabled: no
      filename: log.pcap
      limit: 32mb
      max-files: 1000
      mode: normal

  - tls-log:
      enabled: no
      filename: tls.log
      extended: yes

  - tls-store:
      enabled: no
      certs-log-dir: certs

  - stats:
      enabled: yes
      filename: stats.log
      append: no
      totals: yes
      threads: no
      #null-values: yes

  - syslog:
      enabled: no
      identity: suricata
      facility: local1
      level: notice

  - drop:
      enabled: no
      filename: drop.log
      append: yes
      filetype: regular

  - file-store:
      version: 2
      enabled: no
      dir: filestore
      force-magic: no
      #force-hash: [md5]
      #waldo: file.waldo

  - file-log:
      enabled: no
      filename: files-json.log
      append: yes
      filetype: regular
      force-magic: no
      #force-hash: [md5]

  - eve-log:
      enabled: no
      filetype: regular
      filename: eve.json
      redis: 
        server: 127.0.0.1
        port: 6379
        mode: list
        key: "suricata"
      identity: "suricata"
      facility: local1
      level: notice
      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For
      types: 
        - alert:
            payload: yes              # enable dumping payload in Base64
            payload-buffer-size: 4kb  # max size of payload buffer to output in eve-log
            payload-printable: yes    # enable dumping payload in printable (lossy) format
            packet: yes               # enable dumping of packet (without stream segments)
            http-body: yes            # enable dumping of http body in Base64
            http-body-printable: yes  # enable dumping of http body in printable format
            metadata: yes             # enable inclusion of app layer metadata with alert
            tagged-packets: yes       # enable logging of tagged packets for rules using the 'tag' keyword
        - http:
            extended: yes
            custom: [accept, accept-charset, accept-datetime, accept-encoding, accept-language, accept-range, age, allow, authorization, cache-control, connection, content-encoding, content-language, content-length, content-location, content-md5, content-range, content-type, cookie, date, dnt, etags, from, last-modified, link, location, max-forwards, origin, pragma, proxy-authenticate, proxy-authorization, range, referrer, refresh, retry-after, server, set-cookie, te, trailer, transfer-encoding, upgrade, vary, via, warning, www-authenticate, x-authenticated-user, x-flash-version, x-forwarded-proto, x-requested-with]
        - dns:
            version: 2
            query: yes
            answer: yes
        - tls:
            extended: yes
        - dhcp:
            extended: no
        - files:
            force-magic: no
        - ssh
        - nfs
        - smb
        - krb5
        - ikev2
        - tftp
        - smtp:
            extended: yes
            custom: [bcc, received, reply-to, x-mailer, x-originating-ip]
            md5: [subject]

# Magic file. The extension .mgc is added to the value here.
magic-file: /usr/share/misc/magic

# GeoLite2 IP geo-location database file path and filename.
geoip-database: /usr/local/share/suricata/GeoLite2/GeoLite2-Country.mmdb

# Specify a threshold config file
threshold-file: /usr/local/etc/suricata/suricata_2589_ix0/threshold.config

detect-engine:
  - profile: high
  - sgh-mpm-context: full
  - inspection-recursion-limit: 3000
  - delayed-detect: no

# Suricata is multi-threaded. Here the threading can be influenced.
threading:
  set-cpu-affinity: no
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these cpus in affinity settings
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these cpus in affinity settings
    - worker-cpu-set:
        cpu: [ 1,2,3 ]
        mode: "exclusive"
  detect-thread-ratio: 1
  
# Luajit has a strange memory requirement, it's 'states' need to be in the
# first 2G of the process' memory.
#
# 'luajit.states' is used to control how many states are preallocated.
# State use: per detect script: 1 per detect thread. Per output script: 1 per
# script.
luajit:
  states: 128

# Multi pattern algorithm
# The default mpm-algo value of "auto" will use "hs" if Hyperscan is
# available, "ac" otherwise.
mpm-algo: hs

# Single pattern algorithm
# The default of "auto" will use "hs" if available, otherwise "bm".
spm-algo: auto

# PCAP
pcap:
  - interface: ix0
    checksum-checks: auto
    promisc: yes
    snaplen: 1518

# Defrag settings:
defrag:
  memcap: 134217728
  hash-size: 131070
  trackers: 131070
  max-frags: 131070
  prealloc: yes
  timeout: 30

# Flow settings:
flow:
  memcap: 33554432
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  prune-flows: 5

# This option controls the use of vlan ids in the flow (and defrag)
# hashing.
vlan:
  use-for-tracking: true

# Specific timeouts for flows.
flow-timeouts:
  default:
    new: 30
    established: 300
    closed: 0
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
  tcp:
    new: 60
    established: 3600
    closed: 120
    emergency-new: 10
    emergency-established: 300
    emergency-closed: 20
  udp:
    new: 30
    established: 300
    emergency-new: 10
    emergency-established: 100
  icmp:
    new: 30
    established: 300
    emergency-new: 10
    emergency-established: 100

stream:
  memcap: 512mb
  checksum-validation: no
  inline: auto
  prealloc-sessions: 65536
  midstream: false
  async-oneside: false
  max-synack-queued: 5
  bypass: yes
  reassembly:
    memcap: 3gb
    depth: 0
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560

# Host table is used by tagging and per host thresholding subsystems.
host:
  hash-size: 4096
  prealloc: 1000
  memcap: 33554432

# Host specific policies for defragmentation and TCP stream reassembly.
host-os-policy:
  bsd: [0.0.0.0/0]

# Logging configuration.  This is not about logging IDS alerts, but
# IDS output about what its doing, errors, etc.
logging:

  # This value is overriden by the SC_LOG_LEVEL env var.
  default-log-level: info
  default-log-format: "%t - <%d> -- "

  # Define your logging outputs.
  outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata_ix02589/suricata.log
  - syslog:
      enabled: no
      facility: off
      format: "[%i] <%d> -- "

# IPS Mode Configuration

legacy:
  uricontent: enabled

default-rule-path: /usr/local/etc/suricata/suricata_2589_ix0/rules
rule-files:
 - suricata.rules
 - flowbit-required.rules

classification-file: /usr/local/etc/suricata/suricata_2589_ix0/classification.config
reference-config-file: /usr/local/etc/suricata/suricata_2589_ix0/reference.config

# Holds variables that would be used by the engine.
vars:

  # Holds the address group vars that would be passed in a Signature.
  address-groups:
    DNS_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "$HOME_NET"
    HTTP_SERVERS: "$HOME_NET"
    SQL_SERVERS: "$HOME_NET"
    TELNET_SERVERS: "$HOME_NET"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    FTP_SERVERS: "$HOME_NET"
    SSH_SERVERS: "$HOME_NET"
    SIP_SERVERS: "$HOME_NET"

  # Holds the port group vars that would be passed in a Signature.
  port-groups:
    FTP_PORTS: "21"
    HTTP_PORTS: "80"
    ORACLE_PORTS: "1521"
    SSH_PORTS: "22"
    SHELLCODE_PORTS: "!80"
    DNP3_PORTS: "20000"
    FILE_DATA_PORTS: "$HTTP_PORTS, 110, 143"
    SIP_PORTS: "5060, 5061, 5600"

# Set the order of alerts based on actions
action-order:
  - pass
  - drop
  - reject
  - alert

# IP Reputation


# Limit for the maximum number of asn1 frames to decode (default 256)
asn1-max-frames: 256

engine-analysis:
  rules-fast-pattern: yes
  rules: yes

#recursion and match limits for PCRE where supported
pcre:
  match-limit: 3500
  match-limit-recursion: 1500

# Holds details on the app-layer. The protocols section details each protocol.
app-layer:
  protocols:
    dcerpc:
      enabled: yes
    dhcp:
      enabled: yes
    dnp3:
      enabled: yes
      detection-ports:
        dp: 20000
    dns:
      global-memcap: 16777216
      state-memcap: 524288
      request-flood: 500
      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    ftp:
      enabled: yes
    http:
      enabled: yes
      memcap: 67108864
    ikev2:
      enabled: yes
    imap:
      enabled: detection-only
    krb5:
      enabled: yes
    modbus:
      enabled: yes
      request-flood: 500
      detection-ports:
        dp: 502
      stream-depth: 0
    msn:
      enabled: no
    nfs:
      enabled: yes
    ntp:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443,993
      ja3-fingerprints: no
      encrypt-handling: bypass
    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445
    smtp:
      enabled: yes
      mime:
        decode-mime: no
        decode-base64: yes
        decode-quoted-printable: yes
        header-value-depth: 2000
        extract-urls: yes
        body-md5: no
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 4096
    ssh:
      enabled: yes
    tftp:
      enabled: yes

###########################################################################
# Configure libhtp.
libhtp:
   default-config:
     personality: IDS
     request-body-limit: 4096
     response-body-limit: 4096
     meta-field-limit: 18432
     double-decode-path: no
     double-decode-query: no
     uri-include-all: no

   

coredump:
  max-dump: unlimited

# Suricata user pass through configuration

You could try netmap there, with ipfw as a fallback. Another thing to try would be enabling CPU affinity.

Like I said, enabling affinity did not solve the issue. I think it should be documented that workers mode on FreeBSD with pcap is limited to one thread, because according to the documentation workers mode is recommended for performance setups!

It’s correct that runmode workers is faster in most cases, but the pcap capture method is also not very fast. You can also add threads: 8 directly (or any other value suitable for your setup).

But if you use netmap (or ipfw), what does the thread allocation look like?

I tried the threads param but still get one thread. As for autofp with pcap, it is actually quite fast: I’m pushing 5 Gb/s on a 7-year-old mini-ITX board. I did not try workers mode with netmap, but I’ll give it another try since I changed some parameters in the meantime (interrupt moderation, MSI-X, RSS, ...).

ipfw is not supported by pfSense, and I prefer not to diverge too much from the OS.

So I can reproduce it: with runmode workers and the pcap capture method I get just 1 packet processing thread. But when I set

pcap:
  - interface: enp4s0
    threads: 16

I end up with 16 packet processing threads.

Thanks, I was setting it in the threading: section!
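To spell out the difference for anyone else who lands here (a sketch; the interface name and thread count are just examples):

```yaml
# Misplaced: a 'threads' entry under 'threading:' is not picked up
# by the capture method (this is where I had put it):
#threading:
#  threads: 8

# Working: per interface, under the capture method's own section:
pcap:
  - interface: ix0
    threads: 8
```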

Be aware that you’ll want to disable this rule:

SURICATA STREAM pkt seen on wrong thread

I got a little surprise when I restarted Suricata :smile:

Well, that’s a different story; see https://redmine.openinfosecfoundation.org/issues/2725. That is a stream event rule you would run for debugging/testing, but not in production (at least in most cases).
You will still see it in the stats as wrong_threads, and it should be tested with netmap as well.

With normal libpcap, this will only lead to each thread capturing the same packets, so in this case 16 copies of the same packet. This setting is meant to be used with custom libpcap builds, like the one from Myricom, that actually split the traffic and balance it by flow.

Yeah, I went back to autofp; I noticed an increase in dropped packets…

I did make an interesting discovery, though: when setting pcap.threads: 1 the load is distributed among the cores, and if I remove the entry only one core is utilized in workers mode :thinking:

Bumping this thread as I have upgraded to pfSense 2.5 (Suricata 5.0.6), which now correctly supports netmap. Unfortunately I’m still seeing this one-thread limit, which cripples bandwidth:

10/6/2021 -- 13:32:06 - <Notice> -- This is Suricata version 5.0.6 RELEASE running in SYSTEM mode
10/6/2021 -- 13:32:06 - <Info> -- CPUs/cores online: 4
10/6/2021 -- 13:32:06 - <Info> -- HTTP memcap: 134217728
10/6/2021 -- 13:32:06 - <Notice> -- using flow hash instead of active packets
10/6/2021 -- 13:32:06 - <Info> -- Netmap: Setting IPS mode
10/6/2021 -- 13:32:06 - <Info> -- fast output device (regular) initialized: alerts.log
10/6/2021 -- 13:32:06 - <Info> -- http-log output device (regular) initialized: http.log
10/6/2021 -- 13:32:06 - <Info> -- Setting logging socket of non-blocking in live mode.
10/6/2021 -- 13:32:06 - <Info> -- eve-log output device (unix_stream) initialized: /var/run/suricata-stats.sock
10/6/2021 -- 13:32:10 - <Error> -- [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - depth or urilen 11 smaller than content len 17
10/6/2021 -- 13:32:10 - <Error> -- [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - error parsing signature "alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"MALWARE-CNC Win.Trojan.Scranos variant outbound connection"; flow:to_server,established; content:"/fb/apk/index.php"; fast_pattern:only; http_uri; urilen:<10; metadata:impact_flag red, policy balanced-ips drop, policy max-detect-ips drop, policy security-ips drop, service http; reference:url,www.virustotal.com/gui/url/02736e4c0b9fe923602cfe739f05d82c7141fd36581b3dc7cec65cf20f9cc1a0/detection; classtype:trojan-activity; sid:50525; rev:1;)" from file /usr/local/etc/suricata/suricata_2589_ix0/rules/suricata.rules at line 13201
10/6/2021 -- 13:32:10 - <Error> -- [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - "http_header" keyword seen with a sticky buffer still set.  Reset sticky buffer with pkt_data before using the modifier.
10/6/2021 -- 13:32:10 - <Error> -- [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - error parsing signature "alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"MALWARE-CNC Osx.Trojan.Janicab runtime traffic detected"; flow:to_client,established; file_data; content:"content=|22|just something i made up for fun, check out my website at"; fast_pattern:only; content:"X-YouTube-Other-Cookies:"; nocase; http_header; metadata:impact_flag red, policy balanced-ips drop, policy max-detect-ips drop, policy security-ips drop, service http; reference:cve,2012-0158; reference:url,www.virustotal.com/file/3bc13adad9b7b60354d83bc27a507864a2639b43ec835c45d8b7c565e81f1a8f/analysis/; classtype:trojan-activity; sid:27544; rev:3;)" from file /usr/local/etc/suricata/suricata_2589_ix0/rules/suricata.rules at line 13954
10/6/2021 -- 13:32:10 - <Error> -- [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - previous keyword has a fast_pattern:only; set. Can't have relative keywords around a fast_pattern only content
10/6/2021 -- 13:32:10 - <Error> -- [ERRCODE: SC_ERR_INVALID_SIGNATURE(39)] - error parsing signature "alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"MALWARE-OTHER Win.Trojan.Zeus Spam 2013 dated zip/exe HTTP Response - potential malware download"; flow:to_client,established; content:"-2013.zip|0D 0A|"; fast_pattern:only; content:"-2013.zip|0D 0A|"; http_header; content:"-"; within:1; distance:-14; http_header; file_data; content:"-2013.exe"; content:"-"; within:1; distance:-14; metadata:impact_flag red, policy balanced-ips drop, policy max-detect-ips drop, policy security-ips drop, ruleset community, service http; reference:url,www.virustotal.com/en/file/2eff3ee6ac7f5bf85e4ebcbe51974d0708cef666581ef1385c628233614b22c0/analysis/; classtype:trojan-activity; sid:26470; rev:2;)" from file /usr/local/etc/suricata/suricata_2589_ix0/rules/suricata.rules at line 14514
10/6/2021 -- 13:32:10 - <Info> -- 3 rule files processed. 14759 rules successfully loaded, 3 rules failed
10/6/2021 -- 13:32:10 - <Info> -- Threshold config parsed: 0 rule(s) found
10/6/2021 -- 13:32:10 - <Info> -- 14764 signatures processed. 61 are IP-only rules, 2583 are inspecting packet payload, 11361 inspect application layer, 103 are decoder event only
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.ELFDownload' is checked but not set. Checked in 2019896 and 0 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.exe' is checked but not set. Checked in 24144 and 203 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.doc' is checked but not set. Checked in 45647 and 7 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.elf' is checked but not set. Checked in 26531 and 6 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.rtf' is checked but not set. Checked in 50008 and 3 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.ole' is checked but not set. Checked in 38525 and 17 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.msi' is checked but not set. Checked in 47593 and 1 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.gif' is checked but not set. Checked in 46407 and 0 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.xls&file.ole' is checked but not set. Checked in 30990 and 1 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.pdf' is checked but not set. Checked in 53364 and 1 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.png' is checked but not set. Checked in 47868 and 1 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.doc|file.docm' is checked but not set. Checked in 43975 and 1 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.xls' is checked but not set. Checked in 42822 and 3 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.zip' is checked but not set. Checked in 27059 and 7 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.pyc' is checked but not set. Checked in 27548 and 1 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'file.universalbinary' is checked but not set. Checked in 24799 and 3 other sigs
10/6/2021 -- 13:32:10 - <Warning> -- [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'et.http.PK' is checked but not set. Checked in 2019835 and 1 other sigs
10/6/2021 -- 13:33:54 - <Info> -- Disabling promiscuous mode on iface ix0
10/6/2021 -- 13:33:54 - <Info> -- Disabling promiscuous mode on iface ix0^
10/6/2021 -- 13:33:54 - <Info> -- Going to use 1 thread(s)
10/6/2021 -- 13:33:55 - <Notice> -- opened netmap:ix0/R from ix0: 0x814519000
10/6/2021 -- 13:33:55 - <Notice> -- opened netmap:ix0^ from ix0^: 0x814519300
10/6/2021 -- 13:33:55 - <Info> -- Disabling promiscuous mode on iface ix0^
10/6/2021 -- 13:33:55 - <Info> -- Disabling promiscuous mode on iface ix0
10/6/2021 -- 13:33:55 - <Info> -- Going to use 1 thread(s)
10/6/2021 -- 13:33:55 - <Notice> -- opened netmap:ix0^ from ix0^: 0x81bdbb000
10/6/2021 -- 13:33:56 - <Notice> -- opened netmap:ix0/T from ix0: 0x81bdbb300
10/6/2021 -- 13:33:56 - <Notice> -- all 2 packet processing threads, 4 management threads initialized, engine started.

This is in workers mode, and it happens no matter what I configure in the threading section:

threading:
  set-cpu-affinity: no
  detect-thread-ratio: 2.0

So I have managed to recover the bandwidth by setting `threads` explicitly in the netmap block, because `auto` forces it to 1.

But I’m still seeing only one thread assigned to the second interface, ix0^:

11/6/2021 -- 00:18:47 - <Info> -- Disabling promiscuous mode on iface ix0
11/6/2021 -- 00:18:47 - <Info> -- Disabling promiscuous mode on iface ix0^
11/6/2021 -- 00:18:47 - <Info> -- Going to use 2 thread(s)
11/6/2021 -- 00:18:48 - <Notice> -- opened netmap:ix0-0/R from ix0: 0x812852000
11/6/2021 -- 00:18:48 - <Notice> -- opened netmap:ix0^ from ix0^: 0x812852300
11/6/2021 -- 00:18:48 - <Notice> -- opened netmap:ix0-1/R from ix0: 0x846aeb000
11/6/2021 -- 00:18:50 - <Notice> -- opened netmap:ix0^ from ix0^: 0x89ee55300
11/6/2021 -- 00:18:50 - <Info> -- Disabling promiscuous mode on iface ix0^
11/6/2021 -- 00:18:50 - <Info> -- Disabling promiscuous mode on iface ix0
**11/6/2021 -- 00:18:50 - <Info> -- Going to use 1 thread(s)**
netmap:
  - interface: default
    threads: 2
    copy-mode: ips
    disable-promisc: yes
    checksum-checks: auto
  - interface: ix0
    copy-iface: ix0^
  - interface: ix0^
    copy-iface: ix0
23088 root         36   16  3963M   670M nanslp   2   1:53   0.04% suricata{suricata}
23088 root         95   16  3963M   670M CPU0     0   1:01  41.71% suricata{W#01-ix0^}
23088 root         38   16  3963M   670M select   0   1:02   6.06% suricata{W#01-ix0}
23088 root         36   16  3963M   670M uwait    0   2:26   0.58% suricata{FM#01}
23088 root        -92   16  3963M   670M select   2   0:35   0.38% suricata{W#02-ix0}
23088 root         36   16  3963M   670M uwait    0   0:04   0.09% suricata{CS}

Not sure why ix0^ is forced to 1 thread?

I don’t recall the exact reasons, but the ^ suffix makes the interface use the host stack’s software ring, of which there is apparently just one. So the code forces a single thread for that ring.