Suricata 6 on PF_RING and Silicom Fiberblaze FPGA

Hello - the original post had become too convoluted over time with updates, so this first post now contains all the pertinent information needed to be operational with zero packet loss.

I use the NTOP packages for CentOS to install pf_ring and update it via the dnf package manager.
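For reference, the install looks roughly like the following. This is only a sketch: the repo URL and package names follow ntop's packaging docs and may differ for your CentOS release, so verify them before running.

# Add the ntop package repository (check packages.ntop.org for your release)
curl https://packages.ntop.org/centos-stable/ntop.repo -o /etc/yum.repos.d/ntop.repo
dnf install epel-release
dnf install pfring-dkms pfring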

System CPU and Memory Specs

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
Stepping: 1
CPU MHz: 2795.361
CPU max MHz: 3400.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.10
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
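To reproduce this output and double-check the NUMA layout on your own box, lscpu and numactl (from the numactl package) are enough:

lscpu
numactl --hardware   # per-node CPU list and memory sizes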

Fiberblaze Numa Node Location

04:00.0 Ethernet controller: Silicom Denmark FB2CG Capture 2x40Gb [Savona]
Subsystem: Silicom Denmark FB2CG Capture 2x40Gb [Savona]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
NUMA node: 0
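The card's NUMA node (reported by lspci above) can also be read directly from sysfs; a quick check, using the PCI address 04:00.0 from above:

cat /sys/bus/pci/devices/0000:04:00.0/numa_node
# prints 0 here - keep the hugepages, worker CPUs, and the card on the same node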

Reserving HugePages Memory

cat /usr/lib/systemd/system/hugetlb-gigantic-pages.service
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages.sh
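Note the ConditionKernelCommandLine line: the unit only runs when 1G hugepages are enabled on the kernel command line. On CentOS that is roughly the following (a sketch; adjust for your bootloader setup):

# Add to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   default_hugepagesz=1G hugepagesz=1G
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot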

Contents of hugetlb-reserve-pages.sh

#!/bin/sh

nodes_path=/sys/devices/system/node/
if [ ! -d "$nodes_path" ]; then
    echo "ERROR: $nodes_path does not exist"
    exit 1
fi

reserve_pages()
{
    echo "$1" > "$nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages"
}

reserve_pages 96 node0
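After dropping in the unit and script, enable the service and verify after a reboot that the 96 x 1 GiB pages were actually reserved on node0:

chmod +x /usr/lib/systemd/hugetlb-reserve-pages.sh
systemctl enable hugetlb-gigantic-pages.service
# after reboot:
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
grep HugePages /proc/meminfo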

Fiberblaze Card Configuration

cat /opt/fiberblaze/fbcard.cfg
; Base name for PRBs in this group is "a". This name must be unique among prbGroups.
; dedupMode 4
; dedupMultiPort 1

prbGroup "a"
{
    noPrbs 18
    ; hash HashPacket
    hash HashSessionIP
    filter "hash"
    ; nonBlockingDynamic true
}

; maxBufferFilling 95
; fifoExitOverflowFilling 95

Suricata Service Start Systemd Script

EnvironmentFile=-/etc/default/suricata
ExecStartPre=/sbin/setcap cap_net_raw,cap_ipc_lock,cap_sys_admin+eip /opt/suricata/bin/suricata
ExecStartPre=/sbin/setcap cap_net_raw,cap_ipc_lock,cap_sys_admin+eip /usr/bin/pfcount
ExecStartPre=/bin/su - snort -c "/bin/rm -f /opt/suricata/var/run/suricata.pid"
ExecStart=/bin/su - snort -c "/opt/suricata/bin/suricata -D --pfring-int fbcard:0:a00 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a01 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a02 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a03 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a04 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a05 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a06 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a07 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a08 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a09 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a10 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a11 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a12 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a13 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a14 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a15 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a16 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a17 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow -c /opt/suricata/etc/suricata/suricata.yaml -l /home/snort/t1 --pidfile /opt/suricata/var/run/suricata.pid"
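The 18 --pfring-int arguments map one-to-one onto the 18 PRBs (fbcard:0:a00 through fbcard:0:a17) created by noPrbs 18 in fbcard.cfg. If you would rather not maintain that line by hand, the flags can be generated with a small shell loop, and pfcount can be used to sanity-check that a PRB is actually delivering packets before Suricata is started. This is just a sketch of the idea, not part of the original unit file:

# Build the capture arguments for all 18 PRBs in prbGroup "a"
ARGS=""
for i in $(seq -w 0 17); do
    ARGS="$ARGS --pfring-int fbcard:0:a$i --pfring-cluster-id 1 --pfring-cluster-type cluster_flow"
done
echo "$ARGS"

# Sanity check: pfcount (from the PF_RING examples) should show traffic
pfcount -i fbcard:0:a00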

Configure Suricata for Compilation

LIBS="-lrt" ./configure --prefix=/opt/suricata --enable-pfring=yes --with-libpfring-includes=/usr/include --with-libpfring-libraries=/usr/lib --with-libhs-includes=/usr/local/include/hs --with-libhs-libraries=/usr/local/lib64 --enable-af-packet=no
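After configure completes, the build and install are the standard steps. Note the flags above assume Hyperscan headers/libs are already installed under /usr/local and PF_RING under /usr; adjust the paths if yours differ.

make -j$(nproc)
make install
make install-conf   # installs a default suricata.yaml under the prefix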

Suricata.yaml

For the sake of brevity, only lines that are not commented out are included. All lines appear in the same order as in the unedited config, so discerning what is what should be relatively straightforward. Our instance is running with over 50,000 rules enabled and working great.


vars:
  address-groups:
    HOME_NET: "
    EXTERNAL_NET: "!$HOME_NET"

    HTTP_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "[]"
    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "[]"
    TELNET_SERVERS: "$HOME_NET"
    AIM_SERVERS: "[]"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"

  port-groups:
    HTTP_PORTS: "
    SHELLCODE_PORTS: "!80"
    ORACLE_PORTS: "
    SSH_PORTS: 22
    DNP3_PORTS: 20000
    MODBUS_PORTS: 502
    FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
    FTP_PORTS: "[21,2100,3535]"
    GENEVE_PORTS: 6081
    VXLAN_PORTS: 4789
    TEREDO_PORTS: 3544
default-rule-path: /opt/suricata/var/lib/suricata/rules
rule-files:
  - suricata.rules

classification-file: /opt/suricata/var/lib/suricata/rules/classification.config
reference-config-file: /opt/suricata/var/lib/suricata/rules/reference.config
threshold-file: /opt/suricata/etc/suricata/threshold.config

default-log-dir: /opt/suricata/var/log/suricata/

stats:
  enabled: yes
  interval: 320

outputs:
  - fast:
      enabled: no
      filename: fast.log
      append: yes

  - eve-log:
      enabled: yes
      filetype: regular # regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
      types:
        - alert:
            payload: yes              # enable dumping payload in Base64
            payload-buffer-size: 20kb # max size of payload buffer to output in eve-log
            packet: no                # enable dumping of packet (without stream segments)
            metadata: yes             # add L7/applayer fields, flowbit and other vars to the alert
            tagged-packets: yes

            xff:
              enabled: no
              mode: extra-data
              deployment: reverse
              header: X-Forwarded-For

        - stats:
            totals: yes       # stats for all threads merged together
            threads: no       # per thread stats
            deltas: no        # include delta values

  - unified2-alert:
      enabled: no
      filename: snort.u2

      limit: 1024mb
      payload: yes

      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For

  - http-log:
      enabled: no
      filename: http.log
      append: yes

  - tls-log:
      enabled: no # Log TLS connections.
      filename: tls.log # File to store TLS logs.
      append: yes

  - tls-store:
      enabled: no

  - dns-log:
      enabled: no
      filename: dns.log
      append: yes

  - pcap-log:
      enabled: no
      filename: log.pcap

      limit: 1000mb

      max-files: 2000

      mode: normal # normal, multi or sguil.

      use-stream-depth: no # If set to "yes", packets seen after reaching stream inspection depth are ignored. "no" logs all packets
      honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stop being logged.

  - alert-debug:
      enabled: no
      filename: alert-debug.log
      append: yes

  - alert-prelude:
      enabled: no
      profile: suricata
      log-packet-content: no
      log-packet-header: yes

  - stats:
      enabled: yes
      filename: stats.log
      append: yes # append to file (yes) or overwrite it (no)
      totals: yes # stats for all threads merged together
      threads: no # per thread stats
      null-values: yes # print counters that have value 0

  - syslog:
      enabled: no
      facility: local5

  - drop:
      enabled: no
      filename: drop.log
      append: yes
      filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'

  - file-store:
      enabled: no # set to yes to enable
      log-dir: files # directory to store the files
      force-magic: no # force logging magic on all stored files
      force-filestore: no # force storing of all files

  - file-log:
      enabled: no
      filename: files-json.log
      append: yes

      force-magic: no # force logging magic on all logged files

  - tcp-data:
      enabled: no
      type: file
      filename: tcp-data.log

  - http-body-data:
      enabled: no
      type: file
      filename: http-data.log

  - lua:
      enabled: no
      scripts:

logging:
  default-log-level: info

  default-output-filter:

  outputs:
    - console:
        enabled: yes
    - file:
        enabled: yes
        filename: /opt/suricata/var/log/suricata/suricata.log
    - syslog:
        enabled: no
        facility: local5
        format: "[%i] <%d> -- "

pcap-file:
  checksum-checks: auto

app-layer:
  protocols:
    rfb:
      enabled: yes
      detection-ports:
        dp: 5900, 5901, 5902, 5903, 5904, 5905, 5906, 5907, 5908, 5909
    mqtt:
    krb5:
      enabled: yes
    snmp:
      enabled: yes
    ikev2:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443
      encryption-handling: bypass
    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
      raw-extraction: no
      mime:
        decode-mime: yes

        decode-base64: yes
        decode-quoted-printable: yes

        header-value-depth: 2000

        extract-urls: yes
        body-md5: no
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 6144
    imap:
      enabled: detection-only
    msn:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445
    nfs:
      enabled: no
    tftp:
      enabled: yes
    dns:
      global-memcap: 32mb
      state-memcap: 1024kb
      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    http:
      enabled: yes
      memcap: 4096mb

      libhtp:
        default-config:
          personality: IDS

          request-body-limit: 100kb
          response-body-limit: 100kb

          request-body-minimal-inspect-size: 32kb
          request-body-inspect-window: 4kb
          response-body-minimal-inspect-size: 40kb
          response-body-inspect-window: 16kb

          response-body-decompress-layer-limit: 2

          http-body-inline: auto

          double-decode-path: no
          double-decode-query: no
        server-config:

    modbus:
      enabled: no
      detection-ports:
        dp: 502

      stream-depth: 0

    dnp3:
      enabled: no
      detection-ports:
        dp: 20000

    enip:
      enabled: no
      detection-ports:
        dp: 44818
        sp: 44818

    ntp:
      enabled: no

asn1-max-frames: 256

coredump:
  max-dump: unlimited

host-mode: sniffer-only

max-pending-packets: 2048

runmode: workers

autofp-scheduler: hash

default-packet-size: 9022

unix-command:
  enabled: auto
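With the unix socket enabled, the running daemon can be queried with suricatasc, which ships with Suricata. A couple of example queries (the binary path follows the /opt/suricata prefix used here):

/opt/suricata/bin/suricatasc -c uptime
/opt/suricata/bin/suricatasc -c capture-mode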

legacy:
  uricontent: enabled

action-order:
  - pass
  - alert
  - drop
  - reject

engine-analysis:
  rules-fast-pattern: yes
  rules: yes

pcre:
  match-limit: 6000
  match-limit-recursion: 3500

host-os-policy:
  windows:
  bsd:
  bsd-right:
  old-linux:
  linux: [0.0.0.0/0]
  old-solaris:
  solaris:
  hpux10:
  hpux11:
  irix:
  macos:
  vista:
  windows2k3:

defrag:
  memcap: 12288mb
  hash-size: 655360
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

flow:
  memcap: 14336mb
  hash-size: 655360
  prealloc: 1048576
  emergency-recovery: 30
  managers: 2 # default to one flow manager
  recyclers: 2 # default to one flow recycler thread

vlan:
  use-for-tracking: true

flow-timeouts:
  default:
    new: 15
    established: 300
    closed: 0
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
    emergency-bypassed: 50
  tcp:
    new: 60
    established: 600
    closed: 60
    bypassed: 100
    emergency-new: 5
    emergency-established: 100
    emergency-closed: 10
    emergency-bypassed: 50
  udp:
    new: 15
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
  icmp:
    new: 15
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50

stream:
  memcap: 1024mb
  checksum-validation: no # reject wrong csums
  inline: no # auto will use inline mode in IPS mode, yes or no set it statically
  bypass: yes
  drop-invalid: yes
  reassembly:
    memcap: 256mb
    depth: 1mb # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes

host:
  hash-size: 61440
  prealloc: 1000
  memcap: 14336mb

decoder:
  teredo:
    enabled: false

detect:
  profile: custom
  custom-values:
    toclient-groups: 400
    toserver-groups: 400
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000

  prefilter:
    default: auto

  grouping:
    tcp-whitelist: 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
    udp-whitelist: 53, 123, 135, 5060

  profiling:
    grouping:
      dump-to-disk: false
      include-rules: false # very verbose
      include-mpm-stats: false

mpm-algo: hs
spm-algo: hs
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 1,3,5 ] # include only these cpus in affinity settings
        mode: balanced
        prio:
          default: "low"
    - receive-cpu-set:
        cpu: [ 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 ]
    - worker-cpu-set:
        cpu: [ 4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38 ] # include only these cpus in affinity settings
        mode: exclusive
        prio:
          default: "high"
  detect-thread-ratio: 2
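With set-cpu-affinity enabled, you can verify after startup that the worker threads actually landed on the even-numbered node0 cores. A quick check with standard tools (psr is the CPU each thread last ran on; W# is Suricata's default worker-thread name prefix):

ps -T -o spid,psr,comm -p $(pgrep -o suricata) | grep 'W#'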

luajit:
  states: 128

profiling:
  rules:
    enabled: no
    filename: rule_perf.log
    append: yes
    limit: 10
    json: no

  keywords:
    enabled: yes
    filename: keyword_perf.log
    append: yes

  rulegroups:
    enabled: yes
    filename: rule_group_perf.log
    append: yes

  packets:
    enabled: yes
    filename: packet_stats.log
    append: yes

    csv:
      enabled: no
      filename: packet_stats.csv

  locks:
    enabled: no
    filename: lock_stats.log
    append: yes

  pcap-log:
    enabled: no
    filename: pcaplog_stats.log
    append: yes

nfq:

nflog:
  - group: 2
    buffer-size: 18432
  - group: default
    qthreshold: 1
    qtimeout: 100
    max-size: 20000

capture:

netmap:

pfring:
  threads: auto

  cluster-id: 99

  cluster-type: cluster_flow
  bpf-filter: not(host 131.215.139.100 or 131.215.9.49 or 131.215.254.100)
  bypass: yes
  checksum-checks: auto

ipfw:

napatech:
  hba: -1

  use-all-streams: yes

  streams: ["0-3"]

mpipe:

  load-balance: dynamic

  iqueue-packets: 2028

  inputs:

  stack:
    size128: 0
    size256: 9
    size512: 0
    size1024: 0
    size1664: 7
    size4096: 0
    size10386: 0
    size16384: 0

cuda:
  mpm:
    data-buffer-size-min-limit: 0
    data-buffer-size-max-limit: 1500
    cudabuffer-buffer-size: 500mb
    gpu-transfer-size: 50mb
    batching-timeout: 2000
    device-id: 0
    cuda-streams: 2

Can you post the actual info/guide? A post that just says “contact me” is not something this forum is for.

I just wanted to update that Suricata has been performing flawlessly in this configuration, once I figured out that there is a memory-corruption issue if it is restarted a few times without a machine reboot.

Unfortunately I am unable to perform any debugging of this issue, otherwise I would; however, machine reboots solve the problem, and once started, Suricata runs, and runs, and runs…


Thanks for sharing!
Were there any specific Silicom config options also needed?

Silicom-specific options are mentioned earlier in the article and still apply to the recent post. I hope this answers your question.

That’s great, yes - as long as those are still correct/the same ones used.
Thank you

@greg: can you let me know the throughput you could achieve with 24 PRBs? I am trying to achieve 20 Gbps with 48 PRBs but am not able to reach it.
I am not using PF_RING though; I was using the Fbcapture APIs to read packets. Do you think that could be the problem?

Hi Greg, any modifications to the ruleset as well, to boost performance?

The pfring section in the Suricata configuration is missing "- interface: interface-value".

The structure of the pfring section should align with this (taken from the master-6.0.x branch):

# PF_RING configuration: for use with native PF_RING support
# for more info see http://www.ntop.org/products/pf_ring/
pfring:
  - interface: eth0
    # Number of receive threads. If set to 'auto' Suricata will first try
    # to use CPU (core) count and otherwise RSS queue count.
    threads: auto

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow.
    # Possible values are cluster_flow or cluster_round_robin.
    cluster-type: cluster_flow

    # bpf filter for this interface
    #bpf-filter: tcp

    # If bypass is set then the PF_RING hw bypass is activated, when supported
    # by the network interface. Suricata will instruct the interface to bypass
    # all future packets for a flow that need to be bypassed.
    #bypass: yes

    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may have an invalid checksum due to
    # the checksum computation being offloaded to the network card.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: Suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto

Sorry about that, I’m sure it wasn’t on purpose. Discourse has some rather short limits and requires higher trust levels to edit a post after some initial amount of minutes. Anyways, you should be able to do that now.

If this is a guide that you plan to keep up to date, it might make sense to put it in the Guides section.