Suricata 6 on PF_RING and Silicom Fiberblaze FPGA

Jump down for the instructions…

It took me a long time to come up with the proper configs to effectively run Suricata on a 40G Silicom FPGA with PF_RING. A shout-out to Alfredo and NTOP for their excellent community and continual development of PF_RING - an excellent solution for high-speed network IDS with Suricata.

And a big thank you to the Suricata team, what an excellent release.

Can you post the actual info/guide? A post that just says “contact me” is not something this forum is for.

As promised, here is my setup. Please note that logging to syslog directly from Suricata begins to fail after about 5 minutes; restarting rsyslog gets it going again, then it fails after another 5 minutes. I'm not sure exactly what the problem is, so currently the best option is to write to disk and use rsyslog with imfile to forward if necessary.
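For reference, here is a minimal rsyslog imfile sketch for forwarding the on-disk eve.json; the collector host/port are placeholders and the file path simply follows the default-log-dir used later in this post:

# /etc/rsyslog.d/suricata-eve.conf
module(load="imfile")
input(type="imfile"
      File="/opt/suricata/var/log/suricata/eve.json"
      Tag="suricata-eve:"
      Severity="info"
      Facility="local5")
# forward to your collector (placeholder host/port)
local5.* @@siem.example.com:514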

Also please note CPU pinning decreased performance for this configuration.

Please refer to the respective packages for compilation instructions.

Remember to enable the new services created…
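For example (these are the unit names created later in this post):

systemctl daemon-reload
systemctl enable hugetlb-gigantic-pages.service
systemctl enable fiberblaze.service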

We are running RHEL 7.9, so this should apply to CentOS as well.
##################################################################################
CPUs Utilized in a Dell R630 with 15k spindles
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel® Xeon® CPU E5-2640 v4 @ 2.40GHz
Stepping: 1
CPU MHz: 2648.583
CPU max MHz: 3400.0000
CPU min MHz: 1200.0000
BogoMIPS: 4799.57
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
#############################################################################
Our system has 62G of RAM - here is utilization after running Suricata for 12 hours:
free -g
              total        used        free      shared  buff/cache   available
Mem:             62          37          18           0           6          24
Swap:            31           0          31
#############################################################################
HugePages Required
#############################################################################
Add the following to the GRUB_CMDLINE_LINUX line in /etc/default/grub:
default_hugepagesz=1G hugepagesz=1G
Example:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/swap default_hugepagesz=1G hugepagesz=1G rhgb quiet"
Execute: grub2-mkconfig -o /boot/grub2/grub.cfg
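After rebooting, you can sanity-check that the kernel picked up the 1G default hugepage size (a quick verification, not one of the original steps):

cat /proc/cmdline
grep Hugepagesize /proc/meminfo   # should report 1048576 kB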
#############################################################################
lspci -vvv shows our Fiberblaze card is located on NUMA node 0

04:00.0 Ethernet controller: Silicom Denmark FB2CG Capture 2x40Gb [Savona]
Subsystem: Silicom Denmark FB2CG Capture 2x40Gb [Savona]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
NUMA node: 0
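The NUMA node can also be read directly from sysfs using the PCI address shown above (assuming the standard 0000: domain prefix):

cat /sys/bus/pci/devices/0000:04:00.0/numa_node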
#############################################################################
Next, create the following service file with the contents below. It runs early
in the boot process and guarantees the memory reservations.

vi /usr/lib/systemd/system/hugetlb-gigantic-pages.service

[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages.sh

[Install]
WantedBy=sysinit.target

Then create /usr/lib/systemd/hugetlb-reserve-pages.sh and make it executable:
chmod +x /usr/lib/systemd/hugetlb-reserve-pages.sh

We are reserving 24G of hugepages for NUMA node 0:
#!/bin/sh

nodes_path=/sys/devices/system/node/
if [ ! -d $nodes_path ]; then
    echo "ERROR: $nodes_path does not exist"
    exit 1
fi

reserve_pages()
{
    echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
}

reserve_pages 24 node0
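To confirm the reservation took effect after a reboot (a quick check, not part of the script itself):

cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
# expected output: 24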
#############################################################################
Create a fiberblaze service file
We're utilizing the 24G of hugepage memory reserved above

vi /etc/systemd/system/fiberblaze.service
[Unit]
Description=fiberblaze
After=network.target

[Service]
ExecStart=/bin/bash -c "source /opt/fiberblaze/bin/fbinit; /opt/fiberblaze/driver/load_driver.sh hugepages; /usr/bin/numactl -m 0 /opt/fiberblaze/bin/configurecard -d fbcard0 -c /opt/fiberblaze/fbcard.cfg --alloc-hugepages-mem 24G"
ExecStop=/bin/bash -c "/opt/fiberblaze/bin/configurecard -d fbcard0 --dealloc-hugepages-mem; /opt/fiberblaze/driver/unload_driver.sh"
Type=oneshot
RemainAfterExit=yes
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target

The fbcard.cfg sets up the number of Packet Ring Buffers (PRBs); we utilize HashSessionIP for 5-tuple hashing.

cat /opt/fiberblaze/fbcard.cfg
; Base name for PRBs in this group is "a". This name must be unique among prbGroups.
prbGroup "a"
{
    noPrbs 24
    ; hash HashPacket
    hash HashSessionIP
    filter "hash"
}

#############################################################################
Our suricata.yaml. I make no claims that this is an ideal config; please review your own config
carefully, as not all lines may be necessary and tweaks may be required to suit your particular environment. This is an output of all lines that are not commented out…

%YAML 1.1
---

vars:
  address-groups:
    HOME_NET: "[REDACTED]"

    EXTERNAL_NET: "!$HOME_NET"

    HTTP_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "[REDACTED]"
    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "[REDACTED]"
    TELNET_SERVERS: "$HOME_NET"
    AIM_SERVERS: "[REDACTED]"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"

  port-groups:
    HTTP_PORTS: "[REDACTED]"
    SHELLCODE_PORTS: "REDACTED"
    ORACLE_PORTS: "[REDACTED]"
    SSH_PORTS: REDACTED
    DNP3_PORTS: REDACTED
    MODBUS_PORTS: REDACTED
    FILE_DATA_PORTS: "[$HTTP_PORTS,REDACTED]"
    FTP_PORTS: "[REDACTED]"

default-rule-path: /opt/suricata/var/lib/suricata/rules
rule-files:
  - suricata.rules

classification-file: /opt/suricata/var/lib/suricata/rules/classification.config
reference-config-file: /opt/suricata/var/lib/suricata/rules/reference.config
threshold-file: /opt/suricata/etc/suricata/threshold.config

default-log-dir: /opt/suricata/var/log/suricata/

outputs:

  - fast:
      enabled: no
      filename: fast.log
      append: yes

  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
      types:
        - alert:
            payload: yes # enable dumping payload in Base64
            payload-buffer-size: 4kb # max size of payload buffer to output in eve-log
            packet: no # enable dumping of packet (without stream segments)
            metadata: yes # add L7/applayer fields, flowbit and other vars to the alert

            tagged-packets: yes

            xff:
              enabled: no
              mode: extra-data
              deployment: reverse
              header: X-Forwarded-For

  - stats:
      enabled: yes
      filename: stats.log
      append: yes # append to file (yes) or overwrite it (no)
      totals: no # stats for all threads merged together
      threads: yes # per thread stats

  - syslog:
      enabled: no
      facility: local5

  - drop:
      enabled: no
      filename: drop.log
      append: yes
      filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'

  - file-store:
      enabled: no # set to yes to enable
      log-dir: files # directory to store the files
      force-magic: no # force logging magic on all stored files
      force-filestore: no # force storing of all files

  - file-log:
      enabled: no
      filename: files-json.log
      append: yes

      force-magic: no # force logging magic on all logged files

  - tcp-data:
      enabled: no
      type: file
      filename: tcp-data.log

  - http-body-data:
      enabled: no
      type: file
      filename: http-data.log

  - lua:
      enabled: no
      scripts:

logging:
  default-log-level: notice

  default-output-filter:

  outputs:
    - console:
        enabled: yes
    - file:
        enabled: yes
        level: info
        filename: pathToFile
    - syslog:
        enabled: no
        facility: local5
        format: "[%i] <%d> -- "

pcap-file:
  checksum-checks: auto

app-layer:
  protocols:
    tls:
      enabled: yes
      detection-ports:
        dp: REDACTED

    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
      mime:
        decode-mime: yes

        decode-base64: yes
        decode-quoted-printable: yes

        header-value-depth: 2000

        extract-urls: yes
        body-md5: no
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 6144
    imap:
      enabled: detection-only
    msn:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        dp: REDACTED
    nfs:
      enabled: no
    dns:
      global-memcap: 32mb
      state-memcap: 1024kb

      tcp:
        enabled: yes
        detection-ports:
          dp: REDACTED
      udp:
        enabled: yes
        detection-ports:
          dp: REDACTED
    http:
      enabled: yes
      memcap: 4096mb

      libhtp:
        default-config:
          personality: IDS

          request-body-limit: 100kb
          response-body-limit: 100kb

          request-body-minimal-inspect-size: 32kb
          request-body-inspect-window: 4kb
          response-body-minimal-inspect-size: 40kb
          response-body-inspect-window: 16kb

          response-body-decompress-layer-limit: 2

          http-body-inline: auto
          double-decode-path: no
          double-decode-query: no

        server-config:

    modbus:
      enabled: no
      detection-ports:
        dp: REDACTED

      stream-depth: 0

    dnp3:
      enabled: no
      detection-ports:
        dp: REDACTED

    enip:
      enabled: no
      detection-ports:
        dp: REDACTED
        sp: REDACTED

    ntp:
      enabled: no

asn1-max-frames: 256
coredump:
  max-dump: unlimited

host-mode: sniffer-only

max-pending-packets: 1024

runmode: autofp

autofp-scheduler: hash

default-packet-size: 9000

unix-command:
  enabled: auto
legacy:
  uricontent: enabled

action-order:
  - pass
  - alert
  - drop
  - reject

engine-analysis:
  rules-fast-pattern: yes
  rules: yes

pcre:
  match-limit: 6000
  match-limit-recursion: 3500

host-os-policy:
  windows: []
  bsd: []
  bsd-right: []
  old-linux: []
  linux: [0.0.0.0/0]
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []

defrag:
  memcap: 12288mb
  hash-size: 655360
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

flow:
  memcap: 14336mb
  hash-size: 655360
  prealloc: 1048576
  emergency-recovery: 30
  managers: 2 # default to one flow manager
  recyclers: 2 # default to one flow recycler thread

vlan:
  use-for-tracking: true

flow-timeouts:

  default:
    new: 30
    established: 300
    closed: 0
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
    emergency-bypassed: 50
  tcp:
    new: 60
    established: 600
    closed: 60
    bypassed: 100
    emergency-new: 5
    emergency-established: 100
    emergency-closed: 10
    emergency-bypassed: 50
  udp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
  icmp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50

stream:
  memcap: 5gb
  checksum-validation: yes # reject wrong csums
  inline: auto # auto will use inline mode in IPS mode, yes or no set it statically
  bypass: yes
  reassembly:
    memcap: 10gb
    depth: 1mb # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes

host:
  hash-size: 61440
  prealloc: 1000
  memcap: 14336mb

decoder:
  teredo:
    enabled: REDACTED

detect:
  profile: custom
  custom-values:
    toclient-groups: 400
    toserver-groups: 400
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000

  prefilter:
    default: auto

  grouping:
    tcp-whitelist: REDACTED
    udp-whitelist: REDACTED

  profiling:
    grouping:
      dump-to-disk: false
      include-rules: false # very verbose
      include-mpm-stats: false

mpm-algo: hs

spm-algo: hs

threading:
  set-cpu-affinity: no

luajit:
  states: 128

profiling:

  rules:
    enabled: no
    filename: rule_perf.log
    append: yes

    limit: 10

    json: no

  keywords:
    enabled: yes
    filename: keyword_perf.log
    append: yes

  rulegroups:
    enabled: yes
    filename: rule_group_perf.log
    append: yes

  packets:
    enabled: yes
    filename: packet_stats.log
    append: yes

    csv:
      enabled: no
      filename: packet_stats.csv

  locks:
    enabled: no
    filename: lock_stats.log
    append: yes

  pcap-log:
    enabled: no
    filename: pcaplog_stats.log
    append: yes

nfq:

nflog:
  - group: 2
    buffer-size: 18432
  - group: default
    qthreshold: 1
    qtimeout: 100
    max-size: 20000

pfring:
  threads: auto

  cluster-id: 99

  cluster-type: cluster_flow
  bpf-filter: not(host REDACTED)

#############################################################################
STARTING SURICATA
#############################################################################
Please note that Suricata is unable to read the interface configurations from the pfring section of suricata.yaml when utilizing Fiberblaze cards, so all interfaces should be specified on the command line.

Also note that you will see the following errors; fortunately the cluster ID defaults to the same value
for all interfaces, so this is not a problem.

Info> - 52349 signatures processed. 1571 are IP-only rules, 12204 are inspecting packet payload, 38390 inspect application layer, 103 are decoder event only
Info> - Using 24 live device(s).
Info> - Unable to find pfring config for interface fbcard:0:a00, using default value or 1.0 configuration system.
Error> - [ERRCODE: SC_ERR_PF_RING_SET_CLUSTER_FAILED(37)] - pfring_set_cluster returned -7 for cluster-id: 1

The output below indicates a successful startup

Info> - RunModeIdsPfringAutoFp initialised
Info> - Running in live mode, activating unix socket
Info> - Using unix socket file ‘/opt/suricata/var/run/suricata/suricata-command.socket’
Notice> - all 64 packet processing threads, 4 management threads initialized, engine started.
#############################################################################
Starting Suricata on the 24 PRBs configured earlier
#############################################################################
/opt/suricata/bin/suricata -v -D --pfring-int fbcard:0:a00 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a01 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a02 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a03 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a04 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a05 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a06 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a07 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a08 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a09 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a10 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a11 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a12 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a13 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a14 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a15 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a16 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a17 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a18 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a19 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a20 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a21 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a22 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow --pfring-int fbcard:0:a23 --pfring-cluster-id 1 --pfring-cluster-type cluster_flow -c /opt/suricata/etc/suricata/suricata.yaml
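If you would rather not maintain that long command line by hand, a small shell sketch (not part of the original setup) can generate the same per-PRB arguments for the 24 PRBs a00-a23 configured above:

# build "--pfring-int fbcard:0:aNN ..." for a00 through a23
ARGS=""
for i in $(seq -w 0 23); do
    ARGS="$ARGS --pfring-int fbcard:0:a$i --pfring-cluster-id 1 --pfring-cluster-type cluster_flow"
done
# ARGS is intentionally left unquoted so the options word-split into separate arguments
/opt/suricata/bin/suricata -v -D $ARGS -c /opt/suricata/etc/suricata/suricata.yaml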
#############################################################################
#############################################################################
Fiberblaze ncardstat shows all rings being utilized in 5-tuple hashing

On-board buffer filling: 0%

ID Name Bypass Size MemPool Numa Use Fill. Peak-f Discards (filling) Rx (packets) Rx (bytes)
0 a00 - 1024 MB 0 0 1 0% 100% 0 1513904231 1426888375643
1 a01 - 1024 MB 0 0 1 0% 27% 0 469006659 388724111010
2 a02 - 1024 MB 0 0 1 0% 70% 0 491013699 446720801374
3 a03 - 1024 MB 0 0 1 0% 93% 0 965579314 1866820686931
4 a04 - 1024 MB 0 0 1 0% 37% 0 634328428 555171527447
5 a05 - 1024 MB 0 0 1 0% 100% 0 1063290376 905913427996
6 a06 - 1024 MB 0 0 1 0% 100% 0 703901344 650406990417
7 a07 - 1024 MB 0 0 1 0% 100% 0 554620983 442350109515
8 a16 - 1024 MB 0 0 1 0% 57% 0 545520966 444564485104
9 a17 - 1024 MB 0 0 1 0% 41% 0 942779161 872349660921
10 a18 - 1024 MB 0 0 1 0% 17% 0 1163771990 1137870424018
11 a19 - 1024 MB 0 0 1 0% 57% 0 712366890 510617166515
12 a20 - 1024 MB 0 0 1 0% 37% 0 514330351 414934613669
13 a21 - 1024 MB 0 0 1 0% 36% 0 493470277 435971518136
14 a22 - 1024 MB 0 0 1 0% 10% 0 472648154 397261795112
15 a23 - 1024 MB 0 0 1 0% 92% 0 569825933 468811236425
32 a08 - 1024 MB 0 0 1 0% 37% 0 565698765 475897268229
33 a09 - 1024 MB 0 0 1 0% 27% 0 562709226 488559170209
34 a10 - 1024 MB 0 0 1 0% 29% 0 487644387 430622823924
35 a11 - 1024 MB 0 0 1 0% 23% 0 516456588 443647287482
36 a12 - 1024 MB 0 0 1 0% 72% 0 541578750 453055211387
37 a13 - 1024 MB 0 0 1 0% 66% 0 516199811 423516223841
38 a14 - 1024 MB 0 0 1 0% 28% 0 479284633 405161743408
39 a15 - 1024 MB 0 0 1 0% 12% 0 517848356 417708609448